WorldWideScience

Sample records for web accessible predictions

  1. Web Accessibility and Guidelines

    Science.gov (United States)

    Harper, Simon; Yesilada, Yeliz

    Access to, and movement around, complex online environments, of which the World Wide Web (Web) is the most popular example, has long been considered an important and major issue in the Web design and usability field. The commonly used slang phrase ‘surfing the Web’ implies rapid and free access, pointing to its importance among designers and users alike. It has also been long established that this potentially complex and difficult access is further complicated, and becomes neither rapid nor free, if the user is disabled. There are millions of people who have disabilities that affect their use of the Web. Web accessibility aims to help these people to perceive, understand, navigate, and interact with, as well as contribute to, the Web, and thereby society in general. This accessibility is, in part, facilitated by the Web Content Accessibility Guidelines (WCAG) currently moving from version one to two. These guidelines are intended to encourage designers to make sure their sites conform to specifications, and in that conformance enable the assistive technologies of disabled users to better interact with the page content. In this way, it was hoped that accessibility could be supported. While this is in part true, guidelines do not solve all problems and the new WCAG version two guidelines are surrounded by controversy and intrigue. This chapter aims to establish the published literature related to Web accessibility and Web accessibility guidelines, and discuss limitations of the current guidelines and future directions.

  2. Providing access to risk prediction tools via the HL7 XML-formatted risk web service.

    Science.gov (United States)

    Chipman, Jonathan; Drohan, Brian; Blackford, Amanda; Parmigiani, Giovanni; Hughes, Kevin; Bosinoff, Phil

    2013-07-01

    Cancer risk prediction tools provide valuable information to clinicians but remain computationally challenging. Many clinics find that CaGene or HughesRiskApps fit their needs for easy- and ready-to-use software to obtain cancer risks; however, these resources may not fit all clinics' needs. The HughesRiskApps Group and BayesMendel Lab therefore developed a web service, called "Risk Service", which may be integrated into any client software to quickly obtain standardized and up-to-date risk predictions for BayesMendel tools (BRCAPRO, MMRpro, PancPRO, and MelaPRO), the Tyrer-Cuzick IBIS Breast Cancer Risk Evaluation Tool, and the Colorectal Cancer Risk Assessment Tool. Software clients that can convert their local structured data into the HL7 XML-formatted family and clinical patient history (Pedigree model) may integrate with the Risk Service. The Risk Service uses Apache Tomcat and Apache Axis2 technologies to provide an all Java web service. The software client sends HL7 XML information containing anonymized family and clinical history to a Dana-Farber Cancer Institute (DFCI) server, where it is parsed, interpreted, and processed by multiple risk tools. The Risk Service then formats the results into an HL7 style message and returns the risk predictions to the originating software client. Upon consent, users may allow DFCI to maintain the data for future research. The Risk Service implementation is exemplified through HughesRiskApps. The Risk Service broadens the availability of valuable, up-to-date cancer risk tools and allows clinics and researchers to integrate risk prediction tools into their own software interface designed for their needs. Each software package can collect risk data using its own interface, and display the results using its own interface, while using a central, up-to-date risk calculator. This allows users to choose from multiple interfaces while always getting the latest risk calculations. Consenting users contribute their data for future
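    The request/response exchange described above can be sketched in outline. The element names, attributes, and reply shape below are invented stand-ins for the HL7 Pedigree model, not the actual DFCI schema; a real client would POST the payload to the Risk Service endpoint over HTTPS.

```python
import xml.etree.ElementTree as ET

def build_pedigree_xml(proband_id, relatives):
    """Serialize an anonymized family history into an HL7-style XML payload
    (hypothetical element names, not the actual Pedigree model schema)."""
    root = ET.Element("Pedigree")
    ET.SubElement(root, "Proband", id=proband_id)
    fam = ET.SubElement(root, "Relatives")
    for rel in relatives:
        # e.g. {"relation": "mother", "cancer": "breast", "ageAtDx": "45"}
        ET.SubElement(fam, "Relative", rel)
    return ET.tostring(root, encoding="unicode")

def parse_risk_response(xml_text):
    """Extract {model: risk} pairs from an HL7-style response message."""
    root = ET.fromstring(xml_text)
    return {r.get("model"): float(r.get("value")) for r in root.iter("Risk")}

payload = build_pedigree_xml("p1", [{"relation": "mother",
                                     "cancer": "breast", "ageAtDx": "45"}])
# A client would POST `payload` to the Risk Service; the reply might look like:
reply = ('<RiskReport><Risk model="BRCAPRO" value="0.12"/>'
         '<Risk model="MMRpro" value="0.03"/></RiskReport>')
risks = parse_risk_response(reply)
```

    The point of the design is visible even in this sketch: the client only has to produce and consume standard XML messages, while all risk models run centrally and stay up to date.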

  3. Moving toward a universally accessible web: Web accessibility and education.

    Science.gov (United States)

    Kurt, Serhat

    2017-12-08

    The World Wide Web is an extremely powerful source of information, inspiration, ideas, and opportunities. As such, it has become an integral part of daily life for a great majority of people. Yet, for a significant number of others, the internet offers only limited value due to the existence of barriers which make accessing the Web difficult, if not impossible. This article illustrates some of the reasons that achieving equality of access to the online world of education is so critical, explores the current status of Web accessibility, discusses evaluative tools and methods that can help identify accessibility issues in educational websites, and provides practical recommendations and guidelines for resolving some of the obstacles that currently hinder the achievability of the goal of universal Web access.

  4. From Web accessibility to Web adaptability.

    Science.gov (United States)

    Kelly, Brian; Nevile, Liddy; Sloan, David; Fanou, Sotiris; Ellison, Ruth; Herrod, Lisa

    2009-07-01

    This article asserts that current approaches to enhance the accessibility of Web resources fail to provide a solid foundation for the development of a robust and future-proofed framework. In particular, they fail to take advantage of new technologies and technological practices. The article introduces a framework for Web adaptability, which encourages the development of Web-based services that can be resilient to the diversity of uses of such services, the target audience, available resources, technical innovations, organisational policies and relevant definitions of 'accessibility'. The article refers to a series of author-focussed approaches to accessibility through which the authors and others have struggled to find ways to promote accessibility for people with disabilities. These approaches depend upon the resource author's determination of the anticipated users' needs and their provision. Through approaches labelled as 1.0, 2.0 and 3.0, the authors have widened their focus to account for contexts and individual differences in target audiences. Now, the authors want to recognise the role of users in determining their engagement with resources (including services). To distinguish this new approach, the term 'adaptability' has been used to replace 'accessibility'; new definitions of accessibility have been adopted, and the authors have reviewed their previous work to clarify how it is relevant to the new approach. Accessibility 1.0 is here characterised as a technical approach in which authors are told how to construct resources for a broadly defined audience. This is known as universal design. Accessibility 2.0 was introduced to point to the need to account for the context in which resources would be used, to help overcome inadequacies identified in the purely technical approach. Accessibility 3.0 moved the focus on users from a homogenised universal definition to recognition of the idiosyncratic needs and preferences of individuals and to cater for them. 

  5. Web Accessibility in Romania: The Conformance of Municipal Web Sites to Web Content Accessibility Guidelines

    OpenAIRE

    Costin PRIBEANU; Ruxandra-Dora MARINESCU; Paul FOGARASSY-NESZLY; Maria GHEORGHE-MOISII

    2012-01-01

    The accessibility of public administration web sites is a key quality attribute for the successful implementation of the Information Society. The purpose of this paper is to present a second review of municipal web sites in Romania based on automated accessibility checking. A total of 60 web sites were evaluated against WCAG 2.0 recommendations. The analysis of results reveals a relatively low web accessibility of municipal web sites and highlights several aspects. Firstly, a slight ...

  6. 2B-Alert Web: An Open-Access Tool for Predicting the Effects of Sleep/Wake Schedules and Caffeine Consumption on Neurobehavioral Performance.

    Science.gov (United States)

    Reifman, Jaques; Kumar, Kamal; Wesensten, Nancy J; Tountas, Nikolaos A; Balkin, Thomas J; Ramakrishnan, Sridhar

    2016-12-01

    Computational tools that predict the effects of daily sleep/wake amounts on neurobehavioral performance are critical components of fatigue management systems, allowing for the identification of periods during which individuals are at increased risk for performance errors. However, none of the existing computational tools is publicly available, and the commercially available tools do not account for the beneficial effects of caffeine on performance, limiting their practical utility. Here, we introduce 2B-Alert Web, an open-access tool for predicting neurobehavioral performance, which accounts for the effects of sleep/wake schedules, time of day, and caffeine consumption, while incorporating the latest scientific findings in sleep restriction, sleep extension, and recovery sleep. We combined our validated Unified Model of Performance and our validated caffeine model to form a single, integrated modeling framework instantiated as a Web-enabled tool. 2B-Alert Web allows users to input daily sleep/wake schedules and caffeine consumption (dosage and time) to obtain group-average predictions of neurobehavioral performance based on psychomotor vigilance tasks. 2B-Alert Web is accessible at: https://2b-alert-web.bhsai.org. The 2B-Alert Web tool allows users to obtain predictions for mean response time, mean reciprocal response time, and number of lapses. The graphing tool allows for simultaneous display of up to seven different sleep/wake and caffeine schedules. The schedules and corresponding predicted outputs can be saved as a Microsoft Excel file; the corresponding plots can be saved as an image file. The schedules and predictions are erased when the user logs off, thereby maintaining privacy and confidentiality. The publicly accessible 2B-Alert Web tool is available for operators, schedulers, and neurobehavioral scientists as well as the general public to determine the impact of any given sleep/wake schedule, caffeine consumption, and time of day on performance of a
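    The kind of prediction such a tool produces can be illustrated with a deliberately toy model. The formulas and constants below (a linear slowdown per hour awake, a first-order caffeine decay with a 5-hour half-life) are invented for illustration and are not the validated Unified Model of Performance or caffeine model behind 2B-Alert Web.

```python
def caffeine_factor(dose_mg, hours_since_dose, half_life_h=5.0):
    """Toy pharmacokinetics: performance benefit proportional to remaining caffeine."""
    remaining = dose_mg * 0.5 ** (hours_since_dose / half_life_h)
    return 1.0 + remaining / 400.0   # 400 mg used as an arbitrary reference dose

def predicted_rt(baseline_ms, hours_awake, dose_mg=0.0, hours_since_dose=0.0):
    """Toy prediction: mean response time grows with time awake, shrinks with caffeine."""
    sleep_pressure = 1.0 + 0.02 * hours_awake      # 2% slowdown per hour awake (invented)
    return baseline_ms * sleep_pressure / caffeine_factor(dose_mg, hours_since_dose)

rested = predicted_rt(250, 0)
tired = predicted_rt(250, 20)
tired_caffeinated = predicted_rt(250, 20, dose_mg=200, hours_since_dose=1.0)
```

    Even this crude sketch reproduces the qualitative behavior the abstract describes: performance degrades with extended wakefulness and partially recovers with a recent caffeine dose.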

  7. Non-visual Web Browsing: Beyond Web Accessibility.

    Science.gov (United States)

    Ramakrishnan, I V; Ashok, Vikas; Billah, Syed Masum

    2017-07-01

    People with vision impairments typically use screen readers to browse the Web. To facilitate non-visual browsing, web sites must be made accessible to screen readers, i.e., all the visible elements in the web site must be readable by the screen reader. But even if web sites are accessible, screen-reader users may not find them easy to use and/or easy to navigate. For example, they may not be able to locate the desired information without having to listen to a lot of irrelevant content. These issues go beyond web accessibility and directly impact web usability. Several techniques have been reported in the accessibility literature for making the Web usable for screen reading. This paper is a review of these techniques. Interestingly, the review reveals that understanding the semantics of the web content is the overarching theme that drives these techniques for improving web usability.

  8. Web tools for predictive toxicology model building.

    Science.gov (United States)

    Jeliazkova, Nina

    2012-07-01

    The development and use of web tools in chemistry has accumulated more than 15 years of history. Powered by advances in Internet technologies, the current generation of web systems is starting to expand into areas traditionally served by desktop applications. The web platforms integrate data storage, cheminformatics and data analysis tools. The ease of use and the collaborative potential of the web are compelling, despite the challenges. The topic of this review is a set of recently published web tools that facilitate predictive toxicology model building. The focus is on software platforms offering web access to chemical structure-based methods, although some of the frameworks could also provide bioinformatics or hybrid data analysis functionalities. A number of historical and current developments are cited. In order to provide a comparable assessment, the following characteristics are considered: support for workflows, descriptor calculations, visualization, modeling algorithms, data management and data sharing capabilities, availability of GUI or programmatic access, and implementation details. The success of the Web is largely due to its highly decentralized, yet sufficiently interoperable model for information access. The expected future convergence between cheminformatics and bioinformatics databases presents new challenges for the management and analysis of large data sets. The web tools in predictive toxicology will likely continue to evolve toward the right mix of flexibility, performance, scalability, interoperability, sets of unique features offered, friendly user interfaces, programmatic access for advanced users, platform independence, results reproducibility, curation and crowdsourcing utilities, collaborative sharing and secure access.

  9. Ocean Drilling Program: Web Site Access Statistics

    Science.gov (United States)

    Ocean Drilling Program (ODP/TAMU Science Operator) web site access statistics, including overview statistics and separate statistics for JOIDES members and the Janus database, beginning October 1997.

  10. Web accessibility standards and disability: developing critical perspectives on accessibility.

    Science.gov (United States)

    Lewthwaite, Sarah

    2014-01-01

    Currently, dominant web accessibility standards do not respect disability as a complex and culturally contingent interaction; they do not recognize that disability is a variable, contrary and political power relation, rather than a biological limit. Against this background there is clear scope to broaden the ways in which accessibility standards are understood, developed and applied. Commentary. The values that shape and are shaped by legislation promote universal, statistical and automated approaches to web accessibility. This results in web accessibility standards conveying powerful norms fixing the relationship between technology and disability, irrespective of geographical, social, technological or cultural diversity. Web accessibility standards are designed to enact universal principles; however, they express partial and biopolitical understandings of the relation between disability and technology. These values can be limiting, and potentially counter-productive, for example, for the majority of disabled people in the "Global South", where different contexts constitute different disabilities and different experiences of web access. To create more robust, accessible outcomes for disabled people, research and standards practice should diversify to embrace more interactional accounts of disability in different settings. Implications for Rehabilitation: Creating accessible experiences is an essential aspect of rehabilitation. Web standards promote universal accessibility as a property of an online resource or service. This undervalues the importance of the user's intentions, expertise, context, and the complex social and cultural nature of disability. Standardized, universal approaches to web accessibility may lead to counterproductive outcomes for disabled people whose impairments and circumstances do not meet Western disability and accessibility norms. Accessible experiences for rehabilitation can be enhanced through an additional focus on holistic approaches to

  11. Web accessibility of public universities in Andalusia

    Directory of Open Access Journals (Sweden)

    Luis Alejandro Casasola Balsells

    2017-06-01

    This paper describes an analysis conducted in 2015 to evaluate the accessibility of content on Andalusian public university websites. In order to determine whether these websites are accessible, an assessment was carried out to check conformance with the latest Web Content Accessibility Guidelines (WCAG 2.0) established by the World Wide Web Consortium (W3C). For this purpose, we designed a methodology for analysis that combines the use of three automatic tools (eXaminator, the MINHAP web accessibility tool, and TAW) with a manual analysis to provide greater reliability and validity of the results. Although the results are acceptable overall, a detailed analysis shows that more work is still needed to achieve full accessibility for the entire university community. In this respect, we suggest several corrections to common accessibility errors to facilitate the design of university web portals.
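    A minimal example of the kind of check an automated accessibility tool performs: flagging images that lack a text alternative (related to WCAG 2.0 success criterion 1.1.1). Real tools such as TAW or eXaminator run many such checks; this sketch implements only one, using Python's standard-library HTML parser.

```python
from html.parser import HTMLParser

class AltTextChecker(HTMLParser):
    """Flag <img> tags that have no alt attribute at all."""
    def __init__(self):
        super().__init__()
        self.violations = []   # src values of images lacking a text alternative

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "img" and "alt" not in attrs:
            self.violations.append(attrs.get("src", "<unknown>"))

checker = AltTextChecker()
checker.feed('<p><img src="logo.png" alt="University logo">'
             '<img src="banner.jpg"></p>')
# checker.violations now lists the images that failed the check
```

    As the article's methodology suggests, such automated results still need manual review: an empty or meaningless alt text would pass this check while remaining inaccessible in practice.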

  12. Web accessibility and open source software.

    Science.gov (United States)

    Obrenović, Zeljko

    2009-07-01

    A Web browser provides a uniform user interface to different types of information. Making this interface universally accessible and more interactive is a long-term goal still far from being achieved. Universally accessible browsers require novel interaction modalities and additional functionalities, for which existing browsers tend to provide only partial solutions. Although functionality for Web accessibility can be found as open source and free software components, their reuse and integration is complex because they were developed in diverse implementation environments, following standards and conventions incompatible with the Web. To address these problems, we have started several activities that aim at exploiting the potential of open-source software for Web accessibility. The first of these activities is the development of Adaptable Multi-Interface COmmunicator (AMICO):WEB, an infrastructure that facilitates efficient reuse and integration of open source software components into the Web environment. The main contribution of AMICO:WEB is in enabling the syntactic and semantic interoperability between Web extension mechanisms and a variety of integration mechanisms used by open source and free software components. Its design is based on our experiences in solving practical problems where we have used open source components to improve accessibility of rich media Web applications. The second of our activities involves improving education, where we have used our platform to teach students how to build advanced accessibility solutions from diverse open-source software. We are also partially involved in the recently started Eclipse project called Accessibility Tools Framework (ACTF), the aim of which is the development of an extensible infrastructure upon which developers can build a variety of utilities that help to evaluate and enhance the accessibility of applications and content for people with disabilities. In this article we briefly report on these activities.

  13. Web Based Remote Access Microcontroller Laboratory

    OpenAIRE

    H. Çimen; İ. Yabanova; M. Nartkaya; S. M. Çinar

    2008-01-01

    This paper presents a web based remote access microcontroller laboratory. Because of accelerated development in electronics and computer technologies, microcontroller-based devices and appliances are found in all aspects of our daily life. Before the implementation of the remote access microcontroller laboratory, an experiment set was developed by teaching staff for training on microcontrollers. Requirements of technical teaching and industrial applications are considered when expe...

  14. Web services interface to EPICS channel access

    Institute of Scientific and Technical Information of China (English)

    DUAN Lei; SHEN Liren

    2008-01-01

    Web services are used in the Experimental Physics and Industrial Control System (EPICS). Combined with the EPICS Channel Access protocol, the high usability, platform independence and language independence of Web services can be used to design a fully transparent and uniform software interface layer, which supports channel data acquisition, modification and monitoring functions. This cross-platform, cross-language software interface layer has good interoperability and reusability.
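    The shape of such an interface layer can be sketched as follows. The in-memory channel table stands in for the Channel Access protocol, and the minimal XML messages stand in for the actual web-service envelope; none of this is EPICS API code.

```python
import xml.etree.ElementTree as ET

# Mock process variables standing in for real EPICS channels.
channels = {"SR:current": 200.5, "LINAC:energy": 1.2}

def handle_request(xml_text):
    """Translate a <caget>/<caput> XML request into a read/write on the
    channel table and return an XML result (toy message formats)."""
    req = ET.fromstring(xml_text)
    name = req.get("channel")
    if req.tag == "caput":
        channels[name] = float(req.get("value"))
    return '<result channel="{}" value="{}"/>'.format(name, channels[name])

r1 = handle_request('<caget channel="SR:current"/>')                  # read
r2 = handle_request('<caput channel="SR:current" value="198.7"/>')    # write
```

    Because the client only sees XML over HTTP, it does not matter what language or platform it runs on, which is exactly the transparency the abstract claims for the web-service layer.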

  15. Web services interface to EPICS channel access

    International Nuclear Information System (INIS)

    Duan Lei; Shen Liren

    2008-01-01

    Web services are used in the Experimental Physics and Industrial Control System (EPICS). Combined with the EPICS Channel Access protocol, the high usability, platform independence and language independence of Web services can be used to design a fully transparent and uniform software interface layer, which supports channel data acquisition, modification and monitoring functions. This cross-platform, cross-language software interface layer has good interoperability and reusability. (authors)

  16. Checklist of accessibility in Web informational environments

    Directory of Open Access Journals (Sweden)

    Christiane Gomes dos Santos

    2017-01-01

    This research deals with the process of searching, navigating and retrieving information on the web by people with blindness, focusing on the fields of information retrieval and information architecture in order to understand the strategies these users employ to access information on the web. It aims to propose the construction of an accessibility verification instrument, a checklist, to be used to analyze the behavior of people with blindness when searching, navigating and retrieving information on sites and pages. It is exploratory and descriptive research of a qualitative nature, using a case-study methodology: a study simulating search, navigation and information retrieval with the NonVisual Desktop Access speech synthesis system in an assistive technologies laboratory, to substantiate the construction of the accessibility verification checklist. The research is considered reliable and important for the evaluation of accessibility in web environments, improving access to information for people with limited reading so that the checklist can be used in accessibility analyses of websites and pages.

  17. A Technique to Speedup Access to Web Contents

    Indian Academy of Sciences (India)

    Web Caching - A Technique to Speedup Access to Web Contents. Harsha Srinath, Shiva Shankar Ramanna. General Article, Resonance – Journal of Science Education, Volume 7, Issue 7, July 2002, pp. 54-62. Keywords: World wide web; data caching; internet traffic; web page access.
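    The idea behind web caching can be sketched with a tiny LRU (least-recently-used) cache. The capacity, eviction policy and fetch function here are illustrative choices, not details taken from the article.

```python
from collections import OrderedDict

class WebCache:
    """Tiny LRU cache: a hit skips the slow origin fetch entirely,
    which is how caching speeds up repeated page access."""
    def __init__(self, capacity=2):
        self.capacity = capacity
        self.store = OrderedDict()
        self.hits = self.misses = 0

    def get(self, url, fetch):
        if url in self.store:
            self.hits += 1
            self.store.move_to_end(url)          # mark as most recently used
        else:
            self.misses += 1
            self.store[url] = fetch(url)         # slow path: origin server
            if len(self.store) > self.capacity:
                self.store.popitem(last=False)   # evict least recently used
        return self.store[url]

cache = WebCache(capacity=2)
fetch = lambda url: "<html>%s</html>" % url      # stand-in for an HTTP GET
for url in ["/a", "/b", "/a", "/c", "/a"]:
    cache.get(url, fetch)
# Two of the five requests ("/a" on its repeats) are served from the cache.
```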

  18. Working with WebQuests: Making the Web Accessible to Students with Disabilities.

    Science.gov (United States)

    Kelly, Rebecca

    2000-01-01

    This article describes how students with disabilities in regular classes are using the WebQuest lesson format to access the Internet. It explains essential WebQuest principles, creating a draft Web page, and WebQuest components. It offers an example of a WebQuest about salvaging the sunken ships, Titanic and Lusitania. A WebQuest planning form is…

  19. Enabling web users and developers to script accessibility with Accessmonkey.

    Science.gov (United States)

    Bigham, Jeffrey P; Brudvik, Jeremy T; Leung, Jessica O; Ladner, Richard E

    2009-07-01

    Efficient web access remains elusive for blind computer users. Previous efforts to improve web accessibility have focused on developer awareness, automated improvement, and legislation, but these approaches have left remaining concerns. First, while many tools can help produce accessible content, most are difficult to integrate into existing developer workflows and rarely offer specific suggestions that developers can implement. Second, tools that automatically improve web content for users generally solve specific problems and are difficult to combine and use on a diversity of existing assistive technology. Finally, although blind web users have proven adept at overcoming the shortcomings of the web and existing tools, they have been only marginally involved in improving the accessibility of their own web experience. In a step toward addressing these concerns, we have developed Accessmonkey, a common scripting framework that web users, web developers and web researchers can use to collaboratively improve accessibility. This framework advances the idea that Javascript and dynamic web content can be used to improve inaccessible content instead of being a cause of it. Using Accessmonkey, web users and developers on different platforms and with potentially different goals can collaboratively make the web more accessible. In this article, we first present the design of the Accessmonkey framework and offer several example scripts that demonstrate the utility of our approach. We conclude by discussing possible future extensions that will provide easy access to scripts as users browse the web and enable non-technical blind users to independently create and share improvements.

  20. Global Web Accessibility Analysis of National Government Portals and Ministry Web Sites

    DEFF Research Database (Denmark)

    Goodwin, Morten; Susar, Deniz; Nietzio, Annika

    2011-01-01

    Equal access to public information and services for all is an essential part of the United Nations (UN) Declaration of Human Rights. Today, the Web plays an important role in providing information and services to citizens. Unfortunately, many government Web sites are poorly designed and have accessibility barriers that prevent people with disabilities from using them. This article combines current Web accessibility benchmarking methodologies with a sound strategy for comparing Web accessibility among countries and continents. Furthermore, the article presents the first global analysis of the Web accessibility of 192 United Nations Member States made publicly available. The article also identifies common properties of Member States that have accessible and inaccessible Web sites and shows that implementing antidisability discrimination laws is highly beneficial for the accessibility of Web sites, while...

  1. A Framework for Transparently Accessing Deep Web Sources

    Science.gov (United States)

    Dragut, Eduard Constantin

    2010-01-01

    An increasing number of Web sites expose their content via query interfaces, many of them offering the same type of products/services (e.g., flight tickets, car rental/purchasing). They constitute the so-called "Deep Web". Accessing the content on the Deep Web has been a long-standing challenge for the database community. For a user interested in…

  2. AN AUTOMATIC AND METHODOLOGICAL APPROACH FOR ACCESSIBLE WEB APPLICATIONS

    Directory of Open Access Journals (Sweden)

    Lourdes Moreno

    2007-06-01

    Semantic Web approaches aim to achieve interoperability and communication among technologies and organizations. Nevertheless, it is sometimes forgotten that the Web must be usable for every user, so tools and techniques are needed to make the Semantic Web accessible. Accessibility and usability are two concepts widely used together in web application development, but their meanings differ: usability concerns making use easy, whereas accessibility refers to the possibility of access. For usability, there are many approaches well proven in real cases; the accessibility field, however, requires deeper research to make access feasible for disabled people, and also for novice non-disabled people, given the cost of automating and maintaining accessible applications. In this paper, we propose an architecture to achieve accessibility in web environments that complies with the WAI accessibility standard and the Universal Design paradigm. This architecture aims to control accessibility throughout the web application development life-cycle, following a methodology that starts from a semantic conceptual model and relies on description languages and controlled vocabularies.

  3. GROUPING WEB ACCESS SEQUENCES USING SEQUENCE ALIGNMENT METHOD

    OpenAIRE

    BHUPENDRA S CHORDIA; KRISHNAKANT P ADHIYA

    2011-01-01

    In web usage mining, grouping of web access sequences can be used to determine the behavior or intent of a set of users. The key issue in grouping web sessions is how to measure the similarity between web sessions, and there are many shortcomings in traditional measurement methods. The task of grouping web sessions based on similarity, which consists of maximizing the intra-group similarity while minimizing the inter-group similarity, is done using a sequence alignment method. This paper introduces a new method to group we...
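    One way to realize a sequence alignment similarity over sessions is a Needleman-Wunsch global alignment score on page-visit sequences. The scoring parameters below are illustrative, not those of the paper.

```python
def align_score(s, t, match=2, mismatch=-1, gap=-1):
    """Global (Needleman-Wunsch) alignment score between two page-visit sequences."""
    m, n = len(s), len(t)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        dp[i][0] = dp[i - 1][0] + gap            # align prefix of s against gaps
    for j in range(1, n + 1):
        dp[0][j] = dp[0][j - 1] + gap            # align prefix of t against gaps
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            diag = dp[i - 1][j - 1] + (match if s[i - 1] == t[j - 1] else mismatch)
            dp[i][j] = max(diag, dp[i - 1][j] + gap, dp[i][j - 1] + gap)
    return dp[m][n]

a = ["home", "search", "product", "cart"]
b = ["home", "search", "product", "checkout"]
c = ["news", "sports"]
# Sessions with similar intent score high against each other, dissimilar
# ones score low - the pairwise scores can then feed a grouping step.
```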

  4. Mfold web server for nucleic acid folding and hybridization prediction.

    Science.gov (United States)

    Zuker, Michael

    2003-07-01

    The abbreviated name, 'mfold web server', describes a number of closely related software applications available on the World Wide Web (WWW) for the prediction of the secondary structure of single stranded nucleic acids. The objective of this web server is to provide easy access to RNA and DNA folding and hybridization software to the scientific community at large. By making use of universally available web GUIs (Graphical User Interfaces), the server circumvents the problem of portability of this software. Detailed output, in the form of structure plots with or without reliability information, single strand frequency plots and 'energy dot plots', are available for the folding of single sequences. A variety of 'bulk' servers give less information, but in a shorter time and for up to hundreds of sequences at once. The portal for the mfold web server is http://www.bioinfo.rpi.edu/applications/mfold. This URL will be referred to as 'MFOLDROOT'.
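    mfold itself minimizes thermodynamic free energy with experimentally derived parameters. As a simpler stand-in for the same class of dynamic-programming structure prediction, the sketch below implements the classic Nussinov recursion, which merely maximizes the number of allowed base pairs rather than minimizing energy.

```python
def nussinov_pairs(seq, min_loop=3):
    """Maximum number of Watson-Crick/wobble base pairs in an RNA sequence
    (Nussinov recursion; a toy objective compared with mfold's energy model)."""
    ok = {("A", "U"), ("U", "A"), ("G", "C"), ("C", "G"), ("G", "U"), ("U", "G")}
    n = len(seq)
    dp = [[0] * n for _ in range(n)]
    for span in range(min_loop + 1, n):          # grow subsequence length
        for i in range(n - span):
            j = i + span
            best = dp[i + 1][j]                  # case 1: base i unpaired
            for k in range(i + min_loop + 1, j + 1):
                if (seq[i], seq[k]) in ok:       # case 2: i pairs with k
                    right = dp[k + 1][j] if k + 1 <= j else 0
                    best = max(best, 1 + dp[i + 1][k - 1] + right)
            dp[i][j] = best
    return dp[0][n - 1] if n else 0

# A small hairpin: three stem pairs close a loop of unpaired bases.
stem_pairs = nussinov_pairs("GGGAAAUCCC")
```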

  5. Guidelines for Making Web Content Accessible to All Users

    Science.gov (United States)

    Thompson, Terrill; Primlani, Saroj; Fiedor, Lisa

    2009-01-01

    The main goal of accessibility standards and guidelines is to design websites everyone can use. The "IT Accessibility Constituent Group" developed this set of draft guidelines to help EQ authors, reviewers, and staff and the larger EDUCAUSE community ensure that web content is accessible to all users, including those with disabilities. This…

  6. TMFoldWeb: a web server for predicting transmembrane protein fold class.

    Science.gov (United States)

    Kozma, Dániel; Tusnády, Gábor E

    2015-09-17

    Here we present TMFoldWeb, the web server implementation of TMFoldRec, a transmembrane protein fold recognition algorithm. TMFoldRec uses statistical potentials and utilizes topology filtering and a gapless threading algorithm. It ranks template structures, selects the most likely candidates and estimates the reliability of the obtained lowest-energy model. The statistical potential was developed in a maximum likelihood framework on a representative set of the PDBTM database. According to the benchmark test, the performance of TMFoldRec is about 77% in correctly predicting the fold class for a given transmembrane protein sequence. An intuitive web interface has been developed for the recently published TMFoldRec algorithm. The query sequence goes through a pipeline of topology prediction and a systematic sequence-to-structure alignment (threading). Resulting templates are ordered by energy and reliability values and are colored according to their significance level. Besides the graphical interface, programmatic access is available as well, via a direct interface for developers or for submitting genome-wide data sets. The TMFoldWeb web server is unique and currently the only web server able to predict the fold class of transmembrane proteins while assigning reliability scores to the prediction. This method is prepared for genome-wide analysis with its easy-to-use interface, informative result page and programmatic access. Reflecting the evolution of information and communication technology in the last few years, the developed web server, as well as the molecule viewer, is responsive and fully compatible with prevalent tablets and mobile devices.
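    The gapless threading idea can be illustrated with a toy scorer: sum per-position pseudo-energies of the query residues against each candidate template and rank templates by total energy. The potentials and templates below are invented numbers for illustration, not the TMFoldRec statistical potential.

```python
def thread_score(seq, template):
    """Sum of per-position pseudo-energies (lower = better fit); residues
    absent from a position's table get a default unfavourable energy."""
    return sum(pos.get(aa, 1.0) for aa, pos in zip(seq, template))

# Invented per-position preferences: hydrophobic residues favoured in
# membrane-facing positions, polar residues favoured in loop positions.
hydrophobic = {"L": -1.0, "I": -1.0, "V": -0.8}
polar = {"S": -0.5, "T": -0.5, "N": -0.4}

templates = {
    "helix_bundle": [hydrophobic] * 4 + [polar] * 2,
    "beta_barrel":  [polar, hydrophobic] * 3,    # alternating pattern
}

query = "LIVLST"
ranked = sorted(templates, key=lambda t: thread_score(query, templates[t]))
best = ranked[0]   # lowest-energy template gives the predicted fold class
```

    A reliability estimate, as in TMFoldRec, could then be derived from the energy gap between the best and second-best templates.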

  7. Usability and Accessibility of Air Force Intranet Web Sites

    National Research Council Canada - National Science Library

    Bentley, Richard S

    2006-01-01

    .... This research effort seeks to establish an understanding of how well common practice usability design principles and government mandated accessibility guidelines are followed by Air Force intranet web sites...

  8. Web accessibility practical advice for the library and information professional

    CERN Document Server

    Craven, Jenny

    2008-01-01

    Offers an introduction to web accessibility and usability for information professionals, offering advice on the concerns relevant to library and information organizations. This book can be used as a resource for developing staff training and awareness activities. It will also be of value to website managers involved in web design and development.

  9. Lightweight methodology to improve web accessibility

    CSIR Research Space (South Africa)

    Greeff, M

    2009-10-01

    Full Text Available ... to improve the score. For colour contrast, three tools were compared. With the Fujitsu ColorSelector [9], each colour combination has to be selected manually, and it did not identify colour contrast problems that were highlighted by the other two tools. The JuicyStudio Colour Contrast Analyser (a Firefox extension) ..., but this is not tested by AccessKeys AccessColor. However, AccessKeys AccessColor provides a link to the specific line in the code where the problem occurs, which the JuicyStudio Colour Contrast Analyser does not. According to these two tools, many colour...

  10. NetMHC-3.0: accurate web accessible predictions of human, mouse and monkey MHC class I affinities for peptides of length 8-11.

    Science.gov (United States)

    Lundegaard, Claus; Lamberth, Kasper; Harndahl, Mikkel; Buus, Søren; Lund, Ole; Nielsen, Morten

    2008-07-01

    NetMHC-3.0 is trained on a large number of quantitative peptide data using both affinity data from the Immune Epitope Database and Analysis Resource (IEDB) and elution data from SYFPEITHI. The method generates high-accuracy predictions of major histocompatibility complex (MHC): peptide binding. The predictions are based on artificial neural networks trained on data from 55 MHC alleles (43 human and 12 non-human), and position-specific scoring matrices (PSSMs) for an additional 67 HLA alleles. As only the MHC class I prediction server is available, predictions are possible for peptides of length 8-11 for all 122 alleles. Artificial neural network predictions are given as actual IC(50) values, whereas PSSM predictions are given as log-odds likelihood scores. The output is optionally available as a download for easy post-processing. The training method underlying the server is the best available and has been used to predict possible MHC-binding peptides in a series of pathogen viral proteomes, including SARS, Influenza and HIV, resulting in an average of 75-80% confirmed MHC binders. Here, the performance is further validated and benchmarked using a large set of newly published affinity data, non-redundant with the training set. The server is free to use and available at: http://www.cbs.dtu.dk/services/NetMHC.
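    Neural networks behind such IC(50) predictions are typically trained on log-transformed affinities, so the raw network output must be converted back to nanomolar units. A minimal sketch of that conversion, assuming the commonly used transform a = 1 - log(IC50)/log(50000) (an assumption for illustration, not taken from this abstract):

    ```python
    def score_to_ic50(score, max_ic50=50000.0):
        """Convert a NetMHC-style network output (1.0 = strong binder,
        0.0 = non-binder) back to an IC50 value in nM, assuming the
        common log-transform a = 1 - log(IC50)/log(max_ic50)."""
        return max_ic50 ** (1.0 - score)

    def is_strong_binder(score, threshold_nm=50.0):
        """A peptide is often called a strong binder below ~50 nM
        (the threshold here is a conventional choice, not the server's)."""
        return score_to_ic50(score) <= threshold_nm
    ```

    Under this transform a score of 1.0 maps to 1 nM and a score of 0.0 maps to 50,000 nM, so thresholding the score is equivalent to thresholding the predicted affinity.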

  11. Accessible Web Design - The Power of the Personal Message.

    Science.gov (United States)

    Whitney, Gill

    2015-01-01

    The aim of this paper is to describe ongoing research being carried out to enable people with visual impairments to communicate directly with designers and specifiers of hobby and community web sites to maximise the accessibility of their sites. The research started with an investigation of the accessibility of community and hobby web sites as perceived by a group of visually impaired end users. It is continuing with an investigation into how best to communicate with web designers who are not experts in web accessibility. The research is making use of communication theory to investigate how terminology describing personal experience can be used in the most effective and powerful way. By working with the users through a Delphi study, the research has ensured that the views of the visually impaired end users are successfully transmitted.

  12. NetMHC-3.0: accurate web accessible predictions of human, mouse and monkey MHC class I affinities for peptides of length 8-11

    DEFF Research Database (Denmark)

    Lundegaard, Claus; Lamberth, K; Harndahl, M

    2008-01-01

    NetMHC-3.0 is trained on a large number of quantitative peptide data using both affinity data from the Immune Epitope Database and Analysis Resource (IEDB) and elution data from SYFPEITHI. The method generates high-accuracy predictions of major histocompatibility complex (MHC): peptide binding...

  13. SIDECACHE: Information access, management and dissemination framework for web services.

    Science.gov (United States)

    Doderer, Mark S; Burkhardt, Cory; Robbins, Kay A

    2011-06-14

    Many bioinformatics algorithms and data sets are deployed using web services so that the results can be explored via the Internet and easily integrated into other tools and services. These services often include data from other sites that is accessed either dynamically or through file downloads. Developers of these services face several problems because of the dynamic nature of the information from the upstream services. Many publicly available repositories of bioinformatics data frequently update their information. When such an update occurs, the developers of the downstream service may also need to update. For file downloads, this process is typically performed manually, followed by a web service restart. Requests for information obtained by dynamic access of upstream sources are sometimes subject to rate restrictions. SideCache provides a framework for deploying web services that integrate information extracted from other databases and from web sources that are periodically updated. This situation occurs frequently in biotechnology, where new information is being continuously generated and the latest information is important. SideCache provides several types of services including proxy access and rate control, local caching, and automatic web service updating. We have used the SideCache framework to automate the deployment and updating of a number of bioinformatics web services and tools that extract information from remote primary sources such as NCBI, NCIBI, and Ensembl. The SideCache framework has also been used to share research results through the use of a SideCache-derived web service.
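    The proxy access, rate control, and local caching that SideCache provides can be illustrated with a minimal sketch. The class name, TTL policy, and fetch callback below are hypothetical illustrations of the idea, not SideCache's actual API:

    ```python
    import time

    class CachingProxy:
        """Sketch of a SideCache-style proxy: serve upstream data from a
        local TTL cache, and rate-limit calls that do reach upstream."""

        def __init__(self, fetch, ttl_seconds=3600.0, min_interval=1.0):
            self._fetch = fetch              # callable that hits the upstream source
            self._ttl = ttl_seconds          # how long a cached entry stays fresh
            self._min_interval = min_interval
            self._cache = {}                 # key -> (timestamp, value)
            self._last_upstream_call = 0.0

        def get(self, key):
            now = time.time()
            hit = self._cache.get(key)
            if hit is not None and now - hit[0] < self._ttl:
                return hit[1]                # fresh: serve from the local cache
            # Rate control: pause if the last upstream call was too recent.
            wait = self._min_interval - (now - self._last_upstream_call)
            if wait > 0:
                time.sleep(wait)
            value = self._fetch(key)
            self._last_upstream_call = time.time()
            self._cache[key] = (self._last_upstream_call, value)
            return value
    ```

    A downstream service built this way keeps answering from its cache when an upstream repository is slow or rate-limited, which is the deployment problem the abstract describes.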

  14. SWS: accessing SRS sites contents through Web Services

    OpenAIRE

    Romano, Paolo; Marra, Domenico

    2008-01-01

    Background Web Services and Workflow Management Systems can support creation and deployment of network systems, able to automate data analysis and retrieval processes in biomedical research. Web Services have been implemented at bioinformatics centres and workflow systems have been proposed for biological data analysis. New databanks are often developed by taking into account these technologies, but many existing databases do not allow programmatic access. Only a fraction of available datab...

  15. Analisis Web Accessibility Pada Perancangan Website Chat

    OpenAIRE

    Yushan, Subhansyah

    2011-01-01

    Chat is a popular application in which one user can communicate with another using text. Nowadays, many websites on the internet provide chat applications, such as instant messaging, Yahoo Messenger, etc. Websites which provide chat applications often cannot accommodate users who have disabilities, especially users with visual disabilities. This situation makes the communication process more complicated, as the accessibility of sending and receiving information becomes low. The ...

  16. SWS: accessing SRS sites contents through Web Services.

    Science.gov (United States)

    Romano, Paolo; Marra, Domenico

    2008-03-26

    Web Services and Workflow Management Systems can support creation and deployment of network systems, able to automate data analysis and retrieval processes in biomedical research. Web Services have been implemented at bioinformatics centres and workflow systems have been proposed for biological data analysis. New databanks are often developed by taking into account these technologies, but many existing databases do not allow programmatic access. Only a fraction of available databanks can thus be queried through programmatic interfaces. SRS is a well-known indexing and search engine for biomedical databanks, offering public access to many databanks and analysis tools. Unfortunately, these data are not easily and efficiently accessible through Web Services. We have developed 'SRS by WS' (SWS), a tool that makes information available in SRS sites accessible through Web Services. Information on known sites is maintained in a database, srsdb. SWS consists of a suite of Web Services that can query both srsdb, for information on sites and databases, and SRS sites. SWS returns results in a text-only format and can be accessed through a WSDL compliant client. SWS enables interoperability between workflow systems and SRS implementations, by also managing access to alternative sites, in order to cope with network and maintenance problems, and selecting the most up-to-date among available systems. Development and implementation of Web Services, allowing programmatic access to an exhaustive set of biomedical databases, can significantly improve automation of in-silico analysis. SWS supports this activity by making biological databanks that are managed in public SRS sites available through a programmatic interface.

  17. EnviroAtlas - Accessibility Characteristics in the Conterminous U.S. Web Service

    Data.gov (United States)

    U.S. Environmental Protection Agency — This EnviroAtlas web service includes maps that illustrate factors affecting transit accessibility, and indicators of accessibility. Accessibility measures how...

  18. Learning Task Knowledge from Dialog and Web Access

    Directory of Open Access Journals (Sweden)

    Vittorio Perera

    2015-06-01

    Full Text Available We present KnoWDiaL, an approach for Learning and using task-relevant Knowledge from human-robot Dialog and access to the Web. KnoWDiaL assumes that there is an autonomous agent that performs tasks, as requested by humans through speech. The agent needs to “understand” the request (i.e., to fully ground the task until it can proceed to plan for and execute it). KnoWDiaL contributes such understanding by using and updating a Knowledge Base, by dialoguing with the user, and by accessing the web. We believe that KnoWDiaL, as we present it, can be applied to general autonomous agents. However, we focus on our work with our autonomous collaborative robot, CoBot, which executes service tasks in a building, moving around and transporting objects between locations. Hence, the knowledge acquired and accessed consists of groundings of language to robot actions, and building locations, persons, and objects. KnoWDiaL handles the interpretation of voice commands, is robust to speech recognition errors, and is able to learn commands involving referring expressions in an open domain (i.e., without requiring a lexicon). We present in detail the multiple components of KnoWDiaL, namely a frame-semantic parser, a probabilistic grounding model, a web-based predicate evaluator, a dialog manager, and the weighted predicate-based Knowledge Base. We illustrate the knowledge access and updates from the dialog and Web access, through detailed and complete examples. We further evaluate the correctness of the predicate instances learned into the Knowledge Base, and show the increase in dialog efficiency as a function of the number of interactions. We have extensively and successfully used KnoWDiaL in CoBot dialoguing and accessing the Web, and extract a few corresponding example sequences from captured videos.

  19. Access Control of Web- and Java-Based Applications

    Science.gov (United States)

    Tso, Kam S.; Pajevski, Michael J.

    2013-01-01

    Cybersecurity has become a great concern as threats of service interruption, unauthorized access, stealing and altering of information, and spreading of viruses have become more prevalent and serious. Application layer access control of applications is a critical component in the overall security solution that also includes encryption, firewalls, virtual private networks, antivirus, and intrusion detection. An access control solution, based on an open-source access manager augmented with custom software components, was developed to provide protection to both Web-based and Java-based client and server applications. The DISA Security Service (DISA-SS) provides common access control capabilities for AMMOS software applications through a set of application programming interfaces (APIs) and network-accessible security services for authentication, single sign-on, authorization checking, and authorization policy management. The OpenAM access management technology designed for Web applications can be extended to meet the needs of Java thick clients and standalone servers that are commonly used in the JPL AMMOS environment. The DISA-SS reusable components have greatly reduced the effort for each AMMOS subsystem to develop its own access control strategy. The novelty of this work is that it leverages an open-source access management product that was designed for Web-based applications to provide access control for Java thick clients and Java standalone servers. Thick clients and standalone servers are still commonly used in businesses and government, especially for applications that require rich graphical user interfaces and high-performance visualization that cannot be met by thin clients running on Web browsers.
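    The authorization-checking capability described above can be sketched as a minimal role-based policy store. The class and method names below are hypothetical illustrations of the pattern, not the DISA-SS API:

    ```python
    class AccessManager:
        """Minimal sketch of application-level authorization checking:
        users hold roles, and a policy maps (resource, action) pairs to
        the roles allowed to perform them."""

        def __init__(self):
            self._roles = {}    # user -> set of roles
            self._policy = {}   # (resource, action) -> set of allowed roles

        def grant_role(self, user, role):
            self._roles.setdefault(user, set()).add(role)

        def allow(self, resource, action, role):
            self._policy.setdefault((resource, action), set()).add(role)

        def is_authorized(self, user, resource, action):
            """Authorized when the user holds at least one allowed role."""
            allowed = self._policy.get((resource, action), set())
            return bool(self._roles.get(user, set()) & allowed)
    ```

    Centralizing checks like this is what lets both web applications and thick clients share one access control strategy instead of each implementing its own.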

  20. 3PAC: Enforcing Access Policies for Web Services

    NARCIS (Netherlands)

    van Bemmel, J.; Wegdam, M.; Lagerberg, K.

    Web Services fail to deliver on the promise of ubiquitous deployment and seamless interoperability due to the lack of a uniform, standards-based approach to all aspects of security. In particular, the enforcement of access policies in a Service Oriented Architecture is not addressed adequately. We

  1. A physical implementation of the Turing machine accessed through Web

    Directory of Open Access Journals (Sweden)

    Marijo Maracic

    2008-11-01

    Full Text Available A Turing machine has an important role in education in the field of computer science, as it is a milestone in courses related to automata theory, theory of computation and computer architecture. Its value is also recognized in the Computing Curricula proposed by the Association for Computing Machinery (ACM and IEEE Computer Society. In this paper we present a physical implementation of the Turing machine accessed through the Web. To enable remote access to the Turing machine, an implementation of the client-server architecture is built. The web interface is described in detail and illustrations of remote programming, initialization and the computation of the Turing machine are given. Advantages of such an approach and expected benefits obtained by using a remotely accessible physical implementation of the Turing machine as an educational tool in the teaching process are discussed.

  2. Secure, web-accessible call rosters for academic radiology departments.

    Science.gov (United States)

    Nguyen, A V; Tellis, W M; Avrin, D E

    2000-05-01

    Traditionally, radiology department call rosters have been posted via paper and bulletin boards. Frequently, changes to these lists are made by multiple people independently, but often not synchronized, resulting in confusion among the house staff and technical staff as to who is on call and when. In addition, multiple and disparate copies exist in different sections of the department, and changes made would not be propagated to all the schedules. To eliminate such difficulties, a paperless call scheduling application was developed. Our call scheduling program allowed Java-enabled web access to a database by designated personnel from each radiology section who have privileges to make the necessary changes. Once a person made a change, everyone accessing the database would see the modification. This eliminates the chaos resulting from people swapping shifts at the last minute and not having the time to record or broadcast the change. Furthermore, all changes to the database were logged. Users are given a log-in name and password and can only edit their section; however, all personnel have access to all sections' schedules. Our applet was written in Java 2 using the latest technology in database access. We access our Interbase database through the DataExpress and DB Swing (Borland, Scotts Valley, CA) components. The result is secure access to the call rosters via the web. There are many advantages to the web-enabled access, mainly the ability for people to make changes and have the changes recorded and propagated in a single virtual location and available to all who need to know.

  3. CCTOP: a Consensus Constrained TOPology prediction web server.

    Science.gov (United States)

    Dobson, László; Reményi, István; Tusnády, Gábor E

    2015-07-01

    The Consensus Constrained TOPology prediction (CCTOP; http://cctop.enzim.ttk.mta.hu) server is a web-based application providing transmembrane topology prediction. In addition to utilizing 10 different state-of-the-art topology prediction methods, the CCTOP server incorporates topology information from existing experimental and computational sources available in the PDBTM, TOPDB and TOPDOM databases using the probabilistic framework of hidden Markov models. The server provides the option to precede the topology prediction with signal peptide prediction and transmembrane-globular protein discrimination. The initial result can be recalculated by (de)selecting any of the prediction methods or mapped experiments or by adding user-specified constraints. CCTOP showed superior performance to existing approaches. The reliability of each prediction is also calculated, and correlates with the accuracy of the per protein topology prediction. The prediction results and the collected experimental information are visualized on the CCTOP home page and can be downloaded in XML format. Programmatic access to the CCTOP server is also available, and an example of a client-side script is provided. © The Author(s) 2015. Published by Oxford University Press on behalf of Nucleic Acids Research.
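    Since results can be downloaded in XML and programmatic access is offered, a client might post-process a result roughly as follows. The tag and attribute names in this sample are invented for illustration and do not reflect CCTOP's real schema:

    ```python
    import xml.etree.ElementTree as ET

    # Hypothetical result document; CCTOP's actual XML layout differs.
    SAMPLE = """<result>
      <topology reliability="0.91">
        <region start="1" end="24" type="signal"/>
        <region start="30" end="52" type="membrane"/>
      </topology>
    </result>"""

    def parse_topology(xml_text):
        """Parse a topology-prediction XML result into a reliability
        score and a list of (start, end, type) regions."""
        root = ET.fromstring(xml_text)
        topo = root.find("topology")
        reliability = float(topo.get("reliability"))
        regions = [(int(r.get("start")), int(r.get("end")), r.get("type"))
                   for r in topo.findall("region")]
        return reliability, regions
    ```

    Parsing the per-prediction reliability alongside the regions lets a pipeline filter out low-confidence topologies before downstream analysis.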

  4. Web-page Prediction for Domain Specific Web-search using Boolean Bit Mask

    OpenAIRE

    Sinha, Sukanta; Duttagupta, Rana; Mukhopadhyay, Debajyoti

    2012-01-01

    A search engine is a Web-page retrieval tool, and Web searchers rely on an efficient search engine to make the best use of their time. To improve the performance of the search engine, we introduce a unique mechanism that gives Web searchers more relevant search results. In this paper, we discuss a domain-specific Web search prototype that generates a predicted Web-page list for a user-given search string using a Boolean bit mask.
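    The Boolean bit-mask idea can be sketched as follows, assuming one bit per term of a domain vocabulary; the encoding details here are illustrative, not the paper's exact scheme:

    ```python
    def keyword_mask(keywords, vocabulary):
        """Encode which domain vocabulary terms appear in `keywords`
        as a Boolean bit mask, one bit per vocabulary term."""
        mask = 0
        for i, term in enumerate(vocabulary):
            if term in keywords:
                mask |= 1 << i
        return mask

    def matches(page_mask, query_mask):
        """Predict a page as relevant when its mask covers every
        bit set in the query mask (a single AND per page)."""
        return page_mask & query_mask == query_mask
    ```

    Because each comparison is one bitwise AND, a predicted Web-page list for a query can be generated by a fast linear scan over precomputed page masks.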

  5. Simple Enough--Even for Web Virgins: Lisa Mitten's Access to Native American Web Sites. Web Site Review Essay.

    Science.gov (United States)

    Belgarde, Mary Jiron

    1998-01-01

    A mixed-blood Mohawk urban Indian and university librarian, Lisa Mitten provides access to Web sites with solid information about American Indians. Links are provided to 10 categories--Native nations, Native organizations, Indian education, Native media, powwows and festivals, Indian music, Native arts, Native businesses, and Indian-oriented home…

  6. Predicting consumer behavior with Web search.

    Science.gov (United States)

    Goel, Sharad; Hofman, Jake M; Lahaie, Sébastien; Pennock, David M; Watts, Duncan J

    2010-10-12

    Recent work has demonstrated that Web search volume can "predict the present," meaning that it can be used to accurately track outcomes such as unemployment levels, auto and home sales, and disease prevalence in near real time. Here we show that what consumers are searching for online can also predict their collective future behavior days or even weeks in advance. Specifically we use search query volume to forecast the opening weekend box-office revenue for feature films, first-month sales of video games, and the rank of songs on the Billboard Hot 100 chart, finding in all cases that search counts are highly predictive of future outcomes. We also find that search counts generally boost the performance of baseline models fit on other publicly available data, where the boost varies from modest to dramatic, depending on the application in question. Finally, we reexamine previous work on tracking flu trends and show that, perhaps surprisingly, the utility of search data relative to a simple autoregressive model is modest. We conclude that in the absence of other data sources, or where small improvements in predictive performance are material, search queries provide a useful guide to the near future.
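    The kind of model described, where search counts feed a simple predictive baseline, can be sketched as an ordinary least-squares fit; the data and variable names below are illustrative only, not the study's actual models:

    ```python
    def fit_line(x, y):
        """Ordinary least squares for y = a*x + b."""
        n = len(x)
        mx, my = sum(x) / n, sum(y) / n
        a = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
             / sum((xi - mx) ** 2 for xi in x))
        b = my - a * mx
        return a, b

    def predict_outcome(search_counts, outcomes, new_count):
        """Fit outcome (e.g. opening-weekend revenue) against historical
        search-query volume, then predict from a new query count."""
        a, b = fit_line(search_counts, outcomes)
        return a * new_count + b
    ```

    In practice such a search-volume regressor would be added alongside a baseline model fit on other public data, with the search term contributing the "boost" the abstract describes.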

  7. Access Control of Web and Java Based Applications

    Science.gov (United States)

    Tso, Kam S.; Pajevski, Michael J.; Johnson, Bryan

    2011-01-01

    Cyber security has gained national and international attention as a result of near continuous headlines from financial institutions, retail stores, government offices and universities reporting compromised systems and stolen data. Concerns continue to rise as threats of service interruption, and spreading of viruses become ever more prevalent and serious. Controlling access to application layer resources is a critical component in a layered security solution that includes encryption, firewalls, virtual private networks, antivirus, and intrusion detection. In this paper we discuss the development of an application-level access control solution, based on an open-source access manager augmented with custom software components, to provide protection to both Web-based and Java-based client and server applications.

  8. An Introduction to Web Accessibility, Web Standards, and Web Standards Makers

    Science.gov (United States)

    McHale, Nina

    2011-01-01

    Librarians and libraries have long been committed to providing equitable access to information. In the past decade and a half, the growth of the Internet and the rapid increase in the number of online library resources and tools have added a new dimension to this core duty of the profession: ensuring accessibility of online resources to users with…

  9. mORCA: ubiquitous access to life science web services.

    Science.gov (United States)

    Diaz-Del-Pino, Sergio; Trelles, Oswaldo; Falgueras, Juan

    2018-01-16

    Technical advances in mobile devices such as smartphones and tablets have produced an extraordinary increase in their use around the world, and they have become part of our daily lives. The possibility of carrying these devices in a pocket, particularly mobile phones, has enabled ubiquitous access to Internet resources. Furthermore, in the life sciences world there has been a vast proliferation of data types and services that end up exposed as Web Services. This suggests the need for research into mobile clients to deal with life sciences applications for effective usage and exploitation. Analysing the current features in existing bioinformatics applications managing Web Services, we have devised, implemented, and deployed an easy-to-use web-based lightweight mobile client. This client is able to browse, select, compose parameters, invoke, and monitor the execution of Web Services stored in catalogues or central repositories. The client is also able to deal with huge amounts of data through external storage mounts. In addition, we also present a validation use case, which illustrates the usage of the application while executing, monitoring, and exploring the results of a registered workflow. The software is available in the Apple Store and Android Market, and the source code is publicly available on GitHub. Mobile devices are becoming increasingly important in the scientific world due to their strong potential impact on scientific applications. Bioinformatics should not fall behind this trend. We present an original software client that deals with the intrinsic limitations of such devices and propose different guidelines to provide location-independent access to computational resources in bioinformatics and biomedicine. Its modular design makes it easily expandable with the inclusion of new repositories, tools, types of visualization, etc.

  10. KBWS: an EMBOSS associated package for accessing bioinformatics web services.

    Science.gov (United States)

    Oshita, Kazuki; Arakawa, Kazuharu; Tomita, Masaru

    2011-04-29

    Bioinformatics web-based services are rapidly proliferating, owing to their interoperability and ease of use. The next challenge is in the integration of these services in the form of workflows, and several projects are already underway, standardizing the syntax, semantics, and user interfaces. In order to deploy the advantages of web services with locally installed tools, here we describe a collection of proxy client tools for 42 major bioinformatics web services in the form of European Molecular Biology Open Software Suite (EMBOSS) UNIX command-line tools. EMBOSS provides sophisticated means for discoverability and interoperability for hundreds of tools, and our package, named the Keio Bioinformatics Web Service (KBWS), adds functionalities of local and multiple alignment of sequences, phylogenetic analyses, and prediction of cellular localization of proteins and RNA secondary structures. This software, implemented in C, is available under GPL from http://www.g-language.org/kbws/ and the GitHub repository http://github.com/cory-ko/KBWS. Users can utilize the SOAP services implemented in Perl directly via the WSDL files at http://soap.g-language.org/kbws.wsdl (RPC Encoded) and http://soap.g-language.org/kbws_dl.wsdl (Document/literal).

  11. KBWS: an EMBOSS associated package for accessing bioinformatics web services

    Directory of Open Access Journals (Sweden)

    Tomita Masaru

    2011-04-01

    Full Text Available Abstract The availability of bioinformatics web-based services is rapidly proliferating, for their interoperability and ease of use. The next challenge is in the integration of these services in the form of workflows, and several projects are already underway, standardizing the syntax, semantics, and user interfaces. In order to deploy the advantages of web services with locally installed tools, here we describe a collection of proxy client tools for 42 major bioinformatics web services in the form of European Molecular Biology Open Software Suite (EMBOSS) UNIX command-line tools. EMBOSS provides sophisticated means for discoverability and interoperability for hundreds of tools, and our package, named the Keio Bioinformatics Web Service (KBWS), adds functionalities of local and multiple alignment of sequences, phylogenetic analyses, and prediction of cellular localization of proteins and RNA secondary structures. This software implemented in C is available under GPL from http://www.g-language.org/kbws/ and GitHub repository http://github.com/cory-ko/KBWS. Users can utilize the SOAP services implemented in Perl directly via WSDL file at http://soap.g-language.org/kbws.wsdl (RPC Encoded) and http://soap.g-language.org/kbws_dl.wsdl (Document/literal).

  12. Ensemble learned vaccination uptake prediction using web search queries

    DEFF Research Database (Denmark)

    Hansen, Niels Dalum; Lioma, Christina; Mølbak, Kåre

    2016-01-01

    We present a method that uses ensemble learning to combine clinical and web-mined time-series data in order to predict future vaccination uptake. The clinical data is official vaccination registries, and the web data is query frequencies collected from Google Trends. Experiments with official... vaccine records show that our method predicts vaccination uptake effectively (4.7 Root Mean Squared Error). Whereas performance is best when combining clinical and web data, using solely web data yields comparative performance. To our knowledge, this is the first study to predict vaccination uptake...
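    A minimal sketch of the ensemble idea, combining a naive clinical baseline with a web-query model by weighted averaging; the component models here are simplified stand-ins for the learned ensemble, and all names are illustrative:

    ```python
    def clinical_baseline(uptake):
        """Naive clinical model: predict the next value as the last
        observed registry value."""
        return uptake[-1]

    def web_query_model(queries, uptake):
        """Hypothetical web-data model: scale the latest query frequency
        by the historical uptake-per-query ratio."""
        ratio = sum(uptake) / sum(queries)
        return queries[-1] * ratio

    def ensemble_prediction(uptake, queries, w_clinical=0.5):
        """Weighted combination of the two learners, as in ensemble
        averaging; the weight would normally be learned from data."""
        w_web = 1.0 - w_clinical
        return (w_clinical * clinical_baseline(uptake)
                + w_web * web_query_model(queries, uptake))
    ```

    The abstract's finding that web data alone performs comparably corresponds here to setting `w_clinical` near zero and still obtaining similar predictions.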

  13. Context mining and integration into predictive web analytics

    NARCIS (Netherlands)

    Kiseleva, Y.

    2013-01-01

    Predictive Web Analytics is aimed at understanding behavioural patterns of users of various web-based applications: e-commerce, ubiquitous and mobile computing, and computational advertising. Within these applications business decisions often rely on two types of predictions: an overall or

  14. Web Archiving: Issues and Problems in Collection Building and Access

    Directory of Open Access Journals (Sweden)

    Grethe Jacobsen

    2008-12-01

    Full Text Available Denmark began web archiving in 2005 and the experiences are presented with a specific focus on collection-building and issues concerning access. In creating principles for what internet materials to collect for a national collection, one can in many ways build on existing practice and guidelines. The actual collection requires strategies for harvesting relevant segments of the internet in order to assure as complete a coverage as possible. Rethinking is also necessary when it comes to the issue of description, but cataloguing expertise can be utilised to find new ways for users to retrieve information. Technical problems in harvesting and archiving are identifiable and can be solved through international cooperation. Access to the archived materials, on the other hand, has become the major challenge to national libraries. Legal obstacles prevent national libraries from offering general access to their archived internet materials. In Europe the principal obstacles are the EU Directive on Data Protection (Directive 95/46/EC and local data protection legislation based on this directive. LIBER is urged to take political action on this issue in order that the general public may have the same access to the collection of internet materials as it has to other national collections.

  15. File access prediction using neural networks.

    Science.gov (United States)

    Patra, Prashanta Kumar; Sahu, Muktikanta; Mohapatra, Subasish; Samantray, Ronak Kumar

    2010-06-01

    One of the most vexing issues in the design of a high-speed computer is the wide gap of access times between the memory and the disk. To solve this problem, static file access predictors have been used. In this paper, we propose dynamic file access predictors using neural networks to significantly improve upon the accuracy, success-per-reference, and effective-success-rate-per-reference, by using a neural-network-based file access predictor with proper tuning. In particular, we verified that incorrect predictions were reduced from 53.11% to 43.63% for the proposed neural network prediction method with a standard configuration, compared with the recent popularity (RP) method. With manual tuning for each trace, we are able to further improve the misprediction rate and effective-success-rate-per-reference over the standard configuration. Simulations on distributed file system (DFS) traces reveal that exact-fit radial basis function (RBF) networks give better predictions in high-end systems, whereas a multilayer perceptron (MLP) trained with Levenberg-Marquardt (LM) backpropagation outperforms in systems having good computational capability. Probabilistic and competitive predictors are the most suitable for workstations having limited resources, and the former predictor is more efficient than the latter for servers handling the most system calls. Finally, we conclude that the MLP with the LM backpropagation algorithm has a better file prediction success rate than simple perceptron, last successor, stable successor, and best-k-out-of-m predictors.
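    One of the baseline predictors named above, the last successor, can be sketched in a few lines; this is a simplified illustration of that classic baseline, not the paper's neural predictor:

    ```python
    class LastSuccessorPredictor:
        """'Last successor' file access baseline: predict that the file
        following f will be whichever file followed f the last time
        f was accessed."""

        def __init__(self):
            self._successor = {}   # file -> file observed immediately after it
            self._prev = None      # most recently accessed file

        def observe(self, filename):
            """Record one file access from the trace."""
            if self._prev is not None:
                self._successor[self._prev] = filename
            self._prev = filename

        def predict_next(self):
            """Predicted next access, or None if the current file has
            never been followed by anything yet."""
            return self._successor.get(self._prev)
    ```

    A correct prediction lets the system prefetch the file from disk before it is requested, which is exactly the memory/disk gap the abstract opens with.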

  16. ARCAS (ACACIA Regional Climate-data Access System) -- a Web Access System for Climate Model Data Access, Visualization and Comparison

    Science.gov (United States)

    Hakkarinen, C.; Brown, D.; Callahan, J.; hankin, S.; de Koningh, M.; Middleton-Link, D.; Wigley, T.

    2001-05-01

    A Web-based access system to climate model output data sets for intercomparison and analysis has been produced, using the NOAA-PMEL-developed Live Access Server software as host server and Ferret as the data serving and visualization engine. Called ARCAS ("ACACIA Regional Climate-data Access System"), and publicly accessible at http://dataserver.ucar.edu/arcas, the site currently serves climate model outputs from runs of the NCAR Climate System Model for the 21st century, for Business as Usual and Stabilization of Greenhouse Gas Emission scenarios. Users can select, download, and graphically display single variables or comparisons of two variables from either or both of the CSM model runs, averaged for monthly, seasonal, or annual time resolutions. The time length of the averaging period, and the geographical domain for download and display, are fully selectable by the user. A variety of arithmetic operations on the data variables can be computed "on-the-fly", as defined by the user. Expansions of the user-selectable options for defining analysis options, and for accessing other DODS-compatible ("Distributed Oceanographic Data System"-compatible) data sets, residing at locations other than the NCAR hardware server on which ARCAS operates, are planned for this year. These expansions are designed to allow users quick and easy-to-operate web-based access to the largest possible selection of climate model output data sets available throughout the world.

  17. Inclusão digital via acessibilidade web | Digital inclusion via web accessibility

    Directory of Open Access Journals (Sweden)

    Cesar Augusto Cusin

    2009-03-01

    Full Text Available Abstract The current nature of the web, which highlights the collaborative participation of users in various digital informational environments, leads to the development of guidelines that focus on inclusive digital information architecture for different audiences in diverse informational environments. The study proposes an inclusive digital information environment, aiming to establish the elements of accessibility that enable the promotion of digital informational inclusion, in order to highlight the references of digital information architecture and the international recommendations, from the perspective of Information Science and the new information and communication technologies (ICT). Keywords digital inclusion; web; accessibility; information science; information architecture.

  18. Prediction of users webpage access behaviour using association ...

    Indian Academy of Sciences (India)

    pages mainly depended on the support and lift measure whereas confidence assumed ... Apriori algorithm; association rules; data mining; MSNBC; web usage .... clustering was used in finding the user access patterns from web access log. .... satisfied the minimum support and confidence of 0.6% and 100% respectively.
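The support, confidence and lift measures named in this snippet can be computed directly from a session log. A minimal sketch, using a hypothetical page-visit log rather than the MSNBC data set referenced in the paper:

```python
def support(itemset, transactions):
    """Fraction of sessions that contain every page in `itemset`."""
    items = set(itemset)
    return sum(items <= set(t) for t in transactions) / len(transactions)

def confidence(antecedent, consequent, transactions):
    """Estimated P(consequent | antecedent) over the session log."""
    base = support(antecedent, transactions)
    both = support(set(antecedent) | set(consequent), transactions)
    return both / base if base else 0.0

def lift(antecedent, consequent, transactions):
    """Confidence normalised by the consequent's baseline support."""
    base = support(consequent, transactions)
    return confidence(antecedent, consequent, transactions) / base if base else 0.0
```

A rule such as news → sports is then kept only if its support and confidence clear the chosen minimum thresholds, which is the filtering step Apriori-style mining performs.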

  19. Accessing NASA Technology with the World Wide Web

    Science.gov (United States)

    Nelson, Michael L.; Bianco, David J.

    1995-01-01

    NASA Langley Research Center (LaRC) began using the World Wide Web (WWW) in the summer of 1993, becoming the first NASA installation to provide a Center-wide home page. This coincided with a reorganization of LaRC to provide a more concentrated focus on technology transfer to both aerospace and non-aerospace industry. Use of WWW and NCSA Mosaic not only provides automated information dissemination, but also allows for the implementation, evolution and integration of many technology transfer and technology awareness applications. This paper describes several of these innovative applications, including the on-line presentation of the entire Technology OPportunities Showcase (TOPS), an industrial partnering showcase that exists on the Web long after the actual 3-day event ended. The NASA Technical Report Server (NTRS) provides uniform access to many logically similar, yet physically distributed NASA report servers. WWW is also the foundation of the Langley Software Server (LSS), an experimental software distribution system which will distribute LaRC-developed software. In addition to the more formal technology distribution projects, WWW has been successful in connecting people with technologies and people with other people.

  20. Hera-FFX: a Firefox add-on for Semi-automatic Web Accessibility Evaluation

    OpenAIRE

    Fuertes Castro, José Luis; González, Ricardo; Gutiérrez, Emmanuelle; Martínez Normand, Loïc

    2009-01-01

    Website accessibility evaluation is a complex task requiring a combination of human expertise and software support. There are several online and offline tools to support the manual web accessibility evaluation process. However, they all have some weaknesses because none of them includes all the desired features. In this paper we present Hera-FFX, an add-on for the Firefox web browser that supports semi-automatic web accessibility evaluation.

  1. Open access web technology for mathematics learning in higher education

    Directory of Open Access Journals (Sweden)

    Mari Carmen González-Videgaray

    2016-05-01

    Full Text Available Problems with mathematics learning, “math anxiety” or “statistics anxiety” among university students can be avoided by using teaching strategies and technological tools. Besides personal suffering, low achievement in mathematics reduces terminal efficiency and decreases enrollment in careers related to science, technology and mathematics. This paper has two main goals: 1) to offer an organized inventory of open access web resources for math learning in higher education, and 2) to explore to what extent these resources are currently known and used by students and teachers. The first goal was accomplished by running a search in Google and then classifying the resources. For the second, we conducted a survey among a sample of students (n=487) and teachers (n=60) from mathematics and engineering within the largest public university in Mexico. We categorized 15 high-quality web resources, most of them interactive simulations and computer algebra systems.

  2. Predictive access control for distributed computation

    DEFF Research Database (Denmark)

    Yang, Fan; Hankin, Chris; Nielson, Flemming

    2013-01-01

    We show how to use aspect-oriented programming to separate security and trust issues from the logical design of mobile, distributed systems. The main challenge is how to enforce various types of security policies, in particular predictive access control policies — policies based on the future behavior of a program. A novel feature of our approach is that we can define policies concerning secondary use of data.

  3. Architecture for large-scale automatic web accessibility evaluation based on the UWEM methodology

    DEFF Research Database (Denmark)

    Ulltveit-Moe, Nils; Olsen, Morten Goodwin; Pillai, Anand B.

    2008-01-01

    The European Internet Accessibility Observatory project (EIAO) has developed an Observatory for performing large-scale automatic web accessibility evaluations of public sector web sites in Europe. The architecture includes a distributed web crawler that crawls web sites for links until either a given budget of web pages has been identified or the web site has been crawled exhaustively. Subsequently, a uniform random subset of the crawled web pages is sampled and sent for accessibility evaluation, and the evaluation results are stored in a Resource Description Format (RDF) database that is later loaded ... challenges that the project faced and the solutions developed towards building a system capable of regular large-scale accessibility evaluations with sufficient capacity and stability. It also outlines some possible future architectural improvements.

  4. Accessibility and content of individualized adult reconstructive hip and knee/musculoskeletal oncology fellowship web sites.

    Science.gov (United States)

    Young, Bradley L; Cantrell, Colin K; Patt, Joshua C; Ponce, Brent A

    2018-06-01

    Accessible, adequate online information is important to fellowship applicants. Program web sites can affect which programs applicants apply to, subsequently altering interview costs incurred by both parties and ultimately impacting rank lists. Web site analyses have been performed for all orthopaedic subspecialties other than those involved in the combined adult reconstruction and musculoskeletal (MSK) oncology fellowship match. A complete list of active programs was obtained from the official adult reconstruction and MSK oncology society web sites. Web site accessibility was assessed using a structured Google search. Accessible web sites were evaluated based on 21 previously reported content criteria. Seventy-four adult reconstruction programs and 11 MSK oncology programs were listed on the official society web sites. Web sites were identified and accessible for 58 (78%) adult reconstruction and 9 (82%) MSK oncology fellowship programs. No web site contained all content criteria, and more than half of both adult reconstruction and MSK oncology web sites failed to include 12 of the 21 criteria. Several programs participating in the combined Adult Reconstructive Hip and Knee/Musculoskeletal Oncology Fellowship Match did not have accessible web sites. Of the web sites that were accessible, none contained comprehensive information, and the majority lacked information that has been previously identified as being important to prospective applicants.

  5. Clinical Predictive Modeling Development and Deployment through FHIR Web Services.

    Science.gov (United States)

    Khalilia, Mohammed; Choi, Myung; Henderson, Amelia; Iyengar, Sneha; Braunstein, Mark; Sun, Jimeng

    2015-01-01

    Clinical predictive modeling involves two challenging tasks: model development and model deployment. In this paper we demonstrate a software architecture for developing and deploying clinical predictive models using web services via the Health Level 7 (HL7) Fast Healthcare Interoperability Resources (FHIR) standard. The services enable model development using electronic health records (EHRs) stored in OMOP CDM databases and model deployment for scoring individual patients through FHIR resources. The MIMIC2 ICU dataset and a synthetic outpatient dataset were transformed into OMOP CDM databases for predictive model development. The resulting predictive models are deployed as FHIR resources, which receive requests of patient information, perform prediction against the deployed predictive model and respond with prediction scores. To assess the practicality of this approach we evaluated the response and prediction time of the FHIR modeling web services. We found the system to be reasonably fast with one second total response time per patient prediction.
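The request/response exchange described above can be sketched as plain JSON handling. The resource layout and parameter names below ("patient", "probability") are illustrative assumptions, not the FHIR profiles actually used by the authors' services:

```python
import json

def build_prediction_request(patient_id, features):
    """Wrap patient features in a FHIR Parameters-style resource.

    The parameter names are placeholders for whatever codes a real
    deployment would use."""
    return {
        "resourceType": "Parameters",
        "parameter": [{"name": "patient", "valueString": patient_id}]
        + [{"name": code, "valueDecimal": value}
           for code, value in sorted(features.items())],
    }

def extract_score(response_body):
    """Read the prediction score back out of a Parameters-style JSON response."""
    doc = json.loads(response_body)
    for param in doc.get("parameter", []):
        if param.get("name") == "probability":
            return param["valueDecimal"]
    raise KeyError("response carried no 'probability' parameter")
```

In the architecture described, a request body like this would be POSTed to the scoring endpoint and the returned score extracted for display; the one-second per-patient response time the authors report covers that round trip.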

  6. TAPIR, a web server for the prediction of plant microRNA targets, including target mimics.

    Science.gov (United States)

    Bonnet, Eric; He, Ying; Billiau, Kenny; Van de Peer, Yves

    2010-06-15

    We present a new web server called TAPIR, designed for the prediction of plant microRNA targets. The server offers the possibility to search for plant miRNA targets using a fast and a precise algorithm. The precise option is much slower but guarantees to find less perfectly paired miRNA-target duplexes. Furthermore, the precise option allows the prediction of target mimics, which are characterized by a miRNA-target duplex having a large loop, making them undetectable by traditional tools. The TAPIR web server can be accessed at: http://bioinformatics.psb.ugent.be/webtools/tapir. Supplementary data are available at Bioinformatics online.

  7. Size-based predictions of food web patterns

    DEFF Research Database (Denmark)

    Zhang, Lai; Hartvig, Martin; Knudsen, Kim

    2014-01-01

    We employ size-based theoretical arguments to derive simple analytic predictions of ecological patterns and properties of natural communities: size-spectrum exponent, maximum trophic level, and susceptibility to invasive species. The predictions are brought about by assuming that an infinite number of species are continuously distributed on a size-trait axis. It is, however, an open question whether such predictions are valid for a food web with a finite number of species embedded in a network structure. We address this question by comparing the size-based predictions to results from dynamic food web simulations with varying species richness. To this end, we develop a new size- and trait-based food web model that can be simplified into an analytically solvable size-based model. We confirm existing solutions for the size distribution and derive novel predictions for maximum trophic level and invasion...

  8. Accessibility Trends among Academic Library and Library School Web Sites in the USA and Canada

    Science.gov (United States)

    Schmetzke, Axel; Comeaux, David

    2009-01-01

    This paper focuses on the accessibility of North American library and library school Web sites for all users, including those with disabilities. Web accessibility data collected in 2006 are compared to those of 2000 and 2002. The findings of this follow-up study continue to give cause for concern: Despite improvements since 2002, library and…

  9. Accessibility and content of individualized adult reconstructive hip and knee/musculoskeletal oncology fellowship web sites

    Directory of Open Access Journals (Sweden)

    Bradley L. Young, MD

    2018-06-01

    Conclusions: Several programs participating in the combined Adult Reconstructive Hip and Knee/Musculoskeletal Oncology Fellowship Match did not have accessible web sites. Of the web sites that were accessible, none contained comprehensive information and the majority lacked information that has been previously identified as being important to prospective applicants.

  10. Designing A General Deep Web Access Approach Based On A Newly Introduced Factor; Harvestability Factor (HF)

    NARCIS (Netherlands)

    Khelghati, Mohammadreza; van Keulen, Maurice; Hiemstra, Djoerd

    2014-01-01

    The growing need to access more and more information draws attention to the huge amount of data hidden behind web forms, known as the deep web. To make this data accessible, harvesters have a crucial role. Targeting different domains and websites enhances the need to have a general-purpose harvester

  11. Building Accessible Educational Web Sites: The Law, Standards, Guidelines, Tools, and Lessons Learned

    Science.gov (United States)

    Liu, Ye; Palmer, Bart; Recker, Mimi

    2004-01-01

    Professional education is increasingly facing accessibility challenges with the emergence of web-based learning. This paper summarizes related U.S. legislation, standards, guidelines, and validation tools to make web-based learning accessible for all potential learners. We also present lessons learned during the implementation of web accessibility…

  12. DIANA-microT web server: elucidating microRNA functions through target prediction.

    Science.gov (United States)

    Maragkakis, M; Reczko, M; Simossis, V A; Alexiou, P; Papadopoulos, G L; Dalamagas, T; Giannopoulos, G; Goumas, G; Koukis, E; Kourtis, K; Vergoulis, T; Koziris, N; Sellis, T; Tsanakas, P; Hatzigeorgiou, A G

    2009-07-01

    Computational microRNA (miRNA) target prediction is one of the key means for deciphering the role of miRNAs in development and disease. Here, we present the DIANA-microT web server as the user interface to the DIANA-microT 3.0 miRNA target prediction algorithm. The web server provides extensive information for predicted miRNA:target gene interactions with a user-friendly interface, providing extensive connectivity to online biological resources. Target gene and miRNA functions may be elucidated through automated bibliographic searches, and functional information is accessible through Kyoto Encyclopedia of Genes and Genomes (KEGG) pathways. The web server offers links to nomenclature, sequence and protein databases, and users are facilitated by being able to search for targeted genes using different nomenclatures or functional features, such as the gene's possible involvement in biological pathways. The target prediction algorithm supports parameters calculated individually for each miRNA:target gene interaction and provides a signal-to-noise ratio and a precision score that help in the evaluation of the significance of the predicted results. Using a set of miRNA targets recently identified through the pSILAC method, the performance of several computational target prediction programs was assessed. DIANA-microT 3.0 achieved the highest ratio of correctly predicted targets over all predicted targets (66%). The DIANA-microT web server is freely available at www.microrna.gr/microT.

  13. Content accessibility of Web documents: Overview of concepts and needed standards

    DEFF Research Database (Denmark)

    Alapetite, A.

    2006-01-01

    The concept of Web accessibility refers to a combined set of measures, namely, how easily and how efficiently different types of users may make use of a given service. While some recommendations for accessibility focus on people with various specific disabilities, this document seeks to broaden the scope to any type of user and any type of use case. The document provides an introduction to some required concepts and technical standards for designing accessible Web sites. A brief review of the legal requirements in a few countries for Web accessibility complements the recommendations...

  14. BioServices: a common Python package to access biological Web Services programmatically.

    Science.gov (United States)

    Cokelaer, Thomas; Pultz, Dennis; Harder, Lea M; Serra-Musach, Jordi; Saez-Rodriguez, Julio

    2013-12-15

    Web interfaces provide access to numerous biological databases. Many can be accessed programmatically thanks to Web Services. Building applications that combine several of them would benefit from a single framework. BioServices is a comprehensive Python framework that provides programmatic access to major bioinformatics Web Services (e.g. KEGG, UniProt, BioModels, ChEMBLdb). Wrapping additional Web Services based either on Representational State Transfer or Simple Object Access Protocol/Web Services Description Language technologies is eased by the use of object-oriented programming. BioServices releases and documentation are available at http://pypi.python.org/pypi/bioservices under a GPL-v3 license.

  15. Towards Uniform Access to Web Data and Services

    NARCIS (Netherlands)

    Harth, Andreas; Norton, Barry; Polleres, Axel; Sapkota, Brahmananda; Speiser, Sebastian; Stadtmüller, Steffen; Suominen, Osma

    2011-01-01

    A sizable amount of data on the Web is currently available via Web APIs that expose data in formats such as JSON or XML. Combining data from different APIs and data sources requires glue code which is typically not shared and hence not reused. We derive requirements for a mechanism that brings data

  16. H1DS: A new web-based data access system

    Energy Technology Data Exchange (ETDEWEB)

    Pretty, D.G., E-mail: david.pretty@anu.edu.au; Blackwell, B.D.

    2014-05-15

    Highlights: • We present H1DS, a new RESTful web service for accessing fusion data. • We examine the scalability and extensibility of H1DS. • We present a fast and user friendly web browser client for the H1DS web service. • A summary relational database is presented as an application of the H1DS API. - Abstract: A new data access system, H1DS, has been developed and deployed for the H-1 Heliac at the Australian Plasma Fusion Research Facility. The data system provides access to fusion data via a RESTful web service. With the URL acting as the API to the data system, H1DS provides a scalable and extensible framework which is intuitive to new users, and allows access from any internet connected device. The H1DS framework, originally designed to work with MDSplus, has a modular design which can be extended to provide access to alternative data storage systems.
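The idea of the URL acting as the API can be illustrated with a small URL builder. The host, path layout and query parameters below are assumptions for illustration, not the documented H1DS routing scheme:

```python
from urllib.parse import urlencode

# Hypothetical base URL; a real H1DS deployment has its own address.
BASE = "https://h1ds.example.org/data"

def data_url(tree, shot, node_path, fmt="json", **filters):
    """Compose a URL where the path itself addresses a data node, REST-style.

    `tree`/`shot`/`node_path` map onto path segments; output format and any
    server-side filters ride along as query parameters."""
    path = "/".join([tree, str(shot), *node_path])
    query = urlencode(sorted({"format": fmt, **filters}.items()))
    return f"{BASE}/{path}?{query}"
```

Because every data node has a stable URL, any internet-connected client (browser, script, or the summary relational database mentioned above) can fetch data with a plain HTTP GET, which is what makes the framework extensible.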

  17. H1DS: A new web-based data access system

    International Nuclear Information System (INIS)

    Pretty, D.G.; Blackwell, B.D.

    2014-01-01

    Highlights: • We present H1DS, a new RESTful web service for accessing fusion data. • We examine the scalability and extensibility of H1DS. • We present a fast and user friendly web browser client for the H1DS web service. • A summary relational database is presented as an application of the H1DS API. - Abstract: A new data access system, H1DS, has been developed and deployed for the H-1 Heliac at the Australian Plasma Fusion Research Facility. The data system provides access to fusion data via a RESTful web service. With the URL acting as the API to the data system, H1DS provides a scalable and extensible framework which is intuitive to new users, and allows access from any internet connected device. The H1DS framework, originally designed to work with MDSplus, has a modular design which can be extended to provide access to alternative data storage systems

  18. 网络无障碍的发展:政策、理论和方法%Development of Web Accessibility: Policies, Theories and Apporoaches

    Institute of Scientific and Technical Information of China (English)

    Xiaoming Zeng

    2006-01-01

    The article is intended to introduce the readers to the concept and background of Web accessibility in the United States. I will first discuss different definitions of Web accessibility. The beneficiaries of accessible Web or the sufferers from inaccessible Web will be discussed based on the type of disability. The importance of Web accessibility will be introduced from the perspectives of ethical, demographic, legal, and financial importance. Web accessibility related standards and legislations will be discussed in great detail. Previous research on evaluating Web accessibility will be presented. Lastly, a system for automated Web accessibility transformation will be introduced as an alternative approach for enhancing Web accessibility.

  19. Hand Society and Matching Program Web Sites Provide Poor Access to Information Regarding Hand Surgery Fellowship.

    Science.gov (United States)

    Hinds, Richard M; Klifto, Christopher S; Naik, Amish A; Sapienza, Anthony; Capo, John T

    2016-08-01

    The Internet is a common resource for applicants of hand surgery fellowships; however, the quality and accessibility of fellowship online information are unknown. The objectives of this study were to evaluate the accessibility of hand surgery fellowship Web sites and to assess the quality of information provided via program Web sites. Hand fellowship Web site accessibility was evaluated by reviewing the American Society for Surgery of the Hand (ASSH) on November 16, 2014 and the National Resident Matching Program (NRMP) fellowship directories on February 12, 2015, and performing an independent Google search on November 25, 2014. Accessible Web sites were then assessed for quality of the presented information. A total of 81 programs were identified, with the ASSH directory featuring direct links to 32% of program Web sites and the NRMP directory directly linking to 0%. A Google search yielded direct links to 86% of program Web sites. The quality of presented information varied greatly among the 72 accessible Web sites. Program description (100%), fellowship application requirements (97%), program contact email address (85%), and research requirements (75%) were the most commonly presented components of fellowship information. Hand fellowship program Web sites can be accessed from the ASSH directory and, to a lesser extent, the NRMP directory. However, a Google search is the most reliable method to access online fellowship information. Of assessable programs, all featured a program description, though the quality of the remaining information was variable. Hand surgery fellowship applicants may face some difficulties when attempting to gather program information online. Future efforts should focus on improving the accessibility and content quality of hand surgery fellowship program Web sites.

  20. Advantages of combined transmembrane topology and signal peptide prediction--the Phobius web server

    DEFF Research Database (Denmark)

    Käll, Lukas; Krogh, Anders; Sonnhammer, Erik L L

    2007-01-01

    When using conventional transmembrane topology and signal peptide predictors, such as TMHMM and SignalP, there is a substantial overlap between these two types of predictions. Applying these methods to five complete proteomes, we found that 30-65% of all predicted signal peptides and 25-35% of all ... The method makes an optimal choice between transmembrane segments and signal peptides, and also allows constrained and homology-enriched predictions. We here present a web interface (http://phobius.cgb.ki.se and http://phobius.binf.ku.dk) to access Phobius. Publication date: 2007-Jul

  1. Snippet-based relevance predictions for federated web search

    NARCIS (Netherlands)

    Demeester, Thomas; Nguyen, Dong-Phuong; Trieschnigg, Rudolf Berend; Develder, Chris; Hiemstra, Djoerd

    How well can the relevance of a page be predicted, purely based on snippets? This would be highly useful in a Federated Web Search setting where caching large amounts of result snippets is more feasible than caching entire pages. The experiments reported in this paper make use of result snippets and

  2. Systems and Services for Real-Time Web Access to NPP Data, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — Global Science & Technology, Inc. (GST) proposes to investigate information processing and delivery technologies to provide near-real-time Web-based access to...

  3. Facilitating access to the web of data a guide for librarians

    CERN Document Server

    Stuart, David

    2011-01-01

    Offers an introduction to the web of data and the semantic web, exploring technologies including APIs, microformats and linked data. This title includes topical commentary and practical examples that explore how information professionals can harness the power of this phenomenon to inform strategy and become facilitators of access to data.

  4. Task 28: Web Accessible APIs in the Cloud Trade Study

    Science.gov (United States)

    Gallagher, James; Habermann, Ted; Jelenak, Aleksandar; Lee, Joe; Potter, Nathan; Yang, Muqun

    2017-01-01

    This study explored three candidate architectures for serving NASA Earth Science Hierarchical Data Format Version 5 (HDF5) data via Hyrax running on Amazon Web Services (AWS). We studied the cost and performance of each architecture using several representative use cases. The objectives of the project are: (1) conduct a trade study to identify one or more high-performance integrated solutions for storing and retrieving NASA HDF5 and Network Common Data Format Version 4 (netCDF4) data in a cloud (web object store) environment, the target environment being the Amazon Web Services (AWS) Simple Storage Service (S3); (2) conduct the needed level of software development to properly evaluate solutions in the trade study and to obtain required benchmarking metrics for input into a government decision on potential follow-on prototyping; and (3) develop a cloud cost model for the preferred data storage solution (or solutions) that accounts for different granulation and aggregation schemes as well as cost and performance trades.
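A cloud cost model of the kind the third objective calls for can be sketched as a simple function. The default prices below are placeholders, not current AWS rates, and the parameterization is an assumption for illustration:

```python
def monthly_cost_usd(stored_bytes, get_requests, egress_bytes=0,
                     storage_per_gb=0.023, per_1000_gets=0.0004,
                     egress_per_gb=0.09):
    """Toy object-store cost model: storage + request + egress charges.

    Placeholder prices stand in for whatever the provider currently bills."""
    gb = 1e9
    return (stored_bytes / gb * storage_per_gb
            + get_requests / 1000 * per_1000_gets
            + egress_bytes / gb * egress_per_gb)
```

Even this toy model exposes the granulation trade-off the study examines: serving the same total bytes as many small granules multiplies the request charge, while coarse granules keep requests cheap but force clients to download more than they need.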

  5. NEW WEB-BASED ACCESS TO NUCLEAR STRUCTURE DATASETS.

    Energy Technology Data Exchange (ETDEWEB)

    WINCHELL,D.F.

    2004-09-26

    As part of an effort to migrate the National Nuclear Data Center (NNDC) databases to a relational platform, a new web interface has been developed for the dissemination of the nuclear structure datasets stored in the Evaluated Nuclear Structure Data File and Experimental Unevaluated Nuclear Data List.

  6. ProTox: a web server for the in silico prediction of rodent oral toxicity.

    Science.gov (United States)

    Drwal, Malgorzata N; Banerjee, Priyanka; Dunkel, Mathias; Wettig, Martin R; Preissner, Robert

    2014-07-01

    Animal trials are currently the major method for determining the possible toxic effects of drug candidates and cosmetics. In silico prediction methods represent an alternative approach and aim to rationalize the preclinical drug development, thus enabling the reduction of the associated time, costs and animal experiments. Here, we present ProTox, a web server for the prediction of rodent oral toxicity. The prediction method is based on the analysis of the similarity of compounds with known median lethal doses (LD50) and incorporates the identification of toxic fragments, therefore representing a novel approach in toxicity prediction. In addition, the web server includes an indication of possible toxicity targets which is based on an in-house collection of protein-ligand-based pharmacophore models ('toxicophores') for targets associated with adverse drug reactions. The ProTox web server is open to all users and can be accessed without registration at: http://tox.charite.de/tox. The only requirement for the prediction is the two-dimensional structure of the input compounds. All ProTox methods have been evaluated based on a diverse external validation set and displayed strong performance (sensitivity, specificity and precision of 76, 95 and 75%, respectively) and superiority over other toxicity prediction tools, indicating their possible applicability for other compound classes. © The Author(s) 2014. Published by Oxford University Press on behalf of Nucleic Acids Research.
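The similarity-based prediction idea (compare a query compound against compounds with known median lethal doses) can be sketched with set-based fingerprints. This is a toy stand-in, not ProTox's actual method; the function names and the choice of k are assumptions:

```python
def tanimoto(fp_a, fp_b):
    """Tanimoto coefficient between two fingerprint bit sets."""
    a, b = set(fp_a), set(fp_b)
    union = len(a | b)
    return len(a & b) / union if union else 0.0

def predict_ld50(query_fp, known, k=3):
    """Similarity-weighted mean LD50 of the k most similar known compounds.

    `known` maps compound name -> (fingerprint bits, LD50 in mg/kg)."""
    scored = sorted(((tanimoto(query_fp, fp), ld50)
                     for fp, ld50 in known.values()), reverse=True)[:k]
    total = sum(sim for sim, _ in scored)
    return sum(sim * ld50 for sim, ld50 in scored) / total if total else None
```

The server additionally flags toxic fragments and pharmacophore ("toxicophore") matches; a fragment check would sit alongside the similarity score rather than replace it.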

  7. Accessing the SEED genome databases via Web services API: tools for programmers.

    Science.gov (United States)

    Disz, Terry; Akhter, Sajia; Cuevas, Daniel; Olson, Robert; Overbeek, Ross; Vonstein, Veronika; Stevens, Rick; Edwards, Robert A

    2010-06-14

The SEED integrates many publicly available genome sequences into a single resource. The database contains accurate and up-to-date annotations based on the subsystems concept that leverages clustering between genomes and other clues to accurately and efficiently annotate microbial genomes. The backend is used as the foundation for many genome annotation tools, such as the Rapid Annotation using Subsystems Technology (RAST) server for whole genome annotation, the metagenomics RAST server for random community genome annotations, and the annotation clearinghouse for exchanging annotations from different resources. In addition to a web user interface, the SEED also provides a Web-services-based API for programmatic access to the data in the SEED, allowing the development of third-party tools and mash-ups. The currently exposed Web services encompass over forty different methods for accessing data related to microbial genome annotations. The Web services provide comprehensive access to the database back end, allowing any programmer access to the most consistent and accurate genome annotations available. The Web services are deployed using a platform-independent, service-oriented approach that allows the user to choose the most suitable programming platform for their application. Example code demonstrates that the Web services can be used to access the SEED using common bioinformatics programming languages such as Perl, Python, and Java. We present a novel approach to accessing the SEED database: using Web services, a robust API for access to genomics data is provided without requiring large-volume downloads all at once. The API ensures timely access to the most current datasets available, including new genomes as soon as they come online.
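The abstract mentions example code in Perl, Python and Java; the Python sketch below shows the general shape of such a programmatic web-service call. The endpoint URL, the `function`/`args` encoding and the `genome_features` method name are all hypothetical placeholders for illustration, not the documented SEED API.

```python
# Hypothetical sketch of a SEED-style web-service call. The endpoint and
# parameter names are placeholders, not the real SEED API.
import json
import urllib.parse
import urllib.request

SERVICE_URL = "https://example.org/seed_api"  # placeholder endpoint

def build_request(method, **params):
    """Encode a method name and its arguments as a form-encoded POST request."""
    data = urllib.parse.urlencode({"function": method,
                                   "args": json.dumps(params)}).encode()
    return urllib.request.Request(SERVICE_URL, data=data, method="POST")

def call(method, **params):
    """Send the request and decode the JSON reply (network access required)."""
    with urllib.request.urlopen(build_request(method, **params)) as resp:
        return json.load(resp)

# Example: build (without sending) a request for a genome's features,
# using a made-up method name and genome identifier.
req = build_request("genome_features", genome="83333.1")
print(req.get_method(), req.full_url)
```

Because the heavy lifting happens server-side, a client like this never needs a local copy of the database, which is the "no large-volume downloads" point made above.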

  8. The East London glaucoma prediction score: web-based validation of glaucoma risk screening tool

    Science.gov (United States)

    Stephen, Cook; Benjamin, Longo-Mbenza

    2013-01-01

AIM It is difficult for optometrists and general practitioners to know which patients are at risk of glaucoma. The East London glaucoma prediction score (ELGPS) is a web-based risk calculator developed to determine glaucoma risk at the time of screening. Multiple risk factors that can be assessed in a low-tech environment are combined to provide a risk estimate. This is extremely useful in settings where access to specialist care is difficult. Use of the calculator is educational; it is a free web-based service, and data capture is user specific. METHOD The scoring system is a web-based questionnaire that captures and subsequently calculates the relative risk for the presence of glaucoma at the time of screening. Three categories of patient are described: unlikely to have glaucoma; glaucoma suspect; and glaucoma. A case review methodology of patients with a known diagnosis is employed to validate the calculator's risk assessment. RESULTS Data from the records of 400 patients with an established diagnosis have been captured and used to validate the screening tool. The website reports that the calculated diagnosis matches the actual diagnosis 82% of the time. Biostatistical analysis showed: sensitivity = 88%; positive predictive value = 97%; specificity = 75%. CONCLUSION Analysis of the first 400 patients validates the web-based screening tool as a good method of screening the at-risk population. Validation is ongoing. The web-based format will allow more widespread recruitment across different geographic, population and personnel variables. PMID:23550097
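The reported accuracy figures follow directly from a 2×2 confusion matrix. As a sketch, the counts below are hypothetical values chosen only to be consistent with the percentages quoted above; they are not the study's raw data.

```python
# Sensitivity, specificity and PPV from a 2x2 confusion matrix. The counts
# are illustrative, chosen only to reproduce the percentages quoted above.

def screening_metrics(tp, fp, fn, tn):
    """Return (sensitivity, specificity, PPV) as percentages."""
    sensitivity = 100 * tp / (tp + fn)   # true positives among diseased
    specificity = 100 * tn / (tn + fp)   # true negatives among healthy
    ppv = 100 * tp / (tp + fp)           # true positives among test positives
    return sensitivity, specificity, ppv

sens, spec, ppv = screening_metrics(tp=88, fp=3, fn=12, tn=9)
print(f"sensitivity={sens:.0f}%  specificity={spec:.0f}%  PPV={ppv:.0f}%")
# → sensitivity=88%  specificity=75%  PPV=97%
```

Note that PPV, unlike sensitivity and specificity, depends on how common glaucoma is in the screened sample, so a high PPV here reflects the case-review design as much as the tool itself.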

  9. Breaking and Fixing Origin-Based Access Control in Hybrid Web/Mobile Application Frameworks

    Science.gov (United States)

    Georgiev, Martin; Jana, Suman; Shmatikov, Vitaly

    2014-01-01

Hybrid mobile applications (apps) combine the features of Web applications and “native” mobile apps. Like Web applications, they are implemented in portable, platform-independent languages such as HTML and JavaScript. Like native apps, they have direct access to local device resources—file system, location, camera, contacts, etc. Hybrid apps are typically developed using hybrid application frameworks such as PhoneGap. The purpose of the framework is twofold. First, it provides an embedded Web browser (for example, WebView on Android) that executes the app's Web code. Second, it supplies “bridges” that allow Web code to escape the browser and access local resources on the device. We analyze the software stack created by hybrid frameworks and demonstrate that it does not properly compose the access-control policies governing Web code and local code, respectively. Web code is governed by the same origin policy, whereas local code is governed by the access-control policy of the operating system (for example, user-granted permissions in Android). The bridges added by the framework to the browser have the same local access rights as the entire application, but are not correctly protected by the same origin policy. This opens the door to fracking attacks, which allow foreign-origin Web content included into a hybrid app (e.g., ads confined in iframes) to drill through the layers and directly access device resources. Fracking vulnerabilities are generic: they affect all hybrid frameworks, all embedded Web browsers, all bridge mechanisms, and all platforms on which these frameworks are deployed. We study the prevalence of fracking vulnerabilities in free Android apps based on the PhoneGap framework. Each vulnerability exposes sensitive local resources—the ability to read and write contacts list, local files, etc.—to dozens of potentially malicious Web domains. We also analyze the defenses deployed by hybrid frameworks to prevent resource access by foreign-origin Web content.

  10. Breaking and Fixing Origin-Based Access Control in Hybrid Web/Mobile Application Frameworks.

    Science.gov (United States)

    Georgiev, Martin; Jana, Suman; Shmatikov, Vitaly

    2014-02-01

Hybrid mobile applications (apps) combine the features of Web applications and "native" mobile apps. Like Web applications, they are implemented in portable, platform-independent languages such as HTML and JavaScript. Like native apps, they have direct access to local device resources-file system, location, camera, contacts, etc. Hybrid apps are typically developed using hybrid application frameworks such as PhoneGap. The purpose of the framework is twofold. First, it provides an embedded Web browser (for example, WebView on Android) that executes the app's Web code. Second, it supplies "bridges" that allow Web code to escape the browser and access local resources on the device. We analyze the software stack created by hybrid frameworks and demonstrate that it does not properly compose the access-control policies governing Web code and local code, respectively. Web code is governed by the same origin policy, whereas local code is governed by the access-control policy of the operating system (for example, user-granted permissions in Android). The bridges added by the framework to the browser have the same local access rights as the entire application, but are not correctly protected by the same origin policy. This opens the door to fracking attacks, which allow foreign-origin Web content included into a hybrid app (e.g., ads confined in iframes) to drill through the layers and directly access device resources. Fracking vulnerabilities are generic: they affect all hybrid frameworks, all embedded Web browsers, all bridge mechanisms, and all platforms on which these frameworks are deployed. We study the prevalence of fracking vulnerabilities in free Android apps based on the PhoneGap framework. Each vulnerability exposes sensitive local resources-the ability to read and write contacts list, local files, etc.-to dozens of potentially malicious Web domains. We also analyze the defenses deployed by hybrid frameworks to prevent resource access by foreign-origin Web content.

  11. Page sample size in web accessibility testing: how many pages is enough?

    NARCIS (Netherlands)

    Velleman, Eric Martin; van der Geest, Thea

    2013-01-01

    Various countries and organizations use a different sampling approach and sample size of web pages in accessibility conformance tests. We are conducting a systematic analysis to determine how many pages is enough for testing whether a website is compliant with standard accessibility guidelines. This

  12. Security Guidelines for the Development of Accessible Web Applications through the implementation of intelligent systems

    Directory of Open Access Journals (Sweden)

    Luis Joyanes Aguilar

    2009-12-01

Full Text Available The significant increase in threats, attacks and vulnerabilities affecting the Web in recent years has driven the development and implementation of tools and methods to ensure the privacy, confidentiality and integrity of user and business data. Even when these tools are deployed, however, information does not always flow in a secure manner. Many of these security tools and methods cannot be used by people who have disabilities or who rely on assistive technologies to access the Web. Among the security mechanisms that are not accessible are the virtual keyboard, the CAPTCHA and other technologies that help to ensure safety on the Internet and are used to combat the malicious code and attacks that have increased in recent times on the Web. Through the implementation of intelligent systems, it is possible to detect and retrieve information on the characteristics and properties of the tools, hardware devices and software with which a user accesses a web application; by analysing and interpreting this information, the intelligent systems can infer and automatically adjust the characteristics these security tools need in order to be accessible to anyone, regardless of disability or navigation context. This paper defines a set of guidelines and specific features that security tools and methods should have in order to ensure Web accessibility through the implementation of intelligent systems.

  13. EST-PAC a web package for EST annotation and protein sequence prediction

    Directory of Open Access Journals (Sweden)

    Strahm Yvan

    2006-10-01

Full Text Available Abstract With the decreasing cost of DNA sequencing technology and the vast diversity of biological resources, researchers increasingly face the basic challenge of annotating a larger number of expressed sequence tags (ESTs) from a variety of species. This typically consists of a series of repetitive tasks, which should be automated and easy to use. The results of these annotation tasks need to be stored and organized in a consistent way. All these operations should be self-installing, platform independent, easy to customize and amenable to using distributed bioinformatics resources available on the Internet. In order to address these issues, we present EST-PAC, a web-oriented multi-platform software package for expressed sequence tag (EST) annotation. EST-PAC provides a solution for the administration of EST and protein sequence annotations accessible through a web interface. Three aspects of EST annotation are automated: (1) searching local or remote biological databases for sequence similarities using Blast services, (2) predicting protein coding sequences from EST data and (3) annotating predicted protein sequences with functional domain predictions. In practice, EST-PAC integrates the BLASTALL suite, EST-Scan2 and HMMER in a relational database system accessible through a simple web interface. EST-PAC also takes advantage of the relational database to allow consistent storage, powerful queries of results and management of the annotation process. The system allows users to customize annotation strategies and provides an open-source data-management environment for research and education in bioinformatics.

  14. Differential private collaborative Web services QoS prediction

    KAUST Repository

    Liu, An

    2018-04-04

    Collaborative Web services QoS prediction has proved to be an important tool to estimate accurately personalized QoS experienced by individual users, which is beneficial for a variety of operations in the service ecosystem, such as service selection, composition and recommendation. While a number of achievements have been attained on the study of improving the accuracy of collaborative QoS prediction, little work has been done for protecting user privacy in this process. In this paper, we propose a privacy-preserving collaborative QoS prediction framework which can protect the private data of users while retaining the ability of generating accurate QoS prediction. We introduce differential privacy, a rigorous and provable privacy model, into the process of collaborative QoS prediction. We first present DPS, a method that disguises a user’s observed QoS values by applying differential privacy to the user’s QoS data directly. We show how to integrate DPS with two representative collaborative QoS prediction approaches. To improve the utility of the disguised QoS data, we present DPA, another QoS disguising method which first aggregates a user’s QoS data before adding noise to achieve differential privacy. We evaluate the proposed methods by conducting extensive experiments on a real world Web services QoS dataset. Experimental results show our approach is feasible in practice.
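The DPS idea of perturbing observed QoS values directly can be sketched with the standard Laplace mechanism. This is a generic illustration, not the authors' implementation; the epsilon, sensitivity and QoS values below are made up, and the sampler relies on the fact that the difference of two i.i.d. exponential variables is Laplace-distributed.

```python
import random

def disguise_qos(values, epsilon, sensitivity, rng=None):
    """DPS-style disguising: add Laplace(0, sensitivity/epsilon) noise to each
    observed QoS value before it is shared for collaborative prediction."""
    rng = rng or random.Random()
    # Rate of the two exponentials; their difference is a Laplace sample
    # with scale sensitivity/epsilon.
    lam = epsilon / sensitivity
    return [v + rng.expovariate(lam) - rng.expovariate(lam) for v in values]

observed = [120.0, 95.5, 240.2]  # hypothetical response times in ms
noisy = disguise_qos(observed, epsilon=1.0, sensitivity=5.0,
                     rng=random.Random(42))
print(noisy)  # perturbed values released instead of the raw observations
```

The DPA variant described above would first aggregate a user's QoS observations and add noise to the aggregates, trading per-value detail for less total noise.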

  15. Differential private collaborative Web services QoS prediction

    KAUST Repository

    Liu, An; Shen, Xindi; Li, Zhixu; Liu, Guanfeng; Xu, Jiajie; Zhao, Lei; Zheng, Kai; Shang, Shuo

    2018-01-01

    Collaborative Web services QoS prediction has proved to be an important tool to estimate accurately personalized QoS experienced by individual users, which is beneficial for a variety of operations in the service ecosystem, such as service selection, composition and recommendation. While a number of achievements have been attained on the study of improving the accuracy of collaborative QoS prediction, little work has been done for protecting user privacy in this process. In this paper, we propose a privacy-preserving collaborative QoS prediction framework which can protect the private data of users while retaining the ability of generating accurate QoS prediction. We introduce differential privacy, a rigorous and provable privacy model, into the process of collaborative QoS prediction. We first present DPS, a method that disguises a user’s observed QoS values by applying differential privacy to the user’s QoS data directly. We show how to integrate DPS with two representative collaborative QoS prediction approaches. To improve the utility of the disguised QoS data, we present DPA, another QoS disguising method which first aggregates a user’s QoS data before adding noise to achieve differential privacy. We evaluate the proposed methods by conducting extensive experiments on a real world Web services QoS dataset. Experimental results show our approach is feasible in practice.

  16. Vfold: a web server for RNA structure and folding thermodynamics prediction.

    Science.gov (United States)

    Xu, Xiaojun; Zhao, Peinan; Chen, Shi-Jie

    2014-01-01

    The ever increasing discovery of non-coding RNAs leads to unprecedented demand for the accurate modeling of RNA folding, including the predictions of two-dimensional (base pair) and three-dimensional all-atom structures and folding stabilities. Accurate modeling of RNA structure and stability has far-reaching impact on our understanding of RNA functions in human health and our ability to design RNA-based therapeutic strategies. The Vfold server offers a web interface to predict (a) RNA two-dimensional structure from the nucleotide sequence, (b) three-dimensional structure from the two-dimensional structure and the sequence, and (c) folding thermodynamics (heat capacity melting curve) from the sequence. To predict the two-dimensional structure (base pairs), the server generates an ensemble of structures, including loop structures with the different intra-loop mismatches, and evaluates the free energies using the experimental parameters for the base stacks and the loop entropy parameters given by a coarse-grained RNA folding model (the Vfold model) for the loops. To predict the three-dimensional structure, the server assembles the motif scaffolds using structure templates extracted from the known PDB structures and refines the structure using all-atom energy minimization. The Vfold-based web server provides a user friendly tool for the prediction of RNA structure and stability. The web server and the source codes are freely accessible for public use at "http://rna.physics.missouri.edu".

  17. Evaluation of the content and accessibility of web sites for accredited orthopaedic sports medicine fellowships.

    Science.gov (United States)

    Mulcahey, Mary K; Gosselin, Michelle M; Fadale, Paul D

    2013-06-19

The Internet is a common source of information for orthopaedic residents applying for sports medicine fellowships, with the web sites of the American Orthopaedic Society for Sports Medicine (AOSSM) and the San Francisco Match serving as central databases. We sought to evaluate the web sites for accredited orthopaedic sports medicine fellowships with regard to content and accessibility. We reviewed the existing web sites of the ninety-five accredited orthopaedic sports medicine fellowships included in the AOSSM and San Francisco Match databases from February to March 2012. A Google search was performed to determine the overall accessibility of program web sites and to supplement information obtained from the AOSSM and San Francisco Match web sites. The study sample consisted of the eighty-seven programs whose web sites connected to information about the fellowship. Each web site was evaluated for its informational value. Of the ninety-five programs, fifty-one (54%) had links listed in the AOSSM database. Three (3%) of all accredited programs had web sites that were linked directly to information about the fellowship. Eighty-eight (93%) had links listed in the San Francisco Match database; however, only five (5%) had links that connected directly to information about the fellowship. Of the eighty-seven programs analyzed in our study, all eighty-seven web sites (100%) provided a description of the program and seventy-six web sites (87%) included information about the application process. Twenty-one web sites (24%) included a list of current fellows. Fifty-six web sites (64%) described the didactic instruction, seventy (80%) described team coverage responsibilities, forty-seven (54%) included a description of cases routinely performed by fellows, forty-one (47%) described the role of the fellow in seeing patients in the office, eleven (13%) included call responsibilities, and seventeen (20%) described a rotation schedule. Two Google searches identified direct links for

  18. JavaScript Access to DICOM Network and Objects in Web Browser.

    Science.gov (United States)

    Drnasin, Ivan; Grgić, Mislav; Gogić, Goran

    2017-10-01

The Digital Imaging and Communications in Medicine (DICOM) 3.0 standard provides the baseline for picture archiving and communication systems (PACS). The development of the Internet and of various communication media created demand for non-DICOM access to PACS systems. The ever-increasing use of web browsers, laptops and handheld devices, as opposed to desktop applications on static organizational computers, led to the development of various web technologies, which the DICOM standard subsequently accepted as alternative means of access. This paper provides an overview of the current state of web access technology for DICOM repositories. It presents an approach that uses HTML5 features of web browsers, through the JavaScript language and the WebSocket protocol, to enable real-time communication with DICOM repositories. A JavaScript DICOM network library, a DICOM-to-WebSocket proxy and a proof-of-concept web application that qualifies as a DICOM 3.0 device were developed.

  19. Web access to radioactivity measurements. A case study

    International Nuclear Information System (INIS)

    Salzano, Gabriella

    2013-01-01

This research analyzes the French national radioactivity monitoring network (RNM), which aims to increase transparency and quality in this complex area. RNM opened its public web site in February 2010. Our approach combines the humanities and social sciences (understanding information issues and democratic debates) with computer science (engineering the evolution of information systems). Based on the analysis of institutional and national platforms, reports and interviews, it highlights the French specificities of nuclear information, analyses the RNM information system and suggests directions for other platforms providing health-related public data

  20. Texting and accessing the web while driving: traffic citations and crashes among young adult drivers.

    Science.gov (United States)

    Cook, Jerry L; Jones, Randall M

    2011-12-01

    We examined relations between young adult texting and accessing the web while driving with driving outcomes (viz. crashes and traffic citations). Our premise is that engaging in texting and accessing the web while driving is not only distracting but that these activities represent a pattern of behavior that leads to an increase in unwanted outcomes, such as crashes and citations. College students (N = 274) on 3 campuses (one in California and 2 in Utah) completed an electronic questionnaire regarding their driving experience and cell phone use. Our data indicate that 3 out of 4 (74.3%) young adults engage in texting while driving, over half on a weekly basis (51.8%), and some engage in accessing the web while driving (16.8%). Data analysis revealed a relationship between these cell phone behaviors and traffic citations and crashes. The findings support Jessor and Jessor's (1977) "problem behavior syndrome" by showing that traffic citations are related to texting and accessing the web while driving and that crashes are related to accessing the web while driving. Limitations and recommendations are discussed.

  1. Web accessibility: a longitudinal study of college and university home pages in the northwestern United States.

    Science.gov (United States)

    Thompson, Terrill; Burgstahler, Sheryl; Moore, Elizabeth J

    2010-01-01

    This article reports on a follow-up assessment to Thompson et al. (Proceedings of The First International Conference on Technology-based Learning with Disability, July 19-20, Dayton, Ohio, USA; 2007. pp 127-136), in which higher education home pages were evaluated over a 5-year period on their accessibility to individuals with disabilities. The purpose of this article is to identify trends in web accessibility and long-term impact of outreach and education. Home pages from 127 higher education institutions in the Northwest were evaluated for accessibility three times over a 6-month period in 2004-2005 (Phase I), and again in 2009 (Phase II). Schools in the study were offered varying degrees of training and/or support on web accessibility during Phase I. Pages were evaluated for accessibility using a set of manual checkpoints developed by the researchers. Over the 5-year period reported in this article, significant positive gains in accessibility were revealed on some measures, but accessibility declined on other measures. The areas of improvement are arguably the more basic, easy-to-implement accessibility features, while the area of decline is keyboard accessibility, which is likely associated with the emergence of dynamic new technologies on web pages. Even on those measures where accessibility is improving, it is still strikingly low. In Phase I of the study, institutions that received extensive training and support were more likely than other institutions to show improved accessibility on the measures where institutions improved overall, but were equally or more likely than others to show a decline on measures where institutions showed an overall decline. In Phase II, there was no significant difference between institutions who had received support earlier in the study, and those who had not. Results suggest that growing numbers of higher education institutions in the Northwest are motivated to add basic accessibility features to their home pages, and that

  2. The Complexities of Developing Accessible Web Content for Mobile Devices

    Science.gov (United States)

    Hancock, Richard

    This paper is concerned with the development of accessible mobile content and the complexities that arise during this process. The paper gives an overview of the popularity and advantages of mobile devices before tackling the issues surrounding content development, particularly within an educational context. The paper concludes with an overview of an application that was developed for higher education students within a UK college that had a mobile counterpart, allowing the artefact to transcend the typical desktop environment.

  3. Development of Remote Monitoring and a Control System Based on PLC and WebAccess for Learning Mechatronics

    OpenAIRE

    Wen-Jye Shyr; Te-Jen Su; Chia-Ming Lin

    2013-01-01

    This study develops a novel method for learning mechatronics using remote monitoring and control, based on a programmable logic controller (PLC) and WebAccess. A mechatronics module, a Web‐CAM and a PLC were integrated with WebAccess software to organize a remote laboratory. The proposed system enables users to access the Internet for remote monitoring and control of the mechatronics module via a web browser, thereby enhancing work flexibility by enabling personnel to control mechatronics equ...

  4. Unlocking the Potential of Web Localizers as Contributors to Image Accessibility: What Do Evaluation Tools Have to Offer?

    OpenAIRE

    Rodriguez Vazquez, Silvia

    2015-01-01

Creating appropriate text alternatives to render images accessible on the web is a responsibility shared among all actors involved in the web development cycle, including web localization professionals. However, they often lack the knowledge needed to correctly transfer image accessibility across different website language versions. In this paper, we provide insight into translators' performance as regards their accessibility achievements during text alternatives adaptation from English into ...

  5. A System for Web-based Access to the HSOS Database

    Science.gov (United States)

    Lin, G.

Huairou Solar Observing Station's (HSOS) magnetogram and dopplergram instruments are world class, and access to their data has been opened to the world. Web-based access will provide a powerful, convenient tool for data searching and for solar physics research, so it is necessary that our data be made available to users via the Web. In this presentation, the author describes the general design and programming construction of the system, which is built with PHP and MySQL. The author also introduces the basic features of PHP and MySQL.

  6. Performance Issues Related to Web Service Usage for Remote Data Access

    International Nuclear Information System (INIS)

    Pais, V. F.; Stancalie, V.; Mihailescu, F. A.; Totolici, M. C.

    2008-01-01

Web services are starting to be widely used in applications for remotely accessing data. This is of special interest for research based on small and medium scale fusion devices, since scientists participating remotely in experiments access large amounts of data over the Internet. Recent tests were conducted to see how the new network traffic, generated by the use of web services, can be integrated in the existing infrastructure and what the impact would be on existing applications, especially those used in a remote participation scenario

  7. Predicting Subcontractor Performance Using Web-Based Evolutionary Fuzzy Neural Networks

    Directory of Open Access Journals (Sweden)

    Chien-Ho Ko

    2013-01-01

Full Text Available Subcontractor performance directly affects project success. The use of inappropriate subcontractors may result in individual work delays, cost overruns, and quality defects throughout the project. This study develops web-based Evolutionary Fuzzy Neural Networks (EFNNs to predict subcontractor performance. EFNNs are a fusion of Genetic Algorithms (GAs, Fuzzy Logic (FL, and Neural Networks (NNs. FL is primarily used to mimic high-level decision-making processes and deal with uncertainty in the construction industry. NNs are used to identify the association between previous performance and future status when predicting subcontractor performance. GAs optimize the parameters required by FL and NNs. EFNNs encode FL and NNs using floating numbers to shorten the length of a string. A multi-cut-point crossover operator is used to explore the parameter space and retain solution legality. Finally, the applicability of the proposed EFNNs is validated using real subcontractors. The EFNNs are evolved using 22 historical patterns and tested using 12 unseen cases. Application results show that the proposed EFNNs surpass FL and NNs in predicting subcontractor performance. The proposed approach improves prediction accuracy and reduces the effort required to predict subcontractor performance, providing field operators with web-based remote access to a reliable, scientific prediction mechanism.

  8. Predicting subcontractor performance using web-based Evolutionary Fuzzy Neural Networks.

    Science.gov (United States)

    Ko, Chien-Ho

    2013-01-01

Subcontractor performance directly affects project success. The use of inappropriate subcontractors may result in individual work delays, cost overruns, and quality defects throughout the project. This study develops web-based Evolutionary Fuzzy Neural Networks (EFNNs) to predict subcontractor performance. EFNNs are a fusion of Genetic Algorithms (GAs), Fuzzy Logic (FL), and Neural Networks (NNs). FL is primarily used to mimic high-level decision-making processes and deal with uncertainty in the construction industry. NNs are used to identify the association between previous performance and future status when predicting subcontractor performance. GAs optimize the parameters required by FL and NNs. EFNNs encode FL and NNs using floating numbers to shorten the length of a string. A multi-cut-point crossover operator is used to explore the parameter space and retain solution legality. Finally, the applicability of the proposed EFNNs is validated using real subcontractors. The EFNNs are evolved using 22 historical patterns and tested using 12 unseen cases. Application results show that the proposed EFNNs surpass FL and NNs in predicting subcontractor performance. The proposed approach improves prediction accuracy and reduces the effort required to predict subcontractor performance, providing field operators with web-based remote access to a reliable, scientific prediction mechanism.
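The multi-cut-point crossover over float-encoded chromosomes mentioned above can be sketched as follows. This is a generic GA operator, not the authors' code; the chromosome length, cut count and gene values are illustrative.

```python
# Generic multi-cut-point crossover for float-encoded GA chromosomes:
# segments between randomly chosen cut points are swapped alternately.
import random

def multi_cut_crossover(parent_a, parent_b, n_cuts, rng):
    """Return two children built by alternating segments of the parents."""
    assert len(parent_a) == len(parent_b)
    cuts = sorted(rng.sample(range(1, len(parent_a)), n_cuts))
    child_a, child_b = [], []
    swap = False
    prev = 0
    for cut in cuts + [len(parent_a)]:
        src_a, src_b = (parent_b, parent_a) if swap else (parent_a, parent_b)
        child_a.extend(src_a[prev:cut])
        child_b.extend(src_b[prev:cut])
        swap = not swap
        prev = cut
    return child_a, child_b

# Two float-encoded parents (e.g. FL membership-function and NN weight
# parameters flattened into one string of floats).
a = [0.1, 0.2, 0.3, 0.4, 0.5, 0.6]
b = [1.1, 1.2, 1.3, 1.4, 1.5, 1.6]
child_a, child_b = multi_cut_crossover(a, b, n_cuts=2, rng=random.Random(7))
print(child_a, child_b)
```

Because segments are exchanged whole, every gene in a child comes verbatim from one parent at the same position, which is what keeps decoded FL/NN parameter sets legal.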

  9. Accessibility of dynamic web applications with emphasis on visually impaired users

    Directory of Open Access Journals (Sweden)

    Kingsley Okoye

    2014-09-01

Full Text Available As the internet fast migrates from static to dynamic web pages, users with visual impairments find it confusing and challenging to access content on the web. There is evidence that dynamic web applications pose accessibility challenges for visually impaired users. This study shows that a difference can be made through a basic understanding of the technical requirements of users with visual impairment, and it addresses a number of issues pertinent to the accessibility needs of such users. We propose that only by designing a framework that is structurally flexible, removing unnecessary extras and thereby making every bit useful (fit for purpose), will visually impaired users be given an increased capacity to intuitively access e-content. This theory is implemented in a dynamic website for the visually impaired designed in this study. Designers should be aware of how screen-reading software works, so that they can make reasonable adjustments or provide alternative content that still corresponds to the objective content, increasing the possibility of offering a faultless service to such users. The results of our research reveal that materials can be added to a content repository, or re-used from existing ones, by identifying the content types and then transforming them into a flexible and accessible form that fits the requirements of the visually impaired through our method (no-frills + agile methodology), rather than computing in advance or designing according to a given specification.

  10. Meta4: a web application for sharing and annotating metagenomic gene predictions using web services.

    Science.gov (United States)

    Richardson, Emily J; Escalettes, Franck; Fotheringham, Ian; Wallace, Robert J; Watson, Mick

    2013-01-01

    Whole-genome shotgun metagenomics experiments produce DNA sequence data from entire ecosystems, and provide a huge amount of novel information. Gene discovery projects require up-to-date information about sequence homology and domain structure for millions of predicted proteins to be presented in a simple, easy-to-use system. There is a lack of simple, open, flexible tools that allow the rapid sharing of metagenomics datasets with collaborators in a format they can easily interrogate. We present Meta4, a flexible and extensible web application that can be used to share and annotate metagenomic gene predictions. Proteins and predicted domains are stored in a simple relational database, with a dynamic front-end which displays the results in an internet browser. Web services are used to provide up-to-date information about the proteins from homology searches against public databases. Information about Meta4 can be found on the project website, code is available on Github, a cloud image is available, and an example implementation can be seen at.

  11. Cloud-based Web Services for Near-Real-Time Web access to NPP Satellite Imagery and other Data

    Science.gov (United States)

    Evans, J. D.; Valente, E. G.

    2010-12-01

    We are building a scalable, cloud computing-based infrastructure for Web access to near-real-time data products synthesized from the U.S. National Polar-Orbiting Environmental Satellite System (NPOESS) Preparatory Project (NPP) and other geospatial and meteorological data. Given recent and ongoing changes in the NPP and NPOESS programs (now Joint Polar Satellite System), the need for timely delivery of NPP data is urgent. We propose an alternative to a traditional, centralized ground segment, using distributed Direct Broadcast facilities linked to industry-standard Web services by a streamlined processing chain running in a scalable cloud computing environment. Our processing chain, currently implemented on Amazon.com's Elastic Compute Cloud (EC2), retrieves raw data from NASA's Moderate Resolution Imaging Spectroradiometer (MODIS) and synthesizes data products such as Sea-Surface Temperature, Vegetation Indices, etc. The cloud computing approach lets us grow and shrink computing resources to meet large and rapid fluctuations (twice daily) in both end-user demand and data availability from polar-orbiting sensors. Early prototypes have delivered various data products to end-users with latencies between 6 and 32 minutes. We have begun to replicate machine instances in the cloud, so as to reduce latency and maintain near-real time data access regardless of increased data input rates or user demand -- all at quite moderate monthly costs. Our service-based approach (in which users invoke software processes on a Web-accessible server) facilitates access into datasets of arbitrary size and resolution, and allows users to request and receive tailored and composite (e.g., false-color multiband) products on demand. To facilitate broad impact and adoption of our technology, we have emphasized open, industry-standard software interfaces and open source software. Through our work, we envision the widespread establishment of similar, derived, or interoperable systems for

  12. Towards automated processing of the right of access in inter-organizational Web Service compositions

    DEFF Research Database (Denmark)

    Herkenhöner, Ralph; De Meer, Hermann; Jensen, Meiko

    2010-01-01

    with trade secret protection. In this paper, we present an automated architecture to enable exercising the right of access in the domain of inter-organizational business processes based on Web Services technology. Deriving its requirements from the legal, economical, and technical obligations, we show...

  13. Pilot Evaluation of a Web-Based Intervention Targeting Sexual Health Service Access

    Science.gov (United States)

    Brown, K. E.; Newby, K.; Caley, M.; Danahay, A.; Kehal, I.

    2016-01-01

    Sexual health service access is fundamental to good sexual health, yet interventions designed to address this have rarely been implemented or evaluated. In this article, pilot evaluation findings for a targeted public health behavior change intervention, delivered via a website and web-app, aiming to increase uptake of sexual health services among…

  14. Applying Simple Natural Language Processing to a Semantic Web-Based Online Public Access Catalog (OPAC)

    OpenAIRE

    Andri, Andri

    2012-01-01

    The Online Public Access Catalog (OPAC) is an online catalog system that uses computer and internet technology to access and store its data. A catalog typically provides information about the collection held in a digital library. In this study, a prototype search application was built for the online catalog of the Universitas Binadarma Palembang library, based on semantic web technology and applying simple natural-language processing...

  15. Design and Implementation of Open-Access Web-Based Education ...

    African Journals Online (AJOL)

    ... is not flexible, as it does not permit access to educational resources at any time or place. ... In this paper, a web-based education system useful for e-learning was designed and ... using an open-source platform, which will be more flexible and cost-effective due to free licensing. The programming languages used include VB.

  16. Prediction of toxicity and comparison of alternatives using WebTEST (Web-services Toxicity Estimation Software Tool)

    Science.gov (United States)

    A Java-based web service is being developed within the US EPA’s Chemistry Dashboard to provide real time estimates of toxicity values and physical properties. WebTEST can generate toxicity predictions directly from a simple URL which includes the endpoint, QSAR method, and ...

  17. InterProSurf: a web server for predicting interacting sites on protein surfaces

    Science.gov (United States)

    Negi, Surendra S.; Schein, Catherine H.; Oezguen, Numan; Power, Trevor D.; Braun, Werner

    2009-01-01

    A new web server, InterProSurf, predicts the amino acid residues of a protein that are most likely to interact with other proteins, given the 3D structures of the subunits of a protein complex. The prediction method is based on the solvent-accessible surface area of residues in the isolated subunits, a propensity scale for interface residues, and a clustering algorithm that identifies surface regions with residues of high interface propensity. Here we illustrate the application of InterProSurf to determine which areas of Bacillus anthracis toxins and the measles virus hemagglutinin protein interact with their respective cell surface receptors. The computationally predicted regions overlap with those previously identified as interface regions by sequence analysis and mutagenesis experiments. PMID:17933856
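
    The exposure-plus-propensity idea behind such methods can be sketched as follows (the propensity values, cutoff, and residue data are illustrative placeholders, not InterProSurf's published scale):

```python
# Illustrative interface-propensity values (NOT the published InterProSurf scale)
PROPENSITY = {"TRP": 1.4, "PHE": 1.2, "LEU": 1.1, "SER": 0.8, "ASP": 0.7, "LYS": 0.6}

def rank_surface_residues(residues, rsa_cutoff=0.25):
    """residues: (res_id, res_name, relative solvent accessibility) triples.
    Keep residues exposed above the cutoff, ranked by interface propensity."""
    exposed = [(rid, name, PROPENSITY.get(name, 1.0))
               for rid, name, rsa in residues if rsa >= rsa_cutoff]
    return sorted(exposed, key=lambda r: r[2], reverse=True)

# A made-up four-residue chain: buried ASP is filtered out, TRP ranks first
chain = [(1, "TRP", 0.6), (2, "ASP", 0.1), (3, "LEU", 0.4), (4, "LYS", 0.5)]
ranking = rank_surface_residues(chain)
```

    The real method goes further by clustering high-propensity exposed residues into contiguous surface patches.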

  18. The IntFOLD server: an integrated web resource for protein fold recognition, 3D model quality assessment, intrinsic disorder prediction, domain prediction and ligand binding site prediction.

    Science.gov (United States)

    Roche, Daniel B; Buenavista, Maria T; Tetchner, Stuart J; McGuffin, Liam J

    2011-07-01

    The IntFOLD server is a novel independent server that integrates several cutting edge methods for the prediction of structure and function from sequence. Our guiding principles behind the server development were as follows: (i) to provide a simple unified resource that makes our prediction software accessible to all and (ii) to produce integrated output for predictions that can be easily interpreted. The output for predictions is presented as a simple table that summarizes all results graphically via plots and annotated 3D models. The raw machine readable data files for each set of predictions are also provided for developers, which comply with the Critical Assessment of Methods for Protein Structure Prediction (CASP) data standards. The server comprises an integrated suite of five novel methods: nFOLD4, for tertiary structure prediction; ModFOLD 3.0, for model quality assessment; DISOclust 2.0, for disorder prediction; DomFOLD 2.0 for domain prediction; and FunFOLD 1.0, for ligand binding site prediction. Predictions from the IntFOLD server were found to be competitive in several categories in the recent CASP9 experiment. The IntFOLD server is available at the following web site: http://www.reading.ac.uk/bioinf/IntFOLD/.

  19. New data access with HTTP/WebDAV in the ATLAS experiment

    CERN Document Server

    Elmsheuser, Johannes; The ATLAS collaboration; Serfon, Cedric; Garonne, Vincent; Blunier, Sylvain; Lavorini, Vincenzo; Nilsson, Paul

    2015-01-01

    With the exponential growth of LHC (Large Hadron Collider) data in the years 2010-2012, distributed computing has become the established way to analyse collider data. The ATLAS experiment Grid infrastructure includes more than 130 sites worldwide, ranging from large national computing centres to smaller university clusters. So far the storage technologies and access protocols to the clusters that host this tremendous amount of data vary from site to site. HTTP/WebDAV offers the possibility to use a unified industry standard to access the storage. We present the deployment and testing of HTTP/WebDAV for local and remote data access in the ATLAS experiment for the new data management system Rucio and the PanDA workload management system. Deployment and large scale tests have been performed using the Grid testing system HammerCloud and the ROOT HTTP plugin Davix.

  1. Developing Access Control Model of Web OLAP over Trusted and Collaborative Data Warehouses

    Science.gov (United States)

    Fugkeaw, Somchart; Mitrpanont, Jarernsri L.; Manpanpanich, Piyawit; Juntapremjitt, Sekpon

    This paper proposes the design and development of a Role-based Access Control (RBAC) model for the Single Sign-On (SSO) Web-OLAP query spanning over multiple data warehouses (DWs). The model is based on PKI Authentication and Privilege Management Infrastructure (PMI); it presents a binding model of RBAC authorization based on dimension privilege specified in attribute certificate (AC) and user identification. Particularly, the way of attribute mapping between DW user authentication and privilege of dimensional access is illustrated. In our approach, we apply the multi-agent system to automate flexible and effective management of user authentication, role delegation as well as system accountability. Finally, the paper culminates in the prototype system A-COLD (Access Control of web-OLAP over multiple DWs) that incorporates the OLAP features and authentication and authorization enforcement in the multi-user and multi-data warehouse environment.
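
    The RBAC decision at the core of such a model can be sketched as follows (users, roles, warehouse and dimension names are invented for illustration; the paper's model additionally carries these privileges in attribute certificates under a PKI/PMI):

```python
# Roles map to (warehouse, dimension) privileges; users map to roles.
ROLE_PRIVILEGES = {
    "analyst": {("sales_dw", "time"), ("sales_dw", "product")},
    "manager": {("sales_dw", "time"), ("sales_dw", "product"), ("sales_dw", "region")},
}
USER_ROLES = {"alice": {"analyst"}, "bob": {"manager"}}

def can_query(user, warehouse, dimension):
    """Grant the OLAP query if any of the user's roles holds the dimension privilege."""
    return any((warehouse, dimension) in ROLE_PRIVILEGES.get(role, set())
               for role in USER_ROLES.get(user, set()))
```

    Here "alice" can slice by time or product but not by region, while unknown users are denied by default.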

  2. Beyond Section 508: The Spectrum of Legal Requirements for Accessible e-Government Web Sites in the United States

    Science.gov (United States)

    Jaeger, Paul T.

    2004-01-01

    In the United States, a number of federal laws establish requirements that electronic government (e-government) information and services be accessible to individuals with disabilities. These laws affect e-government Web sites at the federal, state, and local levels. To this point, research about the accessibility of e-government Web sites has…

  3. 78 FR 67881 - Nondiscrimination on the Basis of Disability in Air Travel: Accessibility of Web Sites and...

    Science.gov (United States)

    2013-11-12

    ... ticket agents are providing schedule and fare information and marketing covered air transportation... corresponding accessible pages on a mobile Web site by one year after the final rule's effective date; and (3... criteria) as the required accessibility standard for all public-facing Web pages involved in marketing air...

  4. Search, Read and Write: An Inquiry into Web Accessibility for People with Dyslexia.

    Science.gov (United States)

    Berget, Gerd; Herstad, Jo; Sandnes, Frode Eika

    2016-01-01

    Universal design in the context of digitalisation has become an integrated part of international conventions and national legislation. A goal is to make the Web accessible for people of different genders, ages, backgrounds, cultures and physical, sensory and cognitive abilities. Political demands for universally designed solutions have raised questions about how this is achieved in practice. Developers, designers and legislators have looked towards the Web Content Accessibility Guidelines (WCAG) for answers. WCAG 2.0 has become the de facto standard for universal design on the Web. Some of the guidelines are directed at the general population, while others are targeted at more specific user groups, such as the visually impaired or hearing impaired. Issues related to cognitive impairments such as dyslexia receive less attention, although dyslexia is prevalent in at least 5-10% of the population. Navigation and search are two common ways of using the Web. However, while navigation has received a fair amount of attention, search systems are not explicitly included, although search has become an important part of people's daily routines. This paper discusses WCAG in the context of dyslexia for the Web in general and search user interfaces specifically. Although certain guidelines address topics that affect dyslexia, WCAG does not seem to fully accommodate users with dyslexia.

  5. Design and Implementation of a Web-based Monitoring System by using EPICS Channel Access Protocol

    International Nuclear Information System (INIS)

    An, Eun Mi; Song, Yong Gi

    2009-01-01

    The Proton Engineering Frontier Project (PEFP) has developed a 20-MeV proton accelerator and established a distributed control system based on EPICS for sub-system components such as the vacuum unit, beam diagnostics, and power supply system. The control system includes real-time monitoring and alarm functions. The EPICS software framework was adopted for efficient maintenance of the control system and straightforward extension of subsystems. In addition, a control system should provide easy access for users and real-time monitoring on a user screen. Therefore, we have implemented a new web-based monitoring server with several libraries. By adding a DB module, the new IOC web monitoring system makes it possible to monitor the system through the web. By integrating the EPICS Channel Access (CA) and database libraries into a database module, the web-based monitoring system lets users monitor sub-system status through their internet browsers. In this study, we developed a web-based monitoring system using an EPICS IOC (Input Output Controller) on an IBM server.
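
    The "DB module" idea, caching Channel Access readings in a database that the web layer polls, can be sketched as follows (the PV name is hypothetical and the CA read is simulated; a real deployment would populate the cache from EPICS CA via a client library rather than direct calls):

```python
import sqlite3
import time

# Cache process-variable readings so a web front-end can poll the database
# instead of speaking Channel Access directly.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE pv_cache (name TEXT PRIMARY KEY, value REAL, ts REAL)")

def record_pv(name, value):
    """Store the latest reading for a PV, overwriting any previous row."""
    conn.execute("INSERT OR REPLACE INTO pv_cache VALUES (?, ?, ?)",
                 (name, value, time.time()))

def latest(name):
    """What the web layer serves: the most recent cached value, or None."""
    row = conn.execute("SELECT value FROM pv_cache WHERE name = ?", (name,)).fetchone()
    return row[0] if row else None

record_pv("VAC:GAUGE1:PRES", 2.1e-7)   # hypothetical vacuum-pressure PV
record_pv("VAC:GAUGE1:PRES", 1.9e-7)   # newer reading overwrites the cache row
```

    Decoupling the browser from CA this way also keeps accelerator-network credentials off the public web tier.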

  6. A University Web Portal redesign applying accessibility patterns. Breaking Down Barriers for Visually Impaired Users

    Directory of Open Access Journals (Sweden)

    Hernán Sosa

    2015-08-01

    Full Text Available Definitely, the WWW and ICTs have become the preferred media for interaction between society and its citizens, and public and private organizations can today deploy their activities through the Web. In particular, university education is a domain where the benefits of these technological resources can strongly contribute to caring for students. However, most university Web portals are inaccessible to their user community (students, professors, and non-teaching staff, among others), since these portals do not take into account the needs of people with different capabilities. In this work, we propose an accessibility-pattern-driven process for the redesign of university Web portals, aiming to break down barriers for visually impaired users. The approach is applied to a real case study: the Web portal of the Universidad Nacional de la Patagonia Austral (UNPA). The results come from applying accessibility recommendations and evaluation tools (automatic and manual) from internationally recognized organizations to both versions of the Web portal: the original and the redesigned one.

  7. MEGADOCK-Web: an integrated database of high-throughput structure-based protein-protein interaction predictions.

    Science.gov (United States)

    Hayashi, Takanori; Matsuzaki, Yuri; Yanagisawa, Keisuke; Ohue, Masahito; Akiyama, Yutaka

    2018-05-08

    Protein-protein interactions (PPIs) play several roles in living cells, and computational PPI prediction is a major focus of many researchers. The three-dimensional (3D) structure and binding surface are important for the design of PPI inhibitors. Therefore, rigid-body protein-protein docking calculations for two protein structures are expected to allow elucidation of PPIs different from known complexes in terms of 3D structures, because known PPI information is not explicitly required. We have developed rapid PPI prediction software based on protein-protein docking, called MEGADOCK. In order to fully utilize the benefits of computational PPI predictions, it is necessary to construct a comprehensive database that gathers prediction results and their predicted 3D complex structures and makes them easily accessible. Although several databases exist that provide predicted PPIs, the previous databases do not contain a sufficient number of entries for the purpose of discovering novel PPIs. In this study, we constructed an integrated database of MEGADOCK PPI predictions, named MEGADOCK-Web. MEGADOCK-Web provides more than 10 times as many PPI predictions as previous databases and gives users access to PPI predictions that cannot be found in conventional PPI prediction databases. In MEGADOCK-Web, there are 7528 protein chains and 28,331,628 predicted PPIs from all possible combinations of those proteins. Each protein structure is annotated with PDB ID, chain ID, UniProt AC, related KEGG pathway IDs, and known PPI pairs. Additionally, MEGADOCK-Web provides four powerful functions: 1) searching precalculated PPI predictions, 2) providing annotations for each predicted protein pair with an experimentally known PPI, 3) visualizing candidates that may interact with the query protein on biochemical pathways, and 4) visualizing predicted complex structures through a 3D molecular viewer. MEGADOCK-Web provides a huge amount of comprehensive PPI predictions based on

  8. ORION: a web server for protein fold recognition and structure prediction using evolutionary hybrid profiles.

    Science.gov (United States)

    Ghouzam, Yassine; Postic, Guillaume; Guerin, Pierre-Edouard; de Brevern, Alexandre G; Gelly, Jean-Christophe

    2016-06-20

    Protein structure prediction based on comparative modeling is the most efficient way to produce structural models when it can be performed. ORION is a dedicated web server based on a new strategy that performs this task. The identification of suitable templates by ORION is performed using an original profile-profile approach that combines sequence and structure evolution information. Structure evolution information is encoded into profiles using structural features, such as solvent accessibility and local conformation (described with Protein Blocks), which give an accurate description of the local protein structure. ORION has recently been improved, increasing the quality of its results by 5%. The ORION web server accepts a single protein sequence as input and searches homologous protein structures within minutes. Various databases such as PDB, SCOP and HOMSTRAD can be mined to find an appropriate structural template. For the modeling step, a protein 3D structure can be directly obtained from the selected template by MODELLER and displayed with global and local quality model estimation measures. The sequences and the predicted structures of 4 examples from the CAMEO server and a recent CASP11 target from the 'Hard' category (T0818-D1) are shown as pertinent examples. Our web server is accessible at http://www.dsimb.inserm.fr/ORION/.
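
    The hybrid-profile idea, scoring an aligned column by combining a sequence-profile term with a structure-profile term, can be sketched as follows (the dot-product form and the 0.5 weight are illustrative assumptions, not ORION's actual scoring function):

```python
# Score one aligned profile column: a weighted sum of a sequence-profile match
# and a structure-profile match. Each argument is a probability column.
def column_score(seq_a, seq_b, struct_a, struct_b, w=0.5):
    dot = lambda u, v: sum(x * y for x, y in zip(u, v))
    return w * dot(seq_a, seq_b) + (1 - w) * dot(struct_a, struct_b)

# Identical columns in both profiles score 1.0 with these toy vectors
same = column_score([1.0, 0.0], [1.0, 0.0], [0.0, 1.0], [0.0, 1.0])
```

    The point of the second term is that two remotely homologous positions with dissimilar residue profiles can still match on local-structure preferences.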

  9. EntrezAJAX: direct web browser access to the Entrez Programming Utilities

    Directory of Open Access Journals (Sweden)

    Pallen Mark J

    2010-06-01

    Full Text Available Web applications for biology and medicine often need to integrate data from Entrez services provided by the National Center for Biotechnology Information. However, direct access to Entrez from a web browser is not possible due to 'same-origin' security restrictions. The use of "Asynchronous JavaScript and XML" (AJAX) to create rich, interactive web applications is now commonplace. The ability to access Entrez via AJAX would be advantageous in the creation of integrated biomedical web resources. We describe EntrezAJAX, which provides access to Entrez eUtils and is able to circumvent same-origin browser restrictions. EntrezAJAX is easily implemented by JavaScript developers and provides functionality identical to Entrez eUtils, as well as enhanced functionality to ease development. We provide easy-to-understand developer examples written in JavaScript to illustrate potential uses of this service. For speed, reliability and scalability, EntrezAJAX has been deployed on Google App Engine, a freely available cloud service. The EntrezAJAX webpage is located at http://entrezajax.appspot.com/
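
    The essence of the workaround is that the browser calls a same-origin endpoint, which in turn calls Entrez eUtils server-side. A minimal sketch of the server-side query construction (the eUtils base URL and esearch parameters are the public NCBI ones; the actual proxy fetch and JSONP response are omitted):

```python
from urllib.parse import urlencode

# Build the eUtils query that the proxy would fetch on the browser's behalf.
EUTILS = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils"

def esearch_url(db, term, retmax=20):
    """Compose an esearch request, e.g. PubMed IDs matching a free-text term."""
    return f"{EUTILS}/esearch.fcgi?{urlencode({'db': db, 'term': term, 'retmax': retmax})}"

url = esearch_url("pubmed", "web accessibility")
```

    Since the browser only ever talks to the proxy's own origin, the same-origin policy is never violated.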

  10. Checking an integrated model of web accessibility and usability evaluation for disabled people.

    Science.gov (United States)

    Federici, Stefano; Micangeli, Andrea; Ruspantini, Irene; Borgianni, Stefano; Corradi, Fabrizio; Pasqualotto, Emanuele; Olivetti Belardinelli, Marta

    2005-07-08

    A combined objective-oriented and subjective-oriented method for evaluating the accessibility and usability of web pages for students with disabilities was tested. The objective-oriented approach verifies the conformity of interfaces to standard rules stated by national and international organizations responsible for web technology standardization, such as the W3C. Conversely, the subjective-oriented approach assesses how final users interact with the artificial system, measuring levels of user satisfaction based on personal factors and environmental barriers. Five kinds of measurements were applied as objective-oriented and subjective-oriented tests. Objective-oriented evaluations were performed on the Help Desk web page for students with disability, included in the website of a large Italian state university. Subjective-oriented tests were administered to 19 students labeled as disabled on the basis of their own declaration at university enrolment: 13 students were tested by means of the SUMI test and six students by means of 'cooperative evaluation'. The objective-oriented and subjective-oriented methods highlighted different and sometimes conflicting results. Both methods showed much more consistency in levels of accessibility than of usability. Since usability is largely affected by individual differences in users' own (dis)abilities, the subjective-oriented measures underscored the fact that blind students encountered many more web-surfing difficulties.

  11. ngLOC: software and web server for predicting protein subcellular localization in prokaryotes and eukaryotes

    Directory of Open Access Journals (Sweden)

    King Brian R

    2012-07-01

    Full Text Available Background: Understanding protein subcellular localization is a necessary component toward understanding the overall function of a protein. Numerous computational methods have been published over the past decade, with varying degrees of success. Despite the large number of published methods in this area, only a small fraction of them are available for researchers to use in their own studies. Of those that are available, many are limited by predicting only a small number of organelles in the cell. Additionally, the majority of methods predict only a single location for a sequence, even though it is known that a large fraction of the proteins in eukaryotic species shuttle between locations to carry out their function. Findings: We present a software package and a web server for predicting the subcellular localization of protein sequences based on the ngLOC method. ngLOC is an n-gram-based Bayesian classifier that predicts subcellular localization of proteins both in prokaryotes and eukaryotes. The overall prediction accuracy varies from 89.8% to 91.4% across species. This program can predict 11 distinct locations each in plant and animal species. ngLOC also predicts 4 and 5 distinct locations on gram-positive and gram-negative bacterial datasets, respectively. Conclusions: ngLOC is a generic method that can be trained with data from a variety of species or classes for predicting protein subcellular localization. The standalone software is freely available for academic use under the GNU GPL, and the ngLOC web server is also accessible at http://ngloc.unmc.edu.
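
    The n-gram Bayesian idea can be sketched as a toy classifier (the sequences, labels, bigram order and add-one smoothing below are illustrative choices, not ngLOC's trained model):

```python
import math
from collections import Counter, defaultdict

def ngrams(seq, n=2):
    """Overlapping n-grams of a sequence, e.g. 'MKK' -> ['MK', 'KK']."""
    return [seq[i:i + n] for i in range(len(seq) - n + 1)]

def train(labelled_seqs, n=2):
    """Count class-conditional n-grams for each subcellular location."""
    counts = defaultdict(Counter)
    for seq, loc in labelled_seqs:
        counts[loc].update(ngrams(seq, n))
    return counts

def predict(seq, counts, n=2):
    """Pick the location with the highest add-one-smoothed log-likelihood."""
    best, best_lp = None, -math.inf
    for loc, c in counts.items():
        total, vocab = sum(c.values()), len(c) + 1
        lp = sum(math.log((c[g] + 1) / (total + vocab)) for g in ngrams(seq, n))
        if lp > best_lp:
            best, best_lp = loc, lp
    return best

# Made-up fragments for two locations; real training sets are far larger
data = [("MKKLLPT", "cytoplasm"), ("MKKRRPT", "cytoplasm"),
        ("GAVLIGA", "membrane"), ("GAVVLGA", "membrane")]
model = train(data)
```

    A multi-location variant could return the ranked log-likelihoods instead of only the argmax.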

  12. osFP: a web server for predicting the oligomeric states of fluorescent proteins.

    Science.gov (United States)

    Simeon, Saw; Shoombuatong, Watshara; Anuwongcharoen, Nuttapat; Preeyanon, Likit; Prachayasittikul, Virapong; Wikberg, Jarl E S; Nantasenamat, Chanin

    2016-01-01

    Currently, monomeric fluorescent proteins (FPs) are ideal markers for protein tagging. The prediction of oligomeric states is helpful for enhancing live biomedical imaging, and computational prediction of FP oligomeric states can accelerate protein engineering efforts to create monomeric FPs. To the best of our knowledge, this study represents the first computational model for predicting and analyzing FP oligomerization directly from the amino acid sequence. After data curation, an exhaustive data set consisting of 397 non-redundant FP oligomeric states was compiled from the literature. Results from benchmarking of the protein descriptors revealed that the model built with amino acid composition descriptors was the top-performing model, with accuracy, sensitivity and specificity in excess of 80% and MCC greater than 0.6 for all three data subsets (e.g. training, tenfold cross-validation and external sets). The model provided insights on the important residues governing the oligomerization of FPs. To maximize the benefit of the generated predictive model, it was implemented as a web server under the R programming environment. osFP affords a user-friendly interface that can be used to predict the oligomeric state of an FP from its protein sequence. The advantage of osFP is that it is platform-independent, meaning that it can be accessed via a web browser on any operating system and device. osFP is freely accessible at http://codes.bio/osfp/ while the source code and data set are provided on GitHub at https://github.com/chaninn/osFP/.
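
    An amino acid composition descriptor, the feature family this study found to perform best, is simple to compute (the example sequence is a short GFP-like fragment chosen only for illustration):

```python
from collections import Counter

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def aac(sequence):
    """Amino acid composition: the fraction of each of the 20 standard residues,
    giving a fixed-length 20-dimensional vector for any sequence length."""
    counts = Counter(sequence)
    n = len(sequence)
    return [counts.get(aa, 0) / n for aa in AMINO_ACIDS]

vec = aac("MSKGEELFTG")  # illustrative 10-residue fragment
```

    Fixed-length descriptors like this are what let a standard classifier consume variable-length protein sequences.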

  13. Design and Implementation of File Access and Control System Based on Dynamic Web

    Institute of Scientific and Technical Information of China (English)

    GAO Fuxiang; YAO Lan; BAO Shengfei; YU Ge

    2006-01-01

    A dynamic Web application that can help the departments of an enterprise collaborate with each other conveniently is proposed. Several popular design solutions are introduced first. Then, a dynamic Web system is chosen for developing the file access and control system. Finally, the paper gives the detailed process of the design and implementation of the system, including key problems such as document management and system security. Additionally, the limitations of the system as well as suggestions for further improvement are explained.

  14. Some Programs Should Not Run on Laptops - Providing Programmatic Access to Applications Via Web Services

    Science.gov (United States)

    Gupta, V.; Gupta, N.; Gupta, S.; Field, E.; Maechling, P.

    2003-12-01

    hosted these Web Services as a part of the SCEC Community Modeling Environment (SCEC/CME) ITR Project (http://www.scec.org/cme). We have implemented Web Services for several of the reasons cited previously. For example, we implemented a FORTRAN-based Earthquake Rupture Forecast (ERF) as a Web Service for use by client computers that don't support a FORTRAN runtime environment. We implemented a Generic Mapping Tool (GMT) Web Service for use by systems that don't have local access to GMT. We implemented a Hazard Map Calculator Web Service to execute Hazard calculations that are too computationally intensive to run on a local system. We implemented a Coordinate Conversion Web Service to enforce a standard and consistent method for converting between UTM and Lat/Lon. Our experience developing these services indicates both strengths and weaknesses in current Web Service technology. Client programs that utilize Web Services typically need network access, a significant disadvantage at times. Programs with simple input and output parameters were the easiest to implement as Web Services, while programs with complex parameter types required a significant amount of additional development. We also noted that Web Services are very data-oriented, and adapting object-oriented software into the Web Service model proved problematic. Also, the Web Service approach of converting data types into XML format for network transmission has significant inefficiencies for some data sets.
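
    The closing point about XML overhead is easy to demonstrate by serializing the same array of floats as raw binary and as a naive XML document (the element names are invented):

```python
import struct

# The same 1000 doubles as raw binary vs. a simple element-per-value XML document.
values = [i * 0.001 for i in range(1000)]
binary = struct.pack(f"{len(values)}d", *values)           # 8 bytes per value
xml = "<data>" + "".join(f"<v>{v!r}</v>" for v in values) + "</data>"
overhead = len(xml.encode()) / len(binary)                 # XML is markedly larger
```

    Tag characters plus decimal text expansion inflate every value, which is why large numeric payloads suffer most under plain XML transport.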

  15. Intelligent Access to Sequence and Structure Databases (IASSD) - an interface for accessing information from major web databases.

    Science.gov (United States)

    Ganguli, Sayak; Gupta, Manoj Kumar; Basu, Protip; Banik, Rahul; Singh, Pankaj Kumar; Vishal, Vineet; Bera, Abhisek Ranjan; Chakraborty, Hirak Jyoti; Das, Sasti Gopal

    2014-01-01

    With the advent of the age of big data and advances in high-throughput technology, accessing data has become one of the most important steps in the entire knowledge discovery process. Most users are not able to decipher the query results that are obtained when non-specific keywords or combinations of keywords are used. Intelligent Access to Sequence and Structure Databases (IASSD) is a desktop application for the Windows operating system. It is written in Java and utilizes the Web Service Description Language (WSDL) files and Jar files of the E-utilities of various databases, such as the National Center for Biotechnology Information (NCBI) and the Protein Data Bank (PDB). In addition, IASSD allows the user to view protein structures using a JMOL application that supports conditional editing. The Jar file is freely available through e-mail from the corresponding author.

  16. HTSstation: a web application and open-access libraries for high-throughput sequencing data analysis.

    Science.gov (United States)

    David, Fabrice P A; Delafontaine, Julien; Carat, Solenne; Ross, Frederick J; Lefebvre, Gregory; Jarosz, Yohan; Sinclair, Lucas; Noordermeer, Daan; Rougemont, Jacques; Leleu, Marion

    2014-01-01

    The HTSstation analysis portal is a suite of simple web forms coupled to modular analysis pipelines for various applications of high-throughput sequencing, including ChIP-seq, RNA-seq, 4C-seq and re-sequencing. HTSstation offers biologists the possibility to rapidly investigate their HTS data using an intuitive web application with heuristically pre-defined parameters. A number of open-source software components have been implemented and can be used to build, configure and run HTS analysis pipelines reactively. In addition, our programming framework empowers developers to design their own workflows and integrate additional third-party software. The HTSstation web application is accessible at http://htsstation.epfl.ch.

  17. RaptorX-Property: a web server for protein structure property prediction.

    Science.gov (United States)

    Wang, Sheng; Li, Wei; Liu, Shiwang; Xu, Jinbo

    2016-07-08

    RaptorX Property (http://raptorx2.uchicago.edu/StructurePropertyPred/predict/) is a web server that predicts the structural properties of a protein sequence without using any templates. It outperforms other servers, especially for proteins without close homologs in PDB or with very sparse sequence profiles (i.e. carrying little evolutionary information). The server employs a powerful in-house deep learning model, DeepCNF (Deep Convolutional Neural Fields), to predict secondary structure (SS), solvent accessibility (ACC) and disordered regions (DISO). DeepCNF models not only the complex sequence-structure relationship, through a deep hierarchical architecture, but also the interdependency between adjacent property labels. Our experimental results show that, tested on CASP10, CASP11 and other benchmarks, this server obtains ∼84% Q3 accuracy for 3-state SS, ∼72% Q8 accuracy for 8-state SS, ∼66% Q3 accuracy for 3-state solvent accessibility, and ∼0.89 area under the ROC curve (AUC) for disorder prediction. © The Author(s) 2016. Published by Oxford University Press on behalf of Nucleic Acids Research.

  18. Documenting historical data and accessing it on the World Wide Web

    Science.gov (United States)

    Malchus B. Baker; Daniel P. Huebner; Peter F. Ffolliott

    2000-01-01

    New computer technologies facilitate the storage, retrieval, and summarization of watershed-based data sets on the World Wide Web. These data sets are used by researchers when testing and validating predictive models, managers when planning and implementing watershed management practices, educators when learning about hydrologic processes, and decisionmakers when...

  19. Guide on Project Web Access of SFR R and D and Technology Monitoring System

    International Nuclear Information System (INIS)

    Lee, Dong Uk; Won, Byung Chool; Lee, Yong Bum; Kim, Young In; Hahn, Do Hee

    2008-09-01

    The SFR R and D and technology monitoring system, based on MS enterprise project management, was developed for the systematic and effective management of the 'Development of Basic Key Technologies for Gen IV SFR' project, performed under the Mid- and Long-term Nuclear R and D Program sponsored by the Ministry of Education, Science and Technology. The system is a web-based tool for project management, and this manual is a detailed guide to Project Web Access (PWA). Section 1 describes common system functions, such as Project Server 2007 client connection settings and additional Outlook settings. Section 2 is a guide for the system administrator, and Sections 3 and 4 cover project management.

  20. Guide on Project Web Access of SFR R and D and Technology Monitoring System

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Dong Uk; Won, Byung Chool; Lee, Yong Bum; Kim, Young In; Hahn, Do Hee

    2008-09-15

    The SFR R and D and technology monitoring system, based on MS enterprise project management, was developed for the systematic and effective management of the 'Development of Basic Key Technologies for Gen IV SFR' project, performed under the Mid- and Long-term Nuclear R and D Program sponsored by the Ministry of Education, Science and Technology. The system is a web-based tool for project management, and this manual is a detailed guide to Project Web Access (PWA). Section 1 describes common system functions, such as Project Server 2007 client connection settings and additional Outlook settings. Section 2 is a guide for the system administrator, and Sections 3 and 4 cover project management.

  1. Conceptual Web Users' Actions Prediction for Ontology-Based Browsing Recommendations

    Science.gov (United States)

    Robal, Tarmo; Kalja, Ahto

    The Internet consists of thousands of web sites with different kinds of structures. However, users browse the web according to their informational expectations of the web site searched, holding an implicit conceptual model of the domain in their minds. Moreover, people tend to repeat themselves and hold partially shared conceptual views while surfing the web, finding some areas of web sites more interesting than others. Herein, we take advantage of the latter and provide a model and a study on predicting users' actions based on web ontology concepts and their relations.

  2. A performance study of WebDav access to storages within the Belle II collaboration

    Science.gov (United States)

    Pardi, S.; Russo, G.

    2017-10-01

    WebDav and HTTP are becoming popular protocols for data access in the High Energy Physics community. The most widely used Grid and Cloud storage solutions provide such interfaces; in this scenario, tuning and performance evaluation become crucial for promoting the adoption of these protocols within the Belle II community. In this work, we present the results of a large-scale test activity conducted to evaluate the performance and reliability of the WebDav protocol, and to study its possible adoption for user analysis. More specifically, we considered a pilot infrastructure composed of a set of storage elements configured with the WebDav interface, hosted at the Belle II sites. The performance tests include a comparison with xrootd and gridftp. As reference tests we used a set of analysis jobs running under the Belle II software framework, accessing the input data with the ROOT I/O library, in order to simulate realistic user activity as closely as possible. The final analysis shows that promising performance can be achieved with WebDav on different storage systems, and provides interesting feedback for the Belle II community and for other high energy physics experiments.

  3. A Web Service for File-Level Access to Disk Images

    Directory of Open Access Journals (Sweden)

    Sunitha Misra

    2014-07-01

    Full Text Available Digital forensics tools have many potential applications in the curation of digital materials in libraries, archives and museums (LAMs). Open source digital forensics tools can help LAM professionals to extract digital contents from born-digital media and make more informed preservation decisions. Many of these tools have ways to display the metadata of the digital media, but few provide file-level access without having to mount the device or use complex command-line utilities. This paper describes a project to develop software that supports access to the contents of digital media without having to mount or download the entire image. The work examines two approaches to creating this tool: first, a graphical user interface running on a local machine; second, a web-based application running in a web browser. The project incorporates existing open source forensics tools and libraries, including The Sleuth Kit and libewf, along with the Flask web application framework and custom Python scripts to generate web pages supporting disk image browsing.

  4. GeneDig: a web application for accessing genomic and bioinformatics knowledge.

    Science.gov (United States)

    Suciu, Radu M; Aydin, Emir; Chen, Brian E

    2015-02-28

    With the exponential increase and widespread availability of genomic, transcriptomic, and proteomic data, accessing these '-omics' data is becoming increasingly difficult. The current resources for accessing and analyzing these data have been created to perform highly specific functions intended for specialists, and thus typically emphasize functionality over user experience. We have developed a web-based application, GeneDig.org, that gives any general user access to genomic information with ease and efficiency. GeneDig allows for searching and browsing genes and genomes, while a dynamic navigator displays genomic, RNA, and protein information simultaneously for co-navigation. We demonstrate that our application allows more than five times faster and more efficient access to genomic information than any currently available method. We have developed GeneDig as a platform for bioinformatics integration with usability as its central design focus. This platform will introduce genomic navigation to broader audiences while aiding the bioinformatics analyses performed in everyday biology research.

  5. On the best learning algorithm for web services response time prediction

    DEFF Research Database (Denmark)

    Madsen, Henrik; Albu, Razvan-Daniel; Popentiu-Vladicescu, Florin

    2013-01-01

    In this article we will examine the effect of different learning algorithms while training the MLP (Multilayer Perceptron) with the intention of predicting web service response times. Web services do not necessitate a user interface. This may seem contradictory to most people's concept of what an application is. A Web service is better imagined as an application "segment," or better as a program enabler. Performance is an important quality aspect of Web services because of their distributed nature. Predicting the response of web services during their operation is very important.

  6. Sign Language Translation in State Administration in Germany: Barrier Free Web Accessibility

    OpenAIRE

    Lišková, Kateřina

    2014-01-01

    The aim of this thesis is to describe Web accessibility in state administration in the Federal Republic of Germany in relation to the socio-demographic group of deaf sign language users who did not have the opportunity to gain proper knowledge of the written form of the German language. The demand of the Deaf for information in an accessible form, as based on legal documents, is presented in relation to the theory of translation. How translating from written texts into sign language works in pract...

  7. E-serials cataloging access to continuing and integrating resources via the catalog and the web

    CERN Document Server

    Cole, Jim

    2014-01-01

    This comprehensive guide examines the state of electronic serials cataloging with special attention paid to online capacities. E-Serials Cataloging: Access to Continuing and Integrating Resources via the Catalog and the Web presents a review of the e-serials cataloging methods of the 1990s and discusses the international standards (ISSN, ISBD[ER], AACR2) that are applicable. It puts the concept of online accessibility into historical perspective and offers a look at current applications to consider. Practicing librarians, catalogers and administrators of technical services, cataloging and serv

  8. World Wide Webs: Crossing the Digital Divide through Promotion of Public Access

    Science.gov (United States)

    Coetzee, Liezl

    “As Bill Gates and Steve Case proclaim the global omnipresence of the Internet, the majority of non-Western nations and 97 per cent of the world's population remain unconnected to the net for lack of money, access, or knowledge. This exclusion of so vast a share of the global population from the Internet sharply contradicts the claims of those who posit the World Wide Web as a ‘universal' medium of egalitarian communication.” (Trend 2001:2)

  9. Working without a Crystal Ball: Predicting Web Trends for Web Services Librarians

    Science.gov (United States)

    Ovadia, Steven

    2008-01-01

    User-centered design is a principle stating that electronic resources, like library Web sites, should be built around the needs of the users. This article interviews Web developers of library and non-library-related Web sites, determining how they assess user needs and how they decide to adapt certain technologies for users. According to the…

  10. Enhancing Accessibility of Web Content for the Print-Impaired and Blind People

    Science.gov (United States)

    Chalamandaris, Aimilios; Raptis, Spyros; Tsiakoulis, Pirros; Karabetsos, Sotiris

    Blind people, and print-impaired people in general, are often restricted to using their own computers, most often enhanced with expensive screen-reading programs, in order to access the web, and only in the way the screen-reading program allows. In this paper we present SpellCast Navi, a tool intended for people with visual impairments, which attempts to combine the advantages of both customized and generic web enhancement tools. It consists of a generically designed engine and a set of case-specific filters. It can run on a typical web browser and computer, without the need to install any additional application locally. It acquires and parses the content of web pages, converts bilingual text into synthetic speech using a high-quality speech synthesizer, and supports a set of common functionalities such as navigation through hotkeys, audible navigation lists and more. By using a post-hoc approach based on a-priori information about the website's layout, audible presentation and navigation through the website are more intuitive and more efficient than with a typical screen-reading application. SpellCast Navi poses no requirements on web pages and introduces no overhead to the design and development of a website, as it functions as a hosted proxy service.

  11. Web Accessibility of the Higher Education Institute Websites Based on the World Wide Web Consortium and Section 508 of the Rehabilitation Act

    Science.gov (United States)

    Alam, Najma H.

    2014-01-01

    The problem observed in this study is the low level of compliance of higher education website accessibility with Section 508 of the Rehabilitation Act of 1973. The literature supports the non-compliance of websites with the federal policy in general. Studies were performed to analyze the accessibility of fifty-four sample web pages using automated…

  12. SalanderMaps: A rapid overview about felt earthquakes through data mining of web-accesses

    Science.gov (United States)

    Kradolfer, Urs

    2013-04-01

    While seismological observatories detect and locate earthquakes based on measurements of the ground motion, they neither know a priori whether an earthquake has been felt by the public nor where it has been felt. Such information is usually gathered by evaluating feedback reported by the public through on-line forms on the web. However, after a felt earthquake in Switzerland, many people visit the webpages of the Swiss Seismological Service (SED) at the ETH Zurich, and each such visit leaves traces in the logfiles on our web servers. Data mining techniques applied to these logfiles, combined with mining publicly available databases on the internet, open possibilities to obtain previously unknown information about our virtual visitors. In order to provide precise information to authorities and the media, it would be desirable to rapidly know from which locations these web accesses originate. The method 'Salander' (Seismic Activity Linked to Area codes - Nimble Detection of Earthquake Rumbles) will be introduced, and it will be explained how the IP-addresses (each computer or router directly connected to the internet has a unique IP-address; an example would be 129.132.53.5) of a sufficient number of our virtual visitors were linked to their geographical areas. This allows us to know unprecedentedly quickly whether and where an earthquake was felt in Switzerland. It will also be explained why the Salander method is superior to commercial so-called geolocation products. The corresponding products of the Salander method, animated SalanderMaps, which are routinely generated after each earthquake with a magnitude of M>2 in Switzerland (http://www.seismo.ethz.ch/prod/salandermaps/, available after March 2013), demonstrate how the wavefield of earthquakes propagates through Switzerland and where it was felt. Often, such information is available within less than 60 seconds after origin time, and we always get a clear picture within five minutes after origin time.
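    The core idea of linking web-access IPs to geographic areas can be sketched as follows. This is a toy illustration of the concept, not the actual Salander implementation; the prefix-to-area table and the area names are invented (the addresses come from the RFC 5737 documentation ranges):

    ```python
    import ipaddress
    from collections import Counter

    # Hypothetical table mapping known IP prefixes to geographic areas.
    PREFIX_AREAS = {
        ipaddress.ip_network("192.0.2.0/24"): "Zurich",
        ipaddress.ip_network("198.51.100.0/24"): "Basel",
    }

    def area_for(ip: str):
        """Return the area for an IP from the webserver logfile, or None."""
        addr = ipaddress.ip_address(ip)
        for net, area in PREFIX_AREAS.items():
            if addr in net:
                return area
        return None

    # Source IPs extracted from access-log lines after a felt event
    hits = ["192.0.2.17", "192.0.2.99", "198.51.100.3", "203.0.113.5"]
    counts = Counter(a for a in map(area_for, hits) if a)
    print(counts)  # Counter({'Zurich': 2, 'Basel': 1})
    ```

    Aggregating such per-area hit counts over short time windows is what makes a rapid "where was it felt" map possible.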

  13. Accessibility and preferred use of online Web applications among WIC participants with Internet access.

    Science.gov (United States)

    Bensley, Robert J; Hovis, Amanda; Horton, Karissa D; Loyo, Jennifer J; Bensley, Kara M; Phillips, Diane; Desmangles, Claudia

    2014-01-01

    This study examined the current technology use of clients in the western Special Supplemental Nutrition Program for Women, Infants, and Children (WIC) region and the preferences these current clients have for using new technologies to interact with WIC. Cross-sectional convenience sample for online survey of WIC clients over 2 months in 2011. A weighted sample of 8,144 participants showed that the majority of WIC clients have access to the Internet using a computer or mobile phone. E-mail, texting, and Facebook were technologies most often used for communication. Significant differences (P video chat. Technologies should be considered for addressing WIC clients' needs, including use of text messaging and smartphone apps for appointments, education, and other WIC services; online scheduling and nutrition education; and a stronger Facebook presence for connecting with WIC clients and breastfeeding support. Published by Elsevier Inc.

  14. Predictive networks: a flexible, open source, web application for integration and analysis of human gene networks.

    Science.gov (United States)

    Haibe-Kains, Benjamin; Olsen, Catharina; Djebbari, Amira; Bontempi, Gianluca; Correll, Mick; Bouton, Christopher; Quackenbush, John

    2012-01-01

    Genomics has provided us with an unprecedented quantity of data on the genes that are activated or repressed in a wide range of phenotypes. We have increasingly come to recognize that defining the networks and pathways underlying these phenotypes requires both the integration of multiple data types and the development of advanced computational methods to infer relationships between the genes and to estimate the predictive power of the networks through which they interact. To address these issues we have developed Predictive Networks (PN), a flexible, open-source, web-based application and data services framework that enables the integration, navigation, visualization and analysis of gene interaction networks. The primary goal of PN is to allow biomedical researchers to evaluate experimentally derived gene lists in the context of large-scale gene interaction networks. The PN analytical pipeline involves two key steps. The first is the collection of a comprehensive set of known gene interactions derived from a variety of publicly available sources. The second is to use these 'known' interactions together with gene expression data to infer robust gene networks. The PN web application is accessible from http://predictivenetworks.org. The PN code base is freely available at https://sourceforge.net/projects/predictivenets/.

  15. The new ALICE DQM client: a web access to ROOT-based objects

    International Nuclear Information System (INIS)

    Von Haller, B; Carena, F; Carena, W; Chapeland, S; Barroso, V Chibante; Costa, F; Delort, C; Diviá, R.; Fuchs, U; Niedziela, J; Simonetti, G; Soós, C; Telesca, A; Vyvre, P Vande; Wegrzynek, A; Dénes, E

    2015-01-01

    A Large Ion Collider Experiment (ALICE) is the heavy-ion detector designed to study the physics of strongly interacting matter and the quark-gluon plasma at the CERN Large Hadron Collider (LHC). The online Data Quality Monitoring (DQM) plays an essential role in the experiment operation by providing shifters with immediate feedback on the data being recorded in order to quickly identify and overcome problems. An immediate access to the DQM results is needed not only by shifters in the control room but also by detector experts worldwide. As a consequence, a new web application has been developed to dynamically display and manipulate the ROOT-based objects produced by the DQM system in a flexible and user-friendly interface. The architecture and design of the tool, its main features and the technologies that were used, both on the server and the client side, are described. In particular, we detail how we took advantage of the most recent ROOT JavaScript I/O and web server library to give interactive access to ROOT objects stored in a database. We describe as well the use of modern web techniques and packages such as AJAX, DHTMLX and jQuery, which has been instrumental in the successful implementation of a reactive and efficient application. We finally present the resulting application and how code quality was ensured. We conclude with a roadmap for future technical and functional developments. (paper)

  16. Integrated Automatic Workflow for Phylogenetic Tree Analysis Using Public Access and Local Web Services.

    Science.gov (United States)

    Damkliang, Kasikrit; Tandayya, Pichaya; Sangket, Unitsa; Pasomsub, Ekawat

    2016-11-28

    At present, coding sequences (CDS) continue to be discovered, and ever larger CDS are being revealed frequently. Approaches and related tools have been developed and upgraded concurrently, especially for phylogenetic tree analysis. This paper proposes an integrated automatic Taverna workflow for phylogenetic tree inference using public-access web services at the European Bioinformatics Institute (EMBL-EBI) and the Swiss Institute of Bioinformatics (SIB), together with our own locally deployed web services. The workflow input is a set of CDS in the Fasta format. The workflow supports 1,000 to 20,000 bootstrapping replicates. The workflow performs tree inference with the Parsimony (PARS), Distance Matrix - Neighbor Joining (DIST-NJ), and Maximum Likelihood (ML) algorithms of the EMBOSS PHYLIPNEW package, based on our proposed Multiple Sequence Alignment (MSA) similarity score. The local web services are implemented and deployed in two ways, using Soaplab2 and Apache Axis2: SOAP and Java Web Services (JWS) provide WSDL endpoints to Taverna Workbench, a workflow manager. The workflow has been validated, its performance has been measured, and its results have been verified. Our workflow's execution time is less than ten minutes for inferring a tree with 10,000 bootstrapping replicates. This paper proposes a new integrated automatic workflow which will be beneficial to bioinformaticians with an intermediate level of knowledge and experience. All local services have been deployed at our portal http://bioservices.sci.psu.ac.th.

  17. U-Access: a web-based system for routing pedestrians of differing abilities

    Science.gov (United States)

    Sobek, Adam D.; Miller, Harvey J.

    2006-09-01

    For most people, traveling through urban and built environments is straightforward. However, for people with physical disabilities, even a short trip can be difficult and perhaps impossible. This paper provides the design and implementation of a web-based system for the routing and prescriptive analysis of pedestrians with different physical abilities within built environments. U-Access, as a routing tool, provides pedestrians with the shortest feasible route with respect to one of three differing ability levels, namely, peripatetic (unaided mobility), aided mobility (mobility with the help of a cane, walker or crutches) and wheelchair users. U-Access is also an analytical tool that can help identify obstacles in built environments that create routing discrepancies among pedestrians with different physical abilities. This paper discusses the system design, including database, algorithm and interface specifications, and technologies for efficiently delivering results through the World Wide Web (WWW). This paper also provides an illustrative example of a routing problem and an analytical evaluation of the existing infrastructure which identifies the obstacles that pose the greatest discrepancies between physical ability levels. U-Access was evaluated by wheelchair users and route experts from the Center for Disability Services at The University of Utah, USA.
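    The ability-aware routing described above can be sketched as a shortest-path search that simply skips edges infeasible for the requested ability level. This is our own minimal illustration of the idea, not the U-Access implementation; the graph, distances, and level names are invented:

    ```python
    import heapq

    # Toy pedestrian network: node -> [(neighbor, length_m, allowed_levels)]
    # The edge A->C might be a stairway, usable only by unaided pedestrians.
    GRAPH = {
        "A": [("B", 50, {"peripatetic", "aided", "wheelchair"}),
              ("C", 30, {"peripatetic"})],
        "B": [("D", 40, {"peripatetic", "aided", "wheelchair"})],
        "C": [("D", 20, {"peripatetic", "aided"})],
        "D": [],
    }

    def shortest_feasible(start, goal, level):
        """Dijkstra restricted to edges traversable at the given ability level."""
        dist = {start: 0}
        queue = [(0, start)]
        while queue:
            d, node = heapq.heappop(queue)
            if node == goal:
                return d
            if d > dist.get(node, float("inf")):
                continue  # stale queue entry
            for nbr, length, levels in GRAPH[node]:
                if level not in levels:
                    continue  # edge infeasible for this ability level
                nd = d + length
                if nd < dist.get(nbr, float("inf")):
                    dist[nbr] = nd
                    heapq.heappush(queue, (nd, nbr))
        return None  # no feasible route

    print(shortest_feasible("A", "D", "peripatetic"))  # 50 (via C)
    print(shortest_feasible("A", "D", "wheelchair"))   # 90 (via B)
    ```

    Comparing route lengths across levels, as in the last two calls, is also the basis for the analytical use mentioned in the abstract: a large discrepancy flags an obstacle in the built environment.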

  18. The quality and accessibility of Australian depression sites on the World Wide Web.

    Science.gov (United States)

    Griffiths, Kathleen M; Christensen, Helen

    2002-05-20

    To provide information about Australian depression sites and the quality of their content; to identify possible indicators of the quality of site content; and determine the accessibility of Australian depression web sites. Cross-sectional survey of 15 Australian depression web sites. (i) Quality of treatment content (concordance of site information with evidence-based guidelines, number of evidence-based treatments recommended, discussion of other relevant issues, subjective rating of treatment content); (ii) potential quality indicators (conformity with DISCERN criteria, citation of scientific evidence); (iii) accessibility (search engine rank). Mean content quality scores were not high and site accessibility was poor. There was a consistent association between the quality-of-content measures and the DISCERN and scientific accountability scores. Search engine rank was not associated with content quality. The quality of information about depression on Australian websites could be improved. DISCERN may be a useful indicator of website quality, as may scientific accountability. The sites that received the highest quality-of-content ratings were beyondblue, BluePages, CRUfAD and InfraPsych.

  19. Remote Internet access to advanced analytical facilities: a new approach with Web-based services.

    Science.gov (United States)

    Sherry, N; Qin, J; Fuller, M Suominen; Xie, Y; Mola, O; Bauer, M; McIntyre, N S; Maxwell, D; Liu, D; Matias, E; Armstrong, C

    2012-09-04

    Over the past decade, the increasing availability of the World Wide Web has held out the possibility that the efficiency of scientific measurements could be enhanced in cases where experiments were being conducted at distant facilities. Examples of early successes have included X-ray diffraction (XRD) experimental measurements of protein crystal structures at synchrotrons and access to scanning electron microscopy (SEM) and NMR facilities by users from institutions that do not possess such advanced capabilities. Experimental control, visual contact, and receipt of results has used some form of X forwarding and/or VNC (virtual network computing) software that transfers the screen image of a server at the experimental site to that of the users' home site. A more recent development is a web services platform called Science Studio that provides teams of scientists with secure links to experiments at one or more advanced research facilities. The software provides a widely distributed team with a set of controls and screens to operate, observe, and record essential parts of the experiment. As well, Science Studio provides high speed network access to computing resources to process the large data sets that are often involved in complex experiments. The simple web browser and the rapid transfer of experimental data to a processing site allow efficient use of the facility and assist decision making during the acquisition of the experimental results. The software provides users with a comprehensive overview and record of all parts of the experimental process. A prototype network is described involving X-ray beamlines at two different synchrotrons and an SEM facility. An online parallel processing facility has been developed that analyzes the data in near-real time using stream processing. Science Studio can be expanded to include many other analytical applications, providing teams of users with rapid access to processed results along with the means for detailed

  20. Improving Inpatient Surveys: Web-Based Computer Adaptive Testing Accessed via Mobile Phone QR Codes.

    Science.gov (United States)

    Chien, Tsair-Wei; Lin, Weir-Sen

    2016-03-02

    The National Health Service (NHS) 70-item inpatient questionnaire surveys inpatients on their perceptions of their hospitalization experience. However, it imposes more burden on the patient than other similar surveys. The literature shows that computerized adaptive testing (CAT) based on item response theory can help shorten the item length of a questionnaire without compromising its precision. Our aim was to investigate whether CAT can be (1) efficient with item reduction and (2) used with quick response (QR) codes scanned by mobile phones. After downloading the 2008 inpatient survey data from the Picker Institute Europe website and analyzing the difficulties of this 70-item questionnaire, we used an author-made Excel program using the Rasch partial credit model to simulate 1000 patients' true scores followed by a standard normal distribution. The CAT was compared to two other scenarios of answering all items (AAI) and the randomized selection method (RSM), as we investigated item length (efficiency) and measurement accuracy. The author-made Web-based CAT program for gathering patient feedback was effectively accessed from mobile phones by scanning the QR code. We found that the CAT can be more efficient for patients answering questions (ie, fewer items to respond to) than either AAI or RSM without compromising its measurement accuracy. A Web-based CAT inpatient survey accessed by scanning a QR code on a mobile phone was viable for gathering inpatient satisfaction responses. With advances in technology, patients can now be offered alternatives for providing feedback about hospitalization satisfaction. This Web-based CAT is a possible option in health care settings for reducing the number of survey items, as well as offering an innovative QR code access.
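    The item-reduction mechanism behind CAT can be sketched in a few lines. The study above used the Rasch partial credit model; for brevity the sketch below uses the simpler dichotomous Rasch model, and the item difficulties, responses, and one-step ability update are invented for illustration:

    ```python
    import math

    def p_correct(theta, b):
        """Dichotomous Rasch model: probability of endorsing item of difficulty b."""
        return 1.0 / (1.0 + math.exp(-(theta - b)))

    def info(theta, b):
        """Fisher information of a Rasch item, maximal where b is closest to theta."""
        p = p_correct(theta, b)
        return p * (1 - p)

    def next_item(theta, difficulties, used):
        """Pick the unused item that is most informative at the current estimate."""
        candidates = [i for i in range(len(difficulties)) if i not in used]
        return max(candidates, key=lambda i: info(theta, difficulties[i]))

    difficulties = [-2.0, -1.0, 0.0, 1.0, 2.0]   # hypothetical item bank
    theta, used = 0.0, set()
    for response in [1, 0, 1]:                   # hypothetical answers
        i = next_item(theta, difficulties, used)
        used.add(i)
        p = p_correct(theta, difficulties[i])
        # Crude one-item Newton update of the ability estimate (illustration only)
        theta += (response - p) / max(info(theta, difficulties[i]), 1e-6)
    print(sorted(used), round(theta, 2))
    ```

    Because each administered item targets the current ability estimate, far fewer of the 70 items are needed to reach a given measurement precision, which is the efficiency gain the study reports.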

  1. CentroidFold: a web server for RNA secondary structure prediction

    OpenAIRE

    Sato, Kengo; Hamada, Michiaki; Asai, Kiyoshi; Mituyama, Toutai

    2009-01-01

    The CentroidFold web server (http://www.ncrna.org/centroidfold/) is a web application for RNA secondary structure prediction powered by one of the most accurate prediction engines. The server accepts two kinds of sequence data: a single RNA sequence and a multiple alignment of RNA sequences. It responds with a prediction result shown in the popular base-pair notation and as a graph representation. A PDF version of the graph representation is also available. For a multiple alignment sequence, the ser...
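    The "popular base-pair notation" mentioned here is commonly the dot-bracket string, where matched parentheses mark paired bases and dots mark unpaired ones. The small helper below (our illustration, not part of the CentroidFold server) recovers the base-pair index list from such a string:

    ```python
    def base_pairs(dot_bracket):
        """Return (i, j) index pairs for matched '(' and ')' in a dot-bracket string."""
        stack, pairs = [], []
        for i, ch in enumerate(dot_bracket):
            if ch == "(":
                stack.append(i)      # remember the opening position
            elif ch == ")":
                pairs.append((stack.pop(), i))  # close the innermost open pair
        return sorted(pairs)

    print(base_pairs("((..))."))  # [(0, 5), (1, 4)]
    ```

    The stack-based matching works because canonical (pseudoknot-free) secondary structures are properly nested, exactly like balanced parentheses.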

  2. Web 2.0 Sites for Collaborative Self-Access: The Learning Advisor vs. Google®

    Directory of Open Access Journals (Sweden)

    Craig D. Howard

    2011-09-01

    Full Text Available While Web 2.0 technologies provide motivated, self-access learners with unprecedented opportunities for language learning, Web 2.0 designs are not of universally equal value for learning. This article reports on research carried out at Indiana University Bloomington using an empirical method to select websites for self-access language learning. Two questions related to Web 2.0 recommendations were asked: (1) How do recommended Web 2.0 sites rank in terms of interactivity features? (2) How likely is a learner to find highly interactive sites on their own? A list of 20 sites used for supplemental and self-access activities in language programs at five universities was compiled and provided the initial data set. Purposive sampling criteria revealed that 10 sites truly represented Web 2.0 design. To address the first question, a feature analysis was applied (Herring, The international handbook of internet research. Berlin: Springer, 2008). An interactivity framework was developed from previous research to identify Web 2.0 design features, and sites were ranked according to feature quantity. The method used to address the second question was an interconnectivity analysis that measured direct and indirect interconnectivity within Google results. Highly interactive Web 2.0 sites were not prominent in Google search results, nor were they often linked via third-party sites. It was determined that, using typical keywords or searching via blogs and recommendation sites, self-access learners were highly unlikely to find the most promising Web 2.0 sites for language learning. A discussion of the role of the learning advisor in guiding Web 2.0 collaborative self-access, as well as some strategic shortcuts to quick analysis, concludes the article.

  3. Implementing Recommendations From Web Accessibility Guidelines: A Comparative Study of Nondisabled Users and Users With Visual Impairments.

    Science.gov (United States)

    Schmutz, Sven; Sonderegger, Andreas; Sauer, Juergen

    2017-09-01

    The present study examined whether implementing recommendations of Web accessibility guidelines would have different effects on nondisabled users than on users with visual impairments. The predominant approach for making Web sites accessible to users with disabilities is to apply accessibility guidelines. However, it has hardly been examined whether this approach has side effects for nondisabled users. A comparison of the effects on both user groups would contribute to a better understanding of the possible advantages and drawbacks of applying accessibility guidelines. Participants from two matched samples, comprising 55 participants with visual impairments and 55 without impairments, took part in synchronous remote testing of a Web site. Each participant was randomly assigned to one of three Web sites, which differed in level of accessibility (very low, low, and high) according to recommendations of the well-established Web Content Accessibility Guidelines 2.0 (WCAG 2.0). Performance (i.e., task completion rate and task completion time) and a range of subjective variables (i.e., perceived usability, positive affect, negative affect, perceived aesthetics, perceived workload, and user experience) were measured. Higher conformance to Web accessibility guidelines resulted in increased performance and more positive user ratings (e.g., perceived usability or aesthetics) for both user groups. There was no interaction between user group and accessibility level. Higher conformance to WCAG 2.0 may thus benefit nondisabled users and users with visual impairments alike. Practitioners may use the present findings as a basis for deciding whether and how to implement accessibility best practice.

  4. Access and completion of a Web-based treatment in a population-based sample of tornado-affected adolescents.

    Science.gov (United States)

    Price, Matthew; Yuen, Erica K; Davidson, Tatiana M; Hubel, Grace; Ruggiero, Kenneth J

    2015-08-01

    Although Web-based treatments have significant potential to assess and treat difficult-to-reach populations, such as trauma-exposed adolescents, the extent to which such treatments are accessed and used is unclear. The present study evaluated the proportion of adolescents who accessed and completed a Web-based treatment for postdisaster mental health symptoms. Correlates of access and completion were examined. A sample of 2,000 adolescents living in tornado-affected communities was assessed via structured telephone interview and invited to a Web-based treatment. The modular treatment addressed symptoms of posttraumatic stress disorder, depression, and alcohol and tobacco use. Participants were randomized to experimental or control conditions after accessing the site. Overall access for the intervention was 35.8%. Module completion for those who accessed ranged from 52.8% to 85.6%. Adolescents whose parents used the Internet to obtain health-related information were more likely to access the treatment. Adolescent males were less likely to access the treatment. Future work is needed to identify strategies to further increase the reach of Web-based treatments to provide clinical services in a postdisaster context. (c) 2015 APA, all rights reserved.

  5. Modification of CAS-protocol for improvement of security web-applications from unauthorized access

    Directory of Open Access Journals (Sweden)

    Alexey Igorevich Alexandrov

    2017-07-01

    Full Text Available The dissemination of information technologies and the expansion of their application demand a constantly increasing security level for users operating with confidential information and personal data. Setting up secure user identification is probably one of the most common tasks in software development. Today, despite the availability of a large number of authentication tools, new solutions, mechanisms and technologies are introduced regularly, primarily to raise the level of protection against unauthorized access. This article describes the experience of using a central user authentication service based on the CAS protocol (CAS, Central Authentication Service) and free open-source software, analyzing its main advantages and disadvantages and describing a possible modification that would better protect web-based information systems against illegal access. The article contains recommendations for setting a maximum time limit for user sessions on services integrated with central authentication, and analyzes research on applying modern web technologies together with a CAS-based user authentication system. In addition, it describes ways of modernizing the CAS server with additional modules: one for collecting and analyzing information-system usage, and another for user management. Furthermore, the CAS protocol can be used at universities and other organizations to create a unified information environment in education.
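
    At the heart of the CAS protocol is a simple HTTP round trip: after login, the protected application validates the service ticket it received against the central server. A minimal sketch of building the CAS 2.0 validation request (the hostnames and ticket value are hypothetical):

```python
from urllib.parse import urlencode

def cas_validate_url(cas_base, service, ticket):
    """Build the CAS 2.0 /serviceValidate URL that a protected web
    application calls to verify a service ticket with the central
    authentication server."""
    query = urlencode({"service": service, "ticket": ticket})
    return f"{cas_base}/serviceValidate?{query}"

url = cas_validate_url("https://sso.example.edu/cas",
                       "https://app.example.edu/login",
                       "ST-12345-abc")
print(url)
# The CAS server answers with an XML document naming the
# authenticated user, or an authentication-failure element.
```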

  6. Web-accessible molecular modeling with Rosetta: The Rosetta Online Server that Includes Everyone (ROSIE).

    Science.gov (United States)

    Moretti, Rocco; Lyskov, Sergey; Das, Rhiju; Meiler, Jens; Gray, Jeffrey J

    2018-01-01

    The Rosetta molecular modeling software package provides a large number of experimentally validated tools for modeling and designing proteins, nucleic acids, and other biopolymers, with new protocols being added continually. While freely available to academic users, external usage is limited by the need for expertise in the Unix command line environment. To make Rosetta protocols available to a wider audience, we previously created a web server called Rosetta Online Server that Includes Everyone (ROSIE), which provides a common environment for hosting web-accessible Rosetta protocols. Here we describe a simplification of the ROSIE protocol specification format, one that permits easier implementation of Rosetta protocols. Whereas the previous format required creating multiple separate files in different locations, the new format allows specification of the protocol in a single file. This new, simplified protocol specification has more than doubled the number of Rosetta protocols available under ROSIE. These new applications include pKa determination, lipid accessibility calculation, ribonucleic acid redesign, protein-protein docking, protein-small molecule docking, symmetric docking, antibody docking, cyclic toxin docking, critical binding peptide determination, and mapping small molecule binding sites. ROSIE is freely available to academic users at http://rosie.rosettacommons.org. © 2017 The Protein Society.

  7. ChEMBL web services: streamlining access to drug discovery data and utilities.

    Science.gov (United States)

    Davies, Mark; Nowotka, Michał; Papadatos, George; Dedman, Nathan; Gaulton, Anna; Atkinson, Francis; Bellis, Louisa; Overington, John P

    2015-07-01

    ChEMBL is now a well-established resource in the fields of drug discovery and medicinal chemistry research. The ChEMBL database curates and stores standardized bioactivity, molecule, target and drug data extracted from multiple sources, including the primary medicinal chemistry literature. Programmatic access to ChEMBL data has been improved by a recent update to the ChEMBL web services (version 2.0.x, https://www.ebi.ac.uk/chembl/api/data/docs), which exposes significantly more data from the underlying database and introduces new functionality. To complement the data-focused services, a utility service (version 1.0.x, https://www.ebi.ac.uk/chembl/api/utils/docs), which provides RESTful access to commonly used cheminformatics methods, has also been concurrently developed. The ChEMBL web services can be used together or independently to build applications and data processing workflows relevant to drug discovery and chemical biology. © The Author(s) 2015. Published by Oxford University Press on behalf of Nucleic Acids Research.
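
    The record URLs of the data service follow a simple resource/identifier pattern, which makes programmatic access easy to wrap. A small sketch under that assumption (the helper name is ours; CHEMBL25 is the identifier for aspirin):

```python
BASE = "https://www.ebi.ac.uk/chembl/api/data"

def chembl_url(resource, chembl_id, fmt="json"):
    """Compose a ChEMBL data web-service URL for one record,
    e.g. a molecule, target or assay, in the requested format."""
    return f"{BASE}/{resource}/{chembl_id}.{fmt}"

print(chembl_url("molecule", "CHEMBL25"))
# A live call would then be, e.g.:
#   requests.get(chembl_url("molecule", "CHEMBL25")).json()
```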

  8. The TOPCONS web server for consensus prediction of membrane protein topology and signal peptides.

    Science.gov (United States)

    Tsirigos, Konstantinos D; Peters, Christoph; Shu, Nanjiang; Käll, Lukas; Elofsson, Arne

    2015-07-01

    TOPCONS (http://topcons.net/) is a widely used web server for consensus prediction of membrane protein topology. We hereby present a major update to the server, with some substantial improvements, including the following: (i) TOPCONS can now efficiently separate signal peptides from transmembrane regions. (ii) The server can now differentiate more successfully between globular and membrane proteins. (iii) The server now is even slightly faster, although a much larger database is used to generate the multiple sequence alignments. For most proteins, the final prediction is produced in a matter of seconds. (iv) The user-friendly interface is retained, with the additional feature of submitting batch files and accessing the server programmatically using standard interfaces, making it thus ideal for proteome-wide analyses. Indicatively, the user can now scan the entire human proteome in a few days. (v) For proteins with homology to a known 3D structure, the homology-inferred topology is also displayed. (vi) Finally, the combination of methods currently implemented achieves an overall increase in performance by 4% as compared to the currently available best-scoring methods and TOPCONS is the only method that can identify signal peptides and still maintain a state-of-the-art performance in topology predictions. © The Author(s) 2015. Published by Oxford University Press on behalf of Nucleic Acids Research.
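
    A consensus topology of this kind is typically reported as a per-residue string (i/o for inside/outside loops, M for membrane segments, S for a signal peptide). A hedged sketch of post-processing such a string, independent of the server itself:

```python
import re

def tm_segments(topology):
    """Return (start, end) residue spans of transmembrane runs ('M')
    in a topology string that uses i/o for inside/outside loops,
    M for membrane helices and S for a signal peptide."""
    return [(m.start(), m.end() - 1) for m in re.finditer(r"M+", topology)]

# A made-up protein: signal peptide, then two membrane helices.
topo = "SSSS" + "iiii" + "M" * 10 + "o" * 5 + "M" * 10 + "iiii"
print(tm_segments(topo))  # -> [(8, 17), (23, 32)]
```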

  9. Making It Work for Everyone: HTML5 and CSS Level 3 for Responsive, Accessible Design on Your Library's Web Site

    Science.gov (United States)

    Baker, Stewart C.

    2014-01-01

    This article argues that accessibility and universality are essential to good Web design. A brief review of library science literature sets the issue of Web accessibility in context. The bulk of the article explains the design philosophies of progressive enhancement and responsive Web design, and summarizes recent updates to WCAG 2.0, HTML5, CSS…

  10. Fuzzy-logic based learning style prediction in e-learning using web ...

    Indian Academy of Sciences (India)

    tion, especially in web environments and proposes to use Fuzzy rules to handle the uncertainty in .... learning in safe and supportive environment ... working of the proposed Fuzzy-logic based learning style prediction in e-learning. Section 4.

  11. Release Early, Release Often: Predicting Change in Versioned Knowledge Organization Systems on the Web

    OpenAIRE

    Meroño-Peñuela, Albert; Guéret, Christophe; Schlobach, Stefan

    2015-01-01

    The Semantic Web is built on top of Knowledge Organization Systems (KOS) (vocabularies, ontologies, concept schemes) that provide structured, interoperable and distributed access to Linked Data on the Web. The maintenance of these KOS over time has produced a number of KOS version chains: subsequent unique version identifiers to unique states of a KOS. However, the release of new KOS versions poses challenges to both KOS publishers and users. For publishers, updating a KOS is a knowledge int...

  12. Distill: a suite of web servers for the prediction of one-, two- and three-dimensional structural features of proteins

    Directory of Open Access Journals (Sweden)

    Walsh Ian

    2006-09-01

    Full Text Available Abstract Background We describe Distill, a suite of servers for the prediction of protein structural features: secondary structure; relative solvent accessibility; contact density; backbone structural motifs; residue contact maps at 6, 8 and 12 Angstrom; coarse protein topology. The servers are based on large-scale ensembles of recursive neural networks and trained on large, up-to-date, non-redundant subsets of the Protein Data Bank. Together with structural feature predictions, Distill includes a server for prediction of Cα traces for short proteins (up to 200 amino acids). Results The servers are state-of-the-art, with secondary structure predicted correctly for nearly 80% of residues (currently the top performance on EVA), 2-class solvent accessibility nearly 80% correct, and contact maps exceeding 50% precision on the top non-diagonal contacts. A preliminary implementation of the predictor of protein Cα traces featured among the top 20 Novel Fold predictors at the last CASP6 experiment as group Distill (ID 0348). The majority of the servers, including the Cα trace predictor, now take into account homology information from the PDB, when available, resulting in greatly improved reliability. Conclusion All predictions are freely available through a simple joint web interface and the results are returned by email. In a single submission the user can send protein sequences totalling up to 32k residues to all or a selection of the servers. Distill is accessible at: http://distill.ucd.ie/distill/.
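
    The secondary-structure figure quoted above is a per-residue 3-class accuracy (Q3). As an illustration of how such a score is computed (toy strings, not Distill output):

```python
def q3(predicted, observed):
    """Per-residue 3-class secondary-structure accuracy (Q3): the
    fraction of positions where the predicted class (H/E/C) matches
    the observed one."""
    if len(predicted) != len(observed):
        raise ValueError("sequences must be the same length")
    hits = sum(p == o for p, o in zip(predicted, observed))
    return hits / len(observed)

print(q3("HHHHCCEEEE", "HHHCCCEEEE"))  # 9 of 10 positions match -> 0.9
```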

  13. Comparison of RF spectrum prediction methods for dynamic spectrum access

    Science.gov (United States)

    Kovarskiy, Jacob A.; Martone, Anthony F.; Gallagher, Kyle A.; Sherbondy, Kelly D.; Narayanan, Ram M.

    2017-05-01

    Dynamic spectrum access (DSA) refers to the adaptive utilization of today's busy electromagnetic spectrum. Cognitive radio/radar technologies require DSA to intelligently transmit and receive information in changing environments. Predicting radio frequency (RF) activity reduces sensing time and energy consumption for identifying usable spectrum. Typical spectrum prediction methods involve modeling spectral statistics with Hidden Markov Models (HMM) or various neural network structures. HMMs describe the time-varying state probabilities of Markov processes as a dynamic Bayesian network. Neural Networks model biological brain neuron connections to perform a wide range of complex and often non-linear computations. This work compares HMM, Multilayer Perceptron (MLP), and Recurrent Neural Network (RNN) algorithms and their ability to perform RF channel state prediction. Monte Carlo simulations on both measured and simulated spectrum data evaluate the performance of these algorithms. Generalizing spectrum occupancy as an alternating renewal process allows Poisson random variables to generate simulated data while energy detection determines the occupancy state of measured RF spectrum data for testing. The results suggest that neural networks achieve better prediction accuracy and prove more adaptable to changing spectral statistics than HMMs given sufficient training data.
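
    The simplest of the compared models, a two-state Markov chain over channel occupancy, can be fit by counting transitions in an observed sequence. A toy sketch of that baseline (our illustration, not the authors' code):

```python
def fit_markov(states):
    """Estimate the 2x2 transition matrix of a binary occupancy
    sequence (0 = idle, 1 = busy) by counting observed transitions."""
    counts = [[0, 0], [0, 0]]
    for a, b in zip(states, states[1:]):
        counts[a][b] += 1
    matrix = []
    for row in counts:
        total = sum(row)
        # Fall back to a uniform row if a state was never visited.
        matrix.append([c / total if total else 0.5 for c in row])
    return matrix

def predict_next(matrix, current):
    """One-step channel state prediction: the more probable successor."""
    return 0 if matrix[current][0] >= matrix[current][1] else 1

occupancy = [0, 0, 1, 1, 1, 0, 0, 1, 1, 0]
P = fit_markov(occupancy)
print(P[1], predict_next(P, 1))  # busy channels tend to stay busy here
```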

  14. Assessing the Library Homepages of COPLAC Institutions for Section 508 Accessibility Errors: Who's Accessible, Who's Not, and How the Online WebXACT Assessment Tool Can Help

    Science.gov (United States)

    Huprich, Julia; Green, Ravonne

    2007-01-01

    The websites of Council of Public Liberal Arts Colleges (COPLAC) libraries were assessed for Section 508 errors using the online WebXACT tool. Only three of the twenty-one institutions (14%) had zero accessibility errors. Eighty-six percent of the COPLAC institutions had an average of 1.24 errors. Section 508 compliance is required for institutions…

  15. Development of Remote Monitoring and a Control System Based on PLC and WebAccess for Learning Mechatronics

    Directory of Open Access Journals (Sweden)

    Wen-Jye Shyr

    2013-02-01

    Full Text Available This study develops a novel method for learning mechatronics using remote monitoring and control, based on a programmable logic controller (PLC) and WebAccess. A mechatronics module, a Web-CAM and a PLC were integrated with WebAccess software to organize a remote laboratory. The proposed system enables users to access the Internet for remote monitoring and control of the mechatronics module via a web browser, thereby enhancing work flexibility by enabling personnel to control mechatronics equipment from a remote location. Mechatronics control and long-distance monitoring were realized by establishing communication between the PLC and WebAccess. Analytical results indicate that the proposed system is feasible. The suitability of this system is demonstrated in the Department of Industrial Education and Technology at National Changhua University of Education, Taiwan. Preliminary evaluation of the system was encouraging and has shown that it has achieved success in helping students understand concepts and master remote monitoring and control techniques.

  16. JASPAR 2018: update of the open-access database of transcription factor binding profiles and its web framework.

    Science.gov (United States)

    Khan, Aziz; Fornes, Oriol; Stigliani, Arnaud; Gheorghe, Marius; Castro-Mondragon, Jaime A; van der Lee, Robin; Bessy, Adrien; Chèneby, Jeanne; Kulkarni, Shubhada R; Tan, Ge; Baranasic, Damir; Arenillas, David J; Sandelin, Albin; Vandepoele, Klaas; Lenhard, Boris; Ballester, Benoît; Wasserman, Wyeth W; Parcy, François; Mathelier, Anthony

    2018-01-04

    JASPAR (http://jaspar.genereg.net) is an open-access database of curated, non-redundant transcription factor (TF)-binding profiles stored as position frequency matrices (PFMs) and TF flexible models (TFFMs) for TFs across multiple species in six taxonomic groups. In the 2018 release of JASPAR, the CORE collection has been expanded with 322 new PFMs (60 for vertebrates and 262 for plants) and 33 PFMs were updated (24 for vertebrates, 8 for plants and 1 for insects). These new profiles represent a 30% expansion compared to the 2016 release. In addition, we have introduced 316 TFFMs (95 for vertebrates, 218 for plants and 3 for insects). This release incorporates clusters of similar PFMs in each taxon and each TF class per taxon. The JASPAR 2018 CORE vertebrate collection of PFMs was used to predict TF-binding sites in the human genome. The predictions are made available to the scientific community through a UCSC Genome Browser track data hub. Finally, this update comes with a new web framework with an interactive and responsive user-interface, along with new features. All the underlying data can be retrieved programmatically using a RESTful API and through the JASPAR 2018 R/Bioconductor package. © The Author(s) 2017. Published by Oxford University Press on behalf of Nucleic Acids Research.
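
    A typical first step when consuming JASPAR PFMs programmatically is converting counts into a log-odds position weight matrix. A sketch with a tiny made-up matrix (the pseudocount scheme shown is one common choice, not necessarily JASPAR's canonical one):

```python
import math

def pfm_to_pwm(pfm, background=0.25, pseudocount=0.8):
    """Turn a position frequency matrix (base -> per-column counts)
    into a log2-odds position weight matrix against a uniform
    background, using a pseudocount to avoid log(0)."""
    ncols = len(pfm["A"])
    pwm = {}
    for base, counts in pfm.items():
        col_scores = []
        for j in range(ncols):
            total = sum(pfm[b][j] for b in pfm) + pseudocount
            p = (counts[j] + pseudocount * background) / total
            col_scores.append(math.log2(p / background))
        pwm[base] = col_scores
    return pwm

# A tiny hypothetical 2-column matrix: column 0 all A, column 1 all C.
pfm = {"A": [10, 0], "C": [0, 10], "G": [0, 0], "T": [0, 0]}
pwm = pfm_to_pwm(pfm)
print(pwm["A"][0] > 0, pwm["G"][0] < 0)
```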

  17. Apollo: giving application developers a single point of access to public health models using structured vocabularies and Web services.

    Science.gov (United States)

    Wagner, Michael M; Levander, John D; Brown, Shawn; Hogan, William R; Millett, Nicholas; Hanna, Josh

    2013-01-01

    This paper describes the Apollo Web Services and Apollo-SV, its related ontology. The Apollo Web Services give an end-user application a single point of access to multiple epidemic simulators. An end user can specify an analytic problem, which we define as a configuration and a query of results, exactly once and submit it to multiple epidemic simulators. The end user represents the analytic problem using a standard syntax and vocabulary, not the native languages of the simulators. We have demonstrated the feasibility of this design by implementing a set of Apollo services that provide access to two epidemic simulators and two visualizer services.

  18. The bovine QTL viewer: a web accessible database of bovine Quantitative Trait Loci

    Directory of Open Access Journals (Sweden)

    Xavier Suresh R

    2006-06-01

    Full Text Available Abstract Background Many important agricultural traits in cattle, such as weight gain, milk fat content and intramuscular fat (marbling), are quantitative traits. Most of the information on these traits has not previously been integrated into a genomic context, and without such integration the application of these data to agricultural enterprises will remain slow and inefficient. Our goal was to populate a genomic database with data mined from the bovine quantitative trait literature and to make these data available in a genomic context to researchers via a user-friendly query interface. Description The QTL (Quantitative Trait Locus) data and related information for bovine QTL are gathered from published work and from existing databases. An integrated database schema was designed and the database (MySQL) populated with the gathered data. The bovine QTL Viewer was developed for the integration of QTL data available for cattle. The tool consists of an integrated database of bovine QTL and the QTL viewer to display QTL and their chromosomal positions. Conclusion We present a web-accessible, integrated database of bovine (dairy and beef cattle) QTL for use by animal geneticists. The viewer and database are of general applicability to any livestock species for which there are public QTL data. The viewer can be accessed at http://bovineqtl.tamu.edu.
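
    The core of such a database is a table keyed on trait and chromosomal position. A minimal, hypothetical sketch in SQLite (the real MySQL schema is richer; all table and column names here are ours):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE qtl (
    id INTEGER PRIMARY KEY,
    trait TEXT NOT NULL,          -- e.g. 'milk fat content'
    chromosome TEXT NOT NULL,     -- bovine chromosome, e.g. 'BTA14'
    start_cm REAL,                -- map position in centimorgans
    end_cm REAL)""")
con.executemany(
    "INSERT INTO qtl (trait, chromosome, start_cm, end_cm) VALUES (?, ?, ?, ?)",
    [("milk fat content", "BTA14", 0.0, 5.0),
     ("marbling", "BTA2", 55.0, 70.0)])

# The kind of lookup a viewer performs when drawing one chromosome.
rows = con.execute(
    "SELECT trait, start_cm, end_cm FROM qtl WHERE chromosome = ?",
    ("BTA14",)).fetchall()
print(rows)  # -> [('milk fat content', 0.0, 5.0)]
```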

  19. A dialogue-based web application enhances personalized access to healthcare professionals – an intervention study

    Directory of Open Access Journals (Sweden)

    Bjoernes Charlotte D

    2012-09-01

    Full Text Available Abstract Background In today’s short-stay hospital settings, contact time for patients is reduced. However, it seems all the more important for patients that healthcare professionals are easy to contact during the whole course of treatment, and that they have the opportunity to exchange information as a basis for obtaining individualized information and support. Therefore, the aim was to explore the ability of a dialogue-based application to contribute to the accessibility of healthcare professionals and the exchangeability of information. Method An application for online written and asynchronous contacts was developed, implemented in clinical practice, and evaluated. The qualitative effect of the online contact was explored using a Web-based survey comprising open-ended questions. Results Patients valued the online contacts and experienced feelings of partnership in dialogue, in a flexible and calm environment, which supported their ability to be active partners and their feelings of freedom and security. Conclusion The online asynchronous written environment can contribute to accessibility and exchangeability, and add new possibilities for dialogues from which patients can benefit. The individualized information obtained via online contact empowers patients. Internet-based contacts are a way to differentiate and expand the possibilities for contact outside the few scheduled face-to-face hospital contacts.

  20. A Secure Web Application Providing Public Access to High-Performance Data Intensive Scientific Resources - ScalaBLAST Web Application

    International Nuclear Information System (INIS)

    Curtis, Darren S.; Peterson, Elena S.; Oehmen, Chris S.

    2008-01-01

    This work presents the ScalaBLAST Web Application (SWA), a web-based application implemented using the PHP scripting language, MySQL DBMS, and Apache web server under a GNU/Linux platform. SWA is an application built as part of the Data Intensive Computer for Complex Biological Systems (DICCBS) project at the Pacific Northwest National Laboratory (PNNL). SWA delivers accelerated throughput of bioinformatics analysis via high-performance computing through a convenient, easy-to-use web interface. This approach greatly enhances emerging fields of study in biology such as ontology-based homology, and multiple whole genome comparisons which, in the absence of a tool like SWA, require a heroic effort to overcome the computational bottleneck associated with genome analysis. The current version of SWA includes a user account management system, a web-based user interface, and a backend process that generates the files necessary for the Internet scientific community to submit a ScalaBLAST parallel processing job on a dedicated cluster

  1. A web accessible scientific workflow system for vadose zone performance monitoring: design and implementation examples

    Science.gov (United States)

    Mattson, E.; Versteeg, R.; Ankeny, M.; Stormberg, G.

    2005-12-01

    Long term performance monitoring has been identified by DOE, DOD and EPA as one of the most challenging and costly elements of contaminated site remedial efforts. Such monitoring should provide timely and actionable information relevant to a multitude of stakeholder needs. This information should be obtained in a manner which is auditable, cost effective and transparent. Over the last several years INL staff has designed and implemented a web accessible scientific workflow system for environmental monitoring. This workflow environment integrates distributed, automated data acquisition from diverse sensors (geophysical, geochemical and hydrological) with server-side data management and information visualization through flexible browser-based data access tools. Component technologies include a rich browser-based client (using dynamic JavaScript and HTML/CSS) for data selection, a back-end server which uses PHP for data processing, user management, and result delivery, and third-party applications which are invoked by the back-end using web services. This system has been implemented and is operational for several sites, including the Ruby Gulch Waste Rock Repository (a capped mine waste rock dump on the Gilt Edge Mine Superfund Site), the INL Vadose Zone Research Park and an alternative cover landfill. Implementations for other vadose zone sites are currently in progress. These systems allow for autonomous performance monitoring through automated data analysis and report generation. This performance monitoring has allowed users to obtain insights into system dynamics, regulatory compliance and residence times of water. Our system uses modular components for data selection and graphing and WSDL-compliant web services for external functions such as statistical analyses and model invocations. Thus, implementing this system for novel sites and extending functionality (e.g. adding novel models) is relatively straightforward. As system access requires a standard web browser

  2. Kaptive Web: User-Friendly Capsule and Lipopolysaccharide Serotype Prediction for Klebsiella Genomes.

    Science.gov (United States)

    Wick, Ryan R; Heinz, Eva; Holt, Kathryn E; Wyres, Kelly L

    2018-06-01

    As whole-genome sequencing becomes an established component of the microbiologist's toolbox, it is imperative that researchers, clinical microbiologists, and public health professionals have access to genomic analysis tools for the rapid extraction of epidemiologically and clinically relevant information. For Gram-negative hospital pathogens such as Klebsiella pneumoniae, initial efforts have focused on the detection and surveillance of antimicrobial resistance genes and clones. However, with the resurgence of interest in alternative infection control strategies targeting Klebsiella surface polysaccharides, the ability to extract information about these antigens is increasingly important. Here we present Kaptive Web, an online tool for the rapid typing of Klebsiella K and O loci, which encode the polysaccharide capsule and lipopolysaccharide O antigen, respectively. Kaptive Web enables users to upload and analyze genome assemblies in a web browser. The results can be downloaded in tabular format or explored in detail via the graphical interface, making it accessible to users at all levels of computational expertise. We demonstrate Kaptive Web's utility by analyzing >500 K. pneumoniae genomes. We identify extensive K and O locus diversity among 201 genomes belonging to the carbapenemase-associated clonal group 258 (25 K and 6 O loci). The characterization of a further 309 genomes indicated that such diversity is common among the multidrug-resistant clones and that these loci represent useful epidemiological markers for strain subtyping. These findings reinforce the need for rapid, reliable, and accessible typing methods such as Kaptive Web. Kaptive Web is available for use at http://kaptive.holtlab.net/, and the source code is available at https://github.com/kelwyres/Kaptive-Web. Copyright © 2018 Wick et al.

  3. Internet access, awareness and utilisation of web based evidence: a survey of ANZ and Singaporean radiation oncology registrars in 2003

    International Nuclear Information System (INIS)

    Wong, K.; Veness, M.

    2003-01-01

    The past decade has seen an 'explosion' in electronically archived evidence available on the Internet. Pre-appraised web-based evidence, such as that available in the Cochrane Library and, more recently, the Cancer Library, is now easily accessible to both clinicians and patients. A postal survey was recently sent to all Radiation Oncology registrars in Australia, New Zealand and Singapore. The aim of the survey was to ascertain previous training in literature searching and critical appraisal, the extent of Internet access and use of web-based evidence, and awareness of databases including the Cochrane Library. Sixty-six out of ninety registrars responded (73% response rate). Fifty-five percent of respondents had previously undertaken some form of training related to literature searching or critical appraisal. The majority (68%) felt confident in performing a literature search, although 80% of respondents indicated interest in obtaining further training. The majority (68%) reported accessing web-based evidence for literature searching in the previous week, and 92% in the previous month. Nearly all respondents (89%) accessed web-based evidence at work. Most (94%) were aware of the Cochrane Library, with 48% of respondents having used this database. Sixty-eight percent were aware of the Cancer Library. In 2000, a similar survey revealed that only 68% of registrars were aware of the Cochrane Library and only 30% had used it. These findings reveal almost universal access to the Internet and use of web-based evidence amongst Radiation Oncology registrars. There has been a marked increase in awareness and use of the Cochrane Library, with the majority also aware of the recently introduced Cancer Library

  4. J-TEXT WebScope: An efficient data access and visualization system for long pulse fusion experiment

    Energy Technology Data Exchange (ETDEWEB)

    Zheng, Wei, E-mail: zhenghaku@gmail.com [State Key Laboratory of Advanced Electromagnetic Engineering and Technology in Huazhong University of Science and Technology, Wuhan 430074 (China); School of Electrical and Electronic Engineering in Huazhong University of Science and Technology, Wuhan 430074 (China); Wan, Kuanhong; Chen, Zhi; Hu, Feiran; Liu, Qiang [State Key Laboratory of Advanced Electromagnetic Engineering and Technology in Huazhong University of Science and Technology, Wuhan 430074 (China); School of Electrical and Electronic Engineering in Huazhong University of Science and Technology, Wuhan 430074 (China)

    2016-11-15

    Highlights: • No matter how large the data is, the response time is always less than 500 milliseconds. • It is intelligent and just gives you the data you want. • It can be accessed directly over the Internet without installing special client software if you already have a browser. • Adopt scale and segment technology to organize data. • To support a new database for the WebScope is quite easy. • With the configuration stored in user’s profile, you have your own portable WebScope. - Abstract: Fusion research is an international collaboration work. To enable researchers across the world to visualize and analyze the experiment data, a web based data access and visualization tool is quite important [1]. Now, a new WebScope based on RIA (Rich Internet Application) is designed and implemented to meet these requirements. On the browser side, a fluent and intuitive interface is provided for researchers at J-TEXT laboratory and collaborators from all over the world to view experiment data and related metadata. The fusion experiments will feature long pulse and high sampling rate in the future. The data access and visualization system in this work has adopted segment and scale concept. Large data samples are re-sampled in different scales and then split into segments for instant response. It allows users to view extremely large data on the web browser efficiently, without worrying about the limitation on the size of the data. The HTML5 and JavaScript based web front-end can provide intuitive and fluent user experience. On the server side, a RESTful (Representational State Transfer) web API, which is based on ASP.NET MVC (Model View Controller), allows users to access the data and its metadata through HTTP (HyperText Transfer Protocol). An interface to the database has been designed to decouple the data access and visualization system from the data storage. It can be applied upon any data storage system like MDSplus or JTEXTDB, and this system is very easy to
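
    The scale-and-segment idea can be illustrated with min/max down-sampling: each segment of the raw signal is reduced to its extremes, so a plot keeps its envelope while the payload stays bounded. A toy sketch of the concept (our illustration, not J-TEXT's implementation):

```python
def minmax_downsample(samples, bins):
    """Re-sample a long signal into `bins` (min, max) pairs, so the
    browser never has to receive or draw more points than fit a plot,
    regardless of how large the original shot data is."""
    n = len(samples)
    out = []
    for b in range(bins):
        lo, hi = b * n // bins, (b + 1) * n // bins
        chunk = samples[lo:hi]
        out.append((min(chunk), max(chunk)))
    return out

signal = list(range(1000))  # stand-in for a long-pulse shot signal
print(minmax_downsample(signal, 4))
# -> [(0, 249), (250, 499), (500, 749), (750, 999)]
```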

  5. J-TEXT WebScope: An efficient data access and visualization system for long pulse fusion experiment

    International Nuclear Information System (INIS)

    Zheng, Wei; Wan, Kuanhong; Chen, Zhi; Hu, Feiran; Liu, Qiang

    2016-01-01

    Highlights: • No matter how large the data is, the response time is always less than 500 milliseconds. • It is intelligent and gives you only the data you want. • It can be accessed directly over the Internet without installing special client software if you already have a browser. • Adopts scale-and-segment technology to organize data. • Supporting a new database in WebScope is quite easy. • With the configuration stored in the user’s profile, you have your own portable WebScope. - Abstract: Fusion research is an international collaborative effort. To enable researchers across the world to visualize and analyze experiment data, a web-based data access and visualization tool is quite important [1]. A new WebScope based on RIA (Rich Internet Application) technology has been designed and implemented to meet these requirements. On the browser side, a fluent and intuitive interface lets researchers at the J-TEXT laboratory and collaborators from all over the world view experiment data and related metadata. Future fusion experiments will feature long pulses and high sampling rates. The data access and visualization system in this work adopts a segment-and-scale concept: large data samples are re-sampled at different scales and then split into segments for instant response. This allows users to view extremely large data sets in the web browser efficiently, without worrying about limitations on data size. The HTML5- and JavaScript-based web front-end provides an intuitive and fluent user experience. On the server side, a RESTful (Representational State Transfer) web API, based on ASP.NET MVC (Model View Controller), allows users to access the data and its metadata through HTTP (HyperText Transfer Protocol). An interface to the database has been designed to decouple the data access and visualization system from the data storage. It can be applied on top of any data storage system, such as MDSplus or JTEXTDB, and this system is very easy to
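
The scale-and-segment organization described in this record can be sketched in a few lines; the decimation factors and segment size below are illustrative assumptions, not J-TEXT WebScope's actual parameters.

```python
# Sketch of the scale-and-segment idea: a long signal is re-sampled at
# coarser scales (here by simple decimation) and each scale is split into
# fixed-size segments, so a viewer fetches only the segments it needs.
# Scale factors and segment size are illustrative assumptions.

def build_scales(samples, factors=(1, 10, 100)):
    """Re-sample the signal at several decimation factors."""
    return {f: samples[::f] for f in factors}

def split_segments(samples, segment_size=1000):
    """Split one scale into fixed-size segments for on-demand fetching."""
    return [samples[i:i + segment_size]
            for i in range(0, len(samples), segment_size)]

signal = list(range(100000))           # stand-in for one long-pulse channel
scales = build_scales(signal)
overview = split_segments(scales[100])  # coarsest scale: one small segment
detail = split_segments(scales[1])      # full resolution: many segments
```

A zoomed-out plot would request only `overview`; zooming in switches to finer scales and fetches just the segments inside the visible window, which is how the response time can stay bounded regardless of total data size.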

  6. Web search queries can predict stock market volumes.

    Science.gov (United States)

    Bordino, Ilaria; Battiston, Stefano; Caldarelli, Guido; Cristelli, Matthieu; Ukkonen, Antti; Weber, Ingmar

    2012-01-01

    We live in a computerized and networked society where many of our actions leave a digital trace and affect other people's actions. This has led to the emergence of a new data-driven research field: mathematical methods of computer science, statistical physics and sociometry provide insights on a wide range of disciplines ranging from social science to human mobility. A recent important discovery is that search engine traffic (i.e., the number of requests submitted by users to search engines on the www) can be used to track and, in some cases, to anticipate the dynamics of social phenomena. Successful examples include unemployment levels, car and home sales, and epidemics spreading. A few recent works have applied this approach to stock prices and market sentiment. However, it remains unclear if trends in financial markets can be anticipated by the collective wisdom of on-line users on the web. Here we show that daily trading volumes of stocks traded in NASDAQ-100 are correlated with daily volumes of queries related to the same stocks. In particular, query volumes anticipate in many cases peaks of trading by one day or more. Our analysis is carried out on a unique dataset of queries, submitted to an important web search engine, which enables us to also investigate user behavior. We show that the query volume dynamics emerges from the collective but seemingly uncoordinated activity of many users. These findings contribute to the debate on the identification of early warnings of financial systemic risk, based on the activity of users of the www.
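
The reported one-day lead of query volume over trading volume is, in essence, a lagged cross-correlation; a minimal sketch on synthetic data (not the paper's query dataset) might look like this:

```python
# Minimal lagged-correlation sketch (synthetic data, not the paper's dataset):
# query volume on day t is compared against trading volume on day t + k, so a
# correlation peak at lag k = 1 would mean queries anticipate trading by a day.

from math import sqrt

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / sqrt(vx * vy)

def lagged_corr(queries, trades, lag):
    """Correlate queries[t] with trades[t + lag]."""
    if lag:
        return pearson(queries[:-lag], trades[lag:])
    return pearson(queries, trades)

# Synthetic example: trading volume echoes query volume one day later.
queries = [3, 8, 2, 9, 4, 7, 1, 6, 5, 8]
trades = [0] + queries[:-1]            # shifted copy -> perfect lag-1 echo
best = max(range(3), key=lambda k: lagged_corr(queries, trades, k))
```

On real series one would detrend first and test significance; the point here is only the shape of the lead-lag computation.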

  7. Web search queries can predict stock market volumes.

    Directory of Open Access Journals (Sweden)

    Ilaria Bordino

    Full Text Available We live in a computerized and networked society where many of our actions leave a digital trace and affect other people's actions. This has led to the emergence of a new data-driven research field: mathematical methods of computer science, statistical physics and sociometry provide insights on a wide range of disciplines ranging from social science to human mobility. A recent important discovery is that search engine traffic (i.e., the number of requests submitted by users to search engines on the www) can be used to track and, in some cases, to anticipate the dynamics of social phenomena. Successful examples include unemployment levels, car and home sales, and epidemics spreading. A few recent works have applied this approach to stock prices and market sentiment. However, it remains unclear if trends in financial markets can be anticipated by the collective wisdom of on-line users on the web. Here we show that daily trading volumes of stocks traded in NASDAQ-100 are correlated with daily volumes of queries related to the same stocks. In particular, query volumes anticipate in many cases peaks of trading by one day or more. Our analysis is carried out on a unique dataset of queries, submitted to an important web search engine, which enables us to also investigate user behavior. We show that the query volume dynamics emerges from the collective but seemingly uncoordinated activity of many users. These findings contribute to the debate on the identification of early warnings of financial systemic risk, based on the activity of users of the www.

  8. 76 FR 71914 - Nondiscrimination on the Basis of Disability in Air Travel: Accessibility of Web Sites and...

    Science.gov (United States)

    2011-11-21

    ... Disability in Air Travel: Accessibility of Web Sites and Automated Kiosks at U.S. Airports AGENCY: Office of... respond to the SNPRM. The Air Transport Association, the International Air Transport Association, the Air Carrier Association of America, the Regional Airline Association, and the Association of Asia Pacific...

  9. RS-WebPredictor

    DEFF Research Database (Denmark)

    Zaretzki, J.; Bergeron, C.; Huang, T.-W.

    2013-01-01

    Regioselectivity-WebPredictor (RS-WebPredictor) is a server that predicts isozyme-specific cytochrome P450 (CYP)-mediated sites of metabolism (SOMs) on drug-like molecules. Predictions may be made for the promiscuous 2C9, 2D6 and 3A4 CYP isozymes, as well as CYPs 1A2, 2A6, 2B6, 2C8, 2C19 and 2E1. RS-WebPredictor is the first freely accessible server that predicts the regioselectivity of the last six isozymes. Server execution time is fast, taking on average 2 s to encode a submitted molecule and 1 s to apply a given model, allowing for high-throughput use in lead optimization projects. Availability: RS-WebPredictor is accessible for free use at http://reccr.chem.rpi.edu/Software/RS-WebPredictor.

  10. Empirical comparison of web-based antimicrobial peptide prediction tools.

    Science.gov (United States)

    Gabere, Musa Nur; Noble, William Stafford

    2017-07-01

    Antimicrobial peptides (AMPs) are innate immune molecules that exhibit activities against a range of microbes, including bacteria, fungi, viruses and protozoa. Recent increases in microbial resistance against current drugs have led to a concomitant increase in the need for novel antimicrobial agents. Over the last decade, a number of AMP prediction tools have been designed and made freely available online. These AMP prediction tools show potential to discriminate AMPs from non-AMPs, but the relative quality of the predictions produced by the various tools is difficult to quantify. We compiled two sets of AMP and non-AMP peptides, separated into three categories: antimicrobial, antibacterial and bacteriocins. Using these benchmark data sets, we carried out a systematic evaluation of ten publicly available AMP prediction methods. Among the six general AMP prediction tools (ADAM, CAMPR3(RF), CAMPR3(SVM), MLAMP, DBAASP and MLAMP), we find that CAMPR3(RF) provides a statistically significant improvement in performance, as measured by the area under the receiver operating characteristic (ROC) curve, relative to the other five methods. Surprisingly, for antibacterial prediction, the original AntiBP method significantly outperforms its successor, AntiBP2, based on one benchmark dataset. The two bacteriocin prediction tools, BAGEL3 and BACTIBASE, both provide very good performance, and BAGEL3 outperforms its predecessor, BACTIBASE, on the larger of the two benchmarks. Contact: gaberemu@ngha.med.sa or william-noble@uw.edu. Supplementary data are available at Bioinformatics online. © The Author 2017. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com
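
The benchmark above ranks tools by the area under the ROC curve; a self-contained AUC computation on invented scores illustrates the measure:

```python
# AUC as the probability that a randomly chosen positive (AMP) scores higher
# than a randomly chosen negative (non-AMP), with ties counted as half.
# The scores below are made up for illustration, not benchmark data.

def roc_auc(scores, labels):
    pos = [s for s, l in zip(scores, labels) if l == 1]
    neg = [s for s, l in zip(scores, labels) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

labels = [1, 1, 1, 0, 0, 0]
tool_a = [0.9, 0.8, 0.4, 0.7, 0.3, 0.2]   # one positive is outranked
tool_b = [0.9, 0.8, 0.7, 0.4, 0.3, 0.2]   # perfect separation
auc_a, auc_b = roc_auc(tool_a, labels), roc_auc(tool_b, labels)
```

Comparing `auc_a` with `auc_b` is the core of the paper's evaluation; establishing that a difference is statistically significant additionally needs a test such as DeLong's.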

  11. miRNAFold: a web server for fast miRNA precursor prediction in genomes.

    Science.gov (United States)

    Tav, Christophe; Tempel, Sébastien; Poligny, Laurent; Tahi, Fariza

    2016-07-08

    Computational methods are required for prediction of non-coding RNAs (ncRNAs), which are involved in many biological processes, especially at the post-transcriptional level. Among these ncRNAs, miRNAs have been largely studied, and biologists need efficient and fast tools for their identification. In particular, ab initio methods are usually required when predicting novel miRNAs. Here we present a web server dedicated to large-scale identification of miRNA precursors in genomes. It is based on an algorithm called miRNAFold that predicts miRNA hairpin structures quickly and with high sensitivity. miRNAFold is implemented as a web server with an intuitive and user-friendly interface, as well as a standalone version. The web server is freely available at: http://EvryRNA.ibisc.univ-evry.fr/miRNAFold. © The Author(s) 2016. Published by Oxford University Press on behalf of Nucleic Acids Research.
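
As a toy illustration of the hairpin structures miRNAFold searches for (this is not the miRNAFold algorithm), one can count the complementary base pairs formed when a sequence's ends are folded back toward the middle:

```python
# Toy hairpin check: pair the 5' and 3' ends toward the middle, counting
# Watson-Crick and G-U wobble pairs, and stop at the unpaired loop.
# Purely illustrative; real predictors score full secondary structures.

PAIRS = {("A", "U"), ("U", "A"), ("G", "C"), ("C", "G"), ("G", "U"), ("U", "G")}

def hairpin_stem(seq, min_loop=3):
    """Length of the contiguous stem closing a hairpin with a loop >= min_loop."""
    i, j, stem = 0, len(seq) - 1, 0
    while j - i > min_loop and (seq[i], seq[j]) in PAIRS:
        stem += 1
        i += 1
        j -= 1
    return stem

stem = hairpin_stem("GGGCAAAGCCC")   # GGGC pairs with GCCC around an AAA loop
```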

  12. Discursive Policy Webs in a Globalisation Era: A Discussion of Access to Professions and Trades for Immigrant Professionals in Ontario, Canada

    Science.gov (United States)

    Goldberg, Michelle P.

    2006-01-01

    This article explores the link between discourse and policy using a discursive web metaphor. It develops the notion of policy as a discursive web based on a post-positivist framework that recognises the way multiple discourses from multiple voices interact in a complex web of power relationships to influence reality. Using Ontario's Access to…

  13. Compact Web browsing profiles for click-through rate prediction

    DEFF Research Database (Denmark)

    Fruergaard, Bjarne Ørum; Hansen, Lars Kai

    2014-01-01

    In real time advertising we are interested in finding features that improve click-through rate prediction. One source of available information is the bipartite graph of websites previously engaged by identifiable users. In this work, we investigate three different decompositions of such a graph...

  14. Online Access to Weather Satellite Imagery Through the World Wide Web

    Science.gov (United States)

    Emery, W.; Baldwin, D.

    1998-01-01

    Both global area coverage (GAC) and high-resolution picture transmission (HRPT) data from the Advanced Very High Resolution Radiometer (AVHRR) are made available to Internet users through an online data access system. Older GOES-7 data are also available. Created as a "testbed" data system for NASA's future Earth Observing System Data and Information System (EOSDIS), this testbed provides an opportunity to test both the technical requirements of an online data system and the different ways in which the general user community would employ such a system. Initiated in December 1991, the basic data system experienced five major evolutionary changes in response to user requests and requirements. Features added with these changes were the addition of online browse, user subsetting, dynamic image processing/navigation, a stand-alone data storage system, and movement from an X-windows graphical user interface (GUI) to a World Wide Web (WWW) interface. Over its lifetime, the system has had as many as 2500 registered users. The system on the WWW has had over 2500 hits since October 1995. Many of these hits are by casual users that only take the GIF images directly from the interface screens and do not specifically order digital data. Still, there is a consistent stream of users ordering the navigated image data and related products (maps and so forth). We have recently added a real-time, seven-day, northwestern United States normalized difference vegetation index (NDVI) composite that has generated considerable interest. Index terms: data system, earth science, online access, satellite data.
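
The NDVI composite mentioned at the end of this record is derived per pixel from red and near-infrared reflectance; a minimal sketch with illustrative values:

```python
# NDVI = (NIR - Red) / (NIR + Red), ranging over [-1, 1]; dense vegetation
# pushes the index toward 1. Multi-day composites commonly keep the per-pixel
# maximum to suppress clouds. All reflectance values below are illustrative.

def ndvi(nir, red):
    return (nir - red) / (nir + red)

def max_value_composite(daily_ndvi_maps):
    """Per-pixel maximum over the compositing period (one row per day)."""
    return [max(vals) for vals in zip(*daily_ndvi_maps)]

day1 = [ndvi(0.5, 0.1), ndvi(0.2, 0.18)]   # clear pixel, hazy pixel
day2 = [ndvi(0.3, 0.2), ndvi(0.6, 0.1)]
weekly = max_value_composite([day1, day2])
```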

  15. Study of HTML Meta-Tags Utilization in Web-based Open-Access Journals

    Directory of Open Access Journals (Sweden)

    Pegah Pishva

    2007-04-01

    Full Text Available The present study investigates the extent of utilization of two meta tags – “keywords” and “descriptors” – in web-based open-access journals. A sample of 707 journals taken from DOAJ was analyzed for utilization of these meta tags. Findings demonstrated that these journals utilized the “keywords” and “descriptors” meta tags at rates of 33.1% and 29.9%, respectively. It was further demonstrated that, among the subject classifications, general journals had the highest and mathematics and statistics journals the lowest utilization of the “keywords” meta tag. Moreover, general journals and chemistry journals, with 55.6% and 15.4% utilization respectively, had the highest and the lowest “descriptors” meta-tag usage rates. Based on our findings, and when compared against other similar research findings, there has been no significant growth in the utilization of these meta tags.
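
A meta-tag audit like this study's can be automated with Python's standard-library HTML parser; the sample page below is invented, and note that the standard HTML attribute name is "description" rather than "descriptors":

```python
# Detect "keywords" and "description" meta tags with the stdlib html.parser.
# The sample page is invented for illustration.

from html.parser import HTMLParser

class MetaTagAuditor(HTMLParser):
    def __init__(self):
        super().__init__()
        self.found = {}

    def handle_starttag(self, tag, attrs):
        if tag == "meta":
            d = dict(attrs)
            name = (d.get("name") or "").lower()
            if name in ("keywords", "description"):
                self.found[name] = d.get("content", "")

page = """<html><head>
<meta name="keywords" content="open access, journals, metadata">
<meta name="description" content="A study of meta-tag usage.">
</head><body></body></html>"""

auditor = MetaTagAuditor()
auditor.feed(page)
```

Running such an auditor over each journal's front page and tallying which of `auditor.found` keys are present reproduces the kind of utilization percentages reported above.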

  16. Managing Large Scale Project Analysis Teams through a Web Accessible Database

    Science.gov (United States)

    O'Neil, Daniel A.

    2008-01-01

    Large scale space programs analyze thousands of requirements while mitigating safety, performance, schedule, and cost risks. These efforts involve a variety of roles with interdependent use cases and goals. For example, study managers and facilitators identify ground rules and assumptions for a collection of studies required for a program or project milestone. Task leaders derive product requirements from the ground rules and assumptions and describe activities to produce needed analytical products. Discipline specialists produce the specified products and load results into a file management system. Organizational and project managers provide the personnel and funds to conduct the tasks. Each role has responsibilities to establish information linkages and provide status reports to management. Projects conduct design and analysis cycles to refine designs to meet the requirements and implement risk mitigation plans. At the program level, integrated design and analysis cycle studies are conducted to eliminate every 'to-be-determined' and develop plans to mitigate every risk. At the agency level, strategic studies analyze different approaches to exploration architectures and campaigns. This paper describes a web-accessible database developed by NASA to coordinate and manage tasks at three organizational levels. Other topics in this paper cover integration technologies and techniques for process modeling and enterprise architectures.

  17. Prediction of toxicity and comparison of alternatives using WebTEST (Web-services Toxicity Estimation Software Tool)(Bled Slovenia)

    Science.gov (United States)

    A Java-based web service is being developed within the US EPA’s Chemistry Dashboard to provide real-time estimates of toxicity values and physical properties. WebTEST can generate toxicity predictions directly from a simple URL which includes the endpoint, QSAR method, and ...
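
A client for such a URL-keyed service only needs to assemble the request URL; the host, endpoint name, and method segment below are purely illustrative assumptions, since the record truncates the actual WebTEST URL format:

```python
# Assemble a request URL for a hypothetical URL-keyed prediction service.
# Base host, path layout, and parameter names are illustrative assumptions,
# not the documented WebTEST API.

from urllib.parse import quote, urlunsplit

def prediction_url(base, endpoint, method, smiles):
    """Build https://<base>/<endpoint>/<method>/<urlencoded SMILES>."""
    path = "/".join(["", endpoint, method, quote(smiles, safe="")])
    return urlunsplit(("https", base, path, "", ""))

url = prediction_url("example.org", "LC50", "consensus", "CCO")
```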

  18. An Open-Source Web-Based Tool for Resource-Agnostic Interactive Translation Prediction

    Directory of Open Access Journals (Sweden)

    Daniel Torregrosa

    2014-09-01

    Full Text Available We present a web-based open-source tool for interactive translation prediction (ITP) and describe its underlying architecture. ITP systems assist human translators by making context-based computer-generated suggestions as they type. Most of the ITP systems in the literature are strongly coupled with a statistical machine translation system that is conveniently adapted to provide the suggestions. Our system, however, follows a resource-agnostic approach and suggestions are obtained from any unmodified black-box bilingual resource. This paper reviews our ITP method and describes the architecture of Forecat, a web tool, partly based on the recent technology of web components, that eases the use of our ITP approach in any web application requiring this kind of translation assistance. We also evaluate the performance of our method when using an unmodified Moses-based statistical machine translation system as the bilingual resource.

  19. ESB-Based Sensor Web Integration for the Prediction of Electric Power Supply System Vulnerability

    Science.gov (United States)

    Stoimenov, Leonid; Bogdanovic, Milos; Bogdanovic-Dinic, Sanja

    2013-01-01

    Electric power supply companies increasingly rely on enterprise IT systems to provide them with a comprehensive view of the state of the distribution network. Within a utility-wide network, enterprise IT systems collect data from various metering devices. Such data can be effectively used for the prediction of power supply network vulnerability. The purpose of this paper is to present the Enterprise Service Bus (ESB)-based Sensor Web integration solution that we have developed with the purpose of enabling prediction of power supply network vulnerability, in terms of a prediction of defect probability for a particular network element. We will give an example of its usage and demonstrate our vulnerability prediction model on data collected from two different power supply companies. The proposed solution is an extension of the GinisSense Sensor Web-based architecture for collecting, processing, analyzing, decision making and alerting based on the data received from heterogeneous data sources. In this case, GinisSense has been upgraded to be capable of operating in an ESB environment and combine Sensor Web and GIS technologies to enable prediction of electric power supply system vulnerability. Aside from electrical values, the proposed solution gathers ambient values from additional sensors installed in the existing power supply network infrastructure. GinisSense aggregates gathered data according to an adapted Omnibus data fusion model and applies decision-making logic on the aggregated data. Detected vulnerabilities are visualized to end-users through means of a specialized Web GIS application. PMID:23955435

  20. ESB-Based Sensor Web Integration for the Prediction of Electric Power Supply System Vulnerability

    Directory of Open Access Journals (Sweden)

    Milos Bogdanovic

    2013-08-01

    Full Text Available Electric power supply companies increasingly rely on enterprise IT systems to provide them with a comprehensive view of the state of the distribution network. Within a utility-wide network, enterprise IT systems collect data from various metering devices. Such data can be effectively used for the prediction of power supply network vulnerability. The purpose of this paper is to present the Enterprise Service Bus (ESB)-based Sensor Web integration solution that we have developed with the purpose of enabling prediction of power supply network vulnerability, in terms of a prediction of defect probability for a particular network element. We will give an example of its usage and demonstrate our vulnerability prediction model on data collected from two different power supply companies. The proposed solution is an extension of the GinisSense Sensor Web-based architecture for collecting, processing, analyzing, decision making and alerting based on the data received from heterogeneous data sources. In this case, GinisSense has been upgraded to be capable of operating in an ESB environment and combine Sensor Web and GIS technologies to enable prediction of electric power supply system vulnerability. Aside from electrical values, the proposed solution gathers ambient values from additional sensors installed in the existing power supply network infrastructure. GinisSense aggregates gathered data according to an adapted Omnibus data fusion model and applies decision-making logic on the aggregated data. Detected vulnerabilities are visualized to end-users through means of a specialized Web GIS application.

  1. ESB-based Sensor Web integration for the prediction of electric power supply system vulnerability.

    Science.gov (United States)

    Stoimenov, Leonid; Bogdanovic, Milos; Bogdanovic-Dinic, Sanja

    2013-08-15

    Electric power supply companies increasingly rely on enterprise IT systems to provide them with a comprehensive view of the state of the distribution network. Within a utility-wide network, enterprise IT systems collect data from various metering devices. Such data can be effectively used for the prediction of power supply network vulnerability. The purpose of this paper is to present the Enterprise Service Bus (ESB)-based Sensor Web integration solution that we have developed with the purpose of enabling prediction of power supply network vulnerability, in terms of a prediction of defect probability for a particular network element. We will give an example of its usage and demonstrate our vulnerability prediction model on data collected from two different power supply companies. The proposed solution is an extension of the GinisSense Sensor Web-based architecture for collecting, processing, analyzing, decision making and alerting based on the data received from heterogeneous data sources. In this case, GinisSense has been upgraded to be capable of operating in an ESB environment and combine Sensor Web and GIS technologies to enable prediction of electric power supply system vulnerability. Aside from electrical values, the proposed solution gathers ambient values from additional sensors installed in the existing power supply network infrastructure. GinisSense aggregates gathered data according to an adapted Omnibus data fusion model and applies decision-making logic on the aggregated data. Detected vulnerabilities are visualized to end-users through means of a specialized Web GIS application.

  2. Analysing Twitter and web queries for flu trend prediction.

    Science.gov (United States)

    Santos, José Carlos; Matos, Sérgio

    2014-05-07

    Social media platforms encourage people to share diverse aspects of their daily life. Among these, shared health-related information might be used to infer health status and incidence rates for specific conditions or symptoms. In this work, we present an infodemiology study that evaluates the use of Twitter messages and search engine query logs to estimate and predict the incidence rate of influenza-like illness in Portugal. Based on a manually classified dataset of 2704 tweets from Portugal, we selected a set of 650 textual features to train a Naïve Bayes classifier to identify tweets mentioning flu or flu-like illness or symptoms. We obtained a precision of 0.78 and an F-measure of 0.83, based on cross validation over the complete annotated set. Furthermore, we trained a multiple linear regression model to estimate the health-monitoring data from the Influenzanet project, using as predictors the relative frequencies obtained from the tweet classification results and from query logs, and achieved a correlation ratio of 0.89 (p…). Studies of user-generated content have mostly focused on the English language. Our results further validate those studies and show that by changing the initial steps of data preprocessing and feature extraction and selection, the proposed approaches can be adapted to other languages. Additionally, we investigated whether the predictive model created can be applied to data from the subsequent flu season. In this case, although the prediction result was good, an initial phase to adapt the regression model could be necessary to achieve more robust results.
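
The second stage above is a multiple linear regression over tweet and query frequencies; a sketch with synthetic weekly data and a small ordinary-least-squares solver:

```python
# OLS for incidence ~ b0 + b1*tweet_freq + b2*query_freq, solved via the
# normal equations with Gauss-Jordan elimination. The weekly frequencies
# below are synthetic, not Influenzanet or Twitter data.

def solve(A, b):
    """Solve A x = b by Gauss-Jordan elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(n):
            if r != col:
                f = M[r][col] / M[col][col]
                M[r] = [a - f * c for a, c in zip(M[r], M[col])]
    return [M[i][n] / M[i][i] for i in range(n)]

def ols(rows, y):
    X = [[1.0] + list(r) for r in rows]       # prepend intercept column
    k = len(X[0])
    XtX = [[sum(X[i][a] * X[i][b] for i in range(len(X))) for b in range(k)]
           for a in range(k)]
    Xty = [sum(X[i][a] * y[i] for i in range(len(X))) for a in range(k)]
    return solve(XtX, Xty)

# Synthetic weekly data generated from incidence = 2 + 3*tweets + 5*queries.
tweets = [0.1, 0.4, 0.2, 0.8, 0.5, 0.3]
queries = [0.2, 0.1, 0.7, 0.3, 0.9, 0.4]
incidence = [2 + 3 * t + 5 * q for t, q in zip(tweets, queries)]
b0, b1, b2 = ols(list(zip(tweets, queries)), incidence)
```

Because the synthetic incidence is exactly linear in the two predictors, the fitted coefficients recover the generating values; with real, noisy data one would also report the correlation of fitted versus observed incidence, as the paper does.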

  3. PDTD: a web-accessible protein database for drug target identification

    Directory of Open Access Journals (Sweden)

    Gao Zhenting

    2008-02-01

    Full Text Available Abstract Background Target identification is important for modern drug discovery. With the advances in the development of molecular docking, potential binding proteins may be discovered by docking a small molecule to a repository of proteins with three-dimensional (3D) structures. To complete this task, a reverse docking program and a drug target database with 3D structures are necessary. To this end, we have developed a web server tool, TarFisDock (Target Fishing Docking, http://www.dddc.ac.cn/tarfisdock), which has been used widely by others. Recently, we have constructed a protein target database, the Potential Drug Target Database (PDTD), and have integrated PDTD with TarFisDock. This combination aims to assist target identification and validation. Description PDTD is a web-accessible protein database for in silico target identification. It currently contains >1100 protein entries with 3D structures presented in the Protein Data Bank. The data are extracted from the literature and several online databases such as TTD, DrugBank and Thomson Pharma. The database covers diverse information on >830 known or potential drug targets, including protein and active-site structures in both PDB and mol2 formats, related diseases, biological functions, and associated regulating (signaling) pathways. Each target is categorized by both nosology and biochemical function. PDTD supports keyword search functions, such as PDB ID, target name, and disease name. Data sets generated by PDTD can be viewed with molecular visualization plug-ins and can also be downloaded freely. Remarkably, PDTD is specially designed for target identification. In conjunction with TarFisDock, PDTD can be used to identify binding proteins for small molecules. The results can be downloaded in the form of a mol2 file with the binding pose of the probe compound and a list of potential binding targets ranked by score. Conclusion PDTD serves as a comprehensive and

  4. Web of Objects Based Ambient Assisted Living Framework for Emergency Psychiatric State Prediction

    Science.gov (United States)

    Alam, Md Golam Rabiul; Abedin, Sarder Fakhrul; Al Ameen, Moshaddique; Hong, Choong Seon

    2016-01-01

    Ambient assisted living can facilitate optimum health and wellness by aiding physical, mental and social well-being. In this paper, patients’ psychiatric symptoms are collected through lightweight biosensors and web-based psychiatric screening scales in a smart home environment and then analyzed through machine learning algorithms to provide ambient intelligence in a psychiatric emergency. The psychiatric states are modeled through a Hidden Markov Model (HMM), and the model parameters are estimated using a Viterbi path counting and scalable Stochastic Variational Inference (SVI)-based training algorithm. The most likely psychiatric state sequence for the corresponding observation sequence is determined, and an emergency psychiatric state is predicted through the proposed algorithm. Moreover, to enable personalized psychiatric emergency care, a web-of-objects-based service framework is proposed for a smart-home environment. In this framework, the biosensor observations and the psychiatric rating scales are objectified and virtualized in the web space. Then, the web objects of sensor observations and psychiatric rating scores are used to assess the dweller’s mental health status and to predict an emergency psychiatric state. The proposed psychiatric state prediction algorithm reported 83.03 percent prediction accuracy in an empirical performance study. PMID:27608023
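
Decoding the most likely psychiatric state sequence from an observation sequence is the classic Viterbi computation; a minimal two-state HMM sketch with invented states, observations, and probabilities (not the paper's trained model):

```python
# Minimal Viterbi decoding for a two-state HMM. All states, observations,
# and probabilities below are invented for illustration.

def viterbi(obs, states, start_p, trans_p, emit_p):
    """Return the most probable hidden state path for the observation list."""
    V = [{s: (start_p[s] * emit_p[s][obs[0]], [s]) for s in states}]
    for o in obs[1:]:
        V.append({})
        for s in states:
            prob, path = max(
                (V[-2][prev][0] * trans_p[prev][s] * emit_p[s][o],
                 V[-2][prev][1] + [s])
                for prev in states)
            V[-1][s] = (prob, path)
    return max(V[-1].values())[1]

states = ("stable", "emergency")
start_p = {"stable": 0.9, "emergency": 0.1}
trans_p = {"stable": {"stable": 0.8, "emergency": 0.2},
           "emergency": {"stable": 0.3, "emergency": 0.7}}
emit_p = {"stable": {"calm": 0.7, "agitated": 0.3},
          "emergency": {"calm": 0.1, "agitated": 0.9}}
path = viterbi(["calm", "agitated", "agitated"], states, start_p, trans_p, emit_p)
```

A production decoder would work in log-space to avoid underflow on long sequences; the structure of the recursion is unchanged.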

  5. MCTBI: a web server for predicting metal ion effects in RNA structures.

    Science.gov (United States)

    Sun, Li-Zhen; Zhang, Jing-Xiang; Chen, Shi-Jie

    2017-08-01

    Metal ions play critical roles in RNA structure and function. However, web servers and software packages for predicting ion effects in RNA structures are notably scarce. Furthermore, the existing web servers and software packages mainly neglect ion correlation and fluctuation effects, which are potentially important for RNAs. We here report a new web server, the MCTBI server (http://rna.physics.missouri.edu/MCTBI), for the prediction of ion effects in RNA structures. This server is based on the recently developed MCTBI, a model that can account for ion correlation and fluctuation effects in nucleic acid structures and can provide improved predictions of the effects of metal ions, especially multivalent ions such as Mg2+, as shown by extensive theory-experiment test results. The MCTBI web server predicts metal ion binding fractions, the most probable bound ion distribution, the electrostatic free energy of the system, and the free energy components. The results provide mechanistic insights into the role of metal ions in RNA structure formation and folding stability, which is important for understanding RNA functions and the rational design of RNA structures. © 2017 Sun et al.; Published by Cold Spring Harbor Laboratory Press for the RNA Society.

  6. Using Forecasting to Predict Long-Term Resource Utilization for Web Services

    Science.gov (United States)

    Yoas, Daniel W.

    2013-01-01

    Researchers have spent years understanding resource utilization to improve scheduling, load balancing, and system management through short-term prediction of resource utilization. Early research focused primarily on single operating systems; later, interest shifted to distributed systems and, finally, into web services. In each case researchers…

  7. Procedures can be learned on the Web: a randomized study of ultrasound-guided vascular access training.

    Science.gov (United States)

    Chenkin, Jordan; Lee, Shirley; Huynh, Thien; Bandiera, Glen

    2008-10-01

    Web-based learning has several potential advantages over lectures, such as anytime-anywhere access, rich multimedia, and nonlinear navigation. While known to be an effective method for learning facts, few studies have examined the effectiveness of Web-based formats for learning procedural skills. The authors sought to determine whether a Web-based tutorial is at least as effective as a didactic lecture for learning ultrasound-guided vascular access (UGVA). Participating staff emergency physicians (EPs) and junior emergency medicine (EM) residents with no UGVA experience completed a precourse test and were randomized to either a Web-based or a didactic group. The Web-based group was instructed to use an online tutorial and the didactic group attended a lecture. Participants then practiced on simulators and live models without any further instruction. Following a rest period, participants completed a four-station objective structured clinical examination (OSCE), a written examination, and a postcourse questionnaire. Examination results were compared using a noninferiority data analysis with a 10% margin of difference. Twenty-one residents and EPs participated in the study. There were no significant differences in mean OSCE scores (absolute difference = -2.8%; 95% confidence interval [CI] = -9.3% to 3.8%) or written test scores (absolute difference = -1.4%; 95% CI = -7.8% to 5.0%) between the Web group and the didactic group. Both groups demonstrated similar improvements in written test scores (26.1% vs. 25.8%; p = 0.95). Ninety-one percent (10/11) of the Web group and 80% (8/10) of the didactic group participants found the teaching format to be effective (p = 0.59). Our Web-based tutorial was at least as effective as a traditional didactic lecture for teaching the knowledge and skills essential for UGVA. Participants expressed high satisfaction with this teaching technology. 
Web-based teaching may be a useful alternative to didactic teaching for learning procedural skills.
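The noninferiority design described above can be sketched generically: compare the difference in mean scores against a fixed margin using a two-sample confidence interval. This is a minimal illustration of the approach, not the study's actual analysis code; the z-based interval and all score data below are made up.

```python
# Hedged sketch of a two-sample noninferiority comparison with a 10%
# margin, in the spirit of the study design. Illustrative only.
import math

def noninferiority(new_scores, ref_scores, margin=0.10, z=1.96):
    """Return (diff, lo, hi, noninferior) for mean(new) - mean(ref)."""
    n1, n2 = len(new_scores), len(ref_scores)
    m1 = sum(new_scores) / n1
    m2 = sum(ref_scores) / n2
    v1 = sum((x - m1) ** 2 for x in new_scores) / (n1 - 1)
    v2 = sum((x - m2) ** 2 for x in ref_scores) / (n2 - 1)
    se = math.sqrt(v1 / n1 + v2 / n2)      # standard error of the difference
    diff = m1 - m2
    lo, hi = diff - z * se, diff + z * se  # 95% CI for the difference
    # Noninferior when the entire CI lies above the -margin bound
    return diff, lo, hi, lo > -margin
```

A new format is declared noninferior only when even the pessimistic end of the interval beats the reference by no more than the margin.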

  8. PONGO: a web server for multiple predictions of all-alpha transmembrane proteins

    DEFF Research Database (Denmark)

    Amico, M.; Finelli, M.; Rossi, I.

    2006-01-01

The annotation efforts of the BIOSAPIENS European Network of Excellence have generated several distributed annotation systems (DAS) with the aim of integrating Bioinformatics resources and annotating metazoan genomes ( http://www.biosapiens.info/ ). In this context, the PONGO DAS server ( http... ) ... of the organism and more importantly with the same sequence profile for a given sequence when required. Here we present a new web server that incorporates the state-of-the-art topology predictors in a single framework, so that putative users can interactively compare and evaluate four predictions simultaneously for a given sequence. Together with the predicted topology, the server also displays a signal peptide prediction determined with SPEP. The PONGO web server is available at http://pongo.biocomp.unibo.it/pongo .

  9. AthMethPre: a web server for the prediction and query of mRNA m6A sites in Arabidopsis thaliana.

    Science.gov (United States)

    Xiang, Shunian; Yan, Zhangming; Liu, Ke; Zhang, Yaou; Sun, Zhirong

    2016-10-18

N6-Methyladenosine (m6A) is the most prevalent and abundant modification in mRNA and has been linked to many key biological processes. High-throughput experiments have generated m6A peaks across the transcriptome of A. thaliana, but the specific methylated sites were not assigned, which impedes the understanding of m6A functions in plants. Computational prediction of mRNA m6A sites therefore becomes urgently important. Here, we present a method to predict the m6A sites for A. thaliana mRNA sequence(s). To predict the m6A sites of an mRNA sequence, we employed a support vector machine to build a classifier using features of the positional flanking nucleotide sequence and the position-independent k-mer nucleotide spectrum. Our method achieved good performance and was applied to a web server to provide a service for the prediction of A. thaliana m6A sites. The server also provides a comprehensive database of predicted transcriptome-wide m6A sites and curated m6A-seq peaks from the literature for query and visualization. The AthMethPre web server is the first web server that provides a user-friendly tool for the prediction and query of A. thaliana mRNA m6A sites, and is freely accessible for public use.
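The position-independent k-mer nucleotide spectrum feature mentioned above can be sketched as follows. The function name and normalization are my own illustration of the general technique, not AthMethPre's implementation; the resulting vector would feed an SVM classifier alongside positional features.

```python
# Hedged sketch: frequency spectrum of all k-mers in an RNA window,
# a position-independent feature vector for sequence classification.
from itertools import product

def kmer_spectrum(seq, k=3):
    """Frequency vector over all 4**k RNA k-mers, in lexicographic order."""
    kmers = ["".join(p) for p in product("ACGU", repeat=k)]
    counts = {m: 0 for m in kmers}
    for i in range(len(seq) - k + 1):
        m = seq[i:i + k]
        if m in counts:          # skip windows with ambiguous bases
            counts[m] += 1
    total = max(1, len(seq) - k + 1)
    return [counts[m] / total for m in kmers]
```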

  10. Inequalities versus Utilization: Factors Predicting Access to Healthcare in Ghana

    Directory of Open Access Journals (Sweden)

    Dominic Buer Boyetey

    2016-08-01

Full Text Available Universal access to health care remains a significant source of inequality, especially among vulnerable groups. Challenges such as lack of insurance coverage, absence of certain types of care, as well as high individual financial care costs can be blamed for the growing inequality in the healthcare sector. The concern is especially worrying when people are denied care. It is in this light that the study set out to find which factors are likely to affect the chances of access to health care, using the 2014 Ghana Demographic and Health Survey data, and particularly to examine differences in access to healthcare across income groups, educational levels and residential locations. The study relied on logistic regression analysis to establish that people with some level of education have greater chances of accessing health care compared with those without education. Chances of access to health care in the sample were also high for people in the lower and upper quartiles of the household wealth index, with a local minimum for those in the middle. It also became evident that an increased number of people with NHIS or PHIS coverage, or a combination of cash with NHIS or PHIS, corresponds to an increased probability of gaining access to health care.
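A fitted logistic regression of the kind used above turns covariates (education, wealth quartile, insurance status) into an access probability via the inverse logit. A minimal sketch; the function name and all coefficient values are illustrative placeholders, not estimates from the Ghana DHS 2014 data.

```python
# Hedged sketch: inverse-logit prediction from a fitted logistic model.
import math

def access_probability(intercept, coefs, covariates):
    """P(access) = 1 / (1 + e^-z), where z is the linear predictor."""
    z = intercept + sum(b * x for b, x in zip(coefs, covariates))
    return 1.0 / (1.0 + math.exp(-z))
```

A positive coefficient (e.g. on an education indicator) raises the predicted probability; a negative one lowers it.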

  11. DelPhiPKa web server: predicting pKa of proteins, RNAs and DNAs.

    Science.gov (United States)

    Wang, Lin; Zhang, Min; Alexov, Emil

    2016-02-15

A new pKa prediction web server is released, which implements DelPhi Gaussian dielectric function to calculate electrostatic potentials generated by charges of biomolecules. Topology parameters are extended to include atomic information of nucleotides of RNA and DNA, which extends the capability of pKa calculations beyond proteins. The web server allows the end-user to protonate the biomolecule at particular pH based on calculated pKa values and provides the downloadable file in PQR format. Several tests are performed to benchmark the accuracy and speed of the protocol. The web server follows a client-server architecture built on PHP and HTML and utilizes DelPhiPKa program. The computation is performed on the Palmetto supercomputer cluster and results/download links are given back to the end-user via http protocol. The web server takes advantage of MPI parallel implementation in DelPhiPKa and can run a single job on up to 24 CPUs. The DelPhiPKa web server is available at http://compbio.clemson.edu/pka_webserver.
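The protonation step the server offers rests on the standard Henderson-Hasselbalch relationship between a site's pKa and the chosen pH. The sketch below shows only that final step, not DelPhiPKa's Gaussian-dielectric electrostatics, which is what computes the pKa values in the first place.

```python
# Hedged sketch: fraction of an acidic group that is protonated at a
# given pH, from the Henderson-Hasselbalch equation.
def protonated_fraction(pka, ph):
    """Return the protonated fraction: 1 / (1 + 10^(pH - pKa))."""
    return 1.0 / (1.0 + 10.0 ** (ph - pka))
```

A simple decision rule follows: protonate the site in the output PQR when the fraction exceeds 0.5, i.e. when pH < pKa.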

  12. A web server for analysis, comparison and prediction of protein ligand binding sites.

    Science.gov (United States)

    Singh, Harinder; Srivastava, Hemant Kumar; Raghava, Gajendra P S

    2016-03-25

One of the major challenges in the field of systems biology is to understand the interaction between a wide range of proteins and ligands. In the past, methods have been developed for predicting binding sites in a protein for a limited number of ligands. In order to address this problem, we developed a web server named 'LPIcom' to facilitate users in understanding protein-ligand interaction. Analysis, comparison and prediction modules are available in the 'LPIcom' server to predict protein-ligand interacting residues for 824 ligands. Each ligand must have at least 30 protein binding sites in PDB. The analysis module of the server can identify residues preferred in interaction and the binding motif for a given ligand; for example, the residues glycine, lysine and arginine are preferred in ATP binding sites. The comparison module of the server allows comparing protein-binding sites of multiple ligands to understand the similarity between ligands based on their binding sites. This module indicates that the ATP, ADP and GTP ligands are in the same cluster and thus their binding sites or interacting residues exhibit a high level of similarity. A propensity-based prediction module has been developed for predicting ligand-interacting residues in a protein for more than 800 ligands. In addition, a number of web-based tools have been integrated to facilitate users in creating web logos and two-sample logos comparing ligand-interacting and non-interacting residues. In summary, this manuscript presents a web server for the analysis of ligand-interacting residues. This server is available for public use from URL http://crdd.osdd.net/raghava/lpicom .
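A propensity-based module of this kind typically scores each residue type by its enrichment in known binding sites relative to the background composition. The sketch below is a generic illustration of that idea, not LPIcom's code; the sequences in the test are toy data.

```python
# Hedged sketch: residue propensity = frequency in binding sites divided
# by frequency in the full sequence set. Values > 1 mean "preferred".
from collections import Counter

def propensity(binding_residues, all_residues):
    """Map each residue type (one-letter code) to its binding enrichment."""
    b = Counter(binding_residues)
    a = Counter(all_residues)
    nb, na = len(binding_residues), len(all_residues)
    return {r: (b[r] / nb) / (a[r] / na) for r in a}
```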

  13. The wisdom of crowds in action: Forecasting epidemic diseases with a web-based prediction market system.

    Science.gov (United States)

    Li, Eldon Y; Tung, Chen-Yuan; Chang, Shu-Hsun

    2016-08-01

The quest for an effective system capable of monitoring and predicting the trends of epidemic diseases is a critical issue for communities worldwide. With the prevalence of Internet access, more and more researchers today are using data from both search engines and social media to improve the prediction accuracy. In particular, a prediction market system (PMS) exploits the wisdom of crowds on the Internet to effectively accomplish relatively high accuracy. This study presents the architecture of a PMS and demonstrates the matching mechanism of logarithmic market scoring rules. The system was implemented to predict infectious diseases in Taiwan with the wisdom of crowds in order to improve the accuracy of epidemic forecasting. The PMS architecture contains three design components: database clusters, market engine, and Web applications. The system accumulated knowledge from 126 health professionals for 31 weeks to predict five disease indicators: the confirmed cases of dengue fever, the confirmed cases of severe and complicated influenza, the rate of enterovirus infections, the rate of influenza-like illnesses, and the confirmed cases of severe and complicated enterovirus infection. Based on the winning ratio, the PMS predicts the trends of three out of five disease indicators more accurately than does the existing system that uses the five-year average values of historical data for the same weeks. In addition, the PMS with the matching mechanism of logarithmic market scoring rules is easy to understand for health professionals and applicable to predict all the five disease indicators. The PMS architecture of this study allows organizations and individuals to implement it for various purposes in our society. The system can continuously update the data and improve prediction accuracy in monitoring and forecasting the trends of epidemic diseases. Future researchers could replicate and apply the PMS demonstrated in this study to more infectious diseases and wider
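The logarithmic market scoring rule mentioned above has a closed form: the market maker's cost function is C(q) = b · ln Σᵢ exp(qᵢ/b) over outstanding shares qᵢ, prices are its softmax gradient, and a trade costs the difference in C. A minimal sketch of that mechanism, with the liquidity parameter b and function names chosen by me for illustration:

```python
# Hedged sketch of Hanson's logarithmic market scoring rule (LMSR).
import math

def lmsr_cost(q, b=100.0):
    """Market maker's cost function C(q) = b * ln(sum_i exp(q_i / b))."""
    return b * math.log(sum(math.exp(qi / b) for qi in q))

def lmsr_prices(q, b=100.0):
    """Instantaneous outcome prices: the softmax of q / b (sum to 1)."""
    z = sum(math.exp(qi / b) for qi in q)
    return [math.exp(qi / b) / z for qi in q]

def trade_cost(q, outcome, shares, b=100.0):
    """Cost to buy `shares` of `outcome`: C(q') - C(q)."""
    q2 = list(q)
    q2[outcome] += shares
    return lmsr_cost(q2, b) - lmsr_cost(q, b)
```

Prices act as crowd-consensus probabilities: buying an outcome raises its price, so traders who believe an indicator is underpriced move the forecast toward their belief.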

  14. Transcriptome tomography for brain analysis in the web-accessible anatomical space.

    Directory of Open Access Journals (Sweden)

    Yuko Okamura-Oho

Full Text Available Increased information on the encoded mammalian genome is expected to facilitate an integrated understanding of complex anatomical structure and function based on the knowledge of gene products. Determination of gene expression-anatomy associations is crucial for this understanding. To elicit these associations in three-dimensional (3D) space, we introduce a novel technique for comprehensive mapping of endogenous gene expression into a web-accessible standard space: Transcriptome Tomography. The technique is based on the conjugation of sequential tissue-block sectioning, all fractions of which are used for molecular measurements of gene expression densities, and block-face imaging, which is used for 3D reconstruction of the fractions. To generate a 3D map, tissues are serially sectioned in each of three orthogonal planes and the expression density data are mapped using a tomographic technique. This rapid and unbiased mapping technique, using a relatively small number of original data points, allows researchers to create their own expression maps in the broad anatomical context of the space. In the first instance we generated a dataset of 36,000 maps, reconstructed from data of 61 fractions measured with microarray, covering the whole mouse brain (ViBrism: http://vibrism.riken.jp/3dviewer/ex/index.html) in one month. After computational estimation of the mapping accuracy we validated the dataset against existing data with respect to the expression location and density. To demonstrate the relevance of the framework, we showed disease-related expression of the Huntington's disease gene and Bdnf. Our tomographic approach is applicable to the analysis of any biological molecules derived from frozen tissues, organs and whole embryos, and the maps are spatially isotropic and well suited to analysis in the standard space (e.g. Waxholm Space) for brain-atlas databases. This will facilitate research creating and using open standards for a molecular

  15. Web standards facilitating accessibility in a digitally inclusive South Africa – Perspectives from developing the South African National Accessibility Portal

    CSIR Research Space (South Africa)

    Coetzee, L

    2008-11-01

    Full Text Available Many factors impact on the ability to create a digitally inclusive society in a developing world context. These include lack of access to information and communication technology (ICT), infrastructure, low literacy levels as well as low ICT related...

  16. A web portal for accessing, viewing and comparing in situ observations, EO products and model output data

    Science.gov (United States)

    Vines, Aleksander; Hamre, Torill; Lygre, Kjetil

    2014-05-01

The GreenSeas project (Development of global plankton data base and model system for eco-climate early warning) aims to advance the knowledge and predictive capacities of how marine ecosystems will respond to global change. A main task has been to set up a data delivery and monitoring core service following the open and free data access policy implemented in the Global Monitoring for the Environment and Security (GMES) programme. A key feature of the system is its ability to compare data from different datasets, including an option to upload one's own netCDF files. The user can for example search an in situ database for different variables (like temperature, salinity, different elements, light, specific plankton types or rate measurements) with different criteria (bounding box, date/time, depth, Longhurst region, cruise/transect) and compare the data with model data. The user can choose model data or Earth observation data from a list, or upload his/her own netCDF files to use in the comparison. The data can be visualized on a map, as graphs and plots (e.g. time series and property-property plots), or downloaded in various formats. The aim is to ensure open and free access to historical plankton data, new data (EO products and in situ measurements), model data (including estimates of simulation error) and biological, environmental and climatic indicators to a range of stakeholders, such as scientists, policy makers and environmental managers. We have implemented a web-based GIS (Geographical Information System) and want to demonstrate its use. The tool is designed for a wide range of users: novice users, who want a simple way to get basic information about the current state of the marine planktonic ecosystem by utilizing predefined queries and comparisons with models; intermediate-level users, who want to explore the database on their own and customize the predefined setups; and advanced users, who want to perform complex queries and

  17. ComplexContact: a web server for inter-protein contact prediction using deep learning

    KAUST Repository

    Zeng, Hong; Wang, Sheng; Zhou, Tianming; Zhao, Feifeng; Li, Xiufeng; Wu, Qing; Xu, Jinbo

    2018-01-01

    ComplexContact (http://raptorx2.uchicago.edu/ComplexContact/) is a web server for sequence-based interfacial residue-residue contact prediction of a putative protein complex. Interfacial residue-residue contacts are critical for understanding how proteins form complex and interact at residue level. When receiving a pair of protein sequences, ComplexContact first searches for their sequence homologs and builds two paired multiple sequence alignments (MSA), then it applies co-evolution analysis and a CASP-winning deep learning (DL) method to predict interfacial contacts from paired MSAs and visualizes the prediction as an image. The DL method was originally developed for intra-protein contact prediction and performed the best in CASP12. Our large-scale experimental test further shows that ComplexContact greatly outperforms pure co-evolution methods for inter-protein contact prediction, regardless of the species.

  19. ComplexContact: a web server for inter-protein contact prediction using deep learning.

    Science.gov (United States)

    Zeng, Hong; Wang, Sheng; Zhou, Tianming; Zhao, Feifeng; Li, Xiufeng; Wu, Qing; Xu, Jinbo

    2018-05-22

    ComplexContact (http://raptorx2.uchicago.edu/ComplexContact/) is a web server for sequence-based interfacial residue-residue contact prediction of a putative protein complex. Interfacial residue-residue contacts are critical for understanding how proteins form complex and interact at residue level. When receiving a pair of protein sequences, ComplexContact first searches for their sequence homologs and builds two paired multiple sequence alignments (MSA), then it applies co-evolution analysis and a CASP-winning deep learning (DL) method to predict interfacial contacts from paired MSAs and visualizes the prediction as an image. The DL method was originally developed for intra-protein contact prediction and performed the best in CASP12. Our large-scale experimental test further shows that ComplexContact greatly outperforms pure co-evolution methods for inter-protein contact prediction, regardless of the species.

  20. National Scale Marine Geophysical Data Portal for the Israel EEZ with Public Access Web-GIS Platform

    Science.gov (United States)

    Ketter, T.; Kanari, M.; Tibor, G.

    2017-12-01

    Recent offshore discoveries and regulation in the Israel Exclusive Economic Zone (EEZ) are the driving forces behind increasing marine research and development initiatives such as infrastructure development, environmental protection and decision making among many others. All marine operations rely on existing seabed information, while some also generate new data. We aim to create a single platform knowledge-base to enable access to existing information, in a comprehensive, publicly accessible web-based interface. The Israel EEZ covers approx. 26,000 sqkm and has been surveyed continuously with various geophysical instruments over the past decades, including 10,000 km of multibeam survey lines, 8,000 km of sub-bottom seismic lines, and hundreds of sediment sampling stations. Our database consists of vector and raster datasets from multiple sources compiled into a repository of geophysical data and metadata, acquired nation-wide by several research institutes and universities. The repository will enable public access via a web portal based on a GIS platform, including datasets from multibeam, sub-bottom profiling, single- and multi-channel seismic surveys and sediment sampling analysis. Respective data products will also be available e.g. bathymetry, substrate type, granulometry, geological structure etc. Operating a web-GIS based repository allows retrieval of pre-existing data for potential users to facilitate planning of future activities e.g. conducting marine surveys, construction of marine infrastructure and other private or public projects. User interface is based on map oriented spatial selection, which will reveal any relevant data for designated areas of interest. Querying the database will allow the user to obtain information about the data owner and to address them for data retrieval as required. Wide and free public access to existing data and metadata can save time and funds for academia, government and commercial sectors, while aiding in cooperation

  1. BioIMAX: A Web 2.0 approach for easy exploratory and collaborative access to multivariate bioimage data

    Directory of Open Access Journals (Sweden)

    Khan Michael

    2011-07-01

Full Text Available Abstract Background Innovations in biological and biomedical imaging produce complex high-content and multivariate image data. For decision-making and generation of hypotheses, scientists need novel information technology tools that enable them to visually explore and analyze the data and to discuss and communicate results or findings with collaborating experts from various places. Results In this paper, we present a novel Web 2.0 approach, BioIMAX, for the collaborative exploration and analysis of multivariate image data by combining the web's collaboration and distribution architecture with the interface interactivity and computation power of desktop applications, a combination recently called a rich internet application. Conclusions BioIMAX allows scientists to discuss and share data or results with collaborating experts and to visualize, annotate, and explore multivariate image data within one web-based platform from any location via a standard web browser, requiring only a username and a password. BioIMAX can be accessed at http://ani.cebitec.uni-bielefeld.de/BioIMAX with the username "test" and the password "test1" for testing purposes.

  2. Viral IRES prediction system - a web server for prediction of the IRES secondary structure in silico.

    Directory of Open Access Journals (Sweden)

    Jun-Jie Hong

Full Text Available The internal ribosomal entry site (IRES) functions as a cap-independent translation initiation site in eukaryotic cells. IRES elements have been applied as useful tools for bi-cistronic expression vectors. Current RNA structure prediction programs are unable to predict the potential IRES element precisely. We have designed a viral IRES prediction system (VIPS) to perform IRES secondary structure prediction. In order to obtain better results for the IRES prediction, the VIPS can evaluate and predict all four different groups of IRESs with a higher accuracy. RNA secondary structure prediction, comparison, and pseudoknot prediction programs were implemented to form the three-stage procedure of the VIPS. The backbone of VIPS includes: the RNALfold program, aimed to predict local RNA secondary structures by the minimum free energy method; the RNA Align program, intended to compare predicted structures; and the pknotsRG program, used to calculate the pseudoknot structure. VIPS was evaluated using the UTR database, IRES database and Virus database, and the accuracy rate of VIPS was assessed as 98.53%, 90.80%, 82.36% and 80.41% for IRES groups 1, 2, 3, and 4, respectively. This useful search approach for IRES structures will facilitate IRES-related studies. The VIPS on-line website service is available at http://140.135.61.250/vips/.

  3. Incorporating information on predicted solvent accessibility to the co-evolution-based study of protein interactions.

    Science.gov (United States)

    Ochoa, David; García-Gutiérrez, Ponciano; Juan, David; Valencia, Alfonso; Pazos, Florencio

    2013-01-27

    A widespread family of methods for studying and predicting protein interactions using sequence information is based on co-evolution, quantified as similarity of phylogenetic trees. Part of the co-evolution observed between interacting proteins could be due to co-adaptation caused by inter-protein contacts. In this case, the co-evolution is expected to be more evident when evaluated on the surface of the proteins or the internal layers close to it. In this work we study the effect of incorporating information on predicted solvent accessibility to three methods for predicting protein interactions based on similarity of phylogenetic trees. We evaluate the performance of these methods in predicting different types of protein associations when trees based on positions with different characteristics of predicted accessibility are used as input. We found that predicted accessibility improves the results of two recent versions of the mirrortree methodology in predicting direct binary physical interactions, while it neither improves these methods, nor the original mirrortree method, in predicting other types of interactions. That improvement comes at no cost in terms of applicability since accessibility can be predicted for any sequence. We also found that predictions of protein-protein interactions are improved when multiple sequence alignments with a richer representation of sequences (including paralogs) are incorporated in the accessibility prediction.
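The mirrortree signal underlying these methods is the correlation between two proteins' inter-ortholog distance matrices computed over the same set of species. A minimal sketch of that core computation; the function names are mine, and real implementations add refinements such as correcting for the background species tree or, as above, restricting columns by predicted accessibility.

```python
# Hedged sketch of the core mirrortree score: Pearson correlation of the
# upper triangles of two symmetric inter-species distance matrices.
import math

def upper_triangle(m):
    """Flatten the strict upper triangle of a square matrix."""
    n = len(m)
    return [m[i][j] for i in range(n) for j in range(i + 1, n)]

def mirrortree_score(dist_a, dist_b):
    """Pearson r between two distance matrices over the same species."""
    xs, ys = upper_triangle(dist_a), upper_triangle(dist_b)
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)
```

A score near 1 indicates highly similar phylogenetic trees and is taken as evidence of co-evolution, hence possible interaction.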

  4. The International Mouse Phenotyping Consortium Web Portal, a unified point of access for knockout mice and related phenotyping data

    Science.gov (United States)

    Koscielny, Gautier; Yaikhom, Gagarine; Iyer, Vivek; Meehan, Terrence F.; Morgan, Hugh; Atienza-Herrero, Julian; Blake, Andrew; Chen, Chao-Kung; Easty, Richard; Di Fenza, Armida; Fiegel, Tanja; Grifiths, Mark; Horne, Alan; Karp, Natasha A.; Kurbatova, Natalja; Mason, Jeremy C.; Matthews, Peter; Oakley, Darren J.; Qazi, Asfand; Regnart, Jack; Retha, Ahmad; Santos, Luis A.; Sneddon, Duncan J.; Warren, Jonathan; Westerberg, Henrik; Wilson, Robert J.; Melvin, David G.; Smedley, Damian; Brown, Steve D. M.; Flicek, Paul; Skarnes, William C.; Mallon, Ann-Marie; Parkinson, Helen

    2014-01-01

The International Mouse Phenotyping Consortium (IMPC) web portal (http://www.mousephenotype.org) provides the biomedical community with a unified point of access to mutant mice and a rich collection of related emerging and existing mouse phenotype data. IMPC mouse clinics worldwide follow rigorous, highly structured and standardized protocols for the experimentation, collection and dissemination of data. Dedicated ‘data wranglers’ work with each phenotyping center to collate data and perform quality control. An automated statistical analysis pipeline has been developed to identify knockout strains with a significant change in the phenotype parameters. Annotation with biomedical ontologies allows biologists and clinicians to easily find mouse strains with phenotypic traits relevant to their research. Data integration with other resources will provide insights into mammalian gene function and human disease. As phenotype data become available for every gene in the mouse, the IMPC web portal will become an invaluable tool for researchers studying the genetic contributions of genes to human diseases. PMID:24194600

  5. Looking back, looking forward: 10 years of development to collect, preserve and access the Danish Web

    DEFF Research Database (Denmark)

    Laursen, Ditte; Møldrup-Dalum, Per

    Digital heritage archiving is an ongoing activity that requires commitment, involvement and cooperation between heritage institutions and policy makers as well as producers and users of information. In this presentation, we will address how a web archive is created over time as well as what or who...

  6. Comparing Accessibility Auditing Methods for Ebooks: Crowdsourced, Functionality-Led Versus Web Content Methodologies.

    Science.gov (United States)

    James, Abi; Draffan, E A; Wald, Mike

    2017-01-01

    This paper presents a gap analysis between crowdsourced functional accessibility evaluations of ebooks conducted by non-experts and the technical accessibility standards employed by developers. It also illustrates how combining these approaches can provide more appropriate information for a wider group of users with print impairments.

  7. Robust Query Processing for Personalized Information Access on the Semantic Web

    DEFF Research Database (Denmark)

    Dolog, Peter; Stuckenschmidt, Heiner; Wache, Holger

    and user preferences. We describe a framework for information access that combines query refinement and relaxation in order to provide robust, personalized access to heterogeneous RDF data as well as an implementation in terms of rewriting rules and explain its application in the context of e-learning...

  8. TBI server: a web server for predicting ion effects in RNA folding.

    Science.gov (United States)

    Zhu, Yuhong; He, Zhaojian; Chen, Shi-Jie

    2015-01-01

    Metal ions play a critical role in the stabilization of RNA structures. Therefore, accurate prediction of the ion effects in RNA folding can have a far-reaching impact on our understanding of RNA structure and function. Multivalent ions, especially Mg²⁺, are essential for RNA tertiary structure formation. These ions can possibly become strongly correlated in the close vicinity of RNA surface. Most of the currently available software packages, which have widespread success in predicting ion effects in biomolecular systems, however, do not explicitly account for the ion correlation effect. Therefore, it is important to develop a software package/web server for the prediction of ion electrostatics in RNA folding by including ion correlation effects. The TBI web server http://rna.physics.missouri.edu/tbi_index.html provides predictions for the total electrostatic free energy, the different free energy components, and the mean number and the most probable distributions of the bound ions. A novel feature of the TBI server is its ability to account for ion correlation and ion distribution fluctuation effects. By accounting for the ion correlation and fluctuation effects, the TBI server is a unique online tool for computing ion-mediated electrostatic properties for given RNA structures. The results can provide important data for in-depth analysis for ion effects in RNA folding including the ion-dependence of folding stability, ion uptake in the folding process, and the interplay between the different energetic components.

  9. LDAP: a web server for lncRNA-disease association prediction.

    Science.gov (United States)

    Lan, Wei; Li, Min; Zhao, Kaijie; Liu, Jin; Wu, Fang-Xiang; Pan, Yi; Wang, Jianxin

    2017-02-01

Increasing evidence has demonstrated that long noncoding RNAs (lncRNAs) play important roles in many human diseases. Therefore, predicting novel lncRNA-disease associations would help dissect the complex mechanisms of disease pathogenesis. Some computational methods have been developed to infer lncRNA-disease associations. However, most of these methods infer lncRNA-disease associations based on only a single data resource. In this paper, we propose a new computational method to predict lncRNA-disease associations by integrating multiple biological data resources. We then implement this method as a web server for lncRNA-disease association prediction (LDAP). The input of the LDAP server is the lncRNA sequence. LDAP predicts potential lncRNA-disease associations by using a bagging SVM classifier based on lncRNA similarity and disease similarity. The web server is available at http://bioinformatics.csu.edu.cn/ldap . Contact: jxwang@mail.csu.edu.cn. Supplementary data are available at Bioinformatics online.
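The bagging idea behind a classifier like LDAP's can be sketched with bootstrap resampling and majority voting. In this hedged illustration a 1-nearest-neighbour rule stands in for the SVM base learner, and all function names and data are mine, not the server's.

```python
# Hedged sketch of bagging: train each base learner on a bootstrap
# resample of the data and combine predictions by majority vote.
import random

def one_nn(sample, x):
    """1-nearest-neighbour label: nearest training point by squared distance."""
    return min(sample, key=lambda p: sum((a - b) ** 2 for a, b in zip(p[0], x)))[1]

def bagging_predict(train, x, n_estimators=25, seed=0):
    """Majority vote over base learners fitted on bootstrap resamples."""
    rng = random.Random(seed)
    votes = []
    for _ in range(n_estimators):
        boot = [rng.choice(train) for _ in train]  # bootstrap resample
        votes.append(one_nn(boot, x))
    return max(set(votes), key=votes.count)
```

Aggregating many unstable learners trained on resamples reduces variance relative to a single classifier, which is the motivation for bagging the SVM here.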

  10. SVMDLF: A novel R-based Web application for prediction of dipeptidyl peptidase 4 inhibitors.

    Science.gov (United States)

    Chandra, Sharat; Pandey, Jyotsana; Tamrakar, Akhilesh K; Siddiqi, Mohammad Imran

    2017-12-01

Dipeptidyl peptidase 4 (DPP4) is a well-known target for antidiabetic drugs. However, currently available DPP4 inhibitor screening assays are costly and labor-intensive. It is important to create a robust in silico method to predict the activity of DPP4 inhibitors for new lead finding. Here, we introduce an R-based Web application, SVMDLF (SVM-based DPP4 Lead Finder), to predict inhibitors of DPP4, based on a support vector machine (SVM) model, predictions of which are confirmed by in vitro biological evaluation. The best model, generated with the MACCS structure fingerprint, gave a Matthews correlation coefficient of 0.87 for the test set and 0.883 for the external test set. We screened the Maybridge database consisting of approximately 53,000 compounds. For further bioactivity assay, six compounds were shortlisted, and of the six hits, three compounds showed significant DPP4 inhibitory activities with IC50 values ranging from 8.01 to 10.73 μM. This application is an OpenCPU server app, a novel single-page R-based Web application for DPP4 inhibitor prediction. SVMDLF is freely available and open to all users at http://svmdlf.net/ocpu/library/dlfsvm/www/ and http://www.cdri.res.in/svmdlf/.
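The Matthews correlation coefficient used above to select the best model is computed directly from the confusion matrix; a minimal sketch of the standard formula:

```python
# Hedged sketch: MCC from confusion-matrix counts. Ranges from -1
# (total disagreement) through 0 (random) to +1 (perfect prediction).
import math

def matthews_corrcoef(tp, tn, fp, fn):
    """MCC = (TP*TN - FP*FN) / sqrt((TP+FP)(TP+FN)(TN+FP)(TN+FN))."""
    num = tp * tn - fp * fn
    den = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return num / den if den else 0.0
```

Unlike raw accuracy, MCC stays informative on the imbalanced actives-versus-inactives splits typical of virtual screening, which is presumably why it is reported here.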

  11. TBI server: a web server for predicting ion effects in RNA folding.

    Directory of Open Access Journals (Sweden)

    Yuhong Zhu

    Full Text Available Metal ions play a critical role in the stabilization of RNA structures, so accurate prediction of ion effects in RNA folding can have a far-reaching impact on our understanding of RNA structure and function. Multivalent ions, especially Mg²⁺, are essential for RNA tertiary structure formation, and these ions can become strongly correlated in the close vicinity of the RNA surface. Most of the currently available software packages, despite widespread success in predicting ion effects in biomolecular systems, do not explicitly account for this ion correlation effect. It is therefore important to develop a software package/web server that predicts ion electrostatics in RNA folding while including ion correlation effects. The TBI web server (http://rna.physics.missouri.edu/tbi_index.html) provides predictions for the total electrostatic free energy, the different free energy components, and the mean number and most probable distributions of the bound ions. A novel feature of the TBI server is its ability to account for ion correlation and ion distribution fluctuation effects. By accounting for these effects, the TBI server is a unique online tool for computing ion-mediated electrostatic properties of given RNA structures. The results provide important data for in-depth analysis of ion effects in RNA folding, including the ion dependence of folding stability, ion uptake during folding, and the interplay between the different energetic components.

  12. El acceso a VacciMonitor puede hacerse a través de la Web of Science / Accessing VacciMonitor by the Web of Science

    Directory of Open Access Journals (Sweden)

    Daniel Francisco Arencibia-Arrebola

    2015-01-01

    Full Text Available VacciMonitor has gradually increased its visibility through access to different databases. It was introduced in the SciELO project, EBSCO, HINARI, Redalyc, SCOPUS, DOAJ, SICC Data Bases and SeCiMed, among almost thirty well-known index sites, including the virtual libraries of the main universities of the United States of America and other countries. Through a SciELO-Web of Science (WoS) agreement it will be possible to include the journals indexed in SciELO in the WoS; this collaboration is already presenting its outcomes, and the content of SciELO can be accessed through WoS at the link: http://wokinfo.com/products_tools/multidisciplinary/scielo/. WoS was designed by the Institute for Scientific Information (ISI) and is one of the products of the ISI Web of Knowledge suite, currently the property of Thomson Reuters (1). WoS is a citation index and database service, a worldwide online leader with multidisciplinary information covering the knowledge fields of the sciences in general, the social sciences, and the arts and humanities, with more than 46 million bibliographical references and hundreds of other citations, making possible navigation in the broad web of journal articles, lecture materials and other records included in its collection (1). The logic of the functioning of WoS is based on quantitative criteria, since a bigger production demonstrates a greater number of registered papers in the most recognized journals, and the extent to which these papers are cited by those journals (2). The information obtained from WoS databases is very useful for directing scientific research efforts at a personal, institutional or national level. Scientists publishing in WoS journals not only produce more scientific literature, but this literature is also more consulted and used (3). However, it should be considered that the statistics of this site for bibliometric analysis only take into account the journals in this web, but contains three

  13. Leveraging Web Services in Providing Efficient Discovery, Retrieval, and Integration of NASA-Sponsored Observations and Predictions

    Science.gov (United States)

    Bambacus, M.; Alameh, N.; Cole, M.

    2006-12-01

    The Applied Sciences Program at NASA focuses on extending the results of NASA's Earth-Sun system science research beyond the science and research communities to contribute to national priority applications with societal benefits. By employing a systems engineering approach, supporting interoperable data discovery and access, and developing partnerships with federal agencies and national organizations, the Applied Sciences Program facilitates the transition from research to operations in national applications. In particular, the Applied Sciences Program identifies twelve national applications, listed at http://science.hq.nasa.gov/earth-sun/applications/, which can be best served by the results of NASA aerospace research and development of science and technologies. The ability to use and integrate NASA data and science results into these national applications results in enhanced decision support and significant socio-economic benefits for each of the applications. This paper focuses on leveraging the power of interoperability and specifically open standard interfaces in providing efficient discovery, retrieval, and integration of NASA's science research results. Interoperability (the ability to access multiple, heterogeneous geoprocessing environments, either local or remote by means of open and standard software interfaces) can significantly increase the value of NASA-related data by increasing the opportunities to discover, access and integrate that data in the twelve identified national applications (particularly in non-traditional settings). Furthermore, access to data, observations, and analytical models from diverse sources can facilitate interdisciplinary and exploratory research and analysis. To streamline this process, the NASA GeoSciences Interoperability Office (GIO) is developing the NASA Earth-Sun System Gateway (ESG) to enable access to remote geospatial data, imagery, models, and visualizations through open, standard web protocols. 
The gateway (online

  14. The protein circular dichroism data bank, a Web-based site for access to circular dichroism spectroscopic data.

    Science.gov (United States)

    Whitmore, Lee; Woollett, Benjamin; Miles, Andrew J; Janes, Robert W; Wallace, B A

    2010-10-13

    The Protein Circular Dichroism Data Bank (PCDDB) is a newly released resource for structural biology. It is a web-accessible (http://pcddb.cryst.bbk.ac.uk) data bank for circular dichroism (CD) and synchrotron radiation circular dichroism (SRCD) spectra and their associated experimental and secondary metadata, with links to protein sequence and structure data banks. It is designed to provide a public repository for CD spectroscopic data on macromolecules, to parallel the Protein Data Bank (PDB) for crystallographic, electron microscopic, and nuclear magnetic resonance spectroscopic data. Similarly to the PDB, it includes validation checking procedures to ensure good practice and the integrity of the deposited data. This paper reports on the first public release of the PCDDB, which provides access to spectral data that comprise standard reference datasets. Copyright © 2010 Elsevier Ltd. All rights reserved.

  15. Web Mining

    Science.gov (United States)

    Fürnkranz, Johannes

    The World-Wide Web provides every internet citizen with access to an abundance of information, but it becomes increasingly difficult to identify the relevant pieces of information. Research in web mining tries to address this problem by applying techniques from data mining and machine learning to Web data and documents. This chapter provides a brief overview of web mining techniques and research areas, most notably hypertext classification, wrapper induction, recommender systems and web usage mining.
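    One of the research areas named above, web usage mining, can be illustrated with a minimal example: extracting page popularity and co-visitation patterns from an access log. The log entries and page names below are invented for illustration.

```python
from collections import Counter

# Toy server log: (user, page) pairs; in practice parsed from access logs.
log = [
    ("u1", "/home"), ("u1", "/docs"), ("u2", "/home"),
    ("u2", "/docs"), ("u2", "/download"), ("u3", "/home"),
]

# Page popularity: a first step in web usage mining.
page_counts = Counter(page for _, page in log)

# Co-visitation: pages viewed by the same user, the raw material for
# recommender-style "users who viewed X also viewed Y" rules.
visits = {}
for user, page in log:
    visits.setdefault(user, set()).add(page)
co_docs = sum(1 for pages in visits.values() if {"/home", "/docs"} <= pages)
```

    Real usage-mining pipelines add sessionization, filtering of crawler traffic, and association-rule or sequence mining on top of counts like these.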

  16. Visibilidad y accesibilidad web de las tesinas de licenciatura en Bibliotecología y Documentación en la Argentina = Web Visibility and Accessibility of Theses in Library Science and Documentation in Argentina

    Directory of Open Access Journals (Sweden)

    Sandra Gisela Martín

    2013-06-01

    Full Text Available Se busca describir la visibilidad y accesibilidad web de la investigación en Bibliotecología y Documentación en la Argentina mediante el estudio de las tesinas presentadas para optar al título de licenciatura. Constituye un estudio exploratorio descriptivo con enfoque cuantitativo donde se investiga la visibilidad de tesinas en catálogos y repositorios institucionales, la cantidad de tesinas visibles en la web/cantidad de tesinas accesibles en texto completo, los metadatos de los registros en los catálogos, los metadatos de los registros en los repositorios, la producción de tesinas por año según visibilidad web, la cantidad de autores por tesina según visibilidad web y la distribución temática del contenido de las tesinas. Se concluye que la producción científica de tesinas en Bibliotecología en la Argentina está muy dispersa en la web y que la visibilidad y accesibilidad a las mismas es muy escasa = It describes the web visibility and accessibility of research in Library Science and Documentation in Argentina through a study of the theses submitted to qualify for the bachelor's degree. It is an exploratory, descriptive study with a quantitative approach that investigates the visibility of theses in catalogs and institutional repositories, the number of theses visible on the web and accessible in full text, the metadata of records in catalogs and in repositories, the production of theses per year by web visibility, the number of authors per thesis by web visibility, and the thematic distribution of thesis content. It concludes that the scientific production of library science theses in Argentina is widely dispersed on the web and that their visibility and accessibility are very low.

  17. The information-seeking behaviour of paediatricians accessing web-based resources.

    LENUS (Irish Health Repository)

    Prendiville, T W

    2012-02-01

    OBJECTIVES: To establish the information-seeking behaviours of paediatricians in answering everyday clinical queries. DESIGN: A questionnaire was distributed to every hospital-based paediatrician (paediatric registrar and consultant) working in Ireland. RESULTS: The study received 156 completed questionnaires, a 66.1% response rate. 67% of paediatricians utilised the internet as their first "port of call" when looking to answer a medical question. 85% believe that web-based resources have improved medical practice, with 88% reporting that web-based resources are essential for medical practice today. 93.5% of paediatricians believe attempting to answer clinical questions as they arise is an important component of practising evidence-based medicine. 54% of all paediatricians have recommended websites to parents or patients. 75.5% of paediatricians report finding it difficult to keep up to date with new information relevant to their practice. CONCLUSIONS: Web-based paediatric resources are of increasing significance in day-to-day clinical practice. Many paediatricians now believe that the quality of patient care depends on them. Information technology resources play a key role in helping physicians to deliver, in a time-efficient manner, solutions to clinical queries at the point of care.

  18. US Geoscience Information Network, Web Services for Geoscience Information Discovery and Access

    Science.gov (United States)

    Richard, S.; Allison, L.; Clark, R.; Coleman, C.; Chen, G.

    2012-04-01

    The US Geoscience Information Network has developed metadata profiles for interoperable catalog services based on ISO 19139 and OGC CSW 2.0.2. Data services are currently being deployed for the US Department of Energy-funded National Geothermal Data System. These services utilize OGC Web Map Services, Web Feature Services, and THREDDS-served NetCDF for gridded datasets. Services and underlying datasets (along with a wide variety of other information and non-information resources) are registered in the catalog system. Metadata for registration is produced by various workflows, including harvesting from OGC capabilities documents, Drupal-based web applications, and transformation from tabular compilations. Catalog search is implemented using the open-source ESRI Geoportal server. We are pursuing various client applications to demonstrate discovery and utilization of the data services. Currently operational applications include an ESRI ArcMap extension for catalog search and data acquisition from map services, and a catalog browse-and-search application built on OpenLayers and Django. We are developing use cases and requirements for other applications to utilize geothermal data services for resource exploration and evaluation.
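    The OGC services mentioned above are accessed through standard key-value-pair requests. A minimal sketch of building a WMS GetCapabilities request URL (the endpoint below is hypothetical, for illustration only):

```python
from urllib.parse import urlencode

def wms_get_capabilities_url(endpoint, version="1.3.0"):
    """Build an OGC WMS GetCapabilities request URL (KVP encoding)."""
    params = {"SERVICE": "WMS", "REQUEST": "GetCapabilities", "VERSION": version}
    return endpoint + "?" + urlencode(params)

# Hypothetical service endpoint; a real client would then fetch and parse
# the returned capabilities XML to discover layers and supported formats.
url = wms_get_capabilities_url("https://example.org/geoserver/wms")
```

    The same KVP pattern applies to WFS (SERVICE=WFS) and CSW (SERVICE=CSW) requests, which is what makes clients like the ArcMap extension interoperable across servers.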

  19. Prototype and Evaluation of AutoHelp: A Case-based, Web-accessible Help Desk System for EOSDIS

    Science.gov (United States)

    Mitchell, Christine M.; Thurman, David A.

    1999-01-01

    AutoHelp is a case-based, Web-accessible help desk for users of EOSDIS. It uses a combination of advanced computer and Web technologies, knowledge-based systems tools, and cognitive engineering to offload the current, person-intensive help desk facilities at the DAACs. As a case-based system, AutoHelp starts with an organized database of previous help requests (questions and answers) indexed by a hierarchical category structure that facilitates recognition by persons seeking assistance. As an initial proof-of-concept demonstration, a month of email help requests to the Goddard DAAC were analyzed and partially organized into help request cases. These cases were then categorized to create a preliminary case indexing system, or category structure. This category structure allows potential users to identify or recognize categories of questions, responses, and sample cases similar to their needs. Year one of this research project focused on the development of a technology demonstration. User assistance 'cases' are stored in an Oracle database in a combination of tables linking prototypical questions with responses and detailed examples from the email help requests analyzed to date. When a potential user accesses the AutoHelp system, a Web server provides a Java applet that displays the category structure of the help case base, organized by the needs of previous users. When the user identifies or requests a particular type of assistance, the applet uses Java Database Connectivity (JDBC) software to access the database and extract the relevant cases. The demonstration will include an on-line presentation of how AutoHelp is currently structured. We will show how a user might request assistance via the Web interface and how the AutoHelp case base provides assistance. The presentation will describe the DAAC data collection, case definition, and organization to date, as well as the AutoHelp architecture. 
It will conclude with the year 2 proposal to more fully develop the
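    The category-indexed case retrieval described in this record (Oracle tables queried over JDBC from a Java applet) can be sketched in analogous form with Python's stdlib sqlite3. The schema, categories, and cases below are invented for illustration, not taken from AutoHelp.

```python
import sqlite3

# In-memory stand-in for the help-case database; schema is hypothetical.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE cases (category TEXT, question TEXT, answer TEXT)")
conn.executemany(
    "INSERT INTO cases VALUES (?, ?, ?)",
    [
        ("data-access", "How do I order granules?", "Use the order tool ..."),
        ("data-access", "Where are HDF files documented?", "See the format guide ..."),
        ("accounts", "How do I reset my password?", "Contact the help desk ..."),
    ],
)

def cases_for_category(category):
    """Retrieve stored question/answer cases matching a chosen category."""
    rows = conn.execute(
        "SELECT question, answer FROM cases WHERE category = ?", (category,)
    )
    return rows.fetchall()

matches = cases_for_category("data-access")  # the two matching cases
```

    The hierarchical category structure described in the record would map to a category table with parent links; the parameterized query pattern is the same one JDBC clients use.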

  20. Enhancing Access to Scientific Models through Standard Web Services, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — We propose to investigate the feasibility and value of the "Software as a Service" paradigm in facilitating access to Earth Science numerical models. We envision...

  1. Automating testbed documentation and database access using World Wide Web (WWW) tools

    Science.gov (United States)

    Ames, Charles; Auernheimer, Brent; Lee, Young H.

    1994-01-01

    A method for providing uniform transparent access to disparate distributed information systems was demonstrated. A prototype testing interface was developed to access documentation and information using publicly available hypermedia tools. The prototype gives testers a uniform, platform-independent user interface to on-line documentation, user manuals, and mission-specific test and operations data. Mosaic was the common user interface, and HTML (Hypertext Markup Language) provided hypertext capability.

  2. 3dRPC: a web server for 3D RNA-protein structure prediction.

    Science.gov (United States)

    Huang, Yangyu; Li, Haotian; Xiao, Yi

    2018-04-01

    RNA-protein interactions occur in many biological processes. To understand the mechanism of these interactions one needs to know the three-dimensional (3D) structures of RNA-protein complexes. 3dRPC is an algorithm for prediction of 3D RNA-protein complex structures and consists of a docking algorithm, RPDOCK, and a scoring function, 3dRPC-Score. RPDOCK is used to sample possible complex conformations of an RNA and a protein by calculating the geometric and electrostatic complementarities and stacking interactions at the RNA-protein interface according to the features of atom packing at the interface. 3dRPC-Score is a knowledge-based potential that uses the conformations of nucleotide-amino-acid pairs as statistical variables and is used to choose the near-native complex conformations obtained from the docking method above. Recently, we built a web server for 3dRPC, so users can easily use 3dRPC without installing it locally. RNA and protein structures in PDB (Protein Data Bank) format are the only needed input files. The server can also incorporate information on interface residues or residue pairs obtained from experiments or theoretical predictions to improve the prediction. The address of the 3dRPC web server is http://biophy.hust.edu.cn/3dRPC. Contact: yxiao@hust.edu.cn.

  3. Spontaneous diffusion of an effective skin cancer prevention program through Web-based access to program materials.

    Science.gov (United States)

    Hall, Dawn M; Escoffery, Cam; Nehl, Eric; Glanz, Karen

    2010-11-01

    Little information exists about the diffusion of evidence-based interventions, a process that can occur naturally in organized networks with established communication channels. This article describes the diffusion of an effective skin cancer prevention program called Pool Cool through available Web-based program materials. We used self-administered surveys to collect information from program users about access to and use of Web-based program materials. We analyzed the content of e-mails sent to the official Pool Cool Web site to obtain qualitative information about spontaneous diffusion. Program users were dispersed throughout the United States, most often learning about the program through a Web site (32%), publication (26%), or colleague (19%). Most respondents (86%) reported that their pool provided educational activities at swimming lessons. The Leader's Guide (59%) and lesson cards (50%) were the most commonly downloaded materials, and most respondents reported using these core items sometimes, often, or always. Aluminum sun-safety signs were the least frequently used materials. A limited budget was the most commonly noted obstacle to sun-safety efforts at the pool (85%). Factors supporting sun safety at the pool centered around risk management (85%) and health of the pool staff (78%). Diffusion promotes the use of evidence-based health programs and can occur with and without systematic efforts. Strategies such as providing well-packaged, user-friendly program materials at low or no cost and strategic advertisement of the availability of program materials may increase program use and exposure. Furthermore, highlighting the benefits of the program can motivate potential program users.

  4. GalaxyHomomer: a web server for protein homo-oligomer structure prediction from a monomer sequence or structure.

    Science.gov (United States)

    Baek, Minkyung; Park, Taeyong; Heo, Lim; Park, Chiwook; Seok, Chaok

    2017-07-03

    Homo-oligomerization of proteins is abundant in nature, and is often intimately related with the physiological functions of proteins, such as in metabolism, signal transduction or immunity. Information on the homo-oligomer structure is therefore important to obtain a molecular-level understanding of protein functions and their regulation. Currently available web servers predict protein homo-oligomer structures either by template-based modeling using homo-oligomer templates selected from the protein structure database or by ab initio docking of monomer structures resolved by experiment or predicted by computation. The GalaxyHomomer server, freely accessible at http://galaxy.seoklab.org/homomer, carries out template-based modeling, ab initio docking or both depending on the availability of proper oligomer templates. It also incorporates recently developed model refinement methods that can consistently improve model quality. Moreover, the server provides additional options that can be chosen by the user depending on the availability of information on the monomer structure, oligomeric state and locations of unreliable/flexible loops or termini. The performance of the server was better than or comparable to that of other available methods when tested on benchmark sets and in a recent CASP performed in a blind fashion. © The Author(s) 2017. Published by Oxford University Press on behalf of Nucleic Acids Research.

  5. Dscam1 web server: online prediction of Dscam1 self- and hetero-affinity.

    Science.gov (United States)

    Marini, Simone; Nazzicari, Nelson; Biscarini, Filippo; Wang, Guang-Zhong

    2017-06-15

    Formation of homodimers by identical Dscam1 protein isoforms on the cell surface is the key factor in the self-avoidance of growing neurites. Dscam1's immense diversity has a critical role in the formation of the arthropod neuronal circuit, showing unique evolutionary properties compared to other cell surface proteins. Experimental measures are available for 89 self-binding and 1722 hetero-binding protein samples, out of more than 19 thousand (self-binding) and 350 million (hetero-binding) possible isoform combinations. We developed the Dscam1 Web Server to quickly predict Dscam1 self- and hetero-binding affinity for batches of Dscam1 isoforms. The server can support the study of Dscam1 affinity and help researchers navigate the tens of millions of possible isoform combinations to isolate the strong-binding ones. The Dscam1 Web Server is freely available at: http://bioinformatics.tecnoparco.org/Dscam1-webserver . Web server code is available at https://gitlab.com/ne1s0n/Dscam1-binding . Contact: simone.marini@unipv.it or guangzhong.wang@picb.ac.cn. Supplementary data are available at Bioinformatics online. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com

  6. Adaptive web data extraction policies

    Directory of Open Access Journals (Sweden)

    Provetti, Alessandro

    2008-12-01

    Full Text Available Web data extraction is concerned, among other things, with routine data accessing and downloading from continuously-updated dynamic Web pages. There is a significant trade-off between the rate at which the external Web sites are accessed and the computational burden on the accessing client. We address the problem by proposing a predictive model, typical of the Operating Systems literature, of the rate of update of each Web source. The presented model has been implemented in a new version of the Dynamo project: a middleware that assists in generating informative RSS feeds out of traditional HTML Web sites. To be effective (i.e., to make RSS feeds timely and informative) and to be scalable, Dynamo needs careful tuning and customization of its polling policies, which are described in detail.
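    A predictive polling policy of the kind described above can be sketched as an estimator of a source's update period. The smoothing rule below (an exponentially weighted moving average, in the spirit of OS-style estimators such as TCP's RTT smoothing) and the halving heuristic are assumptions for illustration, not Dynamo's actual policy.

```python
def next_poll_interval(update_times, alpha=0.5, default=3600.0):
    """Suggest a polling interval from observed page-change timestamps.

    update_times: timestamps (seconds) at which the page was seen to change.
    Uses an exponentially weighted moving average of inter-update gaps.
    """
    gaps = [b - a for a, b in zip(update_times, update_times[1:])]
    if not gaps:
        return default  # no history yet: fall back to a fixed interval
    est = gaps[0]
    for g in gaps[1:]:
        est = alpha * g + (1 - alpha) * est  # weight recent behaviour
    # Poll somewhat faster than the estimated update period to limit staleness.
    return est / 2

interval = next_poll_interval([0, 600, 1200, 1800])  # steady 10-minute updates
```

    A page updating steadily every 10 minutes yields a suggested 5-minute poll; a bursty page would pull the estimate toward its recent, shorter gaps.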

  7. Sistema de resum automàtic accessible via web service

    OpenAIRE

    Pedrajas Pérez, Samuel

    2015-01-01

    In this project, an automatic summarization system based on the lexical chain method is presented. It has been implemented as a module for FreeLing, an open-source linguistic analysis library, and made accessible via web service in TextServer, the TALP web services platform.

  8. A Web-based computer system supporting information access, exchange and management during building processes

    DEFF Research Database (Denmark)

    Sørensen, Lars Schiøtt

    1998-01-01

    During the last two decades, a number of research efforts have been made in the field of computing systems related to the building construction industry. Most of the projects have focused on a part of the entire design process and have typically been limited to a specific domain. This paper presents a newly developed computer system based on the World Wide Web. The focus is on the simplicity of the system's structure and on an intuitive and user-friendly interface.

  9. Web Accessibility for Older Adults: A Comparative Analysis of Disability Laws.

    Science.gov (United States)

    Yang, Y Tony; Chen, Brian

    2015-10-01

    Access to the Internet is increasingly critical for health information retrieval, access to certain government benefits and services, connectivity to friends and family members, and an array of commercial and social services that directly affect health. Yet older adults, particularly those with disabilities, are at risk of being left behind in this growing age- and disability-based digital divide. The Americans with Disabilities Act (ADA) was designed to guarantee older adults and persons with disabilities equal access to employment, retail, and other places of public accommodation. Yet older Internet users sometimes face challenges when they try to access the Internet because of disabilities associated with age. Current legal interpretations of the ADA, however, do not consider the Internet to be an entity covered by law. In this article, we examine the current state of Internet accessibility protection in the United States through the lens of the ADA, sections 504 and 508 of the Rehabilitation Act, state laws and industry guidelines. We then compare U.S. rules to those of OECD (Organisation for Economic Co-Operation and Development) countries, notably in the European Union, Canada, Japan, Australia, and the Nordic countries. Our policy recommendations follow from our analyses of these laws and guidelines, and we conclude that the biggest challenge in bridging the age- and disability-based digital divide is the need to extend accessibility requirements to private, not just governmental, entities and organizations. © The Author 2015. Published by Oxford University Press on behalf of The Gerontological Society of America. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  10. Web-based access, aggregation, and visualization of future climate projections with emphasis on agricultural assessments

    Science.gov (United States)

    Villoria, Nelson B.; Elliott, Joshua; Müller, Christoph; Shin, Jaewoo; Zhao, Lan; Song, Carol

    2018-01-01

    Access to climate and spatial datasets by non-specialists is restricted by technical barriers involving hardware, software and data formats. We discuss an open-source online tool that facilitates downloading the climate data from the global circulation models used by the Inter-Sectoral Impacts Model Intercomparison Project. The tool also offers temporal and spatial aggregation capabilities for incorporating future climate scenarios in applications where spatial aggregation is important. We hope that streamlined access to these data facilitates analysis of climate related issues while considering the uncertainties derived from future climate projections and temporal aggregation choices.
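    The temporal and spatial aggregation described above can be sketched with NumPy on a toy gridded field. The grid shape, the monthly-to-annual reduction, and the region mask below are invented for illustration, not the tool's actual workflow.

```python
import numpy as np

# Toy gridded field: 24 monthly steps on a 4x5 lat/lon grid.
rng = np.random.default_rng(1)
field = rng.random((24, 4, 5))

# Temporal aggregation: monthly -> annual means (two years of 12 months).
annual = field.reshape(2, 12, 4, 5).mean(axis=1)      # shape (2, 4, 5)

# Spatial aggregation: mean over a rectangular region mask.
mask = np.zeros((4, 5), dtype=bool)
mask[1:3, 2:4] = True                                  # hypothetical region
regional_series = annual[:, mask].mean(axis=1)         # one value per year
```

    A production tool would also apply area weighting (e.g., by the cosine of latitude) before averaging, since grid cells shrink toward the poles; the unweighted mean here is the simplest case.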

  11. Impact of web accessibility barriers on users with a hearing impairment

    Directory of Open Access Journals (Sweden)

    Afra Pascual

    2015-01-01

    Full Text Available User tests were conducted with hearing-impaired people, evaluating the impact that different accessibility barriers cause for this type of user. The aim of gathering this information was to communicate more empathetically to people who edit web content the accessibility problems that most affect this group, people with a hearing impairment, and thus avoid the accessibility barriers they could potentially be creating. As a result, it is observed that the barriers causing the greatest impact on users with a hearing impairment are "complex text" and "multimedia content" without alternatives. In both cases, content editors should take care to monitor the readability of web content and to accompany multimedia content with subtitles and sign language.

  12. A computational fluid dynamics (CFD) study of WEB-treated aneurysms: Can CFD predict WEB "compression" during follow-up?

    Science.gov (United States)

    Caroff, Jildaz; Mihalea, Cristian; Da Ros, Valerio; Yagi, Takanobu; Iacobucci, Marta; Ikka, Léon; Moret, Jacques; Spelle, Laurent

    2017-07-01

    Recent reports have revealed a worsening of aneurysm occlusion between WEB treatment baseline and angiographic follow-up due to "compression" of the device. We utilized computational fluid dynamics (CFD) to determine whether the underlying mechanism of this worsening is flow related. We included data from all consecutive patients treated in our institution with a WEB for unruptured aneurysms located either at the middle cerebral artery or the basilar tip. The CFD study was performed using pre-operative 3D rotational angiography. On the basis of digital subtraction follow-up angiography, patients were dichotomized into two groups: one with WEB "compression" and one without. We performed statistical analyses to determine a potential correlation between WEB compression and the CFD inflow ratio. Between July 2012 and June 2015, a total of 22 unruptured middle cerebral artery or basilar tip aneurysms were treated with a WEB device in our department. Three patients were excluded from the analysis, and the mean follow-up period was 17 months. Eleven WEBs presented "compression" during follow-up. Interestingly, device "compression" was statistically correlated with the CFD inflow ratio (P=0.018), although not with aneurysm volume, aspect ratio or neck size. The mechanisms underlying the worsening of aneurysm occlusion in WEB-treated patients due to device compression are most likely complex as well as multifactorial. However, it is apparent from our pilot study that a high arterial inflow is at least partially involved. Further theoretical and animal research studies are needed to increase our understanding of this phenomenon. Copyright © 2017 Elsevier Masson SAS. All rights reserved.
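    The dichotomized comparison described above (WEB "compression" vs. none, against a continuous CFD inflow ratio) is the kind of small two-group analysis a permutation test handles well. The sketch below uses invented numbers and is not the study's data or necessarily its actual statistical method.

```python
import random
from statistics import mean

# Invented illustration data: CFD inflow ratios for the two follow-up groups.
compressed = [0.62, 0.71, 0.58, 0.66, 0.74]
stable = [0.41, 0.39, 0.52, 0.47, 0.44]

def permutation_p_value(a, b, n_iter=10000, seed=0):
    """Two-sided permutation test on the difference of group means."""
    rng = random.Random(seed)
    observed = abs(mean(a) - mean(b))
    pooled = a + b
    count = 0
    for _ in range(n_iter):
        rng.shuffle(pooled)  # random relabeling of group membership
        diff = abs(mean(pooled[:len(a)]) - mean(pooled[len(a):]))
        if diff >= observed:
            count += 1
    return count / n_iter

p = permutation_p_value(compressed, stable)  # small p: groups differ
```

    With only a handful of aneurysms per group, a distribution-free test like this avoids normality assumptions that a t-test would require.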

  13. Web Access to Digitised Content of the Exhibition Novo Mesto 1848-1918 at the Dolenjska Museum, Novo Mesto

    Directory of Open Access Journals (Sweden)

    Majda Pungerčar

    2013-09-01

    Full Text Available EXTENDED ABSTRACT: For the first time, the Dolenjska Museum Novo mesto provided access to digitised museum resources when it decided to enrich the exhibition Novo mesto 1848-1918 with digital content. The following goals were identified: the digital content was created at the time of exhibition planning and design, it met the needs of different age groups of visitors, and during the exhibition the content was accessible via touch screen. As such, it also served educational purposes (content-oriented lectures or problem-solving team work). Over the course of the exhibition, the digital content was accessible on the museum website http://www.novomesto1848-1918.si. The digital content was divided into the following sections: the web photo gallery, the quiz and the game. The photo gallery was designed in the same way as the exhibition and the print catalogue, extended by photos of contemporary Novo mesto and accompanied by music from the orchestrion machine. The following themes were outlined: the Austrian Empire, the Krka and Novo mesto, the town and its symbols, images of the town and people, administration and economy, social life, and Novo mesto today, followed by digitised archive materials and sources from that period such as the Commemorative Book of the Uniformed Town Guard, the National Reading Room Guest Book, the Kazina guest book, the album of postcards and the Diploma of Honoured Citizen Josip Gerdešič. The Web application was also a tool for simple online selection of digitised material and the creation of new digital content, which proved to be much more convenient for lecturing than PowerPoint presentations. The quiz consisted of 40 questions relating to the exhibition theme and the catalogue. Each question offered a set of three answers, only one of them correct, illustrated by a photograph. The application auto-selected ten questions and scored the answers immediately. The quiz could be accessed

  14. Towards a tangible web: using physical objects to access and manipulate the Internet of Things

    CSIR Research Space (South Africa)

    Smith, Andrew C

    2013-09-01

    Full Text Available This additional step has resulted in the phenomenon commonly referred to as the Internet of Things (IoT). In order to realise the full potential of the IoT, individuals need a mechanism to access and manipulate it. A potential mechanism for achieving...

  15. SPEER-SERVER: a web server for prediction of protein specificity determining sites.

    Science.gov (United States)

    Chakraborty, Abhijit; Mandloi, Sapan; Lanczycki, Christopher J; Panchenko, Anna R; Chakrabarti, Saikat

    2012-07-01

    Sites that show specific conservation patterns within subsets of proteins in a protein family are likely to be involved in the development of functional specificity. These sites, generally termed specificity determining sites (SDS), might play a crucial role in binding to a specific substrate or proteins. Identification of SDS through experimental techniques is a slow, difficult and tedious job. Hence, it is very important to develop efficient computational methods that can more expediently identify SDS. Herein, we present Specificity prediction using amino acids' Properties, Entropy and Evolution Rate (SPEER)-SERVER, a web server that predicts SDS by analyzing quantitative measures of the conservation patterns of protein sites based on their physico-chemical properties and the heterogeneity of evolutionary changes between and within the protein subfamilies. This web server provides an improved representation of results, adds useful input and output options and integrates a wide range of analysis and data visualization tools when compared with the original standalone version of the SPEER algorithm. Extensive benchmarking finds that SPEER-SERVER exhibits sensitivity and precision performance that, on average, meets or exceeds that of other currently available methods. SPEER-SERVER is available at http://www.hpppi.iicb.res.in/ss/.
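The conservation contrast that SPEER quantifies, sites conserved within each subfamily but divergent between subfamilies, can be illustrated with a toy entropy-based score (the alignments and the scoring combination below are illustrative only, not the server's actual algorithm):

```python
from collections import Counter
from math import log2

def column_entropy(column):
    """Shannon entropy of one alignment column."""
    counts = Counter(column)
    total = len(column)
    return -sum((n / total) * log2(n / total) for n in counts.values())

def sds_scores(subfamily_a, subfamily_b):
    """Score each column: low within-subfamily entropy combined with high
    pooled entropy suggests a specificity determining site (SDS)."""
    scores = []
    for col_a, col_b in zip(zip(*subfamily_a), zip(*subfamily_b)):
        within = (column_entropy(col_a) + column_entropy(col_b)) / 2
        pooled = column_entropy(col_a + col_b)
        scores.append(pooled - within)
    return scores

# Toy alignments: column 0 is globally conserved, column 1 differs
# between the two subfamilies and therefore scores higher.
fam_a = ["AK", "AK", "AK"]
fam_b = ["AR", "AR", "AR"]
print(sds_scores(fam_a, fam_b))  # → [0.0, 1.0]
```

The real SPEER score additionally weights physico-chemical properties and evolutionary rates, but the within- versus between-group comparison above is the core idea.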

  16. QoS prediction for web services based on user-trust propagation model

    Science.gov (United States)

    Thinh, Le-Van; Tu, Truong-Dinh

    2017-10-01

    Web service providers and users now play an important online role; however, the rapidly growing number of providers and users has produced many web services with similar functionality. This is an active area of research, with researchers seeking to propose solutions that recommend the best service to users. Collaborative filtering (CF) algorithms are widely used in recommendation systems, although they are less effective for cold-start users. Recently, some recommender systems have been developed based on social network models, and the results show that social network models perform better than plain CF, especially for cold-start users. However, most social network-based recommendations do not consider the user's mood. This is a hidden source of information, and it is very useful for improving prediction efficiency. In this paper, we introduce a new model called User-Trust Propagation (UTP). The model uses a combination of trust and user mood to predict QoS values, and matrix factorisation (MF) is used to train the model. The experimental results show that the proposed model gives better accuracy than other models, especially for the cold-start problem.
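The matrix-factorisation core used to train such a model can be sketched on a toy user-service QoS matrix (plain SGD factorisation only; the paper's UTP model additionally propagates trust and user mood, which is omitted here):

```python
import numpy as np

def factorize(Q, mask, k=2, steps=2000, lr=0.01, reg=0.02, seed=0):
    """Plain matrix factorisation by SGD: Q ≈ U @ V.T on observed entries.
    The missing entries of Q can then be predicted from the factors."""
    rng = np.random.default_rng(seed)
    n_users, n_items = Q.shape
    U = rng.normal(scale=0.1, size=(n_users, k))
    V = rng.normal(scale=0.1, size=(n_items, k))
    for _ in range(steps):
        for u, i in zip(*np.nonzero(mask)):
            err = Q[u, i] - U[u] @ V[i]
            U[u] += lr * (err * V[i] - reg * U[u])
            V[i] += lr * (err * U[u] - reg * V[i])
    return U, V

# Toy QoS (e.g. normalised response time) matrix; entry (2, 2) is
# unobserved and gets predicted from the learned factors.
Q = np.array([[0.2, 0.9, 0.4],
              [0.2, 0.8, 0.5],
              [0.3, 0.9, 0.0]])
mask = np.array([[1, 1, 1],
                 [1, 1, 1],
                 [1, 1, 0]])
U, V = factorize(Q, mask)
print(float(U[2] @ V[2]))  # predicted QoS for the missing entry
```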

  17. EarthServer2 : The Marine Data Service - Web based and Programmatic Access to Ocean Colour Open Data

    Science.gov (United States)

    Clements, Oliver; Walker, Peter

    2017-04-01

    The ESA Ocean Colour - Climate Change Initiative (ESA OC-CCI) has produced a long-term, high-quality global dataset with associated per-pixel uncertainty data. This dataset has now grown to several hundred terabytes (uncompressed) and is freely available to download. However, the sheer size of the dataset can act as a barrier to many users; the network bandwidth, local storage and processing requirements can prevent researchers without the backing of a large organisation from taking advantage of the raw data. The EC H2020 project EarthServer2 aims to create a federated data service providing access to more than 1 petabyte of earth science data. Within this federation, the Marine Data Service already provides an innovative online toolkit for filtering, analysing and visualising OC-CCI data. Data are made available, filtered and processed at source through standards-based interfaces, the Open Geospatial Consortium Web Coverage Service (WCS) and Web Coverage Processing Service (WCPS). This work was initiated in the EC FP7 EarthServer project, where it was found that the unfamiliarity and complexity of these interfaces themselves created a barrier to wider uptake. The continuation project, EarthServer2, addresses these issues by providing higher-level tools for working with these data. We will present some examples of these tools. Many researchers wish to extract time series data from discrete points of interest. We will present a web-based interface, based on NASA/ESA WebWorldWind, for selecting points of interest and plotting time series from a chosen dataset. In addition, a CSV file of locations and times, such as a ship's track, can be uploaded, and these points are extracted and returned in a CSV file, allowing researchers to work with the extract locally, for example in a spreadsheet. We will also present a set of Python and JavaScript APIs that have been created to complement and extend the web-based GUI. These APIs allow the selection of single points and areas for extraction. The
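A point time-series request of the kind described reduces, at the WCS level, to a GetCoverage call with spatial and temporal subsets. A minimal sketch of building such a request URL (the endpoint, coverage id and axis names below are hypothetical placeholders, not the service's actual identifiers):

```python
from urllib.parse import urlencode

def wcs_point_timeseries_url(base_url, coverage, lon, lat, t_start, t_end):
    """Build a WCS 2.0 GetCoverage request that trims a coverage to a single
    point and a time range, returning the series as CSV."""
    params = {
        "service": "WCS",
        "version": "2.0.1",
        "request": "GetCoverage",
        "coverageId": coverage,
        "format": "text/csv",
    }
    # Subset axis names vary between servers; Long/Lat/ansi are assumptions.
    subsets = [
        f"subset=Long({lon})",
        f"subset=Lat({lat})",
        f'subset=ansi("{t_start}","{t_end}")',
    ]
    return base_url + "?" + urlencode(params) + "&" + "&".join(subsets)

url = wcs_point_timeseries_url(
    "https://example.org/rasdaman/ows",   # hypothetical endpoint
    "CCI_V2_monthly_chlor_a",             # hypothetical coverage id
    lon=-4.15, lat=50.25,
    t_start="2010-01-01", t_end="2010-12-31")
print(url)
```

A ship's-track extraction would simply issue one such request per (lon, lat, time) row of the uploaded CSV.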

  18. Making Statistical Data More Easily Accessible on the Web Results of the StatSearch Case Study

    CERN Document Server

    Rajman, M; Boynton, I M; Fridlund, B; Fyhrlund, A; Sundgren, B; Lundquist, P; Thelander, H; Wänerskär, M

    2005-01-01

    In this paper we present the results of the StatSearch case study, which aimed at providing enhanced access to statistical data available on the Web. Within this case study we developed a prototype of an information access tool combining a query-based search engine with semi-automated navigation techniques that exploit the hierarchical structuring of the available data. This tool enables better control of information retrieval, improving the quality and ease of access to statistical information. The central part of the presented StatSearch tool consists of an algorithm for automated navigation through a tree-like hierarchical document structure. The algorithm relies on the computation of query-related relevance score distributions over the available database to identify the most relevant clusters in the data structure. These most relevant clusters are then proposed to the user for navigation or, alternatively, serve as the support for the automated navigation process. Several appro...
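The cluster-selection step can be sketched as a greedy descent that, at each node, follows the child subtree whose leaves carry the highest total relevance score (a simplified toy version of the idea; the tree and scores are illustrative):

```python
def best_path(tree, scores):
    """Greedy navigation through a hierarchy: at each node descend into the
    child whose subtree accumulates the highest query-relevance score."""
    def subtree_score(node):
        if isinstance(node, str):              # leaf document id
            return scores.get(node, 0.0)
        return sum(subtree_score(child) for child in node.values())

    path = []
    node = tree
    while not isinstance(node, str):
        label, node = max(node.items(), key=lambda kv: subtree_score(kv[1]))
        path.append(label)
    return path, node

# Toy hierarchy of statistical tables with per-document relevance scores.
tree = {"economy": {"prices": "doc_cpi", "trade": "doc_trade"},
        "population": {"census": "doc_census"}}
scores = {"doc_cpi": 0.9, "doc_trade": 0.2, "doc_census": 0.4}
print(best_path(tree, scores))  # → (['economy', 'prices'], 'doc_cpi')
```

In the interactive mode described in the abstract, the top-scoring clusters would be offered to the user instead of being followed automatically.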

  19. Frontier: High Performance Database Access Using Standard Web Components in a Scalable Multi-Tier Architecture

    International Nuclear Information System (INIS)

    Kosyakov, S.; Kowalkowski, J.; Litvintsev, D.; Lueking, L.; Paterno, M.; White, S.P.; Autio, Lauri; Blumenfeld, B.; Maksimovic, P.; Mathis, M.

    2004-01-01

    A high-performance system has been assembled using standard web components to deliver database information to a large number of broadly distributed clients. The CDF experiment at Fermilab is establishing processing centers around the world, imposing a high demand on its database repository. For delivering read-only data, such as calibrations, trigger information, and run conditions data, we have abstracted the interface that clients use to retrieve data objects. A middle tier is deployed that translates client requests into database-specific queries and returns the data to the client as XML datagrams. The database connection management, request translation, and data encoding are accomplished in servlets running under Tomcat. Squid proxy caching layers are deployed near the Tomcat servers, as well as close to the clients, to significantly reduce the load on the database and provide a scalable deployment model. Details of the system's construction and use are presented, including its architecture, design, interfaces, administration, performance measurements, and deployment plan.
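The middle-tier pattern, mapping a named read-only request to a query and encoding the rows as a cacheable XML datagram, can be sketched as follows (the request names, table, and element layout are hypothetical, not Frontier's actual wire format):

```python
import xml.etree.ElementTree as ET

# Hypothetical mapping from an abstract client request to a read-only
# query definition: request name -> (table, columns).
QUERY_MAP = {
    "calibration": ("calib_runs", ["run", "gain"]),
}

def to_datagram(request_name, rows):
    """Encode query result rows as an XML datagram. Because the payload is
    read-only and addressed by request name, HTTP proxies (e.g. Squid) can
    cache it close to the clients."""
    root = ET.Element("frontier", {"request": request_name})
    table, _columns = QUERY_MAP[request_name]
    root.set("table", table)
    for row in rows:
        rec = ET.SubElement(root, "record")
        for col, val in zip(_columns, row):
            ET.SubElement(rec, col).text = str(val)
    return ET.tostring(root, encoding="unicode")

print(to_datagram("calibration", [(1001, 0.98), (1002, 1.02)]))
```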

  20. The Timeseries Toolbox - A Web Application to Enable Accessible, Reproducible Time Series Analysis

    Science.gov (United States)

    Veatch, W.; Friedman, D.; Baker, B.; Mueller, C.

    2017-12-01

    The vast majority of data analyzed by climate researchers are repeated observations of physical processes, i.e., time series data. These data lend themselves to a common set of statistical techniques and models designed to determine the trends and variability (e.g., seasonality) of the repeated observations, and the same techniques and models can often be applied to a wide variety of time series data. The Timeseries Toolbox is a web application designed to standardize and streamline these common approaches to time series analysis and modeling, with particular attention to hydrologic time series used in climate preparedness and resilience planning and design by the U.S. Army Corps of Engineers. The application performs much of the pre-processing of time series data necessary for more complex techniques (e.g., interpolation, aggregation). With this tool, users can upload any dataset that conforms to a standard template and immediately begin applying these techniques to analyze their time series data.
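One of the pre-processing steps mentioned, filling gaps in an observed series by interpolation before trend analysis, can be sketched in a few lines (a minimal stand-in, not the Toolbox's implementation; boundary gaps are deliberately left untouched):

```python
def interpolate_gaps(values):
    """Linearly fill interior None gaps in a regularly sampled series."""
    out = list(values)
    i = 0
    while i < len(out):
        if out[i] is None:
            j = i
            while j < len(out) and out[j] is None:
                j += 1                             # find end of the gap
            if 0 < i and j < len(out):             # interior gap only
                step = (out[j] - out[i - 1]) / (j - i + 1)
                for k in range(i, j):
                    out[k] = out[k - 1] + step
            i = j
        else:
            i += 1
    return out

print(interpolate_gaps([1.0, None, None, 4.0]))  # → [1.0, 2.0, 3.0, 4.0]
```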

  1. Web based dosimetry system for reading and monitoring dose through internet access

    International Nuclear Information System (INIS)

    Perle, S.C.; Bennett, K.; Kahilainen, J.; Vuotila, M.

    2010-01-01

    The Instadose TM dosemeter from Mirion Technologies is a small, rugged device based on patented direct ion storage (DIS) technology and is accredited by the National Voluntary Laboratory Accreditation Program (NVLAP) through NIST, bringing radiation monitoring into the digital age. Smaller than a flash drive, this dosemeter provides an instant read-out when connected to any computer with internet access and a USB connection. Instadose devices provide radiation workers with more flexibility than today's dosemeters. The device consists of a non-volatile analog memory cell surrounded by a gas-filled ion chamber: absorbed dose changes the amount of electric charge in the DIS analog memory, and the total charge storage capacity of the memory determines the available dose range. The state of the analog memory is determined by measuring the voltage across the memory cell. The Account Management Program (AMP) provides secure real-time access to account details, device assignments, reports and all pertinent account information. Access can be restricted based on the role assigned to an individual. A variety of reports are available for download and customization. The advantages of the Instadose dosemeter are: - unlimited reading capability; - concerns about a possible exposure can be addressed immediately; - re-readability without loss of exposure data, with cumulative exposure maintained. (authors)

  2. Web based dosimetry system for reading and monitoring dose through internet access

    Energy Technology Data Exchange (ETDEWEB)

    Perle, S.C.; Bennett, K.; Kahilainen, J.; Vuotila, M. [Mirion Technologies (United States); Mirion Technologies (Finland)

    2010-07-01

    The Instadose{sup TM} dosemeter from Mirion Technologies is a small, rugged device based on patented direct ion storage (DIS) technology and is accredited by the National Voluntary Laboratory Accreditation Program (NVLAP) through NIST, bringing radiation monitoring into the digital age. Smaller than a flash drive, this dosemeter provides an instant read-out when connected to any computer with internet access and a USB connection. Instadose devices provide radiation workers with more flexibility than today's dosemeters. The device consists of a non-volatile analog memory cell surrounded by a gas-filled ion chamber: absorbed dose changes the amount of electric charge in the DIS analog memory, and the total charge storage capacity of the memory determines the available dose range. The state of the analog memory is determined by measuring the voltage across the memory cell. The Account Management Program (AMP) provides secure real-time access to account details, device assignments, reports and all pertinent account information. Access can be restricted based on the role assigned to an individual. A variety of reports are available for download and customization. The advantages of the Instadose dosemeter are: - unlimited reading capability; - concerns about a possible exposure can be addressed immediately; - re-readability without loss of exposure data, with cumulative exposure maintained. (authors)

  3. BioPortal: enhanced functionality via new Web services from the National Center for Biomedical Ontology to access and use ontologies in software applications.

    Science.gov (United States)

    Whetzel, Patricia L; Noy, Natalya F; Shah, Nigam H; Alexander, Paul R; Nyulas, Csongor; Tudorache, Tania; Musen, Mark A

    2011-07-01

    The National Center for Biomedical Ontology (NCBO) is one of the National Centers for Biomedical Computing funded under the NIH Roadmap Initiative. Contributing to the national computing infrastructure, NCBO has developed BioPortal, a web portal that provides access to a library of biomedical ontologies and terminologies (http://bioportal.bioontology.org) via the NCBO Web services. BioPortal enables community participation in the evaluation and evolution of ontology content by providing features to add mappings between terms, to add comments linked to specific ontology terms and to provide ontology reviews. The NCBO Web services (http://www.bioontology.org/wiki/index.php/NCBO_REST_services) enable this functionality and provide a uniform mechanism to access ontologies from a variety of knowledge representation formats, such as Web Ontology Language (OWL) and Open Biological and Biomedical Ontologies (OBO) format. The Web services provide multi-layered access to the ontology content, from getting all terms in an ontology to retrieving metadata about a term. Users can easily incorporate the NCBO Web services into software applications to generate semantically aware applications and to facilitate structured data collection.
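The multi-layered term access described above is exposed through simple REST calls. A sketch of building a term-search request and pulling preferred labels from a response (the endpoint path, `apikey` parameter, and response fields follow the current BioPortal REST API as an assumption; the payload here is an abbreviated sample, and no network call is made):

```python
import json
from urllib.parse import urlencode

def term_search_url(query, api_key, ontologies=None):
    """Build a BioPortal term-search request URL."""
    params = {"q": query, "apikey": api_key}
    if ontologies:
        params["ontologies"] = ",".join(ontologies)
    return "https://data.bioontology.org/search?" + urlencode(params)

def term_labels(response_text):
    """Extract preferred labels from a JSON search response."""
    return [hit["prefLabel"] for hit in json.loads(response_text)["collection"]]

url = term_search_url("melanoma", api_key="YOUR_KEY", ontologies=["NCIT"])
sample = '{"collection": [{"prefLabel": "Melanoma"}]}'   # abbreviated payload
print(url)
print(term_labels(sample))  # → ['Melanoma']
```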

  4. A sparse autoencoder-based deep neural network for protein solvent accessibility and contact number prediction.

    Science.gov (United States)

    Deng, Lei; Fan, Chao; Zeng, Zhiwen

    2017-12-28

    Direct prediction of the three-dimensional (3D) structures of proteins from one-dimensional (1D) sequences is a challenging problem. Significant structural characteristics such as solvent accessibility and contact number are essential for deriving restraints in modeling protein folding and protein 3D structure. Thus, accurately predicting these features is a critical step for 3D protein structure building. In this study, we present DeepSacon, a computational method that can effectively predict protein solvent accessibility and contact number with a deep neural network built on stacked autoencoders and a dropout method. The results demonstrate that our proposed DeepSacon achieves a significant improvement in prediction quality compared with state-of-the-art methods. We obtain 0.70 three-state accuracy for solvent accessibility, and 0.33 15-state accuracy and 0.74 Pearson correlation coefficient (PCC) for the contact number, on the 5729 monomeric soluble globular protein dataset. We also evaluate the performance on the CASP11 benchmark dataset: DeepSacon achieves 0.68 three-state accuracy and 0.69 PCC for solvent accessibility and contact number, respectively. We have shown that DeepSacon can reliably predict solvent accessibility and contact number with stacked sparse autoencoders and a dropout approach.
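The two building blocks named above, an autoencoder layer and dropout regularisation, can be sketched in NumPy (a toy forward pass with made-up dimensions, not DeepSacon's architecture or training procedure):

```python
import numpy as np

rng = np.random.default_rng(42)

def dropout(x, p, train=True):
    """Inverted dropout: zero activations with probability p during training
    and rescale the survivors, so inference needs no adjustment."""
    if not train or p == 0.0:
        return x
    mask = rng.random(x.shape) >= p
    return x * mask / (1.0 - p)

def autoencoder_step(x, W, b, b_prime):
    """One tied-weight autoencoder pass: encode, then reconstruct."""
    h = np.tanh(x @ W + b)                 # hidden code
    x_hat = np.tanh(h @ W.T + b_prime)     # reconstruction of the input
    return h, x_hat

x = rng.normal(size=(4, 20))               # 4 residues x 20 input features
W = rng.normal(scale=0.1, size=(20, 8))    # 8 hidden units
h, x_hat = autoencoder_step(dropout(x, p=0.3), W, np.zeros(8), np.zeros(20))
print(h.shape, x_hat.shape)  # → (4, 8) (4, 20)
```

Training would minimise the reconstruction error layer by layer ("stacking") before fine-tuning on the accessibility and contact-number labels.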

  5. Design of a High Resolution Open Access Global Snow Cover Web Map Service Using Ground and Satellite Observations

    Science.gov (United States)

    Kadlec, J.; Ames, D. P.

    2014-12-01

    The aim of the presented work is to create a freely accessible, dynamic and re-usable snow cover map of the world by combining snow extent and snow depth datasets from multiple sources. The examined data sources are: remote sensing datasets (MODIS, CryoLand), weather forecasting model outputs (OpenWeatherMap, forecast.io), ground observation networks (CUAHSI HIS, GSOD, GHCN, and selected national networks), and user-contributed snow reports on social networks (cross-country and backcountry skiing trip reports). For each type of dataset, an interface and an adapter are created. Each adapter supports queries by area, time range, or a combination of area and time range. The combined dataset is published as an online snow cover mapping service. This web service lowers the learning curve required to view, access, and analyze snow depth maps and snow time series. All data published by this service are licensed as open data, encouraging re-use of the data in customized applications in climatology, hydrology, sports and other disciplines. The initial version of the interactive snow map is on the website snow.hydrodata.org, which supports a view by time and a view by site. In the view by time, the spatial distribution of snow for a selected area and time period is shown. In the view by site, time-series charts of snow depth at a selected location are displayed. All snow extent and snow depth map layers and time series are accessible and discoverable through internationally approved protocols including WMS, WFS, WCS, WaterOneFlow and WaterML, so they can easily be added to GIS software or third-party web map applications. The central hypothesis driving this research is that the integration of user-contributed data and/or social-network-derived snow data together with other open access data sources will result in more accurate and higher resolution - and hence more useful - snow cover maps than satellite data or government agency produced data by
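The adapter pattern described, every source answering the same query-by-area / query-by-time-range interface, can be sketched with an in-memory stand-in (the record fields and bounding-box convention are illustrative assumptions):

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class SnowReport:
    lat: float
    lon: float
    day: date
    depth_cm: float

class InMemoryAdapter:
    """Minimal stand-in for one source adapter; because every adapter shares
    this query interface, heterogeneous sources can be merged into one map."""
    def __init__(self, reports):
        self.reports = reports

    def query(self, bbox=None, start=None, end=None):
        """bbox = (min_lat, min_lon, max_lat, max_lon); dates are inclusive."""
        def ok(r):
            if bbox and not (bbox[0] <= r.lat <= bbox[2]
                             and bbox[1] <= r.lon <= bbox[3]):
                return False
            if start and r.day < start:
                return False
            if end and r.day > end:
                return False
            return True
        return [r for r in self.reports if ok(r)]

adapter = InMemoryAdapter([
    SnowReport(46.0, 14.5, date(2014, 1, 10), 35.0),
    SnowReport(40.2, -111.7, date(2014, 2, 3), 80.0),
])
alps = adapter.query(bbox=(44.0, 10.0, 48.0, 16.0))
print([r.depth_cm for r in alps])  # → [35.0]
```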

  6. Translating access into utilization: lessons from the design and evaluation of a health insurance Web site to promote reproductive health care for young women in Massachusetts.

    Science.gov (United States)

    Janiak, Elizabeth; Rhodes, Elizabeth; Foster, Angel M

    2013-12-01

    Following state-level health care reform in Massachusetts, young women reported confusion over coverage of contraception and other sexual and reproductive health services under newly available health insurance products. To address this gap, a plain-language Web site titled "My Little Black Book for Sexual Health" was developed by a statewide network of reproductive health stakeholders. The purpose of this evaluation was to assess the health literacy demands and usability of the site among its target audience, women ages 18-26 years. We performed an evaluation of the literacy demands of the Web site's written content and tested the Web site's usability in a health communications laboratory. Participants found the Web site visually appealing and its overall design concept accessible. However, the Web site's literacy demands were high, and all participants encountered problems navigating through the Web site. Following this evaluation, the Web site was modified to be more usable and more comprehensible to women of all health literacy levels. To avail themselves of sexual and reproductive health services newly available under expanded health insurance coverage, young women require customized educational resources that are rigorously evaluated to ensure accessibility. To maximize utilization of reproductive health services under expanded health insurance coverage, US women require customized educational resources commensurate with their literacy skills. The application of established research methods from the field of health communications will enable advocates to evaluate and adapt these resources to best serve their targeted audiences. © 2013.

  7. IRESPred: Web Server for Prediction of Cellular and Viral Internal Ribosome Entry Site (IRES)

    Science.gov (United States)

    Kolekar, Pandurang; Pataskar, Abhijeet; Kulkarni-Kale, Urmila; Pal, Jayanta; Kulkarni, Abhijeet

    2016-01-01

    Cellular mRNAs are predominantly translated in a cap-dependent manner. However, some viral and a subset of cellular mRNAs initiate their translation in a cap-independent manner. This requires the presence of a structured RNA element, known as an Internal Ribosome Entry Site (IRES), in their 5′ untranslated regions (UTRs). Experimental demonstration of an IRES in a UTR remains a challenging task. Computational prediction of IRES based merely on sequence and structure conservation is also difficult, particularly for cellular IRES. A web server, IRESPred, has been developed for prediction of both viral and cellular IRES using a Support Vector Machine (SVM). The predictive model was built using 35 features that are based on sequence and structural properties of UTRs and the probabilities of interactions between the UTR and small subunit ribosomal proteins (SSRPs). The model was found to have 75.51% accuracy, 75.75% sensitivity, 75.25% specificity, 75.75% precision and a Matthews Correlation Coefficient (MCC) of 0.51 in blind testing. IRESPred was found to perform better than the only available viral IRES prediction server, VIPS. The IRESPred server is freely available at http://bioinfo.net.in/IRESPred/. PMID:27264539
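The sequence-derived portion of such a feature vector can be sketched as follows (a toy fragment: the real IRESPred model uses 35 features, including structural properties and UTR-SSRP interaction probabilities that are not reproduced here):

```python
from collections import Counter

def utr_features(seq, k=2):
    """Toy UTR feature vector: length, GC fraction, and k-mer frequencies."""
    seq = seq.upper()
    feats = {"length": len(seq),
             "gc": (seq.count("G") + seq.count("C")) / len(seq)}
    kmers = Counter(seq[i:i + k] for i in range(len(seq) - k + 1))
    total = sum(kmers.values())
    for kmer, n in sorted(kmers.items()):
        feats[kmer] = n / total                 # dinucleotide frequency
    return feats

f = utr_features("GGGAUACCGG")                  # toy 5' UTR fragment
print(f["gc"])  # → 0.7
```

Vectors like this, extended with the structural and interaction features, are what the SVM is trained on.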

  8. ADVERPred-Web Service for Prediction of Adverse Effects of Drugs.

    Science.gov (United States)

    Ivanov, Sergey M; Lagunin, Alexey A; Rudik, Anastasia V; Filimonov, Dmitry A; Poroikov, Vladimir V

    2018-01-22

    Application of structure-activity relationships (SARs) for the prediction of adverse effects of drugs (ADEs) has been reported in many published studies. Training sets for the creation of SAR models are usually based on drug label information which allows for the generation of data sets for many hundreds of drugs. Since many ADEs may not be related to drug consumption, one of the main problems in such studies is the quality of data on drug-ADE pairs obtained from labels. The information on ADEs may be included in three sections of the drug labels: "Boxed warning," "Warnings and Precautions," and "Adverse reactions." The first two sections, especially Boxed warning, usually contain the most frequent and severe ADEs that have either known or probable relationships to drug consumption. Using this information, we have created manually curated data sets for the five most frequent and severe ADEs: myocardial infarction, arrhythmia, cardiac failure, severe hepatotoxicity, and nephrotoxicity, with more than 850 drugs on average for each effect. The corresponding SARs were built with PASS (Prediction of Activity Spectra for Substances) software and had balanced accuracy values of 0.74, 0.7, 0.77, 0.67, and 0.75, respectively. They were implemented in a freely available ADVERPred web service ( http://www.way2drug.com/adverpred/ ), which enables a user to predict five ADEs based on the structural formula of compound. This web service can be applied for estimation of the corresponding ADEs for hits and lead compounds at the early stages of drug discovery.

  9. SeMPI: a genome-based secondary metabolite prediction and identification web server.

    Science.gov (United States)

    Zierep, Paul F; Padilla, Natàlia; Yonchev, Dimitar G; Telukunta, Kiran K; Klementz, Dennis; Günther, Stefan

    2017-07-03

    The secondary metabolism of bacteria, fungi and plants yields a vast number of bioactive substances. The constantly increasing amount of published genomic data provides an opportunity for efficient identification of gene clusters by genome mining. Conversely, for many natural products with resolved structures, the encoding gene clusters have not yet been identified. Even though genome mining tools have become significantly more efficient in the identification of biosynthetic gene clusters, structural elucidation of the actual secondary metabolite is still challenging, especially due to as yet unpredictable post-modifications. Here, we introduce SeMPI, a web server providing a prediction and identification pipeline for natural products synthesized by modular type I polyketide synthases (PKS). In order to limit the possible structures of PKS products and to include putative tailoring reactions, a structural comparison with annotated natural products was introduced. Furthermore, a benchmark was designed based on 40 gene clusters with annotated PKS products. The web server of the pipeline (SeMPI) is freely available at: http://www.pharmaceutical-bioinformatics.de/sempi. © The Author(s) 2017. Published by Oxford University Press on behalf of Nucleic Acids Research.

  10. Interactive access to LP DAAC satellite data archives through a combination of open-source and custom middleware web services

    Science.gov (United States)

    Davis, Brian N.; Werpy, Jason; Friesz, Aaron M.; Impecoven, Kevin; Quenzer, Robert; Maiersperger, Tom; Meyer, David J.

    2015-01-01

    Current methods of searching for and retrieving data from satellite land remote sensing archives do not allow for interactive information extraction. Instead, Earth science data users are required to download files over low-bandwidth networks to local workstations and process data before science questions can be addressed. New methods of extracting information from data archives need to become more interactive to meet user demands for deriving increasingly complex information from rapidly expanding archives. Moving the tools required for processing data to computer systems of data providers, and away from systems of the data consumer, can improve turnaround times for data processing workflows. The implementation of middleware services was used to provide interactive access to archive data. The goal of this middleware services development is to enable Earth science data users to access remote sensing archives for immediate answers to science questions instead of links to large volumes of data to download and process. Exposing data and metadata to web-based services enables machine-driven queries and data interaction. Also, product quality information can be integrated to enable additional filtering and sub-setting. Only the reduced content required to complete an analysis is then transferred to the user.

  11. RSARF: Prediction of residue solvent accessibility from protein sequence using random forest method

    KAUST Repository

    Ganesan, Pugalenthi; Kandaswamy, Krishna Kumar Umar; Chou -, Kuochen; Vivekanandan, Saravanan; Kolatkar, Prasanna R.

    2012-01-01

    Prediction of protein structure from its amino acid sequence is still a challenging problem, and a complete physicochemical understanding of protein folding is essential for accurate structure prediction. Knowledge of residue solvent accessibility gives useful insights into protein structure and function prediction. In this work, we propose a random forest method, RSARF, to predict residue accessible surface area from protein sequence information. Training and testing were performed using 120 proteins containing 22006 residues. For each residue, the buried or exposed state was computed using five thresholds (0%, 5%, 10%, 25%, and 50%). The prediction accuracies for the 0%, 5%, 10%, 25%, and 50% thresholds are 72.9%, 78.25%, 78.12%, 77.57% and 72.07%, respectively. Further, comparison of RSARF with other methods using a benchmark dataset containing 20 proteins shows that our approach is useful for predicting residue solvent accessibility from protein sequence without using structural information. The RSARF program, datasets and supplementary data are available at http://caps.ncbs.res.in/download/pugal/RSARF/.
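The threshold step that produces the buried/exposed labels can be sketched directly (illustrative values; the paper's thresholds apply to relative accessible surface area):

```python
def exposure_states(rel_asa, threshold=0.25):
    """Binarise relative solvent accessibility into buried/exposed at a
    given cutoff, as done for the five RSARF training thresholds
    (0%, 5%, 10%, 25%, 50%)."""
    return ["exposed" if r > threshold else "buried" for r in rel_asa]

rel_asa = [0.02, 0.40, 0.25, 0.61]          # toy per-residue relative ASA
print(exposure_states(rel_asa))             # 25% cutoff
print(exposure_states(rel_asa, 0.05))       # 5% cutoff
```

Because the label depends on the cutoff, a separate classifier is trained and evaluated per threshold, which is why five accuracy figures are reported.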

  12. A web accessible resource for investigating cassava phenomics and genomics information: BIOGEN BASE.

    Science.gov (United States)

    Jayakodi, Murukarthick; Selvan, Sreedevi Ghokhilamani; Natesan, Senthil; Muthurajan, Raveendran; Duraisamy, Raghu; Ramineni, Jana Jeevan; Rathinasamy, Sakthi Ambothi; Karuppusamy, Nageswari; Lakshmanan, Pugalenthi; Chokkappan, Mohan

    2011-01-01

    The goal of our research is to establish a unique portal for the outcomes of research on the cassava crop. The Biogen base for cassava clearly brings out the variation in different traits of the germplasm maintained at the Tapioca and Castor Research Station, Tamil Nadu Agricultural University. Phenotypic and genotypic variations of the accessions are clearly depicted, allowing users to browse and interpret the variations using microsatellite markers. The database (BIOGEN BASE - CASSAVA) is designed using PHP and MySQL and is equipped with extensive search options. It is user-friendly and publicly available, to improve the research and development of cassava by making a wealth of genetics and genomics data available through an open, common, and worldwide forum for all individuals interested in the field. The database is available for free at http://www.tnaugenomics.com/biogenbase/casava.php.

  13. Predicting IVF Outcome: A Proposed Web-based System Using Artificial Intelligence.

    Science.gov (United States)

    Siristatidis, Charalampos; Vogiatzi, Paraskevi; Pouliakis, Abraham; Trivella, Marialenna; Papantoniou, Nikolaos; Bettocchi, Stefano

    2016-01-01

    To propose a functional in vitro fertilization (IVF) prediction model to assist clinicians in tailoring personalized treatment of subfertile couples and improving assisted reproduction outcomes. Construction and evaluation of an enhanced web-based system with a novel Artificial Neural Network (ANN) architecture and input and output parameters that conform to clinical and bibliographical standards, driven by a complete data set and "trained" by a network expert in an IVF setting. The system can act as a routine information technology platform for the IVF unit and can recall and evaluate a vast amount of information in a rapid and automated manner to provide an objective indication of the outcome of an assisted reproductive cycle. ANNs are an exceptional candidate for providing the fertility specialist with numerical estimates to promote personalization of healthcare and adaptation of the course of treatment according to the indications. Copyright © 2016 International Institute of Anticancer Research (Dr. John G. Delinassios), All rights reserved.
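The smallest possible stand-in for such a network, a single logistic neuron mapping cycle parameters to a success-probability estimate, can be sketched on toy data (the features, labels, and training setup below are entirely hypothetical, not the proposed system's architecture):

```python
import math

def train_neuron(X, y, epochs=500, lr=0.5):
    """Train one logistic neuron by gradient descent on binary outcomes."""
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            z = sum(wj * xj for wj, xj in zip(w, xi)) + b
            p = 1.0 / (1.0 + math.exp(-z))     # predicted probability
            g = p - yi                          # gradient of log-loss w.r.t. z
            w = [wj - lr * g * xj for wj, xj in zip(w, xi)]
            b -= lr * g
    return w, b

def predict(w, b, xi):
    z = sum(wj * xj for wj, xj in zip(w, xi)) + b
    return 1.0 / (1.0 + math.exp(-z))

# Toy cycles: (normalised age, normalised embryo quality) -> outcome.
X = [(0.2, 0.9), (0.3, 0.8), (0.9, 0.2), (0.8, 0.1)]
y = [1, 1, 0, 0]
w, b = train_neuron(X, y)
print(predict(w, b, (0.25, 0.85)) > 0.5)  # → True
```

A real ANN of the kind proposed would stack many such units into hidden layers and be trained on a full clinical data set.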

  14. nuMap: a web platform for accurate prediction of nucleosome positioning.

    Science.gov (United States)

    Alharbi, Bader A; Alshammari, Thamir H; Felton, Nathan L; Zhurkin, Victor B; Cui, Feng

    2014-10-01

    Nucleosome positioning is critical for gene expression and of major biological interest. The high cost of experimentally mapping nucleosomal arrangement signifies the need for computational approaches to predict nucleosome positions at high resolution. Here, we present a web-based application to fulfill this need by implementing two models, YR and W/S schemes, for the translational and rotational positioning of nucleosomes, respectively. Our methods are based on sequence-dependent anisotropic bending that dictates how DNA is wrapped around a histone octamer. This application allows users to specify a number of options such as schemes and parameters for threading calculation and provides multiple layout formats. The nuMap is implemented in Java/Perl/MySQL and is freely available for public use at http://numap.rit.edu. The user manual, implementation notes, description of the methodology and examples are available at the site. Copyright © 2014 The Authors. Production and hosting by Elsevier Ltd. All rights reserved.

  15. nuMap: A Web Platform for Accurate Prediction of Nucleosome Positioning

    Directory of Open Access Journals (Sweden)

    Bader A. Alharbi

    2014-10-01

    Full Text Available Nucleosome positioning is critical for gene expression and of major biological interest. The high cost of experimentally mapping nucleosomal arrangement signifies the need for computational approaches to predict nucleosome positions at high resolution. Here, we present a web-based application to fulfill this need by implementing two models, YR and W/S schemes, for the translational and rotational positioning of nucleosomes, respectively. Our methods are based on sequence-dependent anisotropic bending that dictates how DNA is wrapped around a histone octamer. This application allows users to specify a number of options such as schemes and parameters for threading calculation and provides multiple layout formats. The nuMap is implemented in Java/Perl/MySQL and is freely available for public use at http://numap.rit.edu. The user manual, implementation notes, description of the methodology and examples are available at the site.

  16. The Phyre2 web portal for protein modeling, prediction and analysis.

    Science.gov (United States)

    Kelley, Lawrence A; Mezulis, Stefans; Yates, Christopher M; Wass, Mark N; Sternberg, Michael J E

    2015-06-01

    Phyre2 is a suite of tools available on the web to predict and analyze protein structure, function and mutations. The focus of Phyre2 is to provide biologists with a simple and intuitive interface to state-of-the-art protein bioinformatics tools. Phyre2 replaces Phyre, the original version of the server for which we previously published a paper in Nature Protocols. In this updated protocol, we describe Phyre2, which uses advanced remote homology detection methods to build 3D models, predict ligand binding sites and analyze the effect of amino acid variants (e.g., nonsynonymous SNPs (nsSNPs)) for a user's protein sequence. Users are guided through results by a simple interface at a level of detail they determine. This protocol will guide users from submitting a protein sequence to interpreting the secondary and tertiary structure of their models, their domain composition and model quality. A range of additional available tools is described to find a protein structure in a genome, to submit large numbers of sequences at once and to automatically run weekly searches for proteins that are difficult to model. The server is available at http://www.sbg.bio.ic.ac.uk/phyre2. A typical structure prediction will be returned between 30 min and 2 h after submission.

  17. Coupling News Sentiment with Web Browsing Data Improves Prediction of Intra-Day Price Dynamics.

    Directory of Open Access Journals (Sweden)

    Gabriele Ranco

    Full Text Available The new digital revolution of big data is deeply changing our capability of understanding society and forecasting the outcome of many social and economic systems. Unfortunately, information can be very heterogeneous in the importance, relevance, and surprise it conveys, severely affecting the predictive power of semantic and statistical methods. Here we show that the aggregation of web users' behavior can be elicited to overcome this problem in a hard-to-predict complex system, namely the financial market. Specifically, our in-sample analysis shows that the combined use of sentiment analysis of news and browsing activity of users of Yahoo! Finance greatly helps in forecasting intra-day and daily price changes of a set of 100 highly capitalized US stocks traded in the period 2012-2013. Sentiment analysis or browsing activity taken alone have very small or no predictive power. Conversely, when considering a news signal where in a given time interval we compute the average sentiment of the clicked news, weighted by the number of clicks, we show that for nearly 50% of the companies such a signal Granger-causes hourly price returns. Our result indicates a "wisdom-of-the-crowd" effect that allows users' activity to be exploited to identify and properly weight the relevant and surprising news, considerably enhancing the forecasting power of the news sentiment.

  18. Coupling News Sentiment with Web Browsing Data Improves Prediction of Intra-Day Price Dynamics.

    Science.gov (United States)

    Ranco, Gabriele; Bordino, Ilaria; Bormetti, Giacomo; Caldarelli, Guido; Lillo, Fabrizio; Treccani, Michele

    2016-01-01

    The new digital revolution of big data is deeply changing our capability of understanding society and forecasting the outcome of many social and economic systems. Unfortunately, information can be very heterogeneous in the importance, relevance, and surprise it conveys, severely affecting the predictive power of semantic and statistical methods. Here we show that the aggregation of web users' behavior can be elicited to overcome this problem in a hard-to-predict complex system, namely the financial market. Specifically, our in-sample analysis shows that the combined use of sentiment analysis of news and browsing activity of users of Yahoo! Finance greatly helps in forecasting intra-day and daily price changes of a set of 100 highly capitalized US stocks traded in the period 2012-2013. Sentiment analysis or browsing activity taken alone have very small or no predictive power. Conversely, when considering a news signal where in a given time interval we compute the average sentiment of the clicked news, weighted by the number of clicks, we show that for nearly 50% of the companies such a signal Granger-causes hourly price returns. Our result indicates a "wisdom-of-the-crowd" effect that allows users' activity to be exploited to identify and properly weight the relevant and surprising news, considerably enhancing the forecasting power of the news sentiment.
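    The click-weighted news signal described in the abstract can be sketched in a few lines; the sentiment and click values below are hypothetical, whereas the study derives them from Yahoo! Finance browsing logs and semantic analysis of news.

    ```python
    import numpy as np

    def click_weighted_sentiment(sentiments, clicks):
        """Average sentiment of the news clicked in one time interval,
        weighted by the number of clicks each article received."""
        sentiments = np.asarray(sentiments, dtype=float)
        clicks = np.asarray(clicks, dtype=float)
        if clicks.sum() == 0:
            return 0.0  # no browsing activity in this interval
        return float(np.dot(sentiments, clicks) / clicks.sum())

    # Hypothetical interval: three articles with sentiment scores in [-1, 1]
    signal = click_weighted_sentiment([0.5, -1.0, 0.25], [10, 2, 8])  # -> 0.25
    ```

    The hourly series of such signals is then tested for Granger causality against hourly price returns, e.g. with statsmodels' `grangercausalitytests`.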

  19. kmer-SVM: a web server for identifying predictive regulatory sequence features in genomic data sets

    Science.gov (United States)

    Fletez-Brant, Christopher; Lee, Dongwon; McCallion, Andrew S.; Beer, Michael A.

    2013-01-01

    Massively parallel sequencing technologies have made the generation of genomic data sets a routine component of many biological investigations. For example, chromatin immunoprecipitation followed by sequencing (ChIP-seq) assays detect genomic regions bound (directly or indirectly) by specific factors, and DNase-seq identifies regions of open chromatin. A major bottleneck in the interpretation of these data is the identification of the underlying DNA sequence code that defines, and ultimately facilitates prediction of, these transcription factor (TF) bound or open chromatin regions. We have recently developed a novel computational methodology, which uses a support vector machine (SVM) with kmer sequence features (kmer-SVM) to identify predictive combinations of short transcription factor-binding sites, which determine the tissue specificity of these genomic assays (Lee, Karchin and Beer, Discriminative prediction of mammalian enhancers from DNA sequence. Genome Res. 2011; 21:2167–80). This regulatory information can (i) give confidence in genomic experiments by recovering previously known binding sites, and (ii) reveal novel sequence features for subsequent experimental testing of cooperative mechanisms. Here, we describe the development and implementation of a web server to allow the broader research community to independently apply our kmer-SVM to analyze and interpret their genomic datasets. We analyze five recently published data sets and demonstrate how this tool identifies accessory factors and repressive sequence elements. kmer-SVM is available at http://kmersvm.beerlab.org. PMID:23771147
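    The feature extraction behind a kmer-SVM can be illustrated in plain Python: each sequence is represented by its overlapping k-mer counts, and an SVM is then trained on these count vectors. The sequence below is a toy example, not drawn from the paper's data sets.

    ```python
    from collections import Counter

    def kmer_features(seq, k=6):
        """Count all overlapping k-mers in a sequence; these counts form
        the feature vector on which the SVM is trained."""
        return Counter(seq[i:i + k] for i in range(len(seq) - k + 1))

    feats = kmer_features("GATTACAGATTACA", k=4)
    # "GATT" occurs at positions 0 and 7, so feats["GATT"] == 2
    ```

    In practice these sparse count vectors are fed to a linear SVM (e.g. scikit-learn's `LinearSVC`), whose learned weights rank k-mers by their contribution to the bound/unbound decision.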

  20. StructRNAfinder: an automated pipeline and web server for RNA families prediction.

    Science.gov (United States)

    Arias-Carrasco, Raúl; Vásquez-Morán, Yessenia; Nakaya, Helder I; Maracaja-Coutinho, Vinicius

    2018-02-17

    The function of many noncoding RNAs (ncRNAs) depends upon their secondary structures. Over the last decades, several methodologies have been developed to predict such structures or to use them to functionally annotate RNAs into RNA families. However, to fully perform this analysis, researchers must utilize multiple tools, which requires the constant parsing and processing of several intermediate files. This makes the large-scale prediction and annotation of RNAs a daunting task even for researchers with good computational or bioinformatics skills. We present an automated pipeline named StructRNAfinder that predicts and annotates RNA families in transcript or genome sequences. This single tool not only displays the sequence/structural consensus alignments for each RNA family according to the Rfam database, but also provides a taxonomic overview for each assigned functional RNA. Moreover, we implemented a user-friendly web service that allows researchers to upload their own nucleotide sequences in order to perform the whole analysis. Finally, we provide a stand-alone version of StructRNAfinder for use in large-scale projects. The tool was developed under the GNU General Public License (GPLv3) and is freely available at http://structrnafinder.integrativebioinformatics.me . The main advantage of StructRNAfinder lies in its large-scale processing and integration of the data obtained by each tool and database employed along the workflow; the several files generated are displayed in user-friendly reports, useful for downstream analyses and data exploration.

  1. Cross-modal prediction changes the timing of conscious access during the motion-induced blindness.

    Science.gov (United States)

    Chang, Acer Y C; Kanai, Ryota; Seth, Anil K

    2015-01-01

    Despite accumulating evidence that perceptual predictions influence perceptual content, the relations between these predictions and conscious contents remain unclear, especially for cross-modal predictions. We examined whether predictions of visual events by auditory cues can facilitate conscious access to the visual stimuli. We trained participants to learn associations between auditory cues and colour changes. We then asked whether congruency between auditory cues and target colours would speed access to consciousness. We did this by rendering a visual target subjectively invisible using motion-induced blindness and then gradually changing its colour while presenting congruent or incongruent auditory cues. Results showed that the visual target gained access to consciousness faster in congruent than in incongruent trials; control experiments excluded potentially confounding effects of attention and motor response. The expectation effect was gradually established over blocks, suggesting a role for extensive training. Overall, our findings show that predictions learned through cross-modal training can facilitate conscious access to visual stimuli. Copyright © 2014 Elsevier Inc. All rights reserved.

  2. Are personal health records safe? A review of free web-accessible personal health record privacy policies.

    Science.gov (United States)

    Carrión Señor, Inmaculada; Fernández-Alemán, José Luis; Toval, Ambrosio

    2012-08-23

    Several obstacles prevent the adoption and use of personal health record (PHR) systems, including users' concerns regarding the privacy and security of their personal health information. To analyze the privacy and security characteristics of PHR privacy policies. It is hoped that identification of the strengths and weaknesses of the PHR systems will be useful for PHR users, health care professionals, decision makers, and designers. We conducted a systematic review using the principal databases related to health and computer science to discover the Web-based and free PHR systems mentioned in published articles. The privacy policy of each PHR system selected was reviewed to extract its main privacy and security characteristics. The search of databases and the myPHR website provided a total of 52 PHR systems, of which 24 met our inclusion criteria. Of these, 17 (71%) allowed users to manage their data and to control access to their health care information. Only 9 (38%) PHR systems permitted users to check who had accessed their data. The majority of PHR systems used information related to the users' accesses to monitor and analyze system use, 12 (50%) of them aggregated user information to publish trends, and 20 (83%) used diverse types of security measures. Finally, 15 (63%) PHR systems were based on regulations or principles such as the US Health Insurance Portability and Accountability Act (HIPAA) and the Health on the Net Foundation Code of Conduct (HONcode). Most privacy policies of PHR systems do not provide an in-depth description of the security measures that they use. Moreover, compliance with standards and regulations in PHR systems is still low.

  3. GRIP: A web-based system for constructing Gold Standard datasets for protein-protein interaction prediction

    Directory of Open Access Journals (Sweden)

    Zheng Huiru

    2009-01-01

    Full Text Available Abstract Background Information about protein interaction networks is fundamental to understanding protein function and cellular processes. Interaction patterns among proteins can suggest new drug targets and aid in the design of new therapeutic interventions. Efforts have been made to map interactions on a proteome-wide scale using both experimental and computational techniques. Reference datasets that contain known interacting proteins (positive cases) and non-interacting proteins (negative cases) are essential to support computational prediction and validation of protein-protein interactions. Information on known interacting and non-interacting proteins is usually stored within databases, and extraction of these data can be both complex and time consuming. Although the automatic construction of reference datasets for classification would be a useful resource for researchers, no public resource currently exists to perform this task. Results GRIP (Gold Reference dataset constructor from Information on Protein complexes) is a web-based system that provides researchers with the functionality to create reference datasets for protein-protein interaction prediction in Saccharomyces cerevisiae. Both positive and negative cases for a reference dataset can be extracted, organised and downloaded by the user. GRIP also provides an upload facility whereby users can submit proteins to determine protein complex membership. A search facility is provided where a user can search for protein complex information in Saccharomyces cerevisiae. Conclusion GRIP was developed to retrieve information on protein complexes, cellular localisation, and physical and genetic interactions in Saccharomyces cerevisiae. Manual construction of reference datasets can be a time-consuming process requiring programming knowledge. GRIP simplifies and speeds up this process by allowing users to automatically construct reference datasets. GRIP is free to access at http://rosalind.infj.ulst.ac.uk/GRIP/.

  4. Intro and Recent Advances: Remote Data Access via OPeNDAP Web Services

    Science.gov (United States)

    Fulker, David

    2016-01-01

    During the upcoming Summer 2016 meeting of the ESIP Federation (July 19-22), OPeNDAP will hold a Developers and Users Workshop. While a broad set of topics will be covered, a key focus is capitalizing on recent EOSDIS-sponsored advances in Hyrax, OPeNDAP's own software for server-side realization of the DAP2 and DAP4 protocols. These Hyrax advances are as important to data users as to data providers, and the workshop will include hands-on experiences of value to both. Specifically, a balanced set of presentations and hands-on tutorials will address advances in: (1) server installation; (2) server configuration; (3) Hyrax aggregation capabilities; (4) support for data access from clients that are HTTP-based, JSON-based or OGC-compliant (especially WCS and WMS); (5) support for DAP4; (6) use and extension of server-side computational capabilities; and (7) several performance-affecting matters. Topics 2 through 7 will be relevant to data consumers, data providers and notably, due to the open-source nature of all OPeNDAP software, to developers wishing to extend Hyrax, to build compatible clients and servers, and/or to employ Hyrax as middleware that enables interoperability across a variety of end-user and source-data contexts. A session for contributed talks will elaborate on the topics listed above and embrace additional ones.

  5. A Grounded Theory Study of the Process of Accessing Information on the World Wide Web by People with Mild Traumatic Brain Injury

    Science.gov (United States)

    Blodgett, Cynthia S.

    2008-01-01

    The purpose of this grounded theory study was to examine the process by which people with Mild Traumatic Brain Injury (MTBI) access information on the web. Recent estimates include amateur sports and recreation injuries, non-hospital clinics and treatment facilities, private and public emergency department visits and admissions, providing…

  6. FirstSearch and NetFirst--Web and Dial-up Access: Plus Ca Change, Plus C'est la Meme Chose?

    Science.gov (United States)

    Koehler, Wallace; Mincey, Danielle

    1996-01-01

    Compares and evaluates the differences between OCLC's dial-up and World Wide Web FirstSearch access methods and their interfaces with the underlying databases. Also examines NetFirst, OCLC's new Internet catalog, the only Internet tracking database from a "traditional" database service. (Author/PEN)

  7. Spliceman2: a computational web server that predicts defects in pre-mRNA splicing.

    Science.gov (United States)

    Cygan, Kamil Jan; Sanford, Clayton Hendrick; Fairbrother, William Guy

    2017-09-15

    Most pre-mRNA transcripts in eukaryotic cells must undergo splicing to remove introns and join exons, and splicing elements present a large mutational target for disease-causing mutations. Splicing elements are strongly position dependent with respect to the transcript annotations. In 2012, we presented Spliceman, an online tool that used positional dependence to predict how likely distant mutations around annotated splice sites were to disrupt splicing. Here, we present an improved version of the previous tool that will be more useful for predicting the likelihood of splicing mutations. We have added industry-standard input options (i.e. Spliceman now accepts variant call format files), which allow much larger inputs than previously available. The tool also can visualize the locations—within exons and introns—of sequence variants to be analyzed and the predicted effects on splicing of the pre-mRNA transcript. In addition, Spliceman2 integrates with RNAcompete motif libraries to provide a prediction of which trans-acting factors' binding sites are disrupted/created, and links out to the UCSC genome browser. In summary, the new features in Spliceman2 will allow scientists and physicians to better understand the effects of single nucleotide variations on splicing. Freely available on the web at http://fairbrother.biomed.brown.edu/spliceman2 . Website implemented in the PHP framework Laravel 5, PostgreSQL, Apache, and Perl, with all major browsers supported. william_fairbrother@brown.edu. Supplementary data are available at Bioinformatics online. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com

  8. Protein Solvent-Accessibility Prediction by a Stacked Deep Bidirectional Recurrent Neural Network

    Directory of Open Access Journals (Sweden)

    Buzhong Zhang

    2018-05-01

    Full Text Available Residue solvent accessibility is closely related to the spatial arrangement and packing of residues. Predicting the solvent accessibility of a protein is an important step to understand its structure and function. In this work, we present a deep learning method to predict residue solvent accessibility, which is based on a stacked deep bidirectional recurrent neural network applied to sequence profiles. To capture more long-range sequence information, a merging operator was proposed when bidirectional information from hidden nodes was merged for outputs. Three types of merging operators were used in our improved model, with a long short-term memory network performing as a hidden computing node. The trained database was constructed from 7361 proteins extracted from the PISCES server using a cut-off of 25% sequence identity. Sequence-derived features including position-specific scoring matrix, physical properties, physicochemical characteristics, conservation score and protein coding were used to represent a residue. Using this method, predictive values of continuous relative solvent-accessible area were obtained, and then, these values were transformed into binary states with predefined thresholds. Our experimental results showed that our deep learning method improved prediction quality relative to current methods, with mean absolute error and Pearson’s correlation coefficient values of 8.8% and 74.8%, respectively, on the CB502 dataset and 8.2% and 78%, respectively, on the Manesh215 dataset.

  9. Protein Solvent-Accessibility Prediction by a Stacked Deep Bidirectional Recurrent Neural Network.

    Science.gov (United States)

    Zhang, Buzhong; Li, Linqing; Lü, Qiang

    2018-05-25

    Residue solvent accessibility is closely related to the spatial arrangement and packing of residues. Predicting the solvent accessibility of a protein is an important step to understand its structure and function. In this work, we present a deep learning method to predict residue solvent accessibility, which is based on a stacked deep bidirectional recurrent neural network applied to sequence profiles. To capture more long-range sequence information, a merging operator was proposed when bidirectional information from hidden nodes was merged for outputs. Three types of merging operators were used in our improved model, with a long short-term memory network performing as a hidden computing node. The trained database was constructed from 7361 proteins extracted from the PISCES server using a cut-off of 25% sequence identity. Sequence-derived features including position-specific scoring matrix, physical properties, physicochemical characteristics, conservation score and protein coding were used to represent a residue. Using this method, predictive values of continuous relative solvent-accessible area were obtained, and then, these values were transformed into binary states with predefined thresholds. Our experimental results showed that our deep learning method improved prediction quality relative to current methods, with mean absolute error and Pearson's correlation coefficient values of 8.8% and 74.8%, respectively, on the CB502 dataset and 8.2% and 78%, respectively, on the Manesh215 dataset.
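    The final step of the method, converting continuous relative solvent accessibility (RSA) into binary buried/exposed states and scoring with mean absolute error and Pearson's r, can be sketched as follows. The 25% threshold and the toy values are illustrative, not taken from the paper.

    ```python
    import numpy as np

    def binarize_rsa(rsa, threshold=0.25):
        """Map continuous relative solvent accessibility to binary
        buried (0) / exposed (1) states at a predefined threshold."""
        return (np.asarray(rsa, dtype=float) >= threshold).astype(int)

    def evaluate(pred, true):
        """Mean absolute error and Pearson correlation coefficient,
        the two metrics reported for the continuous predictions."""
        pred = np.asarray(pred, dtype=float)
        true = np.asarray(true, dtype=float)
        mae = float(np.mean(np.abs(pred - true)))
        r = float(np.corrcoef(pred, true)[0, 1])
        return mae, r

    states = binarize_rsa([0.10, 0.30, 0.60, 0.20])  # -> [0, 1, 1, 0]
    ```

    The same two metrics computed over a benchmark such as CB502 would give the 8.8% MAE and 74.8% correlation values quoted in the abstract.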

  10. Predicting Graduation Rates at 4-Year Broad Access Institutions Using a Bayesian Modeling Approach

    Science.gov (United States)

    Crisp, Gloria; Doran, Erin; Salis Reyes, Nicole A.

    2018-01-01

    This study models graduation rates at 4-year broad access institutions (BAIs). We examine the student body, structural-demographic, and financial characteristics that best predict 6-year graduation rates across two time periods (2008-2009 and 2014-2015). A Bayesian model averaging approach is utilized to account for uncertainty in variable…
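    A common way to implement Bayesian model averaging over candidate predictor sets is to approximate posterior model probabilities from BIC values; this numpy sketch shows that step only, with hypothetical BIC values (the study's actual predictors and models are described in the paper).

    ```python
    import numpy as np

    def bma_weights(bics):
        """Approximate posterior model probabilities from BIC:
        w_i proportional to exp(-(BIC_i - min BIC) / 2)."""
        b = np.asarray(bics, dtype=float)
        w = np.exp(-(b - b.min()) / 2.0)
        return w / w.sum()

    # Three hypothetical models of graduation rates; lower BIC -> higher weight
    w = bma_weights([100.0, 102.0, 110.0])
    ```

    A model-averaged prediction is then the weighted sum of the individual models' predictions, which propagates model uncertainty instead of committing to one specification.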

  11. The experiences of working carers of older people regarding access to a web-based family care support network offered by a municipality.

    Science.gov (United States)

    Andersson, Stefan; Erlingsson, Christen; Magnusson, Lennart; Hanson, Elizabeth

    2017-09-01

    Policy makers in Sweden and other European Member States pay increasing attention to how best to support working carers: carers juggling the provision of unpaid family care for older family members with performing paid work. Exploring perceived benefits and challenges with web-based information and communication technologies as a means of supporting working carers in their caregiving role, this paper draws on findings from a qualitative study. The study aimed to describe working carers' experiences of having access to the web-based family care support network 'A good place' (AGP), provided by the municipality to support those caring for an older family member. Content analysis of interviews with nine working carers revealed three themes: A support hub, connections to peers, personnel and knowledge; Experiencing ICT support as relevant in changing life circumstances; and Upholding one's personal firewall. Findings indicate that the web-based family care support network AGP is an accessible, complementary means of support. Utilising support while balancing caregiving, work obligations and responsibilities was made easier with access to AGP, enabling working carers to access information, psychosocial support and learning opportunities. In particular, it provided channels for carers to share experiences with others, to be informed, and to gain insights into medical and care issues. This reinforced working carers' sense of competence, helping them meet caregiving demands and see positive aspects in their situation. Carers' low levels of digital skills and anxieties about using computer-based support were barriers to utilising web-based support and could lead to deprioritising of this support. However, to help carers overcome these barriers and to better match web-based support to working carers' preferences and situations, web-based support must be introduced in a timely manner and must more accurately meet each working carer's unique caregiving needs. © 2016 Nordic College

  12. Assessing the model transferability for prediction of transcription factor binding sites based on chromatin accessibility.

    Science.gov (United States)

    Liu, Sheng; Zibetti, Cristina; Wan, Jun; Wang, Guohua; Blackshaw, Seth; Qian, Jiang

    2017-07-27

    Computational prediction of transcription factor (TF) binding sites in different cell types is challenging. Recent technology developments allow us to determine genome-wide chromatin accessibility in various cellular and developmental contexts. The chromatin accessibility profiles provide useful information for the prediction of TF binding events in various physiological conditions. Furthermore, ChIP-Seq analysis was used to determine genome-wide binding sites for a range of different TFs in multiple cell types. Integration of these two types of genomic information can improve the prediction of TF binding events. We assessed to what extent a model built upon other TFs and/or other cell types could be used to predict the binding sites of TFs of interest. A random forest model was built using a set of cell type-independent features, such as specific sequences recognized by the TFs and evolutionary conservation, as well as cell type-specific features derived from chromatin accessibility data. Our analysis suggested that models learned from other TFs and/or cell lines performed almost as well as the model learned from the target TF in the cell type of interest. Interestingly, models based on multiple TFs performed better than single-TF models. Finally, we proposed a universal model, BPAC, which was generated using ChIP-Seq data from multiple TFs in various cell types. Integrating chromatin accessibility information with sequence information improves prediction of TF binding. The prediction of TF binding is transferable across TFs and/or cell lines, suggesting there is a set of universal "rules". A computational tool was developed to predict TF binding sites based on these universal "rules".
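    The cross-cell-type evaluation protocol can be illustrated with a deliberately simple stand-in: train a classifier on features from one "cell type" and test it on another. Here a minimal logistic-regression stub replaces the paper's random forest, and the two-feature synthetic data (a cell-type-independent motif score plus a cell-type-specific accessibility score) is entirely hypothetical.

    ```python
    import numpy as np

    def train_logreg(X, y, lr=0.5, epochs=2000):
        """Minimal logistic regression via gradient descent
        (a stand-in for the random forest used in the paper)."""
        w = np.zeros(X.shape[1] + 1)
        Xb = np.hstack([X, np.ones((len(X), 1))])  # append bias column
        for _ in range(epochs):
            p = 1.0 / (1.0 + np.exp(-Xb @ w))
            w -= lr * Xb.T @ (p - y) / len(y)
        return w

    def accuracy(w, X, y):
        Xb = np.hstack([X, np.ones((len(X), 1))])
        return float(np.mean((Xb @ w > 0).astype(int) == y))

    rng = np.random.default_rng(0)

    def make_cell_type(n=300):
        """Synthetic 'cell type': motif score + accessibility score;
        a site is labeled bound when their sum is high."""
        X = rng.uniform(0.0, 1.0, size=(n, 2))
        y = (X.sum(axis=1) > 1.0).astype(float)
        return X, y

    Xa, ya = make_cell_type()   # train in cell type A
    Xc, yc = make_cell_type()   # evaluate transfer to cell type B
    w = train_logreg(Xa, ya)
    transfer_acc = accuracy(w, Xc, yc)
    ```

    With real ChIP-Seq labels and accessibility features, the same harness mirrors the paper's transfer experiment: high `transfer_acc` indicates the learned "rules" generalize across cell types.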

  13. AllerTool: a web server for predicting allergenicity and allergic cross-reactivity in proteins.

    Science.gov (United States)

    Zhang, Zong Hong; Koh, Judice L Y; Zhang, Guang Lan; Choo, Khar Heng; Tammi, Martti T; Tong, Joo Chuan

    2007-02-15

    Assessment of potential allergenicity and patterns of cross-reactivity is necessary whenever novel proteins are introduced into the human food chain. Current bioinformatic methods in allergology focus mainly on the prediction of allergenic proteins, with no information on cross-reactivity patterns among known allergens. In this study, we present AllerTool, a web server with essential tools for the assessment of predicted as well as published cross-reactivity patterns of allergens. The analysis tools include graphical representation of allergen cross-reactivity information; a local sequence comparison tool that displays information on known cross-reactive allergens; a sequence similarity search tool for assessment of cross-reactivity in accordance with FAO/WHO Codex Alimentarius guidelines; and a method based on a support vector machine (SVM). Ten-fold cross-validation results showed that the area under the receiver operating curve (A(ROC)) of the SVM models is 0.90, with 86.00% sensitivity (SE) at a specificity (SP) of 86.00%. AllerTool is freely available at http://research.i2r.a-star.edu.sg/AllerTool/.
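    The FAO/WHO Codex criteria that such similarity searches apply are commonly stated as more than 35% identity within an 80-amino-acid window, or an exact match of six contiguous amino acids. Below is a simplified, ungapped sketch of both checks; real assessments use proper alignments, and the sequences are toy examples.

    ```python
    def codex_flags(query, allergen, win=80, ident=0.35, k=6):
        """Ungapped check of the two Codex criteria: (a) some window of
        `win` residues exceeds `ident` fractional identity, and (b) an
        exact match of `k` contiguous residues exists."""
        kmer_hit = any(query[i:i + k] in allergen
                       for i in range(len(query) - k + 1))
        window_hit = False
        for i in range(max(1, len(query) - win + 1)):
            for j in range(max(1, len(allergen) - win + 1)):
                q, a = query[i:i + win], allergen[j:j + win]
                n = min(len(q), len(a))
                if n and sum(x == y for x, y in zip(q, a)) / n > ident:
                    window_hit = True
        return kmer_hit, window_hit

    # Toy sequences; win=8 keeps the demo short (the guideline uses 80)
    flags = codex_flags("MKVLAAGLLW", "XXMKVLAAGLLWXX", win=8)
    ```

    A protein flagged by either criterion against any known allergen would be prioritized for further serum-based testing.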

  14. StaRProtein, A Web Server for Prediction of the Stability of Repeat Proteins

    Science.gov (United States)

    Xu, Yongtao; Zhou, Xu; Huang, Meilan

    2015-01-01

    Repeat proteins have become increasingly important due to their capability to bind to almost any protein and their potential as an alternative therapy to monoclonal antibodies. In the past decade repeat proteins have been designed to mediate specific protein-protein interactions. The tetratricopeptide and ankyrin repeat proteins are two classes of helical repeat proteins that form different binding pockets to accommodate various partners. It is important to understand the factors that define folding and stability of repeat proteins in order to prioritize the most stable designed repeat proteins and further explore their potential binding affinities. Here we developed distance-dependent statistical potentials using two classes of alpha-helical repeat proteins, tetratricopeptide and ankyrin repeat proteins respectively, and evaluated their efficiency in predicting the stability of repeat proteins. We demonstrated that the repeat-specific statistical potentials based on these two classes of repeat proteins showed superior accuracy compared with non-specific statistical potentials in: 1) discriminating correct vs. incorrect models and 2) ranking the stability of designed repeat proteins. In particular, the statistical scores correlate closely with the equilibrium unfolding free energies of repeat proteins and therefore can serve as a novel tool for quickly prioritizing designed repeat proteins with high stability. The StaRProtein web server was developed for predicting the stability of repeat proteins. PMID:25807112
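    The core of a distance-dependent statistical potential is a log-odds score per distance bin, E(d) = -ln(P_obs(d)/P_ref(d)): residue-pair distances seen more often in the training structures than in a reference distribution get favorable (negative) energies. A minimal sketch with hypothetical bin counts (the paper's potentials are derived from tetratricopeptide and ankyrin repeat structures):

    ```python
    import math

    def statistical_potential(observed, reference, pseudo=1.0):
        """Potential of mean force per distance bin:
        E(d) = -ln(P_obs(d) / P_ref(d)), with pseudocounts so that
        empty bins do not produce infinite energies."""
        bins = set(observed) | set(reference)
        n_obs = sum(observed.values()) + pseudo * len(bins)
        n_ref = sum(reference.values()) + pseudo * len(bins)
        return {d: -math.log(((observed.get(d, 0) + pseudo) / n_obs) /
                             ((reference.get(d, 0) + pseudo) / n_ref))
                for d in bins}

    # Hypothetical counts: the 5 A bin is enriched in native-like models
    E = statistical_potential({5: 8, 7: 2}, {5: 5, 7: 5})
    ```

    A model's total score is the sum of E over its observed residue-pair distances; lower totals indicate more native-like, and hence likely more stable, designs.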

  15. ProBiS-ligands: a web server for prediction of ligands by examination of protein binding sites.

    Science.gov (United States)

    Konc, Janez; Janežič, Dušanka

    2014-07-01

    The ProBiS-ligands web server predicts binding of ligands to a protein structure. Starting with a protein structure or binding site, ProBiS-ligands first identifies template proteins in the Protein Data Bank that share similar binding sites. Based on the superimpositions of the query protein and the similar binding sites found, the server then transposes the ligand structures from those sites to the query protein. Such ligand prediction supports many activities, e.g. drug repurposing. The ProBiS-ligands web server, an extension of the ProBiS web server, is open and free to all users at http://probis.cmm.ki.si/ligands. © The Author(s) 2014. Published by Oxford University Press on behalf of Nucleic Acids Research.

  16. Web-based discovery, access and analysis tools for the provision of different data sources like remote sensing products and climate data

    Science.gov (United States)

    Eberle, J.; Hese, S.; Schmullius, C.

    2012-12-01

    To provide access to different Earth Observation products for the area of Siberia, the Siberian Earth System Science Cluster (SIB-ESS-C) was established as a spatial data infrastructure at the University of Jena (Germany), Department for Earth Observation. The infrastructure implements standards published by the Open Geospatial Consortium (OGC) and the International Organization for Standardization (ISO) for data discovery, data access and data analysis. The objective of SIB-ESS-C is to facilitate environmental research and Earth system science in Siberia. Several products from the Moderate Resolution Imaging Spectroradiometer sensor were integrated by serving ISO-compliant metadata and providing an OGC-compliant Web Map Service for data visualization as well as Web Coverage Services and a Web Feature Service for data access. Furthermore, climate data from the World Meteorological Organization were downloaded, converted, and provided as an OGC Sensor Observation Service. Each climate data station is described with ISO-compliant metadata. All these datasets from multiple sources are provided within the SIB-ESS-C infrastructure (figure 1), and an automatic workflow integrates updates of these datasets daily. The brokering approach within the SIB-ESS-C system is to collect data from different sources, convert the data into common data formats if necessary, and provide them via standardized Web services. Additional tools are made available within the SIB-ESS-C Geoportal for easy access to download and analysis functions (figure 2). The data can be visualized, accessed and analysed with this Geoportal. Because the services are OGC-compliant, the data can also be accessed with other OGC-compliant clients. Figure 1. Technical concept of SIB-ESS-C providing different data sources. Figure 2. Screenshot of the web-based SIB-ESS-C system.
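The OGC services described are plain HTTP key-value interfaces, so any client can assemble requests directly. The sketch below builds a WMS 1.3.0 GetMap URL; the endpoint and layer name are placeholders, not the actual SIB-ESS-C service addresses.

```python
from urllib.parse import urlencode

def wms_getmap_url(base_url, layer, bbox, width=512, height=512,
                   crs="EPSG:4326", fmt="image/png"):
    """Build an OGC WMS 1.3.0 GetMap request URL (standard KVP encoding).

    Note: in WMS 1.3.0 with EPSG:4326 the BBOX axis order is
    min_lat, min_lon, max_lat, max_lon.
    """
    params = {
        "SERVICE": "WMS",
        "VERSION": "1.3.0",
        "REQUEST": "GetMap",
        "LAYERS": layer,
        "STYLES": "",
        "CRS": crs,
        "BBOX": ",".join(str(v) for v in bbox),
        "WIDTH": str(width),
        "HEIGHT": str(height),
        "FORMAT": fmt,
    }
    return base_url + "?" + urlencode(params)

# Placeholder endpoint and layer, not the real SIB-ESS-C addresses:
url = wms_getmap_url("https://example.org/sibessc/wms", "modis_lst",
                     (50.0, 60.0, 75.0, 140.0))
```

The same KVP pattern applies to the Web Coverage Service (GetCoverage) and Sensor Observation Service (GetObservation) requests mentioned in the abstract, with their own required parameters.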

  17. Dengue prediction by the web: Tweets are a useful tool for estimating and forecasting Dengue at country and city level.

    Directory of Open Access Journals (Sweden)

    Cecilia de Almeida Marques-Toledo

    2017-07-01

    Full Text Available Infectious diseases are a leading threat to public health. Accurate and timely monitoring of disease risk and progress can reduce their impact. Mentioning a disease in social networks is correlated with physician visits by patients, and can be used to estimate disease activity. Dengue is the fastest growing mosquito-borne viral disease, with an estimated annual incidence of 390 million infections, of which 96 million manifest clinically. Dengue burden is likely to increase in the future owing to trends toward increased urbanization, scarce water supplies and, possibly, environmental change. The epidemiological dynamic of Dengue is complex and difficult to predict, partly due to costly and slow surveillance systems. In this study, we aimed to quantitatively assess the usefulness of data acquired from Twitter for the early detection and monitoring of Dengue epidemics, at both country and city level, on a weekly basis. Here, we evaluated and demonstrated the potential of tweet modeling for Dengue estimation and forecasting, in comparison with other available web-based data, Google Trends and Wikipedia access logs. We also studied the factors that might influence the goodness-of-fit of the model. We built a simple model based on tweets that was able to 'nowcast', i.e. estimate disease numbers in the same week, but also 'forecast' disease in future weeks. At the country level, tweets are strongly associated with Dengue cases, and can estimate present and future Dengue cases up to 8 weeks in advance. At city level, tweets are also useful for estimating Dengue activity. 
Our model can be applied successfully to small and less developed cities, suggesting a robust construction, even though it may be influenced by the incidence of the disease, the level of local Twitter activity, and social factors, including the human development index and internet access. The association of tweets with Dengue cases is valuable for assisting traditional Dengue surveillance in real time and at low cost.
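A minimal version of the nowcast/forecast idea is a least-squares regression of weekly case counts on lagged tweet counts. The sketch below fits such a one-predictor model on synthetic data; the numbers and the single-lag form are illustrative, not the paper's model.

```python
def fit_ols(x, y):
    """Ordinary least squares for y ~ a + b*x (single predictor)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    return my - b * mx, b

def forecast_cases(tweets, cases, lag=1):
    """Fit cases[t] = a + b * tweets[t - lag], then forecast the next week."""
    x = [tweets[t - lag] for t in range(lag, len(cases))]
    a, b = fit_ols(x, cases[lag:])
    return a + b * tweets[len(cases) - lag]

weekly_tweets = [10, 20, 30, 40, 50]   # illustrative weekly tweet counts
weekly_cases = [10, 25, 45, 65, 85]    # here cases[t] = 5 + 2 * tweets[t-1]
next_week = forecast_cases(weekly_tweets, weekly_cases, lag=1)
```

With lag=0 the same function 'nowcasts' the current week; larger lags correspond to forecasting further ahead, as in the paper's up-to-8-weeks horizon.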

  18. Dengue prediction by the web: Tweets are a useful tool for estimating and forecasting Dengue at country and city level.

    Science.gov (United States)

    Marques-Toledo, Cecilia de Almeida; Degener, Carolin Marlen; Vinhal, Livia; Coelho, Giovanini; Meira, Wagner; Codeço, Claudia Torres; Teixeira, Mauro Martins

    2017-07-01

    Infectious diseases are a leading threat to public health. Accurate and timely monitoring of disease risk and progress can reduce their impact. Mentioning a disease in social networks is correlated with physician visits by patients, and can be used to estimate disease activity. Dengue is the fastest growing mosquito-borne viral disease, with an estimated annual incidence of 390 million infections, of which 96 million manifest clinically. Dengue burden is likely to increase in the future owing to trends toward increased urbanization, scarce water supplies and, possibly, environmental change. The epidemiological dynamic of Dengue is complex and difficult to predict, partly due to costly and slow surveillance systems. In this study, we aimed to quantitatively assess the usefulness of data acquired from Twitter for the early detection and monitoring of Dengue epidemics, at both country and city level, on a weekly basis. Here, we evaluated and demonstrated the potential of tweet modeling for Dengue estimation and forecasting, in comparison with other available web-based data, Google Trends and Wikipedia access logs. We also studied the factors that might influence the goodness-of-fit of the model. We built a simple model based on tweets that was able to 'nowcast', i.e. estimate disease numbers in the same week, but also 'forecast' disease in future weeks. At the country level, tweets are strongly associated with Dengue cases, and can estimate present and future Dengue cases up to 8 weeks in advance. At city level, tweets are also useful for estimating Dengue activity. Our model can be applied successfully to small and less developed cities, suggesting a robust construction, even though it may be influenced by the incidence of the disease, the level of local Twitter activity, and social factors, including the human development index and internet access. The association of tweets with Dengue cases is valuable for assisting traditional Dengue surveillance in real time and at low cost. Tweets are

  19. Exploring Factors that Predict Preservice Teachers' Intentions to Use Web 2.0 Technologies Using Decomposed Theory of Planned Behavior

    Science.gov (United States)

    Sadaf, Ayesha; Newby, Timothy J.; Ertmer, Peggy A.

    2013-01-01

    This study investigated factors that predict preservice teachers' intentions to use Web 2.0 technologies in their future classrooms. The researchers used a mixed-methods research design and collected qualitative interview data (n = 7) to triangulate quantitative survey data (n = 286). Results indicate that positive attitudes and perceptions of…

  20. Estimation of brachial artery volume flow by duplex ultrasound imaging predicts dialysis access maturation.

    Science.gov (United States)

    Ko, Sae Hee; Bandyk, Dennis F; Hodgkiss-Harlow, Kelley D; Barleben, Andrew; Lane, John

    2015-06-01

    This study validated duplex ultrasound measurement of brachial artery volume flow (VF) as a predictor of dialysis access flow maturation and successful hemodialysis. Duplex ultrasound was used to image upper extremity dialysis access anatomy and estimate access VF within 1 to 2 weeks of the procedure. Correlation of brachial artery VF with dialysis access conduit VF was performed using a standardized duplex testing protocol in 75 patients. The hemodynamic data were used to develop brachial artery flow velocity criteria (peak systolic velocity and end-diastolic velocity) predictive of three VF categories: low, intermediate, and high (>800 mL/min). Brachial artery VF was then measured in 148 patients after a primary (n = 86) or revised (n = 62) upper extremity dialysis access procedure, and the VF category was correlated with access maturation or need for revision before hemodialysis usage. Access maturation was conferred when brachial artery VF was >600 mL/min and conduit imaging indicated successful cannulation based on anatomic criteria of conduit diameter >5 mm and adequate skin depth. A VF >800 mL/min was predicted when the brachial artery lumen diameter was >4.5 mm, peak systolic velocity was >150 cm/s, and the diastolic-to-systolic velocity ratio was >0.4. Brachial artery velocity spectra also distinguished accesses with VF below versus above 800 mL/min. Duplex testing to estimate brachial artery VF and assess the conduit for ease of cannulation can be performed in 5 minutes during the initial postoperative vascular clinic evaluation. Estimation of brachial artery VF using duplex ultrasound, termed the "Fast, 5-min Dialysis Duplex Scan," facilitates patient evaluation after new or revised upper extremity dialysis access procedures. Brachial artery VF correlates with access VF measurements and has the advantage of being easier to perform and applicable for forearm as well as arm dialysis access. 
When brachial artery velocity spectra criteria confirm a VF >800 mL/min, flow maturation and successful hemodialysis are predicted if anatomic criteria
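The VF estimate itself follows the textbook relation: volume flow equals the luminal cross-sectional area times the time-averaged mean velocity, converted to mL/min. The sketch below applies that relation and the >600 mL/min maturation threshold quoted in the abstract; the input readings are illustrative, not study data.

```python
import math

def volume_flow_ml_min(diameter_cm, tamv_cm_s):
    """Volume flow (mL/min) from lumen diameter (cm) and time-averaged
    mean velocity (cm/s): VF = area * velocity * 60, since 1 cm^3 = 1 mL."""
    area_cm2 = math.pi * (diameter_cm / 2.0) ** 2
    return area_cm2 * tamv_cm_s * 60.0

def predicts_maturation(vf_ml_min, threshold=600.0):
    """Maturation criterion quoted in the abstract: brachial VF > 600 mL/min."""
    return vf_ml_min > threshold

# Illustrative reading: 4.5 mm lumen, 80 cm/s time-averaged mean velocity.
vf = volume_flow_ml_min(0.45, 80.0)   # roughly 763 mL/min
```

This makes the dependence on diameter explicit: VF scales with the square of the lumen diameter, which is why the >4.5 mm diameter criterion is so influential.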

  1. Accessibility

    DEFF Research Database (Denmark)

    Brooks, Anthony Lewis

    2017-01-01

    This contribution is timely as it addresses accessibility in regard to system hardware and software, aligned with the introduction of the Twenty-First Century Communications and Video Accessibility Act (CVAA) and the adjoined game industry waiver that comes into force January 2017. This is an act created by the USA Federal Communications Commission (FCC) to increase the access of persons with disabilities to modern communications, and for other purposes. The act impacts advanced communications services and products including text messaging; e-mail; instant messaging; video communications; browsers; game platforms; and games software. However, the CVAA has no legal status in the EU. This text succinctly introduces and questions implications, impact, and wider adoption. By presenting the full CVAA and game industry waiver, the text aims to motivate discussions and further publications on the subject...

  2. Influence of Internet Accessibility and Demographic factors on utilization of Web-based Health Information Resources by Resident Doctors in Nigeria.

    Science.gov (United States)

    Ajuwon, G A; Popoola, S O

    2014-09-01

    The internet is a huge library with an avalanche of information resources, including healthcare information. There are numerous studies on the use of electronic resources by healthcare providers, including medical practitioners; however, there is a dearth of information on the patterns of use of web-based health information resources by resident doctors in Nigeria. This study therefore investigates the influence of internet accessibility and demographic factors on the utilization of web-based health information resources by resident doctors in tertiary healthcare institutions in Nigeria. A descriptive survey design was adopted for this study. The population of the study consisted of medical doctors undergoing residency training in 13 tertiary healthcare institutions in South-West Nigeria. The tertiary healthcare institutions were Federal Medical Centres, University Teaching Hospitals and Specialist Hospitals (Neuropsychiatric and Orthopaedic). A pre-tested, self-administered questionnaire was used for data collection. The Statistical Package for the Social Sciences (SPSS) was used for data analysis. Data were analyzed using descriptive statistics, Pearson product moment correlation and multiple regression analysis. The mean age of the respondents was 34 years and males were in the majority (69.0%). A total of 96.1% of respondents had access to the Internet. E-mail (X̄=5.40, SD=0.91), Google (X̄=5.26, SD=1.38) and Yahoo (X̄=5.15, SD=4.44) were used weekly by the respondents. Preparation for Seminar/Grand Round presentations (X̄=8.4, SD=1.92), research (X̄=7.8, SD=2.70) and communication (X̄=7.6, SD=2.60) were ranked high as purposes for use of web-based information resources. There is a strong, positive and significant relationship between internet accessibility and utilization of web-based health information resources (r=0.628). Designation (B=-0.343) and educational qualification (B=2.411) significantly influenced the respondents' utilization of web-based health information resources. A

  3. Web accessible online public access catalogs in the Mercosur

    Directory of Open Access Journals (Sweden)

    Elsa Barber

    2008-06-01

    Full Text Available The user interfaces of the web-based online public access catalogs (OPACs) of academic, special, public and national libraries in the member countries of Mercosur (Argentina, Brazil, Paraguay, Uruguay) are analyzed to provide a diagnosis of the state of bibliographic description, subject analysis, user help messages, and bibliographic display. A quali-quantitative methodology is adopted, using as data collection instrument the checklist of system functions provided by Hildreth (1982), updated to yield a form of 38 closed questions that records the frequency of appearance of the basic functions of four areas: Area I - operational control; Area II - search formulation control and access points; Area III - output control; and Area IV - user assistance: information and instruction. Data from 297 units are analyzed, with strata defined by type of software, type of library, and country. Chi-square, odds ratio, and multinomial logistic regression tests are applied to the results. The analysis corroborates the existence of significant differences in each of the strata and verifies that most of the surveyed OPACs offer only minimal features.

  4. SVM-Prot 2016: A Web-Server for Machine Learning Prediction of Protein Functional Families from Sequence Irrespective of Similarity.

    Science.gov (United States)

    Li, Ying Hong; Xu, Jing Yu; Tao, Lin; Li, Xiao Feng; Li, Shuang; Zeng, Xian; Chen, Shang Ying; Zhang, Peng; Qin, Chu; Zhang, Cheng; Chen, Zhe; Zhu, Feng; Chen, Yu Zong

    2016-01-01

    Knowledge of protein function is important for biological, medical and therapeutic studies, but the functions of many proteins are still unknown. There is a need for improved functional prediction methods. Our SVM-Prot web server employs a machine learning method for predicting protein functional families from protein sequences irrespective of similarity, which complements similarity-based and other methods by predicting diverse classes of proteins, including distantly related proteins and homologous proteins of different functions. Since its publication in 2003, we have made major improvements to SVM-Prot with (1) expanded coverage from 54 to 192 functional families, (2) more diverse protein descriptors for protein representation, (3) improved predictive performance due to the use of more enriched training datasets and a greater variety of protein descriptors, (4) a newly integrated BLAST analysis option for assessing proteins in the SVM-Prot predicted functional families that are similar in sequence to a query protein, and (5) a newly added batch submission option supporting the classification of multiple proteins. Moreover, two more machine learning approaches, K nearest neighbor and probabilistic neural networks, were added to facilitate the collective assessment of protein functions by multiple methods. SVM-Prot can be accessed at http://bidd2.nus.edu.sg/cgi-bin/svmprot/svmprot.cgi.

  5. PredMP: A Web Resource for Computationally Predicted Membrane Proteins via Deep Learning

    KAUST Repository

    Wang, Sheng

    2018-02-06

    Experimental determination of membrane protein (MP) structures is challenging as they are often too large for nuclear magnetic resonance (NMR) experiments and difficult to crystallize. Currently there are only about 510 non-redundant MPs with solved structures in Protein Data Bank (PDB). To elucidate the MP structures computationally, we developed a novel web resource, denoted as PredMP (http://52.87.130.56:3001/#/proteinindex), that delivers one-dimensional (1D) annotation of the membrane topology and secondary structure, two-dimensional (2D) prediction of the contact/distance map, together with three-dimensional (3D) modeling of the MP structure in the lipid bilayer, for each MP target from a given model organism. The precision of the computationally constructed MP structures is leveraged by state-of-the-art deep learning methods as well as cutting-edge modeling strategies. In particular, (i) we annotate 1D property via DeepCNF (Deep Convolutional Neural Fields) that not only models complex sequence-structure relationship but also interdependency between adjacent property labels; (ii) we predict 2D contact/distance map through Deep Transfer Learning which learns the patterns as well as the complex relationship between contacts/distances and protein features from non-membrane proteins; and (iii) we model 3D structure by feeding its predicted contacts and secondary structure to the Crystallography & NMR System (CNS) suite combined with a membrane burial potential that is residue-specific and depth-dependent. PredMP currently contains more than 2,200 multi-pass transmembrane proteins (length<700 residues) from Human. These transmembrane proteins are classified according to IUPHAR/BPS Guide, which provides a hierarchical organization of receptors, channels, transporters, enzymes and other drug targets according to their molecular relationships and physiological functions. Among these MPs, we estimated that our approach could predict correct folds for 1

  6. An improved rank based disease prediction using web navigation patterns on bio-medical databases

    Directory of Open Access Journals (Sweden)

    P. Dhanalakshmi

    2017-12-01

    Full Text Available Applying machine learning techniques to online biomedical databases is a challenging task, as these data are collected from a large number of sources and are multi-dimensional. In addition, retrieval of relevant documents from large repositories, such as gene-document collections, takes more processing time and suffers an increased false-positive rate. Generally, the extraction of biomedical documents is based on a stream of prior observations of gene parameters taken at different time periods. Traditional web usage models such as Markov, Bayesian and clustering models are sensitive in analyzing user navigation patterns and session identification in online biomedical databases. Moreover, most document ranking models on biomedical databases are sensitive to sparsity and outliers. In this paper, a novel user recommendation system was implemented to predict the top-ranked biomedical documents using the disease type, gene entities and user navigation patterns. In this recommendation system, dynamic session identification, dynamic user identification and document ranking techniques were used to extract the most relevant disease documents from the online PubMed repository. To verify the performance of the proposed model, its true positive rate and runtime were compared with those of traditional static models such as Bayesian and Fuzzy rank. Experimental results show that the performance of the proposed ranking model is better than that of the traditional models.

  7. MO-E-18C-01: Open Access Web-Based Peer-To-Peer Training and Education in Radiotherapy

    Energy Technology Data Exchange (ETDEWEB)

    Pawlicki, T [UC San Diego Medical Center, La Jolla, CA (United States); Brown, D; Dunscombe, P [Tom Baker Cancer Centre, Calgary, AB (Canada); Mutic, S [Washington University School of Medicine, Saint Louis, MO (United States)

    2014-06-15

    Purpose: Current training and education delivery models have limitations which result in gaps in clinical proficiency with equipment, procedures, and techniques. Educational and training opportunities offered by vendors and professional societies are by their nature not available at point of need or for the life of clinical systems. The objective of this work is to leverage modern communications technology to provide peer-to-peer training and education for radiotherapy professionals, in the clinic and on demand, as they undertake their clinical duties. Methods: We have developed a free-of-charge web site (https://i.treatsafely.org) using the Google App Engine and datastore (NDB, GQL), Python with AJAX-RPC, and Javascript. The site is a radiotherapy-specific hosting service to which user-created videos illustrating clinical or physics processes and other relevant educational material can be uploaded. Efficient navigation to the material of interest is provided through several RT-specific search tools, and videos can be scored by users, thus providing comprehensive peer review of the site content. The site also supports multilingual narration/translation of videos, a quiz function for competence assessment, and a library function allowing groups or institutions to define their standard operating procedures based on the video content. Results: The website went live in August 2013 and currently has over 680 registered users from 55 countries: 27.2% from the United States, 9.8% from India, 8.3% from the United Kingdom, 7.3% from Brazil, and 47.5% from other countries. The users include physicists (57.4%), oncologists (12.5%), therapists (8.2%) and dosimetrists (4.8%). There are 75 videos to date, including English, Portuguese, Mandarin, and Thai. Conclusion: Based on the initial acceptance of the site, we conclude that this open access web-based peer-to-peer tool is fulfilling an important need in radiotherapy training and education. 
Site functionality should expand in

  8. MO-E-18C-01: Open Access Web-Based Peer-To-Peer Training and Education in Radiotherapy

    International Nuclear Information System (INIS)

    Pawlicki, T; Brown, D; Dunscombe, P; Mutic, S

    2014-01-01

    Purpose: Current training and education delivery models have limitations which result in gaps in clinical proficiency with equipment, procedures, and techniques. Educational and training opportunities offered by vendors and professional societies are by their nature not available at point of need or for the life of clinical systems. The objective of this work is to leverage modern communications technology to provide peer-to-peer training and education for radiotherapy professionals, in the clinic and on demand, as they undertake their clinical duties. Methods: We have developed a free-of-charge web site (https://i.treatsafely.org) using the Google App Engine and datastore (NDB, GQL), Python with AJAX-RPC, and Javascript. The site is a radiotherapy-specific hosting service to which user-created videos illustrating clinical or physics processes and other relevant educational material can be uploaded. Efficient navigation to the material of interest is provided through several RT-specific search tools, and videos can be scored by users, thus providing comprehensive peer review of the site content. The site also supports multilingual narration/translation of videos, a quiz function for competence assessment, and a library function allowing groups or institutions to define their standard operating procedures based on the video content. Results: The website went live in August 2013 and currently has over 680 registered users from 55 countries: 27.2% from the United States, 9.8% from India, 8.3% from the United Kingdom, 7.3% from Brazil, and 47.5% from other countries. The users include physicists (57.4%), oncologists (12.5%), therapists (8.2%) and dosimetrists (4.8%). There are 75 videos to date, including English, Portuguese, Mandarin, and Thai. Conclusion: Based on the initial acceptance of the site, we conclude that this open access web-based peer-to-peer tool is fulfilling an important need in radiotherapy training and education. 
Site functionality should expand in

  9. Genomic Prediction and Association Mapping of Curd-Related Traits in Gene Bank Accessions of Cauliflower.

    Science.gov (United States)

    Thorwarth, Patrick; Yousef, Eltohamy A A; Schmid, Karl J

    2018-02-02

    Genetic resources are an important source of genetic variation for plant breeding. Genome-wide association studies (GWAS) and genomic prediction greatly facilitate the analysis and utilization of useful genetic diversity for improving complex phenotypic traits in crop plants. We explored the potential of GWAS and genomic prediction for improving curd-related traits in cauliflower ( Brassica oleracea var. botrytis ) by combining 174 randomly selected cauliflower gene bank accessions from two different gene banks. The collection was genotyped with genotyping-by-sequencing (GBS) and phenotyped for six curd-related traits at two locations and three growing seasons. A GWAS analysis based on 120,693 single-nucleotide polymorphisms identified a total of 24 significant associations for curd-related traits. The potential for genomic prediction was assessed with a genomic best linear unbiased prediction model and BayesB. Prediction abilities ranged from 0.10 to 0.66 for different traits and did not differ between prediction methods. Imputation of missing genotypes only slightly improved prediction ability. Our results demonstrate that GWAS and genomic prediction in combination with GBS and phenotyping of highly heritable traits can be used to identify useful quantitative trait loci and genotypes among genetically diverse gene bank material for subsequent utilization as genetic resources in cauliflower breeding. Copyright © 2018 Thorwarth et al.
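A GBLUP-style genomic prediction is mathematically equivalent to ridge regression of phenotypes on centered marker genotypes. The sketch below simulates a small marker panel and scores prediction ability as the correlation between predicted and observed phenotypes on held-out lines; population size, marker count, and the shrinkage parameter are illustrative and unrelated to the study's data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate a small panel: 120 accessions x 80 markers (0/1/2 allele counts).
n, m = 120, 80
Z = rng.integers(0, 3, size=(n, m)).astype(float)
Z -= Z.mean(axis=0)                      # center marker columns

beta = rng.normal(0.0, 1.0, m)           # true additive marker effects
y = Z @ beta + rng.normal(0.0, 2.0, n)   # phenotype = genetic value + noise

# RR-BLUP / ridge estimate of marker effects on the training set:
#   beta_hat = (Z'Z + lambda * I)^-1 Z'y
train, test = slice(0, 90), slice(90, None)
lam = 10.0
Zt = Z[train]
beta_hat = np.linalg.solve(Zt.T @ Zt + lam * np.eye(m), Zt.T @ y[train])

# Prediction ability: correlation of predicted vs. observed phenotypes
# for the 30 held-out accessions.
pred = Z[test] @ beta_hat
ability = float(np.corrcoef(pred, y[test])[0, 1])
```

This is the same quantity reported as "prediction ability" in the abstract; BayesB differs only in placing a different (variable-selection) prior on the marker effects.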

  10. Genomic Prediction and Association Mapping of Curd-Related Traits in Gene Bank Accessions of Cauliflower

    Directory of Open Access Journals (Sweden)

    Patrick Thorwarth

    2018-02-01

    Full Text Available Genetic resources are an important source of genetic variation for plant breeding. Genome-wide association studies (GWAS and genomic prediction greatly facilitate the analysis and utilization of useful genetic diversity for improving complex phenotypic traits in crop plants. We explored the potential of GWAS and genomic prediction for improving curd-related traits in cauliflower (Brassica oleracea var. botrytis by combining 174 randomly selected cauliflower gene bank accessions from two different gene banks. The collection was genotyped with genotyping-by-sequencing (GBS and phenotyped for six curd-related traits at two locations and three growing seasons. A GWAS analysis based on 120,693 single-nucleotide polymorphisms identified a total of 24 significant associations for curd-related traits. The potential for genomic prediction was assessed with a genomic best linear unbiased prediction model and BayesB. Prediction abilities ranged from 0.10 to 0.66 for different traits and did not differ between prediction methods. Imputation of missing genotypes only slightly improved prediction ability. Our results demonstrate that GWAS and genomic prediction in combination with GBS and phenotyping of highly heritable traits can be used to identify useful quantitative trait loci and genotypes among genetically diverse gene bank material for subsequent utilization as genetic resources in cauliflower breeding.

  11. A generic method for assignment of reliability scores applied to solvent accessibility predictions

    Directory of Open Access Journals (Sweden)

    Nielsen Morten

    2009-07-01

    Full Text Available Abstract Background Estimation of the reliability of specific real-value predictions is nontrivial and the efficacy of doing so is often questionable. It is important to know whether a given prediction can be trusted, and therefore the best methods associate each prediction with a reliability score or index. For discrete qualitative predictions, the reliability is conventionally estimated as the difference between the output scores of the selected classes. Such an approach is not feasible for methods that predict a biological feature as a single real value rather than a classification. As a solution to this challenge, we have implemented a method that predicts the relative surface accessibility of an amino acid and simultaneously predicts the reliability of each prediction, in the form of a Z-score. Results An ensemble of artificial neural networks was trained on a set of experimentally solved protein structures to predict the relative exposure of the amino acids. The method assigns a reliability score to each surface accessibility prediction as an inherent part of the training process. This is in contrast to the most commonly used procedures, where reliabilities are obtained by post-processing the output. Conclusion The performance of the neural networks was evaluated on a commonly used set of sequences known as the CB513 set. An overall Pearson's correlation coefficient of 0.72 was obtained, which is comparable to the performance of the currently best publicly available method, Real-SPINE. Both methods associate a reliability score with the individual predictions. However, our implementation of reliability scores in the form of a Z-score is shown to be the more informative measure for discriminating good predictions from bad ones over the entire range from completely buried to fully exposed amino acids. This is evident when comparing the Pearson's correlation coefficients for the upper 20% of predictions sorted according to reliability. 
For this subset, values of 0
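One simple way to realize the Z-score reliability idea is from ensemble disagreement: positions where the ensemble members agree more closely than average receive a positive Z-score. This is a generic sketch of the concept only; the paper's method learns the reliability during training rather than deriving it from ensemble spread.

```python
import statistics

def reliability_z_scores(ensemble_predictions):
    """Per-position (mean prediction, reliability Z-score) from an ensemble.

    ensemble_predictions: one list per residue position, holding every
    ensemble member's predicted relative surface accessibility.
    Higher Z = members agree more closely than average = more reliable.
    """
    means = [statistics.mean(p) for p in ensemble_predictions]
    spreads = [statistics.pstdev(p) for p in ensemble_predictions]
    mu = statistics.mean(spreads)
    sigma = statistics.pstdev(spreads)
    zs = [(mu - s) / sigma if sigma else 0.0 for s in spreads]
    return list(zip(means, zs))

# Three positions: members agree at positions 0 and 2, disagree at position 1.
scores = reliability_z_scores([[0.30, 0.31, 0.29],
                               [0.50, 0.90, 0.10],
                               [0.60, 0.62, 0.58]])
```

Sorting predictions by this Z-score and re-evaluating the top 20% mirrors the comparison the abstract describes.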

  12. Exposure and food web transfer of pharmaceuticals in ospreys (Pandion haliaetus): Predictive model and empirical data

    Science.gov (United States)

    Lazarus, Rebecca S.; Rattner, Barnett A.; Du, Bowen; McGowan, Peter C.; Blazer, Vicki S.; Ottinger, Mary Ann

    2015-01-01

    The osprey (Pandion haliaetus) is a well-known sentinel of environmental contamination, yet no studies have traced pharmaceuticals through the water–fish–osprey food web. A screening-level exposure assessment was used to evaluate the bioaccumulation potential of 113 pharmaceuticals and metabolites, and an artificial sweetener in this food web. Hypothetical concentrations in water reflecting “wastewater effluent dominated” or “dilution dominated” scenarios were combined with pH-specific bioconcentration factors (BCFs) to predict uptake in fish. Residues in fish and osprey food intake rate were used to calculate the daily intake (DI) of compounds by an adult female osprey. Fourteen pharmaceuticals and a drug metabolite with a BCF greater than 100 and a DI greater than 20 µg/kg were identified as being most likely to exceed the adult human therapeutic dose (HTD). These 15 compounds were also evaluated in a 40 day cumulative dose exposure scenario using first-order kinetics to account for uptake and elimination. Assuming comparable absorption to humans, the half-lives (t1/2) for an adult osprey to reach the HTD within 40 days were calculated. For 3 of these pharmaceuticals, the estimated t1/2 in ospreys was less than that for humans, and thus an osprey might theoretically reach or exceed the HTD in 3 to 7 days. To complement the exposure model, 24 compounds were quantified in water, fish plasma, and osprey nestling plasma from 7 potentially impaired locations in Chesapeake Bay. Of the 18 analytes detected in water, 8 were found in fish plasma, but only 1 in osprey plasma (the antihypertensive diltiazem). Compared to diltiazem detection rate and concentrations in water (10/12 detects,
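The 40-day cumulative-dose scenario is standard first-order kinetics: each daily intake adds to a body burden that decays with rate k = ln 2 / t1/2. The sketch below iterates that recursion and reports when a target burden (e.g., the HTD) would first be reached; all parameter values are illustrative, not the study's exposure estimates.

```python
import math

def days_to_reach(target, daily_intake, half_life_days, max_days=40):
    """First-order accumulation: each day the burden decays by exp(-k),
    with k = ln 2 / half-life, then one daily intake is added.
    Returns the first day the burden reaches `target`, or None within max_days."""
    k = math.log(2) / half_life_days
    burden = 0.0
    for day in range(1, max_days + 1):
        burden = burden * math.exp(-k) + daily_intake
        if burden >= target:
            return day
    return None
```

The steady-state burden is daily_intake / (1 - exp(-k)), so a target above that level is never reached no matter how long the exposure; this is why the relative magnitudes of t1/2 and the HTD determine whether an osprey could reach the threshold within days.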

  13. Comparison of trial participants and open access users of a web-based physical activity intervention regarding adherence, attrition, and repeated participation.

    Science.gov (United States)

    Wanner, Miriam; Martin-Diener, Eva; Bauer, Georg; Braun-Fahrländer, Charlotte; Martin, Brian W

    2010-02-10

Web-based interventions are popular for promoting healthy lifestyles such as physical activity. However, little is known about user characteristics, adherence, attrition, and predictors of repeated participation on open access physical activity websites. The focus of this study was Active-online, a Web-based individually tailored physical activity intervention. The aims were (1) to assess and compare user characteristics and adherence to the website (a) in the open access context over time from 2003 to 2009, and (b) between trial participants and open access users; and (2) to analyze attrition and predictors of repeated use among participants in a randomized controlled trial compared with registered open access users. Data routinely recorded in the Active-online user database were used. Adherence was defined as: the number of pages viewed, the proportion of visits during which a tailored module was begun, the proportion of visits during which tailored feedback was received, and the time spent in the tailored modules. Adherence was analyzed according to six one-year periods (2003-2009) and according to the context (trial or open access) based on first visits and longest visits. Attrition and predictors of repeated participation were compared between trial participants and open access users. The number of recorded visits per year on Active-online decreased from 42,626 in 2003-2004 to 8343 in 2008-2009 (each of six one-year time periods ran from April 23 to April 22 of the following year). The mean age of users was between 38.4 and 43.1 years in all time periods and both contexts. The proportion of women increased from 49.5% in 2003-2004 to 61.3% in 2008-2009. Adherence differed between trial participants and open access users: for open access users, adherence was similar during the first and the longest visits; for trial participants, adherence was lower during the first visits and higher during the longest visits.
Of registered open access users and trial participants, 25.8% and 67.3% respectively visited Active

  14. Analysis and prediction of agricultural pest dynamics with Tiko'n, a generic tool to develop agroecological food web models

    Science.gov (United States)

    Malard, J. J.; Rojas, M.; Adamowski, J. F.; Anandaraja, N.; Tuy, H.; Melgar-Quiñonez, H.

    2016-12-01

While several well-validated crop growth models are currently widely used, very few crop pest models of the same caliber have been developed or applied, and pest models that take trophic interactions into account are even rarer. This may be due to several factors, including 1) the difficulty of representing complex agroecological food webs in a quantifiable model, and 2) the general belief that pesticides effectively remove insect pests from immediate concern. However, pests currently claim a substantial amount of harvests every year (and account for additional control costs), and the impact of insects and of their trophic interactions on agricultural crops cannot be ignored, especially in the context of changing climates and increasing pressures on crops across the globe. Unfortunately, most integrated pest management frameworks rely on very simple models (if at all), and most examples of successful agroecological management remain more anecdotal than scientifically replicable. In light of this, there is a need for validated and robust agroecological food web models that allow users to predict the response of these webs to changes in management, crops or climate, both in order to predict future pest problems under a changing climate as well as to develop effective integrated management plans. Here we present Tiko'n, a Python-based software whose API allows users to rapidly build and validate trophic web agroecological models that predict pest dynamics in the field. The programme uses a Bayesian inference approach to calibrate the models according to field data, allowing for the reuse of literature data from various sources and reducing the need for extensive field data collection. We apply the model to the coconut black-headed caterpillar (Opisina arenosella) and associated parasitoid data from Sri Lanka, showing how the modeling framework can be used to rapidly develop, calibrate and validate models that elucidate how the internal structures of food webs
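The core calibration idea — fitting a trophic-web model's parameters to field counts by Bayesian inference — can be illustrated with a toy example. This is not Tiko'n's actual API; the host-parasitoid model, field counts, and grid prior below are hypothetical stand-ins:

```python
import math

def nicholson_bailey(h0, p0, r, a, steps):
    """Classic discrete-time host-parasitoid dynamics; returns host counts per step."""
    h, p, hosts = h0, p0, []
    for _ in range(steps):
        frac_escape = math.exp(-a * p)        # fraction of hosts not parasitized
        h, p = r * h * frac_escape, h * (1 - frac_escape)
        hosts.append(h)
    return hosts

observed = [22.0, 30.5, 18.2, 9.6]            # hypothetical field counts of the pest
candidates = [0.005 * i for i in range(1, 41)]  # flat prior grid over attack rate a

def log_likelihood(a, sigma=5.0):
    """Gaussian observation error around the model's predicted host counts."""
    pred = nicholson_bailey(20.0, 5.0, r=1.6, a=a, steps=len(observed))
    return sum(-((o - m) ** 2) / (2 * sigma ** 2) for o, m in zip(observed, pred))

# Unnormalized posterior over the grid; under a flat prior the posterior mode
# is the maximum-likelihood attack rate.
posterior = {a: math.exp(log_likelihood(a)) for a in candidates}
best_a = max(posterior, key=posterior.get)
```

A production tool would use a proper sampler (e.g. MCMC) over all uncertain parameters rather than a one-dimensional grid, but the structure — simulate, score against field data, update parameter beliefs — is the same.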

  15. OGIS Access System

    Data.gov (United States)

    National Archives and Records Administration — The OGIS Access System (OAS) provides case management, stakeholder collaboration, and public communications activities including a web presence via a web portal.

  16. The unique role of lexical accessibility in predicting kindergarten emergent literacy.

    Science.gov (United States)

    Verhoeven, Ludo; van Leeuwe, Jan; Irausquin, Rosemarie; Segers, Eliane

    The goal of this longitudinal study was to examine how lexical quality predicts the emergence of literacy abilities in 169 Dutch kindergarten children before formal reading instruction has started. At the beginning of the school year, a battery of precursor measures associated with lexical quality was related to the emergence of letter knowledge and word decoding. Confirmatory factor analysis evidenced five domains related to lexical quality, i.e., vocabulary, phonological coding, phonological awareness, lexical retrieval and phonological working memory. Structural equation modeling showed that the development of letter knowledge during the year could be predicted from children's phonological awareness and lexical retrieval, and the emergence of word decoding from their phonological awareness and letter knowledge. It is concluded that it is primarily the accessibility of phonological representations in the mental lexicon that predicts the emergence of literacy in kindergarten.

  17. explICU: A web-based visualization and predictive modeling toolkit for mortality in intensive care patients.

    Science.gov (United States)

    Chen, Robert; Kumar, Vikas; Fitch, Natalie; Jagadish, Jitesh; Lifan Zhang; Dunn, William; Duen Horng Chau

    2015-01-01

    Preventing mortality in intensive care units (ICUs) has been a top priority in American hospitals. Predictive modeling has been shown to be effective in prediction of mortality based upon data from patients' past medical histories from electronic health records (EHRs). Furthermore, visualization of timeline events is imperative in the ICU setting in order to quickly identify trends in patient histories that may lead to mortality. With the increasing adoption of EHRs, a wealth of medical data is becoming increasingly available for secondary uses such as data exploration and predictive modeling. While data exploration and predictive modeling are useful for finding risk factors in ICU patients, the process is time consuming and requires a high level of computer programming ability. We propose explICU, a web service that hosts EHR data, displays timelines of patient events based upon user-specified preferences, performs predictive modeling in the back end, and displays results to the user via intuitive, interactive visualizations.

  18. AMPA: an automated web server for prediction of protein antimicrobial regions.

    Science.gov (United States)

    Torrent, Marc; Di Tommaso, Paolo; Pulido, David; Nogués, M Victòria; Notredame, Cedric; Boix, Ester; Andreu, David

    2012-01-01

AMPA is a web application for assessing the antimicrobial domains of proteins, with a focus on the design of new antimicrobial drugs. The application provides fast discovery of antimicrobial patterns in proteins that can be used to develop new peptide-based drugs against pathogens. Results are shown in a user-friendly graphical interface and can be downloaded as raw data for later examination. AMPA is freely available on the web at http://tcoffee.crg.cat/apps/ampa. The source code is also available on the web. marc.torrent@upf.edu; david.andreu@upf.edu Supplementary data are available at Bioinformatics online.

  19. Predicting successful treatment outcome of web-based self-help for problem drinkers: secondary analysis from a randomized controlled trial.

    Science.gov (United States)

    Riper, Heleen; Kramer, Jeannet; Keuken, Max; Smit, Filip; Schippers, Gerard; Cuijpers, Pim

    2008-11-22

    Web-based self-help interventions for problem drinking are coming of age. They have shown promising results in terms of cost-effectiveness, and they offer opportunities to reach out on a broad scale to problem drinkers. The question now is whether certain groups of problem drinkers benefit more from such Web-based interventions than others. We sought to identify baseline, client-related predictors of the effectiveness of Drinking Less, a 24/7, free-access, interactive, Web-based self-help intervention without therapist guidance for problem drinkers who want to reduce their alcohol consumption. The intervention is based on cognitive-behavioral and self-control principles. We conducted secondary analysis of data from a pragmatic randomized trial with follow-up at 6 and 12 months. Participants (N = 261) were adult problem drinkers in the Dutch general population with a weekly alcohol consumption above 210 g of ethanol for men or 140 g for women, or consumption of at least 60 g (men) or 40 g (women) one or more days a week over the past 3 months. Six baseline participant characteristics were designated as putative predictors of treatment response: (1) gender, (2) education, (3) Internet use competence (sociodemographics), (4) mean weekly alcohol consumption, (5) prior professional help for alcohol problems (level of problem drinking), and (6) participants' expectancies of Web-based interventions for problem drinking. Intention-to-treat (ITT) analyses, using last-observation-carried-forward (LOCF) data, and regression imputation (RI) were performed to deal with loss to follow-up. Statistical tests for interaction terms were conducted and linear regression analysis was performed to investigate whether the participants' characteristics as measured at baseline predicted positive treatment responses at 6- and 12-month follow-ups. 
At 6 months, prior help for alcohol problems predicted a small, marginally significant positive treatment outcome in the RI model only (beta = .18
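The last-observation-carried-forward (LOCF) imputation used in the intention-to-treat analysis above can be sketched in a few lines; the series of weekly alcohol units is invented for illustration:

```python
def locf(series):
    """Replace missing values (None) with the most recent observed value."""
    imputed, last = [], None
    for value in series:
        if value is not None:
            last = value
        imputed.append(last)
    return imputed

# Hypothetical weekly ethanol units at baseline and follow-ups, with dropouts.
weekly_units = [28, None, 21, None, None, 18]
completed = locf(weekly_units)   # missing follow-ups carry the last observation
```

LOCF is a conservative choice for a drinking-reduction trial: participants lost to follow-up are assumed not to have improved further, which tends to bias against finding an effect.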

  20. PCTFPeval: a web tool for benchmarking newly developed algorithms for predicting cooperative transcription factor pairs in yeast.

    Science.gov (United States)

    Lai, Fu-Jou; Chang, Hong-Tsun; Wu, Wei-Sheng

    2015-01-01

Computational identification of cooperative transcription factor (TF) pairs helps understand the combinatorial regulation of gene expression in eukaryotic cells. Many advanced algorithms have been proposed to predict cooperative TF pairs in yeast. However, it is still difficult to conduct a comprehensive and objective performance comparison of different algorithms because of lacking sufficient performance indices and adequate overall performance scores. To solve this problem, in our previous study (published in BMC Systems Biology 2014), we adopted/proposed eight performance indices and designed two overall performance scores to compare the performance of 14 existing algorithms for predicting cooperative TF pairs in yeast. Most importantly, our performance comparison framework can be applied to comprehensively and objectively evaluate the performance of a newly developed algorithm. However, to use our framework, researchers have to put in a lot of effort to construct it first. To save researchers time and effort, here we develop a web tool to implement our performance comparison framework, featuring fast data processing, a comprehensive performance comparison and an easy-to-use web interface. The developed tool is called PCTFPeval (Predicted Cooperative TF Pair evaluator), written in the PHP and Python programming languages. The friendly web interface allows users to input a list of predicted cooperative TF pairs from their algorithm and select (i) the compared algorithms among the 15 existing algorithms, (ii) the performance indices among the eight existing indices, and (iii) the overall performance scores from two possible choices. The comprehensive performance comparison results are then generated in tens of seconds and shown as both bar charts and tables. The original comparison results of each compared algorithm and each selected performance index can be downloaded as text files for further analyses. 
Allowing users to select eight existing performance indices and 15

  1. Single-centre experience with Renal PatientView, a web-based system that provides patients with access to their laboratory results.

    Science.gov (United States)

    Woywodt, Alexander; Vythelingum, Kervina; Rayner, Scott; Anderton, John; Ahmed, Aimun

    2014-10-01

    Renal PatientView (RPV) is a novel, web-based system in the UK that provides patients with access to their laboratory results, in conjunction with patient information. To study how renal patients within our centre access and use RPV. We sent out questionnaires in December 2011 to all 651 RPV users under our care. We collected information on aspects such as the frequency and timing of RPV usage, the parameters viewed by users, and the impact of RPV on their care. A total of 295 (45 %) questionnaires were returned. The predominant users of RPV were transplant patients (42 %) followed by pre-dialysis chronic kidney disease patients (37 %). Forty-two percent of RPV users accessed their results after their clinic appointments, 38 % prior to visiting the clinic. The majority of patients (76 %) had used the system to discuss treatment with their renal physician, while 20 % of patients gave permission to other members of their family to use RPV to monitor results on their behalf. Most users (78 %) reported accessing RPV on average 1-5 times/month. Most patients used RPV to monitor their kidney function, 81 % to check creatinine levels, 57 % to check potassium results. Ninety-two percent of patients found RPV easy to use and 93 % felt that overall the system helps them in taking care of their condition; 53 % of patients reported high satisfaction with RPV. Our results provide interesting insight into use of a system that gives patients web-based access to laboratory results. The fact that 20 % of patients delegate access to relatives also warrants further study. We propose that online access to laboratory results should be offered to all renal patients, although clinicians need to be mindful of the 'digital divide', i.e. part of the population that is not amenable to IT-based strategies for patient empowerment.

  2. Communication of uncertainty in hydrological predictions: a user-driven example web service for Europe

    Science.gov (United States)

    Fry, Matt; Smith, Katie; Sheffield, Justin; Watts, Glenn; Wood, Eric; Cooper, Jon; Prudhomme, Christel; Rees, Gwyn

    2017-04-01

Water is fundamental to society as it impacts on all facets of life, the economy and the environment. But whilst it creates opportunities for growth and life, it can also cause serious damage to society and infrastructure through extreme hydro-meteorological events such as floods or droughts. Anticipation of future water availability and extreme event risks would both help optimise growth and limit damage through better preparedness and planning, hence providing huge societal benefits. Recent scientific research advances make it now possible to provide hydrological outlooks at monthly to seasonal lead time, and future projections up to the end of the century accounting for climatic changes. However, high uncertainty remains in the predictions, which varies depending on location, time of the year, prediction range and hydrological variable. It is essential that this uncertainty is fully understood by decision makers so they can account for it in their planning. Hence, the challenge is to find ways to communicate such uncertainty to a range of stakeholders with different technical backgrounds and environmental science knowledge. The project EDgE (End-to-end Demonstrator for improved decision making in the water sector for Europe) funded by the Copernicus programme (C3S) is a proof-of-concept project that develops a unique service to support decision making for the water sector at monthly to seasonal and to multi-decadal lead times. It is a mutual effort of co-production between hydrologists and environmental modellers, computer scientists and stakeholders representative of key decision makers in Europe for the water sector. This talk will present the iterative co-production process of a web service that serves the needs of the user community. Through a series of Focus Group meetings in Spain, Norway and the UK, options for visualising the hydrological predictions and associated uncertainties are presented and discussed first as mock-up dash boards, off-line tools

  3. Potential impacts of ocean acidification on the Puget Sound food web from a model study (NCEI Accession 0134852)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This archival package contains output from a study designed to evaluate the impacts of ocean acidification on the food web of Puget Sound, a large estuary in the...

  4. Dynamic Science Data Services for Display, Analysis and Interaction in Widely-Accessible, Web-Based Geospatial Platforms, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — TerraMetrics, Inc., proposes an SBIR Phase I R/R&D program to investigate and develop a key web services architecture that provides data processing, storage and...

  5. Open Access Platforms in Spinal Cord Injury: Existing Clinical Trial Data to Predict and Improve Outcomes.

    Science.gov (United States)

    Kramer, John L K; Geisler, Fred; Ramer, Leanne; Plunet, Ward; Cragg, Jacquelyn J

    2017-05-01

    Recovery from acute spinal cord injury (SCI) is characterized by extensive heterogeneity, resulting in uncertain prognosis. Reliable prediction of recovery in the acute phase benefits patients and their families directly, as well as improves the likelihood of detecting efficacy in clinical trials. This issue of heterogeneity is not unique to SCI. In fields such as traumatic brain injury, Parkinson's disease, and amyotrophic lateral sclerosis, one approach to understand variability in recovery has been to make clinical trial data widely available to the greater research community. We contend that the SCI community should adopt a similar approach in providing open access clinical trial data.

  6. The ATS Web Page Provides "Tool Boxes" for: Access Opportunities, Performance, Interfaces, Volume, Environments, "Wish List" Entry and Educational Outreach

    Science.gov (United States)

    1999-01-01

    This viewgraph presentation gives an overview of the Access to Space website, including information on the 'tool boxes' available on the website for access opportunities, performance, interfaces, volume, environments, 'wish list' entry, and educational outreach.

  7. Age-related differences in the accuracy of web query-based predictions of influenza-like illness.

    Directory of Open Access Journals (Sweden)

    Alexander Domnich

Full Text Available Web queries are now widely used for modeling, nowcasting and forecasting influenza-like illness (ILI). However, given that ILI attack rates vary significantly across ages, in terms of both magnitude and timing, little is known about whether the association between ILI morbidity and ILI-related queries is comparable across different age-groups. The present study aimed to investigate features of the association between ILI morbidity and ILI-related query volume from the perspective of age. Since Google Flu Trends is unavailable in Italy, Google Trends was used to identify entry terms that correlated highly with official ILI surveillance data. All-age and age-class-specific modeling was performed by means of linear models with generalized least-square estimation. Hold-out validation was used to quantify prediction accuracy. For purposes of comparison, predictions generated by exponential smoothing were computed. Five search terms showed high correlation coefficients of > .6. In comparison with exponential smoothing, the all-age query-based model correctly predicted the peak time and yielded a higher correlation coefficient with observed ILI morbidity (.978 vs. .929). However, query-based prediction of ILI morbidity was associated with a greater error. Age-class-specific query-based models varied significantly in terms of prediction accuracy. In the 0-4 and 25-44-year age-groups, these did well and outperformed exponential smoothing predictions; in the 15-24 and ≥ 65-year age-classes, however, the query-based models were inaccurate and highly overestimated peak height. In all but one age-class, peak timing predicted by the query-based models coincided with observed timing. The accuracy of web query-based models in predicting ILI morbidity rates could differ among ages. Greater age-specific detail may be useful in flu query-based studies in order to account for age-specific features of the epidemiology of ILI.
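The query-based modeling approach — regressing ILI morbidity on search-query volume — can be illustrated minimally. The study fit linear models with generalized least-squares estimation; this simplified sketch uses ordinary least squares on invented weekly data and makes no claim to the paper's actual coefficients:

```python
# Hypothetical weekly data: relative search interest vs. ILI consultations/1000.
query_volume = [12.0, 18.0, 25.0, 40.0, 33.0, 20.0]
ili_rate     = [1.1, 1.6, 2.3, 3.9, 3.1, 1.9]

def fit_line(x, y):
    """Ordinary least-squares intercept and slope for y = a + b*x."""
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    sxx = sum((xi - mean_x) ** 2 for xi in x)
    sxy = sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y))
    slope = sxy / sxx
    return mean_y - slope * mean_x, slope

intercept, slope = fit_line(query_volume, ili_rate)

def predict_ili(volume):
    """Nowcast ILI morbidity from the current query volume."""
    return intercept + slope * volume
```

Generalized least squares extends this by modeling the autocorrelated errors typical of weekly surveillance series, which matters for the hold-out prediction accuracy the study reports.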

  8. Prediction of the behavior of reinforced concrete deep beams with web openings using the finite ele

    Directory of Open Access Journals (Sweden)

    Ashraf Ragab Mohamed

    2014-06-01

Full Text Available The exact analysis of reinforced concrete deep beams is a complex problem and the presence of web openings aggravates the situation. However, no code provision exists for the analysis of deep beams with web openings. The code-implemented strut and tie models are debatable and no unique solution using these models is available. In this study, the finite element method is utilized to study the behavior of reinforced concrete deep beams with and without web openings. Furthermore, the effect of the reinforcement distribution on the beam overall capacity has been studied and compared to the Egyptian code guidelines. The damaged plasticity model has been used for the analysis. Models of simply supported deep beams under 3- and 4-point bending and continuous deep beams with and without web openings have been analyzed. Model verification has shown good agreement with experimental work in the literature. Results of the parametric analysis have shown that web openings crossing the expected compression struts should be avoided, and the depth of the opening should not exceed 20% of the beam overall depth. The reinforcement distribution should be in the range of 0.1–0.2 beam depth for simply supported deep beams.

  9. The EarthScope Array Network Facility: application-driven low-latency web-based tools for accessing high-resolution multi-channel waveform data

    Science.gov (United States)

    Newman, R. L.; Lindquist, K. G.; Clemesha, A.; Vernon, F. L.

    2008-12-01

Since April 2004 the EarthScope USArray seismic network has grown to over 400 broadband stations that stream multi-channel data in near real-time to the Array Network Facility in San Diego. Providing secure, yet open, access to real-time and archived data for a broad range of audiences is best served by a series of platform-agnostic low-latency web-based applications. We present a framework of tools that interface between the world wide web and Boulder Real Time Technologies Antelope Environmental Monitoring System data acquisition and archival software. These tools provide audiences ranging from network operators and geoscience researchers, to funding agencies and the general public, with comprehensive information about the experiment. This ranges from network-wide to station-specific metadata, state-of-health metrics, event detection rates, archival data and dynamic report generation over a station's two-year life span. Leveraging open source website development frameworks for both the server side (Perl, Python and PHP) and client side (Flickr, Google Maps/Earth and jQuery) facilitates the development of a robust extensible architecture that can be tailored on a per-user basis, with rapid prototyping and development that adheres to web standards.

  10. Interoperative fundus image and report sharing in compliance with integrating the healthcare enterprise conformance and web access to digital imaging and communication in medicine persistent object protocol

    Directory of Open Access Journals (Sweden)

    Hui-Qun Wu

    2013-12-01

Full Text Available AIM: To address issues in interoperability between different fundus image systems, we proposed a web eye-picture archiving and communication system (PACS) framework in conformance with digital imaging and communication in medicine (DICOM) and health level 7 (HL7) protocols to realize fundus image and report sharing and communication through the internet. METHODS: Firstly, a telemedicine-based eye care work flow was established based on the integrating the healthcare enterprise (IHE) Eye Care technical framework. Then, a browser/server architecture eye-PACS system was established in conformance with the web access to DICOM persistent object (WADO) protocol, which contains three tiers. RESULTS: In any client system with a web browser installed, clinicians could log in to the eye-PACS to observe fundus images and reports. A structured report, saved as pdf/html (MIME type) with a reference link to the relevant fundus image using the WADO syntax, could provide enough information for clinicians. Some functions provided by open-source Oviyam could be used to query, zoom, move, measure, and view DICOM fundus images. CONCLUSION: Such a web eye-PACS in compliance with the WADO protocol could be used to store and communicate fundus images and reports, and is therefore of great significance for teleophthalmology.

  11. Interoperative fundus image and report sharing in compliance with integrating the healthcare enterprise conformance and web access to digital imaging and communication in medicine persistent object protocol.

    Science.gov (United States)

    Wu, Hui-Qun; Lv, Zheng-Min; Geng, Xing-Yun; Jiang, Kui; Tang, Le-Min; Zhou, Guo-Min; Dong, Jian-Cheng

    2013-01-01

To address issues in interoperability between different fundus image systems, we proposed a web eye-picture archiving and communication system (PACS) framework in conformance with digital imaging and communication in medicine (DICOM) and health level 7 (HL7) protocols to realize fundus image and report sharing and communication through the internet. Firstly, a telemedicine-based eye care work flow was established based on the integrating the healthcare enterprise (IHE) Eye Care technical framework. Then, a browser/server architecture eye-PACS system was established in conformance with the web access to DICOM persistent object (WADO) protocol, which contains three tiers. In any client system with a web browser installed, clinicians could log in to the eye-PACS to observe fundus images and reports. A structured report, saved as pdf/html (MIME type) with a reference link to the relevant fundus image using the WADO syntax, could provide enough information for clinicians. Some functions provided by open-source Oviyam could be used to query, zoom, move, measure, and view DICOM fundus images. Such a web eye-PACS in compliance with the WADO protocol could be used to store and communicate fundus images and reports, and is therefore of great significance for teleophthalmology.
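A WADO-URI request of the kind this eye-PACS relies on identifies one DICOM persistent object by study, series, and object UID passed as query parameters. A minimal sketch, with a placeholder server host and made-up UIDs:

```python
from urllib.parse import urlencode

def wado_uri(base, study_uid, series_uid, object_uid,
             content_type="application/dicom"):
    """Build a WADO-URI GET URL for a single DICOM object (e.g. a fundus image)."""
    params = {
        "requestType": "WADO",        # required literal per the WADO specification
        "studyUID": study_uid,
        "seriesUID": series_uid,
        "objectUID": object_uid,
        "contentType": content_type,  # e.g. image/jpeg for in-browser viewing
    }
    return base + "?" + urlencode(params)

# Placeholder host and UIDs, for illustration only.
url = wado_uri("http://pacs.example.org/wado",
               "1.2.840.113619.2.1",
               "1.2.840.113619.2.1.1",
               "1.2.840.113619.2.1.1.1")
```

Requesting `contentType=image/jpeg` instead of `application/dicom` is what lets a plain web browser in the client tier render the image without DICOM-aware software.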

  12. CID-miRNA: A web server for prediction of novel miRNA precursors in human genome

    International Nuclear Information System (INIS)

    Tyagi, Sonika; Vaz, Candida; Gupta, Vipin; Bhatia, Rohit; Maheshwari, Sachin; Srinivasan, Ashwin; Bhattacharya, Alok

    2008-01-01

    microRNAs (miRNA) are a class of non-protein coding functional RNAs that are thought to regulate expression of target genes by direct interaction with mRNAs. miRNAs have been identified through both experimental and computational methods in a variety of eukaryotic organisms. Though these approaches have been partially successful, there is a need to develop more tools for detection of these RNAs as they are also thought to be present in abundance in many genomes. In this report we describe a tool and a web server, named CID-miRNA, for identification of miRNA precursors in a given DNA sequence, utilising secondary structure-based filtering systems and an algorithm based on stochastic context free grammar trained on human miRNAs. CID-miRNA analyses a given sequence using a web interface, for presence of putative miRNA precursors and the generated output lists all the potential regions that can form miRNA-like structures. It can also scan large genomic sequences for the presence of potential miRNA precursors in its stand-alone form. The web server can be accessed at (http://mirna.jnu.ac.in/cidmirna/)

  13. The SubCons webserver: A user friendly web interface for state-of-the-art subcellular localization prediction.

    Science.gov (United States)

    Salvatore, M; Shu, N; Elofsson, A

    2018-01-01

SubCons is a recently developed method that predicts the subcellular localization of a protein. It combines predictions from four predictors using a Random Forest classifier. Here, we present the user-friendly web-interface implementation of SubCons. Starting from a protein sequence, the server rapidly predicts the subcellular localization of an individual protein. In addition, the server accepts the submission of sets of proteins, either by uploading files or programmatically using command-line WSDL API scripts. This makes SubCons ideal for proteome-wide analyses, allowing the user to scan a whole proteome in a few days. From the web page, it is also possible to download precalculated predictions for several eukaryotic organisms. To evaluate the performance of SubCons, we present a benchmark of LocTree3 and SubCons using two recent mass-spectrometry-based datasets of mouse and drosophila proteins. The server is available at http://subcons.bioinfo.se/. © 2017 The Protein Society.

  14. Prediction of highly expressed genes in microbes based on chromatin accessibility

    Directory of Open Access Journals (Sweden)

    Ussery David W

    2007-02-01

Full Text Available Background: It is well known that gene expression is dependent on chromatin structure in eukaryotes and it is likely that chromatin can play a role in bacterial gene expression as well. Here, we use a nucleosomal position preference measure of anisotropic DNA flexibility to predict highly expressed genes in microbial genomes. We compare these predictions with those based on codon adaptation index (CAI) values, and also with experimental data for 6 different microbial genomes, with a particular interest in experimental data from Escherichia coli. Moreover, position preference is examined further in 328 sequenced microbial genomes. Results: We find that absolute gene expression levels are correlated with the position preference in many microbial genomes. It is postulated that in these regions, the DNA may be more accessible to the transcriptional machinery. Moreover, ribosomal proteins and ribosomal RNA are encoded by DNA having significantly lower position preference values than other genes in fast-replicating microbes. Conclusion: This insight into DNA structure-dependent gene expression in microbes may be exploited for predicting the expression of non-translated genes such as non-coding RNAs that may not be predicted by any of the conventional codon usage bias approaches.

  15. Prognostic table for predicting major cardiac events based on J-ACCESS investigation

    International Nuclear Information System (INIS)

    Nakajima, Kenichi; Nishimura, Tsunehiko

    2008-01-01

    The event risk of patients with coronary heart disease may be estimated by a large-scale prognostic database in a Japanese population. The aim of this study was to create a heart risk table for predicting the major cardiac event rate. Using the Japanese-assessment of cardiac event and survival study (J-ACCESS) database, created by a prognostic investigation involving 117 hospitals and >4000 patients in Japan, multivariate logistic regression analysis was performed. The major event rate over a 3-year period, comprising cardiac death, non-fatal myocardial infarction, and severe heart failure requiring hospitalization, was predicted by the logistic regression equation. The algorithm for calculating the event rate was simplified for creating tables. Two tables were created to calculate cardiac risk by age, perfusion score category, and ejection fraction, with and without the presence of diabetes. A relative risk table comparing age-matched control subjects was also made. When the simplified tables were compared with the results from the original logistic regression analysis, both risk values and relative risks agreed well (P<0.0001 for both). The Heart Risk Table was created for patients suspected of having ischemic heart disease who underwent myocardial perfusion gated single-photon emission computed tomography. Risk assessment using the J-ACCESS database should be confirmed in a future study. (author)
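
The record describes predicting a 3-year event rate from a logistic regression over age, perfusion score category, ejection fraction, and diabetes status. The sketch below shows only the shape of that calculation; all coefficients are invented placeholders, since the actual J-ACCESS regression coefficients are not reproduced in this record.

```python
import math

# Shape of a logistic event-rate calculation like the one underlying the
# Heart Risk Table. Every coefficient here is an invented placeholder;
# the real J-ACCESS coefficients must be taken from the published study.

COEF = {
    "intercept": -6.0,   # placeholder
    "age": 0.04,         # per year (placeholder)
    "sss": 0.08,         # per summed-stress-score point (placeholder)
    "ef": -0.03,         # per % ejection fraction (placeholder)
    "dm": 0.7,           # diabetes present (placeholder)
}

def three_year_event_rate(age, summed_stress_score, ejection_fraction, diabetes):
    """Predicted 3-year major cardiac event probability via the logistic link."""
    z = (COEF["intercept"]
         + COEF["age"] * age
         + COEF["sss"] * summed_stress_score
         + COEF["ef"] * ejection_fraction
         + COEF["dm"] * (1 if diabetes else 0))
    return 1.0 / (1.0 + math.exp(-z))
```

Tabulating this function over coarse bins of age, perfusion category, and ejection fraction is precisely the simplification step that turns the regression into a printable risk table.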

  16. Predicting Plant-Accessible Water in the Critical Zone: Mountain Ecosystems in a Mediterranean Climate

    Science.gov (United States)

    Klos, P. Z.; Goulden, M.; Riebe, C. S.; Tague, C.; O'Geen, A. T.; Flinchum, B. A.; Safeeq, M.; Conklin, M. H.; Hart, S. C.; Asefaw Berhe, A.; Hartsough, P. C.; Holbrook, S.; Bales, R. C.

    2017-12-01

    Enhanced understanding of subsurface water storage, and the below-ground architecture and processes that create it, will advance our ability to predict how the impacts of climate change - including drought, forest mortality, wildland fire, and strained water security - will take form in the decades to come. Previous research has examined the importance of plant-accessible water in soil, but in upland landscapes within Mediterranean climates the soil is often only the upper extent of subsurface water storage. We draw insights from both this previous research and a case study of the Southern Sierra Critical Zone Observatory to: define attributes of subsurface storage, review observed patterns in its distribution, highlight nested methods for its estimation across scales, and showcase the fundamental processes controlling its formation. We observe that forest ecosystems at our sites subsist on lasting plant-accessible stores of subsurface water during the summer dry period and during multi-year droughts. This indicates that trees in these forest ecosystems are rooted deeply in the weathered, highly porous saprolite, which reaches up to 10-20 m beneath the surface. This confirms the importance of large volumes of subsurface water in supporting ecosystem resistance to climate and landscape change across a range of spatiotemporal scales. This research enhances the ability to predict the extent of deep subsurface storage across landscapes, aiding the advancement of both critical zone science and the management of natural resources emanating from similar mountain ecosystems worldwide.

  17. Chromatin accessibility prediction via convolutional long short-term memory networks with k-mer embedding.

    Science.gov (United States)

    Min, Xu; Zeng, Wanwen; Chen, Ning; Chen, Ting; Jiang, Rui

    2017-07-15

    Experimental techniques for measuring chromatin accessibility are expensive and time consuming, motivating the development of computational approaches that predict open chromatin regions from DNA sequences. Along this direction, existing methods fall into two classes: one based on handcrafted k-mer features and the other based on convolutional neural networks. Although both categories have shown good performance in specific applications thus far, a comprehensive framework integrating useful k-mer co-occurrence information with recent advances in deep learning is still lacking. We fill this gap by addressing the problem of chromatin accessibility prediction with a convolutional Long Short-Term Memory (LSTM) network with k-mer embedding. We first split DNA sequences into k-mers and pre-train k-mer embedding vectors based on the co-occurrence matrix of k-mers by using an unsupervised representation learning approach. We then construct a supervised deep learning architecture comprised of an embedding layer, three convolutional layers and a Bidirectional LSTM (BLSTM) layer for feature learning and classification. We demonstrate that our method gains high-quality fixed-length features from variable-length sequences and consistently outperforms baseline methods. We show that k-mer embedding can effectively enhance model performance by exploring different embedding strategies. We also prove the efficacy of both the convolution and the BLSTM layers by comparing two variations of the network architecture. We confirm the robustness of our model to hyper-parameters by performing sensitivity analysis. We hope our method can eventually reinforce our understanding of employing deep learning in genomic studies and shed light on research regarding mechanisms of chromatin accessibility. The source code can be downloaded from https://github.com/minxueric/ismb2017_lstm . tingchen@tsinghua.edu.cn or ruijiang@tsinghua.edu.cn. Supplementary materials are available at
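
The first preprocessing step described above is splitting DNA sequences into overlapping k-mers and building a k-mer co-occurrence matrix for the embedding pre-training. A generic sketch of that tokenization and counting step is shown below; it follows the standard approach and is not the authors' actual code (their implementation is in the linked repository).

```python
from collections import Counter

def kmers(seq, k):
    """Overlapping k-mers of a DNA sequence (stride 1)."""
    return [seq[i:i + k] for i in range(len(seq) - k + 1)]

def cooccurrence(seq, k, window):
    """Count unordered k-mer pairs appearing within `window` tokens of
    each other. Aggregating these counts over a corpus of sequences
    yields the co-occurrence matrix used to pre-train embeddings."""
    tokens = kmers(seq, k)
    counts = Counter()
    for i, a in enumerate(tokens):
        for j in range(i + 1, min(i + 1 + window, len(tokens))):
            counts[tuple(sorted((a, tokens[j])))] += 1
    return counts
```

The resulting counts play the role that word co-occurrence statistics play in GloVe-style word embeddings: k-mers that frequently co-occur get nearby embedding vectors, which the downstream convolutional/BLSTM layers then consume.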

  18. Prospects of Genomic Prediction in the USDA Soybean Germplasm Collection: Historical Data Creates Robust Models for Enhancing Selection of Accessions

    Directory of Open Access Journals (Sweden)

    Diego Jarquin

    2016-08-01

    Full Text Available The identification and mobilization of useful genetic variation from germplasm banks for use in breeding programs is critical for future genetic gain and protection against crop pests. Plummeting costs of next-generation sequencing and genotyping are revolutionizing the way in which researchers and breeders interface with plant germplasm collections. An example of this is the high-density genotyping of the entire USDA Soybean Germplasm Collection. We assessed the usefulness of 50K single nucleotide polymorphism data collected on 18,480 domesticated soybean (Glycine max) accessions and vast historical phenotypic data for developing genomic prediction models for protein, oil, and yield. The resulting genomic prediction models explained an appreciable amount of the variation in accession performance in independent validation trials, with correlations between predicted and observed values reaching up to 0.92 for oil and protein and 0.79 for yield. The optimization of training set design was explored using a series of cross-validation schemes. First, it was found that the target population and environment need to be well represented in the training set. Second, genomic prediction training sets appear to be robust to the presence of data from diverse geographical locations and genetic clusters. This finding, however, depends on the influence of shattering and lodging, and may be specific to soybean with its maturity groups. The distribution of 7608 nonphenotyped accessions was examined through the application of genomic prediction models. The distribution of predictions for phenotyped accessions was representative of the distribution of predictions for nonphenotyped accessions, with no nonphenotyped accessions predicted to fall far outside the range of predictions for phenotyped accessions.

  19. PredMP: A Web Resource for Computationally Predicted Membrane Proteins via Deep Learning

    KAUST Repository

    Wang, Sheng; Fei, Shiyang; Zongan, Wang; Li, Yu; Zhao, Feng; Gao, Xin

    2018-01-01

    structures in Protein Data Bank (PDB). To elucidate the MP structures computationally, we developed a novel web resource, denoted as PredMP (http://52.87.130.56:3001/#/proteinindex), that delivers one-dimensional (1D) annotation of the membrane topology

  20. Factors Predicting Pre-Service Teachers' Adoption of Web 2.0 Technologies

    Science.gov (United States)

    Cheon, Jongpil; Coward, Fanni; Song, Jaeki; Lim, Sunho

    2012-01-01

    Classrooms full of "digital natives" represent the norm in U. S. schools, but like their predecessors, they mostly inhabit spaces characterized by a traditional view of teaching and learning. Understanding contributors to this mismatch, and especially teachers' role, is especially critical as Web 2.0 technologies enable greater learner…

  1. Medical high-resolution image sharing and electronic whiteboard system: A pure-web-based system for accessing and discussing lossless original images in telemedicine.

    Science.gov (United States)

    Qiao, Liang; Li, Ying; Chen, Xin; Yang, Sheng; Gao, Peng; Liu, Hongjun; Feng, Zhengquan; Nian, Yongjian; Qiu, Mingguo

    2015-09-01

    There are various medical image sharing and electronic whiteboard systems available for diagnosis and discussion purposes. However, most of these systems ask clients to install special software tools or web plug-ins to support whiteboard discussion, special medical image formats, and customized decoding algorithms for transmission of HRIs (high-resolution images). This limits the accessibility of the software on different devices and operating systems. In this paper, we propose a solution based on pure web pages for lossless sharing and e-whiteboard discussion of medical HRIs, and have set up a medical HRI sharing and e-whiteboard system with a four-layered design: (1) HRI access layer: we improved a tile-pyramid model, named unbalanced ratio pyramid structure (URPS), to rapidly share lossless HRIs and to adapt to the reading habits of users; (2) format conversion layer: we designed a format conversion engine (FCE) on the server side to convert and cache, in real time, the DICOM tiles that clients request with window-level parameters, keeping browsers compatible and maintaining server-client response efficiency; (3) business logic layer: we built an XML behavior-relationship storage structure to store and share users' behavior, supporting real-time co-browsing and discussion between clients; (4) web-user-interface layer: AJAX technology and the Raphael toolkit were used to combine HTML and JavaScript to build a client RIA (rich Internet application), providing clients desktop-like interaction on any pure web page. This system can be used to quickly browse lossless HRIs, and supports smooth discussion and co-browsing in any web browser in a diversified network environment. The proposed methods provide a way to share HRIs safely, and may be used in the fields of regional health, telemedicine and remote education at low cost. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
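
The format conversion layer described above turns raw DICOM pixel values into browser-displayable tiles according to client-supplied window-level parameters. A minimal sketch of the usual linear window/level transform is given below; note this is the textbook simplification, and the DICOM standard's VOI LUT definition (PS3.3 C.11.2.1.2) uses a slightly different off-by-half formulation.

```python
def window_level_to_8bit(pixel, window_center, window_width):
    """Map a raw DICOM pixel value to 0-255 using the common linear
    window/level transform; values outside the window are clipped.

    This is the simplified textbook form, not the exact DICOM PS3.3
    C.11.2.1.2 formula (which shifts center and width by 0.5 and 1).
    """
    low = window_center - window_width / 2.0
    high = window_center + window_width / 2.0
    if pixel <= low:
        return 0
    if pixel >= high:
        return 255
    return int(round((pixel - low) / window_width * 255.0))
```

A server-side engine like the FCE would apply this mapping to every pixel of a requested tile (vectorized, in practice) and emit a standard 8-bit image format such as PNG that any browser can display without plug-ins.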

  2. Integrated web-based viewing and secure remote access to a clinical data repository and diverse clinical systems.

    Science.gov (United States)

    Duncan, R G; Saperia, D; Dulbandzhyan, R; Shabot, M M; Polaschek, J X; Jones, D T

    2001-01-01

    The advent of the World-Wide-Web protocols and client-server technology has made it easy to build low-cost, user-friendly, platform-independent graphical user interfaces to health information systems and to integrate the presentation of data from multiple systems. The authors describe a Web interface for a clinical data repository (CDR) that was moved from concept to production status in less than six months using a rapid prototyping approach, multi-disciplinary development team, and off-the-shelf hardware and software. The system has since been expanded to provide an integrated display of clinical data from nearly 20 disparate information systems.

  3. PMT Dark Noise Monitoring System for Neutrino Detector Borexino Based on the Devicenet Protocol and WEB-Access

    International Nuclear Information System (INIS)

    Chepurnov, A.S.; Orekhov, D.I.; Maimistov, D.A.; Sabelnikov, A.A.; Etenko, A.V.

    2006-01-01

    Monitoring of PMT dark noise in the neutrino detector BOREXINO is a procedure that indicates the condition of the detector. Based on an industrial CAN network, the top-level DeviceNet protocol, and web visualization, a dark-noise monitoring system with 256 channels for the internal detector and the external muon veto was created. The system is composed of a set of controllers that convert the PMT signals to frequency and transmit them over the CAN network. The software is a stack of DeviceNet protocols providing data collection and transport. Server-side scripts build web pages for the user interface and graphical visualization of the data

  4. High Availability Applications for NOMADS at the NOAA Web Operations Center Aimed at Providing Reliable Real Time Access to Operational Model Data

    Science.gov (United States)

    Alpert, J. C.; Rutledge, G.; Wang, J.; Freeman, P.; Kang, C. Y.

    2009-05-01

    The NOAA Operational Modeling Archive Distribution System (NOMADS) is now delivering high availability services as part of NOAA's official real time data dissemination at its Web Operations Center (WOC). The WOC is a web service used by all organizational units in NOAA and acts as a data repository where public information can be posted to a secure and scalable content server. A goal is to foster collaborations among the research and education communities, value added retailers, and public access for science and development efforts aimed at advancing modeling and GEO-related tasks. The services used to access the operational model data output are the Open-source Project for a Network Data Access Protocol (OPeNDAP), implemented with the Grid Analysis and Display System (GrADS) Data Server (GDS), and applications for slicing, dicing and area sub-setting the large matrix of real time model data holdings. This approach ensures an efficient use of computer resources because users transmit/receive only the data necessary for their tasks, including metadata. Data sets served in this way with a high availability server offer vast possibilities for the creation of new products for value added retailers and the scientific community. New applications to access data and observations for verification of gridded model output, and progress toward integration with access to conventional and non-conventional observations, will be discussed. We will demonstrate how users can use NOMADS services to repackage area subsets, either as repackaged GRIB2 files or as values selected by ensemble component, (forecast) time, vertical level, global horizontal location, and variable, virtually a six-dimensional analysis service across the Internet.
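
The slicing and sub-setting described above is expressed, in OPeNDAP, as a constraint expression appended to the dataset URL: each dimension gets a `[start:stop]` index hyperslab. The sketch below builds such a URL; the dataset path and variable name are hypothetical stand-ins, not guaranteed to match any live NOMADS endpoint.

```python
def opendap_subset_url(base, variable, **dims):
    """Build an OPeNDAP constraint-expression URL selecting index ranges.

    `dims` maps dimension name -> (start, stop) index pair; the hyperslab
    order follows the keyword-argument order and must match the variable's
    dimension order in the dataset.
    """
    slabs = "".join(f"[{start}:{stop}]" for start, stop in dims.values())
    return f"{base}.ascii?{variable}{slabs}"

# Usage: one forecast time, a lat/lon box, for a 2-m temperature field.
# The dataset path and variable name below are hypothetical examples.
url = opendap_subset_url(
    "http://nomads.ncep.noaa.gov/dods/gfs",
    "tmp2m", time=(0, 0), lat=(100, 140), lon=(200, 260),
)
```

Because only the requested hyperslab is transmitted, a client pays for exactly the subset it needs, which is the efficiency property the abstract highlights.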

  5. Omics AnalySIs System for PRecision Oncology (OASISPRO): A Web-based Omics Analysis Tool for Clinical Phenotype Prediction.

    Science.gov (United States)

    Yu, Kun-Hsing; Fitzpatrick, Michael R; Pappas, Luke; Chan, Warren; Kung, Jessica; Snyder, Michael

    2017-09-12

    Precision oncology is an approach that accounts for individual differences to guide cancer management. Omics signatures have been shown to predict clinical traits for cancer patients. However, the vast amount of omics information poses an informatics challenge in systematically identifying patterns associated with health outcomes, and no general-purpose data-mining tool exists for physicians, medical researchers, and citizen scientists without significant training in programming and bioinformatics. To bridge this gap, we built the Omics AnalySIs System for PRecision Oncology (OASISPRO), a web-based system to mine the quantitative omics information from The Cancer Genome Atlas (TCGA). This system effectively visualizes patients' clinical profiles, executes machine-learning algorithms of choice on the omics data, and evaluates the prediction performance using held-out test sets. With this tool, we successfully identified genes strongly associated with tumor stage, and accurately predicted patients' survival outcomes in many cancer types, including mesothelioma and adrenocortical carcinoma. By identifying the links between omics and clinical phenotypes, this system will facilitate omics studies on precision cancer medicine and contribute to establishing personalized cancer treatment plans. This web-based tool is available at http://tinyurl.com/oasispro; source code is available at http://tinyurl.com/oasisproSourceCode. © The Author (2017). Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com

  6. 48 CFR 311.7001 - Section 508 accessibility standards for HHS Web site content and communications materials.

    Science.gov (United States)

    2010-10-01

    ..., documents, charts, posters, presentations (such as Microsoft PowerPoint), or video material that is specifically intended for publication on, or delivery via, an HHS-owned or -funded Web site, the Project... standards, and resolve any related issues. (c) Based on those discussions, the Project Officer shall provide...

  7. A Distributed Web-based Solution for Ionospheric Model Real-time Management, Monitoring, and Short-term Prediction

    Science.gov (United States)

    Kulchitsky, A.; Maurits, S.; Watkins, B.

    2006-12-01

    With the widespread availability of the Internet today, many people can monitor various scientific research activities. It is important to accommodate this interest by providing on-line access to dynamic and illustrative Web resources that demonstrate different aspects of ongoing research. It is especially important to explain these research activities to high school and undergraduate students, thereby providing more information for making decisions concerning their future studies. Such Web resources are also important for clarifying scientific research for the general public, in order to achieve better awareness of research progress in various fields. Particularly rewarding is dissemination of information about ongoing projects within universities and research centers to their local communities. The benefits of this type of scientific outreach are mutual, since development of Web-based automatic systems is a prerequisite for many research projects targeting real-time monitoring and/or modeling of natural conditions. Continuous operation of such systems provides ongoing research opportunities for statistically massive validation of the models as well. We have developed a Web-based system to run the University of Alaska Fairbanks Polar Ionospheric Model in real time. This model makes use of networking and computational resources at the Arctic Region Supercomputing Center. This system was designed to be portable among various operating systems and computational resources. Its components can be installed across different computers, separating Web servers and computational engines. The core of the system is a Real-Time Management module (RMM) written in Python, which facilitates interactions of remote input data transfers, the ionospheric model runs, MySQL database filling, and PHP scripts for the Web-page preparations. The RMM downloads current geophysical inputs as soon as they become available at different on-line depositories.
This information is processed to

  8. ProBiS tools (algorithm, database, and web servers) for predicting and modeling of biologically interesting proteins.

    Science.gov (United States)

    Konc, Janez; Janežič, Dušanka

    2017-09-01

    ProBiS (Protein Binding Sites) Tools consist of algorithm, database, and web servers for prediction of binding sites and protein ligands based on the detection of structurally similar binding sites in the Protein Data Bank. In this article, we review the operations that ProBiS Tools perform, provide comments on the evolution of the tools, and give some implementation details. We review some of its applications to biologically interesting proteins. ProBiS Tools are freely available at http://probis.cmm.ki.si and http://probis.nih.gov. Copyright © 2017 Elsevier Ltd. All rights reserved.

  9. Web-Based Predictive Analytics to Improve Patient Flow in the Emergency Department

    Science.gov (United States)

    Buckler, David L.

    2012-01-01

    The Emergency Department (ED) simulation project was established to demonstrate how requirements-driven analysis and process simulation can help improve the quality of patient care for the Veterans Health Administration's (VHA) Veterans Affairs Medical Centers (VAMC). This project developed a web-based simulation prototype of patient flow in EDs, validated the performance of the simulation against operational data, and documented IT requirements for the ED simulation.

  10. An Adaptive Medium Access Parameter Prediction Scheme for IEEE 802.11 Real-Time Applications

    Directory of Open Access Journals (Sweden)

    Estefanía Coronado

    2017-01-01

    Full Text Available Multimedia communications have experienced an unprecedented growth, due mainly to the increase in content quality and the emergence of smart devices. The demand for this content is tending towards wireless technologies. However, these transmissions are quite sensitive to network delays. Therefore, ensuring an optimum QoS level becomes of great importance. The IEEE 802.11e amendment was released to address the lack of QoS capabilities in the original IEEE 802.11 standard. Accordingly, the Enhanced Distributed Channel Access (EDCA) function was introduced, allowing traffic streams to be differentiated through a group of Medium Access Control (MAC) parameters. Although EDCA recommends a default configuration for these parameters, it has been shown to be suboptimal in many scenarios. In this work, a dynamic prediction scheme for these parameters is presented. This approach ensures appropriate traffic differentiation while maintaining compatibility with stations without QoS support. As the APs are the only devices that use this algorithm, no changes are required to current network cards. The results show that, compared with default EDCA, the proposal improves both voice and video transmissions as well as the overall QoS level of the network.
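
For context on the MAC parameters being tuned: EDCA defines four access categories whose default contention-window bounds are derived from the PHY's aCWmin/aCWmax. The sketch below encodes those defaults as I understand them from 802.11e (TXOP limits are omitted since they are PHY-dependent); a prediction scheme like the one above would replace these static values with dynamically chosen ones.

```python
# Default EDCA access-category parameters, expressed relative to the PHY's
# aCWmin/aCWmax (for the 802.11 OFDM PHY: aCWmin=15, aCWmax=1023).
# Values follow the 802.11e defaults to the best of my knowledge;
# TXOP limits are omitted because they vary by PHY.

def edca_defaults(acwmin=15, acwmax=1023):
    """Per-access-category AIFSN and contention-window bounds."""
    return {
        "AC_VO": {"AIFSN": 2, "CWmin": (acwmin + 1) // 4 - 1,
                  "CWmax": (acwmin + 1) // 2 - 1},   # voice: smallest CW
        "AC_VI": {"AIFSN": 2, "CWmin": (acwmin + 1) // 2 - 1,
                  "CWmax": acwmin},                  # video
        "AC_BE": {"AIFSN": 3, "CWmin": acwmin, "CWmax": acwmax},  # best effort
        "AC_BK": {"AIFSN": 7, "CWmin": acwmin, "CWmax": acwmax},  # background
    }
```

Smaller CWmin/CWmax and AIFSN values give an access category statistically earlier channel access, which is how voice and video gain priority over best-effort and background traffic.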

  11. Eduquito: ferramentas de autoria e de colaboração acessíveis na perspectiva da web 2.0 Eduquito: accessible authorship and collaboration tools from the perspective of web 2.0

    Directory of Open Access Journals (Sweden)

    Lucila Maria Costi Santarosa

    2012-09-01

    Full Text Available Eduquito, a digital/virtual learning environment developed by a team of researchers at NIEE/UFRGS, seeks to support processes of socio-digital inclusion, and for that reason it was devised according to the accessibility and universal design principles systematized by the WAI/W3C. We discuss the development of this accessible digital/virtual platform and the results of its use by people with special needs, revealing an ongoing process of verification and validation of the features and functionality of the Eduquito environment across human diversity. We present and discuss two individual and collective authorship tools - the Multimedia Workshop and Bloguito, an accessible blog - new features of the Eduquito environment that emerge from the applicability of the concept of pervasiveness, in order to establish literacy spaces and boost technological mediation for socio-digital inclusion in the Web 2.0 context.

  12. Using the Characteristics of Documents, Users and Tasks to Predict the Situational Relevance of Health Web Documents

    Directory of Open Access Journals (Sweden)

    Melinda Oroszlányová

    2017-09-01

    Full Text Available Relevance is usually estimated by search engines using document content, disregarding the user behind the search and the characteristics of the task. In this work, we look at relevance as framed in a situational context, calling it situational relevance, and analyze whether it is possible to predict it using document, user and task characteristics. Using an existing dataset composed of health web documents, relevance judgments for information needs, and user and task characteristics, we build a multivariate prediction model for situational relevance. Our model has an accuracy of 77.17%. Our findings provide insights into features that could improve the estimation of relevance by search engines, helping to reconcile the systemic and situational views of relevance. In the near future we will work on the automatic assessment of document, user and task characteristics.

  13. iGPCR-drug: a web server for predicting interaction between GPCRs and drugs in cellular networking.

    Directory of Open Access Journals (Sweden)

    Xuan Xiao

    Full Text Available Involved in many diseases, such as cancer, diabetes, and neurodegenerative, inflammatory and respiratory disorders, G-protein-coupled receptors (GPCRs) are among the most frequent targets of therapeutic drugs. It is time-consuming and expensive to determine whether a drug and a GPCR will interact with each other in a cellular network purely by means of experimental techniques. Although some computational methods were developed in this regard based on knowledge of the 3D (three-dimensional) structure of proteins, unfortunately their usage is quite limited because the 3D structures of most GPCRs are still unknown. To overcome this situation, a sequence-based classifier, called "iGPCR-drug", was developed to predict the interactions between GPCRs and drugs in cellular networking. In the predictor, the drug compound is formulated by a 2D (two-dimensional) fingerprint via a 256D vector, the GPCR by the PseAAC (pseudo amino acid composition) generated with the grey model theory, and the prediction engine is operated by the fuzzy K-nearest neighbour algorithm. Moreover, a user-friendly web server for iGPCR-drug was established at http://www.jci-bioinfo.cn/iGPCR-Drug/. For the convenience of most experimental scientists, a step-by-step guide is provided on how to use the web server to get the desired results, without the need to follow the complicated math equations presented in this paper just for its integrity. The overall success rate achieved by iGPCR-drug via the jackknife test was 85.5%, which is remarkably higher than the rate of the existing peer method developed in 2010, although no web server was ever established for it. It is anticipated that iGPCR-Drug may become a useful high-throughput tool for both basic research and drug development, and that the approach presented here can also be extended to study other drug-target interaction networks.
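
The prediction engine named above is the fuzzy K-nearest neighbour algorithm. As a minimal illustration of that rule in its standard form (Keller et al.), the sketch below computes class-membership scores for a query vector by distance-weighted neighbour voting; the paper's exact variant, feature encodings, and parameters may of course differ.

```python
def fuzzy_knn_memberships(query, train, k=3, m=2.0):
    """Class-membership scores of `query` under the standard fuzzy k-NN
    rule: each of the k nearest neighbours votes with weight
    1/d^(2/(m-1)), and scores are normalized to sum to 1.

    `train` is a list of (feature_vector, label) pairs; crisp (0/1)
    training memberships are assumed for simplicity.
    """
    def dist2(a, b):
        # Squared Euclidean distance, floored to avoid division by zero.
        return sum((x - y) ** 2 for x, y in zip(a, b)) or 1e-12

    nearest = sorted(train, key=lambda t: dist2(query, t[0]))[:k]
    # With dist2 = d^2, the weight 1/d^(2/(m-1)) equals 1/dist2^(1/(m-1)).
    weighted = [(1.0 / dist2(query, v) ** (1.0 / (m - 1)), lab)
                for v, lab in nearest]
    total = sum(w for w, _ in weighted)
    scores = {}
    for w, lab in weighted:
        scores[lab] = scores.get(lab, 0.0) + w / total
    return scores
```

In an interaction predictor, `query` would be the concatenated drug-fingerprint/PseAAC feature vector of a candidate pair, and the class with the highest membership score ("interact" vs. "non-interact") would be reported.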

  14. 4DGeoBrowser: A Web-Based Data Browser and Server for Accessing and Analyzing Multi-Disciplinary Data

    National Research Council Canada - National Science Library

    Lerner, Steven

    2001-01-01

    .... Once the information is loaded onto a Geobrowser server the investigator-user is able to login to the website and use a set of data access and analysis tools to search, plot, and display this information...

  15. mobile Digital Access to a Web-enhanced Network (mDAWN): Assessing the Feasibility of Mobile Health Tools for Self-Management of Type-2 Diabetes.

    Science.gov (United States)

    Ho, Kendall; Newton, Lana; Boothe, Allison; Novak-Lauscher, Helen

    2015-01-01

    The mobile Digital Access to a Web-enhanced Network (mDAWN) program was implemented as an online, mobile self-management system to support patients with type-2 diabetes and their informal caregivers. Patients used wireless physiological sensors, received text messages, and had access to a secure web platform with health resources and a semi-facilitated discussion forum. Outcomes were evaluated using (1) pre- and post-program self-reported health behavior measures, (2) physiological outcomes, (3) program cost, and (4) in-depth participant interviews. The group had significantly decreased health distress, HbA1c levels, and systolic blood pressure. Participants largely saw mDAWN as providing good value for the costs involved and found the program to be empowering in gaining control over their diabetes. mHealth programs have the potential to improve clinical outcomes through cost-effective, patient-led care for chronic illness. Further evaluation needs to examine integration of similar mHealth programs into the patient-physician relationship.

  16. Virtual Web Services

    OpenAIRE

    Rykowski, Jarogniew

    2007-01-01

    In this paper we propose an application of software agents to provide Virtual Web Services. A Virtual Web Service is a linked collection of several real and/or virtual Web Services, and public and private agents, accessed by the user in the same way as a single real Web Service. A Virtual Web Service allows unrestricted comparison, information merging, pipelining, etc., of data coming from different sources and in different forms. Detailed architecture and functionality of a single Virtual We...

  17. Enabling Web-Based GIS Tools for Internet and Mobile Devices To Improve and Expand NASA Data Accessibility and Analysis Functionality for the Renewable Energy and Agricultural Applications

    Science.gov (United States)

    Ross, A.; Stackhouse, P. W.; Tisdale, B.; Tisdale, M.; Chandler, W.; Hoell, J. M., Jr.; Kusterer, J.

    2014-12-01

    The NASA Langley Research Center Science Directorate and Atmospheric Science Data Center have initiated a pilot program to utilize Geographic Information System (GIS) tools that enable, generate and store climatological averages using spatial queries and calculations in a spatial database, resulting in greater accessibility of data for government agencies, industry and private-sector individuals. The major objectives of this effort include: 1) processing and reformulation of current data to be consistent with ESRI and OpenGIS tools; 2) development of functions to improve capability and analysis that produce "on-the-fly" data products, extending these past the single location to regional and global scales; 3) updates to the current web sites to enable both web-based and mobile application displays, optimized for mobile platforms; 4) interaction with user communities in government and industry to test formats and usage; and 5) development of a series of metrics that allow for monitoring of progressive performance. Significant project results will include the development of Open Geospatial Consortium (OGC) compliant web services (WMS, WCS, WFS, WPS) that serve renewable energy and agricultural application products to users using GIS software and tools. Each data product and OGC service will be registered within ECHO, the Common Metadata Repository, the Geospatial Platform, and Data.gov to ensure the data are easily discoverable and provide data users with enhanced access to SSE data, parameters, services, and applications. This effort supports cross-agency and cross-organization interoperability of SSE data products and services by collaborating with DOI, NRCan, NREL, NCAR, and HOMER for requirements vetting and test-bed users before making the services available to the wider public.

  18. The ViennaRNA web services.

    Science.gov (United States)

    Gruber, Andreas R; Bernhart, Stephan H; Lorenz, Ronny

    2015-01-01

    The ViennaRNA package is a widely used collection of programs for thermodynamic RNA secondary structure prediction. Over the years, many additional tools have been developed building on the core programs of the package to also address issues related to noncoding RNA detection, RNA folding kinetics, or efficient sequence design considering RNA-RNA hybridizations. The ViennaRNA web services provide easy and user-friendly web access to these tools. This chapter describes how to use this online platform to perform tasks such as prediction of minimum free energy structures, prediction of RNA-RNA hybrids, or noncoding RNA detection. The ViennaRNA web services can be used free of charge and can be accessed via http://rna.tbi.univie.ac.at.
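
    Thermodynamic MFE prediction in ViennaRNA uses far more elaborate energy models, but the dynamic-programming flavor of such methods can be illustrated with the much simpler Nussinov base-pair maximization; the sketch below is a toy model, not the ViennaRNA algorithm.

```python
# Toy Nussinov base-pair maximization: a simplified stand-in for the
# energy-minimization dynamic programming used by tools like RNAfold.
PAIRS = {("A", "U"), ("U", "A"), ("G", "C"), ("C", "G"), ("G", "U"), ("U", "G")}

def nussinov(seq, min_loop=3):
    """Return (max_pairs, dot_bracket) for an RNA sequence."""
    n = len(seq)
    dp = [[0] * n for _ in range(n)]
    for span in range(min_loop + 1, n):
        for i in range(n - span):
            j = i + span
            best = dp[i][j - 1]  # case: j stays unpaired
            for k in range(i, j - min_loop):  # case: j pairs with k
                if (seq[k], seq[j]) in PAIRS:
                    left = dp[i][k - 1] if k > i else 0
                    best = max(best, left + 1 + dp[k + 1][j - 1])
            dp[i][j] = best

    structure = ["."] * n

    def trace(i, j):
        if j - i <= min_loop:
            return
        if dp[i][j] == dp[i][j - 1]:
            trace(i, j - 1)
            return
        for k in range(i, j - min_loop):
            if (seq[k], seq[j]) in PAIRS:
                left = dp[i][k - 1] if k > i else 0
                if left + 1 + dp[k + 1][j - 1] == dp[i][j]:
                    structure[k], structure[j] = "(", ")"
                    if k > i:
                        trace(i, k - 1)
                    trace(k + 1, j - 1)
                    return

    if n:
        trace(0, n - 1)
    return (dp[0][n - 1] if n else 0), "".join(structure)

print(nussinov("GGGGAAAACCCC"))  # → (4, '((((....))))')
```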

  19. Prediction of Scylla olivacea (Crustacea; Brachyura) peptide hormones using publicly accessible transcriptome shotgun assembly (TSA) sequences.

    Science.gov (United States)

    Christie, Andrew E

    2016-05-01

    The aquaculture of crabs from the genus Scylla is of increasing economic importance for many Southeast Asian countries. Expansion of Scylla farming has led to increased efforts to understand the physiology and behavior of these crabs, and as such, there are growing molecular resources for them. Here, publicly accessible Scylla olivacea transcriptomic data were mined for putative peptide-encoding transcripts; the proteins deduced from the identified sequences were then used to predict the structures of mature peptide hormones. Forty-nine pre/preprohormone-encoding transcripts were identified, allowing for the prediction of 187 distinct mature peptides. The identified peptides included isoforms of adipokinetic hormone-corazonin-like peptide, allatostatin A, allatostatin B, allatostatin C, bursicon β, CCHamide, corazonin, crustacean cardioactive peptide, crustacean hyperglycemic hormone/molt-inhibiting hormone, diuretic hormone 31, eclosion hormone, FMRFamide-like peptide, HIGSLYRamide, insulin-like peptide, intocin, leucokinin, myosuppressin, neuroparsin, neuropeptide F, orcokinin, pigment dispersing hormone, pyrokinin, red pigment concentrating hormone, RYamide, short neuropeptide F, SIFamide and tachykinin-related peptide, all well-known neuropeptide families. Surprisingly, the tissue used to generate the transcriptome mined here is reported to be the testis. Whether or not the testis samples had neural contamination is unknown. However, if the peptides are truly produced by this reproductive organ, it could have far-reaching consequences for the study of crustacean endocrinology, particularly in the area of reproductive control. Regardless, this peptidome is the largest thus far predicted for any brachyuran (true crab) species, and will serve as a foundation for future studies of peptidergic control in members of the commercially important genus Scylla. Copyright © 2016 Elsevier Inc. All rights reserved.

  20. Mining Web-based Educational Systems to Predict Student Learning Achievements

    Directory of Open Access Journals (Sweden)

    José del Campo-Ávila

    2015-03-01

    Educational Data Mining (EDM) is gaining importance as a new interdisciplinary research field related to several other areas. It is directly connected with Web-based Educational Systems (WBES) and Data Mining (DM), a fundamental part of Knowledge Discovery in Databases. The former defines the context: WBES store and manage huge amounts of data. Such data are growing continuously and contain hidden knowledge that could be very useful to users (both teachers and students). It is desirable to identify such knowledge in the form of models, patterns or any other representation schema that allows better exploitation of the system. The latter is the tool to achieve such discovery. Data mining must handle very complex and varied situations to reach quality solutions, and it is therefore a research field where many advances are being made to accommodate and solve emerging problems. In this paper we study how data mining can be used to induce student models from the data acquired by a specific Web-based tool for adaptive testing, called SIETTE. Concretely, we have used top-down decision tree induction algorithms to extract the patterns, because the resulting models, decision trees, are easily understandable. In addition, the validation processes conducted have assured high-quality models.
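
    Top-down decision tree induction of the kind described can be sketched with a minimal ID3-style learner; the student features and pass/fail labels below are invented toy data, not SIETTE's.

```python
import math
from collections import Counter

def entropy(labels):
    counts = Counter(labels)
    total = len(labels)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def best_feature(rows, labels, features):
    """Pick the feature whose split maximizes information gain."""
    base = entropy(labels)
    def gain(f):
        remainder = 0.0
        for value in set(r[f] for r in rows):
            subset = [l for r, l in zip(rows, labels) if r[f] == value]
            remainder += len(subset) / len(labels) * entropy(subset)
        return base - remainder
    return max(features, key=gain)

def id3(rows, labels, features):
    if len(set(labels)) == 1:       # pure node -> leaf
        return labels[0]
    if not features:                # no features left -> majority leaf
        return Counter(labels).most_common(1)[0][0]
    f = best_feature(rows, labels, features)
    tree = {"feature": f, "branches": {}}
    rest = [g for g in features if g != f]
    for value in set(r[f] for r in rows):
        idx = [i for i, r in enumerate(rows) if r[f] == value]
        tree["branches"][value] = id3([rows[i] for i in idx],
                                      [labels[i] for i in idx], rest)
    return tree

def predict(tree, row):
    while isinstance(tree, dict):
        tree = tree["branches"][row[tree["feature"]]]
    return tree

# Invented toy data: per-student test behaviour -> pass/fail.
rows = [
    {"items_answered": "many", "avg_difficulty": "high"},
    {"items_answered": "many", "avg_difficulty": "low"},
    {"items_answered": "few", "avg_difficulty": "high"},
    {"items_answered": "few", "avg_difficulty": "low"},
]
labels = ["pass", "pass", "pass", "fail"]
tree = id3(rows, labels, ["items_answered", "avg_difficulty"])
print(predict(tree, {"items_answered": "few", "avg_difficulty": "low"}))
```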

  1. Web-GIS platform for forest fire danger prediction in Ukraine: prospects of RS technologies

    Science.gov (United States)

    Baranovskiy, N. V.; Zharikova, M. V.

    2016-10-01

    Many different statistical and empirical methods for assessing forest fire danger are in use at present, but none of these systems has a physical basis. Over the last decade, a deterministic-probabilistic method has been developed rapidly at Tomsk Polytechnic University. Classification of forest sites is one way to estimate forest fire danger, and it is the method used in the present work. The forest fire danger estimate depends on forest vegetation condition, the retrospective record of forest fires, precipitation and air temperature; in effect, we use a modified Nesterov criterion. Lightning activity is considered as a high-temperature ignition source in the present work. We use a Web-GIS platform for the program realization of this method: the fire danger assessment system is a Web-oriented geoinformation system developed on the Django platform in the Python programming language, with the GeoDjango framework used to realize the cartographic functions. We suggest using Terra/Aqua MODIS products for hot-spot monitoring. The territory used as a test case for forest fire danger estimation is the Proletarskoe forestry of the Kherson region (Ukraine).
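
    The abstract does not spell out its modification of the Nesterov criterion; as a hedged sketch, the classical Nesterov ignition index accumulates T·(T − Td) over dry days and resets after significant rain:

```python
def nesterov_index(daily, rain_reset_mm=3.0):
    """Cumulative Nesterov ignition index.

    daily: list of (temperature_C, dewpoint_C, precipitation_mm) per day.
    The index accumulates T * (T - Td) and resets to zero on days with
    precipitation above rain_reset_mm (a commonly used threshold).
    """
    index = 0.0
    series = []
    for temp, dew, precip in daily:
        if precip > rain_reset_mm:
            index = 0.0
        else:
            index += temp * (temp - dew)
        series.append(index)
    return series

# Toy example: three dry, warm days, then a rainy day resets the index.
days = [(25.0, 10.0, 0.0), (28.0, 12.0, 0.0), (30.0, 15.0, 1.0), (20.0, 15.0, 8.0)]
print(nesterov_index(days))  # → [375.0, 823.0, 1273.0, 0.0]
```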

  2. Genome-scale characterization of RNA tertiary structures and their functional impact by RNA solvent accessibility prediction.

    Science.gov (United States)

    Yang, Yuedong; Li, Xiaomei; Zhao, Huiying; Zhan, Jian; Wang, Jihua; Zhou, Yaoqi

    2017-01-01

    As most RNA structures are elusive to structure determination, obtaining solvent accessible surface areas (ASAs) of nucleotides in an RNA structure is an important first step to characterize potential functional sites and core structural regions. Here, we developed RNAsnap, the first machine-learning method trained on protein-bound RNA structures for solvent accessibility prediction. Built on sequence profiles from multiple sequence alignment (RNAsnap-prof), the method provided robust prediction in fivefold cross-validation and an independent test (Pearson correlation coefficients, r, between predicted and actual ASA values are 0.66 and 0.63, respectively). Application of the method to 6178 mRNAs revealed its positive correlation to mRNA accessibility by dimethyl sulphate (DMS) experimentally measured in vivo (r = 0.37) but not in vitro (r = 0.07), despite the lack of training on mRNAs and the fact that DMS accessibility is only an approximation to solvent accessibility. We further found strong association across coding and noncoding regions between predicted solvent accessibility of the mutation site of a single nucleotide variant (SNV) and the frequency of that variant in the population for 2.2 million SNVs obtained in the 1000 Genomes Project. Moreover, mapping solvent accessibility of RNAs to the human genome indicated that introns, 5' cap of 5' and 3' cap of 3' untranslated regions, are more solvent accessible, consistent with their respective functional roles. These results support conformational selections as the mechanism for the formation of RNA-protein complexes and highlight the utility of genome-scale characterization of RNA tertiary structures by RNAsnap. The server and its stand-alone downloadable version are available at http://sparks-lab.org. © 2016 Yang et al.; Published by Cold Spring Harbor Laboratory Press for the RNA Society.
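
    The quoted agreement figures (r = 0.66, 0.63, 0.37, 0.07) are Pearson correlation coefficients, which take only a few lines to compute; the predicted/measured values below are invented stand-ins, not data from the paper.

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Invented toy numbers standing in for predicted vs. measured ASA values.
predicted = [0.2, 0.5, 0.9, 0.4, 0.7]
measured = [0.25, 0.45, 0.85, 0.5, 0.6]
print(round(pearson_r(predicted, measured), 3))
```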

  3. Network of Research Infrastructures for European Seismology (NERIES)-Web Portal Developments for Interactive Access to Earthquake Data on a European Scale

    Science.gov (United States)

    Spinuso, A.; Trani, L.; Rives, S.; Thomy, P.; Euchner, F.; Schorlemmer, D.; Saul, J.; Heinloo, A.; Bossu, R.; van Eck, T.

    2009-04-01

    The Network of Research Infrastructures for European Seismology (NERIES) is a European Commission (EC) project whose focus is networking seismological observatories and research institutes into one integrated European infrastructure that provides access to data and data products for research. Seismological institutes and organizations in European and Mediterranean countries maintain large, geographically distributed data archives; this scenario suggested a design approach based on the concept of an internet service-oriented architecture (SOA) to establish a cyberinfrastructure for distributed and heterogeneous data streams and services. Moreover, one of the goals of NERIES is to design and develop a Web portal that acts as the uppermost layer of the infrastructure and provides rendering capabilities for the underlying sets of data. The Web services that are currently being designed and implemented will deliver data that has been adapted to appropriate formats. The parametric information about a seismic event is delivered using a seismology-specific Extensible Markup Language (XML) format called QuakeML (https://quake.ethz.ch/quakeml), which has been formalized and implemented in coordination with global earthquake-information agencies. Uniform Resource Identifiers (URIs) are used to assign identifiers to (1) seismic-event parameters described by QuakeML, and (2) generic resources, for example authorities, location providers, location methods, software adopted, and so on, described by use of a data model constructed with the resource description framework (RDF) and accessible as a service. The European-Mediterranean Seismological Center (EMSC) has implemented a unique event identifier (UNID) that will create the seismic event URI used by the QuakeML data model. Access to data such as broadband waveforms, accelerometric data and station inventories will also be provided through a set of Web services that will wrap the middleware used by the
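
    Once event parameters arrive as QuakeML, clients can extract the event URI (publicID) and parameters with any XML parser; the snippet below parses a heavily simplified QuakeML-like record (real QuakeML uses XML namespaces and many more elements, omitted here for brevity).

```python
import xml.etree.ElementTree as ET

# A much-simplified QuakeML-like event record, for illustration only.
snippet = """
<eventParameters>
  <event publicID="smi:example.org/event/001">
    <magnitude>
      <mag><value>5.2</value></mag>
    </magnitude>
    <origin>
      <latitude><value>38.1</value></latitude>
      <longitude><value>23.7</value></longitude>
    </origin>
  </event>
</eventParameters>
"""

root = ET.fromstring(snippet)
events = []
for event in root.findall("event"):
    events.append({
        "uri": event.get("publicID"),          # the event URI
        "magnitude": float(event.findtext("magnitude/mag/value")),
        "lat": float(event.findtext("origin/latitude/value")),
        "lon": float(event.findtext("origin/longitude/value")),
    })
print(events)
```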

  4. Impact of Predicting Health Care Utilization Via Web Search Behavior: A Data-Driven Analysis.

    Science.gov (United States)

    Agarwal, Vibhu; Zhang, Liangliang; Zhu, Josh; Fang, Shiyuan; Cheng, Tim; Hong, Chloe; Shah, Nigam H

    2016-09-21

    By recent estimates, the steady rise in health care costs has deprived more than 45 million Americans of health care services and has encouraged health care providers to better understand the key drivers of health care utilization from a population health management perspective. Prior studies suggest the feasibility of mining population-level patterns of health care resource utilization from observational analysis of Internet search logs; however, the utility of the endeavor to the various stakeholders in a health ecosystem remains unclear. The aim was to carry out a closed-loop evaluation of the utility of health care use predictions, using the conversion rates of advertisements that were displayed to the predicted future utilizers as a surrogate. The statistical models to predict the probability of a user's future visit to a medical facility were built using effective predictors of health care resource utilization, extracted from a deidentified dataset of geotagged mobile Internet search logs representing searches made by users of the Baidu search engine between March 2015 and May 2015. We inferred presence within the geofence of a medical facility from location and duration information in users' search logs and putatively assigned medical facility visit labels to qualifying search logs. We constructed a matrix of general, semantic, and location-based features from the search logs of users that had 42 or more search days preceding a medical facility visit, as well as from the search logs of users that had no medical visits, and trained statistical learners for predicting future medical visits. We then carried out a closed-loop evaluation of the utility of health care use predictions using the show conversion rates of advertisements displayed to the predicted future utilizers. In the context of behaviorally targeted advertising, wherein health care providers are interested in minimizing their cost per conversion, the association between show conversion rate and predicted
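
    The geofence-based visit labeling can be sketched as a distance-plus-dwell-time rule; the radius, dwell threshold, and coordinates below are invented for illustration, not the study's parameters.

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two WGS84 points."""
    r = 6371000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def infer_visit(logs, facility, radius_m=200.0, min_minutes=30.0):
    """Label a user as a visitor if consecutive geotagged log entries stay
    inside the facility geofence for at least min_minutes.

    logs: list of (minutes_since_start, lat, lon), sorted by time.
    """
    entered = None
    for t, lat, lon in logs:
        inside = haversine_m(lat, lon, *facility) <= radius_m
        if inside:
            if entered is None:
                entered = t
            elif t - entered >= min_minutes:
                return True
        else:
            entered = None
    return False

facility = (31.2304, 121.4737)  # invented facility coordinates
logs = [(0, 31.2000, 121.4000), (10, 31.2304, 121.4737),
        (50, 31.2305, 121.4738), (60, 31.2000, 121.4000)]
print(infer_visit(logs, facility))  # → True (40 minutes inside the geofence)
```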

  5. TheCellMap.org: A Web-Accessible Database for Visualizing and Mining the Global Yeast Genetic Interaction Network.

    Science.gov (United States)

    Usaj, Matej; Tan, Yizhao; Wang, Wen; VanderSluis, Benjamin; Zou, Albert; Myers, Chad L; Costanzo, Michael; Andrews, Brenda; Boone, Charles

    2017-05-05

    Providing access to quantitative genomic data is key to ensure large-scale data validation and promote new discoveries. TheCellMap.org serves as a central repository for storing and analyzing quantitative genetic interaction data produced by genome-scale Synthetic Genetic Array (SGA) experiments with the budding yeast Saccharomyces cerevisiae. In particular, TheCellMap.org allows users to easily access, visualize, explore, and functionally annotate genetic interactions, or to extract and reorganize subnetworks, using data-driven network layouts in an intuitive and interactive manner. Copyright © 2017 Usaj et al.

  6. A SMART groundwater portal: An OGC web services orchestration framework for hydrology to improve data access and visualisation in New Zealand

    Science.gov (United States)

    Klug, Hermann; Kmoch, Alexander

    2014-08-01

    Transboundary and cross-catchment access to hydrological data is the key to designing successful environmental policies and activities. Electronic maps based on distributed databases are fundamental for planning and decision making in all regions and for all spatial and temporal scales. Freshwater is an essential asset in New Zealand (and globally), and the availability as well as accessibility of hydrological information held by or held for public authorities and businesses is becoming a crucial management factor. Access to, and visual representation of, environmental information for the public is essential for attracting greater awareness of water quality and quantity matters. Detailed interdisciplinary knowledge about the environment is required to ensure that the environmental policy-making community of New Zealand considers regional and local differences in hydrological status while assessing the overall national situation. However, cross-regional and inter-agency sharing of environmental spatial data is complex and challenging. In this article, we firstly provide an overview of state-of-the-art, standards-compliant techniques and methodologies for the practical implementation of simple, measurable, achievable, repeatable, and time-based (SMART) hydrological data management principles. Secondly, we contrast international state-of-the-art data management developments with the present status of groundwater information in New Zealand. Finally, for the topics of (i) data access and harmonisation, (ii) sensor web enablement and (iii) metadata, we summarise our findings, provide recommendations on future developments and highlight the specific advantages resulting from a seamless view, discovery, access, and analysis of interoperable hydrological information and metadata for decision making.

  7. 76 FR 59307 - Nondiscrimination on the Basis of Disability in Air Travel: Accessibility of Web Sites and...

    Science.gov (United States)

    2011-09-26

    ... with disabilities. By allowing carriers to choose how to initially make certain online customer service... to enable its customers to independently access the flight-related services it offers. Where... services online (e.g., seat selection) to also allow passengers to make special service requests online for...

  8. TargetNet: a web service for predicting potential drug-target interaction profiling via multi-target SAR models.

    Science.gov (United States)

    Yao, Zhi-Jiang; Dong, Jie; Che, Yu-Jing; Zhu, Min-Feng; Wen, Ming; Wang, Ning-Ning; Wang, Shan; Lu, Ai-Ping; Cao, Dong-Sheng

    2016-05-01

    Drug-target interactions (DTIs) are central to current drug discovery processes and public health fields. Analyzing the DTI profile of a drug helps to infer drug indications, adverse drug reactions, drug-drug interactions, and drug modes of action. It is therefore highly important to predict the DTI profiles of drugs reliably and rapidly on a genome-scale level. Here, we develop the TargetNet server, which can make real-time DTI predictions based only on molecular structures, following the spirit of multi-target SAR methodology. Naïve Bayes models together with various molecular fingerprints were employed to construct the prediction models, and ensemble learning over these fingerprints was also provided to improve prediction ability. When the user submits a molecule, the server predicts its activity across 623 human proteins using the established high-quality SAR models, generating a DTI profile that can be used as a feature vector of the chemical for a wide range of applications. The 623 SAR models, one per human protein, were strictly evaluated and validated by several model validation strategies, resulting in AUC scores of 75-100%. We applied the generated DTI profiles to successfully predict potential targets, toxicity classification, drug-drug interactions, and drug mode of action, sufficiently demonstrating the wide application value of DTI profiling. The TargetNet webserver is built on the Django framework in Python, and is freely accessible at http://targetnet.scbdd.com.
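
    A Bernoulli Naïve Bayes classifier over binary fingerprint bits, of the kind TargetNet trains per target protein, can be sketched in a few lines; the 8-bit "fingerprints" and activity labels below are invented toy data, not TargetNet's models.

```python
import math

class BernoulliNB:
    """Minimal Bernoulli Naïve Bayes over binary fingerprint bits,
    with Laplace smoothing."""

    def fit(self, X, y):
        self.classes = sorted(set(y))
        self.n_bits = len(X[0])
        self.prior = {}
        self.p_bit = {}
        for c in self.classes:
            rows = [x for x, label in zip(X, y) if label == c]
            self.prior[c] = math.log(len(rows) / len(X))
            # P(bit=1 | class), Laplace-smoothed
            self.p_bit[c] = [
                (sum(r[i] for r in rows) + 1) / (len(rows) + 2)
                for i in range(self.n_bits)
            ]
        return self

    def predict(self, x):
        def log_post(c):
            s = self.prior[c]
            for bit, p in zip(x, self.p_bit[c]):
                s += math.log(p if bit else 1.0 - p)
            return s
        return max(self.classes, key=log_post)

# Invented 8-bit "fingerprints": actives tend to set the first bits,
# inactives the last bits.
X = [
    [1, 1, 1, 0, 0, 0, 0, 0],
    [1, 1, 0, 1, 0, 0, 0, 0],
    [1, 0, 1, 1, 0, 0, 0, 0],
    [0, 0, 0, 0, 1, 1, 1, 0],
    [0, 0, 0, 0, 1, 1, 0, 1],
    [0, 0, 0, 1, 0, 1, 1, 0],
]
y = ["active", "active", "active", "inactive", "inactive", "inactive"]
model = BernoulliNB().fit(X, y)
print(model.predict([1, 1, 0, 0, 0, 0, 0, 0]))  # → active
```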

  10. Making the MagIC (Magnetics Information Consortium) Web Application Accessible to New Users and Useful to Experts

    Science.gov (United States)

    Minnett, R.; Koppers, A.; Jarboe, N.; Tauxe, L.; Constable, C.; Jonestrask, L.

    2017-12-01

    Challenges are faced by both new and experienced users interested in contributing their data to community repositories, in data discovery, or engaged in potentially transformative science. The Magnetics Information Consortium (https://earthref.org/MagIC) has recently simplified its data model and developed a new containerized web application to reduce the friction in contributing, exploring, and combining valuable and complex datasets for the paleo-, geo-, and rock magnetic scientific community. The new data model more closely reflects the hierarchical workflow in paleomagnetic experiments to enable adequate annotation of scientific results and ensure reproducibility. The new open-source (https://github.com/earthref/MagIC) application includes an upload tool that is integrated with the data model to provide early data validation feedback and ease the friction of contributing and updating datasets. The search interface provides a powerful full-text search of contributions indexed by ElasticSearch and a wide array of filters, including specific geographic and geological timescale filtering, to support both novice users exploring the database and experts interested in compiling new datasets with specific criteria across thousands of studies and millions of measurements. The datasets are not large, but they are complex, with many results from evolving experimental and analytical approaches. These data are also extremely valuable due to the cost of collecting or creating physical samples and the often destructive nature of the experiments. MagIC is heavily invested in encouraging young scientists as well as established labs to cultivate workflows that facilitate contributing their data in a consistent format. This eLightning presentation includes a live demonstration of the MagIC web application, developed as a configurable container hosting an isomorphic Meteor JavaScript application, MongoDB database, and ElasticSearch search engine. Visitors can explore the Mag

  11. PockDrug-Server: a new web server for predicting pocket druggability on holo and apo proteins.

    Science.gov (United States)

    Hussein, Hiba Abi; Borrel, Alexandre; Geneix, Colette; Petitjean, Michel; Regad, Leslie; Camproux, Anne-Claude

    2015-07-01

    Predicting a protein pocket's ability to bind drug-like molecules with high affinity, i.e. its druggability, is of major interest in the target identification phase of drug discovery; pocket druggability investigations therefore represent a key step in compound clinical progression projects. Current computational druggability prediction models are tied to a single pocket estimation method, despite pocket estimation uncertainties. In this paper, we propose 'PockDrug-Server' to predict pocket druggability, effective on both (i) pockets estimated with guidance from ligand proximity (extracted by proximity to a ligand from a holo protein structure) and (ii) pockets estimated solely from protein structure information (based on the amino-acid atoms that form the surface of potential binding cavities). PockDrug-Server provides consistent druggability results across different pocket estimation methods. It is robust with respect to pocket boundary and estimation uncertainties, and is thus effective on apo pockets, which are challenging to estimate. It clearly distinguishes druggable from less druggable pockets using different estimation methods and outperformed recent druggability models for apo pockets. It can be run on one or a set of apo/holo proteins using the pocket estimation methods proposed by our web server, or on any pocket previously estimated by the user. PockDrug-Server is publicly available at: http://pockdrug.rpbs.univ-paris-diderot.fr. © The Author(s) 2015. Published by Oxford University Press on behalf of Nucleic Acids Research.

  12. Development of a secure and cost-effective infrastructure for the access of arbitrary web-based image distribution systems

    International Nuclear Information System (INIS)

    Hacklaender, T.; Demabre, N.; Cramer, B.M.; Kleber, K.; Schneider, H.

    2004-01-01

    Purpose: To build an infrastructure that gives on-call radiologists and external users teleradiological access via the internet to the HTML-based image distribution system inside the hospital. In addition, no investment costs should arise on the user side, and the image data should be pseudonymized and transferred using cryptographic techniques. Materials and Methods: A pure HTML-based system manages the image distribution inside the hospital; an open-source project extends this system through a secure gateway outside the firewall of the hospital. The gateway handles the communication between the external users and the HTML server within the network of the hospital. A second firewall is installed between the gateway and the external users and builds up a virtual private network (VPN). A connection between the gateway and an external user is only acknowledged if the computers involved authenticate each other via certificates and the external user authenticates via a multi-stage password system. All data are transferred encrypted. External users only get access to images that have previously been renamed to a pseudonym by automated processing. Results: With an ADSL internet access, external users achieve an image load rate of 0.4 CT images per second. More than 90% of the delay during image transfer results from security checks within the firewalls. Data passing the gateway induce no measurable delay. (orig.)

  13. PRODIGY : a web server for predicting the binding affinity of protein-protein complexes

    NARCIS (Netherlands)

    Xue, Li; Garcia Lopes Maia Rodrigues, João; Kastritis, Panagiotis L; Bonvin, Alexandre Mjj; Vangone, Anna

    2016-01-01

    Gaining insights into the structural determinants of protein-protein interactions holds the key for a deeper understanding of biological functions, diseases and development of therapeutics. An important aspect of this is the ability to accurately predict the binding strength for a given

  14. PREDICTING THE EFFECTIVENESS OF WEB INFORMATION SYSTEMS USING NEURAL NETWORKS MODELING: FRAMEWORK & EMPIRICAL TESTING

    Directory of Open Access Journals (Sweden)

    Dr. Kamal Mohammed Alhendawi

    2018-02-01

    Information systems (IS) assessment studies still commonly use traditional tools such as questionnaires to evaluate dependent variables, especially system effectiveness. Artificial neural networks (ANNs) have recently been accepted as an effective alternative tool for modeling complicated systems and are widely used for forecasting, yet very little is known about their employment in predicting IS effectiveness. For this reason, this study is among the few to investigate the efficiency and capability of using an ANN to forecast user perceptions of IS effectiveness, with MATLAB utilized for building and training the neural network model. A dataset of 175 subjects collected from an international organization is used for ANN learning, where each subject consists of 6 features (5 quality factors as inputs and one Boolean output). 75% of the subjects are used in the training phase. The results provide evidence that ANN models achieve reasonable accuracy in forecasting IS effectiveness. For prediction, ANNs with PURELIN (ANNP) and TANSIG (ANNTS) transfer functions are used. Both models give reasonable predictions; however, the accuracy of the ANNTS model is better than that of the ANNP model (88.6% and 70.4%, respectively). As the study proposes a new model for predicting IS dependent variables, it could save the considerable cost that might otherwise be spent on sample data collection in quantitative studies in science, management, education, the arts and other fields.
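
    MATLAB's purelin (identity) and tansig (hyperbolic tangent) transfer functions are simple to reproduce; below is a sketch of a one-hidden-layer forward pass over five quality-factor inputs. The weights are invented for illustration, not the study's trained network.

```python
import math

def purelin(x):
    """MATLAB's linear transfer function: the identity."""
    return x

def tansig(x):
    """MATLAB's tansig: numerically equivalent to tanh(x)."""
    return 2.0 / (1.0 + math.exp(-2.0 * x)) - 1.0

def forward(inputs, w_hidden, b_hidden, w_out, b_out, transfer):
    """One-hidden-layer network: quality-factor inputs -> one output."""
    hidden = [transfer(sum(w * x for w, x in zip(ws, inputs)) + b)
              for ws, b in zip(w_hidden, b_hidden)]
    return sum(w * h for w, h in zip(w_out, hidden)) + b_out

# Invented weights for illustration (the study's weights are not published
# in the abstract).
w_hidden = [[0.4, -0.2, 0.1, 0.3, 0.05], [-0.1, 0.5, 0.2, -0.3, 0.1]]
b_hidden = [0.1, -0.2]
w_out, b_out = [0.7, -0.6], 0.05
quality_factors = [0.8, 0.6, 0.9, 0.4, 0.7]
score = forward(quality_factors, w_hidden, b_hidden, w_out, b_out, tansig)
print(round(score, 4))
```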

  15. Cooperative Mobile Web Browsing

    DEFF Research Database (Denmark)

    Perrucci, GP; Fitzek, FHP; Zhang, Qi

    2009-01-01

    This paper advocates a novel approach for mobile web browsing based on cooperation among wireless devices within close proximity operating in a cellular environment. In the current state of the art, mobile phones can access the web using different cellular technologies. However, the supported data......-range links can then be used for cooperative mobile web browsing. By implementing the cooperative web browsing on commercial mobile phones, it will be shown that better performance is achieved in terms of increased data rate and therefore reduced access times, resulting in a significantly enhanced web

  16. Antibody modeling using the prediction of immunoglobulin structure (PIGS) web server [corrected].

    Science.gov (United States)

    Marcatili, Paolo; Olimpieri, Pier Paolo; Chailyan, Anna; Tramontano, Anna

    2014-12-01

    Antibodies (or immunoglobulins) are crucial for defending organisms from pathogens, but they are also key players in many medical, diagnostic and biotechnological applications. The ability to predict their structure and the specific residues involved in antigen recognition has several useful applications in all of these areas. Over the years, we have developed or collaborated in developing a strategy that enables researchers to predict the 3D structure of antibodies with a very satisfactory accuracy. The strategy is completely automated and extremely fast, requiring only a few minutes (∼10 min on average) to build a structural model of an antibody. It is based on the concept of canonical structures of antibody loops and on our understanding of the way light and heavy chains pack together.

  17. Development of Shear Capacity Prediction Model for FRP-RC Beam without Web Reinforcement

    Directory of Open Access Journals (Sweden)

    Md. Arman Chowdhury

    2016-01-01

    Available codes and models generally use a partially modified shear design equation, developed earlier for steel-reinforced concrete, to predict the shear capacity of FRP-RC members; consequently, the calculated shear capacity shows under- or overestimation. Furthermore, most models overlook some of the parameters affecting shear strength. In this study, a new and simplified shear capacity prediction model is proposed that considers all of these parameters. A large database containing 157 experimental results for FRP-RC beams without shear reinforcement is assembled from the published literature, and a parametric study is then performed to verify the accuracy of the proposed model. A comprehensive review of 9 codes and 12 available models, published from 1997 to date, is also carried out for comparison with the proposed model. The proposed equation is observed to show overall optimized performance compared to all the codes and models within the range of the experimental dataset used.
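
    Model-versus-test comparisons of this kind are usually summarized with ratio statistics (mean predicted/experimental ratio and its coefficient of variation) plus an absolute error; a minimal sketch with invented shear capacities follows.

```python
import math

def evaluation_stats(predicted, experimental):
    """Ratio-based statistics commonly used to compare shear capacity
    models against test data: mean predicted/experimental ratio, its
    coefficient of variation, and the mean absolute error."""
    ratios = [p / e for p, e in zip(predicted, experimental)]
    mean_ratio = sum(ratios) / len(ratios)
    var = sum((r - mean_ratio) ** 2 for r in ratios) / len(ratios)
    cov = math.sqrt(var) / mean_ratio
    mae = sum(abs(p - e) for p, e in zip(predicted, experimental)) / len(ratios)
    return mean_ratio, cov, mae

# Invented shear capacities in kN, for illustration only.
v_pred = [42.0, 55.0, 60.0, 38.0]
v_exp = [40.0, 50.0, 66.0, 40.0]
mean_ratio, cov, mae = evaluation_stats(v_pred, v_exp)
print(round(mean_ratio, 3), round(cov, 3), round(mae, 2))
```

A mean ratio near 1.0 with a low coefficient of variation indicates a model that is on average unbiased and consistent across the dataset.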

  18. Social Web mining and exploitation for serious applications: Technosocial Predictive Analytics and related technologies for public health, environmental and national security surveillance.

    Science.gov (United States)

    Kamel Boulos, Maged N; Sanfilippo, Antonio P; Corley, Courtney D; Wheeler, Steve

    2010-10-01

    This paper explores Technosocial Predictive Analytics (TPA) and related methods for Web "data mining" where users' posts and queries are garnered from Social Web ("Web 2.0") tools such as blogs, micro-blogging and social networking sites to form coherent representations of real-time health events. The paper includes a brief introduction to commonly used Social Web tools such as mashups and aggregators, and maps their exponential growth as an open architecture of participation for the masses and an emerging way to gain insight about people's collective health status of whole populations. Several health related tool examples are described and demonstrated as practical means through which health professionals might create clear location specific pictures of epidemiological data such as flu outbreaks. Copyright 2010 Elsevier Ireland Ltd. All rights reserved.

  19. Sign Language Web Pages

    Science.gov (United States)

    Fels, Deborah I.; Richards, Jan; Hardman, Jim; Lee, Daniel G.

    2006-01-01

    The World Wide Web has changed the way people interact. It has also become an important equalizer of information access for many social sectors. However, for many people, including some sign language users, accessing the Web can be difficult. For some, it not only presents another barrier to overcome but has left them without cultural equality. The…

  20. When access is an issue: exploring barriers to predictive testing for Huntington disease in British Columbia, Canada.

    Science.gov (United States)

    Hawkins, Alice K; Creighton, Susan; Hayden, Michael R

    2013-02-01

    Predictive testing (PT) for Huntington disease (HD) requires several in-person appointments. This requirement may be a barrier to testing, with the result that at-risk individuals do not realize the potential benefits of PT. To understand the obstacles to PT in terms of the accessibility of services, and to explore mechanisms by which this issue may be addressed, we conducted an interview study of individuals at risk for HD throughout British Columbia, Canada. Results reveal that the accessibility of PT can be a barrier for two major reasons: distance and the inflexibility of the testing process. Distance is a structural barrier, and relates to the time and travel required to access PT, the financial and other opportunity costs associated with taking time away from work and family to attend appointments, and the stress of navigating urban centers. The inflexibility of the testing process relates to the emotional and psychological accessibility of PT. The results of the interview study reveal that there are access barriers to PT that deter individuals from receiving the support, information and counseling they require. What makes accessibility of PT services important is not just that it may result in differences in quality of life and care, but that these differences may be addressed with creative and adaptable solutions in the delivery of genetic services. The study findings underscore the need to rethink and personalize the way such services are delivered, to improve access and prevent inequities in the health care system.

  1. Features predicting weight loss in overweight or obese participants in a web-based intervention: randomized trial.

    Science.gov (United States)

    Brindal, Emily; Freyne, Jill; Saunders, Ian; Berkovsky, Shlomo; Smith, Greg; Noakes, Manny

    2012-12-12

    at week 12 (P = .01). The average number of days that each site was used varied significantly (P = .02) and was higher for the supportive site at 5.96 (SD 11.36) and personalized-supportive site at 5.50 (SD 10.35), relative to the information-based site at 3.43 (SD 4.28). In total, 435 participants provided a valid final weight at the 12-week follow-up. Intention-to-treat analyses (using multiple imputations) revealed that there were no statistically significant differences in weight loss between sites (P = .42). On average, participants lost 2.76% (SE 0.32%) of their initial body weight, with 23.7% (SE 3.7%) losing 5% or more of their initial weight. Within supportive conditions, the level of use of the online weight tracker was predictive of weight loss (model estimate = 0.34), as was the frequency of use of the weight tracker. Relative to a static control, inclusion of social networking features and personalized meal planning recommendations in a web-based weight loss program did not demonstrate additive effects for user weight loss or retention. These features did, however, increase the average number of days that a user engaged with the system. For users of the supportive websites, greater use of the weight tracker tool was associated with greater weight loss.

  2. Congruency in the prediction of pathogenic missense mutations: state-of-the-art web-based tools.

    Science.gov (United States)

    Castellana, Stefano; Mazza, Tommaso

    2013-07-01

    A remarkable degree of genetic variation has been found in the protein-encoding regions of DNA through deep sequencing of samples obtained from thousands of subjects from several populations. Approximately half of the 20 000 single nucleotide polymorphisms present, even in normal healthy subjects, are nonsynonymous amino acid substitutions that could potentially affect protein function. The greatest challenges currently facing investigators are data interpretation and the development of strategies to identify the few gene-coding variants that actually cause or confer susceptibility to disease. A confusing array of options is available to address this problem. Unfortunately, the overall accuracy of these tools at ultraconserved positions is low, and predictions generated by current computational tools may mislead researchers involved in downstream experimental and clinical studies. First, we have presented an updated review of these tools and their primary functionalities, focusing on those that are naturally prone to analyze massive variant sets, to infer some interesting similarities among their results. Additionally, we have evaluated the prediction congruency for real whole-exome sequencing data in a proof-of-concept study on some of these web-based tools.

  3. Cy-preds: An algorithm and a web service for the analysis and prediction of cysteine reactivity.

    Science.gov (United States)

    Soylu, İnanç; Marino, Stefano M

    2016-02-01

    Cysteine (Cys) is a critically important amino acid, serving a variety of functions within proteins including structural roles, catalysis, and regulation of function through post-translational modifications. Predicting which Cys residues are likely to be reactive is a much sought-after capability. Few methods are currently available for the task, based either on evaluation of physicochemical features (e.g., pKa and exposure) or on similarity with known instances. In this study, we developed an algorithm (named HAL-Cy) which blends previous work with novel implementations to distinguish reactive Cys residues from nonreactive ones. HAL-Cy presents two major components: (i) an energy-based part, rooted in the evaluation of H-bond network contributions, and (ii) a knowledge-based part, composed of different profiling approaches (including a newly developed weighting matrix for sequence profiling). In our evaluations, HAL-Cy provided significantly improved performance, as tested in comparisons with existing approaches. We implemented our algorithm in a web service (Cy-preds), the ultimate product of our work; we provided it with a variety of additional features, tools, and options: Cy-preds is capable of performing fully automated calculations for a thorough analysis of Cys reactivity in proteins, ranging from reactivity predictions (e.g., with HAL-Cy) to functional characterization. We believe it represents an original, effective, and very useful addition to the current array of tools available to scientists involved in redox biology, Cys biochemistry, and structural bioinformatics. © 2015 Wiley Periodicals, Inc.

  4. How good are publicly available web services that predict bioactivity profiles for drug repurposing?

    Science.gov (United States)

    Murtazalieva, K A; Druzhilovskiy, D S; Goel, R K; Sastry, G N; Poroikov, V V

    2017-10-01

    Drug repurposing provides a non-laborious and less expensive way of finding new human medicines. Computational assessment of bioactivity profiles sheds light on the hidden pharmacological potential of launched drugs. Several freely available computational tools that predict multitarget profiles of drug-like compounds are currently accessible via the Internet. They are based on chemical similarity assessment (ChemProt, SuperPred, SEA, SwissTargetPrediction and TargetHunter) or machine learning methods (ChemProt and PASS). To compare their performance, this study created two evaluation sets, consisting of (1) 50 well-known repositioned drugs and (2) 12 drugs recently patented for new indications. In the first set, sensitivity values varied from 0.64 (TarPred) to 1.00 (PASS Online) for the initial indications and from 0.64 (TarPred) to 0.98 (PASS Online) for the repurposed indications. In the second set, sensitivity values varied from 0.08 (SuperPred) to 1.00 (PASS Online) for the initial indications and from 0.00 (SuperPred) to 1.00 (PASS Online) for the repurposed indications. Thus, this analysis demonstrated that the performance of machine learning methods surpassed that of chemical similarity assessments, particularly in the case of novel repurposed indications.
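The sensitivity figures quoted above are the fraction of known drug-indication pairs that each tool recovers, i.e. TP / (TP + FN). A one-line illustration; the 32/18 split is hypothetical, chosen only because it reproduces a 0.64 score on a set of 50 drugs:

```python
def sensitivity(true_positives, false_negatives):
    """Fraction of known (drug, indication) pairs that a tool recovers."""
    return true_positives / (true_positives + false_negatives)

# Of 50 repositioned drugs, suppose a tool recovers the known initial
# indication for 32 and misses 18 (hypothetical counts):
score = sensitivity(32, 18)  # 0.64
```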

  5. Toward Predictive Theories of Nuclear Reactions Across the Isotopic Chart: Web Report

    Energy Technology Data Exchange (ETDEWEB)

    Escher, J. E. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Blackmon, J. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Elster, C. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Launey, K. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Lee, D. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Scielzo, N. D. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2017-05-12

    Recent years have seen exciting new developments and progress in nuclear structure theory, reaction theory, and experimental techniques that allow us to move towards a description of exotic systems and environments, setting the stage for new discoveries. The purpose of the 5-week program was to bring together physicists from the low-energy nuclear structure and reaction communities to identify avenues for achieving reliable and predictive descriptions of reactions involving nuclei across the isotopic chart. The 4-day embedded workshop focused on connecting theory developments to experimental advances and data needs for astrophysics and other applications. Nuclear theory must address phenomena from laboratory experiments to stellar environments, from stable nuclei to weakly-bound and exotic isotopes. Expanding the reach of theory to these regimes requires a comprehensive understanding of the reaction mechanisms involved as well as detailed knowledge of nuclear structure. A recurring theme throughout the program was the desire to produce reliable predictions rooted in either ab initio or microscopic approaches. At the same time it was recognized that some applications involving heavy nuclei away from stability, e.g. those involving fission fragments, may need to rely on simple parameterizations of incomplete data for the foreseeable future. The goal here, however, is to subsequently improve and refine the descriptions, moving to phenomenological, then microscopic approaches. There was overarching consensus that future work should also focus on reliable estimates of errors in theoretical descriptions.

  6. RBscore&NBench: a high-level web server for nucleic acid binding residues prediction with a large-scale benchmarking database.

    Science.gov (United States)

    Miao, Zhichao; Westhof, Eric

    2016-07-08

    RBscore&NBench combines a web server, RBscore, and a database, NBench. RBscore predicts RNA-/DNA-binding residues in proteins and visualizes the prediction scores and features on protein structures. The scoring scheme of RBscore directly links feature values to nucleic acid binding probabilities and illustrates the nucleic acid binding energy funnel on the protein surface. To avoid biases arising from the choice of dataset, binding-site definition and assessment metric, we compared RBscore with 18 web servers and 3 stand-alone programs on 41 datasets, which demonstrated the high and stable accuracy of RBscore. This comprehensive comparison led us to develop a benchmark database named NBench. The web server is available on: http://ahsoka.u-strasbg.fr/rbscorenbench/. © The Author(s) 2016. Published by Oxford University Press on behalf of Nucleic Acids Research.
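One common way to link a residue feature score directly to a binding probability is a logistic link. The sketch below is a generic illustration of that idea only; the weight and bias are placeholders, and RBscore's actual calibration may differ.

```python
import math

def score_to_probability(score, w=1.0, b=0.0):
    """Map a residue feature score to a probability in (0, 1) via the
    logistic function. The weight w and bias b would normally be fitted
    on training data; the defaults here are placeholders."""
    return 1.0 / (1.0 + math.exp(-(w * score + b)))

# Higher feature scores map monotonically to higher binding probabilities:
probs = [round(score_to_probability(s), 3) for s in (-2, 0, 2)]
# probs == [0.119, 0.5, 0.881]
```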

  7. A Hybrid Evaluation System Framework (Shell & Web) with Standardized Access to Climate Model Data and Verification Tools for a Clear Climate Science Infrastructure on Big Data High Performance Computers

    Science.gov (United States)

    Kadow, Christopher; Illing, Sebastian; Kunst, Oliver; Ulbrich, Uwe; Cubasch, Ulrich

    2015-04-01

    The project 'Integrated Data and Evaluation System for Decadal Scale Prediction' (INTEGRATION), part of the German decadal prediction project MiKlip, develops a central evaluation system. The fully operational hybrid features HPC shell access and a user-friendly web interface. It employs one common system with a variety of verification tools and validation data from different projects inside and outside of MiKlip. The evaluation system is located at the German Climate Computing Centre (DKRZ) and has direct access to the bulk of its ESGF node, including millions of climate model data sets, e.g. from CMIP5 and CORDEX. The database is organized by the international CMOR standard using the meta information of the self-describing model, reanalysis and observational data sets. Apache Solr is used for indexing the different data projects into one common search environment. This meta data system, with its advanced but easy-to-handle search tool, supports users, developers and their tools in retrieving the required information. A generic application programming interface (API) allows scientific developers to connect their analysis tools to the evaluation system independently of the programming language used. Users of the evaluation techniques benefit from the common interface of the evaluation system without any need to understand the different scripting languages. Facilitating the provision and usage of tools and climate data automatically increases the number of scientists working with the data sets and helps identify discrepancies. Additionally, the history and configuration sub-system stores every analysis performed with the evaluation system in a MySQL database. Configurations and results of the tools can be shared among scientists via the shell or the web system. Plugged-in tools therefore gain automatically in transparency and reproducibility. Furthermore, when the configuration of a newly started evaluation tool matches an earlier run, the system suggests reusing the results already produced.
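The reuse-on-matching-configuration behavior amounts to memoizing analysis results under a canonical key of the (tool, configuration) pair. The sketch below is a minimal illustration of that idea with hypothetical names; it is not the actual MiKlip/MySQL implementation.

```python
import hashlib
import json

class ResultCache:
    """Minimal sketch of configuration-based result reuse: each tool run
    is keyed by a hash of its (tool, configuration) pair, so a repeated
    configuration can be answered from stored results."""

    def __init__(self):
        self._store = {}

    def _key(self, tool, config):
        # Canonical JSON so that key order in the config dict is irrelevant.
        blob = json.dumps({"tool": tool, "config": config}, sort_keys=True)
        return hashlib.sha256(blob.encode()).hexdigest()

    def run(self, tool, config, compute):
        key = self._key(tool, config)
        if key in self._store:           # configurations match: reuse
            return self._store[key], True
        result = compute()               # otherwise run the analysis
        self._store[key] = result
        return result, False

cache = ResultCache()
r1, reused1 = cache.run("bias-check", {"model": "MPI-ESM", "years": 10},
                        lambda: "plot-001")
# Same configuration with keys in a different order hits the cache:
r2, reused2 = cache.run("bias-check", {"years": 10, "model": "MPI-ESM"},
                        lambda: "plot-002")
```

Here `reused2` is true and `r2` is the stored `"plot-001"`: the second invocation never runs its compute step, mirroring the "suggest results already produced" behavior described above.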

  8. Phytate/calcium molar ratio does not predict accessibility of calcium in ready-to-eat dishes.

    Science.gov (United States)

    Erba, Daniela; Manini, Federica; Meroni, Erika; Casiraghi, Maria C

    2017-08-01

    Phytic acid (PA), a naturally occurring compound of plant food, is generally considered to affect mineral bioavailability. The aim of this study was to investigate the reliability of the PA/calcium molar ratio as a predictive factor of calcium accessibility in composed dishes and their ingredients. Dishes were chosen whose ingredients were rich in Ca (milk or cheese) or in PA (whole-wheat cereals) in order to consider a range of PA/Ca ratios (from 0 to 2.4) and measure Ca solubility using an in vitro approach. The amounts of soluble Ca in composed dishes were consistent with the sum of soluble Ca from ingredients (three out of five meals) or higher. Among whole-wheat products, bread showed higher Ca accessibility (71%, PA/Ca = 1.1) than biscuits (23%, PA/Ca = 0.9) and pasta (15%, PA/Ca = 1.5), and among Ca-rich ingredients, semi-skimmed milk displayed higher Ca accessibility (64%) than sliced cheese (50%) and Parmesan (38%). No significant correlation between the PA/Ca ratio and Ca accessibility was found (P = 0.077). The reliability of the PA/Ca ratio for predicting the availability of calcium in composed dishes is unsatisfactory; data emphasized the importance of the overall food matrix influence on mineral accessibility. © 2016 Society of Chemical Industry.
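The PA/Ca molar ratio is computed from the analyzed amounts of each compound and their molar masses (phytic acid, C6H18O24P6, ~660.04 g/mol; calcium ~40.08 g/mol). A short worked example; the serving amounts are hypothetical, not values from the study:

```python
# Molar masses: phytic acid (C6H18O24P6) ~660.04 g/mol; calcium ~40.08 g/mol.
PA_MOLAR_MASS = 660.04
CA_MOLAR_MASS = 40.08

def pa_ca_molar_ratio(pa_mg, ca_mg):
    """Molar ratio of phytic acid to calcium, from amounts in mg per serving."""
    return (pa_mg / PA_MOLAR_MASS) / (ca_mg / CA_MOLAR_MASS)

# Hypothetical whole-wheat serving: 350 mg phytic acid, 20 mg calcium.
ratio = pa_ca_molar_ratio(350, 20)  # ~1.06, inside the study's 0-2.4 range
```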

  9. Web Caching

    Indian Academy of Sciences (India)

    leveraged through Web caching technology. Specifically, Web caching becomes an ... Web routing can improve the overall performance of the Internet. Web caching is similar to memory system caching - a Web cache stores Web resources in ...

  10. Access and scientific exploitation of planetary plasma datasets with the CDPP/AMDA web-based tool

    Science.gov (United States)

    Andre, Nicolas

    2012-07-01

    The field of planetary sciences has greatly expanded in recent years with space missions orbiting around most of the planets of our Solar System. The growing amount and wealth of data available make it difficult for scientists to exploit data coming from many sources that can initially be heterogeneous in their organization, description and format. It is an important objective of the Europlanet-RI (supported by EU within FP7) to add value to space missions by significantly contributing to the effective scientific exploitation of collected data; to enable space researchers to take full advantage of the potential value of data sets. To this end and to enhance the science return from space missions, innovative tools have to be developed and offered to the community. AMDA (Automated Multi-Dataset Analysis, http://cdpp-amda.cesr.fr/) is a web-based facility developed at CDPP Toulouse in France (http://cdpp.cesr.fr) for on line analysis of space physics data (heliosphere, magnetospheres, planetary environments) coming from either its local database or distant ones. AMDA has been recently integrated as a service to the scientific community for the Plasma Physics thematic node of the Europlanet-RI IDIS (Integrated and Distributed Information Service, http://www.europlanet-idis.fi/) activities, in close cooperation with IWF Graz (http://europlanet-plasmanode.oeaw.ac.at/index.php?id=9). We will report the status of our current technical and scientific efforts to integrate in the local database of AMDA various planetary plasma datasets (at Mercury, Venus, Mars, Earth and Moon, Jupiter, Saturn) from heterogeneous sources, including NASA/Planetary Data System (http://ppi.pds.nasa.gov/). We will also present our prototype Virtual Observatory activities to connect the AMDA tool to the IVOA Aladin astrophysical tool to enable pluridisciplinary studies of giant planet auroral emissions. This presentation will be done on behalf of the CDPP Team and Europlanet-RI IDIS plasma node

  11. Efficient Web Harvesting Strategies for Monitoring Deep Web Content

    NARCIS (Netherlands)

    Khelghati, Mohammadreza; Hiemstra, Djoerd; van Keulen, Maurice

    2016-01-01

    Web content changes rapidly [18]. In Focused Web Harvesting [17], which aims to achieve a complete harvest for a given topic, this dynamic nature of the web creates problems for users who need access to the set of all web data relevant to their topics of interest. Whether you are a fan


  13. Systematic review and evaluation of web-accessible tools for management of diabetes and related cardiovascular risk factors by patients and healthcare providers.

    Science.gov (United States)

    Yu, Catherine H; Bahniwal, Robinder; Laupacis, Andreas; Leung, Eman; Orr, Michael S; Straus, Sharon E

    2012-01-01

    To identify and evaluate the effectiveness, clinical usefulness, sustainability, and usability of web-compatible diabetes-related tools. Medline, EMBASE, CINAHL, Cochrane Central Register of Controlled Trials, World Wide Web. Studies were included if they described an electronic audiovisual tool used as a means to educate patients, caregivers, or clinicians about diabetes management and assessed a psychological, behavioral, or clinical outcome. Study abstraction and evaluation for clinical usefulness, sustainability, and usability were performed by two independent reviewers. Of 12,616 citations and 1541 full-text articles reviewed, 57 studies met inclusion criteria. Forty studies used experimental designs (25 randomized controlled trials, one controlled clinical trial, 14 before-after studies), and 17 used observational designs. Methodological quality and ratings for clinical usefulness and sustainability were variable, and there was a high prevalence of usability errors. Tools showed moderate but inconsistent effects on a variety of psychological and clinical outcomes including HbA1c and weight. Meta-regression of adequately reported studies (12 studies, 2731 participants) demonstrated that, although the interventions studied resulted in positive outcomes, this was not moderated by either clinical usefulness or usability. This review is limited by the number of accessible tools, exclusion of tools for mobile devices, study quality, and the use of non-validated scales. Few tools were identified that met our criteria for effectiveness, usefulness, sustainability, and usability. Priority areas include identifying strategies to minimize website attrition and enabling patients and clinicians to make informed decisions about website choice by encouraging reporting of website quality indicators.

  14. Features Predicting Weight Loss in Overweight or Obese Participants in a Web-Based Intervention: Randomized Trial

    OpenAIRE

    Brindal, Emily; Freyne, Jill; Saunders, Ian; Berkovsky, Shlomo; Smith, Greg; Noakes, Manny

    2012-01-01

    Background Obesity remains a serious issue in many countries. Web-based programs offer good potential for delivery of weight loss programs. Yet, many Internet-delivered weight loss studies include support from medical or nutritional experts, and relatively little is known about purely web-based weight loss programs. Objective To determine whether supportive features and personalization in a 12-week web-based lifestyle intervention with no in-person professional contact affect retention and we...

  15. Freiburg RNA Tools: a web server integrating INTARNA, EXPARNA and LOCARNA.

    Science.gov (United States)

    Smith, Cameron; Heyne, Steffen; Richter, Andreas S; Will, Sebastian; Backofen, Rolf

    2010-07-01

    The Freiburg RNA tools web server integrates three tools for the advanced analysis of RNA in a common web-based user interface. The tools IntaRNA, ExpaRNA and LocARNA support the prediction of RNA-RNA interaction, exact RNA matching and alignment of RNA, respectively. The Freiburg RNA tools web server and the software packages of the stand-alone tools are freely accessible at http://rna.informatik.uni-freiburg.de.

  16. Predicting Early Center Care Utilization in a Context of Universal Access

    Science.gov (United States)

    Zachrisson, Henrik Daae; Janson, Harald; Naerde, Ane

    2013-01-01

    This paper reports predictors for center care utilization prior to 18 months of age in Norway, a country with a welfare system providing up to one-year paid parental leave and universal access to subsidized and publicly regulated center care. A community sample of 1103 families was interviewed about demographics, family, and child characteristics…

  17. The benefit of non contrast-enhanced magnetic resonance angiography for predicting vascular access surgery outcome: a computer model perspective.

    Directory of Open Access Journals (Sweden)

    Maarten A G Merkx

    Full Text Available INTRODUCTION: Vascular access (VA surgery, a prerequisite for hemodialysis treatment of end-stage renal-disease (ESRD patients, is hampered by complication rates, which are frequently related to flow enhancement. To assist in VA surgery planning, a patient-specific computer model for postoperative flow enhancement was developed. The purpose of this study is to assess the benefit of non contrast-enhanced magnetic resonance angiography (NCE-MRA data as patient-specific geometrical input for the model-based prediction of surgery outcome. METHODS: 25 ESRD patients were included in this study. All patients received a NCE-MRA examination of the upper extremity blood vessels in addition to routine ultrasound (US. Local arterial radii were assessed from NCE-MRA and converted to model input using a linear fit per artery. Venous radii were determined with US. The effect of radius measurement uncertainty on model predictions was accounted for by performing Monte-Carlo simulations. The resulting flow prediction interval of the computer model was compared with the postoperative flow obtained from US. Patients with no overlap between model-based prediction and postoperative measurement were further analyzed to determine whether an increase in geometrical detail improved computer model prediction. RESULTS: Overlap between postoperative flows and model-based predictions was obtained for 71% of patients. Detailed inspection of non-overlapping cases revealed that the geometrical details that could be assessed from NCE-MRA explained most of the differences, and moreover, upon addition of these details in the computer model the flow predictions improved. CONCLUSIONS: The results demonstrate clearly that NCE-MRA does provide valuable geometrical information for VA surgery planning. Therefore, it is recommended to use this modality, at least for patients at risk for local or global narrowing of the blood vessels as well as for patients for whom an US-based model
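The Monte-Carlo step described above propagates radius measurement uncertainty into a flow prediction interval, which is then checked for overlap with the postoperative ultrasound flow. The sketch below illustrates only that statistical mechanics with a deliberately crude flow relation (flow proportional to radius to the fourth power, as in Poiseuille-type resistance); it is NOT the paper's patient-specific computer model, and all numbers are hypothetical.

```python
import random

K = 25.0  # arbitrary constant folding pressure drop and blood viscosity

def flow_prediction_interval(radius_mm, radius_sd_mm, n=10000, seed=1):
    """Sample radii with measurement uncertainty, push each sample through
    a toy flow relation, and return an approximate 95% prediction interval."""
    rng = random.Random(seed)
    flows = []
    for _ in range(n):
        r = rng.gauss(radius_mm, radius_sd_mm)  # sampled radius, mm
        if r > 0:
            flows.append(K * r ** 4)
    flows.sort()
    return flows[int(0.025 * len(flows))], flows[int(0.975 * len(flows))]

low, high = flow_prediction_interval(radius_mm=2.0, radius_sd_mm=0.2)
measured = 450.0                       # hypothetical postoperative US flow
overlaps = low <= measured <= high     # "overlap" check as in the study
```

The width of the interval reflects how strongly the fourth-power relation amplifies radius measurement uncertainty, which is why geometrical detail from NCE-MRA can tighten the prediction.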

  18. Cognitive-behavioral therapy for obsessive–compulsive disorder: access to treatment, prediction of long-term outcome with neuroimaging

    Directory of Open Access Journals (Sweden)

    O’Neill J

    2015-07-01

    Full Text Available Joseph O'Neill,1 Jamie D Feusner,2 1Division of Child Psychiatry, 2Division of Adult Psychiatry, UCLA Semel Institute for Neuroscience and Human Behavior, Los Angeles, CA, USA Abstract: This article reviews issues related to a major challenge to the field for obsessive–compulsive disorder (OCD): improving access to cognitive-behavioral therapy (CBT). Patient-related barriers to access include the stigma of OCD and reluctance to take on the demands of CBT. Patient-external factors include the shortage of trained CBT therapists and the high costs of CBT. The second half of the review focuses on one partial, yet plausible aid to improve access: prediction of long-term response to CBT, particularly using neuroimaging methods. Recent pilot data are presented revealing a potential for pretreatment resting-state functional magnetic resonance imaging and magnetic resonance spectroscopy of the brain to forecast OCD symptom severity up to 1 year after completing CBT. Keywords: follow-up, access to treatment, relapse, resting-state fMRI, magnetic resonance spectroscopy

  19. RNA FRABASE 2.0: an advanced web-accessible database with the capacity to search the three-dimensional fragments within RNA structures

    Directory of Open Access Journals (Sweden)

    Wasik Szymon

    2010-05-01

    Full Text Available Abstract Background Recent discoveries concerning novel functions of RNA, such as RNA interference, have contributed towards the growing importance of the field. In this respect, a deeper knowledge of complex three-dimensional RNA structures is essential to understand their new biological functions. A number of bioinformatic tools have been proposed to explore two major structural databases (PDB, NDB in order to analyze various aspects of RNA tertiary structures. One of these tools is RNA FRABASE 1.0, the first web-accessible database with an engine for automatic search of 3D fragments within PDB-derived RNA structures. This search is based upon the user-defined RNA secondary structure pattern. In this paper, we present and discuss RNA FRABASE 2.0. This second version of the system represents a major extension of this tool in terms of providing new data and a wide spectrum of novel functionalities. An intuitively operated web server platform enables very fast user-tailored search of three-dimensional RNA fragments, their multi-parameter conformational analysis and visualization. Description RNA FRABASE 2.0 has stored information on 1565 PDB-deposited RNA structures, including all NMR models. The RNA FRABASE 2.0 search engine algorithms operate on the database of the RNA sequences and the new library of RNA secondary structures, coded in the dot-bracket format extended to hold multi-stranded structures and to cover residues whose coordinates are missing in the PDB files. The library of RNA secondary structures (and their graphics is made available. A high level of efficiency of the 3D search has been achieved by introducing novel tools to formulate advanced searching patterns and to screen highly populated tertiary structure elements. RNA FRABASE 2.0 also stores data and conformational parameters in order to provide "on the spot" structural filters to explore the three-dimensional RNA structures. An instant visualization of the 3D RNA
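In dot-bracket notation, matching parentheses mark paired bases and dots mark unpaired residues. A minimal parser for the plain single-strand form is shown below; RNA FRABASE's extended coding for multi-stranded structures and missing residues adds further symbols that this sketch does not handle.

```python
def dot_bracket_pairs(structure):
    """Return base-pair index tuples from a dot-bracket secondary-structure
    string: '(' opens a pair, ')' closes the most recent open one,
    '.' is an unpaired residue."""
    stack, pairs = [], []
    for i, ch in enumerate(structure):
        if ch == "(":
            stack.append(i)
        elif ch == ")":
            pairs.append((stack.pop(), i))
    return pairs

# A four-base-pair hairpin stem with a two-residue loop:
pairs = dot_bracket_pairs("((..))")  # → [(1, 4), (0, 5)]
```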

  20. Semantic Web Technologies for the Adaptive Web

    DEFF Research Database (Denmark)

    Dolog, Peter; Nejdl, Wolfgang

    2007-01-01

    Ontologies and reasoning are the key terms brought into focus by the semantic web community. Formal representation of ontologies in a common data model on the web can be taken as a foundation for adaptive web technologies as well. This chapter describes how ontologies shared on the semantic web provide conceptualization for the links which are a main vehicle to access information on the web. The subject domain ontologies serve as constraints for generating only those links which are relevant for the domain a user is currently interested in. Furthermore, user model ontologies provide additional means for deciding which links to show, annotate, hide, generate, and reorder. The semantic web technologies provide means to formalize the domain ontologies and metadata created from them. The formalization enables reasoning for personalization decisions. This chapter describes which components...

  1. MetCCS predictor: a web server for predicting collision cross-section values of metabolites in ion mobility-mass spectrometry based metabolomics.

    Science.gov (United States)

    Zhou, Zhiwei; Xiong, Xin; Zhu, Zheng-Jiang

    2017-07-15

    In metabolomics, rigorous structural identification of metabolites presents a challenge for bioinformatics. The use of collision cross-section (CCS) values of metabolites derived from ion mobility-mass spectrometry effectively increases the confidence of metabolite identification, but this technique suffers from the limited number of available CCS values. Currently, there is no software available for rapidly generating metabolites' CCS values. Here, we developed the first web server, namely, MetCCS Predictor, for predicting CCS values. It can predict the CCS values of metabolites using molecular descriptors within a few seconds. Common users with a limited background in bioinformatics can benefit from this software and effectively improve metabolite identification in metabolomics. The web server is freely available at: http://www.metabolomics-shanghai.org/MetCCS/ . jiangzhu@sioc.ac.cn. Supplementary data are available at Bioinformatics online. © The Author (2017). Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com

  2. Predicting Social Networking Site Use and Online Communication Practices among Adolescents: The Role of Access and Device Ownership

    Directory of Open Access Journals (Sweden)

    Drew P. Cingel

    2014-01-01

    Full Text Available Given adolescents' heavy social media use, this study examined a number of predictors of adolescent social media use, as well as predictors of online communication practices. Using data collected from a national sample of 467 adolescents between the ages of 13 and 17, results indicate that demographics, technology access, and technology ownership are related to social media use and communication practices. Specifically, females log onto Facebook more often and use more constructive communication practices there compared to males. Additionally, adolescents who own smartphones engage in more constructive online communication practices than those who share regular cell phones or those who do not have access to a cell phone. Overall, results imply that ownership of mobile technologies, such as smartphones and iPads, may be more predictive of social networking site use and online communication practices than general ownership of technology.

  4. Implementation of Freeman-Wimley prediction algorithm in a web-based application for in silico identification of beta-barrel membrane proteins

    OpenAIRE

    José Antonio Agüero-Fernández; Lisandra Aguilar-Bultet; Yandy Abreu-Jorge; Agustín Lage-Castellanos; Yannier Estévez-Dieppa

    2015-01-01

    Beta-barrel proteins play an important role in both human and veterinary medicine. In particular, their localization on the bacterial surface and their involvement in the virulence mechanisms of pathogens have turned them into an interesting target in the search for vaccine candidates. Recently, Freeman and Wimley developed a prediction algorithm based on the physicochemical properties of transmembrane beta-barrel proteins (TMBBs). Based on that algorithm, and using Grails, a web-...

  5. academic libraries embrace Web access

    Directory of Open Access Journals (Sweden)

    1997-01-01

    Full Text Available This article presents the lessons learned when six universities in the West Midlands, UK worked collaboratively within the eLib (Electronic Libraries) Programme. These libraries and learning resource centres spent two years focused on a culture change in the support provided to education, law, and life sciences subject departments. Their efforts resulted in a mediation model for networked information. The project manager for TAPin reflects upon the importance of attitudes, infrastructure, staff skills, appearances and service practices in the process of change.

  6. Prediction of highly expressed genes in microbes based on chromatin accessibility

    DEFF Research Database (Denmark)

    Willenbrock, Hanni; Ussery, David

    2007-01-01

    BACKGROUND: It is well known that gene expression is dependent on chromatin structure in eukaryotes and it is likely that chromatin can play a role in bacterial gene expression as well. Here, we use a nucleosomal position preference measure of anisotropic DNA flexibility to predict highly expressed...

  7. EPA Web Training Classes

    Science.gov (United States)

    Scheduled webinars can help you better manage EPA web content. Class topics include Drupal basics, creating different types of pages in the WebCMS such as document pages and forms, using Google Analytics, and best practices for metadata and accessibility.

  8. Web Design Matters

    Science.gov (United States)

    Mathews, Brian

    2009-01-01

    The web site is a library's most important feature. Patrons use the web site for numerous functions, such as renewing materials, placing holds, requesting information, and accessing databases. The homepage is the place they turn to look up the hours, branch locations, policies, and events. Whether users are at work, at home, in a building, or on…

  9. Link-Prediction Enhanced Consensus Clustering for Complex Networks (Open Access)

    Science.gov (United States)

    2016-05-20

    Research article by Matthew Burgess, Eytan Adar, and Michael Cafarella. ... consensus clustering algorithm to enhance community detection on incomplete networks. Our framework utilizes existing community detection algorithms that ... types of complex networks exhibit community structure: groups of highly connected nodes. Communities or clusters often reflect nodes that share similar ...

  10. Scalable web services for the PSIPRED Protein Analysis Workbench.

    Science.gov (United States)

    Buchan, Daniel W A; Minneci, Federico; Nugent, Tim C O; Bryson, Kevin; Jones, David T

    2013-07-01

    Here, we present the new UCL Bioinformatics Group's PSIPRED Protein Analysis Workbench. The Workbench unites all of our previously available analysis methods into a single web-based framework. The new web portal provides a greatly streamlined user interface with a number of new features to allow users to better explore their results. We offer a number of additional services to enable computationally scalable execution of our prediction methods; these include SOAP and XML-RPC web server access and new HADOOP packages. All software and services are available via the UCL Bioinformatics Group website at http://bioinf.cs.ucl.ac.uk/.
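    For programmatic access of the XML-RPC flavor mentioned above, a client first marshals its request into an XML-RPC method-call document. The method name and parameter below are hypothetical illustrations; consult the UCL Bioinformatics Group site for the real interface:

```python
import xmlrpc.client

# Marshal a request body of the kind a client would POST to the service.
# "predict_secondary_structure" is a made-up method name for illustration.
sequence = "MSEQKLVVLLAGG"  # a short stand-in amino-acid sequence
payload = xmlrpc.client.dumps((sequence,), methodname="predict_secondary_structure")
print(payload)
```

    The same stdlib module's `ServerProxy` class would then send such a payload to the service's endpoint URL and unmarshal the response.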

  11. Enterprise Dynamic Access Control (EDAC)

    National Research Council Canada - National Science Library

    Fernandez, Richard

    2005-01-01

    ... Resources can represent software applications, web services and even facility access. An effective access control model should be capable of evaluating resource access based on user characteristics and environmentals...

  12. A generic method for assignment of reliability scores applied to solvent accessibility predictions

    DEFF Research Database (Denmark)

    Petersen, Bent; Petersen, Thomas Nordahl; Andersen, Pernille

    2009-01-01

    ... The performance of the neural networks was evaluated on a commonly used set of sequences known as the CB513 set. An overall Pearson's correlation coefficient of 0.72 was obtained, which is comparable to the performance of the currently best publicly available method, Real-SPINE. Both methods associate a reliability ... comparing the Pearson's correlation coefficient for the upper 20% of predictions sorted according to reliability. For this subset, values of 0.79 and 0.74 are obtained using our and the compared method, respectively. This tendency is true for any selected subset.

  13. Analysis and prediction of pest dynamics in an agroforestry context using Tiko'n, a generic tool to develop food web models

    Science.gov (United States)

    Rojas, Marcela; Malard, Julien; Adamowski, Jan; Carrera, Jaime Luis; Maas, Raúl

    2017-04-01

    While it is known that climate change will impact future plant-pest population dynamics, potentially affecting crop damage, agroforestry with its enhanced biodiversity is said to reduce the outbreaks of pest insects by providing natural enemies for the control of pest populations. This premise is known in the literature as the natural enemy hypothesis and has been widely studied qualitatively. However, disagreement still exists on whether biodiversity enhancement reduces pest outbreaks, showing the need of quantitatively understanding the mechanisms behind the interactions between pests and natural enemies, also known as trophic interactions. Crop pest models that study insect population dynamics in agroforestry contexts are very rare, and pest models that take trophic interactions into account are even rarer. This may be due to the difficulty of representing complex food webs in a quantifiable model. There is therefore a need for validated food web models that allow users to predict the response of these webs to changes in climate in agroforestry systems. In this study we present Tiko'n, a Python-based software whose API allows users to rapidly build and validate trophic web models; the program uses a Bayesian inference approach to calibrate the models according to field data, allowing for the reuse of literature data from various sources and reducing the need for extensive field data collection. Tiko'n was run using coffee leaf miner (Leucoptera coffeella) and associated parasitoid data from a shaded coffee plantation, showing the mechanisms of insect population dynamics within a tri-trophic food web in an agroforestry system.
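    A tri-trophic food web of the kind Tiko'n represents (crop, pest, parasitoid) can be sketched with generic Lotka-Volterra-style ODEs. The equations and parameter values below are an illustrative stand-in, not Tiko'n's actual model or its Bayesian calibration:

```python
from scipy.integrate import solve_ivp

def tritrophic(t, y, r=1.0, K=10.0, a=0.4, b=0.3, c=0.25, d=0.2, m1=0.2, m2=0.15):
    plant, pest, para = y
    dplant = r * plant * (1 - plant / K) - a * plant * pest  # logistic crop growth minus herbivory
    dpest = b * plant * pest - c * pest * para - m1 * pest   # pest gains from crop, losses to parasitoid
    dpara = d * pest * para - m2 * para                      # parasitoid gains from pest, mortality
    return [dplant, dpest, dpara]

# Integrate 100 time units from initial densities (plant, pest, parasitoid).
sol = solve_ivp(tritrophic, (0, 100), [5.0, 1.0, 0.5], max_step=0.5)
plant, pest, para = sol.y[:, -1]
print(f"t=100: plant={plant:.2f}, pest={pest:.2f}, parasitoid={para:.2f}")
```

    Calibration in the spirit of Tiko'n would then treat parameters such as the attack rates as priors and update them against field observations.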

  14. Real-time prediction of Physicochemical and Toxicological Endpoints Using the Web-based CompTox Chemistry Dashboard (ACS Fall meeting) 10 of 12

    Science.gov (United States)

    The EPA CompTox Chemistry Dashboard developed by the National Center for Computational Toxicology (NCCT) provides access to data for ~750,000 chemical substances. The data include experimental and predicted data for physicochemical, environmental fate and transport and toxicologi...

  15. Prediction of the neuropeptidomes of members of the Astacidea (Crustacea, Decapoda) using publicly accessible transcriptome shotgun assembly (TSA) sequence data.

    Science.gov (United States)

    Christie, Andrew E; Chi, Megan

    2015-12-01

    The decapod infraorder Astacidea is comprised of clawed lobsters and freshwater crayfish. Due to their economic importance and their use as models for investigating neurochemical signaling, much work has focused on elucidating their neurochemistry, particularly their peptidergic systems. Interestingly, no astacidean has been the subject of large-scale peptidomic analysis via in silico transcriptome mining, this despite growing transcriptomic resources for members of this taxon. Here, the publicly accessible astacidean transcriptome shotgun assembly data were mined for putative peptide-encoding transcripts; these sequences were used to predict the structures of mature neuropeptides. One hundred seventy-six distinct peptides were predicted for Procambarus clarkii, including isoforms of adipokinetic hormone-corazonin-like peptide (ACP), allatostatin A (AST-A), allatostatin B, allatostatin C (AST-C) bursicon α, bursicon β, CCHamide, crustacean hyperglycemic hormone (CHH)/ion transport peptide (ITP), diuretic hormone 31 (DH31), eclosion hormone (EH), FMRFamide-like peptide, GSEFLamide, intocin, leucokinin, neuroparsin, neuropeptide F, pigment dispersing hormone, pyrokinin, RYamide, short neuropeptide F (sNPF), SIFamide, sulfakinin and tachykinin-related peptide (TRP). Forty-six distinct peptides, including isoforms of AST-A, AST-C, bursicon α, CCHamide, CHH/ITP, DH31, EH, intocin, myosuppressin, neuroparsin, red pigment concentrating hormone, sNPF and TRP, were predicted for Pontastacus leptodactylus, with a bursicon β and a neuroparsin predicted for Cherax quadricarinatus. The identification of ACP is the first from a decapod, while the predictions of CCHamide, EH, GSEFLamide, intocin, neuroparsin and RYamide are firsts for the Astacidea. Collectively, these data greatly expand the catalog of known astacidean neuropeptides and provide a foundation for functional studies of peptidergic signaling in members of this decapod infraorder. Copyright © 2015 Elsevier Inc

  16. Improving spatial prediction of Schistosoma haematobium prevalence in southern Ghana through new remote sensors and local water access profiles.

    Science.gov (United States)

    Kulinkina, Alexandra V; Walz, Yvonne; Koch, Magaly; Biritwum, Nana-Kwadwo; Utzinger, Jürg; Naumova, Elena N

    2018-06-04

    Schistosomiasis is a water-related neglected tropical disease. In many endemic low- and middle-income countries, insufficient surveillance and reporting lead to poor characterization of the demographic and geographic distribution of schistosomiasis cases. Hence, modeling is relied upon to predict areas of high transmission and to inform control strategies. We hypothesized that utilizing remotely sensed (RS) environmental data in combination with water, sanitation, and hygiene (WASH) variables could improve on the current predictive modeling approaches. Schistosoma haematobium prevalence data, collected from 73 rural Ghanaian schools, were used in a random forest model to investigate the predictive capacity of 15 environmental variables derived from RS data (Landsat 8, Sentinel-2, and Global Digital Elevation Model) with fine spatial resolution (10-30 m). Five methods of variable extraction were tested to determine the spatial linkage between school-based prevalence and the environmental conditions of potential transmission sites, including applying the models to known human water contact locations. Lastly, measures of local water access and groundwater quality were incorporated into RS-based models to assess the relative importance of environmental and WASH variables. Predictive models based on environmental characterization of specific locations where people contact surface water bodies offered some improvement as compared to the traditional approach based on environmental characterization of locations where prevalence is measured. A water index (MNDWI) and topographic variables (elevation and slope) were important environmental risk factors, while overall, groundwater iron concentration predominated in the combined model that included WASH variables. The study helps to understand localized drivers of schistosomiasis transmission. Specifically, unsatisfactory water quality in boreholes perpetuates reliance of surface water bodies, indirectly increasing
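    The modeling setup described above (a random forest over environmental covariates) can be sketched as follows. The covariate values and prevalence data are synthetic placeholders, and only three of the study's 15 variables are mimicked:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(7)
n = 73  # number of schools in the study
X = np.column_stack([
    rng.uniform(-0.5, 0.5, n),  # water index (MNDWI-like)
    rng.uniform(50, 400, n),    # elevation (m)
    rng.uniform(0, 15, n),      # slope (degrees)
])
# Synthetic prevalence loosely driven by the water index only.
prevalence = np.clip(0.2 + 0.4 * X[:, 0] + rng.normal(0, 0.05, n), 0, 1)

rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, prevalence)
importances = dict(zip(["MNDWI", "elevation", "slope"], rf.feature_importances_.round(2)))
print(importances)
```

    Feature importances from such a fit are one way to rank environmental risk factors, as the study does for the water index and topographic variables.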

  17. Predictive risk modelling under different data access scenarios: who is identified as high risk and for how long?

    Science.gov (United States)

    Johnson, Tracy L; Kaldor, Jill; Sutherland, Kim; Humphries, Jacob; Jorm, Louisa R; Levesque, Jean-Frederic

    2018-01-01

    Objective This observational study critically explored the performance of different predictive risk models simulating three data access scenarios, comparing: (1) sociodemographic and clinical profiles; (2) consistency in high-risk designation across models; and (3) persistence of high-risk status over time. Methods Cross-sectional health survey data (2006–2009) for more than 260 000 Australian adults 45+ years were linked to longitudinal individual hospital, primary care, pharmacy and mortality data. Three risk models predicting acute emergency hospitalisations were explored, simulating conditions where data are accessed through primary care practice management systems, or through hospital-based electronic records, or through a hypothetical ‘full’ model using a wider array of linked data. High-risk patients were identified using different risk score thresholds. Models were reapplied monthly for 24 months to assess persistence in high-risk categorisation. Results The three models displayed similar statistical performance. Three-quarters of patients in the high-risk quintile from the ‘full’ model were also identified using the primary care or hospital-based models, with the remaining patients differing according to age, frailty, multimorbidity, self-rated health, polypharmacy, prior hospitalisations and imminent mortality. The use of higher risk prediction thresholds resulted in lower levels of agreement in high-risk designation across models and greater morbidity and mortality in identified patient populations. Persistence of high-risk status varied across approaches according to updated information on utilisation history, with up to 25% of patients reassessed as lower risk within 1 year. Conclusion/implications Small differences in risk predictors or risk thresholds resulted in comparatively large differences in who was classified as high risk and for how long. Pragmatic predictive risk modelling design decisions based on data availability or projected

  18. An Implementation of Semantic Web System for Information retrieval using J2EE Technologies.

    OpenAIRE

    B.Hemanth kumar,; Prof. M.Surendra Prasad Babu

    2011-01-01

    Accessing web resources (information) is an essential facility that web applications provide to everybody. The Semantic Web is one of the systems that provide a facility to access resources through web service applications. The Semantic Web and web services are new, emerging web-based technologies. An automatic information processing system can be developed by using the Semantic Web and web services, each having its own contribution within the context of developing web-based information systems and ap...

  19. OceanNOMADS: Real-time and retrospective access to operational U.S. ocean prediction products

    Science.gov (United States)

    Harding, J. M.; Cross, S. L.; Bub, F.; Ji, M.

    2011-12-01

    The National Oceanic and Atmospheric Administration (NOAA) National Operational Model Archive and Distribution System (NOMADS) provides both real-time and archived atmospheric model output from servers at the National Centers for Environmental Prediction (NCEP) and National Climatic Data Center (NCDC) respectively (http://nomads.ncep.noaa.gov/txt_descriptions/marRutledge-1.pdf). The NOAA National Ocean Data Center (NODC) with NCEP is developing a complementary capability called OceanNOMADS for operational ocean prediction models. An NCEP ftp server currently provides real-time ocean forecast output (http://www.opc.ncep.noaa.gov/newNCOM/NCOM_currents.shtml) with retrospective access through NODC. A joint effort between the Northern Gulf Institute (NGI; a NOAA Cooperative Institute) and the NOAA National Coastal Data Development Center (NCDDC; a division of NODC) created the developmental version of the retrospective OceanNOMADS capability (http://www.northerngulfinstitute.org/edac/ocean_nomads.php) under the NGI Ecosystem Data Assembly Center (EDAC) project (http://www.northerngulfinstitute.org/edac/). Complementary funding support for the developmental OceanNOMADS from U.S. Integrated Ocean Observing System (IOOS) through the Southeastern University Research Association (SURA) Model Testbed (http://testbed.sura.org/) this past year provided NODC the analogue that facilitated the creation of an NCDDC production version of OceanNOMADS (http://www.ncddc.noaa.gov/ocean-nomads/). Access tool development and storage of initial archival data sets occur on the NGI/NCDDC developmental servers with transition to NODC/NCCDC production servers as the model archives mature and operational space and distribution capability grow. Navy operational global ocean forecast subsets for U.S waters comprise the initial ocean prediction fields resident on the NCDDC production server. The NGI/NCDDC developmental server currently includes the Naval Research Laboratory Inter-America Seas

  20. Predicting Middle School Students' Use of Web 2.0 Technologies out of School Using Home and School Technological Variables

    Science.gov (United States)

    Hughes, Joan E.; Read, Michelle F.; Jones, Sara; Mahometa, Michael

    2015-01-01

    This study used multiple regression to identify predictors of middle school students' Web 2.0 activities out of school, a construct composed of 15 technology activities. Three middle schools participated, where sixth- and seventh-grade students completed a questionnaire. Independent predictor variables included three demographic and five computer…

  1. Web services for distributed and interoperable hydro-information systems

    Science.gov (United States)

    Horak, J.; Orlik, A.; Stromsky, J.

    2008-03-01

    Web services support the integration and interoperability of Web-based applications and enable machine-to-machine interaction. The concepts of web services and open distributed architecture were applied to the development of T-DSS, the prototype customised for web based hydro-information systems. T-DSS provides mapping services, database related services and access to remote components, with special emphasis placed on the output flexibility (e.g. multilingualism), where SOAP web services are mainly used for communication. The remote components are represented above all by remote data and mapping services (e.g. meteorological predictions), modelling and analytical systems (currently HEC-HMS, MODFLOW and additional utilities), which support decision making in water management.

  2. Distributing flight dynamics products via the World Wide Web

    Science.gov (United States)

    Woodard, Mark; Matusow, David

    1996-01-01

    The NASA Flight Dynamics Products Center (FDPC), which makes available selected operations products via the World Wide Web, is reported on. The FDPC can be accessed from any host machine connected to the Internet. It is a multi-mission service which provides Internet users with unrestricted access to the following standard products: antenna contact predictions; ground tracks; orbit ephemerides; mean and osculating orbital elements; earth sensor sun and moon interference predictions; space flight tracking data network summaries; and Shuttle transport system predictions. Several scientific databases are available through the service.

  3. Web OPAC Interfaces: An Overview.

    Science.gov (United States)

    Babu, B. Ramesh; O'Brien, Ann

    2000-01-01

    Discussion of Web-based online public access catalogs (OPACs) focuses on a review of six Web OPAC interfaces in use in academic libraries in the United Kingdom. Presents a checklist and guidelines of important features and functions that are currently available, including search strategies, access points, display, links, and layout. (Author/LRW)

  4. Use of Readily Accessible Inflammatory Markers to Predict Diabetic Kidney Disease

    Directory of Open Access Journals (Sweden)

    Lauren Winter

    2018-05-01

    Full Text Available Diabetic kidney disease is a common complication of type 1 and type 2 diabetes and is the primary cause of end-stage renal disease in developed countries. Early detection of diabetic kidney disease will facilitate early intervention aimed at reducing the rate of progression to end-stage renal disease. Diabetic kidney disease has been traditionally classified based on the presence of albuminuria. More recently estimated glomerular filtration rate has also been incorporated into the staging of diabetic kidney disease. While albuminuric diabetic kidney disease is well described, the phenotype of non-albuminuric diabetic kidney disease is now widely accepted. An association between markers of inflammation and diabetic kidney disease has previously been demonstrated. Effector molecules of the innate immune system including C-reactive protein, interleukin-6, and tumor necrosis factor-α are increased in patients with diabetic kidney disease. Furthermore, renal infiltration of neutrophils, macrophages, and lymphocytes are observed in renal biopsies of patients with diabetic kidney disease. Similarly high serum neutrophil and low serum lymphocyte counts have been shown to be associated with diabetic kidney disease. The neutrophil–lymphocyte ratio is considered a robust measure of systemic inflammation and is associated with the presence of inflammatory conditions including the metabolic syndrome and insulin resistance. Cross-sectional studies have demonstrated a link between high levels of the above inflammatory biomarkers and diabetic kidney disease. Further longitudinal studies will be required to determine if these readily available inflammatory biomarkers can accurately predict the presence and prognosis of diabetic kidney disease, above and beyond albuminuria, and estimated glomerular filtration rate.
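    As a minimal illustration, the neutrophil-lymphocyte ratio discussed above is a direct calculation from a differential blood count. The 3.0 cutoff in this sketch is purely illustrative, since published studies use varying thresholds:

```python
def nlr(neutrophils: float, lymphocytes: float) -> float:
    """Neutrophil-lymphocyte ratio from a differential count (10^9 cells/L)."""
    if lymphocytes <= 0:
        raise ValueError("lymphocyte count must be positive")
    return neutrophils / lymphocytes

# High neutrophils with low lymphocytes yields an elevated ratio.
ratio = nlr(neutrophils=5.2, lymphocytes=1.3)
print(f"NLR = {ratio:.2f}, above illustrative 3.0 cutoff: {ratio > 3.0}")
```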

  5. An MRI-based classification scheme to predict passive access of 5 to 50-nm large nanoparticles to tumors.

    Science.gov (United States)

    Karageorgis, Anastassia; Dufort, Sandrine; Sancey, Lucie; Henry, Maxime; Hirsjärvi, Samuli; Passirani, Catherine; Benoit, Jean-Pierre; Gravier, Julien; Texier, Isabelle; Montigon, Olivier; Benmerad, Mériem; Siroux, Valérie; Barbier, Emmanuel L; Coll, Jean-Luc

    2016-02-19

    Nanoparticles are useful tools in oncology because of their capacity to passively accumulate in tumors in particular via the enhanced permeability and retention (EPR) effect. However, the importance and reliability of this effect remains controversial and quite often unpredictable. In this preclinical study, we used optical imaging to detect the accumulation of three types of fluorescent nanoparticles in eight different subcutaneous and orthotopic tumor models, and dynamic contrast-enhanced and vessel size index Magnetic Resonance Imaging (MRI) to measure the functional parameters of these tumors. The results demonstrate that the permeability and blood volume fraction determined by MRI are useful parameters for predicting the capacity of a tumor to accumulate nanoparticles. Translated to a clinical situation, this strategy could help anticipate the EPR effect of a particular tumor and thus its accessibility to nanomedicines.

  6. NetOglyc: prediction of mucin type O-glycosylation sites based on sequence context and surface accessibility

    DEFF Research Database (Denmark)

    Hansen, Jan Erik; Lund, Ole; Tolstrup, Niels

    1998-01-01

    -glycosylated serine and threonine residues in independent test sets, thus proving more accurate than matrix statistics and vector projection methods. Prediction of O-glycosylation sites in the envelope glycoprotein gp120 from the primate lentiviruses HIV-1, HIV-2 and SIV is presented. The most conserved O...... structure and surface accessibility. The sequence context of glycosylated threonines was found to differ from that of serine, and the sites were found to cluster. Non-clustered sites had a sequence context different from that of clustered sites. Charged residues were disfavoured at position -1 and +3......-glycosylation signals in these evolutionarily related glycoproteins were found in their first hypervariable loop, V1. However, the strain variation for HIV-1 gp120 was significant. A computer server, available through WWW or E-mail, has been developed for prediction of mucin type O-glycosylation sites in proteins based...

  7. Anonymous Web Browsing and Hosting

    OpenAIRE

    MANOJ KUMAR; ANUJ RANI

    2013-01-01

    In today's high-tech environment, every organization and individual computer user relies on the Internet to access web data. To maintain high confidentiality and security of that data, secure web solutions are required. In this paper we describe dedicated anonymous web browsing solutions which make browsing faster and more secure. Web applications that transfer secret information, such as email, demand ever greater security attention. This paper also describes ho...

  8. MODIS Collection 6 Land Product Subsets Web Service

    Data.gov (United States)

    National Aeronautics and Space Administration — The MODIS Web Service provides data access capabilities for Moderate Resolution Imaging Spectroradiometer (MODIS) Collection 6 land products. The web service...

  9. Proactive behavior, but not inhibitory control, predicts repeated innovation by spotted hyenas tested with a multi-access box.

    Science.gov (United States)

    Johnson-Ulrich, Lily; Johnson-Ulrich, Zoe; Holekamp, Kay

    2018-05-01

    Innovation is widely linked to cognitive ability, brain size, and adaptation to novel conditions. However, successful innovation appears to be influenced by both cognitive factors, such as inhibitory control, and non-cognitive behavioral traits. We used a multi-access box (MAB) paradigm to measure repeated innovation, the number of unique innovations learned across trials, by 10 captive spotted hyenas (Crocuta crocuta). Spotted hyenas are highly innovative in captivity and also display striking variation in behavioral traits, making them good model organisms for examining the relationship between innovation and other behavioral traits. We measured persistence, motor diversity, motivation, activity, efficiency, inhibitory control, and neophobia demonstrated by hyenas while interacting with the MAB. We also independently assessed inhibitory control with a detour cylinder task. Most hyenas were able to solve the MAB at least once, but only four hyenas satisfied learning criteria for all four possible solutions. Interestingly, neither measure of inhibitory control predicted repeated innovation. Instead, repeated innovation was predicted by a proactive syndrome of behavioral traits that included high persistence, high motor diversity, high activity and low neophobia. Our results suggest that this proactive behavioral syndrome may be more important than inhibitory control for successful innovation with the MAB by members of this species.

  10. Development and Validation of a Preprocedural Risk Score to Predict Access Site Complications After Peripheral Vascular Interventions Based on the Vascular Quality Initiative Database

    Directory of Open Access Journals (Sweden)

    Daniel Ortiz

    2016-01-01

    Full Text Available Purpose: Access site complications following peripheral vascular intervention (PVI are associated with prolonged hospitalization and increased mortality. Prediction of access site complication risk may optimize PVI care; however, there is no tool designed for this. We aimed to create a clinical scoring tool to stratify patients according to their risk of developing access site complications after PVI. Methods: The Society for Vascular Surgery’s Vascular Quality Initiative database yielded 27,997 patients who had undergone PVI at 131 North American centers. Clinically and statistically significant preprocedural risk factors associated with in-hospital, post-PVI access site complications were included in a multivariate logistic regression model, with access site complications as the outcome variable. A predictive model was developed with a random sample of 19,683 (70% PVI procedures and validated in 8,314 (30%. Results: Access site complications occurred in 939 (3.4% patients. The risk tool predictors are female gender, age > 70 years, white race, bedridden ambulatory status, insulin-treated diabetes mellitus, prior minor amputation, procedural indication of claudication, and nonfemoral arterial access site (model c-statistic = 0.638. Of these predictors, insulin-treated diabetes mellitus and prior minor amputation were protective of access site complications. The discriminatory power of the risk model was confirmed by the validation dataset (c-statistic = 0.6139. Higher risk scores correlated with increased frequency of access site complications: 1.9% for low risk, 3.4% for moderate risk and 5.1% for high risk. Conclusions: The proposed clinical risk score based on eight preprocedural characteristics is a tool to stratify patients at risk for post-PVI access site complications. The risk score may assist physicians in identifying patients at risk for access site complications and selection of patients who may benefit from bleeding avoidance
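    A points-based tool of the kind described could be sketched as follows. The point weights and tier thresholds here are invented for illustration; the published model's logistic-regression coefficients and cutoffs differ:

```python
def access_site_risk(female, age_over_70, white, bedridden,
                     insulin_diabetes, prior_minor_amputation,
                     claudication, nonfemoral_access):
    """Return (score, tier) from the eight predictors; weights are invented.

    Insulin-treated diabetes and prior minor amputation were protective in
    the published model, so they subtract points here.
    """
    score = (2 * female + 2 * age_over_70 + 1 * white + 2 * bedridden
             + 1 * claudication + 2 * nonfemoral_access
             - 1 * insulin_diabetes - 1 * prior_minor_amputation)
    tier = "low" if score <= 2 else ("moderate" if score <= 5 else "high")
    return score, tier

# An elderly female with claudication and nonfemoral access scores "high".
print(access_site_risk(female=True, age_over_70=True, white=False, bedridden=False,
                       insulin_diabetes=False, prior_minor_amputation=False,
                       claudication=True, nonfemoral_access=True))
```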

  11. LECTINPred: web Server that Uses Complex Networks of Protein Structure for Prediction of Lectins with Potential Use as Cancer Biomarkers or in Parasite Vaccine Design.

    Science.gov (United States)

    Munteanu, Cristian R; Pedreira, Nieves; Dorado, Julián; Pazos, Alejandro; Pérez-Montoto, Lázaro G; Ubeira, Florencio M; González-Díaz, Humberto

    2014-04-01

    Lectins (Ls) play an important role in many diseases such as different types of cancer, parasitic infections and other diseases. Interestingly, the Protein Data Bank (PDB) contains +3000 protein 3D structures with unknown function. Thus, we can in principle, discover new Ls mining non-annotated structures from PDB or other sources. However, there are no general models to predict new biologically relevant Ls based on 3D chemical structures. We used the MARCH-INSIDE software to calculate the Markov-Shannon 3D electrostatic entropy parameters for the complex networks of protein structure of 2200 different protein 3D structures, including 1200 Ls. We have performed a Linear Discriminant Analysis (LDA) using these parameters as inputs in order to seek a new Quantitative Structure-Activity Relationship (QSAR) model, which is able to discriminate 3D structure of Ls from other proteins. We implemented this predictor in the web server named LECTINPred, freely available at http://bio-aims.udc.es/LECTINPred.php. This web server showed the following goodness-of-fit statistics: Sensitivity=96.7 % (for Ls), Specificity=87.6 % (non-active proteins), and Accuracy=92.5 % (for all proteins), considering altogether both the training and external prediction series. In mode 2, users can carry out an automatic retrieval of protein structures from PDB. We illustrated the use of this server, in operation mode 1, performing a data mining of PDB. We predicted Ls scores for +2000 proteins with unknown function and selected the top-scored ones as possible lectins. In operation mode 2, LECTINPred can also upload 3D structural models generated with structure-prediction tools like LOMETS or PHYRE2. The new Ls are expected to be of relevance as cancer biomarkers or useful in parasite vaccine design. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
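    The LDA discrimination step described above can be sketched with synthetic descriptor vectors standing in for the Markov-Shannon entropy parameters (the class sizes mirror the abstract's 1200 lectins out of 2200 proteins; the feature values are invented):

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(1)
# 1200 "lectins" (label 1) and 1000 "non-lectins" (label 0) with shifted means.
X = np.vstack([rng.normal(0.0, 1.0, (1200, 5)),
               rng.normal(1.5, 1.0, (1000, 5))])
y = np.array([1] * 1200 + [0] * 1000)

# Fit the linear discriminant and report how well it separates the classes.
lda = LinearDiscriminantAnalysis().fit(X, y)
acc = lda.score(X, y)
print(f"training accuracy: {acc:.3f}")
```

    A server such as LECTINPred would score each uploaded 3D structure's descriptor vector with a model of this kind and report the predicted class.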

  12. Accessing Electronic Journals.

    Science.gov (United States)

    McKay, Sharon Cline

    1999-01-01

    Discusses issues librarians need to consider when providing access to electronic journals. Topics include gateways; index and abstract services; validation and pay-per-view; title selection; integration with OPACs (online public access catalogs) or Web sites; paper availability; ownership versus access; usage restrictions; and services offered…

  13. OHSU web site readers have access to distinctive information. How to e-mail a child in the hospital; where to find a midwife.

    Science.gov (United States)

    Botvin, Judith D

    2005-01-01

    Oregon Health & Science University (OHSU), Portland, Ore., posts a web site that generously covers the various aspects of this complex organization. Visitors to www.osuhealth.com can find a midwife, send an e-mail to a child in the hospital, and learn about the benefits of living in Oregon.

  14. Semantic Web

    Directory of Open Access Journals (Sweden)

    Anna Lamandini

    2011-06-01

    Full Text Available The semantic Web is a technology at the service of knowledge, aimed at accessibility and the sharing of content and at facilitating interoperability between different systems; as such it is one of the nine key technological pillars of TIC (technologies for information and communication) within the third theme, specific cooperation programme, of the seventh framework programme for research and development (7°PQRS, 2007-2013). As a system it seeks to overcome the overload of irrelevant information on the Internet, in order to facilitate specific and pertinent searches. It is an extension of the existing Web that aims at cooperation between computers and people (the dream of Sir Tim Berners-Lee), where machines can give more support to people by integrating and elaborating data in order to obtain inferences and a global sharing of data. It is a technology able to favour the development of a “web of data”, in other words the creation of a space of interconnected and shared data sets (Linked Data) which allows users to link different types of data coming from different sources. It is a technology that will have a great effect on everyday life, since it will permit the planning of “intelligent applications” in various sectors such as education and training, research, the business world, public information, tourism, health, and e-government. It is an innovative technology that activates a social transformation (the socio-semantic Web) on a world level, since it redefines the cognitive universe of users and enables the sharing not only of information but of meaning (collective and connected intelligence).

  15. Survey of Techniques for Deep Web Source Selection and Surfacing the Hidden Web Content

    OpenAIRE

    Khushboo Khurana; M.B. Chandak

    2016-01-01

    Large and continuously growing dynamic web content has created new opportunities for large-scale data analysis in recent years. There is a huge amount of information that traditional web crawlers cannot access, since they use link analysis techniques by which only the surface web can be accessed. Traditional search engine crawlers require web pages to be linked to other pages via hyperlinks, causing a large amount of web data to be hidden from the crawlers. Enormous data is available in...

  16. Dynamic Web Pages: Performance Impact on Web Servers.

    Science.gov (United States)

    Kothari, Bhupesh; Claypool, Mark

    2001-01-01

    Discussion of Web servers and requests for dynamic pages focuses on experimentally measuring and analyzing the performance of the three dynamic Web page generation technologies: CGI, FastCGI, and Servlets. Develops a multivariate linear regression model and predicts Web server performance under some typical dynamic requests. (Author/LRW)
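
    The modeling step this abstract describes, fitting a multivariate linear regression to measured server performance and using it to predict response under a typical dynamic request, can be sketched with synthetic measurements (the predictor variables, coefficients, and data below are all hypothetical, not the paper's):

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Hypothetical measurements: columns are request rate (req/s) and a
    # dynamic-page complexity score; target is response time in ms.
    X = rng.uniform([10, 1], [200, 10], size=(50, 2))
    true_coef = np.array([0.8, 12.0])
    y = 5.0 + X @ true_coef + rng.normal(scale=2.0, size=50)

    # Fit y = b0 + b1*rate + b2*complexity by ordinary least squares.
    A = np.column_stack([np.ones(len(X)), X])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)

    # Predict response time for a typical dynamic request:
    # rate = 100 req/s, complexity = 5.
    pred = np.array([1.0, 100.0, 5.0]) @ coef
    print(f"predicted response time: {pred:.1f} ms")
    ```

    With low noise the fitted prediction lands close to the generating model's 145 ms; real CGI/FastCGI/Servlet measurements would of course be far noisier.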

  17. Validation of a Web-Based Tool to Predict the Ipsilateral Breast Tumor Recurrence (IBTR! 2.0) after Breast-Conserving Therapy for Korean Patients.

    Science.gov (United States)

    Jung, Seung Pil; Hur, Sung Mo; Lee, Se Kyung; Kim, Sangmin; Choi, Min-Young; Bae, Soo Youn; Kim, Jiyoung; Kim, Min Kuk; Kil, Won Ho; Choe, Jun-Ho; Kim, Jung-Han; Kim, Jee Soo; Nam, Seok Jin; Bae, Jeoung Won; Lee, Jeong Eon

    2013-03-01

    IBTR! 2.0 is a web-based nomogram that predicts the 10-year ipsilateral breast tumor recurrence (IBTR) rate after breast-conserving therapy. We validated this nomogram in Korean patients. The nomogram was tested on 520 Korean patients who underwent breast-conserving surgery followed by radiation therapy. Predicted and observed 10-year outcomes were compared for the entire cohort and for each group predefined by nomogram-predicted risk: group 1, < 3%; group 2, 3-5%; group 3, 5-10%; and group 4, > 10%. For the overall cohort, the 10-year predicted and observed estimates of IBTR were 5.22% and 5.70% (p=0.68). In group 1 (n=124), the predicted and observed estimates were 2.25% and 1.80% (p=0.73); in group 2 (n=177), 3.95% and 3.90% (p=0.97); in group 3 (n=181), 7.14% and 8.80% (p=0.42); and in group 4 (n=38), 11.66% and 14.90% (p=0.73), respectively. In a previous validation of this nomogram based on American patients, nomogram-predicted IBTR rates were overestimated in the high-risk subgroup. However, our results based on Korean patients showed that the observed IBTR was higher than the predicted estimates in groups 3 and 4. This difference may arise from ethnic differences, as well as from the methods used to detect IBTR and the healthcare environment. IBTR! 2.0 may be considered an acceptable nomogram for Korean patients with a low to moderate risk of in-breast recurrence. Before widespread use, IBTR! 2.0 needs a larger validation study and continuous modification.

  18. WebVis: a hierarchical web homepage visualizer

    Science.gov (United States)

    Renteria, Jose C.; Lodha, Suresh K.

    2000-02-01

    WebVis, the Hierarchical Web Home Page Visualizer, is a tool for managing home web pages. The user can access this tool via the WWW and obtain a hierarchical visualization of one's home web pages. WebVis is a real-time interactive tool that supports many different queries on the statistics of internal files, such as size, age, and type. In addition, statistics on embedded information such as VRML files, Java applets, images and sound files can be extracted and queried. Results of these queries are visualized using color, shape and size of different nodes of the hierarchy. The visualization assists the user in a variety of tasks, such as quickly finding outdated information or locating large files. WebVis is one solution to the growing web space maintenance problem. Implementation of WebVis is realized with Perl and Java. Perl pattern matching and file handling routines are used to collect and process web space linkage information and web document information. Java utilizes the collected information to produce visualization of the web space. Java also provides WebVis with real-time interactivity while running off the WWW. Some WebVis examples of home web page visualization are presented.

  19. Statistics of Use, Accesses and Follow-up the CIEMAT Website in the 2014 Year; Estadísticas de Uso, Accesos y Seguimiento del Portal Web CIEMAT en 2014

    Energy Technology Data Exchange (ETDEWEB)

    Lomba Falcón, L.

    2015-07-01

    This report analyzes and describes the connections, context, and main features of user access to the CIEMAT website (http://www.ciemat.es/) throughout 2014, as well as their evolution over time. The results obtained using Google Analytics enable us to know which contents and which sections of the web portal need to be reoriented or modified, according to CIEMAT's strategic and functional requirements. Comparing the resulting information with that obtained in other annual periods will allow analysis of trends in access to the site and in the behavior of its visitors. Having more data and a longer time perspective on the collected records will help improve the planning and management of the official website; that is to say, it will contribute to the dissemination of CIEMAT, its specific activities and its own research lines.

  20. AcconPred: Predicting Solvent Accessibility and Contact Number Simultaneously by a Multitask Learning Framework under the Conditional Neural Fields Model

    Directory of Open Access Journals (Sweden)

    Jianzhu Ma

    2015-01-01

    Full Text Available Motivation. The solvent accessibility of protein residues is one of the driving forces of protein folding, while the contact number of protein residues limits the possibilities of protein conformations. The de novo prediction of these properties from protein sequence is important for the study of protein structure and function. Although these two properties are certainly related with each other, it is challenging to exploit this dependency for the prediction. Method. We present a method AcconPred for predicting solvent accessibility and contact number simultaneously, which is based on a shared weight multitask learning framework under the CNF (conditional neural fields) model. The multitask learning framework on a collection of related tasks provides more accurate prediction than the framework trained only on a single task. The CNF method not only models the complex relationship between the input features and the predicted labels, but also exploits the interdependency among adjacent labels. Results. Trained on 5729 monomeric soluble globular protein datasets, AcconPred could reach 0.68 three-state accuracy for solvent accessibility and 0.75 correlation for contact number. Tested on the 105 CASP11 domain datasets for solvent accessibility, AcconPred could reach 0.64 accuracy, which outperforms existing methods.

  1. AcconPred: Predicting Solvent Accessibility and Contact Number Simultaneously by a Multitask Learning Framework under the Conditional Neural Fields Model.

    Science.gov (United States)

    Ma, Jianzhu; Wang, Sheng

    2015-01-01

    The solvent accessibility of protein residues is one of the driving forces of protein folding, while the contact number of protein residues limits the possibilities of protein conformations. The de novo prediction of these properties from protein sequence is important for the study of protein structure and function. Although these two properties are certainly related with each other, it is challenging to exploit this dependency for the prediction. We present a method AcconPred for predicting solvent accessibility and contact number simultaneously, which is based on a shared weight multitask learning framework under the CNF (conditional neural fields) model. The multitask learning framework on a collection of related tasks provides more accurate prediction than the framework trained only on a single task. The CNF method not only models the complex relationship between the input features and the predicted labels, but also exploits the interdependency among adjacent labels. Trained on 5729 monomeric soluble globular protein datasets, AcconPred could reach 0.68 three-state accuracy for solvent accessibility and 0.75 correlation for contact number. Tested on the 105 CASP11 domain datasets for solvent accessibility, AcconPred could reach 0.64 accuracy, which outperforms existing methods.
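
    The shared-weight multitask idea in this abstract can be sketched minimally: one hidden representation feeds both a 3-state accessibility classifier and a contact-number regressor. All weights and input features below are random stand-ins (the real model is a trained conditional neural field over sequence profiles), so the outputs are illustrative only:

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    # Toy stand-in for a window of sequence features around one residue.
    x = rng.normal(size=20)

    # Shared hidden layer: both tasks read the same learned representation.
    W_shared = rng.normal(scale=0.3, size=(16, 20))
    h = np.tanh(W_shared @ x)

    # Task head 1: 3-state solvent accessibility
    # (buried / intermediate / exposed), via softmax over logits.
    W_acc = rng.normal(scale=0.3, size=(3, 16))
    logits = W_acc @ h
    p_acc = np.exp(logits - logits.max())
    p_acc /= p_acc.sum()

    # Task head 2: contact number, a single regression output.
    w_cn = rng.normal(scale=0.3, size=16)
    contact_number = float(w_cn @ h)

    print("P(buried, intermediate, exposed) =", np.round(p_acc, 3))
    print("predicted contact number =", round(contact_number, 2))
    ```

    Training would backpropagate both task losses into `W_shared`, which is what lets the related tasks improve each other's accuracy.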

  2. The size, morphology, site, and access score predicts critical outcomes of endoscopic mucosal resection in the colon.

    Science.gov (United States)

    Sidhu, Mayenaaz; Tate, David J; Desomer, Lobke; Brown, Gregor; Hourigan, Luke F; Lee, Eric Y T; Moss, Alan; Raftopoulos, Spiro; Singh, Rajvinder; Williams, Stephen J; Zanati, Simon; Burgess, Nicholas; Bourke, Michael J

    2018-01-25

    The SMSA (size, morphology, site, access) polyp scoring system is a method of stratifying the difficulty of polypectomy through assessment of four domains. The aim of this study was to evaluate the ability of SMSA to predict critical outcomes of endoscopic mucosal resection (EMR). We retrospectively applied SMSA to a prospectively collected multicenter database of large colonic laterally spreading lesions (LSLs) ≥ 20 mm referred for EMR. Standard inject-and-resect EMR procedures were performed. The primary end points were correlation of SMSA level with technical success, adverse events, and endoscopic recurrence. 2675 lesions in 2675 patients (52.6% male) underwent EMR. Failed single-session EMR occurred in 124 LSLs (4.6%) and was predicted by the SMSA score (P < 0.001). Intraprocedural and clinically significant postendoscopic bleeding was significantly less common for SMSA 2 LSLs (odds ratio [OR] 0.36, P < 0.001 and OR 0.23, P < 0.01) and SMSA 3 LSLs (OR 0.41, P < 0.001 and OR 0.60, P = 0.05) compared with SMSA 4 lesions. Similarly, endoscopic recurrence at first surveillance was less likely among SMSA 2 (OR 0.19, P < 0.001) and SMSA 3 (OR 0.33, P < 0.001) lesions compared with SMSA 4 lesions. This also extended to second surveillance among SMSA 4 LSLs. SMSA is a simple, readily applicable clinical score that identifies a subgroup of patients who are at increased risk of failed EMR, adverse events, and adenoma recurrence at surveillance colonoscopy. This information may be useful for improving informed consent, planning endoscopy lists, and developing quality control measures for practitioners of EMR, with potential implications for EMR benchmarking and training. © Georg Thieme Verlag KG Stuttgart · New York.
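
    The four SMSA domains combine into a point total that maps to levels 1-4. The sketch below uses point values and cut-offs as commonly cited for the SMSA scheme; treat them as assumptions to be verified against the original publication rather than the definitive scoring, and certainly not as a clinical tool:

    ```python
    # Assumed point values per domain (verify against the SMSA source paper).
    SIZE_POINTS = {"<1cm": 1, "1-1.9cm": 3, "2-2.9cm": 5, "3-3.9cm": 7, ">=4cm": 9}
    MORPHOLOGY_POINTS = {"pedunculated": 1, "sessile": 2, "flat": 3}
    SITE_POINTS = {"left": 1, "right": 2}
    ACCESS_POINTS = {"easy": 1, "difficult": 3}

    def smsa_level(size, morphology, site, access):
        """Map a lesion's four domain scores to an SMSA difficulty level."""
        total = (SIZE_POINTS[size] + MORPHOLOGY_POINTS[morphology]
                 + SITE_POINTS[site] + ACCESS_POINTS[access])
        # Assumed level cut-offs on the point total.
        if total <= 5:
            return 1
        if total <= 9:
            return 2
        if total <= 12:
            return 3
        return 4

    # A 25 mm flat, right-sided lesion with difficult access scores
    # 5 + 3 + 2 + 3 = 13 points, i.e. the highest-difficulty level.
    print(smsa_level("2-2.9cm", "flat", "right", "difficult"))
    ```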

  3. Acessibilidade dos sítios web dos governos estaduais brasileiros: uma análise quantitativa entre 1996 e 2007 Accessibility of Brazilian state government websites: a quantitative analysis between 1996 and 2007

    Directory of Open Access Journals (Sweden)

    André Pimenta Freire

    2009-04-01

    Full Text Available A utilização da web para a disponibilização de informações e serviços de órgãos governamentais para os cidadãos tem se tornado cada vez mais expressiva. Assim, a garantia de que esses conteúdos e serviços possam ser acessíveis a qualquer cidadão é imprescindível, independentemente de necessidades especiais ou de quaisquer outras barreiras. No Brasil, o Decreto-Lei nº5.296/2004 determinou que todos os órgãos governamentais deveriam adaptar seus sítios na web de acordo com critérios de acessibilidade até dezembro de 2005. Com o objetivo de verificar a evolução da acessibilidade ao longo dos anos e como foi o impacto dessa legislação, este artigo analisa a acessibilidade dos sítios dos governos estaduais brasileiros por meio de amostras coletadas entre 1996 e 2007. Foram efetuadas análises por meio de métricas, obtidas por avaliações com ferramentas automáticas. Os resultados indicam que a legislação teve pouco impacto para a melhoria real da acessibilidade dos sítios no período indicado, com uma melhora somente em 2007. Verifica-se que se faz necessário adotar políticas públicas mais efetivas para que as pessoas com necessidades especiais tenham os seus direitos para acesso a informações e aos serviços públicos na web assegurados mais amplamente.The use of the web to provide government information and services to citizens has become more and more significant. Ensuring that any citizen can access these contents and services is essential, regardless of disabilities or other barriers they may have. In Brazil, Executive Act no. 5.296/2004 ruled that all government agencies should adapt their websites according to accessibility guidelines until December 2005. In order to check the trend of accessibility over the years and the impact of such legislation, this article analyzes the accessibility of the Brazilian state government websites from 1996 to 2007. Analyses were carried out by means of metrics, obtained

  4. Engineering Adaptive Web Applications

    DEFF Research Database (Denmark)

    Dolog, Peter

    2007-01-01

    Information and services on the web are accessible for everyone. Users of the web differ in their background, culture, political and social environment, interests and so on. Ambient intelligence was envisioned as a concept for systems which are able to adapt to user actions and needs. With the growing amount of information and services, web applications become natural candidates to adopt the concepts of ambient intelligence. Such applications can deal with diverse user intentions and actions based on the user profile and can suggest the combination of information content and services which suit the user profile the most. This paper summarizes the domain engineering framework for such adaptive web applications. The framework provides guidelines to develop adaptive web applications as members of a family. It suggests how to utilize the design artifacts as knowledge which can be used…

  5. Bioprocess-Engineering Education with Web Technology

    NARCIS (Netherlands)

    Sessink, O.

    2006-01-01

    Development of learning material that is distributed through and accessible via the World Wide Web. Various options from web technology are exploited to improve the quality and efficiency of learning material.

  6. Holistic approaches to e-learning accessibility

    OpenAIRE

    Phipps, Lawrie; Kelly, Brian

    2006-01-01

    The importance of accessibility to digital e-learning resources is widely acknowledged. The World Wide Web Consortium Web Accessibility Initiative has played a leading role in promoting the importance of accessibility and developing guidelines that can help when developing accessible web resources. The accessibility of e-learning resources provides additional challenges. While it is important to consider the technical and resource related aspects of e-learning when designing and developing re...

  7. Web-based access to near real-time and archived high-density time-series data: cyber infrastructure challenges & developments in the open-source Waveform Server

    Science.gov (United States)

    Reyes, J. C.; Vernon, F. L.; Newman, R. L.; Steidl, J. H.

    2010-12-01

    The Waveform Server is an interactive web-based interface to multi-station, multi-sensor and multi-channel high-density time-series data stored in Center for Seismic Studies (CSS) 3.0 schema relational databases (Newman et al., 2009). In the last twelve months, based on expanded specifications and current user feedback, both the server-side infrastructure and client-side interface have been extensively rewritten. The Python Twisted server-side code base has been fundamentally modified to present waveform data stored in cluster-based databases using a multi-threaded architecture, in addition to supporting the pre-existing single-database model. This allows interactive web-based access to high-density (broadband @ 40 Hz to strong motion @ 200 Hz) waveform data that can span multiple years, the common lifetime of broadband seismic networks. The client-side interface expands on its use of simple JSON-based AJAX queries to incorporate a variety of User Interface (UI) improvements, including standardized calendars for defining time ranges, on-the-fly data calibration to display SI-unit data, and increased rendering speed. This presentation will outline the various cyber infrastructure challenges we have faced while developing this application, the use cases currently in existence, and the limitations of web-based application development.

  8. Head First Web Design

    CERN Document Server

    Watrall, Ethan

    2008-01-01

    Want to know how to make your pages look beautiful, communicate your message effectively, guide visitors through your website with ease, and get everything approved by the accessibility and usability police at the same time? Head First Web Design is your ticket to mastering all of these complex topics, and understanding what's really going on in the world of web design. Whether you're building a personal blog or a corporate website, there's a lot more to web design than div's and CSS selectors, but what do you really need to know? With this book, you'll learn the secrets of designing effecti

  9. Chemistry WebBook

    Science.gov (United States)

    SRD 69 NIST Chemistry WebBook (Web, free access)   The NIST Chemistry WebBook contains: Thermochemical data for over 7000 organic and small inorganic compounds; thermochemistry data for over 8000 reactions; IR spectra for over 16,000 compounds; mass spectra for over 33,000 compounds; UV/Vis spectra for over 1600 compounds; electronic and vibrational spectra for over 5000 compounds; constants of diatomic molecules(spectroscopic data) for over 600 compounds; ion energetics data for over 16,000 compounds; thermophysical property data for 74 fluids.

  10. Oracle Application Express 5 for beginners a practical guide to rapidly develop data-centric web applications accessible from desktop, laptops, tablets, and smartphones

    CERN Document Server

    2015-01-01

    Oracle Application Express has taken another big leap towards becoming a true next-generation RAD tool. It has entered its fifth version to build robust web applications. One of the most significant features in this release is a new page designer that helps developers create and edit page elements within a single page design view, which enormously maximizes developer productivity. Without involving the audience too much in the boring bits, this full-color edition adopts an inspiring approach that helps beginners practically evaluate almost every feature of Oracle Application Express, including all features new to version 5. The most convincing way to explore a technology is to apply it to a real-world problem. In this book, you'll develop a sales application that demonstrates almost every feature to practically expose the anatomy of Oracle Application Express 5. The short list below presents some main topics of Oracle APEX covered in this book: Rapid web application development for desktops, la...

  11. Implementation of a scalable, web-based, automated clinical decision support risk-prediction tool for chronic kidney disease using C-CDA and application programming interfaces.

    Science.gov (United States)

    Samal, Lipika; D'Amore, John D; Bates, David W; Wright, Adam

    2017-11-01

    Clinical decision support tools for risk prediction are readily available, but typically require workflow interruptions and manual data entry so are rarely used. Due to new data interoperability standards for electronic health records (EHRs), other options are available. As a clinical case study, we sought to build a scalable, web-based system that would automate calculation of kidney failure risk and display clinical decision support to users in primary care practices. We developed a single-page application, web server, database, and application programming interface to calculate and display kidney failure risk. Data were extracted from the EHR using the Consolidated Clinical Document Architecture interoperability standard for Continuity of Care Documents (CCDs). EHR users were presented with a noninterruptive alert on the patient's summary screen and a hyperlink to details and recommendations provided through a web application. Clinic schedules and CCDs were retrieved using existing application programming interfaces to the EHR, and we provided a clinical decision support hyperlink to the EHR as a service. We debugged a series of terminology and technical issues. The application was validated with data from 255 patients and subsequently deployed to 10 primary care clinics where, over the course of 1 year, 569 533 CCD documents were processed. We validated the use of interoperable documents and open-source components to develop a low-cost tool for automated clinical decision support. Since Consolidated Clinical Document Architecture-based data extraction extends to any certified EHR, this demonstrates a successful modular approach to clinical decision support. © The Author 2017. Published by Oxford University Press on behalf of the American Medical Informatics Association.
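
    A minimal sketch of the pipeline this abstract describes: parse patient variables out of a document and apply a risk equation. The element names and logistic coefficients below are invented for illustration; they are neither the CDA schema nor the validated kidney failure risk equation the authors deployed:

    ```python
    import math
    import xml.etree.ElementTree as ET

    # Minimal mock of a patient-summary document fragment (real CCDs are
    # far richer; these element names are hypothetical, not the CDA schema).
    ccd = """<patient><age>65</age><sex>M</sex>
    <egfr>38</egfr><acr>90</acr></patient>"""

    root = ET.fromstring(ccd)
    age = float(root.findtext("age"))
    egfr = float(root.findtext("egfr"))
    acr = float(root.findtext("acr"))

    # Hypothetical logistic model for kidney-failure risk; the deployed
    # tool uses a validated equation, and these coefficients are made up.
    z = -4.0 - 0.02 * age - 0.05 * (egfr - 40) + 0.9 * math.log(acr / 30)
    risk = 1 / (1 + math.exp(-z))
    print(f"estimated kidney failure risk: {risk:.1%}")
    ```

    In the deployed system this calculation sits behind a web service, with the extracted variables arriving via CCD parsing rather than an inline string, and the result surfaced as a noninterruptive EHR alert.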

  12. Access/AML -

    Data.gov (United States)

    Department of Transportation — The AccessAML is a web-based internet single application designed to reduce the vulnerability associated with several accounts assigned to a single user. This is a...

  13. Evaluation of a web based informatics system with data mining tools for predicting outcomes with quantitative imaging features in stroke rehabilitation clinical trials

    Science.gov (United States)

    Wang, Ximing; Kim, Bokkyu; Park, Ji Hoon; Wang, Erik; Forsyth, Sydney; Lim, Cody; Ravi, Ragini; Karibyan, Sarkis; Sanchez, Alexander; Liu, Brent

    2017-03-01

    Quantitative imaging biomarkers are used widely in clinical trials for tracking and evaluation of medical interventions. Previously, we presented a web-based informatics system utilizing quantitative imaging features for predicting outcomes in stroke rehabilitation clinical trials. The system integrates imaging feature extraction tools and a web-based statistical analysis tool. The tools include a generalized linear mixed model (GLMM) that can investigate potential significance and correlation based on features extracted from clinical data and quantitative biomarkers. The imaging feature extraction tools allow the user to collect imaging features, and the GLMM module allows the user to select clinical data and imaging features, such as stroke lesion characteristics, from the database as regressors and regressands. This paper discusses the application scenario and evaluation results of the system in a stroke rehabilitation clinical trial. The system was utilized to manage clinical data and extract imaging biomarkers including stroke lesion volume, location, and ventricle/brain ratio. The GLMM module was validated and the efficiency of data analysis was also evaluated.

  14. ANGDelMut – a web-based tool for predicting and analyzing functional loss mechanisms of amyotrophic lateral sclerosis-associated angiogenin mutations [v2; ref status: indexed, http://f1000r.es/2mc

    Directory of Open Access Journals (Sweden)

    Aditya K Padhi

    2013-12-01

    Full Text Available ANGDelMut is a web-based tool for predicting the functional consequences of missense mutations in the angiogenin (ANG) protein, which is associated with amyotrophic lateral sclerosis (ALS). Missense mutations in ANG result in loss of either ribonucleolytic activity or nuclear translocation activity or both of these functions, and in turn cause ALS. However, no web-based tools are available to predict whether a newly identified ANG mutation will possibly lead to ALS. More importantly, no web-implemented method is currently available to predict the mechanisms of loss-of-function(s) of ANG mutants. In light of this observation, we developed the ANGDelMut web-based tool, which predicts whether an ANG mutation is deleterious or benign. The user selects certain attributes from the input panel, which serve as a query to infer whether a mutant will exhibit loss of ribonucleolytic activity or nuclear translocation activity or whether the overall stability will be affected. The output states whether the mutation is deleterious or benign, and if it is deleterious, gives the possible mechanism(s) of loss-of-function. This web-based tool, freely available at http://bioschool.iitd.ernet.in/DelMut/, is the first of its kind to provide a platform for researchers and clinicians to infer the functional consequences of ANG mutations and correlate their possible association with ALS ahead of experimental findings.

  15. ANGDelMut – a web-based tool for predicting and analyzing functional loss mechanisms of amyotrophic lateral sclerosis-associated angiogenin mutations [v3; ref status: indexed, http://f1000r.es/2yt

    Directory of Open Access Journals (Sweden)

    Aditya K Padhi

    2014-02-01

    Full Text Available ANGDelMut is a web-based tool for predicting the functional consequences of missense mutations in the angiogenin (ANG) protein, which is associated with amyotrophic lateral sclerosis (ALS). Missense mutations in ANG result in loss of either ribonucleolytic activity or nuclear translocation activity or both of these functions, and in turn cause ALS. However, no web-based tools are available to predict whether a newly identified ANG mutation will possibly lead to ALS. More importantly, no web-implemented method is currently available to predict the mechanisms of loss-of-function(s) of ANG mutants. In light of this observation, we developed the ANGDelMut web-based tool, which predicts whether an ANG mutation is deleterious or benign. The user selects certain attributes from the input panel, which serve as a query to infer whether a mutant will exhibit loss of ribonucleolytic activity or nuclear translocation activity or whether the overall stability will be affected. The output states whether the mutation is deleterious or benign, and if it is deleterious, gives the possible mechanism(s) of loss-of-function. This web-based tool, freely available at http://bioschool.iitd.ernet.in/DelMut/, is the first of its kind to provide a platform for researchers and clinicians to infer the functional consequences of ANG mutations and correlate their possible association with ALS ahead of experimental findings.

  16. A novel design of hidden web crawler using ontology

    OpenAIRE

    Manvi; Bhatia, Komal Kumar; Dixit, Ashutosh

    2015-01-01

    Deep Web is content hidden behind HTML forms. Since it represents a large portion of the structured, unstructured and dynamic data on the Web, accessing Deep-Web content has been a long challenge for the database community. This paper describes a crawler for accessing Deep-Web using Ontologies. Performance evaluation of the proposed work showed that this new approach has promising results.

  17. Delivering Electronic Resources with Web OPACs and Other Web-based Tools: Needs of Reference Librarians.

    Science.gov (United States)

    Bordeianu, Sever; Carter, Christina E.; Dennis, Nancy K.

    2000-01-01

    Describes Web-based online public access catalogs (Web OPACs) and other Web-based tools as gateway methods for providing access to library collections. Addresses solutions for overcoming barriers to information, such as through the implementation of proxy servers and other authentication tools for remote users. (Contains 18 references.)…

  18. A Web-based nomogram predicting para-aortic nodal metastasis in incompletely staged patients with endometrial cancer: a Korean Multicenter Study.

    Science.gov (United States)

    Kang, Sokbom; Lee, Jong-Min; Lee, Jae-Kwan; Kim, Jae-Weon; Cho, Chi-Heum; Kim, Seok-Mo; Park, Sang-Yoon; Park, Chan-Yong; Kim, Ki-Tae

    2014-03-01

    The purpose of this study is to develop a Web-based nomogram for predicting the individualized risk of para-aortic nodal metastasis in incompletely staged patients with endometrial cancer. From 8 institutions, the medical records of 397 patients who underwent pelvic and para-aortic lymphadenectomy as a surgical staging procedure were retrospectively reviewed. A multivariate logistic regression model was created and internally validated by rigorous bootstrap resampling methods. Finally, the model was transformed into a user-friendly Web-based nomogram (http://www.kgog.org/nomogram/empa001.html). The rate of para-aortic nodal metastasis was 14.4% (57/397 patients). Using a stepwise variable selection, 4 variables including deep myometrial invasion, non-endometrioid subtype, lymphovascular space invasion, and log-transformed CA-125 levels were finally adopted. After 1000 repetitions of bootstrapping, all of these 4 variables retained a significant association with para-aortic nodal metastasis in the multivariate analysis-deep myometrial invasion (P = 0.001), non-endometrioid histologic subtype (P = 0.034), lymphovascular space invasion (P = 0.003), and log-transformed serum CA-125 levels (P = 0.004). The model showed good discrimination (C statistics = 0.87; 95% confidence interval, 0.82-0.92) and accurate calibration (Hosmer-Lemeshow P = 0.74). This nomogram showed good performance in predicting para-aortic metastasis in patients with endometrial cancer. The tool may be useful in determining the extent of lymphadenectomy after incomplete surgery.
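
    A nomogram of this kind is typically backed by a logistic regression over the selected predictors. The sketch below uses hypothetical coefficients (the published model's values differ) to show how the four variables, including the log-transformed CA-125 level, would combine into an individualized risk:

    ```python
    import math

    # Hypothetical coefficients for the four predictors named in the
    # abstract; the published nomogram's actual values are different.
    COEF = {"deep_invasion": 1.5, "non_endometrioid": 1.1,
            "lvsi": 1.3, "log_ca125_per_unit": 0.6}
    INTERCEPT = -6.5

    def predict_risk(deep, non_endo, lvsi, ca125):
        """Probability of para-aortic nodal metastasis (illustrative)."""
        z = (INTERCEPT
             + COEF["deep_invasion"] * deep
             + COEF["non_endometrioid"] * non_endo
             + COEF["lvsi"] * lvsi
             + COEF["log_ca125_per_unit"] * math.log(ca125))
        return 1 / (1 + math.exp(-z))

    # Patient with deep myometrial invasion, endometrioid histology,
    # LVSI present, and serum CA-125 of 120 U/mL.
    risk = predict_risk(1, 0, 1, 120)
    print(f"predicted para-aortic metastasis risk: {risk:.1%}")
    ```

    The Web front end of such a tool simply collects the four inputs and evaluates this logistic function server-side.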

  19. The utility and limitations of current web-available algorithms to predict peptides recognized by CD4 T cells in response to pathogen infection

    Science.gov (United States)

    Chaves, Francisco A.; Lee, Alvin H.; Nayak, Jennifer; Richards, Katherine A.; Sant, Andrea J.

    2012-01-01

    The ability to track CD4 T cells elicited in response to pathogen infection or vaccination is critical because of the role these cells play in protective immunity. Coupled with advances in genome sequencing of pathogenic organisms, there is considerable appeal in implementing computer-based algorithms to predict peptides that bind to class II molecules, forming the complex recognized by CD4 T cells. Despite recent progress in this area, there is a paucity of data regarding their success in identifying actual pathogen-derived epitopes. In this study, we sought to rigorously evaluate the performance of multiple web-available algorithms by comparing their predictions with our results from purely empirical epitope-discovery methods in influenza, which utilized overlapping peptides and cytokine Elispots, for three independent class II molecules. We analyzed the data in different ways, trying to anticipate how an investigator might use these computational tools for epitope discovery. We conclude that currently available algorithms can indeed facilitate epitope discovery, but all shared a high degree of false positive and false negative predictions, so efficiencies were low. We also found dramatic disparities among algorithms and between predicted IC50 values and true dissociation rates of peptide:MHC class II complexes. We suggest that improved success of predictive algorithms will depend less on changes in computational methods or increased data sets and more on changes in the parameters used to “train” the algorithms, factoring in elements of the T cell repertoire and peptide acquisition by class II molecules. PMID:22467652

  20. Development of Web tools to predict axillary lymph node metastasis and pathological response to neoadjuvant chemotherapy in breast cancer patients.

    Science.gov (United States)

    Sugimoto, Masahiro; Takada, Masahiro; Toi, Masakazu

    2014-12-09

    Nomograms are a standard computational tool to predict the likelihood of an outcome from multiple available patient features. We have developed a more powerful data mining methodology to predict axillary lymph node (AxLN) metastasis and response to neoadjuvant chemotherapy (NAC) in primary breast cancer patients, and websites to use these tools. The tools calculate the probability of AxLN metastasis (AxLN model) and of pathological complete response to NAC (NAC model). As the calculation algorithm, we employed a decision tree-based prediction model known as the alternating decision tree (ADTree), an extension of if-then type decision trees. An ensemble technique was used to combine multiple ADTree predictions, resulting in higher generalization ability and robustness against missing values. The AxLN model was developed with training datasets (n=148) and test datasets (n=143), and validated using an independent cohort (n=174), yielding an area under the receiver operating characteristic curve (AUC) of 0.768. The NAC model was developed and validated with n=150 and n=173 datasets from a randomized controlled trial, yielding an AUC of 0.787. The AxLN and NAC models require users to input up to 17 and 16 variables, respectively, including pathological features such as human epidermal growth factor receptor 2 (HER2) status, and imaging findings. Each input variable has an "unknown" option to facilitate prediction for cases with missing values. The websites developed facilitate the use of these tools and serve as a database for accumulating new datasets.
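    ADTree-style models sum rule scores, which is what makes an "unknown" option workable: a rule whose input is missing simply contributes nothing. The toy score-based sketch below illustrates that idea only; the features, thresholds and scores are invented and are not the published AxLN or NAC models.

```python
# Toy score-summing model in the spirit of ADTree: each rule adds a
# score when its condition can be evaluated, and is skipped when the
# input is "unknown" (modeled here as an absent key / None).
RULES = [
    # (feature, threshold, score_if_ge, score_if_lt)  -- all hypothetical
    ("tumor_size_mm", 20, +0.8, -0.3),
    ("her2_positive", 1, +0.4, -0.1),
    ("node_count_on_imaging", 1, +0.9, -0.5),
]

BASE_SCORE = -0.2  # hypothetical prior score

def predict_score(patient):
    score = BASE_SCORE
    for feature, threshold, ge, lt in RULES:
        value = patient.get(feature)   # None models an "unknown" input
        if value is None:
            continue                   # missing values simply skip the rule
        score += ge if value >= threshold else lt
    return score

def predict_positive(patient):
    return predict_score(patient) > 0.0

complete = {"tumor_size_mm": 25, "her2_positive": 1, "node_count_on_imaging": 2}
partial = {"tumor_size_mm": 25}        # two variables left "unknown"
print(predict_score(complete), predict_score(partial))
```

    A prediction remains possible for the partial record; it is simply based on fewer rule contributions, which is the robustness to missing values the abstract describes.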

  1. Arab Libraries’ Web-based OPACs: An evaluative study in the light of IFLA’s Guidelines For Online Public Access Catalogue (OPAC) Displays

    Directory of Open Access Journals (Sweden)

    Sherif Kamel Shaheen

    2005-03-01

    Full Text Available The research aims at evaluating Arab libraries’ Web-based catalogues in the light of the principles and recommendations published in IFLA’s Guidelines For OPAC Displays (September 30, 2003 draft for worldwide review). The 38 recommendations were categorized under three main titles: User Needs (12 recommendations), Content and Arrangement (25 recommendations), and Standardization (one recommendation). That number increased to 88 elements when the recommendations were formulated as evaluative criteria and included in the study’s checklist.

  2. Designing an Internationally Accessible Web-Based Questionnaire to Discover Risk Factors for Amyotrophic Lateral Sclerosis: A Case-Control Study

    Science.gov (United States)

    Parkin Kullmann, Jane Alana; Hayes, Susan; Wang, Min-Xia

    2015-01-01

    Background Amyotrophic lateral sclerosis (ALS) is a progressive neurodegenerative disease with a typical survival of three to five years. Epidemiological studies using paper-based questionnaires in individual countries or continents have failed to find widely accepted risk factors for the disease. The advantages of online versus paper-based questionnaires have been extensively reviewed, but few online epidemiological studies into human neurodegenerative diseases have so far been undertaken. Objective To design a Web-based questionnaire to identify environmental risk factors for ALS and enable international comparisons of these risk factors. Methods A Web-based epidemiological questionnaire for ALS has been developed based on experience gained from administering a previous continent-wide paper-based questionnaire for this disease. New and modified questions have been added from our previous paper-based questionnaire, from literature searches, and from validated ALS questionnaires supplied by other investigators. New criteria to allow the separation of familial and sporadic ALS cases have been included. The questionnaire addresses many risk factors that have already been proposed for ALS, as well as a number that have not yet been rigorously examined. To encourage participation, responses are collected anonymously and no personally identifiable information is requested. The survey is being translated into a number of languages which will allow many people around the world to read and answer it in their own language. Results After the questionnaire had been online for 4 months, it had 379 respondents compared to only 46 respondents for the same initial period using a paper-based questionnaire. The average age of the first 379 web questionnaire respondents was 54 years compared to the average age of 60 years for the first 379 paper questionnaire respondents. The questionnaire is soon to be promoted in a number of countries through ALS associations and disease

  3. The design and implementation of web mining in web sites security

    Science.gov (United States)

    Li, Jian; Zhang, Guo-Yin; Gu, Guo-Chang; Li, Jian-Li

    2003-06-01

    Backdoors or information leaks in Web servers can be detected by applying Web mining techniques to abnormal Web log and Web application log data, enhancing Web server security and avoiding the damage of illegal access. First, a system for discovering the patterns of information leakage in CGI scripts from Web log data is proposed. Second, those patterns are provided to system administrators so that they can modify their code and enhance Web site security. Two aspects are described: combining the Web application log with the Web log to extract more information, so that Web data mining can discover information that firewalls and intrusion detection systems cannot find; and an operation module for Web sites to enhance security. In the cluster server session, a density-based clustering technique is used to reduce resource cost and obtain better efficiency.

  4. A survey on web modeling approaches for ubiquitous web applications

    NARCIS (Netherlands)

    Schwinger, W.; Retschitzegger, W.; Schauerhuber, A.; Kappel, G.; Wimmer, M.; Pröll, B.; Cachero Castro, C.; Casteleyn, S.; De Troyer, O.; Fraternali, P.; Garrigos, I.; Garzotto, F.; Ginige, A.; Houben, G.J.P.M.; Koch, N.; Moreno, N.; Pastor, O.; Paolini, P.; Pelechano Ferragud, V.; Rossi, G.; Schwabe, D.; Tisi, M.; Vallecillo, A.; Sluijs, van der K.A.M.; Zhang, G.

    2008-01-01

    Purpose – Ubiquitous web applications (UWA) are a new type of web applications which are accessed in various contexts, i.e. through different devices, by users with various interests, at anytime from anyplace around the globe. For such full-fledged, complex software systems, a methodologically sound

  5. From intermediation to disintermediation and apomediation: new models for consumers to access and assess the credibility of health information in the age of Web2.0.

    Science.gov (United States)

    Eysenbach, Gunther

    2007-01-01

    This theoretical paper discusses the model that, as a result of the social process of disintermediation enabled by digital media, traditional intermediaries are replaced by what this author calls apomediaries: tools and peers standing by to guide consumers to trustworthy information, or adding credibility to information. For apomediation to be an attractive and successful model for consumers, the recipient has to reach a certain degree of maturity and autonomy. Different degrees of autonomy may explain differences in information seeking and credibility appraisal behaviours. It is hypothesized that in an apomediated environment, tools, influential peers and opinion leaders are the primary conveyors of trust and credibility. In this environment, apomediary credibility may become equally or more important than source credibility or even message credibility. It is suggested to use tools of network analysis to study the dynamics of apomediary credibility in a networked digital world. There are practical implications of the apomediation model for developers of consumer health websites which aspire to come across as "credible": consumers need and want to be able to be co-creators of content, not merely an audience that is broadcast to. Web2.0 technology enables such sites. Engaging and credible Web sites are about building community, and communities are built upon personal and social needs.

  6. Oracle application express 5.1 basics and beyond a practical guide to rapidly develop data-centric web applications accessible from desktop, laptops, tablets, and smartphones

    CERN Document Server

    2017-01-01

    You will find stuff about workspace, application, page, and so on in every APEX book. But this book is unique because the information it contains is not available anywhere else! Unlike other books, it adopts a stimulating approach to reveal almost every feature necessary for the beginners of Oracle APEX and also takes them beyond the basics. As a technology enthusiast I write on a variety of new technologies, but writing books on Oracle Application Express is my passion. The blood pumping comments I get from my readers on Amazon (and in my inbox) are the main forces that motivate me to write a book whenever a new version of Oracle APEX is launched. This is my fifth book on Oracle APEX (and the best so far) written after discovering the latest 5.1 version. As usual, I’m sharing my personal learning experience through this book to expose this unique rapid web application development platform. In Oracle Application Express you can build robust web applications. The new version is launched with some more prol...

  7. Patterns of access to the Guías de Práctica Clínica (Clinical Practice Guidelines) Web Portal in Colombia

    Directory of Open Access Journals (Sweden)

    Fernando Suárez-Obando

    2016-10-01

    Full Text Available Introduction: Colombian clinical practice guidelines for health care are published in the Web Portal of the Ministry of Health and Social Protection. Objective: To analyze the traffic of the clinical guidelines portal through web consultation metrics. Materials and methods: The website traffic analysis was performed over a period of 20 months using Google Analytics and Megalytic. Results: 190 115 users logged in; 125 475 of them (≈66%) were first-time visitors, while 63 118 (≈33%) were returning users. 126 994 users visited 608 745 pages, with an average of 3.2 pages per session, a query time of 3.45 minutes per visit and an average bounce rate of 46.74%. 40% of users interacted with at least three pages and 40% left the site without interacting with a second page. The sessions originated in Colombia, Mexico, Peru and Spain; Colombia accounted for 169 666 visits, and Bogotá D.C. recorded the highest number of visits (32%), followed by Medellín (12.3%), Cali (8.3%), Barranquilla (4.1%) and Bucaramanga (3.3%), for a total of 60% of the traffic. The most visited guides were those on management of pregnancy and urogenital tract infection. Conclusions: The portal had acceptable traffic during its first 20 months of operation. An innovative portal that improves the dissemination of the guides must remain active.

  8. Web application to access U.S. Army Corps of Engineers Civil Works and Restoration Projects information for the Rio Grande Basin, southern Colorado, New Mexico, and Texas

    Science.gov (United States)

    Archuleta, Christy-Ann M.; Eames, Deanna R.

    2009-01-01

    The Rio Grande Civil Works and Restoration Projects Web Application, developed by the U.S. Geological Survey in cooperation with the U.S. Army Corps of Engineers (USACE) Albuquerque District, is designed to provide publicly available information through the Internet about civil works and restoration projects in the Rio Grande Basin. Since 1942, USACE Albuquerque District responsibilities have included building facilities for the U.S. Army and U.S. Air Force, providing flood protection, supplying water for power and public recreation, participating in fire remediation, protecting and restoring wetlands and other natural resources, and supporting other government agencies with engineering, contracting, and project management services. In the process of conducting this vast array of engineering work, the need arose for easily tracking the locations of and providing information about projects to stakeholders and the public. This fact sheet introduces a Web application developed to enable users to visualize locations and search for information about USACE (and some other Federal, State, and local) projects in the Rio Grande Basin in southern Colorado, New Mexico, and Texas.

  9. WebVR: an interactive web browser for virtual environments

    Science.gov (United States)

    Barsoum, Emad; Kuester, Falko

    2005-03-01

    The pervasive nature of web-based content has lead to the development of applications and user interfaces that port between a broad range of operating systems and databases, while providing intuitive access to static and time-varying information. However, the integration of this vast resource into virtual environments has remained elusive. In this paper we present an implementation of a 3D Web Browser (WebVR) that enables the user to search the internet for arbitrary information and to seamlessly augment this information into virtual environments. WebVR provides access to the standard data input and query mechanisms offered by conventional web browsers, with the difference that it generates active texture-skins of the web contents that can be mapped onto arbitrary surfaces within the environment. Once mapped, the corresponding texture functions as a fully integrated web-browser that will respond to traditional events such as the selection of links or text input. As a result, any surface within the environment can be turned into a web-enabled resource that provides access to user-definable data. In order to leverage from the continuous advancement of browser technology and to support both static as well as streamed content, WebVR uses ActiveX controls to extract the desired texture skin from industry strength browsers, providing a unique mechanism for data fusion and extensibility.

  10. Network of Research Infrastructures for European Seismology (NERIES)—Web Portal Developments for Interactive Access to Earthquake Data on a European Scale

    OpenAIRE

    A. Spinuso; L. Trani; S. Rives; P. Thomy; F. Euchner; Danijel Schorlemmer; Joachim Saul; Andres Heinloo; R. Bossu; T. van Eck

    2009-01-01

    The Network of Research Infrastructures for European Seismology (NERIES) is a European Commission (EC) project whose focus is networking together seismological observatories and research institutes into one integrated European infrastructure that provides access to data and data products for research. Seismological institutes and organizations in European and Mediterranean countries maintain large, geographically distributed data archives; therefore, this scenario suggested a design approach bas...

  11. Web Extensible Display Manager

    Energy Technology Data Exchange (ETDEWEB)

    Slominski, Ryan [Thomas Jefferson National Accelerator Facility (TJNAF), Newport News, VA (United States); Larrieu, Theodore L. [Thomas Jefferson National Accelerator Facility (TJNAF), Newport News, VA (United States)

    2018-02-01

    Jefferson Lab's Web Extensible Display Manager (WEDM) allows staff to access EDM control system screens from a web browser in remote offices and from mobile devices. Native browser technologies are leveraged to avoid installing and managing software on remote clients, such as browser plugins, tunnel applications, or an EDM environment. Since standard network ports are used, firewall exceptions are minimized. To avoid security concerns from remote users modifying a control system, WEDM exposes read-only access, and basic web authentication can be used to further restrict access. Updates of monitored EPICS channels are delivered via a WebSocket using a web gateway. The software translates EDM description files (denoted with the edl suffix) to HTML with Scalable Vector Graphics (SVG), following EDM's edl file vector drawing rules to create faithful screen renderings. The WEDM server parses edl files and creates the HTML equivalent in real time, allowing existing screens to work without modification. Alternatively, the familiar drag-and-drop EDM screen creation tool can be used to create optimized screens sized specifically for smartphones and then rendered by WEDM.

  12. The World Wide Web Revisited

    Science.gov (United States)

    Owston, Ron

    2007-01-01

    Nearly a decade ago the author wrote, in Educational Researcher, one of the first widely cited academic articles about the educational role of the web. He argued that educators must be able to demonstrate that the web (1) can increase access to learning, (2) must not result in higher costs for learning, and (3) can lead to improved learning. These…

  13. Digging Deeper: The Deep Web.

    Science.gov (United States)

    Turner, Laura

    2001-01-01

    Focuses on the Deep Web, defined as Web content in searchable databases of the type that can be found only by direct query. Discusses the problems of indexing; inability to find information not indexed in the search engine's database; and metasearch engines. Describes 10 sites created to access online databases or directly search them. Lists ways…

  14. Professional Access 2013 programming

    CERN Document Server

    Hennig, Teresa; Hepworth, George; Yudovich, Dagi (Doug)

    2013-01-01

    Authoritative and comprehensive coverage for building Access 2013 Solutions Access, the most popular database system in the world, just opened a new frontier in the Cloud. Access 2013 provides significant new features for building robust line-of-business solutions for web, client and integrated environments.  This book was written by a team of Microsoft Access MVPs, with consulting and editing by Access experts, MVPs and members of the Microsoft Access team. It gives you the information and examples to expand your areas of expertise and immediately start to develop and upgrade projects. Exp

  15. A new general dynamic model predicting radionuclide concentrations and fluxes in coastal areas from readily accessible driving variables

    International Nuclear Information System (INIS)

    Haakanson, Lars

    2004-01-01

    This paper presents a general, process-based dynamic model for coastal areas for radionuclides (metals, organics and nutrients) from both single pulse fallout and continuous deposition. The model gives radionuclide concentrations in water (total, dissolved and particulate phases and concentrations in sediments and fish) for entire defined coastal areas. The model gives monthly variations. It accounts for inflow from tributaries, direct fallout to the coastal area, internal fluxes (sedimentation, resuspension, diffusion, burial, mixing and biouptake and retention in fish) and fluxes to and from the sea outside the defined coastal area and/or adjacent coastal areas. The fluxes of water and substances between the sea and the coastal area are differentiated into three categories of coast types: (i) areas where the water exchange is regulated by tidal effects; (ii) open coastal areas where the water exchange is regulated by coastal currents; and (iii) semi-enclosed archipelago coasts. The coastal model gives the fluxes to and from the following four abiotic compartments: surface water, deep water, ET areas (i.e., areas where fine sediment erosion and transport processes dominate the bottom dynamic conditions and resuspension appears) and A-areas (i.e., areas of continuous fine sediment accumulation). Criteria to define the boundaries for the given coastal area towards the sea, and to define whether a coastal area is open or closed are given in operational terms. The model is simple to apply since all driving variables may be readily accessed from maps and standard monitoring programs. The driving variables are: latitude, catchment area, mean annual precipitation, fallout and month of fallout and parameters expressing coastal size and form as determined from, e.g., digitized bathymetric maps using a GIS program. 
Selected results: the predictions of radionuclide concentrations in water and fish largely depend on two factors, the concentration in the sea outside the given

  16. Electronic doors to education: study of high school website accessibility in Iowa.

    Science.gov (United States)

    Klein, David; Myhill, William; Hansen, Linda; Asby, Gary; Michaelson, Susan; Blanck, Peter

    2003-01-01

    The Americans with Disabilities Act (ADA), and Sections 504 and 508 of the Rehabilitation Act, prohibit discrimination against people with disabilities in all aspects of daily life, including education, work, and access to places of public accommodations. Increasingly, these antidiscrimination laws are used by persons with disabilities to ensure equal access to e-commerce, and to private and public Internet websites. To help assess the impact of the anti-discrimination mandate for educational communities, this study examined 157 website home pages of Iowa public high schools (52% of high schools in Iowa) in terms of their electronic accessibility for persons with disabilities. We predicted that accessibility problems would limit students and others in obtaining information from the web pages as well as limiting ability to navigate to other web pages. Findings show that although many web pages examined included information in accessible formats, none of the home pages met World Wide Web Consortium (W3C) standards for accessibility. The most frequent accessibility problem was lack of alternative text (ALT tags) for graphics. Technical sophistication built into pages was found to reduce accessibility. Implications are discussed for schools and educational institutions, and for laws, policies, and procedures on website accessibility. Copyright 2003 John Wiley & Sons, Ltd.
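    The most frequent failure the study found, images without ALT text, is straightforward to detect mechanically. A minimal sketch using Python's standard html.parser (the page markup below is invented for illustration):

```python
from html.parser import HTMLParser

class AltTextChecker(HTMLParser):
    """Collects <img> tags lacking a non-empty alt attribute.

    Note: alt="" is actually valid for purely decorative images under
    WCAG; this simple check flags empty alt text as missing too.
    """
    def __init__(self):
        super().__init__()
        self.missing_alt = []

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            attrs = dict(attrs)
            if not attrs.get("alt"):
                self.missing_alt.append(attrs.get("src", "(no src)"))

page = ('<html><body>'
        '<img src="logo.gif">'
        '<img src="map.png" alt="Campus map">'
        '</body></html>')
checker = AltTextChecker()
checker.feed(page)
print(checker.missing_alt)  # images failing the check
```

    A crawler built around such a checker could score each home page the way the study did, flagging every graphic a screen reader cannot describe.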

  17. Persistence and availability of Web services in computational biology.

    Science.gov (United States)

    Schultheiss, Sebastian J; Münch, Marc-Christian; Andreeva, Gergana D; Rätsch, Gunnar

    2011-01-01

    We have conducted a study on the long-term availability of bioinformatics Web services: an observation of 927 Web services published in the annual Nucleic Acids Research Web Server Issues between 2003 and 2009. We found that 72% of Web sites are still available at the published addresses; only 9% of services are completely unavailable. Older addresses often redirect to new pages. We checked the functionality of all available services: for 33%, we could not test functionality because there was no example data or a related problem; 13% were truly no longer working as expected; we could positively confirm functionality only for 45% of all services. Additionally, we conducted a survey among 872 Web Server Issue corresponding authors; 274 replied. 78% of all respondents indicated their services had been developed solely by students and researchers without a permanent position. Consequently, these services are in danger of falling into disrepair after the original developers move to another institution, and indeed, for 24% of services, there is no plan for maintenance, according to the respondents. We introduce a Web service quality scoring system that correlates with the number of citations: services with a high score are cited 1.8 times more often than low-scoring services. We have identified key characteristics that are predictive of a service's survival, providing reviewers, editors, and Web service developers with the means to assess or improve Web services. A Web service conforming to these criteria receives more citations and provides more reliable service for its users. The most effective way of ensuring continued access to a service is a persistent Web address, offered either by the publishing journal or created on the authors' own initiative, for example at http://bioweb.me. The community would benefit the most from a policy requiring any source code needed to reproduce results to be deposited in a public repository.
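    The study's availability categories (reachable at the published address, redirected to a new page, unavailable) can be reproduced with a small probe. This is a sketch of the general approach, not the authors' survey code:

```python
import urllib.request
import urllib.error

def classify(status, requested_url, final_url):
    """Classify a probe result into the survey's three categories:
    available at the published address, redirected, or unavailable."""
    if status != 200:
        return "unavailable"
    if final_url.rstrip("/") != requested_url.rstrip("/"):
        return "redirected"  # old address now forwards elsewhere
    return "available"

def probe(url, timeout=10):
    """Fetch a published service address and classify the outcome."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            # geturl() reflects any redirects followed by urllib
            return classify(resp.status, url, resp.geturl())
    except (urllib.error.URLError, ValueError):
        return "unavailable"

# Classification logic can be exercised without network access:
print(classify(200, "http://example.org/tool", "http://example.org/tool/"))
```

    Functionality testing, as the authors note, is harder to automate: it needs example inputs and expected outputs, which is exactly what a third of the surveyed services lacked.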

  18. Implementation of Freeman-Wimley prediction algorithm in a web-based application for in silico identification of beta-barrel membrane proteins

    Directory of Open Access Journals (Sweden)

    José Antonio Agüero-Fernández

    2015-11-01

    Full Text Available Beta-barrel type proteins play an important role in both human and veterinary medicine. In particular, their localization on the bacterial surface and their involvement in virulence mechanisms of pathogens have turned them into an interesting target in studies searching for vaccine candidates. Recently, Freeman and Wimley developed a prediction algorithm based on the physicochemical properties of transmembrane beta-barrel proteins (TMBBs). Based on that algorithm, and using Grails, a web-based application was implemented. This system, named Beta Predictor, is capable of processing anything from one protein sequence to complete predicted proteomes of up to 10000 proteins, with a runtime of about 0.019 seconds per 500-residue protein, and it allows graphical analyses for each protein. The application was evaluated with a validation set of 535 non-redundant proteins: 102 TMBBs and 433 non-TMBBs. The sensitivity, specificity, Matthews correlation coefficient, positive predictive value and accuracy were 85.29%, 95.15%, 78.72%, 80.56% and 93.27%, respectively. The performance of this system was compared with that of the TMBB predictors BOMP and TMBHunt, using the same validation set. In the same order as above, the results were 76.47%, 99.31%, 83.05%, 96.30% and 94.95% for BOMP, and 78.43%, 92.38%, 67.90%, 70.17% and 89.78% for TMBHunt. Beta Predictor was outperformed by BOMP but showed better performance than TMBHunt.
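    The reported Beta Predictor figures are internally consistent and can be recomputed from a confusion matrix. With the validation set of 102 TMBBs and 433 non-TMBBs, the stated sensitivity and specificity imply TP=87, FN=15, TN=412, FP=21 (counts inferred here from the percentages, not taken from the paper):

```python
import math

# Confusion-matrix counts inferred from the abstract's validation set
# (102 TMBBs, 433 non-TMBBs) and its reported sensitivity/specificity.
TP, FN = 87, 15    # 87/102  = 85.29% sensitivity
TN, FP = 412, 21   # 412/433 = 95.15% specificity

sensitivity = TP / (TP + FN)
specificity = TN / (TN + FP)
ppv = TP / (TP + FP)                       # positive predictive value
accuracy = (TP + TN) / (TP + TN + FP + FN)
mcc = (TP * TN - FP * FN) / math.sqrt(
    (TP + FP) * (TP + FN) * (TN + FP) * (TN + FN))

for name, value in [("sensitivity", sensitivity), ("specificity", specificity),
                    ("MCC", mcc), ("PPV", ppv), ("accuracy", accuracy)]:
    print(f"{name}: {value:.2%}")
```

    Running this reproduces the abstract's 85.29%, 95.15%, 80.56% and 93.27%, with the MCC agreeing with the reported 78.72% to within rounding.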

  19. Persistent Web References – Best Practices and New Suggestions

    DEFF Research Database (Denmark)

    Zierau, Eld; Nyvang, Caroline; Kromann, Thomas Hvid

    In this paper, we suggest adjustments to best practices for persistent web referencing; adjustments that aim at preservation and long-term accessibility of web-referenced resources in general, but with a focus on web references in web archives. Web referencing is highly relevant and crucial...... refer to archive URLs, which depend on the web archives' access implementations. A major part of the suggested adjustments is a new web reference standard for archived web references (called wPID), which is a supplement to the current practices. The purpose of the standard is to support general, global...

  20. Web archives

    DEFF Research Database (Denmark)

    Finnemann, Niels Ole

    2018-01-01

    This article deals with general web archives and the principles for selection of materials to be preserved. It opens with a brief overview of reasons why general web archives are needed. Sections two and three present major, long-term web archive initiatives and discuss the purposes and possible...... values of web archives and ask how to meet unknown future needs, demands and concerns. Section four analyses three main principles in contemporary web archiving strategies (topic-centric, domain-centric and time-centric archiving strategies) and section five discusses how to combine these to provide...... a broad and rich archive. Section six is concerned with inherent limitations and why web archives are always flawed. The last sections deal with the question of how web archives may fit into the rapidly expanding, but fragmented, landscape of digital repositories taking care of various parts...

  1. Prey interception drives web invasion and spider size determines successful web takeover in nocturnal orb-web spiders.

    Science.gov (United States)

    Gan, Wenjin; Liu, Shengjie; Yang, Xiaodong; Li, Daiqin; Lei, Chaoliang

    2015-09-24

    A striking feature of web-building spiders is the use of silk to make webs, mainly for prey capture. However, building a web is energetically expensive and increases the risk of predation. To reduce such costs and still have access to abundant prey, some web-building spiders have evolved web invasion behaviour. In general, no consistent patterns of web invasion have emerged and the factors determining web invasion remain largely unexplored. Here we report web invasion among conspecifics in seven nocturnal species of orb-web spiders, and examined the factors determining the probability of webs that could be invaded and taken over by conspecifics. About 36% of webs were invaded by conspecifics, and 25% of invaded webs were taken over by the invaders. A web that was built higher and intercepted more prey was more likely to be invaded. Once a web was invaded, the smaller the size of the resident spider, the more likely its web would be taken over by the invader. This study suggests that web invasion, as a possible way of reducing costs, may be widespread in nocturnal orb-web spiders. © 2015. Published by The Company of Biologists Ltd.

  2. Prey interception drives web invasion and spider size determines successful web takeover in nocturnal orb-web spiders

    Directory of Open Access Journals (Sweden)

    Wenjin Gan

    2015-10-01

    Full Text Available A striking feature of web-building spiders is the use of silk to make webs, mainly for prey capture. However, building a web is energetically expensive and increases the risk of predation. To reduce such costs and still have access to abundant prey, some web-building spiders have evolved web invasion behaviour. In general, no consistent patterns of web invasion have emerged and the factors determining web invasion remain largely unexplored. Here we report web invasion among conspecifics in seven nocturnal species of orb-web spiders, and examined the factors determining the probability of webs that could be invaded and taken over by conspecifics. About 36% of webs were invaded by conspecifics, and 25% of invaded webs were taken over by the invaders. A web that was built higher and intercepted more prey was more likely to be invaded. Once a web was invaded, the smaller the size of the resident spider, the more likely its web would be taken over by the invader. This study suggests that web invasion, as a possible way of reducing costs, may be widespread in nocturnal orb-web spiders.

  3. AGRIS: providing access to agricultural research data exploiting open data on the web [v1; ref status: indexed, http://f1000r.es/599]

    Directory of Open Access Journals (Sweden)

    Fabrizio Celli

    2015-05-01

    Full Text Available AGRIS is the International System for Agricultural Science and Technology. It is supported by a large community of data providers, partners and users. AGRIS is a database that aggregates bibliographic data and, through this core data, retrieves related content across online information systems by taking advantage of Semantic Web capabilities. AGRIS is a global public good, and its vision is to be a service responsive to its users' needs, facilitating contributions and feedback regarding the AGRIS core knowledgebase, AGRIS's future and its continuous development. Periodic AGRIS e-consultations, partner meetings and user feedback feed into the development of the AGRIS application and its content coverage. This paper outlines the current AGRIS technical set-up and its network of partners, data providers and users, as well as how AGRIS's responsiveness to clients' needs inspires the continuous technical development of the application. The paper concludes with a use case of how AGRIS stakeholder input and the subsequent AGRIS e-consultation results influence the development of the AGRIS application, knowledgebase and service delivery.

  4. Deep iCrawl: An Intelligent Vision-Based Deep Web Crawler

    OpenAIRE

    R.Anita; V.Ganga Bharani; N.Nityanandam; Pradeep Kumar Sahoo

    2011-01-01

    The explosive growth of the World Wide Web has posed a challenging problem in extracting relevant data. Traditional web crawlers focus only on the surface web, while the deep web keeps expanding behind the scenes. Deep web pages are created dynamically as a result of queries posed to specific web databases. The structure of deep web pages makes it impossible for traditional web crawlers to access deep web contents. This paper, Deep iCrawl, gives a novel and vision-based app...
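Because deep-web pages exist only as responses to form queries, a deep-web crawler must enumerate candidate form submissions rather than follow hyperlinks. A minimal sketch of this query-probing step (the endpoint URL and field values below are hypothetical, not from the paper):

```python
from itertools import product
from urllib.parse import urlencode

def generate_probe_urls(form_action, field_values):
    """Enumerate GET query URLs for a search form, one per combination
    of candidate field values -- the core of deep-web query probing."""
    names = sorted(field_values)
    urls = []
    for combo in product(*(field_values[n] for n in names)):
        params = dict(zip(names, combo))
        urls.append(form_action + "?" + urlencode(params))
    return urls

# Hypothetical search form with two fields.
urls = generate_probe_urls(
    "https://example.org/search",
    {"genre": ["fiction", "history"], "lang": ["en"]},
)
```

Each generated URL would then be fetched and its result page parsed; real crawlers also learn which value combinations yield non-empty results.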

  5. Moby and Moby 2: creatures of the deep (web).

    Science.gov (United States)

    Vandervalk, Ben P; McCarthy, E Luke; Wilkinson, Mark D

    2009-03-01

    Facile and meaningful integration of data from disparate resources is the 'holy grail' of bioinformatics. Some resources have begun to address this problem by providing their data using Semantic Web standards, specifically the Resource Description Framework (RDF) and the Web Ontology Language (OWL). Unfortunately, adoption of Semantic Web standards has been slow overall, and even in cases where the standards are being utilized, interconnectivity between resources is rare. In response, we have seen the emergence of centralized 'semantic warehouses' that collect public data from third parties, integrate it, translate it into OWL/RDF and provide it to the community as a unified and queryable resource. One limitation of the warehouse approach is that queries are confined to the resources that have been selected for inclusion. A related problem, perhaps of greater concern, is that the majority of bioinformatics data exists in the 'Deep Web'-that is, the data does not exist until an application or analytical tool is invoked, and therefore does not have a predictable Web address. The inability to utilize Uniform Resource Identifiers (URIs) to address this data is a barrier to its accessibility via URI-centric Semantic Web technologies. Here we examine 'The State of the Union' for the adoption of Semantic Web standards in the health care and life sciences domain by key bioinformatics resources, explore the nature and connectivity of several community-driven semantic warehousing projects, and report on our own progress with the CardioSHARE/Moby-2 project, which aims to make the resources of the Deep Web transparently accessible through SPARQL queries.
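The SPARQL queries mentioned above reduce to matching basic graph patterns against sets of RDF triples. A toy in-memory matcher illustrating that operation (the triples and prefixed names are invented for illustration; CardioSHARE/Moby-2 queries remote resources, not a local list):

```python
def query(triples, patterns):
    """Evaluate a SPARQL-style basic graph pattern over an in-memory
    triple store. Terms beginning with '?' are variables; each pattern
    joins with the bindings accumulated so far."""
    bindings = [{}]
    for pat in patterns:
        nxt = []
        for b in bindings:
            for t in triples:
                cur = dict(b)
                ok = True
                for p, v in zip(pat, t):
                    if p.startswith("?"):
                        if cur.get(p, v) != v:  # conflicting binding
                            ok = False
                            break
                        cur[p] = v
                    elif p != v:  # constant term must match exactly
                        ok = False
                        break
                if ok:
                    nxt.append(cur)
        bindings = nxt
    return bindings

triples = [
    ("gene:BRCA1", "rdf:type", "bio:Gene"),
    ("gene:BRCA1", "bio:associatedWith", "disease:BreastCancer"),
    ("gene:TP53", "rdf:type", "bio:Gene"),
]
rows = query(triples, [
    ("?g", "rdf:type", "bio:Gene"),
    ("?g", "bio:associatedWith", "?d"),
])
```

Only `gene:BRCA1` satisfies both patterns, so a single binding survives the join.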

  6. Designing, Implementing, and Evaluating Secure Web Browsers

    Science.gov (United States)

    Grier, Christopher L.

    2009-01-01

    Web browsers are plagued with vulnerabilities, providing hackers with easy access to computer systems using browser-based attacks. Efforts that retrofit existing browsers have had limited success since modern browsers are not designed to withstand attack. To enable more secure web browsing, we design and implement new web browsers from the ground…

  7. The Geospatial Web and Local Geographical Education

    Science.gov (United States)

    Harris, Trevor M.; Rouse, L. Jesse; Bergeron, Susan J.

    2010-01-01

    Recent innovations in the Geospatial Web represent a paradigm shift in Web mapping by enabling educators to explore geography in the classroom by dynamically using a rapidly growing suite of impressive online geospatial tools. Coupled with access to spatial data repositories and User-Generated Content, the Geospatial Web provides a powerful…

  8. Web-Mediated Knowledge Synthesis for Educators

    Science.gov (United States)

    DeSchryver, Michael

    2015-01-01

    Ubiquitous and instant access to information on the Web is challenging what constitutes 21st century literacies. This article explores the notion of Web-mediated knowledge synthesis, an approach to integrating Web-based learning that may result in generative synthesis of ideas. This article describes the skills and strategies that may support…

  9. Quantifying retrieval bias in Web archive search

    NARCIS (Netherlands)

    Samar, Thaer; Traub, Myriam C.; van Ossenbruggen, Jacco; Hardman, Lynda; de Vries, Arjen P.

    2018-01-01

    A Web archive usually contains multiple versions of documents crawled from the Web at different points in time. One possible way for users to access a Web archive is through full-text search systems. However, previous studies have shown that these systems can induce a bias, known as the

  10. Bringing the Web to America

    CERN Multimedia

    Kunz, P F

    1999-01-01

    On 12 December 1991, Dr. Kunz installed the first Web server outside of Europe at the Stanford Linear Accelerator Center. Today, if you do not have access to the Web you are considered disadvantaged. Before it made sense for Tim Berners-Lee to invent the Web at CERN, a number of ingredients had to be in place. Dr. Kunz will present a history of how these ingredients developed and the role the academic research community had in forming them. In particular, the role that big science, such as high energy physics, played in giving us the Web we have today...

  11. On Web User Tracking: How Third-Party Http Requests Track Users' Browsing Patterns for Personalised Advertising

    OpenAIRE

    Puglisi, Silvia; Rebollo-Monedero, David; Forné, Jordi

    2016-01-01

    On today's Web, users trade access to their private data for content and services. Advertising sustains the business model of many websites and applications. Efficient and successful advertising relies on predicting users' actions and tastes to suggest a range of products to buy. It follows that, while surfing the Web, users leave traces regarding their identity in the form of activity patterns and unstructured data. We analyse how advertising networks build user footprints and how the sugg...
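A third-party request is simply one whose host does not belong to the site the user is visiting. A naive check of that distinction (real tracker detection matches hosts against the Public Suffix List and curated blocklists; the URLs here are made up):

```python
from urllib.parse import urlsplit

def is_third_party(page_url, request_url):
    """Naive third-party test: a request is third-party when its host
    does not share the page's last two host labels. This mishandles
    multi-label suffixes like co.uk, which the Public Suffix List fixes."""
    def site(url):
        host = urlsplit(url).hostname or ""
        return ".".join(host.split(".")[-2:])
    return site(page_url) != site(request_url)

requests = [
    "https://cdn.news.example/app.js",              # same site: first-party
    "https://tracker.adnetwork.example/pixel.gif",  # different site
]
third = [u for u in requests
         if is_third_party("https://www.news.example/article", u)]
```

Logging which third-party hosts recur across the pages a user visits is exactly how the footprints described in the abstract are assembled.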

  12. Realising the Full Potential of the Web.

    Science.gov (United States)

    Berners-Lee, Tim

    1999-01-01

    Argues that the first phase of the Web is communication through shared knowledge. Predicts that the second side to the Web, yet to emerge, is that of machine-understandable information, with humans providing the inspiration and the intuition. (CR)

  13. Access 2013 for dummies

    CERN Document Server

    Ulrich Fuller, Laurie

    2013-01-01

    The easy guide to Microsoft Access returns with updates on the latest version! Microsoft Access allows you to store, organize, view, analyze, and share data; the new Access 2013 release enables you to build even more powerful, custom database solutions that integrate with the web and enterprise data sources. Access 2013 For Dummies covers all the new features of the latest version of Access and serves as an ideal reference, combining the latest Access features with the basics of building usable databases. You'll learn how to create an app from the Welcome screen, get support

  14. Pro Access 2010 Development

    CERN Document Server

    Collins, Mark

    2011-01-01

    Pro Access 2010 Development is a fundamental resource for developing business applications that take advantage of the features of Access 2010 and the many sources of data available to your business. In this book, you'll learn how to build database applications, create Web-based databases, develop macros and Visual Basic for Applications (VBA) tools for Access applications, integrate Access with SharePoint and other business systems, and much more. Using a practical, hands-on approach, this book will take you through all the facets of developing Access-based solutions, such as data modeling, co

  15. Reducing shame in a game that predicts HIV risk reduction for young adult MSM: a randomized trial delivered nationally over the Web.

    Science.gov (United States)

    Christensen, John L; Miller, Lynn Carol; Appleby, Paul Robert; Corsbie-Massay, Charisse; Godoy, Carlos Gustavo; Marsella, Stacy C; Read, Stephen J

    2013-11-13

    Men who have sex with men (MSM) often face socially sanctioned disapproval of sexual deviance from the heterosexual "normal." Such sexual stigma can be internalized, producing a painful affective state (i.e., shame). Although shame can predict risk-taking in other domains (e.g., alcohol abuse in addiction), sexual shame's link to sexual risk-taking is unclear. Socially Optimized Learning in Virtual Environments (SOLVE) was designed to reduce MSM's sexual shame, but whether it does so, and whether that reduction predicts HIV risk reduction, is unclear. The study tested whether: at baseline, MSM's reported past unprotected anal intercourse (UAI) is related to shame; exposure to SOLVE, compared to a wait-list control (WLC) condition, reduces MSM's shame; and shame reduction mediates the link between condition and UAI risk reduction. HIV-negative, self-identified African American, Latino or White MSM, aged 18-24 years, who had had UAI with a non-primary/casual partner in the past three months were recruited for a national online study. Eligible MSM were computer randomized to either WLC or a web-delivered SOLVE. Retained MSM completed baseline measures (e.g., UAI in the past three months; current level of shame) and, in the SOLVE group, viewed at least one level of the game. At the end of the first session, shame was measured again. MSM completed follow-up UAI measures three months later. All data from 921 retained MSM (WLC condition, 484; SOLVE condition, 437) were analyzed, with missing data multiply imputed. At baseline, MSM reporting more risky sexual behaviour reported more shame (rs=0.21, p<0.001). The indirect effect was significant (point estimate -0.10, 95% bias-corrected CI [-0.23 to -0.01]), such that participants in the SOLVE treatment condition reported greater reductions in shame, which in turn predicted reductions in risky sexual behaviour at follow-up. The direct effect, however, was not significant. SOLVE is the first intervention to: (1) significantly reduce shame for MSM; and (2) demonstrate that
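The mediation result above (an indirect effect with a bootstrap confidence interval) can be sketched with simple OLS slopes. This toy version uses a percentile bootstrap rather than the study's bias-corrected variant and estimates the b path without covariate adjustment; all data in the usage note are synthetic, not the trial's:

```python
import random
from statistics import mean

def slope(xs, ys):
    """OLS slope of ys regressed on xs."""
    mx, my = mean(xs), mean(ys)
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

def bootstrap_indirect_ci(x, m, y, n_boot=2000, alpha=0.05, seed=7):
    """Percentile bootstrap CI for the indirect effect a*b, where
    a = slope of mediator on treatment and b = slope of outcome on
    mediator (a simplification of a full mediation model)."""
    rng = random.Random(seed)
    idx = range(len(x))
    est = []
    for _ in range(n_boot):
        s = [rng.choice(idx) for _ in idx]  # resample with replacement
        xs, ms, ys = [x[i] for i in s], [m[i] for i in s], [y[i] for i in s]
        est.append(slope(xs, ms) * slope(ms, ys))
    est.sort()
    lo = est[int(alpha / 2 * n_boot)]
    hi = est[int((1 - alpha / 2) * n_boot) - 1]
    return slope(x, m) * slope(m, y), (lo, hi)
```

A CI that excludes zero, as in the abstract, is the bootstrap criterion for a significant indirect effect.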

  16. Web-based services for drug design and discovery.

    Science.gov (United States)

    Frey, Jeremy G; Bird, Colin L

    2011-09-01

    Reviews of the development of drug discovery through the 20th century recognised the importance of chemistry and increasingly bioinformatics, but had relatively little to say about the importance of computing and networked computing in particular. However, the design and discovery of new drugs is arguably the most significant single application of bioinformatics and cheminformatics to have benefitted from the increases in the range and power of the computational techniques since the emergence of the World Wide Web, commonly now referred to as simply 'the Web'. Web services have enabled researchers to access shared resources and to deploy standardized calculations in their search for new drugs. This article first considers the fundamental principles of Web services and workflows, and then explores the facilities and resources that have evolved to meet the specific needs of chem- and bio-informatics. This strategy leads to a more detailed examination of the basic components that characterise molecules and the essential predictive techniques, followed by a discussion of the emerging networked services that transcend the basic provisions, and the growing trend towards embracing modern techniques, in particular the Semantic Web. In the opinion of the authors, the issues that require community action are: increasing the amount of chemical data available for open access; validating the data as provided; and developing more efficient links between the worlds of cheminformatics and bioinformatics. The goal is to create ever better drug design services.

  17. MetIDB: A Publicly Accessible Database of Predicted and Experimental 1H NMR Spectra of Flavonoids

    NARCIS (Netherlands)

    Mihaleva, V.V.; Beek, te T.A.; Zimmeren, van F.; Moco, S.I.A.; Laatikainen, R.; Niemitz, M.; Korhonen, S.P.; Driel, van M.A.; Vervoort, J.

    2013-01-01

    Identification of natural compounds, especially secondary metabolites, has been hampered by the lack of easy to use and accessible reference databases. Nuclear magnetic resonance (NMR) spectroscopy is the most selective technique for identification of unknown metabolites. High quality 1H NMR (proton

  18. The benefit of non-contrast-enhanced magnetic resonance angiography for predicting vascular access surgery outcome: a computer model perspective

    NARCIS (Netherlands)

    Merkx, M.A.G.; Huberts, W.; Bosboom, E.M.H.; Bode, A.S.; Bescos, J.O.; Tordoir, J.H.M.; Breeuwer, M.; Vosse, van de F.N.

    2013-01-01

    Introduction Vascular access (VA) surgery, a prerequisite for hemodialysis treatment of end-stage renal-disease (ESRD) patients, is hampered by complication rates, which are frequently related to flow enhancement. To assist in VA surgery planning, a patient-specific computer model for postoperative

  19. Introduction to Webometrics Quantitative Web Research for the Social Sciences

    CERN Document Server

    Thelwall, Michael

    2009-01-01

    Webometrics is concerned with measuring aspects of the web: web sites, web pages, parts of web pages, words in web pages, hyperlinks, web search engine results. The importance of the web itself as a communication medium and for hosting an increasingly wide array of documents, from journal articles to holiday brochures, needs no introduction. Given this huge and easily accessible source of information, there are limitless possibilities for measuring or counting on a huge scale (e.g., the number of web sites, the number of web pages, the number of blogs) or on a smaller scale (e.g., the number o

  20. Web Engineering

    Energy Technology Data Exchange (ETDEWEB)

    White, Bebo

    2003-06-23

    Web Engineering is the application of systematic, disciplined and quantifiable approaches to the development, operation, and maintenance of Web-based applications. It is both a pro-active approach and a growing collection of theoretical and empirical research in Web application development. This paper gives an overview of Web Engineering by addressing the questions: (a) why is it needed? (b) what is its domain of operation? (c) how does it help and what should it do to improve Web application development? and (d) how should it be incorporated in education and training? The paper discusses the significant differences that exist between Web applications and conventional software, the taxonomy of Web applications, the progress made so far, and the research issues and experience of creating a specialization at the master's level. The paper concludes that Web Engineering is at this stage a moving target, since Web technologies are constantly evolving, making new types of applications possible, which in turn may require innovations in how they are built, deployed and maintained.