WorldWideScience

Sample records for accurate web accessible

  1. Web Accessibility and Guidelines

    Science.gov (United States)

    Harper, Simon; Yesilada, Yeliz

    Access to, and movement around, complex online environments, of which the World Wide Web (Web) is the most popular example, has long been considered an important and major issue in the Web design and usability field. The commonly used slang phrase ‘surfing the Web’ implies rapid and free access, pointing to its importance among designers and users alike. It has also been long established that this potentially complex and difficult access is further complicated, and becomes neither rapid nor free, if the user is disabled. There are millions of people who have disabilities that affect their use of the Web. Web accessibility aims to help these people to perceive, understand, navigate, and interact with, as well as contribute to, the Web, and thereby society in general. This accessibility is, in part, facilitated by the Web Content Accessibility Guidelines (WCAG), currently moving from version one to two. These guidelines are intended to encourage designers to make sure their sites conform to specifications, and in that conformance enable the assistive technologies of disabled users to better interact with the page content. In this way, it was hoped that accessibility could be supported. While this is in part true, guidelines do not solve all problems, and the new WCAG version two guidelines are surrounded by controversy and intrigue. This chapter aims to establish the published literature related to Web accessibility and Web accessibility guidelines, and discuss limitations of the current guidelines and future directions.

  2. World Wide Access: Accessible Web Design.

    Science.gov (United States)

    Washington Univ., Seattle.

    This brief paper considers the application of "universal design" principles to Web page design in order to increase accessibility for people with disabilities. Suggestions are based on the World Wide Web Consortium's accessibility initiative, which has proposed guidelines for all Web authors and federal government standards. Seven guidelines for…

  3. From Web accessibility to Web adaptability.

    Science.gov (United States)

    Kelly, Brian; Nevile, Liddy; Sloan, David; Fanou, Sotiris; Ellison, Ruth; Herrod, Lisa

    2009-07-01

    This article asserts that current approaches to enhance the accessibility of Web resources fail to provide a solid foundation for the development of a robust and future-proofed framework. In particular, they fail to take advantage of new technologies and technological practices. The article introduces a framework for Web adaptability, which encourages the development of Web-based services that can be resilient to the diversity of uses of such services, the target audience, available resources, technical innovations, organisational policies and relevant definitions of 'accessibility'. The article refers to a series of author-focussed approaches to accessibility through which the authors and others have struggled to find ways to promote accessibility for people with disabilities. These approaches depend upon the resource author's determination of the anticipated users' needs and their provision. Through approaches labelled as 1.0, 2.0 and 3.0, the authors have widened their focus to account for contexts and individual differences in target audiences. Now, the authors want to recognise the role of users in determining their engagement with resources (including services). To distinguish this new approach, the term 'adaptability' has been used to replace 'accessibility'; new definitions of accessibility have been adopted, and the authors have reviewed their previous work to clarify how it is relevant to the new approach. Accessibility 1.0 is here characterised as a technical approach in which authors are told how to construct resources for a broadly defined audience. This is known as universal design. Accessibility 2.0 was introduced to point to the need to account for the context in which resources would be used, to help overcome inadequacies identified in the purely technical approach. Accessibility 3.0 moved the focus on users from a homogenised universal definition to recognition of the idiosyncratic needs and preferences of individuals, and to catering for them. 
All of

  4. Web Accessibility in Romania: The Conformance of Municipal Web Sites to Web Content Accessibility Guidelines

    Directory of Open Access Journals (Sweden)

    Costin PRIBEANU

    2012-01-01

    The accessibility of public administration web sites is a key quality attribute for the successful implementation of the Information Society. The purpose of this paper is to present a second review of municipal web sites in Romania, based on automated accessibility checking. A total of 60 web sites were evaluated against WCAG 2.0 recommendations. The analysis of the results reveals a relatively low level of web accessibility among municipal web sites and highlights several aspects. Firstly, slight progress in web accessibility was noticed relative to the sample evaluated in 2010. Secondly, the number of specific accessibility errors varies across the web sites, and accessibility is not preserved over time. Thirdly, these variations suggest that an accessibility check before launching a new release of a web page is not common practice.

  5. Discovering More Accurate Frequent Web Usage Patterns

    CERN Document Server

    Bayir, Murat Ali; Cosar, Ahmet; Fidan, Guven

    2008-01-01

    Web usage mining is a type of web mining that exploits data mining techniques to discover valuable information from the navigation behavior of World Wide Web users. As in classical data mining, data preparation and pattern discovery are the main issues in web usage mining. The first phase of web usage mining is the data processing phase, which includes the session reconstruction operation from server logs. Session reconstruction success directly affects the quality of the frequent patterns discovered in the next phase. In reactive web usage mining techniques, the source data is web server logs and the topology of the web pages served by the web server domain. Other kinds of information collected during the user's interactive browsing of the web site, such as cookies or web logs containing similar information, are not used. The next phase of web usage mining is discovering frequent user navigation patterns. In this phase, pattern discovery methods are applied on the reconstructed sessions obtained in the first phas...
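    The time-based session reconstruction step described above can be sketched as follows (a minimal illustration, not the paper's algorithm; the 30-minute timeout is a common heuristic assumed here):

```python
from datetime import datetime, timedelta

# Time-oriented session reconstruction: split each user's click stream
# wherever the gap between consecutive requests exceeds a timeout.
# The 30-minute threshold is a common heuristic, not prescribed by the paper.
SESSION_TIMEOUT = timedelta(minutes=30)

def reconstruct_sessions(log_entries):
    """log_entries: iterable of (user_id, timestamp, url) tuples,
    assumed sorted by timestamp. Returns {user_id: [session, ...]}."""
    sessions = {}
    last_seen = {}
    for user, ts, url in log_entries:
        # Start a new session on first sight of the user or after a long gap.
        if user not in sessions or ts - last_seen[user] > SESSION_TIMEOUT:
            sessions.setdefault(user, []).append([])
        sessions[user][-1].append(url)
        last_seen[user] = ts
    return sessions

log = [
    ("u1", datetime(2008, 1, 1, 10, 0), "/home"),
    ("u1", datetime(2008, 1, 1, 10, 5), "/products"),
    ("u1", datetime(2008, 1, 1, 11, 0), "/home"),  # 55-minute gap: new session
]
print(reconstruct_sessions(log))
```

    Reactive techniques like this rely only on the server log (and optionally the site topology); choosing a different timeout yields different sessions and therefore different frequent patterns downstream.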

  6. Web Accessibility, Libraries, and the Law

    Directory of Open Access Journals (Sweden)

    Camilla Fulton

    2011-03-01

    With an abundance of library resources being served on the web, researchers are finding that disabled people oftentimes do not have the same level of access to materials as their nondisabled peers. This paper discusses web accessibility in the context of the United States federal laws most referenced in web accessibility lawsuits. Additionally, it reveals which states have statutes that mirror federal web accessibility guidelines, and to what extent. Interestingly, fewer than half of the states have adopted statutes addressing web accessibility, and fewer than half of these reference Section 508 of the Rehabilitation Act or the Web Content Accessibility Guidelines (WCAG 1.0). Regardless of sparse legislation surrounding web accessibility, librarians should consult the appropriate web accessibility resources to ensure that their specialized content reaches all users.

  7. Web Accessibility - A timely recognized challenge

    CERN Document Server

    Qadri, Jameel A

    2011-01-01

    Web accessibility for disabled people has posed a challenge to civilized societies that claim to uphold the principles of equal opportunity and nondiscrimination. Certain concrete measures have been taken to narrow the digital divide between disabled and nondisabled users of Internet technology. These efforts have resulted in the enactment of legislation, in mass awareness of the discriminatory nature of the accessibility issue, and in the development of commensurate technological tools for developing and testing Web accessibility. The World Wide Web Consortium's (W3C) Web Accessibility Initiative (WAI) has framed a comprehensive document comprising a set of guidelines to make Web sites accessible to users with disabilities. This paper is about the issues and aspects surrounding Web accessibility. The details and scope are kept limited to comply with the aim of the paper, which is to create awareness and to provide a basis for in-depth investigation.

  8. Web accessibility standards and disability: developing critical perspectives on accessibility.

    Science.gov (United States)

    Lewthwaite, Sarah

    2014-01-01

    Currently, dominant web accessibility standards do not respect disability as a complex and culturally contingent interaction, nor do they recognize that disability is a variable, contrary and political power relation rather than a biological limit. Against this background there is clear scope to broaden the ways in which accessibility standards are understood, developed and applied. Commentary. The values that shape and are shaped by legislation promote universal, statistical and automated approaches to web accessibility. This results in web accessibility standards conveying powerful norms fixing the relationship between technology and disability, irrespective of geographical, social, technological or cultural diversity. Web accessibility standards are designed to enact universal principles; however, they express partial and biopolitical understandings of the relation between disability and technology. These values can be limiting, and potentially counter-productive, for example for the majority of disabled people in the "Global South", where different contexts constitute different disabilities and different experiences of web access. To create more robust, accessible outcomes for disabled people, research and standards practice should diversify to embrace more interactional accounts of disability in different settings. Implications for Rehabilitation: Creating accessible experiences is an essential aspect of rehabilitation. Web standards promote universal accessibility as a property of an online resource or service. This undervalues the importance of the user's intentions, expertise, context, and the complex social and cultural nature of disability. Standardized, universal approaches to web accessibility may lead to counterproductive outcomes for disabled people whose impairments and circumstances do not meet Western disability and accessibility norms. Accessible experiences for rehabilitation can be enhanced through an additional focus on holistic approaches to

  9. Web accessibility of public universities in Andalusia

    Directory of Open Access Journals (Sweden)

    Luis Alejandro Casasola Balsells

    2017-06-01

    This paper describes an analysis conducted in 2015 to evaluate the accessibility of content on Andalusian public university websites. In order to determine whether these websites are accessible, an assessment was carried out to check conformance with the latest Web Content Accessibility Guidelines (WCAG 2.0) established by the World Wide Web Consortium (W3C). For this purpose, we designed a methodology for analysis that combines the use of three automatic tools (eXaminator, the MINHAP web accessibility tool, and TAW) with a manual analysis, to provide greater reliability and validity of the results. Although the results are acceptable overall, a detailed analysis shows that more work is still needed to achieve full accessibility for the entire university community. In this respect, we suggest several corrections to common accessibility errors to facilitate the design of university web portals.

  10. Understanding and Supporting Web Developers: Design and Evaluation of a Web Accessibility Information Resource (WebAIR).

    Science.gov (United States)

    Swallow, David; Petrie, Helen; Power, Christopher

    2016-01-01

    This paper describes the design and evaluation of a Web Accessibility Information Resource (WebAIR) for supporting web developers to create and evaluate accessible websites. WebAIR was designed with web developers in mind, recognising their current working practices and acknowledging their existing understanding of web accessibility. We conducted an evaluation with 32 professional web developers in which they used either WebAIR or an existing accessibility information resource, the Web Content Accessibility Guidelines, to identify accessibility problems. The findings indicate that several design decisions made in relation to the language, organisation, and volume of WebAIR were effective in supporting web developers to undertake web accessibility evaluations.

  11. Web accessibility and open source software.

    Science.gov (United States)

    Obrenović, Zeljko

    2009-07-01

    A Web browser provides a uniform user interface to different types of information. Making this interface universally accessible and more interactive is a long-term goal still far from being achieved. Universally accessible browsers require novel interaction modalities and additional functionalities, for which existing browsers tend to provide only partial solutions. Although functionality for Web accessibility can be found as open source and free software components, their reuse and integration is complex because they were developed in diverse implementation environments, following standards and conventions incompatible with the Web. To address these problems, we have started several activities that aim at exploiting the potential of open-source software for Web accessibility. The first of these activities is the development of the Adaptable Multi-Interface COmmunicator (AMICO):WEB, an infrastructure that facilitates efficient reuse and integration of open source software components into the Web environment. The main contribution of AMICO:WEB is in enabling syntactic and semantic interoperability between Web extension mechanisms and a variety of integration mechanisms used by open source and free software components. Its design is based on our experiences in solving practical problems where we have used open source components to improve the accessibility of rich media Web applications. The second of our activities involves improving education, where we have used our platform to teach students how to build advanced accessibility solutions from diverse open-source software. We are also partially involved in the recently started Eclipse project called the Accessibility Tools Framework (ACTF), the aim of which is the development of an extensible infrastructure upon which developers can build a variety of utilities that help to evaluate and enhance the accessibility of applications and content for people with disabilities. In this article we briefly report on these activities.

  12. Web browser accessibility using open source software

    NARCIS (Netherlands)

    Obrenovic, Z.; Ossenbruggen, J.R. van

    2007-01-01

    A Web browser provides a uniform user interface to different types of information. Making this interface universally accessible and more interactive is a long term goal still far from being achieved. Universally accessible browsers require novel interaction modalities and additional functionalities,

  13. Evaluating Web accessibility at different processing phases

    Science.gov (United States)

    Fernandes, N.; Lopes, R.; Carriço, L.

    2012-09-01

    Modern Web sites use several techniques (e.g. DOM manipulation) that allow for the injection of new content into their Web pages (e.g. AJAX), as well as manipulation of the HTML DOM tree. This has the consequence that the Web pages presented to users (i.e. after browser processing) differ from the original structure and content transmitted through HTTP communication (i.e. before browser processing). This poses a series of challenges for Web accessibility evaluation, especially for automated evaluation software. This article details an experimental study designed to understand the differences posed by accessibility evaluation after Web browser processing. We implemented a JavaScript-based evaluator, QualWeb, that can perform WCAG 2.0 based accessibility evaluations in the two phases of browser processing. Our study shows that, in fact, there are considerable differences between the HTML DOM trees in both phases, which result in distinct evaluation results. We discuss the impact of these results in the light of the potential problems that these differences can pose to designers and developers who use accessibility evaluators that function before browser processing.
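    To illustrate why the processing phase matters, the following minimal checker (a hypothetical sketch, not QualWeb itself) evaluates one WCAG-style rule against the HTML as delivered over HTTP; since scripts may later add or alter images in the browser DOM, the same rule evaluated after processing can return different results:

```python
from html.parser import HTMLParser

# Toy pre-processing check (illustrative only): flag <img> elements that
# lack an alt attribute in the HTML as transmitted over HTTP. Client-side
# scripts may later inject or rewrite images, so running the same check
# on the post-processing DOM can produce a different violation count.
class AltTextChecker(HTMLParser):
    def __init__(self):
        super().__init__()
        self.violations = 0

    def handle_starttag(self, tag, attrs):
        if tag == "img" and "alt" not in dict(attrs):
            self.violations += 1

raw_html = '<img src="logo.png"><img src="chart.png" alt="Sales chart">'
checker = AltTextChecker()
checker.feed(raw_html)
print(checker.violations)  # 1: the alt-less logo image
```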

  14. Web services interface to EPICS channel access

    Institute of Scientific and Technical Information of China (English)

    DUAN Lei; SHEN Liren

    2008-01-01

    Web services are used in the Experimental Physics and Industrial Control System (EPICS). Combined with the EPICS Channel Access protocol, the high usability, platform independence and language independence of Web services can be used to design a fully transparent and uniform software interface layer, which helps us complete channel data acquisition, modification and monitoring functions. This cross-platform, cross-language software interface layer has good interoperability and reusability.
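    As a rough sketch of such an interface layer (the function name and JSON envelope below are our own illustration, not EPICS or published APIs), a web-service method can translate language-neutral requests into channel reads and writes:

```python
import json

# Illustrative sketch only: the dict stands in for live EPICS process
# variables; a real interface layer would forward each request to a
# Channel Access client (caget/caput) instead of a local dict.
process_variables = {"BEAM:CURRENT": 99.7}

def handle_request(request_json):
    """Serve a JSON-encoded web-service call, keeping the interface
    platform- and language-independent for any HTTP-capable client."""
    req = json.loads(request_json)
    pv = req["pv"]
    if req["op"] == "get":      # channel data acquisition
        return json.dumps({"pv": pv, "value": process_variables[pv]})
    if req["op"] == "put":      # channel data modification
        process_variables[pv] = req["value"]
        return json.dumps({"pv": pv, "ok": True})
    return json.dumps({"error": "unknown op"})

print(handle_request('{"op": "get", "pv": "BEAM:CURRENT"}'))
```

    Because both request and response are plain JSON over HTTP, clients in any language can use the layer without linking against EPICS libraries, which is the interoperability the abstract describes.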

  15. Village Green Project: Web-accessible Database

    Science.gov (United States)

    The purpose of this web-accessible database is for the public to be able to view instantaneous readings from a solar-powered air monitoring station located in a public location (prototype pilot test is outside of a library in Durham County, NC). The data are wirelessly transmitte...

  16. Web Design for Accessibility: Policies and Practice.

    Science.gov (United States)

    Foley, Alan; Regan, Bob

    2002-01-01

    Discusses Web design for people with disabilities and outlines a process-based approach to accessibility policy implementation. Topics include legal mandates; determining which standards apply to a given organization; validation, or evaluation of the site; site architecture; navigation; and organizational needs. (Author/LRW)

  17. Web-accessible Chemical Compound Information

    OpenAIRE

    Roth, Dana L

    2008-01-01

    Web-accessible chemical compound information resources are widely available. In addition to fee-based resources, such as SciFinder Scholar and Beilstein, there is a wide variety of freely accessible resources such as ChemSpider and PubChem. The author provides a general description of various fee-based and free chemical compound resources. The free resources generally offer an acceptable alternative to fee-based resources for quick retrieval. It is assumed that readers will be familiar with ...

  19. Access to Space Interactive Design Web Site

    Science.gov (United States)

    Leon, John; Cutlip, William; Hametz, Mark

    2000-01-01

    The Access To Space (ATS) Group at NASA's Goddard Space Flight Center (GSFC) supports the science and technology community at GSFC by facilitating frequent and affordable opportunities for access to space. Through partnerships established with access mode suppliers, the ATS Group has developed an interactive Mission Design web site. The ATS web site provides both the information and the tools necessary to assist mission planners in selecting and planning their ride to space. This includes the evaluation of single payloads vs. ride-sharing opportunities to reduce the cost of access to space. Features of this site include the following: (1) Mission Database. Our mission database contains a listing of missions ranging from proposed to manifested. Missions can be entered by our user community through data input tools. Data is then accessed by users through various search engines: orbit parameters, ride-share opportunities, spacecraft parameters, other mission notes, launch vehicle, and contact information. (2) Launch Vehicle Toolboxes. The launch vehicle toolboxes provide the user a full range of information on vehicle classes and individual configurations. Topics include: general information, environments, performance, payload interface, available volume, and launch sites.

  20. Web Accessibility Theory and Practice: An Introduction for University Faculty

    Science.gov (United States)

    Bradbard, David A.; Peters, Cara

    2010-01-01

    Web accessibility is the practice of making Web sites accessible to all, particularly those with disabilities. As the Internet becomes a central part of post-secondary instruction, it is imperative that instructional Web sites be designed for accessibility to meet the needs of disabled students. The purpose of this article is to introduce Web…

  1. Investigating the appropriateness and relevance of mobile web accessibility guidelines

    OpenAIRE

    Clegg-Vinell, R; Bailey, C.; Gkatzidou, V

    2014-01-01

    The Web Accessibility Initiative (WAI) of the World Wide Web Consortium (W3C) develops and maintains guidelines for making the web more accessible to people with disabilities. WCAG 2.0 and MWBP 1.0 are internationally regarded as the industry-standard guidelines for web accessibility. Mobile testing sessions conducted by AbilityNet document issues raised by users in a report format, relating issues to guidelines wherever possible. This paper presents the results of a preliminary investigati...

  2. Accurate And Efficient Crawling The Deep Web: Surfacing Hidden Value

    OpenAIRE

    Suneet Kumar; Anuj Kumar Yadav; Rakesh Bharti; Rani Choudhary

    2011-01-01

    Focused web crawlers have recently emerged as an alternative to the well-established web search engines. While the well-known focused crawlers retrieve relevant web pages, there are various applications which target whole websites instead of single web pages. For example, companies are represented by websites, not by individual web pages. To answer queries targeted at websites, web directories are an established solution. In this paper, we introduce a novel focused website crawler t...

  3. "Fine Tuning" image accessibility for museum Web sites

    OpenAIRE

    Leporini, Barbara; Norscia, Ivan

    2008-01-01

    Accessibility and usability guidelines are available for designing web sites accessible to blind users. However, the actual usability of accessible web pages varies depending on the type of information the user is dealing with. Museum web sites, which include specimen and hall descriptions, have specific requirements if vision-impaired users, who navigate using a screen reader, are to access pieces of information that are mainly based on visual perception. Here we address a methodology to be appli...

  4. Global Web Accessibility Analysis of National Government Portals and Ministry Web Sites

    DEFF Research Database (Denmark)

    Goodwin, Morten; Susar, Deniz; Nietzio, Annika

    2011-01-01

    Equal access to public information and services for all is an essential part of the United Nations (UN) Declaration of Human Rights. Today, the Web plays an important role in providing information and services to citizens. Unfortunately, many government Web sites are poorly designed and have accessibility barriers that prevent people with disabilities from using them. This article combines current Web accessibility benchmarking methodologies with a sound strategy for comparing Web accessibility among countries and continents. Furthermore, the article presents the first global analysis of the Web accessibility of 192 United Nations Member States, made publicly available. The article also identifies common properties of Member States that have accessible and inaccessible Web sites and shows that implementing antidisability discrimination laws is highly beneficial for the accessibility of Web sites, while...

  5. Predicting Web Page Accesses, using Users’ Profile and Markov Models

    OpenAIRE

    Zeynab Fazelipour

    2016-01-01

    Nowadays the web is an important source for information retrieval; the sources on the WWW are constantly increasing, and the users accessing the web have different backgrounds. Consequently, finding the information that satisfies a user's personal needs is not easy. Exploring users' behavior on the web, as a method for extracting the knowledge lying behind the way users interact with the web, is considered an important tool in the field of web mining. By identifying user's beha...
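    The Markov-model component of such approaches can be sketched as follows (a minimal first-order illustration under our own naming, not the paper's exact method): transition probabilities are estimated from observed sessions, and the most frequent transition out of the current page serves as the prediction.

```python
from collections import Counter, defaultdict

# First-order Markov model over page transitions: estimate P(next | current)
# from observed sessions, then predict the most likely next page.
def train(sessions):
    counts = defaultdict(Counter)
    for session in sessions:
        for current, nxt in zip(session, session[1:]):
            counts[current][nxt] += 1
    return counts

def predict_next(counts, current):
    """Return the most frequently observed successor page, or None."""
    if not counts[current]:
        return None
    return counts[current].most_common(1)[0][0]

sessions = [
    ["/home", "/products", "/cart"],
    ["/home", "/products", "/products/42"],
    ["/home", "/about"],
]
model = train(sessions)
print(predict_next(model, "/home"))  # "/products" (2 of 3 observed transitions)
```

    A user-profile layer, as the abstract suggests, would refine this by training separate transition counts per user segment instead of one global model.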

  6. User Experience-UX-and the Web Accessibility Standards

    Directory of Open Access Journals (Sweden)

    Osama Sohaib

    2011-05-01

    The success of web-based applications depends on how well they are perceived by end-users. Various web accessibility guidelines have been promoted to help improve access to, and understanding of, the content of web pages. Designing for the total User Experience (UX) is an evolving discipline of the World Wide Web mainstream that focuses on how end users will work to achieve their target goals. To satisfy end-users, web-based applications must fulfill some common needs such as clarity, accessibility and availability. The aim of this study is to evaluate how the User Experience characteristics of web-based applications relate to the web accessibility guidelines (WCAG 2.0, ISO 9241-151 and Section 508).

  7. Global Web Accessibility Analysis of National Government Portals and Ministry Web Sites

    DEFF Research Database (Denmark)

    Goodwin, Morten; Susar, Deniz; Nietzio, Annika

    2011-01-01

    Equal access to public information and services for all is an essential part of the United Nations (UN) Declaration of Human Rights. Today, the Web plays an important role in providing information and services to citizens. Unfortunately, many government Web sites are poorly designed and have...... accessibility of 192 United Nations Member States, made publicly available. The article also identifies common properties of Member States that have accessible and inaccessible Web sites and shows that implementing antidisability discrimination laws is highly beneficial for the accessibility of Web sites, while signing the UN Rights and Dignity of Persons with Disabilities has had no such effect yet. The article demonstrates that, despite the commonly held assumption to the contrary, mature, high-quality Web sites are more accessible than lower quality ones. Moreover, Web accessibility conformance claims by Web...

  8. A Framework for Transparently Accessing Deep Web Sources

    Science.gov (United States)

    Dragut, Eduard Constantin

    2010-01-01

    An increasing number of Web sites expose their content via query interfaces, many of them offering the same type of products/services (e.g., flight tickets, car rental/purchasing). They constitute the so-called "Deep Web". Accessing the content on the Deep Web has been a long-standing challenge for the database community. For a user interested in…

  10. Current state of web accessibility of Malaysian ministries websites

    Science.gov (United States)

    Ahmi, Aidi; Mohamad, Rosli

    2016-08-01

    Despite the fact that Malaysian public institutions have progressed considerably in website and portal usage, web accessibility has been reported as one of the issues that deserves special attention. Consistent with the government's moves to promote effective use of the web and portals, it is essential for government institutions to ensure compliance with established standards and guidelines on web accessibility. This paper evaluates the accessibility of 25 Malaysian ministry websites using automated tools, i.e. WAVE and AChecker. Both tools are designed to objectively evaluate web accessibility in conformance with the Web Content Accessibility Guidelines 2.0 (WCAG 2.0) and the United States Rehabilitation Act of 1973 (Section 508). The findings reported somewhat low compliance with web accessibility standards amongst the ministries. Further enhancement is needed for input elements, such as labels and checkboxes to be associated with text, as well as for image-related elements. These findings could be used as a mechanism for webmasters to locate and rectify errors pertaining to web accessibility, and to ensure equal access to web information and services for all citizens.
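    As an illustration of the kind of rule such tools automate (a simplified sketch of our own, not how WAVE or AChecker are implemented), the label-association check highlighted above can be expressed as: every visible form input should be referenced by a `<label for="...">` element.

```python
from html.parser import HTMLParser

# Simplified label-association check: collect ids of visible inputs and
# ids referenced by <label for="...">; any input id not referenced is a
# potential accessibility error of the kind the abstract describes.
class LabelChecker(HTMLParser):
    def __init__(self):
        super().__init__()
        self.input_ids = []
        self.labelled_ids = set()

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "input" and attrs.get("type") != "hidden":
            self.input_ids.append(attrs.get("id"))
        elif tag == "label" and "for" in attrs:
            self.labelled_ids.add(attrs["for"])

    def unlabelled(self):
        return [i for i in self.input_ids if i not in self.labelled_ids]

page = ('<label for="name">Name</label><input id="name" type="text">'
        '<input id="agree" type="checkbox">')
checker = LabelChecker()
checker.feed(page)
print(checker.unlabelled())  # ["agree"]: checkbox with no associated label
```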

  11. AN AUTOMATIC AND METHODOLOGICAL APPROACH FOR ACCESSIBLE WEB APPLICATIONS

    Directory of Open Access Journals (Sweden)

    Lourdes Moreno

    2007-06-01

    Semantic Web approaches try to achieve interoperability and communication among technologies and organizations. Nevertheless, it is sometimes forgotten that the Web must be useful for every user, so tools and techniques that make the Semantic Web accessible are necessary. Accessibility and usability are two concepts widely used, and often joined, in web application development, but their meanings are different: usability concerns ease of use, while accessibility refers to the possibility of access. For the former, there are many approaches well proven in real cases. The accessibility field, however, requires deeper research to make access feasible for disabled people, and also for novice non-disabled people, given the cost of automating and maintaining accessible applications. In this paper, we propose an architecture to achieve accessibility in web environments that complies with the WAI accessibility standards and the Universal Design paradigm. This architecture aims to control accessibility throughout the web application development life-cycle, following a methodology that starts from a semantic conceptual model and relies on description languages and controlled vocabularies.

  12. Accessing the SEED Genome Databases via Web Services API: Tools for Programmers

    Directory of Open Access Journals (Sweden)

    Vonstein Veronika

    2010-06-01

    Background: The SEED integrates many publicly available genome sequences into a single resource. The database contains accurate and up-to-date annotations based on the subsystems concept, which leverages clustering between genomes and other clues to accurately and efficiently annotate microbial genomes. The backend is used as the foundation for many genome annotation tools, such as the Rapid Annotation using Subsystems Technology (RAST) server for whole-genome annotation, the metagenomics RAST server for random community genome annotations, and the annotation clearinghouse for exchanging annotations from different resources. In addition to a web user interface, the SEED also provides a Web-services-based API for programmatic access to the data in the SEED, allowing the development of third-party tools and mash-ups. Results: The currently exposed Web services encompass over forty different methods for accessing data related to microbial genome annotations. The Web services provide comprehensive access to the database back end, allowing any programmer access to the most consistent and accurate genome annotations available. The Web services are deployed using a platform-independent, service-oriented approach that allows the user to choose the most suitable programming platform for their application. Example code demonstrates that the Web services can be used to access the SEED using common bioinformatics programming languages such as Perl, Python, and Java. Conclusions: We present a novel approach to accessing the SEED database. Using Web services, a robust API for access to genomics data is provided without requiring large-volume downloads all at once. The API ensures timely access to the most current datasets available, including new genomes as soon as they come online.
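    The SEED-specific method names and endpoint are not reproduced here; the following hypothetical Python sketch only illustrates the general pattern of a language-independent, JSON-over-HTTP service call of the kind such an API exposes (the URL, method name, and identifier below are placeholders, not taken from the SEED documentation):

```python
import json

# Placeholder endpoint, NOT the real SEED service URL.
SERVICE_URL = "https://example.org/seed/api"

def build_request(method, **params):
    """Assemble a generic JSON request envelope for a web-service call."""
    return json.dumps({"method": method, "params": params})

# Hypothetical method and genome identifier, for illustration only.
payload = build_request("get_genome_annotations", genome_id="83333.1")
print(payload)
# A client would then POST this payload to SERVICE_URL (e.g. with
# urllib.request) and parse the JSON response, which is what makes the
# API usable from Perl, Python, Java, or any other HTTP-capable language.
```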

  13. Web Accessibility Knowledge and Skills for Non-Web Library Staff

    Science.gov (United States)

    McHale, Nina

    2012-01-01

    Why do librarians and library staff other than Web librarians and developers need to know about accessibility? Web services staff do not--or should not--operate in isolation from the rest of the library staff. It is important to consider what areas of online accessibility are applicable to other areas of library work and to colleagues' regular job…

  14. Web accessibility practical advice for the library and information professional

    CERN Document Server

    Craven, Jenny

    2008-01-01

    Offers an introduction to web accessibility and usability for information professionals, offering advice on the concerns relevant to library and information organizations. This book can be used as a resource for developing staff training and awareness activities. It will also be of value to website managers involved in web design and development.

  15. Accessible Web Design - The Power of the Personal Message.

    Science.gov (United States)

    Whitney, Gill

    2015-01-01

    The aim of this paper is to describe ongoing research being carried out to enable people with visual impairments to communicate directly with designers and specifiers of hobby and community web sites to maximise the accessibility of their sites. The research started with an investigation of the accessibility of community and hobby web sites as perceived by a group of visually impaired end users. It is continuing with an investigation into how best to communicate with web designers who are not experts in web accessibility. The research is making use of communication theory to investigate how terminology describing personal experience can be used in the most effective and powerful way. By working with the users through a Delphi study, the research has ensured that the views of the visually impaired end users are successfully transmitted.

  16. A Web-Based Remote Access Laboratory Using SCADA

    Science.gov (United States)

    Aydogmus, Z.; Aydogmus, O.

    2009-01-01

    The Internet provides an opportunity for students to access laboratories from outside the campus. This paper presents a Web-based remote access real-time laboratory using SCADA (supervisory control and data acquisition) control. The control of an induction motor is used as an example to demonstrate the effectiveness of this remote laboratory,…

  17. Binary Coded Web Access Pattern Tree in Education Domain

    Science.gov (United States)

    Gomathi, C.; Moorthi, M.; Duraiswamy, K.

    2008-01-01

    Web Access Pattern (WAP), which is the sequence of accesses pursued by users frequently, is a kind of interesting and useful knowledge in practice. Sequential Pattern mining is the process of applying data mining techniques to a sequential database for the purposes of discovering the correlation relationships that exist among an ordered list of…

  18. SIDECACHE: Information access, management and dissemination framework for web services

    Directory of Open Access Journals (Sweden)

    Robbins Kay A

    2011-06-01

    Full Text Available Abstract Background Many bioinformatics algorithms and data sets are deployed using web services so that the results can be explored via the Internet and easily integrated into other tools and services. These services often include data from other sites that is accessed either dynamically or through file downloads. Developers of these services face several problems because of the dynamic nature of the information from the upstream services. Many publicly available repositories of bioinformatics data frequently update their information. When such an update occurs, the developers of the downstream service may also need to update. For file downloads, this process is typically performed manually, followed by a web service restart. Requests for information obtained by dynamic access of upstream sources are sometimes subject to rate restrictions. Findings SideCache provides a framework for deploying web services that integrate information extracted from other databases and from web sources that are periodically updated. This situation occurs frequently in biotechnology, where new information is being continuously generated and the latest information is important. SideCache provides several types of services, including proxy access and rate control, local caching, and automatic web service updating. Conclusions We have used the SideCache framework to automate the deployment and updating of a number of bioinformatics web services and tools that extract information from remote primary sources such as NCBI, NCIBI, and Ensembl. The SideCache framework has also been used to share research results through the use of a SideCache-derived web service.
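
The local-caching idea described above can be sketched as a wrapper around an upstream fetch function, so that repeated requests within a time-to-live window never hit the (possibly rate-limited) upstream source. All names here are illustrative, not SideCache's actual interface:

```python
# Sketch of a caching proxy: cache upstream responses with a TTL so
# repeated requests are served locally. Hypothetical names throughout.
import time

class CachingProxy:
    def __init__(self, fetch, ttl_seconds=3600.0):
        self.fetch = fetch          # upstream fetch function
        self.ttl = ttl_seconds
        self.cache = {}             # url -> (timestamp, payload)

    def get(self, url):
        entry = self.cache.get(url)
        if entry is not None and time.time() - entry[0] < self.ttl:
            return entry[1]         # fresh cache hit
        payload = self.fetch(url)   # cache miss or stale: go upstream
        self.cache[url] = (time.time(), payload)
        return payload

calls = []
def fake_upstream(url):
    calls.append(url)               # count upstream accesses
    return f"data for {url}"

proxy = CachingProxy(fake_upstream)
first = proxy.get("http://example.org/genes")
second = proxy.get("http://example.org/genes")   # served from cache
print(len(calls))
```

A real deployment would add the rate-control and automatic-update services the abstract lists on top of this basic cache.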

  19. Fit3D: a web application for highly accurate screening of spatial residue patterns in protein structure data.

    Science.gov (United States)

    Kaiser, Florian; Eisold, Alexander; Bittrich, Sebastian; Labudde, Dirk

    2016-03-01

    The clarification of the linkage between protein structure and function is still a demanding process and can be supported by comparison of spatial residue patterns, so-called structural motifs. However, versatile up-to-date resources to search for local structure similarities are rare. We present Fit3D, an easily accessible web application for highly accurate screening of structural motifs in 3D protein data. The web application is accessible at https://biosciences.hs-mittweida.de/fit3d, and program sources of the command-line version were released under the terms of GNU GPLv3. Platform-independent binaries and documentation for offline usage are available at https://bitbucket.org/fkaiser/fit3d. Contact: florian.kaiser@hs-mittweida.de. Supplementary data are available at Bioinformatics online. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  20. EnviroAtlas - Accessibility Characteristics in the Conterminous U.S. Web Service

    Data.gov (United States)

    U.S. Environmental Protection Agency — This EnviroAtlas web service includes maps that illustrate factors affecting transit accessibility, and indicators of accessibility. Accessibility measures how...

  1. UK Government Web Continuity: Persisting Access through Aligning Infrastructures

    Directory of Open Access Journals (Sweden)

    Amanda Spencer

    2009-06-01

    Full Text Available Government's use of the Web in the UK is prolific, and a wide range of services are now available through this channel. The government set out to address the problem that links from Hansard (the transcripts of Parliamentary debates) were not maintained over time, and that there was therefore a need for long-term storage and stewardship of information, including maintaining access. Further investigation revealed that linking was key, not only in maintaining access to information, but also to the discovery of information. This resulted in a project that affects the entire government Web estate, with a solution leveraging the basic building blocks of the Internet (DNS) and the Web (HTTP and URIs) in a pragmatic way, to ensure that an infrastructure is in place to provide access to important information both now and in the future.

  2. Learning Task Knowledge from Dialog and Web Access

    Directory of Open Access Journals (Sweden)

    Vittorio Perera

    2015-06-01

    Full Text Available We present KnoWDiaL, an approach for Learning and using task-relevant Knowledge from human-robot Dialog and access to the Web. KnoWDiaL assumes that there is an autonomous agent that performs tasks, as requested by humans through speech. The agent needs to “understand” the request, i.e., to fully ground the task, before it can proceed to plan for and execute it. KnoWDiaL contributes such understanding by using and updating a Knowledge Base, by dialoguing with the user, and by accessing the web. We believe that KnoWDiaL, as we present it, can be applied to general autonomous agents. However, we focus on our work with our autonomous collaborative robot, CoBot, which executes service tasks in a building, moving around and transporting objects between locations. Hence, the knowledge acquired and accessed consists of groundings of language to robot actions, building locations, persons, and objects. KnoWDiaL handles the interpretation of voice commands, is robust to speech recognition errors, and is able to learn commands involving referring expressions in an open domain, i.e., without requiring a lexicon. We present in detail the multiple components of KnoWDiaL, namely a frame-semantic parser, a probabilistic grounding model, a web-based predicate evaluator, a dialog manager, and the weighted predicate-based Knowledge Base. We illustrate the knowledge access and updates from dialog and Web access through detailed and complete examples. We further evaluate the correctness of the predicate instances learned into the Knowledge Base, and show the increase in dialog efficiency as a function of the number of interactions. We have extensively and successfully used KnoWDiaL in CoBot dialoguing and accessing the Web, and extract a few corresponding example sequences from captured videos.

  3. Access Control of Web- and Java-Based Applications

    Science.gov (United States)

    Tso, Kam S.; Pajevski, Michael J.

    2013-01-01

    Cybersecurity has become a great concern as threats of service interruption, unauthorized access, stealing and altering of information, and spreading of viruses have become more prevalent and serious. Application-layer access control is a critical component in the overall security solution, which also includes encryption, firewalls, virtual private networks, antivirus, and intrusion detection. An access control solution, based on an open-source access manager augmented with custom software components, was developed to provide protection to both Web-based and Java-based client and server applications. The DISA Security Service (DISA-SS) provides common access control capabilities for AMMOS software applications through a set of application programming interfaces (APIs) and network-accessible security services for authentication, single sign-on, authorization checking, and authorization policy management. The OpenAM access management technology designed for Web applications can be extended to meet the needs of Java thick clients and standalone servers that are commonly used in the JPL AMMOS environment. The DISA-SS reusable components have greatly reduced the effort for each AMMOS subsystem to develop its own access control strategy. The novelty of this work is that it leverages an open-source access management product designed for Web-based applications to provide access control for Java thick clients and Java standalone servers. Thick clients and standalone servers are still commonly used in businesses and government, especially for applications that require rich graphical user interfaces and high-performance visualization that cannot be met by thin clients running on Web browsers.
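
The authorization-checking capability described above can be illustrated with a toy role-based policy check. The roles, resources, and actions below are invented for the example; they are not DISA-SS's actual policy model:

```python
# Toy role-based authorization check: a policy maps roles to permitted
# (resource, action) pairs, and each request is checked against the
# caller's roles. All names are illustrative.
POLICY = {
    "operator": {("telemetry", "read")},
    "admin": {("telemetry", "read"), ("telemetry", "write")},
}

def is_authorized(roles, resource, action):
    # Grant access if any of the caller's roles permits the pair.
    return any((resource, action) in POLICY.get(role, set())
               for role in roles)

print(is_authorized(["operator"], "telemetry", "read"))   # True
print(is_authorized(["operator"], "telemetry", "write"))  # False
```

A production access manager would evaluate such checks server-side, after authentication, so that thick clients cannot bypass them.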

  4. mGene.web: a web service for accurate computational gene finding.

    Science.gov (United States)

    Schweikert, Gabriele; Behr, Jonas; Zien, Alexander; Zeller, Georg; Ong, Cheng Soon; Sonnenburg, Sören; Rätsch, Gunnar

    2009-07-01

    We describe mGene.web, a web service for the genome-wide prediction of protein-coding genes from eukaryotic DNA sequences. It offers pre-trained models for the recognition of gene structures, including untranslated regions, in an increasing number of organisms. With mGene.web, users additionally have the possibility to train the system with their own data for other organisms at the push of a button, a functionality that will greatly accelerate the annotation of newly sequenced genomes. The system is built in a highly modular way, such that individual components of the framework, like the promoter prediction tool or the splice site predictor, can be used autonomously. The underlying gene finding system mGene is based on discriminative machine learning techniques, and its high accuracy has been demonstrated in an international competition on nematode genomes. mGene.web is available at http://www.mgene.org/web; it is free of charge and can be used for eukaryotic genomes of small to moderate size (several hundred Mbp).

  6. 3PAC: Enforcing Access Policies for Web Services

    NARCIS (Netherlands)

    van Bemmel, J.; Wegdam, M.; Lagerberg, K.

    2005-01-01

    Web Services fail to deliver on the promise of ubiquitous deployment and seamless interoperability due to the lack of a uniform, standards-based approach to all aspects of security. In particular, the enforcement of access policies in a Service Oriented Architecture is not addressed adequately. We p

  7. Data Vault: providing simple web access to NRAO data archives

    Science.gov (United States)

    DuPlain, Ron; Benson, John; Sessoms, Eric

    2008-08-01

    In late 2007, the National Radio Astronomy Observatory (NRAO) launched Data Vault, a feature-rich web application for simplified access to NRAO data archives. This application allows users to submit a Google-like free-text search, and browse, download, and view further information on matching telescope data. Data Vault uses the model-view-controller design pattern with web.py, a minimalist open-source web framework built with the Python Programming Language. Data Vault implements an Ajax client built on the Google Web Toolkit (GWT), which creates structured JavaScript applications. This application supports plug-ins for linking data to additional web tools and services, including Google Sky. NRAO sought the inspiration of Google's remarkably elegant user interface and notable performance to create a modern search tool for the NRAO science data archive, taking advantage of the rapid development frameworks of web.py and GWT to create a web application on a short timeline, while providing modular, easily maintainable code. Data Vault provides users with a NRAO-focused data archive while linking to and providing more information wherever possible. Free-text search capabilities are possible (and even simple) with an innovative query parser. NRAO develops all software under an open-source license; Data Vault is available to developers and users alike.
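
The Google-like free-text search mentioned above can be sketched as a tiny query parser that separates plain search terms from field:value filters. The field names are illustrative, not the NRAO archive's actual schema or Data Vault's real parser:

```python
# Sketch of a free-text query parser: split a query into plain terms
# and field:value filters. Field names are illustrative.
def parse_query(query):
    terms, filters = [], {}
    for token in query.split():
        if ":" in token:
            field, _, value = token.partition(":")
            filters[field.lower()] = value
        else:
            terms.append(token.lower())
    return terms, filters

terms, filters = parse_query("pulsar telescope:VLA band:X")
print(terms)    # ['pulsar']
print(filters)  # {'telescope': 'VLA', 'band': 'X'}
```

The terms would feed a full-text index while the filters become exact constraints on archive metadata.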

  8. Unlocking the Gates to the Kingdom: Designing Web Pages for Accessibility.

    Science.gov (United States)

    Mills, Steven C.

    As the Web is perceived to be an effective tool for disseminating research findings and providing asynchronous instruction, the issue of the accessibility of Web page information will become more and more relevant. The World Wide Web Consortium (W3C) has recognized a disparity in accessibility to the Web between persons with and…

  9. Geodetic Data Via Web Services: Standardizing Access, Expanding Accessibility, and Promoting Discovery

    Science.gov (United States)

    Zietlow, D. W.; Molnar, C.; Meertens, C. M.; Phillips, D. A.; Bartel, B. A.; Ertz, D. J.

    2016-12-01

    UNAVCO, a university-governed consortium that enables geodetic research and education, is developing and implementing new web services to standardize and enhance access to geodetic data for an ever-growing community of users. This simple and easy-to-use tool gives both experienced and novice users access to all data and products archived at UNAVCO through a uniform interface, regardless of data type and structure. UNAVCO data product types include GPS station position time series, velocity estimates and metadata, as well as meteorological time series and borehole geophysical data and metadata, including strain, seismic, and tilt time series. Users access data through a request URL or through the Swagger user interface (UI). The Swagger UI allows users to easily learn about the web services and gives them the ability to test the services in their web browser. It also documents the web services' URLs and query parameters, so that both developers and users can see the valid query parameters for each service. Output from the web services is currently in a standard comma-separated (CSV) format that can then be used in other processing and/or visualization programs. The web services are being standardized so that all CSV output will follow the GeoCSV specification. Other formats, such as GeoJSON and time-series XML, will be made available, since not all data are well represented by the CSV format. The UNAVCO web services are written in Python using the Flask microframework, which allows quick development and the ability to easily implement new services. Future services are being planned to allow users to access metadata from any station with data available directly or indirectly from the UNAVCO website.
In collaboration with the UNAVCO Student Internship Program (USIP), we developed a short video demonstrating how to use the web services tool to assist new users and the broader
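
Consuming the CSV output such services return can be sketched with the standard library alone. The column names below are invented for illustration; they are not the exact GeoCSV header a UNAVCO service emits:

```python
# Sketch of parsing a CSV position time series returned by a web
# service into typed columns. Column names are illustrative.
import csv
import io

RESPONSE = """date,east_mm,north_mm,up_mm
2016-01-01,1.2,-0.4,3.1
2016-01-02,1.3,-0.5,2.9
"""

rows = list(csv.DictReader(io.StringIO(RESPONSE)))
east = [float(r["east_mm"]) for r in rows]
print(len(rows), east)
```

With a real service, `RESPONSE` would be the body of an HTTP request to the documented request URL.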

  10. Dynamic Tracking of Web Activity Accessed by Users Using Cookies

    Directory of Open Access Journals (Sweden)

    K.V.S. Jaharsh Samayan

    2015-07-01

    Full Text Available The motive of this study is to suggest a protocol that can be implemented to observe the activities of any node within a network whose contribution to the organization needs to be measured. Many associates working in an organization misuse the resources allocated to them and waste their working time on unproductive work that is of no use to the organization. To tackle this problem, the dynamic approach of monitoring the web pages accessed by users via cookies gives a very efficient way of tracking all the activities of an individual, storing them in cookies generated from recent web activity, and displaying statistical information on how each user's web activity for the time period has been spent, for every IP address in the network. In an ever-challenging, dynamic world, monitoring the productivity of the associates in an organization plays a role of utmost importance.
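
The per-IP activity summary such a protocol aims at can be sketched as a simple aggregation step. The record fields (ip, site, seconds) are assumed for illustration; the paper's cookie format is not specified here:

```python
# Sketch: aggregate (ip, site, seconds) activity records, as might be
# reconstructed from cookies, into time spent per site for each
# address. Field names and values are illustrative.
from collections import defaultdict

events = [
    ("10.0.0.5", "intranet.example.org", 120),
    ("10.0.0.5", "news.example.com", 300),
    ("10.0.0.5", "intranet.example.org", 60),
]

usage = defaultdict(lambda: defaultdict(int))
for ip, site, seconds in events:
    usage[ip][site] += seconds

print(dict(usage["10.0.0.5"]))
```

Summing per site per IP is what turns raw cookie records into the statistical display the abstract describes.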

  11. Secure Communication and Access Control for Mobile Web Service Provisioning

    CERN Document Server

    Srirama, Satish Narayana

    2010-01-01

    It is now feasible to host basic web services on a smart phone due to the advances in wireless devices and mobile communication technologies. While such applications are quite welcome, the ability to provide secure and reliable communication in the vulnerable and volatile mobile ad-hoc topologies is fast becoming necessary. The paper mainly addresses the details and issues in providing secure communication and access control for the mobile web service provisioning domain. While basic message-level security can be provided, providing proper access control mechanisms for the Mobile Host still poses a great challenge. This paper discusses the details of secure communication and proposes a distributed semantics-based authorization mechanism.

  12. Architecture for large-scale automatic web accessibility evaluation based on the UWEM methodology

    DEFF Research Database (Denmark)

    Ulltveit-Moe, Nils; Olsen, Morten Goodwin; Pillai, Anand B.

    2008-01-01

    The European Internet Accessibility Observatory (EIAO) project has developed an Observatory for performing large-scale automatic web accessibility evaluations of public-sector web sites in Europe. The architecture includes a distributed web crawler that crawls web sites for links until either a given budget of web pages has been reached or the web site has been crawled exhaustively. Subsequently, a uniform random subset of the crawled web pages is sampled and sent for accessibility evaluation, and the evaluation results are stored in a Resource Description Framework (RDF) database that is later loaded…

  13. Access Control of Web and Java Based Applications

    Science.gov (United States)

    Tso, Kam S.; Pajevski, Michael J.; Johnson, Bryan

    2011-01-01

    Cyber security has gained national and international attention as a result of near-continuous headlines from financial institutions, retail stores, government offices and universities reporting compromised systems and stolen data. Concerns continue to rise as threats of service interruption and spreading of viruses become ever more prevalent and serious. Controlling access to application-layer resources is a critical component in a layered security solution that includes encryption, firewalls, virtual private networks, antivirus, and intrusion detection. In this paper we discuss the development of an application-level access control solution, based on an open-source access manager augmented with custom software components, to provide protection to both Web-based and Java-based client and server applications.

  14. Accessing multimedia content from mobile applications using semantic web technologies

    Science.gov (United States)

    Kreutel, Jörn; Gerlach, Andrea; Klekamp, Stefanie; Schulz, Kristin

    2014-02-01

    We describe the ideas and results of an applied research project that aims at leveraging the expressive power of semantic web technologies as a server-side backend for mobile applications that provide access to location and multimedia data and allow for a rich user experience in mobile scenarios, ranging from city and museum guides to multimedia enhancements of any kind of narrative content, including e-book applications. In particular, we will outline a reusable software architecture for both server-side functionality and native mobile platforms that is aimed at significantly decreasing the effort required for developing particular applications of that kind.

  15. DERIVING USER ACCESS PATTERNS AND MINING WEB COMMUNITY WITH WEB-LOG DATA FOR PREDICTING USER SESSIONS WITH PAJEK

    Directory of Open Access Journals (Sweden)

    S. Balaji

    2012-10-01

    Full Text Available Web logs are a young and dynamic media type. Due to the intrinsic relationships among Web objects and the lack of a uniform schema for web documents, Web community mining has become a significant area of Web data management and analysis. Research on Web communities spans a number of domains. In this paper an ontological model is presented, together with some recent studies on this topic, which cover finding relevant Web pages based on linkage information and discovering user access patterns through analyzing Web log files. A simulation has been created with data crawled from an academic website. The simulation is done in a JAVA and ORACLE environment. Results show that prediction of user sessions could give us plenty of vital information for business intelligence. Search engine optimization could also use these potential results, which are discussed in the paper in detail.
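
Deriving user sessions from Web-log data, as discussed above, is commonly done with a timeout heuristic: requests from the same client belong to one session unless the gap between them exceeds a cutoff. The 30-minute cutoff and the log entries below are conventions assumed for illustration, not figures from the paper:

```python
# Sketch: sessionize web-log entries per client IP with a timeout.
# A gap larger than TIMEOUT between consecutive requests starts a new
# session. Timestamps are Unix seconds; values are illustrative.
from collections import defaultdict

TIMEOUT = 30 * 60  # common 30-minute heuristic

log = [  # (ip, unix_timestamp)
    ("10.0.0.1", 0),
    ("10.0.0.1", 600),
    ("10.0.0.1", 600 + TIMEOUT + 1),  # gap too large: new session
    ("10.0.0.2", 100),
]

by_ip = defaultdict(list)
for ip, ts in log:
    by_ip[ip].append(ts)

sessions = {}
for ip, stamps in by_ip.items():
    stamps.sort()
    # one session, plus one more per over-timeout gap
    sessions[ip] = 1 + sum(
        1 for a, b in zip(stamps, stamps[1:]) if b - a > TIMEOUT
    )

print(sessions)
```

The resulting sessions are the units that tools like Pajek can then model as a network for community analysis.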

  16. Accessing NASA Technology with the World Wide Web

    Science.gov (United States)

    Nelson, Michael L.; Bianco, David J.

    1995-01-01

    NASA Langley Research Center (LaRC) began using the World Wide Web (WWW) in the summer of 1993, becoming the first NASA installation to provide a Center-wide home page. This coincided with a reorganization of LaRC to provide a more concentrated focus on technology transfer to both aerospace and non-aerospace industry. Use of WWW and NCSA Mosaic not only provides automated information dissemination, but also allows for the implementation, evolution and integration of many technology transfer and technology awareness applications. This paper describes several of these innovative applications, including the on-line presentation of the entire Technology OPportunities Showcase (TOPS), an industrial partnering showcase that exists on the Web long after the actual 3-day event ended. The NASA Technical Report Server (NTRS) provides uniform access to many logically similar, yet physically distributed NASA report servers. WWW is also the foundation of the Langley Software Server (LSS), an experimental software distribution system which will distribute LaRC-developed software. In addition to the more formal technology distribution projects, WWW has been successful in connecting people with technologies and people with other people.

  17. Open access web technology for mathematics learning in higher education

    Directory of Open Access Journals (Sweden)

    Mari Carmen González-Videgaray

    2016-05-01

    Full Text Available Problems with mathematics learning, “math anxiety” or “statistics anxiety” among university students can be avoided by using teaching strategies and technological tools. Besides personal suffering, low achievement in mathematics reduces terminal efficiency and decreases enrollment in careers related to science, technology and mathematics. This paper has two main goals: 1) to offer an organized inventory of open-access web resources for math learning in higher education, and 2) to explore to what extent these resources are currently known and used by students and teachers. The first goal was accomplished by running a search in Google and then classifying the resources. For the second, we conducted a survey among a sample of students (n=487) and teachers (n=60) from mathematics and engineering within the largest public university in Mexico. We categorized 15 high-quality web resources. Most of them are interactive simulations and computer algebra systems.

  18. Archiving Web Sites for Preservation and Access: MODS, METS and MINERVA

    Science.gov (United States)

    Guenther, Rebecca; Myrick, Leslie

    2006-01-01

    Born-digital material such as archived Web sites provides unique challenges in ensuring access and preservation. This article examines some of the technical challenges involved in harvesting and managing Web archives as well as metadata strategies to provide descriptive, technical, and preservation related information about archived Web sites,…

  19. Content accessibility of Web documents: Overview of concepts and needed standards

    DEFF Research Database (Denmark)

    Alapetite, A.

    2006-01-01

    to broaden the scope to any type of user and any type of use case. The document provides an introduction to some required concepts and technical standards for designing accessible Web sites. A brief review of the legal requirements in a few countries for Web accessibility complements the recommendations...

  20. Designing A General Deep Web Access Approach Based On A Newly Introduced Factor; Harvestability Factor (HF)

    NARCIS (Netherlands)

    Khelghati, Mohammadreza; van Keulen, Maurice; Hiemstra, Djoerd

    2014-01-01

    The growing need of accessing more and more information draws attentions to huge amount of data hidden behind web forms defined as deep web. To make this data accessible, harvesters have a crucial role. Targeting different domains and websites enhances the need to have a general-purpose harvester

  3. Herramientas para la evaluación de la accesibilidad Web/Tools for the evaluation of Web accessibility

    National Research Council Canada - National Science Library

    Esmeralda Serrano Mascaraque

    2009-01-01

    ...: Web accessibility, evaluation tools, accessibility software, browsers. ABSTRACT There are different systems to check whether a website is accessible or not; among them we can point out the automated tools that help evaluate, through verification of de facto (the average) standards, the global accessibility that the contents of a website present,…

  4. An Empirical Evaluation of Web System Access for Smartphone Clients

    Directory of Open Access Journals (Sweden)

    Scott Fowler

    2012-11-01

    Full Text Available As smartphone clients are restricted in computational power and bandwidth, it is important to minimise the overhead of transmitted messages. This paper identifies and studies methods that reduce the amount of data being transferred via wireless links between a web service client and a web service. Measurements were performed in a real environment based on a web service prototype providing public transport information for the city of Hamburg in Germany, using actual wireless links with a mobile smartphone device. REST-based web services using the data exchange formats JSON, XML and Fast Infoset were evaluated against the existing SOAP-based web service.
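
The payload-size comparison at the heart of such a study can be sketched by serializing one record both ways and comparing byte counts. The departure-record fields below are invented for illustration, not taken from the Hamburg prototype:

```python
# Sketch: serialize the same record as JSON and as XML and compare
# payload sizes. Record fields are illustrative.
import json
import xml.etree.ElementTree as ET

record = {"line": "U3", "stop": "Hauptbahnhof", "minutes": 4}

as_json = json.dumps(record).encode("utf-8")

root = ET.Element("departure")
for key, value in record.items():
    ET.SubElement(root, key).text = str(value)
as_xml = ET.tostring(root)

print(len(as_json), len(as_xml))
```

For small records like this, JSON's lack of closing tags already makes it the more compact text format; binary encodings such as Fast Infoset shrink the XML side further.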

  5. 网络无障碍的发展:政策、理论和方法%Development of Web Accessibility: Policies, Theories and Apporoaches

    Institute of Scientific and Technical Information of China (English)

    Xiaoming Zeng

    2006-01-01

    The article is intended to introduce readers to the concept and background of Web accessibility in the United States. I will first discuss different definitions of Web accessibility. The beneficiaries of an accessible Web, and the sufferers from an inaccessible one, will be discussed by type of disability. The importance of Web accessibility will be introduced from the ethical, demographic, legal, and financial perspectives. Web accessibility-related standards and legislation will be discussed in great detail. Previous research on evaluating Web accessibility will be presented. Lastly, a system for automated Web accessibility transformation will be introduced as an alternative approach for enhancing Web accessibility.

  6. GAPforAPE: an augmented browsing system to improve Web 2.0 accessibility

    Science.gov (United States)

    Mirri, Silvia; Salomoni, Paola; Prandi, Catia; Muratori, Ludovico Antonio

    2012-09-01

    The Web 2.0 evolution has spread more interactive technologies which have affected accessibility for users who navigate the Web by using assistive technologies. In particular, the partial download of new data, the continuous refreshing, and the massive use of scripting can represent significant barriers, especially for people with visual impairments, who enjoy the Web by means of screen readers. On the other hand, such technologies can be an opportunity, because they provide a new means of transcoding Web content, making the Web more accessible. In this article we present GAPforAPE, an augmented browsing system (based on Web browser extensions) which offers a user-profiling system and transcodes Web content according to constraints declared by users: the same Web page is provided to any user, but GAPforAPE computes adequate customizations by exploiting the scripting technologies which usually affect Web page accessibility. GAPforAPE imitates screen readers' behavior: it applies a specific set of transcoding scripts devoted to a given Web site, when available, and a default set of transcoding operations otherwise. The continuous and quick evolution of the Web has shown that a crowdsourcing system is a desirable solution, letting the transcoding scripts evolve in the same way.
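    GAPforAPE's site-specific-or-default dispatch of transcoding scripts can be sketched as follows (the rule names and hosts are hypothetical; the real system runs as a browser extension in JavaScript):

    ```python
    # Sketch of the dispatch idea described above: apply a site-specific
    # set of transcoding operations when one exists for the page's host,
    # otherwise fall back to a default set. Rule names are illustrative.

    DEFAULT_RULES = ["announce_partial_updates", "expose_hidden_landmarks"]
    SITE_RULES = {"news.example.org": ["flatten_carousel", "label_live_region"]}

    def select_transcoding_rules(host):
        """Return the transcoding rule set to apply for a given host."""
        return SITE_RULES.get(host, DEFAULT_RULES)

    print(select_transcoding_rules("news.example.org"))
    print(select_transcoding_rules("unknown.example.com"))
    ```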

  7. Assessment of the web accessibility of e-shops of selected Polish e-commerce companies

    Directory of Open Access Journals (Sweden)

    Anna Michalczyk

    2015-11-01

    Full Text Available The article attempts to answer the question: how do the e-shop web services operated by selected Polish e-commerce companies present in terms of web accessibility? It discusses the essence of web accessibility in the context of the WCAG 2.0 standard and the business benefits for companies arising from owning an accessible website that fulfils the recommendations of WCAG 2.0, and it assesses the level of web accessibility of the e-shops of selected Polish e-commerce companies.

  8. Hand Society and Matching Program Web Sites Provide Poor Access to Information Regarding Hand Surgery Fellowship.

    Science.gov (United States)

    Hinds, Richard M; Klifto, Christopher S; Naik, Amish A; Sapienza, Anthony; Capo, John T

    2016-08-01

    The Internet is a common resource for applicants of hand surgery fellowships; however, the quality and accessibility of fellowship online information is unknown. The objectives of this study were to evaluate the accessibility of hand surgery fellowship Web sites and to assess the quality of information provided via program Web sites. Hand fellowship Web site accessibility was evaluated by reviewing the American Society for Surgery of the Hand (ASSH) fellowship directory on November 16, 2014 and the National Resident Matching Program (NRMP) fellowship directory on February 12, 2015, and by performing an independent Google search on November 25, 2014. Accessible Web sites were then assessed for the quality of the presented information. A total of 81 programs were identified, with the ASSH directory featuring direct links to 32% of program Web sites and the NRMP directory directly linking to 0%. A Google search yielded direct links to 86% of program Web sites. The quality of presented information varied greatly among the 72 accessible Web sites. Program description (100%), fellowship application requirements (97%), program contact email address (85%), and research requirements (75%) were the most commonly presented components of fellowship information. Hand fellowship program Web sites can be accessed from the ASSH directory and, to a lesser extent, the NRMP directory. However, a Google search is the most reliable method to access online fellowship information. Of the accessible programs, all featured a program description, though the quality of the remaining information was variable. Hand surgery fellowship applicants may face some difficulties when attempting to gather program information online. Future efforts should focus on improving the accessibility and content quality of hand surgery fellowship program Web sites.

  9. How Accurately Can the Google Web Speech API Recognize and Transcribe Japanese L2 English Learners' Oral Production?

    Science.gov (United States)

    Ashwell, Tim; Elam, Jesse R.

    2017-01-01

    The ultimate aim of our research project was to use the Google Web Speech API to automate scoring of elicited imitation (EI) tests. However, in order to achieve this goal, we had to take a number of preparatory steps. We needed to assess how accurate this speech recognition tool is in recognizing native speakers' production of the test items; we…

  10. Design and Implementation of Open-Access Web-Based Education ...

    African Journals Online (AJOL)

    Design and Implementation of Open-Access Web-Based Education Useful ... using an open source platform which will be more flexible, and cost effective due to free licensing. ... It was observed to have service requirements of online activities.

  11. Systems and Services for Real-Time Web Access to NPP Data Project

    Data.gov (United States)

    National Aeronautics and Space Administration — Global Science & Technology, Inc. (GST) proposes to investigate information processing and delivery technologies to provide near-real-time Web-based access to...

  12. Facilitating access to the web of data a guide for librarians

    CERN Document Server

    Stuart, David

    2011-01-01

    Offers an introduction to the web of data and the semantic web, exploring technologies including APIs, microformats and linked data. This title includes topical commentary and practical examples that explore how information professionals can harness the power of this phenomenon to inform strategy and become facilitators of access to data.

  13. Breaking and Fixing Origin-Based Access Control in Hybrid Web/Mobile Application Frameworks

    OpenAIRE

    Georgiev, Martin; Jana, Suman; Shmatikov, Vitaly

    2014-01-01

    Hybrid mobile applications (apps) combine the features of Web applications and “native” mobile apps. Like Web applications, they are implemented in portable, platform-independent languages such as HTML and JavaScript. Like native apps, they have direct access to local device resources—file system, location, camera, contacts, etc.

  14. WebGeocalc and Cosmographia: Modern Tools to Access SPICE Archives

    Science.gov (United States)

    Semenov, B. V.; Acton, C. H.; Bachman, N. J.; Ferguson, E. W.; Rose, M. E.; Wright, E. D.

    2017-06-01

    The WebGeocalc (WGC) web client-server tool and the SPICE-enhanced Cosmographia visualization program are two new ways for accessing space mission geometry data provided in the PDS SPICE kernel archives and by mission operational SPICE kernel sets.

  15. Improving access to space weather data via workflows and web services

    Science.gov (United States)

    Sundaravel, Anu Swapna

    The Space Physics Interactive Data Resource (SPIDR) is a web-based interactive tool developed by NOAA's National Geophysical Data Center to provide access to historical space physics datasets. These datasets are widely used by physicists for space weather modeling and predictions. Built on a distributed network of databases and application servers, SPIDR offers services in two ways: via a web page interface and via a web service interface. SPIDR exposes several SOAP-based web services that client applications implement to connect to a number of data sources for data download and processing. To date, using these web services has been difficult, adding unnecessary complexity to client applications and inconvenience to the scientists who want to use these datasets. This study focuses on improving SPIDR's web interface to better support data access, integration and display. This is accomplished in two ways: (1) examining the needs of scientists to better understand what web services they require to better access and process these datasets and (2) developing a client application to support SPIDR's SOAP-based services using the Kepler scientific workflow system. To this end, we identified, designed and developed several web services for filtering the existing datasets and created several Kepler workflows to automate routine tasks associated with these datasets. These workflows are a part of the custom NGDC build of the Kepler tool. Scientists are already familiar with Kepler due to its extensive use in this domain. As a result, this approach provides them with tools that are less daunting than raw web services and ultimately more useful and customizable. We evaluated our work by interviewing various scientists who make use of SPIDR and having them use the developed Kepler workflows while recording their feedback and suggestions. Our work has improved SPIDR such that new web services are now available and scientists have access to a desktop

  16. Web access and dissemination of Andalusian coastal erosion rates: viewers and standard/filtered map services.

    Science.gov (United States)

    Álvarez Francoso, Jose; Prieto Campos, Antonio; Ojeda Zujar, Jose; Guisado-Pintado, Emilia; Pérez Alcántara, Juan Pedro

    2017-04-01

    enhances access to erosion rates but also enables multi-scale data representation. The 2nd proposal, intended for technicians and specialists in the field, includes a geoviewer with an innovative profile (including visualization of time ranges, application of different uncertainty levels to the data, etc.) to fulfil the needs of these users. For its development, a set of Javascript libraries combined with OpenLayers (or Leaflet) is implemented to guarantee all the functionalities of the basic geoviewer. Further to this, the viewer has been improved by i) the generation of services on request through the application of a filter in ECQL (Extended Common Query Language), using the vendor parameter CQL_FILTER from GeoServer; these dynamic filters allow the final user to predefine the visualised variable, its spatial and temporal domain, a range of specific values and other attributes, thus multiplying the generation of real-time cartography; and ii) the use of the layer's WFS service, through which the Javascript application exploits the alphanumeric data to generate related statistics in real time (e.g. mean rates, length of eroded coast, etc.) and interactive graphs (via the HighCharts.js library) which help in the interpretation of beach erosion rates (representing trends and bar diagrams, among others). As a result, two approaches for communicating scientific results to different audiences, based on the Web with a complete dataset of geo-information, services and functionalities, are implemented. The combination of standardised environmental data with tailor-made exploitation techniques (interactive maps and real-time statistics) ensures correct access to, and interpretation of, the information.
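    A request using GeoServer's CQL_FILTER vendor parameter, as described above, might be assembled like this (the endpoint, layer name, and attribute names are invented for illustration; only the CQL_FILTER mechanism itself is from GeoServer):

    ```python
    from urllib.parse import urlencode

    # Hypothetical GeoServer endpoint and layer/attribute names.
    base_url = "https://example.org/geoserver/wms"
    params = {
        "service": "WMS",
        "version": "1.1.1",
        "request": "GetMap",
        "layers": "coast:erosion_rates",
        "bbox": "-7.6,36.0,-1.6,38.8",
        "width": "800",
        "height": "400",
        "srs": "EPSG:4326",
        "format": "image/png",
        # GeoServer's vendor parameter: an ECQL filter evaluated
        # server-side, so the client receives only the features matching
        # the requested period and value range.
        "CQL_FILTER": "period = '1956-2011' AND rate BETWEEN -5.0 AND 0.0",
    }
    request_url = base_url + "?" + urlencode(params)
    print(request_url)
    ```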

  17. Breaking and Fixing Origin-Based Access Control in Hybrid Web/Mobile Application Frameworks.

    Science.gov (United States)

    Georgiev, Martin; Jana, Suman; Shmatikov, Vitaly

    2014-02-01

    Hybrid mobile applications (apps) combine the features of Web applications and "native" mobile apps. Like Web applications, they are implemented in portable, platform-independent languages such as HTML and JavaScript. Like native apps, they have direct access to local device resources: file system, location, camera, contacts, etc. Hybrid apps are typically developed using hybrid application frameworks such as PhoneGap. The purpose of the framework is twofold. First, it provides an embedded Web browser (for example, WebView on Android) that executes the app's Web code. Second, it supplies "bridges" that allow Web code to escape the browser and access local resources on the device. We analyze the software stack created by hybrid frameworks and demonstrate that it does not properly compose the access-control policies governing Web code and local code, respectively. Web code is governed by the same origin policy, whereas local code is governed by the access-control policy of the operating system (for example, user-granted permissions in Android). The bridges added by the framework to the browser have the same local access rights as the entire application, but are not correctly protected by the same origin policy. This opens the door to fracking attacks, which allow foreign-origin Web content included into a hybrid app (e.g., ads confined in iframes) to drill through the layers and directly access device resources. Fracking vulnerabilities are generic: they affect all hybrid frameworks, all embedded Web browsers, all bridge mechanisms, and all platforms on which these frameworks are deployed. We study the prevalence of fracking vulnerabilities in free Android apps based on the PhoneGap framework. Each vulnerability exposes sensitive local resources (the ability to read and write the contacts list, local files, etc.) to dozens of potentially malicious Web domains. We also analyze the defenses deployed by hybrid frameworks to prevent resource access by foreign-origin Web content.

  18. Breaking and Fixing Origin-Based Access Control in Hybrid Web/Mobile Application Frameworks

    Science.gov (United States)

    Georgiev, Martin; Jana, Suman; Shmatikov, Vitaly

    2014-01-01

    Hybrid mobile applications (apps) combine the features of Web applications and “native” mobile apps. Like Web applications, they are implemented in portable, platform-independent languages such as HTML and JavaScript. Like native apps, they have direct access to local device resources—file system, location, camera, contacts, etc. Hybrid apps are typically developed using hybrid application frameworks such as PhoneGap. The purpose of the framework is twofold. First, it provides an embedded Web browser (for example, WebView on Android) that executes the app's Web code. Second, it supplies “bridges” that allow Web code to escape the browser and access local resources on the device. We analyze the software stack created by hybrid frameworks and demonstrate that it does not properly compose the access-control policies governing Web code and local code, respectively. Web code is governed by the same origin policy, whereas local code is governed by the access-control policy of the operating system (for example, user-granted permissions in Android). The bridges added by the framework to the browser have the same local access rights as the entire application, but are not correctly protected by the same origin policy. This opens the door to fracking attacks, which allow foreign-origin Web content included into a hybrid app (e.g., ads confined in iframes) to drill through the layers and directly access device resources. Fracking vulnerabilities are generic: they affect all hybrid frameworks, all embedded Web browsers, all bridge mechanisms, and all platforms on which these frameworks are deployed. We study the prevalence of fracking vulnerabilities in free Android apps based on the PhoneGap framework. Each vulnerability exposes sensitive local resources—the ability to read and write contacts list, local files, etc.—to dozens of potentially malicious Web domains. We also analyze the defenses deployed by hybrid frameworks to prevent resource access by foreign

  19. Developing Guidelines for Evaluating the Adaptation of Accessible Web-Based Learning Materials

    Science.gov (United States)

    Radovan, Marko; Perdih, Mojca

    2016-01-01

    E-learning is a rapidly developing form of education. One of the key characteristics of e-learning is flexibility, which enables easier access to knowledge for everyone. Information and communications technology (ICT), which is e-learning's main component, enables alternative means of accessing the web-based learning materials that comprise the…

  20. A Distributed Intranet/Web Solution to Integrated Management of Access Networks

    Institute of Scientific and Technical Information of China (English)

    2002-01-01

    In this article, we describe the present situation of access network management, enumerate a few problems in the development of network management systems, then put forward a distributed Intranet/Web solution named iMAN for the integrated management of access networks, present its architecture and protocol stack, and describe its application in practice.

  1. Examining How Web Designers' Activity Systems Address Accessibility: Activity Theory as a Guide

    Science.gov (United States)

    Russell, Kyle

    2014-01-01

    While accessibility of information technologies is often acknowledged as important, it is frequently not well addressed in practice. The purpose of this study was to examine the work of web developers and content managers to explore why and how accessibility is or is not addressed as an objective as websites are planned, built and maintained.…

  2. Page sample size in web accessibility testing: how many pages is enough?

    NARCIS (Netherlands)

    Velleman, Eric; Geest, van der Thea

    2013-01-01

    Various countries and organizations use a different sampling approach and sample size of web pages in accessibility conformance tests. We are conducting a systematic analysis to determine how many pages is enough for testing whether a website is compliant with standard accessibility guidelines. This

  3. Enhancing Independent Internet Access for Individuals with Mental Retardation through Use of a Specialized Web Browser: A Pilot Study.

    Science.gov (United States)

    Davies, Daniel K.; Stock, Steven E.; Wehmeyer, Michael L.

    2001-01-01

    In this study, a prototype web browser, called Web Trek, that utilizes multimedia to provide access for individuals with cognitive disabilities was developed and pilot-tested with 12 adults with mental retardation. The Web Trek browser provided greater independence in accessing the Internet compared to Internet Explorer. (Contains references.)…

  4. Accesibilidad vs usabilidad web: evaluación y correlación Accessibility vs. WEB Usability- Evaluation and Correlation

    Directory of Open Access Journals (Sweden)

    Esmeralda Serrano Mascaraque

    2009-08-01

    Full Text Available Los organismos oficiales deben facilitar recursos informativos y prestar servicios a través de diversos medios en aras de conseguir el derecho a la información que le asiste a todo ciudadano. En el momento actual la Web es uno de los recursos más extendidos y por ello es fundamental evaluar el grado de accesibilidad que tienen los contenidos volcados en la Red. Para lograr esto se aplicarán las herramientas y software necesarios y se evaluará el nivel de accesibilidad de un grupo de sitios web representativos. Además se intentará determinar si existe algún tipo de relación entre accesibilidad y usabilidad, ya que ambos son aspectos deseables (o incluso exigibles legalmente, en el caso de la accesibilidad) para tener un correcto diseño de web. Government agencies should provide information resources and services through various means in order to achieve the right to information that assists all citizens. Being the Web one of the most widespread resources, it becomes essential to evaluate the degree of its content accessibility. We will evaluate this level on a representative group of websites, and we will try to determine whether there is any relationship between accessibility and usability, since both aspects are desirable (or even legally required, in the case of accessibility) in a proper Web design.

  5. Security Guidelines for the Development of Accessible Web Applications through the implementation of intelligent systems

    Directory of Open Access Journals (Sweden)

    Luis Joyanes Aguilar

    2009-12-01

    Full Text Available The significant increase in threats, attacks and vulnerabilities affecting the Web in recent years has led to the development and implementation of tools and methods to ensure the privacy, confidentiality and data integrity of users and businesses. Even when these tools are implemented, however, information does not always flow in a secure manner. Many of these security tools and methods cannot be used by people with disabilities, or by the assistive technologies that enable them to access the Web efficiently. Among the security tools that are not accessible are the virtual keyboard, the CAPTCHA and other technologies that help to some extent to ensure safety on the Internet and are used to combat the malicious code and attacks that have increased in recent times on the Web. Intelligent systems can detect, recover and receive information on the characteristics and properties of the tools, hardware devices and software with which the user is accessing a web application; through the analysis and interpretation performed by such systems, the characteristics these tools need in order to be accessible can be inferred and adjusted automatically, regardless of disability or navigation context. This paper defines a set of guidelines and specific features that security tools and methods should have in order to ensure Web accessibility through the implementation of intelligent systems.

  6. KBWS: an EMBOSS associated package for accessing bioinformatics web services.

    Science.gov (United States)

    Oshita, Kazuki; Arakawa, Kazuharu; Tomita, Masaru

    2011-04-29

    Web-based bioinformatics services are rapidly proliferating, owing to their interoperability and ease of use. The next challenge is the integration of these services in the form of workflows, and several projects are already underway, standardizing the syntax, semantics, and user interfaces. In order to combine the advantages of web services with locally installed tools, here we describe a collection of proxy client tools for 42 major bioinformatics web services in the form of European Molecular Biology Open Software Suite (EMBOSS) UNIX command-line tools. EMBOSS provides sophisticated means of discoverability and interoperability for hundreds of tools, and our package, named the Keio Bioinformatics Web Service (KBWS), adds functionalities of local and multiple alignment of sequences, phylogenetic analyses, and prediction of cellular localization of proteins and RNA secondary structures. This software, implemented in C, is available under the GPL from http://www.g-language.org/kbws/ and the GitHub repository http://github.com/cory-ko/KBWS. Users can utilize the SOAP services, implemented in Perl, directly via the WSDL files at http://soap.g-language.org/kbws.wsdl (RPC Encoded) and http://soap.g-language.org/kbws_dl.wsdl (Document/literal).
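    A SOAP request of the kind these WSDL-described services consume has the following general shape (the operation name "runClustalw" and its "sequence" parameter are hypothetical placeholders, not taken from the actual KBWS WSDL):

    ```python
    import xml.etree.ElementTree as ET

    SOAP_NS = "http://schemas.xmlsoap.org/soap/envelope/"
    ET.register_namespace("soapenv", SOAP_NS)

    # Build a generic SOAP 1.1 request envelope: an Envelope containing a
    # Body, which in turn holds the operation element and its parameters.
    envelope = ET.Element(f"{{{SOAP_NS}}}Envelope")
    body = ET.SubElement(envelope, f"{{{SOAP_NS}}}Body")
    operation = ET.SubElement(body, "runClustalw")   # hypothetical operation
    ET.SubElement(operation, "sequence").text = ">seq1 MKTAYIAKQR"

    request_xml = ET.tostring(envelope, encoding="unicode")
    print(request_xml)
    ```

    A real client would normally generate this envelope from the WSDL with a SOAP library rather than by hand; the sketch only shows what travels over the wire.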

  7. KBWS: an EMBOSS associated package for accessing bioinformatics web services

    Directory of Open Access Journals (Sweden)

    Tomita Masaru

    2011-04-01

    Full Text Available Abstract The availability of bioinformatics web-based services is rapidly proliferating, for their interoperability and ease of use. The next challenge is in the integration of these services in the form of workflows, and several projects are already underway, standardizing the syntax, semantics, and user interfaces. In order to deploy the advantages of web services with locally installed tools, here we describe a collection of proxy client tools for 42 major bioinformatics web services in the form of European Molecular Biology Open Software Suite (EMBOSS) UNIX command-line tools. EMBOSS provides sophisticated means for discoverability and interoperability for hundreds of tools, and our package, named the Keio Bioinformatics Web Service (KBWS), adds functionalities of local and multiple alignment of sequences, phylogenetic analyses, and prediction of cellular localization of proteins and RNA secondary structures. This software implemented in C is available under GPL from http://www.g-language.org/kbws/ and GitHub repository http://github.com/cory-ko/KBWS. Users can utilize the SOAP services implemented in Perl directly via WSDL file at http://soap.g-language.org/kbws.wsdl (RPC Encoded) and http://soap.g-language.org/kbws_dl.wsdl (Document/literal).

  8. Kids Not Getting the Web Access They Want

    Science.gov (United States)

    Minkel, Walter

    2004-01-01

    A new study shows that students aged 6 to 17 who have access to the Internet at home are growing more and more dissatisfied with the access to the Net available to them at school. Grunwald Associates, a California market research firm, released the results of their survey, "Children, Families and the Internet," on December 4. Seventy-six percent…

  9. JavaScript Access to DICOM Network and Objects in Web Browser.

    Science.gov (United States)

    Drnasin, Ivan; Grgić, Mislav; Gogić, Goran

    2017-01-30

    The Digital Imaging and Communications in Medicine (DICOM) 3.0 standard provides the baseline for picture archiving and communication systems (PACS). The development of the Internet and various communication media initiated demand for non-DICOM access to PACS systems. The ever-increasing use of web browsers, laptops and handheld devices, as opposed to desktop applications and static organizational computers, led to the development of different web technologies, which DICOM standard officials subsequently accepted as tools of alternative access. This paper provides an overview of the current state of development of web access technology to DICOM repositories. It presents a different approach: using HTML5 features of web browsers, through the JavaScript language and the WebSocket protocol, to enable real-time communication with DICOM repositories. A JavaScript DICOM network library, a DICOM-to-WebSocket proxy and a proof-of-concept web application that qualifies as a DICOM 3.0 device were developed.

  10. STUDY ON ACCESS CONTROL FOR WEB SERVICES BASED ON ABAC%基于ABAC的Web Services访问控制研究

    Institute of Scientific and Technical Information of China (English)

    夏春涛; 杨艳丽; 曹利峰

    2012-01-01

    为解决Web Services访问控制问题,分析了传统访问控制模型在Web Services应用中的不足,给出了面向Web Services 的基于属性的访问控制模型ABAC(Attribute Based Access Control)的定义,设计了ABAC访问控制架构,并利用可扩展的访问控制标记语言XACML( eXtensible Access Control Markup Language)实现了细粒度的Web Services访问控制系统.系统的应用有效保护了Web Services资源.%To address access control for Web Services, the shortcomings of traditional access control models in Web Services applications are analysed, a definition of the Web Services-oriented attribute-based access control (ABAC) model is presented, and the ABAC architecture is designed. Furthermore, a fine-grained access control system for Web Services is implemented with the eXtensible Access Control Markup Language (XACML); the application of the system has effectively protected Web Services resources.
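    The ABAC decision logic this record describes can be sketched minimally as follows (the attribute names and the sample policy are illustrative; a real deployment would express the policy in XACML and evaluate it in a policy decision point):

    ```python
    # Minimal attribute-based access control (ABAC) sketch: a rule permits
    # a request only when the subject, resource and environment attributes
    # all satisfy the policy's conditions.

    def abac_decide(policy, subject, resource, environment):
        """Return 'Permit' iff every attribute condition in the policy holds."""
        parts = {"subject": subject, "resource": resource, "environment": environment}
        ok = all(
            parts[section].get(attr) == value
            for section, conditions in policy.items()
            for attr, value in conditions.items()
        )
        return "Permit" if ok else "Deny"

    # Illustrative policy: analysts may read the report service from the intranet.
    policy = {
        "subject": {"role": "analyst"},
        "resource": {"service": "ReportService", "operation": "read"},
        "environment": {"network": "intranet"},
    }

    print(abac_decide(policy,
                      {"role": "analyst"},
                      {"service": "ReportService", "operation": "read"},
                      {"network": "intranet"}))
    ```

    Because the decision depends only on attributes, not on a per-user role assignment table, new users and services are covered as soon as their attributes are known; this is the fine-grained property the abstract refers to.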

  11. Enhancing Access to Scientific Models through Standard Web Services Project

    Data.gov (United States)

    National Aeronautics and Space Administration — We propose to investigate the feasibility and value of the "Software as a Service" paradigm in facilitating access to Earth Science numerical models. We...

  12. The WebACS - An Accessible Graphical Editor.

    Science.gov (United States)

    Parker, Stefan; Nussbaum, Gerhard; Pölzer, Stephan

    2017-01-01

    This paper presents a solution to the accessibility problems encountered when implementing a graphical editor, a major challenge being the comprehension of the relationships between graphical components, which needs to be guaranteed for blind and vision-impaired users. In the concrete case, the HTML5 canvas and JavaScript were used. Accessibility was achieved by implementing a list view of the elements, which also enhances the usability of the editor.
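    The list-view idea can be sketched as a parallel textual model of the canvas that a screen reader can traverse (the class and method names below are illustrative, not from the WebACS implementation, which is written in JavaScript):

    ```python
    # Sketch: alongside the graphical canvas, keep a textual model of
    # elements and their connections that assistive technology can read
    # as a linear list.

    class DiagramModel:
        def __init__(self):
            self.elements = {}      # element id -> human-readable label
            self.connections = []   # (source id, destination id) pairs

        def add_element(self, elem_id, label):
            self.elements[elem_id] = label

        def connect(self, src, dst):
            self.connections.append((src, dst))

        def list_view(self):
            """Linear, screen-reader-friendly description of the diagram."""
            lines = [f"Element: {label}" for label in self.elements.values()]
            lines += [
                f"{self.elements[s]} is connected to {self.elements[d]}"
                for s, d in self.connections
            ]
            return lines

    model = DiagramModel()
    model.add_element("n1", "Sensor")
    model.add_element("n2", "Switch")
    model.connect("n1", "n2")
    print(model.list_view())
    ```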

  13. Accessibility of dynamic web applications with emphasis on visually impaired users

    Directory of Open Access Journals (Sweden)

    Kingsley Okoye

    2014-09-01

    Full Text Available As the internet fast migrates from static to dynamic web pages, users with visual impairment find it confusing and challenging to access content on the web. There is evidence that dynamic web applications pose accessibility challenges for visually impaired users. This study shows that a difference can be made through a basic understanding of the technical requirements of users with visual impairment, and it addresses a number of issues pertinent to the accessibility needs of such users. We propose that only by designing a framework that is structurally flexible, removing unnecessary extras and thereby making every bit useful (fit-for-purpose), will visually impaired users be given an increased capacity to intuitively access e-contents. This theory is implemented in a dynamic website for the visually impaired designed in this study. Designers should be aware of how screen-reading software works, so that they can make reasonable adjustments or provide alternative content that still corresponds to the objective content, increasing the possibility of offering a faultless service to such users. The result of our research reveals that materials can be added to a content repository, or re-used from existing ones, by identifying the content types and then transforming them into a flexible and accessible form that fits the requirements of the visually impaired through our method (no-frill + agile methodology), rather than computing in advance or designing according to a given specification.

  14. An Efficient Hybrid Algorithm for Mining Web Frequent Access Patterns

    Institute of Scientific and Technical Information of China (English)

    ZHAN Li-qiang; LIU Da-xin

    2004-01-01

    We propose an efficient hybrid algorithm WDHP in this paper for mining frequent access patterns. WDHP adopts the techniques of DHP to optimize its performance, using a hash table to filter the candidate set and trimming the database. Whenever the database is trimmed to a size less than a specified threshold, the algorithm puts the database into main memory by constructing a tree, and finds frequent patterns on the tree. The experiment shows that WDHP outperforms the algorithm DHP and the main-memory-based algorithm WAP in execution efficiency.
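    The DHP-style pruning that WDHP adopts (hashing candidate pairs into a small bucket table while counting single items) can be sketched as follows; this is an illustrative reconstruction, not the paper's implementation, and the deterministic bucket hash is a toy stand-in for an unspecified hash function:

    ```python
    from itertools import combinations

    def _bucket(pair, n_buckets):
        # Toy deterministic hash over the pair's characters.
        return sum(map(ord, pair[0] + pair[1])) % n_buckets

    def dhp_candidate_pairs(sessions, min_support, n_buckets=8):
        """Frequent-pair candidates surviving DHP's hash-bucket pruning."""
        item_counts = {}
        buckets = [0] * n_buckets
        for session in sessions:
            for page in session:
                item_counts[page] = item_counts.get(page, 0) + 1
            # While counting single pages, also hash every page pair of
            # the session into a small bucket table.
            for pair in combinations(sorted(set(session)), 2):
                buckets[_bucket(pair, n_buckets)] += 1
        frequent = sorted(p for p, c in item_counts.items() if c >= min_support)
        # A pair survives only if both items are frequent AND its bucket
        # count could reach the support threshold (the DHP pruning step).
        return {
            pair for pair in combinations(frequent, 2)
            if buckets[_bucket(pair, n_buckets)] >= min_support
        }

    sessions = [["a", "b", "c"], ["a", "b"], ["a", "c"], ["b", "d"]]
    print(dhp_candidate_pairs(sessions, min_support=2))
    ```

    Bucket counts can overestimate (two pairs may share a bucket), so the filter never discards a truly frequent pair; it only shrinks the candidate set that a later exact counting pass must verify.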

  15. Cloud-based Web Services for Near-Real-Time Web access to NPP Satellite Imagery and other Data

    Science.gov (United States)

    Evans, J. D.; Valente, E. G.

    2010-12-01

    We are building a scalable, cloud computing-based infrastructure for Web access to near-real-time data products synthesized from the U.S. National Polar-Orbiting Environmental Satellite System (NPOESS) Preparatory Project (NPP) and other geospatial and meteorological data. Given recent and ongoing changes in the NPP and NPOESS programs (now the Joint Polar Satellite System), the need for timely delivery of NPP data is urgent. We propose an alternative to a traditional, centralized ground segment, using distributed Direct Broadcast facilities linked to industry-standard Web services by a streamlined processing chain running in a scalable cloud computing environment. Our processing chain, currently implemented on Amazon.com's Elastic Compute Cloud (EC2), retrieves raw data from NASA's Moderate Resolution Imaging Spectroradiometer (MODIS) and synthesizes data products such as sea-surface temperature, vegetation indices, etc. The cloud computing approach lets us grow and shrink computing resources to meet large and rapid fluctuations (twice daily) in both end-user demand and data availability from polar-orbiting sensors. Early prototypes have delivered various data products to end-users with latencies between 6 and 32 minutes. We have begun to replicate machine instances in the cloud so as to reduce latency and maintain near-real-time data access regardless of increased data input rates or user demand, all at quite moderate monthly costs. Our service-based approach (in which users invoke software processes on a Web-accessible server) facilitates access into datasets of arbitrary size and resolution, and allows users to request and receive tailored and composite (e.g., false-color multiband) products on demand. To facilitate broad impact and adoption of our technology, we have emphasized open, industry-standard software interfaces and open source software. Through our work, we envision the widespread establishment of similar, derived, or interoperable systems for

  16. Pilot Evaluation of a Web-Based Intervention Targeting Sexual Health Service Access

    Science.gov (United States)

    Brown, K. E.; Newby, K.; Caley, M.; Danahay, A.; Kehal, I.

    2016-01-01

    Sexual health service access is fundamental to good sexual health, yet interventions designed to address this have rarely been implemented or evaluated. In this article, pilot evaluation findings for a targeted public health behavior change intervention, delivered via a website and web-app, aiming to increase uptake of sexual health services among…

  17. Towards automated processing of the right of access in inter-organizational Web Service compositions

    DEFF Research Database (Denmark)

    Herkenhöner, Ralph; De Meer, Hermann; Jensen, Meiko

    2010-01-01

    with trade secret protection. In this paper, we present an automated architecture to enable exercising the right of access in the domain of inter-organizational business processes based on Web Services technology. Deriving its requirements from the legal, economical, and technical obligations, we show...

  19. Mirage of us: A reflection on the role of the Web in widening access ...

    African Journals Online (AJOL)

    Mirage of us: A reflection on the role of the Web in widening access to references on Southern African arts, culture and heritage. ... a wiki encyclopaedia facilitates networked social collaboration uniquely suited to the co-operative principles of ...

  20. Programmatic access to data and information at the IRIS DMC via web services

    Science.gov (United States)

    Weertman, B. R.; Trabant, C.; Karstens, R.; Suleiman, Y. Y.; Ahern, T. K.; Casey, R.; Benson, R. B.

    2011-12-01

    The IRIS Data Management Center (DMC) has developed a suite of web services that provide access to the DMC's time series holdings, their related metadata and earthquake catalogs. In addition, services are available to perform simple, on-demand time series processing at the DMC before data are shipped to the user. The primary goal is to provide programmatic access to data and processing services in a manner usable by and useful to the research community. The web services are relatively simple to understand and use, and will form the foundation on which future DMC access tools will be built. Based on standard Web technologies, they can be accessed programmatically with a wide range of programming languages (e.g. Perl, Python, Java), command line utilities such as wget and curl, or with any web browser. We anticipate these services being used for everything from simple command-line access in shell scripts and higher-level programming languages to integration within complex data-processing software. In addition to improving access to our data by the seismological community, the web services will also make our data more accessible to other disciplines. The web services available from the DMC include ws-bulkdataselect for the retrieval of large volumes of miniSEED data, ws-timeseries for the retrieval of individual segments of time series data in a variety of formats (miniSEED, SAC, ASCII, audio WAVE, and PNG plots) with optional signal processing, ws-station for station metadata in StationXML format, ws-resp for the retrieval of instrument response in RESP format, ws-sacpz for the retrieval of sensor response in the SAC poles and zeros convention, and ws-event for the retrieval of earthquake catalogs. To make the services even easier to use, the DMC is developing a library that allows Java programmers to seamlessly retrieve and integrate DMC information into their own programs. The library will handle all aspects of dealing with the services and will parse the returned
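    The services described above are meant to be driven programmatically. As a minimal sketch (in Python, one of the languages the abstract mentions), a ws-timeseries request URL can be assembled and then fetched with wget, curl, or any HTTP library; the exact host, path, and parameter names below are illustrative assumptions, not the DMC's documented interface:

```python
from urllib.parse import urlencode

# Assumed base URL for the DMC's ws-timeseries service (illustrative only).
BASE = "http://service.iris.edu/ws-timeseries/query"

def timeseries_url(net, sta, loc, cha, start, end, output="sacbl"):
    """Build a ws-timeseries request URL from SEED channel identifiers."""
    params = {"net": net, "sta": sta, "loc": loc, "cha": cha,
              "start": start, "end": end, "output": output}
    return BASE + "?" + urlencode(params)

url = timeseries_url("IU", "ANMO", "00", "BHZ",
                     "2011-01-01T00:00:00", "2011-01-01T01:00:00")
# The resulting URL can be passed to wget/curl or urllib.request.urlopen().
```

    A command-line user would hand the same URL to wget or curl, exactly as the abstract suggests.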

  1. PlantLoc: an accurate web server for predicting plant protein subcellular localization by substantiality motif

    OpenAIRE

    Tang, Shengnan; Li, Tonghua; Cong, Peisheng; Xiong, Wenwei; Wang, Zhiheng; Sun, Jiangming

    2013-01-01

    Knowledge of subcellular localizations (SCLs) of plant proteins relates to their functions and aids in understanding the regulation of biological processes at the cellular level. We present PlantLoc, a highly accurate and fast web server for predicting the multi-label SCLs of plant proteins. The PlantLoc server has two innovative characteristics: building localization motif libraries by a recursive method without alignment or Gene Ontology information; and establishing a simple architecture for rapi...

  2. nuMap: A Web Platform for Accurate Prediction of Nucleosome Positioning

    Institute of Scientific and Technical Information of China (English)

    Bader A Alharbi; Thamir H Alshammari; Nathan L Felton; Victor B Zhurkin; Feng Cui

    2014-01-01

    Nucleosome positioning is critical for gene expression and of major biological interest. The high cost of experimentally mapping nucleosomal arrangement signifies the need for computational approaches to predict nucleosome positions at high resolution. Here, we present a web-based application to fulfill this need by implementing two models, YR and W/S schemes, for the translational and rotational positioning of nucleosomes, respectively. Our methods are based on sequence-dependent anisotropic bending that dictates how DNA is wrapped around a histone octamer. This application allows users to specify a number of options such as schemes and parameters for threading calculation and provides multiple layout formats. nuMap is implemented in Java/Perl/MySQL and is freely available for public use at http://numap.rit.edu. The user manual, implementation notes, description of the methodology and examples are available at the site.

  3. New data access with HTTP/WebDAV in the ATLAS experiment

    CERN Document Server

    Elmsheuser, Johannes; The ATLAS collaboration; Serfon, Cedric; Garonne, Vincent; Blunier, Sylvain; Lavorini, Vincenzo; Nilsson, Paul

    2015-01-01

    With the exponential growth of LHC (Large Hadron Collider) data in the years 2010-2012, distributed computing has become the established way to analyze collider data. The ATLAS experiment Grid infrastructure includes more than 130 sites worldwide, ranging from large national computing centres to smaller university clusters. So far the storage technologies and access protocols to the clusters that host this tremendous amount of data vary from site to site. HTTP/WebDAV offers the possibility to use a unified industry standard to access the storage. We present the deployment and testing of HTTP/WebDAV for local and remote data access in the ATLAS experiment for the new data management system Rucio and the PanDA workload management system. Deployment and large scale tests have been performed using the Grid testing system HammerCloud and the ROOT HTTP plugin Davix.
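    Because HTTP/WebDAV is an industry standard, no experiment-specific client is required to talk to the storage. As a minimal sketch, a WebDAV PROPFIND request (the verb used to list a resource's properties) can be built with the Python standard library; the storage URL is a placeholder, and the request is only constructed here, not sent:

```python
import urllib.request

# Placeholder URL for a file on a Grid storage endpoint (illustrative only).
url = "https://grid-storage.example.org/atlas/datafile.root"

req = urllib.request.Request(
    url,
    method="PROPFIND",       # WebDAV verb for querying resource properties
    headers={"Depth": "0"},  # ask about the resource itself, not its children
)
print(req.get_method())  # prints "PROPFIND"
```

    The same request could equally be issued with curl or, as in the abstract, through the ROOT HTTP plugin Davix.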

  5. Developing Access Control Model of Web OLAP over Trusted and Collaborative Data Warehouses

    Science.gov (United States)

    Fugkeaw, Somchart; Mitrpanont, Jarernsri L.; Manpanpanich, Piyawit; Juntapremjitt, Sekpon

    This paper proposes the design and development of Role- based Access Control (RBAC) model for the Single Sign-On (SSO) Web-OLAP query spanning over multiple data warehouses (DWs). The model is based on PKI Authentication and Privilege Management Infrastructure (PMI); it presents a binding model of RBAC authorization based on dimension privilege specified in attribute certificate (AC) and user identification. Particularly, the way of attribute mapping between DW user authentication and privilege of dimensional access is illustrated. In our approach, we apply the multi-agent system to automate flexible and effective management of user authentication, role delegation as well as system accountability. Finally, the paper culminates in the prototype system A-COLD (Access Control of web-OLAP over multiple DWs) that incorporates the OLAP features and authentication and authorization enforcement in the multi-user and multi-data warehouse environment.

  6. A University Web Portal redesign applying accessibility patterns. Breaking Down Barriers for Visually Impaired Users

    Directory of Open Access Journals (Sweden)

    Hernán Sosa

    2015-08-01

    The WWW and ICTs have become the preferred media for interaction between society and its citizens, and public and private organizations can today deploy their activities through the Web. In particular, university education is a domain where the benefits of these technological resources can strongly contribute to caring for students. However, most university Web portals are inaccessible to their user community (students, professors, and non-teaching staff, among others), since these portals do not take into account the needs of people with different capabilities. In this work, we propose an accessibility-pattern-driven process for the redesign of university Web portals, aiming to break down barriers for visually impaired users. The approach is applied to a real case study: the Web portal of Universidad Nacional de la Patagonia Austral (UNPA). The results come from applying accessibility recommendations and evaluation tools (automatic and manual) from internationally recognized organizations to both versions of the Web portal: the original and the redesigned one.

  7. Search, Read and Write: An Inquiry into Web Accessibility for People with Dyslexia.

    Science.gov (United States)

    Berget, Gerd; Herstad, Jo; Sandnes, Frode Eika

    2016-01-01

    Universal design in the context of digitalisation has become an integrated part of international conventions and national legislation. A goal is to make the Web accessible to people of different genders, ages, backgrounds, cultures and physical, sensory and cognitive abilities. Political demands for universally designed solutions have raised questions about how this is achieved in practice. Developers, designers and legislators have looked towards the Web Content Accessibility Guidelines (WCAG) for answers. WCAG 2.0 has become the de facto standard for universal design on the Web. Some of the guidelines are directed at the general population, while others are targeted at more specific user groups, such as the visually impaired or hearing impaired. Issues related to cognitive impairments such as dyslexia receive less attention, although dyslexia is prevalent in at least 5-10% of the population. Navigation and search are two common ways of using the Web. However, while navigation has received a fair amount of attention, search systems are not explicitly included, although search has become an important part of people's daily routines. This paper discusses WCAG in the context of dyslexia for the Web in general and search user interfaces specifically. Although certain guidelines address topics that affect dyslexia, WCAG does not seem to fully accommodate users with dyslexia.

  8. Pilot evaluation of a web-based intervention targeting sexual health service access.

    Science.gov (United States)

    Brown, K E; Newby, K; Caley, M; Danahay, A; Kehal, I

    2016-04-01

    Sexual health service access is fundamental to good sexual health, yet interventions designed to address this have rarely been implemented or evaluated. In this article, pilot evaluation findings for a targeted public health behavior change intervention, delivered via a website and web-app, aiming to increase uptake of sexual health services among 13-19-year olds are reported. A pre-post questionnaire-based design was used. Matched baseline and follow-up data were identified from 148 respondents aged 13-18 years. Outcome measures were self-reported service access, self-reported intention to access services and beliefs about services and service access identified through needs analysis. Objective service access data provided by local sexual health services were also analyzed. Analysis suggests the intervention had a significant positive effect on psychological barriers to and antecedents of service access among females. Males, who reported greater confidence in service access compared with females, significantly increased service access by time 2 follow-up. Available objective service access data support the assertion that the intervention may have led to increases in service access. There is real promise for this novel digital intervention. Further evaluation is planned as the model is licensed to and rolled out by other local authorities in the United Kingdom.

  9. EntrezAJAX: direct web browser access to the Entrez Programming Utilities

    Directory of Open Access Journals (Sweden)

    Pallen Mark J

    2010-06-01

    Web applications for biology and medicine often need to integrate data from Entrez services provided by the National Center for Biotechnology Information. However, direct access to Entrez from a web browser is not possible due to 'same-origin' security restrictions. The use of Asynchronous JavaScript and XML (AJAX) to create rich, interactive web applications is now commonplace. The ability to access Entrez via AJAX would be advantageous in the creation of integrated biomedical web resources. We describe EntrezAJAX, which provides access to Entrez eUtils and is able to circumvent same-origin browser restrictions. EntrezAJAX is easily implemented by JavaScript developers and provides the same functionality as Entrez eUtils, as well as enhanced functionality to ease development. We provide easy-to-understand developer examples written in JavaScript to illustrate potential uses of this service. For the purposes of speed, reliability and scalability, EntrezAJAX has been deployed on Google App Engine, a freely available cloud service. The EntrezAJAX webpage is located at http://entrezajax.appspot.com/

  10. Web-accessible digital brain atlas of the common marmoset (Callithrix jacchus).

    Science.gov (United States)

    Tokuno, Hironobu; Tanaka, Ikuko; Umitsu, Yoshitomo; Akazawa, Toshikazu; Nakamura, Yasuhisa

    2009-05-01

    Here we describe a web-accessible digital brain atlas of the common marmoset (Callithrix jacchus) at http://marmoset-brain.org:2008. We prepared the histological sections of the marmoset brain using various staining techniques. For virtual microscopy, high-resolution digital images of sections were obtained with Aperio Scanscope. The digital images were then converted to Zoomify files (zoomable multiresolution image files). Thereby, we could provide the multiresolution images of the marmoset brains for fast interactive viewing on the web via the Internet. In addition, we describe an automated method to obtain drawings of Nissl-stained sections.

  11. Design and Implementation of File Access and Control System Based on Dynamic Web

    Institute of Scientific and Technical Information of China (English)

    GAO Fuxiang; YAO Lan; BAO Shengfei; YU Ge

    2006-01-01

    A dynamic Web application, which can help the departments of an enterprise to collaborate with each other conveniently, is proposed. Several popular design solutions are introduced first. Then, a dynamic Web system is chosen for developing the file access and control system. Finally, the paper gives the detailed process of the design and implementation of the system, including some key problems such as solutions for document management and system security. Additionally, the limitations of the system as well as suggestions for further improvement are also explained.

  12. An Alternative Solution to Https for Secure Access to Web Services

    Directory of Open Access Journals (Sweden)

    Cristina Livia Iancu

    2012-06-01

    This paper presents a solution for accessing web services in a lightweight, secure way. Because the message payload is not especially sensitive, only the user name and password used for authentication and authorization in the web services system are protected. The advantage of this solution compared to the commonly used SSL is that it avoids the overhead of the handshake and encryption, providing a faster response to clients. The solution is intended for Windows machines and is developed using the latest stable Microsoft technologies.

  13. A web product data management system based on Simple Object Access Protocol

    Institute of Scientific and Technical Information of China (English)

    2003-01-01

    A new web product data management architecture is presented. The three-tier web architecture and Simple Object Access Protocol (SOAP) are combined to build the web-based product data management (PDM) system, which includes three tiers: the user services tier, the business services tier, and the data services tier. The client service component uses server-side technology, and an Extensible Markup Language (XML) web service that uses SOAP as the communication protocol is chosen as the business service component. To illustrate how to build a web-based PDM system using the proposed architecture, a case PDM system comprising three logical tiers was built. To use the security and central management features of the database, a stored procedure was recommended in the data services tier. The business object was implemented as an XML web service so that clients could use standard internet protocols to communicate with the business object from any platform. In order to support users with all sorts of browsers, the server-side technology Microsoft ASP.NET was used to create the dynamic user interface.

  14. Uniform Access to Astronomical Web Services and its Implementation in SkyMouse

    Science.gov (United States)

    Sun, Hua-Ping; Cui, Chen-Zhou; Zhao, Yong-Heng

    2008-06-01

    With the progress of information technologies and astronomical observation technologies, the Virtual Observatory (VO), an example of cyber-infrastructure based science, was initiated and has spread quickly. More and more online accessible database systems and different kinds of services are available. Although astronomers have been aware of the importance of interoperability, integrated access to online information is still difficult. SkyMouse is a smart system developed by the Chinese Virtual Observatory project to let us access different online resource systems more easily than ever. Unlike some VO efforts on unified access systems, for example NVO DataScope, SkyMouse tries to show a comprehensive overview for a specific object rather than to fetch as much data as possible. Triggered by a simple "mouse over" on an object name of interest, various VO-compliant and traditional databases, i.e. SIMBAD, NED, VizieR, DSS, ADS, are queried by SkyMouse. An overview of the given object, including basic information, image, observation and references, is displayed in the user's default web browser. In this article, the authors introduce the framework of SkyMouse. During the development of SkyMouse, various Web services are called. In order to invoke these Web services, two problems must be solved: interoperability and performance. In the paper, a detailed description of these problems and the authors' solutions is given.

  15. Some Programs Should Not Run on Laptops - Providing Programmatic Access to Applications Via Web Services

    Science.gov (United States)

    Gupta, V.; Gupta, N.; Gupta, S.; Field, E.; Maechling, P.

    2003-12-01

    hosted these Web Services as a part of the SCEC Community Modeling Environment (SCEC/CME) ITR Project (http://www.scec.org/cme). We have implemented Web Services for several of the reasons cited previously. For example, we implemented a FORTRAN-based Earthquake Rupture Forecast (ERF) as a Web Service for use by client computers that don't support a FORTRAN runtime environment. We implemented a Generic Mapping Tool (GMT) Web Service for use by systems that don't have local access to GMT. We implemented a Hazard Map Calculator Web Service to execute Hazard calculations that are too computationally intensive to run on a local system. We implemented a Coordinate Conversion Web Service to enforce a standard and consistent method for converting between UTM and Lat/Lon. Our experience developing these services indicates both strengths and weaknesses in current Web Service technology. Client programs that utilize Web Services typically need network access, a significant disadvantage at times. Programs with simple input and output parameters were the easiest to implement as Web Services, while programs with complex parameter types required a significant amount of additional development. We also noted that Web services are very data-oriented, and adapting object-oriented software into the Web Service model proved problematic. Also, the Web Service approach of converting data types into XML format for network transmission has significant inefficiencies for some data sets.

  16. CombiROC: an interactive web tool for selecting accurate marker combinations of omics data

    Science.gov (United States)

    Mazzara, Saveria; Rossi, Riccardo L.; Grifantini, Renata; Donizetti, Simone; Abrignani, Sergio; Bombaci, Mauro

    2017-01-01

    Diagnostic accuracy can be improved considerably by combining multiple markers, whose performance in identifying diseased subjects is usually assessed via receiver operating characteristic (ROC) curves. The selection of multimarker signatures is a complicated process that requires integration of data signatures with sophisticated statistical methods. We developed a user-friendly tool, called CombiROC, to help researchers accurately determine optimal marker combinations from diverse omics methods. With CombiROC, data from different domains, such as proteomics and transcriptomics, can be analyzed using sensitivity/specificity filters: the number of candidate marker panels arising from combinatorial analysis is easily optimized, bypassing limitations imposed by the nature of different experimental approaches. Leaving the user full control over initial selection stringency, CombiROC computes sensitivity and specificity for all marker combinations, the performance of the best combinations, and ROC curves for automatic comparisons, all visualized in a graphic interface. CombiROC was designed without hard-coded thresholds, allowing a custom fit to each specific dataset: this dramatically reduces the computational burden and lowers the false-negative rates given by fixed thresholds. The application was validated with published data, confirming the marker combinations originally described or even finding new ones. CombiROC is a novel tool for the scientific community, freely available at http://CombiROC.eu. PMID:28358118
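    The exhaustive scoring of marker combinations that CombiROC automates can be sketched in a few lines. The toy data, the "any marker above threshold" panel rule, and the fixed threshold below are illustrative assumptions, not CombiROC's actual algorithm:

```python
from itertools import combinations

# Toy data: per-subject marker measurements and disease status (assumed).
subjects = [
    ({"m1": 0.9, "m2": 0.2, "m3": 0.1}, True),
    ({"m1": 0.1, "m2": 0.8, "m3": 0.2}, True),
    ({"m1": 0.2, "m2": 0.1, "m3": 0.1}, False),
    ({"m1": 0.1, "m2": 0.2, "m3": 0.9}, False),
]
THRESHOLD = 0.5  # a single fixed cutoff, for illustration only

def panel_performance(panel):
    """Sensitivity and specificity of a panel under the 'any marker
    above threshold calls the subject positive' rule."""
    tp = fp = tn = fn = 0
    for markers, diseased in subjects:
        called = any(markers[m] > THRESHOLD for m in panel)
        if diseased:
            tp += 1 if called else 0
            fn += 0 if called else 1
        else:
            fp += 1 if called else 0
            tn += 0 if called else 1
    return tp / (tp + fn), tn / (tn + fp)

# Score every combination of 1..3 markers.
names = ["m1", "m2", "m3"]
results = {panel: panel_performance(panel)
           for r in range(1, len(names) + 1)
           for panel in combinations(names, r)}
```

    In this toy example the two-marker panel (m1, m2) reaches perfect sensitivity and specificity, while m3 alone performs poorly, which is exactly the kind of comparison a combinatorial screen surfaces.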

  17. ChEMBL web services: streamlining access to drug discovery data and utilities.

    Science.gov (United States)

    Davies, Mark; Nowotka, Michał; Papadatos, George; Dedman, Nathan; Gaulton, Anna; Atkinson, Francis; Bellis, Louisa; Overington, John P

    2015-07-01

    ChEMBL is now a well-established resource in the fields of drug discovery and medicinal chemistry research. The ChEMBL database curates and stores standardized bioactivity, molecule, target and drug data extracted from multiple sources, including the primary medicinal chemistry literature. Programmatic access to ChEMBL data has been improved by a recent update to the ChEMBL web services (version 2.0.x, https://www.ebi.ac.uk/chembl/api/data/docs), which exposes significantly more data from the underlying database and introduces new functionality. To complement the data-focused services, a utility service (version 1.0.x, https://www.ebi.ac.uk/chembl/api/utils/docs), which provides RESTful access to commonly used cheminformatics methods, has also been concurrently developed. The ChEMBL web services can be used together or independently to build applications and data processing workflows relevant to drug discovery and chemical biology.
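    As a hedged sketch of how the data service might be queried programmatically: the base path below matches the documentation URL in the abstract, but the resource name and filter syntax are assumptions for illustration, not a statement of the documented API.

```python
from urllib.parse import urlencode

# Base path of the ChEMBL data web services (version 2.0.x, per the abstract).
BASE = "https://www.ebi.ac.uk/chembl/api/data"

def molecule_query(pref_name=None, fmt="json", limit=20):
    """Build a URL querying the (assumed) molecule resource with an
    optional preferred-name filter."""
    params = {"format": fmt, "limit": limit}
    if pref_name is not None:
        # Assumed Django-style field lookup, e.g. field__iexact=value.
        params["pref_name__iexact"] = pref_name
    return f"{BASE}/molecule?{urlencode(params)}"

url = molecule_query(pref_name="ASPIRIN")
# The URL can then be fetched with any HTTP client to retrieve JSON records.
```

    A workflow would typically page through such URLs and feed the returned records into the utility service for cheminformatics post-processing.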

  18. A Web Service for File-Level Access to Disk Images

    Directory of Open Access Journals (Sweden)

    Sunitha Misra

    2014-07-01

    Digital forensics tools have many potential applications in the curation of digital materials in libraries, archives and museums (LAMs). Open source digital forensics tools can help LAM professionals to extract digital contents from born-digital media and make more informed preservation decisions. Many of these tools have ways to display the metadata of the digital media, but few provide file-level access without having to mount the device or use complex command-line utilities. This paper describes a project to develop software that supports access to the contents of digital media without having to mount or download the entire image. The work examines two approaches to creating this tool: first, a graphical user interface running on a local machine; second, a web-based application running in a web browser. The project incorporates existing open source forensics tools and libraries, including The Sleuth Kit and libewf, along with the Flask web application framework and custom Python scripts to generate web pages supporting disk image browsing.

  19. E-serials cataloging access to continuing and integrating resources via the catalog and the web

    CERN Document Server

    Cole, Jim

    2014-01-01

    This comprehensive guide examines the state of electronic serials cataloging with special attention paid to online capacities. E-Serials Cataloging: Access to Continuing and Integrating Resources via the Catalog and the Web presents a review of the e-serials cataloging methods of the 1990s and discusses the international standards (ISSN, ISBD[ER], AACR2) that are applicable. It puts the concept of online accessibility into historical perspective and offers a look at current applications to consider. Practicing librarians, catalogers and administrators of technical services, cataloging and serv

  20. World Wide Webs: Crossing the Digital Divide through Promotion of Public Access

    Science.gov (United States)

    Coetzee, Liezl

    “As Bill Gates and Steve Case proclaim the global omnipresence of the Internet, the majority of non-Western nations and 97 per cent of the world's population remain unconnected to the net for lack of money, access, or knowledge. This exclusion of so vast a share of the global population from the Internet sharply contradicts the claims of those who posit the World Wide Web as a ‘universal' medium of egalitarian communication.” (Trend 2001:2)

  1. Web Accessibility of the Higher Education Institute Websites Based on the World Wide Web Consortium and Section 508 of the Rehabilitation Act

    Science.gov (United States)

    Alam, Najma H.

    2014-01-01

    The problem observed in this study is the low level of compliance of higher education website accessibility with Section 508 of the Rehabilitation Act of 1973. The literature supports the non-compliance of websites with the federal policy in general. Studies were performed to analyze the accessibility of fifty-four sample web pages using automated…

  2. Web Based Access to Real-Time Meteorological Products Optimized for PDA- Smartphones

    Science.gov (United States)

    Dengel, R. C.; Bellon, W.; Robaidek, J.

    2006-05-01

    Recent advances in wireless broadband services and coverage have made access to the internet possible in remote locations. Users can now access the web via an ever-increasing number of small, handheld devices specifically designed to allow voice and data exchange using this expanding service. So-called PDA phones or smartphones blend the features of traditional PDA devices with telecommunications capabilities. The University of Wisconsin - Madison Space Science and Engineering Center (SSEC) has produced a web site holding a variety of meteorological image and text displays optimized for this new technology. The site features animations of real-time radar and satellite clouds with value-added graphical overlays of severe watches and warnings. Products focus on remotely sensed information supplemented with conventional ground observations. The PDA Animated Weather (PAW) website has rapidly been adopted by numerous institutions and individuals desiring access to real-time meteorological information independent of their location. Of particular note are users that can be classified as first responders, including foreign and domestic police and fire departments. This paper offers an overview of the PAW project, including product design, automated production and web presentation. Numerous examples of user applications will be presented, and planned future products and functionality will be discussed.

  3. SalanderMaps: A rapid overview about felt earthquakes through data mining of web-accesses

    Science.gov (United States)

    Kradolfer, Urs

    2013-04-01

    While seismological observatories detect and locate earthquakes based on measurements of the ground motion, they neither know a priori whether an earthquake has been felt by the public, nor where it has been felt. Such information is usually gathered by evaluating feedback reported by the public through online forms on the web. However, after a felt earthquake in Switzerland, many people visit the webpages of the Swiss Seismological Service (SED) at ETH Zurich, and each such visit leaves traces in the logfiles on our web servers. Data mining techniques applied to these logfiles, combined with publicly available databases on the internet, open possibilities to obtain previously unknown information about our virtual visitors. In order to provide precise information to authorities and the media, it would be desirable to rapidly know from which locations these web accesses originate. The method 'Salander' (Seismic Activity Linked to Area codes - Nimble Detection of Earthquake Rumbles) will be introduced, and it will be explained how the IP addresses of a sufficient number of our virtual visitors were linked to their geographical areas (each computer or router directly connected to the internet has a unique IP address; an example would be 129.132.53.5). This allows us to know unprecedentedly quickly whether and where an earthquake was felt in Switzerland. It will also be explained why the Salander method is superior to commercial so-called geolocation products. The corresponding products of the Salander method, animated SalanderMaps, which are routinely generated after each earthquake with a magnitude of M>2 in Switzerland (http://www.seismo.ethz.ch/prod/salandermaps/, available after March 2013), demonstrate how the wavefield of earthquakes propagates through Switzerland and where it was felt. Often, such information is available within less than 60 seconds after origin time, and we always get a clear picture within five minutes after origin time.
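    Stripped to its core, the approach amounts to mining web-server logs against an IP-prefix-to-area table within a short window after origin time. In this sketch the log layout and the prefix table are invented placeholders, not the SED's actual data:

```python
from collections import Counter
from datetime import datetime, timedelta

# Assumed mapping of IP prefixes to areas; a real system would mine
# registry and routing data to build such a table.
PREFIX_TO_AREA = {"129.132": "Zurich", "130.92": "Bern"}

def area_of(ip):
    """Map an IP address to an area by its first two octets (assumed rule)."""
    prefix = ".".join(ip.split(".")[:2])
    return PREFIX_TO_AREA.get(prefix, "unknown")

def felt_map(log_lines, origin_time, window_minutes=5):
    """Count web accesses per area within a window after origin time."""
    end = origin_time + timedelta(minutes=window_minutes)
    hits = Counter()
    for line in log_lines:
        ip, timestamp = line.split()[:2]  # assumed "IP ISO-timestamp" layout
        t = datetime.fromisoformat(timestamp)
        if origin_time <= t <= end:
            hits[area_of(ip)] += 1
    return hits

logs = [
    "129.132.53.5 2013-04-01T12:00:30",
    "130.92.9.1 2013-04-01T12:03:10",
    "8.8.8.8 2013-04-01T12:59:00",  # outside the five-minute window
]
counts = felt_map(logs, datetime(2013, 4, 1, 12, 0))
```

    Animating such per-area counts over successive windows is, in essence, what a SalanderMap visualizes.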

  4. Increasing access to terrestrial ecology and remote sensing (MODIS) data through Web services and visualization tools

    Science.gov (United States)

    Santhana Vannan, S.; Cook, R. B.; Wei, Y.

    2012-12-01

    In recent years, user access to data and information has increasingly been handled through tools, services, and applications. Standards-based services have facilitated this development. These service-based methods of data access have boosted the use of data, often in increasingly complex ways. The Oak Ridge National Laboratory Distributed Active Archive Center (ORNL DAAC) has taken a service-based approach to the distribution and visualization of its terrestrial ecology data, including MODIS (Moderate Resolution Imaging Spectroradiometer) remote sensing data products. The MODIS data products are highly useful for field research. The spectral, spatial, and temporal characteristics of MODIS products have made them an important data source for analyzing key science questions relating to Earth system processes at multiple spatial and temporal scales. However, the MODIS data volume and the complexity of the data format make the products less usable in some cases. To solve this usability issue, the ORNL DAAC has developed a system that prepares and distributes subsets of selected MODIS land products in a scale and format useful for field researchers. Web and Web service tools provide MODIS subsets in comma-delimited text format and in GIS-compatible GeoTIFF format. Users can download and visualize MODIS subsets for a set of pre-defined locations, order MODIS subsets for any land location, or automate the process of subset extraction using a SOAP-based Web service. The MODIS tools and services can be extended to support the large volume of data that would be produced by the various decadal survey missions (http://daac.ornl.gov/MODIS). The ORNL DAAC has also created a Web-based Spatial Data Access Tool (SDAT) that enables users to browse, visualize, and download a wide variety of geospatial data in various user-selected spatial/temporal extents, formats, and projections. SDAT is based on Open Geospatial Consortium (OGC) Web service standards that allow users to
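    The comma-delimited subsets lend themselves to simple scripted post-processing. As a rough sketch, one might average the per-pixel values of each composite date; the column layout below is an invented simplification, not the exact format served by the ORNL DAAC tools.

```python
import csv
import io

# Invented sample in a simplified comma-delimited subset layout.
sample = """\
product,date,pixel_1,pixel_2,pixel_3
MOD13Q1,A2011001,0.41,0.43,0.40
MOD13Q1,A2011017,0.45,0.47,0.44
"""

def mean_per_date(text):
    """Average the per-pixel values for each composite date."""
    means = {}
    for row in csv.DictReader(io.StringIO(text)):
        values = [float(v) for k, v in row.items() if k.startswith("pixel_")]
        means[row["date"]] = sum(values) / len(values)
    return means

print(mean_per_date(sample))
```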

  5. Using the STOQS Web Application for Access to in situ Oceanographic Data

    Science.gov (United States)

    McCann, M. P.

    2012-12-01

    With the increasing measurement and sampling capabilities of autonomous oceanographic platforms (e.g., gliders, autonomous underwater vehicles, Wavegliders), the need to efficiently access and visualize the data they collect is growing. The Monterey Bay Aquarium Research Institute has designed and built the Spatial Temporal Oceanographic Query System (STOQS) specifically to address this issue. The need for STOQS arises from inefficiencies discovered when using CF-NetCDF point observation conventions for these data. The problem is that access efficiency decreases with decreasing dimension of CF-NetCDF data. For example, the Trajectory Common Data Model feature type has only one coordinate dimension, usually time; positions of the trajectory (depth, latitude, longitude) are stored as non-indexed record variables within the NetCDF file. If client software needs to access data between two depth values or from a bounded geographic area, then the whole data set must be read and the selection made within the client software. This is very inefficient. What is needed is a way to easily select data of interest from an archive given any number of spatial, temporal, or other constraints. Geospatial relational database technology provides this capability. The full STOQS application consists of a Postgres/PostGIS database, Mapserver, and Python-Django running on a server, and Web 2.0 technology (jQuery, OpenLayers, Twitter Bootstrap) running in a modern web browser. The web application provides faceted search capabilities allowing a user to quickly drill into the data of interest. Data selection can be constrained by spatial, temporal, and depth selections as well as by parameter value and platform name. The web application layer also provides a REST (Representational State Transfer) Application Programming Interface allowing tools such as the Matlab stoqstoolbox to retrieve data.
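    The access problem described above can be sketched in a few lines: with trajectory records indexed only by time, any depth- or area-constrained selection forces the client to scan every record, whereas a spatial database answers the same query through an index. The sample records and values below are invented.

```python
# Trajectory-style records indexed only by time (invented sample values):
# (time, depth_m, lat, lon, temperature_C)
records = [
    (0, 5.0, 36.80, -121.90, 14.1),
    (1, 50.0, 36.81, -121.91, 11.3),
    (2, 120.0, 36.82, -121.92, 9.8),
    (3, 30.0, 36.83, -121.93, 12.5),
]

def select(records, min_depth, max_depth, bbox):
    """Full scan: keep records inside a depth band and a lat/lon box."""
    lat_min, lat_max, lon_min, lon_max = bbox
    return [r for r in records
            if min_depth <= r[1] <= max_depth
            and lat_min <= r[2] <= lat_max
            and lon_min <= r[3] <= lon_max]

hits = select(records, 10, 60, (36.80, 36.90, -122.00, -121.80))
print(len(hits))  # 2
```

    In STOQS the equivalent selection is delegated to PostGIS, where spatial indexes avoid the full scan.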

  6. Children and young people's views on access to a web-based application to support personal management of long-term conditions: a qualitative study.

    Science.gov (United States)

    Huby, K; Swallow, V; Smith, T; Carolan, I

    2017-01-01

    An exploration of children and young people's views on a proposed web-based application to support personal management of chronic kidney disease at home is important for developing resources that meet their needs and preferences. As part of a wider study to develop and evaluate a web-based information and support application for parents managing their child's chronic kidney disease, qualitative interviews were conducted with 26 children and young people aged 5-17 years. Interviews explored their views on the content of a proposed child- and young person-appropriate application to support personal management of their condition. Data were analysed using the framework technique and self-efficacy theory. One overarching theme of Access and three subthemes (information, accessibility and normalization) were identified. Information needed to be clear and accurate, age appropriate and secure. Access to Wi-Fi was essential to utilize information and retain contact with peers. For some, it was important to feel 'normal' and so they would choose not to access any care information when outside of the hospital, as this reduced their ability to feel normal. Developing a web-based application that meets children and young people's information and support needs will maximize its utility and enhance the effectiveness of home-based clinical caregiving, thereby contributing to improved outcomes for patients. © 2016 John Wiley & Sons Ltd.

  7. SensorWeb Hub infrastructure for open access to scientific research data

    Science.gov (United States)

    de Filippis, Tiziana; Rocchi, Leandro; Rapisardi, Elena

    2015-04-01

    The sharing of research data is a new challenge for the scientific community, which may benefit from a large amount of information to address environmental issues and sustainability in agriculture and urban contexts. A prerequisite for this challenge is the development of an infrastructure that ensures access, management and preservation of data, with technical support for a coordinated and harmonious management of data that, in the framework of Open Data policies, should encourage reuse and collaboration. The neogeography and 'citizens as sensors' approaches highlight that new data sources need a new set of tools and practices to collect, validate, categorize, and use/access these 'crowdsourced' data, which complement the data sets produced in the scientific field, thus 'feeding' the overall data available for analysis and research. When the scientific community embraces the dimensions of collaboration and sharing, access and reuse, in order to adopt the open innovation approach, it should redesign and reshape its processes of data management: the challenges of technological and cultural innovation, enabled by web 2.0 technologies, lead to a scenario where the sharing of structured and interoperable data will constitute the unavoidable building block of a new paradigm of scientific research. In this perspective the Institute of Biometeorology, CNR, whose aim is to contribute to the sharing and development of research data, has developed the 'SensorWebHub' (SWH) infrastructure to support the scientific activities carried out in several research projects at national and international level. It is designed to manage both mobile and fixed open source meteorological and environmental sensors, in order to integrate the existing agro-meteorological and urban monitoring networks.
    The proposed architecture uses open source tools to ensure sustainability in the development and deployment of web applications with geographic features and custom analysis, as requested

  8. The new ALICE DQM client: a web access to ROOT-based objects

    Science.gov (United States)

    von Haller, B.; Carena, F.; Carena, W.; Chapeland, S.; Chibante Barroso, V.; Costa, F.; Delort, C.; Dénes, E.; Diviá, R.; Fuchs, U.; Niedziela, J.; Simonetti, G.; Soós, C.; Telesca, A.; Vande Vyvre, P.; Wegrzynek, A.

    2015-12-01

    A Large Ion Collider Experiment (ALICE) is the heavy-ion detector designed to study the physics of strongly interacting matter and the quark-gluon plasma at the CERN Large Hadron Collider (LHC). The online Data Quality Monitoring (DQM) plays an essential role in the experiment operation by providing shifters with immediate feedback on the data being recorded in order to quickly identify and overcome problems. Immediate access to the DQM results is needed not only by shifters in the control room but also by detector experts worldwide. As a consequence, a new web application has been developed to dynamically display and manipulate the ROOT-based objects produced by the DQM system in a flexible and user-friendly interface. The architecture and design of the tool, its main features and the technologies that were used, both on the server and the client side, are described. In particular, we detail how we took advantage of the most recent ROOT JavaScript I/O and web server library to give interactive access to ROOT objects stored in a database. We also describe the use of modern web techniques and packages such as AJAX, DHTMLX and jQuery, which have been instrumental in the successful implementation of a reactive and efficient application. We finally present the resulting application and how code quality was ensured. We conclude with a roadmap for future technical and functional developments.

  9. Mining Sequential Access Pattern with Low Support From Large Pre-Processed Web Logs

    Directory of Open Access Journals (Sweden)

    S. Vijayalakshmi

    2010-01-01

    Full Text Available Problem statement: To find frequently occurring sequential patterns from a web log file on the basis of a provided minimum support. Web usage mining is the application of sequential pattern mining techniques to discover usage patterns from Web data, in order to understand and better serve the needs of Web-based applications; we introduce an efficient strategy for discovering such patterns. Approach: The approach adopts a divide-and-conquer pattern-growth principle. Our proposed method combined tree projection and prefix growth features from the pattern-growth category with the position-coded feature from the early-pruning category; all of these features are key characteristics of their respective categories, so we consider our proposed method a pattern-growth, early-pruning hybrid algorithm. Results: Our proposed hybrid algorithm eliminated the need to store numerous intermediate WAP trees during mining. Since only the original tree was stored, it drastically cuts huge memory access costs, which may include disk I/O cost in a virtual memory environment, especially when mining very long sequences with millions of records. Conclusion: An attempt has been made to improve the efficiency of our approach. Our proposed method totally eliminates reconstructions of intermediate WAP-trees during mining and considerably reduces execution time.
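    A miner of the pattern-growth family the paper builds on can be sketched as follows. This is generic PrefixSpan-style code for illustration only, not the authors' WAP-tree hybrid algorithm; the toy log data are invented.

```python
def prefix_span(sequences, min_support, prefix=()):
    """Return frequent sequential patterns with their support counts."""
    patterns = {}
    # Count items that can extend the current prefix in each sequence.
    counts = {}
    for seq in sequences:
        for item in set(seq):
            counts[item] = counts.get(item, 0) + 1
    for item, count in counts.items():
        if count >= min_support:
            new_prefix = prefix + (item,)
            patterns[new_prefix] = count
            # Project each sequence onto the suffix after the first occurrence.
            projected = [seq[seq.index(item) + 1:]
                         for seq in sequences if item in seq]
            patterns.update(prefix_span(projected, min_support, new_prefix))
    return patterns

# Toy web-access sequences (invented page identifiers).
logs = [["a", "b", "c"], ["a", "c"], ["a", "b"], ["b", "c"]]
result = prefix_span(logs, min_support=2)
print(result)
```

    Pattern-growth avoids candidate generation: each frequent item extends the prefix and the search recurses only into the projected suffixes, which shrink quickly.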

  10. Fast and Accurate Accessible Surface Area Prediction Without a Sequence Profile.

    Science.gov (United States)

    Faraggi, Eshel; Kouza, Maksim; Zhou, Yaoqi; Kloczkowski, Andrzej

    2017-01-01

    A fast accessible surface area (ASA) predictor is presented. In this new approach, no residue mutation profiles generated by multiple sequence alignments are used as inputs. Instead, we use only single-sequence information and global features such as single-residue and two-residue compositions of the chain. The resulting predictor is far more efficient than sequence-alignment-based predictors and of comparable accuracy to them. Introduction of the global inputs significantly helps achieve this comparable accuracy. The predictor, termed ASAquick, is found to perform similarly well for so-called easy and hard cases, indicating generalizability and possible usability for de novo protein structure prediction. The source code and Linux executables for ASAquick are available from Research and Information Systems at http://mamiris.com and from the Battelle Center for Mathematical Medicine at http://mathmed.org.
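    The global inputs mentioned above can be illustrated with a short sketch computing single-residue and two-residue (dipeptide) compositions of a chain. The alphabet ordering and normalization here are assumptions for illustration, not ASAquick's exact feature encoding.

```python
from collections import Counter

def composition_features(sequence, alphabet="ACDEFGHIKLMNPQRSTVWY"):
    """Single-residue and two-residue composition vectors of a chain."""
    n = len(sequence)
    single = Counter(sequence)
    pairs = Counter(sequence[i:i + 2] for i in range(n - 1))
    mono = [single[a] / n for a in alphabet]                       # 20 values
    di = [pairs[a + b] / max(n - 1, 1)                             # 400 values
          for a in alphabet for b in alphabet]
    return mono + di

features = composition_features("ACDCA")  # toy peptide
print(len(features))  # 420
```

    Because these features summarize the whole chain, they give a single-sequence predictor some of the context that alignment profiles would otherwise provide.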

  11. U-Access: a web-based system for routing pedestrians of differing abilities

    Science.gov (United States)

    Sobek, Adam D.; Miller, Harvey J.

    2006-09-01

    For most people, traveling through urban and built environments is straightforward. However, for people with physical disabilities, even a short trip can be difficult and perhaps impossible. This paper provides the design and implementation of a web-based system for the routing and prescriptive analysis of pedestrians with different physical abilities within built environments. U-Access, as a routing tool, provides pedestrians with the shortest feasible route with respect to one of three differing ability levels, namely, peripatetic (unaided mobility), aided mobility (mobility with the help of a cane, walker or crutches) and wheelchair users. U-Access is also an analytical tool that can help identify obstacles in built environments that create routing discrepancies among pedestrians with different physical abilities. This paper discusses the system design, including database, algorithm and interface specifications, and technologies for efficiently delivering results through the World Wide Web (WWW). This paper also provides an illustrative example of a routing problem and an analytical evaluation of the existing infrastructure which identifies the obstacles that pose the greatest discrepancies between physical ability levels. U-Access was evaluated by wheelchair users and route experts from the Center for Disability Services at The University of Utah, USA.
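    The routing idea above can be sketched as a shortest-path search restricted to the edges a given ability level may traverse. The campus graph, traversal costs, and ability labels below are invented for illustration.

```python
import heapq

# Each edge: (node, node, cost, set of ability levels that may use it).
EDGES = [
    ("library", "quad", 30, {"peripatetic"}),                         # stairs
    ("library", "quad", 50, {"peripatetic", "aided", "wheelchair"}),  # ramp
    ("quad", "clinic", 20, {"peripatetic", "aided", "wheelchair"}),
]

def shortest_route_cost(edges, start, goal, ability):
    """Dijkstra over only the edges the given ability level may use."""
    graph = {}
    for u, v, cost, allowed in edges:
        if ability in allowed:
            graph.setdefault(u, []).append((v, cost))
            graph.setdefault(v, []).append((u, cost))  # paths are two-way
    frontier, seen = [(0, start)], set()
    while frontier:
        cost, node = heapq.heappop(frontier)
        if node == goal:
            return cost
        if node in seen:
            continue
        seen.add(node)
        for nxt, step in graph.get(node, []):
            if nxt not in seen:
                heapq.heappush(frontier, (cost + step, nxt))
    return None  # no feasible route for this ability level

print(shortest_route_cost(EDGES, "library", "clinic", "peripatetic"))  # 50
print(shortest_route_cost(EDGES, "library", "clinic", "wheelchair"))   # 70
```

    Comparing route costs across ability levels, as in the last two calls, is exactly the kind of discrepancy analysis the paper uses to locate obstacles.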

  12. The impacts of problem gambling on concerned significant others accessing web-based counselling.

    Science.gov (United States)

    Dowling, Nicki A; Rodda, Simone N; Lubman, Dan I; Jackson, Alun C

    2014-08-01

    The 'concerned significant others' (CSOs) of people with problem gambling frequently seek professional support. However, there is surprisingly little research investigating the characteristics or help-seeking behaviour of these CSOs, particularly for web-based counselling. The aims of this study were to describe the characteristics of CSOs accessing the web-based counselling service (real-time chat) offered by the Australian national gambling web-based counselling site, explore the most commonly reported CSO impacts using a new brief scale (the Problem Gambling Significant Other Impact Scale: PG-SOIS), and identify the factors associated with different types of CSO impact. The sample comprised all 366 CSOs accessing the service over a 21-month period. The findings revealed that the CSOs were most often the intimate partners of problem gamblers and that they were most often females aged under 30 years. All CSOs displayed a similar profile of impact, with emotional distress (97.5%) and impacts on the relationship (95.9%) reported to be the most commonly endorsed impacts, followed by impacts on social life (92.1%) and finances (91.3%). Impacts on employment (83.6%) and physical health (77.3%) were the least commonly endorsed. There were few significant differences in impacts between family members (children, partners, parents, and siblings), but friends consistently reported the lowest impact scores. Only prior counselling experience and Asian cultural background were consistently associated with higher CSO impacts. The findings can serve to inform the development of web-based interventions specifically designed for the CSOs of problem gamblers.

  13. Remote Internet access to advanced analytical facilities: a new approach with Web-based services.

    Science.gov (United States)

    Sherry, N; Qin, J; Fuller, M Suominen; Xie, Y; Mola, O; Bauer, M; McIntyre, N S; Maxwell, D; Liu, D; Matias, E; Armstrong, C

    2012-09-04

    Over the past decade, the increasing availability of the World Wide Web has held out the possibility that the efficiency of scientific measurements could be enhanced in cases where experiments are conducted at distant facilities. Early successes have included X-ray diffraction (XRD) measurements of protein crystal structures at synchrotrons and access to scanning electron microscopy (SEM) and NMR facilities by users from institutions that do not possess such advanced capabilities. Experimental control, visual contact, and receipt of results have used some form of X forwarding and/or VNC (virtual network computing) software that transfers the screen image of a server at the experimental site to that of the users' home site. A more recent development is a web services platform called Science Studio that provides teams of scientists with secure links to experiments at one or more advanced research facilities. The software provides a widely distributed team with a set of controls and screens to operate, observe, and record essential parts of the experiment. As well, Science Studio provides high-speed network access to computing resources to process the large data sets that are often involved in complex experiments. The simple web browser and the rapid transfer of experimental data to a processing site allow efficient use of the facility and assist decision making during the acquisition of the experimental results. The software provides users with a comprehensive overview and record of all parts of the experimental process. A prototype network is described involving X-ray beamlines at two different synchrotrons and an SEM facility. An online parallel processing facility has been developed that analyzes the data in near-real time using stream processing. Science Studio can be expanded to include many other analytical applications, providing teams of users with rapid access to processed results along with the means for detailed
  14. Accurate single-sequence prediction of solvent accessible surface area using local and global features.

    Science.gov (United States)

    Faraggi, Eshel; Zhou, Yaoqi; Kloczkowski, Andrzej

    2014-11-01

    We present a new approach for predicting the Accessible Surface Area (ASA) using a General Neural Network (GENN). The novelty of the new approach lies in not using residue mutation profiles generated by multiple sequence alignments as descriptive inputs. Instead, we use solely sequential window information and global features such as single-residue and two-residue compositions of the chain. The resulting predictor is far more efficient than sequence-alignment-based predictors and of comparable accuracy to them. Introduction of the global inputs significantly helps achieve this comparable accuracy. The predictor, termed ASAquick, is tested on predicting the ASA of globular proteins and found to perform similarly well for so-called easy and hard cases, indicating generalizability and possible usability for de novo protein structure prediction. The source code and Linux executables for GENN and ASAquick are available from Research and Information Systems at http://mamiris.com, from the SPARKS Lab at http://sparks-lab.org, and from the Battelle Center for Mathematical Medicine at http://mathmed.org.

  15. TTLEM: Open access tool for building numerically accurate landscape evolution models in MATLAB

    Science.gov (United States)

    Campforts, Benjamin; Schwanghart, Wolfgang; Govers, Gerard

    2017-04-01

    Despite a growing interest in landscape evolution models (LEMs), accuracy assessment of the numerical methods they are based on has received little attention. Here, we present TTLEM, an open access landscape evolution package designed for developing and testing your own scenarios and hypotheses. TTLEM uses a higher-order flux-limiting finite-volume method to simulate river incision and tectonic displacement. We show that this scheme significantly influences the evolution of simulated landscapes and the spatial and temporal variability of erosion rates. Moreover, it allows the simulation of lateral tectonic displacement on a fixed grid. Through a simple GUI, the software produces visual output of the evolving landscape throughout the model run time. In this contribution, we illustrate numerical landscape evolution through a set of movies spanning different spatial and temporal scales. We focus on the erosional domain and use both spatially constant and variable input values for uplift, lateral tectonic shortening, erodibility and precipitation. Moreover, we illustrate the relevance of a stochastic approach for realistic hillslope response modelling. TTLEM is a fully open source software package, written in MATLAB and based on the TopoToolbox platform (topotoolbox.wordpress.com). Installation instructions can be found on that website and in the accompanying GitHub repository.
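    TTLEM itself uses a higher-order flux-limiting finite-volume scheme; a much simpler first-order explicit upwind step for the stream-power incision law dz/dt = U - K*A^m*S^n, with invented parameter values, illustrates what one time step of such a model computes.

```python
def incision_step(z, area, dx, dt, K=1e-5, m=0.5, n=1.0, uplift=1e-3):
    """One explicit stream-power time step along a 1D profile.

    Node 0 is a fixed base level; all parameter values are invented
    and this first-order scheme is only an illustration, not TTLEM's.
    """
    new_z = z[:]
    for i in range(1, len(z)):
        slope = max((z[i] - z[i - 1]) / dx, 0.0)  # upwind (downstream) slope
        erosion = K * area[i] ** m * slope ** n
        new_z[i] = z[i] + dt * (uplift - erosion)
    return new_z

z = [0.0, 10.0, 20.0, 30.0]      # elevations (m) along the profile
area = [4e6, 3e6, 2e6, 1e6]      # drainage area (m^2), invented
z1 = incision_step(z, area, dx=1000.0, dt=1000.0)
print(z1)
```

    First-order schemes like this one smear knickpoints numerically, which is precisely the kind of error the paper's flux-limiting scheme is designed to control.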

  16. Accurate single-sequence prediction of solvent accessible surface area using local and global features

    Science.gov (United States)

    Faraggi, Eshel; Zhou, Yaoqi; Kloczkowski, Andrzej

    2014-01-01

    We present a new approach for predicting the Accessible Surface Area (ASA) using a General Neural Network (GENN). The novelty of the new approach lies in not using residue mutation profiles generated by multiple sequence alignments as descriptive inputs. Instead, we use solely sequential window information and global features such as single-residue and two-residue compositions of the chain. The resulting predictor is far more efficient than sequence-alignment-based predictors and of comparable accuracy to them. Introduction of the global inputs significantly helps achieve this comparable accuracy. The predictor, termed ASAquick, is tested on predicting the ASA of globular proteins and found to perform similarly well for so-called easy and hard cases, indicating generalizability and possible usability for de novo protein structure prediction. The source code and Linux executables for GENN and ASAquick are available from Research and Information Systems at http://mamiris.com, from the SPARKS Lab at http://sparks-lab.org, and from the Battelle Center for Mathematical Medicine at http://mathmed.org. PMID:25204636

  17. Web 2.0 Sites for Collaborative Self-Access: The Learning Advisor vs. Google®

    Directory of Open Access Journals (Sweden)

    Craig D. Howard

    2011-09-01

    Full Text Available While Web 2.0 technologies provide motivated, self-access learners with unprecedented opportunities for language learning, Web 2.0 designs are not of universally equal value for learning. This article reports on research carried out at Indiana University Bloomington using an empirical method to select websites for self-access language learning. Two questions related to Web 2.0 recommendations were asked: (1) How do recommended Web 2.0 sites rank in terms of interactivity features? (2) How likely is a learner to find highly interactive sites on their own? A list of 20 sites used for supplemental and self-access activities in language programs at five universities was compiled and provided the initial data set. Purposive sampling criteria revealed that 10 sites truly represented Web 2.0 design. To address the first question, a feature analysis was applied (Herring, The international handbook of internet research. Berlin: Springer, 2008). An interactivity framework was developed from previous research to identify Web 2.0 design features, and sites were ranked according to feature quantity. The method used to address the second question was an interconnectivity analysis that measured direct and indirect interconnectivity within Google results. Highly interactive Web 2.0 sites were not prominent in Google search results, nor were they often linked via third-party sites. It was determined that, using typical keywords or searching via blogs and recommendation sites, self-access learners were highly unlikely to find the most promising Web 2.0 sites for language learning. A discussion of the role of the learning advisor in guiding Web 2.0 collaborative self-access, as well as some strategic shortcuts to quick analysis, concludes the article.
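    The study's two measurements, ranking sites by interactivity features and checking how well-linked they are, can be sketched on invented data; the site names, features, and links below are not from the study.

```python
# Invented data: site -> observed interactivity features.
FEATURES = {
    "siteA": {"comments", "voice_chat", "peer_review", "profiles"},
    "siteB": {"comments"},
    "siteC": {"comments", "profiles"},
}
# Invented data: who links to whom (direct interconnectivity).
LINKS = {"siteB": {"siteC"}, "siteC": {"siteB"}}

def rank_by_features(features):
    """Rank sites by how many interactivity features they offer."""
    return sorted(features, key=lambda s: len(features[s]), reverse=True)

def inbound_links(links):
    """Count inbound links per site, a crude findability proxy."""
    counts = {site: 0 for site in FEATURES}
    for targets in links.values():
        for t in targets:
            counts[t] = counts.get(t, 0) + 1
    return counts

print(rank_by_features(FEATURES))  # ['siteA', 'siteC', 'siteB']
print(inbound_links(LINKS))        # siteA has no inbound links
```

    The toy data mirror the paper's finding: the most interactive site need not be the best linked, so learners searching on their own are unlikely to surface it.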

  18. Access and completion of a Web-based treatment in a population-based sample of tornado-affected adolescents.

    Science.gov (United States)

    Price, Matthew; Yuen, Erica K; Davidson, Tatiana M; Hubel, Grace; Ruggiero, Kenneth J

    2015-08-01

    Although Web-based treatments have significant potential to assess and treat difficult-to-reach populations, such as trauma-exposed adolescents, the extent to which such treatments are accessed and used is unclear. The present study evaluated the proportion of adolescents who accessed and completed a Web-based treatment for postdisaster mental health symptoms. Correlates of access and completion were examined. A sample of 2,000 adolescents living in tornado-affected communities was assessed via structured telephone interview and invited to a Web-based treatment. The modular treatment addressed symptoms of posttraumatic stress disorder, depression, and alcohol and tobacco use. Participants were randomized to experimental or control conditions after accessing the site. Overall access for the intervention was 35.8%. Module completion for those who accessed ranged from 52.8% to 85.6%. Adolescents whose parents used the Internet to obtain health-related information were more likely to access the treatment. Adolescent males were less likely to access the treatment. Future work is needed to identify strategies to further increase the reach of Web-based treatments to provide clinical services in a postdisaster context. (c) 2015 APA, all rights reserved.

  19. Accessibility and Use of Web-Based Electronic Resources by Physicians in a Psychiatric Institution in Nigeria

    Science.gov (United States)

    Oduwole, Adebambo Adewale; Oyewumi, Olatundun

    2010-01-01

    Purpose: This study aims to examine the accessibility and use of web-based electronic databases on the Health InterNetwork Access to Research Initiative (HINARI) portal by physicians in the Neuropsychiatric Hospital, Aro--a psychiatry health institution in Nigeria. Design/methodology/approach: Collection of data was through the use of a three-part…

  20. Making It Work for Everyone: HTML5 and CSS Level 3 for Responsive, Accessible Design on Your Library's Web Site

    Science.gov (United States)

    Baker, Stewart C.

    2014-01-01

    This article argues that accessibility and universality are essential to good Web design. A brief review of library science literature sets the issue of Web accessibility in context. The bulk of the article explains the design philosophies of progressive enhancement and responsive Web design, and summarizes recent updates to WCAG 2.0, HTML5, CSS…

  2. Accurate guidance for percutaneous access to a specific target in soft tissues: preclinical study of computer-assisted pericardiocentesis.

    Science.gov (United States)

    Chavanon, O; Barbe, C; Troccaz, J; Carrat, L; Ribuot, C; Noirclerc, M; Maitrasse, B; Blin, D

    1999-06-01

    In the field of percutaneous access to soft tissues, our project was to improve classical pericardiocentesis by performing accurate guidance to a selected target, according to a model of the pericardial effusion acquired through three-dimensional (3D) data recording. The required hardware is an echocardiographic device and a needle, both linked to a 3D localizer, and a computer. After acquiring echographic data, a modeling procedure allows definition of the optimal puncture strategy, taking into consideration the mobility of the heart by determining a region that is stable throughout the cardiac cycle. A passive guidance system is then used to accurately reach the planned target, generally a site in the middle of the stable region. After validation on a dynamic phantom and a feasibility study in dogs, an accuracy and reliability analysis protocol was carried out on pigs with experimental pericardial effusion. Ten consecutive successful punctures using various trajectories were performed on eight pigs. Nonbloody liquid was collected from pericardial effusions in the stable region (5 to 9 mm wide) within 10 to 15 minutes from echographic acquisition to drainage. Accuracy of at least 2.5 mm was demonstrated. This study demonstrates the feasibility of computer-assisted pericardiocentesis. Beyond the simple improvement of the current technique, this method could be a new way to reach the heart or a new tool for percutaneous access and image-guided puncture of soft tissues. Further investigation will be necessary before routine human application.

  3. Access and privacy rights using web security standards to increase patient empowerment.

    Science.gov (United States)

    Falcão-Reis, Filipa; Costa-Pereira, Altamiro; Correia, Manuel E

    2008-01-01

    Electronic Health Record (EHR) systems are becoming more and more sophisticated and nowadays include numerous applications, which are accessed not only by medical professionals but also by accounting and administrative personnel. This could represent a problem concerning basic rights such as privacy and confidentiality. The principles, guidelines and recommendations compiled by the OECD for the protection of privacy and trans-border flows of personal data are described and considered within health information system development. Granting access to an EHR should be dependent upon the owner of the record, the patient: he must be entitled to define who is allowed to access his EHRs, beyond the access control scheme each health organization may have implemented. In this way, it is not only up to health professionals to decide who has access to what, but also the patient himself. Implementing such a policy is a step towards patient empowerment, which society should encourage and governments should promote. The paper then introduces a technical solution based on web security standards. This would give patients the ability to monitor and control which entities have access to their personal EHRs, thus empowering them with the knowledge of how much of their medical history is known and by whom. It is necessary to create standard data access protocols, mechanisms and policies to protect privacy rights and, furthermore, to enable patients to automatically track the movement (flow) of their personal data and information in the context of health information systems. This solution must be functional and, above all, user-friendly, and the interface should take into consideration some usability heuristics in order to provide the user with the best tools.
    The current official standards on confidentiality and privacy in health care, currently being developed within the EU, are explained, in order to achieve a consensual idea of the guidelines that all member states should follow to transfer
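    The patient-directed access policy described above can be sketched as a check that requires both the organization's role policy and the patient's own grant, with every decision logged so the patient can track who accessed what. The roles, section names, and identifiers below are invented.

```python
# Invented organizational role policy: role -> record sections it may see.
ROLE_POLICY = {"physician": {"clinical_notes", "lab_results"},
               "accounting": {"billing"}}

class PatientRecord:
    """EHR record whose access requires both a role right and a patient grant."""

    def __init__(self, patient_id):
        self.patient_id = patient_id
        self.grants = set()   # (user, section) pairs the patient has allowed
        self.audit_log = []   # decision trail, visible to the patient

    def grant(self, user, section):
        self.grants.add((user, section))

    def access(self, user, role, section):
        allowed = (section in ROLE_POLICY.get(role, set())
                   and (user, section) in self.grants)
        self.audit_log.append((user, section, allowed))
        return allowed

record = PatientRecord("p-001")
record.grant("dr_silva", "lab_results")
print(record.access("dr_silva", "physician", "lab_results"))    # True
print(record.access("dr_silva", "physician", "clinical_notes")) # False: no grant
print(record.access("clerk_1", "accounting", "lab_results"))    # False: role
print(len(record.audit_log))  # 3
```

    Requiring both conditions means neither the institution nor the patient alone can open a section, and the log gives the patient the flow-tracking the paper calls for.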

  4. Web Maps and Services at NOAA for Bathymetric Data Discovery, Visualization, and Access

    Science.gov (United States)

    Varner, J. D.; Cartwright, J.

    2016-12-01

    NOAA's National Centers for Environmental Information (NCEI) ensures the security and widespread availability of marine geophysical data through long-term stewardship. NCEI stewards bathymetric data and products from numerous sources, including near-shore hydrographic survey data from NOAA's National Ocean Service, deep-water multibeam and single-beam echosounder data collected by U.S. and non-U.S. institutions, as well as digital elevation models (DEMs) that integrate ocean bathymetry and land topography. These data can be discovered, visualized, and accessed via a suite of ArcGIS web services and by using a web map which integrates these component services: the Bathymetric Data Viewer. The services provide data coverage (e.g. survey tracklines, DEM footprints), color shaded relief visualizations of bathymetry, and seamless mosaics of elevation data. These services are usable in web applications (both within and outside NOAA), and in desktop GIS software. Users can utilize the Bathymetric Data Viewer to narrow down data of interest, identify datasets, then submit an order to NCEI's extract system for data retrieval.

  5. FLOSYS--a web-accessible workflow system for protocol-driven biomolecular sequence analysis.

    Science.gov (United States)

    Badidi, E; Lang, B F; Burger, G

    2004-11-01

    FLOSYS is an interactive web-accessible bioinformatics workflow system designed to assist biologists in multi-step data analyses. FLOSYS allows the user to create complex analysis pathways (protocols) graphically, much like drawing a flowchart: icons representing particular bioinformatics tools are dragged and dropped onto a canvas, and lines connecting those icons are drawn to specify the relationships between the tools. In addition, FLOSYS permits the user to select input data, execute the protocol and store the results in a personal workspace. The three-tier architecture of FLOSYS has been implemented in Java and uses a relational database system together with new technologies for distributed and web computing such as CORBA, RMI, JSP and JDBC. The prototype of FLOSYS, which is part of the bioinformatics workbench AnaBench, is accessible on-line at http://malawimonas.bcm.umontreal.ca:8091/anabench. The entire package is available on request to academic groups who wish to have a customized local analysis environment for research or teaching.

  6. A Dynamic Web Page Prediction Model Based on Access Patterns to Offer Better User Latency

    CERN Document Server

    Mukhopadhyay, Debajyoti; Saha, Dwaipayan; Kim, Young-Chon

    2011-01-01

    The growth of the World Wide Web has emphasized the need for improvement in user latency. One technique used to improve user latency is caching; another is Web prefetching. Approaches that rely solely on caching offer limited performance improvement because it is difficult for caching to handle the large number of increasingly diverse files. Studies have been conducted on prefetching models based on decision trees, Markov chains, and path analysis. However, the increased use of dynamic pages and frequent changes in site structure and user access patterns have limited the efficacy of these static techniques. In this paper, we propose a methodology to cluster related pages into different categories based on access patterns. Additionally, we use page ranking to build up our prediction model at the initial stages, when users have not yet started sending requests. In this way we try to overcome the problem of maintaining huge databases, which is needed in case of log based techn...
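
A rough, hypothetical sketch of the idea described above (the function names, the session data, and the fallback to a static page ranking are illustrative assumptions, not the authors' implementation): a first-order transition model is built from logged sessions, and a page-rank-style score fills in when no access history exists yet.

```python
from collections import defaultdict

def build_transitions(sessions):
    """Count page-to-page transitions observed in user sessions."""
    counts = defaultdict(lambda: defaultdict(int))
    for session in sessions:
        for cur, nxt in zip(session, session[1:]):
            counts[cur][nxt] += 1
    return counts

def predict_next(counts, page, fallback_rank=None):
    """Predict the most likely next page; fall back to a static
    page ranking when no access history exists for `page`."""
    successors = counts.get(page)
    if successors:
        return max(successors, key=successors.get)
    if fallback_rank:
        return max(fallback_rank, key=fallback_rank.get)
    return None

sessions = [["home", "news", "sports"],
            ["home", "news", "weather"],
            ["home", "news", "sports"]]
t = build_transitions(sessions)
print(predict_next(t, "news"))                                  # sports
print(predict_next(t, "unknown", {"home": 0.9, "news": 0.5}))   # home
```

The fallback mirrors the paper's motivation: prefetching must produce a usable prediction even before enough requests have been logged to estimate transition probabilities.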

  7. SPSmart: adapting population based SNP genotype databases for fast and comprehensive web access

    Directory of Open Access Journals (Sweden)

    Carracedo Ángel

    2008-10-01

    Background In the last five years large online resources of human variability have appeared, notably HapMap, Perlegen and the CEPH foundation. These databases of genotypes with population information act as catalogues of human diversity, and are widely used as reference sources for population genetics studies. Although many useful conclusions may be extracted by querying databases individually, the lack of flexibility for combining data from within and between each database does not allow the calculation of key population variability statistics. Results We have developed a novel tool for accessing and combining large-scale genomic databases of single nucleotide polymorphisms (SNPs) in widespread use in human population genetics: SPSmart (SNPs for Population Studies). A fast pipeline creates and maintains a data mart from the most commonly accessed databases of genotypes containing population information: data is mined, summarized into the standard statistical reference indices, and stored into a relational database that currently handles as many as 4 × 10⁹ genotypes and that can be easily extended to new database initiatives. We have also built a web interface to the data mart that allows the browsing of underlying data indexed by population and the combining of populations, allowing intuitive and straightforward comparison of population groups. All the information served is optimized for web display, and most of the computations are already pre-processed in the data mart to speed up the data browsing and any computational treatment requested. Conclusion In practice, SPSmart allows populations to be combined into user-defined groups, while multiple databases can be accessed and compared in a few simple steps from a single query. It performs the queries rapidly and gives straightforward graphical summaries of SNP population variability through visual inspection of allele frequencies outlined in standard pie-chart format.
In addition, full
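
The population pooling that SPSmart performs can be illustrated with a toy calculation (the counts and the `combine_populations` helper below are hypothetical, not SPSmart's actual code): per-population allele counts for a SNP are summed, and frequencies are recomputed over the combined group.

```python
def combine_populations(pops):
    """Pool per-population allele counts for one SNP and return
    the allele frequencies of the combined (user-defined) group."""
    total = {}
    for counts in pops:
        for allele, n in counts.items():
            total[allele] = total.get(allele, 0) + n
    n_all = sum(total.values())
    return {a: n / n_all for a, n in total.items()}

# invented pre-computed counts for one SNP in two populations
ceu = {"A": 90, "G": 30}
yri = {"A": 40, "G": 80}
freqs = combine_populations([ceu, yri])
print(freqs)   # 'A': 130/240 over the pooled group
```

Pooling raw counts rather than averaging frequencies is what makes arbitrary user-defined groupings statistically consistent, which is presumably why the data mart pre-computes counts per population.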

  8. Integrated Web-based Oracle and GIS Access to Natural Hazards Data

    Science.gov (United States)

    Dunbar, P. K.; Cartwright, J. C.; Kowal, D.; Gaines, T.

    2002-12-01

    The National Geophysical Data Center (NGDC) catalogs information on tsunamis, significant earthquakes, and volcanoes including effects such as fatalities and damage. NGDC also maintains a large collection of geologic hazards photos. All of these databases are now stored in an Oracle relational database management system (RDBMS) and accessible over the Web as tables, reports and interactive maps. Storing the data in an RDBMS facilitates the search for earthquake, tsunami and volcano data related to a specific event. For example, a user might be interested in all of the earthquakes greater than magnitude 8.0 that have occurred in Alaska. If the earthquake triggered a tsunami, the user could then directly access related information from the tsunami tables without having to run a separate search of the tsunami database. Users could also first access the tsunami database and then obtain related significant earthquake and volcano data. The ArcIMS-based interactive maps provide integrated Web-based GIS access to these hazards databases as well as additional auxiliary geospatial data. The first interactive map provides access to individual GIS layers of significant earthquakes, tsunami sources, tsunami effects, volcano locations, and various spatial reference layers including topography, population density, and political boundaries. The map service also provides ftp links and hyperlinks to additional hazards information such as NGDC's extensive collection of geologic hazards photos. For example, a user could display all of the significant earthquakes that have occurred in California and then, using a hyperlink tool, display images showing damage from a specific earthquake such as the 1989 Loma Prieta event. The second interactive map allows users to display related natural hazards GIS layers. For example, a user might first display tsunami source locations and select tsunami effects as the related feature.
Using a tool developed at NGDC, the user can then select a specific

  9. WaveNet: A Web-Based Metocean Data Access, Processing and Analysis Tool; Part 5 - WW3 Database

    Science.gov (United States)

    2015-02-01

    ERDC/CHL CHETN-IV-103, February 2015. Approved for public release; distribution is unlimited. … modeling and planning missions require metocean data (e.g., winds, waves, tides, water levels). WaveNet is a web-based graphical-user-interface (GUI) … data for project planning, design, and evaluation studies, including how to generate input files for numerical wave models. WaveNet employs a Google …

  10. Development of a computer assisted gantry system for gaining rapid and accurate calyceal access during percutaneous nephrolithotomy

    Directory of Open Access Journals (Sweden)

    A. D. Zarrabi

    2010-12-01

    PURPOSE: To design a simple, cost-effective system for gaining rapid and accurate calyceal access during percutaneous nephrolithotomy (PCNL). MATERIALS AND METHODS: The design consists of a low-cost, light-weight, portable mechanical gantry with a needle-guiding device. Using C-arm fluoroscopy, two images of the contrast-filled renal collecting system are obtained: at 0 degrees (perpendicular to the kidney) and at 20 degrees. These images are relayed to a laptop computer containing the software and graphic user interface for selecting the targeted calyx. The software provides numerical settings for the 3 axes of the gantry, which are used to position the needle-guiding device. The needle is advanced through the guide to the depth calculated by the software, thus puncturing the targeted calyx. Testing of the system was performed on 2 target types: (1) radiolucent plastic tubes of approximately the size of a renal calyx (5 or 10 mm in diameter, 30 mm in length); and (2) foam-occluded, contrast-filled porcine kidneys. RESULTS: Tests using target type 1 with 10 mm diameter (n = 14) and 5 mm diameter (n = 7) tubes resulted in a 100% targeting success rate, with a mean procedure duration of 10 minutes. Tests using target type 2 (n = 2) were both successful, with accurate puncturing of the selected renal calyx and a mean procedure duration of 15 minutes. CONCLUSIONS: The mechanical gantry system described in this paper is low-cost, portable, light-weight, and simple to set up and operate. C-arm fluoroscopy is limited to two images, thus reducing radiation exposure significantly. Testing of the system showed an extremely high degree of accuracy in gaining precise access to a targeted renal calyx.
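
The two-view geometry behind such a system can be sketched with a simplified parallel-projection model (an illustrative assumption; the actual software presumably accounts for the C-arm's cone-beam geometry and calibration): with one image at 0 degrees and one at 20 degrees, the shift of the target between the two projections encodes its depth.

```python
import math

def target_depth(u0, u_theta, theta_deg=20.0):
    """Recover the depth coordinate z of a target from two parallel-beam
    projections: u0 taken at 0 degrees (where the projection equals the
    lateral coordinate x), and u_theta taken after rotating the C-arm by
    theta degrees about the vertical axis, so u_theta = x*cos + z*sin."""
    th = math.radians(theta_deg)
    return (u_theta - u0 * math.cos(th)) / math.sin(th)

# synthetic check: a point at x = 12 mm, z = 35 mm
x, z = 12.0, 35.0
th = math.radians(20.0)
u0 = x
u20 = x * math.cos(th) + z * math.sin(th)
print(round(target_depth(u0, u20), 6))   # 35.0
```

A small rotation angle such as 20 degrees keeps both views anatomically convenient, at the cost of dividing by a small sin(theta), which amplifies measurement error in the projections.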

  11. Pred-hERG: A Novel web-Accessible Computational Tool for Predicting Cardiac Toxicity.

    Science.gov (United States)

    Braga, Rodolpho C; Alves, Vinicius M; Silva, Meryck F B; Muratov, Eugene; Fourches, Denis; Lião, Luciano M; Tropsha, Alexander; Andrade, Carolina H

    2015-10-01

    The blockage of hERG K(+) channels is closely associated with lethal cardiac arrhythmia. The notorious ligand promiscuity of this channel has earmarked hERG as one of the most important antitargets to be considered in the early stages of the drug development process. Herein we report on the development of an innovative and freely accessible web server for early identification of putative hERG blockers and non-blockers in chemical libraries. We have collected the largest publicly available curated hERG dataset of 5,984 compounds. We succeeded in developing robust and externally predictive binary (CCR≈0.8) and multiclass models (accuracy≈0.7). These models are available as a web service, freely available to the public at http://labmol.farmacia.ufg.br/predherg/. The following three outcomes are available to users: prediction by the binary model, prediction by the multi-class model, and probability maps of atomic contributions. Pred-hERG will be continuously updated and upgraded as new information becomes available. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
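
CCR here denotes the correct classification rate, the average of sensitivity and specificity, a metric robust to the imbalance between blockers and non-blockers in a screening dataset. A minimal sketch of the metric (the labels below are invented for illustration):

```python
def ccr(y_true, y_pred):
    """Correct classification rate: the mean of sensitivity (recall on
    the positive class) and specificity (recall on the negative class)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    pos = sum(1 for t in y_true if t == 1)
    neg = len(y_true) - pos
    return 0.5 * (tp / pos + tn / neg)

# toy evaluation: 4 blockers (1) and 6 non-blockers (0)
y_true = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0]
y_pred = [1, 1, 1, 0, 0, 0, 0, 0, 1, 1]
print(ccr(y_true, y_pred))   # 0.5 * (3/4 + 4/6)
```

Unlike plain accuracy, CCR cannot be inflated by always predicting the majority class, which is why it is a common choice for validating binary QSAR models.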

  12. Accessibility and reliability of cutaneous laser surgery information on the World Wide Web.

    Science.gov (United States)

    Bykowski, J L; Alora, M B; Dover, J S; Arndt, K A

    2000-05-01

    The World Wide Web has provided the public with easy and affordable access to a vast range of information. However, claims may be unsubstantiated and misleading. The purpose of this study was to use cutaneous laser surgery as a model to assess the availability and reliability of Web sites and to evaluate this resource for the quality of patient and provider education. Three commercial methods of searching the Internet were used, identifying nearly 500,000 possible sites. The first 100 sites listed by each search engine (a total of 300 sites) were compared. Of these, 126 were listed repeatedly within a given retrieval method, whereas only 3 sites were identified by all 3 search engines. After elimination of duplicates, 40 sites were evaluated for content and currency of information. The most common features included postoperative care suggestions, options for pain management or anesthesia, a description of the way in which lasers work, and the types of lasers used for different procedures. Potential contraindications to laser procedures were described on fewer than 30% of the sites reviewed. None of the sites contained substantiation of claims or referrals to peer-reviewed publications or research. Because of duplication and the prioritization systems of search engines, the ease of finding sites did not correlate with the quality of the site's content. Our findings show that advertisements for services exceed useful information.

  13. Keeping Libraries Relevant in the Semantic Web with RDA: Resource Description and Access

    Directory of Open Access Journals (Sweden)

    Barbara Tillett

    2011-12-01

    Cataloguing does not simply mean building a catalogue. It means enabling users to gain timely access to the information relevant to their needs. The work of identifying the resources collected by libraries, archives and museums yields rich metadata that can be reused for many purposes (the "user tasks"). This involves describing resources and showing their relationships to persons, families, corporate bodies and other resources, thus allowing users to navigate through surrogates of the resources to obtain the information they need more quickly. Metadata created throughout the life cycle of a resource are particularly valuable to many types of users: from resource creators to publishers, agencies, booksellers, resource aggregators, system vendors, libraries, other cultural institutions and end users. The new international cataloguing code, RDA: Resource Description and Access, is designed to support users' basic tasks by producing well-formed, interconnected metadata for the digital environment, enabling libraries to remain relevant in the semantic web. Acknowledgement: The English text of this essay is published in "Serials", November 2011, 24, 3, under the title Keeping Libraries Relevant in the Semantic Web with RDA: Resource Description and Access, DOI: http://dx.doi.org/10.1629/24266. Italian translation by Maria Chiara Iorio and Tiziana Possemato, who thank Carlo Bianchini and Mauro Guerrini for reviewing the translation.

  14. iDrug: a web-accessible and interactive drug discovery and design platform.

    Science.gov (United States)

    Wang, Xia; Chen, Haipeng; Yang, Feng; Gong, Jiayu; Li, Shiliang; Pei, Jianfeng; Liu, Xiaofeng; Jiang, Hualiang; Lai, Luhua; Li, Honglin

    2014-01-01

    The progress in computer-aided drug design (CADD) approaches over the past decades accelerated the early-stage pharmaceutical research. Many powerful standalone tools for CADD have been developed in academia. As programs are developed by various research groups, a consistent user-friendly online graphical working environment, combining computational techniques such as pharmacophore mapping, similarity calculation, scoring, and target identification is needed. We presented a versatile, user-friendly, and efficient online tool for computer-aided drug design based on pharmacophore and 3D molecular similarity searching. The web interface enables binding sites detection, virtual screening hits identification, and drug targets prediction in an interactive manner through a seamless interface to all adapted packages (e.g., Cavity, PocketV.2, PharmMapper, SHAFTS). Several commercially available compound databases for hit identification and a well-annotated pharmacophore database for drug targets prediction were integrated in iDrug as well. The web interface provides tools for real-time molecular building/editing, converting, displaying, and analyzing. All the customized configurations of the functional modules can be accessed through featured session files provided, which can be saved to the local disk and uploaded to resume or update the history work. iDrug is easy to use, and provides a novel, fast and reliable tool for conducting drug design experiments. By using iDrug, various molecular design processing tasks can be submitted and visualized simply in one browser without installing locally any standalone modeling softwares. iDrug is accessible free of charge at http://lilab.ecust.edu.cn/idrug.

  15. Urban Poor community Access to reproductive Health care in Surabaya. An Equity Analysis Using Spider Web

    Directory of Open Access Journals (Sweden)

    Ernawaty Ernawaty

    2015-01-01

    background: Poverty in urban areas triggers new problems related to access to health care services. The heavy burden of urban life means that health, and especially reproductive health, is never the first priority for the urban poor community. This study aimed to analyze the equity of the urban poor community in accessing reproductive health care. Methods: This is a descriptive study with a cross-sectional design. Seventy-eight women residents of the Penjaringansari II Flats in Surabaya were selected as respondents by simple random sampling (α = 10%). The Penjaringansari II Flats were chosen because they are a slum area inhabited by unskilled laborers and odd-job workers. The fulfillment of respondents' needs in reproductive health care was analyzed by spider web analysis. result: Most respondents (55.1%) first married before 20 years of age. Of these, 60.5% also gave birth before 20 years of age, placing them among high-risk pregnant women. The spider web area for health care of those married under age was smaller than that of those married at an ideal age, which means that the under-age-married urban poor experienced inequity in ANC health services. Approximately 10.3% of respondents had never used contraceptives because of fear of side effects and the objection of their husbands. conclusions: Better equity was shown in the prevention of cervical cancer: being perceived as poor people in need of assistance, Penjaringansari II residents were often offered free Pap smears. The poor conditions experienced by a group can thus promote not only health care inequity but also health care equity. recommendation: Health care equity for the urban poor can be pursued through aid schemes provided by either the community or the government.

  16. Maintaining the Role of Libraries in the Semantic Web through RDA: Resource Description and Access

    Directory of Open Access Journals (Sweden)

    Barbara Tillett

    2011-10-01

    Cataloguing does not simply mean building a catalogue. It means enabling users to gain timely access to the information relevant to their needs. The work of identifying the resources collected by libraries, archives and museums yields rich metadata that can be reused for many purposes (the "user tasks"). This involves describing resources and showing their relationships to persons, families, corporate bodies and other resources, thus allowing users to navigate through surrogates of the resources to obtain the information they need more quickly. Metadata created throughout the life cycle of a resource are particularly valuable to many types of users: from resource creators to publishers, agencies, booksellers, resource aggregators, system vendors, libraries, other cultural institutions and end users. The new international cataloguing code, RDA: Resource Description and Access, is designed to support users' basic tasks by producing well-formed, interconnected metadata for the digital environment, enabling libraries to remain relevant in the semantic web. Acknowledgement: The English text of this essay is published in "Serials", November 2011, 24, 3, under the title Keeping Libraries Relevant in the Semantic Web with RDA: Resource Description and Access, DOI: http://dx.doi.org/10.1629/24266. Italian translation by Maria Chiara Iorio and Tiziana Possemato, who thank Carlo Bianchini and Mauro Guerrini for reviewing the translation.

  17. Research of Communication of Lubrication Station Control System Based on WebAccess

    Institute of Scientific and Technical Information of China (English)

    巴鹏; 张雨; 焦圳

    2015-01-01

    By establishing communication between the site equipment and the WebAccess configuration software on an industrial PC, the lubrication station control system achieves oil-filling monitoring, operation control and data processing. This article uses VB to build the communication connection program that serves as the data exchange program and, combined with the WebAccess configuration software, builds a monitoring and management system for the automatic injection of oil at the lubrication station. This effectively solves the configuration software's lack of drivers as well as the monitoring system's untimely data transmission and inaccurate data records. Experimental results show that the system is easy to operate, transmits data accurately, runs stably and is easy to maintain; it represents the future development trend for lubrication stations.

  18. Development of a Web-Accessible Population Pharmacokinetic Service—Hemophilia (WAPPS-Hemo): Study Protocol

    Science.gov (United States)

    Foster, Gary; Navarro-Ruan, Tamara; McEneny-King, Alanna; Edginton, Andrea N; Thabane, Lehana

    2016-01-01

    Background Individual pharmacokinetic assessment is a critical component of tailored prophylaxis for hemophilia patients. Population pharmacokinetics allows the use of sparse individual data, thus simplifying individual pharmacokinetic studies. Implementing population pharmacokinetics capacity for the hemophilia community is beyond individual reach and requires a system-wide effort. Objective The Web-Accessible Population Pharmacokinetic Service—Hemophilia (WAPPS-Hemo) project aims to assemble a database of patient pharmacokinetic data for all existing factor concentrates, develop and validate population pharmacokinetics models, and integrate these models within a Web-based calculator for individualized pharmacokinetic estimation in patients at participating treatment centers. Methods Individual pharmacokinetic studies on factor VIII and IX concentrates will be sourced from pharmaceutical companies and independent investigators. All factor concentrate manufacturers, hemophilia treatment centers (HTCs), and independent investigators (identified via a systematic review of the literature) having pharmacokinetic data on file and willing to contribute full or sparse pharmacokinetic data will be eligible for participation. Multicompartmental modeling will be performed using a mixed-model approach for derivation and Bayesian forecasting for estimation from individual sparse data. NONMEM (ICON Development Solutions) will be used as the modeling software. Results The WAPPS-Hemo research network has been launched and is currently joined by 30 HTCs from across the world. We have gathered dense individual pharmacokinetic data on 878 subjects, including several replicates, on 21 different molecules from 17 different sources. We have collected sparse individual pharmacokinetic data on 289 subjects from the participating centers through the testing phase of the WAPPS-Hemo Web interface. We have developed prototypal population pharmacokinetics models for 11 molecules. The WAPPS-Hemo website

  19. The Live Access Server - A Web-Services Framework for Earth Science Data

    Science.gov (United States)

    Schweitzer, R.; Hankin, S. C.; Callahan, J. S.; O'Brien, K.; Manke, A.; Wang, X. Y.

    2005-12-01

    The Live Access Server (LAS) is a general purpose Web server for delivering services related to geo-science data sets. Data providers can use the LAS architecture to build custom Web interfaces to their scientific data. Users and client programs can then access the LAS site to search the provider's on-line data holdings, make plots of data, create sub-sets in a variety of formats, compare data sets and perform analysis on the data. The Live Access Server software has continued to evolve by expanding the types of data it can serve (in-situ observations and curvilinear grids) and by taking advantage of advances in software infrastructure both in the earth sciences community (THREDDS, the GrADS Data Server, the Anagram framework and Java netCDF 2.2) and in the Web community (Java Servlet and the Apache Jakarta frameworks). This presentation will explore the continued evolution of the LAS architecture towards a complete Web-services-based framework. Additionally, we will discuss the redesign and modernization of some of the support tools available to LAS installers. Soon after the initial implementation, the LAS architecture was redesigned to separate the components responsible for user interaction (the User Interface Server) from the components responsible for interacting with the data and producing the output requested by the user (the Product Server). During this redesign, we changed the implementation of the User Interface Server from CGI and JavaScript to the Java Servlet specification using Apache Jakarta Velocity, backed by a database store holding the user interface widget components. The User Interface Server is now quite flexible and highly configurable because we modernized the components used for the implementation. Meanwhile, the implementation of the Product Server has remained a Perl CGI-based system. Clearly, the time has come to modernize this part of the LAS architecture. Before undertaking such a modernization it is

  20. The Accessibility, Usability, and Reliability of Chinese Web-Based Information on HIV/AIDS

    Science.gov (United States)

    Niu, Lu; Luo, Dan; Liu, Ying; Xiao, Shuiyuan

    2016-01-01

    Objective: The present study was designed to assess the quality of Chinese-language Internet-based information on HIV/AIDS. Methods: We entered the following search terms, in Chinese, into Baidu and Sogou: “HIV/AIDS”, “symptoms”, and “treatment”, and evaluated the first 50 hits of each query using the Minervation validation instrument (LIDA tool) and the DISCERN instrument. Results: Of the 900 hits identified, 85 websites were included in this study. The overall score on the LIDA tool was 63.7%; the mean scores for accessibility, usability, and reliability were 82.2%, 71.5%, and 27.3%, respectively. For the top 15 sites according to the LIDA score, the mean DISCERN score was 43.1 (95% confidence interval (CI) = 37.7–49.5). Noncommercial websites showed higher DISCERN scores than commercial websites, whereas commercial websites were more likely to be found in the first 20 links returned by each search engine. Conclusions: In general, HIV/AIDS-related Chinese-language websites have poor reliability, although their accessibility and usability are fair. In addition, the treatment information presented on Chinese-language websites is far from sufficient. There is an imperative need for professionals and specialized institutes to improve the comprehensiveness of web-based information related to HIV/AIDS. PMID:27556475

  1. A Data Capsule Framework For Web Services: Providing Flexible Data Access Control To Users

    CERN Document Server

    Kannan, Jayanthkumar; Chun, Byung-Gon

    2010-01-01

    This paper introduces the notion of a secure data capsule, which refers to an encapsulation of sensitive user information (such as a credit card number) along with code that implements an interface suitable for the use of such information (such as charging for purchases) by a service (such as an online merchant). In our capsule framework, users provide their data to web services in the form of such capsules rather than as raw data. Capsules can be deployed in a variety of ways: on a trusted third party, on the user's own computer, or at the service itself, through the use of a variety of hardware or software modules, such as a virtual machine monitor or trusted platform module. The only requirement is that the deployment mechanism must ensure that the user's data is accessed only via the interface sanctioned by the user. The framework further allows a user to specify policies regarding which services or machines may host her capsule, what parties are allowed to access the interface, and with what parameter...

  2. A New Network Covert Channel Based on Web Access Model

    Institute of Scientific and Technical Information of China (English)

    廖晓锋; 邱桂华

    2013-01-01

    A network covert channel is a communication method that hides confidential information within a normal network transmission protocol. Because a network covert timing channel does not modify the contents of network packets, it is more difficult to detect and restrict, and therefore poses a greater threat. This paper presents a new network covert timing channel based on a Web access model, in which a malicious user transfers confidential information by accessing the Web server at regular intervals. We implement a prototype of this covert channel and present an analysis of its performance.
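
The core idea of a timing channel can be sketched in a few lines (this simulation is illustrative, not the authors' prototype; the delay values and threshold are invented): covert bits are encoded as short or long gaps between otherwise ordinary Web requests, and the receiver recovers them by thresholding the observed inter-arrival times.

```python
def encode(bits, short=0.1, long=0.4):
    """Map each covert bit to an inter-request delay in seconds:
    a short gap encodes 0, a long gap encodes 1."""
    return [long if b else short for b in bits]

def decode(delays, threshold=0.25):
    """Recover bits from the gaps observed between Web requests."""
    return [1 if d > threshold else 0 for d in delays]

secret = [1, 0, 1, 1, 0]
print(decode(encode(secret)) == secret)   # True
```

Because the packets themselves are unmodified valid HTTP requests, detection must rely on statistical regularities in the timing, which is precisely what makes such channels hard to restrict.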

  3. Occupancy by key transcription factors is a more accurate predictor of enhancer activity than histone modifications or chromatin accessibility.

    Science.gov (United States)

    Dogan, Nergiz; Wu, Weisheng; Morrissey, Christapher S; Chen, Kuan-Bei; Stonestrom, Aaron; Long, Maria; Keller, Cheryl A; Cheng, Yong; Jain, Deepti; Visel, Axel; Pennacchio, Len A; Weiss, Mitchell J; Blobel, Gerd A; Hardison, Ross C

    2015-01-01

    Regulated gene expression controls organismal development, and variation in regulatory patterns has been implicated in complex traits. Thus accurate prediction of enhancers is important for further understanding of these processes. Genome-wide measurement of epigenetic features, such as histone modifications and occupancy by transcription factors, is improving enhancer predictions, but the contribution of these features to prediction accuracy is not known. Given the importance of the hematopoietic transcription factor TAL1 for erythroid gene activation, we predicted candidate enhancers based on genomic occupancy by TAL1 and measured their activity. Contributions of multiple features to enhancer prediction were evaluated based on the results of these and other studies. TAL1-bound DNA segments were active enhancers at a high rate both in transient transfections of cultured cells (39 of 79, or 56%) and transgenic mice (43 of 66, or 65%). The level of binding signal for TAL1 or GATA1 did not help distinguish TAL1-bound DNA segments as active versus inactive enhancers, nor did the density of regulation-related histone modifications. A meta-analysis of results from this and other studies (273 tested predicted enhancers) showed that the presence of TAL1, GATA1, EP300, SMAD1, H3K4 methylation, H3K27ac, and CAGE tags at DNase hypersensitive sites gave the most accurate predictors of enhancer activity, with a success rate over 80% and a median threefold increase in activity. Chromatin accessibility assays and the histone modifications H3K4me1 and H3K27ac were sensitive for finding enhancers, but they have high false positive rates unless transcription factor occupancy is also included. Occupancy by key transcription factors such as TAL1, GATA1, SMAD1, and EP300, along with evidence of transcription, improves the accuracy of enhancer predictions based on epigenetic features.

  4. J-TEXT WebScope: An efficient data access and visualization system for long pulse fusion experiment

    Energy Technology Data Exchange (ETDEWEB)

    Zheng, Wei, E-mail: zhenghaku@gmail.com [State Key Laboratory of Advanced Electromagnetic Engineering and Technology in Huazhong University of Science and Technology, Wuhan 430074 (China); School of Electrical and Electronic Engineering in Huazhong University of Science and Technology, Wuhan 430074 (China); Wan, Kuanhong; Chen, Zhi; Hu, Feiran; Liu, Qiang [State Key Laboratory of Advanced Electromagnetic Engineering and Technology in Huazhong University of Science and Technology, Wuhan 430074 (China); School of Electrical and Electronic Engineering in Huazhong University of Science and Technology, Wuhan 430074 (China)

    2016-11-15

Highlights: • No matter how large the data is, the response time is always less than 500 milliseconds. • It is intelligent and gives you just the data you want. • It can be accessed directly over the Internet without installing special client software if you already have a browser. • Adopts scale and segment technology to organize data. • Supporting a new database in WebScope is quite easy. • With the configuration stored in the user's profile, you have your own portable WebScope. - Abstract: Fusion research is an international collaborative effort. To enable researchers across the world to visualize and analyze the experiment data, a web-based data access and visualization tool is quite important [1]. Now, a new WebScope based on RIA (Rich Internet Application) is designed and implemented to meet these requirements. On the browser side, a fluent and intuitive interface is provided for researchers at the J-TEXT laboratory and collaborators from all over the world to view experiment data and related metadata. Fusion experiments will feature long pulses and high sampling rates in the future. The data access and visualization system in this work has adopted the segment and scale concept. Large data samples are re-sampled at different scales and then split into segments for instant response. It allows users to view extremely large data on the web browser efficiently, without worrying about the limitation on the size of the data. The HTML5 and JavaScript based web front-end can provide an intuitive and fluent user experience. On the server side, a RESTful (Representational State Transfer) web API, which is based on ASP.NET MVC (Model View Controller), allows users to access the data and its metadata through HTTP (HyperText Transfer Protocol). An interface to the database has been designed to decouple the data access and visualization system from the data storage. It can be applied upon any data storage system like MDSplus or JTEXTDB, and this system is very easy to
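The segment-and-scale organization described in the abstract above can be sketched as follows. This is a hypothetical illustration, not the J-TEXT implementation: `SEGMENT_SIZE`, `build_pyramid`, and `query` are invented names, and the real system stores the pre-computed scales server-side. The idea is that every zoom level is answered from a decimated copy, so the response size (and therefore time) stays bounded however long the pulse is.

```python
SEGMENT_SIZE = 1000  # max samples returned per request (illustrative value)

def build_pyramid(samples, factor=10):
    """Pre-compute decimated copies of a long signal: scale 0 is the raw
    data, and each further scale keeps every `factor`-th sample of the
    previous one (min/max pairing could be used instead of plain striding)."""
    pyramid = [samples]
    while len(pyramid[-1]) > SEGMENT_SIZE:
        pyramid.append(pyramid[-1][::factor])
    return pyramid

def query(pyramid, start, end, factor=10):
    """Serve the raw-sample range [start, end) with a bounded number of
    points by picking the coarsest scale that still resolves the range."""
    scale = 0
    while (end - start) // factor ** scale > SEGMENT_SIZE and scale + 1 < len(pyramid):
        scale += 1
    stride = factor ** scale
    return pyramid[scale][start // stride : -(-end // stride)]
```

A zoomed-out request over a million samples and a zoomed-in request over a few hundred both return at most about `SEGMENT_SIZE` points, which is what keeps the response time flat.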

  5. A web accessible scientific workflow system for vadose zone performance monitoring: design and implementation examples

    Science.gov (United States)

    Mattson, E.; Versteeg, R.; Ankeny, M.; Stormberg, G.

    2005-12-01

    Long term performance monitoring has been identified by DOE, DOD and EPA as one of the most challenging and costly elements of contaminated site remedial efforts. Such monitoring should provide timely and actionable information relevant to a multitude of stakeholder needs. This information should be obtained in a manner which is auditable, cost effective and transparent. Over the last several years, INL staff have designed and implemented a web accessible scientific workflow system for environmental monitoring. This workflow environment integrates distributed, automated data acquisition from diverse sensors (geophysical, geochemical and hydrological) with server side data management and information visualization through flexible browser based data access tools. Component technologies include a rich browser-based client (using dynamic JavaScript and HTML/CSS) for data selection, a back-end server which uses PHP for data processing, user management, and result delivery, and third party applications which are invoked by the back-end using web services. This system has been implemented and is operational for several sites, including the Ruby Gulch Waste Rock Repository (a capped mine waste rock dump on the Gilt Edge Mine Superfund Site), the INL Vadose Zone Research Park and an alternative cover landfill. Implementations for other vadose zone sites are currently in progress. These systems allow for autonomous performance monitoring through automated data analysis and report generation. This performance monitoring has allowed users to obtain insights into system dynamics, regulatory compliance and residence times of water. Our system uses modular components for data selection and graphing and WSDL compliant web services for external functions such as statistical analyses and model invocations. Thus, implementing this system for novel sites and extending functionality (e.g. adding novel models) is relatively straightforward. As system access requires a standard web browser

  6. A Promising Practicum Pilot--Exploring Associate Teachers' Access and Interactions with a Web-Based Learning Tool

    Science.gov (United States)

    Petrarca, Diana

    2013-01-01

    This paper explores how a small group of associate teachers (i.e., the classroom teachers who host, supervise, and mentor teacher candidates during practicum placements) accessed and interacted with the Associate Teacher Learning Tool (ATLT), a web-based learning tool created specifically for this new group of users. The ATLT is grounded in…

  7. Research on Application of Metacognitive Strategy in English Listening in the Web-based Self-access Learning Environment

    Institute of Scientific and Technical Information of China (English)

    罗雅清

    2012-01-01

    Metacognitive strategies are regarded as advanced strategies among all learning strategies. This study focuses on the application of metacognitive strategies in English listening in the web-based self-access learning environment (WSLE) and tries to provide some references for students and teachers in vocational colleges.

  8. Development of Remote Monitoring and a Control System Based on PLC and WebAccess for Learning Mechatronics

    Directory of Open Access Journals (Sweden)

    Wen-Jye Shyr

    2013-02-01

    Full Text Available This study develops a novel method for learning mechatronics using remote monitoring and control, based on a programmable logic controller (PLC) and WebAccess. A mechatronics module, a Web-CAM and a PLC were integrated with WebAccess software to organize a remote laboratory. The proposed system enables users to access the Internet for remote monitoring and control of the mechatronics module via a web browser, thereby enhancing work flexibility by enabling personnel to control mechatronics equipment from a remote location. Mechatronics control and long-distance monitoring were realized by establishing communication between the PLC and WebAccess. Analytical results indicate that the proposed system is feasible. The suitability of this system is demonstrated in the Department of Industrial Education and Technology at National Changhua University of Education, Taiwan. Preliminary evaluation of the system was encouraging and has shown that it has achieved success in helping students understand concepts and master remote monitoring and control techniques.

  9. Factors explaining adoption and implementation processes for web accessibility standards within eGovernment systems and organizations

    NARCIS (Netherlands)

    Velleman, Eric M.; Nahuis, Inge; Geest, van der Thea

    2015-01-01

    Local government organizations such as municipalities often seem unable to fully adopt or implement web accessibility standards even if they are actively pursuing it. Based on existing adoption models, this study identifies factors in five categories that influence the adoption and implementation of

  10. 78 FR 67881 - Nondiscrimination on the Basis of Disability in Air Travel: Accessibility of Web Sites and...

    Science.gov (United States)

    2013-11-12

    ... section 508 standards for Web content, forms and applications.\\12\\ \\10\\ See 75 FR 43460-43467 (July 26... information and communication technology (ICT) procurements that specifically proposes WCAG 2.0 Level AA as... Factors (HF); Accessibility Requirements for Public Procurement of ICT products and services in...

  11. Retrieval of very large numbers of items in the Web of Science: an exercise to develop accurate search strategies

    NARCIS (Netherlands)

    Arencibia-Jorge, R.; Leydesdorff, L.; Chinchilla-Rodríguez, Z.; Rousseau, R.; Paris, S.W.

    2009-01-01

    The Web of Science interface counts at most 100,000 retrieved items from a single query. If the query results in a dataset containing more than 100,000 items the number of retrieved items is indicated as >100,000. The problem studied here is how to find the exact number of items in a query that lead

  12. A Secure Web Application Providing Public Access to High-Performance Data Intensive Scientific Resources - ScalaBLAST Web Application

    Energy Technology Data Exchange (ETDEWEB)

    Curtis, Darren S.; Peterson, Elena S.; Oehmen, Chris S.

    2008-05-04

    This work presents the ScalaBLAST Web Application (SWA), a web based application implemented using the PHP script language, MySQL DBMS, and Apache web server under a GNU/Linux platform. SWA is an application built as part of the Data Intensive Computer for Complex Biological Systems (DICCBS) project at the Pacific Northwest National Laboratory (PNNL). SWA delivers accelerated throughput of bioinformatics analysis via high-performance computing through a convenient, easy-to-use web interface. This approach greatly enhances emerging fields of study in biology such as ontology-based homology, and multiple whole genome comparisons which, in the absence of a tool like SWA, require a heroic effort to overcome the computational bottleneck associated with genome analysis. The current version of SWA includes a user account management system, a web based user interface, and a backend process that generates the files necessary for the Internet scientific community to submit a ScalaBLAST parallel processing job on a dedicated cluster.

  13. New Web Services for Broader Access to National Deep Submergence Facility Data Resources Through the Interdisciplinary Earth Data Alliance

    Science.gov (United States)

    Ferrini, V. L.; Grange, B.; Morton, J. J.; Soule, S. A.; Carbotte, S. M.; Lehnert, K.

    2016-12-01

    The National Deep Submergence Facility (NDSF) operates the Human Occupied Vehicle (HOV) Alvin, the Remotely Operated Vehicle (ROV) Jason, and the Autonomous Underwater Vehicle (AUV) Sentry. These vehicles are deployed throughout the global oceans to acquire sensor data and physical samples for a variety of interdisciplinary science programs. As part of the EarthCube Integrative Activity Alliance Testbed Project (ATP), new web services were developed to improve access to existing online NDSF data and metadata resources. These services make use of tools and infrastructure developed by the Interdisciplinary Earth Data Alliance (IEDA) and enable programmatic access to metadata and data resources as well as the development of new service-driven user interfaces. The Alvin Frame Grabber and Jason Virtual Van enable the exploration of frame-grabbed images derived from video cameras on NDSF dives. Metadata available for each image include time and vehicle position, data from environmental sensors, and scientist-generated annotations, and data are organized and accessible by cruise and/or dive. A new FrameGrabber web service and service-driven user interface were deployed to offer integrated access to these data resources through a single API and allow users to search across content curated in both systems. In addition, a new NDSF Dive Metadata web service and service-driven user interface were deployed to provide consolidated access to basic information about each NDSF dive (e.g., vehicle name, dive ID, location), which is important for linking distributed data resources curated in different data systems.

  14. Global Location-Based Access to Web Applications Using Atom-Based Automatic Update

    Science.gov (United States)

    Singh, Kulwinder; Park, Dong-Won

    We propose an architecture which enables people to enquire about information available in directory services by voice using regular phones. We implement a Virtual User Agent (VUA) which mediates between the human user and a business directory service. The system enables the user to search for the nearest clinic, gas station by price, motel by price, food/coffee, banks/ATMs, etc., and fix an appointment, or automatically establish a call between the user and the business party if the user prefers. The user also has an option to receive appointment confirmation by phone, SMS, or e-mail. The VUA is accessible by a toll free DID (Direct Inward Dialing) number using a phone by anyone, anywhere, anytime. We use the Euclidean formula for distance measurement, since shorter geodesic distances (on the Earth's surface) correspond to shorter Euclidean distances (measured by a straight line through the Earth). Our proposed architecture uses the Atom XML syndication format for data integration, VoiceXML for creating the voice user interface (VUI) and CCXML for controlling the call components. We also provide an efficient algorithm for parsing the Atom feeds which provide data to the system. Moreover, we describe a cost-effective way to provide global access to the VUA based on Asterisk (an open source IP-PBX). We also describe how our system can be integrated with GPS for locating the user coordinates and thereby efficiently enhancing the system response. Additionally, the system has a mechanism for validating the phone numbers in its database, and it updates the numbers and other information, such as the daily price of gas or motels, automatically using an Atom-based feed. Currently, commercial directory services (e.g., 411) do not have facilities to update their listings automatically, which is why callers often get out-of-date phone numbers or other information. Our system can be integrated very easily
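The distance argument in the abstract above relies on the chord length through the Earth being monotonic in surface (geodesic) distance, so nearest-business ranking can use the cheap Euclidean formula on Cartesian coordinates. A minimal sketch; the function names and the business-record layout are invented for illustration, not taken from the paper:

```python
import math

EARTH_RADIUS_KM = 6371.0  # mean Earth radius

def chord_distance_km(lat1, lon1, lat2, lon2):
    """Straight-line distance through the Earth between two points given in
    degrees. Chord length grows monotonically with geodesic distance, so it
    ranks candidates identically for nearest-neighbor search."""
    def to_xyz(lat, lon):
        la, lo = math.radians(lat), math.radians(lon)
        return (EARTH_RADIUS_KM * math.cos(la) * math.cos(lo),
                EARTH_RADIUS_KM * math.cos(la) * math.sin(lo),
                EARTH_RADIUS_KM * math.sin(la))
    return math.dist(to_xyz(lat1, lon1), to_xyz(lat2, lon2))

def nearest(user_lat, user_lon, businesses):
    """Pick the listing with the smallest chord distance to the user."""
    return min(businesses,
               key=lambda b: chord_distance_km(user_lat, user_lon,
                                               b["lat"], b["lon"]))
```

The chord slightly underestimates the surface distance, but since the ordering of candidates is preserved, the cheaper formula is sufficient for "nearest gas station" style queries.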

  15. A revaluation of the cultural dimension of disability policy in the European Union: the impact of digitization and web accessibility.

    Science.gov (United States)

    Ferri, Delia; Giannoumis, G Anthony

    2014-01-01

    Reflecting the commitments undertaken by the EU through the conclusion of the United Nations Convention on the Rights of Persons with Disabilities (UNCRPD), the European Disability Strategy 2010–2020 not only gives a prominent position to accessibility, broadly interpreted, but also suggests an examination of the obligations for access to cultural goods and services. The European Disability Strategy 2010–2020 expressly acknowledges that EU action will support national activities to make sports, leisure, cultural and recreational organizations and activities accessible, and use the possibilities for copyright exceptions in the Directive 2001/29/EC (Infosoc Directive). This article discusses to what extent the EU has realized the principle of accessibility and the right to access cultural goods and services envisaged in the UNCRPD. Previous research has yet to explore how web accessibility and digitization interact with the cultural dimension of disability policy in the European Union. This examination attempts to fill this gap by discussing to what extent the European Union has put this cultural dimension into effect and how web accessibility policies and the digitization of cultural materials influence these efforts.

  16. Retrieval of very large numbers of items in the Web of Science: an exercise to develop accurate search strategies

    CERN Document Server

    Arencibia-Jorge, Ricardo; Chinchilla-Rodriguez, Zaida; Rousseau, Ronald; Paris, Soren W

    2009-01-01

    This communication presents a simple exercise aimed at solving a singular problem: the retrieval of extremely large numbers of items through the Web of Science interface. As is known, the Web of Science interface allows a user to obtain at most 100,000 items from a single query. But what about queries whose results exceed 100,000 items? The exercise developed one possible way to achieve this objective. The case study is the retrieval of the entire scientific production of the United States in a specific year. Different sections of items were retrieved using the database's Source field. Then, a simple Boolean statement was created to eliminate overlap and improve the accuracy of the search strategy. The importance of team work in the development of advanced search strategies was noted.
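The partition-and-exclude bookkeeping described above can be illustrated with a toy sketch. This is hypothetical code, not part of the study: the actual exercise was performed in the Web of Science interface, and the `SO=` field syntax here is only schematic. Sources are batched so each query stays under the cap, and each batch Boolean-excludes the sources already retrieved, so the slice counts are disjoint and sum exactly to the total:

```python
def slice_queries(source_counts, cap=100_000):
    """Given per-source record counts, emit (query, expected_count) pairs,
    each expected to stay under `cap` items; NOT-clauses exclude sources
    already covered, guaranteeing disjoint, exhaustive slices."""
    queries, batch, done, total = [], [], [], 0
    for source, n in sorted(source_counts.items()):
        if batch and total + n > cap:
            clause = "SO=(" + " OR ".join(batch) + ")"
            if done:  # exclude everything retrieved by earlier slices
                clause += " NOT SO=(" + " OR ".join(done) + ")"
            queries.append((clause, total))
            done += batch
            batch, total = [], 0
        batch.append(source)
        total += n
    if batch:
        clause = "SO=(" + " OR ".join(batch) + ")"
        if done:
            clause += " NOT SO=(" + " OR ".join(done) + ")"
        queries.append((clause, total))
    return queries
```

Because the slices never overlap, summing the per-slice counts gives the exact total even when the interface itself only reports ">100,000" for the combined query.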

  17. PSI/TM-Coffee: a web server for fast and accurate multiple sequence alignments of regular and transmembrane proteins using homology extension on reduced databases.

    Science.gov (United States)

    Floden, Evan W; Tommaso, Paolo D; Chatzou, Maria; Magis, Cedrik; Notredame, Cedric; Chang, Jia-Ming

    2016-07-08

    The PSI/TM-Coffee web server performs multiple sequence alignment (MSA) of proteins by combining homology extension with a consistency based alignment approach. Homology extension is performed with Position Specific Iterative (PSI) BLAST searches against a choice of redundant and non-redundant databases. The main novelty of this server is to allow databases of reduced complexity to rapidly perform homology extension. This server also gives the possibility to use transmembrane proteins (TMPs) reference databases to allow even faster homology extension on this important category of proteins. Aside from an MSA, the server also outputs topological prediction of TMPs using the HMMTOP algorithm. Previous benchmarking of the method has shown this approach outperforms the most accurate alignment methods such as MSAProbs, Kalign, PROMALS, MAFFT, ProbCons and PRALINE™. The web server is available at http://tcoffee.crg.cat/tmcoffee. © The Author(s) 2016. Published by Oxford University Press on behalf of Nucleic Acids Research.

  18. Online Access to Weather Satellite Imagery Through the World Wide Web

    Science.gov (United States)

    Emery, W.; Baldwin, D.

    1998-01-01

    Both global area coverage (GAC) and high-resolution picture transmission (HRPT) data from the Advanced Very High Resolution Radiometer (AVHRR) are made available to Internet users through an online data access system. Older GOES-7 data are also available. Created as a "testbed" data system for NASA's future Earth Observing System Data and Information System (EOSDIS), this testbed provides an opportunity to test both the technical requirements of an online data system and the different ways in which the general user community would employ such a system. Initiated in December 1991, the basic data system experienced five major evolutionary changes in response to user requests and requirements. Features added with these changes were the addition of online browse, user subsetting, dynamic image processing/navigation, a stand-alone data storage system, and movement from an X-windows graphical user interface (GUI) to a World Wide Web (WWW) interface. Over its lifetime, the system has had as many as 2500 registered users. The system on the WWW has had over 2500 hits since October 1995. Many of these hits are by casual users that only take the GIF images directly from the interface screens and do not specifically order digital data. Still, there is a consistent stream of users ordering the navigated image data and related products (maps and so forth). We have recently added a real-time, seven-day, northwestern United States normalized difference vegetation index (NDVI) composite that has generated considerable interest. Index Terms: data system, earth science, online access, satellite data.
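The NDVI composite mentioned at the end of the abstract is a simple per-pixel computation. A sketch, under the assumption of a maximum-value composite (the abstract does not state which compositing rule was used, so that choice and both function names are illustrative):

```python
def ndvi(nir, red):
    """Normalized Difference Vegetation Index for one pixel:
    NDVI = (NIR - Red) / (NIR + Red), in [-1, 1]; dense green
    vegetation typically scores above ~0.3."""
    denom = nir + red
    return 0.0 if denom == 0 else (nir - red) / denom

def seven_day_composite(daily_ndvi):
    """Per-pixel maximum over the week: clouds depress NDVI, so keeping
    the maximum across days suppresses cloud contamination (a common
    compositing choice; assumed here, not stated in the abstract)."""
    return [max(days) for days in zip(*daily_ndvi)]
```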

  19. MATISSE a web-based tool to access, visualize and analyze high resolution minor bodies observation

    Science.gov (United States)

    Zinzi, Angelo; Capria, Maria Teresa; Palomba, Ernesto; Antonelli, Lucio Angelo; Giommi, Paolo

    2016-07-01

    In recent years, planetary exploration missions have acquired data from minor bodies (i.e., dwarf planets, asteroids and comets) at a level of detail never reached before. Since these objects often present very irregular shapes (as in the case of comet 67P/Churyumov-Gerasimenko, target of the ESA Rosetta mission), "classical" two-dimensional projections of observations are difficult to understand. With the aim of providing the scientific community a tool to access, visualize and analyze data in a new way, the ASI Science Data Center started to develop MATISSE (Multi-purposed Advanced Tool for the Instruments for the Solar System Exploration - http://tools.asdc.asi.it/matisse.jsp) in late 2012. This tool allows 3D web-based visualization of data acquired by planetary exploration missions: the output can be either the straightforward projection of the selected observation over the shape model of the target body or the visualization of a higher-order product (average/mosaic, difference, ratio, RGB) computed directly online with MATISSE. Standard outputs of the tool also comprise downloadable files to be used with GIS software (GeoTIFF and ENVI formats) and very high-resolution 3D files to be viewed by means of the free software ParaView. During this period the first and most frequent use of the tool has been the visualization of data acquired by the VIRTIS-M instrument onboard Rosetta observing comet 67P. The success of this task, well represented by the good number of published works that used images made with MATISSE, confirmed the need for a different approach to correctly visualize data coming from irregularly shaped bodies. In the near future the datasets available to MATISSE are planned to be extended, starting with the addition of VIR-Dawn observations of both Vesta and Ceres, and also using standard protocols to access data stored in external repositories, such as NASA ODE and the Planetary VO.

  20. Declarative Access Control for WebDSL: Combining Language Integration and Separation of Concerns

    NARCIS (Netherlands)

    Groenewegen, D.; Visser, E.

    2008-01-01

    Preprint of paper published in: ICWE 2008 - 8th International Conference on Web Engineering, 14-18 July 2008; doi:10.1109/ICWE.2008.15 In this paper, we present the extension of WebDSL, a domain-specific language for web application development, with abstractions for declarative definition of acces

  1. Study of HTML Meta-Tags Utilization in Web-based Open-Access Journals

    Directory of Open Access Journals (Sweden)

    Pegah Pishva

    2007-04-01

    Full Text Available The present study investigates the extent of utilization of two meta-tags – “keywords” and “descriptors” – in web-based open-access journals. A sample of 707 journals taken from DOAJ was analyzed for utilization of the said meta-tags. Findings demonstrated that these journals utilized the “keywords” and “descriptors” meta-tags at rates of 33.1% and 29.9%, respectively. It was further demonstrated that among the various subject classifications, “General Journals” had the highest and “Mathematics and Statistics Journals” the lowest utilization of the “keywords” meta-tag. Moreover, “General Journals” and “Chemistry Journals”, with 55.6% and 15.4% utilization respectively, had the highest and the lowest “descriptors” meta-tag usage rates. Based on our findings, and when compared against similar research findings, there has been no significant growth in the utilization of these meta-tags.

  2. Managing Large Scale Project Analysis Teams through a Web Accessible Database

    Science.gov (United States)

    O'Neil, Daniel A.

    2008-01-01

    Large scale space programs analyze thousands of requirements while mitigating safety, performance, schedule, and cost risks. These efforts involve a variety of roles with interdependent use cases and goals. For example, study managers and facilitators identify ground-rules and assumptions for a collection of studies required for a program or project milestone. Task leaders derive product requirements from the ground rules and assumptions and describe activities to produce needed analytical products. Discipline specialists produce the specified products and load results into a file management system. Organizational and project managers provide the personnel and funds to conduct the tasks. Each role has responsibilities to establish information linkages and provide status reports to management. Projects conduct design and analysis cycles to refine designs to meet the requirements and implement risk mitigation plans. At the program level, integrated design and analysis cycle studies are conducted to eliminate every 'to-be-determined' and develop plans to mitigate every risk. At the agency level, strategic studies analyze different approaches to exploration architectures and campaigns. This paper describes a web-accessible database developed by NASA to coordinate and manage tasks at three organizational levels. Other topics in this paper cover integration technologies and techniques for process modeling and enterprise architectures.

  3. Human membrane transporter database: a Web-accessible relational database for drug transport studies and pharmacogenomics.

    Science.gov (United States)

    Yan, Q; Sadée, W

    2000-01-01

    The human genome contains numerous genes that encode membrane transporters and related proteins. For drug discovery, development, and targeting, one needs to know which transporters play a role in drug disposition and effects. Moreover, genetic polymorphisms in human membrane transporters may contribute to interindividual differences in the response to drugs. Pharmacogenetics, and, on a genome-wide basis, pharmacogenomics, address the effect of genetic variants on an individual's response to drugs and xenobiotics. However, our knowledge of the relevant transporters is limited at present. To facilitate the study of drug transporters on a broad scale, including the use of microarray technology, we have constructed a human membrane transporter database (HMTD). Even though it is still largely incomplete, the database contains information on more than 250 human membrane transporters, such as sequence, gene family, structure, function, substrate, tissue distribution, and genetic disorders associated with transporter polymorphisms. Readers are invited to submit additional data. Implemented as a relational database, HMTD supports complex biological queries. Accessible through a Web browser user interface via Common Gateway Interface (CGI) and Java Database Connectivity (JDBC), HMTD also provides useful links and references, allowing interactive searching and downloading of data. Taking advantage of the features of an electronic journal, this paper serves as an interactive tutorial for using the database, which we expect to develop into a research tool.
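The "complex biological queries" a relational transporter database supports reduce to ordinary SQL. A miniature stand-in using Python's built-in sqlite3; the schema, column names, and rows below are invented for illustration and are not HMTD's actual tables:

```python
import sqlite3

# Build a tiny in-memory stand-in for a transporter table
# (hypothetical schema and example rows).
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE transporter (
                    symbol TEXT, family TEXT, substrate TEXT, tissue TEXT)""")
conn.executemany("INSERT INTO transporter VALUES (?, ?, ?, ?)", [
    ("ABCB1",   "ABC", "digoxin",   "intestine"),
    ("SLC22A1", "SLC", "metformin", "liver"),
    ("ABCG2",   "ABC", "topotecan", "placenta"),
])

# Example query: all ABC-family transporters, ordered by gene symbol.
rows = conn.execute(
    "SELECT symbol FROM transporter WHERE family = ? ORDER BY symbol",
    ("ABC",)).fetchall()
```

Here `rows` is `[("ABCB1",), ("ABCG2",)]`; a web front-end (CGI or JDBC, as in the abstract) simply renders such result sets as linked tables.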

  4. Access to a syllabus of human hemoglobin variants (1996) via the World Wide Web.

    Science.gov (United States)

    Hardison, R C; Chui, D H; Riemer, C R; Miller, W; Carver, M F; Molchanova, T P; Efremov, G D; Huisman, T H

    1998-03-01

    Information on mutations in human hemoglobin is important in many efforts, including understanding the pathophysiology of hemoglobin diseases, developing therapies, elucidating the dynamics of sequence alterations inhuman populations, and dissecting the details of protein structure/function relationships. Currently, information is available on a large number of mutations and variants, but is distributed among thousands of papers. In an effort to organize this voluminous data set, two Syllabi have been prepared compiling succinct information on human hemoglobin abnormalities. In both of these, each entry provides amino acid and/or DNA sequence alterations, hematological and clinical data, methodology used for characterization, ethnic distribution, and functional properties and stability of the hemoglobin, together with appropriate literature references. A Syllabus of Human Hemoglobin Variants (1996) describes 693 abnormal hemoglobins resulting from alterations in the alpha-, beta-, gamma-, and delta-globin chains, including special abnormalities such as double mutations, hybrid chains, elongated chains, deletions, and insertions. We have converted this resource to an electronic form that is accessible via the World Wide Web at the Globin Gene Server (http://globin.cse.psu.edu). Hyperlinks are provided from each entry in the tables of variants to the corresponding full description. In addition, a simple query interface allows the user to find all entries containing a designated word or phrase. We are in the process of converting A Syllabus of Thalassemia Mutations (1997) to a similar electronic format.

  5. A web-accessible content-based cervicographic image retrieval system

    Science.gov (United States)

    Xue, Zhiyun; Long, L. Rodney; Antani, Sameer; Jeronimo, Jose; Thoma, George R.

    2008-03-01

    Content-based image retrieval (CBIR) is the process of retrieving images by directly using image visual characteristics. In this paper, we present a prototype system implemented for CBIR for a uterine cervix image (cervigram) database. This cervigram database is a part of data collected in a multi-year longitudinal effort by the National Cancer Institute (NCI), and archived by the National Library of Medicine (NLM), for the study of the origins of, and factors related to, cervical precancer/cancer. Users may access the system with any Web browser. The system is built with a distributed architecture which is modular and expandable; the user interface is decoupled from the core indexing and retrieving algorithms, and uses open communication standards and open source software. The system tries to bridge the gap between a user's semantic understanding and image feature representation, by incorporating the user's knowledge. Given a user-specified query region, the system returns the most similar regions from the database, with respect to attributes of color, texture, and size. Experimental evaluation of the retrieval performance of the system on "groundtruth" test data illustrates its feasibility to serve as a possible research tool to aid the study of the visual characteristics of cervical neoplasia.
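The retrieval-by-visual-similarity idea above can be sketched for the color attribute alone (the actual system also uses texture and size, and its feature definitions are not given in the abstract; the histogram scheme and all names below are illustrative):

```python
def color_histogram(pixels, bins=4):
    """Quantize 8-bit RGB pixels into bins**3 color buckets and return a
    normalized histogram: the region's color signature."""
    hist = [0.0] * (bins ** 3)
    for r, g, b in pixels:
        idx = ((r * bins // 256) * bins * bins
               + (g * bins // 256) * bins
               + (b * bins // 256))
        hist[idx] += 1
    total = sum(hist) or 1.0
    return [h / total for h in hist]

def histogram_intersection(h1, h2):
    """Similarity in [0, 1]; 1 means identical color distributions."""
    return sum(min(a, b) for a, b in zip(h1, h2))

def retrieve(query_pixels, database):
    """Rank stored regions by color similarity to the query region."""
    q = color_histogram(query_pixels)
    return sorted(database,
                  key=lambda item: -histogram_intersection(
                      q, color_histogram(item["pixels"])))
```

A real CBIR system would precompute and index the signatures rather than rescan every region per query, but the ranking step is the same.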

  6. ProtSA: a web application for calculating sequence specific protein solvent accessibilities in the unfolded ensemble

    Directory of Open Access Journals (Sweden)

    Blackledge Martin

    2009-04-01

    Full Text Available Abstract Background The stability of proteins is governed by the heat capacity, enthalpy and entropy changes of folding, which are strongly correlated to the change in solvent accessible surface area experienced by the polypeptide. While the surface exposed in the folded state can be easily determined, accessibilities for the unfolded state at the atomic level cannot be obtained experimentally and are typically estimated using simplistic models of the unfolded ensemble. A web application providing realistic accessibilities of the unfolded ensemble of a given protein at the atomic level will prove useful. Results ProtSA, a web application that calculates sequence-specific solvent accessibilities of the unfolded state ensembles of proteins, has been developed and made freely available to the scientific community. The input is the amino acid sequence of the protein of interest. ProtSA follows a previously published calculation protocol which uses the Flexible-Meccano algorithm to generate unfolded conformations representative of the unfolded ensemble of the protein, and uses the exact analytical software ALPHASURF to calculate atom solvent accessibilities, which are averaged over the ensemble. Conclusion ProtSA is a novel tool for the researcher investigating protein folding energetics. The sequence specific atom accessibilities provided by ProtSA will allow obtaining better estimates of the contribution of the hydrophobic effect to the free energy of folding, will help to refine existing parameterizations of protein folding energetics, and will be useful to understand the influence of point mutations on protein stability.
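The final step of the protocol above, averaging each atom's accessibility over the generated conformers, is straightforward to sketch. This is hypothetical glue code, not part of ProtSA; the real pipeline obtains the per-conformer values from Flexible-Meccano and ALPHASURF:

```python
def ensemble_accessibility(per_conformer_asa):
    """Average each atom's solvent-accessible surface area (ASA) over all
    conformers of the unfolded ensemble. Input: one list of per-atom ASA
    values per conformer; output: the ensemble-averaged ASA per atom."""
    n = len(per_conformer_asa)
    return [sum(atom_vals) / n for atom_vals in zip(*per_conformer_asa)]
```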

  7. PDTD: a web-accessible protein database for drug target identification

    Directory of Open Access Journals (Sweden)

    Gao Zhenting

    2008-02-01

    Full Text Available Abstract Background Target identification is important for modern drug discovery. With advances in the development of molecular docking, potential binding proteins may be discovered by docking a small molecule against a repository of proteins with three-dimensional (3D) structures. To complete this task, a reverse docking program and a drug target database with 3D structures are necessary. To this end, we have developed a web server tool, TarFisDock (Target Fishing Docking, http://www.dddc.ac.cn/tarfisdock), which has been widely used by others. Recently, we have constructed a protein target database, the Potential Drug Target Database (PDTD), and have integrated PDTD with TarFisDock. This combination aims to assist target identification and validation. Description PDTD is a web-accessible protein database for in silico target identification. It currently contains >1100 protein entries with 3D structures presented in the Protein Data Bank. The data are extracted from the literature and from several online databases such as TTD, DrugBank and Thomson Pharma. The database covers diverse information on >830 known or potential drug targets, including protein and active-site structures in both PDB and mol2 formats, related diseases, biological functions, and associated regulatory (signaling) pathways. Each target is categorized by both nosology and biochemical function. PDTD supports keyword search, for example by PDB ID, target name, or disease name. Data sets generated by PDTD can be viewed with molecular visualization plug-ins and can also be downloaded freely. Remarkably, PDTD is specially designed for target identification. In conjunction with TarFisDock, PDTD can be used to identify binding proteins for small molecules. The results can be downloaded in the form of a mol2 file with the binding pose of the probe compound and a list of potential binding targets ranked by score. Conclusion PDTD serves as a comprehensive and

  8. Looking back, looking forward: 10 years of development to collect, preserve and access the Danish Web

    DEFF Research Database (Denmark)

    Laursen, Ditte; Møldrup-Dalum, Per

    Digital heritage archiving is an ongoing activity that requires commitment, involvement and cooperation between heritage institutions and policy makers as well as producers and users of information. In this presentation, we will address how a web archive is created over time as well as what or who...... we see the development of the web archive in the near future. Findings are relevant for curators and researchers interested in the web archive as a historical source....

  9. Usage of data-encoded web maps with client side color rendering for combined data access, visualization, and modeling purposes

    Science.gov (United States)

    Pliutau, Denis; Prasad, Narasimha S.

    2013-05-01

    Current approaches to satellite observation data storage and distribution implement separate visualization and data access methodologies, which often leads to time-consuming data ordering and coding for applications requiring both visual representation and data handling and modeling capabilities. We describe an approach we implemented for a data-encoded web map service based on storing numerical data within server map tiles, with subsequent client-side data manipulation and map color rendering. The approach relies on storing data in the lossless Portable Network Graphics (PNG) image format, which is natively supported by web browsers, allowing on-the-fly browser rendering and modification of the map tiles. The method is easy to implement using existing software libraries and has the advantages of easy client-side map color modification and spatial subsetting with physical parameter range filtering. The method is demonstrated for the ASTER-GDEM elevation model and selected MODIS data products and represents an alternative to currently used storage and data access methods. An additional benefit is that multiple levels of averaging are provided, since map tiles must be generated at varying resolutions for the various map magnification levels. We suggest that such a merged data and mapping approach may be a viable alternative to existing static storage and data access methods for a wide array of combined simulation, data access, and visualization purposes.
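
The core idea of storing numbers in image channels can be sketched independently of any tile server: a 16-bit quantity such as an elevation value fits losslessly into two 8-bit channels of a PNG pixel, and the client reverses the packing before rendering. The packing scheme below is one plausible choice, not necessarily the one used by the described service.

```python
def encode_value(v):
    """Pack a 16-bit non-negative integer (e.g. an offset elevation in
    metres) into two 8-bit image channels, as a lossless PNG tile might
    store it in a pixel's R and G bytes. Scheme is an assumption."""
    assert 0 <= v <= 0xFFFF
    return (v >> 8, v & 0xFF)          # (R, G)

def decode_value(r, g):
    """Client-side recovery of the numeric value from pixel channels."""
    return (r << 8) | g

elevation = 8848                        # metres (illustrative value)
r, g = encode_value(elevation)
assert decode_value(r, g) == elevation  # lossless round trip
print(r, g)                             # → 34 144
```

Because PNG compression is lossless, the round trip holds for every pixel, which is what makes range filtering and re-colouring on the client possible.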

  10. Accessing the SEED genome databases via Web services API: tools for programmers

    National Research Council Canada - National Science Library

    Disz, Terry; Akhter, Sajia; Cuevas, Daniel; Olson, Robert; Overbeek, Ross; Vonstein, Veronika; Stevens, Rick; Edwards, Robert A

    2010-01-01

    .... The database contains accurate and up-to-date annotations based on the subsystems concept that leverages clustering between genomes and other clues to accurately and efficiently annotate microbial genomes...

  11. Web standards facilitating accessibility in a digitally inclusive South Africa – Perspectives from developing the South African National Accessibility Portal

    CSIR Research Space (South Africa)

    Coetzee, L

    2008-11-01

    Full Text Available Many factors impact on the ability to create a digitally inclusive society in a developing world context. These include lack of access to information and communication technology (ICT), infrastructure, low literacy levels as well as low ICT related...

  12. Potential impacts of ocean acidification on the Puget Sound food web (NCEI Accession 0134852)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The ecosystem impacts of ocean acidification (OA) were explored by imposing scenarios designed to mimic OA on a food web model of Puget Sound, a large estuary in the...

  13. BioIMAX: A Web 2.0 approach for easy exploratory and collaborative access to multivariate bioimage data

    Directory of Open Access Journals (Sweden)

    Khan Michael

    2011-07-01

    Full Text Available Abstract Background Innovations in biological and biomedical imaging produce complex high-content and multivariate image data. For decision-making and generation of hypotheses, scientists need novel information technology tools that enable them to visually explore and analyze the data and to discuss and communicate results or findings with collaborating experts in various places. Results In this paper, we present a novel Web 2.0 approach, BioIMAX, for the collaborative exploration and analysis of multivariate image data. It combines the web's collaboration and distribution architecture with the interface interactivity and computational power of desktop applications, in what has recently been called a rich internet application. Conclusions BioIMAX allows scientists to discuss and share data or results with collaborating experts and to visualize, annotate, and explore multivariate image data within one web-based platform from any location via a standard web browser, requiring only a username and a password. BioIMAX can be accessed at http://ani.cebitec.uni-bielefeld.de/BioIMAX with the username "test" and the password "test1" for testing purposes.

  14. How much data resides in a web collection: how to estimate size of a web collection

    NARCIS (Netherlands)

    Khelghati, Mohammadreza; Hiemstra, Djoerd; Keulen, van Maurice

    2013-01-01

    With the increasing amount of data in deep web sources (hidden from general search engines behind web forms), accessing this data has gained more attention. In the algorithms applied for this purpose, it is knowledge of a data source's size that enables the algorithms to make accurate decisions in sto
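
One classical way to estimate the size of a collection that can only be sampled is capture-recapture: draw two independent random samples of documents and infer the total from their overlap. This Lincoln-Petersen sketch is a textbook estimator, not necessarily the method proposed in the paper.

```python
def lincoln_petersen(sample1, sample2):
    """Estimate total collection size from two independent random
    samples of document identifiers (capture-recapture). A classical
    textbook estimator; the paper's own method may differ."""
    overlap = len(set(sample1) & set(sample2))
    if overlap == 0:
        raise ValueError("no overlap: samples too small to estimate size")
    # N ~= n1 * n2 / m, where m is the number of documents seen twice.
    return len(sample1) * len(sample2) // overlap

# Two invented samples of document ids from a 20-document collection.
s1 = {1, 2, 3, 4, 5, 6, 7, 8, 9, 10}
s2 = {6, 7, 8, 9, 10, 11, 12, 13, 14, 15}
print(lincoln_petersen(s1, s2))  # → 20
```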

  15. How much data resides in a web collection: how to estimate size of a web collection

    NARCIS (Netherlands)

    Khelghati, Mohammadreza; Hiemstra, Djoerd; van Keulen, Maurice

    2013-01-01

    With the increasing amount of data in deep web sources (hidden from general search engines behind web forms), accessing this data has gained more attention. In the algorithms applied for this purpose, it is knowledge of a data source's size that enables the algorithms to make accurate decisions in

  16. Specification and realization of access control of Web services

    Institute of Scientific and Technical Information of China (English)

    张赛男

    2011-01-01

    This paper proposes an access control model for Web services. Integrating this security model into Web services enables dynamic changes of security access control rights, improving on the static access control currently in use. The model provides a view policy language (VPL) to describe Web service access control policies. Finally, we describe an infrastructure for integrating the security model into Web services in order to enforce their access control policies.

  17. Using NASA's Giovanni Web Portal to Access and Visualize Satellite-based Earth Science Data in the Classroom

    Science.gov (United States)

    Lloyd, Steven; Acker, James G.; Prados, Ana I.; Leptoukh, Gregory G.

    2008-01-01

    One of the biggest obstacles for the average Earth science student today is locating and obtaining satellite-based remote sensing data sets in a format that is accessible and optimal for their data analysis needs. At the Goddard Earth Sciences Data and Information Services Center (GES-DISC) alone, on the order of hundreds of Terabytes of data are available for distribution to scientists, students and the general public. The single biggest and time-consuming hurdle for most students when they begin their study of the various datasets is how to slog through this mountain of data to arrive at a properly sub-setted and manageable data set to answer their science question(s). The GES DISC provides a number of tools for data access and visualization, including the Google-like Mirador search engine and the powerful GES-DISC Interactive Online Visualization ANd aNalysis Infrastructure (Giovanni) web interface.

  18. Visibilidad y accesibilidad web de las tesinas de licenciatura en Bibliotecología y Documentación en la Argentina = Web Visibility and Accessibility of Theses in Library Science and Documentation in Argentina

    Directory of Open Access Journals (Sweden)

    Sandra Gisela Martín

    2013-06-01

    Full Text Available This study describes the web visibility and accessibility of research in Library Science and Documentation in Argentina by examining the theses (tesinas) submitted to qualify for the bachelor's degree. It is a descriptive exploratory study with a quantitative approach, investigating the visibility of theses in catalogs and institutional repositories, the number of theses visible on the web and the number accessible in full text, the metadata of records in catalogs and in repositories, the production of theses per year by web visibility, the number of authors per thesis by web visibility, and the thematic distribution of thesis content. It concludes that the scientific production of theses in Library Science in Argentina is widely dispersed on the web and that their visibility and accessibility are very low.

  19. Evaluación comparativa de la accesibilidad de los espacios web de las bibliotecas universitarias españolas y norteamericanas Comparative accessibility assessment of Web spaces in Spanish and American university libraries

    Directory of Open Access Journals (Sweden)

    Laura Caballero-Cortés

    2009-04-01

    Full Text Available The main objective of this research is to analyze and compare the degree to which two groups of web spaces comply with certain web accessibility guidelines. Both groups belong to the same conceptual typology, "university libraries", but form part of two different geographic, social and economic realities: Spain and the United States. Interpretation of the results shows that it is possible to use webmetric techniques based on web accessibility characteristics to contrast two sets of closed web spaces.

  20. Addressing Challenges in Web Accessibility for the Blind and Visually Impaired

    Science.gov (United States)

    Guercio, Angela; Stirbens, Kathleen A.; Williams, Joseph; Haiber, Charles

    2011-01-01

    Searching for relevant information on the web is an important aspect of distance learning. This activity is a challenge for visually impaired distance learners. While sighted people have the ability to filter information in a fast and non-sequential way, blind persons rely on tools that process the information in a sequential way. Learning is…

  1. El acceso a VacciMonitor puede hacerse a través de la Web of Science / Accessing VacciMonitor by the Web of Science

    Directory of Open Access Journals (Sweden)

    Daniel Francisco Arencibia-Arrebola

    2015-01-01

    Full Text Available VacciMonitor has gradually increased its visibility through access to different databases. It was introduced into the SciELO project, EBSCO, HINARI, Redalyc, SCOPUS, DOAJ, SICC Data Bases and SeCiMed, among almost thirty well-known index sites, including the virtual libraries of the main universities of the United States of America and other countries. Through a SciELO-Web of Science (WoS) agreement it will be possible to include the journals indexed in SciELO in the WoS; this collaboration is already presenting its outcomes, and the content of SciELO can be accessed through WoS at the link: http://wokinfo.com/products_tools/multidisciplinary/scielo/ WoS was designed by the Institute for Scientific Information (ISI) and is one of the products of the ISI Web of Knowledge suite, currently the property of Thomson Reuters (1). WoS is a citation index and database service, a worldwide on-line leader in multidisciplinary information covering the sciences in general, the social sciences, and arts and humanities, with more than 46 million bibliographic references and hundreds of other citations, making possible navigation of the broad web of journal articles, lecture materials and other records included in its collection (1). The logic of the functioning of WoS is based on quantitative criteria, since a larger production demonstrates a greater number of registered papers in the most recognized journals, and the extent to which these papers are cited by those journals (2). The information obtained from WoS databases is very useful for directing scientific research efforts at a personal, institutional or national level. Scientists publishing in WoS journals not only produce more scientific literature, but this literature is also more consulted and used (3). However, it should be considered that the statistics of this site for bibliometric analysis only take into account those journals in this web, but contains three

  2. Comparing Accessibility Auditing Methods for Ebooks: Crowdsourced, Functionality-Led Versus Web Content Methodologies.

    Science.gov (United States)

    James, Abi; Draffan, E A; Wald, Mike

    2017-01-01

    This paper presents a gap analysis between crowdsourced functional accessibility evaluations of ebooks conducted by non-experts and the technical accessibility standards employed by developers. It also illustrates how combining these approaches can provide more appropriate information for a wider group of users with print impairments.

  3. Robust Query Processing for Personalized Information Access on the Semantic Web

    DEFF Research Database (Denmark)

    Dolog, Peter; Stuckenschmidt, Heiner; Wache, Holger

    and user preferences. We describe a framework for information access that combines query refinement and relaxation in order to provide robust, personalized access to heterogeneous RDF data as well as an implementation in terms of rewriting rules and explain its application in the context of e...

  4. The Personal Sequence Database: a suite of tools to create and maintain web-accessible sequence databases

    Directory of Open Access Journals (Sweden)

    Sullivan Christopher M

    2007-12-01

    Full Text Available Abstract Background Large molecular sequence databases are fundamental resources for modern bioscientists. Whether for project-specific purposes or sharing data with colleagues, it is often advantageous to maintain smaller sequence databases. However, this is usually not an easy task for the average bench scientist. Results We present the Personal Sequence Database (PSD), a suite of tools to create and maintain small- to medium-sized web-accessible sequence databases. All interactions with PSD tools occur via the internet with a web browser. Users may define sequence groups within their database that can be maintained privately or published to the web for public use. A sequence group can be downloaded, browsed, searched by keyword or searched for sequence similarities using BLAST. Publishing a sequence group extends these capabilities to colleagues and collaborators. In addition to being able to manage their own sequence databases, users can enroll sequences in BLASTAgent, a BLAST hit tracking system, to monitor NCBI databases for new entries displaying a specified level of nucleotide or amino acid similarity. Conclusion The PSD offers a valuable set of resources unavailable elsewhere. In addition to managing sequence data and BLAST search results, it facilitates data sharing with colleagues, collaborators and public users. The PSD is hosted by the authors and is available at http://bioinfo.cgrb.oregonstate.edu/psd/.

  5. Prototype and Evaluation of AutoHelp: A Case-based, Web-accessible Help Desk System for EOSDIS

    Science.gov (United States)

    Mitchell, Christine M.; Thurman, David A.

    1999-01-01

    AutoHelp is a case-based, Web-accessible help desk for users of the EOSDIS. It uses a combination of advanced computer and Web technologies, knowledge-based systems tools, and cognitive engineering to offload the current, person-intensive help desk facilities at the DAACs. As a case-based system, AutoHelp starts with an organized database of previous help requests (questions and answers) indexed by a hierarchical category structure that facilitates recognition by persons seeking assistance. As an initial proof-of-concept demonstration, a month of email help requests to the Goddard DAAC were analyzed and partially organized into help request cases. These cases were then categorized to create a preliminary case indexing system, or category structure. This category structure allows potential users to identify or recognize categories of questions, responses, and sample cases similar to their needs. Year one of this research project focused on the development of a technology demonstration. User assistance "cases" are stored in an Oracle database in a combination of tables linking prototypical questions with responses and detailed examples from the email help requests analyzed to date. When a potential user accesses the AutoHelp system, a Web server provides a Java applet that displays the category structure of the help case base organized by the needs of previous users. When the user identifies or requests a particular type of assistance, the applet uses Java database connectivity (JDBC) software to access the database and extract the relevant cases. The demonstration will include an on-line presentation of how AutoHelp is currently structured. We will show how a user might request assistance via the Web interface and how the AutoHelp case base provides assistance. The presentation will describe the DAAC data collection, case definition, and organization to date, as well as the AutoHelp architecture.
It will conclude with the year 2 proposal to more fully develop the

  6. COMPARE: a web accessible tool for investigating mechanisms of cell growth inhibition.

    Science.gov (United States)

    Zaharevitz, Daniel W; Holbeck, Susan L; Bowerman, Christopher; Svetlik, Penny A

    2002-01-01

    For more than 10 years the National Cancer Institute (NCI) has tested compounds for their ability to inhibit the growth of human tumor cell lines in culture (the NCI screen). The work of Ken Paull [J. Natl. Cancer Inst. 81 (1989) 1088] demonstrated that compounds with a similar mechanism of cell growth inhibition show similar patterns of activity in the NCI screen. This observation was developed into an algorithm called COMPARE, which has been successfully used to predict mechanisms for a wide variety of compounds. More recently, this method has been extended to associate patterns of cell growth inhibition by compounds with measurements of molecular entities (such as gene expression) in the cell lines of the NCI screen. The COMPARE method and associated data are freely available on the Developmental Therapeutics Program (DTP) web site (http://dtp.nci.nih.gov/). Examples of the use of COMPARE on these web pages will be explained and demonstrated. Published by Elsevier Science Inc.
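
The core of a COMPARE-style analysis is straightforward: treat each compound's per-cell-line activities as a vector and rank database compounds by Pearson correlation with a seed compound's pattern. The activity values and compound names below are invented; the NCI screen uses measures such as -log10(GI50) across its cell-line panel.

```python
import math

def pearson(xs, ys):
    """Pearson correlation between two activity patterns, e.g.
    per-cell-line potency values across the screen."""
    mx = sum(xs) / len(xs)
    my = sum(ys) / len(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def compare_rank(seed, database):
    """Rank database compounds by similarity of their activity pattern
    to the seed compound, in the spirit of COMPARE."""
    return sorted(database,
                  key=lambda name: pearson(seed, database[name]),
                  reverse=True)

seed = [5.1, 6.8, 4.2, 7.0, 5.5]            # invented activity pattern
db = {
    "tubulin_binder": [5.0, 6.9, 4.1, 7.2, 5.4],   # similar pattern
    "antimetabolite": [7.1, 4.0, 6.8, 4.2, 6.0],   # dissimilar pattern
}
print(compare_rank(seed, db)[0])             # → tubulin_binder
```

Compounds whose patterns correlate strongly with the seed are then hypothesized to share its mechanism of growth inhibition.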

  7. Web-oriented interface for remotely access the Kiev Internet-telescope

    Science.gov (United States)

    Kleshchonok, V.; Luk'yanyk, I.

    2017-06-01

    This article describes a partial revision of the Kiev internet-telescope, covering the structure of the telescope, its software, and the features of its operation. Methods for working with the telescope via remote access are examined.

  8. Automating testbed documentation and database access using World Wide Web (WWW) tools

    Science.gov (United States)

    Ames, Charles; Auernheimer, Brent; Lee, Young H.

    1994-01-01

    A method for providing uniform transparent access to disparate distributed information systems was demonstrated. A prototype testing interface was developed to access documentation and information using publicly available hypermedia tools. The prototype gives testers a uniform, platform-independent user interface to on-line documentation, user manuals, and mission-specific test and operations data. Mosaic was the common user interface, and HTML (Hypertext Markup Language) provided hypertext capability.

  9. The information-seeking behaviour of paediatricians accessing web-based resources.

    LENUS (Irish Health Repository)

    Prendiville, T W

    2012-02-01

    OBJECTIVES: To establish the information-seeking behaviours of paediatricians in answering every-day clinical queries. DESIGN: A questionnaire was distributed to every hospital-based paediatrician (paediatric registrar and consultant) working in Ireland. RESULTS: The study received 156 completed questionnaires, a 66.1% response. 67% of paediatricians utilised the internet as their first "port of call" when looking to answer a medical question. 85% believe that web-based resources have improved medical practice, with 88% reporting web-based resources are essential for medical practice today. 93.5% of paediatricians believe attempting to answer clinical questions as they arise is an important component in practising evidence-based medicine. 54% of all paediatricians have recommended websites to parents or patients. 75.5% of paediatricians report finding it difficult to keep up-to-date with new information relevant to their practice. CONCLUSIONS: Web-based paediatric resources are of increasing significance in day-to-day clinical practice. Many paediatricians now believe that the quality of patient care depends on it. Information technology resources play a key role in helping physicians to deliver, in a time-efficient manner, solutions to clinical queries at the point of care.

  10. Controlling and accessing vehicle functions by mobile from remote place by sending GPS Co-ordinates to the Web server

    Directory of Open Access Journals (Sweden)

    Dr. Khanna SamratVivekanand Omprakash

    2012-01-01

    Full Text Available This paper describes how coordinates taken from Google Maps are stored in a database on a central web server, and how these coordinates are then made available to client programs searching for the location of a particular electronic device. Clients access the data over the internet and use it in their programs via an API. Software was developed for a device installed in the vehicle: the built-in circuit holds a SIM card and transfers its signal over the mobile network, supplying a single text message containing the location's Google Maps coordinates as latitude and longitude. This comma-separated string is parsed and stored in the web server's database, and positions from different mobile numbers can be stored simultaneously for different clients. A three-tier client/server architecture is used. The SIM card accesses the GPRS system of the card's network provider, and the electronic device is configured to receive and send messages. Because the device can be attached to other electronic circuits of the vehicle, different operations can be performed on it. A Windows Mobile application was developed for the client side: the user can make decisions about the vehicle remotely by sending an SMS to the device, which receives the operation and passes it to the vehicle's electronic circuit. From a remote place, a mobile phone can thus both retrieve information about the vehicle and control it, with a password supplied to the electronic circuit for authorization and authentication. The approach supports vehicle security and vehicle location, and vehicle functions such as speed, brakes and lights can be accessed and controlled through the software application's interface with the vehicle's electronic circuit.
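
The coordinate handling described above reduces to parsing a comma-separated latitude/longitude text message before storing it server-side. A minimal sketch; the exact message format and the sample coordinates are assumptions, not taken from the paper.

```python
def parse_coordinates(message):
    """Parse a 'latitude,longitude' text message, as an in-vehicle
    device might send it, into floats ready for database storage.
    The message format is an assumption for illustration."""
    lat_str, lon_str = message.split(",")
    lat, lon = float(lat_str), float(lon_str)
    if not (-90.0 <= lat <= 90.0 and -180.0 <= lon <= 180.0):
        raise ValueError(f"out-of-range coordinates: {message!r}")
    return lat, lon

# Hypothetical message received from the device over the network.
print(parse_coordinates("22.3072,73.1812"))  # → (22.3072, 73.1812)
```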

  11. The GL service: Web service to exchange GL string encoded HLA & KIR genotypes with complete and accurate allele and genotype ambiguity.

    Science.gov (United States)

    Milius, Robert P; Heuer, Michael; George, Mike; Pollack, Jane; Hollenbach, Jill A; Mack, Steven J; Maiers, Martin

    2016-03-01

    Genotype list (GL) Strings use a set of hierarchical character delimiters to represent allele and genotype ambiguity in HLA and KIR genotypes in a complete and accurate fashion. A RESTful web service called genotype list service was created to allow users to register a GL string and receive a unique identifier for that string in the form of a URI. By exchanging URIs and dereferencing them through the GL service, users can easily transmit HLA genotypes in a variety of useful formats. The GL service was developed to be secure, scalable, and persistent. An instance of the GL service is configured with a nomenclature and can be run in strict or non-strict modes. Strict mode requires alleles used in the GL string to be present in the allele database using the fully qualified nomenclature. Non-strict mode allows any GL string to be registered as long as it is syntactically correct. The GL service source code is free and open source software, distributed under the GNU Lesser General Public License (LGPL) version 3 or later.
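
The hierarchical delimiters that give GL Strings their expressive power ('^' between loci, '|' between ambiguous genotypes, '+' within a genotype, '~' for phase, '/' between ambiguous alleles) can be parsed by successive splitting. A minimal sketch of that hierarchy, not the GL service's actual parser:

```python
def parse_gl_string(gl):
    """Split a GL String by its hierarchical delimiters, from the
    loosest-binding ('^', multilocus) down to the tightest ('/',
    allele ambiguity). Returns nested lists of allele names.
    Illustrative only; the real service also validates alleles."""
    return [[[[chunk.split("/")                 # '/' allele ambiguity
               for chunk in phased.split("~")]  # '~' phase
              for phased in genotype.split("+")]  # '+' within genotype
             for genotype in locus.split("|")]  # '|' genotype ambiguity
            for locus in gl.split("^")]         # '^' between loci

# A heterozygote with one ambiguous allele at the first position.
gl = "HLA-A*01:01/HLA-A*01:02+HLA-A*24:02"
result = parse_gl_string(gl)
print(result[0][0][0][0])  # → ['HLA-A*01:01', 'HLA-A*01:02']
print(result[0][0][1][0])  # → ['HLA-A*24:02']
```

A strict-mode service would additionally check every allele name against the configured nomenclature database before registering the string and minting its URI.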

  12. The GL Service: Web Service to Exchange GL String Encoded HLA & KIR Genotypes With Complete and Accurate Allele and Genotype Ambiguity

    Science.gov (United States)

    Milius, Robert P.; Heuer, Michael; George, Mike; Pollack, Jane; Hollenbach, Jill A.; Mack, Steven J.; Maiers, Martin

    2015-01-01

    Genotype List (GL) Strings use a set of hierarchical character delimiters to represent allele and genotype ambiguity in HLA and KIR genotypes in a complete and accurate fashion. A RESTful web service called Genotype List Service was created to allow users to register a GL String and receive a unique identifier for that string in the form of a URI. By exchanging URIs and dereferencing them through the GL Service, users can easily transmit HLA genotypes in a variety of useful formats. The GL Service was developed to be secure, scalable, and persistent. An instance of the GL Service is configured with a nomenclature and can be run in strict or non-strict modes. Strict mode requires alleles used in the GL String to be present in the allele database using the fully qualified nomenclature. Non-strict mode allows any GL String to be registered as long as it is syntactically correct. The GL Service source code is free and open source software, distributed under the GNU Lesser General Public License (LGPL) version 3 or later. PMID:26621609

  13. Accurate microRNA target prediction using detailed binding site accessibility and machine learning on proteomics data

    Directory of Open Access Journals (Sweden)

    Martin eReczko

    2012-01-01

    Full Text Available MicroRNAs (miRNAs) are a class of small regulatory genes regulating gene expression by targeting messenger RNA. Though computational methods for miRNA target prediction are the prevailing means to analyze their function, they still miss a large fraction of the targeted genes and additionally predict a large number of false positives. Here we introduce a novel algorithm called DIANA-microT-ANN which combines multiple novel target site features through an artificial neural network (ANN) and is trained using recently published high-throughput data measuring the change of protein levels after miRNA overexpression, providing positive and negative targeting examples. The features characterizing each miRNA recognition element include binding structure, conservation level and a specific profile of structural accessibility. The ANN is trained to integrate the features of each recognition element along the 3' untranslated region into a targeting score, reproducing the relative repression fold change of the protein. Tested on two different sets, the algorithm outperforms other widely used algorithms and also predicts a significant number of unique and reliable targets not predicted by the other methods. For 542 human miRNAs DIANA-microT-ANN predicts 120,000 targets not provided by TargetScan 5.0. The algorithm is freely available at http://microrna.gr/microT-ANN.
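
The ANN's role, integrating several per-site features into one targeting score, can be sketched with a toy single-hidden-layer network. The weights and feature values below are invented, not trained; DIANA-microT-ANN's actual network and feature set are far richer.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def site_score(features, w_hidden, w_out):
    """Toy single-hidden-layer network combining binding-site features
    (e.g. binding structure, conservation, accessibility) into one
    targeting score in (0, 1). Weights here are invented, not the
    published model's trained parameters."""
    hidden = [sigmoid(sum(w * f for w, f in zip(ws, features)))
              for ws in w_hidden]
    return sigmoid(sum(w * h for w, h in zip(w_out, hidden)))

features = (0.9, 0.7, 0.4)                       # hypothetical site features
w_hidden = [(1.2, -0.4, 0.8), (0.5, 0.9, -0.3)]  # invented weights
w_out = (1.0, -0.6)
score = site_score(features, w_hidden, w_out)
assert 0.0 < score < 1.0
print(round(score, 3))
```

In the published method, per-site scores along the 3' UTR are further integrated to reproduce the measured repression fold change of the protein.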

  14. Web Access to Digitised Content of the Exhibition Novo Mesto 1848-1918 at the Dolenjska Museum, Novo Mesto

    Directory of Open Access Journals (Sweden)

    Majda Pungerčar

    2013-09-01

    Full Text Available EXTENDED ABSTRACT The Dolenjska Museum Novo mesto provided access to digitised museum resources for the first time when it took the decision to enrich the exhibition Novo mesto 1848-1918 with digital content. The following goals were identified: the digital content was created at the time of exhibition planning and design; it met the needs of different age groups of visitors; and during the exhibition the content was accessible via touch screen. As such, it also served educational purposes (content-oriented lectures or problem-solving team work). During the exhibition, the digital content was accessible on the museum website http://www.novomesto1848-1918.si. The digital content was divided into the following sections: the web photo gallery, the quiz and the game. The photo gallery was designed in the same way as the exhibition and the print catalogue, extended with photos of contemporary Novo mesto and accompanied by music from the orchestrion machine. The following themes were outlined: the Austrian Empire, the Krka and Novo mesto, the town and its symbols, images of the town and people, administration and economy, social life, and Novo mesto today, followed by digitised archive materials and sources from that period such as the Commemorative Book of the Uniformed Town Guard, the National Reading Room Guest Book, the Kazina guest book, the album of postcards and the Diploma of Honoured Citizen Josip Gerdešič. The web application was also a tool for simple on-line selection of digitised material and the creation of new digital content, which proved to be much more convenient for lecturing than PowerPoint presentations. The quiz consisted of 40 questions relating to the exhibition theme and the catalogue. Each question offered a set of three answers, only one of them correct and illustrated by a photograph. The application auto-selected ten questions and scored the answers immediately. The quiz could be accessed

  15. A Web-based computer system supporting information access, exchange and management during building processes

    DEFF Research Database (Denmark)

    Sørensen, Lars Schiøtt

    1998-01-01

    During the last two decades, a number of research efforts have been made in the field of computing systems related to the building construction industry. Most of the projects have focused on a part of the entire design process and have typically been limited to a specific domain. This paper presents a newly developed computer system based on the World Wide Web. The focus is on the simplicity of the system's structure and on an intuitive and user-friendly interface...

  16. Making Statistical Data More Easily Accessible on the Web Results of the StatSearch Case Study

    CERN Document Server

    Rajman, M; Boynton, I M; Fridlund, B; Fyhrlund, A; Sundgren, B; Lundquist, P; Thelander, H; Wänerskär, M

    2005-01-01

    In this paper we present the results of the StatSearch case study, which aimed at providing enhanced access to statistical data available on the Web. Within this case study we developed a prototype of an information access tool combining a query-based search engine with semi-automated navigation techniques that exploit the hierarchical structuring of the available data. The tool enables better control of information retrieval, improving the quality and ease of access to statistical information. The central part of the presented StatSearch tool is an algorithm for automated navigation through a tree-like hierarchical document structure. The algorithm relies on the computation of query-related relevance score distributions over the available database to identify the most relevant clusters in the data structure. These most relevant clusters are then proposed to the user for navigation or, alternatively, serve as the support for the automated navigation process. Several appro...
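The navigation step (score every cluster in the hierarchy against the query, then propose the highest-scoring ones) can be sketched roughly as follows. The tree layout, the toy term-overlap score and the mean-based aggregation are illustrative assumptions, not the published StatSearch algorithm.

```python
# Sketch: relevance score distribution over a document hierarchy,
# used to pick the clusters to propose for navigation.

def score(query_terms, text):
    """Toy relevance score: fraction of query terms occurring in the text."""
    words = set(text.lower().split())
    return sum(t.lower() in words for t in query_terms) / len(query_terms)

def collect_docs(node):
    """All leaf documents under a node of the hierarchy."""
    docs = list(node.get("docs", []))
    for child in node.get("children", []):
        docs.extend(collect_docs(child))
    return docs

def cluster_scores(root, query_terms):
    """Relevance score distribution over every cluster below the root."""
    dist = {}
    for child in root.get("children", []):
        docs = collect_docs(child)
        dist[child["name"]] = (
            sum(score(query_terms, d) for d in docs) / max(len(docs), 1)
        )
        dist.update(cluster_scores(child, query_terms))
    return dist

def propose_clusters(root, query_terms, k=2):
    """The k most relevant clusters, offered to the user for navigation."""
    dist = cluster_scores(root, query_terms)
    return sorted(dist, key=dist.get, reverse=True)[:k]

tree = {"name": "root", "docs": [], "children": [
    {"name": "Population", "children": [],
     "docs": ["population census statistics", "birth rate data"]},
    {"name": "Economy", "children": [],
     "docs": ["gdp growth statistics", "unemployment rate data"]},
]}
```

Here `propose_clusters(tree, ["unemployment", "rate"], k=1)` steers the user toward the Economy branch.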

  17. EarthServer2 : The Marine Data Service - Web based and Programmatic Access to Ocean Colour Open Data

    Science.gov (United States)

    Clements, Oliver; Walker, Peter

    2017-04-01

    The ESA Ocean Colour - Climate Change Initiative (ESA OC-CCI) has produced a long-term high quality global dataset with associated per-pixel uncertainty data. This dataset has now grown to several hundred terabytes (uncompressed) and is freely available to download. However, the sheer size of the dataset can act as a barrier to many users; large network bandwidth, local storage and processing requirements can prevent researchers without the backing of a large organisation from taking advantage of this raw data. The EC H2020 project, EarthServer2, aims to create a federated data service providing access to more than 1 petabyte of earth science data. Within this federation the Marine Data Service already provides an innovative on-line tool-kit for filtering, analysing and visualising OC-CCI data. Data are made available, filtered and processed at source through a standards-based interface, the Open Geospatial Consortium Web Coverage Service and Web Coverage Processing Service. This work was initiated in the EC FP7 EarthServer project where it was found that the unfamiliarity and complexity of these interfaces itself created a barrier to wider uptake. The continuation project, EarthServer2, addresses these issues by providing higher level tools for working with these data. We will present some examples of these tools. Many researchers wish to extract time series data from discrete points of interest. We will present a web based interface, based on NASA/ESA WebWorldWind, for selecting points of interest and plotting time series from a chosen dataset. In addition, a CSV file of locations and times, such as a ship's track, can be uploaded and these points extracted and returned in a CSV file allowing researchers to work with the extract locally, such as a spreadsheet. We will also present a set of Python and JavaScript APIs that have been created to complement and extend the web based GUI. These APIs allow the selection of single points and areas for extraction. The
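The point-extraction workflow described above is built on the OGC Web Coverage Processing Service. As a rough sketch, the function below assembles a WCPS query for a CSV time series at a single point; the coverage name and axis labels are invented placeholders, not the Marine Data Service's actual identifiers.

```python
def wcps_point_timeseries(coverage, lat, lon, t_start, t_end):
    """Build an OGC WCPS query extracting a CSV time series at one point.

    `coverage` and the axis names (Lat, Long, ansi) are illustrative
    assumptions; a real service publishes its own coverage and axis ids.
    """
    return (
        f"for c in ({coverage}) "
        f"return encode("
        f'c[Lat({lat}), Long({lon}), ansi("{t_start}":"{t_end}")], "csv")'
    )

query = wcps_point_timeseries("CHL_OC_CCI", 50.25, -4.13,
                              "2010-01-01", "2010-12-31")
```

A ship's-track CSV upload would simply loop this construction over each (lat, lon, time) row.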

  18. BioPortal: enhanced functionality via new Web services from the National Center for Biomedical Ontology to access and use ontologies in software applications.

    Science.gov (United States)

    Whetzel, Patricia L; Noy, Natalya F; Shah, Nigam H; Alexander, Paul R; Nyulas, Csongor; Tudorache, Tania; Musen, Mark A

    2011-07-01

    The National Center for Biomedical Ontology (NCBO) is one of the National Centers for Biomedical Computing funded under the NIH Roadmap Initiative. Contributing to the national computing infrastructure, NCBO has developed BioPortal, a web portal that provides access to a library of biomedical ontologies and terminologies (http://bioportal.bioontology.org) via the NCBO Web services. BioPortal enables community participation in the evaluation and evolution of ontology content by providing features to add mappings between terms, to add comments linked to specific ontology terms and to provide ontology reviews. The NCBO Web services (http://www.bioontology.org/wiki/index.php/NCBO_REST_services) enable this functionality and provide a uniform mechanism to access ontologies from a variety of knowledge representation formats, such as Web Ontology Language (OWL) and Open Biological and Biomedical Ontologies (OBO) format. The Web services provide multi-layered access to the ontology content, from getting all terms in an ontology to retrieving metadata about a term. Users can easily incorporate the NCBO Web services into software applications to generate semantically aware applications and to facilitate structured data collection.
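As the abstract notes, applications incorporate the NCBO Web services by issuing REST requests. A hedged sketch of assembling such a request URL follows; the path ("/search") and parameter names are placeholder assumptions, not the documented endpoints, which are listed on the NCBO Web services wiki cited above.

```python
from urllib.parse import urlencode

# Assemble a term-search request URL for an NCBO-style REST service.
# Path and parameter names are illustrative assumptions only.

def term_search_url(base, query, ontologies=(), apikey="YOUR_API_KEY"):
    params = {"q": query, "apikey": apikey}
    if ontologies:
        params["ontologies"] = ",".join(ontologies)
    return f"{base}/search?{urlencode(params)}"

url = term_search_url("http://example.org/ncbo", "melanoma", ("NCIT",))
```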

  19. Web Access and Improvement Study on Detection System of the Web Chat Rooms

    Institute of Scientific and Technical Information of China (English)

    孙群; 漆正东

    2012-01-01

    Web chat provides network users with low-cost, high-efficiency online real-time communication and has thus become one of the most widely used Internet services. Taking the detection of Web chat rooms as its subject, this paper studies the technical problems of Web page retrieval and preprocessing in depth. It discusses the principles and workflow of Web crawlers, introduces parallel multithreading into the crawler, analyses the technical features and implementation of WebLech, and makes improvements to WebLech.
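The crawler workflow the paper parallelizes (maintain a frontier of URLs, fetch pages, extract links, repeat across threads) can be sketched as follows. Fetching is simulated with an in-memory link graph so the example is self-contained; the WebLech-specific details are not reproduced here.

```python
import queue
import threading

# Simulated site: each "page" lists the links it contains. A real
# crawler would download and parse each page here instead.
LINKS = {
    "/": ["/a", "/b"],
    "/a": ["/b", "/c"],
    "/b": ["/c"],
    "/c": [],
}

def crawl(start, workers=4):
    seen = {start}
    lock = threading.Lock()        # guards `seen` across threads
    frontier = queue.Queue()
    frontier.put(start)

    def worker():
        while True:
            try:
                url = frontier.get(timeout=0.1)
            except queue.Empty:    # frontier drained: this worker is done
                return
            for nxt in LINKS.get(url, []):   # "fetch" the page's links
                with lock:
                    if nxt in seen:
                        continue
                    seen.add(nxt)
                frontier.put(nxt)

    threads = [threading.Thread(target=worker) for _ in range(workers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return seen
```

The shared `seen` set plus the thread-safe queue is the essence of the parallel multithreading the abstract describes.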

  20. Impact of web accessibility barriers on users with a hearing impairment

    Directory of Open Access Journals (Sweden)

    Afra Pascual

    2015-01-01

    Full Text Available User tests were conducted with hearing-impaired people to evaluate the impact that different accessibility barriers cause for this type of user. The aim of collecting this information was to communicate, more empathetically, to people who edit Web content the accessibility problems that most affect this group, people with a hearing impairment, and thus avoid the accessibility barriers they could potentially be creating. The results show that the barriers with the greatest impact on hearing-impaired users are complex text and multimedia content without alternatives. In both cases, content editors should take care to monitor the readability of Web content and to accompany multimedia content with subtitles and sign language.

  1. Web-based access to teaching files in a filmless radiology environment

    Science.gov (United States)

    Rubin, Richard K.; Henri, Christopher J.; Cox, Robert D.; Bret, Patrice M.

    1998-07-01

    This paper describes the incorporation of radiology teaching files within our existing filmless radiology Picture Archiving and Communications System (PACS). The creation of teaching files employs an intuitive World Wide Web (WWW) application that relieves the creator of the technical details involving the underlying PACS and obviates the need for knowledge of Internet publishing. Currently, our PACS supports filmless operation of CT, MRI, and ultrasound modalities, conforming to the Digital Imaging and Communications in Medicine (DICOM) and Health Level 7 (HL7) standards. Web-based teaching files are one module in a suite of WWW tools, developed in-house, for platform independent management of radiology data. The WWW browser tools act as liaison between inexpensive desktop PCs and the DICOM PACS. The creation of a teaching file is made as efficient as possible by allowing the creator to select the images and prepare the text within a single application, while finding and reviewing existing teaching files is simplified with a flexible, multi-criteria searching tool. This efficient and easy-to-use interface is largely responsible for the development of a database, currently containing over 400 teaching files, that has been generated in a short period of time.

  2. Translating access into utilization: lessons from the design and evaluation of a health insurance Web site to promote reproductive health care for young women in Massachusetts.

    Science.gov (United States)

    Janiak, Elizabeth; Rhodes, Elizabeth; Foster, Angel M

    2013-12-01

    Following state-level health care reform in Massachusetts, young women reported confusion over coverage of contraception and other sexual and reproductive health services under newly available health insurance products. To address this gap, a plain-language Web site titled "My Little Black Book for Sexual Health" was developed by a statewide network of reproductive health stakeholders. The purpose of this evaluation was to assess the health literacy demands and usability of the site among its target audience, women ages 18-26 years. We performed an evaluation of the literacy demands of the Web site's written content and tested the Web site's usability in a health communications laboratory. Participants found the Web site visually appealing and its overall design concept accessible. However, the Web site's literacy demands were high, and all participants encountered problems navigating through the Web site. Following this evaluation, the Web site was modified to be more usable and more comprehensible to women of all health literacy levels. To avail themselves of sexual and reproductive health services newly available under expanded health insurance coverage, young women require customized educational resources that are rigorously evaluated to ensure accessibility. To maximize utilization of reproductive health services under expanded health insurance coverage, US women require customized educational resources commensurate with their literacy skills. The application of established research methods from the field of health communications will enable advocates to evaluate and adapt these resources to best serve their targeted audiences. © 2013.

  3. Towards a tangible web: using physical objects to access and manipulate the Internet of Things

    CSIR Research Space (South Africa)

    Smith, Andrew C

    2013-09-01

    Full Text Available This additional step has resulted in the phenomenon commonly referred to as the Internet of Things (IoT). In order to realise the full potential of the IoT, individuals need a mechanism to access and manipulate it. A potential mechanism for achieving...

  4. Analysis on Recommended System for Web Information Retrieval Using HMM

    Directory of Open Access Journals (Sweden)

    Himangni Rathore

    2014-11-01

    Full Text Available The Web is a rich source of data and knowledge, spread across the world in an unstructured manner, and users continuously access this information over the Internet. Web mining is an application of data mining in which Web-related data are extracted and manipulated to derive knowledge. Data mining applied to Web information is referred to as Web mining, which is further divided into three major domains: Web usage mining, Web content mining and Web structure mining. The proposed work is concerned with Web usage mining. The aim is to improve user feedback and user navigation pattern discovery for a CRM system. Finally, an HMM-based algorithm is used for finding patterns in the data, a method that promises much more accurate recommendations.
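As a concrete illustration of how an HMM can surface navigation patterns, the sketch below runs the standard Viterbi algorithm over a toy two-state model of browsing intent. The states, observations and all probabilities are invented for illustration; the paper's actual model is not specified in the abstract.

```python
# Infer a latent browsing intent ("browse" vs "buy") from an observed
# click path using the Viterbi algorithm on a toy HMM.

STATES = ("browse", "buy")
START = {"browse": 0.8, "buy": 0.2}
TRANS = {"browse": {"browse": 0.7, "buy": 0.3},
         "buy":    {"browse": 0.4, "buy": 0.6}}
EMIT = {"browse": {"home": 0.5, "product": 0.4, "checkout": 0.1},
        "buy":    {"home": 0.1, "product": 0.4, "checkout": 0.5}}

def viterbi(obs):
    """Most likely hidden-state sequence for an observed click path."""
    v = [{s: START[s] * EMIT[s][obs[0]] for s in STATES}]
    back = []
    for o in obs[1:]:
        col, ptr = {}, {}
        for s in STATES:
            prev = max(STATES, key=lambda p: v[-1][p] * TRANS[p][s])
            col[s] = v[-1][prev] * TRANS[prev][s] * EMIT[s][o]
            ptr[s] = prev
        v.append(col)
        back.append(ptr)
    state = max(v[-1], key=v[-1].get)
    path = [state]
    for ptr in reversed(back):
        state = ptr[state]
        path.append(state)
    return path[::-1]
```

A click path ending in repeated checkout views is decoded as a switch from "browse" to "buy", which is the kind of pattern a recommender could act on.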

  5. Hanford Borehole Geologic Information System (HBGIS) Updated User’s Guide for Web-based Data Access and Export

    Energy Technology Data Exchange (ETDEWEB)

    Mackley, Rob D.; Last, George V.; Allwardt, Craig H.

    2008-09-24

    The Hanford Borehole Geologic Information System (HBGIS) is a prototype web-based graphical user interface (GUI) for viewing and downloading borehole geologic data. The HBGIS is being developed as part of the Remediation Decision Support function of the Soil and Groundwater Remediation Project, managed by Fluor Hanford, Inc., Richland, Washington. Recent efforts have focused on improving the functionality of the HBGIS website in order to allow more efficient access and exportation of available data in HBGIS. Users will benefit from enhancements such as a dynamic browsing, user-driven forms, and multi-select options for selecting borehole geologic data for export. The need for translating borehole geologic data into electronic form within the HBGIS continues to increase, and efforts to populate the database continue at an increasing rate. These new web-based tools should help the end user quickly visualize what data are available in HBGIS, select from among these data, and download the borehole geologic data into a consistent and reproducible tabular form. This revised user’s guide supersedes the previous user’s guide (PNNL-15362) for viewing and downloading data from HBGIS. It contains an updated data dictionary for tables and fields containing borehole geologic data as well as instructions for viewing and downloading borehole geologic data.

  6. Kinome Render: a stand-alone and web-accessible tool to annotate the human protein kinome tree.

    Science.gov (United States)

    Chartier, Matthieu; Chénard, Thierry; Barker, Jonathan; Najmanovich, Rafael

    2013-01-01

    Human protein kinases play fundamental roles mediating the majority of signal transduction pathways in eukaryotic cells as well as a multitude of other processes involved in metabolism, cell-cycle regulation, cellular shape, motility, differentiation and apoptosis. The human protein kinome contains 518 members. Most studies that focus on the human kinome require, at some point, the visualization of large amounts of data. The visualization of such data within the framework of a phylogenetic tree may help identify key relationships between different protein kinases in view of their evolutionary distance and the information used to annotate the kinome tree. For example, studies that focus on the promiscuity of kinase inhibitors can benefit from the annotations to depict binding affinities across kinase groups. Images involving the mapping of information into the kinome tree are common. However, producing such figures manually can be a long arduous process prone to errors. To circumvent this issue, we have developed a web-based tool called Kinome Render (KR) that produces customized annotations on the human kinome tree. KR allows the creation and automatic overlay of customizable text or shape-based annotations of different sizes and colors on the human kinome tree. The web interface can be accessed at: http://bcb.med.usherbrooke.ca/kinomerender. A stand-alone version is also available and can be run locally.

  7. Using NASA's Giovanni Web Portal to Access and Visualize Satellite-Based Earth Science Data in the Classroom

    Science.gov (United States)

    Lloyd, S. A.; Acker, J. G.; Prados, A. I.; Leptoukh, G. G.

    2008-12-01

    One of the biggest obstacles for the average Earth science student today is locating and obtaining satellite-based remote sensing datasets in a format that is accessible and optimal for their data analysis needs. At the Goddard Earth Sciences Data and Information Services Center (GES DISC) alone, on the order of hundreds of terabytes of data are available for distribution to scientists, students and the general public. The single biggest and most time-consuming hurdle for most students when they begin their study of the various datasets is how to slog through this mountain of data to arrive at a properly subsetted and manageable dataset to answer their science question(s). The GES DISC provides a number of tools for data access and visualization, including the Google-like Mirador search engine and the powerful GES-DISC Interactive Online Visualization ANd aNalysis Infrastructure (Giovanni) web interface. Giovanni provides a simple way to visualize, analyze and access vast amounts of satellite-based Earth science data. Giovanni's features and practical examples of its use will be demonstrated, with an emphasis on how satellite remote sensing can help students understand recent events in the atmosphere and biosphere. Giovanni is actually a series of sixteen similar web-based data interfaces, each of which covers a single satellite dataset (such as TRMM, TOMS, OMI, AIRS, MLS, HALOE, etc.) or a group of related datasets (such as MODIS and MISR for aerosols, SeaWiFS and MODIS for ocean color, and the suite of A-Train observations co-located along the CloudSat orbital path). Recently, ground-based datasets have been included in Giovanni, including the Northern Eurasian Earth Science Partnership Initiative (NEESPI) and EPA fine particulate matter (PM2.5) for air quality. Model data such as the Goddard GOCART model and MERRA meteorological reanalyses (in process) are being increasingly incorporated into Giovanni to facilitate model-data intercomparison. A full suite of data

  8. AN EFFICIENT APPROACH FOR KEYWORD SELECTION; IMPROVING ACCESSIBILITY OF WEB CONTENTS BY GENERAL SEARCH ENGINES

    Directory of Open Access Journals (Sweden)

    H. H. Kian

    2011-11-01

    Full Text Available General search engines often provide low-precision results even for detailed queries, so there is a vital need to elicit useful information, such as keywords, that helps search engines return acceptable results for users' search queries. Although many methods have been proposed for extracting keywords automatically, all attempt to achieve better recall, precision and other criteria that describe how well the method performs the author's job. This paper presents a new automatic keyword extraction method that improves the accessibility of Web content to search engines. The proposed method defines coefficients determining feature efficiency and optimizes them using a genetic algorithm. Furthermore, it evaluates candidate keywords with a function that utilizes the results of search engines. Experiments demonstrate that, compared to other methods, the proposed method achieves a higher score from search engines without noticeable loss of recall or precision.
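A toy genetic algorithm in the spirit of the method above: evolve the coefficients that weight candidate-keyword features (e.g. term frequency, position, length). Everything here is an illustrative assumption; in particular, the paper's fitness function consults real search-engine results, while this stand-in just rewards closeness to a fixed target vector.

```python
import random

random.seed(0)

TARGET = (0.6, 0.3, 0.1)   # stand-in for the unknown ideal weights

def fitness(weights):
    # Higher (closer to zero) when weights approach TARGET.
    return -sum((w - t) ** 2 for w, t in zip(weights, TARGET))

def evolve(pop_size=30, gens=60, sigma=0.1):
    pop = [tuple(random.random() for _ in range(3)) for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]           # elitist selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, 3)         # one-point crossover
            child = list(a[:cut] + b[cut:])
            i = random.randrange(3)              # point mutation
            child[i] = min(1.0, max(0.0, child[i] + random.gauss(0, sigma)))
            children.append(tuple(child))
        pop = parents + children
    return max(pop, key=fitness)
```

The evolved coefficient vector would then weight each candidate keyword's feature scores when ranking it.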

  9. Web based dosimetry system for reading and monitoring dose through internet access

    Energy Technology Data Exchange (ETDEWEB)

    Perle, S.C.; Bennett, K.; Kahilainen, J.; Vuotila, M. [Mirion Technologies (United States); Mirion Technologies (Finland)

    2010-07-01

    The Instadose{sup TM} dosemeter from Mirion Technologies is a small, rugged device based on patented direct ion storage (DIS) technology and is accredited by the National Voluntary Laboratory Accreditation Program (NVLAP) through NIST, bringing radiation monitoring into the digital age. Smaller than a flash drive, the dosemeter provides an instant read-out when connected to any computer with internet access and a USB connection, giving radiation workers more flexibility than today's dosemeters. It consists of a non-volatile analog memory cell surrounded by a gas-filled ion chamber: dose changes the amount of electric charge in the DIS analog memory, the total charge storage capacity of the memory determines the available dose range, and the state of the analog memory is determined by measuring the voltage across the memory cell. AMP (Account Management Program) provides secure real-time access to account details, device assignments, reports and all pertinent account information; access can be restricted based on the role assigned to an individual, and a variety of reports are available for download and customizing. The advantages of the Instadose dosemeter are: - Unlimited reading capability, - Concerns about a possible exposure can be addressed immediately, - Re-readability without loss of exposure data, with cumulative exposure maintained. (authors)

  10. Providing access to risk prediction tools via the HL7 XML-formatted risk web service.

    Science.gov (United States)

    Chipman, Jonathan; Drohan, Brian; Blackford, Amanda; Parmigiani, Giovanni; Hughes, Kevin; Bosinoff, Phil

    2013-07-01

    Cancer risk prediction tools provide valuable information to clinicians but remain computationally challenging. Many clinics find that CaGene or HughesRiskApps fit their needs for easy- and ready-to-use software to obtain cancer risks; however, these resources may not fit all clinics' needs. The HughesRiskApps Group and BayesMendel Lab therefore developed a web service, called "Risk Service", which may be integrated into any client software to quickly obtain standardized and up-to-date risk predictions for BayesMendel tools (BRCAPRO, MMRpro, PancPRO, and MelaPRO), the Tyrer-Cuzick IBIS Breast Cancer Risk Evaluation Tool, and the Colorectal Cancer Risk Assessment Tool. Software clients that can convert their local structured data into the HL7 XML-formatted family and clinical patient history (Pedigree model) may integrate with the Risk Service. The Risk Service uses Apache Tomcat and Apache Axis2 technologies to provide an all Java web service. The software client sends HL7 XML information containing anonymized family and clinical history to a Dana-Farber Cancer Institute (DFCI) server, where it is parsed, interpreted, and processed by multiple risk tools. The Risk Service then formats the results into an HL7 style message and returns the risk predictions to the originating software client. Upon consent, users may allow DFCI to maintain the data for future research. The Risk Service implementation is exemplified through HughesRiskApps. The Risk Service broadens the availability of valuable, up-to-date cancer risk tools and allows clinics and researchers to integrate risk prediction tools into their own software interface designed for their needs. Each software package can collect risk data using its own interface, and display the results using its own interface, while using a central, up-to-date risk calculator. This allows users to choose from multiple interfaces while always getting the latest risk calculations. Consenting users contribute their data for future
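A sketch of assembling an anonymized, HL7-style XML family-history payload of the kind a client might send to the Risk Service. The element and attribute names are simplified placeholders, not the actual HL7 Pedigree schema.

```python
import xml.etree.ElementTree as ET

# Build a minimal pedigree message; a real client would follow the
# HL7 Pedigree model and include clinical history fields.

def pedigree_xml(proband_age, relatives):
    root = ET.Element("pedigree")
    ET.SubElement(root, "proband", age=str(proband_age))
    for relation, diagnosis, age in relatives:
        ET.SubElement(root, "relative", relation=relation,
                      diagnosis=diagnosis, ageAtDiagnosis=str(age))
    return ET.tostring(root, encoding="unicode")

message = pedigree_xml(45, [("mother", "breast cancer", 52)])
```

The server side would parse such a message, run the risk models, and return an HL7-style response the client renders in its own interface.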

  11. Interactive access to LP DAAC satellite data archives through a combination of open-source and custom middleware web services

    Science.gov (United States)

    Davis, Brian N.; Werpy, Jason; Friesz, Aaron M.; Impecoven, Kevin; Quenzer, Robert; Maiersperger, Tom; Meyer, David J.

    2015-01-01

    Current methods of searching for and retrieving data from satellite land remote sensing archives do not allow for interactive information extraction. Instead, Earth science data users are required to download files over low-bandwidth networks to local workstations and process data before science questions can be addressed. New methods of extracting information from data archives need to become more interactive to meet user demands for deriving increasingly complex information from rapidly expanding archives. Moving the tools required for processing data to computer systems of data providers, and away from systems of the data consumer, can improve turnaround times for data processing workflows. The implementation of middleware services was used to provide interactive access to archive data. The goal of this middleware services development is to enable Earth science data users to access remote sensing archives for immediate answers to science questions instead of links to large volumes of data to download and process. Exposing data and metadata to web-based services enables machine-driven queries and data interaction. Also, product quality information can be integrated to enable additional filtering and sub-setting. Only the reduced content required to complete an analysis is then transferred to the user.

  12. Research and Application of Role-Based Access Control Model in Web Application System

    Institute of Scientific and Technical Information of China (English)

    黄秀文

    2015-01-01

    Access control is the main strategy for security and protection in Web systems, and traditional access control can no longer meet growing security needs. Using the role-based access control (RBAC) model and introducing the concept of roles into a Web system, users are mapped to roles within an organization, access permissions are granted to the corresponding roles, and authorization and access control are performed according to the user's role in the organization, thereby improving the flexibility and security of permission assignment and access control in the Web system.
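The RBAC mapping described above (users to roles, roles to permissions, with access checks consulting the user's roles) can be sketched minimally as follows. The roles and permission strings are invented examples.

```python
# Minimal role-based access control: an access check succeeds if any
# role assigned to the user grants the requested permission.

ROLE_PERMS = {
    "admin":  {"user:create", "user:delete", "report:view"},
    "editor": {"report:view", "report:edit"},
    "viewer": {"report:view"},
}
USER_ROLES = {"alice": {"admin"}, "bob": {"viewer"}}

def can(user, permission):
    """True if any of the user's roles grants the permission."""
    return any(permission in ROLE_PERMS.get(role, set())
               for role in USER_ROLES.get(user, set()))
```

Granting or revoking a role changes all of a user's permissions at once, which is the flexibility the abstract attributes to RBAC.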

  13. Automating Information Discovery Within the Invisible Web

    Science.gov (United States)

    Sweeney, Edwina; Curran, Kevin; Xie, Ermai

    A Web crawler or spider crawls through the Web looking for pages to index, and when it locates a new page it passes the page on to an indexer. The indexer identifies links, keywords, and other content and stores these within its database. This database is searched by entering keywords through an interface and suitable Web pages are returned in a results page in the form of hyperlinks accompanied by short descriptions. The Web, however, is increasingly moving away from being a collection of documents to a multidimensional repository for sounds, images, audio, and other formats. This is leading to a situation where certain parts of the Web are invisible or hidden. The term known as the "Deep Web" has emerged to refer to the mass of information that can be accessed via the Web but cannot be indexed by conventional search engines. The concept of the Deep Web makes searches quite complex for search engines. Google states that the claim that conventional search engines cannot find such documents as PDFs, Word, PowerPoint, Excel, or any non-HTML page is not fully accurate and steps have been taken to address this problem by implementing procedures to search items such as academic publications, news, blogs, videos, books, and real-time information. However, Google still only provides access to a fraction of the Deep Web. This chapter explores the Deep Web and the current tools available in accessing it.

  14. AGRIS: providing access to agricultural research data exploiting open data on the web.

    Science.gov (United States)

    Celli, Fabrizio; Malapela, Thembani; Wegner, Karna; Subirats, Imma; Kokoliou, Elena; Keizer, Johannes

    2015-01-01

    AGRIS is the International System for Agricultural Science and Technology. It is supported by a large community of data providers, partners and users. AGRIS is a database that aggregates bibliographic data, and through this core data, related content across online information systems is retrieved by taking advantage of Semantic Web capabilities. AGRIS is a global public good and its vision is to be a responsive service to its user needs by facilitating contributions and feedback regarding the AGRIS core knowledgebase, AGRIS's future and its continuous development. Periodic AGRIS e-consultations, partner meetings and user feedback are assimilated to the development of the AGRIS application and content coverage. This paper outlines the current AGRIS technical set-up, its network of partners, data providers and users as well as how AGRIS's responsiveness to clients' needs inspires the continuous technical development of the application. The paper concludes by providing a use case of how the AGRIS stakeholder input and the subsequent AGRIS e-consultation results influence the development of the AGRIS application, knowledgebase and service delivery.

  15. LigoDV-web: Providing easy, secure and universal access to a large distributed scientific data store for the LIGO Scientific Collaboration

    CERN Document Server

    Areeda, Joseph S; Lundgren, Andrew P; Maros, Edward; Macleod, Duncan M; Zweizig, John

    2016-01-01

    Gravitational-wave observatories around the world, including the Laser Interferometer Gravitational-wave Observatory (LIGO), record a large volume of gravitational-wave output data and auxiliary data about the instruments and their environments. These data are stored at the observatory sites and distributed to computing clusters for data analysis. LigoDV-web is a web-based data viewer that provides access to data recorded at the LIGO Hanford, LIGO Livingston and GEO600 observatories, and the 40m prototype interferometer at Caltech. The challenge addressed by this project is to provide meaningful visualizations of small data sets to anyone in the collaboration in a fast, secure and reliable manner with minimal software, hardware and training required of the end users. LigoDV-web is implemented as a Java Enterprise Application, with Shibboleth Single Sign On for authentication and authorization and a proprietary network protocol used for data access on the back end. Collaboration members with proper credentials...

  16. FirstSearch and NetFirst--Web and Dial-up Access: Plus Ca Change, Plus C'est la Meme Chose?

    Science.gov (United States)

    Koehler, Wallace; Mincey, Danielle

    1996-01-01

    Compares and evaluates the differences between OCLC's dial-up and World Wide Web FirstSearch access methods and their interfaces with the underlying databases. Also examines NetFirst, OCLC's new Internet catalog, the only Internet tracking database from a "traditional" database service. (Author/PEN)

  17. Factors Influencing Webmasters and the Level of Web Accessibility and Section 508 Compliance at SACS Accredited Postsecondary Institutions: A Study Using the Theory of Planned Behavior

    Science.gov (United States)

    Freeman, Misty Danielle

    2013-01-01

    The purpose of this research was to explore Webmasters' behaviors and factors that influence Web accessibility at postsecondary institutions. Postsecondary institutions that were accredited by the Southern Association of Colleges and Schools were used as the population. The study was based on the theory of planned behavior, and Webmasters'…

  18. A Grounded Theory Study of the Process of Accessing Information on the World Wide Web by People with Mild Traumatic Brain Injury

    Science.gov (United States)

    Blodgett, Cynthia S.

    2008-01-01

    The purpose of this grounded theory study was to examine the process by which people with Mild Traumatic Brain Injury (MTBI) access information on the web. Recent estimates include amateur sports and recreation injuries, non-hospital clinics and treatment facilities, private and public emergency department visits and admissions, providing…

  1. Internet accessibility and usage among urban adolescents in Southern California: implications for web-based health research.

    Science.gov (United States)

    Sun, Ping; Unger, Jennifer B; Palmer, Paula H; Gallaher, Peggy; Chou, Chih-Ping; Baezconde-Garbanati, Lourdes; Sussman, Steve; Johnson, C Anderson

    2005-10-01

    The World Wide Web (WWW) offers a distinct capability to deliver interventions tailored to the individual's characteristics. To fine-tune the tailoring process, studies are needed to explore how Internet accessibility and usage are related to demographic, psychosocial, behavioral, and other health-related characteristics. This study was based on a cross-sectional survey of 2373 7th-grade students of various ethnic groups in Southern California. Measures of Internet use included Internet use at school or at home, email use, chat-room use, and Internet favoring. Logistic regressions were conducted to assess the associations of Internet use with selected demographic, psychosocial, and behavioral variables and self-reported health status. The proportion of students who could access the Internet at school or at home was 90% and 40%, respectively. Nearly all (99%) of the respondents could access the Internet either at school or at home. Higher SES and Asian ethnicity were associated with higher Internet use. Among those who could access the Internet, and after adjusting for the selected demographic and psychosocial variables, depression was positively related with chat-room use and using the Internet longer than 1 hour per day at home, and hostility was positively related with Internet favoring (all ORs = 1.2 for +1 STD, p Internet use (ORs for +1 STD ranged from 1.2 to 2.0, all p Internet use. Substance use was positively related to email use, chat-room use, and at-home Internet use (OR for "used" vs. "not used" ranged from 1.2 to 4.0, p Internet use at home but lower levels of Internet use at school. More physical activity was related to more email use (OR = 1.3 for +1 STD), chat-room use (OR = 1.2 for +1 STD), and at-school ever Internet use (OR = 1.2 for +1 STD, all p Internet use-related measures. In this ethnically diverse sample of Southern California 7th-grade students, 99% could access the Internet at school and/or at home. This suggests that the Internet…

  2. Data Quality Parameters and Web Services Facilitate User Access to Research-Ready Seismic Data

    Science.gov (United States)

    Trabant, C. M.; Templeton, M. E.; Van Fossen, M.; Weertman, B.; Ahern, T. K.; Casey, R. E.; Keyson, L.; Sharer, G.

    2016-12-01

    IRIS Data Services has the mission of providing efficient access to a wide variety of seismic and related geoscience data to the user community. With our vast archive of freely available data, we recognize that there is a constant challenge to provide data to scientists and students that are of a consistently useful level of quality. To address this issue, we began by undertaking a comprehensive survey of the data and generating metric measurements that provide estimates of data quality. These measurements can inform the scientist of the suitability of a given set of data for their scientific investigation. They also serve as a quality assurance check for network operators, who can act on this information to improve their current recording or mitigate issues with already recorded data and metadata. Following this effort, IRIS Data Services is moving forward to focus on providing tools for the scientist that make it easier to access data of a quality and characteristic that suit their investigation. Data that fulfill this criterion are termed "research-ready". In addition to filtering data by type, geographic location, proximity to events, and specific time ranges, we will offer the ability to filter data based on specific quality assessments. These include signal-to-noise ratio measurements, data continuity, timing quality, absence of channel cross-talk, and potentially many other factors. Our goal is to ensure that users receive only data that meet their specifications and will not require extensive review and culling after delivery. We will present the latest developments of the MUSTANG automated data quality system and introduce the Research-Ready Data Sets (RRDS) service. Together these two technologies serve as a data quality assurance ecosystem that will benefit the scientific community by aiding efforts to readily find data appropriate and suitable for use in any number of objectives.
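    The quality metrics described above are exposed through IRIS's MUSTANG measurements web service, which is queried over plain HTTP. The sketch below assembles such a query URL in Python; the metric name, station codes, and parameter spellings are assumptions to verify against the current service documentation.

    ```python
    from urllib.parse import urlencode

    # Illustrative sketch only: builds a query URL for the IRIS MUSTANG
    # measurements web service. Metric and parameter names below follow the
    # service's documented style but should be checked against the live docs.
    MUSTANG_BASE = "http://service.iris.edu/mustang/measurements/1/query"

    def mustang_query_url(metric, network, station, start, end, fmt="text"):
        """Assemble a MUSTANG measurements query for one metric and station."""
        params = {
            "metric": metric,
            "net": network,
            "sta": station,
            "start": start,
            "end": end,
            "format": fmt,
        }
        return MUSTANG_BASE + "?" + urlencode(params)

    url = mustang_query_url("sample_rms", "IU", "ANMO", "2016-01-01", "2016-01-02")
    ```

    A client would then fetch the URL and filter stations whose measurements fall outside its chosen quality thresholds.
    
    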

  3. Factsheets Web Application

    Energy Technology Data Exchange (ETDEWEB)

    VIGIL,FRANK; REEDER,ROXANA G.

    2000-10-30

    The Factsheets web application was conceived out of the requirement to create, update, publish, and maintain a web site with dynamic research and development (R and D) content. Before the site was created, a requirements discovery process was carried out to accurately capture its purpose and functionality. One high-priority requirement was that no specialized training in web page authoring be necessary: all uploading, creation, and editing of factsheets had to be accomplished by entering data directly into web form screens generated by the application. Another important requirement was to provide access to the factsheet web pages and data via the internal Sandia Restricted Network and the Sandia Open Network based on the status of the input data. It was important to the site owners that published factsheets be accessible to all personnel within the department whether or not the sheets had completed the formal Review and Approval (R and A) process. Once factsheets had gone through the formal review and approval process, they could be published both internally and externally according to their individual publication status. An extended requirement and feature of the site was a keyword search capability across the factsheets. Also, since the site currently resides on both the internal and external networks, it needed to be registered with the Sandia search engines in order to allow the engines access to the site's content. To date, all of the above requirements and features have been implemented in the Factsheets web application. This was accomplished through the use of flat text databases, which are discussed in greater detail later in this paper.
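    The flat-text-database approach mentioned in the abstract can be sketched as one record per line, with a status field deciding where a factsheet is visible. The field names and statuses below are invented for illustration and are not Sandia's actual schema.

    ```python
    # Hypothetical sketch of a flat-text factsheet store: each factsheet is one
    # line of tab-separated fields, and publication status controls whether it
    # is visible on the restricted or open network.
    from pathlib import Path

    FIELDS = ("sheet_id", "title", "status")  # status: draft | internal | public

    def save_factsheets(path, sheets):
        """Write factsheet dicts as tab-separated lines."""
        lines = ["\t".join(s[f] for f in FIELDS) for s in sheets]
        Path(path).write_text("\n".join(lines), encoding="utf-8")

    def load_visible(path, network):
        """Return factsheets visible on the 'restricted' or 'open' network."""
        allowed = {"restricted": {"internal", "public"}, "open": {"public"}}[network]
        sheets = []
        for line in Path(path).read_text(encoding="utf-8").splitlines():
            sheet = dict(zip(FIELDS, line.split("\t")))
            if sheet["status"] in allowed:
                sheets.append(sheet)
        return sheets
    ```

    The appeal of the scheme is that publication filtering is a single status comparison, with no database server to administer.
    
    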

  4. Intro and Recent Advances: Remote Data Access via OPeNDAP Web Services

    Science.gov (United States)

    Fulker, David

    2016-01-01

    During the upcoming Summer 2016 meeting of the ESIP Federation (July 19-22), OPeNDAP will hold a Developers and Users Workshop. While a broad set of topics will be covered, a key focus is capitalizing on recent EOSDIS-sponsored advances in Hyrax, OPeNDAP's own software for server-side realization of the DAP2 and DAP4 protocols. These Hyrax advances are as important to data users as to data providers, and the workshop will include hands-on experiences of value to both. Specifically, a balanced set of presentations and hands-on tutorials will address advances in (1) server installation, (2) server configuration, (3) Hyrax aggregation capabilities, (4) support for data access from clients that are HTTP-based, JSON-based or OGC-compliant (especially WCS and WMS), (5) support for DAP4, (6) use and extension of server-side computational capabilities, and (7) several performance-affecting matters. Topics 2 through 7 will be relevant to data consumers, data providers and, notably, due to the open-source nature of all OPeNDAP software, to developers wishing to extend Hyrax, to build compatible clients and servers, and/or to employ Hyrax as middleware that enables interoperability across a variety of end-user and source-data contexts. A session for contributed talks will elaborate the topics listed above and embrace additional ones.
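    The DAP2 protocol served by Hyrax is accessed over plain HTTP: a client appends a response suffix such as `.dods` plus a constraint expression selecting variables and index ranges to the dataset URL. A minimal sketch, with a placeholder dataset URL and variable name:

    ```python
    # Build a DAP2 data-request URL: dataset URL + ".dods" + constraint
    # expression of the form var[start:stride:stop] per dimension.
    # The dataset URL and variable name are placeholders, not a real server.
    def dap2_data_url(dataset_url, var, ranges):
        """ranges: list of (start, stride, stop) index triples, one per dimension."""
        ce = var + "".join(f"[{a}:{b}:{c}]" for a, b, c in ranges)
        return f"{dataset_url}.dods?{ce}"

    url = dap2_data_url("http://example.org/opendap/sst", "sst", [(0, 1, 10), (5, 1, 5)])
    # url == "http://example.org/opendap/sst.dods?sst[0:1:10][5:1:5]"
    ```

    Because the request is just a URL, any HTTP client can subset remote data server-side, which is what makes DAP-based services attractive as interoperability middleware.
    
    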

  5. Data Analysis Protocol for the Development and Evaluation of Population Pharmacokinetic Models for Incorporation Into the Web-Accessible Population Pharmacokinetic Service - Hemophilia (WAPPS-Hemo)

    Science.gov (United States)

    McEneny-King, Alanna; Foster, Gary; Edginton, Andrea N

    2016-01-01

    Background: Hemophilia is an inherited bleeding disorder caused by a deficiency in a specific clotting factor. This results in spontaneous bleeding episodes and eventual arthropathy. The mainstay of hemophilia treatment is prophylactic replacement of the missing factor, but an optimal regimen remains to be determined. Rather, individualized prophylaxis has been suggested to improve both patient safety and resource utilization. However, uptake of this approach has been hampered by the demanding sampling schedules and complex calculations required to obtain individual estimates of pharmacokinetic (PK) parameters. The use of population pharmacokinetics (PopPK) can alleviate this burden by reducing the number of plasma samples required for accurate estimation, but few tools incorporating this approach are readily available to clinicians. Objective: The Web-accessible Population Pharmacokinetic Service - Hemophilia (WAPPS-Hemo) project aims to bridge this gap by providing a Web-accessible service for the reliable estimation of individual PK parameters from only a few patient samples. This service is predicated on the development of validated brand-specific PopPK models. Methods: We describe the data analysis plan for the development and evaluation of each PopPK model to be incorporated into the WAPPS-Hemo platform. The data sources and structure of the dataset are discussed first, followed by the procedures for handling both data below the limit of quantification (BLQ) and the absence of such BLQ data. Next, we outline the strategies for building the appropriate structural and covariate models, including the possible need for a process algorithm when PK behavior varies between subjects or significant covariates are not provided. Prior to use in a prospective manner, the models will undergo extensive evaluation using a variety of techniques such as diagnostic plots, bootstrap analysis and cross-validation. Finally, we describe the incorporation of a validated PopPK model into the…
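    The sampling burden the abstract describes can be made concrete with a toy example: under a simple one-compartment model, C(t) = (dose/V) · exp(−(CL/V)·t), two well-timed post-dose samples already identify an individual's clearance and volume by log-linear regression. This is an illustration only, not the WAPPS-Hemo PopPK methodology, which uses far richer population models.

    ```python
    # Toy individual PK fit for a one-compartment model with bolus dosing.
    # Illustrative only: real PopPK estimation uses nonlinear mixed-effects
    # models and Bayesian priors, not a two-point regression.
    import math

    def fit_one_compartment(dose, samples):
        """samples: [(time_h, concentration)] with at least two points."""
        t = [s[0] for s in samples]
        logc = [math.log(s[1]) for s in samples]
        n = len(samples)
        tbar, lbar = sum(t) / n, sum(logc) / n
        slope = sum((ti - tbar) * (li - lbar) for ti, li in zip(t, logc)) / \
                sum((ti - tbar) ** 2 for ti in t)
        k = -slope                                       # elimination rate (1/h)
        v = dose / math.exp(lbar + slope * (0 - tbar))   # back-extrapolate C(0)
        return {"CL": k * v, "V": v, "half_life": math.log(2) / k}
    ```

    A PopPK prior lets a service like this get away with even fewer, less carefully timed samples by borrowing strength from the population model.
    
    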

  6. Design and Realization of an Embedded Web Access Control System

    Institute of Scientific and Technical Information of China (English)

    谯倩; 毛燕琴; 沈苏彬

    2011-01-01

    For the security of the embedded Web system itself, and taking the characteristics of embedded Web systems into account, the role-based access control (RBAC) model is simplified by removing the complex pattern of role inheritance. On this basis, a "user-role-privilege set (business-page-operation)" access control design suited to embedded Web systems is proposed. The access control function was implemented for a specific embedded Web application using CGI technology, restricting legitimate users' access to embedded Web system resources and preventing intrusion by unauthorized users as well as damage caused by careless operation of legitimate users. The implemented Web application was tested, and the results show that the model functions well.
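    The simplified scheme described above can be sketched as a direct lookup from role to a set of (business, page, operation) triples, with no role inheritance. All user, role, and page names below are invented for illustration.

    ```python
    # Hedged sketch of a "user-role-privilege set (business-page-operation)"
    # access check with no role inheritance: a role maps directly to the
    # triples it is allowed. Names are illustrative only.
    ROLE_PRIVILEGES = {
        "admin":    {("config", "network.cgi", "read"),
                     ("config", "network.cgi", "write")},
        "operator": {("config", "network.cgi", "read")},
    }
    USER_ROLES = {"alice": "admin", "bob": "operator"}

    def allowed(user, business, page, operation):
        role = USER_ROLES.get(user)
        if role is None:
            return False  # unknown users get no access
        return (business, page, operation) in ROLE_PRIVILEGES.get(role, set())
    ```

    Dropping inheritance keeps the check to a single set-membership test, which suits the small memory and CPU budget of an embedded Web server.
    
    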

  7. OGIS Access System

    Data.gov (United States)

    National Archives and Records Administration — The OGIS Access System (OAS) provides case management, stakeholder collaboration, and public communications activities including a web presence via a web portal.

  8. Study on Access Control for Web Services Based on SOAP Gateway

    Institute of Scientific and Technical Information of China (English)

    夏春涛; 陈性元; 张斌; 王婷

    2007-01-01

    This paper discusses the need for a SOAP gateway in Web Services communication, proposes a SOAP-gateway-based access control architecture for Web Services, and analyzes the participants in the architecture and their responsibilities. It then presents two implementation methods for the SOAP gateway, together with an implementation mechanism for an XACML-based authorization service.

  9. Web accessible online public access catalogs in the Mercosur

    Directory of Open Access Journals (Sweden)

    Elsa Barber

    2008-06-01

    The user interfaces of web-based online public access catalogs (OPACs) of the academic, special, public and national libraries of the Mercosur countries (Argentina, Brazil, Paraguay, Uruguay) are analyzed to produce a diagnosis of the state of bibliographic description, subject analysis, user help messages, and bibliographic display. A quali-quantitative methodology is adopted; the checklist of system functions provided by Hildreth (1982) is updated and used as the data collection instrument, yielding a form that, through 38 closed questions, records the frequency of appearance of the basic functions of four areas: Area I, operations control; Area II, search formulation control and access points; Area III, output control; and Area IV, user assistance: information and instruction. Data from 297 units are analyzed, stratified by software type, library type, and country. Chi-square, odds-ratio, and multinomial logistic regression tests are applied to the results. The analysis corroborates the existence of significant differences in each stratum and verifies that most of the OPACs surveyed offer only minimal features.

  10. A unified access method for Web services in IoT based on CoAP

    Institute of Scientific and Technical Information of China (English)

    黄忠; 葛连升

    2014-01-01

    A CoAP-based unified Web access architecture is proposed, through which several different RFID networks can seamlessly join the Internet. On this basis, a new Web access approach is presented that binds SOAP to the Constrained Application Protocol (CoAP). Experimental results show that SOAP/CoAP binding is an effective method for unified Web service access to RFID networks, with much lower network overhead than the traditional SOAP/HTTP binding approach.
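    The overhead argument can be illustrated with rough arithmetic: HTTP framing is textual and tens of bytes per message, while a CoAP header is a fixed 4 bytes (plus a short token and options). The sketch below compares illustrative framings around the same SOAP body; the header strings are invented for the example, not captured traffic.

    ```python
    # Back-of-the-envelope comparison of per-message framing overhead for
    # SOAP over HTTP vs SOAP over CoAP. Framing contents are illustrative.
    soap_body = b"<Envelope><Body><ReadTag id='42'/></Body></Envelope>"

    http_framing = (
        "POST /rfid HTTP/1.1\r\n"
        "Host: reader.example\r\n"
        "Content-Type: application/soap+xml\r\n"
        f"Content-Length: {len(soap_body)}\r\n\r\n"
    ).encode()
    coap_framing = bytes(4)  # fixed 4-byte CoAP header (version/type/code/message-id)

    http_total = len(http_framing) + len(soap_body)
    coap_total = len(coap_framing) + len(soap_body)
    saving = http_total - coap_total  # bytes saved per message by CoAP framing
    ```

    On constrained RFID links the framing can dominate small payloads, which is why the binary CoAP binding measurably reduces network overhead.
    
    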

  11. MO-E-18C-01: Open Access Web-Based Peer-To-Peer Training and Education in Radiotherapy

    Energy Technology Data Exchange (ETDEWEB)

    Pawlicki, T [UC San Diego Medical Center, La Jolla, CA (United States); Brown, D; Dunscombe, P [Tom Baker Cancer Centre, Calgary, AB (Canada); Mutic, S [Washington University School of Medicine, Saint Louis, MO (United States)

    2014-06-15

    Purpose: Current training and education delivery models have limitations which result in gaps in clinical proficiency with equipment, procedures, and techniques. Educational and training opportunities offered by vendors and professional societies are by their nature not available at point of need or for the life of clinical systems. The objective of this work is to leverage modern communications technology to provide peer-to-peer training and education for radiotherapy professionals, in the clinic and on demand, as they undertake their clinical duties. Methods: We have developed a free-of-charge web site ( https://i.treatsafely.org ) using the Google App Engine and datastore (NDB, GQL), Python with AJAX-RPC, and Javascript. The site is a radiotherapy-specific hosting service to which user-created videos illustrating clinical or physics processes and other relevant educational material can be uploaded. Efficient navigation to the material of interest is provided through several RT-specific search tools, and videos can be scored by users, thus providing comprehensive peer review of the site content. The site also supports multilingual narration/translation of videos, a quiz function for competence assessment, and a library function allowing groups or institutions to define their standard operating procedures based on the video content. Results: The website went live in August 2013 and currently has over 680 registered users from 55 countries; 27.2% from the United States, 9.8% from India, 8.3% from the United Kingdom, 7.3% from Brazil, and 47.5% from other countries. The users include physicists (57.4%), oncologists (12.5%), therapists (8.2%) and dosimetrists (4.8%). There are 75 videos to date, including English, Portuguese, Mandarin, and Thai. Conclusion: Based on the initial acceptance of the site, we conclude that this open-access web-based peer-to-peer tool is fulfilling an important need in radiotherapy training and education. Site functionality should expand in…

  12. Improving Flexibility and Accessibility of Higher Education with Web 2.0 Technologies: Needs Analysis of Public Health Education Programs in Bulgaria

    Directory of Open Access Journals (Sweden)

    I. Sarieva

    2011-12-01

    The case study presented in this paper aims to address the issues related to the use of Web 2.0 technology in public health education in a particular college in Bulgaria in relation to providing flexible and accessible education consistent with the current trends in public health practices. The outcomes of the case study suggest that systematic steps are needed in order to assure effective inclusion of technology into the learning process; these steps include the completion of systematic studies of attrition rate and the reasons for student drop-out, training of administration and faculty members in effective incorporation of Web 2.0 technologies, introduction and promotion of Medicine 2.0 practices, and initiating the planning of design and development of Web 2.0 learning applications and environments in Bulgarian, which is the language of instruction.

  13. Comparison of trial participants and open access users of a web-based physical activity intervention regarding adherence, attrition, and repeated participation.

    Science.gov (United States)

    Wanner, Miriam; Martin-Diener, Eva; Bauer, Georg; Braun-Fahrländer, Charlotte; Martin, Brian W

    2010-02-10

    Web-based interventions are popular for promoting healthy lifestyles such as physical activity. However, little is known about user characteristics, adherence, attrition, and predictors of repeated participation on open access physical activity websites. The focus of this study was Active-online, a Web-based individually tailored physical activity intervention. The aims were (1) to assess and compare user characteristics and adherence to the website (a) in the open access context over time from 2003 to 2009, and (b) between trial participants and open access users; and (2) to analyze attrition and predictors of repeated use among participants in a randomized controlled trial compared with registered open access users. Data routinely recorded in the Active-online user database were used. Adherence was defined as: the number of pages viewed, the proportion of visits during which a tailored module was begun, the proportion of visits during which tailored feedback was received, and the time spent in the tailored modules. Adherence was analyzed according to six one-year periods (2003-2009) and according to the context (trial or open access) based on first visits and longest visits. Attrition and predictors of repeated participation were compared between trial participants and open access users. The number of recorded visits per year on Active-online decreased from 42,626 in 2003-2004 to 8343 in 2008-2009 (each of six one-year time periods ran from April 23 to April 22 of the following year). The mean age of users was between 38.4 and 43.1 years in all time periods and both contexts. The proportion of women increased from 49.5% in 2003-2004 to 61.3% in 2008-2009 (Popen access users. For open access users, adherence was similar during the first and the longest visits; for trial participants, adherence was lower during the first visits and higher during the longest visits. Of registered open access users and trial participants, 25.8% and 67.3% respectively visited Active…

  14. Web Access Control on a Petrochemical Information Service System

    Institute of Scientific and Technical Information of China (English)

    贾红阳; 郭力; 李晓霞; 杨章远; 姜林; 陈晓青

    2001-01-01

    Web access control is analyzed and applied to an information service system. First, the need for access control is discussed. Second, several implementation methods are introduced: Web servers provide access control functions of their own; access-checking code can be embedded in ASP/PHP pages; and CGI/ISAPI applications may use either or both of these methods. Finally, for the Internet Petrochemical Information Service System, a graphical access management software package was designed and implemented on the Apache server. It supports adding, deleting, and editing user/group information, granting or revoking access for users and groups, and allowing or denying access from specific IPs, and it can be conveniently ported to other similar information systems.
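    A hypothetical sketch of the per-request checks such a system performs, combining group-based permissions with an IP deny list, is shown below. All user, group, path, and IP values are invented for the example.

    ```python
    # Illustrative access check layered in front of a Web server:
    # an IP deny list is consulted first, then the union of the user's
    # group permissions. Names and rules are invented for the sketch.
    GROUP_PERMS = {"editors": {"/petrochem/edit"}, "readers": {"/petrochem/view"}}
    USER_GROUPS = {"li": {"editors", "readers"}, "guest": {"readers"}}
    DENIED_IPS = {"10.0.0.99"}

    def can_access(user, client_ip, path):
        if client_ip in DENIED_IPS:
            return False  # IP-level deny takes precedence over any permission
        perms = set().union(*(GROUP_PERMS.get(g, set())
                              for g in USER_GROUPS.get(user, set())))
        return path in perms
    ```

    In practice the same effect can be reached with the Web server's own directives, with checks embedded in ASP/PHP pages, or with both, as the abstract notes.
    
    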

  15. LigoDV-web: Providing easy, secure and universal access to a large distributed scientific data store for the LIGO scientific collaboration

    Science.gov (United States)

    Areeda, J. S.; Smith, J. R.; Lundgren, A. P.; Maros, E.; Macleod, D. M.; Zweizig, J.

    2017-01-01

    Gravitational-wave observatories around the world, including the Laser Interferometer Gravitational-Wave Observatory (LIGO), record a large volume of gravitational-wave output data and auxiliary data about the instruments and their environments. These data are stored at the observatory sites and distributed to computing clusters for data analysis. LigoDV-web is a web-based data viewer that provides access to data recorded at the LIGO Hanford, LIGO Livingston and GEO600 observatories, and the 40 m prototype interferometer at Caltech. The challenge addressed by this project is to provide meaningful visualizations of small data sets to anyone in the collaboration in a fast, secure and reliable manner with minimal software, hardware and training required of the end users. LigoDV-web is implemented as a Java Enterprise Application, with Shibboleth Single Sign On for authentication and authorization, and a proprietary network protocol used for data access on the back end. Collaboration members with proper credentials can request data be displayed in any of several general formats from any Internet appliance that supports a modern browser with Javascript and minimal HTML5 support, including personal computers, smartphones, and tablets. Since its inception in 2012, 634 unique users have visited the LigoDV-web website in a total of 33,861 sessions and generated a total of 139,875 plots. This infrastructure has been helpful in many analyses within the collaboration including follow-up of the data surrounding the first gravitational-wave events observed by LIGO in 2015.

  16. Measuring Personalization of Web Search

    DEFF Research Database (Denmark)

    Hannak, Aniko; Sapiezynski, Piotr; Kakhki, Arash Molavi

    2013-01-01

    Web search is an integral part of our daily lives. Recently, there has been a trend of personalization in Web search, where different users receive different results for the same search query. The increasing personalization is leading to concerns about Filter Bubble effects, where certain users are simply unable to access information that the search engines' algorithm decides is irrelevant. Despite these concerns, there has been little quantification of the extent of personalization in Web search today, or the user attributes that cause it. In light of this situation, we make three contributions. First, we develop a methodology for measuring personalization in Web search results. While conceptually simple, there are numerous details that our methodology must handle in order to accurately attribute differences in search results to personalization. Second, we apply our methodology to 200 users…
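    A methodology like the one described must, at minimum, quantify how two ranked result lists for the same query differ. A small sketch with placeholder result identifiers:

    ```python
    # Two simple list-difference measures for comparing search results:
    # Jaccard overlap of result sets and mean absolute rank displacement.
    # Result identifiers are placeholders, not real URLs.
    def jaccard(a, b):
        sa, sb = set(a), set(b)
        return len(sa & sb) / len(sa | sb)

    def rank_shift(a, b):
        """Mean absolute rank change for results present in both lists."""
        pos_b = {r: i for i, r in enumerate(b)}
        common = [r for r in a if r in pos_b]
        if not common:
            return None
        return sum(abs(i - pos_b[r]) for i, r in enumerate(a) if r in pos_b) / len(common)

    control = ["u1", "u2", "u3", "u4"]
    personalized = ["u2", "u1", "u3", "u5"]
    ```

    The harder part, which the paper stresses, is controlling for noise (time, data center, A/B tests) so that remaining differences can be attributed to personalization rather than to these measures themselves.
    
    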

  17. Evaluation of a web portal for improving public access to evidence-based health information and health literacy skills: a pragmatic trial.

    Directory of Open Access Journals (Sweden)

    Astrid Austvoll-Dahlgren

    BACKGROUND: Using the conceptual framework of shared decision-making and evidence-based practice, a web portal was developed to serve as a generic (non-disease-specific) tailored intervention to improve the lay public's health literacy skills. OBJECTIVE: To evaluate the effects of the web portal compared to no intervention in a real-life setting. METHODS: A pragmatic randomised controlled parallel trial using simple randomisation of 96 parents who had children aged <4 years. Parents were allocated to receive either access to the portal or no intervention, and assigned three tasks to perform over a three-week period. These included a searching task, a critical appraisal task, and reporting on perceptions about participation. Data were collected from March through June 2011. RESULTS: Use of the web portal was found to improve attitudes towards searching for health information. This variable was identified as the most important predictor of intention to search in both samples. Participants considered the web portal to have good usability, usefulness, and credibility. The intervention group showed slight increases in the use of evidence-based information, critical appraisal skills, and participation compared to the group receiving no intervention, but these differences were not statistically significant. CONCLUSION: Despite the fact that the study was underpowered, we found that the web portal may have a positive effect on attitudes towards searching for health information. Furthermore, participants considered the web portal to be a relevant tool. It is important to continue experimenting with web-based resources in order to increase user participation in health care decision-making. TRIAL REGISTRATION: ClinicalTrials.gov NCT01266798.

  18. 2B-Alert Web: An Open-Access Tool for Predicting the Effects of Sleep/Wake Schedules and Caffeine Consumption on Neurobehavioral Performance.

    Science.gov (United States)

    Reifman, Jaques; Kumar, Kamal; Wesensten, Nancy J; Tountas, Nikolaos A; Balkin, Thomas J; Ramakrishnan, Sridhar

    2016-12-01

    Computational tools that predict the effects of daily sleep/wake amounts on neurobehavioral performance are critical components of fatigue management systems, allowing for the identification of periods during which individuals are at increased risk for performance errors. However, none of the existing computational tools is publicly available, and the commercially available tools do not account for the beneficial effects of caffeine on performance, limiting their practical utility. Here, we introduce 2B-Alert Web, an open-access tool for predicting neurobehavioral performance, which accounts for the effects of sleep/wake schedules, time of day, and caffeine consumption, while incorporating the latest scientific findings in sleep restriction, sleep extension, and recovery sleep. We combined our validated Unified Model of Performance and our validated caffeine model to form a single, integrated modeling framework instantiated as a Web-enabled tool. 2B-Alert Web allows users to input daily sleep/wake schedules and caffeine consumption (dosage and time) to obtain group-average predictions of neurobehavioral performance based on psychomotor vigilance tasks. 2B-Alert Web is accessible at: https://2b-alert-web.bhsai.org. The 2B-Alert Web tool allows users to obtain predictions for mean response time, mean reciprocal response time, and number of lapses. The graphing tool allows for simultaneous display of up to seven different sleep/wake and caffeine schedules. The schedules and corresponding predicted outputs can be saved as a Microsoft Excel file; the corresponding plots can be saved as an image file. The schedules and predictions are erased when the user logs off, thereby maintaining privacy and confidentiality. The publicly accessible 2B-Alert Web tool is available for operators, schedulers, and neurobehavioral scientists as well as the general public to determine the impact of any given sleep/wake schedule, caffeine consumption, and time of day on performance of a…

  19. 76 FR 59307 - Nondiscrimination on the Basis of Disability in Air Travel: Accessibility of Web Sites and...

    Science.gov (United States)

    2011-09-26

    ... Room Web site, http://www.regulationroom.org , to learn about the rule and the rulemaking process, to... carriers enter into written agreements spelling out the respective responsibilities of the parties for... W3C adopted WCAG 2.0, incorporating developments in Web technology and lessons learned since WCAG...

  20. Brokering access to massive climate and landscape data via web services: observations and lessons learned after five years of the Geo Data Portal project.

    Science.gov (United States)

    Blodgett, D. L.; Walker, J. I.; Read, J. S.

    2015-12-01

    The USGS Geo Data Portal (GDP) project started in 2010 with the goal of providing climate and landscape model output data to hydrology and ecology modelers in model-ready form. The system takes a user-specified collection of polygons and a gridded time series dataset and returns a time series of spatial statistics for each polygon. The GDP is designed for scalability and is generalized such that any data, hosted anywhere on the Internet adhering to the NetCDF-CF conventions, can be processed. Five years into the project, over 600 unique users from more than 200 organizations have used the system's web user interface and some datasets have been accessed thousands of times. In addition to the web interface, python and R client libraries have seen steady usage growth and several third-party web applications have been developed to use the GDP for easy data access. Here, we will present lessons learned and improvements made after five years of operation of the system's user interfaces, processing server, and data holdings. A vision for the future availability and processing of massive climate and landscape data will be outlined.
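    The GDP's core operation, reducing a gridded time series to one statistic per polygon per time step, can be sketched as follows. Here the polygon is pre-rasterized to a boolean mask and the statistic is an unweighted mean, whereas the real service handles fractional cell weights, projections, and remote NetCDF-CF access.

    ```python
    # Conceptual sketch of polygon zonal statistics over a gridded time series:
    # for each time step, average the grid cells falling inside the polygon.
    # The mask stands in for a rasterized polygon; values are made up.
    def zonal_mean_series(grid_ts, mask):
        """grid_ts: list of 2-D grids (lists of float rows); mask: 2-D booleans."""
        series = []
        for grid in grid_ts:
            cells = [v for row, mrow in zip(grid, mask)
                       for v, m in zip(row, mrow) if m]
            series.append(sum(cells) / len(cells))
        return series

    mask = [[True, True], [False, True]]
    grids = [[[1.0, 2.0], [3.0, 4.0]], [[2.0, 4.0], [6.0, 8.0]]]
    series = zonal_mean_series(grids, mask)
    ```

    Performing this reduction server-side is what lets the GDP hand modelers a small per-polygon time series instead of the full gridded dataset.
    
    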

  1. Dynamic Science Data Services for Display, Analysis and Interaction in Widely-Accessible, Web-Based Geospatial Platforms Project

    Data.gov (United States)

    National Aeronautics and Space Administration — TerraMetrics, Inc., proposes an SBIR Phase I R/R&D program to investigate and develop a key web services architecture that provides data processing, storage and...

  2. The ATS Web Page Provides "Tool Boxes" for: Access Opportunities, Performance, Interfaces, Volume, Environments, "Wish List" Entry and Educational Outreach

    Science.gov (United States)

    1999-01-01

    This viewgraph presentation gives an overview of the Access to Space website, including information on the 'tool boxes' available on the website for access opportunities, performance, interfaces, volume, environments, 'wish list' entry, and educational outreach.

  3. The EarthScope Array Network Facility: application-driven low-latency web-based tools for accessing high-resolution multi-channel waveform data

    Science.gov (United States)

    Newman, R. L.; Lindquist, K. G.; Clemesha, A.; Vernon, F. L.

    2008-12-01

    Since April 2004 the EarthScope USArray seismic network has grown to over 400 broadband stations that stream multi-channel data in near real time to the Array Network Facility in San Diego. Providing secure, yet open, access to real-time and archived data for a broad range of audiences is best served by a series of platform-agnostic, low-latency, web-based applications. We present a framework of tools that interface between the World Wide Web and Boulder Real Time Technologies' Antelope Environmental Monitoring System data acquisition and archival software. These tools provide audiences ranging from network operators and geoscience researchers to funding agencies and the general public with comprehensive information about the experiment, from network-wide to station-specific metadata, state-of-health metrics, event detection rates, and archival data, with dynamic report generation over a station's two-year life span. Leveraging open-source web development frameworks on both the server side (Perl, Python, and PHP) and the client side (Flickr, Google Maps/Earth, and jQuery) facilitates the development of a robust, extensible architecture that can be tailored on a per-user basis, with rapid prototyping and development that adheres to web standards.

  4. Interoperative fundus image and report sharing in compliance with Integrating the Healthcare Enterprise conformance and Web Access to Digital Imaging and Communication in Medicine Persistent Object protocol

    Directory of Open Access Journals (Sweden)

    Hui-Qun Wu

    2013-12-01

    Full Text Available AIM: To address issues of interoperability between different fundus image systems, we proposed a web eye-picture archiving and communication system (PACS) framework in conformance with the digital imaging and communication in medicine (DICOM) and health level 7 (HL7) protocols to realize fundus image and report sharing and communication over the internet. METHODS: First, a telemedicine-based eye care workflow was established based on the Integrating the Healthcare Enterprise (IHE) Eye Care technical framework. Then, a browser/server architecture eye-PACS system was established in conformance with the web access to DICOM persistent objects (WADO) protocol, which contains three tiers. RESULTS: From any client system with a web browser installed, clinicians could log in to the eye-PACS to observe fundus images and reports. A structured report, saved as PDF/HTML with a multipurpose internet mail extensions (MIME) type and a reference link to the relevant fundus image using the WADO syntax, could provide enough information for clinicians. Functions provided by the open-source Oviyam viewer could be used to query, zoom, move, measure, and view DICOM fundus images. CONCLUSION: Such a web eye-PACS in compliance with the WADO protocol could be used to store and communicate fundus images and reports, and is therefore of great significance for teleophthalmology.

  5. Medical high-resolution image sharing and electronic whiteboard system: A pure-web-based system for accessing and discussing lossless original images in telemedicine.

    Science.gov (United States)

    Qiao, Liang; Li, Ying; Chen, Xin; Yang, Sheng; Gao, Peng; Liu, Hongjun; Feng, Zhengquan; Nian, Yongjian; Qiu, Mingguo

    2015-09-01

    There are various medical image sharing and electronic whiteboard systems available for diagnosis and discussion purposes. However, most of these systems require clients to install special software tools or web plug-ins to support whiteboard discussion, special medical image formats, and customized decoding algorithms for the transmission of HRIs (high-resolution images). This limits the accessibility of the software across different devices and operating systems. In this paper, we propose a solution based on pure web pages for lossless sharing and e-whiteboard discussion of medical HRIs, and have built a medical HRI sharing and e-whiteboard system with a four-layered design: (1) HRI access layer: we improved a tile-pyramid model, named the unbalanced ratio pyramid structure (URPS), to rapidly share lossless HRIs and adapt to the reading habits of users; (2) format conversion layer: we designed a format conversion engine (FCE) on the server side to convert and cache, in real time, the DICOM tiles that clients request with window-level parameters, maintaining browser compatibility and server-client response efficiency; (3) business logic layer: we built an XML behavior-relationship storage structure to store and share users' behavior, supporting real-time co-browsing and discussion between clients; (4) web-user-interface layer: AJAX technology and the Raphael toolkit were used to combine HTML and JavaScript into a client RIA (rich Internet application), giving clients desktop-like interaction on any pure web page. This system can be used to quickly browse lossless HRIs, and supports smooth discussion and co-browsing in any web browser in a diversified network environment. The proposed methods provide a way to share HRIs safely, and may be used in the fields of regional health, telemedicine, and remote education at low cost.
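    The tile-pyramid idea behind the URPS layer can be illustrated with the standard balanced pyramid, in which resolution halves per level until the image fits a single tile. This is a hypothetical sketch: URPS itself uses unbalanced ratios, and the tile size and image dimensions below are assumptions, not values from the paper.

```python
import math

def pyramid_levels(width, height, tile=256):
    """Number of zoom levels needed so the coarsest level fits one tile,
    halving resolution at each step (standard balanced tile pyramid)."""
    longest = max(width, height)
    return max(1, math.ceil(math.log2(longest / tile)) + 1)

# A 4096x4096 scan: 4096 -> 2048 -> 1024 -> 512 -> 256, i.e. 5 levels
print(pyramid_levels(4096, 4096))
```

    The client then requests only the tiles visible in its viewport at the current zoom level, which is what keeps lossless browsing of very large images responsive over the web.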

  6. Accessibility assessment of web pages of Spanish and foreign universities included in international rankings

    National Research Council Canada - National Science Library

    José R Hilera; Luis Fernández; Esther Suárez; Elena T Vilar

    2013-01-01

      This article describes a study conducted to evaluate the accessibility of the contents of the Web sites of some of the most important universities in Spain and the rest of the world, according with...

  7. Data Mining in Web Usage Pattern Research

    Institute of Scientific and Technical Information of China (English)

    张娥; 冯秋红; 宣慧玉; 田增瑞

    2001-01-01

    Web usage pattern mining is an advanced means of exploiting Web usage data: a deep analysis that extracts valid, novel, potentially useful, and ultimately understandable knowledge to support management decision-making. Companies are interested in how users use their Web sites and what they care about most from day to day, because this is fundamental to shaping company strategy. This paper surveys the content, current state, and future research directions of data mining for Web usage patterns, introduces the methods used, and notes the open questions in this domain.

  8. Design of a Remote Logistics Control System Based on WebAccess

    Institute of Scientific and Technical Information of China (English)

    朱光灿; 郑萍; 邵子惠; 彭昱; 温百东

    2012-01-01

    To meet the requirements of remote monitoring, a design is presented that uses WebAccess, a web configuration software package running entirely in the IE browser, to remotely monitor and control a laboratory logistics control system. The design builds a logistics control system with field control objects and three network layers: a control layer, a network layer, and a management and monitoring layer based on the Siemens configuration software WinCC. By making full use of WebAccess's convenient internet functions and exchanging data with the WinCC server in the management and monitoring layer via OPC, the system achieves remote control, remote configuration, and remote access, with the number of monitoring clients infinitely expandable. Actual operation proves that the proposed system is economical and clearly structured, and offers a good platform for modern comprehensive design experiments that can stimulate students' innovative ability.

  9. Integrated web-based viewing and secure remote access to a clinical data repository and diverse clinical systems.

    Science.gov (United States)

    Duncan, R G; Saperia, D; Dulbandzhyan, R; Shabot, M M; Polaschek, J X; Jones, D T

    2001-01-01

    The advent of the World-Wide-Web protocols and client-server technology has made it easy to build low-cost, user-friendly, platform-independent graphical user interfaces to health information systems and to integrate the presentation of data from multiple systems. The authors describe a Web interface for a clinical data repository (CDR) that was moved from concept to production status in less than six months using a rapid prototyping approach, multi-disciplinary development team, and off-the-shelf hardware and software. The system has since been expanded to provide an integrated display of clinical data from nearly 20 disparate information systems.

  10. Extraction of Web Content to Adapt Web Pages for Mobile Devices

    Directory of Open Access Journals (Sweden)

    Neha Gupta

    2011-03-01

    Full Text Available Nowadays mobile phones are replacing conventional PCs, as users browse and search the Internet via their mobile handsets. Web-based services and information can be accessed from any location with relative ease using mobile devices such as mobile phones and personal digital assistants (PDAs). To access educational data on mobile devices, web page adaptation is needed, keeping in mind the security and quality of the data. Various researchers are working on adaptation techniques. Educational Web Miner aims to develop an interface for kids to use mobile devices in a secure way. This paper presents a framework for adapting web pages as part of Educational Web Miner so that educational data can be accessed accurately, securely, and concisely. The present paper is part of a project whose aim is to develop an interface for kids, so that they can access current knowledge bases from mobile devices securely and get accurate, concise information with ease. Related studies on adaptation techniques are also presented.

  11. Ajax and Web Services

    CERN Document Server

    Pruett, Mark

    2006-01-01

    Ajax and web services are a perfect match for developing web applications. Ajax has built-in abilities to access and manipulate XML data, the native format for almost all REST and SOAP web services. Using numerous examples, this document explores how to fit the pieces together. Examples demonstrate how to use Ajax to access publicly available web services from Yahoo! and Google. You'll also learn how to use web proxies to access data on remote servers and how to transform XML data using XSLT.

  12. NetMHC-3.0: accurate web accessible predictions of human, mouse and monkey MHC class I affinities for peptides of length 8-11

    DEFF Research Database (Denmark)

    Lundegaard, Claus; Lamberth, K; Harndahl, M

    2008-01-01

    been used to predict possible MHC-binding peptides in a series of pathogen viral proteomes including SARS, Influenza and HIV, resulting in an average of 75–80% confirmed MHC binders. Here, the performance is further validated and benchmarked using a large set of newly published affinity data, non...

  13. NetMHC-3.0: accurate web accessible predictions of human, mouse and monkey MHC class I affinities for peptides of length 8-11.

    Science.gov (United States)

    Lundegaard, Claus; Lamberth, Kasper; Harndahl, Mikkel; Buus, Søren; Lund, Ole; Nielsen, Morten

    2008-07-01

    NetMHC-3.0 is trained on a large number of quantitative peptide data, using both affinity data from the Immune Epitope Database and Analysis Resource (IEDB) and elution data from SYFPEITHI. The method generates high-accuracy predictions of major histocompatibility complex (MHC):peptide binding. The predictions are based on artificial neural networks trained on data from 55 MHC alleles (43 human and 12 non-human), and position-specific scoring matrices (PSSMs) for an additional 67 HLA alleles. As only the MHC class I prediction server is available, predictions are possible for peptides of length 8-11 for all 122 alleles. Artificial neural network predictions are given as actual IC50 values, whereas PSSM predictions are given as log-odds likelihood scores. The output is optionally available for download for easy post-processing. The training method underlying the server is the best available, and has been used to predict possible MHC-binding peptides in a series of pathogen viral proteomes including SARS, Influenza and HIV, resulting in an average of 75-80% confirmed MHC binders. Here, the performance is further validated and benchmarked using a large set of newly published affinity data, non-redundant to the training set. The server is free of use and available at: http://www.cbs.dtu.dk/services/NetMHC.
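    A PSSM prediction of the kind described above is simply the sum of one position-specific log-odds score per residue of the peptide. A minimal sketch of such scoring follows; the matrix values, reduced alphabet, and peptide are invented for illustration and are not NetMHC's actual parameters.

```python
def pssm_score(peptide, pssm):
    """Sum the position-specific log-odds score of each residue.

    peptide : string of amino-acid letters (e.g. an 8-11-mer)
    pssm    : list with one dict per peptide position, mapping
              residue letter -> log-odds score
    """
    return sum(pssm[i][aa] for i, aa in enumerate(peptide))

# Hypothetical 3-position matrix over a tiny alphabet
pssm = [
    {"A": 1.2, "L": -0.3, "K": 0.1},
    {"A": -0.5, "L": 2.0, "K": 0.4},
    {"A": 0.0, "L": 0.7, "K": -1.1},
]
print(pssm_score("ALK", pssm))  # 1.2 + 2.0 + (-1.1)
```

    A real matrix would cover all 20 amino acids over the full peptide length, with scores derived from binder frequencies relative to a background distribution; candidate binders are then ranked by this score.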

  14. Eduquito: accessible authorship and collaboration tools from the perspective of Web 2.0

    Directory of Open Access Journals (Sweden)

    Lucila Maria Costi Santarosa

    2012-09-01

    Full Text Available Eduquito, a digital/virtual learning environment developed by the team of researchers at NIEE/UFRGS, seeks to support processes of socio-digital inclusion, having been designed in line with the accessibility and universal design principles standardized by the WAI/W3C. The development of the accessible digital/virtual platform and the results of its use by people with disabilities are discussed, revealing an ongoing process of verification and validation of the features and functionality of the Eduquito environment across human diversity. We present and examine two tools for individual and collective authorship, the Multimedia Workshop and Bloguito, an accessible blog: new features of the Eduquito environment that emerge from applying the concept of pervasiveness, seeking to establish literacy spaces and to foster practices of technological mediation for socio-digital inclusion in the context of Web 2.0.

  15. Accessibility, usability, and usefulness of a Web-based clinical decision support tool to enhance provider-patient communication around Self-management TO Prevent (STOP) Stroke.

    Science.gov (United States)

    Anderson, Jane A; Godwin, Kyler M; Saleem, Jason J; Russell, Scott; Robinson, Joshua J; Kimmel, Barbara

    2014-12-01

    This article reports redesign strategies identified to create a Web-based user interface for the Self-management TO Prevent (STOP) Stroke Tool. Members of a Stroke Quality Improvement Network (N = 12) viewed a visualization video of a proposed prototype and provided feedback on implementation barriers/facilitators. Stroke-care providers (N = 10) tested the Web-based prototype in think-aloud sessions of simulated clinic visits. Participants' dialogues were coded into themes. Access to comprehensive information and the automated features/systematized processes were the primary accessibility and usability facilitator themes. The need for training, time to complete the tool, and computer-centric care were identified as possible usability barriers. Patient accountability, reminders for best practice, goal-focused care, and communication/counseling themes indicate that the STOP Stroke Tool supports the paradigm of patient-centered care. The STOP Stroke Tool was found to prompt clinicians on secondary stroke-prevention clinical-practice guidelines, facilitate comprehensive documentation of evidence-based care, and support clinicians in providing patient-centered care through the shared decision-making process that occurred while using the action-planning/goal-setting feature of the tool.

  16. On Web Services Access Control Based on Quantified Roles

    Institute of Scientific and Technical Information of China (English)

    吴春雷; 崔学荣

    2012-01-01

    The concepts of permission value and quantified role are introduced to build a fine-grained Web services access control model. By defining Web service resources, service attributes, and access mode sets, the definition of the permission set is expanded. The definition and distribution of permission values are studied, and the validation and representation of quantified roles are analysed. The concept of a 'behaviour value' for a Web services user is proposed, and a correlation between behaviour values and a user's role quantity is established. Dynamic calculation of behaviour values and adjustment of user permissions are achieved based on user behaviour and context.
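    The core check in such a quantified-role model, granting access only when a user's role quantity meets the permission value attached to an operation, can be sketched as follows. All names, roles, and values here are hypothetical, and the paper's model additionally adjusts role quantities dynamically from behaviour values, which this sketch omits.

```python
def can_access(role_quantity, required_permission_value):
    """Grant access when the user's quantified role meets or exceeds
    the permission value attached to the service operation."""
    return role_quantity >= required_permission_value

# Hypothetical user roles with quantities, and per-operation permission values
user_roles = {"analyst": 0.6, "operator": 0.9}
service_permissions = {
    "read_report": ("analyst", 0.5),      # operation -> (role, value needed)
    "invoke_admin_op": ("operator", 1.0),
}

def check(user_roles, service_permissions, operation):
    role, needed = service_permissions[operation]
    return can_access(user_roles.get(role, 0.0), needed)

print(check(user_roles, service_permissions, "read_report"))      # True
print(check(user_roles, service_permissions, "invoke_admin_op"))  # False
```

    Lowering a user's role quantity when their behaviour value drops would then revoke exactly those operations whose permission values exceed the new quantity, without editing any role assignments.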

  17. Enabling Web-Based GIS Tools for Internet and Mobile Devices To Improve and Expand NASA Data Accessibility and Analysis Functionality for the Renewable Energy and Agricultural Applications

    Science.gov (United States)

    Ross, A.; Stackhouse, P. W.; Tisdale, B.; Tisdale, M.; Chandler, W.; Hoell, J. M., Jr.; Kusterer, J.

    2014-12-01

    The NASA Langley Research Center Science Directorate and Atmospheric Science Data Center have initiated a pilot program to utilize Geographic Information System (GIS) tools that enable, generate, and store climatological averages using spatial queries and calculations in a spatial database, resulting in greater accessibility of data for government agencies, industry, and private-sector individuals. The major objectives of this effort include: 1) processing and reformulating current data to be consistent with ESRI and OpenGIS tools; 2) developing functions that improve capability and analysis to produce "on-the-fly" data products, extending these past the single location to regional and global scales; 3) updating the current web sites to enable both web-based and mobile application displays, optimized for mobile platforms; 4) interacting with user communities in government and industry to test formats and usage; and 5) developing a series of metrics that allow for monitoring of progressive performance. Significant project results will include the development of Open Geospatial Consortium (OGC) compliant web services (WMS, WCS, WFS, WPS) that serve renewable energy and agricultural application products to users of GIS software and tools. Each data product and OGC service will be registered within ECHO, the Common Metadata Repository, the Geospatial Platform, and Data.gov to ensure the data are easily discoverable and to provide data users with enhanced access to SSE data, parameters, services, and applications. This effort supports cross-agency, cross-organization interoperability of SSE data products and services by collaborating with DOI, NRCan, NREL, NCAR, and HOMER for requirements vetting and test-bed users before making the tools available to the wider public.

  18. Organization-Based Access Control Model for Web Services

    Institute of Scientific and Technical Information of China (English)

    李怀明; 王慧佳; 符林

    2014-01-01

    To address the problem that existing access control strategies can hardly guarantee flexible authorization in complex Web-service-oriented e-government systems, this paper proposes an organization-based access control model for Web services, building on the organization-based four-level access control model (OB4LAC). The model takes the organization as its core and studies access control and authorization management from a management perspective. By introducing position agents and authorization units, authorization can be adjusted as environment context information changes, implementing dynamic authorization, while the state migration of authorization units provides support for workflow patterns. Furthermore, the model divides permissions into service permissions and service-attribute permissions, achieving fine-grained resource protection. Application examples show that the model fits the complex organizational structures of e-government systems well, and makes authorization more efficient and flexible while protecting Web service resources.

  19. Web Services-Based Access to Local Clinical Trial Databases: A Standards Initiative of the Association of American Cancer Institutes

    OpenAIRE

    Stahl, Douglas C.; Evans, Richard M.; Afrin, Lawrence B.; DeTeresa, Richard M.; Ko, Dave; Mitchell, Kevin

    2003-01-01

    Electronic discovery of the clinical trials being performed at a specific research center is a challenging task, which presently requires manual review of the center’s locally maintained databases or web pages of protocol listings. Near real-time automated discovery of available trials would increase the efficiency and effectiveness of clinical trial searching, and would facilitate the development of new services for information providers and consumers. Automated discovery efforts to date hav...

  20. Web services-based access to local clinical trial databases: a standards initiative of the Association of American Cancer Institutes.

    Science.gov (United States)

    Stahl, Douglas C; Evans, Richard M; Afrin, Lawrence B; DeTeresa, Richard M; Ko, Dave; Mitchell, Kevin

    2003-01-01

    Electronic discovery of the clinical trials being performed at a specific research center is a challenging task, which presently requires manual review of the center's locally maintained databases or web pages of protocol listings. Near real-time automated discovery of available trials would increase the efficiency and effectiveness of clinical trial searching, and would facilitate the development of new services for information providers and consumers. Automated discovery efforts to date have been hindered by issues such as disparate database schemas and vocabularies, and insufficient standards for easy intersystem exchange of high-level data, but adequate infrastructure now exists to make possible the development of applications for near real-time automated discovery of trials. This paper describes the current state (design and implementation) of the Web Services Specification for Publication and Discovery of Clinical Trials as developed by the Technology Task Force of the Association of American Cancer Institutes. The paper then briefly discusses a prototype web-service-based application that implements the specification. Directions for the evolution of this specification are also discussed.

  1. Network of Research Infrastructures for European Seismology (NERIES)-Web Portal Developments for Interactive Access to Earthquake Data on a European Scale

    Science.gov (United States)

    Spinuso, A.; Trani, L.; Rives, S.; Thomy, P.; Euchner, F.; Schorlemmer, D.; Saul, J.; Heinloo, A.; Bossu, R.; van Eck, T.

    2009-04-01

    The Network of Research Infrastructures for European Seismology (NERIES) is a European Commission (EC) project whose focus is networking seismological observatories and research institutes into one integrated European infrastructure that provides access to data and data products for research. Seismological institutes and organizations in European and Mediterranean countries maintain large, geographically distributed data archives; this scenario suggested a design approach based on the concept of an internet service-oriented architecture (SOA) to establish a cyberinfrastructure for distributed and heterogeneous data streams and services. Moreover, one of the goals of NERIES is to design and develop a Web portal that acts as the uppermost layer of the infrastructure and provides rendering capabilities for the underlying sets of data. The Web services that are currently being designed and implemented will deliver data in appropriate formats. The parametric information about a seismic event is delivered using a seismology-specific Extensible Markup Language (XML) format called QuakeML (https://quake.ethz.ch/quakeml), which has been formalized and implemented in coordination with global earthquake-information agencies. Uniform Resource Identifiers (URIs) are used to assign identifiers to (1) seismic-event parameters described by QuakeML, and (2) generic resources, for example authorities, location providers, location methods, software adopted, and so on, described by means of a data model constructed with the resource description framework (RDF) and accessible as a service. The European-Mediterranean Seismological Center (EMSC) has implemented a unique event identifier (UNID) that will create the seismic event URI used by the QuakeML data model. Access to data such as broadband waveforms, accelerometric data, and station inventories will also be provided through a set of Web services that will wrap the middleware used by the

  2. A SMART groundwater portal: An OGC web services orchestration framework for hydrology to improve data access and visualisation in New Zealand

    Science.gov (United States)

    Klug, Hermann; Kmoch, Alexander

    2014-08-01

    Transboundary and cross-catchment access to hydrological data is the key to designing successful environmental policies and activities. Electronic maps based on distributed databases are fundamental for planning and decision making in all regions and at all spatial and temporal scales. Freshwater is an essential asset in New Zealand (and globally), and the availability and accessibility of hydrological information held by or for public authorities and businesses are becoming a crucial management factor. Access to, and visual representation of, environmental information for the public is essential for raising awareness of water quality and quantity matters. Detailed interdisciplinary knowledge about the environment is required to ensure that New Zealand's environmental policy-making community considers regional and local differences in hydrological status while assessing the overall national situation. However, cross-regional and inter-agency sharing of environmental spatial data is complex and challenging. In this article, we first provide an overview of state-of-the-art standards-compliant techniques and methodologies for the practical implementation of simple, measurable, achievable, repeatable, and time-based (SMART) hydrological data management principles. Second, we contrast international state-of-the-art data management developments with the present status of groundwater information in New Zealand. Finally, for the topics of (i) data access and harmonisation, (ii) sensor web enablement, and (iii) metadata, we summarise our findings, provide recommendations on future developments, and highlight the specific advantages resulting from a seamless view, discovery, access, and analysis of interoperable hydrological information and metadata for decision making.

  3. 76 FR 71914 - Nondiscrimination on the Basis of Disability in Air Travel: Accessibility of Web Sites and...

    Science.gov (United States)

    2011-11-21

    ... industry. We are aware that there are pros and cons to our proposal to require carriers to work with their... the pros and cons from their perspectives of this approach. 6. Ongoing Costs To Maintain an Accessible... compatible with screen-reader technology and is activated by a single click on the homepage of the...

  4. Sign Language Web Pages

    Science.gov (United States)

    Fels, Deborah I.; Richards, Jan; Hardman, Jim; Lee, Daniel G.

    2006-01-01

    The World Wide Web has changed the way people interact. It has also become an important equalizer of information access for many social sectors. However, for many people, including some sign language users, accessing the Web can be difficult. For some, it not only presents another barrier to overcome but has left them without cultural equality. The…

  5. Web 2.0

    CERN Document Server

    Han, Sam

    2012-01-01

    Web 2.0 is a highly accessible introductory text examining all the crucial discussions and issues which surround the changing nature of the World Wide Web. It not only contextualises Web 2.0 within the history of the Web, but also goes on to explore its position within the broader dispositif of emerging media technologies. The book uncovers the connections between diverse media technologies including mobile smart phones, hand-held multimedia players, "netbooks" and electronic book readers such as the Amazon Kindle, all of which are made possible only by Web 2.0. In addition, Web 2.0 m

  6. Efficient Web Harvesting Strategies for Monitoring Deep Web Content

    NARCIS (Netherlands)

    Khelghati, Mohammadreza; Hiemstra, Djoerd; van Keulen, Maurice

    2016-01-01

    Web content changes rapidly. In Focused Web Harvesting [?], which aims at achieving a complete harvest for a given topic, this dynamic nature of the web creates problems for users who need to access a complete set of web data related to their topics of interest. Whether you are a fan

  7. Efficient Web Harvesting Strategies for Monitoring Deep Web Content

    NARCIS (Netherlands)

    Khelghati, Mohammadreza; Hiemstra, Djoerd; van Keulen, Maurice

    Web content changes rapidly [18]. In Focused Web Harvesting [17], whose aim is to achieve a complete harvest for a given topic, this dynamic nature of the web creates problems for users who need to access the set of all web data relevant to their topics of interest. Whether you are a fan

  12. Abnormal Behavior Detection Based on Web Access Log

    Institute of Scientific and Technical Information of China (English)

    刘志宏; 孙长国

    2015-01-01

    With the rapid development of the Internet, attack techniques against web sites of all kinds emerge in an endless stream. This paper introduces a process for analyzing massive Web access logs using log analysis techniques, and mines attack behavior using methods such as characteristic string matching and statistical analysis of access frequency. Through practical application scenarios, it describes how to locate the source of an attack after one has actually occurred, thereby improving the ability to detect security threats.
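The two detection techniques this abstract mentions, characteristic string matching and access-frequency statistics, can be sketched in a few lines of Python. The signature patterns, log format and threshold below are invented for illustration, not taken from the paper:

```python
import re
from collections import Counter

# Hypothetical attack signatures; real deployments load these from a rule base.
ATTACK_SIGNATURES = [r"union\s+select", r"\.\./\.\./", r"<script"]

def find_suspicious(log_lines, freq_threshold=100):
    """Flag requests matching attack signatures and clients with
    abnormally high request frequency (simple statistical check)."""
    flagged = []
    per_client = Counter()
    for line in log_lines:
        ip, _, request = line.partition(" ")  # "<ip> <method> <path...>"
        per_client[ip] += 1
        if any(re.search(sig, request, re.IGNORECASE) for sig in ATTACK_SIGNATURES):
            flagged.append(line)
    noisy = {ip for ip, n in per_client.items() if n > freq_threshold}
    return flagged, noisy

log = [
    "10.0.0.1 GET /index.html",
    "10.0.0.2 GET /item?id=1 UNION SELECT password FROM users",
] + ["10.0.0.3 GET /login"] * 150  # one client hammering the server

flagged, noisy = find_suspicious(log)
```

Here the SQL-injection attempt is caught by string matching, while the high-frequency client is caught by the statistical check.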

  13. A Study of Semi-automatic Web Accessibility Evaluation Tool Based on the Subjective and Objective Detection Method

    Institute of Scientific and Technical Information of China (English)

    赵英; 傅沛蕾

    2016-01-01

    [Purpose/Significance] Web accessibility detection is a key problem in web accessibility construction. Existing detection methods each have their own advantages but also their shortcomings, so more in-depth research on detection methods is needed to provide more effective approaches for accessibility testing and to promote accessible web construction. [Method/Process] After reviewing the state of research on subjective and objective detection methods, we expound the value of information accessibility testing and analyze in detail the respective advantages and disadvantages of the subjective and objective methods in practice. Based on a comprehensive comparison of the two kinds of methods, we propose the design of a semi-automatic detection tool and analyze its framework, logical structure and system partitioning in detail. [Result/Conclusion] The tool is a semi-automatic web accessibility evaluation tool positioned between manual subjective inspection and fully automatic machine testing: it combines the concept of an automated testing tool with subjective evaluation based on professional knowledge, and can thus examine web pages comprehensively, objectively, accurately, quickly and conveniently.

  14. Implementation Method of Non-Intrusive Monitoring Mechanism for Web Services Testing

    Institute of Scientific and Technical Information of China (English)

    Lun Cai; Jing-Ling Liu; Xi Wang

    2009-01-01

    In web services testing, accurately accessing the interactive content of the services under test and the information about their service conditions are the key issues of system design and realization. A non-intrusive solution based on Axis2 is presented to overcome the difficulty of information retrieval in web service testing. It can be plugged into the server side or the client side freely, to test pre-deployed or deployed web services. Moreover, it provides a monitoring interface and a corresponding subscription/publication mechanism for users, based on web services, to support quality assurance in service-oriented architecture (SOA) applications.

  15. The Study and Implementation of Accessing Web Services for the Peer in P2P Network

    Institute of Scientific and Technical Information of China (English)

    翟丽芳; 张卫

    2004-01-01

    This paper first briefly introduces the technical background of P2P networks and Web Services, and then proposes the idea of integrating the two. To this end, the concept of a Web Service Broker is introduced, enabling a peer in a P2P network to access Web Services transparently.

  16. Web Mining: An Overview

    Directory of Open Access Journals (Sweden)

    P. V. G. S. Mudiraj; B. Jabber; K. David Raju

    2011-12-01

    Full Text Available Web usage mining is a main research area in Web mining, focused on learning about Web users and their interactions with Web sites. The aim is to discover users' access patterns automatically and quickly from the vast Web log data, such as frequent access paths, frequent access page groups and user clusters. Through web usage mining, the server log, registration information and other data left by users provide a foundation for organizational decision making. This article provides a survey and analysis of current Web usage mining systems and technologies. There are generally three tasks in Web usage mining: preprocessing, pattern analysis and knowledge discovery. Preprocessing cleans the server log file by removing entries such as errors, failures and repeated requests for the same URL from the same host. The main task of pattern analysis is to filter out uninteresting information and to visualize and interpret the interesting patterns for users. The statistics collected from the log file help to discover knowledge, which can be used to classify users and web pages as excellent, medium or weak based on hit counts. The design of the website is then restructured based on user behavior and hit counts, which provides quicker responses to web users, saves server memory, and reduces HTTP requests and bandwidth utilization. This paper addresses challenges in the three phases of Web usage mining along with Web structure mining. It also discusses an application of WUM, an online recommender system that dynamically generates links to pages that a user has not yet visited and that might be of potential interest. Unlike the recommender systems proposed so far, ONLINE MINER does not make use of any off-line component and is able to manage Web sites made up of dynamically generated pages.
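The preprocessing step described here, cleaning the server log of error entries and of repeated requests for the same URL from the same host, might look like this minimal Python sketch; the tuple-based log format is an assumption for illustration:

```python
def preprocess(entries):
    """Drop failed requests (HTTP status >= 400) and repeats of the
    same URL from the same host, keeping the first occurrence."""
    cleaned, seen = [], set()
    for host, url, status in entries:
        if status >= 400:           # error/failure entry
            continue
        if (host, url) in seen:     # repeated request, same URL + host
            continue
        seen.add((host, url))
        cleaned.append((host, url, status))
    return cleaned

raw = [("h1", "/a", 200), ("h1", "/a", 200), ("h1", "/b", 404), ("h2", "/a", 200)]
clean = preprocess(raw)  # the repeat and the 404 entry are dropped
```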

  17. Access Control Method for Web Application System Based on Role-function

    Institute of Scientific and Technical Information of China (English)

    庞希愚; 王成; 仝春玲

    2014-01-01

    This paper analyzes the access control requirements of Web application systems and the shortcomings of the Role-Based Access Control (RBAC) model in such systems, proposes an access control approach based on a role-function model, and discusses its implementation. Starting from the Web page organization structure that naturally follows the system's business function requirements, and from users' access control requirements, the business functions implemented by the pages of the lowest-level menus are partitioned to form the basic units of permission configuration. By configuring the relations among user, role, page, menu and function, the approach controls user access to system resources such as Web pages, the HTML elements they contain, and the operations on them. Practical application in the scientific research management system of Shandong Jiaotong University shows that enforcing access control on the business functions implemented by menus and pages satisfies enterprise requirements for user access control in Web systems. The approach is simple to operate and highly general, and effectively reduces the workload of Web system development.
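A minimal sketch of the role-function idea, configuring relations among user, role, page and function and checking them on access. All the names below are hypothetical, not taken from the paper:

```python
# Permission configuration: which roles a user has, and which
# (page, function) pairs each role may exercise.
user_roles = {"alice": {"researcher"}, "bob": {"admin"}}
role_functions = {
    "researcher": {("projects_page", "view")},
    "admin": {("projects_page", "view"), ("projects_page", "edit")},
}

def can_access(user, page, function):
    """A user may perform `function` on `page` if any of the user's
    roles is configured with that (page, function) pair."""
    return any((page, function) in role_functions.get(r, set())
               for r in user_roles.get(user, set()))
```

The same check can gate individual HTML elements by treating each element's operation as a (page, function) pair.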

  18. An Efficient Cluster Based Web Object Filters From Web Pre-Fetching And Web Caching On Web User Navigation

    Directory of Open Access Journals (Sweden)

    A. K. Santra

    2012-05-01

    Full Text Available The World Wide Web is a distributed internet system that provides dynamic and interactive services including online tutoring, video/audio conferencing and e-commerce, which generate heavy demand on network resources and web servers. This demand has grown very rapidly over the past few years, increasing the amount of traffic over the internet; as a result, network performance has become very slow. Web pre-fetching and caching is one of the effective solutions for reducing web access latency and improving quality of service. An existing model presented a cluster-based pre-fetching scheme that identified clusters of correlated Web pages based on users' access patterns. Web pre-fetching and caching cause significant improvements in the performance of Web infrastructure. In this paper, we present an efficient cluster-based Web Object Filter scheme, built from Web pre-fetching and Web caching, to evaluate web users' navigation patterns and user preferences in product search. Clusters of web page objects are obtained from pre-fetched and cached contents. User navigation is evaluated from the web cluster objects with similarity retrieval in subsequent user sessions. Web Object Filters are built by interpreting the cluster web pages related to unique users and discarding redundant pages. Ranking is done on users' web page product preferences over multiple sessions of each individual user. Performance is measured in terms of objective function, number of clusters and cluster accuracy.
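As one concrete illustration of pre-fetching driven by users' access patterns (a generic sketch, not the paper's own scheme), successor pages can be counted across sessions and the most frequent ones pre-fetched into the cache:

```python
from collections import Counter, defaultdict

def successor_counts(sessions):
    """Count how often each page immediately follows another
    across recorded user sessions."""
    succ = defaultdict(Counter)
    for session in sessions:
        for a, b in zip(session, session[1:]):
            succ[a][b] += 1
    return succ

def prefetch_candidates(succ, page, k=2):
    """Pages most likely to be requested next after `page`;
    a pre-fetching cache would fetch these ahead of time."""
    return [p for p, _ in succ[page].most_common(k)]

sessions = [["/home", "/catalog", "/item1"],
            ["/home", "/catalog", "/item2"],
            ["/home", "/about"]]
succ = successor_counts(sessions)
```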

  19. Web Map Services (WMS) Global Mosaic

    Science.gov (United States)

    Percivall, George; Plesea, Lucian

    2003-01-01

    The WMS Global Mosaic provides access to imagery of the global landmass using an open standard for web mapping. The seamless image is a mosaic of Landsat 7 scenes; geographically-accurate with 30 and 15 meter resolutions. By using the OpenGIS Web Map Service (WMS) interface, any organization can use the global mosaic as a layer in their geospatial applications. Based on a trade study, an implementation approach was chosen that extends a previously developed WMS hosting a Landsat 5 CONUS mosaic developed by JPL. The WMS Global Mosaic supports the NASA Geospatial Interoperability Office goal of providing an integrated digital representation of the Earth, widely accessible for humanity's critical decisions.
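Any OGC WMS layer, including a global mosaic like this one, is retrieved through a GetMap request. The sketch below builds such a request URL; the endpoint and layer name are placeholders, not the actual mosaic service:

```python
from urllib.parse import urlencode

def getmap_url(base, layer, bbox, width, height):
    """Build a WMS 1.1.1 GetMap request URL for a lat/lon bounding box."""
    params = {
        "SERVICE": "WMS", "VERSION": "1.1.1", "REQUEST": "GetMap",
        "LAYERS": layer, "SRS": "EPSG:4326",
        "BBOX": ",".join(map(str, bbox)),   # minx,miny,maxx,maxy
        "WIDTH": width, "HEIGHT": height, "FORMAT": "image/jpeg",
    }
    return base + "?" + urlencode(params)

# Hypothetical endpoint and layer name, for illustration only.
url = getmap_url("http://example.org/wms", "global_mosaic",
                 (-180, -90, 180, 90), 1024, 512)
```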

  20. Semantic Web Technologies for the Adaptive Web

    DEFF Research Database (Denmark)

    Dolog, Peter

    2007-01-01

    Ontologies and reasoning are the key terms brought into focus by the semantic web community. Formal representation of ontologies in a common data model on the web can be taken as a foundation for adaptive web technologies as well. This chapter describes how ontologies shared on the semantic web...... provide conceptualization for the links which are a main vehicle to access information on the web. The subject domain ontologies serve as constraints for generating only those links which are relevant for the domain a user is currently interested in. Furthermore, user model ontologies provide additional...... means for deciding which links to show, annotate, hide, generate, and reorder. The semantic web technologies provide means to formalize the domain ontologies and metadata created from them. The formalization enables reasoning for personalization decisions. This chapter describes which components...

  1. RNA FRABASE 2.0: an advanced web-accessible database with the capacity to search the three-dimensional fragments within RNA structures

    Directory of Open Access Journals (Sweden)

    Wasik Szymon

    2010-05-01

    Full Text Available Abstract Background Recent discoveries concerning novel functions of RNA, such as RNA interference, have contributed towards the growing importance of the field. In this respect, a deeper knowledge of complex three-dimensional RNA structures is essential to understand their new biological functions. A number of bioinformatic tools have been proposed to explore two major structural databases (PDB, NDB) in order to analyze various aspects of RNA tertiary structures. One of these tools is RNA FRABASE 1.0, the first web-accessible database with an engine for automatic search of 3D fragments within PDB-derived RNA structures. This search is based upon a user-defined RNA secondary structure pattern. In this paper, we present and discuss RNA FRABASE 2.0. This second version of the system represents a major extension of this tool in terms of providing new data and a wide spectrum of novel functionalities. An intuitively operated web server platform enables very fast user-tailored search of three-dimensional RNA fragments, their multi-parameter conformational analysis and visualization. Description RNA FRABASE 2.0 has stored information on 1565 PDB-deposited RNA structures, including all NMR models. The RNA FRABASE 2.0 search engine algorithms operate on the database of RNA sequences and the new library of RNA secondary structures, coded in the dot-bracket format extended to hold multi-stranded structures and to cover residues whose coordinates are missing in the PDB files. The library of RNA secondary structures (and their graphics) is made available. A high level of efficiency of the 3D search has been achieved by introducing novel tools to formulate advanced search patterns and to screen highly populated tertiary structure elements. RNA FRABASE 2.0 also stores data and conformational parameters in order to provide "on the spot" structural filters to explore the three-dimensional RNA structures. An instant visualization of the 3D RNA

  2. Pro web project management

    CERN Document Server

    Emond, Justin

    2012-01-01

    Caters to an under-served niche market of small and medium-sized web consulting projects Eases people's project management pain Uses a clear, simple, and accessible style that eschews theory and hates walls of text

  3. Web Design Matters

    Science.gov (United States)

    Mathews, Brian

    2009-01-01

    The web site is a library's most important feature. Patrons use the web site for numerous functions, such as renewing materials, placing holds, requesting information, and accessing databases. The homepage is the place they turn to look up the hours, branch locations, policies, and events. Whether users are at work, at home, in a building, or on…

  5. Design and Research on the Access Control Management System of Student Apartments Based on Web Services

    Institute of Scientific and Technical Information of China (English)

    汤新昌

    2013-01-01

    With the further development of network technology, Web Services technology is gradually being applied in various kinds of management systems. Web services are independent of component model, platform and programming language, which makes them well suited for system integration. This paper presents an access control management system for student apartments based on Web services, explaining its design in terms of system architecture, design patterns and the key Web service technologies. Because the system is built on Web services, its data can be invoked directly by other application systems, supporting the integrated construction of university information systems.

  6. A Beta Version of the GIS-Enabled NASA Surface meteorology and Solar Energy (SSE) Web Site With Expanded Data Accessibility and Analysis Functionality for Renewable Energy and Other Applications

    Science.gov (United States)

    Stackhouse, P. W.; Barnett, A. J.; Tisdale, M.; Tisdale, B.; Chandler, W.; Hoell, J. M., Jr.; Westberg, D. J.; Quam, B.

    2015-12-01

    The NASA LaRC Atmospheric Science Data Center has deployed a beta version of an existing geophysical parameter website employing off-the-shelf Geographic Information System (GIS) tools. The revitalized web portal is entitled "Surface meteorology and Solar Energy" (SSE - https://eosweb.larc.nasa.gov/sse/) and has been supporting an estimated 175,000 users with baseline solar and meteorological parameters, as well as calculated parameters that enable feasibility studies for a wide range of renewable energy systems, particularly those featuring solar energy technologies. The GIS tools generate and store climatological averages using spatial queries and calculations (by parameter for the globe) in a spatial database, resulting in greater accessibility for government agencies, industry and individuals. The data parameters are produced from NASA science projects and reformulated specifically for the renewable energy industry and other applications. This first version includes: 1) a processed and reformulated set of baseline data parameters that are consistent with Esri and open GIS tools, 2) a limited set of Python-based functions to compute additional parameters "on the fly" from the baseline data products, 3) updates to the current web sites to enable web-based display of these parameters for plotting and analysis, and 4) output of data parameters in geoTiff, ASCII and netCDF data formats. The beta version is being actively reviewed through interaction with a group of collaborators from government and industry in order to test web site usability, display tools and features, and output data formats. This presentation provides an overview of this project and the current version of the new SSE-GIS web capabilities through to end usage.
This project supports cross agency and cross organization interoperability and access to NASA SSE data products and OGC compliant web services and aims also to provide mobile platform

  7. VisPort: Web-Based Access to Community-Specific Visualization Functionality [Shedding New Light on Exploding Stars: Visualization for TeraScale Simulation of Neutrino-Driven Supernovae (Final Technical Report)

    Energy Technology Data Exchange (ETDEWEB)

    Baker, M Pauline

    2007-06-30

    The VisPort visualization portal is an experiment in providing Web-based access to visualization functionality from any place and at any time. VisPort adopts a service-oriented architecture to encapsulate visualization functionality and to support remote access. Users employ browser-based client applications to choose data and services, set parameters, and launch visualization jobs. Visualization products, typically images or movies, are viewed in the user's standard Web browser. VisPort emphasizes visualization solutions customized for specific application communities. Finally, VisPort relies heavily on XML, and introduces the notion of visualization informatics - the formalization and specialization of information related to the process and products of visualization.

  8. An Implementation of Semantic Web System for Information retrieval using J2EE Technologies.

    OpenAIRE

    B. Hemanth Kumar; Prof. M. Surendra Prasad Babu

    2011-01-01

    Accessing web resources (information) is an essential facility provided by web applications to everybody. The Semantic Web is one of the systems that provide a facility to access resources through web service applications. The Semantic Web and Web Services are new, emerging web-based technologies. An automatic information processing system can be developed by using the Semantic Web and web services, each having its own contribution within the context of developing web-based information systems and ap...

  9. Ontology Based Access Control

    Directory of Open Access Journals (Sweden)

    Özgü CAN

    2010-02-01

    Full Text Available As computer technologies become pervasive, the need for access control mechanisms grows. The purpose of access control is to limit the operations that a computer system user can perform; access control thus helps prevent activity that could lead to a security breach. For the success of the Semantic Web, which allows machines to share and reuse information by using formal semantics to communicate with other machines, access control mechanisms are needed. An access control mechanism specifies constraints that must be satisfied before a user performs an operation, in order to provide a secure Semantic Web. In this work, unlike traditional access control mechanisms, an "Ontology Based Access Control" mechanism has been developed by using Semantic Web based policies. In this mechanism, ontologies are used to model the access control knowledge, and domain knowledge is used to create policy ontologies.

  10. Swarm Intelligence Based Topic Identification for Sessions in Web Access Log

    Institute of Scientific and Technical Information of China (English)

    方奇; 刘奕群; 张敏; 茹立云; 马少平

    2011-01-01

    A session in a Web access log is a continuous sequence of a particular user's browsing behavior within a certain time range. A topic of a session represents a hidden browsing intent of the user, and identifying topic-based processing units that reflect user intent within a session is an important foundation for analyzing user access behavior. Existing work mainly focuses on boundary detection and cannot handle the common situation in which different topics overlap within one session. To solve this problem, this paper re-defines the concepts of session and topic, proposes the task of largest segmentation, and designs a session topic identification algorithm based on the crowd wisdom of Web users. Experiments on a large scale of real Web access logs validate the effectiveness of the algorithm.
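A common baseline for carving sessions out of an access log, before any topic identification is attempted, is a simple time-gap rule. The sketch below uses a conventional 30-minute threshold; it is a generic baseline, not the paper's algorithm:

```python
def split_sessions(events, gap=1800):
    """Group (timestamp, url) pairs into sessions: a new session starts
    whenever the gap to the previous request exceeds `gap` seconds."""
    sessions = []
    for ts, url in sorted(events):
        if not sessions or ts - sessions[-1][-1][0] > gap:
            sessions.append([])          # open a new session
        sessions[-1].append((ts, url))
    return sessions

events = [(0, "/a"), (60, "/b"), (5000, "/c"), (5060, "/d")]
sessions = split_sessions(events)  # the 4940-second gap splits the stream
```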

  11. Accurate cloud-based smart IMT measurement, its validation and stroke risk stratification in carotid ultrasound: A web-based point-of-care tool for multicenter clinical trial.

    Science.gov (United States)

    Saba, Luca; Banchhor, Sumit K; Suri, Harman S; Londhe, Narendra D; Araki, Tadashi; Ikeda, Nobutaka; Viskovic, Klaudija; Shafique, Shoaib; Laird, John R; Gupta, Ajay; Nicolaides, Andrew; Suri, Jasjit S

    2016-08-01

    Statistical tests were performed to demonstrate the consistency, reliability and accuracy of the results. The proposed AtheroCloud™ system is a completely reliable, automated, fast (3-5 seconds depending upon the image size, with an internet speed of 180 Mbps), accurate, and intelligent web-based clinical tool for multi-center clinical trials and routine telemedicine clinical care.

  12. Survey of Techniques for Deep Web Source Selection and Surfacing the Hidden Web Content

    OpenAIRE

    Khushboo Khurana; M.B. Chandak

    2016-01-01

    Large and continuously growing dynamic web content has created new opportunities for large-scale data analysis in recent years. There is a huge amount of information that traditional web crawlers cannot access, since they use link analysis, by which only the surface web can be reached. Traditional search engine crawlers require web pages to be linked to other pages via hyperlinks, causing a large amount of web data to be hidden from the crawlers. Enormous data is available in...

  14. Association Rule Mining for Web Recommendation

    Directory of Open Access Journals (Sweden)

    R. Suguna

    2012-10-01

    Full Text Available Web usage mining is the application of web mining to discover useful patterns from the web in order to understand and analyze the behavior of web users and web-based applications. It is an emerging research trend for today's researchers. It deals entirely with web log files, which contain users' website access information. It is interesting to analyze and understand user behavior from web access data. Web usage mining normally has three categories: 1. preprocessing, 2. pattern discovery and 3. pattern analysis. This paper proposes association rule mining algorithms for better web recommendation and web personalization. Web recommendation systems play an important role in understanding customers' behavior, interests and future needs, improving customer convenience, and increasing service provider profits.
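The support/confidence machinery behind association-rule mining for page recommendation can be sketched as a tiny Apriori-style pass over visit transactions; the thresholds and data below are illustrative, not from the paper:

```python
from itertools import combinations
from collections import Counter

def mine_rules(transactions, min_support=0.5, min_confidence=0.6):
    """Emit rules A -> B over page pairs whose support (fraction of
    transactions containing both) and confidence clear the thresholds."""
    n = len(transactions)
    pair_counts, item_counts = Counter(), Counter()
    for t in transactions:
        items = set(t)
        item_counts.update(items)
        pair_counts.update(frozenset(p) for p in combinations(sorted(items), 2))
    rules = []
    for pair, c in pair_counts.items():
        if c / n < min_support:
            continue                      # pair too rare
        a, b = tuple(pair)
        for x, y in ((a, b), (b, a)):     # try both rule directions
            conf = c / item_counts[x]
            if conf >= min_confidence:
                rules.append((x, y, conf))
    return rules

transactions = [["/home", "/cart"], ["/home", "/cart", "/faq"], ["/home"]]
rules = mine_rules(transactions)
```

A recommender would then suggest page B whenever the current user's trail contains A for a high-confidence rule A -> B.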

  15. Research on Security Access Control Model of the Web-based Database of Non-ferrous Metal Physical & Chemical Properties

    Institute of Scientific and Technical Information of China (English)

    李尚勇; 谢刚; 俞小花; 周明

    2009-01-01

    According to the access characteristics of the Web-based database of non-ferrous metal physical & chemical properties, this paper analyzes the security issues present in its multi-tier application architecture, such as illegal intrusion, unauthorized access and information replay attacks, and proposes a security access control model that meets the requirements of its software architecture. The proposed model was tested for both access performance and security; the test results show that it provides good security and stable performance.

  16. Bioprocess-Engineering Education with Web Technology

    NARCIS (Netherlands)

    Sessink, O.

    2006-01-01

    Development of learning material that is distributed through and accessible via the World Wide Web. Various options from web technology are exploited to improve the quality and efficiency of learning material.

  17. Access/AML -

    Data.gov (United States)

    Department of Transportation — The AccessAML is a web-based internet single application designed to reduce the vulnerability associated with several accounts assigned to a single user. This is a...

  18. APLIKASI WEB CRAWLER UNTUK WEB CONTENT PADA MOBILE PHONE

    Directory of Open Access Journals (Sweden)

    Sarwosri Sarwosri

    2009-01-01

    Full Text Available Crawling is the process behind a search engine: it traverses the World Wide Web in a structured way and according to certain ethics. An application that runs the crawling process is called a Web Crawler, also known as a web spider or web robot. The growth of mobile search service providers has been followed by the growth of web crawlers that can browse web pages with mobile content. In this work, only web pages of the Mobile Content type are explored, so the crawled pages can be accessed by mobile devices. The Web Crawler's duty is to collect Mobile Content, and a mobile application functions as a search application that uses the results from the Web Crawler. The Web Crawler server consists of a Servlet, a Mobile Content Filter and a datastore. The Servlet is the gateway connection between the client and the server, and the datastore is the storage medium for crawling results. The Mobile Content Filter selects web pages, forwarding only those that are appropriate for mobile devices or that carry mobile content.
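A Mobile Content Filter of the kind described could, for example, test fetched HTML for mobile markers before storing it. The markers below (a WML/XHTML-mobile doctype, a viewport meta tag) are assumptions for illustration, not the paper's actual rules:

```python
# Markers suggesting a page is intended for mobile devices (assumed list).
MOBILE_MARKERS = ("wml", "xhtml mobile", 'name="viewport"')

def is_mobile_content(html):
    """Check the start of the document for any mobile marker."""
    head = html[:2048].lower()
    return any(marker in head for marker in MOBILE_MARKERS)

def filter_mobile(pages):
    """pages: dict url -> html; keep only mobile-suitable pages,
    discarding the rest as the Mobile Content Filter would."""
    return {u: h for u, h in pages.items() if is_mobile_content(h)}
```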

  19. Characteristics of scientific web publications

    DEFF Research Database (Denmark)

    Thorlund Jepsen, Erik; Seiden, Piet; Ingwersen, Peter Emil Rerup

    2004-01-01

    Because of the increasing presence of scientific publications on the Web, combined with the existing difficulties in easily verifying and retrieving these publications, research on techniques and methods for retrieval of scientific Web publications is called for. In this article, we report on the......Vista and AllTheWeb retrieved a higher degree of accessible scientific content than Google. Because of the search engine cutoffs of accessible URLs, the feasibility of using search engine output for Web content analysis is also discussed....

  20. A novel design of hidden web crawler using ontology

    OpenAIRE

    Manvi; Bhatia, Komal Kumar; Dixit, Ashutosh

    2015-01-01

    Deep Web is content hidden behind HTML forms. Since it represents a large portion of the structured, unstructured and dynamic data on the Web, accessing Deep-Web content has been a long challenge for the database community. This paper describes a crawler for accessing Deep-Web using Ontologies. Performance evaluation of the proposed work showed that this new approach has promising results.

  1. Engineering Adaptive Web Applications

    DEFF Research Database (Denmark)

    Dolog, Peter

    2007-01-01

    suit the user profile the most. This paper summarizes the domain engineering framework for such adaptive web applications. The framework provides guidelines to develop adaptive web applications as members of a family. It suggests how to utilize the design artifacts as knowledge which can be used......Information and services on the web are accessible for everyone. Users of the web differ in their background, culture, political and social environment, interests and so on. Ambient intelligence was envisioned as a concept for systems which are able to adapt to user actions and needs....... With the growing amount of information and services, the web applications become natural candidates to adopt the concepts of ambient intelligence. Such applications can deal with divers user intentions and actions based on the user profile and can suggest the combination of information content and services which...

  2. Semantic Web

    Directory of Open Access Journals (Sweden)

    Anna Lamandini

    2011-06-01

    Full Text Available The semantic Web is a technology at the service of knowledge, aimed at accessibility and the sharing of content and at facilitating interoperability between different systems; as such it is one of the nine key technological pillars of TIC (technologies for information and communication) within the third theme, the specific cooperation programme of the seventh framework programme for research and development (7°PQRS, 2007-2013). As a system it seeks to overcome the overload, or excess, of irrelevant information on the Internet, in order to facilitate specific or pertinent searches. It is an extension of the existing Web which aims at cooperation between computers and people (the dream of Sir Tim Berners-Lee), where machines can give more support to people in integrating and elaborating data in order to obtain inferences and a global sharing of data. It is a technology able to favour the development of a "web of data", in other words the creation of a space of interconnected and shared data sets (Linked Data) which allows users to link different types of data coming from different sources. It is a technology that will have a great effect on everyday life, since it will permit the planning of "intelligent applications" in various sectors such as education and training, research, the business world, public information, tourism, health, and e-government. It is an innovative technology that activates a social transformation (the socio-semantic Web) on a world level, since it redefines the cognitive universe of users and enables the sharing not only of information but of significance (collective and connected intelligence).

  3. Chemistry WebBook

    Science.gov (United States)

    SRD 69 NIST Chemistry WebBook (Web, free access)   The NIST Chemistry WebBook contains: Thermochemical data for over 7000 organic and small inorganic compounds; thermochemistry data for over 8000 reactions; IR spectra for over 16,000 compounds; mass spectra for over 33,000 compounds; UV/Vis spectra for over 1600 compounds; electronic and vibrational spectra for over 5000 compounds; constants of diatomic molecules (spectroscopic data) for over 600 compounds; ion energetics data for over 16,000 compounds; thermophysical property data for 74 fluids.

  4. Head First Web Design

    CERN Document Server

    Watrall, Ethan

    2008-01-01

    Want to know how to make your pages look beautiful, communicate your message effectively, guide visitors through your website with ease, and get everything approved by the accessibility and usability police at the same time? Head First Web Design is your ticket to mastering all of these complex topics, and understanding what's really going on in the world of web design. Whether you're building a personal blog or a corporate website, there's a lot more to web design than div's and CSS selectors, but what do you really need to know? With this book, you'll learn the secrets of designing effecti

  5. Web-based access to near real-time and archived high-density time-series data: cyber infrastructure challenges & developments in the open-source Waveform Server

    Science.gov (United States)

    Reyes, J. C.; Vernon, F. L.; Newman, R. L.; Steidl, J. H.

    2010-12-01

    The Waveform Server is an interactive web-based interface to multi-station, multi-sensor and multi-channel high-density time-series data stored in Center for Seismic Studies (CSS) 3.0 schema relational databases (Newman et al., 2009). In the last twelve months, based on expanded specifications and current user feedback, both the server-side infrastructure and client-side interface have been extensively rewritten. The Python Twisted server-side code-base has been fundamentally modified to now present waveform data stored in cluster-based databases using a multi-threaded architecture, in addition to supporting the pre-existing single-database model. This allows interactive web-based access to high-density (broadband @ 40Hz to strong motion @ 200Hz) waveform data that can span multiple years, the common lifetime of broadband seismic networks. The client-side interface expands on its use of simple JSON-based AJAX queries to now incorporate a variety of User Interface (UI) improvements, including standardized calendars for defining time ranges, applying on-the-fly data calibration to display SI-unit data, and increased rendering speed. This presentation will outline the various cyber infrastructure challenges we have faced while developing this application, the use-cases currently in existence, and the limitations of web-based application development.
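
The on-the-fly calibration to SI units mentioned above typically amounts to scaling raw digitizer counts by a per-channel factor. This is a minimal sketch with an assumed (made-up) calibration value, not the Waveform Server's code:

```python
# Sketch (assumed convention): seismic samples are stored as raw digitizer
# counts; multiplying by a per-channel calibration factor yields physical
# ground-motion units on the fly. The factor 0.5 below is illustrative only.
def calibrate(samples, calib):
    """Apply a linear calibration factor to a list of raw counts."""
    return [s * calib for s in samples]

raw = [1200, -850, 30]           # raw counts from one channel
print(calibrate(raw, 0.5))       # -> [600.0, -425.0, 15.0]
```

Real deployments would look the factor up per station/channel/epoch from the CSS 3.0 calibration tables rather than hard-coding it.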

  6. Comprendre le Web caché

    OpenAIRE

    Senellart, Pierre

    2007-01-01

    The hidden Web (also known as deep or invisible Web), that is, the part of the Web not directly accessible through hyperlinks, but through HTML forms or Web services, is of great value, but difficult to exploit. We discuss a process for the fully automatic discovery, syntactic and semantic analysis, and querying of hidden-Web services. We propose first a general architecture that relies on a semi-structured warehouse of imprecise (probabilistic) content. We provide a detailed complexity analy...

  7. Remote sensing education and Internet/World Wide Web technology

    Science.gov (United States)

    Griffith, J.A.; Egbert, S.L.

    2001-01-01

    Remote sensing education is increasingly in demand across academic and professional disciplines. Meanwhile, Internet technology and the World Wide Web (WWW) are being more frequently employed as teaching tools in remote sensing and other disciplines. The current wealth of information on the Internet and World Wide Web must be distilled, nonetheless, to be useful in remote sensing education. An extensive literature base is developing on the WWW as a tool in education and in teaching remote sensing. This literature reveals benefits and limitations of the WWW, and can guide its implementation. Among the most beneficial aspects of the Web are increased access to remote sensing expertise regardless of geographic location, increased access to current material, and access to extensive archives of satellite imagery and aerial photography. As with other teaching innovations, using the WWW/Internet may well mean more work, not less, for teachers, at least at the stage of early adoption. Also, information posted on Web sites is not always accurate. Development stages of this technology range from on-line posting of syllabi and lecture notes to on-line laboratory exercises and animated landscape flyovers and on-line image processing. The advantages of WWW/Internet technology may likely outweigh the costs of implementing it as a teaching tool.

  8. Hidden Page WebCrawler Model for Secure Web Pages

    Directory of Open Access Journals (Sweden)

    K. F. Bharati

    2013-03-01

    Full Text Available The traditional search engines available over the internet are dynamic in searching relevant content over the web. A search engine has some constraints, such as getting the data asked for from varied sources, where data relevancy is exceptional. Web crawlers are designed only to move along a specific path of the web and are restricted from moving towards a different path, as those paths are secured or at times restricted due to the apprehension of threats. It is possible to design a web crawler that has the capability of penetrating through paths of the web not reachable by traditional web crawlers, in order to get a better solution in terms of data, time and relevancy for a given search query. The paper makes use of a newer parser and indexer to come out with a novel idea for a web crawler and a framework to support it. The proposed web crawler is designed to attend Hyper Text Transfer Protocol Secure (HTTPS) based websites and web pages that need authentication to view and index. The user has to fill in a search form, and his/her credentials will be used by the web crawler to attend the secure web server for authentication. Once it is indexed, the secure web server will be inside the web crawler's accessible zone.

  9. Construction of Community Web Directories based on Web usage Data

    CERN Document Server

    Sandhyarani, Ramancha; Gyani, Jayadev; 10.5121/acij.2012.3205

    2012-01-01

    This paper supports the concept of a community Web directory: a Web directory that is constructed according to the needs and interests of particular user communities. Furthermore, it presents a complete method for the construction of such directories using web usage data. User community models take the form of thematic hierarchies and are constructed by employing a clustering approach. We applied our methodology to the ODP directory and also to an artificial Web directory, which was generated by clustering Web pages that appear in the access log of an Internet Service Provider. For the discovery of the community models, we introduced a new criterion that combines the a priori thematic informativeness of the Web directory categories with the level of interest observed in the usage data. In this context, we introduced and evaluated a new clustering method. We have tested the methodology using access log files collected from the proxy servers of an Internet Service Provider and provide results that ind...

  10. The design and implementation of web mining in web sites security

    Institute of Scientific and Technical Information of China (English)

    ZHANG Guo-yin; GU Guo-chang; LI Jian-li

    2003-01-01

    Backdoors or information leaks of Web servers can be detected by using Web mining techniques on abnormal Web log and Web application log data. The security of Web servers can thereby be enhanced and the damage of illegal access avoided. Firstly, a system for discovering patterns of information leakage in CGI scripts from Web log data is proposed. Secondly, those patterns are provided for system administrators to modify their code and enhance their Web site security. The following aspects are described: one is to combine the web application log with the web log to extract more information, so that web data mining can be used to mine the web log and discover information that a firewall or intrusion detection system cannot find. Another is to propose an operation module for a web site to enhance its security. For clustering server sessions, a density-based clustering technique is used to reduce resource cost and obtain better efficiency.
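
A minimal sketch of the density-based clustering idea applied to server sessions follows; the two session features, the `eps` radius, and `min_pts` are illustrative assumptions, and this compact DBSCAN is not the paper's implementation:

```python
# Minimal DBSCAN sketch: cluster user sessions described by two assumed
# features, e.g. (requests per minute, error rate). Dense groups of similar
# sessions form clusters; isolated sessions are flagged as noise (-1).
def dbscan(points, eps, min_pts):
    """Return one cluster label per point; -1 marks noise."""
    def neighbours(i):
        return [j for j, q in enumerate(points)
                if sum((a - b) ** 2 for a, b in zip(points[i], q)) <= eps ** 2]

    labels = [None] * len(points)
    cluster = 0
    for i in range(len(points)):
        if labels[i] is not None:
            continue
        seeds = neighbours(i)
        if len(seeds) < min_pts:
            labels[i] = -1              # noise (may become a border point later)
            continue
        labels[i] = cluster
        queue = [j for j in seeds if j != i]
        while queue:
            j = queue.pop()
            if labels[j] == -1:
                labels[j] = cluster     # former noise point joins as border
            if labels[j] is not None:
                continue
            labels[j] = cluster
            more = neighbours(j)
            if len(more) >= min_pts:    # j is a core point: keep expanding
                queue.extend(more)
        cluster += 1
    return labels

sessions = [(1, 1), (1.1, 0.9), (0.9, 1.2), (8, 8)]
print(dbscan(sessions, eps=0.5, min_pts=2))   # -> [0, 0, 0, -1]
```

The outlying session `(8, 8)` is labelled noise, which is exactly the kind of anomalous access pattern a log-mining security module would flag for review.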

  11. Persistent Web References – Best Practices and New Suggestions

    DEFF Research Database (Denmark)

    Zierau, Eld; Nyvang, Caroline; Kromann, Thomas Hvid

    In this paper, we suggest adjustments to best practices for persistent web referencing; adjustments that aim at preservation and long time accessibility of web referenced resources in general, but with focus on web references in web archives. Web referencing is highly relevant and crucial...

  12. Design and Implementation of Domain based Semantic Hidden Web Crawler

    OpenAIRE

    Manvi; Bhatia, Komal Kumar; Dixit, Ashutosh

    2015-01-01

    Web is a wide term which mainly consists of surface web and hidden web. One can easily access the surface web using traditional web crawlers, but they are not able to crawl the hidden portion of the web. These traditional crawlers retrieve contents from web pages, which are linked by hyperlinks ignoring the information hidden behind form pages, which cannot be extracted using simple hyperlink structure. Thus, they ignore large amount of data hidden behind search forms. This paper emphasizes o...

  13. Professional Access 2013 programming

    CERN Document Server

    Hennig, Teresa; Hepworth, George; Yudovich, Dagi (Doug)

    2013-01-01

    Authoritative and comprehensive coverage for building Access 2013 Solutions Access, the most popular database system in the world, just opened a new frontier in the Cloud. Access 2013 provides significant new features for building robust line-of-business solutions for web, client and integrated environments.  This book was written by a team of Microsoft Access MVPs, with consulting and editing by Access experts, MVPs and members of the Microsoft Access team. It gives you the information and examples to expand your areas of expertise and immediately start to develop and upgrade projects. Exp

  14. Oracle Application Express 5 for beginners a practical guide to rapidly develop data-centric web applications accessible from desktop, laptops, tablets, and smartphones

    CERN Document Server

    2015-01-01

    Oracle Application Express has taken another big leap towards becoming a true next-generation RAD tool. It has entered into its fifth version to build robust web applications. One of the most significant features in this release is a new page designer that helps developers create and edit page elements within a single page design view, which enormously maximizes developer productivity. Without involving the audience too much in the boring bits, this full-colored edition adopts an inspiring approach that helps beginners practically evaluate almost every feature of Oracle Application Express, including all features new to version 5. The most convincing way to explore a technology is to apply it to a real world problem. In this book, you’ll develop a sales application that demonstrates almost every feature to practically expose the anatomy of Oracle Application Express 5. The short list below presents some main topics of Oracle APEX covered in this book: Rapid web application development for desktops, la...

  15. Adaptive web data extraction policies

    Directory of Open Access Journals (Sweden)

    Provetti, Alessandro

    2008-12-01

    Full Text Available Web data extraction is concerned, among other things, with routine data accessing and downloading from continuously updated dynamic Web pages. There is a relevant trade-off between the rate at which the external Web sites are accessed and the computational burden on the accessing client. We address the problem by proposing a predictive model, typical of the Operating Systems literature, of the rate of update of each Web source. The presented model has been implemented in a new version of the Dynamo project: a middleware that assists in generating informative RSS feeds out of traditional HTML Web sites. To be effective (i.e., to make RSS feeds timely and informative) and to be scalable, Dynamo needs careful tuning and customization of its polling policies, which are described in detail.
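
One simple instance of such a predictive polling policy, in the spirit of Operating Systems-style adaptive schemes, is multiplicative increase/decrease of the polling interval; the bounds and factors below are assumptions, not Dynamo's actual tuning:

```python
# Sketch of an adaptive polling policy: poll a source more often right after
# it changes, back off while it stays quiet. Bounds and factors are made up.
MIN_INTERVAL, MAX_INTERVAL = 60, 86400   # seconds (1 minute .. 1 day)

def next_interval(interval, page_changed):
    """Return the next polling interval for one Web source."""
    if page_changed:
        interval /= 2          # source is hot: tighten the polling rate
    else:
        interval *= 1.5        # source is cold: relax and save bandwidth
    return max(MIN_INTERVAL, min(MAX_INTERVAL, interval))

interval = 600.0
for changed in [False, False, True]:
    interval = next_interval(interval, changed)
print(interval)   # -> 675.0
```

Clamping to `[MIN_INTERVAL, MAX_INTERVAL]` keeps a burst of changes from hammering the remote site and a long-dormant page from being forgotten entirely.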

  16. Value of Information Web Application

    Science.gov (United States)

    2015-04-01

    2.1 Demographics The initial page that the user encounters when accessing the VoI web application is Demographics (Fig. 1). On this page, the user...deck is empty, the web application sets up the next deck for the user or sends the user to the Results page if all decks have been played. The user...confirmation that the submittal was successful or not successful. This ends the user's interaction with the web application. Fig. 4 Results page

  17. Analysis of Web Logs And Web User In Web Mining

    Directory of Open Access Journals (Sweden)

    L.K. Joshila Grace

    2011-01-01

    Full Text Available Log files contain information about User Name, IP Address, Time Stamp, Access Request, number of Bytes Transferred, Result Status, URL that Referred and User Agent. The log files are maintained by web servers. Analysing these log files gives a clear picture of the user. This paper gives a detailed discussion of these log files, their formats, their creation, access procedures, their uses, the various algorithms used, and the additional parameters that can be included in log files, which in turn makes way for effective mining. It also presents the idea of creating an extended log file and learning user behaviour.
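
The fields listed above correspond closely to the Apache "combined" log format; a minimal sketch of parsing one such line (the sample line itself is invented):

```python
import re

# Sketch: parse one line of the Apache "combined" log format, which carries
# the fields the abstract lists: IP address, user name, timestamp, access
# request, result status, bytes transferred, referrer and user agent.
LOG_RE = re.compile(
    r'(?P<ip>\S+) \S+ (?P<user>\S+) \[(?P<time>[^\]]+)\] '
    r'"(?P<request>[^"]*)" (?P<status>\d{3}) (?P<bytes>\d+|-) '
    r'"(?P<referrer>[^"]*)" "(?P<agent>[^"]*)"')

line = ('203.0.113.7 - alice [10/Oct/2023:13:55:36 +0000] '
        '"GET /index.html HTTP/1.1" 200 2326 '
        '"http://example.com/start" "Mozilla/5.0"')

entry = LOG_RE.match(line).groupdict()
print(entry["ip"], entry["status"], entry["agent"])  # -> 203.0.113.7 200 Mozilla/5.0
```

Once each line is a dictionary like this, the mining steps the paper discusses (sessionization, frequent-path discovery, behaviour modelling) can operate on structured records instead of raw text.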

  18. Analysis of Web Logs and Web User in Web Mining

    CERN Document Server

    Grace, L K Joshila; Nagamalai, Dhinaharan

    2011-01-01

    Log files contain information about User Name, IP Address, Time Stamp, Access Request, number of Bytes Transferred, Result Status, URL that Referred and User Agent. The log files are maintained by web servers. Analysing these log files gives a clear picture of the user. This paper gives a detailed discussion of these log files, their formats, their creation, access procedures, their uses, the various algorithms used, and the additional parameters that can be included in log files, which in turn makes way for effective mining. It also presents the idea of creating an extended log file and learning user behaviour.

  19. Web Page Recommendation Models Theory and Algorithms

    CERN Document Server

    Gündüz-Ögüdücü, Sule

    2010-01-01

    One of the application areas of data mining is the World Wide Web (WWW or Web), which serves as a huge, widely distributed, global information service for every kind of information such as news, advertisements, consumer information, financial management, education, government, e-commerce, health services, and many other information services. The Web also contains a rich and dynamic collection of hyperlink information, Web page access and usage information, providing sources for data mining. The amount of information on the Web is growing rapidly, as well as the number of Web sites and Web page

  20. Web Service: MedlinePlus

    Science.gov (United States)

    ... an alternate method of accessing MedlinePlus data. Base URL https://wsearch.nlm.nih.gov/ws/query Please ... the Web service. All special characters must be URL encoded. Spaces may be replaced by '+' signs, which ...
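
A sketch of building a query URL against the base URL above; the `db` and `term` parameter names follow the MedlinePlus web-service conventions, but treat them as assumptions and check the service documentation before relying on them:

```python
from urllib.parse import urlencode

# Sketch: build a MedlinePlus web-service query URL. urlencode handles the
# special-character escaping the record mentions (spaces become '+').
# The "db"/"term" parameter names are assumed from the service docs.
BASE = "https://wsearch.nlm.nih.gov/ws/query"

def build_query(term, db="healthTopics"):
    return BASE + "?" + urlencode({"db": db, "term": term})

print(build_query("diabetes type 2"))
# -> https://wsearch.nlm.nih.gov/ws/query?db=healthTopics&term=diabetes+type+2
```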

  1. Digging Deeper: The Deep Web.

    Science.gov (United States)

    Turner, Laura

    2001-01-01

    Focuses on the Deep Web, defined as Web content in searchable databases of the type that can be found only by direct query. Discusses the problems of indexing; inability to find information not indexed in the search engine's database; and metasearch engines. Describes 10 sites created to access online databases or directly search them. Lists ways…

  2. The World Wide Web Revisited

    Science.gov (United States)

    Owston, Ron

    2007-01-01

    Nearly a decade ago the author wrote, in Educational Researcher, one of the first widely cited academic articles about the educational role of the web. He argued that educators must be able to demonstrate that the web (1) can increase access to learning, (2) must not result in higher costs for learning, and (3) can lead to improved learning. These…

  4. A large web-based observer reliability study of early ischaemic signs on computed tomography. The Acute Cerebral CT Evaluation of Stroke Study (ACCESS).

    Directory of Open Access Journals (Sweden)

    Joanna M Wardlaw

    Full Text Available BACKGROUND: Early signs of ischaemic stroke on computerised tomography (CT) scanning are subtle, but CT is the most widely available diagnostic test for stroke. Scoring methods that code for the extent of brain ischaemia may improve stroke diagnosis and quantification of the impact of ischaemia. METHODOLOGY AND PRINCIPAL FINDINGS: We showed CT scans from patients with acute ischaemic stroke (n = 32, with different patient characteristics and ischaemia signs) to doctors in stroke-related specialties world-wide over the web. CT scans were shown twice, randomly and blindly. Observers entered their scan readings, including early ischaemic signs by three scoring methods, into the web database. We compared observers' scorings to a reference standard neuroradiologist using area under the receiver operator characteristic curve (AUC) analysis, Cronbach's alpha and logistic regression to determine the effect of scales, patient, scan and observer variables on detection of early ischaemic changes. Amongst 258 readers representing 33 nationalities and six specialties, the AUCs comparing readers with the reference standard detection of ischaemic signs were similar for all scales and both occasions. Being a neuroradiologist, slower scan reading, more pronounced ischaemic signs and later time to CT all improved detection of early ischaemic signs and agreement on the rating scales. Scan quality, stroke severity and number of years of training did not affect agreement. CONCLUSIONS: Large-scale observer reliability studies are possible using web-based tools and inform routine practice. Slower scan reading and use of CT infarct rating scales improve detection of acute ischaemic signs and should be encouraged to improve stroke diagnosis.

  5. A Large Web-Based Observer Reliability Study of Early Ischaemic Signs on Computed Tomography. The Acute Cerebral CT Evaluation of Stroke Study (ACCESS)

    Science.gov (United States)

    Wardlaw, Joanna M.; von Kummer, Rüdiger; Farrall, Andrew J.; Chappell, Francesca M.; Hill, Michael; Perry, David

    2010-01-01

    Background Early signs of ischaemic stroke on computerised tomography (CT) scanning are subtle but CT is the most widely available diagnostic test for stroke. Scoring methods that code for the extent of brain ischaemia may improve stroke diagnosis and quantification of the impact of ischaemia. Methodology and Principal Findings We showed CT scans from patients with acute ischaemic stroke (n = 32, with different patient characteristics and ischaemia signs) to doctors in stroke-related specialties world-wide over the web. CT scans were shown twice, randomly and blindly. Observers entered their scan readings, including early ischaemic signs by three scoring methods, into the web database. We compared observers' scorings to a reference standard neuroradiologist using area under receiver operator characteristic curve (AUC) analysis, Cronbach's alpha and logistic regression to determine the effect of scales, patient, scan and observer variables on detection of early ischaemic changes. Amongst 258 readers representing 33 nationalities and six specialties, the AUCs comparing readers with the reference standard detection of ischaemic signs were similar for all scales and both occasions. Being a neuroradiologist, slower scan reading, more pronounced ischaemic signs and later time to CT all improved detection of early ischaemic signs and agreement on the rating scales. Scan quality, stroke severity and number of years of training did not affect agreement. Conclusions Large-scale observer reliability studies are possible using web-based tools and inform routine practice. Slower scan reading and use of CT infarct rating scales improve detection of acute ischaemic signs and should be encouraged to improve stroke diagnosis. PMID:21209901

  6. Analysis and Comparison of Web Database Interactive Dynamic Accessing Technologies

    Institute of Scientific and Technical Information of China (English)

    高福友

    2005-01-01

    Early Web pages were mainly used to deliver static HTML documents. With the application of Web databases and the development of Web database access technologies, interactive dynamic Web pages appeared. This paper introduces the CGI, Web API, ASP, PHP and ColdFusion technologies for implementing interactive dynamic Web pages, analyses and compares these five technologies in some depth, and thereby provides a reference for the application of Web database access technologies.

  7. Research on an Integrated Filter Platform, WebFilter, for Identity Authentication and Access Control

    Institute of Scientific and Technical Information of China (English)

    戚玉松; 是湘全

    2005-01-01

    Based on an analysis of the current state and shortcomings of e-government network security construction, and in view of e-government requirements for application security, an integrated filter platform for identity authentication and access control, WebFilter, is designed. The design pattern, functions and composition of the platform are described, and the key technologies and implementation methods of the platform design are detailed. A secure data access channel is built between the client and the application server, providing an effective solution for the application security of other Internet-based systems.

  8. Access to Psychotherapy in the Era of Web 2.0 – New Media, Old Inequalities? / Zugang zur Psychotherapie in der Ära des Web 2.0 – Neue Medien, Alte Ungleichheiten?

    Directory of Open Access Journals (Sweden)

    Apolinario-Hagen Jennifer Anette

    2015-12-01

    Full Text Available Background: In view of the debate about regional and socio-structural gaps in psychotherapeutic care, interest is currently growing in e-mental-health interventions such as internet-based psychotherapy, online self-help and new approaches to self-empowerment. Health professionals could benefit with regard to informed decision-making if they are familiar with the latest developments. However, if this "digital revolution" cannot reach those patients who are insufficiently familiar with Web 2.0, access to psychotherapy will hardly improve. This review therefore aims to clarify whether, and to what extent, internet therapies can be recommended as an effective alternative to conventional psychotherapy in primary care.

  9. Accurate Information, Virtual Reality, Good Librarianship Doğru Bilgi, Sanal Gerçeklik, İyi Kütüphanecilik

    Directory of Open Access Journals (Sweden)

    M. Tayfun Gülle

    2010-03-01

    Full Text Available Departing from the idea that the internet, which has become a deep information tunnel, is causing a problem in access to "accurate information", it is argued that societies are imprisoned within the world of "virtual reality" through web 2.0/web 3.0 technologies and social media applications. In order to diagnose this problem correctly, the media used from past to present for accessing information are briefly explained as "social tools." Furthermore, it is emphasised, from an editorial viewpoint, that the means of reaching accurate information can be increased via the freedom-of-expression channel which will be brought forth by "good librarianship" practices. The IFLA Principles of Freedom of Expression and Good Librarianship are referred to at the end of the editorial.

  10. Deep web content monitoring

    NARCIS (Netherlands)

    Khelghati, Mohammadreza

    2016-01-01

    In this thesis, we investigate the path towards a focused web harvesting approach which can automatically and efficiently query websites, navigate through results, download data, store it, and track data changes over time. Such an approach can also facilitate users' access to a complete collection of

  11. RS-WebPredictor

    DEFF Research Database (Denmark)

    Zaretzki, J.; Bergeron, C.; Huang, T.-W.;

    2013-01-01

    ... RS-WebPredictor is the first freely accessible server that predicts the regioselectivity of the last six isozymes. Server execution time is fast, taking on average 2 s to encode a submitted molecule and 1 s to apply a given model, allowing for high-throughput use in lead optimization projects...

  12. Medical Web Interface for Wireless Sensor Networks

    OpenAIRE

    Andrei Maciuca; Dan Popescu

    2013-01-01

    The current paper proposes a smart web interface designed for monitoring the status of elderly people. There are four main user types in the web application: the administrator (who has full access to all of the application's functionalities), the patient (who has access to his own personal data, such as parameter history and personal details), relatives of the patient (who have administrable access to the person in care, access that is defined by the patient) and the medic (who can view ...

  13. 28 CFR 16.41 - Requests for access to records.

    Science.gov (United States)

    2010-07-01

    ... the Government Printing Office's World Wide Web site (which can be found at http://www.access.gpo.gov... accessed electronically at the Government Printing Office's World Wide Web site (which can be found at...

  14. From intermediation to disintermediation and apomediation: new models for consumers to access and assess the credibility of health information in the age of Web2.0.

    Science.gov (United States)

    Eysenbach, Gunther

    2007-01-01

    This theoretical paper discusses the model that, as a result of the social process of disintermediation enabled by digital media, traditional intermediaries are replaced by what this author calls apomediaries: tools and peers standing by to guide consumers to trustworthy information, or adding credibility to information. For apomediation to be an attractive and successful model for consumers, the recipient has to reach a certain degree of maturity and autonomy. Different degrees of autonomy may explain differences in information seeking and credibility appraisal behaviours. It is hypothesized that in an apomediated environment, tools, influential peers and opinion leaders are the primary conveyors of trust and credibility. In this environment, apomediary credibility may become equally or more important than source credibility or even message credibility. It is suggested to use tools of network analysis to study the dynamics of apomediary credibility in a networked digital world. There are practical implications of the apomediation model for developers of consumer health websites which aspire to come across as "credible": consumers need and want to be able to be co-creators of content, not merely an audience that is broadcast to. Web 2.0 technology enables such sites. Engaging and credible Web sites are about building community, and communities are built upon personal and social needs.

  15. Web application to access U.S. Army Corps of Engineers Civil Works and Restoration Projects information for the Rio Grande Basin, southern Colorado, New Mexico, and Texas

    Science.gov (United States)

    Archuleta, Christy-Ann M.; Eames, Deanna R.

    2009-01-01

    The Rio Grande Civil Works and Restoration Projects Web Application, developed by the U.S. Geological Survey in cooperation with the U.S. Army Corps of Engineers (USACE) Albuquerque District, is designed to provide publicly available information through the Internet about civil works and restoration projects in the Rio Grande Basin. Since 1942, USACE Albuquerque District responsibilities have included building facilities for the U.S. Army and U.S. Air Force, providing flood protection, supplying water for power and public recreation, participating in fire remediation, protecting and restoring wetlands and other natural resources, and supporting other government agencies with engineering, contracting, and project management services. In the process of conducting this vast array of engineering work, the need arose for easily tracking the locations of and providing information about projects to stakeholders and the public. This fact sheet introduces a Web application developed to enable users to visualize locations and search for information about USACE (and some other Federal, State, and local) projects in the Rio Grande Basin in southern Colorado, New Mexico, and Texas.

  16. Oracle application express 5.1 basics and beyond a practical guide to rapidly develop data-centric web applications accessible from desktop, laptops, tablets, and smartphones

    CERN Document Server

    2017-01-01

    You will find stuff about workspace, application, page, and so on in every APEX book. But this book is unique because the information it contains is not available anywhere else! Unlike other books, it adopts a stimulating approach to reveal almost every feature necessary for the beginners of Oracle APEX and also takes them beyond the basics. As a technology enthusiast I write on a variety of new technologies, but writing books on Oracle Application Express is my passion. The blood pumping comments I get from my readers on Amazon (and in my inbox) are the main forces that motivate me to write a book whenever a new version of Oracle APEX is launched. This is my fifth book on Oracle APEX (and the best so far) written after discovering the latest 5.1 version. As usual, I’m sharing my personal learning experience through this book to expose this unique rapid web application development platform. In Oracle Application Express you can build robust web applications. The new version is launched with some more prol...

  17. CRISPy-web

    DEFF Research Database (Denmark)

    Blin, Kai; Pedersen, Lasse Ebdrup; Weber, Tilmann

    2016-01-01

    CRISPR/Cas9-based genome editing has been one of the major achievements of molecular biology, allowing the targeted engineering of a wide range of genomes. The system originally evolved in prokaryotes as an adaptive immune system against bacteriophage infections. It now sees widespread application...... designing sgRNAs for non-model organisms exist. Here, we present CRISPy-web (http://crispy.secondarymetabolites.org/), an easy to use web tool based on CRISPy to design sgRNAs for any user-provided microbial genome. CRISPy-web allows researchers to interactively select a region of their genome of interest...... to scan for possible sgRNAs. After checks for potential off-target matches, the resulting sgRNA sequences are displayed graphically and can be exported to text files. All steps and information are accessible from a web browser without the requirement to install and use command line scripts....
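The core scan a tool like CRISPy-web performs can be sketched in a few lines: find every 20-nt protospacer that sits immediately upstream of an SpCas9 NGG PAM. This is a minimal illustration of the idea only; the function name and parameters are invented and are not CRISPy-web's actual API.

```python
# Hypothetical sketch of Cas9 target-site scanning: a 20-nt spacer
# followed by an NGG PAM (any base, then GG). Not CRISPy-web's real code.

def find_sgrna_sites(genome: str, spacer_len: int = 20):
    """Return (position, spacer) pairs where the spacer is followed by an NGG PAM."""
    genome = genome.upper()
    hits = []
    for i in range(len(genome) - spacer_len - 2):
        pam = genome[i + spacer_len: i + spacer_len + 3]
        if pam[1:] == "GG":  # NGG: first base is unconstrained
            hits.append((i, genome[i:i + spacer_len]))
    return hits

sites = find_sgrna_sites("ACGT" * 5 + "TGG" + "AAAA")
print(sites)  # one site at position 0
```

A real tool would also scan the reverse strand and score each candidate for off-target matches, which is the step CRISPy-web reports before displaying results.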

  18. Prey interception drives web invasion and spider size determines successful web takeover in nocturnal orb-web spiders

    Directory of Open Access Journals (Sweden)

    Wenjin Gan

    2015-10-01

    Full Text Available A striking feature of web-building spiders is the use of silk to make webs, mainly for prey capture. However, building a web is energetically expensive and increases the risk of predation. To reduce such costs and still have access to abundant prey, some web-building spiders have evolved web invasion behaviour. In general, no consistent patterns of web invasion have emerged and the factors determining web invasion remain largely unexplored. Here we report web invasion among conspecifics in seven nocturnal species of orb-web spiders, and examined the factors determining the probability of webs that could be invaded and taken over by conspecifics. About 36% of webs were invaded by conspecifics, and 25% of invaded webs were taken over by the invaders. A web that was built higher and intercepted more prey was more likely to be invaded. Once a web was invaded, the smaller the size of the resident spider, the more likely its web would be taken over by the invader. This study suggests that web invasion, as a possible way of reducing costs, may be widespread in nocturnal orb-web spiders.

  19. Prey interception drives web invasion and spider size determines successful web takeover in nocturnal orb-web spiders.

    Science.gov (United States)

    Gan, Wenjin; Liu, Shengjie; Yang, Xiaodong; Li, Daiqin; Lei, Chaoliang

    2015-09-24

    A striking feature of web-building spiders is the use of silk to make webs, mainly for prey capture. However, building a web is energetically expensive and increases the risk of predation. To reduce such costs and still have access to abundant prey, some web-building spiders have evolved web invasion behaviour. In general, no consistent patterns of web invasion have emerged and the factors determining web invasion remain largely unexplored. Here we report web invasion among conspecifics in seven nocturnal species of orb-web spiders, and examined the factors determining the probability of webs that could be invaded and taken over by conspecifics. About 36% of webs were invaded by conspecifics, and 25% of invaded webs were taken over by the invaders. A web that was built higher and intercepted more prey was more likely to be invaded. Once a web was invaded, the smaller the size of the resident spider, the more likely its web would be taken over by the invader. This study suggests that web invasion, as a possible way of reducing costs, may be widespread in nocturnal orb-web spiders.

  20. Advanced Call Center Supporting WAP Access

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    Traditional call centers can be accessed via speech only, while web-based call centers provide both data and speech access but require a powerful terminal such as a computer. By analyzing traditional call centers and web-based call centers, this paper presents the framework of an advanced call center supporting WAP access. A typical service is also described in detail.

  1. Designing, Implementing, and Evaluating Secure Web Browsers

    Science.gov (United States)

    Grier, Christopher L.

    2009-01-01

    Web browsers are plagued with vulnerabilities, providing hackers with easy access to computer systems using browser-based attacks. Efforts that retrofit existing browsers have had limited success since modern browsers are not designed to withstand attack. To enable more secure web browsing, we design and implement new web browsers from the ground…

  2. Web-Mediated Knowledge Synthesis for Educators

    Science.gov (United States)

    DeSchryver, Michael

    2015-01-01

    Ubiquitous and instant access to information on the Web is challenging what constitutes 21st century literacies. This article explores the notion of Web-mediated knowledge synthesis, an approach to integrating Web-based learning that may result in generative synthesis of ideas. This article describes the skills and strategies that may support…

  3. Pro Access 2010 Development

    CERN Document Server

    Collins, Mark

    2011-01-01

    Pro Access 2010 Development is a fundamental resource for developing business applications that take advantage of the features of Access 2010 and the many sources of data available to your business. In this book, you'll learn how to build database applications, create Web-based databases, develop macros and Visual Basic for Applications (VBA) tools for Access applications, integrate Access with SharePoint and other business systems, and much more. Using a practical, hands-on approach, this book will take you through all the facets of developing Access-based solutions, such as data modeling, co

  4. Access 2013 for dummies

    CERN Document Server

    Ulrich Fuller, Laurie

    2013-01-01

    The easy guide to Microsoft Access returns with updates on the latest version! Microsoft Access allows you to store, organize, view, analyze, and share data; the new Access 2013 release enables you to build even more powerful, custom database solutions that integrate with the web and enterprise data sources. Access 2013 For Dummies covers all the new features of the latest version of Access and serves as an ideal reference, combining the latest Access features with the basics of building usable databases. You'll learn how to create an app from the Welcome screen, get support

  5. A Research on UCON Enhanced Dynamic Access Control Model for the Business Process of Composite Web Services

    Institute of Scientific and Technical Information of China (English)

    上超望; 刘清堂; 赵呈领; 王艳凤; 杨琳

    2011-01-01

    Business process access control is a difficult problem in Web services composition. Existing models ignore the dynamic interactivity and coordination among business process activities and therefore cannot meet the demands of dynamic business process access control. This paper proposes WS-BPUCON, a usage-control (UCON) enhanced dynamic access control model for the business processes of composite Web services. By separating roles from permissions, the model decouples the organization model from the business process model, and it provides sufficient flexibility to enforce dynamic, fine-grained access control: decisions are made from attribute information in the distributed, open network environment according to three kinds of decision factors, namely authorizations, obligations and conditions. The model supports context awareness and fine-grained access management. The paper also describes an implementation architecture for WS-BPUCON.
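A UCON-style decision of the kind this record describes combines three predicate families: authorization, obligation, and condition. The sketch below is illustrative only; all class, field and function names are invented, not taken from WS-BPUCON.

```python
# Minimal UCON-style access decision: permit only if authorization,
# obligation and condition predicates all hold. Names are hypothetical.

from dataclasses import dataclass, field

@dataclass
class Request:
    subject: dict
    resource: dict
    environment: dict = field(default_factory=dict)
    obligations_met: bool = True  # e.g. the user accepted a usage agreement

def authorized(req):
    # attribute check: subject's role must be allowed on the resource
    return req.subject.get("role") in req.resource.get("allowed_roles", [])

def condition_ok(req):
    # environmental constraint, e.g. only during business hours
    return req.environment.get("within_business_hours", True)

def decide(req: Request) -> bool:
    return authorized(req) and req.obligations_met and condition_ok(req)

req = Request(subject={"role": "process-manager"},
              resource={"allowed_roles": ["process-manager"]},
              environment={"within_business_hours": True})
print(decide(req))  # True
```

The UCON novelty relative to classic RBAC is that obligations and conditions can be re-evaluated while access is in progress, not just at request time.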

  6. Partnerships---A Way of Making Astrophysics Research Accessible to the K--12 Community through the Internet and World Wide Web

    Science.gov (United States)

    Hawkins, I.; Battle, R.; Miller-Bagwell, A.

    1996-05-01

    We describe a partnership approach in use at UC Berkeley's Center for EUV Astrophysics (CEA) that facilitates the adaptation of astrophysics data and information---in particular from NASA's EUVE satellite---for use in the K--12 classroom. Our model is founded on a broad collaboration of personnel from research institutions, centers of informal science teaching, schools of education, and K--12 schools. Several CEA-led projects follow this model of collaboration and have yielded multimedia, Internet-based, lesson plans for grades 6 through 12 that are created and distributed on the World Wide Web (http://www.cea.berkeley.edu/Education). Use of technology in the classroom can foster an environment that more closely reflects the processes scientists use in doing research (Linn, diSessa, Pea, & Songer 1994, J.Sci.Ed.Tech., ``Can Research on Science Learning and Instruction Inform Standards for Science Education?"). For instance, scientists rely on technological tools to model, analyze, and ultimately store data. Linn et al. suggest introducing technological tools to students from the earliest years to facilitate scientific modeling, scientific collaborations, and electronic communications in the classroom. Our investigation aims to construct and evaluate a methodology for effective participation of scientists in K--12 education, thus facilitating fruitful interactions with teachers and other educators and increasing effective use of technology in the classroom. We describe several team-based strategies emerging from these project collaborations. These strategies are particular to the use of the Internet and World Wide Web as relatively new media for authoring K--12 curriculum materials. This research has been funded by NASA contract NAS5-29298, NASA grant ED-90033.01-94A to SSL/UCB, and NASA grants NAG5-2875 and NAGW-4174 to CEA/UCB.

  7. PanCGHweb: a web tool for genotype calling in pangenome CGH data

    OpenAIRE

    Bayjanov, Jumamurat R.; Siezen, Roland J.; van Hijum, Sacha A. F. T.

    2010-01-01

    Summary: A pangenome is the total set of genes present in strains of the same species. Pangenome microarrays allow the genomic content of bacterial strains to be determined more accurately than conventional comparative genome hybridization microarrays. PanCGHweb is the first tool that effectively calls genotypes based on pangenome microarray data. Availability: PanCGHweb, the web tool, is accessible from: http://bamics2.cmbi.ru.nl/websoftware/pancgh/ Contact:

  8. Getting To Know the "Invisible Web."

    Science.gov (United States)

    Smith, C. Brian

    2001-01-01

    Discusses the portions of the World Wide Web that cannot be accessed via directories or search engines, explains why they can't be accessed, and offers suggestions for reference librarians to find these sites. Lists helpful resources and gives examples of invisible Web sites which are often databases. (LRW)

  9. Sprint methods for web archive research

    NARCIS (Netherlands)

    Huurdeman, H.C.; Ben David, A.; Samar, T.

    2013-01-01

    Web archives provide access to snapshots of the Web of the past, and could be valuable for research purposes. However, access to these archives is often limited, both in terms of data availability, and interfaces to this data. This paper explores new methods to overcome these limitations. It present

  10. Importancia y situación actual de la accesibilidad web para el turismo accesible

    Directory of Open Access Journals (Sweden)

    Gabriel Fontanet Nadal

    2011-04-01

    Full Text Available Accessible tourism is a kind of tourism specifically dedicated to people with disabilities. It concerns the removal of the physical barriers that hinder the mobility of disabled people at a destination, and it must address both physical accessibility and web accessibility. The web accessibility of a website is defined as the capability of that website to be accessed by people with any kind of disability. Several organizations publish guidelines to improve web accessibility. This paper presents an analysis of the web accessibility of tourist websites.
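One concrete check that web accessibility guidelines such as WCAG call for is that every image carries a text alternative. The snippet below is a small sketch of that single check using only the standard-library HTML parser; it is not a full accessibility audit, and the sample markup is invented.

```python
# Illustrative WCAG-style check: count <img> tags missing an alt attribute.
# A sketch only; real audits cover many more success criteria.

from html.parser import HTMLParser

class AltTextChecker(HTMLParser):
    def __init__(self):
        super().__init__()
        self.missing_alt = 0

    def handle_starttag(self, tag, attrs):
        # attrs is a list of (name, value) pairs for the opened tag
        if tag == "img" and "alt" not in dict(attrs):
            self.missing_alt += 1

checker = AltTextChecker()
checker.feed('<img src="a.png"><img src="b.png" alt="hotel entrance ramp">')
print(checker.missing_alt)  # 1
```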

  11. Introduction to Webometrics Quantitative Web Research for the Social Sciences

    CERN Document Server

    Thelwall, Michael

    2009-01-01

    Webometrics is concerned with measuring aspects of the web: web sites, web pages, parts of web pages, words in web pages, hyperlinks, web search engine results. The importance of the web itself as a communication medium and for hosting an increasingly wide array of documents, from journal articles to holiday brochures, needs no introduction. Given this huge and easily accessible source of information, there are limitless possibilities for measuring or counting on a huge scale (e.g., the number of web sites, the number of web pages, the number of blogs) or on a smaller scale (e.g., the number o

  12. Processing keyword queries under access limitations

    OpenAIRE

    Calì, Andrea; Martinenghi, D.; Torlone, R.

    2015-01-01

    The Deep Web is constituted by data accessible through Web pages, but not readily indexable by search engines, as they are returned in dynamic pages. In this paper we propose a framework for accessing Deep Web sources, represented as relational tables with so-called access limitations, with keyword-based queries. We formalize the notion of optimal answer and propose methods for query processing. To the best of our knowledge, ours is the first systematic approach to keyword search in such cont...

  13. Mobile authentication and access: any time, any place, any device?

    Directory of Open Access Journals (Sweden)

    Mark Williams

    2012-11-01

    Full Text Available The move from IP-based authentication to that of federated access has seen the sector support single sign-on to web-based resources, but the simplified user experience is at risk due to the rapid growth of mobile platforms and increasing variety of accompanying access methods for such devices. The user authentication experience on mobile devices is often further complicated by the poor discovery and delivery design of websites. While the introduction of tools such as Raptor permit accurate tracking of usage statistics via the UK Access Management Federation, the variety of mobile authentication methods such as native apps on devices and device pairing pose additional challenges to librarians trying to gather a complete picture of resource use within their institution. In this article we examine the access challenges posed by the explosion of mobile device use.

  14. Cooperative Mobile Web Browsing

    Directory of Open Access Journals (Sweden)

    Zhang Q

    2009-01-01

    Full Text Available This paper advocates a novel approach for mobile web browsing based on cooperation among wireless devices within close proximity operating in a cellular environment. In the actual state of the art, mobile phones can access the web using different cellular technologies. However, the supported data rates are not sufficient to cope with the ever increasing traffic requirements resulting from advanced and rich content services. Extending the state of the art, higher data rates can only be achieved by increasing complexity, cost, and energy consumption of mobile phones. In contrast to the linear extension of current technology, we propose a novel architecture where mobile phones are grouped together in clusters, using a short-range communication such as Bluetooth, sharing, and accumulating their cellular capacity. The accumulated data rate resulting from collaborative interactions over short-range links can then be used for cooperative mobile web browsing. By implementing the cooperative web browsing on commercial mobile phones, it will be shown that better performance is achieved in terms of increased data rate and therefore reduced access times, resulting in a significantly enhanced web browsing user experience on mobile phones.

  15. Dark Web

    CERN Document Server

    Chen, Hsinchun

    2012-01-01

    The University of Arizona Artificial Intelligence Lab (AI Lab) Dark Web project is a long-term scientific research program that aims to study and understand the international terrorism (Jihadist) phenomena via a computational, data-centric approach. We aim to collect "ALL" web content generated by international terrorist groups, including web sites, forums, chat rooms, blogs, social networking sites, videos, virtual world, etc. We have developed various multilingual data mining, text mining, and web mining techniques to perform link analysis, content analysis, web metrics (technical

  16. Design of a self-access micro-lecture learning platform based on responsive Web

    Institute of Scientific and Technical Information of China (English)

    陈璇; 岑岗; 方泽文

    2015-01-01

    This article analyzes the current state of micro-lecture teaching and platform construction at home and abroad. Guided by constructivist learning theory and the idea of responsive Web design, it applies modern educational technology, mobile communication technology and network technology to extend web-based micro-lectures from the PC to mobile teaching, and designs a responsive self-access micro-lecture learning platform. The platform provides learners with a learning environment that integrates self-access, presentation, learning, testing, communication and evaluation.

  17. WebScipio: An online tool for the determination of gene structures using protein sequences

    Directory of Open Access Journals (Sweden)

    Waack Stephan

    2008-09-01

    Full Text Available Abstract Background Obtaining the gene structure for a given protein encoding gene is an important step in many analyses. Software suited for this task should be readily accessible, accurate, easy to handle and should provide the user with a coherent representation of the most probable gene structure. It should be rigorous enough to optimise features on the level of single bases and at the same time flexible enough to allow for cross-species searches. Results WebScipio, a web interface to the Scipio software, allows a user to obtain the coding sequence structure corresponding to a given query protein sequence that belongs to an already assembled eukaryotic genome. The resulting gene structure is presented in various human-readable formats, such as a schematic representation and a detailed alignment of the query and the target sequence highlighting any discrepancies. WebScipio can also be used to identify and characterise the gene structures of homologs in related organisms. In addition, it offers a web service for integration with other programs. Conclusion WebScipio is a tool that allows users to get a high-quality gene structure prediction from a protein query. It offers more than 250 eukaryotic genomes that can be searched and produces predictions that are close to what can be achieved by manual annotation, for in-species and cross-species searches alike. WebScipio is freely accessible at http://www.webscipio.org.

  18. Responsive Web Design

    OpenAIRE

    Rogatnev, Nikita

    2015-01-01

    Master's dissertation in Informatics Engineering. From computers to tablets and smartphones, the term ubiquity, combined with the diversity of devices available in the market, has changed the way we access and share information. It has become more and more important to offer solid user experiences in an increasing number of contexts. Responsive Web Design offers a set of tools to support the creation of web pages that adapt to any screen size. We use fluid grids, flexible images and me...

  19. AGRIS: providing access to agricultural research data exploiting open data on the web [v1; ref status: indexed, http://f1000r.es/599

    Directory of Open Access Journals (Sweden)

    Fabrizio Celli

    2015-05-01

    Full Text Available AGRIS is the International System for Agricultural Science and Technology. It is supported by a large community of data providers, partners and users. AGRIS is a database that aggregates bibliographic data, and through this core data, related content across online information systems is retrieved by taking advantage of Semantic Web capabilities. AGRIS is a global public good and its vision is to be a responsive service to its user needs by facilitating contributions and feedback regarding the AGRIS core knowledgebase, AGRIS’s future and its continuous development. Periodic AGRIS e-consultations, partner meetings and user feedback are assimilated to the development of the AGRIS application and content coverage. This paper outlines the current AGRIS technical set-up, its network of partners, data providers and users as well as how AGRIS’s responsiveness to clients’ needs inspires the continuous technical development of the application. The paper concludes by providing a use case of how the AGRIS stakeholder input and the subsequent AGRIS e-consultation results influence the development of the AGRIS application, knowledgebase and service delivery.

  20. A Prototype Web-based system for GOES-R Space Weather Data

    Science.gov (United States)

    Sundaravel, A.; Wilkinson, D. C.

    2010-12-01

    The Geostationary Operational Environmental Satellite-R Series (GOES-R) makes use of advanced instruments and technologies to monitor the Earth's surface and provide accurate space weather data. The first GOES-R series satellite is scheduled to be launched in 2015. The data from the satellite will be widely used by scientists for space weather modeling and predictions. This project investigates how these datasets can be made available to scientists on the Web to assist them in their research. We are developing a prototype web-based system that allows users to browse, search and download these data. The GOES-R datasets will be archived in NetCDF (Network Common Data Form) and CSV (Comma Separated Values) format. NetCDF is a self-describing data format that contains both the metadata information and the data, stored in an array-oriented fashion. The web-based system will offer services in two ways: via a web application (portal) and via web services. Using the web application, users can download data in NetCDF or CSV format and can also plot a graph of the data. The web page displays the various categories of data and the time intervals for which the data are available. The web application (client) sends the user query to the server, which then connects to the data sources to retrieve the data and delivers it to the users. Data access will also be provided via SOAP (Simple Object Access Protocol) and REST (Representational State Transfer) web services. These provide functions that can be used by other applications to fetch data and use the data for further processing. To build the prototype system, we are making use of proxy data from existing GOES and POES space weather datasets. Java is the programming language used in developing tools that format data to NetCDF and CSV. For the web technology we have chosen Grails to develop both the web application and the services. Grails is an open source web application
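The record above says the prototype serves data in NetCDF and CSV. Reading NetCDF requires a third-party library (e.g. netCDF4), so the sketch below parses the CSV form with the standard library only. The column names are invented stand-ins, not the real GOES-R schema.

```python
# Sketch of consuming a CSV space-weather export with the stdlib only.
# "proton_flux" and the timestamps are hypothetical example values.

import csv
import io

raw = """time,proton_flux
2010-12-01T00:00Z,1.2
2010-12-01T00:05Z,3.4
"""

rows = list(csv.DictReader(io.StringIO(raw)))
fluxes = [float(r["proton_flux"]) for r in rows]
print(len(rows), max(fluxes))  # 2 3.4
```

In a real client the `io.StringIO` wrapper would be replaced by the response body of an HTTP request to the portal's REST endpoint.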

  1. Association and Sequence Mining in Web Usage

    Directory of Open Access Journals (Sweden)

    Claudia Elena DINUCA

    2011-06-01

    Full Text Available Web servers worldwide generate a vast amount of information on web users' browsing activities. Several researchers have studied these so-called clickstream or web access log data to better understand and characterize web users. Clickstream data can be enriched with information about the content of visited pages and the origin (e.g., geographic, organizational) of the requests. The goal of this project is to analyse user behaviour by mining enriched web access log data. With the continued growth and proliferation of e-commerce, Web services, and Web-based information systems, the volumes of clickstream and user data collected by Web-based organizations in their daily operations have reached astronomical proportions. This information can be exploited in various ways, such as enhancing the effectiveness of websites or developing directed web marketing campaigns. The discovered patterns are usually represented as collections of pages, objects, or resources that are frequently accessed by groups of users with common needs or interests. The focus of this paper is to provide an overview of how to use frequent pattern techniques for discovering different types of patterns in a Web log database. In this paper we focus on finding associations as a data mining technique to extract potentially useful knowledge from web usage data. Using the NetBeans IDE, I implemented a Java program that identifies page associations from sessions. For exemplification, we used the log files from a commercial web site.
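The page-association idea described above can be sketched as counting how often pairs of pages co-occur in the same session, which is the first step toward association rules (support counting). The session data below is invented for illustration.

```python
# Minimal support counting for page pairs across sessions, the building
# block of association-rule mining over web access logs.

from collections import Counter
from itertools import combinations

sessions = [
    ["/home", "/products", "/cart"],
    ["/home", "/products"],
    ["/home", "/blog"],
]

pair_counts = Counter()
for session in sessions:
    # each distinct unordered pair of pages visited in one session
    for pair in combinations(sorted(set(session)), 2):
        pair_counts[pair] += 1

# support of a pair = fraction of sessions in which both pages appear
support = {p: c / len(sessions) for p, c in pair_counts.items()}
print(support[("/home", "/products")])  # 2 of 3 sessions
```

From these counts, rules like "/home => /products" would then be filtered by minimum support and confidence thresholds.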

  2. Design and Implementation of Secure Access to a Logistics Database Based on Web Services

    Institute of Scientific and Technical Information of China (English)

    高峰

    2014-01-01

    Access security is an important concern in database system design. Addressing the access characteristics of a logistics management database, and building on an analysis of its architecture, this paper proposes a secure access model for logistics databases based on Web services. The model is divided into a view layer, a business logic layer, an object/relational mapping layer and a data layer; a corresponding security strategy is designed for the access mechanism of each layer, and the model is then evaluated by simulation. The test results show that the proposed model offers high security and satisfies the security requirements of the logistics database system.

  3. Design and Implementation of a Distributed Web Crawler for Open Access Journals

    Institute of Scientific and Technical Information of China (English)

    杨镇雄; 蔡祖锐; 陈国华; 汤庸; 张龙

    2014-01-01

    Open access (OA) journals are deep-web resources scattered across the Internet. Traditional search engines cannot index these resources, so users cannot obtain OA journal content through them, and the open resources go underused. To collect the open access journal resources dispersed across the Web, this paper proposes a focused Web crawler with a distributed master-slave architecture: a master control center coordinates multiple crawler nodes that can be added or removed dynamically, and academic information is extracted from OA journal pages according to user-predefined rules. The crawler nodes use a Chrome-browser plug-in mechanism to achieve scalability and deployment flexibility.
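The master/worker split described above can be sketched as a master that owns the URL frontier and dedupe set while workers fetch pages and report discovered links. Everything below is a toy single-process sketch: `fetch()` is stubbed with an in-memory link graph instead of real HTTP, and the URLs are invented.

```python
# Toy sketch of a crawler frontier: breadth-first over a stubbed link
# graph. A real distributed version would put frontier/seen on the
# master and run fetch() on remote crawler nodes.

from collections import deque

LINK_GRAPH = {  # stand-in for the live web, for illustration only
    "journal.example/start": ["journal.example/vol1", "journal.example/vol2"],
    "journal.example/vol1": ["journal.example/vol1/a1"],
}

def fetch(url):  # worker-side stub; would issue an HTTP GET in practice
    return LINK_GRAPH.get(url, [])

def crawl(seed, max_pages=10):
    frontier, seen, visited = deque([seed]), {seed}, []
    while frontier and len(visited) < max_pages:
        url = frontier.popleft()        # master hands out work
        visited.append(url)
        for link in fetch(url):         # worker reports links back
            if link not in seen:        # dedupe before re-enqueueing
                seen.add(link)
                frontier.append(link)
    return visited

print(crawl("journal.example/start"))
```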

  4. Online public access catalogs of Mercosur available in a web environment: characteristics of the UBACYT F054 Project

    Directory of Open Access Journals (Sweden)

    Elsa E. Barber

    2005-06-01

    Full Text Available The theoretical and methodological guidelines of the research project UBACYT F054 (Universidad de Buenos Aires Scientific and Technical Programme, 2004-2007) are outlined. The project analyzes the online public access catalogs (OPACs) available in a web environment in the national, academic, special and public libraries of Mercosur. Aspects related to operational control, search formulation, access points, output control and user assistance are studied. Combining quantitative and qualitative approaches, the project aims to produce a situation diagnosis valid for the catalogs of the region. It also undertakes a comparative study in order to identify existing tendencies on the subject in similar libraries in Argentina, Brazil, Paraguay and Uruguay.

  5. Attribute-based Access Control Model for Web Services

    Institute of Scientific and Technical Information of China (English)

    傅鹤岗; 李竞

    2007-01-01

    Traditional access control models are static and coarse-grained and do not fit well in service-oriented environments. This paper proposes an attribute-based access control model (ABAC) that combines the SAML (Security Assertion Markup Language) and XACML (eXtensible Access Control Markup Language) standards and can perform dynamic, fine-grained authorization based on the attributes of the subject, the object and the environment. The new model is more flexible and is particularly suitable for dynamic Web service environments.
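The ABAC idea in this record, a policy as a set of rules over subject, resource and environment attributes (in XACML these would be `<Rule>` elements inside a `<Policy>`), can be sketched as below. The attribute names and the permit-overrides combining choice are illustrative assumptions, not taken from the paper.

```python
# Hedged ABAC sketch: a policy is a list of rules, each rule a list of
# attribute predicates; any fully-matching rule permits (permit-overrides).

def evaluate(policy, subject, resource, environment):
    """Return 'Permit' if any rule's predicates all hold, else 'Deny'."""
    for rule in policy:
        if all(pred(subject, resource, environment) for pred in rule):
            return "Permit"
    return "Deny"

policy = [
    [  # rule: clinicians may access records of their own department, daytime only
        lambda s, r, e: s["role"] == "clinician",
        lambda s, r, e: s["department"] == r["department"],
        lambda s, r, e: 8 <= e["hour"] < 18,
    ],
]

decision = evaluate(policy,
                    {"role": "clinician", "department": "cardiology"},
                    {"department": "cardiology"},
                    {"hour": 10})
print(decision)  # Permit
```

In a full SAML/XACML deployment the subject attributes would arrive in a SAML assertion and the predicates would be expressed declaratively in XACML rather than as code.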

  6. The Salmonella In Silico Typing Resource (SISTR): An Open Web-Accessible Tool for Rapidly Typing and Subtyping Draft Salmonella Genome Assemblies.

    Science.gov (United States)

    Yoshida, Catherine E; Kruczkiewicz, Peter; Laing, Chad R; Lingohr, Erika J; Gannon, Victor P J; Nash, John H E; Taboada, Eduardo N

    2016-01-01

    -typing allows for continuity with historical serotyping data as we transition towards the increasing adoption of genomic analyses in epidemiology. The SISTR platform is freely available on the web at https://lfz.corefacility.ca/sistr-app/.

  7. The Program Management Challenges of Web 2.0

    Science.gov (United States)

    2010-06-01

    internal networks. Web servers and pages existed that supported multi-media in a limited fashion. Most were static Web environments: the initial read-only Web of 1992-1994, with static Web page content that used Mosaic or Netscape browsers to access static HTML Web pages. These pages had

  8. Users’ recognition in web using web mining techniques

    Directory of Open Access Journals (Sweden)

    Hamed Ghazanfaripoor

    2013-06-01

    Full Text Available The rapid growth of the web and its lack of structure or an integrated schema create various obstacles for users trying to access information. Every user access to web information is saved in the corresponding server log files, and these files serve as a resource for discovering patterns of user behaviour. Web mining, a subset of data mining, is the mining of related data from the WWW; based on which part of the data is mined, it is categorized into web content mining, web structure mining and web usage mining. A technique is needed that can learn users' interests and, based on them, automatically filter out unrelated material or offer relevant information to the user in a reasonable amount of time. Web usage mining builds profiles of users in order to recognize them, and it is directly related to web personalization. The primary objective of personalization systems is to provide what users require without asking them explicitly. Formal models, in turn, make it possible to model system behaviour; Petri nets and queueing nets are examples of models that can analyze user behaviour on the web. The primary objective of this paper is to present a colored Petri net that models user interactions in order to offer users a list of recommended pages. Estimating user behaviour is applied in cases such as suggesting suitable pages to continue browsing, e-commerce and targeted advertising. Preliminary results indicate that the proposed method improves the accuracy criterion by 8.3% compared with the static method.
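The recommendation step this record describes can be sketched, in a much simpler form than a colored Petri net, as learning page-to-page transition counts from past sessions and suggesting the most frequent successors of the current page. The sessions and page names below are invented for illustration.

```python
# First-order (Markov-style) next-page recommender from session logs:
# count observed transitions, then rank successors by frequency.

from collections import Counter, defaultdict

def train(sessions):
    transitions = defaultdict(Counter)
    for session in sessions:
        for cur, nxt in zip(session, session[1:]):  # consecutive page pairs
            transitions[cur][nxt] += 1
    return transitions

def recommend(transitions, page, k=2):
    """Top-k most frequently observed successors of `page`."""
    return [p for p, _ in transitions[page].most_common(k)]

model = train([
    ["/home", "/search", "/item/7"],
    ["/home", "/search", "/item/9"],
    ["/home", "/help"],
])
print(recommend(model, "/home"))  # ['/search', '/help']
```

A Petri-net formulation additionally captures concurrency and state (e.g. pages open in parallel), which a plain transition counter cannot express.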

  9. The Salmonella In Silico Typing Resource (SISTR): An Open Web-Accessible Tool for Rapidly Typing and Subtyping Draft Salmonella Genome Assemblies.

    Directory of Open Access Journals (Sweden)

    Catherine E Yoshida

    -based methods of sub-typing allows for continuity with historical serotyping data as we transition towards the increasing adoption of genomic analyses in epidemiology. The SISTR platform is freely available on the web at https://lfz.corefacility.ca/sistr-app/.

  10. Distributed Web Service Repository

    Directory of Open Access Journals (Sweden)

    Piotr Nawrocki

    2015-01-01

    Full Text Available The increasing availability and popularity of computer systems has resulted in a demand for new, language- and platform-independent ways of data exchange. That demand has in turn led to a significant growth in the importance of systems based on Web services. Alongside the growing number of systems accessible via Web services came the need for specialized data repositories that could offer effective means of searching the available services. The development of mobile systems and wireless data transmission technologies has allowed the use of distributed devices and computer systems on a greater scale. The accelerating growth of distributed systems is a good reason to consider the development of distributed Web service repositories with built-in mechanisms for data migration and synchronization.

  11. Web Engineering

    Energy Technology Data Exchange (ETDEWEB)

    White, Bebo

    2003-06-23

    Web Engineering is the application of systematic, disciplined and quantifiable approaches to development, operation, and maintenance of Web-based applications. It is both a pro-active approach and a growing collection of theoretical and empirical research in Web application development. This paper gives an overview of Web Engineering by addressing the questions: (a) why is it needed? (b) what is its domain of operation? (c) how does it help and what should it do to improve Web application development? and (d) how should it be incorporated in education and training? The paper discusses the significant differences that exist between Web applications and conventional software, the taxonomy of Web applications, the progress made so far and the research issues and experience of creating a specialization at the master's level. The paper reaches a conclusion that Web Engineering at this stage is a moving target since Web technologies are constantly evolving, making new types of applications possible, which in turn may require innovations in how they are built, deployed and maintained.

  12. WebFTS: File Transfer Web Interface for FTS3

    CERN Document Server

    CERN. Geneva

    2014-01-01

    WebFTS is a web-delivered file transfer and management solution which allows users to invoke reliable, managed data transfers on distributed infrastructures. The fully open source solution offers a simple graphical interface through which the power of the FTS3 service can be accessed without the installation of any special grid tools. Created following simplicity and efficiency criteria, WebFTS allows the user to access and interact with multiple grid and cloud storages. The "transfer engine" used is FTS3, the service responsible for distributing the majority of LHC data across the WLCG infrastructure. This provides WebFTS with reliable, multi-protocol, adaptively optimised data transfers. The talk will focus on the recent developments which allow transfers from/to Dropbox and CERNBox (the CERN ownCloud deployment).

  13. A Web Access Control Application Based on SDN

    Institute of Scientific and Technical Information of China (English)

    王敢甫; 吴京京; 韩达

    2016-01-01

    Software-Defined Networking (SDN) and its main protocol, OpenFlow, decouple the control plane from the data plane and provide application-plane programming interfaces, giving data centers a more convenient means of managing network devices. In this paper, based on the northbound interface of the OpenDaylight (ODL) controller, we implement, in a network offering web services, a function that controls user access to a given TCP/IP port, and we enable administrators to configure it in real time.
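    The port-blocking function described above can be illustrated with a toy software flow table. This is a hedged sketch of the general OpenFlow match-action idea only, not the OpenDaylight northbound API; the field names and actions are invented for illustration:

    ```python
    # A minimal software flow-table sketch: a controller installs a rule
    # matching a TCP destination port, and matching packets are dropped.
    # Field names are illustrative, not the OpenFlow wire format.

    flow_table = []

    def install_rule(match, action):
        """Install a flow rule; earlier rules have higher priority."""
        flow_table.append({"match": match, "action": action})

    def handle_packet(packet):
        for rule in flow_table:
            if all(packet.get(k) == v for k, v in rule["match"].items()):
                return rule["action"]
        return "FORWARD"  # default action when no rule matches

    # Administrator blocks access to TCP port 8080 at runtime.
    install_rule({"ip_proto": "tcp", "tcp_dst": 8080}, "DROP")

    print(handle_packet({"ip_proto": "tcp", "tcp_dst": 8080}))  # DROP
    print(handle_packet({"ip_proto": "tcp", "tcp_dst": 443}))   # FORWARD
    ```

    In a real deployment the controller would push an equivalent match-action rule down to the switches rather than inspect packets itself.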

  14. A Hybrid Evaluation System Framework (Shell & Web) with Standardized Access to Climate Model Data and Verification Tools for a Clear Climate Science Infrastructure on Big Data High Performance Computers

    Science.gov (United States)

    Kadow, Christopher; Illing, Sebastian; Kunst, Oliver; Ulbrich, Uwe; Cubasch, Ulrich

    2015-04-01

    The project 'Integrated Data and Evaluation System for Decadal Scale Prediction' (INTEGRATION), part of the German decadal prediction project MiKlip, develops a central evaluation system. The fully operational hybrid features HPC shell access and a user-friendly web interface. It employs one common system with a variety of verification tools and validation data from different projects inside and outside of MiKlip. The evaluation system is located at the German Climate Computing Centre (DKRZ) and has direct access to the bulk of its ESGF node, including millions of climate model data sets, e.g. from CMIP5 and CORDEX. The database is organized by the international CMOR standard, using the meta information of the self-describing model, reanalysis and observational data sets. Apache Solr is used for indexing the different data projects into one common search environment. This metadata system, with its advanced but easy-to-handle search tool, supports users, developers and their tools in retrieving the required information. A generic application programming interface (API) allows scientific developers to connect their analysis tools to the evaluation system independently of the programming language used. Users of the evaluation techniques benefit from the common interface of the evaluation system without any need to understand the different scripting languages. Facilitating the provision and usage of tools and climate data automatically increases the number of scientists working with the data sets and identifying discrepancies. Additionally, the history and configuration sub-system stores every analysis performed with the evaluation system in a MySQL database. Configurations and results of the tools can be shared among scientists via the shell or the web system; plugged-in tools thus gain automatically from transparency and reproducibility. Furthermore, when configurations match while starting an evaluation tool, the system suggests using results already produced.

  15. E-learning use patterns in the workplace – Web logs from interaction with a web based lecture

    Directory of Open Access Journals (Sweden)

    Christian Ostlund

    2012-11-01

    Full Text Available When designing for e-learning, the objective is to design for learning: the technology supporting the learning activity should aid and support the learning process and be an arena where learning is likely to occur. To achieve this when designing e-learning for the workplace, the author argues that it is important to know how users actually access and use e-learning systems. To gain this knowledge, web logs from a web lecture developed for a Scandinavian public body were analyzed. During a period of two and a half months, 15 learners visited the web lecture 74 times. The web lecture consisted of streaming video with exercises and additional links to resources on the WWW, providing an opportunity to investigate the topic from multiple perspectives; it took approximately one hour to finish. Using web usage mining for the analysis, seven groups or interaction patterns emerged: peaking, one go, partial order, partial unordered, single module, mixed modules, and non-video modules. Furthermore, the web logs paint a picture of learning activities being interrupted. This suggests that modules need to be fine-grained (e.g. less than 8 minutes per video clip) so that learners do not waste time watching parts of a clip while waiting for the part of interest to appear, or having to fast-forward. A clear and logical structure is also important to help learners find their way back quickly and accurately.

  16. 'You've got m@il: fluoxetine coming soon!': accessibility and quality of a prescription drug sold on the web.

    Science.gov (United States)

    Gelatti, U; Pedrazzani, R; Marcantoni, C; Mascaretti, S; Repice, C; Filippucci, L; Zerbini, I; Dal Grande, M; Orizio, G; Feretti, D

    2013-09-01

    The increasing phenomenon of online pharmacies has potential for serious public health problems. This study aimed to evaluate the possibility of accessing a prescription drug in the absence of a prescription for an Italian purchaser. Fluoxetine pills were ordered from several online pharmacies. The study included website analysis, and the quality of the received product including packaging, chemical and microbiological analyses. Orders could be placed correctly on 61 of the 98 selected websites, and a sales transaction was concluded successfully on 17 websites. Thirteen drug samples were eventually received. In one case it was necessary to fill in a questionnaire before ordering the drugs. All websites displayed aggressive marketing strategies. There was wide variation in terms of domain registration, company base (when declared) and manufacturer's location (mostly India). All pills were delivered in sealed blister packs showing the lot number and manufacturer's details. A leaflet was enclosed in one case only. In three cases we received more pills than ordered, and in one case Viagra pills as a free gift. Pharmacopoeia microbiological requirements were satisfied. Chemical analysis revealed that the active principle was always present, although many samples did not meet the Pharmacopoeia "other impurities" or "total impurities" criteria. Heavy metals and solvents regulated by the Pharmacopoeia did not exceed the set limits; some of the non-regulated ones were also assessed, in some cases with a positive result (e.g. styrene). About 20% of purchase attempts resulted in delivery of the drugs, even in the absence of a medical prescription. Traceability was poor and drug quality was generally worse compared to conventional pharmacy-purchased products. Based on all these broad-spectrum results, user safety appears not to be globally guaranteed. Copyright © 2013 Elsevier B.V. All rights reserved.

  17. A Semantic Scraping Model for Web Resources - Applying Linked Data to Web Page Screen Scraping

    OpenAIRE

    Fernández Villamor, José Ignacio; Blasco Garcia, Jacobo; Iglesias Fernandez, Carlos Angel; Garijo Ayestaran, Mercedes

    2011-01-01

    In spite of the increasing presence of Semantic Web Facilities, only a limited amount of the available resources in the Internet provide a semantic access. Recent initiatives such as the emerging Linked Data Web are providing semantic access to available data by porting existing resources to the semantic web using different technologies, such as database-semantic mapping and scraping. Nevertheless, existing scraping solutions are based on ad-hoc solutions complemented with graphical interface...

  18. Web Search Studies: Multidisciplinary Perspectives on Web Search Engines

    Science.gov (United States)

    Zimmer, Michael

    Perhaps the most significant tool of our internet age is the web search engine, providing a powerful interface for accessing the vast amount of information available on the world wide web and beyond. While still in its infancy compared to the knowledge tools that precede it - such as the dictionary or encyclopedia - the impact of web search engines on society and culture has already received considerable attention from a variety of academic disciplines and perspectives. This article aims to organize a meta-discipline of “web search studies,” centered around a nucleus of major research on web search engines from five key perspectives: technical foundations and evaluations; transaction log analyses; user studies; political, ethical, and cultural critiques; and legal and policy analyses.

  19. A New Hidden Web Crawling Approach

    OpenAIRE

    L.Saoudi; A.Boukerram; S.Mhamedi

    2015-01-01

    Traditional search engines deal with the Surface Web, the set of Web pages directly accessible through hyperlinks, and ignore a large part of the Web called the hidden Web: a great amount of valuable information in online databases that is "hidden" behind query forms. To access that information, the crawler has to fill the forms with valid data; for this reason we propose a new approach which uses an SQLI technique in order to find the most promising keywords of a specific dom...

  20. A Novel Technique for Web Log mining with Better Data Cleaning and Transaction Identification

    Directory of Open Access Journals (Sweden)

    J. Vellingiri

    2011-01-01

    Full Text Available Problem statement: In the internet era, web sites are a useful source of information for almost every activity, so the World Wide Web is developing rapidly in traffic volume and in the size and complexity of web sites. Web mining is the application of data mining, artificial intelligence, chart technology and so on to web data; it traces users' visiting behaviors and extracts their interests using patterns. Because of its direct application in e-commerce, web analytics, e-learning and information retrieval, web mining has become one of the important areas in computer and information science. Several techniques such as web usage mining exist, but each has its own disadvantages. This study focuses on providing techniques for better data cleaning and transaction identification from the web log. Approach: Log data is usually noisy and ambiguous, and preprocessing is an important step for an efficient mining process. In preprocessing, the data cleaning step includes removal of records for graphics, videos and format information, records with failed HTTP status codes, and robot traffic. Sessions are reconstructed and paths are completed by appending missing pages. The transactions that depict user behavior are also constructed accurately in preprocessing by calculating the reference lengths of user accesses, taking the byte rate into account. Results: When the number of records is considered, for example, for 1000 records only 350 records remain after data cleaning. When execution time is considered, the initial log takes 119 seconds to process, whereas only 52 seconds are required by the proposed technique. Conclusion: The experimental results show the performance of the proposed algorithm; compared to existing approaches it gives good results for web usage mining.
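    The data cleaning step described in the abstract (dropping media requests, failed status codes and robot traffic) can be sketched as follows; the log lines, hosts and paths are invented for illustration:

    ```python
    import re

    # Illustrative Common Log Format lines; addresses and paths are invented.
    raw_log = [
        '1.2.3.4 - - [10/Oct/2011:13:55:36] "GET /index.html HTTP/1.1" 200 2326',
        '1.2.3.4 - - [10/Oct/2011:13:55:37] "GET /logo.gif HTTP/1.1" 200 512',
        '5.6.7.8 - - [10/Oct/2011:13:55:38] "GET /missing HTTP/1.1" 404 0',
        '9.9.9.9 - - [10/Oct/2011:13:55:39] "GET /robots.txt HTTP/1.1" 200 68',
        '1.2.3.4 - - [10/Oct/2011:13:55:40] "GET /page2.html HTTP/1.1" 200 1044',
    ]

    MEDIA = re.compile(r"\.(gif|jpe?g|png|css|js|mp4|avi)$", re.IGNORECASE)

    def clean(lines):
        kept = []
        for line in lines:
            m = re.search(r'"GET (\S+) HTTP/[\d.]+" (\d{3})', line)
            if not m:
                continue
            path, status = m.group(1), int(m.group(2))
            if MEDIA.search(path):          # graphics/video/format requests
                continue
            if status >= 400:               # failed HTTP status codes
                continue
            if path == "/robots.txt":       # robot (crawler) traffic
                continue
            kept.append(line)
        return kept

    cleaned = clean(raw_log)
    print(len(cleaned))  # 2 of the 5 records survive cleaning
    ```

    Session reconstruction and reference-length calculation would then run on the surviving records.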

  1. Optimizing the Performance of WebGIS Based on Web Service

    Institute of Scientific and Technical Information of China (English)

    韩双旺

    2011-01-01

    A GIS involves not only attribute data but also geo-spatial data, so the data volume is relatively large and performance must be considered when designing and implementing a WebGIS. To implement the functions of a Web Service-based WebGIS more efficiently, it is necessary to optimize its performance. This can be achieved through a series of measures: increasing the granularity of the Web services, not using XML as the internal interface of the WebGIS system, compressing SOAP, accessing the Web methods of the server-side Web Service asynchronously, optimizing the database, and using client-side and server-side caching. These measures speed up data access, reduce the network transmission load, and improve the performance of a WebGIS based on Web Service.
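    The SOAP-compression measure mentioned above can be illustrated with a quick sketch. The payload is an invented, highly repetitive XML message; the actual savings depend on the data being transferred:

    ```python
    import gzip

    # Illustrative SOAP-style XML payload; the element names are invented.
    payload = (
        '<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">'
        "<soap:Body>"
        + "<feature><id>42</id><geometry>POINT(114.0 38.0)</geometry></feature>" * 200
        + "</soap:Body></soap:Envelope>"
    ).encode("utf-8")

    compressed = gzip.compress(payload)
    ratio = len(compressed) / len(payload)
    print(f"{len(payload)} -> {len(compressed)} bytes ({ratio:.1%})")
    # Highly repetitive XML typically compresses to a small fraction of its size.
    ```

    The trade-off is CPU time on both ends for bytes on the wire, which usually pays off for large geo-spatial responses.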

  2. Web Similarity

    NARCIS (Netherlands)

    Cohen, A.R.; Vitányi, P.M.B.

    2015-01-01

    Normalized web distance (NWD) is a similarity or normalized semantic distance based on the World Wide Web or any other large electronic database, for instance Wikipedia, and a search engine that returns reliable aggregate page counts. For sets of search terms the NWD gives a similarity on a scale fr
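    The truncated definition above is usually written as follows: for search terms x and y, with f(x) and f(y) the page counts returned for each term alone, f(x, y) the count for both terms together, and N the total number of indexed pages,

    ```latex
    \mathrm{NWD}(x,y) \;=\; \frac{\max\{\log f(x),\,\log f(y)\} \;-\; \log f(x,y)}
                                 {\log N \;-\; \min\{\log f(x),\,\log f(y)\}}
    ```

    Values near 0 indicate that the terms co-occur about as often as they occur separately (high similarity); larger values indicate dissimilarity.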

  3. Borderless Geospatial Web (bolegweb)

    Science.gov (United States)

    Cetl, V.; Kliment, T.; Kliment, M.

    2016-06-01

    Effective access to and use of geospatial information (GI) resources is of critical importance in a modern knowledge-based society. Standard web services defined by the Open Geospatial Consortium (OGC) are frequently used in implementations of spatial data infrastructures (SDIs) to facilitate discovery and use of geospatial data. This data is stored in databases located in a layer called the invisible web and is thus ignored by search engines. An SDI uses a catalogue (discovery) service for the web as a gateway to the GI world through metadata defined by ISO standards, which are structurally different from OGC metadata. Therefore, a crosswalk needs to be implemented to bridge the OGC resources discovered on the mainstream web with those documented by metadata in an SDI, enriching its information extent. A public, global and user-friendly portal of OGC resources available on the web ensures and enhances the use of GI in a multidisciplinary context and bridges the geospatial web from the end-user perspective, thus opening its borders to everybody. The project "Crosswalking the layers of geospatial information resources to enable a borderless geospatial web", with the acronym BOLEGWEB, is ongoing as a postdoctoral research project at the Faculty of Geodesy, University of Zagreb, Croatia (http://bolegweb.geof.unizg.hr/). The research leading to the results of the project has received funding from the European Union Seventh Framework Programme (FP7 2007-2013) under Marie Curie FP7-PEOPLE-2011-COFUND. The project started in November 2014 and is planned to finish by the end of 2016. This paper provides an overview of the project, its research questions and methodology, the results achieved so far, and future steps.

  4. A semantically enriched web usage based recommendation model

    CERN Document Server

    Ramesh, C; Govardhan, A

    2011-01-01

    With the rapid growth of internet technologies, Web has become a huge repository of information and keeps growing exponentially under no editorial control. However the human capability to read, access and understand Web content remains constant. This motivated researchers to provide Web personalized online services such as Web recommendations to alleviate the information overload problem and provide tailored Web experiences to the Web users. Recent studies show that Web usage mining has emerged as a popular approach in providing Web personalization. However conventional Web usage based recommender systems are limited in their ability to use the domain knowledge of the Web application. The focus is only on Web usage data. As a consequence the quality of the discovered patterns is low. In this paper, we propose a novel framework integrating semantic information in the Web usage mining process. Sequential Pattern Mining technique is applied over the semantic space to discover the frequent sequential patterns. Th...

  5. Complementary Advantages of Web-based Self-access English Learning and In-class English Teaching

    Institute of Scientific and Technical Information of China (English)

    卿雯; 殷燕

    2012-01-01

    Web-based self-access English learning and in-class English teaching, the two main ways college students learn English, each have their advantages and deficiencies. To make them better serve non-English majors' English learning, this paper investigates their present situations, integrates their advantages, and proposes possible solutions for the complementary use of the two methods.

  6. Anonymous Web Browsing and Hosting

    Directory of Open Access Journals (Sweden)

    MANOJ KUMAR

    2013-02-01

    Full Text Available In today's high-tech environment, every organization and individual computer user accesses web data over the internet. Secure web solutions are required to maintain high confidentiality and security of that data. In this paper we describe dedicated anonymous web browsing solutions which make browsing faster and more secure. Web applications that transfer secret information, such as email, raise more and more security concerns. This paper also describes how to choose safe web hosting solutions and which main functions provide more security for server data. Alongside browser security, network security is also important; it can be implemented using cryptography solutions, VPNs, and firewalls on the network. Hackers always try to steal our identity and data: they track our activities using network application software and carry out harmful activities. We therefore also describe how to monitor them for security purposes.

  7. Programming Web Services with SOAP

    CERN Document Server

    Snell, James L; Kulchenko, Pavel

    2002-01-01

    The web services architecture provides a new way to think about and implement application-to-application integration and interoperability that makes the development platform irrelevant. Two applications, regardless of operating system, programming language, or any other technical implementation detail, communicate using XML messages over open Internet protocols such as HTTP or SMTP. The Simple Object Access Protocol (SOAP) is a specification that details how to encode that information and has become the messaging protocol of choice for Web services. Programming Web Services with SOAP is a detail
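    A minimal sketch of the SOAP message shape described above, using only the Python standard library; the getQuote operation, its namespace, and the field names are invented for illustration:

    ```python
    import xml.etree.ElementTree as ET

    SOAP_NS = "http://schemas.xmlsoap.org/soap/envelope/"

    # Build a minimal SOAP 1.1 request envelope for a hypothetical
    # getQuote operation in an invented namespace.
    ET.register_namespace("soap", SOAP_NS)
    envelope = ET.Element(f"{{{SOAP_NS}}}Envelope")
    body = ET.SubElement(envelope, f"{{{SOAP_NS}}}Body")
    call = ET.SubElement(body, "getQuote", {"xmlns": "urn:example:quotes"})
    ET.SubElement(call, "symbol").text = "ACME"

    message = ET.tostring(envelope, encoding="unicode")
    print(message)

    # The receiver parses the same XML to dispatch the call.
    parsed = ET.fromstring(message)
    symbol = parsed.find(".//{urn:example:quotes}symbol")
    print(symbol.text)  # ACME
    ```

    In practice the serialized envelope would be sent as the body of an HTTP POST (with a SOAPAction header) to the service endpoint.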

  8. Web Classification Using DYN FP Algorithm

    Directory of Open Access Journals (Sweden)

    Bhanu Pratap Singh

    2014-01-01

    Full Text Available Web mining is the application of data mining techniques to extract knowledge from the Web. Web mining has been explored to a great extent, and different techniques have been proposed for a variety of applications including Web search, classification and personalization. The primary goal of a web site is to provide relevant information to users. Web mining techniques are used to categorize users and pages by analyzing user behavior, the content of pages and the order of URLs accessed. This paper proposes an auto-classification algorithm for web pages using data mining techniques. It addresses the problem of discovering association rules between terms in a set of web pages belonging to a category in a search engine database, and presents an auto-classification algorithm for solving this problem that is fundamentally based on the FP-growth algorithm.
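    The term-association idea in the abstract can be sketched with a simplified stand-in for FP-growth (explicit pair counting instead of the FP-tree); the corpus, categories and terms are all invented for illustration:

    ```python
    from collections import Counter
    from itertools import combinations

    # Toy corpus of pages already labelled with a category; the term sets
    # are invented stand-ins for tokenized page content.
    pages = {
        "sports":  [{"match", "team", "score"}, {"team", "score", "goal"}],
        "finance": [{"stock", "market", "price"}, {"market", "price", "trade"}],
    }

    min_support = 2  # a pair must occur in at least 2 pages of the category

    # Mine frequent term pairs per category. FP-growth would find the same
    # sets without enumerating candidate pairs explicitly.
    frequent = {}
    for category, docs in pages.items():
        counts = Counter(pair for doc in docs
                         for pair in combinations(sorted(doc), 2))
        frequent[category] = {p for p, c in counts.items() if c >= min_support}

    def classify(terms):
        """Assign the category whose frequent pairs overlap `terms` most."""
        scores = {cat: sum(set(pair) <= terms for pair in pairs)
                  for cat, pairs in frequent.items()}
        return max(scores, key=scores.get)

    print(classify({"team", "score", "win"}))      # sports
    print(classify({"market", "price", "bond"}))   # finance
    ```

    The paper's algorithm operates on the same principle at scale: frequent term associations mined per category become the features used to classify new pages.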

  9. XML databases and the semantic web

    CERN Document Server

    Thuraisingham, Bhavani

    2002-01-01

    Efficient access to data, sharing data, extracting information from data, and making use of the information have become urgent needs for today's corporations. With so much data on the Web, managing it with conventional tools is becoming almost impossible. New tools and techniques are necessary to provide interoperability as well as warehousing between multiple data sources and systems, and to extract information from the databases. XML Databases and the Semantic Web focuses on critical and new Web technologies needed for organizations to carry out transactions on the Web, to understand how to use the Web effectively, and to exchange complex documents on the Web. This reference for database administrators, database designers, and Web designers working in tandem with database technologists covers three emerging technologies of significant impact for electronic business: Extensible Markup Language (XML), semi-structured databases, and the semantic Web. The first two parts of the book explore these emerging techn...

  10. Radio-anatomy Atlas for delineation (SIRIADE) web site: features and 1 year results

    Energy Technology Data Exchange (ETDEWEB)

    Denisa, F. [Centre Jean-Bernard, Clinique Victor-Hugo, 72 - Le Mans (France); Pointreau, Y. [Clinique d' oncologie radiotherapie, Centre Henry-S.-Kaplan, CHU Bretonneau, 37 - Tours (France)

    2010-07-01

    3-D conformal radiotherapy is based on accurate target volume delineation. Radio-anatomy knowledge is useful but sometimes difficult to obtain, and the sources of recommendations for volume definition are disparate. We therefore developed a free radio-anatomy web site dedicated to volume delineation for radiation oncologists (www.siriade.org). This web site is a search engine that gives access to the delineation characteristics of the main tumours, illustrated with clinical cases. It does not aim to provide guidelines; its main purpose is to provide an iconographic training support with frequent updates. We present the features of this web site and one year of connection statistics. (authors)

  11. Office 2010 Web Apps For Dummies

    CERN Document Server

    Weverka, Peter

    2010-01-01

    Enhance your Microsoft Office 2010 experience with Office 2010 Web Apps!. Office Web Apps complement Office, making it easy to access and edit files from anywhere. It also simplifies collaboration with those who don't have Microsoft Office on their computers. This helpful book shows you the optimum ways you can use Office Web Apps to save time and streamline your work. Veteran For Dummies author Peter Weverka begins with an introduction to Office Web Apps and then goes on to clearly explain how Office Web Apps provide you with easier, faster, more flexible ways to get things done.: Walks you t

  12. Semantic Annotations and Querying of Web Data Sources

    Science.gov (United States)

    Hornung, Thomas; May, Wolfgang

    A large part of the Web, actually holding a significant portion of the useful information throughout the Web, consists of views on hidden databases, provided by numerous heterogeneous interfaces that are partly human-oriented via Web forms ("Deep Web"), and partly based on Web Services (only machine-accessible). In this paper we present an approach for annotating these sources in a way that makes them citizens of the Semantic Web. We illustrate how queries can be stated in terms of the ontology, and how the annotations are used to select and access appropriate sources and to answer the queries.

  13. A Deep Web Data Integration System for Job Search

    Institute of Scientific and Technical Information of China (English)

    LIU Wei; LI Xian; LING Yanyan; ZHANG Xiaoyu; MENG Xiaofeng

    2006-01-01

    With the rapid development of the Web, there are more and more Web databases available for users to access. At the same time, job seekers often have difficulty first finding the right sources and then querying over them, so an integrated job search system over Web databases has become a Web application in high demand. Based on this consideration, we built a deep Web data integration system that supports unified access to multiple job Web sites, acting as a job meta-search engine. In this paper, the architecture of the system is given first, and then the key components of the system are introduced.

  14. Web Page Recommendation Using Web Mining

    Directory of Open Access Journals (Sweden)

    Modraj Bhavsar

    2014-07-01

    Full Text Available On the World Wide Web, various kinds of content are generated in huge amounts, so web recommendation has become an important part of web applications for giving relevant results to users. Different kinds of web recommendations are made available to users every day, including images, video, audio, query suggestions and web pages. In this paper we aim at providing a framework for web page recommendation. First, we describe the basics of web mining and the types of web mining; second, we detail each web mining technique; third, we propose an architecture for personalized web page recommendation.

  15. Web Personalization Using Web Mining

    Directory of Open Access Journals (Sweden)

    Ms.Kavita D.Satokar,

    2010-03-01

    Full Text Available The information on the web is growing dramatically, and users have to spend a lot of time on the web finding the information they are interested in. Today, traditional search engines do not give users enough personalized help but provide them with lots of irrelevant information. In this paper, we present a personalized Web search system which helps users get relevant web pages based on their selection from a domain list. Thus, users can obtain a set of interesting domains and the corresponding web pages from the system. The system is based on features extracted from hyperlinks, such as anchor terms or URL tokens. Our methodology uses an innovative weighted URL Rank algorithm based on user-interest domains and the user query.

  16. Semantic web services for web databases

    CERN Document Server

    Ouzzani, Mourad

    2011-01-01

    Semantic Web Services for Web Databases introduces an end-to-end framework for querying Web databases using novel Web service querying techniques. This includes a detailed framework for the query infrastructure for Web databases and services. Case studies are covered in the last section of this book. Semantic Web Services For Web Databases is designed for practitioners and researchers focused on service-oriented computing and Web databases.

  17. Metadata Schema Used in OCLC Sampled Web Pages

    OpenAIRE

    Fei Yu

    2005-01-01

    The tremendous growth of Web resources has made information organization and retrieval more and more difficult. As one approach to this problem, metadata schemas have been developed to characterize Web resources. However, many questions have been raised about the use of metadata schemas such as which metadata schemas have been used on the Web? How did they describe Web accessible information? What is the distribution of these metadata schemas among Web pages? Do certain schemas dominate the o...

  18. Sensor web

    Science.gov (United States)

    Delin, Kevin A. (Inventor); Jackson, Shannon P. (Inventor)

    2011-01-01

    A Sensor Web formed of a number of different sensor pods. Each of the sensor pods includes a clock which is synchronized with a master clock so that all of the sensor pods in the Web have a synchronized clock. The synchronization is carried out by first using a coarse synchronization, which takes less power, and subsequently carrying out a fine synchronization of all the pods on the Web. After the synchronization, the pods ping their neighbors to determine which pods are listening and respond, and then only listen during time slots corresponding to those pods which respond.
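    The coarse-then-fine synchronization scheme can be sketched as two passes over a clock offset. The step size, clock values and pod names below are invented for illustration and are not from the patent:

    ```python
    # Two-phase clock synchronization sketch: a cheap coarse pass brings each
    # pod's clock within a bounded distance of the master, then a fine pass
    # removes the residual offset.

    master_time = 1_000_000.0
    pod_clocks = {"pod-a": 1_000_123.4, "pod-b": 999_871.2, "pod-c": 1_000_002.7}

    def coarse_sync(clock, master, step=10.0):
        """Low-power pass: correct the offset only in coarse steps."""
        offset = clock - master
        return clock - round(offset / step) * step

    def fine_sync(clock, master):
        """Follow-up pass: here it simply snaps to the master; a real
        system would measure and subtract the residual offset."""
        return master

    for pod, clock in pod_clocks.items():
        coarse = coarse_sync(clock, master_time)
        final = fine_sync(coarse, master_time)
        print(pod, abs(coarse - master_time) <= 5.0, final == master_time)
    ```

    The two-phase split saves power because the coarse pass can run with cheap, infrequent messages, leaving only a small residual for the more expensive fine pass.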

  19. Speaking Fluently And Accurately

    Institute of Scientific and Technical Information of China (English)

    JosephDeVeto

    2004-01-01

    Even after many years of study, students make frequent mistakes in English. In addition, many students still need a long time to think of what they want to say. For some reason, in spite of all the studying, students are still not quite fluent. When I teach, I use one technique that helps students speak not only more accurately but also more fluently. That technique is dictation.

  20. Availability, Access, Authenticity, and Persistence: Creating the Environment for Permanent Public Access to Electronic Government Information.

    Science.gov (United States)

    Barnum, George

    2002-01-01

    Discusses efforts by the Federal Depository Library Program to make information accessible more or mostly by electronic means. Topics include Web-based locator tools; collection development; digital archives; bibliographic metadata; and access tools and user interfaces. (Author/LRW)

  1. Focused Crawling of the Deep Web Using Service Class Descriptions

    Energy Technology Data Exchange (ETDEWEB)

    Rocco, D; Liu, L; Critchlow, T

    2004-06-21

    Dynamic Web data sources--sometimes known collectively as the Deep Web--increase the utility of the Web by providing intuitive access to data repositories anywhere that Web access is available. Deep Web services provide access to real-time information, like entertainment event listings, or present a Web interface to large databases or other data repositories. Recent studies suggest that the size and growth rate of the dynamic Web greatly exceed that of the static Web, yet dynamic content is often ignored by existing search engine indexers owing to the technical challenges that arise when attempting to search the Deep Web. To address these challenges, we present DynaBot, a service-centric crawler for discovering and clustering Deep Web sources offering dynamic content. DynaBot has three unique characteristics. First, DynaBot utilizes a service class model of the Web implemented through the construction of service class descriptions (SCDs). Second, DynaBot employs a modular, self-tuning system architecture for focused crawling of the Deep Web using service class descriptions. Third, DynaBot incorporates methods and algorithms for efficient probing of the Deep Web and for discovering and clustering Deep Web sources and services through SCD-based service matching analysis. Our experimental results demonstrate the effectiveness of the service class discovery, probing, and matching algorithms and suggest techniques for efficiently managing service discovery in the face of the immense scale of the Deep Web.
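
The service-class matching idea can be illustrated as follows; the SCD structure and field names are hypothetical sketches, not DynaBot's actual format.

```python
# A service class description (SCD) lists the input parameters a class of
# Deep Web services is expected to expose. A candidate Web form is matched
# against the class by checking its fields. All names here are hypothetical.
scd_event_search = {
    "required": {"city", "date"},
    "optional": {"category"},
}

def matches_service_class(form_fields, scd):
    """A form matches the service class if it exposes every required parameter."""
    return scd["required"] <= set(form_fields)

# An event-listing form with city/date/category fields matches the class;
# a single keyword box does not.
print(matches_service_class(["city", "date", "category"], scd_event_search))
print(matches_service_class(["keyword"], scd_event_search))
```

A real crawler would combine such structural checks with probing (submitting sample queries and inspecting results), as the abstract describes.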

  2. Discovering User Profiles for Web Personalized Recommendation

    Institute of Scientific and Technical Information of China (English)

    Ai-Bo Song; Mao-Xian Zhao; Zuo-Peng Liang; Yi-Sheng Dong; Jun-Zhou Luo

    2004-01-01

    With the growing popularity of the World Wide Web, large volume of user access data has been gathered automatically by Web servers and stored in Web logs. Discovering and understanding user behavior patterns from log files can provide Web personalized recommendation services. In this paper, a novel clustering method is presented for log files called Clustering large Weblog based on Key Path Model (CWKPM), which is based on user browsing key path model, to get user behavior profiles. Compared with the previous Boolean model, key path model considers the major features of users' accessing to the Web: ordinal, contiguous and duplicate. Moreover, for clustering, it has fewer dimensions. The analysis and experiments show that CWKPM is an efficient and effective approach for clustering large and high-dimension Web logs.
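
The contrast between the Boolean model and the key path model can be shown with a toy session; the key-path extraction below is a hypothetical simplification of CWKPM, kept only to illustrate the ordinal, contiguous and duplicate properties the abstract names.

```python
def boolean_model(session):
    """Boolean model: records only WHICH pages were visited (order and repeats lost)."""
    return frozenset(session)

def key_path_model(session, min_len=2):
    """Key-path model (hypothetical simplification): contiguous, ordered
    subpaths of the browsing session, with duplicate visits preserved."""
    paths = []
    for i in range(len(session)):
        for j in range(i + min_len, len(session) + 1):
            paths.append(tuple(session[i:j]))
    return paths

session = ["/home", "/news", "/home", "/sports"]

# The Boolean model collapses the repeated /home visit into one element...
print(boolean_model(session))
# ...while the key-path model keeps order, contiguity and the duplicate.
print(("/home", "/news", "/home") in key_path_model(session))
```

Clustering sessions by shared key paths, rather than by Boolean page sets, is what lets the paper's method distinguish users who visit the same pages in different orders.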

  4. BORDERLESS GEOSPATIAL WEB (BOLEGWEB)

    Directory of Open Access Journals (Sweden)

    V. Cetl

    2016-06-01

    Full Text Available Effective access to and use of geospatial information (GI) resources is of critical importance in a modern knowledge-based society. Standard web services defined by the Open Geospatial Consortium (OGC) are frequently used within implementations of spatial data infrastructures (SDIs) to facilitate discovery and use of geospatial data. This data is stored in databases located in a layer called the invisible web, and is thus ignored by search engines. An SDI uses a catalogue (discovery) service for the web as a gateway to the GI world through metadata defined by ISO standards, which are structurally diverse from OGC metadata. Therefore, a crosswalk needs to be implemented to bridge the OGC resources discovered on the mainstream web with those documented by metadata in an SDI, to enrich its information extent. A global, user-friendly public portal of OGC resources available on the web ensures and enhances the use of GI within a multidisciplinary context and bridges the geospatial web from the end-user perspective, thus opening its borders to everybody. The project “Crosswalking the layers of geospatial information resources to enable a borderless geospatial web”, with the acronym BOLEGWEB, is ongoing as a postdoctoral research project at the Faculty of Geodesy, University of Zagreb, Croatia (http://bolegweb.geof.unizg.hr/). The research leading to the results of the project has received funding from the European Union Seventh Framework Programme (FP7 2007-2013) under Marie Curie FP7-PEOPLE-2011-COFUND. The project started in November 2014 and is planned to be finished by the end of 2016. This paper provides an overview of the project, its research questions and methodology, the results achieved so far, and future steps.

  5. Type, Distribution and Accessibility of Web Citation of Chinese Library and Information Science Journals

    Institute of Scientific and Technical Information of China (English)

    丁敬达; 杨思洛

    2012-01-01

    Through an empirical analysis of web citations in Chinese library and information science journals, this paper finds that the proportion of citations to .html-format papers is decreasing year by year, while citations to .pdf-format and dynamic web resources are gradually rising; academic information on wikis, blogs, forums and other new web media is increasingly accepted by Chinese library and information science scholars; the accessibility of dynamic web citations is slightly higher than that of static web citations, with traceability rates for both between 50% and 51%; and web citations distributed in the .edu domain have the worst accessibility. The related reasons are analyzed.

  6. Survey of Techniques for Deep Web Source Selection and Surfacing the Hidden Web Content

    Directory of Open Access Journals (Sweden)

    Khushboo Khurana

    2016-05-01

    Full Text Available Large and continuously growing dynamic web content has created new opportunities for large-scale data analysis in recent years. There is a huge amount of information that traditional web crawlers cannot access, since they use a link analysis technique by which only the surface web can be reached. Traditional search engine crawlers require web pages to be linked to other pages via hyperlinks, causing a large amount of web data to be hidden from the crawlers. Enormous data is available in the deep web that can be useful to gain new insight for various domains, creating a need to access the information from the deep web by developing efficient techniques. As the amount of Web content grows rapidly, the types of data sources are proliferating, and they often provide heterogeneous data. So we need to select Deep Web data sources that can be used by integration systems. The paper discusses various techniques that can be used to surface the deep web information and techniques for Deep Web source selection.

  7. MedlinePlus Health Topic Web Service

    Data.gov (United States)

    U.S. Department of Health & Human Services — A search-based Web service that provides access to disease, condition and wellness information via MedlinePlus health topic data in XML format. The service accepts...

  8. Talking physics in the social web

    CERN Multimedia

    Griffiths, Martin

    2007-01-01

    "From "blogs" to "wikis", the Web is now more than a mere repository of information. Martin Griffiths investigates how this new interactivity is affecting the way physicists communicate and access information." (5 pages)

  9. Efficient Web Log Mining using Doubly Linked Tree

    CERN Document Server

    Jain, Ratnesh Kumar; Jain, Dr Suresh

    2009-01-01

    The World Wide Web is a huge data repository and is growing at an explosive rate of about 1 million pages a day. As the information available on the World Wide Web grows, the usage of web sites is also growing. A web log records each access of a web page, and the number of entries in web logs is increasing rapidly. These web logs, when mined properly, can provide useful information for decision-making. Web site designers, analysts and management executives are interested in extracting this hidden information from web logs for decision making. The web access pattern, a frequently used sequence of accesses, is one of the important pieces of information that can be mined from web logs. This information can be used to gather business intelligence to improve sales and advertisement, to personalize for a user, to analyze system performance and to improve the web site organization. There exist many techniques to mine access patterns from web logs. This paper describes the powerful algorithm that mine...

  10. Most recent Web Lectures

    CERN Multimedia

    Steven Goldfarb

    Web Archives of ATLAS Plenary Sessions, Workshops, Meetings, and Tutorials recorded over the past two years are available via the University of Michigan portal here. Most recent additions include the ROOT Workshop held at CERN on March 26-27, the Physics Analysis Tools Workshop held in Bergen, Norway on April 23-27, and the CTEQ Workshop: "Physics at the LHC: Early Challenges" held at Michigan State University on May 14-15. Viewing requires a standard web browser with RealPlayer plug-in (included in most browsers automatically) and works on any major platform. Lectures can be viewed directly over the web or downloaded locally. In addition, you will find access to a variety of general tutorials and events via the portal. Suggestions for events or tutorials to record in 2007, as well as feedback on existing archives is always welcome. Please contact us at wlap@umich.edu. Thank you and enjoy the lectures! The Michigan Web Lecture Team Tushar Bhatnagar, Steven Goldfarb, Jeremy Herr, Mitch McLachlan, Homer A....

  11. Intelligent Overload Control for Composite Web Services

    NARCIS (Netherlands)

    Meulenhoff, P.J.; Ostendorf, D.R.; Zivkovic, M.; Meeuwissen, H.B.; Gijsen, B.M.M.

    2009-01-01

    In this paper, we analyze overload control for composite web services in service oriented architectures by an orchestrating broker, and propose two practical access control rules which effectively mitigate the effects of severe overloads at some web services in the composite service. These two rules

  13. Bridging the Web and Digital Publishing

    NARCIS (Netherlands)

    I. Herman (Ivan); M. Gylling

    2015-01-01

    Although using advanced Web technologies at their core, e-books represent a parallel universe to everyday Web documents. Their production workflows, user interfaces, their security, access, or privacy models, etc, are all distinct. There is a lack of a vision on how to unify Digital

  15. Solution Kinetics Database on the Web

    Science.gov (United States)

    SRD 40 NDRL/NIST Solution Kinetics Database on the Web (Web, free access)   Data for free radical processes involving primary radicals from water, inorganic radicals and carbon-centered radicals in solution, and singlet oxygen and organic peroxyl radicals in various solvents.

  17. Controlling Access to RDF Graphs

    Science.gov (United States)

    Flouris, Giorgos; Fundulaki, Irini; Michou, Maria; Antoniou, Grigoris

    One of the current barriers towards realizing the huge potential of Future Internet is the protection of sensitive information, i.e., the ability to selectively expose (or hide) information to (from) users depending on their access privileges. Given that RDF has established itself as the de facto standard for data representation over the Web, our work focuses on controlling access to RDF data. We present a high-level access control specification language that allows fine-grained specification of access control permissions (at triple level) and formally define its semantics. We adopt an annotation-based enforcement model, where a user can explicitly associate data items with annotations specifying whether the item is accessible or not. In addition, we discuss the implementation of our framework, propose a set of dimensions that should be considered when defining a benchmark to evaluate the different access control enforcement models and present the results of our experiments conducted on different Semantic Web platforms.
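
A minimal sketch of the annotation-based enforcement model: each triple carries an annotation saying which roles may see it, and enforcement filters triples by the requesting role. The vocabulary, roles and flat list representation are hypothetical illustrations, not the paper's formal semantics.

```python
# Each RDF triple is explicitly annotated with the roles allowed to see it,
# giving triple-level (fine-grained) access control. Data is hypothetical.
annotated_triples = [
    (("ex:alice", "ex:salary", "55000"),     {"hr"}),            # HR only
    (("ex:alice", "ex:worksFor", "ex:acme"), {"hr", "public"}),  # everyone
]

def accessible_triples(annotated, role):
    """Enforce the annotations: return only the triples the role may see."""
    return [triple for triple, roles in annotated if role in roles]

# A public user sees only the employment triple; HR sees both.
print(accessible_triples(annotated_triples, "public"))
print(len(accessible_triples(annotated_triples, "hr")))
```

A production system would store the annotations alongside the graph (e.g. in named graphs or reified statements) and apply the filter inside the query engine rather than over a Python list.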

  18. Operational Use of OGC Web Services at the Met Office

    Science.gov (United States)

    Wright, Bruce

    2010-05-01

    The Met Office has adopted the Service-Orientated Architecture paradigm to deliver services to a range of customers through Rich Internet Applications (RIAs). The approach uses standard Open Geospatial Consortium (OGC) web services to provide information to web-based applications through a range of generic data services. "Invent", the Met Office beta site, is used to showcase Met Office future plans for presenting web-based weather forecasts, products and information to the public. This currently hosts a freely accessible Weather Map Viewer, written in JavaScript, which accesses a Web Map Service (WMS) to deliver innovative web-based visualizations of weather and its potential impacts to the public. The intention is to engage the public in the development of new web-based services that more accurately meet their needs. As the service is intended for public use within the UK, it has been designed to support a user base of 5 million, the analysed level of UK web traffic reaching the Met Office's public weather information site. The required scalability has been realised through the use of multi-tier tile caching: WMS requests are made for 256x256 tiles for fixed areas and zoom levels; a Tile Cache, developed in house, efficiently serves tiles on demand, managing WMS requests for new tiles; Edge Servers, externally hosted by Akamai, provide a highly scalable (UK-centric) service for pre-cached tiles, passing new requests to the Tile Cache; and the Invent Weather Map Viewer uses the Google Maps API to request tiles from Edge Servers. (We would expect to make use of the Web Map Tiling Service when it becomes an OGC standard.) The Met Office delivers specialist commercial products to market sectors such as transport, utilities and defence, which exploit a Web Feature Service (WFS) for data relating forecasts and observations to specific geographic features, and a Web Coverage Service (WCS) for sub-selections of gridded data. These are locally rendered as maps or
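
The multi-tier tile caching described above can be sketched with the standard Web-Mercator tile indexing used by Google Maps style viewers, plus a toy in-memory cache; the layer name and WMS URL pattern are hypothetical.

```python
import math

def lonlat_to_tile(lon, lat, zoom):
    """Standard Web-Mercator ("slippy map") tile indices for a 256x256 tile grid."""
    n = 2 ** zoom
    x = int((lon + 180.0) / 360.0 * n)
    y = int((1.0 - math.asinh(math.tan(math.radians(lat))) / math.pi) / 2.0 * n)
    return x, y

class TileCache:
    """Toy tile cache: serve from memory, fall back to a WMS fetch on a miss.
    The URL pattern is a hypothetical placeholder, not the Met Office API."""
    def __init__(self):
        self.store = {}
        self.misses = 0

    def get(self, layer, zoom, x, y):
        key = (layer, zoom, x, y)
        if key not in self.store:
            self.misses += 1
            # A real cache would fetch the rendered 256x256 tile from the WMS here.
            self.store[key] = f"WMS?LAYERS={layer}&TILE={zoom}/{x}/{y}"
        return self.store[key]

# Greenwich (lon 0, lat 51.5) at zoom 1 falls in the north-east tile of the 2x2 grid.
print(lonlat_to_tile(0.0, 51.5, 1))

cache = TileCache()
cache.get("temperature", 1, 1, 0)
cache.get("temperature", 1, 1, 0)   # second request served from the cache
print(cache.misses)
```

Because tiles are requested only for fixed areas and zoom levels, the set of possible keys is bounded, which is what makes the edge-server pre-caching tier effective.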

  19. Advanced Call Center Supporting WAP Access

    Institute of Scientific and Technical Information of China (English)

    YUANXiao-hua; CHENJun-liang

    2001-01-01

    Traditional call centers can be accessed via speech only, and a call center based on the web provides both data and speech access, but it needs a powerful terminal computer. By analyzing traditional call centers and call centers based on the web, this paper presents the framework of an advanced call center supporting WAP access. A typical service is also described in detail.

  20. Lightweight methodology to improve web accessibility

    CSIR Research Space (South Africa)

    Greeff, M

    2009-10-01

    Full Text Available for the selected text using the Flesch-Kincaid reading and grade levels [7, 19]. The Flesch-Kincaid Reading Ease score indicates how easy a text is to read; a high score implies an easy text, e.g. comics typically score around 90, while legal text can get a...

  1. A development process meta-model for Web based expert systems: The Web engineering point of view

    DEFF Research Database (Denmark)

    Dokas, I.M.; Alapetite, Alexandre

    2006-01-01

    Similar to many legacy computer systems, expert systems can be accessed via the Web, forming a set of Web applications known as Web based expert systems. The tough Web competition, the way people and organizations rely on Web applications and the increasing user requirements for better services have raised their complexity. Unfortunately, there is so far no clear answer to the question: How may the methods and experience of Web engineering and expert systems be combined and applied in order to develop effective and successful Web based expert systems? In an attempt to answer this question, a development process meta-model for Web based expert systems will be presented. Based on this meta-model, a publicly available Web based expert system called Landfill Operation Management Advisor (LOMA) was developed. In addition, the results of an accessibility evaluation on LOMA – the first ever reported...

  2. Recommender Systems for the Social Web

    CERN Document Server

    Pazos Arias, José J; Díaz Redondo, Rebeca P

    2012-01-01

    The recommendation of products, content and services cannot be considered newly born, although its widespread application is still in full swing. Alongside its growing success in numerous sectors, the progress of the Social Web has revolutionized the architecture of participation and relationship in the Web, making it necessary to restate recommendation and reconcile it with Collaborative Tagging, as the popularization of authoring in the Web, and Social Networking, as the translation of personal relationships to the Web. Precisely, the convergence of recommendation with the above Social Web pillars is what motivates this book, which has collected contributions from well-known experts in the academy and the industry to provide a broader view of the problems that Social Recommenders might face. If recommender systems have proven their key role in facilitating user access to resources on the Web, when sharing resources has become social, it is natural for recommendation strategies in the Social Web...

  3. Sexual information seeking on web search engines.

    Science.gov (United States)

    Spink, Amanda; Koricich, Andrew; Jansen, B J; Cole, Charles

    2004-02-01

    Sexual information seeking is an important element within human information behavior. Seeking sexually related information on the Internet takes many forms and channels, including chat room discussions, accessing Websites or searching Web search engines for sexual materials. The study of sexual Web queries provides insight into sexually-related information-seeking behavior, of value to Web users and providers alike. We qualitatively analyzed queries from logs of 1,025,910 Alta Vista and AlltheWeb.com Web user queries from 2001. We compared the differences in sexually-related Web searching between Alta Vista and AlltheWeb.com users. Differences were found in session duration, query outcomes, and search term choices. Implications of the findings for sexual information seeking are discussed.

  4. A Web Service Framework for Economic Applications

    Directory of Open Access Journals (Sweden)

    Dan BENTA

    2010-01-01

    Full Text Available The Internet offers multiple solutions to link companies with their partners, customers or suppliers using IT solutions, including a special focus on Web services. Web services are able to solve problems related to the exchange of data between business partners, markets that can use each other's services, and problems of incompatibility between IT applications. As web services are described, discovered and accessed programmatically based on XML vocabularies and Web protocols, Web services represent solutions for Web-based technologies for small and medium-sized enterprises (SMEs). This paper presents a web service framework for economic applications. A prototype of this IT solution using web services was also implemented in a few companies from the IT, commerce and consulting fields, measuring the impact of the solution on business environment development.

  5. REGIONAL WEBGIS USER ACCESS PATTERNS BASED ON A WEIGHTED BIPARTITE NETWORK

    Directory of Open Access Journals (Sweden)

    R. Li

    2015-07-01

    Full Text Available With the rapid development of geographic information services, Web Geographic Information Systems (WebGIS) have become an indispensable part of everyday life; correspondingly, map search engines have become extremely popular with users and WebGIS sites receive a massive volume of requests for access. These WebGIS users and the content they access have regional characteristics; to understand regional patterns, we mined regional WebGIS user access patterns based on a weighted bipartite network. We first established a weighted bipartite network model for regional user access to a WebGIS. Then, based on massive user WebGIS access logs, we clustered the geographic information accessed and thereby identified hot access areas. Finally, we quantitatively analyzed the access interests of regional users and the visitation volume characteristics of regional user access to these hot access areas in terms of user access permeability, user usage rate, and user access viscosity. Our research results show that regional user access to WebGIS is spatially aggregated, and that the hot access areas regional users accessed are associated with specific periods of time. Most regional user contact with hot access areas is variable and intermittent, but for some users access to certain areas is continuous, as it is associated with ongoing or recurrent objectives. The weighted bipartite network model for regional user WebGIS access provides a valid analysis method for studying user behaviour in WebGIS, and the proposed access pattern shows that regional users' access interest is spatiotemporally aggregated and follows a heavy-tailed distribution. Understanding user access patterns benefits WebGIS providers by supporting better operational decision-making, and helps developers optimize WebGIS system architecture and deployment, so as to improve the user experience and expand the popularity of WebGIS.
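
The weighted bipartite model can be sketched by treating users and map areas as the two node sets, with request counts as edge weights; the log entries and the permeability metric below are illustrative assumptions, not the paper's exact definitions.

```python
from collections import defaultdict

# Hypothetical WebGIS access log: (user, accessed map area) pairs.
log = [
    ("user1", "areaA"), ("user1", "areaA"), ("user1", "areaB"),
    ("user2", "areaA"), ("user3", "areaC"),
]

# Weighted bipartite network: one side users, the other side map areas,
# edge weight = number of requests from that user for that area.
edges = defaultdict(int)
for user, area in log:
    edges[(user, area)] += 1

def permeability(area):
    """Illustrative 'access permeability': share of all users that touched
    this hot area at least once (hypothetical simplification)."""
    users_of_area = {u for (u, a) in edges if a == area}
    all_users = {u for (u, _) in edges}
    return len(users_of_area) / len(all_users)

print(edges[("user1", "areaA")])   # repeated requests accumulate as weight
print(permeability("areaA"))       # 2 of the 3 users accessed areaA
```

Hot access areas then fall out as the area nodes with the highest aggregate edge weight, and the heavy-tailed distribution the abstract mentions shows up in how that weight is spread across areas.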

  6. Regional Webgis User Access Patterns Based on a Weighted Bipartite Network

    Science.gov (United States)

    Li, R.; Shen, Y.; Huang, W.; Wu, H.

    2015-07-01

    With the rapid development of geographic information services, Web Geographic Information Systems (WebGIS) have become an indispensable part of everyday life; correspondingly, map search engines have become extremely popular with users and WebGIS sites receive a massive volume of requests for access. These WebGIS users and the content they access have regional characteristics; to understand regional patterns, we mined regional WebGIS user access patterns based on a weighted bipartite network. We first established a weighted bipartite network model for regional user access to a WebGIS. Then, based on massive user WebGIS access logs, we clustered the geographic information accessed and thereby identified hot access areas. Finally, we quantitatively analyzed the access interests of regional users and the visitation volume characteristics of regional user access to these hot access areas in terms of user access permeability, user usage rate, and user access viscosity. Our research results show that regional user access to WebGIS is spatially aggregated, and that the hot access areas regional users accessed are associated with specific periods of time. Most regional user contact with hot access areas is variable and intermittent, but for some users access to certain areas is continuous, as it is associated with ongoing or recurrent objectives. The weighted bipartite network model for regional user WebGIS access provides a valid analysis method for studying user behaviour in WebGIS, and the proposed access pattern shows that regional users' access interest is spatiotemporally aggregated and follows a heavy-tailed distribution. Understanding user access patterns benefits WebGIS providers by supporting better operational decision-making, and helps developers optimize WebGIS system architecture and deployment, so as to improve the user experience and expand the popularity of WebGIS.

  7. Cooperative Mobile Web Browsing

    DEFF Research Database (Denmark)

    Perrucci, GP; Fitzek, FHP; Zhang, Qi

    2009-01-01

    This paper advocates a novel approach for mobile web browsing based on cooperation among wireless devices within close proximity operating in a cellular environment. In the current state of the art, mobile phones can access the web using different cellular technologies. However, the supported data rates are not sufficient to cope with the ever-increasing traffic requirements resulting from advanced and rich content services. Extending the state of the art, higher data rates can only be achieved by increasing the complexity, cost, and energy consumption of mobile phones. In contrast to the linear extension of current technology, we propose a novel architecture where mobile phones are grouped together in clusters, using short-range communication such as Bluetooth, sharing and accumulating their cellular capacity. The accumulated data rate resulting from collaborative interactions over short...

  8. Swiss EMBnet node web server.

    Science.gov (United States)

    Falquet, Laurent; Bordoli, Lorenza; Ioannidis, Vassilios; Pagni, Marco; Jongeneel, C Victor

    2003-07-01

    EMBnet is a consortium of collaborating bioinformatics groups located mainly within Europe (http://www.embnet.org). Each member country is represented by a 'node', a group responsible for the maintenance of local services for their users (e.g. education, training, software, database distribution, technical support, helpdesk). Among these services a web portal with links and access to locally developed and maintained software is essential and different for each node. Our web portal targets biomedical scientists in Switzerland and elsewhere, offering them access to a collection of important sequence analysis tools mirrored from other sites or developed locally. We describe here the Swiss EMBnet node web site (http://www.ch.embnet.org), which presents a number of original services not available anywhere else.

  9. Accessible Website Content Guidelines for Users with Intellectual Disabilities

    NARCIS (Netherlands)

    Karreman, Joyce; Geest, van der Thea; Buursink, Esmee

    2007-01-01

    Background: The W3C Web Accessibility Initiative has issued guidelines for making websites better and easier to access for people with various disabilities (W3C Web Accessibility Initiative guidelines 1999). - Method: The usability of two versions of a website (a non-adapted site and a site that wa

  10. Evolution of the cosmic web

    NARCIS (Netherlands)

    Cautun, Marius; van de Weygaert, Rien; Jones, Bernard J. T.; Frenk, Carlos S.

    2014-01-01

    The cosmic web is the largest scale manifestation of the anisotropic gravitational collapse of matter. It represents the transitional stage between linear and non-linear structures and contains easily accessible information about the early phases of structure formation processes. Here we investigate

  11. Mobile response in web panels

    NARCIS (Netherlands)

    de Bruijne, M.A.; Wijnant, A.

    2014-01-01

    This article investigates unintended mobile access to surveys in online, probability-based panels. We find that spontaneous tablet usage is drastically increasing in web surveys, while smartphone usage remains low. Further, we analyze the bias of respondent profiles using smartphones and tablets com

  12. World Wide Web Homepage Design.

    Science.gov (United States)

    Tillman, Michael L.

    This paper examines hypermedia design and draws conclusions about how educational research and theory applies to various aspects of World Wide Web (WWW) homepage design. "Hypermedia" is defined as any collection of information which may be textual, graphical, visual, or auditory in nature and which may be accessed via a nonlinear route.…

  13. Study on Activity Authorization Based Dynamic Access Control Model for Composite Web Services Business Process

    Institute of Scientific and Technical Information of China (English)

    上超望; 刘清堂; 赵呈领; 童名文

    2014-01-01

    The business process access control mechanism is a difficult problem in securing composite web services. Addressing the shortcomings of current research, an Activity Authorization Based Dynamic Access Control Model for BPEL4WS (AACBP) is proposed. By decoupling the organization model from the business process model, AACBP uses activity authorization as the basic unit for enforcing BPEL4WS access control. Through activity instances, the model implements fine-grained access control of activities and keeps authorization synchronized with business process execution. Finally, the paper describes the implementation architecture of the AACBP model for secure web services composition.
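
A toy sketch of activity-level authorization checked at invocation time, in the spirit of AACBP; the activity names, roles and context check are hypothetical, not the model's actual formalism.

```python
# Authorization is attached to individual process activities rather than to a
# static organization model, and is checked when the activity instance runs.
# All names below are hypothetical illustrations.
activity_permissions = {
    "ReceiveOrder":   {"clerk", "manager"},
    "ApprovePayment": {"manager"},
}

def invoke(activity, role, context_ok=True):
    """Grant the activity only if the role is authorized for this activity
    AND the runtime context (e.g. the preceding step completed) allows it,
    keeping authorization in step with process execution."""
    allowed = role in activity_permissions.get(activity, set()) and context_ok
    return "executed" if allowed else "denied"

print(invoke("ReceiveOrder", "clerk"))                    # role authorized
print(invoke("ApprovePayment", "clerk"))                  # role not authorized
print(invoke("ApprovePayment", "manager", context_ok=False))  # context blocks it
```

The `context_ok` flag stands in for the dynamic, per-instance context sensing that lets authorization follow the running process rather than a fixed role table.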

  14. Surfing for Data: A Gathering Trend in Data Storage Is the Use of Web-Based Applications that Make It Easy for Authorized Users to Access Hosted Server Content with Just a Computing Device and Browser

    Science.gov (United States)

    Technology & Learning, 2005

    2005-01-01

    In recent years, the widespread availability of networks and the flexibility of Web browsers have shifted the industry from a client-server model to a Web-based one. In the client-server model of computing, clients run applications locally, with the servers managing storage, printing functions, and network traffic. Because every client is…

  15. New web technologies for astronomy

    Science.gov (United States)

    Sprimont, P.-G.; Ricci, D.; Nicastro, L.

    2014-12-01

    Thanks to new HTML5 capabilities and huge improvements in the JavaScript language, it is now possible to design very complex and interactive web user interfaces. On top of that, the once monolithic and file-server-oriented web servers are evolving into easily programmable server applications capable of coping with the complex interactions made possible by the new generation of browsers. We believe that the whole community of amateur and professional astronomers can benefit from the potential of these new technologies. New web interfaces can be designed to provide the user with a wide range of much more intuitive and interactive tools. Accessing astronomical data archives; scheduling, controlling and monitoring observatories, in particular robotic telescopes; and supervising data reduction pipelines are all capabilities that can now be implemented in a JavaScript web application. In this paper we describe the Sadira package we are implementing with exactly this aim.

  16. From Web 2.0 to Teacher 2.0

    Science.gov (United States)

    Thomas, David A.; Li, Qing

    2008-01-01

    The World Wide Web is evolving in response to users who demand faster and more efficient access to information, portability, and reusability of digital objects between Web-based and computer-based applications and powerful communication, publication, collaboration, and teaching and learning tools. This article reviews current uses of Web-based…

  17. The 'Don'ts' of Web Page Design.

    Science.gov (United States)

    Balas, Janet L.

    1999-01-01

    Discusses online resources that focus on what not to do in Web page design. "Don'ts" include: making any of the top 10 mistakes identified by Nielsen, qualifying for a "muddie" award for bad Web sites, forgetting to listen to users, and forgetting accessibility. A sidebar lists the Web site addresses for the nine resources…

  18. A Survey of Web Information Technology and Application

    Institute of Scientific and Technical Information of China (English)

    2006-01-01

    The surprising growth of the Internet, coupled with the rapid development of Web techniques and the ever-increasing emergence of web information systems and applications, is bringing great opportunities and big challenges to us. Since the Web provides cross-platform universal access to resources for a massive user population, there is even greater demand to manage data and services effectively.

  19. 32 CFR 806b.51 - Privacy and the Web.

    Science.gov (United States)

    2010-07-01

    ... 32 National Defense 6 2010-07-01 2010-07-01 false Privacy and the Web. 806b.51 Section 806b.51... PROGRAM Disclosing Records to Third Parties § 806b.51 Privacy and the Web. Do not post personal information on publicly accessible DoD web sites unless clearly authorized by law and implementing...

  20. BIOACCESSIBILITY TESTS ACCURATELY ESTIMATE ...

    Science.gov (United States)

    Hazards of soil-borne Pb to wild birds may be more accurately quantified if the bioavailability of that Pb is known. To better understand the bioavailability of Pb to birds, we measured blood Pb concentrations in Japanese quail (Coturnix japonica) fed diets containing Pb-contaminated soils. Relative bioavailabilities were expressed by comparison with blood Pb concentrations in quail fed a Pb acetate reference diet. Diets containing soil from five Pb-contaminated Superfund sites had relative bioavailabilities from 33%-63%, with a mean of about 50%. Treatment of two of the soils with P significantly reduced the bioavailability of Pb. The bioaccessibility of the Pb in the test soils was then measured in six in vitro tests and regressed on bioavailability. They were: the "Relative Bioavailability Leaching Procedure" (RBALP) at pH 1.5, the same test conducted at pH 2.5, the "Ohio State University In vitro Gastrointestinal" method (OSU IVG), the "Urban Soil Bioaccessible Lead Test", the modified "Physiologically Based Extraction Test" and the "Waterfowl Physiologically Based Extraction Test." All regressions had positive slopes. Based on criteria of slope and coefficient of determination, the RBALP pH 2.5 and OSU IVG tests performed very well. Speciation by X-ray absorption spectroscopy demonstrated that, on average, most of the Pb in the sampled soils was sorbed to minerals (30%), bound to organic matter (24%), or present as Pb sulfate (18%).
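
    The regressions described above (in vitro bioaccessibility regressed on relative bioavailability) can be sketched with an ordinary least-squares fit. The data pairs below are illustrative placeholders, not the study's measurements:

```python
def linreg(xs, ys):
    """Ordinary least-squares fit of y = a + b*x; returns (a, b)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return my - b * mx, b

# Hypothetical (bioaccessibility %, relative bioavailability %) pairs,
# standing in for one of the six in vitro tests regressed in the study.
bioaccessibility = [35, 42, 50, 58, 63]
bioavailability = [33, 40, 49, 55, 63]
a, b = linreg(bioaccessibility, bioavailability)
print(round(b, 2))  # → 1.04 (a positive slope, as all six regressions showed)
```

    A well-performing test, in the study's terms, is one where such a fit has a slope near the ideal and a high coefficient of determination.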

  1. Keyword search in the Deep Web

    OpenAIRE

    Calì, Andrea; Martinenghi, D.; Torlone, R.

    2015-01-01

    The Deep Web is constituted by data accessible through Web pages, but not readily indexable by search engines, as they are returned in dynamic pages. In this paper we propose a framework for accessing Deep Web sources, represented as relational tables with so-called access limitations, with keyword-based queries. We formalize the notion of optimal answer and investigate methods for query processing. To our knowledge, this problem has never been studied in a systematic way.

  2. DAMEWARE - Data Mining & Exploration Web Application Resource

    CERN Document Server

    Brescia, Massimo; Esposito, Francesco; Fiore, Michelangelo; Garofalo, Mauro; Guglielmo, Magda; Longo, Giuseppe; Manna, Francesco; Nocella, Alfonso; Vellucci, Civita

    2016-01-01

    Astronomy is undergoing a methodological revolution triggered by an unprecedented wealth of complex and accurate data. DAMEWARE (DAta Mining & Exploration Web Application REsource) is a general-purpose, Web-based, Virtual Observatory compliant, distributed data mining framework specialized in the exploration of massive data sets with machine learning methods. It allows the scientific community to perform data mining and exploratory experiments on massive data sets using a simple web browser. DAMEWARE offers several tools which can be seen as working environments in which to choose data analysis functionalities such as clustering, classification, regression and feature extraction, together with models and algorithms.

  3. Web Based Personal Nutrition Management Tool

    Science.gov (United States)

    Bozkurt, Selen; Zayim, Neşe; Gülkesen, Kemal Hakan; Samur, Mehmet Kemal

    The Internet is increasingly being used as a resource for accessing health-related information because of its several advantages, so Internet-based tailoring has recently become quite popular in health education and personal health management. Today, there are many web-based health programs designed for individuals. Among these, nutrition and weight management studies are popular because obesity has become a heavy burden for populations worldwide. In this study, we designed a web-based personal nutrition education and management tool, The Nutrition Web Portal, in order to enhance patients' nutrition knowledge and promote behavioral change against obesity. The present paper reports the analysis, design and development processes of The Nutrition Web Portal.

  4. Harnessing the Deep Web: Present and Future

    OpenAIRE

    Madhavan, J.; Afanasiev, L.; Antova, L.; Halevy, A.

    2009-01-01

    Over the past few years, we have built a system that has exposed large volumes of Deep-Web content to Google.com users. The content that our system exposes contributes to more than 1000 search queries per-second and spans over 50 languages and hundreds of domains. The Deep Web has long been acknowledged to be a major source of structured data on the web, and hence accessing Deep-Web content has long been a problem of interest in the data management community. In this paper, we report on where...

  6. WILI - Web Interface for people with Lowvision Issues

    CERN Document Server

    Kuppusamy, K S; Aghila, G

    2012-01-01

    Though the World Wide Web is the single largest source of information, it is ill-equipped to serve people with vision-related problems. With the prolific increase of interest in making the web accessible to all sections of society, solving this accessibility problem becomes mandatory. This paper presents a technique for making web pages accessible to people with low-vision issues. A model for making web pages accessible, WILI (Web Interface for people with Low-vision Issues), is proposed. The approach followed in this work is to automatically replace the existing display style of a web page with a new skin following the guidelines given in the Clear Print booklet provided by the Royal National Institute of Blind People. A "Single Click Solution" is one of the primary advantages provided by WILI. A prototype of the WILI model is implemented and various experiments are conducted; the results indicate an 82% effective conversion rate.
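
    The skin-replacement idea can be sketched as a simple stylesheet injection. The CSS rules below are illustrative high-contrast, large-print stand-ins, not the actual Clear Print guidelines or WILI's implementation:

```python
# Illustrative low-vision "skin": dark background, high-contrast text,
# enlarged type. Real guidelines (e.g. Clear Print) are more detailed.
LOW_VISION_CSS = """
body { background: #000; color: #ff0; font-size: 150%; line-height: 1.6; }
a { color: #0ff; text-decoration: underline; }
"""

def apply_low_vision_skin(html):
    """Inject the override stylesheet just before </head> (or prepend it)."""
    style = "<style>%s</style>" % LOW_VISION_CSS
    if "</head>" in html:
        return html.replace("</head>", style + "</head>", 1)
    return style + html

page = "<html><head><title>t</title></head><body>hi</body></html>"
skinned = apply_low_vision_skin(page)
```

    Because the injected rules come last in the head, they override the page's existing display style, which is the essence of replacing a page's "skin" in one click.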

  7. An Intuitive Approach for Web Scale Mining using W-Miner for Web Personalization

    Directory of Open Access Journals (Sweden)

    R.Lokeshkumar

    2014-08-01

    Full Text Available Web usage mining performs mining on web usage data, or web logs. It is now possible to perform data mining on web log records collected from web page history. A web log is a listing of page reference or click-stream data, and the behavior of web page readers is imprinted in the web server log files. By looking at the sequence of pages a user accesses, a user profile can be developed, thus aiding personalization. With personalization, web access or the contents of a web page are modified to better fit the desires of the user; identifying the browsing behavior of users can also improve system performance, enhance the quality and delivery of Internet information services to the end user, and identify populations of potential customers. With clustering, these desires are determined based on similarities. In this study, a Fuzzy clustering algorithm is designed and implemented: meaningful behavior patterns are extracted by applying an efficient Fuzzy clustering algorithm to log data. Performance of the proposed system is shown to be better than that of the existing best algorithm. The proposed Fuzzy clustering w-miner algorithm can provide popular information to web page visitors.
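
    A minimal fuzzy c-means pass over per-session page-visit vectors, in the spirit of the clustering step described above. This is the generic textbook formulation, not the paper's w-miner algorithm; the session data and parameters are illustrative:

```python
def fuzzy_cmeans(points, c=2, m=2.0, iters=50):
    """Minimal fuzzy c-means: returns the membership matrix and centroids.

    points: feature vectors, e.g. per-session page-visit counts.
    Centroids are seeded with the first c points for determinism."""
    dim = len(points[0])
    centroids = [list(points[j]) for j in range(c)]
    u = [[0.0] * c for _ in points]
    for _ in range(iters):
        # Update memberships from inverse relative distances.
        for i, p in enumerate(points):
            dists = [max(1e-12, sum((a - b) ** 2 for a, b in zip(p, ctr)) ** 0.5)
                     for ctr in centroids]
            for j in range(c):
                u[i][j] = 1.0 / sum((dists[j] / dk) ** (2 / (m - 1)) for dk in dists)
        # Update centroids as membership-weighted means.
        for j in range(c):
            w = [u[i][j] ** m for i in range(len(points))]
            tot = sum(w)
            centroids[j] = [sum(w[i] * points[i][d] for i in range(len(points))) / tot
                            for d in range(dim)]
    return u, centroids

# Four sessions over two pages: the first two favor page A, the last two page B.
sessions = [[5, 0], [4, 1], [0, 6], [1, 5]]
u, centers = fuzzy_cmeans(sessions)
```

    Unlike hard clustering, each session receives a graded membership in every cluster, which suits the overlapping interests seen in real browsing behavior.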

  8. A New Hidden Web Crawling Approach

    Directory of Open Access Journals (Sweden)

    L.Saoudi

    2015-10-01

    Full Text Available Traditional search engines deal with the Surface Web, the set of Web pages directly accessible through hyperlinks, and ignore a large part of the Web called the hidden Web: a great amount of valuable information in online databases that is "hidden" behind query forms. To access this information the crawler has to fill the forms with valid data. For this reason we propose a new approach which uses an SQL-injection (SQLI) technique to find the most promising keywords of a specific domain for automatic form submission. The effectiveness of the proposed framework has been evaluated through experiments using real web sites, and encouraging preliminary results were obtained.
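
    The automatic form-submission step can be sketched with Python's standard library. The form URL and field names below are hypothetical; a real hidden-Web crawler would first parse them out of the page's <form> element before substituting its candidate keywords:

```python
from urllib.parse import urlencode
from urllib.request import Request

def build_form_submission(form_action, fields):
    """Build a POST request that submits candidate keywords to a search form.

    form_action: the form's action URL (hypothetical here).
    fields: mapping of form field names to the keyword values to try."""
    data = urlencode(fields).encode("ascii")
    return Request(form_action, data=data,
                   headers={"Content-Type": "application/x-www-form-urlencoded"})

# Hypothetical domain-specific keyword being tried against a search form.
req = build_form_submission("http://example.org/search",
                            {"query": "cardiology", "domain": "medicine"})
```

    The crawler would then send the request (e.g. with `urllib.request.urlopen`) and inspect the returned result page to judge how "promising" the keyword is.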

  9. Gestor de contenidos web

    OpenAIRE

    García Populin, Iván

    2014-01-01

    Final-year degree project developed in .NET. It presents a web content manager for generating an advertising website.

  10. Web archiving in the National and University Library

    Directory of Open Access Journals (Sweden)

    Alenka Kavčič-Čolić

    2011-01-01

    Full Text Available The National and University Library (NUK) of Slovenia has been investigating web archiving methods and techniques since 2001. Under the new Legal Deposit Law adopted in 2006, NUK is the institution responsible for harvesting and archiving the Slovenian web. In 2008 NUK started archiving the Slovenian web using the web harvesting and access tools developed by the International Internet Preservation Consortium (IIPC). The paper presents the complexity of web harvesting and gives an overview of international practice and NUK's cooperation in the IIPC consortium. Special attention is given to the analysis of public sector web content harvested since 2008. The main goals of future development of the web archive are an increase in harvested Slovenian web sites, the development of a user interface for public access, and improved methods for harvesting technically problematic content.

  11. DIRAC: Secure web user interface

    Energy Technology Data Exchange (ETDEWEB)

    Casajus Ramo, A [University of Barcelona, Diagonal 647, ES-08028 Barcelona (Spain); Sapunov, M, E-mail: sapunov@in2p3.f [Centre de Physique des Particules de Marseille, 163 Av de Luminy Case 902 13288 Marseille (France)

    2010-04-01

    Traditionally the interaction between users and the Grid is done with command line tools. However, these tools are difficult for non-expert users, providing minimal help and generating outputs that are not always easy to understand, especially in case of errors. Graphical User Interfaces are typically limited to providing access to monitoring or accounting information and concentrate on particular aspects, failing to cover the full spectrum of grid control tasks. To make the Grid more user friendly, more complete graphical interfaces are needed. Within the DIRAC project we have attempted to construct a Web-based User Interface that provides means not only for monitoring the system behavior but also for steering the main user activities on the grid. Using DIRAC's web interface a user can easily track jobs and data. It provides access to job information and allows performing actions on jobs such as killing or deleting. Data managers can define and monitor file transfer activity as well as check requests set by jobs. Production managers can define and follow large data productions and react if necessary by stopping or starting them. The Web Portal is built following all the grid security standards and using modern Web 2.0 technologies which make it possible to achieve a user experience similar to that of desktop applications. Details of the DIRAC Web Portal architecture and User Interface are presented and discussed.

  12. MISA-web: a web server for microsatellite prediction.

    Science.gov (United States)

    Beier, Sebastian; Thiel, Thomas; Münch, Thomas; Scholz, Uwe; Mascher, Martin

    2017-08-15

    Microsatellites are a widely used marker system in plant genetics and forensics. The development of reliable microsatellite markers from resequencing data is challenging. We extended MISA, a computational tool assisting the development of microsatellite markers, and reimplemented it as a web-based application. We improved compound microsatellite detection and added the possibility to display and export MISA results in GFF3 format for downstream analysis. MISA-web can be accessed under http://misaweb.ipk-gatersleben.de/. The website provides tutorials and usage notes as well as download links to the source code. scholz@ipk-gatersleben.de.
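
    The core microsatellite-detection step can be sketched with a repeat-matching regular expression over the sequence. The motif-length thresholds below are illustrative, not MISA's actual defaults:

```python
import re

def find_ssrs(seq, min_repeats=(10, 6, 5, 5, 5, 5)):
    """Report perfect mono- to hexanucleotide repeats as (motif, start, end).

    min_repeats: minimum repeat count per motif length 1..6 (illustrative
    thresholds, not MISA's shipped configuration)."""
    hits = []
    for motif_len, min_n in zip(range(1, 7), min_repeats):
        # A motif of this length, repeated at least min_n times in total.
        pattern = re.compile(r"([ACGT]{%d})\1{%d,}" % (motif_len, min_n - 1))
        for m in pattern.finditer(seq):
            hits.append((m.group(1), m.start(), m.end()))
    return hits

print(find_ssrs("GGAT" + "CA" * 8 + "TTGC"))  # → [('CA', 4, 20)]
```

    A full tool like MISA additionally handles compound microsatellites (adjacent repeats separated by short spacers), which is exactly the detection the authors report improving.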

  13. Web Interactive Campus Map

    Directory of Open Access Journals (Sweden)

    Marylene S. Eder

    2015-03-01

    Full Text Available Abstract An interactive campus map is a web-based application that can be accessed through a web browser. With the Google Maps Application Programming Interface, the overlay function has been taken advantage of to create custom map functionalities. A collection of building points was gathered for routing and to create polygons which serve as representations of each building. The previous campus map provided only a static visual representation of the campus, using legends, building names and corresponding building numbers to provide information. Its limited capabilities led the researchers to create an interactive campus map. Storing data about buildings, rooms and staff, university events, and a campus guide are among the primary features that this study has to offer. The Interactive Web-based Campus Information System is intended to provide a campus information system that is open to constant updates, user-friendly for both trained and untrained users, and capable of responding to all needs of users and carrying out analyses. Based on data gathered through questionnaires, the researchers analyzed the results of the test survey and showed that the system is user friendly, delivers information to users, and has the important features that students expect.

  14. Establishment of a web interface for tensiomyography measurement display

    OpenAIRE

    Poljanšek, Nejc

    2014-01-01

    The purpose of this thesis was to develop a solution for displaying tensiomyographic measurements. The web application enables the user to access the measurement data stored in the cloud database. The open source Laravel framework, written in the PHP programming language, was used for developing the web interface. The architecture of the whole system and the web app user interface are introduced in more detail. The main app functionalities and the way of using the web interface are also described.

  15. DATA EXTRACTION AND LABEL ASSIGNMENT FOR WEB DATABASES

    Directory of Open Access Journals (Sweden)

    T. Rajesh

    2015-10-01

    Full Text Available Deep Web contents are accessed by queries submitted to Web databases, and the returned data records are wrapped in dynamically generated Web pages (called deep Web pages in this paper). Extracting structured data from deep Web pages is a challenging problem due to the underlying intricate structures of such pages. Until now, a large number of techniques have been proposed to address this problem, but all of them have limitations because they are Web-page-programming-language dependent.

  16. Linked Data Evolving the Web into a Global Data Space

    CERN Document Server

    Heath, Tom

    2011-01-01

    The World Wide Web has enabled the creation of a global information space comprising linked documents. As the Web becomes ever more enmeshed with our daily lives, there is a growing desire for direct access to raw data not currently available on the Web or bound up in hypertext documents. Linked Data provides a publishing paradigm in which not only documents, but also data, can be a first-class citizen of the Web, thereby enabling the extension of the Web with a global data space based on open standards - the Web of Data. In this Synthesis Lecture we provide readers with a detailed technical introduction…

  17. An Introduction to Search Engines and Web Navigation

    CERN Document Server

    Levene, Mark

    2010-01-01

    This book is a second edition, updated and expanded to explain the technologies that help us find information on the web.  Search engines and web navigation tools have become ubiquitous in our day to day use of the web as an information source, a tool for commercial transactions and a social computing tool. Moreover, through the mobile web we have access to the web's services when we are on the move.  This book demystifies the tools that we use when interacting with the web, and gives the reader a detailed overview of where we are and where we are going in terms of search engine

  18. State prescription drug price Web sites: how useful to consumers?

    Science.gov (United States)

    Tu, Ha T; Corey, Catherine G

    2008-02-01

    To aid consumers in comparing prescription drug costs, many states have launched Web sites to publish drug prices offered by local retail pharmacies. The current push to make retail pharmacy prices accessible to consumers is part of a much broader movement to increase price transparency throughout the health-care sector. Efforts to encourage price-based shopping for hospital and physician services have encountered widespread concerns, both on grounds that prices for complex services are difficult to measure and compare accurately and that quality varies substantially across providers. Experts agree, however, that prescription drugs are much easier to shop for than other, more complex health services. However, extensive gaps in available price information--the result of relying on Medicaid data--seriously hamper the effectiveness of state drug price-comparison Web sites, according to a new study by the Center for Studying Health System Change (HSC). An alternative approach--requiring pharmacies to submit price lists to the states--would improve the usefulness of price information, but pharmacies typically oppose such a mandate. Another limitation of most state Web sites is that price information is restricted to local pharmacies, whereas online pharmacies, both U.S. and foreign, often sell prescription drugs at substantially lower prices. To further enhance consumer shopping tools, states might consider expanding the types of information provided, including online pharmacy comparison tools, lists of deeply discounted generic drugs offered by discount retailers, and lists of local pharmacies offering price matches.

  19. Groundwater recharge: Accurately representing evapotranspiration

    CSIR Research Space (South Africa)

    Bugan, Richard DH

    2011-09-01

    Full Text Available Groundwater recharge is the basis for accurate estimation of groundwater resources, for determining the modes of water allocation and groundwater resource susceptibility to climate change. Accurate estimations of groundwater recharge with models...

  20. FPA Depot - Web Application

    Science.gov (United States)

    Avila, Edwin M. Martinez; Muniz, Ricardo; Szafran, Jamie; Dalton, Adam

    2011-01-01

    Lines of code (LOC) analysis is one of the methods used to measure programmer productivity and estimate schedules of programming projects. The Launch Control System (LCS) had previously used this method to estimate the amount of work and to plan development efforts. The disadvantage of using LOC as a measure of effort is that coding accounts for only 30% to 35% of the total effort of software projects [8]. Because of this disadvantage, Jamie Szafran of the System Software Branch of Control And Data Systems (NE-C3) at Kennedy Space Center developed a web application called Function Point Analysis (FPA) Depot, which uses function points instead of LOC for a better estimation of the hours needed to develop each piece of software. The objective of this web application is that the LCS software architecture team can use the data to more accurately estimate the effort required to implement customer requirements. This paper describes the evolution of the domain model used for function point analysis as project managers continually strive to generate more accurate estimates.

  1. WEB Services Implementation on The Report of Dengue Hemorrhagic Fever (DHF At Health Office Karanganyar

    Directory of Open Access Journals (Sweden)

    Ragil Saputra

    2013-06-01

    Full Text Available Abstract— Dengue Hemorrhagic Fever (DHF) is one of the infectious diseases that frequently leads to extraordinary outbreak situations. Reporting is managed by Health Community Centers, which subsequently report to the Health Office. A problem arising from this report management is that reporting is conducted manually, so the data are less valid and are not processed as quickly as possible. A quick and accurate data reporting system makes it possible to lessen the risk of Dengue Hemorrhagic Fever, so an integrated inter-system for Dengue Fever reporting is undeniably necessary. This system links one Health Community Center to another and to the system in the Health Office. The integration of inter-system reporting can be achieved using web service technology; therefore, this research focuses on the development of a Web Service based integrated reporting system for Dengue Fever. Data exchange is conducted in XML form through SOAP and WSDL technologies. The NuSOAP library provides the soapClient and soapServer classes; in other words, it functions as the listener that receives and responds to access demands toward the web service. The result is a web service based reporting system with dual functions, since the system can act as either server or client. Keywords— web service, integration, SOAP, DHF.
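
    The XML-based data exchange described above can be sketched by serializing a case report as a SOAP 1.1 envelope. The service namespace and field names below are hypothetical, not the system's actual WSDL types, and the paper's implementation uses PHP/NuSOAP; Python is used here only for illustration:

```python
import xml.etree.ElementTree as ET

SOAP_NS = "http://schemas.xmlsoap.org/soap/envelope/"
SVC_NS = "urn:dhf-report"  # hypothetical service namespace

def build_dhf_report(district, cases):
    """Serialize a DHF case report as a SOAP 1.1 request envelope."""
    env = ET.Element("{%s}Envelope" % SOAP_NS)
    body = ET.SubElement(env, "{%s}Body" % SOAP_NS)
    report = ET.SubElement(body, "{%s}submitReport" % SVC_NS)
    ET.SubElement(report, "{%s}district" % SVC_NS).text = district
    ET.SubElement(report, "{%s}cases" % SVC_NS).text = str(cases)
    return ET.tostring(env, encoding="unicode")

xml_payload = build_dhf_report("Karanganyar", 12)
```

    A Health Community Center client would POST such an envelope to the Health Office's SOAP endpoint, and the server-side listener would parse the Body and respond with its own envelope.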

  2. Ontology Based Qos Driven Web Service Discovery

    Directory of Open Access Journals (Sweden)

    R Suganyakala

    2011-07-01

    Full Text Available In today's scenario web services have become a grand vision for implementing business process functionalities. With the increase in the number of similar web services, one of the essential challenges is to discover the relevant web service with regard to a user's specification. The relevancy of web service discovery can be improved by augmenting semantics through expressive formats like OWL, and QoS based service selection plays a significant role in meeting non-functional user requirements. Hence QoS and semantics are used as finer search constraints to discover the most relevant service. In this paper, we describe a QoS framework for ontology based web service discovery. The QoS factors taken into consideration are execution time, response time, throughput, scalability, reputation, accessibility and availability. The behavior of each web service at various instances is observed over a period of time and its QoS based performance is analyzed.
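
    A common way to combine such QoS factors into a single selection criterion is a weighted sum over normalized attributes. The sketch below uses illustrative service names, attribute values and weights, not the paper's framework:

```python
def rank_services(services, weights):
    """Rank candidate services by a weighted sum of normalized QoS attributes.

    Attributes where smaller is better (e.g. response time) are assumed to be
    supplied already inverted/normalized to [0, 1]. Names are illustrative."""
    def score(qos):
        return sum(weights[k] * qos[k] for k in weights)
    return sorted(services, key=lambda s: score(s[1]), reverse=True)

candidates = [
    ("WeatherA", {"availability": 0.99, "reputation": 0.7, "throughput": 0.6}),
    ("WeatherB", {"availability": 0.95, "reputation": 0.9, "throughput": 0.8}),
]
weights = {"availability": 0.5, "reputation": 0.3, "throughput": 0.2}
best = rank_services(candidates, weights)[0][0]
```

    The weights encode the user's non-functional preferences; monitoring each service over time, as the paper describes, is what supplies up-to-date attribute values.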

  3. Aspectos y normas de accesibilidad web

    Directory of Open Access Journals (Sweden)

    Jairo Armando Riaño Herrera

    2014-12-01

    Full Text Available This article reviews the difference and the relationship between the terms accessibility and usability in the context of software and web pages. Based on the definition of web accessibility, it reviews the existing norms and standards maintained by the international W3C consortium through its Web Accessibility Initiative, whose objective is to guarantee that web sites are accessible to any user regardless of disability. Finally, it lists aspects to take into account when publishing content on the web, as well as some online validation tools that check web sites' compliance with accessibility standards.

  4. Mobile Web Design for Dummies

    CERN Document Server

    Warner, Janine

    2010-01-01

    The perfect place to learn how to design Web sites for mobile devices! With the popularity of Internet access via cell phones and other mobile devices, Web designers now have to consider as many as eight operating systems, several browsers, and a slew of new devices as they plan a new site, a new interface, or a new sub-site. This easy-to-follow friendly book guides you through this brave new world with a clear look at the fundamentals and offers practical techniques and tricks you may not have considered. It explores all issues to consider in planning a mobile site and covers the tools needed for…

  5. Instant web scraping with Java

    CERN Document Server

    Mitchell, Ryan

    2013-01-01

    This book is full of short, concise recipes for learning a variety of useful web scraping techniques using Java. You will start with a simple basic recipe for setting up your Java environment and gradually learn more advanced recipes, such as using complex scrapers. Instant Web Scraping with Java is aimed at developers who, while not necessarily familiar with Java, are at least ready to dive into the complexities of this language with simple, step-by-step instructions leading the way. It is assumed that you have at least an intermediate knowledge of HTML, some knowledge of MySQL, and access to a…

  6. Beginning Joomla! Web Site Development

    CERN Document Server

    Webb, Cory

    2009-01-01

    By programmers, for programmers: the essential beginner's guide to building websites with Joomla!. Want to build and maintain dynamic websites without having to learn HTML and CSS? The Joomla! open-source web content management system and this beginner's guide are all you need. This book walks you step-by-step through the process of building a website with Joomla!, providing detailed instruction in Wrox's practical, programmer-to-programmer style. The book explores key concepts and shows how each concept relates to the development of an actual real-world web site you can access online. Joomla! is…

  7. WEB 238 Courses Tutorial / indigohelp

    OpenAIRE

    2015-01-01

    WEB 238 Week 2 JavaScript Events; WEB 238 Week 3 Cookies; WEB 238 Week 4 Dynamic HTML; WEB 238 Week 5 Web Programming Languages; WEB 238 Week 1 DQs; WEB 238 Week 2 DQs; WEB 238 Week 3 DQs; WEB 238 Week 4 DQs; WEB 238 Week 5 DQs

  8. Usare WebDewey

    OpenAIRE

    Baldi, Paolo

    2016-01-01

    This presentation shows how to use the WebDewey tool. Features of WebDewey. Italian WebDewey compared with American WebDewey. Querying Italian WebDewey. Italian WebDewey and MARC21. Italian WebDewey and UNIMARC. Numbers, captions, "equivalente verbale": Dewey decimal classification in Italian catalogues. Italian WebDewey and Nuovo soggettario. Italian WebDewey and LCSH. Italian WebDewey compared with printed version of Italian Dewey Classification (22. edition): advantages and disadvantages o...

  9. Web Annotation and Threaded Forum: How Did Learners Use the Two Environments in an Online Discussion?

    Science.gov (United States)

    Sun, Yanyan; Gao, Fei

    2014-01-01

    Web annotation is a Web 2.0 technology that allows learners to work collaboratively on web pages or electronic documents. This study explored the use of Web annotation as an online discussion tool by comparing it to a traditional threaded discussion forum. Ten graduate students participated in the study. Participants had access to both a Web…

  10. Binary Particle Swarm Optimization based Biclustering of Web usage Data

    CERN Document Server

    Rathipriya, R.; Thangavel, K.; Bagyamani, J.

    2011-01-01

    Web mining is the nontrivial process of discovering valid, novel, potentially useful knowledge from web data using data mining techniques or methods. It may give information that is useful for improving the services offered by web portals and information access and retrieval tools. With the rapid development of biclustering, more researchers have applied the biclustering technique to different fields in recent years. When the biclustering approach is applied to web usage data it automatically captures the hidden browsing patterns in the form of biclusters. In this work, a swarm intelligence technique is combined with the biclustering approach to propose an algorithm called Binary Particle Swarm Optimization (BPSO) based Biclustering for Web Usage Data. The main objective of this algorithm is to retrieve the globally optimal bicluster from the web usage data. These biclusters contain relationships between web users and web pages which are useful for E-Commerce applications like web advertising and marketing…
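
    The binary PSO core can be sketched with the classic sigmoid position update. The toy objective below (maximizing the number of set bits) stands in for the paper's bicluster-quality fitness, where the bits would select user rows and page columns of the usage matrix; all parameters are illustrative:

```python
import math
import random

def bpso(fitness, n_bits, n_particles=10, iters=30, seed=1):
    """Minimal binary PSO: maximize fitness over bit vectors.

    Positions are bit lists; velocities pass through a sigmoid to give the
    probability that each bit is set (the standard binary PSO update)."""
    rnd = random.Random(seed)
    pos = [[rnd.randint(0, 1) for _ in range(n_bits)] for _ in range(n_particles)]
    vel = [[0.0] * n_bits for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_f = [fitness(p) for p in pos]
    g = pbest[max(range(n_particles), key=lambda i: pbest_f[i])][:]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(n_bits):
                # Cognitive + social pulls, then stochastic bit resampling.
                vel[i][d] += (2 * rnd.random() * (pbest[i][d] - pos[i][d])
                              + 2 * rnd.random() * (g[d] - pos[i][d]))
                prob = 1.0 / (1.0 + math.exp(-vel[i][d]))
                pos[i][d] = 1 if rnd.random() < prob else 0
            f = fitness(pos[i])
            if f > pbest_f[i]:
                pbest_f[i], pbest[i] = f, pos[i][:]
                if f > fitness(g):
                    g = pos[i][:]
    return g

# Toy objective: maximize the number of set bits (stand-in for bicluster quality).
best = bpso(sum, n_bits=8)
```

    In the biclustering setting, decoding `best` would select the subset of users and pages forming the candidate bicluster, and the fitness would measure its coherence.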

  11. Design and evaluation of web-based image transmission and display with different protocols

    Science.gov (United States)

    Tan, Bin; Chen, Kuangyi; Zheng, Xichuan; Zhang, Jianguo

    2011-03-01

    There are many Web-based image accessing technologies used in the medical imaging area, such as component-based (ActiveX Control) thick-client Web display, zero-footprint thin-client Web viewers (also called server-side processing Web viewers), Flash Rich Internet Application (RIA), or HTML5-based Web display. Different Web display methods perform differently in different network environments. In this presentation, we give an evaluation of two developed Web-based image display systems. The first one is used for thin-client Web display. It works between a PACS Web server with a WADO interface and a thin client. The PACS Web server provides JPEG format images to HTML pages. The second one is for thick-client Web display. It works between a PACS Web server with a WADO interface and a thick client running in browsers containing an ActiveX control, Flash RIA program or HTML5 scripts. The PACS Web server provides native DICOM format images or a JPIP stream for these clients.
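    For reference, a WADO request of the kind both systems rely on is just an HTTP GET carrying a few query parameters (requestType, studyUID, seriesUID, objectUID, contentType, per the DICOM WADO-URI service). The sketch below, using a placeholder server URL and UIDs, builds such a URI: contentType=image/jpeg gives the server-rendered JPEG path used by the thin client, while contentType=application/dicom would request the native DICOM object for a thick client.

```python
from urllib.parse import urlencode

def wado_uri(base, study_uid, series_uid, object_uid, content_type="image/jpeg"):
    # Build a WADO-URI request for a single DICOM object. The endpoint URL
    # and UIDs used below are made-up placeholders.
    params = {
        "requestType": "WADO",   # fixed value required by WADO-URI
        "studyUID": study_uid,
        "seriesUID": series_uid,
        "objectUID": object_uid,
        "contentType": content_type,
    }
    return base + "?" + urlencode(params)

url = wado_uri("http://pacs.example.org/wado",
               "1.2.840.113619.2.1",
               "1.2.840.113619.2.1.1",
               "1.2.840.113619.2.1.1.1")
```

    A browser can display the resulting URL directly as an image, which is what makes the thin-client path work without any client-side DICOM parsing.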

  12. Web users’ language utilization behaviors in China

    Institute of Scientific and Technical Information of China (English)

    LAI; Maosheng; QU; Peng; ZHAO; Kang

    2009-01-01

    The paper focuses on the language utilization habits of Chinese Web users in accessing the Web. It also seeks to make a general study of the basic nature of the language phenomenon with regard to digital access. A questionnaire survey was formulated and distributed online for these research purposes, and 1,267 responses were collected. The data were analyzed with descriptive statistics, Chi-square testing and contingency table analyses. Results revealed the following findings. Tagging has already played an important role in Web 2.0 communication for China's Web users. Chinese users rely greatly on all kinds of taxonomies in browsing and also have an awareness of them in effective searching. These findings imply that classified languages in the digital environment may aid Chinese Web users in a more satisfying manner. Highly subject-specific words, especially those from authorized tools, yielded better results in searching. Chinese users have high recognition for related terms. As to the demographic aspect, there is little difference between genders in the utilization of information retrieval languages. Age may constitute a variable element to a certain degree. Educational background has a complex effect on language utilization in searching. These research findings characterize Chinese Web users' behaviors in digital information access. They may also prove valuable for the modeling and further refinement of digital access services.
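    The Chi-square tests mentioned reduce to the standard Pearson statistic over a contingency table. The sketch below uses made-up counts (the paper's actual tables are not reproduced here) for a hypothetical gender-by-language-preference table.

```python
def chi_square(table):
    # Pearson chi-square statistic for an r x c contingency table.
    row_tot = [sum(r) for r in table]
    col_tot = [sum(c) for c in zip(*table)]
    n = sum(row_tot)
    stat = 0.0
    for i, row in enumerate(table):
        for j, obs in enumerate(row):
            exp = row_tot[i] * col_tot[j] / n   # expected count under independence
            stat += (obs - exp) ** 2 / exp
    return stat

# Hypothetical counts: gender (rows) x preferred retrieval language (columns).
table = [[30, 20],
         [25, 25]]
stat = chi_square(table)   # ~1.01, below the 3.84 cutoff at p = 0.05 with df = 1
```

    A statistic this small would support the paper's finding of little gender difference; the survey's real conclusions rest on its own tables, not these illustrative numbers.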

  13. Designing a Semantic Web Path to e-Science

    OpenAIRE

    Di Donato, Francesca

    2005-01-01

    This paper aims at designing a possible path of convergence between the Open Access and the Semantic Web communities. In section 1, it focuses on the problems that the current Web has to face to become a fully effective research means, with particular regard to the question of selection according to subjective quality criteria. Section 2 exposes the main principles and standards which lie behind the Open Access movement, and tries to demonstrate that the Open Access community is a fertile gro...

  14. An Authentication System for Web Services Based on Web Server Log Analysis

    Directory of Open Access Journals (Sweden)

    R. Joseph Manoj

    2014-01-01

    Authentication is a method which validates users' identity prior to permitting them to access web services. To enhance the security of web services, providers follow a variety of authentication methods to restrict malicious users from accessing the services. This paper proposes a new authentication method which verifies a user's identity by analyzing web server log files, which include the requesting user's IP address, username, password, date and time of request, status code, URL, etc., and checks for IP address spoofing using the ingress packet filtering method. This paper also analyses the resultant data and the performance of the proposed work.
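    The abstract names the log fields used but not the parsing or spoof-check rules, so the following is a hypothetical sketch: it parses a Common Log Format line into the fields listed above and flags a request whose source IP has never been seen for that username, a simplification of the paper's ingress-filtering check.

```python
import re

# Common Log Format; named groups follow the fields listed in the abstract.
LOG_RE = re.compile(
    r'(?P<ip>\S+) \S+ (?P<user>\S+) \[(?P<time>[^\]]+)\] '
    r'"(?P<request>[^"]*)" (?P<status>\d{3}) \S+')

def parse(line):
    # Return the log fields as a dict, or None for an unparseable line.
    m = LOG_RE.match(line)
    return m.groupdict() if m else None

def looks_spoofed(entry, known_ips):
    # Flag a request whose source IP was never observed for that user before.
    # (The paper's exact ingress packet filtering rules are not given in the
    # abstract; this is an illustrative stand-in.)
    return entry["ip"] not in known_ips.get(entry["user"], set())

line = '203.0.113.9 - alice [21/Jan/2014:10:32:55 +0000] "GET /service HTTP/1.1" 200 512'
entry = parse(line)
known = {"alice": {"198.51.100.4"}}
```

    Here `alice` has only ever connected from 198.51.100.4, so the request from 203.0.113.9 would be flagged for further checks rather than authenticated outright.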

  15. Opal web services for biomedical applications.

    Science.gov (United States)

    Ren, Jingyuan; Williams, Nadya; Clementi, Luca; Krishnan, Sriram; Li, Wilfred W

    2010-07-01

    Biomedical applications have become increasingly complex, and they often require large-scale high-performance computing resources with a large number of processors and memory. The complexity of application deployment and the advances in cluster, grid and cloud computing require new modes of support for biomedical research. Scientific Software as a Service (sSaaS) enables scalable and transparent access to biomedical applications through simple standards-based Web interfaces. Towards this end, we built a production web server (http://ws.nbcr.net) in August 2007 to support the bioinformatics application called MEME. The server has grown since to include docking analysis with AutoDock and AutoDock Vina, electrostatic calculations using PDB2PQR and APBS, and off-target analysis using SMAP. All the applications on the servers are powered by Opal, a toolkit that allows users to wrap scientific applications easily as web services without any modification to the scientific codes, by writing simple XML configuration files. Opal allows both web forms-based access and programmatic access of all our applications. The Opal toolkit currently supports SOAP-based Web service access to a number of popular applications from the National Biomedical Computation Resource (NBCR) and affiliated collaborative and service projects. In addition, Opal's programmatic access capability allows our applications to be accessed through many workflow tools, including Vision, Kepler, Nimrod/K and VisTrails. From mid-August 2007 to the end of 2009, we have successfully executed 239,814 jobs. The number of successfully executed jobs more than doubled from 205 to 411 per day between 2008 and 2009. The Opal-enabled service model is useful for a wide range of applications. It provides for interoperation with other applications with Web Service interfaces, and allows application developers to focus on the scientific tool and workflow development. Web server availability: http://ws.nbcr.net.

  16. WebTag: Web browsing into sensor tags over NFC.

    Science.gov (United States)

    Echevarria, Juan Jose; Ruiz-de-Garibay, Jonathan; Legarda, Jon; Alvarez, Maite; Ayerbe, Ana; Vazquez, Juan Ignacio

    2012-01-01

    Information and Communication Technologies (ICTs) continue to overcome many of the challenges related to wireless sensor monitoring, such as the design of smarter embedded processors, the improvement of network architectures, the development of efficient communication protocols and the maximization of life-cycle autonomy. This work aims to improve the data-transmission communication link in wireless sensor monitoring. The upstream communication link is usually based on standard IP technologies, but the downstream side is always masked by the proprietary protocols used for the wireless link (like ZigBee, Bluetooth, RFID, etc.). This work presents a novel solution (WebTag) for direct IP-based access to a sensor tag over the Near Field Communication (NFC) technology for secure applications. WebTag allows direct web access to the sensor tag by means of a standard web browser: it reads the sensor data, configures the sampling rate and implements IP-based security policies. It is, definitely, a new step towards the evolution of the Internet of Things paradigm.

  17. WebTag: Web Browsing into Sensor Tags over NFC

    Directory of Open Access Journals (Sweden)

    Juan Jose Echevarria

    2012-06-01

    Information and Communication Technologies (ICTs) continue to overcome many of the challenges related to wireless sensor monitoring, such as the design of smarter embedded processors, the improvement of network architectures, the development of efficient communication protocols and the maximization of life-cycle autonomy. This work aims to improve the data-transmission communication link in wireless sensor monitoring. The upstream communication link is usually based on standard IP technologies, but the downstream side is always masked by the proprietary protocols used for the wireless link (like ZigBee, Bluetooth, RFID, etc.). This work presents a novel solution (WebTag) for direct IP-based access to a sensor tag over the Near Field Communication (NFC) technology for secure applications. WebTag allows direct web access to the sensor tag by means of a standard web browser: it reads the sensor data, configures the sampling rate and implements IP-based security policies. It is, definitely, a new step towards the evolution of the Internet of Things paradigm.

  18. WebTag: Web Browsing into Sensor Tags over NFC

    Science.gov (United States)

    Echevarria, Juan Jose; Ruiz-de-Garibay, Jonathan; Legarda, Jon; Álvarez, Maite; Ayerbe, Ana; Vazquez, Juan Ignacio

    2012-01-01

    Information and Communication Technologies (ICTs) continue to overcome many of the challenges related to wireless sensor monitoring, such as the design of smarter embedded processors, the improvement of network architectures, the development of efficient communication protocols and the maximization of life-cycle autonomy. This work aims to improve the data-transmission communication link in wireless sensor monitoring. The upstream communication link is usually based on standard IP technologies, but the downstream side is always masked by the proprietary protocols used for the wireless link (like ZigBee, Bluetooth, RFID, etc.). This work presents a novel solution (WebTag) for direct IP-based access to a sensor tag over the Near Field Communication (NFC) technology for secure applications. WebTag allows direct web access to the sensor tag by means of a standard web browser: it reads the sensor data, configures the sampling rate and implements IP-based security policies. It is, definitely, a new step towards the evolution of the Internet of Things paradigm. PMID:23012511

  19. Evolution of the cosmic web

    CERN Document Server

    Cautun, Marius; Jones, Bernard J T; Frenk, Carlos S

    2014-01-01

    The cosmic web is the largest-scale manifestation of the anisotropic gravitational collapse of matter. It represents the transitional stage between linear and non-linear structures and contains easily accessible information about the early phases of structure formation processes. Here we investigate the characteristics and the time evolution of its morphological components. Our analysis involves the application of the NEXUS Multiscale Morphology Filter (MMF) technique, predominantly its NEXUS+ version, to high-resolution and large-volume cosmological simulations. We quantify the cosmic web components in terms of their mass and volume content, their density distribution and halo populations. We employ new analysis techniques to determine the spatial extent of filaments and sheets, such as their total length and local width. This analysis identifies clusters and filaments as the most prominent components of the web. In contrast, while voids and sheets take up most of the volume, they correspond to underdense environ...
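    Once a simulation grid has been segmented into morphological components, the mass and volume fractions the authors quantify reduce to simple bookkeeping over labelled cells. The toy sketch below (made-up labels and densities standing in for a NEXUS+ output grid) illustrates the computation.

```python
from collections import defaultdict

def component_fractions(labels, density):
    # Volume and mass fraction of each morphological component.
    # `labels` assigns each grid cell to a component ('cluster', 'filament',
    # 'sheet' or 'void'); `density` holds each cell's matter density. Both
    # are flat lists over the same cells.
    vol = defaultdict(int)
    mass = defaultdict(float)
    for lab, rho in zip(labels, density):
        vol[lab] += 1        # each cell contributes one unit of volume
        mass[lab] += rho     # and its density-worth of mass
    n, m = len(labels), sum(density)
    return ({k: v / n for k, v in vol.items()},
            {k: v / m for k, v in mass.items()})

# Eight toy cells: dense clusters/filaments, near-empty voids.
labels  = ["cluster", "filament", "filament", "sheet",
           "void", "void", "void", "void"]
density = [20.0, 5.0, 4.0, 1.5, 0.2, 0.1, 0.1, 0.1]
vfrac, mfrac = component_fractions(labels, density)
```

    Even in this toy example the abstract's qualitative result appears: voids dominate the volume budget while clusters and filaments dominate the mass.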

  20. Managing and monitoring tuberculosis using web-based tools in combination with traditional approaches

    Directory of Open Access Journals (Sweden)

    Chapman AL

    2013-11-01

    Ann LN Chapman,1 Thomas C Darton,2 Rachel A Foster1 (1Department of Infection and Tropical Medicine, Sheffield Teaching Hospitals NHS Foundation Trust, Sheffield; 2Oxford Vaccine Group, Centre for Clinical Vaccinology and Tropical Medicine, University of Oxford, Oxford, UK). Abstract: Tuberculosis (TB) remains a global health emergency. Ongoing challenges include the coordination of national and international control programs, high levels of drug resistance in many parts of the world, and the availability of accurate and rapid diagnostic tests. The increasing availability and reliability of Internet access throughout both affluent and resource-limited countries brings new opportunities to improve TB management and control through the integration of web-based technologies with traditional approaches. In this review, we explore current and potential future use of web-based tools in the areas of TB diagnosis, treatment, epidemiology, service monitoring, and teaching and training. Keywords: tuberculosis, information communication technology, Internet