Harper, Simon; Yesilada, Yeliz
Access to, and movement around, complex online environments, of which the World Wide Web (Web) is the most popular example, has long been considered a major issue in the Web design and usability field. The commonly used slang phrase ‘surfing the Web’ implies rapid and free access, pointing to its importance among designers and users alike. It has also long been established that this potentially complex and difficult access is further complicated, and becomes neither rapid nor free, if the user is disabled. There are millions of people who have disabilities that affect their use of the Web. Web accessibility aims to help these people to perceive, understand, navigate, and interact with, as well as contribute to, the Web, and thereby society in general. This accessibility is, in part, facilitated by the Web Content Accessibility Guidelines (WCAG), currently moving from version one to version two. These guidelines are intended to encourage designers to make sure their sites conform to specifications, and in that conformance enable the assistive technologies of disabled users to better interact with the page content. In this way, it was hoped that accessibility could be supported. While this is in part true, guidelines do not solve all problems, and the new WCAG version two guidelines are surrounded by controversy and intrigue. This chapter aims to survey the published literature related to Web accessibility and Web accessibility guidelines, and to discuss the limitations of the current guidelines and future directions.
Dean, Andrew S.
Determining the best method for granting World Wide Web (Web) users access to remote relational databases is difficult. Choosing the best supporting Web/database link method for implementation requires an in-depth understanding of the methods available and the relationship between the link designer's goals and the underlying issues of Performance and Functionality, Cost, Development Time and Ease, Serviceability, Flexibility and Openness, Security, State and Session. This thesis examined exis...
Israa Wahbi Kamal
University web portals are considered one of the main access gateways for universities. Typically, they have a large candidate audience among current students, employees, and faculty members, as well as past and prospective students, employees, and faculty members. Web accessibility is the concept of providing universal access to web content for different machines and for people of different ages, skills, education levels, and abilities. Several web accessibility metrics have been proposed in previous years to measure web accessibility. We integrated and extracted common web accessibility metrics from the different accessibility tools used in this study. This study evaluates web accessibility metrics for 36 Jordanian university and educational institute websites. We analyze the level of web accessibility using a number of available evaluation tools against the standard guidelines for web accessibility. Receiver operating characteristic (ROC) quality measurements are used to evaluate the effectiveness of the integrated accessibility metrics.
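The integration step described above can be illustrated with a small sketch. The report structure, tool names, and the choice of failure rate as the per-tool metric are assumptions for illustration; they are not taken from the study itself.

```python
# Hypothetical sketch of an integrated accessibility metric: the WCAG-check
# failure rate per tool, averaged across several evaluation tools' reports.
# Report fields and tool names are illustrative, not the study's actual data.

def failure_rate(violations, total_checks):
    """Fraction of automated checks that failed (0.0 = no detected barriers)."""
    if total_checks == 0:
        return 0.0
    return violations / total_checks

def integrated_metric(tool_reports):
    """Average the per-tool failure rates into one score for a site."""
    rates = [failure_rate(r["violations"], r["checks"]) for r in tool_reports]
    return sum(rates) / len(rates)

# Example: one site evaluated by two (hypothetical) tools.
reports = [
    {"tool": "ToolA", "violations": 12, "checks": 120},
    {"tool": "ToolB", "violations": 30, "checks": 150},
]
score = integrated_metric(reports)  # (0.1 + 0.2) / 2 = 0.15
```

A score near 0 suggests few automatically detectable barriers; comparing such scores against expert judgments is what the ROC analysis in the study measures.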
Qadri, Jameel A
Web Accessibility for disabled people has posed a challenge to civilized societies that claim to uphold the principles of equal opportunity and nondiscrimination. Certain concrete measures have been taken to narrow the digital divide between non-disabled and disabled users of Internet technology. The efforts have resulted in the enactment of legislation and laws, in mass awareness of the discriminatory nature of the accessibility issue, and in the development of commensurate technological tools to develop and test Web accessibility. The World Wide Web Consortium's (W3C) Web Accessibility Initiative (WAI) has framed a comprehensive document comprising a set of guidelines to make Web sites accessible to users with disabilities. This paper is about the issues and aspects surrounding Web Accessibility. The details and scope are kept limited to comply with the aim of the paper, which is to create awareness and to provide a basis for in-depth investigation.
Swallow, David; Petrie, Helen; Power, Christopher
This paper describes the design and evaluation of a Web Accessibility Information Resource (WebAIR) for supporting web developers to create and evaluate accessible websites. WebAIR was designed with web developers in mind, recognising their current working practices and acknowledging their existing understanding of web accessibility. We conducted an evaluation with 32 professional web developers in which they used either WebAIR or an existing accessibility information resource, the Web Content Accessibility Guidelines, to identify accessibility problems. The findings indicate that several design decisions made in relation to the language, organisation, and volume of WebAIR were effective in supporting web developers to undertake web accessibility evaluations.
Obrenovic, Z.; Ossenbruggen, J.R. van
A Web browser provides a uniform user interface to different types of information. Making this interface universally accessible and more interactive is a long term goal still far from being achieved. Universally accessible browsers require novel interaction modalities and additional functionalities,
Fernandes, N.; Lopes, R.; Carriço, L.
DUAN Lei; SHEN Liren
Web services are used in the Experimental Physics and Industrial Control System (EPICS). Combined with the EPICS Channel Access protocol, the high usability, platform independence, and language independence of Web services can be used to design a fully transparent and uniform software interface layer, which supports channel data acquisition, modification, and monitoring functions. This software interface layer, being cross-platform and cross-language, has good interoperability and reusability.
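The "uniform software interface layer" idea can be sketched as a thin facade exposing channel read, write, and monitor operations behind one API, independent of the transport underneath. The real system maps these operations onto EPICS Channel Access via Web services; here an in-memory dictionary stands in for the channels, and all names are illustrative.

```python
# Sketch of a uniform channel-access facade. The real layer would forward
# these calls over Web services to EPICS Channel Access; this mock keeps the
# same get/put/monitor shape with an in-memory store.

class ChannelGateway:
    def __init__(self):
        self._channels = {}
        self._monitors = {}                    # channel name -> callbacks

    def put(self, name, value):
        """Write a channel value and notify any registered monitors."""
        self._channels[name] = value
        for cb in self._monitors.get(name, []):
            cb(name, value)

    def get(self, name):
        """Read the current channel value."""
        return self._channels[name]

    def monitor(self, name, callback):
        """Register a callback fired on every change to the channel."""
        self._monitors.setdefault(name, []).append(callback)

gw = ChannelGateway()
events = []
gw.monitor("beam:current", lambda ch, v: events.append((ch, v)))
gw.put("beam:current", 99.5)
# events now holds one notification for the monitored channel.
```

Clients written against this interface never see whether the backend is a local store, Channel Access, or a remote Web service, which is the interoperability point the abstract makes.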
David A. Bradbard
Web accessibility is the practice of making Web sites accessible to all, particularly those with disabilities. As the Internet becomes a central part of post-secondary instruction, it is imperative that instructional Web sites be designed for accessibility to meet the needs of disabled students. The purpose of this article is to introduce Web accessibility to university faculty in theory and practice. With respect to theory, this article first reviews empirical studies, highlights legal mandates related to Web accessibility, overviews the standards related to Web accessibility, and reviews authoring and evaluation tools available for designing accessible Web sites. With respect to practice, the article presents two diaries representing the authors’ experiences in making their own Web sites accessible. Finally, based on these experiences, we discuss the implications of faculty efforts to improve Web accessibility.
The purpose of this web-accessible database is for the public to be able to view instantaneous readings from a solar-powered air monitoring station located in a public location (prototype pilot test is outside of a library in Durham County, NC). The data are wirelessly transmitte...
Foley, Alan; Regan, Bob
Discusses Web design for people with disabilities and outlines a process-based approach to accessibility policy implementation. Topics include legal mandates; determining which standards apply to a given organization; validation, or evaluation of the site; site architecture; navigation; and organizational needs. (Author/LRW)
Leon, John; Cutlip, William; Hametz, Mark
The Access To Space (ATS) Group at NASA's Goddard Space Flight Center (GSFC) supports the science and technology community at GSFC by facilitating frequent and affordable opportunities for access to space. Through partnerships established with access mode suppliers, the ATS Group has developed an interactive Mission Design web site. The ATS web site provides both the information and the tools necessary to assist mission planners in selecting and planning their ride to space. This includes the evaluation of single payloads vs. ride-sharing opportunities to reduce the cost of access to space. Features of this site include the following: (1) Mission Database. Our mission database contains a listing of missions ranging from proposed missions to manifested. Missions can be entered by our user community through data input tools. Data is then accessed by users through various search engines: orbit parameters, ride-share opportunities, spacecraft parameters, other mission notes, launch vehicle, and contact information. (2) Launch Vehicle Toolboxes. The launch vehicle toolboxes provide the user a full range of information on vehicle classes and individual configurations. Topics include: general information, environments, performance, payload interface, available volume, and launch sites.
Bradbard, David A.; Peters, Cara
Web accessibility is the practice of making Web sites accessible to all, particularly those with disabilities. As the Internet becomes a central part of post-secondary instruction, it is imperative that instructional Web sites be designed for accessibility to meet the needs of disabled students. The purpose of this article is to introduce Web…
Web accessibility makes it possible for disabled users to get equal access to the information provided on the Web, as non-disabled users do. Therefore, to enable disabled users to use the Web, web pages need to be constructed in compliance with accessibility guidelines. The text on a web site is output as sound by a screen reader, so that visually impaired users can recognize its meaning. However, a screen reader cannot recognize images. This paper studies a method for explaining images included in web pages using QR codes. Web pages produced with the method provided in this paper will help visually impaired users to understand the contents of the page.
Suneet Kumar; Anuj Kumar Yadav; Rakesh Bharti; Rani Choudhary
Focused web crawlers have recently emerged as an alternative to the well-established web search engines. While the well-known focused crawlers retrieve relevant web-pages, there are various applications which target whole websites instead of single web-pages. For example, companies are represented by websites, not by individual web-pages. To answer queries targeted at websites, web directories are an established solution. In this paper, we introduce a novel focused website crawler t...
Centelles Velilla, Miquel; Ribera, Mireia; Rodríguez Santiago, Inmaculada
This paper presents research concerning the conversion of non-accessible web pages containing mathematical formulae into accessible versions through an OCR (Optical Character Recognition) tool. The objective of this research is twofold. First, to establish criteria for evaluating the potential accessibility of mathematical web sites, i.e. the feasibility of converting non-accessible (non-MathML) math sites into accessible (MathML) ones. Second, to propose a data model and a mechanism to pu...
Goodwin, Morten; Susar, Deniz; Nietzio, Annika;
Equal access to public information and services for all is an essential part of the United Nations (UN) Declaration of Human Rights. Today, the Web plays an important role in providing information and services to citizens. Unfortunately, many government Web sites are poorly designed and have accessibility barriers that prevent people with disabilities from using them. This article combines current Web accessibility benchmarking methodologies with a sound strategy for comparing Web accessibility among countries and continents. Furthermore, the article presents the first global analysis of the Web accessibility of 192 United Nations Member States, made publicly available. The article also identifies common properties of Member States that have accessible and inaccessible Web sites and shows that implementing anti-disability-discrimination laws is highly beneficial for the accessibility of Web sites, while...
The success of web-based applications depends on how well they are perceived by end-users. Various web accessibility guidelines have been promoted to help improve access to, and understanding of, the content of web pages. Designing for the total User Experience (UX) is an evolving discipline of the World Wide Web mainstream that focuses on how end users will work to achieve their target goals. To satisfy end-users, web-based applications must fulfil some common needs like clarity, accessibility and availability. The aim of this study is to evaluate how the User Experience characteristics of web-based applications are related to the web accessibility guidelines (WCAG 2.0, ISO 9241-151 and Section 508).
Dragut, Eduard Constantin
An increasing number of Web sites expose their content via query interfaces, many of them offering the same type of products/services (e.g., flight tickets, car rental/purchasing). They constitute the so-called "Deep Web". Accessing the content on the Deep Web has been a long-standing challenge for the database community. For a user interested in…
Examines the pros and cons of providing access to the World Wide Web for library patrons as well as suggesting solutions to problems. Discusses the establishment of a library policy governing the use of the Web, as well as the importance of workshops and instructional materials on Web use in libraries. (Author/AEF)
Ahmi, Aidi; Mohamad, Rosli
Despite the fact that Malaysian public institutions have progressed considerably on website and portal usage, web accessibility has been reported as one of the issues that deserves special attention. Consistent with the government's moves to promote effective use of the web and portals, it is essential for government institutions to ensure compliance with established standards and guidelines on web accessibility. This paper evaluates the accessibility of 25 Malaysian ministry websites using automated tools, i.e. WAVE and AChecker. Both tools are designed to objectively evaluate web accessibility in conformance with the Web Content Accessibility Guidelines 2.0 (WCAG 2.0) and the United States Rehabilitation Act 1973 (Section 508). The findings reported somewhat low compliance with web accessibility standards amongst the ministries. Further enhancement is needed for input elements such as labels and checkboxes to be associated with text, as well as for image-related elements. These findings could be used as a mechanism for webmasters to locate and rectify errors pertaining to web accessibility and to ensure equal access to web information and services for all citizens.
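The kinds of issues the evaluation flags, missing alternative text and unlabeled form inputs, can be sketched with a minimal automated check. Real tools such as WAVE and AChecker implement the full WCAG 2.0 rule set; this sketch covers only these two checks, and its label heuristic is a deliberate simplification.

```python
# Minimal sketch of two automated WCAG-style checks: <img> elements with no
# alt text, and visible <input> elements with neither an aria-label nor an id
# that a <label for="..."> could reference. Heuristics are simplified.
from html.parser import HTMLParser

class AltTextChecker(HTMLParser):
    def __init__(self):
        super().__init__()
        self.errors = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "img" and not attrs.get("alt"):
            self.errors.append("img missing alt text")
        if (tag == "input" and attrs.get("type") != "hidden"
                and not attrs.get("aria-label") and not attrs.get("id")):
            self.errors.append("input not associated with a label")

html = ('<img src="logo.png">'
        '<img src="crest.png" alt="Ministry crest">'
        '<input type="text">')
checker = AltTextChecker()
checker.feed(html)
# checker.errors lists two problems: the first img and the unlabeled input.
```

Automated checks like this catch only machine-detectable failures, which is one reason the paper reports conformance levels rather than claiming full accessibility.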
Background: The SEED integrates many publicly available genome sequences into a single resource. The database contains accurate and up-to-date annotations based on the subsystems concept, which leverages clustering between genomes and other clues to accurately and efficiently annotate microbial genomes. The backend is used as the foundation for many genome annotation tools, such as the Rapid Annotation using Subsystems Technology (RAST) server for whole genome annotation, the metagenomics RAST server for random community genome annotations, and the annotation clearinghouse for exchanging annotations from different resources. In addition to a web user interface, the SEED also provides a Web-services-based API for programmatic access to the data in the SEED, allowing the development of third-party tools and mash-ups. Results: The currently exposed Web services encompass over forty different methods for accessing data related to microbial genome annotations. The Web services provide comprehensive access to the database back end, allowing any programmer access to the most consistent and accurate genome annotations available. The Web services are deployed using a platform-independent, service-oriented approach that allows the user to choose the most suitable programming platform for their application. Example code demonstrates that the Web services can be used to access the SEED using common bioinformatics programming languages such as Perl, Python, and Java. Conclusions: We present a novel approach to access the SEED database. Using Web services, a robust API for access to genomics data is provided, without requiring large volume downloads all at once. The API ensures timely access to the most current datasets available, including new genomes as soon as they come online.
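The programmatic-access pattern such a service enables can be sketched by constructing a SOAP request envelope. The endpoint namespace, method name, and parameter below are placeholders, not the actual SEED API; the service's WSDL defines the real method signatures.

```python
# Sketch of building a SOAP 1.1 request envelope for a web-service call.
# "get_annotation", the namespace, and the parameter name are hypothetical
# stand-ins, not the real SEED method names.
import xml.etree.ElementTree as ET

SOAP_NS = "http://schemas.xmlsoap.org/soap/envelope/"

def build_soap_request(method, params, service_ns="urn:example-service"):
    """Serialize a SOAP envelope calling `method` with the given params."""
    env = ET.Element(f"{{{SOAP_NS}}}Envelope")
    body = ET.SubElement(env, f"{{{SOAP_NS}}}Body")
    call = ET.SubElement(body, f"{{{service_ns}}}{method}")
    for name, value in params.items():
        ET.SubElement(call, name).text = str(value)
    return ET.tostring(env, encoding="unicode")

# Hypothetical call: fetch the annotation of one genome feature.
envelope = build_soap_request("get_annotation",
                              {"feature_id": "fig|83333.1.peg.1"})
# The envelope would then be POSTed to the service endpoint over HTTP;
# that network step is omitted to keep the sketch offline.
```

Because the envelope is plain XML over HTTP, equivalent clients are straightforward in Perl or Java as well, which is the platform-independence point the abstract makes.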
Semantic Web approaches aim to achieve interoperability and communication among technologies and organizations. Nevertheless, it is sometimes forgotten that the Web must be useful for every user, so it is necessary to include tools and techniques that make the Semantic Web accessible. Accessibility and usability are two concepts widely used together in web application development; however, their meanings are different. Usability refers to ease of use, whereas accessibility refers to the possibility of access. For the former, there are many well-proven approaches in real cases. The accessibility field, however, requires deeper research to make access feasible for disabled people, and also for novice non-disabled people, given the cost of automating and maintaining accessible applications. In this paper, we propose an architecture to achieve accessibility in web environments that complies with the WAI accessibility standards and the Universal Design paradigm. This architecture aims to control accessibility throughout the web application development life-cycle, following a methodology that starts from a semantic conceptual model and leans on description languages and controlled vocabularies.
Why do librarians and library staff other than Web librarians and developers need to know about accessibility? Web services staff do not--or should not--operate in isolation from the rest of the library staff. It is important to consider what areas of online accessibility are applicable to other areas of library work and to colleagues' regular job…
Lee, MW; Chen, SY; Liu, X.
Web-based technology has already been adopted as a tool to support teaching and learning in higher education. One criterion affecting the usability of such a technology is the design of web-based interface (WBI) within web-based learning programs. How different users access the WBIs has been investigated by several studies, which mainly analyze the collected data using statistical methods. In this paper, we propose to analyze users’ learning behavior using Data Mining (DM) techniques. Finding...
Offers an introduction to web accessibility and usability for information professionals, offering advice on the concerns relevant to library and information organizations. This book can be used as a resource for developing staff training and awareness activities. It will also be of value to website managers involved in web design and development.
The aim of this paper is to describe ongoing research being carried out to enable people with visual impairments to communicate directly with designers and specifiers of hobby and community web sites to maximise the accessibility of their sites. The research started with an investigation of the accessibility of community and hobby web sites as perceived by a group of visually impaired end users. It is continuing with an investigation into how to best to communicate with web designers who are not experts in web accessibility. The research is making use of communication theory to investigate how terminology describing personal experience can be used in the most effective and powerful way. By working with the users using a Delphi study the research has ensured that the views of the visually impaired end users is successfully transmitted. PMID:26294465
Aydogmus, Z.; Aydogmus, O.
The Internet provides an opportunity for students to access laboratories from outside the campus. This paper presents a Web-based remote access real-time laboratory using SCADA (supervisory control and data acquisition) control. The control of an induction motor is used as an example to demonstrate the effectiveness of this remote laboratory,…
Gomathi, C.; Moorthi, M.; Duraiswamy, K.
Web Access Pattern (WAP), which is the sequence of accesses pursued by users frequently, is a kind of interesting and useful knowledge in practice. Sequential Pattern mining is the process of applying data mining techniques to a sequential database for the purposes of discovering the correlation relationships that exist among an ordered list of…
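The core idea of WAP mining can be sketched in a few lines: count how often each sequence of page accesses occurs across user sessions and keep those meeting a minimum support. Full algorithms such as WAP-tree or PrefixSpan also handle non-contiguous subsequences; this toy version considers only contiguous ones.

```python
# Simplified sketch of Web Access Pattern mining: contiguous access
# sequences of a fixed length, kept when their support (number of sessions
# containing them) reaches a threshold. Real WAP mining is more general.
from collections import Counter

def frequent_access_patterns(sessions, length, min_support):
    """Return {pattern: support} for contiguous patterns of `length`
    occurring in at least `min_support` sessions."""
    support = Counter()
    for session in sessions:
        seen = set()
        for i in range(len(session) - length + 1):
            seen.add(tuple(session[i:i + length]))
        for pattern in seen:          # count each pattern once per session
            support[pattern] += 1
    return {p: c for p, c in support.items() if c >= min_support}

sessions = [
    ["home", "catalog", "item", "cart"],
    ["home", "catalog", "search", "item"],
    ["home", "catalog", "item", "checkout"],
]
patterns = frequent_access_patterns(sessions, length=2, min_support=3)
# ("home", "catalog") is the only pattern present in all three sessions.
```

Patterns surviving the support threshold are the "frequently pursued" access sequences the abstract refers to, usable for example to pre-fetch pages or restructure navigation.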
Robbins, Kay A.
Background: Many bioinformatics algorithms and data sets are deployed using web services so that the results can be explored via the Internet and easily integrated into other tools and services. These services often include data from other sites that is accessed either dynamically or through file downloads. Developers of these services face several problems because of the dynamic nature of the information from the upstream services. Many publicly available repositories of bioinformatics data frequently update their information. When such an update occurs, the developers of the downstream service may also need to update. For file downloads, this process is typically performed manually, followed by a web service restart. Requests for information obtained by dynamic access of upstream sources are sometimes subject to rate restrictions. Findings: SideCache provides a framework for deploying web services that integrate information extracted from other databases and from web sources that are periodically updated. This situation occurs frequently in biotechnology, where new information is being continuously generated and the latest information is important. SideCache provides several types of services, including proxy access and rate control, local caching, and automatic web service updating. Conclusions: We have used the SideCache framework to automate the deployment and updating of a number of bioinformatics web services and tools that extract information from remote primary sources such as NCBI, NCIBI, and Ensembl. The SideCache framework has also been used to share research results through the use of a SideCache-derived web service.
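Two of the SideCache ideas named above, local caching with a time-to-live and rate control on upstream requests, can be sketched together. The class name, parameters, and update rule are illustrative, not the actual SideCache API.

```python
# Sketch of a caching, rate-limited proxy in the spirit of SideCache:
# responses from an upstream fetch function are cached for `ttl` seconds,
# and upstream calls are spaced at least `min_interval` seconds apart.
import time

class CachingProxy:
    def __init__(self, fetch, ttl=3600.0, min_interval=1.0):
        self.fetch = fetch              # function that hits the upstream source
        self.ttl = ttl                  # seconds a cached entry stays valid
        self.min_interval = min_interval
        self.cache = {}                 # key -> (timestamp, value)
        self.last_request = 0.0

    def get(self, key):
        now = time.monotonic()
        if key in self.cache:
            ts, value = self.cache[key]
            if now - ts < self.ttl:
                return value            # served locally, upstream untouched
        # Rate control: wait out the remainder of the minimum interval.
        wait = self.min_interval - (now - self.last_request)
        if wait > 0:
            time.sleep(wait)
        value = self.fetch(key)
        self.last_request = time.monotonic()
        self.cache[key] = (self.last_request, value)
        return value

calls = []
proxy = CachingProxy(lambda k: calls.append(k) or f"data:{k}",
                     min_interval=0.0)
proxy.get("gene42")
proxy.get("gene42")                     # second call is a cache hit
# `calls` records a single upstream fetch.
```

A production version would also persist the cache and refresh entries proactively, which is what the "automatic web service updating" service adds.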
Valentine, D. W.; Jennings, B.; Zaslavsky, I.; Maidment, D. R.
The CUAHSI hydrologic information system (HIS) is designed to be a live, multiscale web portal system for accessing, querying, visualizing, and publishing distributed hydrologic observation data and models for any location or region in the United States. The HIS design follows the principles of open service-oriented architecture, i.e. system components are represented as web services with well-defined standard service APIs. WaterOneFlow web services are the main component of the design. The currently available services have been completely re-written compared to the previous version, and provide programmatic access to USGS NWIS (stream flow, groundwater, and water quality repositories), DAYMET daily observations, NASA MODIS, and Unidata NAM streams, with several additional web service wrappers being added (EPA STORET, NCDC, and others). Different repositories of hydrologic data use different vocabularies and support different types of query access. Resolving semantic and structural heterogeneities across different hydrologic observation archives and distilling a generic set of service signatures is one of the main scalability challenges in this project, and a requirement in our web service design. To accomplish the uniformity of the web services API, data repositories are modeled following the CUAHSI Observation Data Model. The web service responses are document-based, and use an XML schema to express the semantics in a standard format. Access to station metadata is provided via the web service methods GetSites, GetSiteInfo, and GetVariableInfo. These methods form the foundation of the CUAHSI HIS discovery interface and may execute over locally stored metadata or request the information from remote repositories directly. Observation values are retrieved via a generic GetValues method which is executed against national data repositories. The service is implemented in ASP.NET, and other providers are implementing WaterOneFlow services in Java. Reference implementation of
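Consuming a GetValues-style response can be sketched as parsing an XML time-series document. The element and attribute names below are a simplified stand-in, not the actual WaterML schema (which uses namespaces and richer metadata).

```python
# Sketch of parsing a GetValues-style XML response into (time, value) pairs.
# The document structure is a simplified illustration of a WaterML-like
# time series, not the real CUAHSI schema.
import xml.etree.ElementTree as ET

response = """
<timeSeriesResponse>
  <timeSeries siteCode="02087500" variableCode="00060">
    <value dateTime="2007-01-01T00:00:00">245.0</value>
    <value dateTime="2007-01-01T01:00:00">251.0</value>
  </timeSeries>
</timeSeriesResponse>
"""

def parse_values(xml_text):
    """Extract (dateTime, value) pairs from a GetValues-style response."""
    root = ET.fromstring(xml_text)
    series = root.find("timeSeries")
    return [(v.get("dateTime"), float(v.text))
            for v in series.findall("value")]

values = parse_values(response)
# values pairs each timestamp with its numeric observation.
```

Because every repository answers GetValues with the same document shape, one parser like this serves NWIS, DAYMET, and the other wrapped sources alike, which is the uniformity argument the abstract makes.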
Purpose -- This paper investigates the impact and techniques for mitigating the effects of web robots on usage statistics collected by Open Access institutional repositories (IRs). Design/methodology/approach -- A review of the literature provides a comprehensive list of web robot detection techniques. Reviews of system documentation and open source code are carried out along with personal interviews to provide a comparison of the robot detection techniques used in the major IR platforms. An ...
Government's use of the Web in the UK is prolific and a wide range of services is now available through this channel. The government set out to address the problem that links from Hansard (the transcripts of Parliamentary debates) were not maintained over time and that therefore there was a need for some long-term storage and stewardship of information, including maintaining access. Further investigation revealed that linking was key, not only in maintaining access to information, but also to the discovery of information. This resulted in a project that affects the entire government Web estate, with a solution leveraging the basic building blocks of the Internet (DNS) and the Web (HTTP and URIs) in a pragmatic way, to ensure that an infrastructure is in place to provide access to important information both now and in the future.
Tso, Kam S.; Pajevski, Michael J.
Cybersecurity has become a great concern as threats of service interruption, unauthorized access, stealing and altering of information, and spreading of viruses have become more prevalent and serious. Application-layer access control is a critical component in the overall security solution that also includes encryption, firewalls, virtual private networks, antivirus, and intrusion detection. An access control solution, based on an open-source access manager augmented with custom software components, was developed to provide protection to both Web-based and Java-based client and server applications. The DISA Security Service (DISA-SS) provides common access control capabilities for AMMOS software applications through a set of application programming interfaces (APIs) and network-accessible security services for authentication, single sign-on, authorization checking, and authorization policy management. The OpenAM access management technology designed for Web applications can be extended to meet the needs of Java thick clients and standalone servers that are commonly used in the JPL AMMOS environment. The DISA-SS reusable components have greatly reduced the effort for each AMMOS subsystem to develop its own access control strategy. The novelty of this work is that it leverages an open-source access management product that was designed for Web-based applications to provide access control for Java thick clients and Java standalone servers. Thick clients and standalone servers are still commonly used in businesses and government, especially for applications that require rich graphical user interfaces and high-performance visualization that cannot be met by thin clients running on Web browsers.
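The authorization-checking capability such an API exposes answers one question: may this subject perform this action on this resource? A minimal sketch of that pattern follows; the policy format, role names, and function signature are illustrative, not the DISA-SS or OpenAM API.

```python
# Minimal sketch of role-based authorization checking: a policy table maps
# (role, resource) to the set of permitted actions, and a check passes if
# any of the subject's roles grants the action. Names are hypothetical.

POLICY = {
    # (role, resource): permitted actions
    ("operator", "telemetry"): {"read"},
    ("admin", "telemetry"): {"read", "write"},
}

def is_authorized(roles, resource, action):
    """True if any of the subject's roles grants `action` on `resource`."""
    return any(action in POLICY.get((role, resource), set())
               for role in roles)

allowed = is_authorized(["operator"], "telemetry", "read")    # permitted
denied = is_authorized(["operator"], "telemetry", "write")    # not permitted
```

Centralizing this check behind one API is what lets thick clients, standalone servers, and web applications share a single policy store instead of each implementing its own access control.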
We present KnoWDiaL, an approach for Learning and using task-relevant Knowledge from human-robot Dialog and access to the Web. KnoWDiaL assumes that there is an autonomous agent that performs tasks, as requested by humans through speech. The agent needs to “understand” the request (i.e., to fully ground the task until it can proceed to plan for and execute it). KnoWDiaL contributes such understanding by using and updating a Knowledge Base, by dialoguing with the user, and by accessing the web. We believe that KnoWDiaL, as we present it, can be applied to general autonomous agents. However, we focus on our work with our autonomous collaborative robot, CoBot, which executes service tasks in a building, moving around and transporting objects between locations. Hence, the knowledge acquired and accessed consists of groundings of language to robot actions, building locations, persons, and objects. KnoWDiaL handles the interpretation of voice commands, is robust regarding speech recognition errors, and is able to learn commands involving referring expressions in an open domain (i.e., without requiring a lexicon). We present in detail the multiple components of KnoWDiaL, namely a frame-semantic parser, a probabilistic grounding model, a web-based predicate evaluator, a dialog manager, and the weighted predicate-based Knowledge Base. We illustrate the knowledge access and updates from the dialog and Web access through detailed and complete examples. We further evaluate the correctness of the predicate instances learned into the Knowledge Base, and show the increase in dialog efficiency as a function of the number of interactions. We have extensively and successfully used KnoWDiaL in CoBot dialoguing and accessing the Web, and extract a few corresponding example sequences from captured videos.
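The weighted predicate-based Knowledge Base can be illustrated with a toy sketch: predicate instances ground language to locations or objects, and each carries a weight reinforced when dialog confirms it. The predicate names, initial weight, and additive update rule below are invented for illustration; KnoWDiaL's actual model is probabilistic and richer.

```python
# Toy sketch of a weighted predicate Knowledge Base: groundings gain weight
# when confirmed in dialog and lose weight when disconfirmed. The update
# rule and predicates are hypothetical, not KnoWDiaL's actual model.

class WeightedKB:
    def __init__(self):
        self.facts = {}                        # predicate tuple -> weight

    def observe(self, predicate, confirmed=True, step=0.2):
        """Reinforce (or weaken) a grounding after a dialog interaction."""
        w = self.facts.get(predicate, 0.5)     # unseen facts start neutral
        w = min(1.0, w + step) if confirmed else max(0.0, w - step)
        self.facts[predicate] = w

    def best(self, name):
        """Highest-weighted grounding whose predicate name matches."""
        matches = [(p, w) for p, w in self.facts.items() if p[0] == name]
        return max(matches, key=lambda pw: pw[1])[0] if matches else None

kb = WeightedKB()
kb.observe(("locationOf", "coffee", "kitchen-7"))
kb.observe(("locationOf", "coffee", "kitchen-7"))
kb.observe(("locationOf", "coffee", "office-3"), confirmed=False)
# Repeated confirmation makes "kitchen-7" the preferred grounding.
```

The dialog-efficiency result in the paper corresponds to this effect: as confirmed groundings accumulate weight, fewer clarification questions are needed per command.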
SanthanaVannan, Suresh K [ORNL; Cook, Robert B [ORNL; Pan, Jerry Yun [ORNL; Wilson, Bruce E [ORNL
Remote sensing data from satellites have provided valuable information on the state of the earth for several decades. Since March 2000, the Moderate Resolution Imaging Spectroradiometer (MODIS) sensors on board NASA's Terra and Aqua satellites have been providing estimates of several land parameters useful in understanding earth system processes at global, continental, and regional scales. However, the HDF-EOS file format, the specialized software needed to process HDF-EOS files, the data volume, and the high spatial and temporal resolution of MODIS data make it difficult for users wanting to extract small but valuable amounts of information from the MODIS record. To overcome this usability issue, the NASA-funded Distributed Active Archive Center (DAAC) for Biogeochemical Dynamics at Oak Ridge National Laboratory (ORNL) developed a Web service that provides subsets of MODIS land products using Simple Object Access Protocol (SOAP). The ORNL DAAC MODIS subsetting Web service is a unique way of serving satellite data that exploits a fairly established and popular Internet protocol to allow users access to massive amounts of remote sensing data. The Web service provides MODIS land product subsets up to 201 x 201 km in a non-proprietary, comma-delimited text file format. Users can programmatically query the Web service to extract MODIS land parameters for real-time data integration into models and decision support tools, or connect it to workflow software. Information regarding the MODIS SOAP subsetting Web service is available on the World Wide Web (WWW) at http://daac.ornl.gov/modiswebservice.
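Handling the comma-delimited subset text that such a service returns can be sketched with the standard library. The row layout below (product, date, parameter, then pixel values) is a simplified illustration, not the exact ORNL DAAC format; the service documentation defines the real fields.

```python
# Sketch of parsing a comma-delimited MODIS-style subset into a time series.
# The column layout is illustrative, not the actual ORNL DAAC row format.
import csv
import io

sample = """\
MOD13Q1,A2020161,NDVI,0.42,0.45,0.44
MOD13Q1,A2020177,NDVI,0.47,0.49,0.48
"""

def parse_subset(text):
    """Return {date: [pixel values]} for one parameter of one product."""
    series = {}
    for product, date, param, *pixels in csv.reader(io.StringIO(text)):
        series[date] = [float(p) for p in pixels]
    return series

ndvi = parse_subset(sample)
# Each composite date maps to the pixel values of the requested subset.
```

Keeping the payload as plain delimited text is what lets model and workflow code consume the subsets without any HDF-EOS tooling, which is the usability point the abstract makes.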
Bemmel, van J.; Wegdam, M.; Lagerberg, K.
Web services fail to deliver on the promise of ubiquitous deployment and seamless interoperability due to the lack of a uniform, standards-based approach to all aspects of security. In particular, the enforcement of access policies in a service oriented architecture is not addressed adequately. We p
A Turing machine has an important role in computer science education, as it is a milestone in courses related to automata theory, theory of computation and computer architecture. Its value is also recognized in the Computing Curricula proposed by the Association for Computing Machinery (ACM) and the IEEE Computer Society. In this paper we present a physical implementation of the Turing machine accessed through the Web. To enable remote access to the Turing machine, an implementation of the client-server architecture is built. The web interface is described in detail and illustrations of remote programming, initialization and the computation of the Turing machine are given. The advantages of such an approach and the expected benefits of using a remotely accessible physical implementation of the Turing machine as an educational tool in the teaching process are discussed.
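The machine being exposed remotely is, at its core, the standard single-tape model. A minimal software simulation of that model (not the paper's physical implementation) can be sketched as:

```python
def run_turing(tape, rules, state="q0", head=0, halt="qH", max_steps=1000):
    """Simulate a single-tape Turing machine.

    rules maps (state, symbol) -> (write, move, next_state),
    with move in {-1, +1} and "_" as the blank symbol.
    """
    cells = dict(enumerate(tape))  # sparse tape
    for _ in range(max_steps):
        if state == halt:
            break
        sym = cells.get(head, "_")
        write, move, state = rules[(state, sym)]
        cells[head] = write
        head += move
    return "".join(cells[i] for i in sorted(cells)), state

# A tiny machine that inverts a binary string, then halts at the blank.
rules = {
    ("q0", "0"): ("1", +1, "q0"),
    ("q0", "1"): ("0", +1, "q0"),
    ("q0", "_"): ("_", +1, "qH"),
}
print(run_turing("1011", rules))  # → ('0100_', 'qH')
```

A remote interface like the paper's would accept the rule table and initial tape from the client and return the final tape and state.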
Gupta, N.; Gupta, V.; Okaya, D.; Kamb, L.; Maechling, P.
Web services offer scientific communities a new paradigm for sharing research codes and communicating results. While there are formal technical definitions of what constitutes a web service, for a user community such as the Southern California Earthquake Center (SCEC), we may conceptually consider a web service to be functionality provided on-demand by an application which is run on a remote computer located elsewhere on the Internet. The value of a web service is that it can (1) run a scientific code without the user needing to install and learn the intricacies of running the code; (2) provide the technical framework which allows a user's computer to talk to the remote computer which performs the service; (3) provide the computational resources to run the code; and (4) bundle several analysis steps and provide the end results in digital or (post-processed) graphical form. Within an NSF-sponsored ITR project coordinated by SCEC, we are constructing web services using architectural protocols and programming languages (e.g., Java). However, because the SCEC community has a rich pool of scientific research software (written in traditional languages such as C and FORTRAN), we also emphasize making existing scientific codes available by constructing web service frameworks which wrap around and directly run these codes. In doing so we attempt to broaden community usage of these codes. Web service wrapping of a scientific code can be done using a "web servlet" construction or by using a SOAP/WSDL-based framework. This latter approach is widely adopted in IT circles although it is subject to rapid evolution. Our wrapping framework attempts to "honor" the original codes with as little modification as is possible. For versatility we identify three methods of user access: (A) a web-based GUI (written in HTML and/or Java applets); (B) a Linux/OSX/UNIX command line "initiator" utility (shell-scriptable); and (C) direct access from within any Java application (and with the
K.V.S. Jaharsh Samayan
The motive of this study is to suggest a protocol which can be implemented to observe the activities of any node within a network whose contribution to the organization needs to be measured. Many associates working in an organization misuse the resources allocated to them and waste working time on unproductive tasks of no use to the organization. To tackle this problem, a dynamic approach monitors the web pages each user accesses through cookies generated from their recent web activity, and displays statistical information on how each IP address in the network has spent its web time over the period. In an ever-challenging dynamic world, monitoring the productivity of the associates in an organization plays a crucial role.
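A minimal sketch of the kind of per-IP activity summary such a protocol could produce, assuming hypothetical (ip, url, seconds) records recovered from cookie data:

```python
from collections import Counter, defaultdict

def activity_report(log_entries):
    """Summarise page visits per IP from (ip, url, seconds) records.

    The record shape is an illustrative assumption, not the paper's
    actual cookie format.
    """
    time_by_ip = Counter()
    pages_by_ip = defaultdict(Counter)
    for ip, url, seconds in log_entries:
        time_by_ip[ip] += seconds
        pages_by_ip[ip][url] += 1
    return {ip: {"total_seconds": time_by_ip[ip],
                 "top_page": pages_by_ip[ip].most_common(1)[0][0]}
            for ip in time_by_ip}

log = [("10.0.0.5", "/reports", 120), ("10.0.0.5", "/news", 300),
       ("10.0.0.5", "/news", 40), ("10.0.0.9", "/mail", 90)]
print(activity_report(log))
```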
Srirama, Satish Narayana
It is now feasible to host basic web services on a smart phone due to the advances in wireless devices and mobile communication technologies. While such applications are promising, the ability to provide secure and reliable communication in vulnerable and volatile mobile ad-hoc topologies is fast becoming necessary. The paper mainly addresses the details and issues of providing secure communication and access control for the mobile web service provisioning domain. While basic message-level security can be provided, providing proper access control mechanisms for the Mobile Host still poses a great challenge. This paper discusses the details of secure communication and proposes a distributed semantics-based authorization mechanism.
Tso, Kam S.; Pajevski, Michael J.; Johnson, Bryan
Cyber security has gained national and international attention as a result of near-continuous headlines from financial institutions, retail stores, government offices and universities reporting compromised systems and stolen data. Concerns continue to rise as threats of service interruption and the spread of viruses become ever more prevalent and serious. Controlling access to application-layer resources is a critical component in a layered security solution that includes encryption, firewalls, virtual private networks, antivirus, and intrusion detection. In this paper we discuss the development of an application-level access control solution, based on an open-source access manager augmented with custom software components, to provide protection to both Web-based and Java-based client and server applications.
Haibo Shen; Yu Cheng
As mobile web services become more pervasive, applications based on them will need flexible access control mechanisms. Unlike traditional approaches based on identity or role, access decisions for these applications will depend on a combination of required user attributes and contextual information. This paper proposes a semantic context-based access control model (called SCBAC) to be applied in the mobile web services environment by combining ...
Web logs are a young and dynamic media type. Due to the intrinsic relationships among Web objects and the lack of a uniform schema for web documents, Web community mining has become a significant area of Web data management and analysis. Research on Web communities spans a number of research domains. This paper presents an ontological model together with some recent studies on the topic, covering the finding of relevant Web pages based on linkage information and the discovery of user access patterns through analysis of Web log files. A simulation has been created with data crawled from an academic website. The simulation was implemented in a JAVA and ORACLE environment. Results show that prediction of user sessions can yield plenty of vital information for Business Intelligence. Search Engine Optimization could also use the potential results discussed in detail in the paper.
Fuertes Castro, José Luis; González, Ricardo; Gutiérrez, Emmanuelle; Martínez Normand, Loïc
Website accessibility evaluation is a complex task requiring a combination of human expertise and software support. There are several online and offline tools to support the manual web accessibility evaluation process. However, they all have some weaknesses because none of them includes all the desired features. In this paper we present Hera-FFX, an add-on for the Firefox web browser that supports semi-automatic web accessibility evaluation.
Gondara, Mandeep Kaur
The Semantic Web is an open, distributed, and dynamic environment where access to resources cannot be controlled safely unless access decisions are taken into account during the discovery of web services. Security becomes the crucial factor for the adoption of semantic-based web services. Access control means that users must fulfill certain conditions in order to gain access to web services, and it is important from both the legal and the security points of view. This paper discusses important requirements for effective access control in semantic web services, extracted from the literature surveyed, and also discusses open research issues in this context, focusing on access control policies and models.
Ulltveit-Moe, Nils; Olsen, Morten Goodwin; Pillai, Anand B.;
The European Internet Accessibility project (EIAO) has developed an Observatory for performing large-scale automatic web accessibility evaluations of public sector web sites in Europe. The architecture includes a distributed web crawler that crawls web sites for links until either a given budget of web pages has been identified or the web site has been crawled exhaustively. Subsequently, a uniform random subset of the crawled web pages is sampled and sent for accessibility evaluation, and the evaluation results are stored in a Resource Description Format (RDF) database that is later loaded … challenges that the project faced and the solutions developed towards building a system capable of regular large-scale accessibility evaluations with sufficient capacity and stability. It also outlines some possible future architectural improvements.
Nelson, Michael L.; Bianco, David J.
NASA Langley Research Center (LaRC) began using the World Wide Web (WWW) in the summer of 1993, becoming the first NASA installation to provide a Center-wide home page. This coincided with a reorganization of LaRC to provide a more concentrated focus on technology transfer to both aerospace and non-aerospace industry. Use of WWW and NCSA Mosaic not only provides automated information dissemination, but also allows for the implementation, evolution and integration of many technology transfer and technology awareness applications. This paper describes several of these innovative applications, including the on-line presentation of the entire Technology OPportunities Showcase (TOPS), an industrial partnering showcase that exists on the Web long after the actual 3-day event ended. The NASA Technical Report Server (NTRS) provides uniform access to many logically similar, yet physically distributed NASA report servers. WWW is also the foundation of the Langley Software Server (LSS), an experimental software distribution system which will distribute LaRC-developed software. In addition to the more formal technology distribution projects, WWW has been successful in connecting people with technologies and people with other people.
Mari Carmen González-Videgaray
Problems with mathematics learning, “math anxiety” or “statistics anxiety” among university students can be avoided by using teaching strategies and technological tools. Besides the personal suffering it causes, low achievement in mathematics reduces terminal efficiency and decreases enrollment in careers related to science, technology and mathematics. This paper has two main goals: (1) to offer an organized inventory of open-access web resources for mathematics learning in higher education, and (2) to explore to what extent these resources are currently known and used by students and teachers. The first goal was accomplished by running a search in Google and then classifying the resources. For the second, we conducted a survey among a sample of students (n=487) and teachers (n=60) from mathematics and engineering within the largest public university in Mexico. We categorized 15 high-quality web resources, most of them interactive simulations and computer algebra systems.
to broaden the scope to any type of user and any type of use case. The document provides an introduction to some required concepts and technical standards for designing accessible Web sites. A brief review of the legal requirements in a few countries for Web accessibility complements the recommendations...
Khelghati, Mohammadreza; Keulen, van Maurice; Hiemstra, Djoerd
The growing need to access more and more information draws attention to the huge amount of data hidden behind web forms, known as the deep web. To make this data accessible, harvesters have a crucial role. Targeting different domains and websites enhances the need to have a general-purpose harvester wh
As smartphone clients are restricted in computational power and bandwidth, it is important to minimise the overhead of transmitted messages. This paper identifies and studies methods that reduce the amount of data transferred via wireless links between a web service client and a web service. Measurements were performed in a real environment, based on a web service prototype providing public transport information for the city of Hamburg in Germany, using actual wireless links with a mobile smartphone device. REST-based web services using the data exchange formats JSON, XML and Fast Infoset were evaluated against the existing SOAP-based web service.
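The payload-size differences such a study measures can be illustrated locally: the same record encoded as compact JSON is typically smaller than an equivalent element-per-field XML encoding. The departure record and its field names below are invented for illustration, not taken from the Hamburg prototype.

```python
import json
import xml.etree.ElementTree as ET

# Hypothetical transport-information record; field names are illustrative.
record = {"line": "U3", "stop": "Hauptbahnhof", "departure": "12:42", "delay": 0}

# Compact JSON encoding (no whitespace).
json_payload = json.dumps(record, separators=(",", ":")).encode()

# Element-per-field XML encoding of the same record.
root = ET.Element("departure")
for key, value in record.items():
    ET.SubElement(root, key).text = str(value)
xml_payload = ET.tostring(root)

print(len(json_payload), len(xml_payload))
```

Fast Infoset, a binary XML encoding, would typically land between the two; the actual gains depend on the schema and data.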
The article is intended to introduce readers to the concept and background of Web accessibility in the United States. I will first discuss different definitions of Web accessibility. The beneficiaries of an accessible Web, or the sufferers from an inaccessible one, will be discussed by type of disability. The importance of Web accessibility will be introduced from the ethical, demographic, legal, and financial perspectives. Web accessibility related standards and legislation will be discussed in great detail. Previous research on evaluating Web accessibility will be presented. Lastly, a system for automated Web accessibility transformation will be introduced as an alternative approach for enhancing Web accessibility.
The article attempts to answer the question: how do the e-shop web services operated by selected Polish e-commerce companies present themselves in terms of web accessibility? It discusses the essence of web accessibility in the context of the WCAG 2.0 standard, and the business benefits companies gain from owning an accessible website that fulfils the WCAG 2.0 recommendations. The web accessibility level of the e-shops of selected Polish e-commerce companies is then assessed.
Highlights: • We present H1DS, a new RESTful web service for accessing fusion data. • We examine the scalability and extensibility of H1DS. • We present a fast and user friendly web browser client for the H1DS web service. • A summary relational database is presented as an application of the H1DS API. - Abstract: A new data access system, H1DS, has been developed and deployed for the H-1 Heliac at the Australian Plasma Fusion Research Facility. The data system provides access to fusion data via a RESTful web service. With the URL acting as the API to the data system, H1DS provides a scalable and extensible framework which is intuitive to new users, and allows access from any internet connected device. The H1DS framework, originally designed to work with MDSplus, has a modular design which can be extended to provide access to alternative data storage systems
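The "URL acting as the API" idea can be sketched as a path parser that maps a request URL onto a data query. The /data/&lt;device&gt;/&lt;shot&gt;/&lt;tree path&gt; layout below is an assumed illustration, not the actual H1DS URL scheme.

```python
from urllib.parse import urlsplit

def parse_data_url(url):
    """Interpret a RESTful data-access URL, in the spirit of H1DS, where
    the path itself selects the device, shot number and signal node.

    The path layout (/data/<device>/<shot>/<tree path>) is a hypothetical
    example of the pattern, not the real H1DS routing.
    """
    parts = urlsplit(url)
    segments = [s for s in parts.path.split("/") if s]
    if len(segments) < 4 or segments[0] != "data":
        raise ValueError("not a data URL")
    _, device, shot, *node = segments
    return {"device": device, "shot": int(shot), "node": "/".join(node)}

print(parse_data_url("http://example.org/data/h1/58123/mirnov/coil_1"))
```

Because every addressable signal has a stable URL, such a scheme is usable from any internet-connected device, which is the scalability point the abstract makes.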
Emig, Christian; Brandt, Frank; Abeck, Sebastian; Biermann, Jürgen; Klarl, Heiko
With the mutual consent to use WSDL (Web Service Description Language) to describe web service interfaces and SOAP as the basic communication protocol, the cornerstone for web service-oriented architecture (WSOA) has been established. Considering the momentum observable by the growing number of specifications in the web service domain for the indispensable cross-cutting concern of identity management (IdM) it is still an open issue how a WSOA-aware IdM architecture is built and how it is link...
Hinds, Richard M; Klifto, Christopher S; Naik, Amish A; Sapienza, Anthony; Capo, John T
The Internet is a common resource for applicants to hand surgery fellowships; however, the quality and accessibility of fellowship online information is unknown. The objectives of this study were to evaluate the accessibility of hand surgery fellowship Web sites and to assess the quality of information provided via program Web sites. Hand fellowship Web site accessibility was evaluated by reviewing the American Society for Surgery of the Hand (ASSH) directory on November 16, 2014 and the National Resident Matching Program (NRMP) fellowship directory on February 12, 2015, and by performing an independent Google search on November 25, 2014. Accessible Web sites were then assessed for the quality of the presented information. A total of 81 programs were identified, with the ASSH directory featuring direct links to 32% of program Web sites and the NRMP directory directly linking to 0%. A Google search yielded direct links to 86% of program Web sites. The quality of presented information varied greatly among the 72 accessible Web sites. Program description (100%), fellowship application requirements (97%), program contact email address (85%), and research requirements (75%) were the most commonly presented components of fellowship information. Hand fellowship program Web sites can be accessed from the ASSH directory and, to a lesser extent, the NRMP directory. However, a Google search is the most reliable method to access online fellowship information. Of assessable programs, all featured a program description, though the quality of the remaining information was variable. Hand surgery fellowship applicants may face some difficulties when attempting to gather program information online. Future efforts should focus on improving the accessibility and content quality of hand surgery fellowship program Web sites. PMID:27625537
National Aeronautics and Space Administration — Global Science & Technology, Inc. (GST) proposes to investigate information processing and delivery technologies to provide near-real-time Web-based access to...
World Wide Web is becoming increasingly necessary for everybody regardless of age, gender, culture, health and individual disabilities. Unfortunately, the information on the Web is still not accessible to deaf and hard of hearing Web users since these people require translations of written forms into their first language: sign language, which is based on facial expressions, hands and body movements and has its own linguistic structure. This thesis introduces a possible solution (method) for p...
Zeng, Xiaoming; Sligar, Steven R.
Human resource development programs in various institutions communicate with their constituencies including persons with disabilities through websites. Web sites need to be accessible for legal, economic and ethical reasons. We used an automated web usability evaluation tool, aDesigner, to evaluate 205 home pages from the organizations of AHRD…
Offers an introduction to the web of data and the semantic web, exploring technologies including APIs, microformats and linked data. This title includes topical commentary and practical examples that explore how information professionals can harness the power of this phenomenon to inform strategy and become facilitators of access to data.
Khelghati, Mohammadreza; Keulen, van Maurice; Hiemstra, Djoerd
The growing need to access more and more information draws attention to the huge amount of data hidden behind web forms, known as the deep web. To make this data accessible, harvesters have a crucial role. Targeting different domains and websites enhances the need to have a general-purpose harvester which can be applied to different settings and situations. To develop such a harvester, a number of issues should be considered. Among these issues, business domain features, targeted websites' featu...
Steen-Hansen, Linn; Fagernes, Siri
Current accessibility research shows that in web development, the process itself may lead to inaccessible web sites and applications. Common practices typically do not allow sufficient testing; the focus is mainly on complying with minimum standards and treating accessibility compliance as a sort of bug-fixing process, missing the user perspective. In addition, there is an alarming lack of knowledge of, and experience with, accessibility issues. It has also been argued that bringing accessibility into the development process at all stages is the only way to achieve the highest possible level of accessibility. The work presented in this paper is based on a previous project focusing on guidelines for developing accessible rich Internet applications. The guidelines were classified as either process-oriented or technology-oriented. In this paper, we examine the process-oriented guidelines and give a practical perspective on how these guidelines can make the development process more accessibility-friendly. PMID:27534339
As mobile web services become more pervasive, applications based on them will need flexible access control mechanisms. Unlike traditional approaches based on identity or role, access decisions for these applications will depend on a combination of required user attributes and contextual information. This paper proposes a semantic context-based access control model (called SCBAC) to be applied in the mobile web services environment, combining semantic web technologies with a context-based access control mechanism. The proposed model is a context-centric access control solution: context is the first-class principle that explicitly guides both policy specification and the enforcement process. In order to handle context information in the model, the paper proposes a context ontology to represent contextual information and employs it in the inference engine. In addition, the paper specifies access control policies as rules over ontologies representing the concepts introduced in the SCBAC model, and uses the Semantic Web Rule Language (SWRL) to form policy rules, which are inferred by the JESS inference engine. The proposed model can also be applied to context-aware applications.
Web page access prediction gained its importance from the ever-increasing number of e-commerce Web information systems and e-businesses. Web page prediction, which involves personalizing the Web user's browsing experience, assists Web masters in improving the Website structure and helps Web users navigate the site and access the information they need. The most widely used approach for this purpose is the pattern discovery process of Web usage mining, which entails many techniques such as Markov models, association rules and clustering. Implementing such pattern discovery techniques helps predict the next page to be accessed by the Web user based on the user's previous browsing patterns. However, each of the aforementioned techniques has its own limitations, especially when it comes to accuracy and space complexity. This paper achieves better accuracy, fewer generated rules, and less state-space complexity by integrating a low-order Markov model with clustering: the data sets are clustered and Markov model analysis is performed on each cluster instead of the whole data set. The outcome of the integration is better accuracy, with less state-space complexity than a higher-order Markov model.
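The low-order Markov component can be sketched as follows; the clustering step is omitted, and in the combined approach a model like this would be trained per cluster rather than on the whole data set.

```python
from collections import Counter, defaultdict

def train_markov(sessions):
    """First-order Markov model over page-visit sessions:
    for each page, count which page was visited next."""
    transitions = defaultdict(Counter)
    for session in sessions:
        for current, nxt in zip(session, session[1:]):
            transitions[current][nxt] += 1
    return transitions

def predict_next(transitions, page):
    """Most likely next page, or None if the page was never seen."""
    if page not in transitions:
        return None
    return transitions[page].most_common(1)[0][0]

# Toy click-stream sessions for illustration.
sessions = [["home", "catalog", "item", "cart"],
            ["home", "catalog", "item", "item2"],
            ["home", "search", "item"]]
model = train_markov(sessions)
print(predict_next(model, "catalog"))  # → item
```

A higher-order model would condition on the last k pages instead of one, at the cost of a much larger state space, which is exactly the trade-off the paper targets.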
In this article, we describe the present situation of access network management, enumerate a few problems in the development of network management systems, then put forward a distributed Intranet/Web solution named iMAN for the integrated management of access networks, present its architecture and protocol stack, and describe its application in practice.
While accessibility of information technologies is often acknowledged as important, it is frequently not well addressed in practice. The purpose of this study was to examine the work of web developers and content managers to explore why and how accessibility is or is not addressed as an objective as websites are planned, built and maintained.…
Radovan, Marko; Perdih, Mojca
E-learning is a rapidly developing form of education. One of the key characteristics of e-learning is flexibility, which enables easier access to knowledge for everyone. Information and communications technology (ICT), which is e-learning's main component, enables alternative means of accessing the web-based learning materials that comprise the…
Georgiev, Martin; Jana, Suman; Shmatikov, Vitaly
Davies, Daniel K.; Stock, Steven E.; Wehmeyer, Michael L.
In this study, a prototype web browser, called Web Trek, that utilizes multimedia to provide access for individuals with cognitive disabilities was developed and pilot-tested with 12 adults with mental retardation. The Web Trek browser provided greater independence in accessing the Internet compared to Internet Explorer. (Contains references.)…
Luis Joyanes Aguilar
The significant increase in threats, attacks and vulnerabilities affecting the Web in recent years has resulted in the development and implementation of tools and methods to ensure the privacy, confidentiality and integrity of users' and businesses' data. Under certain circumstances, despite the implementation of these tools, information still does not flow in a secure manner. Many of these security tools and methods cannot be used by people who have disabilities, or by the assistive technologies that enable such people to access the Web efficiently. Among the security tools that are not accessible are the virtual keyboard, the CAPTCHA and other technologies that help, to some extent, to ensure safety on the Internet and are used to combat the malicious code and attacks that have increased in recent times on the Web. Intelligent systems can detect, recover and receive information on the characteristics and properties of the different tools and hardware or software devices with which the user is accessing a web application; through analysis and interpretation, these intelligent systems can infer and automatically adjust the characteristics these tools need in order to be accessible to anyone, regardless of disability or navigation context. This paper defines a set of guidelines and specific features that security tools and methods should have to ensure Web accessibility through the implementation of intelligent systems.
Esmeralda Serrano Mascaraque
Government agencies should provide information resources and services through various means in order to achieve the right to information that assists all citizens. As the Web is one of the most widespread resources, it is essential to evaluate the degree of accessibility of the content published on it. To achieve this, the necessary tools and software will be applied to evaluate the accessibility level of a representative group of websites. We will also try to determine whether there is any relationship between accessibility and usability, since both are desirable (or, in the case of accessibility, even legally required) aspects of a proper Web design.
A new study shows that students aged 6 to 17 who have access to the Internet at home are growing more and more dissatisfied with the access to the Net available to them at school. Grunwald Associates, a California market research firm, released the results of their survey, "Children, Families and the Internet," on December 4. Seventy-six percent…
As part of an effort to migrate the National Nuclear Data Center (NNDC) databases to a relational platform, a new web interface has been developed for the dissemination of the nuclear structure datasets stored in the Evaluated Nuclear Structure Data File and Experimental Unevaluated Nuclear Data List.
The availability of bioinformatics web-based services is rapidly proliferating, owing to their interoperability and ease of use. The next challenge is the integration of these services in the form of workflows, and several projects are already underway, standardizing the syntax, semantics, and user interfaces. In order to combine the advantages of web services with locally installed tools, here we describe a collection of proxy client tools for 42 major bioinformatics web services in the form of European Molecular Biology Open Software Suite (EMBOSS) UNIX command-line tools. EMBOSS provides sophisticated means of discoverability and interoperability for hundreds of tools, and our package, named the Keio Bioinformatics Web Service (KBWS), adds functionalities of local and multiple alignment of sequences, phylogenetic analyses, and prediction of cellular localization of proteins and RNA secondary structures. This software, implemented in C, is available under GPL from http://www.g-language.org/kbws/ and the GitHub repository http://github.com/cory-ko/KBWS. Users can utilize the SOAP services, implemented in Perl, directly via the WSDL files at http://soap.g-language.org/kbws.wsdl (RPC/Encoded) and http://soap.g-language.org/kbws_dl.wsdl (Document/Literal).
Sequential pattern mining is the process of applying data mining techniques to a sequential database for the purpose of discovering the correlation relationships that exist among an ordered list of events. The task of discovering frequent sequences is challenging, because the algorithm needs to process a combinatorially explosive number of possible sequences. Discovering hidden information from Web log data is called Web usage mining. One common usage in web applications is the mining of users' access behaviour for the purpose of predicting, and hence pre-fetching, the web pages the user is likely to visit. The aim of discovering frequent sequential patterns in Web log data is to obtain information about the access behaviour of the users. Finding Frequent Sequential Patterns (FSP) is an important problem in web usage mining. In this paper, we explore a new frequent sequence pattern technique called AWAPT (Adaptive Web Access Pattern Tree) for FSP mining. An AWAPT combines a suffix tree and a prefix tree for efficient storage of all the sequences that contain a given item. It eliminates the recursive reconstruction of intermediate WAP-trees during mining by assigning binary codes to each node in the WAP-tree. Web access pattern tree (WAP-tree) mining is a sequential pattern mining technique for web log access sequences, which first stores the original web access sequence database (WASD) on a prefix tree, similar to the frequent pattern tree (FP-tree) for storing non-sequential data. The WAP-tree algorithm then mines the frequent sequences from the WAP-tree by recursively re-constructing intermediate trees, starting with suffix sequences and ending with prefix sequences. An attempt has been made in the AWAPT approach to improve efficiency: AWAPT totally eliminates the need to engage in numerous reconstructions of intermediate WAP-trees during mining and considerably reduces execution time.
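For contrast with the tree-based WAP/AWAPT approach, a naive frequent-sequential-pattern miner, which suffers exactly the combinatorial enumeration the abstract describes, can be sketched as:

```python
from itertools import combinations
from collections import Counter

def contains(seq, pattern):
    """True if pattern occurs in seq as an order-preserving subsequence."""
    it = iter(seq)
    return all(item in it for item in pattern)

def frequent_sequences(db, min_support, max_len=3):
    """Naive FSP mining: enumerate candidate subsequences from every
    session, then count each candidate's support across the database.
    Tree-based methods (WAP-tree, AWAPT) exist precisely to avoid this
    exhaustive enumeration."""
    candidates = set()
    for seq in db:
        for n in range(1, max_len + 1):
            candidates.update(combinations(seq, n))
    support = Counter()
    for pat in candidates:
        support[pat] = sum(contains(seq, pat) for seq in db)
    return {pat: c for pat, c in support.items() if c >= min_support}

# Toy access-sequence database for illustration.
db = [["a", "b", "c"], ["a", "c"], ["a", "b"]]
print(frequent_sequences(db, min_support=2))
```

The candidate set grows combinatorially with session length, which is why prefix/suffix-tree representations that count support during a single pass over the tree are the practical choice.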
夏春涛; 杨艳丽; 曹利峰
To address access control for Web Services, the shortcomings of traditional access control models in Web Services applications are analysed. The definition of a Web Services-oriented Attribute-Based Access Control (ABAC) model is then presented, and the ABAC access control architecture is designed. Furthermore, a fine-grained access control system for Web Services is implemented with the eXtensible Access Control Markup Language (XACML); in application, the system has effectively protected Web Services resources.
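The attribute-matching idea at the core of ABAC can be illustrated with a toy evaluator. This is a sketch only, not the paper's XACML implementation; the rule structure and all attribute names are hypothetical:

```python
def abac_decide(policy, subject, resource, action, environment):
    """Return 'Permit' if any rule's attribute predicates all hold for
    the request; otherwise 'Deny' (XACML-style decisions, toy matching)."""
    request = {"subject": subject, "resource": resource,
               "action": action, "environment": environment}
    for rule in policy:
        if all(request[category].get(name) == value
               for category, attrs in rule.items()
               for name, value in attrs.items()):
            return "Permit"
    return "Deny"

# One hypothetical rule: clinicians may invoke 'query' on the records service.
policy = [{"subject": {"role": "clinician"},
           "resource": {"service": "patient-records"},
           "action": {"operation": "query"},
           "environment": {}}]

decision = abac_decide(policy, {"role": "clinician"},
                       {"service": "patient-records"},
                       {"operation": "query"}, {})
```

A real XACML engine adds rule-combining algorithms and obligations on top of this basic match-then-decide flow.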
National Aeronautics and Space Administration — We propose to investigate the feasibility and value of the "Software as a Service" paradigm in facilitating access to Earth Science numerical models. We...
Chen, Alex Qiang
The World Wide Web (Web) has evolved from a collection of static pages that need reloading every time the content changes into dynamic pages whose parts update independently, without reloading the whole page. As such, users are required to work with dynamic pages containing components that react to events from either human interaction or machine automation. Elderly and visually impaired users are often the most disadvantaged when dealing with this form of interaction. Operating widgets require th...
Furche, Tim; Gottlob, Georg; Grasso, Giovanni; Guo, Xiaonan; Orsi, Giorgio; Schallhart, Christian
Forms are our gates to the web. They enable us to access the deep content of web sites. Automatic form understanding provides applications, ranging from crawlers and meta-search engines to service integrators, with a key to this content. Yet, it has received little attention other than as a component in specific applications such as crawlers or meta-search engines. No comprehensive approach to form understanding exists, let alone one that produces rich models for semantic services or integrati...
Bakker, R.; Tiesinga, P.; Kotter, R.
The Scalable Brain Atlas (SBA) is a collection of web services that provide unified access to a large collection of brain atlas templates for different species. Its main component is an atlas viewer that displays brain atlas data as a stack of slices in which stereotaxic coordinates and brain regions can be selected. These are subsequently used to launch web queries to resources that require coordinates or region names as input. It supports plugins which run inside the viewer and respond when...
Vitols, G; Arhipova, I
Markup languages are used to describe the content published on the World Wide Web. The aim of this article is to analyse hypertext markup language versions and identify possibilities for improving the accessibility of web information systems through the appropriate application of markup language elements. An analysis of document structure is performed, and document structure and text description elements are selected. The selected elements are practically evaluated with screen readers. From the evaluation resul...
Wen-Jye Shyr; Te-Jen Su; Chia-Ming Lin
This study develops a novel method for learning mechatronics using remote monitoring and control, based on a programmable logic controller (PLC) and WebAccess. A mechatronics module, a Web‐CAM and a PLC were integrated with WebAccess software to organize a remote laboratory. The proposed system enables users to access the Internet for remote monitoring and control of the mechatronics module via a web browser, thereby enhancing work flexibility by enabling personnel to control mechatronics equ...
Trabant, Chad; Ahern, Timothy K.
At the IRIS Data Management Center (DMC) we have developed a suite of web service interfaces to access our large archive of, primarily seismological, time series data and related metadata. The goals of these web services include providing: a) next-generation and easily used access interfaces for our current users, b) access to data holdings in a form usable for non-seismologists, c) programmatic access to facilitate integration into data processing workflows and d) a foundation for participation in federated data discovery and access systems. To support our current users, our services provide access to the raw time series data and metadata or conversions of the raw data to commonly used formats. Our services also support simple, on-the-fly signal processing options that are common first steps in many workflows. Additionally, high-level data products derived from raw data are available via service interfaces. To support data access by researchers unfamiliar with seismic data we offer conversion of the data to broadly usable formats (e.g. ASCII text) and data processing to convert the data to Earth units. By their very nature, web services are programmatic interfaces. Combined with ubiquitous support for web technologies in programming & scripting languages and support in many computing environments, web services are very well suited for integrating data access into data processing workflows. As programmatic interfaces that can return data in both discipline-specific and broadly usable formats, our services are also well suited for participation in federated and brokered systems either specific to seismology or multidisciplinary. Working within the International Federation of Digital Seismograph Networks, the DMC collaborated on the specification of standardized web service interfaces for use at any seismological data center. These data access interfaces, when supported by multiple data centers, will form a foundation on which to build discovery and access mechanisms
Web services are starting to be widely used in applications for remotely accessing data. This is of special interest for research based on small and medium scale fusion devices, since scientists participating remotely in experiments access large amounts of data over the Internet. Recent tests were conducted to see how the new network traffic generated by the use of web services can be integrated into the existing infrastructure, and what the impact would be on existing applications, especially those used in a remote participation scenario.
As the internet fast migrates from static web pages to dynamic web pages, users with visual impairment find accessing the contents of the web confusing and challenging. There is evidence that dynamic web applications pose accessibility challenges for visually impaired users. This study shows that a difference can be made through a basic understanding of the technical requirements of users with visual impairment, and it addresses a number of issues pertinent to the accessibility needs of such users. We propose that only by designing a framework that is structurally flexible, removing unnecessary extras and thereby making every bit useful (fit-for-purpose), will visually impaired users be given an increased capacity to intuitively access e-contents. This theory is implemented in a dynamic website for the visually impaired designed in this study. Designers should be aware of how screen reading software works, so that they can make reasonable adjustments or provide alternative content that still corresponds to the objective content, increasing the possibility of offering a faultless service to such users. The results of our research reveal that materials can be added to a content repository, or re-used from existing ones, by identifying the content types and then transforming them into a flexible and accessible form that fits the requirements of the visually impaired through our method (no-frills + agile methodology), rather than computing in advance or designing according to a given specification.
ZHAN Li-qiang; LIU Da-xin
In this paper we propose an efficient hybrid algorithm, WDHP, for mining frequent access patterns. WDHP adopts the techniques of DHP to optimize its performance, using a hash table to filter the candidate set and trimming the database. Whenever the database is trimmed to a size below a specified threshold, the algorithm puts the database into main memory by constructing a tree, and finds frequent patterns on the tree. Experiments show that WDHP outperforms the DHP algorithm and the main-memory-based algorithm WAP in execution efficiency.
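The DHP-style hash filtering that WDHP adopts can be sketched as follows. This is an illustrative one-pass version, not the paper's algorithm: single items are counted while every pair is hashed into a small bucket table, and a pair survives as a candidate only if both items are frequent and its bucket count reaches the support threshold. Collisions may keep false candidates but never drop a truly frequent pair:

```python
from itertools import combinations

def dhp_candidate_pairs(transactions, min_support, n_buckets=7):
    """One DHP-style pass: count single items and hash every 2-itemset
    into a bucket table, then keep a pair as a candidate only if both
    of its items are frequent AND its bucket count can still reach
    min_support."""
    item_count, buckets = {}, [0] * n_buckets
    for t in transactions:
        for item in t:
            item_count[item] = item_count.get(item, 0) + 1
        for pair in combinations(sorted(set(t)), 2):
            buckets[hash(pair) % n_buckets] += 1
    frequent = {i for i, c in item_count.items() if c >= min_support}
    candidates = set()
    for t in transactions:
        for pair in combinations(sorted(set(t)), 2):
            if (pair[0] in frequent and pair[1] in frequent
                    and buckets[hash(pair) % n_buckets] >= min_support):
                candidates.add(pair)
    return candidates

cands = dhp_candidate_pairs([["a", "b"], ["a", "b"], ["a", "c"]],
                            min_support=2)
```

The pruning is conservative: a bucket count is an upper bound on any pair hashed into it, so frequent pairs always pass the filter.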
Evans, J. D.; Valente, E. G.
We are building a scalable, cloud computing-based infrastructure for Web access to near-real-time data products synthesized from the U.S. National Polar-Orbiting Environmental Satellite System (NPOESS) Preparatory Project (NPP) and other geospatial and meteorological data. Given recent and ongoing changes in the NPP and NPOESS programs (now Joint Polar Satellite System), the need for timely delivery of NPP data is urgent. We propose an alternative to a traditional, centralized ground segment, using distributed Direct Broadcast facilities linked to industry-standard Web services by a streamlined processing chain running in a scalable cloud computing environment. Our processing chain, currently implemented on Amazon.com's Elastic Compute Cloud (EC2), retrieves raw data from NASA's Moderate Resolution Imaging Spectroradiometer (MODIS) and synthesizes data products such as Sea-Surface Temperature, Vegetation Indices, etc. The cloud computing approach lets us grow and shrink computing resources to meet large and rapid fluctuations (twice daily) in both end-user demand and data availability from polar-orbiting sensors. Early prototypes have delivered various data products to end-users with latencies between 6 and 32 minutes. We have begun to replicate machine instances in the cloud, so as to reduce latency and maintain near-real-time data access regardless of increased data input rates or user demand, all at quite moderate monthly costs. Our service-based approach (in which users invoke software processes on a Web-accessible server) facilitates access into datasets of arbitrary size and resolution, and allows users to request and receive tailored and composite (e.g., false-color multiband) products on demand. To facilitate broad impact and adoption of our technology, we have emphasized open, industry-standard software interfaces and open source software. Through our work, we envision the widespread establishment of similar, derived, or interoperable systems for
Dai, Jianli; Chen, Yuansha; Lauzardo, Michael
Mycobacteria include a large number of pathogens. Identification to species level is important for diagnoses and treatments. Here, we report the development of a Web-accessible database of the hsp65 locus sequences (http://msis.mycobacteria.info) from 149 out of 150 Mycobacterium species/subspecies. This database can serve as a reference for identifying Mycobacterium species.
Weertman, B. R.; Trabant, C.; Karstens, R.; Suleiman, Y. Y.; Ahern, T. K.; Casey, R.; Benson, R. B.
The IRIS Data Management Center (DMC) has developed a suite of web services that provide access to the DMC's time series holdings, their related metadata, and earthquake catalogs. In addition, services are available to perform simple, on-demand time series processing at the DMC before data are shipped to the user. The primary goal is to provide programmatic access to data and processing services in a manner usable by, and useful to, the research community. The web services are relatively simple to understand and use, and will form the foundation on which future DMC access tools will be built. Based on standard Web technologies, they can be accessed programmatically with a wide range of programming languages (e.g. Perl, Python, Java), with command line utilities such as wget and curl, or with any web browser. We anticipate these services being used for everything from simple command line access, through shell scripts and higher programming languages, to integration within complex data processing software. In addition to improving access to our data by the seismological community, the web services will also make our data more accessible to other disciplines. The web services available from the DMC include ws-bulkdataselect for the retrieval of large volumes of miniSEED data, ws-timeseries for the retrieval of individual segments of time series data in a variety of formats (miniSEED, SAC, ASCII, audio WAVE, and PNG plots) with optional signal processing, ws-station for station metadata in StationXML format, ws-resp for the retrieval of instrument response in RESP format, ws-sacpz for the retrieval of sensor response in the SAC poles-and-zeros convention, and ws-event for the retrieval of earthquake catalogs. To make the services even easier to use, the DMC is developing a library that allows Java programmers to seamlessly retrieve and integrate DMC information into their own programs. The library will handle all aspects of dealing with the services and will parse the returned
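As the abstract notes, such services can be driven from any language with HTTP support. A minimal Python sketch of assembling a request URL follows; the endpoint path and the FDSN-style parameter names (net, sta, loc, cha, start, end, output) are assumptions for illustration, not the service's documented API:

```python
from urllib.parse import urlencode

# Hypothetical endpoint; check the DMC documentation for the real one.
BASE = "http://service.iris.edu/irisws/timeseries/1/query"

def timeseries_query(base_url, **params):
    """Assemble a GET URL for a web service request."""
    return base_url + "?" + urlencode(params)

url = timeseries_query(BASE, net="IU", sta="ANMO", loc="00", cha="BHZ",
                       start="2010-02-27T06:30:00",
                       end="2010-02-27T10:30:00", output="ascii")
# The resulting URL can be fetched with urllib.request.urlopen, wget,
# curl, or any web browser, as the abstract describes.
```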
In large experimental facilities such as KEKB, RIBF, and J-PARC, the accelerators are operated by remote control systems based on EPICS (Experimental Physics and Industrial Control System). One of the advantages of an EPICS-based system is software reusability: client systems can be developed using the Channel Access (CA) protocol, without hardware-dependent protocols, even if the system consists of various kinds of controllers. As a next-generation OPI (Operator Interface) using CA, we developed a server for WebSocket, a new protocol provided by the Internet Engineering Task Force (IETF), using a combination of Node.js and its modules. As a result, we are able to use Web-based client systems not only in the central control room but also with various types of equipment for accelerator operation. (author)
Bader A Alharbi; Thamir H Alshammari; Nathan L Felton; Victor B Zhurkin; Feng Cui
Nucleosome positioning is critical for gene expression and of major biological interest. The high cost of experimentally mapping nucleosomal arrangement signifies the need for computational approaches to predict nucleosome positions at high resolution. Here, we present a web-based application to fulfill this need by implementing two models, the YR and W/S schemes, for the translational and rotational positioning of nucleosomes, respectively. Our methods are based on the sequence-dependent anisotropic bending that dictates how DNA is wrapped around a histone octamer. This application allows users to specify a number of options, such as schemes and parameters for the threading calculation, and provides multiple layout formats. nuMap is implemented in Java/Perl/MySQL and is freely available for public use at http://numap.rit.edu. The user manual, implementation notes, description of the methodology and examples are available at the site.
Elmsheuser, Johannes; The ATLAS collaboration; Serfon, Cedric; Garonne, Vincent; Blunier, Sylvain; Lavorini, Vincenzo; Nilsson, Paul
With the exponential growth of LHC (Large Hadron Collider) data in the years 2010-2012, distributed computing has become the established way to analyse collider data. The ATLAS experiment Grid infrastructure includes more than 130 sites worldwide, ranging from large national computing centres to smaller university clusters. So far the storage technologies and access protocols to the clusters that host this tremendous amount of data vary from site to site. HTTP/WebDAV offers the possibility to use a unified industry standard to access the storage. We present the deployment and testing of HTTP/WebDAV for local and remote data access in the ATLAS experiment for the new data management system Rucio and the PanDA workload management system. Deployment and large scale tests have been performed using the Grid testing system HammerCloud and the ROOT HTTP plugin Davix.
Brown, K E; Newby, K; Caley, M; Danahay, A; Kehal, I
Sexual health service access is fundamental to good sexual health, yet interventions designed to address this have rarely been implemented or evaluated. In this article, pilot evaluation findings for a targeted public health behavior change intervention, delivered via a website and web-app, aiming to increase uptake of sexual health services among 13-19-year olds are reported. A pre-post questionnaire-based design was used. Matched baseline and follow-up data were identified from 148 respondents aged 13-18 years. Outcome measures were self-reported service access, self-reported intention to access services and beliefs about services and service access identified through needs analysis. Objective service access data provided by local sexual health services were also analyzed. Analysis suggests the intervention had a significant positive effect on psychological barriers to and antecedents of service access among females. Males, who reported greater confidence in service access compared with females, significantly increased service access by time 2 follow-up. Available objective service access data support the assertion that the intervention may have led to increases in service access. There is real promise for this novel digital intervention. Further evaluation is planned as the model is licensed to and rolled out by other local authorities in the United Kingdom. PMID:26928566
Berget, Gerd; Herstad, Jo; Sandnes, Frode Eika
Universal design in context of digitalisation has become an integrated part of international conventions and national legislations. A goal is to make the Web accessible for people of different genders, ages, backgrounds, cultures and physical, sensory and cognitive abilities. Political demands for universally designed solutions have raised questions about how it is achieved in practice. Developers, designers and legislators have looked towards the Web Content Accessibility Guidelines (WCAG) for answers. WCAG 2.0 has become the de facto standard for universal design on the Web. Some of the guidelines are directed at the general population, while others are targeted at more specific user groups, such as the visually impaired or hearing impaired. Issues related to cognitive impairments such as dyslexia receive less attention, although dyslexia is prevalent in at least 5-10% of the population. Navigation and search are two common ways of using the Web. However, while navigation has received a fair amount of attention, search systems are not explicitly included, although search has become an important part of people's daily routines. This paper discusses WCAG in the context of dyslexia for the Web in general and search user interfaces specifically. Although certain guidelines address topics that affect dyslexia, WCAG does not seem to fully accommodate users with dyslexia.
The Proton Engineering Frontier Project (PEFP) has developed a 20 MeV proton accelerator and established a distributed control system based on EPICS for sub-system components such as the vacuum unit, beam diagnostics, and the power supply system. The control system includes real-time monitoring and alarm functions. The EPICS software framework was adopted for efficient maintenance of the control system and easy extension of subsystems. In addition, a control system should provide easy access for users and real-time monitoring on a user screen. We have therefore implemented a new web-based monitoring server with several libraries. By integrating the EPICS Channel Access (CA) and database libraries into a DB module, the new IOC web monitoring system makes it possible to monitor sub-system status through the user's internet browser. In this study, we developed a web-based monitoring system using an EPICS IOC (Input Output Controller) with an IBM server
Tired of all the time spent on the phone or sending emails to schedule beam time? Why not make your own schedule when it is convenient for you? The integrated web environment at the NIGMS East Coast Structural Biology Research Facility allows users to schedule their own beam time as if they were making travel arrangements, and provides staff with a set of toolkits for the management of routine tasks. These unique features are accessible through the MediaWiki-powered home pages. Here we describe the main features of this web environment, which have been shown to allow efficient and effective interaction between the users and the facility.
Cristina Livia Iancu
This paper presents a solution for accessing web services in a light-secure way. Because the payload of the messages is not especially sensitive, only the user name and the password used for authentication and authorization into the web services system are protected. The advantage of this solution compared with the commonly used SSL is that it avoids the overhead of the handshake and encryption, providing a faster response to clients. The solution is intended for Windows machines and is developed using the latest stable Microsoft technologies.
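The general idea of protecting only the credentials, rather than encrypting the whole channel, is often realized with a digest scheme. The sketch below is a generic HMAC-plus-nonce illustration under assumed names, not the paper's Microsoft-stack implementation:

```python
import hashlib
import hmac
import os

def make_auth_token(username, password, shared_key):
    """Client side: send a nonce plus an HMAC over the credentials, so
    the password itself never travels in clear text."""
    nonce = os.urandom(16).hex()  # fresh per request, defeats simple replay
    digest = hmac.new(shared_key,
                      f"{username}:{password}:{nonce}".encode(),
                      hashlib.sha256).hexdigest()
    return {"user": username, "nonce": nonce, "digest": digest}

def verify_auth_token(token, password_lookup, shared_key):
    """Service side: recompute the digest from the stored password and
    compare in constant time."""
    expected = hmac.new(
        shared_key,
        f"{token['user']}:{password_lookup[token['user']]}:{token['nonce']}".encode(),
        hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, token["digest"])

key = b"demo-shared-key"  # hypothetical pre-shared key
token = make_auth_token("alice", "s3cret", key)
```

Only the credentials are shielded; the message payload still travels in the clear, matching the trade-off the paper describes.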
GAO Fuxiang; YAO Lan; BAO Shengfei; YU Ge
A dynamic Web application, which can help the departments of an enterprise to collaborate with each other conveniently, is proposed. Several popular design solutions are introduced first. Then, a dynamic Web system is chosen for developing the file access and control system. Finally, the paper gives a detailed account of the design and implementation of the system, covering key problems such as document management and system security. Additionally, the limitations of the system, as well as suggestions for further improvement, are explained.
A new web product data management architecture is presented. The three-tier web architecture and the Simple Object Access Protocol (SOAP) are combined to build a web-based product data management (PDM) system comprising three tiers: the user services tier, the business services tier, and the data services tier. The client service component uses server-side technology, and an Extensible Markup Language (XML) web service, which uses SOAP as the communication protocol, is chosen as the business service component. To illustrate how to build a web-based PDM system using the proposed architecture, a case PDM system with three logical tiers was built. To use the security and central management features of the database, stored procedures are recommended in the data services tier. The business object is implemented as an XML web service so that clients can use standard internet protocols to communicate with it from any platform. In order to serve users with all sorts of browsers, server-side technology and Microsoft ASP.NET were used to create the dynamic user interface.
Davies, Mark; Nowotka, Michał; Papadatos, George; Dedman, Nathan; Gaulton, Anna; Atkinson, Francis; Bellis, Louisa; Overington, John P
ChEMBL is now a well-established resource in the fields of drug discovery and medicinal chemistry research. The ChEMBL database curates and stores standardized bioactivity, molecule, target and drug data extracted from multiple sources, including the primary medicinal chemistry literature. Programmatic access to ChEMBL data has been improved by a recent update to the ChEMBL web services (version 2.0.x, https://www.ebi.ac.uk/chembl/api/data/docs), which exposes significantly more data from the underlying database and introduces new functionality. To complement the data-focused services, a utility service (version 1.0.x, https://www.ebi.ac.uk/chembl/api/utils/docs), which provides RESTful access to commonly used cheminformatics methods, has also been concurrently developed. The ChEMBL web services can be used together or independently to build applications and data processing workflows relevant to drug discovery and chemical biology.
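Programmatic use of such RESTful data services usually reduces to constructing a resource URL and fetching it. The sketch below builds on the base URL quoted in the abstract; the `/molecule/<id>.<fmt>` resource path is an assumption patterned on that base URL, not a documented guarantee:

```python
from urllib.parse import quote

CHEMBL_DATA = "https://www.ebi.ac.uk/chembl/api/data"  # base URL from the abstract

def molecule_url(chembl_id, fmt="json"):
    """Build a RESTful URL for a molecule record; the resource path is
    an illustrative assumption, not the documented API."""
    return f"{CHEMBL_DATA}/molecule/{quote(chembl_id)}.{fmt}"

url = molecule_url("CHEMBL25")
# Fetch with e.g. urllib.request.urlopen(url) to retrieve the record.
```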
Lee, Dong Uk; Won, Byung Chool; Lee, Yong Bum; Kim, Young In; Hahn, Do Hee
The SFR R and D and technology monitoring system, based on MS enterprise project management, was developed for the systematic and effective management of the 'Development of Basic Key Technologies for Gen IV SFR' project, which was performed under the Mid- and Long-term Nuclear R and D Program sponsored by the Ministry of Education, Science and Technology. This system is a tool for project management based on web access; this manual is therefore a detailed guide for Project Web Access (PWA). Section 1 describes common system functions such as Project Server 2007 client connection settings and additional Outlook function settings. Section 2 is a guide for the system administrator. Sections 3 and 4 provide the guide for project management.
This comprehensive guide examines the state of electronic serials cataloging with special attention paid to online capacities. E-Serials Cataloging: Access to Continuing and Integrating Resources via the Catalog and the Web presents a review of the e-serials cataloging methods of the 1990s and discusses the international standards (ISSN, ISBD[ER], AACR2) that are applicable. It puts the concept of online accessibility into historical perspective and offers a look at current applications to consider. Practicing librarians, catalogers and administrators of technical services, cataloging and serv
Digital forensics tools have many potential applications in the curation of digital materials in libraries, archives and museums (LAMs). Open source digital forensics tools can help LAM professionals to extract digital contents from born-digital media and make more informed preservation decisions. Many of these tools have ways to display the metadata of the digital media, but few provide file-level access without having to mount the device or use complex command-line utilities. This paper describes a project to develop software that supports access to the contents of digital media without having to mount or download the entire image. The work examines two approaches to creating this tool: first, a graphical user interface running on a local machine; second, a web-based application running in a web browser. The project incorporates existing open source forensics tools and libraries, including The Sleuth Kit and libewf, along with the Flask web application framework and custom Python scripts to generate web pages supporting disk image browsing.
Scharling, Peter; Hinsby, Klaus; Brennan, Kelsy
Geodata visualization and analysis is founded on proper access to all available data. Throughout several research projects, Earthfx and GEUS have managed to gather relevant data from both national and local databases into one platform. The web server platform, easily accessible on the internet, displays all types of spatially distributed geodata, ranging from geochemistry, geological and geophysical well logs, and surface and airborne geophysics to any type of temporal measurements, such as water levels and trends in groundwater chemistry. Geological cross sections are an essential tool for the geoscientist. Moving beyond plan-view web mapping, GEUS and Earthfx have developed a web server technology that provides the user with the ability to dynamically interact with geologic models developed for various projects in Denmark and in transboundary aquifers across the Danish-German border. The web map interface allows the user to interactively define the location of a multi-point profile, and the selected profile is quickly drawn and illustrated as a slice through the 3D geologic model, including all borehole logs within a user-defined offset from the profile. A key aspect of the web server technology is that the profiles are presented through a fully dynamic interface. Web users can select and interact with borehole logs contained in the underlying database, adjust vertical exaggeration, and add or remove off-section boreholes by dynamically adjusting the offset projection distance. In a similar manner to the profile tool, an interactive water level and water chemistry graphing tool has been integrated into the web service interface. Again, the focus is on providing a level of functionality beyond simple data display. Future extensions to the web interface and functionality are possible, as the web server utilizes the same code engine that is used for desktop geologic model construction and water quality data management. In summary, the GEUS/Earthfx web server tools
“As Bill Gates and Steve Case proclaim the global omnipresence of the Internet, the majority of non-Western nations and 97 per cent of the world's population remain unconnected to the net for lack of money, access, or knowledge. This exclusion of so vast a share of the global population from the Internet sharply contradicts the claims of those who posit the World Wide Web as a ‘universal' medium of egalitarian communication.” (Trend 2001:2)
Boren, Suzanne Austin; Gunlock, Teira L.
The objective of this pilot study was to assess the accessibility of congestive heart failure consumer information on the web. Twenty-seven education trials involving 5589 patients with congestive heart failure were analyzed. Education topics and outcomes were abstracted. Twenty education topics were linked to outcomes. A sample of 15 websites missed 56.7% of the education topics and 61.8% of the technical website characteristics that suggest accuracy, reliability, and timeliness of content.
Luis Joyanes Aguilar; Gloria García Fernández; Oscar Sanjuán Martínez; Edward Rolando Núñez Valdez; Juan Manuel Cueva Lovelle
The significant increase in threats, attacks and vulnerabilities affecting the Web in recent years has driven the development and implementation of tools and methods to ensure security measures for the privacy, confidentiality and data integrity of users and businesses. Under certain circumstances, despite the implementation of these tools, the information being passed does not always flow in a secure manner. Many of these security tools and methods cannot be accessed by peop...
While seismological observatories detect and locate earthquakes based on measurements of the ground motion, they know neither a priori whether an earthquake has been felt by the public nor where it has been felt. Such information is usually gathered by evaluating feedback reported by the public through online forms on the web. However, after a felt earthquake in Switzerland, many people visit the webpages of the Swiss Seismological Service (SED) at the ETH Zurich, and each such visit leaves traces in the logfiles on our web servers. Data mining techniques applied to these logfiles, combined with mining publicly available databases on the internet, open possibilities to obtain previously unknown information about our virtual visitors. In order to provide precise information to authorities and the media, it would be desirable to rapidly know from which locations these web accesses originate. The method 'Salander' (Seismic Activity Linked to Area codes - Nimble Detection of Earthquake Rumbles) is introduced, and it is explained how the IP addresses (each computer or router directly connected to the internet has a unique IP address; an example would be 126.96.36.199) of a sufficient number of our virtual visitors were linked to their geographical areas. This allows us to know unprecedentedly quickly whether and where an earthquake was felt in Switzerland. It is also explained why the Salander method is superior to commercial so-called geolocation products. The corresponding products of the Salander method, animated SalanderMaps, which are routinely generated after each earthquake with a magnitude of M>2 in Switzerland (http://www.seismo.ethz.ch/prod/salandermaps/, available after March 2013), demonstrate how the wavefield of earthquakes propagates through Switzerland and where it was felt. Often, such information is available within less than 60 seconds after origin time, and we always get a clear picture within five minutes after origin time.
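The core idea the abstract describes, linking visitors' IP addresses to geographic areas and counting accesses in a short window after origin time, can be sketched as follows. This is a minimal illustration, not the actual Salander implementation: the prefix-to-area table, the hit threshold, and the log format are all invented for the example.

```python
from collections import Counter
from datetime import datetime, timedelta

# Hypothetical mapping from IP prefixes to Swiss areas; the real Salander
# lookup tables are not public, so these entries are invented.
PREFIX_TO_AREA = {
    "126.96": "Zurich",
    "188.8": "Basel",
    "184.108": "Geneva",
}

def area_of(ip):
    """Return the area linked to an IP address, or None if unknown."""
    prefix = ".".join(ip.split(".")[:2])
    return PREFIX_TO_AREA.get(prefix)

def felt_areas(log, origin_time, window_minutes=5, min_hits=2):
    """Count web accesses per area within a window after origin time.

    `log` is an iterable of (timestamp, ip) pairs from the web server.
    Areas reaching `min_hits` are reported as candidate felt areas.
    """
    end = origin_time + timedelta(minutes=window_minutes)
    hits = Counter()
    for ts, ip in log:
        if origin_time <= ts <= end:
            area = area_of(ip)
            if area:
                hits[area] += 1
    return {a: n for a, n in hits.items() if n >= min_hits}

t0 = datetime(2013, 3, 1, 12, 0, 0)
log = [
    (t0 + timedelta(seconds=30), "126.96.36.199"),
    (t0 + timedelta(seconds=50), "126.96.12.1"),
    (t0 + timedelta(minutes=2), "188.8.131.52"),
    (t0 + timedelta(hours=1), "184.108.40.206"),  # outside the window
]
print(felt_areas(log, t0))  # → {'Zurich': 2}
```

The threshold filters out background traffic, so only areas with a burst of visits shortly after origin time are flagged as "felt".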
McCann, M. P.
Using the STOQS Web Application for Access to in situ Oceanographic Data. Mike McCann, 7 August 2012. With increasing measurement and sampling capabilities of autonomous oceanographic platforms (e.g. gliders, autonomous underwater vehicles, Wavegliders), the need to efficiently access and visualize the data they collect is growing. The Monterey Bay Aquarium Research Institute has designed and built the Spatial Temporal Oceanographic Query System (STOQS) specifically to address this issue. The need for STOQS arises from inefficiencies discovered from using CF-NetCDF point observation conventions for these data. The problem is that access efficiency decreases with decreasing dimension of CF-NetCDF data. For example, the Trajectory Common Data Model feature type has only one coordinate dimension, usually time; positions of the trajectory (depth, latitude, longitude) are stored as non-indexed record variables within the NetCDF file. If client software needs to access data between two depth values or from a bounded geographic area, then the whole data set must be read and the selection made within the client software. This is very inefficient. What is needed is a way to easily select data of interest from an archive given any number of spatial, temporal, or other constraints. Geospatial relational database technology provides this capability. The full STOQS application consists of a Postgres/PostGIS database, Mapserver, and Python-Django running on a server and Web 2.0 technology (jQuery, OpenLayers, Twitter Bootstrap) running in a modern web browser. The web application provides faceted search capabilities allowing a user to quickly drill into the data of interest. Data selection can be constrained by spatial, temporal, and depth selections as well as by parameter value and platform name. The web application layer also provides a REST (Representational State Transfer) Application Programming Interface allowing tools such as the Matlab stoqstoolbox to retrieve data
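The inefficiency the abstract describes can be made concrete with a small sketch: with non-indexed trajectory records, any depth or bounding-box constraint forces the client to scan every record, which is exactly the work a geospatial database index avoids by answering the selection server-side. The record layout and values below are invented for illustration.

```python
# Each record mimics one point of a trajectory feature type, where depth,
# latitude and longitude are plain (non-indexed) record variables.
records = [
    # (time, depth_m, lat, lon, temperature_C)
    (0, 5.0, 36.80, -121.90, 14.2),
    (1, 55.0, 36.81, -121.91, 11.7),
    (2, 120.0, 36.82, -121.95, 9.3),
    (3, 30.0, 36.85, -121.88, 13.0),
]

def select(records, depth_range=None, bbox=None):
    """Client-side filter: must scan every record (the slow path STOQS replaces)."""
    out = []
    for t, depth, lat, lon, temp in records:
        if depth_range and not (depth_range[0] <= depth <= depth_range[1]):
            continue
        if bbox and not (bbox[0] <= lat <= bbox[1] and bbox[2] <= lon <= bbox[3]):
            continue
        out.append((t, depth, lat, lon, temp))
    return out

# Records between 10 m and 60 m depth inside a Monterey Bay bounding box.
hits = select(records, depth_range=(10, 60), bbox=(36.75, 36.90, -122.00, -121.85))
print(len(hits))  # → 2
```

A PostGIS-backed service performs the same selection with spatial indexes, so the cost no longer grows with the full archive size.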
Taneja, Harsh; Wu, Angela Xiao
The dominant understanding of Internet censorship posits that blocking access to foreign-based websites creates isolated communities of Internet users. We question this discourse for its assumption that, given access, people would use all websites. We develop a conceptual framework that integrates access blockage with social structures to explain web users' choices, and argue that users visit websites they find culturally proximate and that access blockage matters only when such sites are blocked...
de Filippis, Tiziana; Rocchi, Leandro; Rapisardi, Elena
The sharing of research data is a new challenge for the scientific community, which may benefit from a large amount of information to solve environmental issues and sustainability in agriculture and urban contexts. A prerequisite for this challenge is the development of an infrastructure that ensures access, management and preservation of data, and technical support for a coordinated and harmonious management of data that, in the framework of Open Data policies, encourages reuse and collaboration. The neogeography and 'citizens as sensors' approaches highlight that new data sources need a new set of tools and practices to collect, validate, categorize, and use/access these "crowdsourced" data, which integrate the data sets produced in the scientific field, thus "feeding" the overall data available for analysis and research. When the scientific community embraces the dimension of collaboration and sharing, access and re-use, in order to adopt the open innovation approach, it should redesign and reshape its data management processes: the challenges of technological and cultural innovation, enabled by Web 2.0 technologies, lead to a scenario where the sharing of structured and interoperable data will constitute the unavoidable building block of a new paradigm of scientific research. In this perspective the Institute of Biometeorology, CNR, whose aim is contributing to the sharing and development of research data, has developed the "SensorWebHub" (SWH) infrastructure to support the scientific activities carried out in several research projects at national and international level. It is designed to manage both mobile and fixed open source meteorological and environmental sensors, in order to integrate the existing agro-meteorological and urban monitoring networks. The proposed architecture uses open source tools to ensure sustainability in the development and deployment of web applications with geographic features and custom analysis, as requested
von Haller, B.; Carena, F.; Carena, W.; Chapeland, S.; Chibante Barroso, V.; Costa, F.; Delort, C.; Dénes, E.; Diviá, R.; Fuchs, U.; Niedziela, J.; Simonetti, G.; Soós, C.; Telesca, A.; Vande Vyvre, P.; Wegrzynek, A.
Faraggi, Eshel; Zhou, Yaoqi; Kloczkowski, Andrzej
We present a new approach for predicting the Accessible Surface Area (ASA) using a General Neural Network (GENN). The novelty of the new approach lies in not using residue mutation profiles generated by multiple sequence alignments as descriptive inputs. Instead we use solely sequential window information and global features such as single-residue and two-residue compositions of the chain. The resulting predictor is both far more efficient than sequence alignment-based predictors and of comparable accuracy to them. Introduction of the global inputs significantly helps achieve this comparable accuracy. The predictor, termed ASAquick, is tested on predicting the ASA of globular proteins and found to perform similarly well for so-called easy and hard cases, indicating generalizability and possible usability for de novo protein structure prediction. The source code and Linux executables for GENN and ASAquick are available from Research and Information Systems at http://mamiris.com, from the SPARKS Lab at http://sparks-lab.org, and from the Battelle Center for Mathematical Medicine at http://mathmed.org. PMID:25204636
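The input encoding the abstract describes, a sequential window of residue identities plus global chain composition in place of alignment profiles, can be sketched as follows. The window size and one-hot encoding here are assumptions chosen for illustration, not ASAquick's actual parameters.

```python
# Sketch of profile-free inputs: per-residue one-hot window + global
# single-residue composition of the whole chain.
AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"
AA_INDEX = {aa: i for i, aa in enumerate(AMINO_ACIDS)}

def one_hot(aa):
    v = [0.0] * len(AMINO_ACIDS)
    if aa in AA_INDEX:          # padding positions stay all-zero
        v[AA_INDEX[aa]] = 1.0
    return v

def composition(seq):
    """Global single-residue composition of the chain."""
    n = len(seq)
    return [seq.count(aa) / n for aa in AMINO_ACIDS]

def features(seq, pos, half_window=2):
    """Window of one-hot residues around `pos`, plus global composition."""
    feats = []
    for i in range(pos - half_window, pos + half_window + 1):
        aa = seq[i] if 0 <= i < len(seq) else "-"   # '-' pads chain ends
        feats.extend(one_hot(aa))
    return feats + composition(seq)

seq = "MKTAYIAKQR"
x = features(seq, 0)
# 5 window positions x 20 one-hot dims + 20 composition dims = 120 features
print(len(x))  # → 120
```

These feature vectors would then be fed to the neural network; the point of the design is that nothing here requires a multiple sequence alignment.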
Asnier Góngora R.
The article describes the creation of a usability lab where various types of tests are performed using static and dynamic tools for evaluating the characteristics of usability, accessibility and communicability by indicators in the software testing process with the user's presence. It also addresses the current situation in Cuba on the issue of testing these characteristics and the impact it could bring to development teams. In addition, it presents an analysis of the results of applying a tool (a checklist) to multiple Web applications in tests conducted at the National Center for Software Quality of Cuba (CALISOFT). We also present a set of best practices that support the development of web applications suited to the user.
Dowling, Nicki A; Rodda, Simone N; Lubman, Dan I; Jackson, Alun C
The 'concerned significant others' (CSOs) of people with problem gambling frequently seek professional support. However, there is surprisingly little research investigating the characteristics or help-seeking behaviour of these CSOs, particularly for web-based counselling. The aims of this study were to describe the characteristics of CSOs accessing the web-based counselling service (real time chat) offered by the Australian national gambling web-based counselling site, explore the most commonly reported CSO impacts using a new brief scale (the Problem Gambling Significant Other Impact Scale: PG-SOIS), and identify the factors associated with different types of CSO impact. The sample comprised all 366 CSOs accessing the service over a 21 month period. The findings revealed that the CSOs were most often the intimate partners of problem gamblers and that they were most often females aged under 30 years. All CSOs displayed a similar profile of impact, with emotional distress (97.5%) and impacts on the relationship (95.9%) reported to be the most commonly endorsed impacts, followed by impacts on social life (92.1%) and finances (91.3%). Impacts on employment (83.6%) and physical health (77.3%) were the least commonly endorsed. There were few significant differences in impacts between family members (children, partners, parents, and siblings), but friends consistently reported the lowest impact scores. Only prior counselling experience and Asian cultural background were consistently associated with higher CSO impacts. The findings can serve to inform the development of web-based interventions specifically designed for the CSOs of problem gamblers. PMID:24813552
Chavanon, O; Barbe, C; Troccaz, J; Carrat, L; Ribuot, C; Noirclerc, M; Maitrasse, B; Blin, D
In the field of percutaneous access to soft tissues, our project was to improve classical pericardiocentesis by performing accurate guidance to a selected target, according to a model of the pericardial effusion acquired through three-dimensional (3D) data recording. Required hardware is an echocardiographic device and a needle, both linked to a 3D localizer, and a computer. After acquiring echographic data, a modeling procedure allows definition of the optimal puncture strategy, taking into consideration the mobility of the heart, by determining a stable region, whatever the period of the cardiac cycle. A passive guidance system is then used to reach the planned target accurately, generally a site in the middle of the stable region. After validation on a dynamic phantom and a feasibility study in dogs, an accuracy and reliability analysis protocol was realized on pigs with experimental pericardial effusion. Ten consecutive successful punctures using various trajectories were performed on eight pigs. Nonbloody liquid was collected from pericardial effusions in the stable region (5 to 9 mm wide) within 10 to 15 minutes from echographic acquisition to drainage. Accuracy of at least 2.5 mm was demonstrated. This study demonstrates the feasibility of computer-assisted pericardiocentesis. Beyond the simple improvement of the current technique, this method could be a new way to reach the heart or a new tool for percutaneous access and image-guided puncture of soft tissues. Further investigation will be necessary before routine human application.
Craig D. Howard
While Web 2.0 technologies provide motivated, self-access learners with unprecedented opportunities for language learning, Web 2.0 designs are not of universally equal value for learning. This article reports on research carried out at Indiana University Bloomington using an empirical method to select websites for self-access language learning. Two questions related to Web 2.0 recommendations were asked: (1) How do recommended Web 2.0 sites rank in terms of interactivity features? (2) How likely is a learner to find highly interactive sites on their own? A list of 20 sites used for supplemental and self-access activities in language programs at five universities was compiled and provided the initial data set. Purposive sampling criteria revealed 10 sites truly represented Web 2.0 design. To address the first question, a feature analysis was applied (Herring, The international handbook of internet research. Berlin: Springer, 2008). An interactivity framework was developed from previous research to identify Web 2.0 design features, and sites were ranked according to feature quantity. The method used to address the second question was an interconnectivity analysis that measured direct and indirect interconnectivity within Google results. Highly interactive Web 2.0 sites were not prominent in Google search results, nor were they often linked via third-party sites. It was determined that, using typical keywords or searching via blogs and recommendation sites, self-access learners were highly unlikely to find the most promising Web 2.0 sites for language learning. A discussion of the role of the learning advisor in guiding Web 2.0 collaborative self-access, as well as some strategic shortcuts to quick analysis, concludes the article.
Price, Matthew; Yuen, Erica K; Davidson, Tatiana M; Hubel, Grace; Ruggiero, Kenneth J
Although Web-based treatments have significant potential to assess and treat difficult-to-reach populations, such as trauma-exposed adolescents, the extent that such treatments are accessed and used is unclear. The present study evaluated the proportion of adolescents who accessed and completed a Web-based treatment for postdisaster mental health symptoms. Correlates of access and completion were examined. A sample of 2,000 adolescents living in tornado-affected communities was assessed via structured telephone interview and invited to a Web-based treatment. The modular treatment addressed symptoms of posttraumatic stress disorder, depression, and alcohol and tobacco use. Participants were randomized to experimental or control conditions after accessing the site. Overall access for the intervention was 35.8%. Module completion for those who accessed ranged from 52.8% to 85.6%. Adolescents with parents who used the Internet to obtain health-related information were more likely to access the treatment. Adolescent males were less likely to access the treatment. Future work is needed to identify strategies to further increase the reach of Web-based treatments to provide clinical services in a postdisaster context. PMID:25622071
Kadlec, J.; Ames, D. P.
The aim of the presented work is creating a freely accessible, dynamic and re-usable snow cover map of the world by combining snow extent and snow depth datasets from multiple sources. The examined data sources are: remote sensing datasets (MODIS, CryoLand), weather forecasting model outputs (OpenWeatherMap, forecast.io), ground observation networks (CUAHSI HIS, GSOD, GHCN, and selected national networks), and user-contributed snow reports on social networks (cross-country and backcountry skiing trip reports). For each type of dataset, an interface and an adapter are created. Each adapter supports queries by area, time range, or a combination of area and time range. The combined dataset is published as an online snow cover mapping service. This web service lowers the learning curve that is required to view, access, and analyze snow depth maps and snow time-series. All data published by this service are licensed as open data, encouraging the re-use of the data in customized applications in climatology, hydrology, sports and other disciplines. The initial version of the interactive snow map is on the website snow.hydrodata.org. This website supports view by time and view by site. In view by time, the spatial distribution of snow for a selected area and time period is shown. In view by site, the time-series charts of snow depth at a selected location are displayed. All snow extent and snow depth map layers and time series are accessible and discoverable through internationally approved protocols including WMS, WFS, WCS, WaterOneFlow and WaterML. Therefore they can also be easily added to GIS software or 3rd-party web map applications. The central hypothesis driving this research is that the integration of user-contributed data and/or social-network-derived snow data together with other open access data sources will result in more accurate and higher resolution - and hence more useful snow cover maps than satellite data or government agency produced data by
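The per-source adapter design the abstract describes, one adapter per dataset with a common query interface by area and time range, can be sketched as follows. The class names, record shape, and sample data are invented for illustration; they are not the snow.hydrodata.org implementation.

```python
from dataclasses import dataclass

@dataclass
class SnowReport:
    lat: float
    lon: float
    day: str        # ISO date string, so lexicographic order is date order
    depth_cm: float

class SnowSourceAdapter:
    """Wraps one data source behind the common query interface."""
    def __init__(self, reports):
        self._reports = reports

    def query(self, bbox=None, time_range=None):
        """bbox = (min_lat, max_lat, min_lon, max_lon); time_range = (start, end)."""
        out = []
        for r in self._reports:
            if bbox and not (bbox[0] <= r.lat <= bbox[1] and bbox[2] <= r.lon <= bbox[3]):
                continue
            if time_range and not (time_range[0] <= r.day <= time_range[1]):
                continue
            out.append(r)
        return out

def combined_query(adapters, **kw):
    """Merge results from every registered source into one dataset."""
    return [r for a in adapters for r in a.query(**kw)]

# One adapter per source type: e.g. a ground network and crowdsourced reports.
ground = SnowSourceAdapter([SnowReport(60.1, 10.5, "2014-01-10", 42.0)])
crowd = SnowSourceAdapter([SnowReport(60.2, 10.6, "2014-01-11", 55.0),
                           SnowReport(45.0, 7.0, "2014-01-11", 90.0)])
hits = combined_query([ground, crowd], bbox=(59.0, 61.0, 10.0, 11.0))
print(len(hits))  # → 2
```

A new data source then only requires a new adapter; the mapping service and its queries stay unchanged.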
Baker, Stewart C.
This article argues that accessibility and universality are essential to good Web design. A brief review of library science literature sets the issue of Web accessibility in context. The bulk of the article explains the design philosophies of progressive enhancement and responsive Web design, and summarizes recent updates to WCAG 2.0, HTML5, CSS…
Scholl, I.; Girard, Y.; Bykowski, A.
This paper presents the architecture of a Java web-based graphical interface dedicated to access to the SOHO data archive. This application allows local and remote users to search the SOHO data catalog and retrieve SOHO data files from the archive. It has been developed at MEDOC (Multi-Experiment Data and Operations Centre), located at the Institut d'Astrophysique Spatiale (Orsay, France), which is one of the European archives for the SOHO data. This development is part of a joint effort between ESA, NASA and IAS to implement long-term archive systems for the SOHO data. The software architecture is built as a client-server application using the Java language and SQL above a set of components such as an HTTP server, a JDBC gateway, a RDBMS server, a data server and a Web browser. Since HTML pages and CGI scripts are not powerful enough to allow user interaction during a multi-instrument catalog search, this requirement enforces the choice of Java as the main language. We also discuss performance issues, security problems and portability across different Web browsers and operating systems.
Eberle, Jonas; Hüttich, Christian; Schmullius, Christiane
Time series information is widely used in environmental change analyses and is also essential information for stakeholders and governmental agencies. However, a challenging issue is the processing of raw data and the execution of time series analysis. In most cases, data has to be found, downloaded, processed and even converted into the correct data format prior to executing time series analysis tools. Data has to be prepared for use in different existing software packages. Several packages like TIMESAT (Jönnson & Eklundh, 2004) for phenological studies, BFAST (Verbesselt et al., 2010) for breakpoint detection, and GreenBrown (Forkel et al., 2013) for trend calculations are provided as open-source software and can be executed from the command line. This is needed if data pre-processing and time series analysis are to be automated. To bring both parts, automated data access and data analysis, together, a web-based system was developed to provide access to satellite-based time series data and to the above-mentioned analysis tools. Users of the web portal are able to specify a point or a polygon and an available dataset (e.g., Vegetation Indices and Land Surface Temperature datasets from NASA MODIS). The data is then processed and provided as a time series CSV file. Afterwards the user can select an analysis tool to be executed on the server. The final data (CSV, plot images, GeoTIFFs) is visualized in the web portal and can be downloaded for further usage. As a first use case, we built up a complementary web-based system with NASA MODIS products for Germany and parts of Siberia based on the Earth Observation Monitor (www.earth-observation-monitor.net). The aim of this work is to make time series analysis with existing tools as easy as possible so that users can focus on the interpretation of the results. References: Jönnson, P. and L. Eklundh (2004). TIMESAT - a program for analysing time-series of satellite sensor data. Computers and Geosciences 30
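The server-side analysis step the abstract describes, running a trend tool on an extracted time series, can be sketched with an ordinary least-squares fit. This is a deliberately simple stand-in for the more sophisticated trend methods in GreenBrown or BFAST; the sample series is invented.

```python
def linear_trend(values):
    """Return (slope, intercept) of an OLS line fit to equally spaced values."""
    n = len(values)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(values) / n
    sxx = sum((x - mean_x) ** 2 for x in xs)
    sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, values))
    slope = sxy / sxx
    return slope, mean_y - slope * mean_x

# A short synthetic NDVI-like series with a slight upward drift,
# standing in for one pixel's values extracted from the portal's CSV.
series = [0.30, 0.32, 0.31, 0.34, 0.35, 0.37]
slope, intercept = linear_trend(series)
print(round(slope, 3))  # → 0.013
```

In the portal workflow this computation happens after the MODIS extraction step, so the user only sees the fitted trend and plots, not the raw pre-processing.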
Mukhopadhyay, Debajyoti; Saha, Dwaipayan; Kim, Young-Chon
The growth of the World Wide Web has emphasized the need for improvement in user latency. One of the techniques used for improving user latency is caching; another is Web prefetching. Approaches that bank solely on caching offer limited performance improvement because it is difficult for caching to handle the large number of increasingly diverse files. Studies have been conducted on prefetching models based on decision trees, Markov chains, and path analysis. However, the increased use of dynamic pages and frequent changes in site structure and user access patterns have limited the efficacy of these static techniques. In this paper, we propose a methodology to cluster related pages into different categories based on access patterns. Additionally, we use page ranking to build up our prediction model at the initial stages, when users have not yet started sending requests. In this way we try to overcome the problem of maintaining the huge databases which are needed in case of log-based techn...
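To make concrete the kind of log-based prediction model the paper contrasts its approach with, here is a minimal first-order Markov chain over page accesses: transition counts are learned from session logs and the most frequent successor is prefetched. The session data is invented for illustration.

```python
from collections import defaultdict, Counter

def build_model(sessions):
    """Count page-to-page transitions across user sessions."""
    transitions = defaultdict(Counter)
    for session in sessions:
        for cur, nxt in zip(session, session[1:]):
            transitions[cur][nxt] += 1
    return transitions

def predict_next(model, page):
    """Most frequent successor of `page`, or None if unseen: the prefetch target."""
    if page not in model:
        return None
    return model[page].most_common(1)[0][0]

sessions = [
    ["/home", "/products", "/cart"],
    ["/home", "/products", "/specs"],
    ["/home", "/about"],
    ["/products", "/cart"],
]
model = build_model(sessions)
print(predict_next(model, "/home"))      # → /products
print(predict_next(model, "/products"))  # → /cart
```

The transition table grows with the site and the log, which is the storage problem the paper's clustering and page-ranking approach aims to avoid, and the model has no prediction at all before requests arrive.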
Badidi, E; Lang, B F; Burger, G
FLOSYS is an interactive web-accessible bioinformatics workflow system designed to assist biologists in multi-step data analyses. FLOSYS allows the user to create complex analysis pathways (protocols) graphically, similar to drawing a flowchart: icons representing particular bioinformatics tools are dragged and dropped onto a canvas and lines connecting those icons are drawn to specify the relationships between the tools. In addition, FLOSYS permits the user to select input data, execute the protocol and store the results in a personal workspace. The three-tier architecture of FLOSYS has been implemented in Java and uses a relational database system together with new technologies for distributed and web computing such as CORBA, RMI, JSP and JDBC. The prototype of FLOSYS, which is part of the bioinformatics workbench AnaBench, is accessible on-line at http://malawimonas.bcm.umontreal.ca:8091/anabench. The entire package is available on request to academic groups who wish to have a customized local analysis environment for research or teaching.
A. D. Zarrabi
PURPOSE: To design a simple, cost-effective system for gaining rapid and accurate calyceal access during percutaneous nephrolithotomy (PCNL). MATERIALS AND METHODS: The design consists of a low-cost, light-weight, portable mechanical gantry with a needle-guiding device. Using C-arm fluoroscopy, two images of the contrast-filled renal collecting system are obtained: at 0 degrees (perpendicular to the kidney) and 20 degrees. These images are relayed to a laptop computer containing the software and graphic user interface for selecting the targeted calyx. The software provides numerical settings for the 3 axes of the gantry, which are used to position the needle-guiding device. The needle is advanced through the guide to the depth calculated by the software, thus puncturing the targeted calyx. Testing of the system was performed on two target types: (1) radiolucent plastic tubes the approximate size of a renal calyx (5 or 10 mm in diameter, 30 mm in length); and (2) foam-occluded, contrast-filled porcine kidneys. RESULTS: Tests using target type 1 with 10 mm diameter (n = 14) and 5 mm diameter (n = 7) tubes resulted in a 100% targeting success rate, with a mean procedure duration of 10 minutes. Tests using target type 2 (n = 2) were both successful, with accurate puncturing of the selected renal calyx, and a mean procedure duration of 15 minutes. CONCLUSIONS: The mechanical gantry system described in this paper is low-cost, portable, light-weight, and simple to set up and operate. C-arm fluoroscopy is limited to two images, thus reducing radiation exposure significantly. Testing of the system showed an extremely high degree of accuracy in gaining precise access to a targeted renal calyx.
Ebenezer, Catherine; Bath, Peter A.; Pinfield, Stephen
1. Introduction The research project as a whole examines the factors that bear on the accessibility of online published professional information within the National Health Service (NHS) in England. The poster focuses on one aspect of this, control of access to the World Wide Web within NHS organisations. The overall aim of this study is to investigate the apparent disjunction between stated policy regarding evidence-based practice and professional learning, and actual IT (information te...
Abstract Background In the last five years large online resources of human variability have appeared, notably HapMap, Perlegen and the CEPH foundation. These databases of genotypes with population information act as catalogues of human diversity, and are widely used as reference sources for population genetics studies. Although many useful conclusions may be extracted by querying databases individually, the lack of flexibility for combining data from within and between each database does not allow the calculation of key population variability statistics. Results We have developed a novel tool for accessing and combining large-scale genomic databases of single nucleotide polymorphisms (SNPs) in widespread use in human population genetics: SPSmart (SNPs for Population Studies). A fast pipeline creates and maintains a data mart from the most commonly accessed databases of genotypes containing population information: data is mined, summarized into the standard statistical reference indices, and stored into a relational database that currently handles as many as 4 × 10^9 genotypes and that can be easily extended to new database initiatives. We have also built a web interface to the data mart that allows the browsing of underlying data indexed by population and the combining of populations, allowing intuitive and straightforward comparison of population groups. All the information served is optimized for web display, and most of the computations are already pre-processed in the data mart to speed up the data browsing and any computational treatment requested. Conclusion In practice, SPSmart allows populations to be combined into user-defined groups, while multiple databases can be accessed and compared in a few simple steps from a single query. It performs the queries rapidly and gives straightforward graphical summaries of SNP population variability through visual inspection of allele frequencies outlined in standard pie-chart format. In addition, full
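The population-combining step the abstract describes can be sketched as follows: per-population allele counts for a SNP are pooled over a user-defined group, and the group's allele frequencies are derived from the summed counts. The SNP, population labels, and counts below are invented; this is not SPSmart's actual data model.

```python
def combine_populations(counts_by_pop, group):
    """Sum per-allele counts over the populations named in `group`."""
    combined = {}
    for pop in group:
        for allele, n in counts_by_pop[pop].items():
            combined[allele] = combined.get(allele, 0) + n
    return combined

def allele_frequencies(counts):
    """Convert pooled allele counts into frequencies."""
    total = sum(counts.values())
    return {allele: n / total for allele, n in counts.items()}

# Allele counts for one hypothetical SNP in three reference populations.
counts_by_pop = {
    "CEU": {"A": 80, "G": 40},
    "YRI": {"A": 30, "G": 90},
    "JPT": {"A": 60, "G": 60},
}
group = ["CEU", "YRI"]  # user-defined population group
freqs = allele_frequencies(combine_populations(counts_by_pop, group))
print(round(freqs["A"], 3))  # → 0.458
```

Pooling raw counts rather than averaging per-population frequencies keeps groups of unequal sample size correctly weighted, which is what makes cross-database combination statistically meaningful.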
刘建伟; 李斌; 雷宏东
ASP is a popular Web application development technology. The key to ASP access to a Web database is establishing a connection to the database. This article describes the basic working principles of ASP and the ADO database access component, and gives a detailed description of several methods for connecting to an Access database through ADO.
This thesis reports on the user-interface design guidelines for usability and accessibility in their connection to human-computer interaction and their implementation in the web design. The goal is to study the theoretical background of the design rules and apply them in designing a real-world website. The analysis of Jakobson’s communication theory applied in the web design and its implications in the design guidelines of visibility, affordance, feedback, simplicity, structure, consisten...
Wagner, Michael M.; Levander, John D.; Brown, Shawn; Hogan, William R.; Millett, Nicholas; Hanna, Josh
This paper describes the Apollo Web Services and Apollo-SV, its related ontology. The Apollo Web Services give an end-user application a single point of access to multiple epidemic simulators. An end user can specify an analytic problem—which we define as a configuration and a query of results—exactly once and submit it to multiple epidemic simulators. The end user represents the analytic problem using a standard syntax and vocabulary, not the native languages of the simulators. We have demonstrated the feasibility of this design by implementing a set of Apollo services that provide access to two epidemic simulators and two visualizer services. PMID:24551417
Kunszt, Peter Z; Murri, Riccardo; Tschopp, Valery
This paper describes the design and implementation of GridCertLib, a Java library leveraging a Shibboleth-based authentication infrastructure and the SLCS online certificate signing service to provide short-lived X.509 certificates and Grid proxies. The main use case envisioned for GridCertLib is to provide seamless and secure access to Grid/X.509 certificates and proxies in web portals: when a user logs in to the portal using SWITCHaai Shibboleth authentication, GridCertLib can automatically obtain a Grid/X.509 certificate from the SLCS service and generate a VOMS proxy from it. We give an overview of the architecture of GridCertLib and briefly describe its programming model. Applications to common deployment scenarios are outlined, and we report on our practical experience integrating GridCertLib into a portal for Bioinformatics applications, based on the popular P-GRADE software.
Bykowski, J L; Alora, M B; Dover, J S; Arndt, K A
The World Wide Web has provided the public with easy and affordable access to a vast range of information. However, claims may be unsubstantiated and misleading. The purpose of this study was to use cutaneous laser surgery as a model to assess the availability and reliability of Web sites and to evaluate this resource for the quality of patient and provider education. Three commercial methods of searching the Internet were used, identifying nearly 500,000 possible sites. The first 100 sites listed by each search engine (a total of 300 sites) were compared. Of these, 126 were listed repeatedly within a given retrieval method, whereas only 3 sites were identified by all 3 search engines. After elimination of duplicates, 40 sites were evaluated for content and currency of information. The most common features included postoperative care suggestions, options for pain management or anesthesia, a description of the way in which lasers work, and the types of lasers used for different procedures. Potential contraindications to laser procedures were described on fewer than 30% of the sites reviewed. None of the sites contained substantiation of claims or referrals to peer-reviewed publications or research. Because of duplication and the prioritization systems of search engines, the ease of finding sites did not correlate with the quality of the site's content. Our findings show that advertisements for services exceed useful information.
Braga, Rodolpho C; Alves, Vinicius M; Silva, Meryck F B; Muratov, Eugene; Fourches, Denis; Lião, Luciano M; Tropsha, Alexander; Andrade, Carolina H
The blockage of the hERG K(+) channels is closely associated with lethal cardiac arrhythmia. The notorious ligand promiscuity of this channel earmarked hERG as one of the most important antitargets to be considered in the early stages of the drug development process. Herein we report the development of an innovative and freely accessible web server for early identification of putative hERG blockers and non-blockers in chemical libraries. We have collected the largest publicly available curated hERG dataset of 5,984 compounds. We succeeded in developing robust and externally predictive binary (CCR≈0.8) and multiclass models (accuracy≈0.7). These models are available as a web service freely available to the public at http://labmol.farmacia.ufg.br/predherg/. Three outcomes are available to users: prediction by the binary model, prediction by the multi-class model, and probability maps of atomic contribution. Pred-hERG will be continuously updated and upgraded as new information becomes available. PMID:27490970
The past decade has seen an 'explosion' in electronically archived evidence available on the Internet. Pre-appraised web-based evidence, such as that available at the Cochrane Library and, more recently, the Cancer Library, is now easily accessible to both clinicians and patients. A postal survey was recently sent to all Radiation Oncology registrars in Australia, New Zealand and Singapore. The aim of the survey was to ascertain previous training in literature searching and critical appraisal, the extent of Internet access, use of web-based evidence, and awareness of databases including the Cochrane Library. Sixty-six (66) out of ninety (90) registrars responded (73% response rate). Fifty-five percent of respondents had previously undertaken some form of training related to literature searching or critical appraisal. The majority (68%) felt confident in performing a literature search, although 80% of respondents indicated interest in obtaining further training. The majority (68%) reported accessing web-based evidence for literature searching in the previous week, and 92% in the previous month. Nearly all respondents (89%) accessed web-based evidence at work. Most (94%) were aware of the Cochrane Library, with 48% of respondents having used this database. Sixty-eight percent were aware of the Cancer Library. In 2000 a similar survey revealed that only 68% of registrars were aware of the Cochrane Library and 30% had used it. These findings reveal almost universal access to the Internet and use of web-based evidence amongst Radiation Oncology registrars. There has been a marked increase in awareness and use of the Cochrane Library, with the majority also aware of the recently introduced Cancer Library.
Kannan, Jayanthkumar; Chun, Byung-Gon
This paper introduces the notion of a secure data capsule, which refers to an encapsulation of sensitive user information (such as a credit card number) along with code that implements an interface suitable for the use of such information (such as charging for purchases) by a service (such as an online merchant). In our capsule framework, users provide their data in the form of such capsules to web services rather than as raw data. Capsules can be deployed in a variety of ways, on a trusted third party, on the user's own computer, or at the service itself, through the use of a variety of hardware or software modules, such as a virtual machine monitor or trusted platform module: the only requirement is that the deployment mechanism must ensure that the user's data is only accessed via the interface sanctioned by the user. The framework further allows a user to specify policies regarding which services or machines may host her capsule, which parties are allowed to access the interface, and with what parameter...
Xavier Suresh R
Background: Many important agricultural traits such as weight gain, milk fat content and intramuscular fat (marbling) in cattle are quantitative traits. Most of the information on these traits has not previously been integrated into a genomic context. Without such integration, application of these data to agricultural enterprises will remain slow and inefficient. Our goal was to populate a genomic database with data mined from the bovine quantitative trait literature and to make these data available in a genomic context to researchers via a user-friendly query interface. Description: The QTL (Quantitative Trait Locus) data and related information for bovine QTL are gathered from published work and from existing databases. An integrated database schema was designed and the database (MySQL) populated with the gathered data. The bovine QTL Viewer was developed for the integration of QTL data available for cattle. The tool consists of an integrated database of bovine QTL and the QTL viewer to display QTL and their chromosomal positions. Conclusion: We present a web-accessible, integrated database of bovine (dairy and beef cattle) QTL for use by animal geneticists. The viewer and database are of general applicability to any livestock species for which there are public QTL data. The viewer can be accessed at http://bovineqtl.tamu.edu.
Niu, Lu; Luo, Dan; Liu, Ying; Xiao, Shuiyuan
Objective: The present study was designed to assess the quality of Chinese-language Internet-based information on HIV/AIDS. Methods: We entered the following search terms, in Chinese, into Baidu and Sogou: "HIV/AIDS", "symptoms", and "treatment", and evaluated the first 50 hits of each query using the Minervation validation instrument (LIDA tool) and the DISCERN instrument. Results: Of the 900 hits identified, 85 websites were included in this study. The overall score on the LIDA tool was 63.7%; the mean scores for accessibility, usability, and reliability were 82.2%, 71.5%, and 27.3%, respectively. For the top 15 sites according to the LIDA score, the mean DISCERN score was 43.1 (95% confidence interval (CI): 37.7–49.5). Noncommercial websites showed higher DISCERN scores than commercial websites, whereas commercial websites were more likely than noncommercial websites to be found in the first 20 links returned by each search engine. Conclusions: In general, HIV/AIDS-related Chinese-language websites have poor reliability, although their accessibility and usability are fair. In addition, the treatment information presented on Chinese-language websites is far from sufficient. There is an imperative need for professionals and specialized institutes to improve the comprehensiveness of web-based information related to HIV/AIDS. PMID:27556475
Metacognitive strategies are regarded as the most advanced of all learning strategies. This study focuses on the application of metacognitive strategies to English listening in the web-based self-access learning environment (WSLE) and tries to provide a reference for students and teachers in vocational colleges.
This work presents the ScalaBLAST Web Application (SWA), a web based application implemented using the PHP script language, MySQL DBMS, and Apache web server under a GNU/Linux platform. SWA is an application built as part of the Data Intensive Computer for Complex Biological Systems (DICCBS) project at the Pacific Northwest National Laboratory (PNNL). SWA delivers accelerated throughput of bioinformatics analysis via high-performance computing through a convenient, easy-to-use web interface. This approach greatly enhances emerging fields of study in biology such as ontology-based homology, and multiple whole genome comparisons which, in the absence of a tool like SWA, require a heroic effort to overcome the computational bottleneck associated with genome analysis. The current version of SWA includes a user account management system, a web based user interface, and a backend process that generates the files necessary for the Internet scientific community to submit a ScalaBLAST parallel processing job on a dedicated cluster
Arencibia-Jorge, Ricardo; Chinchilla-Rodriguez, Zaida; Rousseau, Ronald; Paris, Soren W
This communication presents a simple exercise aimed at solving a singular problem: the retrieval of extremely large numbers of items through the Web of Science interface. As is known, the Web of Science interface allows a user to obtain at most 100,000 items from a single query. But what about queries that return more than 100,000 items? The exercise developed one possible way to achieve this objective. The case study is the retrieval of the entire scientific production of the United States in a specific year. Different sections of items were retrieved using the Source field of the database. Then, a simple Boolean statement was created to eliminate overlap and improve the accuracy of the search strategy. The importance of teamwork in the development of advanced search strategies was noted.
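The strategy above, partition the corpus by a field so each slice fits under the interface cap, then union the slices while removing overlap, can be sketched as follows. This is a hedged illustration with invented record IDs and a toy partition, not the authors' actual search statements:

```python
# Sketch: records are (record_id, source) pairs; one "query" is issued
# per source slice, and a seen-set plays the role of the Boolean NOT
# statement that strips items already retrieved by earlier slices.
def retrieve_all(records, slice_cap=100_000):
    by_source = {}
    for rec_id, source in records:
        by_source.setdefault(source, []).append(rec_id)
    seen, result = set(), []
    for source in sorted(by_source):
        if len(by_source[source]) > slice_cap:
            raise ValueError(f"slice for {source!r} still exceeds the cap")
        for rec_id in by_source[source]:   # simulated per-slice query
            if rec_id not in seen:         # de-duplicate across slices
                seen.add(rec_id)
                result.append(rec_id)
    return result

recs = [(1, "A"), (2, "A"), (2, "B"), (3, "B")]  # record 2 appears in both slices
print(retrieve_all(recs))  # [1, 2, 3]
```

The key design point mirrors the abstract: correctness depends on the partition field covering the corpus, and accuracy on the de-duplication step between overlapping slices.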
Singh, Kulwinder; Park, Dong-Won
We propose an architecture which enables people to enquire about information available in directory services by voice, using regular phones. We implement a Virtual User Agent (VUA) which mediates between the human user and a business directory service. The system enables the user to search for the nearest clinic, gas station by price, motel by price, food/coffee, banks/ATMs, etc., and to fix an appointment, or to automatically establish a call between the user and the business party if the user prefers. The user also has the option to receive appointment confirmation by phone, SMS, or e-mail. The VUA is accessible via a toll-free DID (Direct Inward Dialing) number from any phone, by anyone, anywhere, anytime. We use the Euclidean formula for distance measurement, since shorter geodesic distances (on the Earth's surface) correspond to shorter Euclidean distances (measured by a straight line through the Earth). Our proposed architecture uses the Atom XML syndication format for data integration, VoiceXML for creating the voice user interface (VUI) and CCXML for controlling the call components. We also provide an efficient algorithm for parsing the Atom feeds which supply data to the system. Moreover, we describe a cost-effective way of providing global access to the VUA based on Asterisk (an open-source IP-PBX). We also provide some information on how our system can be integrated with GPS for locating the user's coordinates and thereby enhancing the system response. Additionally, the system has a mechanism for validating the phone numbers in its database, and it updates the numbers and other information, such as the daily prices of gas and motels, automatically using an Atom-based feed. Currently, commercial directory services (for example, 411) do not have facilities to update listings in the database automatically, which is why callers often get out-of-date phone numbers or other information. Our system can be integrated very easily
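The distance argument above, that chord (straight-line) distance through the Earth ranks points in the same order as geodesic distance, can be sketched as follows. This is an illustrative reconstruction, not the paper's code; the mean Earth radius value is an assumption:

```python
import math

R = 6371.0  # mean Earth radius in km (illustrative constant)

def chord_distance(lat1, lon1, lat2, lon2):
    """Euclidean (chord) distance between two lat/lon points, computed
    by converting each to 3D Cartesian coordinates. Chord distance is
    monotone in geodesic distance, so it is safe for nearest-ranking."""
    def to_xyz(lat, lon):
        la, lo = math.radians(lat), math.radians(lon)
        return (R * math.cos(la) * math.cos(lo),
                R * math.cos(la) * math.sin(lo),
                R * math.sin(la))
    return math.dist(to_xyz(lat1, lon1), to_xyz(lat2, lon2))

# Pole to pole: the chord passes through the Earth's centre, length 2R.
print(round(chord_distance(90, 0, -90, 0), 1))  # 12742.0
```

Because only the ordering of candidates matters for "nearest clinic" queries, the cheaper chord formula avoids the trigonometry of a full great-circle (haversine) computation.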
Floden, Evan W; Tommaso, Paolo D; Chatzou, Maria; Magis, Cedrik; Notredame, Cedric; Chang, Jia-Ming
The PSI/TM-Coffee web server performs multiple sequence alignment (MSA) of proteins by combining homology extension with a consistency based alignment approach. Homology extension is performed with Position Specific Iterative (PSI) BLAST searches against a choice of redundant and non-redundant databases. The main novelty of this server is to allow databases of reduced complexity to rapidly perform homology extension. This server also gives the possibility to use transmembrane proteins (TMPs) reference databases to allow even faster homology extension on this important category of proteins. Aside from an MSA, the server also outputs topological prediction of TMPs using the HMMTOP algorithm. Previous benchmarking of the method has shown this approach outperforms the most accurate alignment methods such as MSAProbs, Kalign, PROMALS, MAFFT, ProbCons and PRALINE™. The web server is available at http://tcoffee.crg.cat/tmcoffee. PMID:27106060
Emery, W.; Baldwin, D.
Both global area coverage (GAC) and high-resolution picture transmission (HRPT) data from the Advanced Very High Resolution Radiometer (AVHRR) are made available to Internet users through an online data access system. Older GOES-7 data are also available. Created as a "testbed" data system for NASA's future Earth Observing System Data and Information System (EOSDIS), this testbed provides an opportunity to test both the technical requirements of an online data system and the different ways in which the general user community would employ such a system. Initiated in December 1991, the basic data system experienced five major evolutionary changes in response to user requests and requirements. Features added with these changes were online browse, user subsetting, dynamic image processing/navigation, a stand-alone data storage system, and movement from an X Windows graphical user interface (GUI) to a World Wide Web (WWW) interface. Over its lifetime, the system has had as many as 2500 registered users. The system on the WWW has had over 2500 hits since October 1995. Many of these hits are by casual users who only take the GIF images directly from the interface screens and do not specifically order digital data. Still, there is a consistent stream of users ordering the navigated image data and related products (maps and so forth). We have recently added a real-time, seven-day, northwestern United States normalized difference vegetation index (NDVI) composite that has generated considerable interest. Index Terms: Data system, earth science, online access, satellite data.
Zinzi, Angelo; Capria, Maria Teresa; Palomba, Ernesto; Antonelli, Lucio Angelo; Giommi, Paolo
In recent years planetary exploration missions have acquired data from minor bodies (i.e., dwarf planets, asteroids and comets) at a level of detail never reached before. Since these objects often present very irregular shapes (as in the case of comet 67P/Churyumov-Gerasimenko, the target of the ESA Rosetta mission), "classical" bidimensional projections of observations are difficult to understand. With the aim of providing the scientific community a tool to access, visualize and analyze data in a new way, the ASI Science Data Center started to develop MATISSE (Multi-purposed Advanced Tool for the Instruments for the Solar System Exploration, http://tools.asdc.asi.it/matisse.jsp) in late 2012. This tool allows 3D web-based visualization of data acquired by planetary exploration missions: the output can be either the straightforward projection of the selected observation over the shape model of the target body or the visualization of a higher-order product (average/mosaic, difference, ratio, RGB) computed directly online with MATISSE. Standard outputs of the tool also comprise downloadable files to be used with GIS software (GeoTIFF and ENVI formats) and very high-resolution 3D files to be viewed with the free software ParaView. During this period the first and most frequent use of the tool has been the visualization of data acquired by the VIRTIS-M instrument onboard Rosetta observing comet 67P. The success of this task, well represented by the good number of published works that used images made with MATISSE, confirmed the need for a different approach to correctly visualize data coming from irregularly shaped bodies. In the near future the datasets available to MATISSE are planned to be extended, starting with the addition of VIR-Dawn observations of both Vesta and Ceres, and also using standard protocols to access data stored in external repositories, such as NASA ODE and the Planetary VO.
Groenewegen, D.; Visser, E.
Preprint of paper published in: ICWE 2008 - 8th International Conference on Web Engineering, 14-18 July 2008; doi:10.1109/ICWE.2008.15 In this paper, we present the extension of WebDSL, a domain-specific language for web application development, with abstractions for declarative definition of acces
O'Neil, Daniel A.
Large-scale space programs analyze thousands of requirements while mitigating safety, performance, schedule, and cost risks. These efforts involve a variety of roles with interdependent use cases and goals. For example, study managers and facilitators identify ground rules and assumptions for a collection of studies required for a program or project milestone. Task leaders derive product requirements from the ground rules and assumptions and describe activities to produce needed analytical products. Discipline specialists produce the specified products and load results into a file management system. Organizational and project managers provide the personnel and funds to conduct the tasks. Each role has responsibilities to establish information linkages and provide status reports to management. Projects conduct design and analysis cycles to refine designs to meet the requirements and implement risk mitigation plans. At the program level, integrated design and analysis cycle studies are conducted to eliminate every 'to-be-determined' and develop plans to mitigate every risk. At the agency level, strategic studies analyze different approaches to exploration architectures and campaigns. This paper describes a web-accessible database developed by NASA to coordinate and manage tasks at three organizational levels. Other topics in this paper cover integration technologies and techniques for process modeling and enterprise architectures.
Background: Target identification is important for modern drug discovery. With advances in the development of molecular docking, potential binding proteins may be discovered by docking a small molecule to a repository of proteins with three-dimensional (3D) structures. To complete this task, a reverse docking program and a drug target database with 3D structures are necessary. To this end, we have developed a web server tool, TarFisDock (Target Fishing Docking, http://www.dddc.ac.cn/tarfisdock), which has been used widely by others. Recently, we have constructed a protein target database, the Potential Drug Target Database (PDTD), and have integrated PDTD with TarFisDock. This combination aims to assist target identification and validation. Description: PDTD is a web-accessible protein database for in silico target identification. It currently contains >1100 protein entries with 3D structures presented in the Protein Data Bank. The data are extracted from the literature and several online databases such as TTD, DrugBank and Thomson Pharma. The database covers diverse information on >830 known or potential drug targets, including protein and active-site structures in both PDB and mol2 formats, related diseases, biological functions, and associated regulating (signaling) pathways. Each target is categorized by both nosology and biochemical function. PDTD supports keyword search functions, such as PDB ID, target name, and disease name. Data sets generated by PDTD can be viewed with molecular visualization plug-ins and can also be downloaded freely. Remarkably, PDTD is specially designed for target identification. In conjunction with TarFisDock, PDTD can be used to identify binding proteins for small molecules. The results can be downloaded in the form of a mol2 file with the binding pose of the probe compound and a list of potential binding targets ranked by score. Conclusion: PDTD serves as a comprehensive and
Laursen, Ditte; Møldrup-Dalum, Per
Digital heritage archiving is an ongoing activity that requires commitment, involvement and cooperation between heritage institutions and policy makers as well as producers and users of information. In this presentation, we will address how a web archive is created over time, as well as what or who ... we see the development of the web archive in the near future. Findings are relevant for curators and researchers interested in the web archive as a historical source...
Ozyurt, I Burak; Keator, David B; Wei, Dingying; Fennema-Notestine, Christine; Pease, Karen R; Bockholt, Jeremy; Grethe, Jeffrey S
Managing vast datasets collected throughout multiple clinical imaging communities has become critical with the ever increasing and diverse nature of datasets. Development of data management infrastructure is further complicated by technical and experimental advances that drive modifications to existing protocols and acquisition of new types of research data to be incorporated into existing data management systems. In this paper, an extensible data management system for clinical neuroimaging studies is introduced: The Human Clinical Imaging Database (HID) and Toolkit. The database schema is constructed to support the storage of new data types without changes to the underlying schema. The complex infrastructure allows management of experiment data, such as image protocol and behavioral task parameters, as well as subject-specific data, including demographics, clinical assessments, and behavioral task performance metrics. Of significant interest, embedded clinical data entry and management tools enhance both consistency of data reporting and automatic entry of data into the database. The Clinical Assessment Layout Manager (CALM) allows users to create on-line data entry forms for use within and across sites, through which data is pulled into the underlying database via the generic clinical assessment management engine (GAME). Importantly, the system is designed to operate in a distributed environment, serving both human users and client applications in a service-oriented manner. Querying capabilities use a built-in multi-database parallel query builder/result combiner, allowing web-accessible queries within and across multiple federated databases. The system along with its documentation is open-source and available from the Neuroimaging Informatics Tools and Resource Clearinghouse (NITRC) site. PMID:20567938
Lemaire, E D; Deforge, D; Marshall, S; Curran, D
A web-based transitional health record was created to provide regional healthcare professionals with ubiquitous access to information on people with brain injuries as they move through the healthcare system. Participants included public, private, and community healthcare organizations/providers in Eastern Ontario (Canada). One hundred and nineteen service providers and 39 brain injury survivors registered over 6 months. Fifty-eight percent received English and 42% received bilingual services (English-French). Public health providers contacted the regional service coordinator more than private providers (52% urban centres, 26% rural service providers, and 22% both areas). Thirty-five percent of contacts were for technical difficulties, 32% registration inquiries, 21% forms and processes, 6% resources, and 6% education. Seventeen technical enquiries required action by technical support personnel: 41% digital certificates, 29% web forms, and 12% log-in. This web-based approach to clinical information sharing provided access to relevant data as clients moved through or re-entered the health system. Improvements include automated digital certificate management, institutional health records system integration, and more referral tracking tools. More sensitive test data could be accessed on-line with increasing consumer/clinician confidence. In addition to a strong technical infrastructure, human resource issues are a major information security component and require continuing attention to ensure a viable on-line information environment. PMID:16469409
Pliutau, Denis; Prasad, Narasimha S.
Current approaches to satellite observation data storage and distribution implement separate visualization and data access methodologies, which often leads to the need for time-consuming data ordering and coding for applications requiring visual representation as well as data handling and modeling capabilities. We describe an approach we implemented for a data-encoded web map service based on storing numerical data within server map tiles, with subsequent client-side data manipulation and map color rendering. The approach relies on storing data using the lossless-compression Portable Network Graphics (PNG) image format, which is natively supported by web browsers, allowing on-the-fly browser rendering and modification of the map tiles. The method is easy to implement using existing software libraries and has the advantages of easy client-side map color modification and spatial subsetting with physical-parameter range filtering. The method is demonstrated for the ASTER-GDEM elevation model and selected MODIS data products, and represents an alternative to the currently used storage and data access methods. One additional benefit is the provision of multiple levels of averaging, due to the need to generate map tiles at varying resolutions for various map magnification levels. We suggest that such a merged data and mapping approach may be a viable alternative to existing static storage and data access methods for a wide array of combined simulation, data access and visualization purposes.
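The core idea above is that a physical value can be stored losslessly in the 8-bit channels of an image tile, so the browser can decode it client-side and re-render colors or filter by value. As a hedged sketch (no actual PNG I/O, and with an offset and resolution chosen purely for illustration, not the authors' actual encoding):

```python
# Pack an elevation value (metres) into two 8-bit channel bytes and
# recover it exactly: the kind of lossless value-to-pixel encoding a
# PNG tile preserves, unlike a lossy format such as JPEG.
OFFSET = 500  # shift so depths below sea level map into 0..65535

def encode(elev_m):
    code = int(round(elev_m)) + OFFSET
    assert 0 <= code <= 0xFFFF, "value outside the 16-bit encodable range"
    return code >> 8, code & 0xFF          # e.g. (red, green) channel bytes

def decode(r, g):
    return ((r << 8) | g) - OFFSET

print(encode(8848))     # (36, 132)  -- two channel bytes
print(decode(36, 132))  # 8848
```

The round trip is exact, which is what makes client-side range filtering and recoloring possible without re-requesting data from the server.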
Eberle, Jonas; Urban, Marcel; Hüttich, Christian; Schmullius, Christiane
Numerous datasets providing temperature information from meteorological stations or remote sensing satellites are available. However, the challenge is to search the archives and process the time series information for further analysis. These steps can be automated for each individual product if the pre-conditions are met, e.g. data access through web services (HTTP, FTP) or legal rights to redistribute the datasets. Therefore a Python-based package was developed to provide data access and data processing tools for MODIS Land Surface Temperature (LST) data, which is provided by the NASA Land Processes Distributed Active Archive Center (LP DAAC), as well as the Global Surface Summary of the Day (GSOD) and Global Historical Climatology Network (GHCN) daily datasets provided by the NOAA National Climatic Data Center (NCDC). The package to access and process the information is available as web services used by an interactive web portal for simple data access and analysis. Tools for time series analysis were linked to the system, e.g. time series plotting, decomposition, aggregation (monthly, seasonal, etc.), trend analyses, and breakpoint detection. Especially for temperature data, a plot was integrated for the comparison of two temperature datasets, based on the work by Urban et al. (2013). As a first result, a kernel density plot compares daily MODIS LST from the Aqua and Terra satellites with daily means from the GSOD and GHCN datasets. Without any data download or data processing, users can analyze different time series datasets in an easy-to-use web portal. As a first use case, we built up this complementary system with remotely sensed MODIS data and in situ measurements from meteorological stations for Siberia within the Siberian Earth System Science Cluster (www.sibessc.uni-jena.de). References: Urban, Marcel; Eberle, Jonas; Hüttich, Christian; Schmullius, Christiane; Herold, Martin. 2013. "Comparison of Satellite-Derived Land Surface Temperature and Air
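One of the processing tools listed above is temporal aggregation, e.g. collapsing a daily series to monthly means before plotting or trend analysis. A minimal sketch of that step, with an input format (list of (date, value) pairs) assumed for illustration rather than taken from the package:

```python
from collections import defaultdict
from datetime import date

def monthly_means(series):
    """Aggregate a daily (date, value) series to monthly means,
    keyed by (year, month)."""
    buckets = defaultdict(list)
    for day, value in series:
        buckets[(day.year, day.month)].append(value)
    return {ym: sum(vals) / len(vals) for ym, vals in sorted(buckets.items())}

daily = [(date(2013, 1, 1), -20.0), (date(2013, 1, 2), -22.0),
         (date(2013, 2, 1), -15.0)]
print(monthly_means(daily))  # {(2013, 1): -21.0, (2013, 2): -15.0}
```

Seasonal or annual aggregation follows the same pattern with a different bucket key, which is presumably why the portal can offer several aggregation levels from one pipeline.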
National Oceanic and Atmospheric Administration, Department of Commerce — The ecosystem impacts of ocean acidification (OA) were explored by imposing scenarios designed to mimic OA on a food web model of Puget Sound, a large estuary in...
Background: Innovations in biological and biomedical imaging produce complex high-content and multivariate image data. For decision-making and generation of hypotheses, scientists need novel information technology tools that enable them to visually explore and analyze the data and to discuss and communicate results or findings with collaborating experts in various places. Results: In this paper, we present a novel Web 2.0 approach, BioIMAX, for the collaborative exploration and analysis of multivariate image data, combining the web's collaboration and distribution architecture with the interface interactivity and computation power of desktop applications, recently called a rich internet application. Conclusions: BioIMAX allows scientists to discuss and share data or results with collaborating experts and to visualize, annotate, and explore multivariate image data within one web-based platform from any location via a standard web browser, requiring only a username and a password. BioIMAX can be accessed at http://ani.cebitec.uni-bielefeld.de/BioIMAX with the username "test" and the password "test1" for testing purposes.
This paper proposes an access control model for Web services. Integrating the security model into Web services enables dynamic changes of access rights, improving on the static access control in use today. The new model provides a view policy language (VPL) to describe the access control policy of Web services. At the end of the paper we describe an infrastructure for integrating the security model into Web services to enforce their access control policies.
Sapp, Megan R.; Van Epps, Amy S
Librarians have used the principle of equal access to protect the rights of patrons for years. In light of the digital revolution that has happened in the past ten years, equal access has taken on a new significance. The digital revolution has produced a new patron class who primarily access the library’s resources through the library website. These patrons, primarily undergraduate college students, are technologically savvy, have high expectations for customer service, and are format agnos...
Khelghati, Mohammadreza; Hiemstra, Djoerd; van Keulen, Maurice
With increasing amount of data in deep web sources (hidden from general search engines behind web forms), accessing this data has gained more attention. In the algorithms applied for this purpose, it is the knowledge of a data source size that enables the algorithms to make accurate decisions in sto
Geiger, Brian; Evans, R. R.; Cellitti, M. A.; Smith, K. Hogan; O'Neal, Marcia R.; Firsing, S. L., III; Chandan, P.
Background: The Internet can be an invaluable resource for obtaining health information for people with disabilities. Although valid and reliable information is available, previous research revealed barriers to accessing health information online. Health education specialists have the responsibility to ensure that it is accessible to all users.…
Guercio, Angela; Stirbens, Kathleen A.; Williams, Joseph; Haiber, Charles
Searching for relevant information on the web is an important aspect of distance learning. This activity is a challenge for visually impaired distance learners. While sighted people have the ability to filter information in a fast and non sequential way, blind persons rely on tools that process the information in a sequential way. Learning is…
Evaluación comparativa de la accesibilidad de los espacios web de las bibliotecas universitarias españolas y norteamericanas Comparative accessibility assessment of Web spaces in Spanish and American university libraries
The main objective of this research is to analyze and compare the degree to which two groups of web spaces comply with certain web accessibility guidelines. Both groups belong to the same conceptual typology, "University Libraries", but form part of two different geographic, social and economic realities: Spain and the United States. Interpretation of the results reveals that webmetric techniques based on web accessibility characteristics can be used to contrast two closed sets of web spaces.
Daniel Francisco Arencibia-Arrebola
VacciMonitor has gradually increased its visibility by access to different databases. Thus, it was introduced in the SciELO project, EBSCO, HINARI, Redalyc, SCOPUS, DOAJ, SICC Data Bases, SeCiMed, among almost thirty well-known index sites, including the virtual libraries of the main universities of the United States of America and other countries. Through a SciELO-Web of Science (WoS) agreement it will be possible to include the journals indexed in SciELO in the WoS; this collaboration is already presenting its outcomes, and it is possible to access the content of SciELO through WoS at the link: http://wokinfo.com/products_tools/multidisciplinary/scielo/ WoS was designed by the Institute for Scientific Information (ISI) and is one of the products of the ISI Web of Knowledge pack, currently property of Thomson Reuters (1). WoS is a citation index and database service, a worldwide on-line leader with multidisciplinary information covering the knowledge fields of the sciences in general, the social sciences, and the arts and humanities, with more than 46 million bibliographical references and hundreds of other citations, making possible navigation in the broad web of journal articles, lecture materials and other records included in its collection (1). The logic of the functioning of WoS is based on quantitative criteria, since a bigger production demonstrates a greater number of registered papers in the most recognized journals and the extent to which these papers are cited by those journals (2). The information obtained from the WoS databases is very useful for addressing efforts of scientific research at a personal, institutional or national level. Scientists publishing in WoS journals not only produce more scientific literature, but this literature is also more consulted and used (3). However, it should be considered that the statistics of this site for bibliometric analysis only take into account the journals in this web, but contains three
MicroRNAs (miRNAs) are a class of small regulatory genes regulating gene expression by targeting messenger RNA. Though computational methods for miRNA target prediction are the prevailing means to analyze their function, they still miss a large fraction of the targeted genes and additionally predict a large number of false positives. Here we introduce a novel algorithm called DIANA-microT-ANN which combines multiple novel target site features through an artificial neural network (ANN) and is trained using recently published high-throughput data measuring the change of protein levels after miRNA overexpression, providing positive and negative targeting examples. The features characterizing each miRNA recognition element include binding structure, conservation level and a specific profile of structural accessibility. The ANN is trained to integrate the features of each recognition element along the 3' untranslated region into a targeting score, reproducing the relative repression fold change of the protein. Tested on two different sets, the algorithm outperforms other widely used algorithms and also predicts a significant number of unique and reliable targets not predicted by the other methods. For 542 human miRNAs DIANA-microT-ANN predicts 120,000 targets not provided by TargetScan 5.0. The algorithm is freely available at http://microrna.gr/microT-ANN.
Ames, Charles; Auernheimer, Brent; Lee, Young H.
A method for providing uniform transparent access to disparate distributed information systems was demonstrated. A prototype testing interface was developed to access documentation and information using publicly available hypermedia tools. The prototype gives testers a uniform, platform-independent user interface to on-line documentation, user manuals, and mission-specific test and operations data. Mosaic was the common user interface, and HTML (Hypertext Markup Language) provided hypertext capability.
Yang, Y Tony; Chen, Brian
Access to the Internet is increasingly critical for health information retrieval, access to certain government benefits and services, connectivity to friends and family members, and an array of commercial and social services that directly affect health. Yet older adults, particularly those with disabilities, are at risk of being left behind in this growing age- and disability-based digital divide. The Americans with Disabilities Act (ADA) was designed to guarantee older adults and persons with disabilities equal access to employment, retail, and other places of public accommodation. Yet older Internet users sometimes face challenges when they try to access the Internet because of disabilities associated with age. Current legal interpretations of the ADA, however, do not consider the Internet to be an entity covered by law. In this article, we examine the current state of Internet accessibility protection in the United States through the lens of the ADA, sections 504 and 508 of the Rehabilitation Act, state laws and industry guidelines. We then compare U.S. rules to those of OECD (Organisation for Economic Co-Operation and Development) countries, notably in the European Union, Canada, Japan, Australia, and the Nordic countries. Our policy recommendations follow from our analyses of these laws and guidelines, and we conclude that the biggest challenge in bridging the age- and disability-based digital divide is the need to extend accessibility requirements to private, not just governmental, entities and organizations. PMID:26156518
The purpose of this thesis is to present the development of a web service that eliminates problems encountered when using the mFi products of the company Ubiquiti Networks, Inc., mainly the limited functionality offered by the software bundled with these devices. The web service uses the SOAP communication protocol. Additionally, we encountered a relatively unknown database management system named MongoDB. Ubiquiti mFi is a family of gadgets to monitor events in buildings. The protoc...
Prendiville, T W
OBJECTIVES: To establish the information-seeking behaviours of paediatricians in answering every-day clinical queries. DESIGN: A questionnaire was distributed to every hospital-based paediatrician (paediatric registrar and consultant) working in Ireland. RESULTS: The study received 156 completed questionnaires, a 66.1% response. 67% of paediatricians utilised the internet as their first "port of call" when looking to answer a medical question. 85% believe that web-based resources have improved medical practice, with 88% reporting web-based resources are essential for medical practice today. 93.5% of paediatricians believe attempting to answer clinical questions as they arise is an important component in practising evidence-based medicine. 54% of all paediatricians have recommended websites to parents or patients. 75.5% of paediatricians report finding it difficult to keep up-to-date with new information relevant to their practice. CONCLUSIONS: Web-based paediatric resources are of increasing significance in day-to-day clinical practice. Many paediatricians now believe that the quality of patient care depends on it. Information technology resources play a key role in helping physicians to deliver, in a time-efficient manner, solutions to clinical queries at the point of care.
Dr. Khanna SamratVivekanand Omprakash
This paper describes how coordinates from Google Maps are stored in a database on a central web server. These coordinates are then transferred to a client program that searches for the location of a particular electronic device. Clients can access the data over the Internet and use it in a program through an API. Software was developed for a device to be installed in a vehicle: the built-in circuit holds a SIM card and transfers its signal over the network, supplying a single text message with the coordinates of the location, as Google Maps latitudes and longitudes. This information, a string of comma-separated values, is extracted and stored in the web server's database; different mobile numbers with locations can be stored in the server's database simultaneously for different clients. The concept of a 3-tier client/server architecture is used. The SIM card can access the GPRS system of the card's network provider. The electronic device is configured for receiving and sending messages, and different operations can be performed on the device, as it can be attached to other electronic circuits of the vehicle. A Windows Mobile application was developed for the client side: the user can trigger different vehicle operations from a mobile phone by sending an SMS to the device, which receives the operation and passes it on to the vehicle's electronic circuit. From a remote place, using a mobile phone, you can get information about your vehicle and also control it, with a password providing authorization and authentication to the electronic circuit. The system thus supports vehicle security and vehicle localization, and vehicle functions such as speed, brakes and lights can be accessed and controlled through the software application's interface with the vehicle's electronic circuit.
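The extraction of the comma-separated coordinate string described above can be sketched as follows; the `<vehicle_id>,<latitude>,<longitude>` field order is a hypothetical layout for illustration, not the device's documented message format:

```python
def parse_position(sms_text):
    """Parse a comma-separated position message as sent by the
    in-vehicle device, assumed here to be
    '<vehicle_id>,<latitude>,<longitude>'."""
    vehicle_id, lat, lon = sms_text.strip().split(",")
    return {"vehicle": vehicle_id, "lat": float(lat), "lon": float(lon)}

# one incoming SMS with hypothetical content
record = parse_position("KA-01-1234,12.9716,77.5946")
```

The resulting dict maps directly onto an insert into the web server's location table.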
Extended abstract: For the first time, the Dolenjska Museum Novo mesto provided access to digitised museum resources when it took the decision to enrich the exhibition Novo mesto 1848-1918 with digital content. The following goals were identified: the digital content was created at the time of exhibition planning and design, it met the needs of different age groups of visitors, and during the exhibition the content was accessible via touch screen. As such, it also served educational purposes (content-oriented lectures or problem-solving team work). In the course of the exhibition the digital content was accessible on the museum website http://www.novomesto1848-1918.si. The digital content was divided into the following sections: the web photo gallery, the quiz and the game. The photo gallery was designed in the same way as the exhibition and the print catalogue, extended with photos of contemporary Novo mesto and accompanied by music from the orchestrion machine. The following themes were outlined: the Austrian Empire, the Krka and Novo mesto, the town and its symbols, images of the town and people, administration and economy, social life, and Novo mesto today, followed by digitised archive materials and sources from that period such as the Commemorative Book of the Uniformed Town Guard, the National Reading Room guest book, the Kazina guest book, the album of postcards and the Diploma of Honoured Citizen Josip Gerdešič. The web application was also a tool for simple online selection of digitised material and the creation of new digital content, which proved to be much more convenient for lecturing than PowerPoint presentations. The quiz consisted of 40 questions relating to the exhibition theme and the catalogue. Each question offered a set of three answers, only one of them correct, illustrated with a photograph. The application auto-selected ten questions and scored the answers immediately. The quiz could be accessed
Rajman, M; Boynton, I M; Fridlund, B; Fyhrlund, A; Sundgren, B; Lundquist, P; Thelander, H; Wänerskär, M
In this paper we present the results of the StatSearch case study that aimed at providing an enhanced access to statistical data available on the Web. In the scope of this case study we developed a prototype of an information access tool combining a query-based search engine with semi-automated navigation techniques exploiting the hierarchical structuring of the available data. This tool enables a better control of the information retrieval, improving the quality and ease of the access to statistical information. The central part of the presented StatSearch tool consists in the design of an algorithm for automated navigation through a tree-like hierarchical document structure. The algorithm relies on the computation of query related relevance score distributions over the available database to identify the most relevant clusters in the data structure. These most relevant clusters are then proposed to the user for navigation, or, alternatively, are the support for the automated navigation process. Several appro...
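The core idea above, aggregating query-relevance scores over a tree-like hierarchy to identify the most relevant clusters, can be sketched as follows; this is a simplified stand-in for the StatSearch algorithm, and the names and the sum aggregation are assumptions:

```python
def cluster_scores(tree, doc_scores):
    """Propagate per-document relevance scores up a tree-like hierarchy
    and return each cluster's aggregate score.

    `tree` maps a cluster name to its children (sub-clusters or document
    ids); leaf document ids have scores in `doc_scores`."""
    scores = {}

    def score(node):
        if node in doc_scores:            # leaf document
            return doc_scores[node]
        total = sum(score(child) for child in tree.get(node, []))
        scores[node] = total
        return total

    score("root")
    return scores

# hypothetical statistical-data hierarchy with query-relevance per document
tree = {"root": ["economy", "health"],
        "economy": ["d1", "d2"], "health": ["d3"]}
doc_scores = {"d1": 0.6, "d2": 0.3, "d3": 0.2}
scores = cluster_scores(tree, doc_scores)
```

The highest-scoring non-root clusters would then be proposed to the user as navigation targets.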
Dolog, Peter; Stuckenschmidt, Heiner; Wache, Holger
Research in Cooperative Query answering is triggered by the observation that users are often not able to correctly formulate queries to databases that return the intended result. Due to a lack of knowledge of the contents and the structure of a database, users will often only be able to provide...... and user preferences. We describe a framework for information access that combines query refinement and relaxation in order to provide robust, personalized access to heterogeneous RDF data as well as an implementation in terms of rewriting rules and explain its application in the context of e...
Sørensen, Lars Schiøtt
During the last two decades, a number of research efforts have been made in the field of computing systems related to the building construction industry. Most of the projects have focused on a part of the entire design process and have typically been limited to a specific domain. This paper presents a newly developed computer system based on the World Wide Web on the Internet. The focus is on the simplicity of the system's structure and on an intuitive and user-friendly interface.
Herkenhöner, Ralph; De Meer, Hermann; Jensen, Meiko;
Enforcing the right of access to personal data usually is a long-running process between a data subject and an organization that processes personal data. As of today, this task is commonly realized using a manual process based on postal communication or personal attendance and ends up conflicting...
Web-based database access technology combines Web technology with databases, accessing a database service system built on the browser/server (B/S) structure through a browser. Following the Web database access process, this paper describes the Web database architecture and its implementation; moreover, the advantages and disadvantages of several major Web database access technologies are compared and discussed in detail. All of this helps in selecting the most suitable implementation technology under different application conditions.
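A minimal sketch of the middle tier in such a B/S structure, using Python's built-in `sqlite3` as a stand-in for the back-end database and returning rows as JSON for the browser; the table and query are illustrative:

```python
import json
import sqlite3

def query_as_json(con, sql, params=()):
    """Middle tier of a browser/server setup: run a parameterized query
    against the database and return the rows as JSON for the browser."""
    con.row_factory = sqlite3.Row          # rows behave like dicts
    rows = [dict(r) for r in con.execute(sql, params)]
    return json.dumps(rows)

con = sqlite3.connect(":memory:")          # demo database
con.execute("CREATE TABLE users (id INTEGER, name TEXT)")
con.execute("INSERT INTO users VALUES (1, 'ann'), (2, 'bo')")
payload = query_as_json(con, "SELECT * FROM users WHERE id = ?", (1,))
```

In a real deployment this function would sit behind an HTTP handler, which is the "accessing the database through a browser" step the abstract describes.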
User tests were carried out with people with hearing impairments, evaluating the impact that different accessibility barriers cause for this type of user. The aim of collecting this information was to communicate more empathetically, to people who edit Web content, the accessibility problems that most affect this group, people with hearing impairments, and thus to avoid the accessibility barriers they could potentially be creating. As a result, it is observed that the barriers with the greatest impact on users with hearing impairments are "complex text" and "multimedia content" without alternatives. In both cases, content editors should take care to watch the readability of web content and to accompany multimedia content with subtitles and sign language.
Online chat gives network users low-cost, high-efficiency real-time communication, and has thus become one of the most widely used Internet services. Taking the detection of web chat rooms as a vehicle, this work studies in depth the technical problems of web page retrieval and preprocessing. It mainly discusses the principles and workflow of web crawlers and introduces parallel multi-threading into the crawler. The technical features and implementation of WebLech are discussed, and improvements to WebLech are made.
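The parallel multi-threaded crawling scheme discussed above can be sketched with Python's `threading` and `queue` modules; the link graph is an in-memory stand-in for real HTTP fetches, and this is not WebLech's actual code:

```python
import queue
import threading

# A tiny in-memory "web": page -> outgoing links (stands in for HTTP fetches).
PAGES = {"/": ["/chat", "/about"], "/chat": ["/chat/room1"],
         "/about": [], "/chat/room1": []}

def crawl(start, workers=4):
    """Multi-threaded breadth-style crawl of the link graph: a shared
    frontier queue feeds several worker threads, and a lock guards the
    seen-set so each page is fetched once."""
    frontier = queue.Queue()
    frontier.put(start)
    seen = {start}
    lock = threading.Lock()

    def worker():
        while True:
            try:
                url = frontier.get(timeout=0.2)   # idle workers exit
            except queue.Empty:
                return
            for link in PAGES.get(url, []):       # "fetch" and extract links
                with lock:
                    if link not in seen:
                        seen.add(link)
                        frontier.put(link)
            frontier.task_done()

    threads = [threading.Thread(target=worker) for _ in range(workers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return seen
```

Replacing the dict lookup with an HTTP fetch plus HTML link extraction turns this skeleton into a real crawler.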
Lloyd, S. A.; Acker, J. G.; Prados, A. I.; Leptoukh, G. G.
One of the biggest obstacles for the average Earth science student today is locating and obtaining satellite- based remote sensing datasets in a format that is accessible and optimal for their data analysis needs. At the Goddard Earth Sciences Data and Information Services Center (GES-DISC) alone, on the order of hundreds of Terabytes of data are available for distribution to scientists, students and the general public. The single biggest and time-consuming hurdle for most students when they begin their study of the various datasets is how to slog through this mountain of data to arrive at a properly sub-setted and manageable dataset to answer their science question(s). The GES DISC provides a number of tools for data access and visualization, including the Google-like Mirador search engine and the powerful GES-DISC Interactive Online Visualization ANd aNalysis Infrastructure (Giovanni) web interface. Giovanni provides a simple way to visualize, analyze and access vast amounts of satellite-based Earth science data. Giovanni's features and practical examples of its use will be demonstrated, with an emphasis on how satellite remote sensing can help students understand recent events in the atmosphere and biosphere. Giovanni is actually a series of sixteen similar web-based data interfaces, each of which covers a single satellite dataset (such as TRMM, TOMS, OMI, AIRS, MLS, HALOE, etc.) or a group of related datasets (such as MODIS and MISR for aerosols, SeaWIFS and MODIS for ocean color, and the suite of A-Train observations co-located along the CloudSat orbital path). Recently, ground-based datasets have been included in Giovanni, including the Northern Eurasian Earth Science Partnership Initiative (NEESPI), and EPA fine particulate matter (PM2.5) for air quality. Model data such as the Goddard GOCART model and MERRA meteorological reanalyses (in process) are being increasingly incorporated into Giovanni to facilitate model- data intercomparison. A full suite of data
Jones, Philip; Binns, David; McMenamin, Conor; McAnulla, Craig; Hunter, Sarah
The InterPro BioMart provides users with query-optimized access to predictions of family classification, protein domains and functional sites, based on a broad spectrum of integrated computational models ('signatures') that are generated by the InterPro member databases: Gene3D, HAMAP, PANTHER, Pfam, PIRSF, PRINTS, ProDom, PROSITE, SMART, SUPERFAMILY and TIGRFAMs. These predictions are provided for all protein sequences from both the UniProt Knowledge Base and the UniParc protein sequence archive. The InterPro BioMart is supplementary to the primary InterPro web interface (http://www.ebi.ac.uk/interpro), providing a web service and the ability to build complex, custom queries that can efficiently return thousands of rows of data in a variety of formats. This article describes the information available from the InterPro BioMart and illustrates its utility with examples of how to build queries that return useful biological information. Database URL: http://www.ebi.ac.uk/interpro/biomart/martview. PMID:21785143
Human protein kinases play fundamental roles mediating the majority of signal transduction pathways in eukaryotic cells, as well as a multitude of other processes involved in metabolism, cell-cycle regulation, cellular shape, motility, differentiation and apoptosis. The human protein kinome contains 518 members. Most studies that focus on the human kinome require, at some point, the visualization of large amounts of data. The visualization of such data within the framework of a phylogenetic tree may help identify key relationships between different protein kinases in view of their evolutionary distance and the information used to annotate the kinome tree. For example, studies that focus on the promiscuity of kinase inhibitors can benefit from the annotations to depict binding affinities across kinase groups. Images involving the mapping of information onto the kinome tree are common. However, producing such figures manually can be a long, arduous process prone to errors. To circumvent this issue, we have developed a web-based tool called Kinome Render (KR) that produces customized annotations on the human kinome tree. KR allows the creation and automatic overlay of customizable text or shape-based annotations of different sizes and colors on the human kinome tree. The web interface can be accessed at: http://bcb.med.usherbrooke.ca/kinomerender. A stand-alone version is also available and can be run locally.
Mackley, Rob D.; Last, George V.; Allwardt, Craig H.
The Hanford Borehole Geologic Information System (HBGIS) is a prototype web-based graphical user interface (GUI) for viewing and downloading borehole geologic data. The HBGIS is being developed as part of the Remediation Decision Support function of the Soil and Groundwater Remediation Project, managed by Fluor Hanford, Inc., Richland, Washington. Recent efforts have focused on improving the functionality of the HBGIS website in order to allow more efficient access and exportation of available data in HBGIS. Users will benefit from enhancements such as a dynamic browsing, user-driven forms, and multi-select options for selecting borehole geologic data for export. The need for translating borehole geologic data into electronic form within the HBGIS continues to increase, and efforts to populate the database continue at an increasing rate. These new web-based tools should help the end user quickly visualize what data are available in HBGIS, select from among these data, and download the borehole geologic data into a consistent and reproducible tabular form. This revised user’s guide supersedes the previous user’s guide (PNNL-15362) for viewing and downloading data from HBGIS. It contains an updated data dictionary for tables and fields containing borehole geologic data as well as instructions for viewing and downloading borehole geologic data.
The key role of public information in emergency preparedness has more recently been corroborated by the experience of the Great Eastern Japan Earthquake and Tsunami and the subsequent nuclear accident at the Fukushima NPP. Information should meet quality criteria such as openness, accessibility and authenticity. Existing information portals of radiation monitoring networks were frequently used even in Europe, although there was no imminent radiation risk. BfS responded by increasing the polling frequency, publishing current data not validated, refurbishing the web-site of the BfS 'odlinfo.bfs.de' and adding explanatory text. Public feedback served as a valuable input for improving the site's design. Additional services were implemented for developers of smart phone apps. Web-sites similar to 'ODLinfo' are available both on European and international levels. NGOs and grass root projects established platforms for uploading and visualising private dose rate measurements in Japan after 11 March 2011. The BfS site is compared with other platforms. Government information has to compete with non-official sources. Options on information strategies are discussed. (authors)
The Web is a rich domain of data and knowledge, spread over the world in an unstructured manner, and a continuously growing number of users access its information over the Internet. Web mining is an application of data mining in which web-related data is extracted and manipulated to obtain knowledge; it is further divided into three major domains: web usage mining, web content mining and web structure mining. The proposed work is concerned with web usage mining. The aim is to improve user feedback and user navigation pattern discovery for a CRM system. Finally, an HMM-based algorithm is used for finding patterns in the data, a method that promises to provide much more accurate recommendations.
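A first-order Markov transition model, a deliberately simplified stand-in for the HMM-based pattern discovery described above, can illustrate next-page recommendation from navigation sessions; session data and page names are invented:

```python
from collections import Counter, defaultdict

def train(sessions):
    """Learn next-page transition counts from user navigation sessions
    (a first-order Markov approximation of the abstract's HMM idea)."""
    trans = defaultdict(Counter)
    for session in sessions:
        for cur, nxt in zip(session, session[1:]):
            trans[cur][nxt] += 1
    return trans

def recommend(trans, page):
    """Recommend the most frequently observed next page after `page`."""
    if not trans[page]:
        return None
    return trans[page].most_common(1)[0][0]

sessions = [["home", "products", "cart"],
            ["home", "products", "reviews"],
            ["home", "products", "cart", "checkout"]]
model = train(sessions)
```

A full HMM would add hidden browsing states on top of these observed page transitions.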
Corredor, Germán.; Iregui, Marcela; Arias, Viviana; Romero, Eduardo
Virtual microscopy (VM) facilitates visualization and deployment of histopathological virtual slides (VS), a useful tool for education, research and diagnosis. In recent years it has become popular, yet its use is still limited, basically because of the very large sizes of VS, typically on the order of gigabytes. Such a volume of data requires efficacious and efficient strategies to access the VS content. In an educational or research scenario, several users may need to access and interact with VS at the same time, so, due to the large data size, a very expensive and powerful infrastructure is usually required. This article introduces a novel JPEG2000-based service-oriented architecture for streaming and visualizing very large images under scalable strategies, which moreover does not require very specialized infrastructure. Results suggest that the proposed architecture enables transmission and simultaneous visualization of large images while using resources efficiently and offering users proper response times.
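The tile-based access pattern underlying such JPEG2000 streaming can be illustrated by computing which tiles a client must request for a given viewport and resolution level; the tile size and the halving-per-level layout are assumptions for illustration, not the paper's actual protocol:

```python
def visible_tiles(viewport, tile_size=256, level=0):
    """Return the (col, row) tiles of a very large virtual slide that
    intersect a viewport at a given resolution level.

    `viewport` is (x, y, width, height) in full-resolution pixels;
    each level halves the resolution, so coordinates shrink by 2**level."""
    x, y, w, h = (v >> level for v in viewport)
    first_col, first_row = x // tile_size, y // tile_size
    last_col = (x + w - 1) // tile_size
    last_row = (y + h - 1) // tile_size
    return [(c, r) for r in range(first_row, last_row + 1)
                   for c in range(first_col, last_col + 1)]
```

Only these tiles, rather than the multi-gigabyte slide, need to be streamed to each concurrent viewer.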
H. H. Kian
General search engines often provide results of low precision, even for detailed queries, so there is a vital need to elicit useful information, such as keywords, that lets search engines provide acceptable results for users' search queries. Although many methods have been proposed for extracting keywords automatically, all attempt to improve recall, precision and other criteria that describe how well the method has done its job as an author. This paper presents a new automatic keyword extraction method that improves the accessibility of web content to search engines. The proposed method defines coefficients determining the efficiency of features and tries to optimize them using a genetic algorithm. Furthermore, it evaluates candidate keywords with a function that utilizes the results of search engines. Experiments demonstrate that, compared to other methods, the proposed method achieves a higher score from search engines without noticeable loss of recall or precision.
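A toy version of the optimization step, a genetic algorithm searching for feature coefficients that rank known-relevant keywords highest, might look as follows; the fitness function stands in for querying a real search engine, and all names, features and data are illustrative:

```python
import random

random.seed(7)  # deterministic demo run

def fitness(weights, candidates):
    """Score a weight vector by how many known-relevant keywords the
    weighted feature sum ranks into the top two (a stand-in for the
    score returned by a search engine)."""
    ranked = sorted(candidates, key=lambda kw: sum(
        w * f for w, f in zip(weights, kw["features"])), reverse=True)
    return sum(kw["relevant"] for kw in ranked[:2])

def evolve(candidates, n_feats=3, pop=20, gens=30):
    """Minimal genetic algorithm over feature coefficients: tournament
    selection, uniform crossover, Gaussian mutation."""
    population = [[random.random() for _ in range(n_feats)]
                  for _ in range(pop)]
    for _ in range(gens):
        def pick():
            a, b = random.sample(population, 2)
            return a if fitness(a, candidates) >= fitness(b, candidates) else b
        nxt = []
        for _ in range(pop):
            p1, p2 = pick(), pick()
            child = [random.choice(pair) for pair in zip(p1, p2)]
            child[random.randrange(n_feats)] += random.gauss(0, 0.1)
            nxt.append(child)
        population = nxt
    return max(population, key=lambda w: fitness(w, candidates))

# hypothetical keyword candidates with three features each
candidates = [
    {"features": (0.9, 0.1, 0.8), "relevant": 1},
    {"features": (0.8, 0.2, 0.7), "relevant": 1},
    {"features": (0.2, 0.9, 0.1), "relevant": 0},
    {"features": (0.1, 0.8, 0.2), "relevant": 0},
]
best = evolve(candidates)
```

The paper's method replaces this toy fitness with scores obtained from actual search engine results.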
The InstadoseTM dosemeter from Mirion Technologies is a small, rugged device based on patented direct ion storage (DIS) technology and is accredited by the National Voluntary Laboratory Accreditation Program (NVLAP) through NIST, bringing radiation monitoring into the digital age. Smaller than a flash drive, this dosemeter provides an instant read-out when connected to any computer with internet access and a USB connection. Instadose devices provide radiation workers with more flexibility than today's dosemeters. The device consists of a non-volatile analog memory cell surrounded by a gas-filled ion chamber: dose changes the amount of electric charge in the DIS analog memory, and the total charge storage capacity of the memory determines the available dose range. The state of the analog memory is determined by measuring the voltage across the memory cell. The Account Management Program (AMP) provides secure real-time access to account details, device assignments, reports and all pertinent account information; access can be restricted based on the role assigned to an individual, and a variety of reports are available for download and customizing. The advantages of the Instadose dosemeter are: unlimited reading capability; concerns about a possible exposure can be addressed immediately; and re-readability without loss of exposure data, with cumulative exposure maintained. (authors)
Calle Jiménez, Tania; Sánchez Gordón, Sandra; Luján Mora, Sergio
This paper describes some of the challenges that exist in making massive open online courses (MOOCs) on Geographical Information Systems (GIS) accessible. These courses are known by the generic name of Geo-MOOCs. A MOOC is an online course that is open to the general public for free, which results in massive enrollment. A GIS is a computer application that acquires, manipulates, manages, models and visualizes geo-referenced data. The goal of a Geo-MOOC is to expand the culture of spatial thinking an...
Davis, Brian N.; Werpy, Jason; Friesz, Aaron M.; Impecoven, Kevin; Quenzer, Robert; Maiersperger, Tom; Meyer, David J.
Current methods of searching for and retrieving data from satellite land remote sensing archives do not allow for interactive information extraction. Instead, Earth science data users are required to download files over low-bandwidth networks to local workstations and process data before science questions can be addressed. New methods of extracting information from data archives need to become more interactive to meet user demands for deriving increasingly complex information from rapidly expanding archives. Moving the tools required for processing data to computer systems of data providers, and away from systems of the data consumer, can improve turnaround times for data processing workflows. The implementation of middleware services was used to provide interactive access to archive data. The goal of this middleware services development is to enable Earth science data users to access remote sensing archives for immediate answers to science questions instead of links to large volumes of data to download and process. Exposing data and metadata to web-based services enables machine-driven queries and data interaction. Also, product quality information can be integrated to enable additional filtering and sub-setting. Only the reduced content required to complete an analysis is then transferred to the user.
Access control is the main strategy for security and protection in Web systems, and traditional access control can no longer meet growing security needs. By using the role-based access control (RBAC) model and introducing the concept of roles into the Web system, each user is mapped to a role in an organization, access permissions are granted to the corresponding role, and authorization and access control are performed according to the user's role in the organization, thereby improving the flexibility and security of permission assignment and access control in the Web system.
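The user-to-role-to-permission mapping described in the abstract above can be sketched in a few lines. The role names, users and permissions below are illustrative placeholders, not taken from the paper:

```python
# Minimal RBAC sketch: users are mapped to roles, and permissions are
# granted to roles rather than to users directly.
ROLE_PERMISSIONS = {
    "admin":  {"read", "write", "delete"},
    "editor": {"read", "write"},
    "viewer": {"read"},
}

USER_ROLES = {"alice": "admin", "bob": "viewer"}

def is_allowed(user: str, permission: str) -> bool:
    """Authorize via the user's role; unknown users get no permissions."""
    role = USER_ROLES.get(user)
    return permission in ROLE_PERMISSIONS.get(role, set())

print(is_allowed("alice", "delete"))  # True  (admins may delete)
print(is_allowed("bob", "write"))     # False (viewers are read-only)
```

Changing what a whole class of users may do then requires editing only the role's permission set, which is the flexibility gain the abstract refers to.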
Chipman, Jonathan; Drohan, Brian; Blackford, Amanda; Parmigiani, Giovanni; Hughes, Kevin; Bosinoff, Phil
Cancer risk prediction tools provide valuable information to clinicians but remain computationally challenging. Many clinics find that CaGene or HughesRiskApps fit their needs for easy- and ready-to-use software to obtain cancer risks; however, these resources may not fit all clinics' needs. The HughesRiskApps Group and BayesMendel Lab therefore developed a web service, called "Risk Service", which may be integrated into any client software to quickly obtain standardized and up-to-date risk predictions for BayesMendel tools (BRCAPRO, MMRpro, PancPRO, and MelaPRO), the Tyrer-Cuzick IBIS Breast Cancer Risk Evaluation Tool, and the Colorectal Cancer Risk Assessment Tool. Software clients that can convert their local structured data into the HL7 XML-formatted family and clinical patient history (Pedigree model) may integrate with the Risk Service. The Risk Service uses Apache Tomcat and Apache Axis2 technologies to provide an all Java web service. The software client sends HL7 XML information containing anonymized family and clinical history to a Dana-Farber Cancer Institute (DFCI) server, where it is parsed, interpreted, and processed by multiple risk tools. The Risk Service then formats the results into an HL7 style message and returns the risk predictions to the originating software client. Upon consent, users may allow DFCI to maintain the data for future research. The Risk Service implementation is exemplified through HughesRiskApps. The Risk Service broadens the availability of valuable, up-to-date cancer risk tools and allows clinics and researchers to integrate risk prediction tools into their own software interface designed for their needs. Each software package can collect risk data using its own interface, and display the results using its own interface, while using a central, up-to-date risk calculator. This allows users to choose from multiple interfaces while always getting the latest risk calculations. Consenting users contribute their data for future
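The abstract above describes clients converting local structured data into an HL7 XML-formatted family history before sending it to the Risk Service. A rough sketch of such a conversion step is below; the element and attribute names are illustrative only and do not reproduce the actual HL7 Pedigree schema:

```python
# Sketch: wrap anonymized family-history records in an HL7-style XML payload.
# Element names here are hypothetical stand-ins for the real Pedigree model.
import xml.etree.ElementTree as ET

def build_pedigree(relatives):
    root = ET.Element("Pedigree")
    for rel in relatives:
        r = ET.SubElement(root, "Relative", relation=rel["relation"])
        ET.SubElement(r, "Diagnosis").text = rel["diagnosis"]
        ET.SubElement(r, "AgeAtDiagnosis").text = str(rel["age"])
    return ET.tostring(root, encoding="unicode")

xml = build_pedigree([{"relation": "mother", "diagnosis": "breast", "age": 45}])
print(xml)
```

The client would POST a message like this to the service and parse the HL7-style response containing the risk predictions.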
Celli, Fabrizio; Malapela, Thembani; Wegner, Karna; Subirats, Imma; Kokoliou, Elena; Keizer, Johannes
AGRIS is the International System for Agricultural Science and Technology. It is supported by a large community of data providers, partners and users. AGRIS is a database that aggregates bibliographic data and, through this core data, retrieves related content across online information systems by taking advantage of Semantic Web capabilities. AGRIS is a global public good, and its vision is to be a service responsive to user needs by facilitating contributions and feedback regarding the AGRIS core knowledge base, AGRIS's future and its continuous development. Periodic AGRIS e-consultations, partner meetings and user feedback inform the development of the AGRIS application and its content coverage. This paper outlines the current AGRIS technical set-up and its network of partners, data providers and users, as well as how AGRIS's responsiveness to clients' needs inspires the continuous technical development of the application. The paper concludes by providing a use case of how AGRIS stakeholder input and the subsequent AGRIS e-consultation results influence the development of the AGRIS application, knowledge base and service delivery. PMID:26339471
Dezso, Z; Lukács, A; Racz, B; Szakadat, I; Barabási, A L
While current studies on complex networks focus on systems that change relatively slowly in time, the structure of the most visited regions of the Web is altered on a timescale of hours to days. Here we investigate the dynamics of visitation of a major news portal, representing the prototype for such a rapidly evolving network. The nodes of the network can be classified into stable nodes, which form the time-independent skeleton of the portal, and news documents. The visitation patterns of the two node classes are markedly different: the skeleton acquires visits at a constant rate, while a news document's visitation peaks after a few hours. We find that the visitation pattern of a news document decays as a power law, in contrast with the exponential prediction provided by simple models of site visitation. This is rooted in the inhomogeneous nature of the browsing pattern characterizing individual users: the time interval between consecutive visits by the same user to the site follows a power law distribution, in...
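The contrast the abstract draws between power-law and exponential decay of a document's visitation can be illustrated numerically. The exponent and timescale below are arbitrary illustrative values, not the paper's fitted parameters:

```python
# Sketch: a power-law tail retains far more visits at long times than an
# exponential with a comparable early decay (illustrative parameters only).
import math

def power_law(t, alpha=1.0):      # n(t) ~ t^(-alpha)
    return t ** (-alpha)

def exponential(t, tau=10.0):     # n(t) ~ exp(-t / tau)
    return math.exp(-t / tau)

for t in (1, 10, 100, 1000):      # hours since publication
    print(t, power_law(t), exponential(t))
```

At t = 1000 the exponential has decayed to a vanishingly small value while the power law is still at 10^-3, which is why long-time visitation data can discriminate between the two models.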
Koehler, Wallace; Mincey, Danielle
Compares and evaluates the differences between OCLC's dial-up and World Wide Web FirstSearch access methods and their interfaces with the underlying databases. Also examines NetFirst, OCLC's new Internet catalog, the only Internet tracking database from a "traditional" database service. (Author/PEN)
Freeman, Misty Danielle
The purpose of this research was to explore Webmasters' behaviors and factors that influence Web accessibility at postsecondary institutions. Postsecondary institutions that were accredited by the Southern Association of Colleges and Schools were used as the population. The study was based on the theory of planned behavior, and Webmasters'…
Areeda, Joseph S; Lundgren, Andrew P; Maros, Edward; Macleod, Duncan M; Zweizig, John
Gravitational-wave observatories around the world, including the Laser Interferometer Gravitational-wave Observatory (LIGO), record a large volume of gravitational-wave output data and auxiliary data about the instruments and their environments. These data are stored at the observatory sites and distributed to computing clusters for data analysis. LigoDV-web is a web-based data viewer that provides access to data recorded at the LIGO Hanford, LIGO Livingston and GEO600 observatories, and the 40m prototype interferometer at Caltech. The challenge addressed by this project is to provide meaningful visualizations of small data sets to anyone in the collaboration in a fast, secure and reliable manner with minimal software, hardware and training required of the end users. LigoDV-web is implemented as a Java Enterprise Application, with Shibboleth Single Sign On for authentication and authorization and a proprietary network protocol used for data access on the back end. Collaboration members with proper credentials...
Sun, Ping; Unger, Jennifer B; Palmer, Paula H; Gallaher, Peggy; Chou, Chih-Ping; Baezconde-Garbanati, Lourdes; Sussman, Steve; Johnson, C Anderson
The World Wide Web (WWW) poses a distinct capability to offer interventions tailored to the individual's characteristics. To fine-tune the tailoring process, studies are needed to explore how Internet accessibility and usage are related to demographic, psychosocial, behavioral, and other health-related characteristics. This study was based on a cross-sectional survey conducted on 2373 7th grade students of various ethnic groups in Southern California. Measures of Internet use included Internet use at school or at home, Email use, chat-room use, and Internet favoring. Logistic regressions were conducted to assess the associations between Internet uses with selected demographic, psychosocial, behavioral variables and self-reported health statuses. The proportion of students who could access the Internet at school or home was 90% and 40%, respectively. Nearly all (99%) of the respondents could access the Internet either at school or at home. Higher SES and Asian ethnicity were associated with higher Internet use. Among those who could access the Internet and after adjusting for the selected demographic and psychosocial variables, depression was positively related with chat-room use and using the Internet longer than 1 hour per day at home, and hostility was positively related with Internet favoring (All ORs = 1.2 for +1 STD, p Internet use (ORs for +1 STD ranged from 1.2 to 2.0, all p Internet use. Substance use was positively related to email use, chat-room use, and at home Internet use (OR for "used" vs. "not used" ranged from 1.2 to 4.0, p Internet use at home but lower levels of Internet use at school. More physical activity was related to more email use (OR = 1.3 for +1 STD), chat room use (OR = 1.2 for +1 STD), and at school ever Internet use (OR = 1.2 for +1 STD, all p Internet use-related measures. In this ethnically diverse sample of Southern California 7th grade students, 99% could access the Internet at school and/or at home. This suggests that the Internet
Sweeney, Edwina; Curran, Kevin; Xie, Ermai
A Web crawler or spider crawls through the Web looking for pages to index, and when it locates a new page it passes the page on to an indexer. The indexer identifies links, keywords, and other content and stores these within its database. This database is searched by entering keywords through an interface, and suitable Web pages are returned in a results page in the form of hyperlinks accompanied by short descriptions. The Web, however, is increasingly moving away from being a collection of documents to being a multidimensional repository for sounds, images, audio, and other formats. This is leading to a situation where certain parts of the Web are invisible or hidden. The term "Deep Web" has emerged to refer to the mass of information that can be accessed via the Web but cannot be indexed by conventional search engines. The concept of the Deep Web makes searches quite complex for search engines. Google states that the claim that conventional search engines cannot find such documents as PDFs, Word, PowerPoint, Excel, or any non-HTML page is not fully accurate, and steps have been taken to address this problem by implementing procedures to search items such as academic publications, news, blogs, videos, books, and real-time information. However, Google still only provides access to a fraction of the Deep Web. This chapter explores the Deep Web and the current tools available for accessing it.
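The crawl-then-index loop described above can be sketched compactly. To keep the sketch self-contained it walks an in-memory "web" of pages and links rather than fetching over HTTP; the page contents are invented for illustration:

```python
# Sketch of a crawler + indexer: breadth-first traversal of linked pages,
# building an inverted index from words to the pages that contain them.
from collections import deque

PAGES = {  # stand-in for the live Web: url -> links + text
    "/home": {"links": ["/a", "/b"], "text": "welcome deep web"},
    "/a":    {"links": ["/b"],       "text": "hidden content behind forms"},
    "/b":    {"links": [],           "text": "deep web is large"},
}

def crawl_and_index(start):
    index, seen, frontier = {}, set(), deque([start])
    while frontier:
        url = frontier.popleft()
        if url in seen:
            continue
        seen.add(url)
        page = PAGES[url]
        for word in page["text"].split():   # the "indexer" step
            index.setdefault(word, set()).add(url)
        frontier.extend(page["links"])      # follow discovered links
    return index

index = crawl_and_index("/home")
print(sorted(index["deep"]))  # -> ['/b', '/home']
```

A keyword search is then just a lookup in the inverted index. Deep Web content is exactly the content such a traversal never reaches, because it sits behind forms rather than hyperlinks.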
During the upcoming Summer 2016 meeting of the ESIP Federation (July 19-22), OPeNDAP will hold a Developers and Users Workshop. While a broad set of topics will be covered, a key focus is capitalizing on recent EOSDIS-sponsored advances in Hyrax, OPeNDAP's own software for server-side realization of the DAP2 and DAP4 protocols. These Hyrax advances are as important to data users as to data providers, and the workshop will include hands-on experiences of value to both. Specifically, a balanced set of presentations and hands-on tutorials will address advances in (1) server installation, (2) server configuration, (3) Hyrax aggregation capabilities, (4) support for data access from clients that are HTTP-based, JSON-based or OGC-compliant (especially WCS and WMS), (5) support for DAP4, (6) use and extension of server-side computational capabilities, and (7) several performance-affecting matters. Topics 2 through 7 will be relevant to data consumers, data providers and, notably due to the open-source nature of all OPeNDAP software, to developers wishing to extend Hyrax, to build compatible clients and servers, and/or to employ Hyrax as middleware that enables interoperability across a variety of end-user and source-data contexts. A session for contributed talks will elaborate on the topics listed above and embrace additional ones.
Bauer, R.; Scambos, T.; Haran, T.; Maurer, J.; Bohlander, J.
A prototype of the Antarctic Cryosphere Access Portal (A-CAP) has been released for public use. Developed at the National Snow and Ice Data Center (NSIDC) Antarctic Glaciological Data Center (AGDC), A-CAP aims to be a geo-visualization and data download tool for AGDC data and other Antarctic-wide parameters, including glaciology, ice core data, snow accumulation, satellite imagery, digital elevation models (DEMs), sea ice concentration, and many other cryosphere-related scientific measurements. The user can zoom in to a specific region as well as overlay coastlines, placenames, latitude/longitude, and other geographic information. In addition to providing an interactive Web interface, customizable A-CAP map images and source data are also accessible via specific Uniform Resource Locator strings (URLs) to a standard suite of Open Geospatial Consortium (OGC) services: Web Map Service (WMS), Web Feature Service (WFS), and Web Coverage Service (WCS). The international specifications of these services provide an interoperable framework for sharing maps and geospatial data over the Internet, allowing A-CAP products to be easily exchanged with other data centers worldwide and enabling remote access for users through OGC-compliant software applications such as ArcGIS, Google Earth, ENVI, and many others. A-CAP is built on MapServer, an Open Source development environment for building spatially-enabled Internet applications. MapServer uses data sets that have been formatted as GeoTIFF or Shapefile to allow rapid sub-setting and over-the-Web presentation of large geospatial data files, and has no requirement for a user-installed client software package (besides a Web browser).
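As the abstract notes, A-CAP map images are addressable through OGC service URLs. A sketch of constructing such a WMS GetMap request is below; the endpoint and layer name are placeholders, not real NSIDC URLs:

```python
# Sketch: build an OGC WMS 1.1.1 GetMap URL of the kind A-CAP exposes.
# Endpoint, layer name and bounding box are illustrative placeholders.
from urllib.parse import urlencode

def wms_getmap_url(base, layer, bbox, width=512, height=512):
    params = {
        "SERVICE": "WMS", "VERSION": "1.1.1", "REQUEST": "GetMap",
        "LAYERS": layer,
        "SRS": "EPSG:3031",  # Antarctic Polar Stereographic
        "BBOX": ",".join(map(str, bbox)),
        "WIDTH": width, "HEIGHT": height, "FORMAT": "image/png",
    }
    return base + "?" + urlencode(params)

url = wms_getmap_url("https://example.org/wms", "sea_ice_concentration",
                     (-3950000, -3950000, 3950000, 3950000))
print(url)
```

Because the request is just a parameterized URL, any OGC-compliant client (ArcGIS, Google Earth, ENVI, or a plain browser) can retrieve the same map, which is the interoperability benefit the abstract describes.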
National Archives and Records Administration — The OGIS Access System (OAS) provides case management, stakeholder collaboration, and public communications activities including a web presence via a web portal.
上超望; 赵呈领; 刘清堂; 王艳凤
Access control is one of the key technologies for secure and reliable value-added applications of Web services composition. This paper reviews the state of research on access control in Web services composition environments. We first discuss the challenges to secure Web services composition, and then analyse the security problems of Web services composition from a hierarchical perspective. Next, we discuss research progress on the key access control technologies in three respects: access control architecture for Web services composition, consistent coordination of atomic security policies, and business process authorization. Finally, we conclude and point out problems that should be resolved in future research.
User interfaces of web-based online public access catalogs (OPACs) of academic, special, public and national libraries in the Mercosur countries (Argentina, Brazil, Paraguay, Uruguay) are analyzed to provide a diagnosis of the state of bibliographic description, subject analysis, user help messages, and bibliographic data display. A quali-quantitative methodology is adopted, and Hildreth's (1982) checklist of system functions is updated and used as the data collection instrument. The resulting form of 38 closed questions allows observation of the frequency of appearance of the basic functions in four areas: Area I, operations control; Area II, search formulation control and access points; Area III, output control; and Area IV, user assistance (information and instruction). Data from 297 units are analyzed, with strata delimited by software type, library type and country. Chi-square, odds ratio and multinomial logistic regression tests are applied to the results. The analysis corroborates the existence of significant differences in each of the strata and verifies that most of the OPACs surveyed offer only minimal functionality.
Pawlicki, T [UC San Diego Medical Center, La Jolla, CA (United States); Brown, D; Dunscombe, P [Tom Baker Cancer Centre, Calgary, AB (Canada); Mutic, S [Washington University School of Medicine, Saint Louis, MO (United States)
Le, Ha Thanh; Nguyen, Duy Cu; Briand, Lionel
This technical report details our semi-automated framework for the reverse-engineering and testing of access control (AC) policies for web-based applications. In practice, AC specifications are often missing or poorly documented, leading to AC vulnerabilities. Our goal is to learn and recover AC policies from the implementation, and to assess them to find AC issues. Built on top of a suite of security tools, our framework automatically explores a system under test, mines domain input specification...
Winter, A. G.; Wildenhain, J; Tyers, M
Summary: The Biological General Repository for Interaction Datasets (BioGRID) representational state transfer (REST) service allows full URL-based access to curated protein and genetic interaction data at the BioGRID database. Appending URL parameters allows filtering of data by various attributes including gene names and identifiers, PubMed ID and evidence type. We also describe two visualization tools that interface with the REST service, the BiogridPlugin2 for Cytoscape and the BioGRID Web...
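The URL-parameter filtering the summary describes can be sketched as follows. The parameter names mirror the style the service documents (gene lists, evidence types, an access key), but treat the exact set as an assumption, and the access key below is a placeholder:

```python
# Sketch: build a BioGRID REST query URL by appending filter parameters.
# The access key is a placeholder; parameter names are illustrative.
from urllib.parse import urlencode

BASE = "https://webservice.thebiogrid.org/interactions/"

def biogrid_query(gene_list, evidence=None, access_key="YOUR_KEY"):
    params = {
        "geneList": "|".join(gene_list),  # pipe-separated gene names
        "accessKey": access_key,
        "format": "tab2",
    }
    if evidence:
        params["evidenceList"] = "|".join(evidence)
    return BASE + "?" + urlencode(params)

url = biogrid_query(["MDM2", "TP53"], evidence=["Two-hybrid"])
print(url)
```

A client would issue an HTTP GET on this URL and parse the tab-delimited interaction records returned, which is how the visualization tools mentioned in the summary interface with the service.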
Johnson, G. W.; Gonzalez, J.; Brady, J. J.; Gaylord, A.; Manley, W. F.; Cody, R.; Dover, M.; Score, R.; Garcia-Lavigne, D.; Tweedie, C. E.
ARMAP 3D allows users to dynamically interact with information about U.S. federally funded research projects in the Arctic. This virtual globe allows users to explore data maintained in the Arctic Research & Logistics Support System (ARLSS) database providing a very valuable visual tool for science management and logistical planning, ascertaining who is doing what type of research and where. Users can “fly to” study sites, view receding glaciers in 3D and access linked reports about specific projects. Custom “Search” tasks have been developed to query by researcher name, discipline, funding program, place names and year and display results on the globe with links to detailed reports. ARMAP 3D was created with ESRI’s free ArcGIS Explorer (AGX) new build 900 providing an updated application from build 500. AGX applications provide users the ability to integrate their own spatial data on various data layers provided by ArcOnline (http://resources.esri.com/arcgisonlineservices). Users can add many types of data including OGC web services without any special data translators or costly software. ARMAP 3D is part of the ARMAP suite (http://armap.org), a collection of applications that support Arctic science tools for users of various levels of technical ability to explore information about field-based research in the Arctic. ARMAP is funded by the National Science Foundation Office of Polar Programs Arctic Sciences Division and is a collaborative development effort between the Systems Ecology Lab at the University of Texas at El Paso, Nuna Technologies, the INSTAAR QGIS Laboratory, and CH2M HILL Polar Services.
Kunicki, T.; Blodgett, D. L.; Booth, N. L.; Suftin, I.; Walker, J. I.
Environmental modelers from fields of study including climatology, hydrology, geology, and ecology need common, cross-discipline data sources and processing methods to enable working with large remote datasets. Watershed modelers, for example, need downscaled climate model data and land-cover data summaries to predict streamflow for various future climate scenarios. In turn, ecological modelers need the predicted streamflow conditions to understand how habitat of biotic communities might be affected. The U.S. Geological Survey Geo Data Portal project addresses these needs by providing a flexible application built on open-standard Web services that integrates and streamlines data retrieval and analysis. Open Geospatial Consortium Web Processing Services (WPS) were developed to allow interoperable access to data from servers delivering both de facto standard Climate and Forecast (CF) convention datasets and OGC standard Web Coverage Services (WCS). The Geo Data Portal can create commonly needed derivatives of data in numerous formats. As an example use case, a user can upload a shapefile specifying a region of interest (e.g. a watershed), pick a climate simulation, and retrieve a spreadsheet of predicted daily maximum temperature in that region up to 2100. Outcomes of the Geo Data Portal project support the rapid development of user interfaces for accessing and manipulating environmental data. The Geo Data Portal resulting from this project will be demonstrated accessing a range of climate and landscape data.
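The core reduction behind the use case above, turning a gridded time series plus a region of interest into one spatial statistic per time step, can be sketched in plain Python. A boolean cell mask stands in for the uploaded polygon here, and the values are invented; real requests operate on shapefiles and NetCDF-CF data via WPS:

```python
# Sketch: reduce a gridded time series to a per-time-step spatial mean over a
# region of interest (mask = cells inside the polygon; values illustrative).
grid = [  # two time steps of a 2x3 temperature grid (deg C)
    [[10.0, 12.0, 14.0], [11.0, 13.0, 15.0]],
    [[20.0, 22.0, 24.0], [21.0, 23.0, 25.0]],
]
mask = [[True, True, False], [False, True, False]]  # cells inside the polygon

def spatial_mean(step, mask):
    vals = [v for row, mrow in zip(step, mask)
              for v, m in zip(row, mrow) if m]
    return sum(vals) / len(vals)

series = [spatial_mean(step, mask) for step in grid]
print(series)  # one value per time step
```

Doing this reduction server-side is the point of the portal's design: only the small per-polygon time series crosses the network, not the full gridded dataset.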
As part of the Institute of Medical Illustrators' (IMI) scheme for continuing professional development (CPD), worksheets will be published at regular intervals in this Journal. These are designed to provide the members of IMI with a structured CPD activity that offers one way to earn credits. It is recognized that this worksheet requires some time spent undertaking the exercises. This activity has been tested, and hours have been allocated to individual tasks (see clocks), but these are intended only as a guide. The answers to the questions, along with any notes and reflections you make or other publications you find, should be kept in your CPD portfolio. PMID:15799590
Hannak, Aniko; Sapiezynski, Piotr; Kakhki, Arash Molavi;
Web search is an integral part of our daily lives. Recently, there has been a trend of personalization in Web search, where different users receive different results for the same search query. The increasing personalization is leading to concerns about Filter Bubble effects, where certain users are simply unable to access information that the search engines' algorithm decides is irrelevant. Despite these concerns, there has been little quantification of the extent of personalization in Web search today, or of the user attributes that cause it. In light of this situation, we make three contributions. First, we develop a methodology for measuring personalization in Web search results. While conceptually simple, there are numerous details that our methodology must handle in order to accurately attribute differences in search results to personalization. Second, we apply our methodology to 200 users...
Blodgett, D. L.; Walker, J. I.; Read, J. S.
The USGS Geo Data Portal (GDP) project started in 2010 with the goal of providing climate and landscape model output data to hydrology and ecology modelers in model-ready form. The system takes a user-specified collection of polygons and a gridded time series dataset and returns a time series of spatial statistics for each polygon. The GDP is designed for scalability and is generalized such that any data, hosted anywhere on the Internet adhering to the NetCDF-CF conventions, can be processed. Five years into the project, over 600 unique users from more than 200 organizations have used the system's web user interface and some datasets have been accessed thousands of times. In addition to the web interface, python and R client libraries have seen steady usage growth and several third-party web applications have been developed to use the GDP for easy data access. Here, we will present lessons learned and improvements made after five years of operation of the system's user interfaces, processing server, and data holdings. A vision for the future availability and processing of massive climate and landscape data will be outlined.
刘金平; 刘磊; 邹伟
Based on Advantech WebAccess, an energy monitoring and management system for a solar water heating device was developed, and its overall scheme, hardware design and software design are briefly introduced. The core of the system uses a series of ADAM4000 and ADAM5000 modules to realize equipment control and the acquisition of operating parameters. The administrator's terminal software, built on the Advantech WebAccess configuration software and the BEMS energy management system as its development platform, realizes local and remote monitoring, manual and automatic equipment control, data display and storage, real-time and historical trend display, parameter setting, and BEMS data analysis.
Jothi Venkateswaran C.
Users of the web have their own areas of interest. Given the tremendous growth of the web, it is very difficult to redirect users to their page of interest. Web usage mining techniques can be applied to study users' navigational behaviours based on their previous visit data. These navigational patterns can be extracted and used for web personalization or web site reorganization recommendations. Web usage mining techniques do not use the semantic knowledge of the web site for such navigational pattern discovery; but if ontology is applied along with web usage techniques, it can improve the quality of the detected patterns. This research work aims to design a framework that integrates semantic knowledge with the web usage mining process and generates a refined website ontology that recommends personalization of the web. As the web pages are seen as ontology individuals, the users' navigational behaviours over a certain period are considered as the expected ontology refinement. The user profiles and the web site ontology are compared, and the variation between the two is proposed as the new refined web site ontology. The web site ontology has been semi-automatically built and evolves through the adaptation procedure. The results of implementing this recommendation system indicate that integrating semantic information and page access patterns yields more accurate recommendations.
National Aeronautics and Space Administration — TerraMetrics, Inc., proposes an SBIR Phase I R/R&D program to investigate and develop a key web services architecture that provides data processing, storage and...
Khelghati, Mohammadreza; Hiemstra, Djoerd; van Keulen, S.
With the increasing amount of data in deep web sources (hidden from general search engines behind web forms), accessing this data has gained more attention. In the algorithms applied for this purpose, it is the knowledge of a data source's size that enables the algorithms to make accurate decisions about stopping crawling or sampling processes, which can be very costly in some cases. The tendency to know the sizes of data sources is increased by the competition among businesses on the Web in which th...
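One well-known family of size estimators for a source that can only be sampled is capture-recapture, e.g. the Lincoln-Petersen estimator. The sketch below is illustrative of that general idea and is not necessarily the estimator used in the work above:

```python
# Sketch: Lincoln-Petersen capture-recapture estimate of a hidden source's
# size from two independent samples of document IDs and their overlap.
def lincoln_petersen(sample1, sample2):
    """N ~ |s1| * |s2| / |s1 & s2| for two independent uniform samples."""
    overlap = len(set(sample1) & set(sample2))
    if overlap == 0:
        raise ValueError("no overlap between samples; cannot estimate")
    return len(sample1) * len(sample2) / overlap

# Two samples of document IDs drawn from a deep web source:
s1 = {1, 2, 3, 4, 5, 6, 7, 8, 9, 10}
s2 = {6, 7, 8, 9, 10, 11, 12, 13, 14, 15}
print(lincoln_petersen(s1, s2))  # -> 20.0  (10 * 10 / 5)
```

An estimate like this lets a crawler or sampler decide when coverage is good enough to stop, which is the stopping decision the abstract refers to.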
Lundegaard, Claus; Lamberth, Kasper; Harndahl, Mikkel; Buus, Søren; Lund, Ole; Nielsen, Morten
NetMHC-3.0 is trained on a large number of quantitative peptide data using both affinity data from the Immune Epitope Database and Analysis Resource (IEDB) and elution data from SYFPEITHI. The method generates high-accuracy predictions of major histocompatibility complex (MHC):peptide binding. The predictions are based on artificial neural networks trained on data from 55 MHC alleles (43 human and 12 non-human), and position-specific scoring matrices (PSSMs) for an additional 67 HLA alleles. As only the MHC class I prediction server is available, predictions are possible for peptides of length 8-11 for all 122 alleles. Artificial neural network predictions are given as actual IC50 values, whereas PSSM predictions are given as log-odds likelihood scores. The output is optionally available as a download for easy post-processing. The training method underlying the server is the best available, and has been used to predict possible MHC-binding peptides in a series of pathogen viral proteomes including SARS, Influenza and HIV, resulting in an average of 75-80% confirmed MHC binders. Here, the performance is further validated and benchmarked using a large set of newly published affinity data, non-redundant to the training set. The server is free of use and available at: http://www.cbs.dtu.dk/services/NetMHC.
Monitoring of PMT dark noise in the BOREXINO neutrino detector is a procedure that indicates the condition of the detector. Based on a CAN industrial network, the top-level DeviceNet protocol and Web visualization, a dark-noise monitoring system with 256 channels for the internal detector and for the external muon veto was created. The system is composed of a set of controllers that convert the PMT signals to frequencies and transmit them over the CAN network. The software is a stack of DeviceNet protocols providing data collection and transport. Server-side scripts build web pages for the user interface and graphical visualization of the data.
Duncan, R G; Saperia, D; Dulbandzhyan, R; Shabot, M M; Polaschek, J X; Jones, D T
The advent of the World-Wide-Web protocols and client-server technology has made it easy to build low-cost, user-friendly, platform-independent graphical user interfaces to health information systems and to integrate the presentation of data from multiple systems. The authors describe a Web interface for a clinical data repository (CDR) that was moved from concept to production status in less than six months using a rapid prototyping approach, multi-disciplinary development team, and off-the-shelf hardware and software. The system has since been expanded to provide an integrated display of clinical data from nearly 20 disparate information systems.
Nowadays mobile phones are replacing conventional PCs, as users browse and search the Internet via their mobile handsets. Web-based services and information can be accessed from any location with relative ease using mobile devices such as mobile phones and Personal Digital Assistants (PDAs). To access educational data on mobile devices, web page adaptation is needed, keeping in mind the security and quality of the data. Various researchers are working on adaptation techniques. Educational Web Miner aims to develop an interface for kids to use mobile devices in a secure way. This paper presents a framework for adapting web pages as part of Educational Web Miner so that educational data can be accessed accurately, securely and concisely. The present paper is part of a project whose aim is to develop an interface for kids, so that they can access current knowledge bases from mobile devices in a secure way and get accurate and concise information with ease. Related studies on adaptation techniques are also presented in this paper.
... assistance for a visually impaired passenger, and stowage of an assistive device. Automated Airport Kiosk... airport kiosks simplifies the airport experience of countless travelers as they independently conduct... Accessible Kiosks. airports to ensure that accessible automated airport kiosks are visually and...
Ajax and web services are a perfect match for developing web applications. Ajax has built-in abilities to access and manipulate XML data, the native format for almost all REST and SOAP web services. Using numerous examples, this document explores how to fit the pieces together. Examples demonstrate how to use Ajax to access publicly available web services from Yahoo! and Google. You'll also learn how to use web proxies to access data on remote servers and how to transform XML data using XSLT.
Access control in Web applications has been widely studied. Because HTTP is stateless, it makes the security design of applications considerably harder. Spring Security provides a complete access control mechanism and thus strong support for application security design. After introducing Spring Security's overall framework for controlling access to objects, this paper focuses on user authentication and on configuring URL-based access authorization. The configuration and key points of method-level security and of protecting JSP page content are also briefly introduced.
Lucila Maria Costi Santarosa
Eduquito, a digital/virtual learning environment developed by a NIEE/UFRGS team of researchers, seeks to support processes of socio-digital inclusion; for that reason it was devised according to the accessibility and universal design principles systematized by the WAI/W3C. We discuss the development of the accessible digital/virtual platform and the results of its use by people with special needs, revealing an ongoing process of verification and validation of the features and functionality of the Eduquito environment with respect to human diversity. We present and discuss two individual and collective authorship tools, the Multimedia Workshop and Bloguito, an accessible blog, new features of the Eduquito environment that emerge from applying the concept of pervasiveness, in order to establish literacy spaces and boost technological mediation for socio-digital inclusion in the Web 2.0 context.
This paper briefly compares several solutions for accessing Web site databases, concluding that the best current solution uses ASP 5.0 as the scripting environment, Microsoft VFP 6.0 as the Web database, and Windows NT Server 4.0 as the running platform. On that basis, the methods and techniques of using ASP and ADO to access a Web database are systematically introduced and illustrated with an example.
The concepts of permission value and quantified role are introduced to build a fine-grained Web services access control model. By defining the resources of Web services, service attributes and the set of access modes, the definition of the permission set is expanded. The definition and distribution of permission values are studied, and the validation and representation of quantified roles are analysed. The concept of the 'behaviour value' of a Web services user is proposed, and a correlation between behaviour values and the role quantity of a user is established. Dynamic calculation of behaviour values and adjustment of user permissions are achieved based on user behaviour and context.
李怀明; 王慧佳; 符林
Because current access control strategies struggle to guarantee flexible authorization in complex E-government systems oriented to Web services, this paper proposes an organization-based access control model for Web services, building on research into the organization-based four-level access control model (OB4LAC). The model takes the organization as its core and studies access control and authorization management from a management perspective. By introducing position agents and authorization units, authorization can be adjusted as environment context information changes, implementing dynamic authorization, while the state migration of authorization units provides support for workflow patterns. Furthermore, the model divides permissions into service permissions and service-attribute permissions, achieving fine-grained resource protection. Application examples show that the model fits the complex organization structures found in E-government systems and makes authorization more efficient and flexible while protecting Web service resources.
The purpose of developing e-Government is to make public administrations more efficient and transparent and to allow citizens to access information more comfortably and effectively. Such benefits are even more important to people with a physical disability, allowing them to reduce waiting times in procedures and travel. However, e-Government is not in widespread use among this group, as they not only harbor the same fears as other citizens, but must also cope with the barriers inherent to their disability. This research proposes a solution to help persons with disabilities access e-Government services. This work, in cooperation with the Spanish Federation of Spinal-Cord Injury Victims and the Severely Disabled, includes the development of a portal specially oriented towards people with disabilities to help them locate and access services offered by Spanish administrations. Use of the portal relies on digital authentication of users based on X.509 certificates, which are found in the identity cards of Spanish citizens. However, an analysis of their use reveals that this feature constitutes a significant barrier to accessibility. This paper proposes a more accessible solution using a USB cryptographic token that conceals from users all the complexity entailed in accessing certificate-based applications, while assuring the required security.
Elsa Barber; Silvia Pisano; Sandra Romagnoli; Verónica Parsiale; Gabriela De Pedro; Carolina Gregui
Se analizan las interfaces de usuario de los catálogos en línea de acceso público (OPACs) en entorno web de las bibliotecas universitarias, especializadas, públicas y nacionales de los países parte del Mercosur (Argentina, Brasil, Paraguay, Uruguay), para elaborar un diagnóstico de situación sobre: descripción bibliográfica, análisis temático, mensajes de ayuda al usuario, visualización de datos bibliográficos. Se adopta una metodología cuali-cuantitativa, se utiliza como instrumento de recol...
Do, Nhan; Marinkovich, Andre; Koisch, John; Wheeler, Gary
Our clinical providers spend an estimated four hours weekly answering phone messages from patients. Our nurses spend five to ten hours weekly returning phone calls. Most of this time is spent conveying recent clinical results, reviewing with patients discharge instructions such as consults or studies ordered during office visits, and handling patients' requests for medication renewals. Over time this will lead to greater patient dissatisfaction because of lengthy waiting times and lack of timely access to their medical information. It would also lead to greater nurse and provider dissatisfaction because of an unreasonable workload. PMID:14728335
Spinuso, A.; Trani, L.; Rives, S.; Thomy, P.; Euchner, F.; Schorlemmer, D.; Saul, J.; Heinloo, A.; Bossu, R.; van Eck, T.
The Network of Research Infrastructures for European Seismology (NERIES) is a European Commission (EC) project focused on networking seismological observatories and research institutes into one integrated European infrastructure that provides access to data and data products for research. Seismological institutes and organizations in European and Mediterranean countries maintain large, geographically distributed data archives; this scenario suggested a design approach based on an internet service-oriented architecture (SOA) to establish a cyberinfrastructure for distributed and heterogeneous data streams and services. Moreover, one of the goals of NERIES is to design and develop a Web portal that acts as the uppermost layer of the infrastructure and provides rendering capabilities for the underlying data sets. The Web services currently being designed and implemented deliver data adapted to appropriate formats. Parametric information about a seismic event is delivered using a seismology-specific Extensible Markup Language (XML) format called QuakeML (https://quake.ethz.ch/quakeml), which has been formalized and implemented in coordination with global earthquake-information agencies. Uniform Resource Identifiers (URIs) are used to assign identifiers to (1) seismic-event parameters described by QuakeML, and (2) generic resources, for example authorities, location providers, location methods, adopted software, and so on, described by use of a data model constructed with the resource description framework (RDF) and accessible as a service. The European-Mediterranean Seismological Centre (EMSC) has implemented a unique event identifier (UNID) that will create the seismic-event URI used by the QuakeML data model. Access to data such as broadband waveforms, accelerometric data and station inventories will also be provided through a set of Web services that wrap the middleware used by the
U.S. Environmental Protection Agency — EPA's Envirofacts Website hosts web enabled tools accessing the Envirofacts Data Warehouse and the Internet to provide a single point of access to select EPA...
Klug, Hermann; Kmoch, Alexander
Transboundary and cross-catchment access to hydrological data is the key to designing successful environmental policies and activities. Electronic maps based on distributed databases are fundamental for planning and decision making in all regions and for all spatial and temporal scales. Freshwater is an essential asset in New Zealand (and globally), and the availability and accessibility of hydrological information held by or for public authorities and businesses is becoming a crucial management factor. Access to and visual representation of environmental information for the public is essential for attracting greater awareness of water quality and quantity matters. Detailed interdisciplinary knowledge about the environment is required to ensure that the environmental policy-making community of New Zealand considers regional and local differences in hydrological status while assessing the overall national situation. However, cross-regional and inter-agency sharing of environmental spatial data is complex and challenging. In this article, we firstly provide an overview of state-of-the-art, standards-compliant techniques and methodologies for the practical implementation of simple, measurable, achievable, repeatable, and time-based (SMART) hydrological data management principles. Secondly, we contrast international state-of-the-art data management developments with the present status of groundwater information in New Zealand. Finally, for the topics of (i) data access and harmonisation, (ii) sensor web enablement and (iii) metadata, we summarise our findings, provide recommendations on future developments and highlight the specific advantages resulting from a seamless view, discovery, access, and analysis of interoperable hydrological information and metadata for decision making.
Web sequence pattern mining applies data mining techniques to Web access sequences. Mining Web access sequences can discover the frequent patterns of user interaction with a site; with these patterns one can model and analyse how users interact with the site and predict future access patterns, which is of great significance for building intelligent Web sites and conducting e-commerce activities. This paper introduces the traditional PLWAP (position-coded pre-order linked WAP-tree) algorithm and proposes NPLWAP, an improved algorithm with a new method of constructing the Header table. In NPLWAP, each step of Header table construction is based only on the suffix-tree set of the node currently being processed, and the Header table stores only the root nodes of the suffix-tree sets rather than all their nodes, thereby reducing the number of checks during mining. Experimental comparison on real data shows that NPLWAP greatly improves on the running time of the traditional PLWAP algorithm.
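As a minimal illustration of the idea behind mining frequent access patterns from web access sequences (this is a naive contiguous-subsequence counter, not the PLWAP or NPLWAP algorithm itself; the session data is invented for the example):

```python
from collections import Counter

def frequent_subsequences(sessions, min_support, max_len=3):
    """Count contiguous page subsequences across sessions and keep those
    whose support (fraction of sessions containing them) meets a threshold."""
    counts = Counter()
    for session in sessions:
        seen = set()
        for n in range(1, max_len + 1):
            for i in range(len(session) - n + 1):
                seen.add(tuple(session[i:i + n]))
        counts.update(seen)  # count each pattern at most once per session
    threshold = min_support * len(sessions)
    return {p: c for p, c in counts.items() if c >= threshold}

# Hypothetical access log, one page sequence per session:
sessions = [
    ["home", "products", "cart"],
    ["home", "products", "about"],
    ["home", "cart"],
]
patterns = frequent_subsequences(sessions, min_support=0.6)
```

Real WAP-tree algorithms avoid this exhaustive enumeration by sharing prefixes in a tree and mining header-table links, which is exactly the structure NPLWAP's modified Header table construction speeds up.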
Wan, Miao; Jönsson, Arne; Wang, Cong; Li, Lixiang; Yang, Yixian
Users of a Web site usually perform their interest-oriented actions by clicking or visiting Web pages, which are traced in access log files. Clustering Web user access patterns may capture common user interests to a Web site, and in turn, build user profiles for advanced Web applications, such as Web caching and prefetching. The conventional Web usage mining techniques for clustering Web user sessions can discover usage patterns directly, but cannot identify the latent factors or hidden relat...
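The session-clustering idea described above can be sketched with a toy similarity-based grouping. This is a hedged illustration only: real Web usage mining systems use richer session representations and clustering algorithms, and the threshold and sessions here are invented:

```python
def jaccard(a, b):
    """Jaccard similarity between two sets of visited pages."""
    return len(a & b) / len(a | b) if a | b else 1.0

def cluster_sessions(sessions, threshold=0.5):
    """Greedy clustering: each session joins the first cluster whose
    representative (first member) is similar enough, else starts a new one."""
    clusters = []
    for s in map(set, sessions):
        for c in clusters:
            if jaccard(s, c[0]) >= threshold:
                c.append(s)
                break
        else:
            clusters.append([s])
    return clusters

# Hypothetical sessions: two users share interests, one does not.
clusters = cluster_sessions([["a", "b"], ["a", "b", "c"], ["x", "y"]])
```

Each resulting cluster approximates one "common interest" group, from which a user profile for caching or prefetching could be derived.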
Ichihara, Yasuyo G.
Internet imaging is used as interactive visual communication. It is different from other electronic imaging fields because the image is transported from one client to many others. If you and I each had different color vision, we might see Internet imaging differently. So what do you see in a digital color dot picture such as the Ishihara pseudoisochromatic plates? The Ishihara pseudoisochromatic test is the most widely used screening test for red-green color deficiency. The full version contains 38 plates. Plates 18-21 are hidden digit designs. For example, plate 20 has a hidden digit 45 that cannot be seen by normal trichromats but can be distinguished by most color deficient observers. In this study, we present a new digital color palette. This is the web accessibility palette, with which the same information in Internet imaging can be seen correctly by people of any color vision. For this study, we measured the Ishihara pseudoisochromatic test. We used the new Minolta 2D colorimeter system, CL1040i, which can measure all pixels in a 4 cm x 4 cm square. From the results, color groups of 8 to 10 colors in the Ishihara plates can be seen on isochromatic lines of the CIE-xy color space. On each plate, the form of a number is composed of 4 colors and the background is composed of the remaining 5 colors. For normal trichromats, it is difficult to find the difference between the 4-color group which makes up the form of the number and the 5-color group of the background. We also found that for normal trichromats, highly salient colors like orange and red are included in the warm color group and are distinguished from the cool color group of blue, green and gray. From the results of our analysis of the Ishihara pseudoisochromatic test, we suggest a web accessibility palette consisting of 4 colors.
Access is a small database widely used in lightweight Web applications; in the Internet industry a large number of users develop Web applications with it using the ASP and PHP scripting languages. Its database engines JET and ACE came out one after another, but few people pay attention to the impact of the different engines on database performance, and there are few studies in practical applications. This paper attempts to use simple test methods and programs to confirm the differences between the old and new versions of the engine, and through actual testing reaches a conclusion contrary to the general understanding: the old Access database engine processes faster than the new one, whose improvements lie in stability rather than speed.
Purpose: To build an infrastructure that gives radiologists on call and external users teleradiological access to the HTML-based image distribution system inside the hospital via the internet. In addition, no investment costs should arise on the user side, and the image data should be renamed using cryptographic techniques before being sent. Materials and Methods: A pure HTML-based system manages the image distribution inside the hospital, with an open source project extending this system through a secure gateway outside the firewall of the hospital. The gateway handles the communication between the external users and the HTML server within the network of the hospital. A second firewall is installed between the gateway and the external users and builds up a virtual private network (VPN). A connection between the gateway and an external user is only acknowledged if the computers involved authenticate each other via certificates and the external users authenticate via a multi-stage password system. All data are transferred encrypted. External users only get access to images that have previously been renamed to a pseudonym by automated processing. Results: With ADSL internet access, external users achieve an image load rate of 0.4 CT images per second. More than 90% of the delay during image transfer results from security checks within the firewalls. Data passing the gateway induce no measurable delay.
The steady growth of the World Wide Web raises challenges regarding the preservation of meaningful Web data. Tools used currently by Web archivists blindly crawl and store Web pages found while crawling, disregarding the kind of Web site currently accessed (which leads to suboptimal crawling strategies) and whatever structured content is contained in Web pages (which results in page-level archives whose content is hard to exploit). We focus in this PhD work on the cr...
Boufkhad, Yacine; Viennot, Laurent
The web is now de facto the first place to publish data. However, retrieving the whole database represented by the web appears almost impossible. Some parts are known to be hard to discover automatically, giving rise to the so-called hidden or invisible web. On the one hand, search engines try to index most of the web. Almost all related work is based on discovering the web by crawling. This paper is devoted to estimating how accurate the view of the web obtained by crawling is. Our approach is...
Jones, P.; Binns, D.; McMenamin, C.; McAnulla, C.; Hunter, S.
The InterPro BioMart provides users with query-optimized access to predictions of family classification, protein domains and functional sites, based on a broad spectrum of integrated computational models (‘signatures’) that are generated by the InterPro member databases: Gene3D, HAMAP, PANTHER, Pfam, PIRSF, PRINTS, ProDom, PROSITE, SMART, SUPERFAMILY and TIGRFAMs. These predictions are provided for all protein sequences from both the UniProt Knowledge Base and the UniParc protein sequence arc...
Web 2.0 is a highly accessible introductory text examining all the crucial discussions and issues which surround the changing nature of the World Wide Web. It not only contextualises Web 2.0 within the history of the Web, but also goes on to explore its position within the broader dispositif of emerging media technologies. The book uncovers the connections between diverse media technologies including mobile smart phones, hand-held multimedia players, "netbooks" and electronic book readers such as the Amazon Kindle, all of which are made possible only by the Web 2.0. In addition, Web 2.0 m
Khelghati, Mohammad; Hiemstra, Djoerd; van Keulen, Maurice
The content of the web changes rapidly. In Focused Web Harvesting [?], which aims at achieving a complete harvest for a given topic, this dynamic nature of the web creates problems for users who need to access a complete set of web data related to their topics of interest. Whether you are a fan foll
Lun Cai; Jing-Ling Liu; Xi Wang
In web services testing, accurately accessing the interactive content of the services under test and information about their condition are the key issues of system design and realization. A non-intrusive solution based on Axis2 is presented to overcome the difficulty of information retrieval in web service testing. It can be plugged into the server side or the client side freely to test pre-deployed or deployed web services. Moreover, it provides a monitoring interface and a corresponding subscription/publication mechanism for users based on web services, supporting quality assurance for service-oriented architecture (SOA) application services.
Mirtaheri, Seyed M.; Dinçktürk, Mustafa Emre; Hooshmand, Salman; Bochmann, Gregor v.; Jourdan, Guy-Vincent; Onut, Iosif Viorel
Web crawlers visit internet applications, collect data, and learn about new web pages from visited pages. Web crawlers have a long and interesting history. Early web crawlers collected statistics about the web. In addition to collecting statistics about the web and indexing the applications for search engines, modern crawlers can be used to perform accessibility and vulnerability checks on the application. Quick expansion of the web, and the complexity added to web applications have made the ...
This paper first briefly introduces the technical background of P2P networks and Web Services, then proposes the idea of integrating P2P networks with Web Services. To this end it introduces the concept of a Web Service Broker, enabling peers in a P2P network to access Web Services transparently.
The semantic Web is a technology at the service of knowledge, aimed at accessibility and the sharing of content and facilitating interoperability between different systems; as such it is one of the nine key technological pillars of TIC (technologies for information and communication) within the third theme, the specific cooperation programme of the seventh framework programme for research and development (7°PQRS, 2007-2013). As a system it seeks to overcome the overload or excess of irrelevant i...
庞希愚; 王成; 仝春玲
The access control requirements of Web application systems and the shortcomings of the Role-Based Access Control (RBAC) model in Web application systems are analysed; a fundamental idea of access control based on a role-function model is proposed and its implementation details are discussed. Based on the Web page organization structure that forms naturally from the business function requirements of the system and the access control requirements of its users, the business functions implemented by the pages in the bottom-level menus are partitioned to form the basic unit of permission configuration. By configuring the relations between users, roles, pages, menus and functions, user access to system resources such as Web pages and the HTML elements and operations within them is controlled. Practical application in the scientific research management system of Shandong Jiaotong University shows that implementing access control on the business functions of menus and pages can well meet enterprise requirements for user access control in Web systems. The approach has the advantages of simple operation and strong versatility, and effectively reduces the workload of Web system development.
This dissertation addresses issues for the efficient access to Web databases and services. We propose a distributed ontology for a meaningful organization of and efficient access to Web databases. Next, we dedicate most of our work on presenting a comprehensive query infrastructure for the emerging concept of Web services. The core of this query infrastructure is to enable the efficient delivery of Web services b...
Mudiraj, P. V. G. S.; Jabber, B.; David Raju, K.
Web usage mining is a main research area in Web mining, focused on learning about Web users and their interactions with Web sites. The motive of mining is to find users' access models automatically and quickly from vast Web log data, such as frequent access paths, frequent access page groups and user clusters. Through web usage mining, the server log, registration information and other related information left by users provide a foundation for organizational decision making. This article provides a survey and analysis of current Web usage mining systems and technologies. There are generally three tasks in Web usage mining: preprocessing, pattern analysis and knowledge discovery. Preprocessing cleans the server log file by removing entries such as errors or failures and repeated requests for the same URL from the same host. The main task of pattern analysis is to filter out uninteresting information and to visualize and interpret the interesting patterns for users. The statistics collected from the log file can help to discover knowledge, which can be used to classify users as excellent, medium or weak, and web pages as excellent, medium or weak, based on the hit counts of the pages in the web site. The design of the website is then restructured based on user behaviour or hit counts, which provides quicker responses to web users, saves server memory and thus reduces HTTP requests and bandwidth utilization. This paper addresses challenges in the three phases of Web usage mining along with Web structure mining. It also discusses an application of WUM, an online recommender system that dynamically generates links to pages that have not yet been visited by a user and might be of potential interest. Unlike the recommender systems proposed so far, ONLINE MINER does not make use of any off-line component and is able to manage Web sites made up of dynamically generated pages.
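The preprocessing step described above (dropping failed requests and repeated requests for the same URL from the same host, then tallying hit counts) can be sketched as follows. The log format here is a simplified, hypothetical `(host, url, status)` triple rather than a real server log format:

```python
def preprocess(log_entries):
    """Clean a simplified access log and tally hit counts per URL.

    Drops error/failure entries (HTTP status >= 400) and repeated
    consecutive requests for the same URL from the same host.
    """
    cleaned, last = [], {}
    for host, url, status in log_entries:
        if status >= 400:
            continue  # error or failure entry
        if last.get(host) == url:
            continue  # repeated request for the same URL from the same host
        last[host] = url
        cleaned.append((host, url))
    hits = {}
    for _, url in cleaned:
        hits[url] = hits.get(url, 0) + 1
    return hits

# Hypothetical log: a reload, a 404, and two distinct visitors.
hits = preprocess([
    ("h1", "/a", 200),
    ("h1", "/a", 200),
    ("h1", "/b", 404),
    ("h2", "/a", 200),
])
```

The resulting hit counts are the kind of statistic the survey describes feeding into the excellent/medium/weak classification of pages.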
王新建; 邢建国; 于红艳
To realize group control of tyre capsule vulcanizers, a master-slave control system was composed of an industrial computer and programmable logic controllers (PLCs). The industrial computer serves as the host, with the WebAccess configuration software as the development platform, and realizes dynamic data display, storage, query, statistics, reports and other functions. PLCs are adopted as slave machines to control the vulcanizers. Communication between the industrial computer and the PLCs is realized through serial ports. The temperature of the vulcanizer is controlled with a fuzzy control strategy, and the design of the fuzzy controller is given. Test results indicate that the group control system has a friendly interface and is convenient to operate. Compared with single-machine control, the group control system improves capsule quality, production efficiency and the level of automation.
Percivall, George; Plesea, Lucian
The WMS Global Mosaic provides access to imagery of the global landmass using an open standard for web mapping. The seamless image is a mosaic of Landsat 7 scenes; geographically-accurate with 30 and 15 meter resolutions. By using the OpenGIS Web Map Service (WMS) interface, any organization can use the global mosaic as a layer in their geospatial applications. Based on a trade study, an implementation approach was chosen that extends a previously developed WMS hosting a Landsat 5 CONUS mosaic developed by JPL. The WMS Global Mosaic supports the NASA Geospatial Interoperability Office goal of providing an integrated digital representation of the Earth, widely accessible for humanity's critical decisions.
A. K. Santra
The World Wide Web is a distributed internet system providing dynamic and interactive services, including online tutoring, video/audio conferencing and e-commerce, which place heavy demands on network resources and web servers. Use has grown very rapidly over the past few years, increasing the amount of traffic over the internet; as a result, network performance has become very slow. Web pre-fetching and caching is one of the effective solutions for reducing web access latency and improving quality of service. An existing model presented a cluster-based pre-fetching scheme that identified clusters of correlated Web pages based on users' access patterns. Web pre-fetching and caching yield significant improvements in the performance of Web infrastructure. In this paper, we present an efficient cluster-based Web Object Filter scheme, built on Web pre-fetching and caching, to evaluate web users' navigation patterns and preferences in product search. Web page objects are clustered from pre-fetched and cached contents. User navigation is evaluated from the web cluster objects with similarity retrieval in subsequent user sessions. Web Object Filters are built by interpreting the cluster web pages related to unique users and discarding redundant pages. Ranking is performed on users' web page product preferences across multiple sessions of each individual user. Performance is measured in terms of objective function, number of clusters and cluster accuracy.
Background: Recent discoveries concerning novel functions of RNA, such as RNA interference, have contributed to the growing importance of the field. In this respect, a deeper knowledge of complex three-dimensional RNA structures is essential to understand their new biological functions. A number of bioinformatic tools have been proposed to explore two major structural databases (PDB, NDB) in order to analyze various aspects of RNA tertiary structures. One of these tools is RNA FRABASE 1.0, the first web-accessible database with an engine for automatic search of 3D fragments within PDB-derived RNA structures. This search is based upon a user-defined RNA secondary structure pattern. In this paper, we present and discuss RNA FRABASE 2.0. This second version of the system represents a major extension of the tool in terms of new data and a wide spectrum of novel functionalities. An intuitively operated web server platform enables very fast, user-tailored search of three-dimensional RNA fragments, their multi-parameter conformational analysis and visualization. Description: RNA FRABASE 2.0 stores information on 1565 PDB-deposited RNA structures, including all NMR models. Its search engine algorithms operate on a database of RNA sequences and a new library of RNA secondary structures, coded in a dot-bracket format extended to hold multi-stranded structures and to cover residues whose coordinates are missing from the PDB files. The library of RNA secondary structures (and their graphics) is made available. A high level of 3D search efficiency has been achieved by introducing novel tools to formulate advanced search patterns and to screen highly populated tertiary structure elements. RNA FRABASE 2.0 also stores data and conformational parameters in order to provide "on the spot" structural filters to explore the three-dimensional RNA structures. An instant visualization of the 3D RNA
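As a small illustration of the dot-bracket coding that RNA FRABASE's secondary-structure patterns are based on, a generic stack parser (not the database's own search engine) can recover base pairs from such a string:

```python
# In dot-bracket notation, '(' opens a base pair, ')' closes the most
# recently opened one, and '.' marks an unpaired residue.
def base_pairs(dot_bracket):
    stack, pairs = [], []
    for i, ch in enumerate(dot_bracket):
        if ch == "(":
            stack.append(i)
        elif ch == ")":
            pairs.append((stack.pop(), i))  # pair the opener with this closer
    return sorted(pairs)

# A small hairpin: a stem of 3 base pairs enclosing a loop of 4 residues.
print(base_pairs("(((....)))"))
```

The outermost pair (0, 9) encloses the rest, which is exactly the nesting that a single-stranded dot-bracket string can express; the extended multi-stranded coding the abstract mentions goes beyond this basic form.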
Rodriguez, Jose Manuel; Carro, Angel; Valencia, Alfonso; Tress, Michael L.
This paper introduces the APPRIS WebServer (http://appris.bioinfo.cnio.es) and WebServices (http://apprisws.bioinfo.cnio.es). Both the web server and the web services are based around the APPRIS Database, a database that presently houses annotations of splice isoforms for five different vertebrate genomes. The APPRIS WebServer and WebServices provide access to the computational methods implemented in the APPRIS Database, while the APPRIS WebServices also allow retrieval of the annotations. ...
Dolog, Peter; Nejdl, Wolfgang
Ontologies and reasoning are the key terms brought into focus by the semantic web community. Formal representation of ontologies in a common data model on the web can be taken as a foundation for adaptive web technologies as well. This chapter describes how ontologies shared on the semantic web provide conceptualization for the links which are a main vehicle to access information on the web. The subject domain ontologies serve as constraints for generating only those links which are relevant for the domain a user is currently interested in. Furthermore, user model ontologies provide additional means for deciding which links to show, annotate, hide, generate, and reorder. The semantic web technologies provide means to formalize the domain ontologies and metadata created from them. The formalization enables reasoning for personalization decisions. This chapter describes which components...
VisPort: Web-Based Access to Community-Specific Visualization Functionality [Shedding New Light on Exploding Stars: Visualization for TeraScale Simulation of Neutrino-Driven Supernovae] (Final Technical Report)
Baker, M Pauline
The VisPort visualization portal is an experiment in providing Web-based access to visualization functionality from any place and at any time. VisPort adopts a service-oriented architecture to encapsulate visualization functionality and to support remote access. Users employ browser-based client applications to choose data and services, set parameters, and launch visualization jobs. Visualization products (typically images or movies) are viewed in the user's standard Web browser. VisPort emphasizes visualization solutions customized for specific application communities. Finally, VisPort relies heavily on XML, and introduces the notion of visualization informatics: the formalization and specialization of information related to the process and products of visualization.
With the further development of network technology, Web Services technology is gradually being applied to various types of management systems. Because Web services are component-model independent, platform independent and programming-language independent, they can be used for system integration. This paper introduces an access control management system for student apartments based on Web services, explaining the design in terms of system architecture, design patterns and key Web service technologies. The data of a student apartment access control system built on Web services can be called directly by other application systems, supporting the integrated construction of university information systems.
Stackhouse, P. W.; Barnett, A. J.; Tisdale, M.; Tisdale, B.; Chandler, W.; Hoell, J. M., Jr.; Westberg, D. J.; Quam, B.
The NASA LaRC Atmospheric Science Data Center has deployed a beta version of an existing geophysical parameter website employing off-the-shelf Geographic Information System (GIS) tools. The revitalized web portal, entitled "Surface meteorological and Solar Energy" (SSE - https://eosweb.larc.nasa.gov/sse/), has been supporting an estimated 175,000 users with baseline solar and meteorological parameters as well as calculated parameters that enable feasibility studies for a wide range of renewable energy systems, particularly those featuring solar energy technologies. The GIS tools generate and store climatological averages using spatial queries and calculations (by parameter, for the globe) in a spatial database, resulting in greater accessibility for government agencies, industry and individuals. The data parameters are produced from NASA science projects and reformulated specifically for the renewable energy industry and other applications. This first version includes: 1) a processed and reformulated set of baseline data parameters consistent with Esri and open GIS tools, 2) a limited set of Python-based functions to compute additional parameters "on-the-fly" from the baseline data products, 3) updated web sites enabling web-based display of these parameters for plotting and analysis, and 4) output of data parameters in geoTiff, ASCII and netCDF formats. The beta version is being actively reviewed through interaction with a group of collaborators from government and industry in order to test web site usability, display tools and features, and output data formats. This presentation provides an overview of this project and the current version of the new SSE-GIS web capabilities through to end usage. This project supports cross-agency and cross-organization interoperability and access to NASA SSE data products and OGC-compliant web services, and aims also to provide mobile platform
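As a hedged sketch of the kind of "on-the-fly" derived parameter mentioned above, the insolation clearness index (surface insolation divided by its top-of-atmosphere value) could be computed like this; the function name and sample values are illustrative, not the SSE portal's actual API:

```python
# Clearness index: ratio of surface insolation to top-of-atmosphere
# insolation for the same location and period (both in kWh/m^2/day).
# The numeric inputs below are invented example values.
def clearness_index(surface_kwh_m2, toa_kwh_m2):
    if toa_kwh_m2 <= 0:
        raise ValueError("top-of-atmosphere insolation must be positive")
    return surface_kwh_m2 / toa_kwh_m2

print(round(clearness_index(5.1, 8.5), 3))
```

Computing such ratios on request, rather than storing them, is what keeps the stored baseline small while still serving many derived quantities.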
Caters to an under-served niche market of small and medium-sized web consulting projects Eases people's project management pain Uses a clear, simple, and accessible style that eschews theory and hates walls of text
A database is a collection of many types of occurrences of logical records, containing relationships between records and elementary data aggregates. A database management system (DBMS) is a set of programs for creating and operating a database. Theoretically, any relational DBMS can be used to store the data needed by a Web server. In practice, it has been observed that simple DBMSs such as FoxPro or Access are not suitable for Web sites that are used intensively. For large-scale Web applications...
The web site is a library's most important feature. Patrons use the web site for numerous functions, such as renewing materials, placing holds, requesting information, and accessing databases. The homepage is the place they turn to look up the hours, branch locations, policies, and events. Whether users are at work, at home, in a building, or on…
胡罗凯; 陈旭; 柴新; 应时
A multi-ontology-system-based access control approach for semantic Web services is proposed. First, a bridge-ontology-based cross-domain multi-ontology system (CDMOS), which provides a semantic model for access control of semantic Web services, is presented based on distributed description logic (DDL). Secondly, on the basis of semantic access control technology, an access control model suited to semantic Web services is given. Finally, the architecture of the multi-ontology-based access control approach is designed and a case study of the approach is presented. In access control for semantic Web services, CDMOS not only provides semantic mappings between the semantic models of the security domains, but also preserves the privacy of each domain's semantic representations.
B.Hemanth kumar,; Prof. M.Surendra Prasad Babu
Accessing web resources (information) is an essential facility provided by web applications to everybody. The semantic web is one of the systems that provide a facility to access resources through web service applications. The semantic web and web services are newly emerging web-based technologies. An automatic information processing system can be developed by using the semantic web and web services, each having its own contribution within the context of developing web-based information systems and ap...
Babu, B. Ramesh; O'Brien, Ann
Discussion of Web-based online public access catalogs (OPACs) focuses on a review of six Web OPAC interfaces in use in academic libraries in the United Kingdom. Presents a checklist and guidelines of important features and functions that are currently available, including search strategies, access points, display, links, and layout. (Author/LRW)
Saba, Luca; Banchhor, Sumit K; Suri, Harman S; Londhe, Narendra D; Araki, Tadashi; Ikeda, Nobutaka; Viskovic, Klaudija; Shafique, Shoaib; Laird, John R; Gupta, Ajay; Nicolaides, Andrew; Suri, Jasjit S
. Statistical tests were performed to demonstrate the consistency, reliability and accuracy of the results. The proposed AtheroCloud™ system is a completely reliable, automated, fast (3-5 seconds depending on image size, at an internet speed of 180 Mbps), accurate and intelligent web-based clinical tool for multi-center clinical trials and routine telemedicine clinical care. PMID:27318571
In order for decision makers to efficiently make accurate decisions, pertinent information must be accessed easily and quickly. Component based architectures are suitable for creating today's three-tiered client-server systems. Experts in each particular field can develop each tier independently. The first tier can be built using HTML and web browsers. The middle tier can be implemented by using existing server side programming technologies that enable dynamic web page creation. The third tie...
A Content Standard for Computational Models; Digital Rights Management (DRM) Architectures; A Digital Object Approach to Interoperable Rights Management: Finely-Grained Policy Enforcement Enabled by a Digital Object Infrastructure; LOCKSS: A Permanent Web Publishing and Access System; Tapestry of Time and Terrain.
Hill, Linda L.; Crosier, Scott J.; Smith, Terrence R.; Goodchild, Michael; Iannella, Renato; Erickson, John S.; Reich, Vicky; Rosenthal, David S. H.
Includes five articles. Topics include requirements for a content standard to describe computational models; architectures for digital rights management systems; access control for digital information objects; LOCKSS (Lots of Copies Keep Stuff Safe) that allows libraries to run Web caches for specific journals; and a Web site from the U.S.…
Cömert, Çetin; Akıncı, Halil
Web services have emerged as the next generation of Web-based technology for interoperability. Web services are modular, self-describing, self-contained applications that are accessible over the Internet. Various communities that either produce or use Information and Communication Technologies are working on web services nowadays. There are already a number of software companies providing tools to develop and deploy Web Services. In the Web Services view, every different system or component o...
Himangni Rathore; Hemant Verma
The Web is a rich domain of data and knowledge, spread across the world in unstructured form, and a continuously growing number of users access information over the internet. Web mining is an application of data mining in which web-related data is extracted and manipulated to obtain knowledge. Data mining applied to web information is referred to as web mining, which is further divided into three major domains: web usage mining, web content mining and web stru...
Khushboo Khurana; M. B. Chandak
Large and continuously growing dynamic web content has created new opportunities for large-scale data analysis in recent years. There is a huge amount of information that traditional web crawlers cannot access, since they rely on link analysis, through which only the surface web can be reached. Traditional search engine crawlers require web pages to be linked to other pages via hyperlinks, causing a large amount of web data to be hidden from the crawlers. Enormous data is available in...
Department of Transportation — The AccessAML is a web-based internet single application designed to reduce the vulnerability associated with several accounts assigned to a single user. This is a...
Development of learning material that is distributed through and accessible via the World Wide Web. Various options from web technology are exploited to improve the quality and efficiency of learning material.
Manvi; Bhatia, Komal Kumar; Dixit, Ashutosh
Deep Web is content hidden behind HTML forms. Since it represents a large portion of the structured, unstructured and dynamic data on the Web, accessing Deep-Web content has been a long-standing challenge for the database community. This paper describes a crawler for accessing the Deep Web using ontologies. Performance evaluation of the proposed work showed that this new approach has promising results.
Cetl, V.; T. Kliment; Kliment, M.
The effective access and use of geospatial information (GI) resources is of critical importance in a modern knowledge-based society. Standard web services defined by the Open Geospatial Consortium (OGC) are frequently used within implementations of spatial data infrastructures (SDIs) to facilitate discovery and use of geospatial data. This data is stored in databases located in a layer called the invisible web, and is thus ignored by search engines. An SDI uses a catalogue (discovery) ...
乔磊; 潘松峰; 刘福乾
Hennig, Teresa; Hepworth, George; Yudovich, Dagi (Doug)
Authoritative and comprehensive coverage for building Access 2013 Solutions Access, the most popular database system in the world, just opened a new frontier in the Cloud. Access 2013 provides significant new features for building robust line-of-business solutions for web, client and integrated environments. This book was written by a team of Microsoft Access MVPs, with consulting and editing by Access experts, MVPs and members of the Microsoft Access team. It gives you the information and examples to expand your areas of expertise and immediately start to develop and upgrade projects. Exp
Want to know how to make your pages look beautiful, communicate your message effectively, guide visitors through your website with ease, and get everything approved by the accessibility and usability police at the same time? Head First Web Design is your ticket to mastering all of these complex topics, and understanding what's really going on in the world of web design. Whether you're building a personal blog or a corporate website, there's a lot more to web design than div's and CSS selectors, but what do you really need to know? With this book, you'll learn the secrets of designing effecti
SRD 69 NIST Chemistry WebBook (Web, free access) The NIST Chemistry WebBook contains: Thermochemical data for over 7000 organic and small inorganic compounds; thermochemistry data for over 8000 reactions; IR spectra for over 16,000 compounds; mass spectra for over 33,000 compounds; UV/Vis spectra for over 1600 compounds; electronic and vibrational spectra for over 5000 compounds; constants of diatomic molecules(spectroscopic data) for over 600 compounds; ion energetics data for over 16,000 compounds; thermophysical property data for 74 fluids.
The semantic Web is a technology at the service of knowledge, aimed at accessibility and the sharing of content and at facilitating interoperability between different systems; as such it is one of the nine key technological pillars of ICT (technologies for information and communication) within the third theme, the specific cooperation programme, of the Seventh Framework Programme for research and development (FP7, 2007-2013). As a system it seeks to overcome the overload of irrelevant information on the Internet, in order to facilitate specific and pertinent searches. It is an extension of the existing Web in which the aim is cooperation between computers and people (the dream of Sir Tim Berners-Lee), where machines give more support to people by integrating and elaborating data in order to obtain inferences and a global sharing of data. It is a technology able to favour the development of a "data web", in other words the creation of a space of interconnected and shared data sets (Linked Data), which allows users to link different types of data coming from different sources. It is a technology that will have a great effect on everyday life, since it will permit the planning of "intelligent applications" in various sectors such as education and training, research, business, public information, tourism, health and e-government. It is an innovative technology that activates a social transformation (the socio-semantic Web) on a world level, since it redefines the cognitive universe of users and enables the sharing not only of information but of meaning (collective and connected intelligence).
Griffith, J.A.; Egbert, S.L.
Remote sensing education is increasingly in demand across academic and professional disciplines. Meanwhile, Internet technology and the World Wide Web (WWW) are being more frequently employed as teaching tools in remote sensing and other disciplines. The current wealth of information on the Internet and World Wide Web must be distilled, nonetheless, to be useful in remote sensing education. An extensive literature base is developing on the WWW as a tool in education and in teaching remote sensing. This literature reveals benefits and limitations of the WWW, and can guide its implementation. Among the most beneficial aspects of the Web are increased access to remote sensing expertise regardless of geographic location, increased access to current material, and access to extensive archives of satellite imagery and aerial photography. As with other teaching innovations, using the WWW/Internet may well mean more work, not less, for teachers, at least at the stage of early adoption. Also, information posted on Web sites is not always accurate. Development stages of this technology range from on-line posting of syllabi and lecture notes to on-line laboratory exercises and animated landscape flyovers and on-line image processing. The advantages of WWW/Internet technology may likely outweigh the costs of implementing it as a teaching tool.
Klatt, Edward C.
Sandhyarani, Ramancha; Gyani, Jayadev; 10.5121/acij.2012.3205
This paper supports the concept of a community Web directory: a Web directory constructed according to the needs and interests of particular user communities. Furthermore, it presents a complete method for the construction of such directories using web usage data. User community models take the form of thematic hierarchies and are constructed by employing a clustering approach. We applied our methodology to the ODP directory and also to an artificial Web directory, generated by clustering Web pages that appear in the access log of an Internet Service Provider. For the discovery of the community models, we introduced a new criterion that combines the a priori thematic informativeness of the Web directory categories with the level of interest observed in the usage data. In this context, we introduced and evaluated a new clustering method. We tested the methodology using access log files collected from the proxy servers of an Internet Service Provider and provided results that ind...
K. F. Bharati
The traditional search engines available over the internet search relevant content dynamically over the web, but they have constraints, such as retrieving data from varied sources where relevancy is exceptional. Conventional web crawlers are designed to move only along specific paths of the web and are restricted from other paths, because those paths are secured or blocked out of concern about threats. It is possible to design a web crawler capable of penetrating paths of the web not reachable by traditional crawlers, in order to obtain better results in terms of data, time and relevancy for a given search query. This paper uses a new parser and indexer to propose a novel web crawler and a framework to support it. The proposed crawler is designed to attend Hyper Text Transfer Protocol Secure (HTTPS) based websites and web pages that require authentication to view and index. The user fills in a search form, and his or her credentials are used by the crawler to authenticate with the secure web server. Once indexed, the secure web server falls within the crawler's accessible zone.
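One way such a crawler might attach user-supplied credentials to a secure request is sketched below, using HTTP Basic authentication as an assumption (the paper's crawler may instead use form-based login); no network request is actually sent here:

```python
import base64
import urllib.request

# Build (but do not send) an HTTPS request carrying user credentials.
# The URL, username and password are placeholders for illustration.
def authed_request(url, username, password):
    req = urllib.request.Request(url)
    token = base64.b64encode(f"{username}:{password}".encode()).decode()
    req.add_header("Authorization", "Basic " + token)
    return req

req = authed_request("https://example.org/private/index.html", "alice", "s3cret")
print(req.get_header("Authorization"))
```

A real crawler would pass such a request to an opener, follow the authenticated pages, and feed the fetched content to its parser and indexer.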
ZHANG Guo-yin; GU Guo-chang; LI Jian-li
Backdoors or information leaks in Web servers can be detected by applying Web mining techniques to abnormal Web log and Web application log data, enhancing the security of Web servers and avoiding the damage of illegal access. Firstly, a system for discovering patterns of information leakage in CGI scripts from Web log data is proposed. Secondly, those patterns are provided to system administrators so they can modify their code and enhance their Web site security. Two aspects are described: one is combining the web application log with the web log to extract more information, so that web data mining can discover information that a firewall or Intrusion Detection System cannot find; the other is an operation module for the web site that enhances its security. For clustering server sessions, a density-based clustering technique is used to reduce resource cost and obtain better efficiency.
Itoh, Yuji; Urushihata, Toshiya; Sakuma, Toru; Ikemune, Sachiko; Tojo, Masanori; Miyake, Teruhisa; Takahashi, Hiroshi; Ohkoshi, Norio; Ishizuka, Kazushige; Ono, Tsukasa
This report describes a Web application intended for visually impaired users. Today hundreds of millions of people benefit from the Internet (or the World Wide Web), which is the greatest source of information in the world. The World Wide Web Consortium (W3C) has set guidelines for Web content accessibility, which allow visually impaired people to access and use Web contents. However, many Web sites do not yet follow these guidelines. Thus, we propose a Web application system that collect...
M. Tayfun Gülle
Departing from the idea that the internet, which has become a deep information tunnel, causes a problem in access to "accurate information", it is argued that societies are imprisoned within a world of "virtual reality" by web 2.0/web 3.0 technologies and social media applications. In order to diagnose this problem correctly, the media used from past to present for accessing information are briefly described as "social tools." Furthermore, it is emphasised, from an editorial viewpoint, that the means of reaching accurate information can be increased through the freedom of expression channel opened by "good librarianship" practices. The IFLA principles of freedom of expression and good librarianship are referred to at the end of the editorial.
L.K. Joshila Grace
Log files contain information about user name, IP address, time stamp, access request, number of bytes transferred, result status, referring URL and user agent. Log files are maintained by web servers, and analysing them gives a clear picture of the user. This paper gives a detailed discussion of these log files: their formats, creation, access procedures and uses, the various algorithms applied to them, and the additional parameters that can be included in log files to enable more effective mining. It also presents the idea of creating an extended log file and learning user behaviour.
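A minimal sketch of parsing the listed fields from one log line in the Combined Log Format (the sample line and regular expression are illustrative, not taken from the paper):

```python
import re

# Combined Log Format: IP, identd, user, [time], "request", status,
# bytes, "referrer", "user agent".
CLF = re.compile(
    r'(?P<ip>\S+) \S+ (?P<user>\S+) \[(?P<time>[^\]]+)\] '
    r'"(?P<request>[^"]*)" (?P<status>\d{3}) (?P<bytes>\d+|-) '
    r'"(?P<referrer>[^"]*)" "(?P<agent>[^"]*)"'
)

line = ('10.0.0.7 - alice [12/Mar/2024:09:15:02 +0000] '
        '"GET /index.html HTTP/1.1" 200 5120 '
        '"http://example.org/" "Mozilla/5.0"')

entry = CLF.match(line).groupdict()
print(entry["user"], entry["status"], entry["bytes"])
```

Each dictionary entry corresponds to one of the fields the abstract enumerates; an "extended log file" as proposed in the paper would add further named groups of the same kind.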
With the constant spread of internet access, the world of software is steadily transforming products into services delivered via web browsers. Modern next-generation web applications change the way browsers and users interact with servers. Many world-scale services have already been delivered by top companies as Single Page Applications. Moving services online demands close attention to data protection and web application security. Single Page Applications are exposed to server-s...
Haritsa, Jayant R.
Search engines are currently the standard medium for locating and accessing information on the Web. However, they may not scale to match the anticipated explosion of Web content, since they support only extremely coarse-grained queries and are based on centralized architectures. In this paper, we discuss how database technology can be successfully utilized to address the above problems. We also present the main features of a prototype Web database system called DIASPORA that we have developed ...
One of the application areas of data mining is the World Wide Web (WWW or Web), which serves as a huge, widely distributed, global information service for every kind of information such as news, advertisements, consumer information, financial management, education, government, e-commerce, health services, and many other information services. The Web also contains a rich and dynamic collection of hyperlink information, Web page access and usage information, providing sources for data mining. The amount of information on the Web is growing rapidly, as well as the number of Web sites and Web page
... the Government Printing Office's World Wide Web site (which can be found at http://www.access.gpo.gov... accessed electronically at the Government Printing Office's World Wide Web site (which can be found at...
Nearly a decade ago the author wrote in one of the first widely-cited academic articles, Educational Researcher, about the educational role of the web. He argued that educators must be able to demonstrate that the web (1) can increase access to learning, (2) must not result in higher costs for learning, and (3) can lead to improved learning. These…
Focuses on the Deep Web, defined as Web content in searchable databases of the type that can be found only by direct query. Discusses the problems of indexing; inability to find information not indexed in the search engine's database; and metasearch engines. Describes 10 sites created to access online databases or directly search them. Lists ways…
Deshpande, Yogesh; Murugesan, San; Ginige, Athula; Hansen, Steve; Schwabe, Daniel; Gaedke, Martin; White, Bebo
Web Engineering is the application of systematic, disciplined and quantifiable approaches to development, operation, and maintenance of Web-based applications. It is both a pro-active approach and a growing collection of theoretical and empirical research in Web application development. This paper gives an overview of Web Engineering by addressing the questions: a) why is it needed? b) what is its domain of operation? c) how does it help and what should it do to improve Web application develo...
Pro Access 2010 Development is a fundamental resource for developing business applications that take advantage of the features of Access 2010 and the many sources of data available to your business. In this book, you'll learn how to build database applications, create Web-based databases, develop macros and Visual Basic for Applications (VBA) tools for Access applications, integrate Access with SharePoint and other business systems, and much more. Using a practical, hands-on approach, this book will take you through all the facets of developing Access-based solutions, such as data modeling, co
Ulrich Fuller, Laurie
The easy guide to Microsoft Access returns with updates on the latest version! Microsoft Access allows you to store, organize, view, analyze, and share data; the new Access 2013 release enables you to build even more powerful, custom database solutions that integrate with the web and enterprise data sources. Access 2013 For Dummies covers all the new features of the latest version of Access and serves as an ideal reference, combining the latest Access features with the basics of building usable databases. You'll learn how to create an app from the Welcome screen, get support
Traditional call centers can be accessed via speech only, while web-based call centers provide both data and speech access but require a powerful terminal computer. By analyzing traditional and web-based call centers, this paper presents the framework of an advanced call center supporting WAP access. A typical service is also described in detail.
Fok, Chien Liang; Sun, Fei; Mangum, Matt; Mok, Al; He, Binghan; Sentis, Luis
The Cloud-based Advanced Robotics Laboratory (CARL) integrates a whole body controller and web-based teleoperation to enable any device with a web browser to access and control a humanoid robot. By integrating humanoid robots with the cloud, they are accessible from any Internet-connected device. Increased accessibility is important because few people have access to state-of-the-art humanoid robots, which limits their rate of development. CARL's implementation is based on modern software libraries...
Andrei Maciuca; Dan Popescu
The current paper proposes a smart web interface designed for monitoring the status of elderly people. There are four main user types in the web application: the administrator (who has full access to all the application's functionalities), the patient (who has access to his own personal data, such as parameter history and personal details), relatives of the patient (who have access to the person in care, access that is defined by the patient) and the medic (who can view ...
Archuleta, Christy-Ann M.; Eames, Deanna R.
The Rio Grande Civil Works and Restoration Projects Web Application, developed by the U.S. Geological Survey in cooperation with the U.S. Army Corps of Engineers (USACE) Albuquerque District, is designed to provide publicly available information through the Internet about civil works and restoration projects in the Rio Grande Basin. Since 1942, USACE Albuquerque District responsibilities have included building facilities for the U.S. Army and U.S. Air Force, providing flood protection, supplying water for power and public recreation, participating in fire remediation, protecting and restoring wetlands and other natural resources, and supporting other government agencies with engineering, contracting, and project management services. In the process of conducting this vast array of engineering work, the need arose for easily tracking the locations of and providing information about projects to stakeholders and the public. This fact sheet introduces a Web application developed to enable users to visualize locations and search for information about USACE (and some other Federal, State, and local) projects in the Rio Grande Basin in southern Colorado, New Mexico, and Texas.
Zaretzki, J.; Bergeron, C.; Huang, T.-W.;
RS-WebPredictor is the first freely accessible server that predicts the regioselectivity of the last six isozymes. Server execution time is fast, taking on average 2 s to encode a submitted molecule and 1 s to apply a given model, allowing for high-throughput use in lead optimization projects...
In this thesis, we investigate the path towards a focused web harvesting approach which can automatically and efficiently query websites, navigate through results, download data, store it and track data changes over time. Such an approach can also facilitate users to access a complete collection of
景帅; 王颖纯; 刘燕权
Surveys of research on website accessibility for people with disabilities contain more theoretical articles than empirical studies, and empirical data on top university library websites are lacking. This study therefore evaluates the library websites of the eight US Ivy League schools for accessibility by users with disabilities, to determine whether they comply with the accessibility standards established by the Americans with Disabilities Act (ADA) of 1990. Using the web accessibility evaluator WAVE and an email survey, the authors found that, among the sixteen websites selected from the eight universities, each site has good usability and operability, and all the library websites offer service information for people with disabilities and links to disability services; in particular, they support screen readers and other assistive technologies to enhance access for people with visual impairment. However, every website exhibits one or more of the six categories of guideline violations identified by WAVE. The most common problems are missing document language (44%), redundant links (69%), suspicious links (50%), and skipped navigation headings (44%).
上超望; 刘清堂; 赵呈领; 王艳凤; 杨琳
Business process access control is a difficult problem in Web services composition. Existing research ignores the dynamic interactivity and coordination of business process activities and cannot meet the demands of dynamic business process access control. This paper proposes a UCON-enhanced dynamic access control model for composite Web service business processes (WS-BPUCON). By separating roles and permissions, the model decouples the organization model from the process model and provides sufficient flexibility to implement dynamic, fine-grained access control based on authorization, obligation and condition constraints, using attribute information available in distributed open network environments; it is context-aware and supports fine-grained access management. The implementation architecture of WS-BPUCON is also described.
Seemann, Ernst Stefan; Menzel, Karl Peter; Backofen, Rolf;
The function of non-coding RNA genes largely depends on their secondary structure and the interaction with other molecules. Thus, an accurate prediction of secondary structure and RNA-RNA interaction is essential for the understanding of biological roles and pathways associated with a specific RN...... to interactive usage of the predictors. Additionally, the web servers provide direct access to annotated RNA alignments, such as the Rfam 10.0 database and multiple alignments of 16 vertebrate genomes with human. The web servers are freely available at: http://rth.dk/resources/petfold/...
Grier, Christopher L.
Web browsers are plagued with vulnerabilities, providing hackers with easy access to computer systems using browser-based attacks. Efforts that retrofit existing browsers have had limited success since modern browsers are not designed to withstand attack. To enable more secure web browsing, we design and implement new web browsers from the ground…
Hawkins, I.; Battle, R.; Miller-Bagwell, A.
We describe a partnership approach in use at UC Berkeley's Center for EUV Astrophysics (CEA) that facilitates the adaptation of astrophysics data and information---in particular from NASA's EUVE satellite---for use in the K--12 classroom. Our model is founded on a broad collaboration of personnel from research institutions, centers of informal science teaching, schools of education, and K--12 schools. Several CEA-led projects follow this model of collaboration and have yielded multimedia, Internet-based, lesson plans for grades 6 through 12 that are created and distributed on the World Wide Web (http://www.cea.berkeley.edu/Education). Use of technology in the classroom can foster an environment that more closely reflects the processes scientists use in doing research (Linn, diSessa, Pea, & Songer 1994, J.Sci.Ed.Tech., ``Can Research on Science Learning and Instruction Inform Standards for Science Education?"). For instance, scientists rely on technological tools to model, analyze, and ultimately store data. Linn et al. suggest introducing technological tools to students from the earliest years to facilitate scientific modeling, scientific collaborations, and electronic communications in the classroom. Our investigation aims to construct and evaluate a methodology for effective participation of scientists in K--12 education, thus facilitating fruitful interactions with teachers and other educators and increasing effective use of technology in the classroom. We describe several team-based strategies emerging from these project collaborations. These strategies are particular to the use of the Internet and World Wide Web as relatively new media for authoring K--12 curriculum materials. This research has been funded by NASA contract NAS5-29298, NASA grant ED-90033.01-94A to SSL/UCB, and NASA grants NAG5-2875 and NAGW-4174 to CEA/UCB.
A striking feature of web-building spiders is the use of silk to make webs, mainly for prey capture. However, building a web is energetically expensive and increases the risk of predation. To reduce such costs and still have access to abundant prey, some web-building spiders have evolved web invasion behaviour. In general, no consistent patterns of web invasion have emerged and the factors determining web invasion remain largely unexplored. Here we report web invasion among conspecifics in seven nocturnal species of orb-web spiders, and examined the factors determining the probability of webs that could be invaded and taken over by conspecifics. About 36% of webs were invaded by conspecifics, and 25% of invaded webs were taken over by the invaders. A web that was built higher and intercepted more prey was more likely to be invaded. Once a web was invaded, the smaller the size of the resident spider, the more likely its web would be taken over by the invader. This study suggests that web invasion, as a possible way of reducing costs, may be widespread in nocturnal orb-web spiders.
Huurdeman, H.C.; Ben David, A.; Samar, T.
Web archives provide access to snapshots of the Web of the past, and could be valuable for research purposes. However, access to these archives is often limited, both in terms of data availability, and interfaces to this data. This paper explores new methods to overcome these limitations. It present
Gabriel Fontanet Nadal
Accessible Tourism is a kind of tourism specially dedicated to disabled people. It refers to the removal of physical elements that hinder the mobility of disabled people at the destination. Accessible Tourism should take care of both physical and web accessibility. The web accessibility of a website is defined as its capability to be accessed by people with any kind of disability. Some organizations issue guidelines to improve web accessibility. An analysis of web accessibility in tourist websites is presented in this document.
Bush, Nigel E.; Bowen, Deborah J.; Jean Wooldridge; Abi Ludwig; Hendrika Meischke; Robert Robbins
Much is written about Internet access, Web access, Web site accessibility, and access to online health information. The term access has, however, a variety of meanings to authors in different contexts when applied to the Internet, the Web, and interactive health communication. We have summarized those varied uses and definitions and consolidated them into a framework that defines Internet and Web access issues for health researchers. We group issues into two categories: connectivity and human...
Predicting current and potential species distributions and abundance is critical for managing invasive species, preserving threatened and endangered species, and conserving native species and habitats. Accurate predictive models are needed at local, regional, and national scales to guide field surveys, improve monitoring, and set priorities for conservation and restoration. Modeling capabilities, however, are often limited by access to software and environmental data required for predictions. To address these needs, we built a comprehensive web-based system that: (1) maintains a large database of field data; (2) provides access to field data and a wealth of environmental data; (3) accesses values in rasters representing environmental characteristics; (4) runs statistical spatial models; and (5) creates maps that predict the potential species distribution. The system is available online at www.niiss.org, and provides web-based tools for stakeholders to create potential species distribution models and maps under current and future climate scenarios.
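A minimal sketch of the raster-lookup step such a system needs, assuming a regular row-major grid with a known origin and cell size; all names and the grid layout are illustrative assumptions, not the system's API:

```python
# Nearest-cell lookup in a regular raster: map a point (x, y) onto the
# grid cell that contains it, given the raster origin (x0, y0) and the
# cell size. The grid is a plain row-major list of lists.
def raster_value(grid, x, y, x0=0.0, y0=0.0, cell=1.0):
    col = int((x - x0) // cell)
    row = int((y - y0) // cell)
    return grid[row][col]

# A 2x2 raster covering a 20x20 unit extent with 10-unit cells.
elevation = [[1.0, 2.0],
             [3.0, 4.0]]
value = raster_value(elevation, 5, 15, 0, 0, 10)  # point in row 1, col 0
```

A real system would read the grid and its georeferencing from a raster file rather than hard-coding them.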
Webometrics is concerned with measuring aspects of the web: web sites, web pages, parts of web pages, words in web pages, hyperlinks, web search engine results. The importance of the web itself as a communication medium and for hosting an increasingly wide array of documents, from journal articles to holiday brochures, needs no introduction. Given this huge and easily accessible source of information, there are limitless possibilities for measuring or counting on a huge scale (e.g., the number of web sites, the number of web pages, the number of blogs) or on a smaller scale (e.g., the number o
Calì, Andrea; Martinenghi, D.; R. Torlone
The Deep Web is constituted by data accessible through Web pages, but not readily indexable by search engines, as they are returned in dynamic pages. In this paper we propose a framework for accessing Deep Web sources, represented as relational tables with so-called access limitations, with keyword-based queries. We formalize the notion of optimal answer and investigate methods for query processing. To our knowledge, this problem has never been studied in ...
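The access-limitation idea can be illustrated with a toy sketch: each source only answers calls that bind its input attribute, so a keyword query may have to be answered by chaining calls, feeding the outputs of one source into the inputs of the next. The schema below is hypothetical, not from the paper.

```python
# Source 1: callable only with an author binding; returns paper titles.
PAPERS_BY_AUTHOR = {
    "smith": ["Deep Web Access", "Keyword Search"],
}
# Source 2: callable only with a title binding; returns the venue.
VENUE_BY_TITLE = {
    "Deep Web Access": "VLDB",
    "Keyword Search": "SIGMOD",
}

def answer(keyword_author):
    """Chain accesses: bind the author, then bind each returned title."""
    results = []
    for title in PAPERS_BY_AUTHOR.get(keyword_author, []):
        venue = VENUE_BY_TITLE.get(title)
        if venue:
            results.append((keyword_author, title, venue))
    return results
```

Neither table can be scanned directly, which is exactly what distinguishes these sources from ordinary relations.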
Mining the web is defined as discovering knowledge from hypertext and the World Wide Web. The World Wide Web is one of the fastest-growing areas of intelligence gathering. Today there are billions of web pages and HTML archives accessible via the internet, and the number is still increasing. However, given the web's great diversity, retrieving interesting web-based content has become a very complex task. Because of the large amount of data heterogeneity, complex formats, high-dimensional data and the web's lack of structure, knowledge mining is a challenging task. This paper proposes a new framework for handling unstructured, complex data. This web knowledge mining approach puts forward an XML-based distributed data mining architecture. Building on research in web knowledge mining, XML is used to create well-structured data. The web knowledge mining framework attempts to extract useful knowledge from derived data, complex formats, and high-dimensional data obtained from users' interactions with the Web.
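As a minimal illustration of turning extracted fields into the well-structured XML such a framework relies on (the field names are illustrative assumptions, not the paper's schema):

```python
# Wrap a flat record of fields extracted from an unstructured page into a
# well-formed XML document using the standard library's ElementTree.
import xml.etree.ElementTree as ET

def to_xml(record):
    page = ET.Element("page")
    for key, value in record.items():
        child = ET.SubElement(page, key)  # one element per extracted field
        child.text = value
    return ET.tostring(page, encoding="unicode")

xml_doc = to_xml({"url": "http://example.org", "title": "Example"})
```

Once the data is in XML, standard tools (XPath, XSLT, schema validation) can operate on it regardless of where it was mined.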
G. P. Perrucci
This paper advocates a novel approach for mobile web browsing based on cooperation among wireless devices within close proximity operating in a cellular environment. In the actual state of the art, mobile phones can access the web using different cellular technologies. However, the supported data rates are not sufficient to cope with the ever increasing traffic requirements resulting from advanced and rich content services. Extending the state of the art, higher data rates can only be achieved by increasing complexity, cost, and energy consumption of mobile phones. In contrast to the linear extension of current technology, we propose a novel architecture where mobile phones are grouped together in clusters, using a short-range communication such as Bluetooth, sharing, and accumulating their cellular capacity. The accumulated data rate resulting from collaborative interactions over short-range links can then be used for cooperative mobile web browsing. By implementing the cooperative web browsing on commercial mobile phones, it will be shown that better performance is achieved in terms of increased data rate and therefore reduced access times, resulting in a significantly enhanced web browsing user experience on mobile phones.
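The capacity-accumulation idea can be sketched as splitting one download into byte ranges proportional to each phone's cellular data rate, so cluster members fetch parts in parallel and share them over the short-range link. This is a simplified model, not the authors' protocol:

```python
# Split a download of `total` bytes into HTTP-style byte ranges, one per
# cooperating phone, proportional to each phone's cellular data rate.
def split_ranges(total, rates):
    shares = [int(total * r / sum(rates)) for r in rates]
    shares[-1] += total - sum(shares)   # give the rounding remainder to the last node
    ranges, start = [], 0
    for s in shares:
        ranges.append((start, start + s - 1))
        start += s
    return ranges

# Three phones; the third has twice the data rate of the other two.
ranges = split_ranges(1000, [1, 1, 2])
```

In practice each range would become a `Range:` request header, and the aggregate rate approaches the sum of the members' cellular rates.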
This paper deals with the application of LiveConnect for the remote control of real devices/stations over the Web. In this context, both the concept of Lean Web Automation and a flexible Java-based application tool have been developed ensuring a fast and secure process data transfer between device-server and Web browser by the subscriber/publisher principle. Index Terms: Web-based remote control, Lean Web Automation, teletechnology, Web Access Kit.
AGRIS is the International System for Agricultural Science and Technology. It is supported by a large community of data providers, partners and users. AGRIS is a database that aggregates bibliographic data, and through this core data, related content across online information systems is retrieved by taking advantage of Semantic Web capabilities. AGRIS is a global public good and its vision is to be a responsive service to its user needs by facilitating contributions and feedback regarding the AGRIS core knowledgebase, AGRIS’s future and its continuous development. Periodic AGRIS e-consultations, partner meetings and user feedback are assimilated to the development of the AGRIS application and content coverage. This paper outlines the current AGRIS technical set-up, its network of partners, data providers and users as well as how AGRIS’s responsiveness to clients’ needs inspires the continuous technical development of the application. The paper concludes by providing a use case of how the AGRIS stakeholder input and the subsequent AGRIS e-consultation results influence the development of the AGRIS application, knowledgebase and service delivery.
The rapid development of the Internet has turned the World Wide Web (WWW) into a huge, distributed information space containing potentially valuable knowledge. Data mining discovers implicit regularities in large amounts of data, addresses data quality problems, and makes full use of useful data to help decision makers adjust strategies, reduce risk and make correct decisions; it is one of the most forward-looking technologies. Applied in a Web environment, data mining collects server log information, builds a Web log mining model, and analyses frequently accessed sequences of pages, providing decision support for website administrators and operators.
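A minimal sketch of the first step of such log mining, collecting per-page access counts from server log lines (the Common Log Format shown is an assumption):

```python
# Count page hits from web server access-log lines. The request string
# ('GET /path HTTP/1.1') is the quoted field in Common Log Format.
from collections import Counter

def page_counts(log_lines):
    counts = Counter()
    for line in log_lines:
        try:
            request = line.split('"')[1]    # e.g. 'GET /index.html HTTP/1.1'
            path = request.split()[1]
            counts[path] += 1
        except IndexError:
            continue                        # skip malformed lines
    return counts

logs = [
    '1.2.3.4 - - [10/Oct/2024:13:55:36] "GET /index.html HTTP/1.1" 200 2326',
    '1.2.3.5 - - [10/Oct/2024:13:55:40] "GET /about.html HTTP/1.1" 200 100',
    '1.2.3.4 - - [10/Oct/2024:13:56:01] "GET /index.html HTTP/1.1" 200 2326',
]
counts = page_counts(logs)
```

A full mining model would go on to group hits into per-visitor sessions before looking for frequent access sequences.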
Background: Obtaining the gene structure for a given protein encoding gene is an important step in many analyses. A software tool suited for this task should be readily accessible, accurate, easy to handle and should provide the user with a coherent representation of the most probable gene structure. It should be rigorous enough to optimise features on the level of single bases and at the same time flexible enough to allow for cross-species searches. Results: WebScipio, a web interface to the Scipio software, allows a user to obtain the coding sequence structure corresponding to a given query protein sequence that belongs to an already assembled eukaryotic genome. The resulting gene structure is presented in various human-readable formats, such as a schematic representation and a detailed alignment of the query and the target sequence highlighting any discrepancies. WebScipio can also be used to identify and characterise the gene structures of homologs in related organisms. In addition, it offers a web service for integration with other programs. Conclusion: WebScipio is a tool that allows users to get a high-quality gene structure prediction from a protein query. It offers more than 250 eukaryotic genomes that can be searched and produces predictions that are close to what can be achieved by manual annotation, for in-species and cross-species searches alike. WebScipio is freely accessible at http://www.webscipio.org.
杨镇雄; 蔡祖锐; 陈国华; 汤庸; 张龙
Open access (OA) journals are deep web resources dispersed across the Internet; traditional search engines cannot index them and so cannot satisfy users' needs for obtaining OA journal resources, which results in a waste of these open resources. To collect the open access journal resources scattered throughout the Internet, this paper proposes a distributed focused web crawler architecture for OA journals. The architecture adopts a master-slave distributed design, in which a master control centre coordinates multiple crawler nodes that can be added or removed dynamically, and introduces a method for extracting academic information from OA journal pages based on user-predefined rules. The crawler nodes use a Chrome browser plug-in mechanism to achieve scalability and deployment flexibility.
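A toy sketch of the two mechanisms described, round-robin dispatch of URLs from the master to crawler nodes and rule-based extraction with user-predefined patterns (the rules and URLs are illustrative, not the paper's):

```python
# User-predefined extraction rules: one named regular expression per field.
import re

RULES = {"title": re.compile(r"<title>(.*?)</title>", re.S)}

def extract(html, rules=RULES):
    """Apply each rule to the page; None where a rule finds no match."""
    return {name: (m.group(1) if (m := rx.search(html)) else None)
            for name, rx in rules.items()}

def assign(urls, n_nodes):
    """Master side: round-robin assignment of URLs to n crawler nodes."""
    return {i: urls[i::n_nodes] for i in range(n_nodes)}

fields = extract("<html><title>OA Journal</title></html>")
plan = assign(["u1", "u2", "u3"], 2)
```

A production crawler would render pages in the browser plug-in before extraction; the dispatch and rule-application logic stay the same shape.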
Sundaravel, A.; Wilkinson, D. C.
The Geostationary Operational Environmental Satellite-R Series (GOES-R) makes use of advanced instruments and technologies to monitor the Earth's surface and provide accurate space weather data. The first GOES-R series satellite is scheduled to be launched in 2015. The data from the satellite will be widely used by scientists for space weather modeling and predictions. This project looks into the ways these datasets can be made available to scientists on the Web and assist them in their research. We are working to develop a prototype web-based system that allows users to browse, search and download these data. The GOES-R datasets will be archived in NetCDF (Network Common Data Form) and CSV (Comma Separated Values) format. The NetCDF is a self-describing data format that contains both the metadata information and the data. The data is stored in an array-oriented fashion. The web-based system will offer services in two ways: via a web application (portal) and via web services. Using the web application, the users can download data in NetCDF or CSV format and can also plot a graph of the data. The web page displays the various categories of data and the time intervals for which the data is available. The web application (client) sends the user query to the server, which then connects to the data sources to retrieve the data and delivers it to the users. Data access will also be provided via SOAP (Simple Object Access Protocol) and REST (Representational State Transfer) web services. These provide functions which can be used by other applications to fetch data and use the data for further processing. To build the prototype system, we are making use of proxy data from existing GOES and POES space weather datasets. Java is the programming language used in developing tools that format data to NetCDF and CSV. For the web technology we have chosen Grails to develop both the web application and the services. Grails is an open source web application
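A small sketch of the CSV delivery path such a portal could use, serializing time-series records into the comma-separated form a client downloads (the column names are assumptions, not the GOES-R schema):

```python
# Serialize a list of record dicts to CSV text with a header row, using
# the standard library's csv module writing into an in-memory buffer.
import csv
import io

def to_csv(records, fields=("time", "flux")):
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=fields)
    writer.writeheader()
    writer.writerows(records)
    return buf.getvalue()

csv_text = to_csv([{"time": "2015-01-01T00:00Z", "flux": "1.2e-6"}])
```

A REST endpoint would return this text with a `text/csv` content type; the NetCDF path instead packages the same arrays with their metadata.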
With the widespread adoption of container technology in cloud computing, container engines represented by Docker have flourished in PaaS, and a large number of Docker-based cloud computing startups have appeared. However, because of the particular nature of container-based cloud platforms, containers are generally not assigned fixed IP addresses, so users cannot directly access and control the containers on the platform, which makes operations such as adding custom services inconvenient. ContainerSSh is a solution designed specifically to address this problem.
The University of Arizona Artificial Intelligence Lab (AI Lab) Dark Web project is a long-term scientific research program that aims to study and understand the international terrorism (Jihadist) phenomena via a computational, data-centric approach. We aim to collect "ALL" web content generated by international terrorist groups, including web sites, forums, chat rooms, blogs, social networking sites, videos, virtual world, etc. We have developed various multilingual data mining, text mining, and web mining techniques to perform link analysis, content analysis, web metrics (technical
Fuertes Castro, José Luis; Pérez Pérez, Aurora
Many websites have a significant accessibility problem, since their design has not taken into account the great functional diversity of their potential users. The Web Content Accessibility Guidelines, developed by the Web Consortium, consist of a series of recommendations so that a web page can be used by anyone. One of the main problems arises when checking the accessibility of a web page, given that...
Hall, Wendy; Tiropanis, Thanassis
This paper examines the evolution of the World Wide Web as a network of networks and discusses the emergence of Web Science as an interdisciplinary area that can provide us with insights on how the Web developed, and how it has affected and is affected by society. Through its different stages of evolution, the Web has gradually changed from a technological network of documents to a network where documents, data, people and organisations are interlinked in various and often unexpected ways. It...
Yoshida, Catherine E; Kruczkiewicz, Peter; Laing, Chad R; Lingohr, Erika J; Gannon, Victor P J; Nash, John H E; Taboada, Eduardo N
-typing allows for continuity with historical serotyping data as we transition towards the increasing adoption of genomic analyses in epidemiology. The SISTR platform is freely available on the web at https://lfz.corefacility.ca/sistr-app/.
Online public access catalogs of Mercosur in a web environment: characteristics of the UBACYT F054 Project
Elsa E. Barber
The theoretical-methodological aspects of the research project UBACYT F054 (Universidad de Buenos Aires Scientific and Technical Program, 2004-2007) are outlined. Online public access catalogs (OPACs) in a web environment in national, academic, public and special libraries of the Mercosur countries are analyzed. Aspects related to operational control, search formulation, access points, output control and user assistance are studied. The project aims, both quantitatively and qualitatively, to make a situation diagnosis valid for the catalogs of the region. It also offers a comparative study in order to discern the existing tendencies on the subject in similar libraries in Argentina, Brazil, Paraguay and Uruguay.
Claudia Elena DINUCA
Web servers worldwide generate a vast amount of information on web users’ browsing activities. Several researchers have studied these so-called clickstream or web access log data to better understand and characterize web users. Clickstream data can be enriched with information about the content of visited pages and the origin (e.g., geographic, organizational) of the requests. The goal of this project is to analyse user behaviour by mining enriched web access log data. With the continued growth and proliferation of e-commerce, Web services, and Web-based information systems, the volumes of clickstream and user data collected by Web-based organizations in their daily operations has reached astronomical proportions. This information can be exploited in various ways, such as enhancing the effectiveness of websites or developing directed web marketing campaigns. The discovered patterns are usually represented as collections of pages, objects, or resources that are frequently accessed by groups of users with common needs or interests. The focus of this paper is to provide an overview of how to use frequent pattern techniques for discovering different types of patterns in a Web log database. In this paper we will focus on finding association as a data mining technique to extract potentially useful knowledge from web usage data. I implemented in Java, using NetBeans IDE, a program for identification of pages’ association from sessions. For exemplification, we used the log files from a commercial web site.
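A minimal sketch of the association-mining step, counting page pairs that co-occur in enough sessions; this is a simplified stand-in for the Java implementation described, with illustrative session data:

```python
# Find page pairs that co-occur in at least `min_support` sessions --
# the frequent-itemset core of association mining on web usage data.
from collections import Counter
from itertools import combinations

def frequent_pairs(sessions, min_support=2):
    counts = Counter()
    for session in sessions:
        # deduplicate and sort so each unordered pair is counted once
        for pair in combinations(sorted(set(session)), 2):
            counts[pair] += 1
    return {p: c for p, c in counts.items() if c >= min_support}

sessions = [["/home", "/cart"], ["/home", "/cart", "/pay"], ["/home"]]
pairs = frequent_pairs(sessions)
```

From these supported pairs, association rules such as "/home → /cart" follow by dividing pair support by single-page support to get confidence.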
V. Lakshmi Praba; T. Vasantha
The World-Wide Web provides every internet citizen with access to an abundance of information, but it becomes increasingly difficult to identify the relevant pieces of information. Research in web mining tries to address this problem by applying techniques from data mining and machine learning to web data and documents. Web content mining and web structure mining have important roles in identifying the relevant web page. Relevancy of web page denotes how well a retrieved web page or set of we...
Earth science information is important to decision makers who formulate public policy related to mineral resource sustainability, land stewardship, environmental hazards, the economy, and public health. To meet the growing demand for easily accessible data, the Mineral Resources Program has developed, in cooperation with other Federal and State agencies, an Internet-based, data-delivery system that allows interested customers worldwide to download accurate, up-to-date mineral resource-related data at any time. All data in the system are spatially located, and customers with Internet access and a modern Web browser can easily produce maps having user-defined overlays for any region of interest.
A Hybrid Evaluation System Framework (Shell & Web) with Standardized Access to Climate Model Data and Verification Tools for a Clear Climate Science Infrastructure on Big Data High Performance Computers
Kadow, Christopher; Illing, Sebastian; Kunst, Oliver; Ulbrich, Uwe; Cubasch, Ulrich
The project 'Integrated Data and Evaluation System for Decadal Scale Prediction' (INTEGRATION), part of the German decadal prediction project MiKlip, develops a central evaluation system. The fully operational hybrid system features HPC shell access and a user-friendly web interface. It employs one common system with a variety of verification tools and validation data from different projects inside and outside of MiKlip. The evaluation system is located at the German Climate Computing Centre (DKRZ) and has direct access to the bulk of its ESGF node, including millions of climate model data sets, e.g. from CMIP5 and CORDEX. The database is organized by the international CMOR standard, using the meta information of the self-describing model, reanalysis and observational data sets. Apache Solr is used for indexing the different data projects into one common search environment. This metadata system, with its advanced but easy-to-handle search tool, supports users, developers and their tools in retrieving the required information. A generic application programming interface (API) allows scientific developers to connect their analysis tools with the evaluation system independently of the programming language used. Users of the evaluation techniques benefit from the common interface of the evaluation system without any need to understand the different scripting languages. Facilitating the provision and usage of tools and climate data automatically increases the number of scientists working with the data sets and identifying discrepancies. Additionally, the history and configuration sub-system stores every analysis performed with the evaluation system in a MySQL database. Configurations and results of the tools can be shared among scientists via the shell or the web system; plugged-in tools therefore gain automatically in transparency and reproducibility. Furthermore, when configurations match while starting an evaluation tool, the system suggests using results already produced.
The rapid growth of the web and its lack of structure or an integrated schema create various problems for users accessing information. All user accesses to web information are saved in the corresponding server log files, and these files can serve as a resource for finding patterns in user behavior. Web mining is a subset of data mining that mines related data from the WWW; depending on which part of the data is mined, it is categorized into three parts: web content mining, web structure mining and web usage mining. A technique is needed that can learn users' interests and, based on those interests, automatically filter out unrelated content or offer related information to the user in a reasonable amount of time. Web usage mining builds a profile of users in order to recognize them and is directly related to web personalization. The primary objective of personalization systems is to provide what users require without asking them explicitly. In addition, formal models make it possible to model system behavior; Petri nets and queueing nets, as examples of such models, can analyze user behavior on the web. The primary objective of this paper is to present a colored Petri net that models users' interactions in order to offer them a list of recommended pages. Estimating user behavior is applied in cases such as offering suitable pages for continued browsing, e-commerce and targeted advertising. Preliminary results indicate that the proposed method improves the accuracy criterion by 8.3% compared with the static method.
The increasing availability and popularity of computer systems has resulted in a demand for new, language- and platform-independent ways of data exchange. That demand has in turn led to a significant growth in the importance of systems based on Web services. Alongside the growing number of systems accessible via Web services came the need for specialized data repositories that could offer effective means of searching available services. The development of mobile systems and wireless data transmission technologies has allowed the use of distributed devices and computer systems on a greater scale. The accelerating growth of distributed systems might be a good reason to consider the development of distributed Web service repositories with built-in mechanisms for data migration and synchronization.
When designing for e-learning, the objective is to design for learning, i.e. the technology supporting the learning activity should aid and support the learning process and be an arena where learning is likely to occur. To achieve this when designing e-learning for the workplace, the author argues that it is important to have knowledge of how users actually access and use e-learning systems. In order to gain this knowledge, web logs from a web lecture developed for a Scandinavian public body have been analyzed. During a period of two and a half months, 15 learners visited the web lecture 74 times. The web lecture consisted of streaming video with exercises and additional links to resources on the WWW to provide an opportunity to investigate the topic from multiple perspectives; it took approximately one hour to finish. Using web usage mining for the analysis, seven groups or interaction patterns emerged: peaking, one go, partial order, partial unordered, single module, mixed modules, and non-video modules. Furthermore, the web logs paint a picture of the learning activities being interrupted. This suggests that modules need to be fine-grained (e.g. less than 8 minutes per video clip) so learners do not waste time watching parts of a video clip while waiting for the part of interest to appear, or having to fast forward. A clear and logical structure is also important to help learners find their way back quickly and accurately.
Thuraisingham, Bhavani; Clifton, Chris; Gupta, Amar; Bertino, Elisa; Ferrari, Elena
This paper provides directions for web and e-commerce applications security. In particular, access control policies, workflow security, XML security and federated database security issues pertaining to the web and ecommerce applications are discussed.
Jain, Ratnesh Kumar; Kasana, Dr. R. S.; Jain, Dr. Suresh
The World Wide Web is a huge data repository and is growing at the explosive rate of about 1 million pages a day. As the information available on the World Wide Web grows, the usage of web sites is also growing. The web log records each access to a web page, and the number of entries in web logs is increasing rapidly. These web logs, when mined properly, can provide useful information for decision-making. The designer of the web site, analysts and management executives are interested in extracti...
Background This study aims to rank policy concerns and policy-related research issues in order to identify policy and research gaps on access to medicines (ATM) in low- and middle-income countries in Latin America and the Caribbean (LAC), as perceived by policy makers, researchers, NGO and international organization representatives, as part of a global prioritization exercise. Methods Data collection, conducted between January and May 2011, involved face-to-face interviews in El Salvador, Colombia, Dominican Republic, and Suriname, and an e-mail survey with key-stakeholders. Respondents were asked to choose the five most relevant criteria for research prioritization and to score policy/research items according to the degree to which they represented current policies, desired policies, current research topics, and/or desired research topics. Mean scores and summary rankings were obtained. Linear regressions were performed to contrast rankings concerning current and desired policies (policy gaps), and current and desired research (research gaps). Results Relevance, feasibility, and research utilization were the top ranked criteria for prioritizing research. Technical capacity, research and development for new drugs, and responsiveness, were the main policy gaps. Quality assurance, staff technical capacity, price regulation, out-of-pocket payments, and cost containment policies, were the main research gaps. There was high level of coherence between current and desired policies: coefficients of determination (R2) varied from 0.46 (Health system structure; r = 0.68, P <0.01) to 0.86 (Sustainable financing; r = 0.93, P <0.01). There was also high coherence between current and desired research on Rational selection and use of medicines (r = 0.71, P <0.05, R2 = 0.51), Pricing/affordability (r = 0.82, P <0.01, R2 = 0.67), and Sustainable financing (r = 0.76, P <0.01, R2 = 0.58). Coherence was less for Health system structure (r = 0.61, P <0.01, R2 = 0.38). Conclusions This
In this thesis, we investigate the path towards a focused web harvesting approach which can automatically and efficiently query websites, navigate through results, download data, store it and track data changes over time. Such an approach can also let users access a complete collection of data relevant to their topics of interest and monitor it over time. To realize such a harvester, we focus on the following obstacles. First, we try to find methods that can achieve the best coverag...
L.Saoudi; A.Boukerram; S.Mhamedi
Traditional search engines deal with the Surface Web, the set of Web pages directly accessible through hyperlinks, and ignore a large part of the Web called the hidden Web: a great amount of valuable information in online databases that is "hidden" behind query forms. To access this information a crawler has to fill the forms with valid data; for this reason we propose a new approach which uses an SQLI technique in order to find the most promising keywords of a specific dom...
Problem statement: In the internet era, web sites are a useful source of information for almost every activity, and the World Wide Web is developing rapidly in its volume of traffic and in the size and complexity of web sites. Web mining is the application of data mining, artificial intelligence, chart technology and so on to web data; it traces users' visiting behaviors and extracts their interests using patterns. Because of its direct application in e-commerce, web analytics, e-learning and information retrieval, web mining has become one of the important areas in computer and information science. Several techniques such as web usage mining exist, but each has its own disadvantages. This study focuses on providing techniques for better data cleaning and transaction identification from the web log. Approach: Log data is usually noisy and ambiguous, and preprocessing is an important step for an efficient mining process. In preprocessing, the data cleaning step includes removal of records for graphics, videos and format information, records with failed HTTP status codes, and robot requests. Sessions are reconstructed and paths are completed by appending missing pages. Transactions that depict the behavior of users are also constructed accurately by calculating the reference lengths of user accesses, taking the byte rate into account. Results: When the number of records is considered, for example, for 1000 records only 350 records remain after data cleaning. When execution time is considered, the initial log takes 119 seconds to process, whereas only 52 seconds are required by the proposed technique. Conclusion: The experimental results show the performance of the proposed algorithm; comparatively, it gives good results for web usage mining compared to existing approaches.
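A toy version of the cleaning step outlined above might look as follows. The record layout (URL, status code, user agent) and the filtering rules are simplifications for illustration, not the paper's exact procedure:

```python
import re

# Requests for graphics/media and format files are dropped, as are failed
# HTTP status codes and robot traffic (hypothetical, simplified rules).
MEDIA = re.compile(r"\.(gif|jpe?g|png|css|js|mp4)$", re.IGNORECASE)

def clean_log(records):
    """records: iterable of (url, status, agent) tuples; returns kept records."""
    kept = []
    for url, status, agent in records:
        if MEDIA.search(url):
            continue  # graphics, video and format information
        if not (200 <= status < 300):
            continue  # failed HTTP status codes
        if "bot" in agent.lower() or url == "/robots.txt":
            continue  # robot cleaning
        kept.append((url, status, agent))
    return kept

raw = [
    ("/index.html", 200, "Mozilla/5.0"),
    ("/logo.png", 200, "Mozilla/5.0"),
    ("/missing.html", 404, "Mozilla/5.0"),
    ("/robots.txt", 200, "Googlebot/2.1"),
    ("/page2.html", 200, "Mozilla/5.0"),
]
cleaned = clean_log(raw)
```

On this sample, cleaning keeps only the two successful, non-media, human page views, mirroring the paper's observation that cleaning removes most raw records.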
Even after many years of study, students make frequent mistakes in English. In addition, many students still need a long time to think of what they want to say. For some reason, in spite of all the studying, students are still not quite fluent. When I teach, I use one technique that helps students speak not only more accurately, but also more fluently. That technique is dictation.
Goodrich, John W.
Two families of finite difference algorithms for computational aeroacoustics are presented and compared. All of the algorithms are single-step explicit methods; they have the same order of accuracy in both space and time, with examples up to eleventh order, and they have multidimensional extensions. One of the algorithm families has spectral-like high resolution. Propagation with high-order, high-resolution algorithms can produce accurate results after O(10^6) periods of propagation with eight grid points per wavelength.
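For a flavor of high-order spatial differencing, here is a standard fourth-order central stencil on a periodic grid; this is a generic illustration, not the paper's specific single-step schemes:

```python
import math

# Fourth-order central difference for du/dx on a periodic grid:
# u'(x_i) ~ (u[i-2] - 8 u[i-1] + 8 u[i+1] - u[i+2]) / (12 dx)
def dudx4(u, dx):
    n = len(u)
    return [(-u[(i + 2) % n] + 8 * u[(i + 1) % n]
             - 8 * u[(i - 1) % n] + u[(i - 2) % n]) / (12 * dx)
            for i in range(n)]

n = 64
dx = 2 * math.pi / n
xs = [i * dx for i in range(n)]
u = [math.sin(x) for x in xs]
# Error against the exact derivative cos(x) shrinks as dx**4.
err = max(abs(d - math.cos(x)) for d, x in zip(dudx4(u, dx), xs))
```

Even at eight grid points per wavelength the truncation error of such stencils is small, which is why higher-order variants can sustain very long propagation distances.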
Web-based self-access English learning and in-class English teaching, the two main ways in which college students learn English, each have their advantages and deficiencies. To make them better serve non-English majors' English learning, this paper investigates their present situations, integrates their advantages and then proposes feasible solutions that combine the complementary advantages of the two methods.
Perhaps the most significant tool of our internet age is the web search engine, providing a powerful interface for accessing the vast amount of information available on the world wide web and beyond. While still in its infancy compared to the knowledge tools that precede it - such as the dictionary or encyclopedia - the impact of web search engines on society and culture has already received considerable attention from a variety of academic disciplines and perspectives. This article aims to organize a meta-discipline of “web search studies,” centered around a nucleus of major research on web search engines from five key perspectives: technical foundations and evaluations; transaction log analyses; user studies; political, ethical, and cultural critiques; and legal and policy analyses.
Falquet, Laurent; Bordoli, Lorenza; Ioannidis, Vassilios; Pagni, Marco; Jongeneel, C. Victor
EMBnet is a consortium of collaborating bioinformatics groups located mainly within Europe (http://www.embnet.org). Each member country is represented by a ‘node’, a group responsible for the maintenance of local services for their users (e.g. education, training, software, database distribution, technical support, helpdesk). Among these services a web portal with links and access to locally developed and maintained software is essential and different for each node. Our web portal targets bio...
Solomon, David J.
Web-based surveying is becoming widely used in social science and educational research. The Web offers significant advantages over more traditional survey techniques; however, there are still serious methodological challenges with this approach. Currently, coverage bias, i.e. the fact that significant numbers of people do not have access to, or choose not to use, the Internet, is of most concern to researchers. Survey researchers also have much to learn concerning the most effective ways to conduct s...
Discusses efforts by the Federal Depository Library Program to make information accessible more or mostly by electronic means. Topics include Web-based locator tools; collection development; digital archives; bibliographic metadata; and access tools and user interfaces. (Author/LRW)
Cetl, V.; Kliment, T.; Kliment, M.
The effective access and use of geospatial information (GI) resources is of critical importance in a modern knowledge-based society. Standard web services defined by the Open Geospatial Consortium (OGC) are frequently used within implementations of spatial data infrastructures (SDIs) to facilitate discovery and use of geospatial data. This data is stored in databases located in a layer called the invisible web, and is thus ignored by search engines. An SDI uses a catalogue (discovery) service for the web as a gateway to the GI world through metadata defined by ISO standards, which are structurally diverse from OGC metadata. Therefore, a crosswalk needs to be implemented to bridge the OGC resources discovered on the mainstream web with those documented by metadata in an SDI, to enrich its information extent. A public, global and user-friendly portal of OGC resources available on the web ensures and enhances the use of GI within a multidisciplinary context, bridges the geospatial web from the end-user perspective, and thus opens its borders to everybody. The project "Crosswalking the layers of geospatial information resources to enable a borderless geospatial web", with the acronym BOLEGWEB, is ongoing as a postdoctoral research project at the Faculty of Geodesy, University of Zagreb, Croatia (http://bolegweb.geof.unizg.hr/). The research leading to the results of the project has received funding from the European Union Seventh Framework Programme (FP7 2007-2013) under Marie Curie FP7-PEOPLE-2011-COFUND. The project started in November 2014 and is planned to be finished by the end of 2016. This paper provides an overview of the project, its research questions and methodology, the results achieved so far, and future steps.
Ramesh, C; Govardhan, A
With the rapid growth of internet technologies, the Web has become a huge repository of information and keeps growing exponentially under no editorial control, while the human capability to read, access and understand Web content remains constant. This has motivated researchers to provide personalized online Web services, such as Web recommendations, to alleviate the information overload problem and provide tailored Web experiences to users. Recent studies show that Web usage mining has emerged as a popular approach to Web personalization. However, conventional Web-usage-based recommender systems are limited in their ability to use the domain knowledge of the Web application; the focus is only on Web usage data, and as a consequence the quality of the discovered patterns is low. In this paper, we propose a novel framework integrating semantic information into the Web usage mining process. A sequential pattern mining technique is applied over the semantic space to discover the frequent sequential patterns. Th...
Thorlund Jepsen, Erik; Seiden, Piet; Ingwersen, Peter Emil Rerup
Because of the increasing presence of scientific publications on the Web, combined with the existing difficulties in easily verifying and retrieving these publications, research on techniques and methods for retrieval of scientific Web publications is called for. In this article, we report on the initial steps taken toward the construction of a test collection of scientific Web publications within the subject domain of plant biology. The steps reported are those of data gathering and data analysis aiming at identifying characteristics of scientific Web publications. The data used in this article were generated based on specifically selected domain topics that are searched for in three publicly accessible search engines (Google, AllTheWeb, and AltaVista). A sample of the retrieved hits was analyzed with regard to how various publication attributes correlated with the scientific quality...
In today's high-tech environment, every organization and individual computer user uses the internet to access web data, and secure web solutions are required to maintain high confidentiality and security of that data. In this paper we describe dedicated anonymous web browsing solutions which make browsing faster and more secure. Web applications that play an important role in transferring secret information, such as email, need more and more security attention. This paper also describes how to choose safe web hosting solutions and which main functions provide more security for server data. Along with browser security, network security is also important; it can be implemented using cryptographic solutions, VPNs and firewalls on the network. Hackers always try to steal identities and data, tracking user activities through network application software and performing harmful activities, so this paper also describes how to monitor them for security purposes.
With the constant spread of internet access, the world of software is steadily transforming products into services delivered via web browsers. Modern next-generation web applications change the way browsers and users interact with servers, and many world-scale services have already been delivered by top companies as Single Page Applications. Moving services online demands close attention to data protection and web application security. Single Page Applications are exposed to server-side web application security issues in a new way, and having application logic executed by an untrusted client environment requires close attention to client application security. Single Page Applications are vulnerable to the same security threats as server-side web applications, and are thus no more secure. Defending techniques can be readily adapted to guard against such attacks.
Snell, James L; Kulchenko, Pavel
The web services architecture provides a new way to think about and implement application-to-application integration and interoperability that makes the development platform irrelevant. Two applications, regardless of operating system, programming language, or any other technical implementation detail, communicate using XML messages over open Internet protocols such as HTTP or SMTP. The Simple Object Access Protocol (SOAP) is a specification that details how to encode that information and has become the messaging protocol of choice for Web services. Programming Web Services with SOAP is a detail...
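A minimal illustration of the SOAP idea: wrapping an XML payload in a SOAP 1.1 Envelope/Body and parsing it back. The `GetQuote`/`symbol` operation names are invented for the example; the namespace URI is the standard SOAP 1.1 envelope namespace:

```python
from xml.etree import ElementTree as ET

SOAP_NS = "http://schemas.xmlsoap.org/soap/envelope/"

def soap_envelope(body_xml: str) -> str:
    """Wrap an XML payload in a SOAP 1.1 Envelope/Body."""
    return (
        f'<soap:Envelope xmlns:soap="{SOAP_NS}">'
        f"<soap:Body>{body_xml}</soap:Body>"
        "</soap:Envelope>"
    )

# A hypothetical request message, e.g. sent over HTTP POST to a service.
msg = soap_envelope("<GetQuote><symbol>IBM</symbol></GetQuote>")

# The receiver parses the envelope and extracts the application payload.
root = ET.fromstring(msg)
body = root.find(f"{{{SOAP_NS}}}Body")
```

Because the envelope is plain XML, any platform that can parse XML and speak HTTP can participate, which is the interoperability point the abstract makes.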
Efficient access to data, sharing data, extracting information from data, and making use of the information have become urgent needs for today's corporations. With so much data on the Web, managing it with conventional tools is becoming almost impossible. New tools and techniques are necessary to provide interoperability as well as warehousing between multiple data sources and systems, and to extract information from the databases. XML Databases and the Semantic Web focuses on critical and new Web technologies needed for organizations to carry out transactions on the Web, to understand how to use the Web effectively, and to exchange complex documents on the Web. This reference for database administrators, database designers, and Web designers working in tandem with database technologists covers three emerging technologies of significant impact for electronic business: the Extensible Markup Language (XML), semi-structured databases, and the semantic Web. The first two parts of the book explore these emerging techn...
LIU Wei; LI Xian; LING Yanyan; ZHANG Xiaoyu; MENG Xiaofeng
With the rapid development of the Web, more and more Web databases are available for users to access. At the same time, job seekers often have difficulty first finding the right sources and then querying over them, so an integrated job search system over Web databases has become a Web application in high demand. Based on this consideration, we build a deep Web data integration system that supports unified access to multiple job Web sites, acting as a job meta-search engine. In this paper, the architecture of the system is given first, and then the key components of the system are introduced.
In this thesis we developed a prototype robot which can be controlled by the user via a web interface and is accessible through a web browser. The web interface updates sensor data and streams video captured with the web-cam mounted on the robot in real time. A Raspberry Pi computer runs the back-end code of the thesis; the general-purpose input-output header on the Raspberry Pi communicates with the motor driver and sensors. A wireless dongle and web-cam connected through USB ensure wireless communication and vid...
Web testing is the name given to software testing that focuses on web applications. Issues include the security of the web application, the basic functionality of the site, its accessibility to handicapped and fully able users, as well as readiness for expected traffic and numbers of users and the ability to survive a massive spike in user traffic, the last two of which relate to load testing. In this paper, tools, challenges and methods of web testing are discussed, which will help in handling some challenges during website development. This paper presents best methods for testing a web application.
On the World Wide Web, various kinds of content are generated in huge amounts, so web recommendation has become an important part of web applications in order to give relevant results to users. Different kinds of web recommendations are made available to users every day, including images, video, audio, query suggestions and web pages. In this paper we aim to provide a framework for web page recommendation: (1) first we describe the basics of web mining and the types of web mining; (2) then the details of each web mining technique; (3) finally we propose the architecture for personalized web page recommendation.
Semantic Web Services for Web Databases introduces an end-to-end framework for querying Web databases using novel Web service querying techniques. This includes a detailed framework for the query infrastructure for Web databases and services. Case studies are covered in the last section of this book. Semantic Web Services For Web Databases is designed for practitioners and researchers focused on service-oriented computing and Web databases.
Rocco, D; Liu, L; Critchlow, T
Dynamic Web data sources--sometimes known collectively as the Deep Web--increase the utility of the Web by providing intuitive access to data repositories anywhere that Web access is available. Deep Web services provide access to real-time information, like entertainment event listings, or present a Web interface to large databases or other data repositories. Recent studies suggest that the size and growth rate of the dynamic Web greatly exceed that of the static Web, yet dynamic content is often ignored by existing search engine indexers owing to the technical challenges that arise when attempting to search the Deep Web. To address these challenges, we present DynaBot, a service-centric crawler for discovering and clustering Deep Web sources offering dynamic content. DynaBot has three unique characteristics. First, DynaBot utilizes a service class model of the Web implemented through the construction of service class descriptions (SCDs). Second, DynaBot employs a modular, self-tuning system architecture for focused crawling of the Deep Web using service class descriptions. Third, DynaBot incorporates methods and algorithms for efficient probing of the Deep Web and for discovering and clustering Deep Web sources and services through SCD-based service matching analysis. Our experimental results demonstrate the effectiveness of the service class discovery, probing, and matching algorithms and suggest techniques for efficiently managing service discovery in the face of the immense scale of the Deep Web.
Through an empirical analysis of web citations in Chinese library and information science journals, this paper finds that the proportion of citations in .html format is decreasing year by year, while the proportions of .pdf format and dynamic web citations are gradually rising; academic information on wikis, blogs, forums and other new web forms is increasingly accepted by Chinese library and information science scholars; the accessibility of dynamic web citations is slightly higher than that of static web citations, though the rates for both lie between 50% and 51%; and web citations in the .edu domain have the worst accessibility. The related reasons are analyzed.
Traditional call centers can be accessed via speech only, while a web-based call center provides both data and speech access but needs a powerful terminal computer. By analyzing traditional call centers and web-based call centers, this paper presents the framework of an advanced call center supporting WAP access. A typical service is also described in detail.
Ai-Bo Song; Mao-Xian Zhao; Zuo-Peng Liang; Yi-Sheng Dong; Jun-Zhou Luo
With the growing popularity of the World Wide Web, large volumes of user access data have been gathered automatically by Web servers and stored in Web logs. Discovering and understanding user behavior patterns from log files can provide Web personalized recommendation services. In this paper, a novel clustering method for log files is presented, called Clustering large Weblog based on Key Path Model (CWKPM), which is based on a user browsing key path model, to get user behavior profiles. Compared with the previous Boolean model, the key path model considers the major features of users' access to the Web: ordinal, contiguous and duplicate. Moreover, for clustering, it has fewer dimensions. The analysis and experiments show that CWKPM is an efficient and effective approach for clustering large and high-dimension Web logs.
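The key path model treats a session as an ordered page sequence in which repeats matter. One simple order-aware similarity between two sessions, sketched here via longest common subsequence, illustrates the idea; it is an illustration, not the authors' exact measure:

```python
def lcs_len(p, q):
    """Length of the longest common subsequence of two page paths."""
    dp = [[0] * (len(q) + 1) for _ in range(len(p) + 1)]
    for i, a in enumerate(p, 1):
        for j, b in enumerate(q, 1):
            dp[i][j] = dp[i - 1][j - 1] + 1 if a == b else max(dp[i - 1][j],
                                                               dp[i][j - 1])
    return dp[len(p)][len(q)]

def path_similarity(p, q):
    """Order-preserving similarity between two sessions, in [0, 1]."""
    return lcs_len(p, q) / max(len(p), len(q))

# Hypothetical sessions: page labels are placeholders.
s = path_similarity(["A", "B", "C", "B"], ["A", "C", "B"])
```

Unlike a Boolean page-set model, this measure rewards shared ordering and keeps duplicate visits, which is the distinction the abstract draws.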
Cohen, Andrew; Vitányi, Paul
Normalized web distance (NWD) is a similarity or normalized semantic distance based on the World Wide Web or any other large electronic database, for instance Wikipedia, and a search engine that returns reliable aggregate page counts. For sets of search terms the NWD gives a similarity on a scale from 0 (identical) to 1 (completely different). The NWD approximates the similarity according to all (upper semi)computable properties. We develop the theory and give applications. The derivation of ...
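The NWD formula itself is short; a direct transcription from aggregate page counts (the counts below are invented for illustration):

```python
from math import log

def nwd(fx, fy, fxy, n):
    """Normalized web distance from aggregate page counts:
    NWD(x, y) = (max(log fx, log fy) - log fxy) / (log n - min(log fx, log fy))
    where fx, fy are hit counts for the two terms, fxy the joint count,
    and n the total number of pages indexed.
    """
    lx, ly, lxy, ln = log(fx), log(fy), log(fxy), log(n)
    return (max(lx, ly) - lxy) / (ln - min(lx, ly))

# Hypothetical counts from a search engine indexing ~10^6 pages.
d_same = nwd(1000, 1000, 1000, 10**6)  # terms always co-occur -> distance 0
d_far = nwd(1000, 1000, 1, 10**6)      # terms almost never co-occur -> near 1
```

The scale matches the abstract: 0 for effectively identical terms, approaching 1 for terms that share almost no pages.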
Large and continuously growing dynamic web content has created new opportunities for large-scale data analysis in recent years. There is a huge amount of information that traditional web crawlers cannot access, since they use a link analysis technique by which only the surface web can be accessed. Traditional search engine crawlers require web pages to be linked to other pages via hyperlinks, causing a large amount of web data to be hidden from the crawlers. Enormous data available in the deep web can be useful for gaining new insight in various domains, creating the need to access deep-web information by developing efficient techniques. As the amount of Web content grows rapidly, the types of data sources are proliferating, and they often provide heterogeneous data. So we need to select the Deep Web data sources that can be used by integration systems. The paper discusses various techniques that can be used to surface deep-web information, and techniques for Deep Web source selection.
U.S. Department of Health & Human Services — A search-based Web service that provides access to disease, condition and wellness information via MedlinePlus health topic data in XML format. The service accepts...
From "blogs" to "wikis", the Web is now more than a mere repository of information. Martin Griffiths investigates how this new interactivity is affecting the way physicists communicate and access information. (5 pages)
Li, R.; Shen, Y.; Huang, W.; Wu, H.
Computer Science This thesis examines methods for accessing information stored in a relational database from a Web Page. The stateless and connectionless nature of the Web's Hypertext Transport Protocol as well as the open nature of the Internet Protocol pose problems in the areas of database concurrency, security, speed, and performance. We examined the Common Gateway Interface, Server API, Oracle's Web/database architecture, and the Java Database Connectivity interface in terms of p...
Karreman, Joyce; Geest, van der Thea; Buursink, Esmee
Background: The W3C Web Accessibility Initiative has issued guidelines for making websites better and easier to access for people with various disabilities (W3C Web Accessibility Initiative guidelines 1999). - Method: The usability of two versions of a website (a non-adapted site and a site that wa
... accessed electronically at the Government Printing Office's World Wide Web site (which can be found at http://www.access.gpo.gov/su_docs). Some records are maintained under government-wide systems of records...
Jain, Ratnesh Kumar; Jain, Dr Suresh
The World Wide Web is a huge data repository and is growing at the explosive rate of about 1 million pages a day. As the information available on the World Wide Web grows, the usage of web sites also grows. A web log records each access to a web page, and the number of entries in web logs is increasing rapidly. These web logs, when mined properly, can provide useful information for decision-making. The designers of web sites, analysts and management executives are interested in extracting this hidden information from web logs. The web access pattern, a frequently used sequence of accesses, is one of the important pieces of information that can be mined from web logs. This information can be used to gather business intelligence to improve sales and advertisement, to personalize for a user, to analyze system performance and to improve the organization of the web site. Many techniques exist to mine access patterns from web logs. This paper describes the powerful algorithm that mine...
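A minimal version of access-pattern mining, counting contiguous page sequences across sessions and keeping those above a support threshold, might look like the sketch below. This is a simplified stand-in for the paper's algorithm, and the sessions are invented:

```python
from collections import Counter

def frequent_patterns(sessions, length=2, min_support=2):
    """Count contiguous page sequences of the given length across
    sessions and keep those meeting a minimum support threshold
    (a toy substitute for full sequential-pattern mining)."""
    counts = Counter()
    for session in sessions:
        for i in range(len(session) - length + 1):
            counts[tuple(session[i:i + length])] += 1
    return {pattern: c for pattern, c in counts.items() if c >= min_support}

sessions = [
    ["/home", "/catalog", "/item", "/cart"],
    ["/home", "/catalog", "/item"],
    ["/catalog", "/item", "/cart"],
]
print(frequent_patterns(sessions))  # the pair ("/catalog", "/item") appears in all 3 sessions
```

Real algorithms (e.g. WAP-tree style miners) avoid enumerating every window, but the support-counting idea is the same.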
Meulenhoff, P.J.; Ostendorf, D.R.; Živković, M.; Meeuwissen, H.B.; Gijsen, B.M.M.
In this paper, we analyze overload control for composite web services in service oriented architectures by an orchestrating broker, and propose two practical access control rules which effectively mitigate the effects of severe overloads at some web services in the composite service. These two rules
Herman, I.; Gylling, M.
Although using advanced Web technologies at their core, e-books represent a parallel universe to everyday Web documents. Their production workflows, user interfaces, their security, access, or privacy models, etc, are all distinct. There is a lack of a vision on how to unify Digital Publishing and t
SRD 40 NDRL/NIST Solution Kinetics Database on the Web (Web, free access) Data for free radical processes involving primary radicals from water, inorganic radicals and carbon-centered radicals in solution, and singlet oxygen and organic peroxyl radicals in various solvents.
Web Archives of ATLAS Plenary Sessions, Workshops, Meetings, and Tutorials recorded over the past two years are available via the University of Michigan portal here. Most recent additions include the ROOT Workshop held at CERN on March 26-27, the Physics Analysis Tools Workshop held in Bergen, Norway on April 23-27, and the CTEQ Workshop: "Physics at the LHC: Early Challenges" held at Michigan State University on May 14-15. Viewing requires a standard web browser with the RealPlayer plug-in (included in most browsers automatically) and works on any major platform. Lectures can be viewed directly over the web or downloaded locally. In addition, you will find access to a variety of general tutorials and events via the portal. Suggestions for events or tutorials to record in 2007, as well as feedback on existing archives, are always welcome. Please contact us at email@example.com. Thank you and enjoy the lectures! The Michigan Web Lecture Team Tushar Bhatnagar, Steven Goldfarb, Jeremy Herr, Mitch McLachlan, Homer A....
上超望; 刘清堂; 赵呈领; 童名文
Business process access control is a difficult problem in securing composite web services. Considering the deficiencies of current research, an Activity Authorization Based Dynamic Access Control Model for BPEL4WS (AACBP) is proposed. By decoupling the organization model from the business process model, AACBP uses activity authorization as the basic unit for enforcing BPEL4WS access control. Through activity instances, the model implements fine-grained access control over activities and synchronizes authorization with business process execution. Finally, the paper also describes the implementation architecture of the AACBP model for secure web service composition.
Dokas, I.M.; Alapetite, Alexandre
Similar to many legacy computer systems, expert systems can be accessed via the Web, forming a set of Web applications known as Web based expert systems. The tough competition on the Web, the way people and organizations rely on Web applications, and the increasing user requirements for better services have raised their complexity. Unfortunately, there is so far no clear answer to the question: how may the methods and experience of Web engineering and expert systems be combined and applied in order to develop effective and successful Web based expert systems? In an attempt to answer this question, a development process meta-model for Web based expert systems will be presented. Based on this meta-model, a publicly available Web based expert system called Landfill Operation Management Advisor (LOMA) was developed. In addition, the results of an accessibility evaluation on LOMA – the first ever reported...
Surfing for Data: A Gathering Trend in Data Storage Is the Use of Web-Based Applications that Make It Easy for Authorized Users to Access Hosted Server Content with Just a Computing Device and Browser
Technology & Learning, 2005
In recent years, the widespread availability of networks and the flexibility of Web browsers have shifted the industry from a client-server model to a Web-based one. In the client-server model of computing, clients run applications locally, with the servers managing storage, printing functions, and network traffic. Because every client is…
Spink, Amanda; Koricich, Andrew; Jansen, B J; Cole, Charles
Sexual information seeking is an important element within human information behavior. Seeking sexually related information on the Internet takes many forms and channels, including chat room discussions, accessing Websites or searching Web search engines for sexual materials. The study of sexual Web queries provides insight into sexually related information-seeking behavior, of value to Web users and providers alike. We qualitatively analyzed queries from logs of 1,025,910 Alta Vista and AlltheWeb.com Web user queries from 2001. We compared the differences in sexually related Web searching between Alta Vista and AlltheWeb.com users. Differences were found in session duration, query outcomes, and search term choices. Implications of the findings for sexual information seeking are discussed.
Web application vulnerabilities allow attackers to perform malicious actions that range from gaining unauthorized account access to obtaining sensitive data. The number of reported web application vulnerabilities has increased dramatically over the last decade. Most vulnerabilities result from improper input validation and sanitization; the most important of these are SQL injection (SQLI), Cross-Site Scripting (XSS) and Buffer Overflow (BOF). In order to address these vulnerabilities we designed and developed WAPTT (Web Application Penetration Testing Tool). Unlike other web application penetration testing tools, this tool is modular and can easily be extended by the end user. In order to improve the efficiency of SQLI vulnerability detection, WAPTT uses an efficient algorithm for page similarity detection. The proposed tool showed promising results compared to six well-known web application scanners in detecting various web application vulnerabilities.
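The abstract does not specify WAPTT's page similarity algorithm; one common approach for comparing server responses during SQLI probing is Jaccard similarity over word shingles, sketched below (the function names and the 3-word shingle size are our own choices):

```python
def shingles(text, k=3):
    """Set of k-word shingles of a page's visible text."""
    words = text.split()
    return {tuple(words[i:i + k]) for i in range(len(words) - k + 1)}

def page_similarity(page_a, page_b, k=3):
    """Jaccard similarity of shingle sets, in [0, 1]. A score near 1.0
    means the injected request produced essentially the same page as
    the benign one; a low score signals a behavioral difference."""
    a, b = shingles(page_a, k), shingles(page_b, k)
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)

# Identical pages score 1.0 (the sample text is invented):
print(page_similarity("You have an error in your SQL syntax",
                      "You have an error in your SQL syntax"))  # 1.0
```

A scanner would compare the response to a benign request against responses to injected payloads and flag pages whose similarity falls below a threshold.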
Pazos Arias, José J; Díaz Redondo, Rebeca P
The recommendation of products, content and services cannot be considered newly born, although its widespread application is still in full swing. Despite its growing success in numerous sectors, the progress of the Social Web has revolutionized the architecture of participation and relationship in the Web, making it necessary to restate recommendation and reconcile it with Collaborative Tagging, as the popularization of authoring in the Web, and Social Networking, as the translation of personal relationships to the Web. Precisely, the convergence of recommendation with the above Social Web pillars is what motivates this book, which has collected contributions from well-known experts in academia and industry to provide a broader view of the problems that Social Recommenders might face. If recommender systems have proven their key role in facilitating user access to resources on the Web, when sharing resources has become social, it is natural for recommendation strategies in the Social Web...
Trupti B. Mane; Prof. Girish P. Potdar
The World Wide Web (WWW) is getting a lot of attention as it is becoming a huge repository of information. A web page gets deployed on a website by its web template system. Those templates can be used by any individual or organization to set up their website, and they provide readers ease of access to content guided by consistent structures. Hence template detection techniques are emerging as Web templates become more and more important. Earlier systems assume that all documents are guaranteed to conform to a common template, and template extraction is done under that assumption. However, this is not feasible in real applications. Our focus is on extracting templates from heterogeneous web pages. Due to the large variety of web documents, there is a need to manage an unknown number of templates. This can be achieved by clustering web documents with a good partition method. The correctness of the extracted templates depends on the quality of clustering
The Internet offers multiple solutions to link companies with their partners, customers or suppliers using IT solutions, with a special focus on Web services. Web services are able to solve problems related to the exchange of data between business partners, markets that can use each other's services, and incompatibility between IT applications. As web services are described, discovered and accessed programmatically based on XML vocabularies and Web protocols, web services represent Web-based technology solutions for small and medium-sized enterprises (SMEs). This paper presents a web service framework for economic applications. A prototype of this IT solution using web services was also presented and implemented in a few companies from the IT, commerce and consulting fields, measuring the impact of the solution on business development.
... offers a search-based Web service that provides access to MedlinePlus health topic data in XML format. ... of Medicine" for English and "Biblioteca Nacional de Medicina" for Spanish.
The National Academy Press is the publisher for the National Academy of Sciences, the National Academy of Engineering, the Institute of Medicine, and the National Research Council. Through this web site, you have access to a virtual treasure trove of books, reports and publicatio...
Falquet, Laurent; Bordoli, Lorenza; Ioannidis, Vassilios; Pagni, Marco; Jongeneel, C Victor
EMBnet is a consortium of collaborating bioinformatics groups located mainly within Europe (http://www.embnet.org). Each member country is represented by a 'node', a group responsible for the maintenance of local services for their users (e.g. education, training, software, database distribution, technical support, helpdesk). Among these services a web portal with links and access to locally developed and maintained software is essential and different for each node. Our web portal targets biomedical scientists in Switzerland and elsewhere, offering them access to a collection of important sequence analysis tools mirrored from other sites or developed locally. We describe here the Swiss EMBnet node web site (http://www.ch.embnet.org), which presents a number of original services not available anywhere else.
Vidya S. Dandagi
Semantic Web is a system that allows machines to understand complex human requests and to reply according to their meaning. Semantics is the study of the meaning of linguistic expressions and is a main branch of contemporary linguistics; it concerns the meaning of words, texts or phrases and the relations between them. RDF provides essential support to the Semantic Web: it was created to represent distributed information, applications can create and process RDF in an adaptive manner, and knowledge represented with RDF standards is machine understandable. This paper describes the creation of a semantic web using RDF, and the retrieval of accurate results using the SPARQL query language.
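The RDF-plus-SPARQL workflow the paper describes can be illustrated in miniature. The sketch below hand-rolls triple storage and a SPARQL-like single-pattern match rather than using a real RDF library; the `ex:` names are invented for illustration, and a production system would use something like rdflib with actual SPARQL:

```python
# RDF models knowledge as (subject, predicate, object) triples.
triples = {
    ("ex:Dandagi", "ex:wrote", "ex:Paper1"),
    ("ex:Paper1", "ex:topic", "ex:SemanticWeb"),
    ("ex:Paper1", "ex:usesLanguage", "ex:SPARQL"),
}

def match(pattern):
    """Match one triple pattern against the store; None plays the
    role of a SPARQL variable."""
    s, p, o = pattern
    return [t for t in triples
            if (s is None or t[0] == s)
            and (p is None or t[1] == p)
            and (o is None or t[2] == o)]

# Roughly "SELECT ?p ?o WHERE { ex:Paper1 ?p ?o }" in miniature:
for triple in sorted(match(("ex:Paper1", None, None))):
    print(triple)
```

Real SPARQL engines additionally join multiple patterns and bind variables, but pattern matching over triples is the core operation.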
Greeshma G. Vijayan
As the Internet continues to grow in size and popularity, web traffic and network bottlenecks are major issues in the network world. The continued increase in demand for objects on the Internet causes severe overloading of many sites and network links. Many users have no patience to wait more than a few seconds for a web page to download. Web traffic reduction techniques are necessary for accessing web sites efficiently with existing network facilities. Web pre-fetching and web caching techniques reduce the web latency that we face on the internet today. This paper describes various prefetching and caching techniques, how they predict the web objects to be pre-fetched, and what issues and challenges are involved when these techniques are applied in a mobile environment
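The interplay of caching and prefetching can be sketched with a toy first-order model: serve pages from an LRU cache and, after each request, prefetch the page most often seen next in the history so far. This is an illustration under simplifying assumptions, not one of the techniques surveyed in the paper:

```python
from collections import OrderedDict, Counter

class PrefetchingCache:
    """Toy LRU web cache with a first-order prefetch rule: after
    serving page P, prefetch the page most often requested right
    after P in the request history observed so far."""

    def __init__(self, capacity=3):
        self.cache = OrderedDict()   # page -> True, in LRU order
        self.capacity = capacity
        self.transitions = {}        # page -> Counter of next pages
        self.last = None

    def _store(self, page):
        self.cache[page] = True
        self.cache.move_to_end(page)
        while len(self.cache) > self.capacity:
            self.cache.popitem(last=False)   # evict least recently used

    def request(self, page):
        hit = page in self.cache
        self._store(page)
        if self.last is not None:            # learn the transition
            self.transitions.setdefault(self.last, Counter())[page] += 1
        successor = self.transitions.get(page)
        if successor:                        # prefetch likely next page
            self._store(successor.most_common(1)[0][0])
        self.last = page
        return hit
```

After seeing the pattern a→b once, a later request for "a" pulls "b" into the cache ahead of time, turning the next request for "b" into a hit.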
Sharples, Mike; Kloos, Carlos Delgado; Dimitriadis, Yannis; Garlatti, Serge; Specht, Marcus
Many modern web-based systems provide a "responsive" design that allows material and services to be accessed on mobile and desktop devices, with the aim of providing "ubiquitous access." Besides offering access to learning materials such as podcasts and videos across multiple locations, mobile, wearable and ubiquitous…
The Web browser has become one of the major application interfaces for remotely participating in magnetic fusion. Web portals are used to present very diverse sources of information in a unified way. While a web portal has several benefits over other software interfaces, such as providing a single point of access for multiple computational services and eliminating the need for client software installation, the design and development of a web portal has unique challenges. One challenge is that a web portal needs to be fast and interactive despite the high volume of tools and information that it presents. Another challenge is that the visual output on the web portal is often overwhelming due to the high volume of data generated by complex scientific instruments and experiments; therefore the applications and information should be customizable depending on the needs of users. An appropriate software architecture and web technologies can address these problems. A web portal has been designed to support the experimental activities of DIII-D researchers worldwide. It utilizes a multi-tier software architecture and web 2.0 technologies, such as AJAX, Django, and Memcached, to provide a highly interactive and customizable user interface. It offers a customizable interface with personalized page layouts and a list of services for users to select. Customizable services include real-time experiment status monitoring, diagnostic data access, and interactive data visualization. The web portal also supports interactive collaboration by providing a collaborative logbook, shared visualization and online instant messaging services. Furthermore, the web portal will provide a mechanism to allow users to create their own applications on the web portal, as well as bridging capabilities to external applications such as Twitter and other social networks. In this series of slides, we describe the software architecture of this scientific web portal and our experiences in utilizing web 2.0 technologies. A
Sprimont, P.-G.; Ricci, D.; Nicastro, L.
Subhra Prosun Paul
The Web is a progressively more important resource in many aspects of life: education, employment, government, commerce, healthcare, recreation, and more. It is essential that the web be accessible to people with disabilities, with equal access and equal opportunity for all. An accessible web can also help people with disabilities contribute more actively to society. This paper concentrates on two things: first, it briefly examines accessibility guidelines, evaluation methods and analysis tools; second, it analyzes and evaluates the web accessibility of e-Government websites of Bangladesh according to the W3C Web Content Accessibility Guidelines. We also present recommendations for improving the accessibility of e-Government websites in Bangladesh.
Describes database-driven Web pages that dynamically display different information each time the page is accessed in response to the user's needs. Highlights include information management; online assignments; grade tracking; updating Web pages; creating database-driven Web pages; and examples of how they have been used for a high school physics…
The surprising growth of the Internet, coupled with the rapid development of Web techniques and the growing emergence of web information systems and applications, is bringing great opportunities and big challenges to us. Since the Web provides cross-platform universal access to resources for a massive user population, there is ever greater demand to manage data and services effectively.
Thomas, David A.; Li, Qing
The World Wide Web is evolving in response to users who demand faster and more efficient access to information, portability, and reusability of digital objects between Web-based and computer-based applications and powerful communication, publication, collaboration, and teaching and learning tools. This article reviews current uses of Web-based…
A web spider is an automated program or script that independently crawls websites on the internet. At the same time, its job is to pinpoint and extract desired data from websites. The data is then saved in a database and is later used for different purposes. Some spiders download websites which are then saved into large repositories, while others search for more specific data, such as emails or phone numbers. The most well known and most important application of web crawlers is crawling ...
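The core parsing step of such a spider, extracting links to feed the crawl frontier, can be sketched with the Python standard library. Fetching, politeness (robots.txt, rate limits) and storage are omitted, and the URLs are hypothetical:

```python
from html.parser import HTMLParser
from urllib.parse import urljoin

class LinkExtractor(HTMLParser):
    """Collect absolute href targets from anchor tags -- the parsing
    half of a web spider. A real crawler would download each page
    (e.g. with urllib), queue the extracted links, and deduplicate
    already-visited URLs."""

    def __init__(self, base_url):
        super().__init__()
        self.base_url = base_url
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    # Resolve relative links against the page URL.
                    self.links.append(urljoin(self.base_url, value))

html = '<p><a href="/about">About</a> <a href="https://example.org/x">X</a></p>'
parser = LinkExtractor("https://example.com/")
parser.feed(html)
print(parser.links)  # ['https://example.com/about', 'https://example.org/x']
```

The breadth-first loop of a crawler is then: pop a URL from the frontier, fetch it, run the extractor, and push unseen links back onto the frontier.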
Science plays a crucial role in modern society, and the popularization of science in its electronic form is closely related to the rise and development of the World Wide Web. Since the 1990s, when the Web was introduced as part of the Internet, science popularization has become more and more involved in the web-based society. The Web has therefore become an important technical support for science popularization. On the one hand, the Web has increased the accessibility, visibili...
Lumb, P D; Rutty, G N
The chosen subject for this month's review is toxicology and covers sites touching upon prescription and illicit drugs, analytical techniques and poisonous plants. It highlights a problem common to users of the Internet: as more and more people log on and put their web sites up for public access, searching for a single, comprehensive, all-encompassing site becomes almost impossible. Many sites are repetitive or purely personal adverts. Unless you are recommended a site or are prepared to wade your way through all the junk, you will never find the 'El Dorado' you are seeking. PMID:15274978
Nigel E. Bush
Full Text Available Much is written about Internet access, Web access, Web site accessibility, and access to online health information. The term access has, however, a variety of meanings to authors in different contexts when applied to the Internet, the Web, and interactive health communication. We have summarized those varied uses and definitions and consolidated them into a framework that defines Internet and Web access issues for health researchers. We group issues into two categories: connectivity and human interface. Our focus is to conceptualize access as a multicomponent issue that can either reduce or enhance the public health utility of electronic communications.
Kishore, T Krishna; Narayana, N Lakshmi
Most web users' requirements are search or navigation time and getting correctly matched results. These constraints can be satisfied with some additional modules attached to existing search engines and web servers. This paper proposes a powerful architecture for search engines with the title Probabilistic Semantic Web Mining, named after the methods used. With the increase of larger and larger collections of various data resources on the World Wide Web (WWW), Web Mining has become one of the most important requirements for web users. Web servers store various formats of data including text, image, audio and video, but servers cannot identify the contents of the data. These search techniques can be improved by adding special techniques, including semantic web mining and probabilistic analysis, to get more accurate results. Semantic web mining can provide meaningful search of data resources by eliminating useless information during the mining process. In this technique web servers...
Bell, Hudson; Tang, Nelson K. H.
A user survey of 60 company Web sites (electronic commerce, entertainment and leisure, financial and banking services, information services, retailing and travel, and tourism) determined that 30% had facilities for conducting online transactions and only 7% charged for site access. Overall, Web sites were rated high in ease of access, content, and…
Brescia, Massimo; Esposito, Francesco; Fiore, Michelangelo; Garofalo, Mauro; Guglielmo, Magda; Longo, Giuseppe; Manna, Francesco; Nocella, Alfonso; Vellucci, Civita
Astronomy is undergoing a methodological revolution triggered by an unprecedented wealth of complex and accurate data. DAMEWARE (DAta Mining & Exploration Web Application and REsource) is a general purpose, Web-based, Virtual Observatory compliant, distributed data mining framework specialized in the exploration of massive data sets with machine learning methods. It allows the scientific community to perform data mining and exploratory experiments on massive data sets using a simple web browser. DAMEWARE offers several tools that can be seen as working environments in which to choose data analysis functionalities such as clustering, classification, regression and feature extraction, together with models and algorithms.
Kuppusamy, K S; Aghila, G
Though the World Wide Web is the single largest source of information, it is ill-equipped to serve people with vision related problems. With the prolific increase in interest in making the web accessible to all sections of society, solving this accessibility problem becomes mandatory. This paper presents a technique for making web pages accessible for people with low vision issues. A model for making web pages accessible, WILI (Web Interface for people with Low-vision Issues), has been proposed. The approach followed in this work is to automatically replace the existing display style of a web page with a new skin following the guidelines given in the Clear Print booklet provided by the Royal National Institute of Blind People. A "single click solution" is one of the primary advantages provided by WILI. A prototype using the WILI model is implemented and various experiments are conducted. The results of experiments conducted on WILI indicate an 82% effective conversion rate.
Madhavan, J.; Afanasiev, L.; Antova, L.; Halevy, A.
Over the past few years, we have built a system that has exposed large volumes of Deep-Web content to Google.com users. The content that our system exposes contributes to more than 1000 search queries per-second and spans over 50 languages and hundreds of domains. The Deep Web has long been acknowledged to be a major source of structured data on the web, and hence accessing Deep-Web content has long been a problem of interest in the data management community. In this paper, we report on where...
Web usage mining performs mining on web usage data or web logs. It is now possible to perform data mining on web log records collected from the web page history. A web log is a listing of page reference (click stream) data, and the behavior of web page readers is imprinted in the web server log files. By looking at the sequence of pages a user accesses, a user profile can be developed, aiding in personalization. With personalization, web access or the contents of a web page are modified to better fit the desires of the user. Identifying the browsing behavior of the user can also improve system performance, enhance the quality and delivery of Internet information services to the end user, and identify the population of potential customers. With clustering, the desires are determined based on similarities. In this study, a Fuzzy clustering algorithm is designed and implemented: meaningful behavior patterns are extracted by applying an efficient Fuzzy clustering algorithm to log data. It is shown that the performance of the proposed system is better than that of the existing best algorithm. The proposed Fuzzy clustering w-miner algorithm can provide popular information to web page visitors.
Currently, computers are changing from single, isolated devices into entry points to a worldwide network of information exchange and business transactions called the World Wide Web (WWW). However, the success of the WWW has made it increasingly difficult to find, access, present and maintain the information required by a wide variety of users. In response to this problem, many new research initiatives and commercial enterprises have been set up to enrich the available information with machine-processable semantics. This Semantic Web will provide intelligent access to heterogeneous, distributed information, enabling software products (agents) to mediate between user needs and the information sources available. In this paper we describe some areas for application of this new technology. We focus on ongoing work in the fields of knowledge management and electronic commerce. We also take a perspective on semantic web-enabled web services, which will help bring the semantic web to its full potential.
Traditional search engines deal with the Surface Web, the set of Web pages directly accessible through hyperlinks, and ignore a large part of the Web called the hidden Web: the great amount of valuable information in online databases that is "hidden" behind query forms. To access that information a crawler has to fill the forms with valid data. For this reason we propose a new approach which uses an SQLI technique to find the most promising keywords of a specific domain for automatic form submission. The effectiveness of the proposed framework has been evaluated through experiments using real web sites, and encouraging preliminary results were obtained.
The importance of accessibility of digital e-learning resources is widely acknowledged. The World Wide Web Consortium's Web Accessibility Initiative has played a leading role in promoting the importance of accessibility and developing guidelines that can help when developing accessible web resources. The accessibility of e-learning resources presents additional challenges: while it is important to consider the technical and resource-related aspects of e-learning when designing and developing resources for students with disabilities, there is a need to consider pedagogic and contextual issues as well. A holistic framework is therefore proposed and described which, in addition to accessibility issues, takes into account learner needs, learning outcomes, local factors, infrastructure, usability and quality assurance. The practical application and implementation of this framework is discussed and illustrated through the use of examples and case studies.
Nielsen, Jens Munk
Food webs are structured by intricate nodes of species interactions which govern the flow of organic matter in natural systems. Despite being long recognized as a key component in ecology, estimation of food web functioning is still challenging due to the difficulty in accurately measuring species interactions within a food web. Novel tracing methods that estimate species diet uptake and trophic position are therefore needed for assessing food web dynamics. The focus of this thesis is the use...
Tu, Ha T; Corey, Catherine G
To aid consumers in comparing prescription drug costs, many states have launched Web sites to publish drug prices offered by local retail pharmacies. The current push to make retail pharmacy prices accessible to consumers is part of a much broader movement to increase price transparency throughout the health-care sector. Efforts to encourage price-based shopping for hospital and physician services have encountered widespread concerns, both on grounds that prices for complex services are difficult to measure and compare accurately and that quality varies substantially across providers. Experts agree, however, that prescription drugs are much easier to shop for than other, more complex health services. However, extensive gaps in available price information--the result of relying on Medicaid data--seriously hamper the effectiveness of state drug price-comparison Web sites, according to a new study by the Center for Studying Health System Change (HSC). An alternative approach--requiring pharmacies to submit price lists to the states--would improve the usefulness of price information, but pharmacies typically oppose such a mandate. Another limitation of most state Web sites is that price information is restricted to local pharmacies, when online pharmacies, both U.S. and foreign, often sell prescription drugs at substantially lower prices. To further enhance consumer shopping tools, states might consider expanding the types of information provided, including online pharmacy comparison tools, lists of deeply discounted generic drugs offered by discount retailers, and lists of local pharmacies offering price matches. PMID:18494180
Rathipriya, R.; Thangavel, K.; Bagyamani, J.
Web mining is the nontrivial process of discovering valid, novel, potentially useful knowledge from web data using data mining techniques or methods. It may yield information that is useful for improving the services offered by web portals and information access and retrieval tools. With the rapid development of biclustering, more researchers have applied the biclustering technique to different fields in recent years. When the biclustering approach is applied to web usage data it automaticall...
Bayu Kanigoro; Widodo Budiharto; Jurike V. Moniaga; Muhsin Shodiq
Once an individual has access to the Internet, there is a wide variety of methods for communication and information exchange over the network, one of which is the telepresence robot. This study presents a web framework for the web conference system of an intelligent telepresence robot. The robot is controlled through a web conference system hosted on Google App Engine, so a manager or supervisor at an office or plant can direct the robot to the intended person to start a discussion or inspection. We build a web...
Jalal, Samir Kumar; Biswas, Subal Chandra; Mukhopadhyay, Parthasarathi
The paper focuses on the Web presence and visibility of websites of Asian countries. The paper tries to highlight the Web presence using some webometric indicators like Internet access, webpages, number of Internet users, and link counts. The study analyzes the web presence using popular search engines like Altavista, Google, Yahoo and MSN. An attempt has also been made to find out the Web Impact Factor (WIF) for selected Asian countries. The result shows that China (43.7%),...
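The Web Impact Factor mentioned above is commonly computed as the number of links pointing to a site divided by the number of its pages. A minimal sketch of that calculation, with invented names and counts (placeholders, not the study's data):

```python
# Hedged sketch: the (simple) Web Impact Factor is often defined as the
# number of inlinks a site receives divided by its page count.
# The site names and counts below are invented for illustration.

def web_impact_factor(inlinks, pages):
    if pages == 0:
        raise ValueError("page count must be positive")
    return inlinks / pages

sites = {"example-a": (43_700, 10_000), "example-b": (8_200, 4_000)}
wif = {name: web_impact_factor(*counts) for name, counts in sites.items()}
```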
The exponential growth in the number of web sites and Internet users has made the WWW the most important global information resource. From information publishing and electronic commerce to entertainment and social networking, the Web allows inexpensive and efficient access to the services provided by individuals and institutions. The basic units for distributing these services are the web sites scattered throughout the world. However, the extreme fragility of web services and content, the hig...
Yogish, H. K.; Raju, G. T.; Manjunath, T. N.
The World Wide Web serves as a huge, widely distributed, global information service centre for news, advertisements, consumer information, financial management, education, government, e-commerce and many other information services. The Web also contains a rich and dynamic collection of hyperlink information and web page access and usage information, providing rich sources of data for data mining. Web usage mining is the area of data mining which deals with the discovery and analysis of usag...
Full Text Available Problem statement: The main goal of a Web crawler is to collect documents relevant to a given topic in which the search engine specializes. Such topic-specific search systems typically use the whole document's content to predict the importance of an unvisited link. Current research has shown, however, that the content pointed to by an unvisited link depends mainly on the anchor text, which is a more accurate predictor than the contents of the whole page. Approach: Between these two extremes, it was proposed that a Treasure Graph, called T-Graph, is a more effective way to guide the Web crawler to fetch topic-specific documents: the topic boundary around the unvisited link is identified, that text is compared with all the nodes of the T-Graph to obtain the matching node(s), and the distance, in documents to be downloaded, to reach the target documents is calculated. Results: Web search systems based on this strategy allow crawlers and robots to update their experiences more rapidly and intelligently, and can also offer advantages in speed of access and presentation. Conclusion/Recommendations: Visiting links to update a robot's experiences based on the principles and usage of the T-Graph can be deployed in intelligent-knowledge Web crawlers, as shown by the proposed novel Web search system architecture.
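The anchor-text comparison such a focused crawler relies on can be sketched as follows; this is an illustrative scoring scheme, not the paper's exact T-Graph matching, and the tokenizer, topic nodes and link texts are all assumptions:

```python
# Illustrative sketch: score unvisited links by comparing the anchor text
# and its surrounding "topic boundary" text against topic keyword nodes,
# as a topic-specific crawler might, then rank the frontier.

def tokenize(text):
    return {w.lower().strip(".,()") for w in text.split() if w}

def link_score(anchor_text, context_text, topic_nodes):
    """Return the best term overlap between the link's text and any topic node."""
    words = tokenize(anchor_text) | tokenize(context_text)
    best = 0.0
    for node_terms in topic_nodes:
        overlap = len(words & node_terms) / max(len(node_terms), 1)
        best = max(best, overlap)
    return best

topic_nodes = [tokenize("solar photovoltaic energy"), tokenize("wind turbine power")]
frontier = [
    ("solar energy basics", "introduction to photovoltaic panels"),
    ("contact us", "company address and phone"),
]
ranked = sorted(frontier, key=lambda l: -link_score(l[0], l[1], topic_nodes))
```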
Deep Web contents are accessed through queries submitted to Web databases, and the returned data records are wrapped in dynamically generated Web pages (called deep Web pages in this paper). Extracting structured data from deep Web pages is a challenging problem due to the underlying intricate structures of such pages. A large number of techniques have been proposed to address this problem, but all of them have limitations because they are dependent on the Web page's programming language.
T. Rajesh; T. Prathap; S.Naveen Nambi; A. R. Arunachalam
Deep Web contents are accessed through queries submitted to Web databases, and the returned data records are wrapped in dynamically generated Web pages (called deep Web pages in this paper). Extracting structured data from deep Web pages is a challenging problem due to the underlying intricate structures of such pages. A large number of techniques have been proposed to address this problem, but all of them have limitations because they are Web-page-programming...
The World Wide Web has enabled the creation of a global information space comprising linked documents. As the Web becomes ever more enmeshed with our daily lives, there is a growing desire for direct access to raw data not currently available on the Web or bound up in hypertext documents. Linked Data provides a publishing paradigm in which not only documents, but also data, can be a first class citizen of the Web, thereby enabling the extension of the Web with a global data space based on open standards - the Web of Data. In this Synthesis lecture we provide readers with a detailed technical i
Marylene S. Eder
The interactive campus map is a web-based application that can be accessed through a web browser. With the Google Maps Application Programming Interface, the availability of the overlay function has been taken advantage of to create custom map functionalities. A collection of building points was gathered for routing and to create polygons serving as representations of each building. The previous campus map provided a static visual representation of the campus, using legends, building names and corresponding building numbers to provide information. Its limited capabilities led the researchers to create an interactive campus map. Storing data about buildings, rooms and staff, university events and a campus guide are among the primary features that this study has to offer. The Interactive Web-based Campus Information System is intended to provide a Campus Information System that is open to constant updates, user-friendly for both trained and untrained users, and capable of responding to all needs of users and carrying out analyses. Based on the data gathered through questionnaires, the researchers analyzed the results of the test survey and showed that the system is user-friendly, delivers information to users, and provides the important features that students expect.
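The routing feature over collected building points reduces to shortest-path search on a weighted graph. A minimal sketch with an invented campus graph (building names and distances are placeholders, not the study's dataset):

```python
# Hedged sketch: Dijkstra's algorithm over building points, the kind of
# computation behind a campus map's routing feature. Graph data is made up.
import heapq

def shortest_path(graph, start, goal):
    """Return (cost, path) of the cheapest route from start to goal."""
    queue = [(0.0, start, [start])]
    seen = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, w in graph.get(node, {}).items():
            if nxt not in seen:
                heapq.heappush(queue, (cost + w, nxt, path + [nxt]))
    return float("inf"), []

campus = {  # edge weights are walking distances in meters (invented)
    "Gate": {"Library": 120, "Admin": 80},
    "Admin": {"Library": 60, "Gym": 150},
    "Library": {"Gym": 90},
    "Gym": {},
}
cost, route = shortest_path(campus, "Gate", "Gym")
```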
Avila, Edwin M. Martinez; Muniz, Ricardo; Szafran, Jamie; Dalton, Adam
Lines of code (LOC) analysis is one of the methods used to measure programmer productivity and estimate schedules of programming projects. The Launch Control System (LCS) had previously used this method to estimate the amount of work and to plan development efforts. The disadvantage of using LOC as a measure of effort is that coding accounts for only 30% to 35% of the total effort of software projects. Because of this disadvantage, Jamie Szafran of the System Software Branch of Control And Data Systems (NE-C3) at Kennedy Space Center developed a web application called the Function Point Analysis (FPA) Depot, which uses function points instead of LOC for a better estimate of the hours needed to develop each piece of software. The objective of this web application is that the LCS software architecture team can use the data to more accurately estimate the effort required to implement customer requirements. This paper describes the evolution of the domain model used for function point analysis as project managers continually strive to generate more accurate estimates.
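A function point estimate of the kind such a tool supports can be sketched as follows; the weights are the classic "average" complexity weights from function point analysis, and the hours-per-FP calibration is an assumed figure, not an LCS or FPA Depot value:

```python
# Hedged sketch of an unadjusted function point count. Weights are the
# standard "average" complexity weights; hours_per_fp is an assumed
# calibration figure for illustration only.
WEIGHTS = {"EI": 4, "EO": 5, "EQ": 4, "ILF": 10, "EIF": 7}

def unadjusted_fp(counts):
    """Sum weighted counts of inputs, outputs, inquiries, and files."""
    return sum(WEIGHTS[k] * n for k, n in counts.items())

def estimate_hours(counts, hours_per_fp=8.0):
    return unadjusted_fp(counts) * hours_per_fp

counts = {"EI": 3, "EO": 2, "EQ": 1, "ILF": 1, "EIF": 0}
fp = unadjusted_fp(counts)  # 3*4 + 2*5 + 1*4 + 1*10 + 0*7 = 36
```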
Verdu Torres, Daniel
The ALICE Detector Control System is a complex hardware and software infrastructure running in a protected network environment. Monitoring data, announcements and alarms are made accessible to interested users in several different ways: dedicated panels running on operator nodes, web sites, email and SMS. The project aims to aggregate information coming from several different systems, categorize it according to its nature, reformat it and publish it on a dedicated web site. For this purpose, I have used the WinCC_OA software tool, which is the software used by the ALICE DCS group.
This book is full of short, concise recipes to learn a variety of useful web scraping techniques using Java. You will start with a simple basic recipe of setting up your Java environment and gradually learn some more advanced recipes such as using complex scrapers. Instant Web Scraping with Java is aimed at developers who, while not necessarily familiar with Java, are at least ready to dive into the complexities of this language with simple, step-by-step instructions leading the way. It is assumed that you have at least an intermediate knowledge of HTML, some knowledge of MySQL, and access to a
The perfect place to learn how to design Web sites for mobile devices! With the popularity of Internet access via cell phones and other mobile devices, Web designers now have to consider as many as eight operating systems, several browsers, and a slew of new devices as they plan a new site, a new interface, or a new sub-site. This easy-to-follow, friendly book guides you through this brave new world with a clear look at the fundamentals and offers practical techniques and tricks you may not have considered: it explores all issues to consider in planning a mobile site and covers the tools needed for
In today's scenario, web services have become a grand vision for implementing business process functionalities. With the increase in the number of similar web services, one of the essential challenges is to discover the web service most relevant to the user's specification. The relevance of web service discovery can be improved by augmenting semantics through expressive formats like OWL. QoS-based service selection plays a significant role in meeting non-functional user requirements. Hence QoS and semantics have been used as finer search constraints to discover the most relevant service. In this paper, we describe a QoS framework for ontology-based web service discovery. The QoS factors taken into consideration are execution time, response time, throughput, scalability, reputation, accessibility and availability. The behavior of each web service at various instances is observed over a period of time and its QoS-based performance is analyzed.
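Ranking candidates by weighted, normalized QoS attributes might look like the following sketch; the services, weights and attribute values are invented for illustration, not the framework's actual scoring function:

```python
# Illustrative sketch: rank candidate services by a weighted QoS score.
# Attributes where smaller is better (e.g. response time) are inverted
# during min-max normalization. All data below is made up.

def normalize(values, higher_is_better):
    lo, hi = min(values), max(values)
    if hi == lo:
        return [1.0] * len(values)
    scaled = [(v - lo) / (hi - lo) for v in values]
    return scaled if higher_is_better else [1 - s for s in scaled]

def rank(services, attrs):
    """attrs maps attribute name -> (weight, higher_is_better)."""
    names = list(services)
    scores = {n: 0.0 for n in names}
    for attr, (weight, hib) in attrs.items():
        col = normalize([services[n][attr] for n in names], hib)
        for n, s in zip(names, col):
            scores[n] += weight * s
    return sorted(names, key=lambda n: -scores[n])

services = {
    "svc-a": {"response_ms": 120, "availability": 0.99},
    "svc-b": {"response_ms": 300, "availability": 0.95},
}
order = rank(services, {"response_ms": (0.5, False), "availability": (0.5, True)})
```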
Web Services (WS) are used for the development of distributed applications containing native code assembled with references to remote Web Services. There are thousands of Web Services available on the Web, but the problem is how to find an appropriate WS by discovering its details, i.e. the description of the functionality of the object methods exposed for public use. Several models have been suggested, and some of them implemented, but so far none of them allows systematic, publicly available search. This paper suggests a model for publishing WS in a flexible way which allows an automated means of finding the desired Web Service by category and/or functionality without having to access any dedicated servers. The search results in a narrow set of Web Services suitable for solving the problem according to the user's specification.
Problem statement: Web usage mining is the technique of extracting useful information from server logs (users' history) and finding out what users are looking for on the Internet. This type of web mining allows for the collection of Web access data for Web pages. Scope: The web usage data provides the paths leading to accessed Web pages, with preferences and higher priorities. This information is often gathered automatically into access logs by the Web server. Approach: In this study we propose an induction-based decision rule model for generating inferences and implicit hidden behavioral aspects in web usage mining, which examines the web server and client logs. The decision-based rule induction mining combines a fast decision rule induction algorithm with a method for converting a decision tree to a simplified rule set. Results: The experimentation is conducted with the Weka tool, and the performance of the proposed induction-based decision rule algorithm is evaluated in terms of mined decisive rules, execution time, root mean square error and mean absolute error. The proposed induction rule mining needs 400 ms of execution time for decisive rule generation, whereas the previous expectation maximization algorithm needs 600 ms. Conclusion: Web usage mining is evaluated with decisive rules of user page navigation and preferences. Decisive rules let web site developers and owners know the site presentation preferences and demands of web users.
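Before any rule induction, the usage data must be extracted from the server's access logs. A minimal sketch of that preprocessing step, with invented log lines (a rule learner such as one in Weka would then consume counts like these):

```python
# Hedged sketch: mine simple page-preference counts from Common Log
# Format lines. The log lines are invented for illustration; real rule
# induction would run on top of features like these.
import re
from collections import Counter

LOG_RE = re.compile(r'^(\S+) \S+ \S+ \[[^\]]+\] "GET (\S+) HTTP/\d\.\d" (\d{3})')

def page_counts(lines):
    """Count successful (HTTP 200) page accesses per URL path."""
    counts = Counter()
    for line in lines:
        m = LOG_RE.match(line)
        if m and m.group(3) == "200":
            counts[m.group(2)] += 1
    return counts

logs = [
    '10.0.0.1 - - [01/Jan/2024:10:00:00 +0000] "GET /products HTTP/1.1" 200',
    '10.0.0.2 - - [01/Jan/2024:10:00:05 +0000] "GET /products HTTP/1.1" 200',
    '10.0.0.1 - - [01/Jan/2024:10:00:09 +0000] "GET /missing HTTP/1.1" 404',
]
prefs = page_counts(logs)
```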
The Web browser has become one of the major application interfaces for remotely participating in magnetic fusion experiments. Recently, in other areas, web portals have begun to be deployed. These portals are used to present very diverse sources of information in a unified way. While a web portal has several benefits over other software interfaces, such as providing a single point of access for multiple computational services and eliminating the need for client software installation, the design and development of a web portal has unique challenges. One challenge is that a web portal needs to be fast and interactive despite the high volume of tools and information it presents. Another challenge is that the visual output on the web portal is often overwhelming due to the high volume of data generated by complex scientific instruments and experiments; therefore the applications and information should be customizable depending on the needs of users. An appropriate software architecture and web technologies can address these problems. A web portal has been designed to support the experimental activities of DIII-D researchers worldwide. It utilizes a multi-tier software architecture and web 2.0 technologies, such as AJAX, Django, and Memcached, to develop a highly interactive and customizable user interface. It offers a customizable interface with personalized page layouts and a list of services for users to select, so users can create a unique personalized working environment to fit their own needs and interests. Customizable services include real-time experiment status monitoring, diagnostic data access, and interactive data visualization. The web portal also supports interactive collaborations by providing collaborative logbook, shared visualization and online instant message services. Furthermore, the web portal will provide a mechanism to allow users to create their own applications on the web portal as well as bridging capabilities to external applications such as
Rajugan, R.; Chang, E.; Dillon, T.; Feng, L.
The emergence of semantic Web (SW) and related technologies promise to make the Web a meaningful experience. Web services (WS) enable distributed access and discovery of internal, enterprise functions and services over the Web, in a secure controlled environment. Yet, high level modeling, design and
Sun, Yanyan; Gao, Fei
Web annotation is a Web 2.0 technology that allows learners to work collaboratively on web pages or electronic documents. This study explored the use of Web annotation as an online discussion tool by comparing it to a traditional threaded discussion forum. Ten graduate students participated in the study. Participants had access to both a Web…
LAI, Maosheng; QU, Peng; ZHAO, Kang
The paper focuses on the language utilization habits of China's Web users in accessing the Web. It also makes a general study of the basic nature of the language phenomenon with regard to digital accessing. A questionnaire survey was formulated and distributed online for these research purposes, and 1,267 responses were collected. The data were analyzed with descriptive statistics, chi-square testing and contingency table analyses. Results revealed the following findings. Tagging has already played an important role in Web 2.0 communication for China's Web users. Chinese users rely greatly on all kinds of taxonomies in browsing and are also aware of them in effective searching. These findings imply that classified languages in the digital environment may aid Chinese Web users in a more satisfying manner. Highly subject-specific words, especially those from authorized tools, yielded better results in searching. Chinese users have high recognition for related terms. As to the demographic aspect, there is little difference between genders in the utilization of information retrieval languages. Age may constitute a variable element to a certain degree. Educational background has a complex effect on language utilization in searching. These research findings characterize Chinese Web users' behaviors in digital information accessing. They can also be potentially valuable for the modeling and further refinement of digital accessing services.
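The chi-square testing on contingency tables described above can be sketched as follows; the 2x2 table (gender by tagging use) is invented for illustration, not the survey's actual data:

```python
# Hedged sketch of a contingency-table chi-square statistic, computed
# from scratch. The 2x2 counts (gender x "uses tagging") are invented.

def chi_square(table):
    """Pearson chi-square statistic for a contingency table."""
    rows = [sum(r) for r in table]
    cols = [sum(c) for c in zip(*table)]
    total = sum(rows)
    stat = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            expected = rows[i] * cols[j] / total
            stat += (observed - expected) ** 2 / expected
    return stat

# Rows: female, male; columns: uses tagging, does not (invented counts).
table = [[300, 330], [310, 327]]
stat = chi_square(table)  # small value suggests little gender difference
```

A statistic below the 5% critical value for one degree of freedom (3.84) would be consistent with the paper's finding of little gender difference.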
Rathipriya, R.; Thangavel, K.; Bagyamani, J.
Web mining is the nontrivial process of discovering valid, novel, potentially useful knowledge from web data using data mining techniques or methods. It may yield information that is useful for improving the services offered by web portals and information access and retrieval tools. With the rapid development of biclustering, more researchers have applied the biclustering technique to different fields in recent years. When the biclustering approach is applied to web usage data, it automatically captures the hidden browsing patterns from it in the form of biclusters. In this work, a swarm intelligence technique is combined with the biclustering approach to propose an algorithm called Binary Particle Swarm Optimization (BPSO) based Biclustering for Web Usage Data. The main objective of this algorithm is to retrieve the globally optimal bicluster from the web usage data. These biclusters contain relationships between web users and web pages which are useful for E-Commerce applications like web advertising and marketin...
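A minimal binary PSO loop of the kind the algorithm builds on might look like this; the fitness function here is a toy stand-in (count of selected bits), not the paper's bicluster coherence measure, and all parameters are assumptions:

```python
# Hedged sketch of binary PSO: each particle is a bit vector (e.g.
# selecting users/pages for a bicluster) and bits flip with a
# sigmoid-mapped velocity. Fitness here is a toy max-ones objective.
import math
import random

def bpso(fitness, n_bits, n_particles=10, iters=50, seed=7):
    rng = random.Random(seed)
    pos = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(n_particles)]
    vel = [[0.0] * n_bits for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                 # personal bests
    gbest = max(pbest, key=fitness)[:]          # global best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(n_bits):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] += 2 * r1 * (pbest[i][d] - pos[i][d]) \
                           + 2 * r2 * (gbest[d] - pos[i][d])
                prob = 1 / (1 + math.exp(-vel[i][d]))  # sigmoid transfer
                pos[i][d] = 1 if rng.random() < prob else 0
            if fitness(pos[i]) > fitness(pbest[i]):
                pbest[i] = pos[i][:]
                if fitness(pbest[i]) > fitness(gbest):
                    gbest = pbest[i][:]
    return gbest

# Toy objective: prefer vectors with many ones (stand-in for bicluster quality).
best = bpso(sum, n_bits=8)
```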
Accessible information decreases under quantum operations. We analyze the connection between quantum operations and accessible information, and show that a general quantum process cannot be operated accurately. Furthermore, an unknown state of a closed quantum system cannot be manipulated arbitrarily by a unitary quantum operation.
Jarir, Zahi; Quafafou, Mohamed; Erradi, Mahammed
The field of information extraction from the Web emerged with the growth of the Web and the multiplication of online data sources. This paper presents an analysis of information extraction methods and a service-oriented approach to web information extraction that considers both web data management and extraction services. We then propose an SOA-based architecture to enhance flexibility and allow on-the-fly modification of web extraction services. An implementation of the proposed architecture is p...
R. Joseph Manoj
Authentication is a method of validating users' identity prior to permitting them to access web services. To enhance the security of web services, providers employ a variety of authentication methods to restrict malicious users from accessing the services. This paper proposes a new authentication method which verifies a user's identity by analyzing web server log files, which include the requesting user's IP address, username, password, date and time of request, status code, URL, etc., and checks for IP address spoofing using the ingress packet filtering method. The paper also analyses the resulting data and the performance of the proposed work.
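The two checks the method combines, log-history validation and ingress filtering, can be sketched as follows; the log-entry layout, network ranges and credentials are assumptions for illustration, not the paper's actual implementation:

```python
# Hedged sketch of the combined checks: reject source addresses that fail
# an ingress filter (an "internal" address arriving on the external
# interface is treated as spoofed), then match the request against
# server-log history. All data below is invented.
import ipaddress

INTERNAL = ipaddress.ip_network("10.0.0.0/8")

def ingress_ok(src_ip, arrived_on_external_interface=True):
    ip = ipaddress.ip_address(src_ip)
    return not (arrived_on_external_interface and ip in INTERNAL)

def authenticate(username, password, src_ip, log_entries):
    if not ingress_ok(src_ip):
        return False  # likely spoofed source address
    return any(e["user"] == username and e["password"] == password
               and e["ip"] == src_ip and e["status"] == "200"
               for e in log_entries)

log = [{"user": "alice", "password": "s3cret", "ip": "203.0.113.9", "status": "200"}]
```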
Moyano, Marcelo; Buccella, Agustina; Cechich, Alejandra; Estévez, Elsa Clara
Semantic Web is an extension of the current web in which data contained in the web documents are machine-understandable. On the other hand, Web Services provide a new model of the web in which sites exchange dynamic information on demand. Combination of both introduces a new concept named Semantic Web Services in which semantic information is added to the different activities involved in Web Services, such as discovering, publication, composition, etc. In this paper, we analyze several propos...
Echevarria, Juan Jose; Ruiz-de-Garibay, Jonathan; Legarda, Jon; Alvarez, Maite; Ayerbe, Ana; Vazquez, Juan Ignacio
Information and Communication Technologies (ICTs) continue to overcome many of the challenges related to wireless sensor monitoring, such as the design of smarter embedded processors, the improvement of network architectures, the development of efficient communication protocols and the maximization of life-cycle autonomy. This work seeks to improve the data-transmission communication link in wireless sensor monitoring. The upstream communication link is usually based on standard IP technologies, but the downstream side is always masked by the proprietary protocols used for the wireless link (such as ZigBee, Bluetooth or RFID). This work presents a novel solution (WebTag) for direct IP-based access to a sensor tag over Near Field Communication (NFC) technology for secure applications. WebTag allows direct web access to the sensor tag by means of a standard web browser: it reads the sensor data, configures the sampling rate and implements IP-based security policies. It is a further step towards the evolution of the Internet of Things paradigm. PMID:23012511
Kahl, Chad M.; Williams, Sarah C.
To ensure efficient access to and integrated searching capabilities for their institution's new digital library projects, the authors studied Web sites of the Association of Research Libraries' (ARL) 111 academic, English-language libraries. Data were gathered on 1117 digital projects, noting library Web site and project access, metadata, and…
A web application combines two words, "web" and "application", where web means the web browser and application means computer software. A web browser is used to search for information on the World Wide Web (www) or on the Internet, whereas an application is used to solve one or more tasks, depending on the type of application. In this way, we can say that a web application is computer software that performs one or more tasks on a computer network using a web browser. Now the question arises for the developer of a web application: if we develop a web application, how do we sell it and how do we get the maximum profit from its marketing? Is there any way? There are many ways to market a web application: commercial advertisement, a trial version, a beta version, a promotional launch, or a customized version such as a desktop application or browser application. These are the old methods of marketing a web application. The new, modern method of marketing a web application is as cloud computing (SaaS), because it is accessed through a web browser and used to solve one or more tasks at very low cost while hosted on a central server, whereas a web application may be hosted on different servers. Cost, security, maintenance and speed are the main benefits of marketing a web application as a cloud computing application.
In this study, we present a user assistant for a Web-based e-learning environment called AVUNET. It is made up of an educational server that allows access to the courses available on the site. The server is structured into pedagogical labs that respond to user needs. It also offers a module for self-evaluation so that users can evaluate their level; trainers have created this facility very carefully to assure a detailed evaluation with very accurate solutions. Based on the model of the "telephone ring", the system proposes facilities for communication and collaboration in order to bring the trainers and the learners closer to each other. This forum of trainers and learners allows users to exchange information and their pedagogical experiences, which are acquired through the use of computer-aided tools for virtual navigation through structured hypertext documents. The user's actions during a learning or navigation session are analyzed.
Clark, Kenneth; Hosticka, Alice; Kent, Judi; Browne, Ron
Addresses issues of access to World Wide Web sites, mathematics and science content-resources available on the Web, and methods for integrating mathematics, science, and language arts instruction. (Author/ASK)
Web browsers have become a major application interface for participating in scientific experiments such as those in magnetic fusion. The recent advances in web technologies motivated the deployment of interactive web applications with rich features. In the scientific world, web applications have been deployed in portal environments. When used in a scientific research environment, such as fusion experiments, web portals can present diverse sources of information in a unified interface. However, the design and development of a scientific web portal has its own challenges. One such challenge is that a web portal needs to be fast and interactive despite the high volume of information and number of tools it presents. Another challenge is that the visual output of the web portal must not be overwhelming to the end users, despite the high volume of data generated by fusion experiments. Therefore, the applications and information should be customizable depending on the needs of end users. In order to meet these challenges, the design and implementation of a web portal needs to support high interactivity and user customization. A web portal has been designed to support the experimental activities of DIII-D researchers worldwide by providing multiple services, such as real-time experiment status monitoring, diagnostic data access and interactive data visualization. The web portal also supports interactive collaborations by providing a collaborative logbook, shared visualization and online instant messaging services. The portal's design utilizes the multi-tier software architecture and has been implemented utilizing web 2.0 technologies, such as AJAX, Django, and Memcached, to develop a highly interactive and customizable user interface. It offers a customizable interface with personalized page layouts and list of services, which allows users to create a unique, personalized working environment to fit their own needs and interests. This paper describes the software
The TEC community operates the TEXTOR device and in doing so collects and stores data from a number of different front-end acquisition systems, processing codes and analysis systems. Due to the evolution of these systems in the past, different, distributed data storage technologies were used to record this data. In an attempt to reduce the number of interfaces client codes have to use when accessing data from these data stores, an 'umbrella' concept was developed: a software layer that covers (as an 'umbrella') as many of these stores as possible and provides a unified access mechanism to them. We explored the possibility of using the widely supported HTTP protocol for this purpose; this is the core protocol of the World-Wide-Web and it is capable of transporting almost any type of data. The concepts behind using this protocol were based on earlier work at JET. Access via this umbrella has been provided to the most important data stores around TEXTOR, and access to others is being added regularly. Client codes, libraries and programs have been developed for several user environments. The HTTP-based concepts and the data access via this system have been found to be highly portable. This paper gives an overview of the TEC Web-Umbrella system, describes the basic concepts of this system and presents some of the client-side codes and programs. The paper also reports on some first (tentative) user experiences with it.
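The "umbrella" idea, one uniform access call dispatching to per-store adapters, can be sketched as follows; the store names and returned records are stand-ins, not TEXTOR's actual back ends or the Web-Umbrella's real interface:

```python
# Conceptual sketch: a thin layer that registers one adapter per data
# store and exposes a single get(store, signal) call, so client code
# never talks to store-specific interfaces directly.

class Umbrella:
    def __init__(self):
        self._adapters = {}

    def register(self, store_name, fetch_fn):
        """Plug in a per-store adapter (any callable taking a signal name)."""
        self._adapters[store_name] = fetch_fn

    def get(self, store_name, signal):
        if store_name not in self._adapters:
            raise KeyError(f"no adapter for store {store_name!r}")
        return self._adapters[store_name](signal)

umbrella = Umbrella()
# Adapter bodies are placeholders; real ones would issue HTTP requests.
umbrella.register("legacy", lambda sig: {"signal": sig, "source": "legacy"})
umbrella.register("mdsplus", lambda sig: {"signal": sig, "source": "mdsplus"})
data = umbrella.get("legacy", "ip_current")
```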
Rutty, G N
Now that one has logged onto the world wide web (WWW) and utilized one or more of the home pages listed previously (or used another equally good home page) to seek out basic information available to forensic practitioners, the question now arises of how to go about making the most of the information available. One feature consistent to most home pages is links to the home pages of Associations and Societies, one or more of which most practitioners will be members of. With access to the WWW not only have you access to your own association/society, but you can also keep up to date with all the others to which you have not paid subscriptions. Although an internet search using a WWW search engine or the 'top 6' home pages may assist in identifying a large number of association and society sites, one of the most useful places to start is the home page of the Indian Academy of Forensic Medicine (IAFM). This, to date, lists a total of 139 such sites. To access all the home pages listed may take in excess of 6 h so the following review looks at the range of sites available and recommends some places the author considers many people may wish to know and visit. Again, this is inevitably a personal choice and it is recognized that those sites not listed may, in fact, be the preferred choice for other users of the forensic WWW. PMID:15335474
Gonçalves, Bruno; Ramasco, José J.
The increasing ubiquity of Internet access and the frequency with which people interact with it raise the possibility of using the Web to better observe, understand, and monitor several aspects of human social behavior. Web sites with large numbers of frequently returning users are ideal for this task. If these sites belong to companies or universities, their usage patterns can furnish information about the working habits of entire populations. In this work, we analyze the properly anonymized logs detailing the access history to Emory University’s Web site. Emory is a medium-sized university located in Atlanta, Georgia. We find interesting structure in the activity patterns of the domain and study in a systematic way the main forces behind the dynamics of the traffic. In particular, we find that linear preferential linking, priority-based queuing, and the decay of interest for the contents of the pages are the essential ingredients to understand the way users navigate the Web.
Gray, Geraldine; O’Connor, Kieran
Web Services using eXtensible Markup Language (XML) based standards are becoming the new archetype for enabling business to business collaborations. This paper describes the conceptual architecture and semantics of constructing and consuming Web Services. It describes how Web Services fit into the enterprise application environment. It discusses Web Services security. Finally, it outlines the flaws of Web Services in their current state.
Obrenovic, Z.; Ossenbruggen, J.R. van
A web browser provides a uniform user interface to different types of information. Making this interface universally accessible and more interactive is a long-term goal still far from being achieved. Universally accessible browsers require novel interaction modalities and additional functionalities ...
Golterman, Linda; Banasiak, Nancy C
This article describes a framework for evaluating the quality of health care information on the Internet and identifies strategies for accessing reliable child health resources. A number of methods are reviewed, including how to evaluate Web sites for quality using the Health Information Technology Institute evaluation criteria, how to identify trustworthy Web sites accredited by the Health On the Net Foundation Code of Conduct, and the use of portals to access Web sites prescreened by organizations such as the Medical Library Association. Pediatric nurses can use one or all of these strategies to develop a list of reliable Web sites as a supplement to patient and family teaching.
LIU Dan; GUO Cheng-cheng; ZHANG Li
The Web offers a very convenient way to access remote information resources, and an important measure of Web service quality is how long it takes to search for and retrieve information. By caching the Web server's dynamic content, repeated database queries can be avoided and the access frequency of the original resources reduced, thereby improving the speed of the server's response. This paper describes the concept, advantages, principles and concrete realization procedure of a dynamic content cache module for a Web server.
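As a rough illustration of the caching idea described above (not the paper's actual module), a time-to-live cache can sit between request handling and the database query:

```python
import time

class DynamicContentCache:
    """Minimal TTL cache for generated pages (an illustrative sketch).

    Repeated requests for the same key within `ttl` seconds are served
    from memory, avoiding a fresh database query each time."""

    def __init__(self, ttl=30.0):
        self.ttl = ttl
        self._store = {}  # key -> (expiry_time, page)

    def get(self, key, generate):
        now = time.time()
        entry = self._store.get(key)
        if entry and entry[0] > now:   # cache hit, entry still fresh
            return entry[1]
        page = generate()              # regenerate, e.g. run the DB query
        self._store[key] = (now + self.ttl, page)
        return page
```

The design choice here is the classic cache trade-off the abstract alludes to: a longer `ttl` reduces database load further but increases the staleness of the dynamic content served.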
Ng, CP; Wang, CL
Access latency and load balancing are the two main issues in the design of clustered Web server architecture for achieving high performance. We propose a novel document distribution algorithm for load balancing on a cluster of distributed Web servers. We group Web pages that are likely to be accessed during a request session into a migrating unit, which is used as the basic unit of document placement. A modified binning algorithm is developed to distribute the migrating units among the Web se...
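The migrating-unit placement described above can be sketched as a greedy balancing heuristic; the paper's modified binning algorithm is more elaborate, so treat this as an illustrative assumption:

```python
def distribute_units(units, n_servers):
    """Greedy 'binning' placement sketch: assign each migrating unit
    (a group of pages likely accessed in one session) to the currently
    lightest-loaded server.

    `units` maps unit id -> expected load (e.g. request rate)."""
    loads = [0.0] * n_servers
    placement = {}
    # Placing heaviest units first gives a better balance (LPT heuristic).
    for uid, load in sorted(units.items(), key=lambda kv: -kv[1]):
        target = loads.index(min(loads))  # lightest server so far
        placement[uid] = target
        loads[target] += load
    return placement, loads
```

Grouping pages into units before placement is the key idea: a whole request session stays on one server, so intra-session requests avoid redirection latency.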
Huang, CC; Meng, EC; Morris, JH; Pettersen, EF; Ferrin, TE
Integrating access to web services with desktop applications allows for an expanded set of application features, including performing computationally intensive tasks and convenient searches of databases. We describe how we have enhanced UCSF Chimera (http://www.rbvi.ucsf.edu/chimera/), a program for the interactive visualization and analysis of molecular structures and related data, through the addition of several web services (http://www.rbvi.ucsf.edu/chimera/ docs/webservices.html). By stre...
The phenomenal growth of the World Wide Web has brought a huge increase in traffic to popular web sites. Long delays and denial of service experienced by end-users, especially during peak hours, continue to be a common problem when accessing popular sites. Replicating some of the objects at multiple sites in a distributed web-server environment is one possible solution to improve the response time/latency. The decision of what and where to replicate requires solving a constraint optimization problem, which is NP-complete in general. In this paper, we consider the problem of placing copies of objects in a distributed web server system to minimize the cost of serving read and write requests when the web servers have limited storage capacity. We formulate the problem as a 0-1 optimization problem and present a polynomial-time greedy algorithm with backtracking to dynamically replicate objects at the appropriate sites to minimize a cost function. To reduce the solution search space, we present necessary conditions for a site to have a replica of an object in order to minimize the cost function. We present simulation results for a variety of problems to illustrate the accuracy and efficiency of the proposed algorithms and compare them with those of some well-known algorithms. The simulation results demonstrate the superiority of the proposed algorithms.
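A minimal sketch of greedy replica placement in the spirit described above, with unit-sized objects and a caller-supplied cost function; the paper's actual algorithm, with its backtracking and necessary conditions, is richer:

```python
def greedy_replicate(objects, sites, cost, capacity):
    """Repeatedly add the single (object, site) replica with the largest
    cost saving until nothing fits or nothing saves cost.

    `cost(placed)` returns the total serving cost for a set of
    (object, site) replicas; `capacity[site]` limits how many objects a
    site can hold (objects are assumed unit-sized for simplicity)."""
    placed = set()
    used = {s: 0 for s in sites}
    while True:
        best, best_cost = None, cost(placed)
        for o in objects:
            for s in sites:
                if (o, s) in placed or used[s] >= capacity[s]:
                    continue
                c = cost(placed | {(o, s)})
                if c < best_cost:          # strict improvement only
                    best, best_cost = (o, s), c
        if best is None:                   # no placement reduces cost
            return placed
        placed.add(best)
        used[best[1]] += 1
```

Each iteration scans all remaining (object, site) pairs, so the sketch runs in polynomial time, consistent with the complexity claim in the abstract, though with a naive constant factor.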
Introduction: We examine how residents and citizens of The Netherlands perceive open access to acquire preliminary insight into the role it might play in cultivating civic scientific literacy. Open access refers to scientific or scholarly research literature available on the Web to scholars and the general public in free online journals and…
Explosive and rapid growth of the World Wide Web has resulted in intricate Web sites, demanding enhanced user skills and sophisticated tools to help the user find the desired information. Finding desired information on the Web has become a critical ingredient of everyday personal, educational, and business life; thus, there is demand for more sophisticated tools to help users navigate a Web site and find the desired information. Users must be provided with information and services specific to their needs, rather than an undifferentiated mass of information. Many Web usage mining techniques have been applied to discover interesting and frequent navigation patterns from Web server logs. The recommendation accuracy of solely usage-based techniques can be improved by integrating Web site content and site structure into the personalization process. Herein, we propose a Semantically enriched Web Usage Mining method (SWUM), which combines the fields of Web Usage Mining and the Semantic Web. In the proposed method, the undirected graph derived from usage data is enriched with rich semantic information extracted from the Web pages and the Web site structure. The experimental results show that SWUM generates accurate recommendations by integrating usage data, semantic data and site structure, achieving 10-20% better accuracy than the solely usage-based model and 5-8% better than an ontology-based model.
Liu, C H; Chen, Jason J Y
Traditionally, agents and web services have been two separate research areas. We argue that, through agent communication, agents are well suited to coordinate web services. However, agent communication problems exist due to the lack of a uniform, cross-platform vocabulary. Fortunately, an ontology defines a vocabulary. We thus propose a new agent communication layer and present web ontology language (OWL)-based operational ontologies that provide a declarative description, which can be accessed by various engines to facilitate agent communication. Further, in our operational ontologies we define the mental attitudes of agents, which can be shared with other agents. Our architecture enhances the 3APL agent platform and is implemented as an agent communication framework. Finally, we extended the framework to be compatible with the web ontology language for services (OWL-S), and then developed a movie recommendation system with four OWL-S semantic web services on the framework. The benefits of this work are: 1) dynamic ...
Selecting the most relevant Web Service according to a client requirement is an onerous task, as innumerable functionally identical Web Services (WS) are listed in the UDDI registry. WS are functionally the same, but their quality and performance vary across service providers. A Web Service selection process involves two major points: recommending the pertinent Web Service and avoiding unjustifiable Web Services. The deficiency of keyword-based searching is that it does not handle the client request accurately, as a keyword may have ambiguous meanings in different scenarios. UDDI and search engines are all based on keyword search, which lags behind on pertinent Web Service selection. The search mechanism must therefore incorporate the semantic behavior of Web Services. To strengthen this approach, the proposed model incorporates Quality of Service (QoS) based ranking of semantic web services.
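The QoS-based ranking step could, under illustrative assumptions about the attributes (reliability, availability and latency are placeholders, not the paper's attribute set), look like:

```python
def rank_services(candidates, weights):
    """Rank functionally equivalent services by a weighted QoS score
    (a sketch). Higher is better for reliability and availability;
    lower is better for latency, so it enters the score negated."""
    def score(qos):
        return (weights["reliability"] * qos["reliability"]
                + weights["availability"] * qos["availability"]
                - weights["latency"] * qos["latency"])
    return sorted(candidates, key=lambda c: score(c["qos"]), reverse=True)
```

Weighting lets the client express which quality dimension matters most for a given request, which is exactly what pure keyword matching against UDDI cannot capture.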
Vrishali P. Sonavane
The Internet is the roads and highways of the information world; content providers are the road workers, and visitors are the drivers. As in the real world, there can be traffic jams, wrong signs, blind alleys, and so on. Content providers, like road workers, need information about their users to make Web site adjustments possible. Web logs store every motion on the provider's Web site, so providers need only a tool to analyze these logs. This tool is called Web Usage Mining. Web Usage Mining is a part of Web Mining and the foundation for Web site analysis. It employs various knowledge discovery methods to gain Web usage patterns. In this paper we use the LCS algorithm to improve recommendation accuracy. The experimental results show that the approach can improve the accuracy of classification in the architecture. Using the LCS algorithm we can predict users' future requests more accurately.
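Assuming LCS here denotes the classic longest common subsequence, the core dynamic program for comparing an active session against a stored navigation pattern is:

```python
def lcs_length(a, b):
    """Standard dynamic-programming longest-common-subsequence length.

    Applied to two page-visit sequences, a longer LCS means the active
    session more closely resembles a stored pattern, so that pattern's
    next pages are better candidates for recommendation."""
    m, n = len(a), len(b)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if a[i - 1] == b[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + 1
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    return dp[m][n]
```

Unlike exact prefix matching, the subsequence view tolerates detours: a user who inserts an extra page into an otherwise familiar path still matches the stored pattern strongly.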
Zhang, Chuanrong; Li, Weidong
This book covers key issues related to Geospatial Semantic Web, including geospatial web services for spatial data interoperability; geospatial ontology for semantic interoperability; ontology creation, sharing, and integration; querying knowledge and information from heterogeneous data source; interfaces for Geospatial Semantic Web, VGI (Volunteered Geographic Information) and Geospatial Semantic Web; challenges of Geospatial Semantic Web; and development of Geospatial Semantic Web applications. This book also describes state-of-the-art technologies that attempt to solve these problems such
US Agency for International Development — WebTA is a web-based time and attendance system that supports USAID payroll administration functions, and is designed to capture hours worked, leave used and...
For many of us, using the Web is a natural and even indispensable part of our daily lives. But only 20% of the world's population have access to it. Tim Berners-Lee, the Web's inventor, created the Web Foundation in 2007 with the aim of accelerating access to the Web for the rest of the world's population. Showcased at the Sharing Knowledge conference, the Mobile Web is one of the Web Foundation's projects in which members of CERN are involved. Virtually no access to the Web but a very extensive GSM network: that is the situation in which many developing countries, especially in Africa, find themselves. “Owing to its size, its unstable soils and its limited infrastructure, it is technically very difficult to bring optic fibres for Internet connections to all regions of Africa. The idea of the Mobile Web project is therefore to be able to use the GSM network to access the Web,” explains Silvano de Gennaro, a member of the video team within CERN's Communication Gro...
LI Chaofeng; LU Yansheng
The task of clustering Web sessions is to group Web sessions based on similarity, maximizing the intra-group similarity while minimizing the inter-group similarity. The first and foremost question to consider in clustering Web sessions is how to measure the similarity between Web sessions; however, traditional measurements have many shortcomings. This paper introduces a new method for measuring similarity between Web pages that takes into account not only the URL but also the viewing time of the visited page. We then give a detailed method for measuring the similarity of Web sessions using sequence alignment and the similarity of Web page accesses. Experiments have shown that our method is valid and efficient.
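A sketch of sequence-alignment similarity over (URL, viewing time) pairs, in the spirit of the method above; the scoring function is an illustrative assumption, not the paper's exact formula, and viewing times are assumed positive:

```python
def session_similarity(s1, s2, time_weight=0.5):
    """Global-alignment (Needleman-Wunsch-style) similarity between two
    sessions, each a list of (url, viewing_time) pairs. Matching URLs
    score higher when their viewing times are close."""
    def match(p, q):
        if p[0] != q[0]:
            return 0.0                       # different pages: no reward
        t1, t2 = p[1], q[1]
        return 1.0 - time_weight * (1.0 - min(t1, t2) / max(t1, t2))

    m, n = len(s1), len(s2)
    dp = [[0.0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            dp[i][j] = max(dp[i - 1][j],      # skip a page of s1
                           dp[i][j - 1],      # skip a page of s2
                           dp[i - 1][j - 1] + match(s1[i - 1], s2[j - 1]))
    # Normalize by the shorter session so the score lies in [0, 1].
    return dp[m][n] / min(m, n)
```

Folding viewing time into the match score is what distinguishes this from a pure URL alignment: two users who visit the same page but dwell for very different durations are treated as less similar.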
The Web has become one of the largest and most readily accessible repositories of human knowledge. Traditional search engines index only the surface Web, whose pages are easily found. The focus has now moved to the invisible or hidden Web, which consists of a large warehouse of useful data such as images, sounds, presentations and many other types of media. To use such data, a specialized technique is needed to locate those sites, as search engines do for the surface Web. This paper focuses on an effective design of a Hidden Web Crawler that can automatically discover pages from the Hidden Web by employing a multi-agent Web mining system. A framework for the deep web with a genetic algorithm is used to address the resource discovery problem, and the results show improvement in the crawling strategy and harvest rate.
ZHOU Hong-fang; FENG Bo-qin; HEI Xin-hong; LU Lin-tao
Web logs contain a lot of information related to user activities on the Internet. How to mine user browsing interest patterns effectively is an important and challenging research topic. Based on an analysis of the advantages and disadvantages of present algorithms, we propose a new concept: support-interest. Its key insight is that visitors will backtrack if they do not find the information where they expect it, and the point from which they backtrack is the expected location for the page. We present the User Access Matrix and a corresponding algorithm for discovering such expected locations that can handle page caching by the browser. Since the URL-URL matrix is a sparse matrix that can be represented by a list of 3-tuples, we can mine user-preferred sub-paths from the computation of this matrix. Accordingly, all the sub-paths are merged, and user-preferred paths are formed. Experiments showed that the approach is accurate and scalable, making it suitable for website-based applications such as optimizing a website's topological structure or designing personalized services.
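The list-of-3-tuples representation of the sparse URL-URL matrix can be built directly from session paths; this is a minimal sketch of the representation, not the paper's mining algorithm:

```python
def build_access_matrix(sessions):
    """Build a sparse URL-URL transition matrix as a list of 3-tuples
    (from_url, to_url, count) from a list of session paths.

    Only observed transitions are stored, so the representation stays
    small even when the site has many pages."""
    counts = {}
    for session in sessions:
        # Each consecutive pair in a path is one matrix entry.
        for src, dst in zip(session, session[1:]):
            counts[(src, dst)] = counts.get((src, dst), 0) + 1
    return [(src, dst, n) for (src, dst), n in sorted(counts.items())]
```

Sub-path mining then amounts to walking this triple list: high-count chains of transitions correspond to the user-preferred sub-paths the abstract describes.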
de Jesus, J.; Walker, P.; Grant, M.
Pollock, Jeffrey T
Semantic Web technology is already changing how we interact with data on the Web. By connecting random information on the Internet in new ways, Web 3.0, as it is sometimes called, represents an exciting online evolution. Whether you're a consumer doing research online, a business owner who wants to offer your customers the most useful Web site, or an IT manager eager to understand Semantic Web solutions, Semantic Web For Dummies is the place to start! It will help you: know how the typical Internet user will recognize the effects of the Semantic Web; explore all the benefits the data Web offers t
Frobert, L.; Kamb, L.; Trani, L.; Spinuso, A.; Bossu, R.; Van Eck, T.
A Web-based environment has been designed to execute C programs without explicitly installing any compilers on the machine, thus addressing the concerns of portability and accessibility. The environment runs on a Linux server, uses password authentication, and provides each user with separate project directories to store all of their programs; these can be accessed and modified by the respective user only. The entered code can be compiled and executed easily without using licensed software. This saves installation time as well as memory on the client machine. The configuration of the client machine need not be an issue, as the system is web based and thus platform independent.
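Server-side, such an environment might compile the submitted source roughly as follows (a sketch assuming gcc is available on the Linux server; the per-user project directories are simplified here to a temporary directory):

```python
import os
import subprocess
import tempfile

def gcc_command(src_path, exe_path):
    # Compiler invocation; -Wall surfaces warnings for display in the browser.
    return ["gcc", "-Wall", src_path, "-o", exe_path]

def compile_source(source_text, timeout=10):
    """Write the submitted C code to a working directory and compile it.

    Returns (success, compiler_messages, path_to_executable)."""
    workdir = tempfile.mkdtemp(prefix="webc_")
    src = os.path.join(workdir, "main.c")
    exe = os.path.join(workdir, "a.out")
    with open(src, "w") as f:
        f.write(source_text)
    result = subprocess.run(gcc_command(src, exe),
                            capture_output=True, text=True, timeout=timeout)
    return result.returncode == 0, result.stderr, exe
```

A real deployment would additionally sandbox execution of the compiled binary and enforce CPU and memory limits, since the server runs untrusted user code.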
Dragut, Eduard C; Yu, Clement T
There are millions of searchable data sources on the Web and to a large extent their contents can only be reached through their own query interfaces. There is an enormous interest in making the data in these sources easily accessible. There are primarily two general approaches to achieve this objective. The first is to surface the contents of these sources from the deep Web and add the contents to the index of regular search engines. The second is to integrate the searching capabilities of these sources and support integrated access to them. In this book, we introduce the state-of-the-art tech
Folz, Pauline; Montoya, Gabriela; Skaf-Molli, Hala; Molli, Pascal; Vidal, Maria-Esther
Web integration systems are able to provide transparent and uniform access to heterogeneous Web data sources by integrating views of Linked Data, Web Service results, or data extracted from the Deep Web. However, given the potentially large number of views, query engines of Web integration systems have to implement execution techniques able to scale up to real-world scenarios and efficiently execute queries. We tackle the problem of SPARQL query processing against RDF ...
Isak Shabani; Besmir Sejdiu; Fatushe Jasharaj
Many web applications over the last decade have been built using Web services based on the Simple Object Access Protocol (SOAP), because these Web services are the best choice for web applications and mobile applications in general. Research results show how architectures and systems primarily designed for desktop use, such as Web service calls with SOAP messaging, can now be used on mobile platforms such as Android. The purpose of this paper is the study of Android...
Chen, Hsinchun; Chau, Michael
Presents an overview of machine learning research and reviews methods used for evaluating machine learning systems. Ways that machine-learning algorithms were used in traditional information retrieval systems in the "pre-Web" era are described, and the field of Web mining and how machine learning has been used in different Web mining applications…
Cosmogenic nuclide techniques are increasingly being utilized in geoscience research. For this it is critical to establish an effective, easily accessible and well-defined tool for cosmogenic nuclide computations. We have been developing a web-based tool (WebCN) to calculate surface exposure ages and erosion rates based on nuclide concentrations measured by accelerator mass spectrometry. WebCN for 10Be and 26Al has been finished and published at http://www.physics.purdue.edu/primelab/for_users/rockage.html; WebCN for 36Cl is under construction. WebCN is designed as a three-tier client/server model and uses the open-source PostgreSQL for database management and PHP for the interface design and calculations. On the client side, an internet browser and Microsoft Access are used as application interfaces to access the system, with Open Database Connectivity linking PostgreSQL and Microsoft Access. WebCN accounts for both spatial and temporal distributions of the cosmic ray flux to calculate the production rates of in situ-produced cosmogenic nuclides at the Earth's surface.
Arora, Monika; Kanjilal, Uma; Varshney, Dinesh
The major challenge in information access is the richness of the data available for retrieval, which has driven the development of principled approaches and strategies for searching. Search has become the leading paradigm for finding information on the World Wide Web. In building a successful web retrieval search engine model, a number of challenges arise at different levels, where techniques such as Usenet analysis and support vector machines are employed to significant effect. The present investigation explores a number of identified problems, their levels, and how they relate to finding information on the web. This paper attempts to examine the issues by applying different methods such as web graph analysis, the retrieval and analysis of newsgroup postings, and statistical methods for inferring meaning in text. We also discuss how one can gain control over the vast amounts of data on the web by addressing these problems in innovative ways that can greatly improve on the standard. The proposed model thus assists users in finding the data they need. The developed information retrieval model provides access to information available in various modes and media formats, and its content facilitates users in retrieving relevant and comprehensive information efficiently and effectively as per their requirements. This paper also discusses the parameters and factors responsible for efficient searching; these parameters can be distinguished as important or less important based on the available inputs, and the important ones can be attended to in future extensions or developments of search engines.