Moreno, Lourdes; Martínez, Paloma; Contreras, Jesús; Benjamins, Richard
Companies and the public sector increasingly stress the importance of Web applications reaching all kinds of potential users and customers. The Web standardization initiative, WAI, and the Universal Design framework establish useful rules for building applications accessible to disabled and non-disabled users alike. The proliferation of Semantic Web technologies and formal ontologies offers a technological opportunity for establishing automatic and advanced methods for ...
Dean, Andrew S.
Determining the best method for granting World Wide Web (Web) users access to remote relational databases is difficult. Choosing the best supporting Web/database link method for implementation requires an in-depth understanding of the methods available and the relationship between the link designer's goals and the underlying issues of Performance and Functionality, Cost, Development Time and Ease, Serviceability, Flexibility and Openness, Security, State and Session. This thesis examined exis...
Israa Wahbi Kamal
University web portals are considered one of the main access gateways for universities. They typically serve a large audience: current students, employees, and faculty members, as well as past and prospective ones. Web accessibility is the concept of providing web content with universal access for different machines and for people of different ages, skills, education levels, and abilities. Several web accessibility metrics have been proposed in previous years to measure web accessibility. We integrated and extracted common web accessibility metrics from the different accessibility tools used in this study, which evaluates web accessibility metrics for 36 Jordanian university and educational institute websites. We analyze the level of web accessibility using a number of available evaluation tools against the standard guidelines for web accessibility. Receiver operating characteristic (ROC) quality measurements are used to evaluate the effectiveness of the integrated accessibility metrics.
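As a concrete illustration of the ROC evaluation step, comparing one metric's flags against ground truth reduces to a single (TPR, FPR) point. The sketch below is our own minimal stand-in; the paper's actual metrics, tools, and labeling are not shown:

```python
def roc_point(predicted, actual):
    """predicted/actual: parallel lists of booleans, e.g. 'page flagged
    inaccessible by the metric' vs. 'page truly inaccessible'.
    Returns (true positive rate, false positive rate), one point on the
    ROC curve used to compare accessibility metrics."""
    tp = sum(p and a for p, a in zip(predicted, actual))
    fp = sum(p and not a for p, a in zip(predicted, actual))
    fn = sum(a and not p for p, a in zip(predicted, actual))
    tn = sum(not p and not a for p, a in zip(predicted, actual))
    tpr = tp / (tp + fn) if tp + fn else 0.0
    fpr = fp / (fp + tn) if fp + tn else 0.0
    return tpr, fpr
```

Sweeping a metric's decision threshold and plotting these points gives the full ROC curve.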
Olive, Geoffrey C.
Improving Web accessibility for disabled users visiting a university's Web site is explored following the World Wide Web Consortium (W3C) guidelines and Section 508 of the Rehabilitation Act rules for Web page designers to ensure accessibility. The literature supports the view that accessibility is sorely lacking, not only in the USA, but also…
Bayir, Murat Ali; Toroslu, Ismail Hakki; Cosar, Ahmet; Fidan, Guven
Web usage mining is a type of web mining, which exploits data mining techniques to discover valuable information from navigation behavior of World Wide Web users. As in classical data mining, data preparation and pattern discovery are the main issues in web usage mining. The first phase of web usage mining is the data processing phase, which includes the session reconstruction operation from server logs. Session reconstruction success directly affects the quality of the frequent patterns disc...
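The session reconstruction step described above is commonly implemented with a time-gap heuristic. A minimal sketch, assuming a simple (user, timestamp, URL) log representation and the conventional 30-minute timeout (both are our assumptions, not details from this abstract):

```python
from datetime import datetime, timedelta

# A new session starts when the gap between two consecutive requests
# from the same user exceeds a timeout; 30 minutes is a common heuristic.
SESSION_TIMEOUT = timedelta(minutes=30)

def reconstruct_sessions(log_entries):
    """log_entries: iterable of (user_id, timestamp, url) tuples, any order.
    Returns a list of (user_id, [urls]) sessions in chronological order."""
    sessions = []
    last_seen = {}  # user -> (last_timestamp, current_session_pages)
    for user, ts, url in sorted(log_entries, key=lambda e: (e[0], e[1])):
        if user in last_seen and ts - last_seen[user][0] <= SESSION_TIMEOUT:
            _, pages = last_seen[user]
            pages.append(url)          # continue the open session
            last_seen[user] = (ts, pages)
        else:
            pages = [url]              # gap too large: open a new session
            last_seen[user] = (ts, pages)
            sessions.append((user, pages))
    return sessions
```

Reactive heuristics like this one are only an approximation; the quality of the reconstructed sessions directly affects the patterns discovered downstream, as the abstract notes.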
Web services are used in the Experimental Physics and Industrial Control System (EPICS). Combined with the EPICS Channel Access protocol, the high usability, platform independence, and language independence of Web services can be used to design a fully transparent and uniform software interface layer, which helps us implement channel data acquisition, modification, and monitoring functions. This software interface layer, being cross-platform and cross-language, has good interoperability and reusability. (authors)
The purpose of this web-accessible database is for the public to be able to view instantaneous readings from a solar-powered air monitoring station located in a public location (prototype pilot test is outside of a library in Durham County, NC). The data are wirelessly transmitte...
Foley, Alan; Regan, Bob
Discusses Web design for people with disabilities and outlines a process-based approach to accessibility policy implementation. Topics include legal mandates; determining which standards apply to a given organization; validation, or evaluation of the site; site architecture; navigation; and organizational needs. (Author/LRW)
Leon, John; Cutlip, William; Hametz, Mark
The Access To Space (ATS) Group at NASA's Goddard Space Flight Center (GSFC) supports the science and technology community at GSFC by facilitating frequent and affordable opportunities for access to space. Through partnerships established with access mode suppliers, the ATS Group has developed an interactive Mission Design web site. The ATS web site provides both the information and the tools necessary to assist mission planners in selecting and planning their ride to space. This includes the evaluation of single payloads vs. ride-sharing opportunities to reduce the cost of access to space. Features of this site include the following: (1) Mission Database. Our mission database contains a listing of missions ranging from proposed missions to manifested. Missions can be entered by our user community through data input tools. Data is then accessed by users through various search engines: orbit parameters, ride-share opportunities, spacecraft parameters, other mission notes, launch vehicle, and contact information. (2) Launch Vehicle Toolboxes. The launch vehicle toolboxes provide the user a full range of information on vehicle classes and individual configurations. Topics include: general information, environments, performance, payload interface, available volume, and launch sites.
Bradbard, David A.; Peters, Cara
Web accessibility is the practice of making Web sites accessible to all, particularly those with disabilities. As the Internet becomes a central part of post-secondary instruction, it is imperative that instructional Web sites be designed for accessibility to meet the needs of disabled students. The purpose of this article is to introduce Web…
Web accessibility makes it possible for disabled users to access information provided on the web on an equal footing with non-disabled users. Enabling this requires constructing web pages that abide by accessibility guidelines. Text on a web site can be output as sound by a screen reader, so that visually impaired users can grasp its meaning; a screen reader, however, cannot interpret an image. This paper studies a method for explaining images included in web pages using QR-Codes. Web pages produced with the method described here will help visually impaired users understand the content of the page.
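A first step in any such workflow is locating images a screen reader cannot describe. A minimal sketch using Python's standard html.parser; the QR-Code generation itself is omitted, and the workflow is our illustration rather than the paper's exact method:

```python
from html.parser import HTMLParser

class ImageAltCollector(HTMLParser):
    """Collect <img> elements and their alt text. Images without alt text
    are the ones a screen reader cannot describe, and thus candidates for
    an attached QR-Code description in the paper's scheme."""
    def __init__(self):
        super().__init__()
        self.images = []  # list of (src, alt) pairs

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            a = dict(attrs)
            self.images.append((a.get("src"), a.get("alt")))

def images_missing_alt(html):
    """Return src attributes of images with missing or empty alt text."""
    parser = ImageAltCollector()
    parser.feed(html)
    return [src for src, alt in parser.images if not alt]
```

The descriptions collected for these images could then be encoded into QR-Codes (e.g., with a library such as qrcode) and placed alongside the images.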
Suneet Kumar; Anuj Kumar Yadav; Rakesh Bharti; Rani Choudhary
Focused web crawlers have recently emerged as an alternative to the well-established web search engines. While the well-known focused crawlers retrieve relevant web-pages, there are various applications which target whole websites instead of single web-pages. For example, companies are represented by websites, not by individual web-pages. To answer queries targeted at websites, web directories are an established solution. In this paper, we introduce a novel focused website crawler t...
Bray, Marty; Pugalee, David; Flowers, Claudia P.; Algozzine, Bob
Many middle schools use the Web to disseminate and gather information. Online barriers often limit the accessibility of the Web for students with disabilities. The purpose of this study was to evaluate the accessibility of home pages of a sample of middle schools. The authors located 165 Web sites using a popular online directory and evaluated the…
Bradbard, David A.; Peters, Cara; Caneva, Yoana
The Web has become an integral part of postsecondary education within the United States. There are specific laws that legally mandate postsecondary institutions to have Web sites that are accessible for students with disabilities (e.g., the Americans with Disabilities Act (ADA)). Web accessibility policies are a way for universities to provide a…
Snyder, Herbert; Rosenbaum, Howard
Examines the use of Robot Exclusion Protocol (REP) to restrict the access of search engine robots to 10 major United States university Web sites. An analysis of Web site searching and interviews with Web server administrators shows that the decision to use this procedure is largely technical and is typically made by the Web server administrator.…
Centelles Velilla, Miquel; Ribera, Mireia; Rodríguez Santiago, Inmaculada
This paper presents a research concerning the conversion of non-accessible web pages containing mathematical formulae into accessible versions through an OCR (Optical Character Recognition) tool. The objective of this research is twofold. First, to establish criteria for evaluating the potential accessibility of mathematical web sites, i.e. the feasibility of converting non-accessible (non-MathML) math sites into accessible ones (Math-ML). Second, to propose a data model and a mechanism to pu...
Goodwin, Morten; Susar, Deniz; Nietzio, Annika
Equal access to public information and services for all is an essential part of the United Nations (UN) Declaration of Human Rights. Today, the Web plays an important role in providing information and services to citizens. Unfortunately, many government Web sites are poorly designed and have accessibility barriers that prevent people with disabilities from using them. This article combines current Web accessibility benchmarking methodologies with a sound strategy for comparing Web accessibility among countries and continents. Furthermore, the article presents the first global analysis of the Web accessibility of 192 United Nations Member States, made publicly available. The article also identifies common properties of Member States that have accessible and inaccessible Web sites and shows that implementing antidisability discrimination laws is highly beneficial for the accessibility of Web sites, while...
The success of web-based applications depends on how well they are perceived by end-users. Various web accessibility guidelines have been promoted to help improve access to, and understanding of, web page content. Designing for the total User Experience (UX) is an evolving discipline of the World Wide Web mainstream that focuses on how end users work to achieve their goals. To satisfy end-users, web-based applications must fulfill some common needs such as clarity, accessibility, and availability. The aim of this study is to evaluate how the User Experience characteristics of web-based applications relate to web accessibility guidelines (WCAG 2.0, ISO 9241-151, and Section 508).
Xue, Zhiyun; Antani, Sameer; Long, L. Rodney; Thoma, George R.
Uterine cervix image analysis is of great importance to the study of uterine cervix cancer, which is among the leading cancers affecting women worldwide. In this paper, we describe our proof-of-concept, Web-accessible system for automated segmentation of significant tissue regions in uterine cervix images, which also demonstrates our research efforts toward promoting collaboration between engineers and physicians for medical image analysis projects. Our design and implementation unifies the merits of two commonly used languages, MATLAB and Java. It circumvents the heavy workload of recoding the sophisticated segmentation algorithms originally developed in MATLAB into Java while allowing remote users who are not experienced programmers and algorithms developers to apply those processing methods to their own cervicographic images and evaluate the algorithms. Several other practical issues of the systems are also discussed, such as the compression of images and the format of the segmentation results.
Dragut, Eduard Constantin
An increasing number of Web sites expose their content via query interfaces, many of them offering the same type of products/services (e.g., flight tickets, car rental/purchasing). They constitute the so-called "Deep Web". Accessing the content on the Deep Web has been a long-standing challenge for the database community. For a user interested in…
Ahmi, Aidi; Mohamad, Rosli
Despite the fact that Malaysian public institutions have progressed considerably in website and portal usage, web accessibility has been reported as one of the issues that deserves special attention. Consistent with the government's moves to promote effective use of the web and portals, it is essential for government institutions to ensure compliance with established standards and guidelines on web accessibility. This paper evaluates the accessibility of 25 Malaysian ministry websites using automated tools, i.e., WAVE and AChecker. Both tools are designed to objectively evaluate web accessibility in conformance with the Web Content Accessibility Guidelines 2.0 (WCAG 2.0) and the United States Rehabilitation Act of 1973 (Section 508). The findings report somewhat low compliance with web accessibility standards amongst the ministries. Further enhancement is needed for input elements such as labels and checkboxes, which should be associated with text, as well as for image-related elements. These findings could be used as a mechanism for webmasters to locate and rectify web accessibility errors and to ensure equal access to web information and services for all citizens.
Abstract Background The SEED integrates many publicly available genome sequences into a single resource. The database contains accurate and up-to-date annotations based on the subsystems concept, which leverages clustering between genomes and other clues to accurately and efficiently annotate microbial genomes. The backend is used as the foundation for many genome annotation tools, such as the Rapid Annotation using Subsystems Technology (RAST) server for whole genome annotation, the metagenomics RAST server for random community genome annotations, and the annotation clearinghouse for exchanging annotations from different resources. In addition to a web user interface, the SEED also provides a Web services-based API for programmatic access to the data in the SEED, allowing the development of third-party tools and mash-ups. Results The currently exposed Web services encompass over forty different methods for accessing data related to microbial genome annotations. The Web services provide comprehensive access to the database back end, allowing any programmer access to the most consistent and accurate genome annotations available. The Web services are deployed using a platform-independent, service-oriented approach that allows the user to choose the most suitable programming platform for their application. Example code demonstrates that the Web services can be used to access the SEED using common bioinformatics programming languages such as Perl, Python, and Java. Conclusions We present a novel approach to accessing the SEED database. Using Web services, a robust API for access to genomics data is provided without requiring large-volume downloads all at once. The API ensures timely access to the most current datasets available, including new genomes as soon as they come online.
Semantic Web approaches aim to achieve interoperability and communication among technologies and organizations. Nevertheless, it is sometimes forgotten that the Web must be useful to every user, so tools and techniques that make the Semantic Web accessible are also necessary. Accessibility and usability are two frequently conflated concepts widely used in web application development, but their meanings differ: usability concerns making use easy, whereas accessibility concerns the possibility of access at all. For the former there are many well-proven approaches in real cases; the accessibility field, however, requires deeper research to make access feasible for disabled people, and also for inexperienced non-disabled people, given the cost of automating and maintaining accessible applications. In this paper, we propose an architecture to achieve accessibility in web environments conforming to the WAI accessibility standard and the Universal Design paradigm. This architecture aims to control accessibility throughout the web application development life-cycle, following a methodology that starts from a semantic conceptual model and leans on description languages and controlled vocabularies.
Why do librarians and library staff other than Web librarians and developers need to know about accessibility? Web services staff do not--or should not--operate in isolation from the rest of the library staff. It is important to consider what areas of online accessibility are applicable to other areas of library work and to colleagues' regular job…
Lee, MW; Chen, SY; Liu, X.
Web-based technology has already been adopted as a tool to support teaching and learning in higher education. One criterion affecting the usability of such a technology is the design of web-based interface (WBI) within web-based learning programs. How different users access the WBIs has been investigated by several studies, which mainly analyze the collected data using statistical methods. In this paper, we propose to analyze users’ learning behavior using Data Mining (DM) techniques. Finding...
Offers an introduction to web accessibility and usability for information professionals, offering advice on the concerns relevant to library and information organizations. This book can be used as a resource for developing staff training and awareness activities. It will also be of value to website managers involved in web design and development.
Zeng, Xiaoming; Parmanto, Bambang
Background The World Wide Web (WWW) has become an increasingly essential resource for health information consumers. The ability to obtain accurate medical information online quickly, conveniently and privately provides health consumers with the opportunity to make informed decisions and participate actively in their personal care. Little is known, however, about whether the content of this online health information is equally accessible to people with disabilities who must rely on special dev...
Torres-Salinas, Daniel; Orduña-Malea, Enrique
In January 2014, Web of Knowledge (WoK) introduced a number of important changes in version 5.13.1, including a name change (the platform was renamed "Web of Science" and the database "Web of Science Core Collection") and new tools to refine search results, among which is the option to select articles published in open access journals. This function checks global scientific output over the last decade (2004-2013) to determine the percentage of total publishing that is open access, both...
Gomathi, C.; Moorthi, M.; Duraiswamy, K.
Web Access Pattern (WAP), which is the sequence of accesses pursued by users frequently, is a kind of interesting and useful knowledge in practice. Sequential Pattern mining is the process of applying data mining techniques to a sequential database for the purposes of discovering the correlation relationships that exist among an ordered list of…
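The core of sequential pattern mining over access logs is support counting: a pattern is frequent if enough sessions contain it. The brute-force sketch below counts contiguous subsequences and is only a toy stand-in; real WAP mining uses a compact WAP-tree to avoid this enumeration:

```python
from collections import Counter

def frequent_access_patterns(sessions, min_support):
    """sessions: list of page-ID sequences, one per user session.
    Returns patterns (contiguous subsequences of length >= 2) whose
    session support meets min_support."""
    counts = Counter()
    for seq in sessions:
        seen = set()
        for i in range(len(seq)):
            for j in range(i + 2, len(seq) + 1):
                seen.add(tuple(seq[i:j]))
        for pattern in seen:      # count each pattern once per session
            counts[pattern] += 1
    return {p: c for p, c in counts.items() if c >= min_support}
```

With three sessions and min_support=2, the result keeps only patterns shared by at least two sessions.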
Carminati, Barbara; Ferrari, Elena; Perego, Andrea
The original purpose of Web metadata was to protect end-users from possibly harmful content and to simplify search and retrieval. However, metadata can also be exploited in more advanced applications, such as Web access personalization based on end-users' preferences. Achieving this requires addressing several issues, one of the most relevant being how to assess the trustworthiness of Web metadata. In this paper, we discuss how this issue can be addressed through the use of collaborative and Semantic Web technologies. The system we propose is based on a Web-based social network, in which members are able not only to specify labels, but also to rate existing labels. Both labels and ratings are then used to assess the trustworthiness of resources' descriptions and to enforce Web access personalization.
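One simple way to combine labels and ratings into a trust value is a rater-trust-weighted average. The sketch below is our own illustrative aggregation, not the paper's actual scoring model:

```python
def label_trust(ratings):
    """ratings: list of (rater_trust, score) pairs, where rater_trust >= 0
    is the weight given to a member's rating and score in [0, 1] is that
    member's assessment of a label. Returns the trust-weighted average,
    a minimal stand-in for collaborative trustworthiness assessment."""
    total_weight = sum(w for w, _ in ratings)
    if total_weight == 0:
        return 0.0
    return sum(w * s for w, s in ratings) / total_weight
```

A more trusted rater thus pulls the label's trust value toward their own score.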
Robbins, Kay A.
Abstract Background Many bioinformatics algorithms and data sets are deployed using web services so that the results can be explored via the Internet and easily integrated into other tools and services. These services often include data from other sites that is accessed either dynamically or through file downloads. Developers of these services face several problems because of the dynamic nature of the information from the upstream services. Many publicly available repositories of bioinformatics data frequently update their information, and when such an update occurs, the developers of the downstream service may also need to update. For file downloads, this process is typically performed manually, followed by a web service restart. Requests for information obtained by dynamic access to upstream sources are sometimes subject to rate restrictions. Findings SideCache provides a framework for deploying web services that integrate information extracted from other databases and from web sources that are periodically updated. This situation occurs frequently in biotechnology, where new information is continuously generated and the latest information is important. SideCache provides several types of services, including proxy access and rate control, local caching, and automatic web service updating. Conclusions We have used the SideCache framework to automate the deployment and updating of a number of bioinformatics web services and tools that extract information from remote primary sources such as NCBI, NCIBI, and Ensembl. The SideCache framework has also been used to share research results through the use of a SideCache-derived web service.
Valentine, D. W.; Jennings, B.; Zaslavsky, I.; Maidment, D. R.
The CUAHSI hydrologic information system (HIS) is designed to be a live, multiscale web portal system for accessing, querying, visualizing, and publishing distributed hydrologic observation data and models for any location or region in the United States. The HIS design follows the principles of open service-oriented architecture, i.e., system components are represented as web services with well-defined standard service APIs. WaterOneFlow web services are the main component of the design. The currently available services have been completely re-written compared to the previous version, and provide programmatic access to USGS NWIS (stream flow, groundwater, and water quality repositories), DAYMET daily observations, NASA MODIS, and Unidata NAM streams, with several additional web service wrappers being added (EPA STORET, NCDC, and others). Different repositories of hydrologic data use different vocabularies and support different types of query access. Resolving semantic and structural heterogeneities across different hydrologic observation archives, and distilling a generic set of service signatures, is one of the main scalability challenges in this project and a requirement in our web service design. To accomplish uniformity of the web services API, data repositories are modeled following the CUAHSI Observation Data Model. The web service responses are document-based and use an XML schema to express the semantics in a standard format. Access to station metadata is provided via the web service methods GetSites, GetSiteInfo, and GetVariableInfo. These methods form the foundation of the CUAHSI HIS discovery interface and may execute over locally stored metadata or request the information from remote repositories directly. Observation values are retrieved via a generic GetValues method, which is executed against national data repositories. The service is implemented in ASP.NET, and other providers are implementing WaterOneFlow services in Java. Reference implementation of
Purpose -- This paper investigates the impact and techniques for mitigating the effects of web robots on usage statistics collected by Open Access institutional repositories (IRs). Design/methodology/approach -- A review of the literature provides a comprehensive list of web robot detection techniques. Reviews of system documentation and open source code are carried out along with personal interviews to provide a comparison of the robot detection techniques used in the major IR platforms. An ...
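Two of the most common robot detection techniques in this literature are user-agent keyword matching and flagging clients that fetch /robots.txt. A minimal sketch of both heuristics (the keyword list is illustrative; production IR platforms combine many more signals such as request rate and IP blocklists):

```python
import re

# Keywords commonly found in crawler user-agent strings (illustrative list).
KNOWN_BOT_PATTERN = re.compile(r"bot|crawler|spider|slurp", re.IGNORECASE)

def looks_like_robot(user_agent, requested_robots_txt=False):
    """Return True if either heuristic fires: the client announced itself
    via a bot-like user agent, or it requested /robots.txt, which human
    visitors essentially never do."""
    if requested_robots_txt:
        return True
    return bool(KNOWN_BOT_PATTERN.search(user_agent or ""))
```

Download counts that exclude hits flagged this way give a less inflated picture of repository usage.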
Government's use of the Web in the UK is prolific, and a wide range of services are now available through this channel. The government set out to address the problem that links from Hansard (the transcripts of Parliamentary debates) were not maintained over time, and that there was therefore a need for long-term storage and stewardship of information, including maintaining access. Further investigation revealed that linking was key, not only to maintaining access to information, but also to the discovery of information. This resulted in a project that affects the entire government Web estate, with a solution leveraging the basic building blocks of the Internet (DNS) and the Web (HTTP and URIs) in a pragmatic way, to ensure that an infrastructure is in place to provide access to important information both now and in the future.
Tso, Kam S.; Pajevski, Michael J.
Cybersecurity has become a great concern as threats of service interruption, unauthorized access, stealing and altering of information, and spreading of viruses have become more prevalent and serious. Application-layer access control is a critical component of an overall security solution that also includes encryption, firewalls, virtual private networks, antivirus software, and intrusion detection. An access control solution, based on an open-source access manager augmented with custom software components, was developed to provide protection to both Web-based and Java-based client and server applications. The DISA Security Service (DISA-SS) provides common access control capabilities for AMMOS software applications through a set of application programming interfaces (APIs) and network-accessible security services for authentication, single sign-on, authorization checking, and authorization policy management. The OpenAM access management technology, designed for Web applications, can be extended to meet the needs of Java thick clients and standalone servers that are commonly used in the JPL AMMOS environment. The DISA-SS reusable components have greatly reduced the effort required for each AMMOS subsystem to develop its own access control strategy. The novelty of this work is that it leverages an open-source access management product designed for Web-based applications to provide access control for Java thick clients and Java standalone servers. Thick clients and standalone servers are still commonly used in business and government, especially for applications that require rich graphical user interfaces and high-performance visualization that cannot be met by thin clients running in Web browsers.
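The authorization-checking piece of such an API boils down to intercepting a call and testing the caller's entitlements before the protected operation runs. A minimal role-based sketch; all names here are illustrative and are not the DISA-SS or OpenAM APIs:

```python
from functools import wraps

class AccessDenied(Exception):
    """Raised when the caller lacks the required role."""

def requires_role(role):
    """Decorator enforcing an application-layer role check before the
    protected function executes, in the spirit of an authorization API."""
    def decorator(fn):
        @wraps(fn)
        def wrapper(user, *args, **kwargs):
            if role not in user.get("roles", ()):
                raise AccessDenied(f"{user.get('name')} lacks role {role!r}")
            return fn(user, *args, **kwargs)
        return wrapper
    return decorator

@requires_role("operator")
def send_command(user, command):
    """A protected operation: only users with the 'operator' role reach it."""
    return f"sent {command}"
```

Centralizing the check in one decorator is what lets many subsystems share a single access control strategy instead of each implementing its own.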
We present KnoWDiaL, an approach for Learning and using task-relevant Knowledge from human-robot Dialog and access to the Web. KnoWDiaL assumes an autonomous agent that performs tasks as requested by humans through speech. The agent needs to "understand" the request (i.e., to fully ground the task until it can proceed to plan for and execute it). KnoWDiaL contributes such understanding by using and updating a Knowledge Base, by dialoguing with the user, and by accessing the web. We believe that KnoWDiaL, as we present it, can be applied to general autonomous agents. However, we focus on our work with our autonomous collaborative robot, CoBot, which executes service tasks in a building, moving around and transporting objects between locations. Hence, the knowledge acquired and accessed consists of groundings of language to robot actions, building locations, persons, and objects. KnoWDiaL handles the interpretation of voice commands, is robust to speech recognition errors, and is able to learn commands involving referring expressions in an open domain (i.e., without requiring a lexicon). We present in detail the multiple components of KnoWDiaL, namely a frame-semantic parser, a probabilistic grounding model, a web-based predicate evaluator, a dialog manager, and the weighted predicate-based Knowledge Base. We illustrate the knowledge access and updates from dialog and Web access through detailed and complete examples. We further evaluate the correctness of the predicate instances learned into the Knowledge Base, and show the increase in dialog efficiency as a function of the number of interactions. We have extensively and successfully used KnoWDiaL in CoBot dialoguing and accessing the Web, and extract a few corresponding example sequences from captured videos.
SanthanaVannan, Suresh K [ORNL; Cook, Robert B [ORNL; Pan, Jerry Yun [ORNL; Wilson, Bruce E [ORNL
Remote sensing data from satellites have provided valuable information on the state of the earth for several decades. Since March 2000, the Moderate Resolution Imaging Spectroradiometer (MODIS) sensors on board NASA's Terra and Aqua satellites have been providing estimates of several land parameters useful in understanding earth system processes at global, continental, and regional scales. However, the HDF-EOS file format, the specialized software needed to process HDF-EOS files, the data volume, and the high spatial and temporal resolution of MODIS data make it difficult for users who want to extract small but valuable amounts of information from the MODIS record. To overcome this usability issue, the NASA-funded Distributed Active Archive Center (DAAC) for Biogeochemical Dynamics at Oak Ridge National Laboratory (ORNL) developed a Web service that provides subsets of MODIS land products using the Simple Object Access Protocol (SOAP). The ORNL DAAC MODIS subsetting Web service is a unique way of serving satellite data that exploits a well-established and popular Internet protocol to give users access to massive amounts of remote sensing data. The Web service provides MODIS land product subsets up to 201 x 201 km in a non-proprietary, comma-delimited text file format. Users can programmatically query the Web service to extract MODIS land parameters for real-time data integration into models and decision support tools, or connect it to workflow software. Information regarding the MODIS SOAP subsetting Web service is available on the World Wide Web (WWW) at http://daac.ornl.gov/modiswebservice.
Bemmel, van J.; Wegdam, M.; Lagerberg, K.
Web services fail to deliver on the promise of ubiquitous deployment and seamless interoperability due to the lack of a uniform, standards-based approach to all aspects of security. In particular, the enforcement of access policies in a service oriented architecture is not addressed adequately. We p
Wheaton, Joseph; Bertini, Patrizia
Accessibility is hardly a new problem and certainly did not originate with the Web. Lack of access to buildings long preceded the call for accessible Web content. Although it is unlikely that rehabilitation educators look at Web page accessibility with indifference, many may also find it difficult to implement. The authors posit three reasons why…
DuPlain, Ron; Benson, John; Sessoms, Eric
Web usage mining is the analysis of Web log files to discover users' access patterns on Web pages. Web server access logs record very significant information about a website: links between pages, user profiles, and demographic properties can all be obtained from them. In this study, the user access logs of the Web server of Firat University were analyzed with Web mining software using the Web usage mining method. At the end of the analysis,...
A Turing machine has an important role in education in the field of computer science, as it is a milestone in courses related to automata theory, theory of computation and computer architecture. Its value is also recognized in the Computing Curricula proposed by the Association for Computing Machinery (ACM) and the IEEE Computer Society. In this paper we present a physical implementation of the Turing machine accessed through the Web. To enable remote access to the Turing machine, an implementation of the client-server architecture is built. The web interface is described in detail, and illustrations of remote programming, initialization and the computation of the Turing machine are given. Advantages of this approach and the expected benefits of using a remotely accessible physical implementation of the Turing machine as an educational tool in the teaching process are discussed.
Gupta, N.; Gupta, V.; Okaya, D.; Kamb, L.; Maechling, P.
Web services offer scientific communities a new paradigm for sharing research codes and communicating results. While there are formal technical definitions of what constitutes a web service, for a user community such as the Southern California Earthquake Center (SCEC), we may conceptually consider a web service to be functionality provided on-demand by an application which is run on a remote computer located elsewhere on the Internet. The value of a web service is that it can (1) run a scientific code without the user needing to install and learn the intricacies of running the code; (2) provide the technical framework which allows a user's computer to talk to the remote computer which performs the service; (3) provide the computational resources to run the code; and (4) bundle several analysis steps and provide the end results in digital or (post-processed) graphical form. Within an NSF-sponsored ITR project coordinated by SCEC, we are constructing web services using architectural protocols and programming languages (e.g., Java). However, because the SCEC community has a rich pool of scientific research software (written in traditional languages such as C and FORTRAN), we also emphasize making existing scientific codes available by constructing web service frameworks which wrap around and directly run these codes. In doing so we attempt to broaden community usage of these codes. Web service wrapping of a scientific code can be done using a "web servlet" construction or by using a SOAP/WSDL-based framework. This latter approach is widely adopted in IT circles although it is subject to rapid evolution. Our wrapping framework attempts to "honor" the original codes with as little modification as is possible. For versatility we identify three methods of user access: (A) a web-based GUI (written in HTML and/or Java applets); (B) a Linux/OSX/UNIX command line "initiator" utility (shell-scriptable); and (C) direct access from within any Java application (and with the
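The "wrapping" idea in the abstract — exposing an existing command-line code through a service layer while honoring the original code — can be sketched in a few lines. Here the legacy code is simulated by a tiny inline program; in the SCEC setting it would be a compiled C or FORTRAN executable, and the wrapper would sit behind a servlet or SOAP/WSDL endpoint.

```python
import subprocess
import sys

def run_wrapped_code(args):
    """Run an existing command-line code unchanged and capture its output.

    A web service wrapper would expose this call via SOAP/WSDL or a
    servlet; the 'legacy code' below is a stand-in for a real executable.
    """
    legacy = [sys.executable, "-c",
              "import sys; print(sum(float(a) for a in sys.argv[1:]))"]
    result = subprocess.run(legacy + [str(a) for a in args],
                            capture_output=True, text=True, check=True)
    return result.stdout.strip()

output = run_wrapped_code([1.5, 2.5])
```

The design choice mirrors the abstract's point: the scientific code itself is not modified; only its invocation and I/O are mediated by the service framework.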
K.V.S. Jaharsh Samayan
The motive of this study is to suggest a protocol which can be implemented to observe the activities of any node within a network whose contribution to the organization needs to be measured. Many associates working in an organization misuse the resources allocated to them and waste their working time on unproductive work that is of no use to the organization. To tackle this problem, a dynamic approach to monitoring the web pages accessed by a user via cookies gives a very efficient way of tracking all the activities of the individual: cookies are generated based on recent web activity, and statistical information on how each user's web activity over the time period has been spent is displayed for every IP address in the network. In an ever-challenging, dynamic world, monitoring the productivity of the associates in an organization plays a most important role.
Robbins Kay A; Burkhardt Cory; Doderer Mark S
Abstract Background Many bioinformatics algorithms and data sets are deployed using web services so that the results can be explored via the Internet and easily integrated into other tools and services. These services often include data from other sites that is accessed either dynamically or through file downloads. Developers of these services face several problems because of the dynamic nature of the information from the upstream services. Many publicly available repositories of bioinformati...
Tso, Kam S.; Pajevski, Michael J.; Johnson, Bryan
Cyber security has gained national and international attention as a result of near-continuous headlines from financial institutions, retail stores, government offices and universities reporting compromised systems and stolen data. Concerns continue to rise as threats of service interruption and the spread of viruses become ever more prevalent and serious. Controlling access to application-layer resources is a critical component in a layered security solution that includes encryption, firewalls, virtual private networks, antivirus, and intrusion detection. In this paper we discuss the development of an application-level access control solution, based on an open-source access manager augmented with custom software components, to provide protection to both Web-based and Java-based client and server applications.
Ms. Seema , Ms. Priyanka Makkar
The World Wide Web is a huge repository of web pages and links. It provides an abundance of information for Internet users. The growth of the web is tremendous, as approximately one million pages are added daily. Users' accesses are recorded in web logs. Because of the tremendous usage of the web, the web log files are growing at a faster rate and their size is becoming huge. Web data mining is the application of data mining techniques to web data. Web Usage Mining applies mining techniques to log data to extract t...
Haibo Shen; Yu Cheng
As mobile web services become more pervasive, applications based on mobile web services will need flexible access control mechanisms. Unlike traditional approaches based on identity or role for access control, access decisions for these applications will depend on the combination of the required attributes of users and the contextual information. This paper proposes a semantic context-based access control model (called SCBAC) to be applied in mobile web services environments by combining ...
Kreutel, Jörn; Gerlach, Andrea; Klekamp, Stefanie; Schulz, Kristin
We describe the ideas and results of an applied research project that aims at leveraging the expressive power of semantic web technologies as a server-side backend for mobile applications that provide access to location and multimedia data and allow for a rich user experience in mobile scenarios, ranging from city and museum guides to multimedia enhancements of any kind of narrative content, including e-book applications. In particular, we will outline a reusable software architecture for both server-side functionality and native mobile platforms that is aimed at significantly decreasing the effort required for developing particular applications of that kind.
Byerley, Suzanne L.; Chambers, Mary Beth
Examined the accessibility of two Web-based abstracting and indexing services by blind users using screen-reading programs based on guidelines from the Rehabilitation Act of 1973 and Web Content Accessibility Guidelines by the WWW Consortium. Suggests Web developers need to conduct usability testing and librarians need to be aware of accessibility…
Librarians and libraries have long been committed to providing equitable access to information. In the past decade and a half, the growth of the Internet and the rapid increase in the number of online library resources and tools have added a new dimension to this core duty of the profession: ensuring accessibility of online resources to users with…
The article attempts to answer the question: how do the e-shop web services operated by selected Polish e-commerce companies present themselves in terms of web accessibility? It discusses the essence of web accessibility in the context of the WCAG 2.0 standard and the business benefits for companies of owning an accessible website that fulfils the WCAG 2.0 recommendations. The level of web accessibility of the e-shops of selected Polish e-commerce companies is assessed.
Fuertes Castro, José Luis; González, Ricardo; Gutiérrez, Emmanuelle; Martínez Normand, Loïc
Website accessibility evaluation is a complex task requiring a combination of human expertise and software support. There are several online and offline tools to support the manual web accessibility evaluation process. However, they all have some weaknesses because none of them includes all the desired features. In this paper we present Hera-FFX, an add-on for the Firefox web browser that supports semi-automatic web accessibility evaluation.
Gondara, Mandeep Kaur
The Semantic Web is an open, distributed, and dynamic environment in which access to resources cannot be controlled safely unless the access decision is taken into account during discovery of web services. Security becomes a crucial factor for the adoption of semantic-based web services. Access control means that users must fulfill certain conditions in order to gain access to web services. Access control is important from both the legal and the security point of view. This paper discusses important requirements for effective access control in semantic web services, extracted from the literature surveyed. I also discuss open research issues in this context, focusing on access control policies and models.
Ulltveit-Moe, Nils; Olsen, Morten Goodwin; Pillai, Anand B.;
The European Internet Accessibility Observatory project (EIAO) has developed an Observatory for performing large-scale automatic web accessibility evaluations of public sector web sites in Europe. The architecture includes a distributed web crawler that crawls web sites for links until either a given budget of web pages has been identified or the web site has been crawled exhaustively. Subsequently, a uniform random subset of the crawled web pages is sampled and sent for accessibility evaluation, and the evaluation results are stored in a Resource Description Format (RDF) database that is later loaded [...]. The paper discusses challenges that the project faced and the solutions developed towards building a system capable of regular large-scale accessibility evaluations with sufficient capacity and stability. It also outlines some possible future architectural improvements.
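The sampling step described above can be sketched simply: draw a uniform random subset of the crawled URLs, seeded so that runs are reproducible. This is a minimal sketch of the idea only; the real EIAO component also handles crawl budgets and per-site limits.

```python
import random

def sample_pages(crawled_urls, sample_size, seed=0):
    """Draw a uniform random subset of crawled pages for evaluation.

    Seeding the generator makes repeated Observatory runs comparable.
    """
    rng = random.Random(seed)  # reproducible sampling
    k = min(sample_size, len(crawled_urls))
    return rng.sample(crawled_urls, k)

urls = [f"http://example.org/page{i}" for i in range(100)]
subset = sample_pages(urls, 10)
```

Uniform sampling matters here because accessibility metrics computed on the subset are meant to estimate the accessibility of the whole site without bias toward any section of it.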
Guenther, Rebecca; Myrick, Leslie
Born-digital material such as archived Web sites provides unique challenges in ensuring access and preservation. This article examines some of the technical challenges involved in harvesting and managing Web archives as well as metadata strategies to provide descriptive, technical, and preservation related information about archived Web sites,…
Khelghati, Mohammadreza; Keulen, van Maurice; Hiemstra, Djoerd
The growing need to access more and more information draws attention to the huge amount of data hidden behind web forms, known as the deep web. To make this data accessible, harvesters have a crucial role. Targeting different domains and websites enhances the need for a general-purpose harvester which can be applied to different settings and situations.
Schmetzke, Axel; Comeaux, David
This paper focuses on the accessibility of North American library and library school Web sites for all users, including those with disabilities. Web accessibility data collected in 2006 are compared to those of 2000 and 2002. The findings of this follow-up study continue to give cause for concern: Despite improvements since 2002, library and…
This collective case study reviewed the current state of Web accessibility at 102 postsecondary colleges and universities in North Carolina. The study examined themes within Web-accessibility compliance and identified which disability subgroups were most and least affected, why the common errors were occurring, and how the errors could be fixed.…
The concept of Web accessibility refers to a combined set of measures, namely, how easily and how efficiently different types of users may make use of a given service. While some recommendations for accessibility focus on people with various specific disabilities, this document seeks to broaden the scope to any type of user and any type of use case. The document provides an introduction to some required concepts and technical standards for designing accessible Web sites. A brief review of the legal requirements for Web accessibility in a few countries complements the recommendations...
The article is intended to introduce the readers to the concept and background of Web accessibility in the United States. I will first discuss different definitions of Web accessibility. The beneficiaries of accessible Web or the sufferers from inaccessible Web will be discussed based on the type of disability. The importance of Web accessibility will be introduced from the perspectives of ethical, demographic, legal, and financial importance. Web accessibility related standards and legislations will be discussed in great detail. Previous research on evaluating Web accessibility will be presented. Lastly, a system for automated Web accessibility transformation will be introduced as an alternative approach for enhancing Web accessibility.
Highlights: • We present H1DS, a new RESTful web service for accessing fusion data. • We examine the scalability and extensibility of H1DS. • We present a fast and user-friendly web browser client for the H1DS web service. • A summary relational database is presented as an application of the H1DS API. Abstract: A new data access system, H1DS, has been developed and deployed for the H-1 Heliac at the Australian Plasma Fusion Research Facility. The data system provides access to fusion data via a RESTful web service. With the URL acting as the API to the data system, H1DS provides a scalable and extensible framework which is intuitive to new users, and allows access from any internet-connected device. The H1DS framework, originally designed to work with MDSplus, has a modular design which can be extended to provide access to alternative data storage systems.
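"The URL acting as the API" means that the path segments of a request map directly onto nodes of the data tree. The toy resolver below illustrates the idea; the tree layout and names (`h1data`, `mirnov`, shot numbers) are invented for illustration and do not reflect the actual H1DS hierarchy.

```python
def resolve(tree, path):
    """Resolve a REST-style URL path against a nested data tree.

    Each path segment selects one level of the hierarchy, so the URL
    itself serves as the data-access API.
    """
    node = tree
    for part in path.strip("/").split("/"):
        node = node[part]
    return node

# Hypothetical data tree: device -> shot -> diagnostic -> signal.
data_tree = {
    "h1data": {
        "shot_58123": {
            "mirnov": {"coil_1": [0.1, 0.4, -0.2]},
        },
    },
}
signal = resolve(data_tree, "/h1data/shot_58123/mirnov/coil_1")
```

Because every addressable object has a stable URL, the same scheme serves browsers, scripts, and downstream applications such as the summary relational database mentioned in the highlights.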
Purpose: The purpose of this paper is to explore both accessibility and usability and examine the inhibitors and methods to evaluate site accessibility. Design techniques which improve end-user access and site interactivity, demonstrated by practical examples, are also studied. Design/methodology/approach: Assesses various web sites for…
Petrucci, Lori Stefano; Harth, Eric; Roth, Patrick; Assimacopoulos, André; Pun, Thierry
The World Wide Web (WWW) has recently become the main source of digital information accessible everywhere and by everyone. Nevertheless, the inherent visual nature of Internet browsers makes the Web inaccessible to the visually impaired. To solve this problem, non-visual browsers have been developed. One of the new problems, however, with those non-visual browsers is that they often transform the visual content of HTML documents into textual information only, that can be restituted by a text-...
Scott Fowler; Katrin Hameseder; Anders Peterson
As smartphone clients are restricted in computational power and bandwidth, it is important to minimise the overhead of transmitted messages. This paper identifies and studies methods that reduce the amount of data being transferred via wireless links between a web service client and a web service. Measurements were performed in a real environment based on a web service prototype providing public transport information for the city of Hamburg in Germany, using actual wireless links with a mobile smartphone device. REST-based web services using the data exchange formats JSON, XML and Fast Infoset were evaluated against the existing SOAP-based web service.
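The kind of payload-size difference the paper measures can be illustrated by serializing one transport record both ways. The record below is invented, and a hand-built XML string stands in for a real XML serializer; Fast Infoset (a binary XML encoding) would typically shrink the XML further.

```python
import json

# A small public-transport record, of the kind such a prototype might return.
record = {"line": "U3", "stop": "Landungsbruecken", "departure": "12:04",
          "delay_min": 2}

# Compact JSON encoding (no whitespace between tokens).
json_payload = json.dumps(record, separators=(",", ":")).encode()

# The same record as a hand-built XML document (illustrative element names).
xml_payload = (
    "<departureInfo>"
    "<line>U3</line><stop>Landungsbruecken</stop>"
    "<departure>12:04</departure><delayMin>2</delayMin>"
    "</departureInfo>"
).encode()

saving = 1 - len(json_payload) / len(xml_payload)  # fraction of bytes saved
```

JSON avoids repeating element names in closing tags, which is where most of the saving comes from on small, flat records like this one.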
Emig, Christian; Brandt, Frank; Abeck, Sebastian; Biermann, Jürgen; Klarl, Heiko
With the mutual consent to use WSDL (Web Service Description Language) to describe web service interfaces and SOAP as the basic communication protocol, the cornerstone for web service-oriented architecture (WSOA) has been established. Considering the momentum observable by the growing number of specifications in the web service domain for the indispensable cross-cutting concern of identity management (IdM) it is still an open issue how a WSOA-aware IdM architecture is built and how it is link...
Singh, Jaspreet; Fernando, Zeon Trevor; Chawla, Saniya
In addition to user-generated content, Open Educational Resources are increasingly made available on the Web by several institutions and organizations with the aim of being re-used. Nevertheless, it is still difficult for users to find appropriate resources for specific learning scenarios among the vast amount offered on the Web. Our goal is to give users the opportunity to search for authentic resources from the Web and reuse them in a learning context. The LearnWeb-OER platform enhances col...
Laursen, Ditte; Møldrup-Dalum, Per
Digital heritage archiving is an ongoing activity that requires commitment, involvement and cooperation between heritage institutions and policy makers as well as producers and users of information. In this presentation, we will address how a web archive is created over time, as well as what or who drives the development of a web archive. Empirically, we will look back on the 10 years of development to collect, preserve and access the Danish web in the Danish national web archive, Netarkivet. In particular, we will address how a web archive is created and re-created over time in relation to...
Long, L. Rodney; Goh, Gin-Hua; Neve, Leif; Thoma, George R.
The biomedical digital library of the future is expected to provide access to stores of biomedical database information containing text and images. Developing efficient methods for accessing such databases is a research effort at the Lister Hill National Center for Biomedical Communications of the National Library of Medicine. In this paper we examine issues in providing access to databases across the Web and describe a tool we have developed: the Web-based Medical Information Retrieval System (WebMIRS). We address a number of critical issues, including preservation of data integrity, efficient database design, access to documentation, quality of query and results interfaces, capability to export results to other software, and exploitation of multimedia data. WebMIRS is implemented as a Java applet that allows database access to text and to associated image data, without requiring any user software beyond a standard Web browser. The applet implementation allows WebMIRS to run on any hardware platform (such as PCs, the Macintosh, or Unix machines) which supports a Java-enabled Web browser, such as Netscape or Internet Explorer. WebMIRS is being tested on text/x-ray image databases created from the National Health and Nutrition Examination Surveys (NHANES) data collected by the National Center for Health Statistics.
Chan, Lois Mai; Lin, Xia; Zeng, Marcia
This paper presents some of the efforts currently being made to develop mechanisms that can organize World Wide Web resources for efficient and effective retrieval, as well as programs that can accommodate multiple languages. Part 1 discusses structural approaches to organizing Web resources, including the use of hierarchical or…
National Aeronautics and Space Administration — Global Science & Technology, Inc. (GST) proposes to investigate information processing and delivery technologies to provide near-real-time Web-based access to...
World Wide Web is becoming increasingly necessary for everybody regardless of age, gender, culture, health and individual disabilities. Unfortunately, the information on the Web is still not accessible to deaf and hard of hearing Web users since these people require translations of written forms into their first language: sign language, which is based on facial expressions, hands and body movements and has its own linguistic structure. This thesis introduces a possible solution (method) for p...
More than a decade after the Web began to develop, electronic resources, open access tendencies, and libraries, as well as Web 2.0, developed rapidly, offering not only librarians but also end users, researchers, educators, and students new forms of communication and information. As the academy gradually extended itself into electronic libraries, online databases, and social networking services, the world has been populated by diverse and blooming Web 2.0 applications. I will focus on two emerging areas, We...
Zeng, Xiaoming; Sligar, Steven R.
Human resource development programs in various institutions communicate with their constituencies including persons with disabilities through websites. Web sites need to be accessible for legal, economic and ethical reasons. We used an automated web usability evaluation tool, aDesigner, to evaluate 205 home pages from the organizations of AHRD…
Georgiev, Martin; Jana, Suman; Shmatikov, Vitaly
Offers an introduction to the web of data and the semantic web, exploring technologies including APIs, microformats and linked data. This title includes topical commentary and practical examples that explore how information professionals can harness the power of this phenomenon to inform strategy and become facilitators of access to data.
Jacyno, M.; Payne, T. R.; Watkins, E R; Taylor, S. J.; Surridge, M.
As the technical infrastructure to support Grid environments matures, attention should focus on providing dynamic access to services, whilst ensuring such access is appropriately monitored and secured. Access policies may be dynamic, whereby intra-organisational workflows define local knowledge that could be used to establish appropriate credentials necessary to access the desired service. We describe a typical Grid-based scenario that requires local semantic workflows that establish the appr...
Lewis, Kay; Yoder, Diane; Riley, Elizabeth; So, Yvonne; Yusufali, Sarah
Access to education has always challenged students with disabilities. The increase of online instructional materials presents new opportunities--and possible barriers--for accessibility in higher education. Despite rising numbers of students with disabilities in higher education, colleges and universities have not ensured accessibility of online…
Good, Alice; Stokes, Suzanne; Jerrams-Smith, Jenny
The Web can provide a quick and easy way to access health information, especially for elderly users. However, these health information sites need to be accessible and usable. In spite of legislation and clear guidelines, there continues to be issues of poor accessibility and usability. Because of an aging population and the likelihood of being more susceptible to age-related impairments such as restricted vision and mobility, the severity of this problem continues to grow. This article presents the results of an exploratory study aimed at assessing the accessibility and usability of three health information Web sites for elderly novice users. The results from the study show that certain aspects of these Web sites make it difficult for elderly people to use them, especially if the users have impairments. Problematic areas are highlighted regarding usability and accessibility, and recommendations are made based on the findings. PMID:19195297
As mobile web services become more pervasive, applications based on mobile web services will need flexible access control mechanisms. Unlike traditional approaches based on identity or role for access control, access decisions for these applications will depend on the combination of the required attributes of users and the contextual information. This paper proposes a semantic context-based access control model (called SCBAC) to be applied in mobile web services environments by combining semantic web technologies with a context-based access control mechanism. The proposed model is a context-centric access control solution: context is the first-class principle that explicitly guides both policy specification and the enforcement process. In order to handle context information in the model, this paper proposes a context ontology to represent contextual information and employs it in the inference engine. The paper also specifies access control policies as rules over ontologies representing the concepts introduced in the SCBAC model, and uses the Semantic Web Rule Language (SWRL) to form policy rules and infer them with the JESS inference engine. The proposed model can also be applied to context-aware applications.
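The core decision logic — grant access only when user attributes and contextual conditions both hold — can be sketched without the ontology and rule-engine machinery. The roles, context predicates, and permission names below are invented for illustration; in SCBAC these would be SWRL rules over the context ontology, evaluated by JESS.

```python
# Context predicates (stand-ins for ontology-backed context conditions).
def working_hours(ctx):
    return 9 <= ctx["hour"] < 17

def on_campus(ctx):
    return ctx["location"] == "campus"

# Each rule: required user attributes, a context condition, a permission.
POLICY = [
    ({"role": "staff"}, working_hours, "invoke:payroll_service"),
    ({"role": "student"}, on_campus, "invoke:library_service"),
]

def allowed(user, ctx, action):
    """Grant access only when user attributes AND context conditions match."""
    for attrs, condition, permitted in POLICY:
        if all(user.get(k) == v for k, v in attrs.items()) \
                and condition(ctx) and permitted == action:
            return True
    return False

ok = allowed({"role": "staff"}, {"hour": 10, "location": "campus"},
             "invoke:payroll_service")
```

Making context a first-class input, rather than an afterthought bolted onto role checks, is exactly the design choice the SCBAC abstract argues for.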
Radovan, Marko; Perdih, Mojca
E-learning is a rapidly developing form of education. One of the key characteristics of e-learning is flexibility, which enables easier access to knowledge for everyone. Information and communications technology (ICT), which is e-learning's main component, enables alternative means of accessing the web-based learning materials that comprise the…
In this article, we describe the present situation of access network management, enumerate a few problems in the development of network management systems, and then put forward a distributed Intranet/Web solution named iMAN for the integrated management of access networks, present its architecture and protocol stack, and describe its application in practice.
Esmeralda Serrano Mascaraque
Government agencies should provide information resources and services through various means in order to uphold the right to information of every citizen. At present the Web is one of the most widespread resources, so it is essential to evaluate the degree of accessibility of the content published on it. To this end, the necessary tools and software will be applied and the accessibility level of a representative group of websites will be evaluated. We will also try to determine whether there is any relationship between accessibility and usability, since both are desirable (or, in the case of accessibility, even legally required) aspects of a proper Web design.
Luis Joyanes Aguilar
The significant increase in threats, attacks and vulnerabilities affecting the Web in recent years has resulted in the development and implementation of tools and methods to ensure the privacy, confidentiality and integrity of user and business data. Even with these tools in place, information does not always flow in a secure manner. Many of these security tools and methods cannot be used by people who have disabilities, or by the assistive technologies that enable such people to access the Web efficiently. Among the security tools that are not accessible are the virtual keyboard, the CAPTCHA and other technologies that help to some extent to ensure safety on the Internet and are used to combat the malicious code and attacks that have increased in recent times on the Web. Intelligent systems can detect, recover and receive information on the characteristics and properties of the tools, hardware devices or software with which a user is accessing a web application, and through analysis and interpretation these intelligent systems can infer and automatically adjust the characteristics these tools need in order to be accessible to anyone, regardless of disability or navigation context. This paper defines a set of guidelines and specific features that security tools and methods should have in order to ensure Web accessibility through the implementation of intelligent systems.
Abstract Bioinformatics web-based services are rapidly proliferating, owing to their interoperability and ease of use. The next challenge is the integration of these services in the form of workflows, and several projects are already underway, standardizing the syntax, semantics, and user interfaces. In order to combine the advantages of web services with locally installed tools, here we describe a collection of proxy client tools for 42 major bioinformatics web services in the form of European Molecular Biology Open Software Suite (EMBOSS) UNIX command-line tools. EMBOSS provides sophisticated means for discoverability and interoperability for hundreds of tools, and our package, named the Keio Bioinformatics Web Service (KBWS), adds functionalities of local and multiple alignment of sequences, phylogenetic analyses, and prediction of cellular localization of proteins and RNA secondary structures. This software, implemented in C, is available under GPL from http://www.g-language.org/kbws/ and the GitHub repository http://github.com/cory-ko/KBWS. Users can utilize the SOAP services implemented in Perl directly via the WSDL files at http://soap.g-language.org/kbws.wsdl (RPC/Encoded) and http://soap.g-language.org/kbws_dl.wsdl (Document/Literal).
Sequential pattern mining is the process of applying data mining techniques to a sequential database for the purpose of discovering the correlation relationships that exist among an ordered list of events. The task of discovering frequent sequences is challenging, because the algorithm needs to process a combinatorially explosive number of possible sequences. Discovering hidden information from Web log data is called Web usage mining. One common usage in web applications is the mining of users' access behaviour for the purpose of predicting, and hence pre-fetching, the web pages that the user is likely to visit. The aim of discovering frequent sequential patterns in Web log data is to obtain information about the access behaviour of the users. Finding Frequent Sequential Patterns (FSP) is an important problem in web usage mining. In this paper, we explore a new frequent sequence pattern technique called AWAPT (Adaptive Web Access Pattern Tree) for FSP mining. An AWAPT combines a suffix tree and a prefix tree for efficient storage of all the sequences that contain a given item. It eliminates recursive reconstruction of intermediate WAP-trees during mining by assigning binary codes to each node in the WAP-tree. Web access pattern tree (WAP-tree) mining is a sequential pattern mining technique for web log access sequences, which first stores the original web access sequence database (WASD) on a prefix tree, similar to the frequent pattern tree (FP-tree) for storing non-sequential data. The WAP-tree algorithm then mines the frequent sequences from the WAP-tree by recursively re-constructing intermediate trees, starting with suffix sequences and ending with prefix sequences. An attempt has been made to use the AWAPT approach to improve efficiency. AWAPT totally eliminates the need to engage in numerous reconstructions of intermediate WAP-trees during mining and considerably reduces execution time.
National Aeronautics and Space Administration — We propose to investigate the feasibility and value of the "Software as a Service" paradigm in facilitating access to Earth Science numerical models. We...
Pardo, Mauricio Esteban; Strack, Guillermo; Martínez, Diego C.
Domotics systems are intelligent systems for houses and apartments that control aspects such as security and lighting or climate devices. In this work we present the development of an economical domotic system to control different electrical devices in a private house. This is achieved either from inside the building or by remote control using a regular Internet connection. In order to provide this functionality, the system includes a server that provides web services to the controlling applicatio...
Chen, Alex Qiang
The World Wide Web (Web) has evolved from a collection of static pages that need reloading every time the content changes, into dynamic pages where parts of the page update independently, without reloading it. As such, users are required to work with dynamic pages with components that react to events either from human interaction or machine automation. Often elderly and visually impaired users are the most disadvantaged when dealing with this form of interaction. Operating widgets require th...
Wen-Jye Shyr; Te-Jen Su; Chia-Ming Lin
This study develops a novel method for learning mechatronics using remote monitoring and control, based on a programmable logic controller (PLC) and WebAccess. A mechatronics module, a Web‐CAM and a PLC were integrated with WebAccess software to organize a remote laboratory. The proposed system enables users to access the Internet for remote monitoring and control of the mechatronics module via a web browser, thereby enhancing work flexibility by enabling personnel to control mechatronics equ...
Furche, Tim; Gottlob, Georg; Grasso, Giovanni; Guo, Xiaonan; Orsi, Giorgio; Schallhart, Christian
Forms are our gates to the web. They enable us to access the deep content of web sites. Automatic form understanding provides applications, ranging from crawlers over meta-search engines to service integrators, with a key to this content. Yet, it has received little attention other than as a component in specific applications such as crawlers or meta-search engines. No comprehensive approach to form understanding exists, let alone one that produces rich models for semantic services or integrati...
Bakker, R.; Tiesinga, P.; Kotter, R.
The Scalable Brain Atlas (SBA) is a collection of web services that provide unified access to a large collection of brain atlas templates for different species. Its main component is an atlas viewer that displays brain atlas data as a stack of slices in which stereotaxic coordinates and brain regions can be selected. These are subsequently used to launch web queries to resources that require coordinates or region names as input. It supports plugins which run inside the viewer and respond when...
Vitols, G; Arhipova, I
Markup languages are used to describe the content published on the World Wide Web. The aim of this article is to analyze hypertext markup language versions and identify possibilities for improving the accessibility of web information systems through the appropriate application of markup language elements. An analysis of the document structure is performed. Document structure and text description elements are selected. The selected elements are practically evaluated with screen readers. From the evaluation resul...
Trabant, Chad; Ahern, Timothy K.
At the IRIS Data Management Center (DMC) we have developed a suite of web service interfaces to access our large archive of, primarily seismological, time series data and related metadata. The goals of these web services include providing: a) next-generation and easily used access interfaces for our current users, b) access to data holdings in a form usable for non-seismologists, c) programmatic access to facilitate integration into data processing workflows and d) a foundation for participation in federated data discovery and access systems. To support our current users, our services provide access to the raw time series data and metadata or conversions of the raw data to commonly used formats. Our services also support simple, on-the-fly signal processing options that are common first steps in many workflows. Additionally, high-level data products derived from raw data are available via service interfaces. To support data access by researchers unfamiliar with seismic data we offer conversion of the data to broadly usable formats (e.g. ASCII text) and data processing to convert the data to Earth units. By their very nature, web services are programmatic interfaces. Combined with ubiquitous support for web technologies in programming & scripting languages and support in many computing environments, web services are very well suited for integrating data access into data processing workflows. As programmatic interfaces that can return data in both discipline-specific and broadly usable formats, our services are also well suited for participation in federated and brokered systems either specific to seismology or multidisciplinary. Working within the International Federation of Digital Seismograph Networks, the DMC collaborated on the specification of standardized web service interfaces for use at any seismological data center. These data access interfaces, when supported by multiple data centers, will form a foundation on which to build discovery and access mechanisms
Web services are starting to be widely used in applications for remotely accessing data. This is of special interest for research based on small and medium scale fusion devices, since scientists participating remotely in experiments access large amounts of data over the Internet. Recent tests were conducted to see how the new network traffic generated by the use of web services can be integrated into the existing infrastructure, and what the impact would be on existing applications, especially those used in a remote participation scenario.
This thesis project has been performed at (and for) a company named Strödata. The purpose of the project has been to perform a risk analysis on Strödata’s web based business system, and specifically analyze how access to the business system through smartphones would affect the risks posed to the system. This has been done to help decide if smartphone access should be enabled. An implementation of a web application which is suited for use on a smartphone has also been developed, as a proof-of-...
Mangesh V. Bedekar
Full Text Available Web usage prediction has become a widely addressed topic with the huge proliferation of the World Wide Web and computers. Most of the work done in this area of research centers on predicting which links a user is expected to visit next given his usage history, making suggestions for new web sites he may be interested in, and the like. We propose two algorithms to make browsers intelligent enough to gauge usage patterns. These algorithms are a blend of statistical and fuzzy logic techniques for gauging the surfing pattern of users, intelligently predicting the time ranges of likely user hits for particular websites and speeding up the browsing experience by means of caching and preloading of predicted websites. Thus our design intends to make browsers intelligent, to speed up and better organize the browsing experience.
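The paper's exact blend of statistical and fuzzy techniques is not given in the abstract; a minimal statistical stand-in for the time-range idea is to bucket a site's past visits by hour of day and treat the hours holding a large share of visits as the likely-hit range worth pre-fetching for. The history data and threshold below are invented, and the fuzzy membership step is omitted.

```python
# Statistical stand-in for predicting likely hit time ranges per site:
# hours of day that account for at least `threshold` of past visits.
from collections import Counter

def likely_hours(visit_hours, threshold=0.2):
    """Return hours of day holding at least `threshold` of all visits."""
    counts = Counter(visit_hours)
    total = sum(counts.values())
    return sorted(h for h, c in counts.items() if c / total >= threshold)

history = [8, 8, 9, 8, 13, 9, 8, 9, 21, 8]   # hours the user hit a news site
print(likely_hours(history))                  # → [8, 9]
```

A browser could then schedule cache refreshes for the returned hours rather than polling the site continuously.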
Full Text Available As the internet is fast migrating from static web pages to dynamic web pages, users with visual impairment find it confusing and challenging to access contents on the web. There is evidence that dynamic web applications pose accessibility challenges for visually impaired users. This study shows that a difference can be made through a basic understanding of the technical requirements of users with visual impairment, and it addresses a number of issues pertinent to the accessibility needs of such users. We propose that only by designing a framework that is structurally flexible, removing unnecessary extras and thereby making every bit useful (fit-for-purpose), will visually impaired users be given an increased capacity to intuitively access e-contents. This theory is implemented in a dynamic website for the visually impaired designed in this study. Designers should be aware of how screen reading software works, to enable them to make reasonable adjustments or provide alternative content that still corresponds to the objective content, increasing the possibility of offering a faultless service to such users. The result of our research reveals that materials can be added to a content repository or re-used from existing ones by identifying the content types and then transforming them into a flexible and accessible form that fits the requirements of the visually impaired through our method (no-frill + agile methodology), rather than computing in advance or designing according to a given specification.
Alharbi, Bader A.; Alshammari, Thamir H.; Felton, Nathan L.; Zhurkin, Victor B.; Cui, Feng
Nucleosome positioning is critical for gene expression and of major biological interest. The high cost of experimentally mapping nucleosomal arrangement signifies the need for computational approaches to predict nucleosome positions at high resolution. Here, we present a web-based application to fulfill this need by implementing two models, YR and W/S schemes, for the translational and rotational positioning of nucleosomes, respectively. Our methods are based on sequence-dependent anisotropic...
Bitenc, U; The ATLAS collaboration; Ferrari, P; Luisa, L
This note describes how to access the DCS data from a web browser. The DCS data from the various Atlas subdetectors are stored to the Oracle database using the RDB manager in PVSS. All subdetectors use either the same, or a very similar schema. The effort coordinated within the Inner Detector has produced a web-based tool to search the Atlas DCS Oracle database and to display the results. This tool has been easily extended to access the data from other Atlas subdetectors. In this note we describe the structure of the AtlasDcsWebViewer and its use.
Near, Joseph Paul; Jackson, Daniel
We propose a specification-free technique for finding missing security checks in web applications using a catalog of access control patterns in which each pattern models a common access control use case. Our implementation, Space, checks that every data exposure allowed by an application's code matches an allowed exposure from a security pattern in our catalog. The only user-provided input is a mapping from application types to the types of the catalog; the rest of the process is entirely au...
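The core check can be stated abstractly: every data exposure the analysis finds in the code must match some allowed exposure in the catalog of access control patterns. The catalog entries and exposure triples below are invented for illustration; the real tool extracts exposures from application code automatically, given only the user-provided type mapping.

```python
# Illustrative sketch of a Space-style check: flag every
# (role, data-type, operation) exposure with no justifying pattern.
# Catalog contents and exposures are invented examples.
CATALOG = {
    ("owner", "document", "read"),
    ("owner", "document", "write"),
    ("public", "document", "read"),
}

def missing_checks(exposures):
    """Return the exposures not justified by any catalog pattern."""
    return [e for e in exposures if e not in CATALOG]

found = [
    ("owner", "document", "write"),
    ("public", "document", "write"),   # matches no allowed pattern
]
print(missing_checks(found))
```

Each reported triple corresponds to a code path that exposes data without a matching access control check, which is exactly the kind of missing-check bug the tool targets.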
The EU DataGrid has deployed a grid testbed at approximately 20 sites across Europe, with several hundred registered users. This paper describes authorisation systems produced by GridPP and currently used on the EU DataGrid Testbed, including local Unix pool accounts and fine-grained access control with Access Control Lists and Grid-aware filesystems, fileservers and web development environments.
ZHAN Li-qiang; LIU Da-xin
We propose an efficient hybrid algorithm, WDHP, in this paper for mining frequent access patterns. WDHP adopts the techniques of DHP to optimize its performance, using a hash table to filter the candidate set and trimming the database. Whenever the database is trimmed to a size less than a specified threshold, the algorithm loads the database into main memory by constructing a tree, and finds frequent patterns on the tree. The experiment shows that WDHP outperforms the algorithm DHP and the main-memory-based algorithm WAP in execution efficiency.
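The DHP hashing trick that WDHP reuses is easy to sketch: while scanning transactions, hash every 2-item combination into a small bucket table, and later admit a candidate pair only if its bucket count reaches minimum support. Collisions can let a few false candidates through, but a truly frequent pair is never dropped. The bucket count, hash function, and transactions below are arbitrary illustrative choices.

```python
# Sketch of DHP candidate filtering: bucket counts over hashed
# 2-item combinations prune candidate pairs cheaply.
from itertools import combinations

NUM_BUCKETS = 8

def bucket(pair):
    return hash(pair) % NUM_BUCKETS

def dhp_candidate_pairs(transactions, min_support):
    buckets = [0] * NUM_BUCKETS
    for t in transactions:
        for pair in combinations(sorted(set(t)), 2):
            buckets[bucket(pair)] += 1
    # keep only pairs whose bucket passed the support threshold
    seen = {p for t in transactions for p in combinations(sorted(set(t)), 2)}
    return {p for p in seen if buckets[bucket(p)] >= min_support}

txns = [["a", "b", "c"], ["a", "b"], ["a", "c"], ["b", "d"]]
cands = dhp_candidate_pairs(txns, min_support=2)
print(sorted(cands))
```

The surviving candidates are then verified with exact counts in a later pass; the point of the bucket table is that it is far smaller than the full set of candidate pairs.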
Dai, Jianli; Chen, Yuansha; Lauzardo, Michael
Mycobacteria include a large number of pathogens. Identification to species level is important for diagnoses and treatments. Here, we report the development of a Web-accessible database of the hsp65 locus sequences (http://msis.mycobacteria.info) from 149 out of 150 Mycobacterium species/subspecies. This database can serve as a reference for identifying Mycobacterium species.
Herkenhöner, Ralph; De Meer, Hermann; Jensen, Meiko;
with trade secret protection. In this paper, we present an automated architecture to enable exercising the right of access in the domain of inter-organizational business processes based on Web Services technology. Deriving its requirements from the legal, economical, and technical obligations, we show...
Senthil, J; Arumugam, S.; S Margret Anouncia; Abhinav Kapoor
Today, a lot of web applications and web sites are data driven. These web applications have all their static and dynamic data stored in relational databases. The aim of this thesis is to automatically generate code for accessing data located in relational databases in minimum time.
In large experimental facilities such as KEKB, RIBF, and J-PARC, the accelerators are operated by remote control systems based on EPICS (Experimental Physics and Industrial Control System). One of the advantages of an EPICS-based system is software reusability: client systems can be developed using the Channel Access (CA) protocol, without hardware-dependent protocols, even if the system consists of various kinds of controllers. As a next-generation OPI (Operator Interface) using CA, we have developed a server for WebSocket, a new protocol provided by the Internet Engineering Task Force (IETF), using Node.js and its modules. As a result, we are able to use Web-based client systems not only in the central control room but also with various types of equipment for accelerator operation. (author)
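The server-side half of the WebSocket opening handshake that such a gateway must perform is fixed by RFC 6455: the server echoes back the base64-encoded SHA-1 of the client's `Sec-WebSocket-Key` concatenated with a fixed GUID. A minimal sketch in Python (the note's own stack is Node.js):

```python
# Sec-WebSocket-Accept computation from the RFC 6455 opening handshake.
import base64
import hashlib

WS_GUID = "258EAFA5-E914-47DA-95CA-C5AB0DC85B11"  # fixed by RFC 6455

def websocket_accept(sec_websocket_key: str) -> str:
    digest = hashlib.sha1((sec_websocket_key + WS_GUID).encode("ascii")).digest()
    return base64.b64encode(digest).decode("ascii")

# Test vector from RFC 6455, section 1.3:
print(websocket_accept("dGhlIHNhbXBsZSBub25jZQ=="))  # s3pPLMBiTxaQ9kYGzzhZRbK+xOo=
```

After this exchange the connection stays open, which is what lets a CA gateway push monitored process-variable updates to the browser instead of the browser polling.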
Weertman, B. R.; Trabant, C.; Karstens, R.; Suleiman, Y. Y.; Ahern, T. K.; Casey, R.; Benson, R. B.
The IRIS Data Management Center (DMC) has developed a suite of web services that provide access to the DMC's time series holdings, their related metadata and earthquake catalogs. In addition, services are available to perform simple, on-demand time series processing at the DMC prior to being shipped to the user. The primary goal is to provide programmatic access to data and processing services in a manner usable by and useful to the research community. The web services are relatively simple to understand and use and will form the foundation on which future DMC access tools will be built. Based on standard Web technologies they can be accessed programmatically with a wide range of programming languages (e.g. Perl, Python, Java), command line utilities such as wget and curl or with any web browser. We anticipate these services being used for everything from simple command line access, used in shell scripts and higher programming languages to being integrated within complex data processing software. In addition to improving access to our data by the seismological community the web services will also make our data more accessible to other disciplines. The web services available from the DMC include ws-bulkdataselect for the retrieval of large volumes of miniSEED data, ws-timeseries for the retrieval of individual segments of time series data in a variety of formats (miniSEED, SAC, ASCII, audio WAVE, and PNG plots) with optional signal processing, ws-station for station metadata in StationXML format, ws-resp for the retrieval of instrument response in RESP format, ws-sacpz for the retrieval of sensor response in the SAC poles and zeros convention and ws-event for the retrieval of earthquake catalogs. To make the services even easier to use, the DMC is developing a library that allows Java programmers to seamlessly retrieve and integrate DMC information into their own programs. The library will handle all aspects of dealing with the services and will parse the returned
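Since these services are plain HTTP interfaces, a request is just a URL. The sketch below assembles a ws-timeseries-style query string; the endpoint path and parameter names (`net`, `sta`, `cha`, `start`, `end`, `output`) follow common FDSN conventions but are illustrative assumptions here, not a verbatim copy of the DMC's interface.

```python
# Building a web service query URL for a time series request.
# Endpoint and parameter names are illustrative, FDSN-style assumptions.
from urllib.parse import urlencode

BASE = "http://service.iris.edu/ws-timeseries/query"  # illustrative endpoint

def timeseries_url(net, sta, cha, start, end, output="sacbl"):
    params = {"net": net, "sta": sta, "cha": cha,
              "start": start, "end": end, "output": output}
    return BASE + "?" + urlencode(params)

url = timeseries_url("IU", "ANMO", "BHZ",
                     "2010-02-27T06:30:00", "2010-02-27T10:30:00")
print(url)
```

Because the request is a URL, the same call works from a browser, `wget`/`curl` in a shell script, or any HTTP client library, which is exactly the breadth of access the abstract describes.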
Alharbi, Bader A; Alshammari, Thamir H; Felton, Nathan L; Zhurkin, Victor B; Cui, Feng
Nucleosome positioning is critical for gene expression and of major biological interest. The high cost of experimentally mapping nucleosomal arrangement signifies the need for computational approaches to predict nucleosome positions at high resolution. Here, we present a web-based application to fulfill this need by implementing two models, YR and W/S schemes, for the translational and rotational positioning of nucleosomes, respectively. Our methods are based on sequence-dependent anisotropic bending that dictates how DNA is wrapped around a histone octamer. This application allows users to specify a number of options such as schemes and parameters for threading calculation and provides multiple layout formats. The nuMap is implemented in Java/Perl/MySQL and is freely available for public use at http://numap.rit.edu. The user manual, implementation notes, description of the methodology and examples are available at the site. PMID:25220945
Elmsheuser, Johannes; The ATLAS collaboration; Serfon, Cedric; Garonne, Vincent; Blunier, Sylvain; Lavorini, Vincenzo; Nilsson, Paul
With the exponential growth of LHC (Large Hadron Collider) data in the years 2010-2012, distributed computing has become the established way to analyse collider data. The ATLAS experiment Grid infrastructure includes more than 130 sites worldwide, ranging from large national computing centres to smaller university clusters. So far the storage technologies and access protocols to the clusters that host this tremendous amount of data vary from site to site. HTTP/WebDAV offers the possibility to use a unified industry standard to access the storage. We present the deployment and testing of HTTP/WebDAV for local and remote data access in the ATLAS experiment for the new data management system Rucio and the PanDA workload management system. Deployment and large scale tests have been performed using the Grid testing system HammerCloud and the ROOT HTTP plugin Davix.
Elmsheuser, J.; Walker, R.; Serfon, C.; Garonne, V.; Blunier, S.; Lavorini, V.; Nilsson, P.
With the exponential growth of LHC (Large Hadron Collider) data in the years 2010-2012, distributed computing has become the established way to analyse collider data. The ATLAS experiment Grid infrastructure includes more than 130 sites worldwide, ranging from large national computing centres to smaller university clusters. So far the storage technologies and access protocols to the clusters that host this tremendous amount of data vary from site to site. HTTP/WebDAV offers the possibility to use a unified industry standard to access the storage. We present the deployment and testing of HTTP/WebDAV for local and remote data access in the ATLAS experiment for the new data management system Rucio and the PanDA workload management system. Deployment and large scale tests have been performed using the Grid testing system HammerCloud and the ROOT HTTP plugin Davix.
Brown, K E; Newby, K; Caley, M; Danahay, A; Kehal, I
Sexual health service access is fundamental to good sexual health, yet interventions designed to address this have rarely been implemented or evaluated. In this article, pilot evaluation findings for a targeted public health behavior change intervention, delivered via a website and web-app, aiming to increase uptake of sexual health services among 13-19-year olds are reported. A pre-post questionnaire-based design was used. Matched baseline and follow-up data were identified from 148 respondents aged 13-18 years. Outcome measures were self-reported service access, self-reported intention to access services and beliefs about services and service access identified through needs analysis. Objective service access data provided by local sexual health services were also analyzed. Analysis suggests the intervention had a significant positive effect on psychological barriers to and antecedents of service access among females. Males, who reported greater confidence in service access compared with females, significantly increased service access by time 2 follow-up. Available objective service access data support the assertion that the intervention may have led to increases in service access. There is real promise for this novel digital intervention. Further evaluation is planned as the model is licensed to and rolled out by other local authorities in the United Kingdom. PMID:26928566
Berget, Gerd; Herstad, Jo; Sandnes, Frode Eika
Universal design in context of digitalisation has become an integrated part of international conventions and national legislations. A goal is to make the Web accessible for people of different genders, ages, backgrounds, cultures and physical, sensory and cognitive abilities. Political demands for universally designed solutions have raised questions about how it is achieved in practice. Developers, designers and legislators have looked towards the Web Content Accessibility Guidelines (WCAG) for answers. WCAG 2.0 has become the de facto standard for universal design on the Web. Some of the guidelines are directed at the general population, while others are targeted at more specific user groups, such as the visually impaired or hearing impaired. Issues related to cognitive impairments such as dyslexia receive less attention, although dyslexia is prevalent in at least 5-10% of the population. Navigation and search are two common ways of using the Web. However, while navigation has received a fair amount of attention, search systems are not explicitly included, although search has become an important part of people's daily routines. This paper discusses WCAG in the context of dyslexia for the Web in general and search user interfaces specifically. Although certain guidelines address topics that affect dyslexia, WCAG does not seem to fully accommodate users with dyslexia. PMID:27534340
Proton Engineering Frontier Project (PEFP) has developed a 20 MeV proton accelerator, and established a distributed control system based on EPICS for sub-system components such as the vacuum unit, beam diagnostics, and power supply system. The control system includes real-time monitoring and alarm functions. For efficient maintenance of the control system and additional extension of subsystems, the EPICS software framework was adopted. In addition, a control system should be capable of providing easy access for users and real-time monitoring on a user screen. Therefore, we have implemented a new web-based monitoring server with several libraries. By adding a DB module, the new IOC web monitoring system makes it possible to monitor the system through the web. By integrating EPICS Channel Access (CA) and database libraries into a database module, the web-based monitoring system makes it possible to monitor sub-system status through the user's internet browser. In this study, we developed a web-based monitoring system using an EPICS IOC (Input Output Controller) with an IBM server.
Pallen Mark J
Tired of all the time spent on the phone or sending emails to schedule beam time? Why not make your own schedule when it is convenient to you? The integrated web environment at the NIGMS East Coast Structural Biology Research Facility allows users to schedule their own beam time as if they were making travel arrangements and provides staff with a set of toolkits for management of routine tasks. These unique features are accessible through the MediaWiki-powered home pages. Here we describe the main features of this web environment that have shown to allow for an efficient and effective interaction between the users and the facility
Cristina Livia Iancu
Full Text Available This paper presents a solution for accessing web services in a light-secure way. Because the payload of the messages is not especially sensitive, care is taken only to protect the user name and the password used for authentication and authorization into the web services system. The advantage of this solution compared to the commonly used SSL is that it avoids the overhead related to the handshake and encryption, providing a faster response to clients. The solution is intended for Windows machines and is developed using the latest stable Microsoft technologies.
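The abstract does not spell out the scheme, so the sketch below shows one generic way to protect only the credentials without SSL: the client sends the user name, a timestamp, and an HMAC of both keyed with the shared password, which the service recomputes and compares in constant time. The names and messages here are illustrative, not the paper's actual protocol.

```python
# Generic credential-protection sketch: the password never travels
# in clear; only an HMAC over user name and timestamp is sent.
import hashlib
import hmac

def sign_request(username: str, password: str, timestamp: str) -> str:
    msg = f"{username}|{timestamp}".encode()
    return hmac.new(password.encode(), msg, hashlib.sha256).hexdigest()

def verify_request(username, timestamp, signature, password) -> bool:
    expected = sign_request(username, password, timestamp)
    return hmac.compare_digest(expected, signature)   # constant-time compare

sig = sign_request("alice", "s3cret", "2024-01-01T00:00:00Z")
print(verify_request("alice", "2024-01-01T00:00:00Z", sig, "s3cret"))  # True
```

Including the timestamp in the signed message lets the server reject stale or replayed requests, which is the usual companion check in such lightweight schemes.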
A new web product data management architecture is presented. The three-tier web architecture and the Simple Object Access Protocol (SOAP) are combined to build a web-based product data management (PDM) system which includes three tiers: the user services tier, the business services tier, and the data services tier. The client service component uses server-side technology, and an Extensible Markup Language (XML) web service, which uses SOAP as the communication protocol, is chosen as the business service component. To illustrate how to build a web-based PDM system using the proposed architecture, a case PDM system which included three logical tiers was built. To use the security and central management features of the database, a stored procedure was recommended in the data services tier. The business object was implemented as an XML web service so that clients could use standard internet protocols to communicate with the business object from any platform. In order to satisfy users using all sorts of browsers, server-side technology and Microsoft ASP.NET were used to create the dynamic user interface.
Gupta, V.; Gupta, N.; Gupta, S.; Field, E.; Maechling, P.
hosted these Web Services as a part of the SCEC Community Modeling Environment (SCEC/CME) ITR Project (http://www.scec.org/cme). We have implemented Web Services for several of the reasons cited previously. For example, we implemented a FORTRAN-based Earthquake Rupture Forecast (ERF) as a Web Service for use by client computers that don't support a FORTRAN runtime environment. We implemented a Generic Mapping Tool (GMT) Web Service for use by systems that don't have local access to GMT. We implemented a Hazard Map Calculator Web Service to execute Hazard calculations that are too computationally intensive to run on a local system. We implemented a Coordinate Conversion Web Service to enforce a standard and consistent method for converting between UTM and Lat/Lon. Our experience developing these services indicates both strengths and weaknesses in current Web Service technology. Client programs that utilize Web Services typically need network access, a significant disadvantage at times. Programs with simple input and output parameters were the easiest to implement as Web Services, while programs with complex parameter types required a significant amount of additional development. We also noted that Web Services are very data-oriented, and adapting object-oriented software into the Web Service model proved problematic. Also, the Web Service approach of converting data types into XML format for network transmission has significant inefficiencies for some data sets.
David, Fabrice P A; Delafontaine, Julien; Carat, Solenne; Ross, Frederick J; Lefebvre, Gregory; Jarosz, Yohan; Sinclair, Lucas; Noordermeer, Daan; Rougemont, Jacques; Leleu, Marion
The HTSstation analysis portal is a suite of simple web forms coupled to modular analysis pipelines for various applications of High-Throughput Sequencing including ChIP-seq, RNA-seq, 4C-seq and re-sequencing. HTSstation offers biologists the possibility to rapidly investigate their HTS data using an intuitive web application with heuristically pre-defined parameters. A number of open-source software components have been implemented and can be used to build, configure and run HTS analysis pipelines reactively. Besides, our programming framework empowers developers with the possibility to design their own workflows and integrate additional third-party software. The HTSstation web application is accessible at http://htsstation.epfl.ch. PMID:24475057
Hiatt, S. H.; Hashimoto, H.; Melton, F. S.; Michaelis, A.; Milesi, C.; Nemani, R. R.; Votava, P.; Wang, W.
Lee, Dong Uk; Won, Byung Chool; Lee, Yong Bum; Kim, Young In; Hahn, Do Hee
The SFR R and D and technology monitoring system based on MS enterprise project management is developed for systematic and effective management of the 'Development of Basic Key Technologies for Gen IV SFR' project, which was performed under the Mid- and Long-term Nuclear R and D Program sponsored by the Ministry of Education, Science and Technology. This system is a tool for project management based on web access; therefore this manual is a detailed guide for Project Web Access (PWA). Section 1 describes the common guide for using system functions such as Project Server 2007 client connection settings and additional Outlook function settings. Section 2 describes the guide for the system administrator. The guide for project management is described in sections 3 and 4.
Winterbottom, Anna; North, James
This paper describes the aims and design of an open access African Studies Repository (ASR) (http://www.africanstudiesrepository.org/) that is under development. The ASR is a relational database compatible with the open repository platform DSpace but incorporating the participatory online tools collectively known as ‘Web 2.0’. The aim of the ASR is to create a space where everyone who works on Africa, both inside and outside the continent, can store their work, access useful resources, make c...
This comprehensive guide examines the state of electronic serials cataloging with special attention paid to online capacities. E-Serials Cataloging: Access to Continuing and Integrating Resources via the Catalog and the Web presents a review of the e-serials cataloging methods of the 1990s and discusses the international standards (ISSN, ISBD[ER], AACR2) that are applicable. It puts the concept of online accessibility into historical perspective and offers a look at current applications to consider. Practicing librarians, catalogers and administrators of technical services, cataloging and serv
Full Text Available Digital forensics tools have many potential applications in the curation of digital materials in libraries, archives and museums (LAMs). Open source digital forensics tools can help LAM professionals to extract digital contents from born-digital media and make more informed preservation decisions. Many of these tools have ways to display the metadata of the digital media, but few provide file-level access without having to mount the device or use complex command-line utilities. This paper describes a project to develop software that supports access to the contents of digital media without having to mount or download the entire image. The work examines two approaches to creating this tool: first, a graphical user interface running on a local machine; second, a web-based application running in a web browser. The project incorporates existing open source forensics tools and libraries, including The Sleuth Kit and libewf, along with the Flask web application framework and custom Python scripts to generate web pages supporting disk image browsing.
Scharling, Peter; Hinsby, Klaus; Brennan, Kelsy
Geodata visualization and analysis is founded on proper access to all available data. Through several research projects, Earthfx and GEUS managed to gather relevant data from both national and local databases into one platform. The web server platform, easily accessible on the internet, displays all types of spatially distributed geodata, ranging from geochemistry, geological and geophysical well logs, and surface and airborne geophysics to any type of temporal measurement, such as water levels and trends in groundwater chemistry. Geological cross sections are an essential tool for the geoscientist. Moving beyond plan-view web mapping, GEUS and Earthfx have developed a webserver technology that lets the user dynamically interact with geologic models developed for various projects in Denmark and in transboundary aquifers across the Danish-German border. The web map interface allows the user to interactively define the location of a multi-point profile, and the selected profile is quickly drawn and illustrated as a slice through the 3D geologic model, including all borehole logs within a user-defined offset from the profile. A key aspect of the webserver technology is that the profiles are presented through a fully dynamic interface. Web users can select and interact with borehole logs contained in the underlying database, adjust vertical exaggeration, and add or remove off-section boreholes by dynamically adjusting the offset projection distance. In a similar manner to the profile tool, an interactive water level and water chemistry graphing tool has been integrated into the web service interface. Again, the focus is on providing a level of functionality beyond simple data display. Future extensions to the web interface and functionality are possible, as the web server uses the same code engine that is used for desktop geologic model construction and water quality data management. In summary, the GEUS/Earthfx web server tools
“As Bill Gates and Steve Case proclaim the global omnipresence of the Internet, the majority of non-Western nations and 97 per cent of the world's population remain unconnected to the net for lack of money, access, or knowledge. This exclusion of so vast a share of the global population from the Internet sharply contradicts the claims of those who posit the World Wide Web as a ‘universal' medium of egalitarian communication.” (Trend 2001:2)
Matykiewicz, J.; Anderson, G.; Henderson, D.; Hodgkinson, K.; Hoyt, B.; Lee, E.; Persson, E.; Torrez, D.; Smith, J.; Wright, J.; Jackson, M.
The EarthScope Plate Boundary Observatory (PBO) at UNAVCO, Inc., part of the NSF-funded EarthScope project, is designed to study the three-dimensional strain field resulting from deformation across the active boundary zone between the Pacific and North American plates in the western United States. To meet these goals, PBO will install 880 continuous GPS stations, 103 borehole strainmeter stations, and five laser strainmeters, as well as manage data for 209 previously existing continuous GPS stations and one previously existing laser strainmeter. UNAVCO provides access to data products from these stations, as well as general information about the PBO project, via the PBO web site (http://pboweb.unavco.org). GPS and strainmeter data products can be found using a variety of access methods, including map searches, text searches, and station-specific data retrieval. In addition, the PBO construction status is available via multiple mapping interfaces, including custom web-based map widgets and Google Earth. Additional construction details can be accessed from PBO operational pages and station-specific home pages. The current state of health for the PBO network is available via a statistical snapshot, full map interfaces, tabular web-based reports, and automatic data mining and alerts. UNAVCO is currently working to enhance community access to this information by developing a web service framework for the discovery of data products, interfacing with operational engineers, and exposing data services to third-party participants. In addition, UNAVCO, through the PBO project, provides advanced data management and monitoring systems for use by the community in operating geodetic networks in the United States and beyond. We will demonstrate these systems during the AGU meeting, and we welcome inquiries from the community at any time.
Luis Joyanes Aguilar; Gloria García Fernández; Oscar Sanjuán Martínez; Edward Rolando Núñez Valdez; Juan Manuel Cueva Lovelle
The significant increase in threats, attacks and vulnerabilities affecting the Web in recent years has led to the development and implementation of tools and methods to ensure the privacy, confidentiality and integrity of the data of users and businesses. Under certain circumstances, despite the implementation of these tools, information is not always transmitted in a secure manner. Many of these security tools and methods cannot be accessed by peop...
Boren, Suzanne Austin; Gunlock, Teira L.
The objective of this pilot study was to assess the accessibility of congestive heart failure consumer information on the web. Twenty-seven education trials involving 5589 patients with congestive heart failure were analyzed, and education topics and outcomes were abstracted. Twenty education topics were linked to outcomes. A sample of 15 websites missed 56.7% of the education topics and 61.8% of the technical website characteristics that suggest accuracy, reliability, and timeliness of content.
Ramya, C; Shreedhara, K S
In this paper, we propose ART1 neural network clustering algorithm to group users according to their Web access patterns. We compare the quality of clustering of our ART1 based clustering technique with that of the K-Means and SOM clustering algorithms in terms of inter-cluster and intra-cluster distances. The results show the average inter-cluster distance of ART1 is high compared to K-Means and SOM when there are fewer clusters. As the number of clusters increases, average inter-cluster distance of ART1 is low compared to K-Means and SOM which indicates the high quality of clusters formed by our approach.
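The cluster-quality measures used in the comparison can be sketched as follows. This is a minimal illustration, not the paper's code: the user-session vectors are hypothetical, and Euclidean distance between cluster centroids (inter-cluster) and from points to their own centroid (intra-cluster) is assumed.

```python
from itertools import combinations

def centroid(points):
    """Mean vector of a cluster's points."""
    n = len(points)
    return [sum(p[i] for p in points) / n for i in range(len(points[0]))]

def dist(a, b):
    """Euclidean distance between two vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def avg_inter_cluster(clusters):
    """Average pairwise distance between cluster centroids
    (higher => better separated clusters)."""
    cents = [centroid(c) for c in clusters]
    pairs = list(combinations(cents, 2))
    return sum(dist(a, b) for a, b in pairs) / len(pairs)

def avg_intra_cluster(clusters):
    """Average distance of points to their own centroid
    (lower => more compact clusters)."""
    total, count = 0.0, 0
    for c in clusters:
        cen = centroid(c)
        total += sum(dist(p, cen) for p in c)
        count += len(c)
    return total / count

# Hypothetical user-session vectors (e.g., page-visit counts) in two clusters
clusters = [[[1, 0], [1, 1]], [[5, 4], [6, 5]]]
print(avg_inter_cluster(clusters))  # large => well-separated clusters
print(avg_intra_cluster(clusters))  # small => compact clusters
```

A high inter-cluster distance together with a low intra-cluster distance is the quality criterion the abstract uses to compare ART1 against K-Means and SOM.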
McCann, M. P.
Using the STOQS Web Application for Access to in situ Oceanographic Data (Mike McCann, 7 August 2012). With the increasing measurement and sampling capabilities of autonomous oceanographic platforms (e.g. gliders, autonomous underwater vehicles, Wavegliders), the need to efficiently access and visualize the data they collect is growing. The Monterey Bay Aquarium Research Institute has designed and built the Spatial Temporal Oceanographic Query System (STOQS) specifically to address this issue. The need for STOQS arises from inefficiencies discovered in using CF-NetCDF point observation conventions for these data. The problem is that access efficiency decreases with decreasing dimension of CF-NetCDF data. For example, the Trajectory Common Data Model feature type has only one coordinate dimension, usually Time; positions of the trajectory (Depth, Latitude, Longitude) are stored as non-indexed record variables within the NetCDF file. If client software needs to access data between two depth values or from a bounded geographic area, then the whole data set must be read and the selection made within the client software. This is very inefficient. What is needed is a way to easily select data of interest from an archive given any number of spatial, temporal, or other constraints. Geospatial relational database technology provides this capability. The full STOQS application consists of a Postgres/PostGIS database, MapServer, and Python-Django running on a server, and Web 2.0 technology (jQuery, OpenLayers, Twitter Bootstrap) running in a modern web browser. The web application provides faceted search capabilities allowing a user to quickly drill into the data of interest. Data selection can be constrained by spatial, temporal, and depth selections as well as by parameter value and platform name. The web application layer also provides a REST (Representational State Transfer) Application Programming Interface allowing tools such as the Matlab stoqstoolbox to retrieve data
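The access-efficiency problem described above — selecting from a trajectory-type dataset by depth or bounding box forces the client to scan every record — can be illustrated with a toy sketch. The records and query ranges below are hypothetical; STOQS avoids exactly this pattern by pushing such selections into indexed PostGIS queries on the server.

```python
# Hypothetical trajectory records as (time, depth, lat, lon) tuples,
# standing in for non-indexed record variables in a CF-NetCDF file.
records = [
    (0, 5.0, 36.80, -121.90),
    (1, 40.0, 36.81, -121.91),
    (2, 12.0, 36.95, -122.10),
    (3, 80.0, 36.70, -121.80),
]

def select(records, depth_range, bbox):
    """Client-side selection: an O(n) scan over the whole dataset,
    the inefficiency STOQS replaces with indexed database queries."""
    dmin, dmax = depth_range
    lat0, lat1, lon0, lon1 = bbox
    return [r for r in records
            if dmin <= r[1] <= dmax
            and lat0 <= r[2] <= lat1
            and lon0 <= r[3] <= lon1]

hits = select(records, (0, 50), (36.75, 36.90, -122.00, -121.85))
print(hits)  # records 0 and 1 match
```

With a spatial index, the database can answer the same constraint without touching rows outside the bounding box, which is why access efficiency no longer degrades with dataset size.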
Taneja, Harsh; Wu, Angela Xiao
The dominant understanding of Internet censorship posits that blocking access to foreign-based websites creates isolated communities of Internet users. We question this discourse for its assumption that if given access people would use all websites. We develop a conceptual framework that integrates access blockage with social structures to explain web users' choices, and argue that users visit websites they find culturally proximate and access blockage matters only when such sites are blocked...
de Filippis, Tiziana; Rocchi, Leandro; Rapisardi, Elena
The sharing of research data is a new challenge for the scientific community, which may benefit from a large amount of information to address environmental issues and sustainability in agricultural and urban contexts. Prerequisites for this challenge are an infrastructure that ensures access, management and preservation of data, and technical support for a coordinated and harmonious management of data that, in the framework of Open Data policies, encourages reuse and collaboration. The neogeography and 'citizens as sensors' approaches highlight that new data sources need a new set of tools and practices to collect, validate, categorize, and use or access these "crowdsourced" data, which complement the data sets produced in the scientific field, thus "feeding" the overall data available for analysis and research. When the scientific community embraces the dimension of collaboration and sharing, access and reuse, in order to adopt the open innovation approach, it should redesign and reshape its data management processes: the challenges of technological and cultural innovation, enabled by Web 2.0 technologies, lead to a scenario where the sharing of structured and interoperable data will be the unavoidable building block of a new paradigm of scientific research. In this perspective the Institute of Biometeorology, CNR, whose aim is to contribute to the sharing and development of research data, has developed the "SensorWebHub" (SWH) infrastructure to support the scientific activities carried out in several research projects at national and international level. It is designed to manage both mobile and fixed open source meteorological and environmental sensors, in order to integrate the existing agro-meteorological and urban monitoring networks. The proposed architecture uses open source tools to ensure sustainability in the development and deployment of web applications with geographic features and custom analysis, as requested
von Haller, B.; Carena, F.; Carena, W.; Chapeland, S.; Chibante Barroso, V.; Costa, F.; Delort, C.; Dénes, E.; Diviá, R.; Fuchs, U.; Niedziela, J.; Simonetti, G.; Soós, C.; Telesca, A.; Vande Vyvre, P.; Wegrzynek, A.
Asnier Góngora R.
Full Text Available The article concerns the creation of a usability lab where various types of tests are performed, using static and dynamic tools, to evaluate usability, accessibility and communicability through indicators in a software testing process with the user present. It also addresses the current situation in Cuba regarding the testing of these characteristics and the impact it could have on development teams. In addition, it presents an analysis of the results of applying a tool (a checklist) to multiple Web applications in tests conducted at the National Center for Software Quality of Cuba (CALISOFT). We also present a set of best practices that support the development of web applications suited to the user.
Dowling, Nicki A; Rodda, Simone N; Lubman, Dan I; Jackson, Alun C
The 'concerned significant others' (CSOs) of people with problem gambling frequently seek professional support. However, there is surprisingly little research investigating the characteristics or help-seeking behaviour of these CSOs, particularly for web-based counselling. The aims of this study were to describe the characteristics of CSOs accessing the web-based counselling service (real time chat) offered by the Australian national gambling web-based counselling site, explore the most commonly reported CSO impacts using a new brief scale (the Problem Gambling Significant Other Impact Scale: PG-SOIS), and identify the factors associated with different types of CSO impact. The sample comprised all 366 CSOs accessing the service over a 21 month period. The findings revealed that the CSOs were most often the intimate partners of problem gamblers and that they were most often females aged under 30 years. All CSOs displayed a similar profile of impact, with emotional distress (97.5%) and impacts on the relationship (95.9%) reported to be the most commonly endorsed impacts, followed by impacts on social life (92.1%) and finances (91.3%). Impacts on employment (83.6%) and physical health (77.3%) were the least commonly endorsed. There were few significant differences in impacts between family members (children, partners, parents, and siblings), but friends consistently reported the lowest impact scores. Only prior counselling experience and Asian cultural background were consistently associated with higher CSO impacts. The findings can serve to inform the development of web-based interventions specifically designed for the CSOs of problem gamblers. PMID:24813552
Faraggi, Eshel; Zhou, Yaoqi; Kloczkowski, Andrzej
We present a new approach for predicting the Accessible Surface Area (ASA) using a General Neural Network (GENN). The novelty of the new approach lies in not using residue mutation profiles generated by multiple sequence alignments as descriptive inputs. Instead we use solely sequential window information and global features such as single-residue and two-residue compositions of the chain. The resulting predictor is both far more efficient than sequence alignment-based predictors and of comparable accuracy to them. Introduction of the global inputs significantly helps achieve this comparable accuracy. The predictor, termed ASAquick, is tested on predicting the ASA of globular proteins and is found to perform similarly well for so-called easy and hard cases, indicating generalizability and possible usability for de novo protein structure prediction. The source code and Linux executables for GENN and ASAquick are available from Research and Information Systems at http://mamiris.com, from the SPARKS Lab at http://sparks-lab.org, and from the Battelle Center for Mathematical Medicine at http://mathmed.org. PMID:25204636
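The global composition inputs mentioned above can be sketched as follows. This is an illustration of the idea only: the exact feature encoding used by GENN/ASAquick is not specified here, so details such as counting ordered adjacent residue pairs are assumptions.

```python
from itertools import product

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def single_residue_composition(seq):
    """Fraction of each amino acid in the chain (20 global features)."""
    n = len(seq)
    return {aa: seq.count(aa) / n for aa in AMINO_ACIDS}

def two_residue_composition(seq):
    """Fraction of each ordered residue pair among adjacent pairs
    in the chain (400 global features)."""
    pairs = [seq[i:i + 2] for i in range(len(seq) - 1)]
    n = len(pairs)
    return {a + b: pairs.count(a + b) / n
            for a, b in product(AMINO_ACIDS, repeat=2)}

# Toy chain; a real input would be a full protein sequence
comp = single_residue_composition("ACCA")
print(comp["A"], comp["C"])  # 0.5 0.5
```

Such fixed-length global vectors can be concatenated with the sliding-window features, avoiding the expensive multiple sequence alignment step.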
Price, Matthew; Yuen, Erica K; Davidson, Tatiana M; Hubel, Grace; Ruggiero, Kenneth J
Although Web-based treatments have significant potential to assess and treat difficult-to-reach populations, such as trauma-exposed adolescents, the extent that such treatments are accessed and used is unclear. The present study evaluated the proportion of adolescents who accessed and completed a Web-based treatment for postdisaster mental health symptoms. Correlates of access and completion were examined. A sample of 2,000 adolescents living in tornado-affected communities was assessed via structured telephone interview and invited to a Web-based treatment. The modular treatment addressed symptoms of posttraumatic stress disorder, depression, and alcohol and tobacco use. Participants were randomized to experimental or control conditions after accessing the site. Overall access for the intervention was 35.8%. Module completion for those who accessed ranged from 52.8% to 85.6%. Adolescents with parents who used the Internet to obtain health-related information were more likely to access the treatment. Adolescent males were less likely to access the treatment. Future work is needed to identify strategies to further increase the reach of Web-based treatments to provide clinical services in a postdisaster context. PMID:25622071
Baker, Stewart C.
This article argues that accessibility and universality are essential to good Web design. A brief review of library science literature sets the issue of Web accessibility in context. The bulk of the article explains the design philosophies of progressive enhancement and responsive Web design, and summarizes recent updates to WCAG 2.0, HTML5, CSS…
Kadlec, J.; Ames, D. P.
The aim of the presented work is creating a freely accessible, dynamic and re-usable snow cover map of the world by combining snow extent and snow depth datasets from multiple sources. The examined data sources are: remote sensing datasets (MODIS, CryoLand), weather forecasting model outputs (OpenWeatherMap, forecast.io), ground observation networks (CUAHSI HIS, GSOD, GHCN, and selected national networks), and user-contributed snow reports on social networks (cross-country and backcountry skiing trip reports). For adding each type of dataset, an interface and an adapter is created. Each adapter supports queries by area, time range, or combination of area and time range. The combined dataset is published as an online snow cover mapping service. This web service lowers the learning curve that is required to view, access, and analyze snow depth maps and snow time-series. All data published by this service are licensed as open data; encouraging the re-use of the data in customized applications in climatology, hydrology, sports and other disciplines. The initial version of the interactive snow map is on the website snow.hydrodata.org. This website supports the view by time and view by site. In view by time, the spatial distribution of snow for a selected area and time period is shown. In view by site, the time-series charts of snow depth at a selected location is displayed. All snow extent and snow depth map layers and time series are accessible and discoverable through internationally approved protocols including WMS, WFS, WCS, WaterOneFlow and WaterML. Therefore they can also be easily added to GIS software or 3rd-party web map applications. The central hypothesis driving this research is that the integration of user contributed data and/or social-network derived snow data together with other open access data sources will result in more accurate and higher resolution - and hence more useful snow cover maps than satellite data or government agency produced data by
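The per-source adapter pattern described above, with queries by area, time range, or both, might look like the minimal sketch below. The class names and record layout are hypothetical, not the project's actual code.

```python
from abc import ABC, abstractmethod

class SnowSourceAdapter(ABC):
    """Common interface each snow data source adapter implements,
    supporting queries by area, time range, or both."""

    @abstractmethod
    def query(self, area=None, time_range=None):
        """Return snow records filtered by an optional bounding box
        (lat0, lat1, lon0, lon1) and/or a (start, end) time range."""

class InMemoryAdapter(SnowSourceAdapter):
    """Toy adapter over hypothetical (time, lat, lon, snow_depth_cm) records;
    a real adapter would wrap MODIS, GSOD, or a social-network feed."""

    def __init__(self, records):
        self.records = records

    def query(self, area=None, time_range=None):
        out = self.records
        if area is not None:
            lat0, lat1, lon0, lon1 = area
            out = [r for r in out if lat0 <= r[1] <= lat1 and lon0 <= r[2] <= lon1]
        if time_range is not None:
            t0, t1 = time_range
            out = [r for r in out if t0 <= r[0] <= t1]
        return out

adapter = InMemoryAdapter([(1, 60.0, 10.0, 25), (2, 61.0, 11.0, 0), (5, 60.5, 10.5, 40)])
result = adapter.query(area=(60.0, 60.9, 9.5, 10.9), time_range=(0, 4))
print(result)
```

Because every source answers the same `query` interface, the combined mapping service can merge results from all adapters without caring which dataset each record came from.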
Bakker, Rembrandt; Tiesinga, Paul; Kötter, Rolf
The Scalable Brain Atlas (SBA) is a collection of web services that provide unified access to a large collection of brain atlas templates for different species. Its main component is an atlas viewer that displays brain atlas data as a stack of slices in which stereotaxic coordinates and brain regions can be selected. These are subsequently used to launch web queries to resources that require coordinates or region names as input. It supports plugins which run inside the viewer and respond when a new slice, coordinate or region is selected. It contains 20 atlas templates in six species, and plugins to compute coordinate transformations, display anatomical connectivity and fiducial points, and retrieve properties, descriptions, definitions and 3d reconstructions of brain regions. The ambition of SBA is to provide a unified representation of all publicly available brain atlases directly in the web browser, while remaining a responsive and light weight resource that specializes in atlas comparisons, searches, coordinate transformations and interactive displays. PMID:25682754
Scholl, I.; Girard, Y.; Bykowski, A.
This paper presents the architecture of a Java web-based graphical interface dedicated to access to the SOHO data archive. This application allows local and remote users to search the SOHO data catalog and retrieve SOHO data files from the archive. It has been developed at MEDOC (Multi-Experiment Data and Operations Centre), located at the Institut d'Astrophysique Spatiale (Orsay, France), which is one of the European archives for the SOHO data. This development is part of a joint effort between ESA, NASA and IAS to implement long-term archive systems for the SOHO data. The software architecture is built as a client-server application using the Java language and SQL above a set of components such as an HTTP server, a JDBC gateway, an RDBMS server, a data server and a Web browser. Since HTML pages and CGI scripts are not powerful enough to allow user interaction during a multi-instrument catalog search, this requirement enforces the choice of Java as the main language. We also discuss performance issues, security problems and portability across different Web browsers and operating systems.
Eberle, Jonas; Hüttich, Christian; Schmullius, Christiane
Time series information is widely used in environmental change analyses and is also essential information for stakeholders and governmental agencies. However, a challenging issue is the processing of raw data and the execution of time series analysis. In most cases, data have to be found, downloaded, processed and even converted into the correct data format prior to executing time series analysis tools. Data have to be prepared for use in different existing software packages. Several packages, like TIMESAT (Jönsson & Eklundh, 2004) for phenological studies, BFAST (Verbesselt et al., 2010) for breakpoint detection, and GreenBrown (Forkel et al., 2013) for trend calculations, are provided as open-source software and can be executed from the command line. This is needed if data pre-processing and time series analysis are to be automated. To bring both parts, automated data access and data analysis, together, a web-based system was developed to provide access to satellite-based time series data and to the above-mentioned analysis tools. Users of the web portal can specify a point or a polygon and an available dataset (e.g., Vegetation Indices and Land Surface Temperature datasets from NASA MODIS). The data are then processed and provided as a time series CSV file. Afterwards the user can select an analysis tool to be executed on the server. The final data (CSV, plot images, GeoTIFFs) are visualized in the web portal and can be downloaded for further usage. As a first use case, we built a complementary web-based system with NASA MODIS products for Germany and parts of Siberia, based on the Earth Observation Monitor (www.earth-observation-monitor.net). The aim of this work is to make time series analysis with existing tools as easy as possible, so that users can focus on the interpretation of the results. References: Jönsson, P. and L. Eklundh (2004). TIMESAT - a program for analysing time-series of satellite sensor data. Computers and Geosciences 30
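As a minimal illustration of the kind of statistic the analysis step produces, a least-squares trend slope over a regularly sampled series can be computed as below. The cited tools are far richer than this; the values are hypothetical NDVI-like numbers, one per period, such as the portal's CSV export might contain.

```python
def trend_slope(values):
    """Ordinary least-squares slope of a regularly sampled series;
    a minimal stand-in for the trend statistics that tools like
    GreenBrown compute on the exported CSV time series."""
    n = len(values)
    xs = range(n)
    mean_x = (n - 1) / 2
    mean_y = sum(values) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, values))
    den = sum((x - mean_x) ** 2 for x in xs)
    return num / den

# Hypothetical vegetation-index values extracted for a point
print(trend_slope([0.30, 0.32, 0.34, 0.36]))  # 0.02 per period: greening
```

Running such analyses server-side, on data the portal has already fetched and formatted, is what spares users the download-convert-analyze cycle described above.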
Mukhopadhyay, Debajyoti; Saha, Dwaipayan; Kim, Young-Chon
The growth of the World Wide Web has emphasized the need for improvement in user latency. One technique used to improve user latency is caching; another is Web prefetching. Approaches that rely solely on caching offer limited performance improvement because it is difficult for caching to handle the large number of increasingly diverse files. Studies have been conducted on prefetching models based on decision trees, Markov chains, and path analysis. However, the increased use of dynamic pages and frequent changes in site structure and user access patterns have limited the efficacy of these static techniques. In this paper, we propose a methodology to cluster related pages into different categories based on access patterns. Additionally, we use page ranking to build our prediction model at the initial stage, before users have started sending requests. In this way we try to overcome the problem of maintaining the huge databases needed by log-based techn...
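The page-ranking signal used to seed the prediction model before request logs accumulate can be sketched with a minimal PageRank iteration. The link graph below is hypothetical, and the paper's exact ranking formulation may differ.

```python
def page_rank(links, damping=0.85, iterations=50):
    """Minimal PageRank iteration over a link graph; a sketch of the
    cold-start ranking signal. `links` maps each page to the pages
    it links to."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iterations):
        new = {p: (1 - damping) / n for p in pages}
        for p, outs in links.items():
            if not outs:  # dangling page: spread its rank evenly
                for q in pages:
                    new[q] += damping * rank[p] / n
            else:
                for q in outs:
                    new[q] += damping * rank[p] / len(outs)
        rank = new
    return rank

# Hypothetical site structure: two subpages, each linking back to "home"
ranks = page_rank({"home": ["a", "b"], "a": ["home"], "b": ["home"]})
print(max(ranks, key=ranks.get))  # "home" ranks highest
```

Prefetching the highest-ranked pages first gives the predictor something sensible to do before any per-user access patterns exist.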
Ebenezer, Catherine; Bath, Peter A.; Pinfield, Stephen
1. Introduction The research project as a whole examines the factors that bear on the accessibility of online published professional information within the National Health Service (NHS) in England. The poster focuses on one aspect of this, control of access to the World Wide Web within NHS organisations. The overall aim of this study is to investigate the apparent disjunction between stated policy regarding evidence-based practice and professional learning, and actual IT (information te...
刘建伟; 李斌; 雷宏东
ASP is a popular Web application development technology. The key to ASP access to a Web database lies in establishing a connection to the database. This article describes the basic working principles of ASP and the database access component ADO, and gives a detailed description of several methods for connecting to an Access database through ADO.
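As an illustration of the connection-string idea in a modern setting, here is a sketch in Python rather than ASP/VBScript. The file path and ODBC driver name are hypothetical, and the actual `pyodbc.connect` call is shown commented out because it requires a real Access driver and database file.

```python
def access_conn_str(db_path,
                    driver="Microsoft Access Driver (*.mdb, *.accdb)"):
    """Build an ODBC connection string for a Microsoft Access database,
    the same role the ADO connection string plays in classic ASP."""
    return f"DRIVER={{{driver}}};DBQ={db_path};"

# Hypothetical database path
conn_str = access_conn_str(r"C:\data\site.accdb")
print(conn_str)

# With the driver and file present, the connection would be opened as:
# import pyodbc
# conn = pyodbc.connect(conn_str)
```

Whether from ASP via ADO or from Python via ODBC, the pattern is the same: build a driver-plus-path connection string, open the connection, then issue SQL against it.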
This thesis reports on the user-interface design guidelines for usability and accessibility in their connection to human-computer interaction and their implementation in the web design. The goal is to study the theoretical background of the design rules and apply them in designing a real-world website. The analysis of Jakobson’s communication theory applied in the web design and its implications in the design guidelines of visibility, affordance, feedback, simplicity, structure, consisten...
Wagner, Michael M.; Levander, John D.; Brown, Shawn; Hogan, William R.; Millett, Nicholas; Hanna, Josh
This paper describes the Apollo Web Services and Apollo-SV, its related ontology. The Apollo Web Services give an end-user application a single point of access to multiple epidemic simulators. An end user can specify an analytic problem—which we define as a configuration and a query of results—exactly once and submit it to multiple epidemic simulators. The end user represents the analytic problem using a standard syntax and vocabulary, not the native languages of the simulators. We have demonstrated the feasibility of this design by implementing a set of Apollo services that provide access to two epidemic simulators and two visualizer services. PMID:24551417
Harish Kumar; Anil Kumar Solanki
The Internet grows at an amazing rate as an information gateway and as a medium for business and the education industry. Universities with web-based education rely on web usage analysis to obtain student behaviour for web marketing. Web Usage Mining (WUM) integrates the techniques of two popular research fields, Data Mining and the Internet. Web usage mining attempts to discover useful knowledge from secondary data (Web logs). These data patterns are used to analyze visitor activities in the...
Kunszt, Peter Z; Murri, Riccardo; Tschopp, Valery
This paper describes the design and implementation of GridCertLib, a Java library leveraging a Shibboleth-based authentication infrastructure and the SLCS online certificate signing service to provide short-lived X.509 certificates and Grid proxies. The main use case envisioned for GridCertLib is to provide seamless and secure access to Grid/X.509 certificates and proxies in web portals: when a user logs in to the portal using SWITCHaai Shibboleth authentication, GridCertLib can automatically obtain a Grid/X.509 certificate from the SLCS service and generate a VOMS proxy from it. We give an overview of the architecture of GridCertLib and briefly describe its programming model. Application to common deployment scenarios is outlined, and we report on our practical experience integrating GridCertLib into a portal for Bioinformatics applications based on the popular P-GRADE software.
Full Text Available Cataloguing does not simply mean building a catalogue. It means ensuring that users gain timely access to the information relevant to their needs. The work of identifying the resources collected by libraries, archives and museums produces rich metadata that can be reused for many purposes ("user tasks"). This involves describing resources and showing their relationships with persons, families, corporate bodies and other resources, thus enabling users to navigate through resource surrogates to obtain the information they need more quickly. Metadata built throughout the life cycle of a resource are particularly valuable to many types of users: resource creators, publishers, agencies, booksellers, resource aggregators, system vendors, libraries, other cultural institutions and end users. The new international cataloguing code, RDA: Resource Description and Access, is designed to support the basic user tasks by producing well-formed, interconnected metadata for the digital environment, enabling libraries to remain relevant in the semantic web. Acknowledgement: The English text of the essay is published in "Serials", November 2011, 24, 3, under the title Keeping Libraries Relevant in the Semantic Web with RDA: Resource Description and Access, DOI: http://dx.doi.org/10.1629/24266. Italian translation by Maria Chiara Iorio and Tiziana Possemato, who thank Carlo Bianchini and Mauro Guerrini for reviewing the translation.
The past decade has seen an 'explosion' in electronically archived evidence available on the Internet. Access to, and awareness of, pre-appraised web based evidence such as is available at the Cochrane Library, and more recently the Cancer Library, is now easily accessible to both clinicians and patients. A postal survey was recently sent out to all Radiation Oncology registrars in Australia, New Zealand and Singapore. The aim of the survey was to ascertain previous training in literature searching and critical appraisal, the extent of Internet access and use of web based evidence and awareness of databases including the Cochrane Library. Sixty six (66) out of ninety (90) registrars responded (73% response rate). Fifty five percent of respondents had previously undertaken some form of training related to literature searching or critical appraisal. The majority (68%) felt confident in performing a literature search, although 80% of respondents indicated interest in obtaining further training. The majority (68%) reported accessing web-based evidence for literature searching in the previous week, and 92% in the previous month. Nearly all respondents (89%) accessed web-based evidence at work. Most (94%) were aware of the Cochrane Library with 48% of respondents having used this database. Sixty-eight percent were aware of the Cancer Library. In 2000 a similar survey revealed only 68% of registrars aware and 30% having used the Cochrane Library. These findings reveal almost universal access to the Internet and use of web-based evidence amongst Radiation Oncology registrars. There has been a marked increase in awareness and use of the Cochrane Library with the majority also aware of the recently introduced Cancer Library
Niu, Lu; Luo, Dan; Liu, Ying; Xiao, Shuiyuan
Objective: The present study was designed to assess the quality of Chinese-language Internet-based information on HIV/AIDS. Methods: We entered the following search terms, in Chinese, into Baidu and Sogou: "HIV/AIDS", "symptoms", and "treatment", and evaluated the first 50 hits of each query using the Minervation validation instrument (LIDA tool) and the DISCERN instrument. Results: Of the 900 hits identified, 85 websites were included in this study. The overall score of the LIDA tool was 63.7%; the mean scores for accessibility, usability, and reliability were 82.2%, 71.5%, and 27.3%, respectively. For the top 15 sites according to the LIDA score, the mean DISCERN score was 43.1 (95% confidence interval (CI) = 37.7–49.5). Noncommercial websites showed higher DISCERN scores than commercial websites, whereas commercial websites were more likely than noncommercial websites to appear in the first 20 links returned by each search engine. Conclusions: In general, HIV/AIDS-related Chinese-language websites have poor reliability, although their accessibility and usability are fair. In addition, the treatment information presented on Chinese-language websites is far from sufficient. There is an imperative need for professionals and specialized institutes to improve the comprehensiveness of web-based information related to HIV/AIDS. PMID:27556475
Xavier Suresh R
Full Text Available Abstract Background Many important agricultural traits such as weight gain, milk fat content and intramuscular fat (marbling) in cattle are quantitative traits. Most of the information on these traits has not previously been integrated into a genomic context. Without such integration, application of these data to agricultural enterprises will remain slow and inefficient. Our goal was to populate a genomic database with data mined from the bovine quantitative trait literature and to make these data available in a genomic context to researchers via a user-friendly query interface. Description The QTL (Quantitative Trait Locus) data and related information for bovine QTL are gathered from published work and from existing databases. An integrated database schema was designed and the database (MySQL) populated with the gathered data. The Bovine QTL Viewer was developed for the integration of QTL data available for cattle. The tool consists of an integrated database of bovine QTL and the QTL viewer to display QTL and their chromosomal positions. Conclusion We present a web-accessible, integrated database of bovine (dairy and beef cattle) QTL for use by animal geneticists. The viewer and database are of general applicability to any livestock species for which there are public QTL data. The viewer can be accessed at http://bovineqtl.tamu.edu.
Kannan, Jayanthkumar; Chun, Byung-Gon
This paper introduces the notion of a secure data capsule, which refers to an encapsulation of sensitive user information (such as a credit card number) along with code that implements an interface suitable for the use of such information (such as charging for purchases) by a service (such as an online merchant). In our capsule framework, users provide their data in the form of such capsules to web services rather than as raw data. Capsules can be deployed in a variety of ways: on a trusted third party, on the user's own computer, or at the service itself, through the use of a variety of hardware or software modules, such as a virtual machine monitor or trusted platform module. The only requirement is that the deployment mechanism must ensure that the user's data is accessed only via the interface sanctioned by the user. The framework further allows a user to specify policies regarding which services or machines may host her capsule, which parties are allowed to access the interface, and with what parameter...
The central premise of this research is that blind and visually impaired (BVI) people cannot use the Internet effectively due to accessibility and usability problems. Use of the Internet is indispensable in today's education system that relies on Web-enhanced instruction (WEI). Therefore, BVI students cannot participate effectively in WEI. Extant…
Metacognitive strategies are regarded as the most advanced of the learning strategies. This study focuses on the application of metacognitive strategies to English listening in the web-based self-access learning environment (WSLE) and tries to provide some references for students and teachers in vocational colleges.
Full Text Available This study develops a novel method for learning mechatronics using remote monitoring and control, based on a programmable logic controller (PLC) and WebAccess. A mechatronics module, a Web-CAM and a PLC were integrated with WebAccess software to organize a remote laboratory. The proposed system enables users to access the Internet for remote monitoring and control of the mechatronics module via a web browser, thereby enhancing work flexibility by enabling personnel to control mechatronics equipment from a remote location. Mechatronics control and long-distance monitoring were realized by establishing communication between the PLC and WebAccess. Analytical results indicate that the proposed system is feasible. The suitability of this system is demonstrated in the Department of Industrial Education and Technology at National Changhua University of Education, Taiwan. Preliminary evaluation of the system was encouraging and has shown that it has achieved success in helping students understand concepts and master remote monitoring and control techniques.
Mattson, E.; Versteeg, R.; Ankeny, M.; Stormberg, G.
This work presents the ScalaBLAST Web Application (SWA), a web based application implemented using the PHP script language, MySQL DBMS, and Apache web server under a GNU/Linux platform. SWA is an application built as part of the Data Intensive Computer for Complex Biological Systems (DICCBS) project at the Pacific Northwest National Laboratory (PNNL). SWA delivers accelerated throughput of bioinformatics analysis via high-performance computing through a convenient, easy-to-use web interface. This approach greatly enhances emerging fields of study in biology such as ontology-based homology, and multiple whole genome comparisons which, in the absence of a tool like SWA, require a heroic effort to overcome the computational bottleneck associated with genome analysis. The current version of SWA includes a user account management system, a web based user interface, and a backend process that generates the files necessary for the Internet scientific community to submit a ScalaBLAST parallel processing job on a dedicated cluster
Singh, Kulwinder; Park, Dong-Won
We propose an architecture which enables people to enquire about information available in directory services by voice, using regular phones. We implement a Virtual User Agent (VUA) which mediates between the human user and a business directory service. The system enables the user to search for the nearest clinic, gas station by price, motel by price, food/coffee, banks/ATMs, etc., and to fix an appointment, or to automatically establish a call between the user and the business party if the user prefers. The user also has the option to receive appointment confirmation by phone, SMS, or e-mail. The VUA is accessible through a toll-free DID (Direct Inward Dialing) number from any phone, by anyone, anywhere, anytime. We use the Euclidean formula for distance measurement, since shorter geodesic distances (on the Earth's surface) correspond to shorter Euclidean distances (measured along a straight line through the Earth). Our proposed architecture uses the Atom XML syndication format for data integration, VoiceXML for creating the voice user interface (VUI) and CCXML for controlling the call components. We also provide an efficient algorithm for parsing the Atom feeds which supply data to the system. Moreover, we describe a cost-effective way to provide global access to the VUA based on Asterisk (an open source IP-PBX). We also describe how our system can be integrated with GPS to locate the user's coordinates and thereby enhance the system's responsiveness. Additionally, the system has a mechanism for validating the phone numbers in its database, and it automatically updates those numbers and other information, such as daily gas and motel prices, using an Atom-based feed. Current commercial directory services (e.g., 411) have no facility to update their listings automatically, which is why callers often get out-of-date phone numbers or other information. Our system can be integrated very easily
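The distance argument above can be sketched in a few lines of Python. This is a hypothetical illustration, not the paper's code; the coordinates and business names are invented. Latitude/longitude pairs are converted to 3D Cartesian coordinates, and candidates are ranked by straight-line (chord) distance, which preserves the nearest-neighbor ordering of geodesic distances:

```python
import math

EARTH_RADIUS_KM = 6371.0  # mean Earth radius

def to_cartesian(lat_deg, lon_deg):
    """Convert latitude/longitude in degrees to 3D Cartesian coordinates (km)."""
    lat, lon = math.radians(lat_deg), math.radians(lon_deg)
    return (EARTH_RADIUS_KM * math.cos(lat) * math.cos(lon),
            EARTH_RADIUS_KM * math.cos(lat) * math.sin(lon),
            EARTH_RADIUS_KM * math.sin(lat))

def chord_distance(a, b):
    """Euclidean distance measured along a straight line through the Earth."""
    ax, ay, az = to_cartesian(*a)
    bx, by, bz = to_cartesian(*b)
    return math.sqrt((ax - bx) ** 2 + (ay - by) ** 2 + (az - bz) ** 2)

# Rank hypothetical businesses by distance from the caller's position.
caller = (36.35, 127.38)  # invented example coordinates
businesses = {"clinic_a": (36.37, 127.39), "clinic_b": (36.30, 127.50)}
nearest = min(businesses, key=lambda k: chord_distance(caller, businesses[k]))
```

Because the chord through the Earth grows monotonically with the great-circle arc, ranking by chord distance gives the same nearest result as ranking by geodesic distance, while avoiding any trigonometric arc computation per comparison.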
Floden, Evan W; Tommaso, Paolo D; Chatzou, Maria; Magis, Cedrik; Notredame, Cedric; Chang, Jia-Ming
The PSI/TM-Coffee web server performs multiple sequence alignment (MSA) of proteins by combining homology extension with a consistency based alignment approach. Homology extension is performed with Position Specific Iterative (PSI) BLAST searches against a choice of redundant and non-redundant databases. The main novelty of this server is to allow databases of reduced complexity to rapidly perform homology extension. This server also gives the possibility to use transmembrane proteins (TMPs) reference databases to allow even faster homology extension on this important category of proteins. Aside from an MSA, the server also outputs topological prediction of TMPs using the HMMTOP algorithm. Previous benchmarking of the method has shown this approach outperforms the most accurate alignment methods such as MSAProbs, Kalign, PROMALS, MAFFT, ProbCons and PRALINE™. The web server is available at http://tcoffee.crg.cat/tmcoffee. PMID:27106060
Zinzi, Angelo; Capria, Maria Teresa; Palomba, Ernesto; Antonelli, Lucio Angelo; Giommi, Paolo
In recent years, planetary exploration missions have acquired data from minor bodies (i.e., dwarf planets, asteroids and comets) at a level of detail never reached before. Since these objects often have very irregular shapes (as in the case of comet 67P/Churyumov-Gerasimenko, the target of the ESA Rosetta mission), "classical" two-dimensional projections of observations are difficult to understand. With the aim of providing the scientific community a tool to access, visualize and analyze data in a new way, the ASI Science Data Center started to develop MATISSE (Multi-purposed Advanced Tool for the Instruments for the Solar System Exploration - http://tools.asdc.asi.it/matisse.jsp) in late 2012. This tool allows 3D web-based visualization of data acquired by planetary exploration missions: the output can be either the straightforward projection of the selected observation onto the shape model of the target body or the visualization of a higher-order product (average/mosaic, difference, ratio, RGB) computed directly online with MATISSE. Standard outputs of the tool also comprise downloadable files to be used with GIS software (GeoTIFF and ENVI formats) and very-high-resolution 3D files to be viewed with the free software Paraview. So far, the tool has most frequently been used to visualize data acquired by the VIRTIS-M instrument onboard Rosetta while observing comet 67P. The success of this task, well represented by the good number of published works that used images made with MATISSE, confirmed the need for a different approach to correctly visualize data coming from irregularly shaped bodies. In the near future the datasets available to MATISSE are planned to be extended, starting with the addition of VIR-Dawn observations of both Vesta and Ceres, and also using standard protocols to access data stored in external repositories, such as NASA ODE and the Planetary VO.
Groenewegen, D.; Visser, E.
Preprint of paper published in: ICWE 2008 - 8th International Conference on Web Engineering, 14-18 July 2008; doi:10.1109/ICWE.2008.15. In this paper, we present the extension of WebDSL, a domain-specific language for web application development, with abstractions for the declarative definition of access control.
O'Neil, Daniel A.
Large scale space programs analyze thousands of requirements while mitigating safety, performance, schedule, and cost risks. These efforts involve a variety of roles with interdependent use cases and goals. For example, study managers and facilitators identify ground rules and assumptions for a collection of studies required for a program or project milestone. Task leaders derive product requirements from the ground rules and assumptions and describe activities to produce needed analytical products. Discipline specialists produce the specified products and load results into a file management system. Organizational and project managers provide the personnel and funds to conduct the tasks. Each role has responsibilities to establish information linkages and provide status reports to management. Projects conduct design and analysis cycles to refine designs to meet the requirements and implement risk mitigation plans. At the program level, integrated design and analysis cycle studies are conducted to eliminate every 'to-be-determined' and develop plans to mitigate every risk. At the agency level, strategic studies analyze different approaches to exploration architectures and campaigns. This paper describes a web-accessible database developed by NASA to coordinate and manage tasks at three organizational levels. Other topics in this paper cover integration technologies and techniques for process modeling and enterprise architectures.
Xue, Zhiyun; Long, L. Rodney; Antani, Sameer; Jeronimo, Jose; Thoma, George R.
Content-based image retrieval (CBIR) is the process of retrieving images by directly using image visual characteristics. In this paper, we present a prototype system implemented for CBIR for a uterine cervix image (cervigram) database. This cervigram database is a part of data collected in a multi-year longitudinal effort by the National Cancer Institute (NCI), and archived by the National Library of Medicine (NLM), for the study of the origins of, and factors related to, cervical precancer/cancer. Users may access the system with any Web browser. The system is built with a distributed architecture which is modular and expandable; the user interface is decoupled from the core indexing and retrieving algorithms, and uses open communication standards and open source software. The system tries to bridge the gap between a user's semantic understanding and image feature representation, by incorporating the user's knowledge. Given a user-specified query region, the system returns the most similar regions from the database, with respect to attributes of color, texture, and size. Experimental evaluation of the retrieval performance of the system on "groundtruth" test data illustrates its feasibility to serve as a possible research tool to aid the study of the visual characteristics of cervical neoplasia.
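The abstract above describes ranking database regions by similarity to a query region in color, texture, and size. A minimal sketch of the color component only, using coarse color histograms and histogram intersection; this is an assumed, illustrative measure, not NLM's actual algorithm, and the pixel data is invented:

```python
def histogram(pixels, bins=4):
    """Coarse color histogram: fraction of pixels per quantized (r, g, b) bin."""
    counts = {}
    for r, g, b in pixels:
        key = (r * bins // 256, g * bins // 256, b * bins // 256)
        counts[key] = counts.get(key, 0) + 1
    total = float(len(pixels))
    return {k: v / total for k, v in counts.items()}

def intersection(h1, h2):
    """Histogram intersection similarity in [0, 1]; 1 means identical."""
    return sum(min(h1.get(k, 0.0), h2.get(k, 0.0)) for k in set(h1) | set(h2))

# Query region: mostly red with some blue (synthetic pixel lists).
query = histogram([(200, 30, 30)] * 8 + [(30, 30, 200)] * 2)
candidates = {
    "region_a": histogram([(210, 40, 20)] * 9 + [(25, 25, 210)] * 1),  # similar
    "region_b": histogram([(30, 200, 30)] * 10),                        # green
}
best = max(candidates, key=lambda k: intersection(query, candidates[k]))
```

A real CBIR system would combine such a color score with texture and size descriptors, but the retrieval step, scoring every stored region against the query and returning the top matches, has this same shape.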
Chernenkov, V. N.; Vitkovskij, V. V.; Kalinina, N. A.
The state and speed characteristics of Web access to the first five nodes of the projected geographically distributed system of scientific monitoring of near and deep space are analyzed. The possibility of developing an architecture involving user query redirection to a caching server is studied. This will make it possible to relieve hardware communication links substantially and speed up HTTP connection time, especially for nodes linked via heavily congested Internet links.
Lemaire, E D; Deforge, D; Marshall, S; Curran, D
A web-based transitional health record was created to provide regional healthcare professionals with ubiquitous access to information on people with brain injuries as they move through the healthcare system. Participants included public, private, and community healthcare organizations/providers in Eastern Ontario (Canada). One hundred and nineteen service providers and 39 brain injury survivors registered over 6 months. Fifty-eight percent received English and 42% received bilingual services (English-French). Public health providers contacted the regional service coordinator more than private providers (52% urban centres, 26% rural service providers, and 22% both areas). Thirty-five percent of contacts were for technical difficulties, 32% registration inquiries, 21% forms and processes, 6% resources, and 6% education. Seventeen technical enquiries required action by technical support personnel: 41% digital certificates, 29% web forms, and 12% log-in. This web-based approach to clinical information sharing provided access to relevant data as clients moved through or re-entered the health system. Improvements include automated digital certificate management, institutional health records system integration, and more referral tracking tools. More sensitive test data could be accessed on-line with increasing consumer/clinician confidence. In addition to a strong technical infrastructure, human resource issues are a major information security component and require continuing attention to ensure a viable on-line information environment. PMID:16469409
Ozyurt, I Burak; Keator, David B; Wei, Dingying; Fennema-Notestine, Christine; Pease, Karen R; Bockholt, Jeremy; Grethe, Jeffrey S
Managing vast datasets collected throughout multiple clinical imaging communities has become critical with the ever increasing and diverse nature of datasets. Development of data management infrastructure is further complicated by technical and experimental advances that drive modifications to existing protocols and acquisition of new types of research data to be incorporated into existing data management systems. In this paper, an extensible data management system for clinical neuroimaging studies is introduced: The Human Clinical Imaging Database (HID) and Toolkit. The database schema is constructed to support the storage of new data types without changes to the underlying schema. The complex infrastructure allows management of experiment data, such as image protocol and behavioral task parameters, as well as subject-specific data, including demographics, clinical assessments, and behavioral task performance metrics. Of significant interest, embedded clinical data entry and management tools enhance both consistency of data reporting and automatic entry of data into the database. The Clinical Assessment Layout Manager (CALM) allows users to create on-line data entry forms for use within and across sites, through which data is pulled into the underlying database via the generic clinical assessment management engine (GAME). Importantly, the system is designed to operate in a distributed environment, serving both human users and client applications in a service-oriented manner. Querying capabilities use a built-in multi-database parallel query builder/result combiner, allowing web-accessible queries within and across multiple federated databases. The system along with its documentation is open-source and available from the Neuroimaging Informatics Tools and Resource Clearinghouse (NITRC) site. PMID:20567938
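The idea of "storage of new data types without changes to the underlying schema" can be sketched with a generic entity-attribute-value layout, where each assessment field becomes a row rather than a column. This is an assumed illustration, not HID's actual schema; table and field names are invented:

```python
import sqlite3

# One generic table holds any assessment type as (subject, assessment, field, value) rows.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE assessment_data (
    subject_id TEXT, assessment TEXT, field TEXT, value TEXT)""")

def store(subject_id, assessment, fields):
    """Insert one row per field; new assessment types need no ALTER TABLE."""
    for field, value in fields.items():
        conn.execute("INSERT INTO assessment_data VALUES (?, ?, ?, ?)",
                     (subject_id, assessment, field, str(value)))

# Two different data types share the same table.
store("subj01", "demographics", {"age": 34, "handedness": "right"})
store("subj01", "task_performance", {"reaction_ms": 412})

rows = conn.execute("SELECT field, value FROM assessment_data "
                    "WHERE subject_id = ? AND assessment = ?",
                    ("subj01", "demographics")).fetchall()
```

The trade-off of such a layout is that typing and validation move from the schema into application code (the role played by HID's data entry and management tools), in exchange for extensibility without migrations.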
Pliutau, Denis; Prasad, Narashimha S.
Current approaches to satellite observation data storage and distribution implement separate visualization and data access methodologies, which often forces time-consuming data ordering and coding for applications requiring both visual representation and data handling and modeling capabilities. We describe an approach we implemented for a data-encoded web map service based on storing numerical data within server map tiles, with subsequent client-side data manipulation and map color rendering. The approach relies on storing data in the lossless Portable Network Graphics (PNG) image format, which is natively supported by web browsers, allowing on-the-fly browser rendering and modification of the map tiles. The method is easy to implement using existing software libraries and has the advantage of easy client-side map color modification, as well as spatial subsetting with physical parameter range filtering. The method is demonstrated for the ASTER-GDEM elevation model and selected MODIS data products and represents an alternative to the currently used storage and data access methods. An additional benefit is that multiple levels of averaging become available, since map tiles must be generated at varying resolutions for the different map magnification levels. We suggest that such a merged data and mapping approach may be a viable alternative to existing static storage and data access methods for a wide array of combined simulation, data access and visualization purposes.
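The core trick, carrying exact numbers inside a lossless image so the browser can both render and compute, can be sketched as a pixel-level encoding. This is an assumed scheme for illustration, not the paper's exact encoding; the value range is a hypothetical elevation span. A physical value is quantized into the 24 bits of an RGB pixel and recovered on the client:

```python
VALUE_MIN, VALUE_MAX = -500.0, 9000.0  # assumed elevation range in metres

def encode_pixel(value):
    """Quantize a value into a 24-bit integer split across R, G, B channels."""
    scaled = round((value - VALUE_MIN) / (VALUE_MAX - VALUE_MIN) * (2**24 - 1))
    return (scaled >> 16) & 0xFF, (scaled >> 8) & 0xFF, scaled & 0xFF

def decode_pixel(r, g, b):
    """Recover the physical value from the three channel bytes."""
    scaled = (r << 16) | (g << 8) | b
    return scaled / (2**24 - 1) * (VALUE_MAX - VALUE_MIN) + VALUE_MIN

elevation = 8848.0  # metres
rgb = encode_pixel(elevation)       # bytes written into a PNG tile
recovered = decode_pixel(*rgb)      # client-side decode before re-coloring
```

Because PNG compression is lossless, the channel bytes survive the round trip through the tile server intact, which is what makes client-side range filtering and recoloring possible; a lossy format like JPEG would corrupt the encoded values.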
Wanner, Miriam; Martin-Diener, Eva; Bauer, Georg; Braun-Fahrländer, Charlotte; Martin, Brian W
Background Web-based interventions are popular for promoting healthy lifestyles such as physical activity. However, little is known about user characteristics, adherence, attrition, and predictors of repeated participation on open access physical activity websites. Objective The focus of this study was Active-online, a Web-based individually tailored physical activity intervention. The aims were (1) to assess and compare user characteristics and adherence to the website (a) in the open access...
Eberle, Jonas; Urban, Marcel; Hüttich, Christian; Schmullius, Christiane
Numerous datasets providing temperature information from meteorological stations or remote sensing satellites are available. However, the challenging issue is to search the archives and process the time series information for further analysis. These steps can be automated for each individual product if the preconditions are met, e.g. data access through web services (HTTP, FTP) or the legal right to redistribute the datasets. Therefore, a Python-based package was developed to provide data access and data processing tools for MODIS Land Surface Temperature (LST) data, provided by the NASA Land Processes Distributed Active Archive Center (LP DAAC), as well as the Global Surface Summary of the Day (GSOD) and the Global Historical Climatology Network (GHCN) daily datasets provided by the NOAA National Climatic Data Center (NCDC). The package to access and process the information is available as web services used by an interactive web portal for simple data access and analysis. Tools for time series analysis were linked to the system, e.g. time series plotting, decomposition, aggregation (monthly, seasonal, etc.), trend analyses, and breakpoint detection. Especially for temperature data, a plot was integrated for the comparison of two temperature datasets, based on the work by Urban et al. (2013). As a first result, a kernel density plot compares daily MODIS LST from the Aqua and Terra satellites with daily means from the GSOD and GHCN datasets. Without any data download and data processing, users can analyze different time series datasets in an easy-to-use web portal. As a first use case, we built up this complementary system of remotely sensed MODIS data and in situ measurements from meteorological stations for Siberia within the Siberian Earth System Science Cluster (www.sibessc.uni-jena.de). References: Urban, Marcel; Eberle, Jonas; Hüttich, Christian; Schmullius, Christiane; Herold, Martin. 2013. "Comparison of Satellite-Derived Land Surface Temperature and Air
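One of the aggregation steps mentioned above (daily values to monthly means) can be sketched in a few lines. This is a minimal illustration under an assumed data layout of (date, value) pairs, not the package's actual API:

```python
from collections import defaultdict
from datetime import date

def monthly_means(series):
    """Aggregate an iterable of (date, value) pairs to {(year, month): mean}."""
    sums = defaultdict(float)
    counts = defaultdict(int)
    for day, value in series:
        key = (day.year, day.month)
        sums[key] += value
        counts[key] += 1
    return {key: sums[key] / counts[key] for key in sums}

# Invented daily temperature readings (degrees Celsius).
daily = [(date(2013, 1, 1), -20.0), (date(2013, 1, 2), -22.0),
         (date(2013, 2, 1), -15.0)]
result = monthly_means(daily)
```

Seasonal or annual aggregation follows the same pattern with a different grouping key, which is why such tools generalize easily across the station and satellite datasets the portal serves.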
National Oceanic and Atmospheric Administration, Department of Commerce — The ecosystem impacts of ocean acidification (OA) were explored by imposing scenarios designed to mimic OA on a food web model of Puget Sound, a large estuary in...
Full Text Available Abstract Background Innovations in biological and biomedical imaging produce complex high-content and multivariate image data. For decision-making and generation of hypotheses, scientists need novel information technology tools that enable them to visually explore and analyze the data and to discuss and communicate results or findings with collaborating experts in various places. Results In this paper, we present a novel Web 2.0 approach, BioIMAX, for the collaborative exploration and analysis of multivariate image data, combining the web's collaboration and distribution architecture with the interface interactivity and computational power of desktop applications, recently termed a rich internet application. Conclusions BioIMAX allows scientists to discuss and share data or results with collaborating experts and to visualize, annotate, and explore multivariate image data within one web-based platform from any location via a standard web browser, requiring only a username and a password. BioIMAX can be accessed at http://ani.cebitec.uni-bielefeld.de/BioIMAX with the username "test" and the password "test1" for testing purposes.
Dogan, Nergiz; Wu, Weisheng; Morrissey, Christapher S.; Chen, Kuan-Bei; Stonestrom, Aaron; Long, Maria; Keller, Cheryl A.; Cheng, Yong; Jain, Deepti; Visel, Axel; Pennacchio, Len A.; Weiss, Mitchell J.; Blobel, Gerd A.; Hardison, Ross C.
Background Regulated gene expression controls organismal development, and variation in regulatory patterns has been implicated in complex traits. Thus accurate prediction of enhancers is important for further understanding of these processes. Genome-wide measurement of epigenetic features, such as histone modifications and occupancy by transcription factors, is improving enhancer predictions, but the contribution of these features to prediction accuracy is not known. Given the importance of t...
Khelghati, Mohammadreza; Hiemstra, Djoerd; Keulen, van Maurice
With increasing amount of data in deep web sources (hidden from general search engines behind web forms), accessing this data has gained more attention. In the algorithms applied for this purpose, it is the knowledge of a data source size that enables the algorithms to make accurate decisions in sto
This paper proposes an access control model for Web services. Integrating this security model into Web services enables dynamic changes of access rights, improving on the static access control in use at present. The new model provides a view policy language (VPL) to describe the access control policy of Web services. At the end of the paper we describe an infrastructure for integrating the security model into Web services in order to enforce their access control policies.
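The contrast between static and dynamic access control can be sketched as a runtime-mutable policy table. This is a hypothetical illustration of the idea, not the paper's VPL or model; the operation and role names are invented:

```python
class ServicePolicy:
    """Maps each web-service operation to the set of roles allowed to call it."""

    def __init__(self):
        self.rules = {}  # operation -> set of allowed roles

    def allow(self, operation, role):
        self.rules.setdefault(operation, set()).add(role)

    def revoke(self, operation, role):
        self.rules.get(operation, set()).discard(role)

    def is_permitted(self, operation, role):
        return role in self.rules.get(operation, set())

policy = ServicePolicy()
policy.allow("getOrder", "customer")
before = policy.is_permitted("getOrder", "customer")
policy.revoke("getOrder", "customer")  # rights change at runtime, no redeploy
after = policy.is_permitted("getOrder", "customer")
```

With statically declared access control, the revoke step would require editing and redeploying the service configuration; here the enforcement point simply consults the current table on each call.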
Lloyd, Steven; Acker, James G.; Prados, Ana I.; Leptoukh, Gregory G.
One of the biggest obstacles for the average Earth science student today is locating and obtaining satellite-based remote sensing datasets in a format that is accessible and optimal for their data analysis needs. At the Goddard Earth Sciences Data and Information Services Center (GES DISC) alone, on the order of hundreds of terabytes of data are available for distribution to scientists, students and the general public. The biggest and most time-consuming hurdle for most students when they begin their study of the various datasets is how to work through this mountain of data to arrive at a properly subsetted and manageable dataset to answer their science question(s). The GES DISC provides a number of tools for data access and visualization, including the Google-like Mirador search engine and the powerful GES-DISC Interactive Online Visualization ANd aNalysis Infrastructure (Giovanni) web interface.
Evaluación comparativa de la accesibilidad de los espacios web de las bibliotecas universitarias españolas y norteamericanas Comparative accessibility assessment of Web spaces in Spanish and American university libraries
Full Text Available The main objective of this research is to analyze and compare the degree to which two groups of web spaces comply with selected web accessibility guidelines. Both groups belong to the same conceptual typology, "university libraries", but are part of two different geographic, social and economic realities: Spain and North America. Interpretation of the results shows that webmetric techniques based on web accessibility characteristics can be used to contrast two closed sets of web spaces.
Geiger, Brian; Evans, R. R.; Cellitti, M. A.; Smith, K. Hogan; O'Neal, Marcia R.; Firsing, S. L., III; Chandan, P.
Background: The Internet can be an invaluable resource for obtaining health information by people with disabilities. Although valid and reliable information is available, previous research revealed barriers to accessing health information online. Health education specialists have the responsibilities to insure that it is accessible to all users.…
SUMMARY: Much is expected of the draft human genome sequence, and yet there is no central resource to host the plethora of sequence and mapping information available. Consequently, finding the most useful and reliable human genome data and resources currently available on the web can be challenging, but is not impossible.
Daniel Francisco Arencibia-Arrebola
Full Text Available VacciMonitor has gradually increased its visibility through access to different databases. It was introduced into the SciELO project, EBSCO, HINARI, Redalyc, SCOPUS, DOAJ, the SICC databases and SeCiMed, among almost thirty well-known indexing sites, including the virtual libraries of the main universities of the United States of America and other countries. Through a SciELO-Web of Science (WoS) agreement it will be possible to include the journals indexed in SciELO in the WoS; this collaboration is already showing results, and SciELO content can be accessed through WoS at: http://wokinfo.com/products_tools/multidisciplinary/scielo/ WoS was designed by the Institute for Scientific Information (ISI) and is one of the products of the ISI Web of Knowledge suite, currently owned by Thomson Reuters (1). WoS is a citation index and database service and the worldwide on-line leader in multidisciplinary information, covering the sciences in general, the social sciences, and the arts and humanities, with more than 46 million bibliographic references and hundreds of further citations, making it possible to navigate the broad web of journal articles, lecture materials and other records included in its collection (1). The logic of WoS is based on quantitative criteria, since greater production is reflected in a greater number of papers registered in the most recognized journals, and in the extent to which these papers are cited by those journals (2). The information obtained from the WoS databases is very useful for directing scientific research efforts at the personal, institutional or national level. Scientists publishing in WoS journals not only produce more scientific literature, but this literature is also more consulted and used (3). However, it should be considered that the statistics of this site for bibliometric analysis only take into account the journals in this web, but contains three
Mitchell, Christine M.; Thurman, David A.
AutoHelp is a case-based, Web-accessible help desk for users of the EOSDIS. It uses a combination of advanced computer and Web technologies, knowledge-based systems tools, and cognitive engineering to offload the current, person-intensive, help desk facilities at the DAACs. As a case-based system, AutoHelp starts with an organized database of previous help requests (questions and answers) indexed by a hierarchical category structure that facilitates recognition by persons seeking assistance. As an initial proof-of-concept demonstration, a month of email help requests to the Goddard DAAC were analyzed and partially organized into help request cases. These cases were then categorized to create a preliminary case indexing system, or category structure. This category structure allows potential users to identify or recognize categories of questions, responses, and sample cases similar to their needs. Year one of this research project focused on the development of a technology demonstration. User assistance 'cases' are stored in an Oracle database in a combination of tables linking prototypical questions with responses and detailed examples from the email help requests analyzed to date. When a potential user accesses the AutoHelp system, a Web server provides a Java applet that displays the category structure of the help case base organized by the needs of previous users. When the user identifies or requests a particular type of assistance, the applet uses Java database connectivity (JDBC) software to access the database and extract the relevant cases. The demonstration will include an on-line presentation of how AutoHelp is currently structured. We will show how a user might request assistance via the Web interface and how the AutoHelp case base provides assistance. The presentation will describe the DAAC data collection, case definition, and organization to date, as well as the AutoHelp architecture. It will conclude with the year 2 proposal to more fully develop the
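The category-to-cases retrieval step described above can be sketched with a plain SQL query. This is only an illustration of the idea: sqlite3 stands in for the Oracle/JDBC stack, and the table, column names and sample cases are invented, not taken from the AutoHelp schema.

```python
import sqlite3

# In-memory stand-in for the Oracle case base; schema and rows are illustrative.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE cases (id INTEGER PRIMARY KEY, category TEXT, question TEXT, response TEXT)"
)
conn.executemany(
    "INSERT INTO cases (category, question, response) VALUES (?, ?, ?)",
    [
        ("data-access", "How do I order Level 1B granules?", "Use the order form ..."),
        ("data-access", "Why is my FTP transfer slow?", "Try the mirror site ..."),
        ("formats", "What is HDF-EOS?", "HDF-EOS is ..."),
    ],
)

def cases_for_category(category):
    """Return (question, response) pairs for one node of the category structure,
    as the applet would after the user picks a category."""
    rows = conn.execute(
        "SELECT question, response FROM cases WHERE category = ? ORDER BY id",
        (category,),
    )
    return rows.fetchall()

matches = cases_for_category("data-access")
print(len(matches))  # 2
```

In the real system the same query would run over JDBC from the Java applet; the retrieval logic is identical.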
Jones, R; Menon-Johansson, A; Waters, A M; Sullivan, A K
In recent years, the sexual health of the nation has risen in profile. We face increasing demands and targets, in particular the 48-hour waiting time directive, and as a result clinic access has become a priority. eTriage is a novel, secure, web-based service designed specifically to increase access to our clinics. It has proved a popular booking method, providing access to 10% of all appointments across the Directorate within six months of introduction. KC60 analyses revealed that the majority of users (58%) underwent asymptomatic screening with the remainder having some degree of pathology. There was a greater percentage prevalence of human papilloma virus, chlamydia, non-specific urethritis, gonorrhoea, herpes and trichomonas in the eTriage population when compared with the general clinic population. A notes review illustrated a high degree of concordance between data entered on eTriage registration and clinical review (97%). A patient survey revealed high levels of patient satisfaction with the service. As an adjunct to our existing booking services, eTriage has served to increase patient choice and has proved itself to be a safe, efficient and effective means of improving patient access. PMID:19884355
Keis, Felix; Chwala, Christian; Kunstmann, Harald
Using commercial microwave link networks for precipitation estimation has become popular in recent years. Acquiring the necessary data from the network operators is, however, still difficult. Usually, data is provided to researchers with a large temporal delay and on an irregular basis. Driven by the demand to facilitate this data accessibility, a custom acquisition software for microwave links has been developed in joint cooperation with our industry partner Ericsson. It is capable of recording data from a great number of microwave links simultaneously and of forwarding the data instantaneously to a newly established KIT-internal database. It makes use of the Simple Network Management Protocol (SNMP) and collects the transmitter and receiver power levels via asynchronous SNMP requests. The software is currently in its first operational test phase, recording data from several hundred Ericsson microwave links in southern Germany. Furthermore, the software is used to acquire data with 1 Hz temporal resolution from four microwave links operated by the skiing resort in Garmisch-Partenkirchen. For convenient accessibility of this amount of data we have developed a web frontend for the emerging microwave link database. It provides dynamic real-time visualization and basic processing of the recorded transmitter and receiver power levels. Here we will present details of the custom data acquisition software with focus on the design of the KIT microwave link database and on the specifically developed web frontend.
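The core of such an acquisition loop is simple: on every cycle, query each link's power level and append a timestamped record for the database. The sketch below is our own simplification, with a stub in place of the real asynchronous SNMP GET (which a deployment might issue via a library such as pysnmp); link IDs, power values and the record layout are all invented.

```python
import time
from collections import deque

def poll_links(query_rx_power, link_ids, records):
    """One polling cycle: query each link's receiver power level and append a
    timestamped record. `query_rx_power` stands in for an asynchronous SNMP
    request to the microwave link hardware."""
    t = time.time()
    for link in link_ids:
        rx = query_rx_power(link)
        records.append({"time": t, "link": link, "rx_power_dbm": rx})

# Stub in place of the real SNMP GET; returns a constant level per link.
def fake_rx_power(link_id):
    return -45.0 - link_id

# Bounded buffer: at 1 Hz, keep at most one day of samples before flushing
# to the database.
records = deque(maxlen=86400)
poll_links(fake_rx_power, [1, 2, 3], records)
print(len(records))  # 3 records after one cycle over three links
```

A real acquisition process would run this cycle on a timer and hand full buffers to the database writer.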
Teng, W.; Chiu, L.; Kempler, S.; Liu, Z.; Nadeau, D.; Rui, H.
Using NASA satellite remote sensing data from multiple sources for hydrologic applications can be a daunting task and requires a detailed understanding of the data's internal structure and physical implementation. Gaining this understanding and applying it to data reduction is a time-consuming task that must be undertaken before the core investigation can begin. In order to facilitate such investigations, the NASA Goddard Earth Sciences Data and Information Services Center (GES DISC) has developed the GES-DISC Interactive Online Visualization ANd aNalysis Infrastructure or "Giovanni," which supports a family of Web interfaces (instances) that allow users to perform interactive visualization and analysis online without downloading any data. Two such Giovanni instances are particularly relevant to hydrologic applications: the Tropical Rainfall Measuring Mission (TRMM) Online Visualization and Analysis System (TOVAS) and the Agricultural Online Visualization and Analysis System (AOVAS), both highly popular and widely used for a variety of applications, including those related to several NASA Applications of National Priority, such as Agricultural Efficiency, Disaster Management, Ecological Forecasting, Homeland Security, and Public Health. Dynamic, context-sensitive Web services provided by TOVAS and AOVAS enable users to seamlessly access NASA data from within, and deeply integrate the data into, their local client environments. One example is between TOVAS and Florida International University's TerraFly, a Web-enabled system that serves a broad segment of the research and applications community, by facilitating access to various textual, remotely sensed, and vector data. Another example is between AOVAS and the U.S. Department of Agriculture Foreign Agricultural Service (USDA FAS)'s Crop Explorer, the primary decision support tool used by FAS to monitor the production, supply, and demand of agricultural commodities worldwide. AOVAS is also part of GES DISC
Ames, Charles; Auernheimer, Brent; Lee, Young H.
A method for providing uniform transparent access to disparate distributed information systems was demonstrated. A prototype testing interface was developed to access documentation and information using publicly available hypermedia tools. The prototype gives testers a uniform, platform-independent user interface to on-line documentation, user manuals, and mission-specific test and operations data. Mosaic was the common user interface, and HTML (Hypertext Markup Language) provided hypertext capability.
Baum, Matthias; Bernauer, Jochen; Benneke, Andreas; Fueseschi, Attila; Haake, Thomas; Mennecke, Lars; Urban, Martin; Pretschner, Dietrich-Peter
A multi-lingual database for bone scintigraphy with instructive patient cases, accessible via the WWW, will be presented. It includes structured case records based on images and clinical information provided in different European languages. In order to support the retrieval of cases, clinical information is represented in a language-independent way by means of a formalized compositional concept system. This design allows for accessing cases under various clinical aspects and detail u...
The purpose of this thesis is to present the development of a web service which eliminates the problems encountered when using the mFi products of the company Ubiquiti Networks, Inc. The problems are mainly the limited functionality offered by the software bundled with these devices. The web service uses the SOAP communication protocol. Additionally, we worked with a relatively unknown database management system named MongoDB. Ubiquiti mFi is a family of gadgets to monitor events in buildings. The protoc...
Yang, Y Tony; Chen, Brian
Access to the Internet is increasingly critical for health information retrieval, access to certain government benefits and services, connectivity to friends and family members, and an array of commercial and social services that directly affect health. Yet older adults, particularly those with disabilities, are at risk of being left behind in this growing age- and disability-based digital divide. The Americans with Disabilities Act (ADA) was designed to guarantee older adults and persons with disabilities equal access to employment, retail, and other places of public accommodation. Yet older Internet users sometimes face challenges when they try to access the Internet because of disabilities associated with age. Current legal interpretations of the ADA, however, do not consider the Internet to be an entity covered by law. In this article, we examine the current state of Internet accessibility protection in the United States through the lens of the ADA, sections 504 and 508 of the Rehabilitation Act, state laws and industry guidelines. We then compare U.S. rules to those of OECD (Organisation for Economic Co-Operation and Development) countries, notably in the European Union, Canada, Japan, Australia, and the Nordic countries. Our policy recommendations follow from our analyses of these laws and guidelines, and we conclude that the biggest challenge in bridging the age- and disability-based digital divide is the need to extend accessibility requirements to private, not just governmental, entities and organizations. PMID:26156518
Prendiville, T W
OBJECTIVES: To establish the information-seeking behaviours of paediatricians in answering everyday clinical queries. DESIGN: A questionnaire was distributed to every hospital-based paediatrician (paediatric registrar and consultant) working in Ireland. RESULTS: The study received 156 completed questionnaires, a 66.1% response rate. 67% of paediatricians utilised the internet as their first "port of call" when looking to answer a medical question. 85% believe that web-based resources have improved medical practice, with 88% reporting that web-based resources are essential for medical practice today. 93.5% of paediatricians believe attempting to answer clinical questions as they arise is an important component in practising evidence-based medicine. 54% of all paediatricians have recommended websites to parents or patients. 75.5% of paediatricians report finding it difficult to keep up-to-date with new information relevant to their practice. CONCLUSIONS: Web-based paediatric resources are of increasing significance in day-to-day clinical practice. Many paediatricians now believe that the quality of patient care depends on them. Information technology resources play a key role in helping physicians to deliver, in a time-efficient manner, solutions to clinical queries at the point of care.
Dr. Khanna SamratVivekanand Omprakash
Full Text Available This paper describes how coordinates from Google Maps are stored in a database on a central web server. These coordinates are then transferred to a client program that searches for the location of a particular electronic device. The client can access the data over the internet and use it in a program via an API. Software was developed for a device to be installed in a vehicle: the built-in circuit holds a SIM card and transmits its signal over the network, supplying a single text message with a location's coordinates, in terms of the latitudes and longitudes used by Google Maps. This information, a comma-separated string, is extracted and stored in the web server's database. Different mobile numbers with their locations can be stored in the server's database simultaneously for different clients. The concept of a 3-tier client/server architecture is used. The SIM card accesses the GPRS system of the card's network provider, and the electronic device's signal for receiving and sending messages is configured. Different operations can be performed on the device, as it can be attached to other electronic circuits of the vehicle. A Windows Mobile application was developed for the client side. The user can take different decisions about the vehicle from a mobile phone by sending an SMS to the device; the device receives the operation and passes it to the vehicle's electronic circuit. From a remote place, using a mobile phone, you can get information about your vehicle and also control it, with a password providing authorization and authentication to the electronic circuit. The concepts of vehicle security and identification of the vehicle's location are demonstrated, and vehicle functions such as speed, brakes and lights can be accessed and controlled through the software application's interface with the vehicle's electronic circuit.
Milius, Robert P; Heuer, Michael; George, Mike; Pollack, Jane; Hollenbach, Jill A; Mack, Steven J; Maiers, Martin
Genotype list (GL) Strings use a set of hierarchical character delimiters to represent allele and genotype ambiguity in HLA and KIR genotypes in a complete and accurate fashion. A RESTful web service called genotype list service was created to allow users to register a GL string and receive a unique identifier for that string in the form of a URI. By exchanging URIs and dereferencing them through the GL service, users can easily transmit HLA genotypes in a variety of useful formats. The GL service was developed to be secure, scalable, and persistent. An instance of the GL service is configured with a nomenclature and can be run in strict or non-strict modes. Strict mode requires alleles used in the GL string to be present in the allele database using the fully qualified nomenclature. Non-strict mode allows any GL string to be registered as long as it is syntactically correct. The GL service source code is free and open source software, distributed under the GNU Lesser General Public License (LGPL) version 3 or later. PMID:26621609
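The hierarchical character delimiters mentioned above can be handled with a simple recursive split, highest precedence first: '^' separates loci, '|' alternative genotypes, '+' the two copies of a genotype, '~' alleles in phase, '/' ambiguous alleles. The sketch below is our own illustration, not the GL service itself; the example string is invented and no nomenclature validation (strict mode) is attempted.

```python
def parse_gl_string(gl):
    """Split a GL String into a nested list by its hierarchical delimiters,
    from loci ('^') down to allele ambiguity ('/'). A structural sketch only:
    every delimiter level is applied, so the result is always nested five deep.
    """
    def split(s, delims):
        if not delims:
            return s  # leaf: a single allele name
        head, rest = delims[0], delims[1:]
        return [split(part, rest) for part in s.split(head)]
    return split(gl, ["^", "|", "+", "~", "/"])

# One locus, one genotype: an ambiguous first allele plus an unambiguous second.
nested = parse_gl_string("HLA-A*02:01/HLA-A*02:02+HLA-A*03:01")
```

A validating parser (as in strict mode) would additionally check each leaf against the configured allele database.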
Full Text Available EXTENDED ABSTRACT For the first time, the Dolenjska museum Novo mesto provided access to digitised museum resources when it took the decision to enrich the exhibition Novo mesto 1848-1918 by adding digital content. The following goals were identified: the digital content was created at the time of exhibition planning and design, it met the needs of different age groups of visitors, and during the exhibition the content was accessible via touch screen. As such, it also served educational purposes (content-oriented lectures or problem-solving team work). During the exhibition, the digital content was accessible on the museum website http://www.novomesto1848-1918.si. The digital content was divided into the following sections: the web photo gallery, the quiz and the game. The photo gallery was designed in the same way as the exhibition and the print catalogue, extended with photos of contemporary Novo mesto and accompanied by music from the orchestrion machine. The following themes were outlined: the Austrian Empire, the Krka and Novo mesto, the town and its symbols, images of the town and people, administration and economy, social life and Novo mesto today, followed by digitised archive materials and sources from that period such as the Commemorative Book of the Uniformed Town Guard, the National Reading Room Guest Book, the Kazina guest book, the album of postcards and the Diploma of Honoured Citizen Josip Gerdešič. The Web application was also a tool for simple online selection of digitised material and the creation of new digital content, which proved to be much more convenient for lecturing than PowerPoint presentations. The quiz consisted of 40 questions relating to the exhibition theme and the catalogue. Each question offered a set of three answers, only one of them correct, illustrated with a photograph. The application automatically selected ten questions and scored the answers immediately. The quiz could be accessed
Wanner, M.; Martin-Diener, E; Bauer, G; Braun-Fahrländer, C.; De Martin, B W
Background: Web-based interventions are popular for promoting healthy lifestyles such as physical activity. However, little is known about user characteristics, adherence, attrition, and predictors of repeated participation on open access physical activity websites. Objective: The focus of this study was Active-online, a Web-based individually tailored physical activity intervention. The aims were (1) to assess and compare user characteristics and adherence to the website (a) in the open a...
Rajman, M; Boynton, I M; Fridlund, B; Fyhrlund, A; Sundgren, B; Lundquist, P; Thelander, H; Wänerskär, M
In this paper we present the results of the StatSearch case study, which aimed at providing enhanced access to statistical data available on the Web. In the scope of this case study we developed a prototype of an information access tool combining a query-based search engine with semi-automated navigation techniques exploiting the hierarchical structuring of the available data. This tool enables better control of information retrieval, improving the quality and ease of access to statistical information. The central part of the presented StatSearch tool is the design of an algorithm for automated navigation through a tree-like hierarchical document structure. The algorithm relies on the computation of query-related relevance score distributions over the available database to identify the most relevant clusters in the data structure. These most relevant clusters are then proposed to the user for navigation or, alternatively, support the automated navigation process. Several appro...
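The cluster-selection idea described above can be sketched in a few lines: leaves carry query-relevance scores, a cluster's score aggregates the scores of the leaves below it, and automated navigation follows the best-scoring child at each level. The hierarchy, node names and scores below are invented, and summation is only one plausible way to aggregate a relevance distribution.

```python
# Toy statistical-data hierarchy: internal nodes map to children, leaves carry
# query-relevance scores (e.g. returned by the search engine for one query).
tree = {
    "statistics": ["population", "economy"],
    "population": ["births", "migration"],
    "economy": ["prices", "employment"],
}
leaf_scores = {"births": 0.1, "migration": 0.2, "prices": 0.9, "employment": 0.6}

def cluster_score(node):
    """Relevance of a cluster = sum of the relevance of the leaves below it."""
    if node in leaf_scores:
        return leaf_scores[node]
    return sum(cluster_score(child) for child in tree[node])

def navigate(node):
    """Automated navigation: descend into the most relevant cluster at each
    level and return the path taken."""
    path = [node]
    while node in tree:
        node = max(tree[node], key=cluster_score)
        path.append(node)
    return path

print(navigate("statistics"))  # ['statistics', 'economy', 'prices']
```

In the semi-automated mode the `max` step would instead present the ranked children to the user.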
Whetzel, Patricia L.; Noy, Natalya F.; Shah, Nigam H.; Alexander, Paul R.; Nyulas, Csongor; Tudorache, Tania; Musen, Mark A.
The National Center for Biomedical Ontology (NCBO) is one of the National Centers for Biomedical Computing funded under the NIH Roadmap Initiative. Contributing to the national computing infrastructure, NCBO has developed BioPortal, a web portal that provides access to a library of biomedical ontologies and terminologies (http://bioportal.bioontology.org) via the NCBO Web services. BioPortal enables community participation in the evaluation and evolution of ontology content by providing features to add mappings between terms, to add comments linked to specific ontology terms and to provide ontology reviews. The NCBO Web services (http://www.bioontology.org/wiki/index.php/NCBO_REST_services) enable this functionality and provide a uniform mechanism to access ontologies from a variety of knowledge representation formats, such as Web Ontology Language (OWL) and Open Biological and Biomedical Ontologies (OBO) format. The Web services provide multi-layered access to the ontology content, from getting all terms in an ontology to retrieving metadata about a term. Users can easily incorporate the NCBO Web services into software applications to generate semantically aware applications and to facilitate structured data collection. PMID:21672956
Sørensen, Lars Schiøtt
During the last two decades, a number of research efforts have been made in the field of computing systems related to the building construction industry. Most of the projects have focused on a part of the entire design process and have typically been limited to a specific domain. This paper presents a newly developed computer system based on the World Wide Web. The focus is on the simplicity of the system's structure and on an intuitive and user-friendly interface.
WEB-based database access technology combines Web technology with databases: a database service system built in the B/S (browser/server) structure is accessed through a browser. Following the Web database access process, this paper describes the Web database architecture and its implementation; moreover, the advantages and disadvantages of several major Web database access technologies are compared and discussed in detail. All of this helps in selecting the most suitable implementation technology under different application conditions.
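The B/S access path (browser → web server → database) can be illustrated with a minimal WSGI application backed by a database. This is a generic sketch of the pattern, not any particular technology from the paper's comparison; sqlite3 and the table contents are stand-ins.

```python
import json
import sqlite3

# The database tier; schema and rows are illustrative.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE products (name TEXT, price REAL)")
db.executemany("INSERT INTO products VALUES (?, ?)", [("pen", 1.5), ("book", 9.0)])

def app(environ, start_response):
    """Minimal WSGI application: the browser's HTTP request is answered by
    running a SQL query and returning the rows as JSON."""
    rows = db.execute("SELECT name, price FROM products ORDER BY name").fetchall()
    body = json.dumps([{"name": n, "price": p} for n, p in rows]).encode()
    start_response("200 OK", [("Content-Type", "application/json")])
    return [body]

# Exercise the app directly, as a WSGI server (e.g. wsgiref) would drive it.
collected = {}
def start_response(status, headers):
    collected["status"] = status
result = json.loads(b"".join(app({}, start_response)))
print(collected["status"], result)
```

The same three-tier shape underlies all the access technologies compared in the paper; they differ in how the middle tier talks to the database.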
... accommodation through other mediums, such as telephone or mail). But see Access Now, Inc. v. Southwest Airlines... 2004 ADA/ABA Guidelines. 69 FR 58768. On June 17, 2008, the Department issued a notice of proposed.... 73 FR 34466. The NPRM addressed the issues raised in the public's comments to the ANPRM and...
Full Text Available User tests were conducted with people with hearing disabilities, evaluating the impact that different accessibility barriers cause for this type of user. The aim of gathering this information was to communicate, in a more empathetic way, to people who edit web content the accessibility problems that most affect this group, people with hearing disabilities, and thus avoid the accessibility barriers they could potentially be creating. As a result, it was observed that the barriers with the greatest impact on users with hearing disabilities are "complex text" and "multimedia content" without alternatives. In both cases, content editors should take care to monitor the readability of web content and to accompany multimedia content with subtitles and sign language.
Full Text Available The Web is a rich domain of data and knowledge, spread over the world in an unstructured manner. Users continuously access this information over the internet. Web mining is an application of data mining in which web-related data is extracted and manipulated to extract knowledge. Data mining applied in the domain of web information is referred to as web mining, which is further divided into three major domains: web usage mining, web content mining and web structure mining. The proposed work is concerned with web usage mining. The aim is to improve user feedback and user navigation pattern discovery for a CRM system. Finally, an HMM-based algorithm is used for finding patterns in the data, a method that promises to provide much more accurate recommendations.
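A first step toward navigation pattern discovery is estimating transition probabilities between pages from usage logs. The sketch below uses a plain first-order Markov chain as a simplified stand-in for the paper's HMM (an HMM would add hidden states over these observations); the sessions and page names are invented.

```python
from collections import Counter, defaultdict

# Example click-stream sessions extracted from web usage logs.
sessions = [
    ["home", "products", "cart", "checkout"],
    ["home", "products", "reviews"],
    ["home", "support"],
]

def transition_probabilities(sessions):
    """Estimate P(next page | current page) from observed sessions by
    counting adjacent page pairs and normalizing per source page."""
    counts = defaultdict(Counter)
    for s in sessions:
        for a, b in zip(s, s[1:]):
            counts[a][b] += 1
    return {
        a: {b: n / sum(c.values()) for b, n in c.items()}
        for a, c in counts.items()
    }

probs = transition_probabilities(sessions)
print(probs["home"])  # 2 of 3 sessions go home -> products
```

A recommender can then suggest the highest-probability next page for the user's current position.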
Lloyd, S. A.; Acker, J. G.; Prados, A. I.; Leptoukh, G. G.
One of the biggest obstacles for the average Earth science student today is locating and obtaining satellite-based remote sensing datasets in a format that is accessible and optimal for their data analysis needs. At the Goddard Earth Sciences Data and Information Services Center (GES-DISC) alone, on the order of hundreds of Terabytes of data are available for distribution to scientists, students and the general public. The single biggest and most time-consuming hurdle for most students when they begin their study of the various datasets is how to slog through this mountain of data to arrive at a properly subsetted and manageable dataset to answer their science question(s). The GES DISC provides a number of tools for data access and visualization, including the Google-like Mirador search engine and the powerful GES-DISC Interactive Online Visualization ANd aNalysis Infrastructure (Giovanni) web interface. Giovanni provides a simple way to visualize, analyze and access vast amounts of satellite-based Earth science data. Giovanni's features and practical examples of its use will be demonstrated, with an emphasis on how satellite remote sensing can help students understand recent events in the atmosphere and biosphere. Giovanni is actually a series of sixteen similar web-based data interfaces, each of which covers a single satellite dataset (such as TRMM, TOMS, OMI, AIRS, MLS, HALOE, etc.) or a group of related datasets (such as MODIS and MISR for aerosols, SeaWIFS and MODIS for ocean color, and the suite of A-Train observations co-located along the CloudSat orbital path). Recently, ground-based datasets have been included in Giovanni, including the Northern Eurasian Earth Science Partnership Initiative (NEESPI), and EPA fine particulate matter (PM2.5) for air quality. Model data such as the Goddard GOCART model and MERRA meteorological reanalyses (in process) are being increasingly incorporated into Giovanni to facilitate model-data intercomparison. A full suite of data
Jones, Philip; Binns, David; McMenamin, Conor; McAnulla, Craig; Hunter, Sarah
The InterPro BioMart provides users with query-optimized access to predictions of family classification, protein domains and functional sites, based on a broad spectrum of integrated computational models ('signatures') that are generated by the InterPro member databases: Gene3D, HAMAP, PANTHER, Pfam, PIRSF, PRINTS, ProDom, PROSITE, SMART, SUPERFAMILY and TIGRFAMs. These predictions are provided for all protein sequences from both the UniProt Knowledge Base and the UniParc protein sequence archive. The InterPro BioMart is supplementary to the primary InterPro web interface (http://www.ebi.ac.uk/interpro), providing a web service and the ability to build complex, custom queries that can efficiently return thousands of rows of data in a variety of formats. This article describes the information available from the InterPro BioMart and illustrates its utility with examples of how to build queries that return useful biological information. Database URL: http://www.ebi.ac.uk/interpro/biomart/martview. PMID:21785143
Full Text Available Human protein kinases play fundamental roles mediating the majority of signal transduction pathways in eukaryotic cells as well as a multitude of other processes involved in metabolism, cell-cycle regulation, cellular shape, motility, differentiation and apoptosis. The human protein kinome contains 518 members. Most studies that focus on the human kinome require, at some point, the visualization of large amounts of data. The visualization of such data within the framework of a phylogenetic tree may help identify key relationships between different protein kinases in view of their evolutionary distance and the information used to annotate the kinome tree. For example, studies that focus on the promiscuity of kinase inhibitors can benefit from the annotations to depict binding affinities across kinase groups. Images involving the mapping of information onto the kinome tree are common. However, producing such figures manually can be a long, arduous process prone to errors. To circumvent this issue, we have developed a web-based tool called Kinome Render (KR) that produces customized annotations on the human kinome tree. KR allows the creation and automatic overlay of customizable text or shape-based annotations of different sizes and colors on the human kinome tree. The web interface can be accessed at: http://bcb.med.usherbrooke.ca/kinomerender. A stand-alone version is also available and can be run locally.
Mackley, Rob D.; Last, George V.; Allwardt, Craig H.
The Hanford Borehole Geologic Information System (HBGIS) is a prototype web-based graphical user interface (GUI) for viewing and downloading borehole geologic data. The HBGIS is being developed as part of the Remediation Decision Support function of the Soil and Groundwater Remediation Project, managed by Fluor Hanford, Inc., Richland, Washington. Recent efforts have focused on improving the functionality of the HBGIS website in order to allow more efficient access to, and export of, the data available in HBGIS. Users will benefit from enhancements such as dynamic browsing, user-driven forms, and multi-select options for selecting borehole geologic data for export. The need for translating borehole geologic data into electronic form within the HBGIS continues to increase, and efforts to populate the database continue at an increasing rate. These new web-based tools should help the end user quickly visualize what data are available in HBGIS, select from among these data, and download the borehole geologic data into a consistent and reproducible tabular form. This revised user’s guide supersedes the previous user’s guide (PNNL-15362) for viewing and downloading data from HBGIS. It contains an updated data dictionary for tables and fields containing borehole geologic data as well as instructions for viewing and downloading borehole geologic data.
The key role of public information in emergency preparedness has more recently been corroborated by the experience of the Great Eastern Japan Earthquake and Tsunami and the subsequent nuclear accident at the Fukushima NPP. Information should meet quality criteria such as openness, accessibility and authenticity. Existing information portals of radiation monitoring networks were frequently used even in Europe, although there was no imminent radiation risk. BfS responded by increasing the polling frequency, publishing current data not validated, refurbishing the web-site of the BfS 'odlinfo.bfs.de' and adding explanatory text. Public feedback served as a valuable input for improving the site's design. Additional services were implemented for developers of smart phone apps. Web-sites similar to 'ODLinfo' are available both on European and international levels. NGOs and grass root projects established platforms for uploading and visualising private dose rate measurements in Japan after 11 March 2011. The BfS site is compared with other platforms. Government information has to compete with non-official sources. Options on information strategies are discussed. (authors)
Corredor, Germán.; Iregui, Marcela; Arias, Viviana; Romero, Eduardo
Virtual microscopy (VM) facilitates the visualization and deployment of histopathological virtual slides (VS), a useful tool for education, research and diagnosis. In recent years it has become popular, yet its use is still limited, basically because of the very large sizes of VS, typically of the order of gigabytes. Such a volume of data requires efficacious and efficient strategies to access the VS content. In an educational or research scenario, several users may need to access and interact with VS at the same time, so, due to the large data size, a very expensive and powerful infrastructure is usually required. This article introduces a novel JPEG2000-based service-oriented architecture for streaming and visualizing very large images under scalable strategies, which, in addition, does not require very specialized infrastructure. Results suggest that the proposed architecture enables transmission and simultaneous visualization of large images, while using resources efficiently and offering users proper response times.
H. H. Kian
Full Text Available General search engines often provide low-precision results, even for detailed queries. So there is a vital need to elicit useful information, like keywords, for search engines to provide acceptable results for users' search queries. Although many methods have been proposed to show how to extract keywords automatically, all attempt to obtain better recall, precision and other criteria that describe how well the method performs the job of an author. This paper presents a new automatic keyword extraction method which improves the accessibility of web content to search engines. The proposed method defines some coefficients determining feature efficiency and tries to optimize them by using a genetic algorithm. Furthermore, it evaluates candidate keywords with a function that utilizes the results of search engines. Experiments demonstrate that, compared with other methods, the proposed method achieves a higher score from search engines without noticeable loss of recall or precision.
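The optimization step described above can be sketched as a small genetic algorithm over feature coefficients: each candidate keyword has a feature vector, a weight vector scores it, and fitness measures how well the top-ranked candidates match keywords judged useful. Everything below (features, candidates, the "gold" set, GA parameters) is invented for illustration; the paper's actual features and fitness function differ.

```python
import random

random.seed(0)

# Toy candidates: (term-frequency, appears-in-title) feature vectors; "gold"
# marks the keywords judged useful by the evaluation function.
candidates = {
    "ontology": (0.9, 1.0),
    "the": (1.0, 0.0),
    "semantic": (0.7, 1.0),
    "page": (0.4, 0.0),
}
gold = {"ontology", "semantic"}

def fitness(weights):
    """How many gold keywords land in the top 2 under the weighted score."""
    ranked = sorted(
        candidates,
        key=lambda t: sum(w * f for w, f in zip(weights, candidates[t])),
        reverse=True,
    )
    return len(set(ranked[:2]) & gold)

def evolve(pop_size=20, generations=30):
    """Tiny GA: keep the better half each generation (elitism) and fill the
    rest with Gaussian-mutated copies of the survivors."""
    pop = [[random.random(), random.random()] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]
        children = [[w + random.gauss(0, 0.1) for w in p] for p in parents]
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
print(fitness(best))  # both gold keywords ranked on top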
The Instadose™ dosemeter from Mirion Technologies is a small, rugged device based on patented direct ion storage (DIS) technology and is accredited by the National Voluntary Laboratory Accreditation Program (NVLAP) through NIST, bringing radiation monitoring into the digital age. Smaller than a flash drive, this dosemeter provides an instant read-out when connected to any computer with internet access and a USB connection. Instadose devices give radiation workers more flexibility than today's dosemeters. The sensor is a non-volatile analog memory cell surrounded by a gas-filled ion chamber: dose changes the amount of electric charge in the DIS analog memory, the total charge storage capacity of the memory determines the available dose range, and the state of the analog memory is determined by measuring the voltage across the memory cell. AMP (Account Management Program) provides secure real-time access to account details, device assignments, reports and all pertinent account information; access can be restricted based on the role assigned to an individual, and a variety of reports are available for download and customization. The advantages of the Instadose dosemeter are: unlimited reading capability; concerns about a possible exposure can be addressed immediately; and re-readability without loss of exposure data, with cumulative exposure maintained. (authors)
Access control is the main security and protection strategy in Web systems, and traditional access control can no longer meet growing security needs. By using the role-based access control (RBAC) model and introducing the concept of roles into the web system, each user is mapped to a role within an organization and access permissions are granted to the corresponding role; access authorization and control then follow from the user's role in the organization, improving the flexibility and security of permission assignment and access control in the web system.
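The core RBAC idea in the abstract above, users map to roles and permissions attach to roles, can be sketched in a few lines. The role, user and permission names are illustrative, not from any particular system.

```python
# Minimal RBAC sketch: an access check consults the user's role, never the
# user directly, so changing a user's role re-scopes all their permissions.
ROLE_PERMISSIONS = {
    "editor": {"page:read", "page:write"},
    "viewer": {"page:read"},
}
USER_ROLES = {"alice": "editor", "bob": "viewer"}

def is_allowed(user, permission):
    """True if the user's assigned role carries the given permission."""
    role = USER_ROLES.get(user)
    return permission in ROLE_PERMISSIONS.get(role, set())
```

The indirection through roles is what gives RBAC its flexibility: granting a permission to a role updates every user holding that role at once.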
Davis, Brian N.; Werpy, Jason; Friesz, Aaron M.; Impecoven, Kevin; Quenzer, Robert; Maiersperger, Tom; Meyer, David J.
Current methods of searching for and retrieving data from satellite land remote sensing archives do not allow for interactive information extraction. Instead, Earth science data users are required to download files over low-bandwidth networks to local workstations and process data before science questions can be addressed. New methods of extracting information from data archives need to become more interactive to meet user demands for deriving increasingly complex information from rapidly expanding archives. Moving the tools required for processing data to computer systems of data providers, and away from systems of the data consumer, can improve turnaround times for data processing workflows. The implementation of middleware services was used to provide interactive access to archive data. The goal of this middleware services development is to enable Earth science data users to access remote sensing archives for immediate answers to science questions instead of links to large volumes of data to download and process. Exposing data and metadata to web-based services enables machine-driven queries and data interaction. Also, product quality information can be integrated to enable additional filtering and sub-setting. Only the reduced content required to complete an analysis is then transferred to the user.
Calle Jiménez, Tania; Sánchez Gordón, Sandra; Luján Mora, Sergio
This paper describes some of the challenges that exist to make accessible massive open online courses (MOOCs) on Geographical Information Systems (GIS). These courses are known by the generic name of Geo-MOOCs. A MOOC is an online course that is open to the general public for free, which causes a massive registration. A GIS is a computer application that acquire, manipulate, manage, model and visualize geo-referenced data. The goal of a Geo-MOOC is to expand the culture of spatial thinking an...
Chipman, Jonathan; Drohan, Brian; Blackford, Amanda; Parmigiani, Giovanni; Hughes, Kevin; Bosinoff, Phil
Cancer risk prediction tools provide valuable information to clinicians but remain computationally challenging. Many clinics find that CaGene or HughesRiskApps fit their needs for easy- and ready-to-use software to obtain cancer risks; however, these resources may not fit all clinics' needs. The HughesRiskApps Group and BayesMendel Lab therefore developed a web service, called "Risk Service", which may be integrated into any client software to quickly obtain standardized and up-to-date risk predictions for BayesMendel tools (BRCAPRO, MMRpro, PancPRO, and MelaPRO), the Tyrer-Cuzick IBIS Breast Cancer Risk Evaluation Tool, and the Colorectal Cancer Risk Assessment Tool. Software clients that can convert their local structured data into the HL7 XML-formatted family and clinical patient history (Pedigree model) may integrate with the Risk Service. The Risk Service uses Apache Tomcat and Apache Axis2 technologies to provide an all Java web service. The software client sends HL7 XML information containing anonymized family and clinical history to a Dana-Farber Cancer Institute (DFCI) server, where it is parsed, interpreted, and processed by multiple risk tools. The Risk Service then formats the results into an HL7 style message and returns the risk predictions to the originating software client. Upon consent, users may allow DFCI to maintain the data for future research. The Risk Service implementation is exemplified through HughesRiskApps. The Risk Service broadens the availability of valuable, up-to-date cancer risk tools and allows clinics and researchers to integrate risk prediction tools into their own software interface designed for their needs. Each software package can collect risk data using its own interface, and display the results using its own interface, while using a central, up-to-date risk calculator. This allows users to choose from multiple interfaces while always getting the latest risk calculations. Consenting users contribute their data for future
Sweeney, Edwina; Curran, Kevin; Xie, Ermai
A Web crawler or spider crawls through the Web looking for pages to index, and when it locates a new page it passes the page on to an indexer. The indexer identifies links, keywords, and other content and stores these within its database. This database is searched by entering keywords through an interface, and suitable Web pages are returned in a results page in the form of hyperlinks accompanied by short descriptions. The Web, however, is increasingly moving away from being a collection of documents to a multidimensional repository for sounds, images, audio, and other formats. This is leading to a situation where certain parts of the Web are invisible or hidden. The term "Deep Web" has emerged to refer to the mass of information that can be accessed via the Web but cannot be indexed by conventional search engines. The concept of the Deep Web makes searches quite complex for search engines. Google states that the claim that conventional search engines cannot find such documents as PDFs, Word, PowerPoint, Excel, or any non-HTML page is not fully accurate, and steps have been taken to address this problem by implementing procedures to search items such as academic publications, news, blogs, videos, books, and real-time information. However, Google still only provides access to a fraction of the Deep Web. This chapter explores the Deep Web and the current tools available for accessing it.
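The crawl-then-index loop described above can be sketched with an in-memory "web" standing in for real HTTP fetches; page contents and link structure here are invented for illustration.

```python
# Toy crawler/indexer: follow links from a seed page, hand each fetched
# page to an indexer that maps keywords to the URLs containing them.
from collections import deque

FAKE_WEB = {
    "/index": ("home deep web", ["/about"]),
    "/about": ("about the deep web", []),
}

def crawl_and_index(seed, web):
    index, seen, queue = {}, set(), deque([seed])
    while queue:
        url = queue.popleft()
        if url in seen or url not in web:
            continue                      # skip revisits and dead links
        seen.add(url)
        text, links = web[url]
        for word in text.split():         # "indexer": word -> set of URLs
            index.setdefault(word, set()).add(url)
        queue.extend(links)               # schedule outgoing links
    return index

index = crawl_and_index("/index", FAKE_WEB)
```

Pages reachable only through forms or scripts never enter `FAKE_WEB`'s link lists, which is precisely how Deep Web content stays invisible to this kind of crawler.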
Celli, Fabrizio; Malapela, Thembani; Wegner, Karna; Subirats, Imma; Kokoliou, Elena; Keizer, Johannes
AGRIS is the International System for Agricultural Science and Technology. It is supported by a large community of data providers, partners and users. AGRIS is a database that aggregates bibliographic data, and through this core data, related content across online information systems is retrieved by taking advantage of Semantic Web capabilities. AGRIS is a global public good and its vision is to be a responsive service to its user needs by facilitating contributions and feedback regarding the AGRIS core knowledgebase, AGRIS's future and its continuous development. Periodic AGRIS e-consultations, partner meetings and user feedback are assimilated to the development of the AGRIS application and content coverage. This paper outlines the current AGRIS technical set-up, its network of partners, data providers and users as well as how AGRIS's responsiveness to clients' needs inspires the continuous technical development of the application. The paper concludes by providing a use case of how the AGRIS stakeholder input and the subsequent AGRIS e-consultation results influence the development of the AGRIS application, knowledgebase and service delivery. PMID:26339471
Dezso, Z; Lukács, A; Racz, B; Szakadat, I; Barabási, A L
While current studies on complex networks focus on systems that change relatively slowly in time, the structure of the most visited regions of the Web is altered on a timescale of hours to days. Here we investigate the visitation dynamics of a major news portal, representing the prototype of such a rapidly evolving network. The nodes of the network can be classified into stable nodes, which form the time-independent skeleton of the portal, and news documents. The visitation of the two node classes is markedly different: the skeleton acquires visits at a constant rate, while a news document's visitation peaks after a few hours. We find that the visitation pattern of a news document decays as a power law, in contrast with the exponential prediction provided by simple models of site visitation. This is rooted in the inhomogeneous nature of the browsing pattern characterizing individual users: the time interval between consecutive visits by the same user to the site follows a power law distribution, in...
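The power-law versus exponential contrast above can be illustrated numerically: with comparable early behavior, the power law retains vastly more visitation at long times. The exponent and rate below are illustrative, not values fitted in the paper.

```python
# Illustration of heavy-tailed (power-law) vs. exponential visitation
# decay. alpha and rate are arbitrary illustrative constants.
import math

def power_law(t, alpha=0.3):
    return t ** -alpha

def exponential(t, rate=0.3):
    return math.exp(-rate * t)

# At t = 100 the power law still retains ~25% of its initial level,
# while the exponential has decayed by many orders of magnitude.
ratio = power_law(100) / exponential(100)
```

This gap is why a power-law decay implies that old news documents keep accumulating non-negligible visits long after simple models predict they should be forgotten.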
Johnson, V.; Aarseth, S.
A demonstration site has been developed by the authors that enables researchers and students to experiment with the capabilities and performance of NBODY4 running on a GRAPE-6a over the web. NBODY4 is a sophisticated open-source N-body code for high-accuracy simulations of dense stellar systems (Aarseth 2003). In 2004, NBODY4 was successfully tested with a GRAPE-6a, yielding an unprecedented low-cost tool for astrophysical research. The GRAPE-6a is a supercomputer card developed by astrophysicists to accelerate high-accuracy N-body simulations with a cluster or a desktop PC (Fukushige et al. 2005, Makino & Taiji 1998). The GRAPE-6a card became commercially available in 2004, runs at 125 Gflops peak, has a standard PCI interface and costs less than $10,000. Researchers running the widely used NBODY6 (which does not require GRAPE hardware) can compare their own PC or laptop performance with simulations run on http://www.NbodyLab.org. Such comparisons may help justify acquisition of a GRAPE-6a. For workgroups such as university physics or astronomy departments, the demonstration site may be replicated or serve as a model for a shared computing resource. The site was constructed using an NBodyLab server-side framework.
Koehler, Wallace; Mincey, Danielle
Compares and evaluates the differences between OCLC's dial-up and World Wide Web FirstSearch access methods and their interfaces with the underlying databases. Also examines NetFirst, OCLC's new Internet catalog, the only Internet tracking database from a "traditional" database service. (Author/PEN)
Freeman, Misty Danielle
The purpose of this research was to explore Webmasters' behaviors and factors that influence Web accessibility at postsecondary institutions. Postsecondary institutions that were accredited by the Southern Association of Colleges and Schools were used as the population. The study was based on the theory of planned behavior, and Webmasters'…
VIGIL,FRANK; REEDER,ROXANA G.
The Factsheets web application was conceived out of the requirement to create, update, publish, and maintain a web site with dynamic research and development (R and D) content. Before creating the site, a requirements discovery process was done in order to accurately capture the purpose and functionality of the site. One of the high priority requirements for the site would be that no specialized training in web page authoring would be necessary. All functions of uploading, creation, and editing of factsheets needed to be accomplished by entering data directly into web form screens generated by the application. Another important requirement of the site was to allow for access to the factsheet web pages and data via the internal Sandia Restricted Network and Sandia Open Network based on the status of the input data. Important to the owners of the web site would be to allow the published factsheets to be accessible to all personnel within the department whether or not the sheets had completed the formal Review and Approval (R and A) process. Once the factsheets had gone through the formal review and approval process, they could then be published both internally and externally based on their individual publication status. An extended requirement and feature of the site would be to provide a keyword search capability to search through the factsheets. Also, since the site currently resides on both the internal and external networks, it would need to be registered with the Sandia search engines in order to allow access to the content of the site by the search engines. To date, all of the above requirements and features have been created and implemented in the Factsheet web application. These have been accomplished by the use of flat text databases, which are discussed in greater detail later in this paper.
Provides an overview of universal Web design and discusses guidelines developed by the Web access initiative (WAI) that focus on the access needs of Web users with disabilities. Highlights include barriers for people with print disabilities or motor impairments; the role of libraries; and resources to assist Web designers. (LRW)
National Archives and Records Administration — The OGIS Access System (OAS) provides case management, stakeholder collaboration, and public communications activities including a web presence via a web portal.
上超望; 赵呈领; 刘清堂; 王艳凤
Access control is one of the key technologies for secure and reliable value-added Web services composition. This paper briefly reviews the state of research on access control in Web services composition environments. We first discuss the challenges to secure Web services composition, then analyse the security problems of Web services composition from a hierarchical perspective. Next, we discuss research progress on the key access control technologies from three respects: access control architecture for Web services composition, consistent coordination of atomic security policies, and business process authorization. Finally, we conclude and point out the problems that should be resolved in future research.
The user interfaces of web-based online public access catalogs (OPACs) of university, special, public and national libraries in the Mercosur member countries (Argentina, Brazil, Paraguay, Uruguay) are analyzed to produce a diagnosis of the situation regarding bibliographic description, subject analysis, user help messages, and bibliographic data display. A quali-quantitative methodology is adopted; the checklist of system functionalities provided by Hildreth (1982) is updated and used as the data collection instrument. The resulting form of 38 closed questions allows observation of the frequency of appearance of the basic functionalities in four areas: Area I, operations control; Area II, search formulation control and access points; Area III, output control; and Area IV, user assistance: information and instruction. Data for 297 units are analyzed, stratified by software type, library type, and country. Chi-square, odds ratio, and multinomial logistic regression tests are applied to the results. The analysis corroborates the existence of significant differences in each of the strata and verifies that most of the surveyed OPACs offer only minimal features.
Pawlicki, T [UC San Diego Medical Center, La Jolla, CA (United States); Brown, D; Dunscombe, P [Tom Baker Cancer Centre, Calgary, AB (Canada); Mutic, S [Washington University School of Medicine, Saint Louis, MO (United States)
Le, Ha Thanh; Nguyen, Duy Cu; Briand, Lionel
This technical report details our semi-automated framework for the reverse engineering and testing of access control (AC) policies for web-based applications. In practice, AC specifications are often missing or poorly documented, leading to AC vulnerabilities. Our goal is to learn and recover AC policies from the implementation and assess them to find AC issues. Built on top of a suite of security tools, our framework automatically explores a system under test, mines domain input specification...
Winter, A. G.; Wildenhain, J; Tyers, M
Summary: The Biological General Repository for Interaction Datasets (BioGRID) representational state transfer (REST) service allows full URL-based access to curated protein and genetic interaction data at the BioGRID database. Appending URL parameters allows filtering of data by various attributes including gene names and identifiers, PubMed ID and evidence type. We also describe two visualization tools that interface with the REST service, the BiogridPlugin2 for Cytoscape and the BioGRID Web...
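The URL-parameter filtering described above can be sketched as a small query builder. The host below is a placeholder and the parameter names (`geneList`, `evidenceList`, `pubmedList`) are illustrative assumptions; consult the BioGRID REST documentation for the actual endpoint and parameters.

```python
# Sketch of building a filtered query URL against a REST service of the
# kind described above. BASE and all parameter names are placeholders.
from urllib.parse import urlencode

BASE = "https://webservice.example.org/interactions"  # hypothetical host

def build_query(gene_list, evidence=None, pubmed=None):
    """Assemble URL parameters that filter interactions by gene names,
    evidence type, and PubMed IDs, as the abstract describes."""
    params = {"geneList": "|".join(gene_list)}
    if evidence:
        params["evidenceList"] = "|".join(evidence)
    if pubmed:
        params["pubmedList"] = "|".join(pubmed)
    return BASE + "?" + urlencode(params)

url = build_query(["CDC27", "APC"], evidence=["Two-hybrid"])
```

Appending filters as URL parameters, rather than requiring a query language, is what makes such a REST service easy to drive from visualization clients like the Cytoscape plugin mentioned above.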
Hannak, Aniko; Sapiezynski, Piotr; Kakhki, Arash Molavi;
Web search is an integral part of our daily lives. Recently, there has been a trend of personalization in Web search, where different users receive different results for the same search query. The increasing personalization is leading to concerns about Filter Bubble effects, where certain users are simply unable to access information that the search engine's algorithm decides is irrelevant. Despite these concerns, there has been little quantification of the extent of personalization in Web search today, or of the user attributes that cause it. In light of this situation, we make three contributions. First, we develop a methodology for measuring personalization in Web search results. While conceptually simple, there are numerous details that our methodology must handle in order to accurately attribute differences in search results to personalization. Second, we apply our methodology to 200 users...
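One conceptually simple ingredient of such a measurement is quantifying how much two users' result lists for the same query differ. The Jaccard overlap below is a generic similarity measure offered as an illustration, not the specific metric of the paper (which must also control for rank, time, and noise).

```python
# Jaccard similarity between two users' result lists for the same query:
# 1.0 means identical result sets, lower values suggest personalization.
# (Rank order is ignored here; a rank-weighted variant would refine this.)
def jaccard(results_a, results_b):
    a, b = set(results_a), set(results_b)
    return len(a & b) / len(a | b) if a | b else 1.0

same = jaccard(["u1", "u2", "u3"], ["u1", "u2", "u3"])  # identical lists
diff = jaccard(["u1", "u2", "u3"], ["u1", "u4", "u5"])  # divergent lists
```

The hard part the abstract alludes to is attribution: low overlap alone does not prove personalization, since results also vary over time and across datacenters.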
Johnson, G. W.; Gonzalez, J.; Brady, J. J.; Gaylord, A.; Manley, W. F.; Cody, R.; Dover, M.; Score, R.; Garcia-Lavigne, D.; Tweedie, C. E.
ARMAP 3D allows users to dynamically interact with information about U.S. federally funded research projects in the Arctic. This virtual globe allows users to explore data maintained in the Arctic Research & Logistics Support System (ARLSS) database providing a very valuable visual tool for science management and logistical planning, ascertaining who is doing what type of research and where. Users can “fly to” study sites, view receding glaciers in 3D and access linked reports about specific projects. Custom “Search” tasks have been developed to query by researcher name, discipline, funding program, place names and year and display results on the globe with links to detailed reports. ARMAP 3D was created with ESRI’s free ArcGIS Explorer (AGX) new build 900 providing an updated application from build 500. AGX applications provide users the ability to integrate their own spatial data on various data layers provided by ArcOnline (http://resources.esri.com/arcgisonlineservices). Users can add many types of data including OGC web services without any special data translators or costly software. ARMAP 3D is part of the ARMAP suite (http://armap.org), a collection of applications that support Arctic science tools for users of various levels of technical ability to explore information about field-based research in the Arctic. ARMAP is funded by the National Science Foundation Office of Polar Programs Arctic Sciences Division and is a collaborative development effort between the Systems Ecology Lab at the University of Texas at El Paso, Nuna Technologies, the INSTAAR QGIS Laboratory, and CH2M HILL Polar Services.
For more course tutorials visit www.tutorialrank.com Tutorial Purchased: 3 Times, Rating: A+ WEB 434 Week 1 DQs and Summary WEB 434 Web Accessibility Standards Paper WEB 434 Week 2 DQs and Summary WEB 434 Web-Based Supply Chain Paper WEB 434 Week 3 DQs and Summary WEB 434 Affiliates Program Paper WEB 434 Week 4 DQs and Summary WEB 434 Virtual Organization Presentation WEB 434 Virtual Organization Proposal and Executi...
Jothi Venkateswaran C.
Users of the web have their own areas of interest. Given the tremendous growth of the web, it is very difficult to direct users to their pages of interest. Web usage mining techniques can be applied to study users' navigational behaviour based on their previous visit data. These navigational patterns can be extracted and used for web personalization or web site reorganization recommendations. Web usage mining techniques alone do not use the semantic knowledge of the web site for such navigational pattern discovery, but if an ontology is applied along with web usage techniques, it can improve the quality of the detected patterns. This research work aims to design a framework that integrates semantic knowledge into the web usage mining process and generates a refined website ontology that supports web personalization. As web pages are treated as ontology individuals, users' navigational behaviour over a certain period is considered the expected ontology refinement: the user profiles and the web site ontology are compared, and the variation between the two is proposed as the new, refined web site ontology. The web site ontology is built semi-automatically and evolves through this adaptation procedure. The results of implementing this recommendation system indicate that integrating semantic information and page access patterns yields more accurate recommendations.
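The comparison step above, mapping visited pages to ontology concepts and flagging where observed usage diverges from the ontology, can be sketched as follows. The page-to-concept mapping, the weights, and the threshold are all illustrative assumptions, not the paper's framework.

```python
# Sketch of comparing a usage-derived concept profile against a site
# ontology. PAGE_CONCEPT, the weights, and the threshold are invented
# for illustration.
from collections import Counter

PAGE_CONCEPT = {"/cart": "shopping", "/pay": "shopping", "/faq": "support"}

def concept_profile(visits):
    """Aggregate raw page visits into counts per ontology concept."""
    return Counter(PAGE_CONCEPT.get(p, "unknown") for p in visits)

def propose_refinements(profile, ontology_weights, threshold=2.0):
    """Concepts visited far more often than the ontology anticipates
    become candidates for refinement (e.g., promotion in the hierarchy)."""
    return [c for c, n in profile.items()
            if n > threshold * ontology_weights.get(c, 1)]

profile = concept_profile(["/cart", "/pay", "/cart", "/faq", "/cart"])
hot = propose_refinements(profile, {"shopping": 1, "support": 1})
```

Here heavy traffic to shopping pages, relative to the ontology's current weighting, is what surfaces "shopping" as the concept whose representation should be refined.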
Blodgett, D. L.; Walker, J. I.; Read, J. S.
The USGS Geo Data Portal (GDP) project started in 2010 with the goal of providing climate and landscape model output data to hydrology and ecology modelers in model-ready form. The system takes a user-specified collection of polygons and a gridded time series dataset and returns a time series of spatial statistics for each polygon. The GDP is designed for scalability and is generalized such that any data, hosted anywhere on the Internet adhering to the NetCDF-CF conventions, can be processed. Five years into the project, over 600 unique users from more than 200 organizations have used the system's web user interface and some datasets have been accessed thousands of times. In addition to the web interface, python and R client libraries have seen steady usage growth and several third-party web applications have been developed to use the GDP for easy data access. Here, we will present lessons learned and improvements made after five years of operation of the system's user interfaces, processing server, and data holdings. A vision for the future availability and processing of massive climate and landscape data will be outlined.
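The GDP's core operation, reducing a gridded time step to one statistic per polygon, can be sketched with a boolean mask standing in for the polygon-on-grid intersection. This is a conceptual illustration with toy data, not the GDP's implementation (which operates on NetCDF-CF datasets over the network).

```python
# Minimal zonal-statistics sketch: average the grid cells that fall
# inside a "polygon", here represented by a precomputed boolean mask.
def zonal_mean(grid, mask):
    """Mean of grid cells where mask is True; grid and mask are 2-D lists
    of equal shape."""
    vals = [v for row_g, row_m in zip(grid, mask)
            for v, inside in zip(row_g, row_m) if inside]
    return sum(vals) / len(vals)

grid = [[1.0, 2.0], [3.0, 4.0]]
mask = [[True, False], [True, False]]   # "polygon" covers the left column
mean = zonal_mean(grid, mask)
```

Repeating this reduction per polygon and per time step is what turns a multi-gigabyte gridded dataset into the compact, model-ready time series the GDP returns, which is why server-side processing beats downloading the grids.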
This viewgraph presentation gives an overview of the Access to Space website, including information on the 'tool boxes' available on the website for access opportunities, performance, interfaces, volume, environments, 'wish list' entry, and educational outreach.
National Aeronautics and Space Administration — TerraMetrics, Inc., proposes an SBIR Phase I R/R&D program to investigate and develop a key web services architecture that provides data processing, storage and...
Khelghati, Mohammadreza; Hiemstra, Djoerd; van Keulen, S.
With the increasing amount of data in deep web sources (hidden from general search engines behind web forms), accessing this data has gained more attention. In the algorithms applied for this purpose, it is knowledge of a data source's size that enables the algorithms to make accurate decisions about when to stop crawling or sampling processes, which can be very costly in some cases. The tendency to know the sizes of data sources is increased by the competition among businesses on the Web, in which th...
Qiao, Liang; Li, Ying; Chen, Xin; Yang, Sheng; Gao, Peng; Liu, Hongjun; Feng, Zhengquan; Nian, Yongjian; Qiu, Mingguo
Nowadays, mobile phones are replacing conventional PCs as users browse and search the Internet via their mobile handsets. Web-based services and information can be accessed from any location with relative ease using mobile devices such as mobile phones and personal digital assistants (PDAs). To access educational data on mobile devices, web page adaptation is needed, keeping in mind the security and quality of the data. Various researchers are working on adaptation techniques. Educational Web Miner aims to develop an interface that lets kids use mobile devices in a secure way. This paper presents a framework for adapting web pages as part of Educational Web Miner so that educational data can be accessed accurately, securely and concisely. The present paper is part of a project whose aim is to develop an interface for kids so that they can access current knowledge bases from mobile devices in a secure way and get accurate, concise information with ease. Related studies of adaptation techniques are also presented in this paper.
Monitoring of PMT dark noise in the BOREXINO neutrino detector is a procedure that indicates the condition of the detector. Based on a CAN industrial network, the top-level DeviceNet protocol and web visualization, a dark-noise monitoring system with 256 channels for the internal detector and for the external muon veto was created. The system is composed of a set of controllers that convert the PMT signals to frequency and transmit them over the CAN network. The software is a stack of DeviceNet protocols providing data collection and transport. Server-side scripts build the web pages of the user interface and the graphical visualization of the data.
Ajax and web services are a perfect match for developing web applications. Ajax has built-in abilities to access and manipulate XML data, the native format for almost all REST and SOAP web services. Using numerous examples, this document explores how to fit the pieces together. Examples demonstrate how to use Ajax to access publicly available web services from Yahoo! and Google. You'll also learn how to use web proxies to access data on remote servers and how to transform XML data using XSLT.
R. Suguna; D. Sharmila
Web usage mining is the application of web mining to discover useful patterns from the web in order to understand and analyze the behavior of web users and web-based applications. It is the emerging research trend for today's researchers. It deals entirely with web log files, which contain the users' website access information, and it is interesting to analyze and understand user behavior regarding web access. Web usage mining normally has three categories: 1. Preprocessing, 2. Pa...
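The preprocessing stage named above typically starts by parsing raw server log lines and grouping requests per client, the raw material for pattern discovery. The sketch below assumes Apache-style Common Log Format; the sample lines are invented.

```python
# Preprocessing sketch for web usage mining: parse Common Log Format
# lines and group the requested paths by client host.
import re

# host, two ignored fields, bracketed timestamp, quoted request line
LOG_RE = re.compile(r'(\S+) \S+ \S+ \[[^\]]+\] "(?:GET|POST) (\S+)[^"]*"')

def sessions_by_host(lines):
    sessions = {}
    for line in lines:
        m = LOG_RE.match(line)
        if m:
            host, path = m.groups()
            sessions.setdefault(host, []).append(path)
    return sessions

log = [
    '10.0.0.1 - - [01/Jan/2024:00:00:01 +0000] "GET /home HTTP/1.1" 200 512',
    '10.0.0.1 - - [01/Jan/2024:00:00:09 +0000] "GET /docs HTTP/1.1" 200 714',
]
s = sessions_by_host(log)
```

Real preprocessing adds further steps the abstract's category implies, such as splitting a host's requests into time-bounded sessions and filtering robot traffic.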
Lundegaard, Claus; Lamberth, K; Harndahl, M;
NetMHC-3.0 is trained on a large amount of quantitative peptide data, using both affinity data from the Immune Epitope Database and Analysis Resource (IEDB) and elution data from SYFPEITHI. The method generates high-accuracy predictions of major histocompatibility complex (MHC):peptide binding. The predictions are based on artificial neural networks trained on data from 55 MHC alleles (43 human and 12 non-human) and position-specific scoring matrices (PSSMs) for an additional 67 HLA alleles. As only the MHC class I prediction server is available, predictions are possible for peptides of length 8–11 for all 122 alleles. Artificial neural network predictions are given as actual IC50 values, whereas PSSM predictions are given as log-odds likelihood scores. The output is optionally available for download for easy post-processing. The training method underlying the server is the best available, and has...
Bari, Pranit; P M Chawan
The paper discusses web usage mining, which involves the automatic discovery of user access patterns from one or more web servers. This article provides a survey and analysis of current web usage mining systems and technologies. The paper also describes the procedure by which web usage mining of the data sets is carried out. Finally, the paper concludes with the areas in which web usage mining is implemented.
Lucila Maria Costi Santarosa
Eduquito, a digital/virtual learning environment developed by a NIEE/UFRGS team of researchers, seeks to support processes of socio-digital inclusion, and for that reason it was devised according to the accessibility and universal design principles systematized by WAI/W3C. We discuss the development of an accessible digital/virtual platform and the results of its use by people with special needs, revealing an ongoing process of verification and validation of the features and functionality of the Eduquito environment with regard to human diversity. We present and discuss two individual and collective authorship tools, the Multimedia Workshop and Bloguito, an accessible blog, new features of the Eduquito environment that emerge from applying the concept of pervasiveness, in order to establish literacy spaces and boost technological mediation for socio-digital inclusion in the Web 2.0 context.
Access control in Web applications has received wide attention. Because HTTP is stateless, the security design of applications is considerably harder. Spring Security provides a complete access control mechanism and thus strong support for secure application design. After introducing the overall Spring Security framework for controlling access to protected objects, the paper focuses on user authentication and on how to configure URL-based access authorization. It also briefly covers method-level security and the configuration and key points of securing JSP page content.
This paper briefly compares several solutions for accessing Web databases and argues that the best current solution uses ASP 5.0 as the scripting environment, Microsoft VFP 6.0 as the Web database, and Windows NT Server 4.0 as the platform. On that basis, it systematically introduces the methods and techniques of using ASP and ADO to access a Web database, illustrated with an example.
Ross, A.; Stackhouse, P. W.; Tisdale, B.; Tisdale, M.; Chandler, W.; Hoell, J. M., Jr.; Kusterer, J.
The NASA Langley Research Center Science Directorate and Atmospheric Science Data Center have initiated a pilot program to utilize Geographic Information System (GIS) tools that enable, generate and store climatological averages using spatial queries and calculations in a spatial database, resulting in greater accessibility of data for government agencies, industry and private sector individuals. The major objectives of this effort include 1) processing and reformulating current data to be consistent with ESRI and OpenGIS tools, 2) developing functions that improve analysis capability and produce "on-the-fly" data products, extending these beyond single locations to regional and global scales, 3) updating the current web sites to enable both web-based and mobile application displays, optimized for mobile platforms, 4) interacting with user communities in government and industry to test formats and usage, and 5) developing a series of metrics that allow for monitoring of progressive performance. Significant project results will include the development of Open Geospatial Consortium (OGC) compliant web services (WMS, WCS, WFS, WPS) that serve renewable energy and agricultural application products to users using GIS software and tools. Each data product and OGC service will be registered within ECHO, the Common Metadata Repository, the Geospatial Platform, and Data.gov to ensure the data are easily discoverable and provide data users with enhanced access to SSE data, parameters, services, and applications. This effort supports cross-agency and cross-organization interoperability of SSE data products and services by collaborating with DOI, NRCan, NREL, NCAR, and HOMER for requirements vetting and test-bed users before making the services available to the wider public.
Full Text Available The purpose of developing e-Government is to make public administrations more efficient and transparent and to allow citizens to more comfortably and effectively access information. Such benefits are even more important to people with a physical disability, allowing them to reduce waiting times in procedures and travel. However, it is not in widespread use among this group, as they not only harbor the same fears as other citizens, but also must cope with the barriers inherent to their disability. This research proposes a solution to help persons with disabilities access e-Government services. This work, in cooperation with the Spanish Federation of Spinal-Cord Injury Victims and the Severely Disabled, includes the development of a portal specially oriented towards people with disabilities to help them locate and access services offered by Spanish administrations. Use of the portal relies on digital authentication of users based on X.509, which are found in identity cards of Spanish citizens. However, an analysis of their use reveals that this feature constitutes a significant barrier to accessibility. This paper proposes a more accessible solution using a USB cryptographic token that can conceal from users all complexity entailed in access to certificate-based applications, while assuring the required security.
Stahl, Douglas C; Evans, Richard M; Afrin, Lawrence B; DeTeresa, Richard M; Ko, Dave; Mitchell, Kevin
Electronic discovery of the clinical trials being performed at a specific research center is a challenging task, which presently requires manual review of the center's locally maintained databases or web pages of protocol listings. Near real-time automated discovery of available trials would increase the efficiency and effectiveness of clinical trial searching, and would facilitate the development of new services for information providers and consumers. Automated discovery efforts to date have been hindered by issues such as disparate database schemas, vocabularies, and insufficient standards for easy intersystem exchange of high-level data, but adequate infrastructure now exists that make possible the development of applications for near real-time automated discovery of trials. This paper describes the current state (design and implementation) of the Web Services Specification for Publication and Discovery of Clinical Trials as developed by the Technology Task Force of the Association of American Cancer Institutes. The paper then briefly discusses a prototype web service-based application that implements the specification. Directions for evolution of this specification are also discussed. PMID:14728248
Elsa Barber; Silvia Pisano; Sandra Romagnoli; Verónica Parsiale; Gabriela De Pedro; Carolina Gregui
The user interfaces of the web-based online public access catalogs (OPACs) of university, special, public, and national libraries in the Mercosur countries (Argentina, Brazil, Paraguay, Uruguay) are analyzed in order to produce a diagnosis of the state of: bibliographic description, subject analysis, user help messages, and display of bibliographic data. A quali-quantitative methodology is adopted, using as a data-collection instrument...
U.S. Environmental Protection Agency — EPA's Envirofacts Website hosts web enabled tools accessing the Envirofacts Data Warehouse and the Internet to provide a single point of access to select EPA...
Spinuso, A.; Trani, L.; Rives, S.; Thomy, P.; Euchner, F.; Schorlemmer, D.; Saul, J.; Heinloo, A.; Bossu, R.; van Eck, T.
The Network of Research Infrastructures for European Seismology (NERIES) is a European Commission (EC) project whose focus is networking seismological observatories and research institutes into one integrated European infrastructure that provides access to data and data products for research. Seismological institutes and organizations in European and Mediterranean countries maintain large, geographically distributed data archives; this scenario therefore suggested a design approach based on the concept of an internet service oriented architecture (SOA) to establish a cyberinfrastructure for distributed and heterogeneous data streams and services. Moreover, one of the goals of NERIES is to design and develop a Web portal that acts as the uppermost layer of the infrastructure and provides rendering capabilities for the underlying sets of data. The Web services that are currently being designed and implemented will deliver data that has been adapted to appropriate formats. The parametric information about a seismic event is delivered using a seismology-specific Extensible Markup Language (XML) format called QuakeML (https://quake.ethz.ch/quakeml), which has been formalized and implemented in coordination with global earthquake-information agencies. Uniform Resource Identifiers (URIs) are used to assign identifiers to (1) seismic-event parameters described by QuakeML, and (2) generic resources, for example, authorities, location providers, location methods, adopted software, and so on, described by use of a data model constructed with the resource description framework (RDF) and accessible as a service. The European-Mediterranean Seismological Center (EMSC) has implemented a unique event identifier (UNID) that will create the seismic event URI used by the QuakeML data model. Access to data such as broadband waveforms, accelerometric data and station inventories will also be provided through a set of Web services that will wrap the middleware used by the
Incarnato, Danny; Neri, Francesco; Diamanti, Daniela; Oliviero, Salvatore
The prediction of pairing between microRNAs (miRNAs) and the miRNA recognition elements (MREs) on mRNAs is expected to be an important tool for understanding gene regulation. Here, we show that mRNAs that contain Pumilio recognition elements (PRE) in the proximity of predicted miRNA-binding sites are more likely to form stable secondary structures within their 3'-UTR, and we demonstrated using a PUM1 and PUM2 double knockdown that Pumilio proteins are general regulators of miRNA accessibility. On the basis of these findings, we developed a computational method for predicting miRNA targets that accounts for the presence of PRE in the proximity of seed-match sequences within poorly accessible structures. Moreover, we implement the miRNA-MRE duplex pairing as a two-step model, which better fits the available structural data. This algorithm, called MREdictor, allows for the identification of miRNA targets in poorly accessible regions and is not restricted to a perfect seed-match; these features are not present in other computational prediction methods. PMID:23863844
Do, Nhan; Marinkovich, Andre; Koisch, John; Wheeler, Gary
Our clinical providers spend an estimated four hours weekly answering phone messages from patients. Our nurses spend five to ten hours weekly on returning phone calls. Most of this time is spent conveying recent clinical results, reviewing with patients the discharge instructions such as consults or studies ordered during the office visits, and handling patients' requests for medication renewals. Over time this will lead to greater patients' dissatisfaction because of lengthy waiting time and lack of timely access to their medical information. This would also lead to greater nursing and providers' dissatisfaction because of unreasonable work load. PMID:14728335
Wan, Miao; Jönsson, Arne; Wang, Cong; Li, Lixiang; Yang, Yixian
Users of a Web site usually perform their interest-oriented actions by clicking or visiting Web pages, which are traced in access log files. Clustering Web user access patterns may capture common user interests to a Web site, and in turn, build user profiles for advanced Web applications, such as Web caching and prefetching. The conventional Web usage mining techniques for clustering Web user sessions can discover usage patterns directly, but cannot identify the latent factors or hidden relat...
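The session-clustering idea described above can be sketched minimally in Python. This is an illustrative greedy grouping of page-set sessions by Jaccard similarity; the threshold and strategy are assumptions for demonstration, not the latent-factor technique the abstract alludes to.

```python
def jaccard(a, b):
    """Overlap of two sessions, each modeled as a set of visited pages."""
    return len(a & b) / len(a | b)

def cluster_sessions(sessions, threshold=0.5):
    """Greedy clustering: a session joins the first cluster whose
    page union is similar enough, otherwise it starts a new cluster."""
    clusters = []
    for s in sessions:
        for c in clusters:
            if jaccard(s, set.union(*c)) >= threshold:
                c.append(s)
                break
        else:
            clusters.append([s])
    return clusters
```

For example, sessions `{'news','sports'}` and `{'news','sports','weather'}` end up in one cluster, while `{'login','profile'}` starts another; each cluster then suggests a common-interest user profile for caching or prefetching.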
The need for social inclusion, informed choice and the facilitation of independent living for people with learning disabilities (LD) is being emphasised ever more by government, professionals, academics and, indeed, by people with LD themselves, particularly in self-advocacy groups. Achieving goals around inclusion and autonomy requires access to…
Perrucci, GP; Fitzek, FHP; Zhang, Qi;
This paper advocates a novel approach for mobile web browsing based on cooperation among wireless devices within close proximity operating in a cellular environment. In the actual state of the art, mobile phones can access the web using different cellular technologies. However, the supported data......-range links can then be used for cooperative mobile web browsing. By implementing the cooperative web browsing on commercial mobile phones, it will be shown that better performance is achieved in terms of increased data rate and therefore reduced access times, resulting in a significantly enhanced web...
Fels, Deborah I.; Richards, Jan; Hardman, Jim; Lee, Daniel G.
The World Wide Web has changed the way people interact. It has also become an important equalizer of information access for many social sectors. However, for many people, including some sign language users, accessing the Web can be difficult. For some, it not only presents another barrier to overcome but has left them without cultural equality. The…
The steady growth of the World Wide Web raises challenges regarding the preservation of meaningful Web data. Tools used currently by Web archivists blindly crawl and store Web pages found while crawling, disregarding the kind of Web site currently accessed (which leads to suboptimal crawling strategies) and whatever structured content is contained in Web pages (which results in page-level archives whose content is hard to exploit). We focus in this PhD work on the cr...
Access is a small database widely used in lightweight Web applications; in the Internet industry a large number of developers build Web applications on it with scripting languages such as ASP and PHP. Its database engines JET and ACE appeared one after another, but few people have paid attention to the impact of the different engines on database performance, and there are few studies of this in practical applications. This paper uses simple test methods and programs to confirm the differences between the old and new versions of the engine, and through actual testing reaches a conclusion contrary to common understanding: the old Access database engine processes requests faster than the new one; the improvements lie in stability rather than speed.
Ichihara, Yasuyo G.
Internet imaging is used as interactive visual communication. It differs from other electronic imaging fields because the images are transported from one client to many others. If you and I each had different color vision, we might see Internet imaging differently. So what do you see in a digital color dot picture such as the Ishihara pseudoisochromatic plates? The Ishihara pseudoisochromatic test is the most widely used screening test for red-green color deficiency. The full version contains 38 plates. Plates 18-21 are hidden digit designs. For example, plate 20 has a hidden digit design, 45, that cannot be seen by normal trichromats but can be distinguished by most color deficient observers. In this study, we present a new digital color palette: the web accessibility palette, with which the same information in Internet imaging can be seen correctly by a person with any color vision. For this study, we measured the Ishihara pseudoisochromatic test. We used the new Minolta 2D colorimeter system, CL1040i, which can measure all pixels in a 4 cm x 4 cm square. From the results, color groups of 8 to 10 colors in the Ishihara plates can be seen on isochromatic lines of CIE-xy color space. On each plate, the form of a number is composed of 4 colors and the background is composed of the remaining 5 colors. For normal trichromats, it is difficult to find the difference between the 4-color group that makes up the form of the number and the 5-color group of the background. We also found that for normal trichromats, highly salient colors like orange and red are included in the warm color group and are distinguished from the cool color group of blue, green and gray. From the results of our analysis of the Ishihara pseudoisochromatic test, we suggest a web accessibility palette consisting of 4 colors.
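Distinguishability for all observers is closely related to the luminance-contrast criterion later standardized in WCAG. As a hedged aside (the sketch below implements the WCAG 2.x contrast-ratio formula, not the palette-construction method of the study above):

```python
def relative_luminance(rgb):
    """WCAG 2.x relative luminance of an 8-bit sRGB color (r, g, b)."""
    def linearize(c):
        c /= 255.0
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = (linearize(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg, bg):
    """Contrast ratio between two colors, ranging from 1:1 to 21:1."""
    lighter, darker = sorted(
        (relative_luminance(fg), relative_luminance(bg)), reverse=True)
    return (lighter + 0.05) / (darker + 0.05)
```

Black on white yields the maximum ratio of 21:1; WCAG level AA requires at least 4.5:1 for normal text.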
Purpose: To build an infrastructure that gives on-call radiologists and external users teleradiological access to the HTML-based image distribution system inside the hospital via the internet. In addition, no investment costs should arise on the user side, and the image data should be pseudonymized before transfer using cryptographic techniques. Materials and Methods: A pure HTML-based system manages the image distribution inside the hospital, and an open source project extends this system through a secure gateway outside the firewall of the hospital. The gateway handles the communication between the external users and the HTML server within the network of the hospital. A second firewall is installed between the gateway and the external users and builds up a virtual private network (VPN). A connection between the gateway and an external user is only acknowledged if the computers involved authenticate each other via certificates and the external user authenticates via a multi-stage password system. All data are transferred encrypted. External users only get access to images that have previously been renamed to a pseudonym by automated processing. Results: With an ADSL internet access, external users achieve an image load frequency of 0.4 CT images per second. More than 90% of the delay during image transfer results from security checks within the firewalls. Data passing the gateway induce no measurable delay. (orig.)
B. Madasamy; Dr. J. Jebmalar Tamilselvi
Web mining is defined as discovering knowledge from hypertext and the World Wide Web. The World Wide Web is one of the fastest-growing areas of intelligence gathering. Nowadays there are billions of web pages and HTML archives accessible via the internet, and the number is still increasing. However, considering the impressive diversity of the web, retrieving interesting web-based content has become a very complex task. The large amount of data heterogeneity, complex format, high dimensional...
Boufkhad, Yacine; Viennot, Laurent
The web is now de facto the first place to publish data. However, retrieving the whole database represented by the web appears almost impossible. Some parts are known to be hard to discover automatically, giving rise to the so-called hidden or invisible web. On the one hand, search engines try to index most of the web. Almost all related work is based on discovering the web by crawling. This paper is devoted to estimating how accurate the view of the web obtained by crawling is. Our approach is...
Web 2.0 is a highly accessible introductory text examining all the crucial discussions and issues which surround the changing nature of the World Wide Web. It not only contextualises the Web 2.0 within the history of the Web, but also goes on to explore its position within the broader dispositif of emerging media technologies. The book uncovers the connections between diverse media technologies including mobile smart phones, hand-held multimedia players, "netbooks" and electronic book readers such as the Amazon Kindle, all of which are made possible only by the Web 2.0. In addition, Web 2.0 m
Jones, P.; Binns, D.; McMenamin, C.; McAnulla, C.; Hunter, S.
The InterPro BioMart provides users with query-optimized access to predictions of family classification, protein domains and functional sites, based on a broad spectrum of integrated computational models (‘signatures’) that are generated by the InterPro member databases: Gene3D, HAMAP, PANTHER, Pfam, PIRSF, PRINTS, ProDom, PROSITE, SMART, SUPERFAMILY and TIGRFAMs. These predictions are provided for all protein sequences from both the UniProt Knowledge Base and the UniParc protein sequence arc...
Lun Cai; Jing-Ling Liu; Xi Wang
In web services testing, accurately accessing the interaction content of the services under test and their status information is a key issue of system design and implementation. A non-intrusive solution based on Axis2 is presented to overcome the difficulty of information retrieval in web service testing. It can be plugged into either the server side or the client side to test pre-deployed or deployed web services. Moreover, it provides a monitoring interface and a corresponding subscription/publication mechanism for users, based on web services, to support quality assurance for service-oriented architecture (SOA) application services.
Su-Hua Wang; Che-Hung Lin
Due to increasing popularity of the World Wide Web, web-based systems are widely used. Most corporate web sites try to enhance their usability by providing artistic web presentations. However, the design of web sites is not judged solely on an artistic basis. Two of the most important design criteria for web sites are access to web content and navigation architecture. This research proposes a platform for automatically evaluating the quality of web navigation architecture. Because of the hier...
Mirtaheri, Seyed M.; Dinçktürk, Mustafa Emre; Hooshmand, Salman; Bochmann, Gregor v.; Jourdan, Guy-Vincent; Onut, Iosif Viorel
Web crawlers visit internet applications, collect data, and learn about new web pages from visited pages. Web crawlers have a long and interesting history. Early web crawlers collected statistics about the web. In addition to collecting statistics about the web and indexing the applications for search engines, modern crawlers can be used to perform accessibility and vulnerability checks on the application. Quick expansion of the web, and the complexity added to web applications have made the ...
Casteleyn, Sven; Daniel, Florian; Dolog, Peter;
Nowadays, Web applications are almost omnipresent. The Web has become a platform not only for information delivery, but also for eCommerce systems, social networks, mobile services, and distributed learning environments. Engineering Web applications involves many intrinsic challenges due...... design and implementation to deployment and maintenance. They stress the importance of models in Web application development, and they compare well-known Web-specific development processes like WebML, WSDM and OOHDM to traditional software development approaches like the waterfall model and the spiral...... model. Important problem areas inherent to the Web, like localization, personalization, accessibility, and usage analysis, are dealt with in detail, and a final chapter provides both a description of and an outlook on recent Semantic Web and Web 2.0 developments. Overall, their book delivers...
This article first briefly introduces the technical background of P2P networks and Web Services, and then proposes the idea of integrating P2P networks with Web Services. To this end it introduces the concept of a Web Service Broker, enabling peers in a P2P network to transparently access Web Services.
The field of planetary sciences has greatly expanded in recent years with space missions orbiting around most of the planets of our Solar System. The growing amount and wealth of data available make it difficult for scientists to exploit data coming from many sources that can initially be heterogeneous in their organization, description and format. It is an important objective of the Europlanet-RI (supported by EU within FP7) to add value to space missions by significantly contributing to the effective scientific exploitation of collected data; to enable space researchers to take full advantage of the potential value of data sets. To this end and to enhance the science return from space missions, innovative tools have to be developed and offered to the community. AMDA (Automated Multi-Dataset Analysis, http://cdpp-amda.cesr.fr/) is a web-based facility developed at CDPP Toulouse in France (http://cdpp.cesr.fr) for on line analysis of space physics data (heliosphere, magnetospheres, planetary environments) coming from either its local database or distant ones. AMDA has been recently integrated as a service to the scientific community for the Plasma Physics thematic node of the Europlanet-RI IDIS (Integrated and Distributed Information Service, http://www.europlanet-idis.fi/) activities, in close cooperation with IWF Graz (http://europlanet-plasmanode.oeaw.ac.at/index.php?id=9). We will report the status of our current technical and scientific efforts to integrate in the local database of AMDA various planetary plasma datasets (at Mercury, Venus, Mars, Earth and Moon, Jupiter, Saturn) from heterogeneous sources, including NASA/Planetary Data System (http://ppi.pds.nasa.gov/). We will also present our prototype Virtual Observatory activities to connect the AMDA tool to the IVOA Aladin astrophysical tool to enable pluridisciplinary studies of giant planet auroral emissions. This presentation will be done on behalf of the CDPP Team and Europlanet-RI IDIS plasma node
The semantic Web is a technology at the service of knowledge, aimed at accessibility and the sharing of content and facilitating interoperability between different systems; as such it is one of the nine key technological pillars of TIC (technologies for information and communication) within the third theme, the specific cooperation programme of the seventh framework programme for research and development (7°PQRS, 2007-2013). As a system it seeks to overcome the overload or excess of irrelevant i...
Offline Web applications are increasingly popular. The possibility of having both the advantages of Web applications and those of traditional desktop applications is exciting. An offline Web application can be accessed from all computers, with any operating system, and it can store information locally, giving users the opportunity to use the application when they do not have Internet access. The concept of offline Web applications is tempting, but it is important to integrate securit...
This dissertation addresses issues for the efficient access to Web databases and services. We propose a distributed ontology for a meaningful organization of and efficient access to Web databases. Next, we dedicate most of our work on presenting a comprehensive query infrastructure for the emerging concept of Web services. The core of this query infrastructure is to enable the efficient delivery of Web services b...
P. V. G. S. Mudiraj; B. Jabber; K. David Raju
Full Text Available Web usage mining is a main research area in Web mining, focused on learning about Web users and their interactions with Web sites. The motive of the mining is to find users' access patterns automatically and quickly from the vast Web log data, such as frequent access paths, frequent access page groups and user clusters. Through web usage mining, the server log, registration information and other related information left by users provide a foundation for the decision making of organizations. This article provides a survey and analysis of current Web usage mining systems and technologies. There are generally three tasks in Web usage mining: preprocessing, pattern analysis and knowledge discovery. Preprocessing cleans the server log file by removing entries such as errors or failures and repeated requests for the same URL from the same host, etc. The main task of pattern analysis is to filter out uninteresting information and to visualize and interpret the interesting patterns for users. The statistics collected from the log file can help to discover knowledge. This knowledge can be used to classify users as excellent, medium or weak and, based on hit counts, to classify the web pages of the site in the same way. The design of the website is then restructured based on user behavior and hit counts, which provides quick response to web users, saves memory space on servers and thus reduces HTTP requests and bandwidth utilization. This paper addresses challenges in the three phases of Web usage mining, along with Web structure mining. It also discusses an application of WUM, an online recommender system that dynamically generates links to pages that have not yet been visited by a user and might be of potential interest to him. Unlike the recommender systems proposed so far, ONLINE MINER does not make use of any off-line component and is able to manage Web sites made up of dynamically generated pages.
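The preprocessing step described above can be sketched in a few lines of Python. This is a minimal illustration assuming Apache Common Log Format; the filtering rules (drop 4xx/5xx entries and an immediately repeated request for the same URL from the same host) follow the abstract, while the field layout is an assumption.

```python
import re
from collections import Counter

# Common Log Format: host ident user [timestamp] "method url proto" status size
LOG_RE = re.compile(
    r'(?P<host>\S+) \S+ \S+ \[(?P<ts>[^\]]+)\] '
    r'"(?P<method>\S+) (?P<url>\S+) \S+" (?P<status>\d{3}) \S+'
)

def preprocess(lines):
    """Keep successful requests, dropping error/failure entries and an
    immediately repeated request for the same URL from the same host."""
    kept, last_seen = [], {}
    for line in lines:
        m = LOG_RE.match(line)
        if not m:
            continue                      # malformed entry
        host, url, status = m['host'], m['url'], int(m['status'])
        if status >= 400:
            continue                      # error or failure entry
        if last_seen.get(host) == url:
            continue                      # repeated request from same host
        last_seen[host] = url
        kept.append((host, url))
    return kept

def hit_counts(kept):
    """Hit count per page: the statistic used to rank pages and users."""
    return Counter(url for _, url in kept)
```

The resulting per-page hit counts are the raw material for the excellent/medium/weak classification of pages mentioned above.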
Percivall, George; Plesea, Lucian
The WMS Global Mosaic provides access to imagery of the global landmass using an open standard for web mapping. The seamless image is a mosaic of Landsat 7 scenes; geographically-accurate with 30 and 15 meter resolutions. By using the OpenGIS Web Map Service (WMS) interface, any organization can use the global mosaic as a layer in their geospatial applications. Based on a trade study, an implementation approach was chosen that extends a previously developed WMS hosting a Landsat 5 CONUS mosaic developed by JPL. The WMS Global Mosaic supports the NASA Geospatial Interoperability Office goal of providing an integrated digital representation of the Earth, widely accessible for humanity's critical decisions.
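As an illustration of how any organization would request such a layer through the OGC WMS interface, the sketch below builds a standard WMS 1.3.0 GetMap URL. The endpoint and layer name are hypothetical placeholders, not the actual JPL service addresses.

```python
from urllib.parse import urlencode

def wms_getmap_url(endpoint, layer, bbox, width, height,
                   crs="EPSG:4326", fmt="image/jpeg"):
    """Build an OGC WMS 1.3.0 GetMap request URL.

    bbox is (min, min, max, max) in the axis order defined by the CRS.
    """
    params = {
        "SERVICE": "WMS",
        "VERSION": "1.3.0",
        "REQUEST": "GetMap",
        "LAYERS": layer,
        "STYLES": "",
        "CRS": crs,
        "BBOX": ",".join(str(v) for v in bbox),
        "WIDTH": width,
        "HEIGHT": height,
        "FORMAT": fmt,
    }
    return endpoint + "?" + urlencode(params)

# Hypothetical endpoint and layer name, for illustration only.
url = wms_getmap_url("https://example.org/wms", "global_mosaic",
                     (-90, -180, 90, 180), 1024, 512)
```

Any WMS-capable GIS client issues requests of exactly this shape, which is what makes the mosaic usable as a layer in third-party applications.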
S. Oswalt Manoj
The World Wide Web has many online Web databases that can be searched through their web query interfaces. Deep Web content is accessed by queries submitted to Web databases, and the returned data records are enwrapped in dynamically generated Web pages. Extracting structured data from deep Web pages is a challenging task due to the underlying complicated structures of such pages. Until now, a large number of techniques have been proposed to address this problem, but all of them have inherent l...
Rodriguez, Jose Manuel; Carro, Angel; Valencia, Alfonso; Tress, Michael L.
This paper introduces the APPRIS WebServer (http://appris.bioinfo.cnio.es) and WebServices (http://apprisws.bioinfo.cnio.es). Both the web servers and the web services are based around the APPRIS Database, a database that presently houses annotations of splice isoforms for five different vertebrate genomes. The APPRIS WebServer and WebServices provide access to the computational methods implemented in the APPRIS Database, while the APPRIS WebServices also allows retrieval of the annotations. ...
Dolog, Peter; Nejdl, Wolfgang
Ontologies and reasoning are the key terms brought into focus by the semantic web community. Formal representation of ontologies in a common data model on the web can be taken as a foundation for adaptive web technologies as well. This chapter describes how ontologies shared on the semantic web...... provide conceptualization for the links which are a main vehicle to access information on the web. The subject domain ontologies serve as constraints for generating only those links which are relevant for the domain a user is currently interested in. Furthermore, user model ontologies provide additional...... means for deciding which links to show, annotate, hide, generate, and reorder. The semantic web technologies provide means to formalize the domain ontologies and metadata created from them. The formalization enables reasoning for personalization decisions. This chapter describes which components...
Full Text Available Abstract Background Recent discoveries concerning novel functions of RNA, such as RNA interference, have contributed towards the growing importance of the field. In this respect, a deeper knowledge of complex three-dimensional RNA structures is essential to understand their new biological functions. A number of bioinformatic tools have been proposed to explore two major structural databases (PDB, NDB) in order to analyze various aspects of RNA tertiary structures. One of these tools is RNA FRABASE 1.0, the first web-accessible database with an engine for automatic search of 3D fragments within PDB-derived RNA structures. This search is based upon a user-defined RNA secondary structure pattern. In this paper, we present and discuss RNA FRABASE 2.0. This second version of the system represents a major extension of the tool in terms of new data and a wide spectrum of novel functionalities. An intuitively operated web server platform enables very fast, user-tailored search of three-dimensional RNA fragments, their multi-parameter conformational analysis and visualization. Description RNA FRABASE 2.0 stores information on 1565 PDB-deposited RNA structures, including all NMR models. The RNA FRABASE 2.0 search engine algorithms operate on the database of RNA sequences and the new library of RNA secondary structures, coded in the dot-bracket format extended to hold multi-stranded structures and to cover residues whose coordinates are missing in the PDB files. The library of RNA secondary structures (and their graphics) is made available. A high level of efficiency of the 3D search has been achieved by introducing novel tools to formulate advanced search patterns and to screen highly populated tertiary structure elements. RNA FRABASE 2.0 also stores data and conformational parameters in order to provide "on the spot" structural filters to explore the three-dimensional RNA structures. An instant visualization of the 3D RNA
Caters to an under-served niche market of small and medium-sized web consulting projects. Eases people's project management pain. Uses a clear, simple, and accessible style that eschews theory and hates walls of text.
A database is a collection of many types of occurrences of logical records, containing relationships between records and elementary data aggregates. A database management system (DBMS) is a set of programs for creating and operating a database. In theory, any relational DBMS can be used to store the data needed by a Web server. In practice, however, simple DBMSs such as FoxPro or Access are not suitable for Web sites that are used intensively. For large-scale Web applications...
VisPort: Web-Based Access to Community-Specific Visualization Functionality [Shedding New Light on Exploding Stars: Visualization for TeraScale Simulation of Neutrino-Driven Supernovae (Final Technical Report)]
Baker, M Pauline
The VisPort visualization portal is an experiment in providing Web-based access to visualization functionality from any place and at any time. VisPort adopts a service-oriented architecture to encapsulate visualization functionality and to support remote access. Users employ browser-based client applications to choose data and services, set parameters, and launch visualization jobs. Visualization products, typically images or movies, are viewed in the user's standard Web browser. VisPort emphasizes visualization solutions customized for specific application communities. Finally, VisPort relies heavily on XML, and introduces the notion of visualization informatics - the formalization and specialization of information related to the process and products of visualization.
With the further development of network technology, Web Services technology is gradually being applied to various types of management systems. Because Web services are component-model independent, platform independent, and programming-language independent, they can be used for system integration. This paper presents an access control management system for student apartments based on Web services, describing its design in terms of system architecture, system design patterns, and the key Web service technologies involved. The data of a student apartment access control management system built on Web services can be called directly by other applications, supporting the integrated construction of university information systems.
Stackhouse, P. W.; Barnett, A. J.; Tisdale, M.; Tisdale, B.; Chandler, W.; Hoell, J. M., Jr.; Westberg, D. J.; Quam, B.
The NASA LaRC Atmospheric Science Data Center has deployed a beta version of an existing geophysical parameter website employing off-the-shelf Geographic Information System (GIS) tools. The revitalized web portal, entitled "Surface meteorological and Solar Energy" (SSE - https://eosweb.larc.nasa.gov/sse/), has been supporting an estimated 175,000 users with baseline solar and meteorological parameters, as well as calculated parameters that enable feasibility studies for a wide range of renewable energy systems, particularly those featuring solar energy technologies. The GIS tools generate and store climatological averages using spatial queries and calculations (by parameter for the globe) in a spatial database, resulting in greater accessibility for government agencies, industry, and individuals. The data parameters are produced from NASA science projects and reformulated specifically for the renewable energy industry and other applications. This first version includes: 1) a processed and reformulated set of baseline data parameters consistent with Esri and open GIS tools, 2) a limited set of Python-based functions to compute additional parameters "on-the-fly" from the baseline data products, 3) updates to the current web sites to enable web-based display of these parameters for plotting and analysis, and 4) output of data parameters in geoTiff, ASCII and netCDF data formats. The beta version is being actively reviewed through interaction with a group of collaborators from government and industry in order to test web site usability, display tools and features, and output data formats. This presentation provides an overview of this project and the current version of the new SSE-GIS web capabilities through to end usage. This project supports cross-agency and cross-organization interoperability and access to NASA SSE data products and OGC-compliant web services, and also aims to provide mobile platform
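The "on-the-fly" Python functions mentioned for the SSE-GIS portal can be pictured with a small hedged example. The clearness index computed below is an illustrative derived solar parameter under assumed inputs, not necessarily one of SSE's own functions.

```python
# Hedged sketch of an "on-the-fly" derived parameter computed from baseline
# irradiation data. The clearness index is an illustrative choice; SSE's
# actual function set and parameter names are not reproduced here.

def clearness_index(ghi_kwh_m2: float, toa_kwh_m2: float) -> float:
    """Ratio of surface global horizontal irradiation to top-of-atmosphere
    irradiation; dimensionless, typically 0-0.8 for daily averages."""
    if toa_kwh_m2 <= 0:
        raise ValueError("TOA irradiation must be positive")
    return ghi_kwh_m2 / toa_kwh_m2

kt = clearness_index(5.0, 10.0)  # 5 kWh/m2 at surface, 10 kWh/m2 at TOA
```

Such a function would run per grid cell against the baseline spatial database, so only the baseline parameters need to be stored.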
The web site is a library's most important feature. Patrons use the web site for numerous functions, such as renewing materials, placing holds, requesting information, and accessing databases. The homepage is the place they turn to look up the hours, branch locations, policies, and events. Whether users are at work, at home, in a building, or on…
B.Hemanth kumar,; Prof. M.Surendra Prasad Babu
Accessing web resources (information) is an essential facility provided by web applications to everybody. The Semantic Web is one of the systems that provides a facility to access resources through web service applications. The Semantic Web and Web Services are new, emerging web-based technologies. An automatic information processing system can be developed by using the Semantic Web and web services, each having its own contribution within the context of developing web-based information systems and ap...
Full Text Available As the variety of user-accessible data and information escalates, so does the range of Information Retrieval (IR) practices available to meet the challenges this creates. By expanding its applicability and broadening its use, by integrating technologies and methods, and as long as the quest for the perfectly accurate system continues, it is quite possible and likely that Information Retrieval will become one of the key technology areas for current and future research and practice. This paper expounds recent research advances in the area of Contextual Information Retrieval. It tracks and investigates the evolution of retrieval models from the pre-web (traditional Information Retrieval) paradigm and Web information retrieval to the most prominent interactive field of contextual information retrieval, focusing on developing models and strategies of contextual IR.
Ketul Patel; Dr. A.R. Patel
Traffic on the World Wide Web is increasing rapidly, and a huge amount of data is generated by users' numerous interactions with web sites. Web Usage Mining is the application of data mining techniques to discover useful and interesting patterns from web usage data. It helps identify frequently accessed pages, predict user navigation, improve web site structure, and so on. Applying Web Usage Mining involves several steps. This paper discusses the process of Web Usage Mining con...
Blin, Kai; Pedersen, Lasse Ebdrup; Weber, Tilmann;
allow designing sgRNAs for non-model organisms exist. Here, we present CRISPy-web (http://crispy.secondarymetabolites.org/), an easy-to-use web tool based on CRISPy to design sgRNAs for any user-provided microbial genome. CRISPy-web allows researchers to interactively select a region of their genome of interest to scan for possible sgRNAs. After checks for potential off-target matches, the resulting sgRNA sequences are displayed graphically and can be exported to text files. All steps and information are accessible from a web browser without the requirement to install and use command line scripts.
Zaretzki, J.; Bergeron, C.; Huang, T.-W.;
Regioselectivity-WebPredictor (RS-WebPredictor) is a server that predicts isozyme-specific cytochrome P450 (CYP)-mediated sites of metabolism (SOMs) on drug-like molecules. Predictions may be made for the promiscuous 2C9, 2D6 and 3A4 CYP isozymes, as well as CYPs 1A2, 2A6, 2B6, 2C8, 2C19 and 2E1. RS-WebPredictor is the first freely accessible server that predicts the regioselectivity of the last six isozymes. Server execution time is fast, taking on average 2s to encode a submitted molecule and 1s to apply a given model, allowing for high-throughput use in lead optimization projects. Availability: RS-WebPredictor is accessible for free use at http://reccr.chem.rpi.edu/Software/RS-WebPredictor. © 2013 The Author.
In order for decision makers to make accurate decisions efficiently, pertinent information must be accessed easily and quickly. Component-based architectures are suitable for creating today's three-tiered client-server systems. Experts in each particular field can develop each tier independently. The first tier can be built using HTML and web browsers. The middle tier can be implemented using existing server-side programming technologies that enable dynamic web page creation. The third tie...
Cömert, Çetin; Akıncı, Halil
Web services have emerged as the next generation of Web-based technology for interoperability. Web services are modular, self-describing, self-contained applications that are accessible over the Internet. Various communities that either produce or use Information and Communication Technologies are working on web services nowadays. There are already a number of software companies providing tools to develop and deploy Web Services. In the Web Services view, every different system or component o...
Saba, Luca; Banchhor, Sumit K; Suri, Harman S; Londhe, Narendra D; Araki, Tadashi; Ikeda, Nobutaka; Viskovic, Klaudija; Shafique, Shoaib; Laird, John R; Gupta, Ajay; Nicolaides, Andrew; Suri, Jasjit S
Statistical tests were performed to demonstrate the consistency, reliability and accuracy of the results. The proposed AtheroCloud™ system is a completely reliable, automated, fast (3-5 seconds depending upon the image size, with an internet speed of 180 Mbps), accurate, and intelligent web-based clinical tool for multi-center clinical trials and routine telemedicine clinical care. PMID:27318571
Yu, Catherine H; Bahniwal, Robinder; Laupacis, Andreas; Leung, Eman; Orr, Michael S; Straus, Sharon E.
Objective To identify and evaluate the effectiveness, clinical usefulness, sustainability, and usability of web-compatible diabetes-related tools. Data sources Medline, EMBASE, CINAHL, Cochrane Central Register of Controlled Trials, world wide web. Study selection Studies were included if they described an electronic audiovisual tool used as a means to educate patients, care givers, or clinicians about diabetes management and assessed a psychological, behavioral, or clinical outcome. Data ext...
Himangni Rathore; Hemant Verma
Web is a rich domain of data and knowledge, spread over the world in an unstructured manner. A growing number of users continuously access information over the internet. Web mining is an application of data mining in which web-related data is extracted and manipulated to extract knowledge. Data mining applied to the domain of web information is referred to as web mining, which is further divided into three major domains: web usage mining, web content mining and web stru...
Khushboo Khurana; M. B. Chandak
Large and continuously growing dynamic web content has created new opportunities for large-scale data analysis in recent years. There is a huge amount of information that traditional web crawlers cannot access, since they use a link analysis technique by which only the surface web can be accessed. Traditional search engine crawlers require web pages to be linked to other pages via hyperlinks, causing a large amount of web data to be hidden from the crawlers. Enormous data is available in...
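The link-analysis limitation described above can be sketched in a few lines: a crawler that only queues `<a href>` targets sees a form's entry point but never crawls past it. The page snippet and class below are invented for illustration, using only the standard library.

```python
# Sketch of the link-extraction step a traditional crawler relies on: only
# URLs found in <a href> are queued for crawling, so content reachable
# solely through a <form> submission stays hidden. Offline, stdlib-only.
from html.parser import HTMLParser

class LinkExtractor(HTMLParser):
    def __init__(self):
        super().__init__()
        self.links = []   # crawlable: targets of hyperlinks
        self.forms = 0    # seen but not followed: "hidden web" entry points

    def handle_starttag(self, tag, attrs):
        if tag == 'a':
            href = dict(attrs).get('href')
            if href:
                self.links.append(href)
        elif tag == 'form':
            self.forms += 1

page = '<a href="/about">About</a><form action="/search"></form>'
p = LinkExtractor()
p.feed(page)
# p.links contains only '/about'; the /search form is counted, never crawled
```

Everything behind the form's query interface is therefore invisible to this style of crawler, which is exactly the gap hidden-web crawlers try to close.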
Full Text Available Web usage mining is the application of web mining to discover useful patterns from the web in order to understand and analyze the behavior of web users and web-based applications. It is an emerging research trend for today's researchers. It deals entirely with web log files, which contain the users' website access information. It is interesting to analyze and understand user behavior regarding web access. Web usage mining normally has three categories: 1. Preprocessing, 2. Pattern Discovery and 3. Pattern Analysis. This paper proposes association rule mining algorithms for better Web Recommendation and Web Personalization. Web recommendation systems play an important role in understanding customers' behavior and interests, improving customer convenience, increasing service provider profits, and anticipating future needs.
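The association-rule idea behind such recommendation can be sketched as follows. The session data, the pairwise-only rules, and the confidence threshold are simplifying assumptions for illustration, not the paper's actual algorithm.

```python
# Toy association-rule mining over page-visit sessions: for page pairs,
# compute support and confidence, keeping rules above a confidence cutoff.
# Sessions and the 0.6 threshold are invented for the example.
from itertools import combinations
from collections import Counter

sessions = [
    {'home', 'news', 'sports'},
    {'home', 'news'},
    {'home', 'sports'},
    {'news', 'sports'},
]

def pair_rules(sessions, min_conf=0.6):
    n = len(sessions)
    item_count = Counter(p for s in sessions for p in s)
    pair_count = Counter()
    for s in sessions:
        for a, b in combinations(sorted(s), 2):
            pair_count[(a, b)] += 1
    rules = {}
    for (a, b), c in pair_count.items():
        for x, y in ((a, b), (b, a)):          # directed rule x -> y
            conf = c / item_count[x]
            if conf >= min_conf:
                rules[(x, y)] = (c / n, conf)  # (support, confidence)
    return rules

rules = pair_rules(sessions)
# e.g. rule 'home' -> 'news' holds with support 0.5 and confidence 2/3
```

A recommender would then suggest page y to a visitor of page x whenever the rule (x, y) passes the thresholds.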
Department of Transportation — The AccessAML is a web-based internet single application designed to reduce the vulnerability associated with several accounts assigned to a single user. This is a...
Discusses key issues in addressing the challenge of Web accessibility for people with disabilities, including tools for Web authoring, repairing, and accessibility validation, and relevant legal issues. Presents standards for Web accessibility, including the Section 508 Standards from the Federal Access Board, and the World Wide Web Consortium's…
Gabriel Fontanet Nadal; Jaime Jaume Mayol
Accessible Tourism is a kind of Tourism specially dedicated to disabled people. It refers to the removal of physical elements that hinder the mobility of disabled people at the destination. Accessible Tourism should take care of both physical and web accessibility. The Web Accessibility of a web site is defined as its capability to be accessed by people with any kind of disability. Some organizations generate rules to improve web accessibility. An analysis of Web Acces...
Manvi,; Bhatia, Komal kumar; Dixit, Ashutosh
Deep Web is content hidden behind HTML forms. Since it represents a large portion of the structured, unstructured and dynamic data on the Web, accessing Deep-Web content has been a long-standing challenge for the database community. This paper describes a crawler for accessing Deep-Web content using ontologies. Performance evaluation of the proposed work showed that this new approach has promising results.
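One hedged reading of the ontology-guided idea: match a form's field names against concept labels (and synonyms) from a domain ontology, so the crawler knows which query values to submit. The tiny "ontology", the seed values, and the matching rule below are all invented for illustration; the paper's actual mechanism may differ.

```python
# Sketch of ontology-guided form filling for a deep-web crawler.
# The vocabulary and seed values are hypothetical examples.

ontology = {
    'author': {'author', 'writer', 'creator'},
    'title':  {'title', 'name', 'heading'},
}
seed_values = {'author': 'Knuth', 'title': 'TAOCP'}

def fill_form(field_names, ontology, seed_values):
    """Map each form field to a seed value via ontology label matching."""
    filled = {}
    for field in field_names:
        for concept, labels in ontology.items():
            if field.lower() in labels:
                filled[field] = seed_values[concept]
                break
    return filled

query = fill_form(['Writer', 'heading', 'isbn'], ontology, seed_values)
# 'isbn' matches no concept, so it is left for a default or empty value
```

The filled dictionary would then be submitted as the form's query string, letting the crawler surface result pages that plain hyperlink traversal never reaches.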
Cetl, V.; T. Kliment; Kliment, M.
The effective access and use of geospatial information (GI) resources is of critical importance in a modern knowledge-based society. Standard web services defined by the Open Geospatial Consortium (OGC) are frequently used within implementations of spatial data infrastructures (SDIs) to facilitate discovery and use of geospatial data. This data is stored in databases located in a layer called the invisible web, and is thus ignored by search engines. SDI uses a catalogue (discovery) ...
Information and services on the web are accessible for everyone. Users of the web differ in their background, culture, political and social environment, interests and so on. Ambient intelligence was envisioned as a concept for systems which are able to adapt to user actions and needs. With the growing amount of information and services, web applications become natural candidates to adopt the concepts of ambient intelligence. Such applications can deal with diverse user intentions and actions based on the user profile, and can suggest the combination of information content and services which suit the user profile the most. This paper summarizes the domain engineering framework for such adaptive web applications. The framework provides guidelines to develop adaptive web applications as members of a family. It suggests how to utilize the design artifacts as knowledge which can be used...
Want to know how to make your pages look beautiful, communicate your message effectively, guide visitors through your website with ease, and get everything approved by the accessibility and usability police at the same time? Head First Web Design is your ticket to mastering all of these complex topics, and understanding what's really going on in the world of web design. Whether you're building a personal blog or a corporate website, there's a lot more to web design than div's and CSS selectors, but what do you really need to know? With this book, you'll learn the secrets of designing effecti
SRD 69 NIST Chemistry WebBook (Web, free access) The NIST Chemistry WebBook contains: Thermochemical data for over 7000 organic and small inorganic compounds; thermochemistry data for over 8000 reactions; IR spectra for over 16,000 compounds; mass spectra for over 33,000 compounds; UV/Vis spectra for over 1600 compounds; electronic and vibrational spectra for over 5000 compounds; constants of diatomic molecules(spectroscopic data) for over 600 compounds; ion energetics data for over 16,000 compounds; thermophysical property data for 74 fluids.
Web Call Example Application from Ericsson Developer Connection is an application hosted at a web server that supplies the functionality of VoIP phone calls. Users can access the service from a desktop browser, mobile phone browser or Java ME client. Users can also manage their contact books. Each user can have more than one VoIP service account, so they can choose the cheapest one when they make a phone call. The Web Call Example Application supports two kinds of VoIP phone call connection: Rela...
Full Text Available The semantic Web is a technology at the service of knowledge, aimed at accessibility and the sharing of content and facilitating interoperability between different systems; as such it is one of the nine key technological pillars of TIC (technologies for information and communication) within the third theme, programme specific cooperation, of the seventh programme framework for research and development (7°PQRS, 2007-2013). As a system it seeks to overcome the overload or excess of irrelevant information on the Internet, in order to facilitate specific or pertinent research. It is an extension of the existing Web which aims at cooperation between computers and people (the dream of Sir Tim Berners-Lee), where machines can give more support to people in integrating and elaborating data in order to obtain inferences and a global sharing of data. It is a technology able to favour the development of a "data web", in other words the creation of a space of interconnected and shared data sets (Linked Data) which allows users to link different types of data coming from different sources. It is a technology that will have a great effect on everyday life since it will permit the planning of "intelligent applications" in various sectors such as education and training, research, the business world, public information, tourism, health, and e-government. It is an innovative technology that activates a social transformation (the socio-semantic Web) on a world level, since it redefines the cognitive universe of users and enables the sharing not only of information but of meaning (collective and connected intelligence).
Hennig, Teresa; Hepworth, George; Yudovich, Dagi (Doug)
Authoritative and comprehensive coverage for building Access 2013 Solutions Access, the most popular database system in the world, just opened a new frontier in the Cloud. Access 2013 provides significant new features for building robust line-of-business solutions for web, client and integrated environments. This book was written by a team of Microsoft Access MVPs, with consulting and editing by Access experts, MVPs and members of the Microsoft Access team. It gives you the information and examples to expand your areas of expertise and immediately start to develop and upgrade projects. Exp
Griffith, J.A.; Egbert, S.L.
Remote sensing education is increasingly in demand across academic and professional disciplines. Meanwhile, Internet technology and the World Wide Web (WWW) are being more frequently employed as teaching tools in remote sensing and other disciplines. The current wealth of information on the Internet and World Wide Web must be distilled, nonetheless, to be useful in remote sensing education. An extensive literature base is developing on the WWW as a tool in education and in teaching remote sensing. This literature reveals benefits and limitations of the WWW, and can guide its implementation. Among the most beneficial aspects of the Web are increased access to remote sensing expertise regardless of geographic location, increased access to current material, and access to extensive archives of satellite imagery and aerial photography. As with other teaching innovations, using the WWW/Internet may well mean more work, not less, for teachers, at least at the stage of early adoption. Also, information posted on Web sites is not always accurate. Development stages of this technology range from on-line posting of syllabi and lecture notes to on-line laboratory exercises and animated landscape flyovers and on-line image processing. The advantages of WWW/Internet technology may likely outweigh the costs of implementing it as a teaching tool.
Colors play a particularly important role in both designing and accessing Web pages. A well-designed color scheme improves Web pages' visual aesthetic and facilitates user interactions. As far as we know, existing color assessment studies focus on images; studies on color assessment and editing for Web pages are rare. This paper investigates color assessment for Web pages based on existing online color theme-rating data sets and applies this assessment to Web color edit. This study consists o...
Zhengtao Liu; Jiandong Wang
A web information integrated management system requires a powerful and versatile data model that is able to represent a highly heterogeneous mix of data such as web pages, XML, deep web content, files, etc. It requires access to both structured and unstructured data. Such collections of data have been referred to as a dataspace. In order to build a web dataspace support platform, we describe some principles. According to these principles, we design an architecture for the web dataspace support platform. Ba...
Klatt, Edward C.
The principal goal of this thesis is to examine the responsive web design approach and the SVG (Scalable Vector Graphics) standard for vector graphics. Both topics are ultimately applied to designing a web page. The opening chapter presents the difficulty of providing users with effective and accessible web pages across all kinds of mobile devices currently available on the market. Responsive web design is presented as a viable solution for the development of web pages, which are optimi...
A web-based survey is an effective tool which is used frequently in academic and non-academic research. The increase in internet usage and easy access to web technology facilitate the growing popularity of web surveys, but the absence of exhaustive literature on web surveys presents a significant challenge. This paper presents basic guidelines for developing a robust web survey. Aspects related to data quality, coverage bias, questionnaire design, non-response bias, response bias, pro...
Full Text Available As the amount of information and web development increase considerably, techniques and methods are required to allow efficient access to data and the extraction of information from them. Extracting useful patterns from worldwide networks, referred to as web mining, is considered one of the main applications of data mining. The key challenge for web users is exploring websites to find relevant information in minimum time and in an efficient manner. Discovering the knowledge hidden in the manner of interaction on the web is considered one of the most important techniques in web usage mining. Information overload is one of the main problems of the current web, and to tackle this problem web personalization systems are presented that adapt the content and services of a website to a user's interests and browsing behavior. Today, website personalization has become popular among web users, and it plays a leading role in the speed of access and in providing users' desired information. The objective of the current article is to extract an index based on users' behavior and to personalize the web using web mining techniques based on usage and association rules. In the proposed method, weighting criteria expressing the extent of users' interest in pages are defined, and a method is presented based on the combination of association rules and clustering by a perceptron neural network for web personalization. Simulation results for the proposed method suggest improved precision and coverage with respect to the other compared methods.
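A page-interest weighting of the kind described might look like the following sketch. The equal-weight combination of visit frequency and dwell time is an assumption made for illustration, not the paper's exact criterion.

```python
# Illustrative page-interest weight combining visit frequency and dwell
# time, normalized across a user's pages so the weights sum to 1.
from collections import defaultdict

def interest_weights(visits):
    """visits: list of (page, seconds). Returns page -> weight in [0, 1]."""
    freq, dwell = defaultdict(int), defaultdict(float)
    for page, secs in visits:
        freq[page] += 1
        dwell[page] += secs
    total_f, total_d = sum(freq.values()), sum(dwell.values())
    return {p: 0.5 * freq[p] / total_f + 0.5 * dwell[p] / total_d
            for p in freq}

w = interest_weights([('a', 30), ('b', 10), ('a', 20)])
# page 'a': 2 of 3 visits and 50 of 60 seconds -> weight 0.75
```

These weights could then feed the association-rule and clustering stages as a measure of how strongly each page represents the user's interests.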
ZHANG Guo-yin; GU Guo-chang; LI Jian-li
The backdoor or information leaks of Web servers can be detected by using Web Mining techniques on abnormal Web log and Web application log data. The security of Web servers can thereby be enhanced and the damage of illegal access avoided. First, a system for discovering the patterns of information leakage in CGI scripts from Web log data is proposed. Second, those patterns are provided to system administrators so they can modify their code and enhance their Web site security. The following aspects are described: one is to combine the web application log with the web log to extract more information, so that web data mining can be used to mine web logs for discovering information that a firewall and Intrusion Detection System cannot find. Another is to propose an operation module for a web site to enhance its security. For clustering server sessions, a density-based clustering technique is used to reduce resource cost and obtain better efficiency.
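The density-based clustering step can be pictured with a minimal DBSCAN-style routine over session feature vectors. The eps and min_pts values and the toy points are illustrative, not taken from the paper.

```python
# Minimal density-based clustering (DBSCAN-style) over session feature
# vectors. Points within eps of at least min_pts neighbors (self included)
# seed clusters; isolated points are labeled -1 as noise.

def dbscan(points, eps=1.5, min_pts=2):
    def neighbors(i):
        return [j for j, q in enumerate(points)
                if sum((a - b) ** 2 for a, b in zip(points[i], q)) <= eps ** 2]
    labels = [None] * len(points)
    cluster = 0
    for i in range(len(points)):
        if labels[i] is not None:
            continue
        nbrs = neighbors(i)
        if len(nbrs) < min_pts:
            labels[i] = -1            # noise (may be relabeled as border)
            continue
        labels[i] = cluster
        queue = [j for j in nbrs if j != i]
        while queue:
            j = queue.pop()
            if labels[j] == -1:
                labels[j] = cluster   # former noise becomes a border point
            if labels[j] is not None:
                continue
            labels[j] = cluster
            jn = neighbors(j)
            if len(jn) >= min_pts:    # core point: keep expanding
                queue.extend(jn)
        cluster += 1
    return labels

labels = dbscan([(0, 0), (1, 0), (0, 1), (10, 10)])
# three nearby sessions form one cluster; the outlier is noise (-1)
```

In the log-mining setting, each point would be a numeric summary of one server session (request count, error rate, and so on), with outlying sessions flagged for inspection.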
K. F. Bharati
Full Text Available The traditional search engines available over the internet are dynamic in searching relevant content over the web. A search engine has some constraints, such as obtaining the data asked for from varied sources, where data relevancy is exceptional. Web crawlers are designed only to move along a specific path of the web and are restricted from moving towards other paths, as these are secured or at times restricted due to the apprehension of threats. It is possible to design a web crawler that has the capability of penetrating through paths of the web not reachable by traditional web crawlers, in order to get a better solution in terms of data, time and relevancy for a given search query. The paper makes use of a newer parser and indexer for coming out with a novel idea of a web crawler and a framework to support it. The proposed web crawler is designed to attend Hyper Text Transfer Protocol Secure (HTTPS) based websites and web pages that need authentication to be viewed and indexed. The user has to fill in a search form, and his/her credentials will be used by the web crawler to access the secure web server for authentication. Once indexed, the secure web server will be inside the web crawler's accessible zone.
Manvi; Bhatia, Komal Kumar; Dixit, Ashutosh
Web is a wide term which mainly comprises the surface web and the hidden web. One can easily access the surface web using traditional web crawlers, but they are not able to crawl the hidden portion of the web. These traditional crawlers retrieve contents from web pages which are linked by hyperlinks, ignoring the information hidden behind form pages, which cannot be extracted using the simple hyperlink structure. Thus, they ignore a large amount of data hidden behind search forms. This paper emphasizes o...
Full Text Available Web data extraction is concerned, among other things, with routine data accessing and downloading from continuously-updated dynamic Web pages. There is a relevant trade-off between the rate at which the external Web sites are accessed and the computational burden on the accessing client. We address the problem by proposing a predictive model, typical of the Operating Systems literature, of the rate-of-update of each Web source. The presented model has been implemented into a new version of the Dynamo project: a middleware that assists in generating informative RSS feeds out of traditional HTML Web sites. To be effective, i.e., make RSS feeds be timely and informative and to be scalable, Dynamo needs a careful tuning and customization of its polling policies, which are described in detail.
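The predictive rate-of-update model can be approximated with a simple exponential moving average over observed change gaps, in the spirit of the trade-off described. The alpha and poll-fraction values below are assumptions for the sketch, not Dynamo's actual policy.

```python
# Sketch of an adaptive polling policy: estimate a source's update
# interval with an exponential moving average and poll at a fraction of
# it, balancing feed freshness against load on the external site.

class PollScheduler:
    def __init__(self, alpha=0.3, fraction=0.5, initial=3600.0):
        self.alpha, self.fraction = alpha, fraction
        self.est_interval = initial          # estimated seconds between updates

    def observe_update_gap(self, gap_seconds):
        """Feed the measured time between two detected content changes."""
        self.est_interval = (self.alpha * gap_seconds
                             + (1 - self.alpha) * self.est_interval)

    def next_poll_delay(self):
        return self.fraction * self.est_interval

s = PollScheduler(initial=1000.0)
s.observe_update_gap(500.0)   # the source updated faster than expected
# estimate moves to 0.3*500 + 0.7*1000 = 850 s; next poll after 425 s
```

Per-source schedulers like this let the middleware poll fast-changing pages frequently while leaving stable pages mostly alone.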
Itoh, Yuji; Urushihata, Toshiya; Sakuma, Toru; Ikemune, Sachiko; Tojo, Masanori; Miyake, Teruhisa; Takahashi, Hiroshi; Ohkoshi, Norio; Ishizuka, Kazushige; Ono, Tsukasa
This report describes a Web application intended for visually impaired users. Today hundreds of millions of people benefit from the Internet (or the World Wide Web), which is the greatest source of information in the world. The World Wide Web Consortium (W3C) has set the guidelines for Web content accessibility, which allow visually impaired people to access and use Web contents. However, many Web sites do not yet follow these guidelines. Thus, we propose a Web application system that collect...
With the constant spread of internet access, the world of software is steadily transforming products into services delivered via web browsers. Modern next-generation web applications change the way browsers and users interact with servers. Many world-scale services have already been delivered by top companies as Single Page Applications. Moving services online demands great attention to data protection and web application security. Single Page Applications are exposed to server-s...
Haritsa, Jayant R.
Search engines are currently the standard medium for locating and accessing information on the Web. However, they may not scale to match the anticipated explosion of Web content, since they support only extremely coarse-grained queries and are based on centralized architectures. In this paper, we discuss how database technology can be successfully utilized to address the above problems. We also present the main features of a prototype Web database system called DIASPORA that we have developed ...
One of the application areas of data mining is the World Wide Web (WWW or Web), which serves as a huge, widely distributed, global information service for every kind of information such as news, advertisements, consumer information, financial management, education, government, e-commerce, health services, and many other information services. The Web also contains a rich and dynamic collection of hyperlink information, Web page access and usage information, providing sources for data mining. The amount of information on the Web is growing rapidly, as well as the number of Web sites and Web page
Focuses on the Deep Web, defined as Web content in searchable databases of the type that can be found only by direct query. Discusses the problems of indexing; inability to find information not indexed in the search engine's database; and metasearch engines. Describes 10 sites created to access online databases or directly search them. Lists ways…
M. Tayfun Gülle
Full Text Available Departing from the idea that internet, which has become a deep information tunnel, is causing a problem in access to “accurate information”, it is expressed that societies are imprisoned within the world of “virtual reality” with web 2.0/web 3.0 technologies and social media applications. In order to diagnose this problem correctly, the media used from past to present for accessing information are explained shortly as “social tools.” Furthermore, it is emphasised and summarised with an editorial viewpoint that the means of reaching accurate information can be increased via the freedom of expression channel which will be brought forth by “good librarianship” applications. IFLA Principles of Freedom of Expression and Good Librarianship is referred to at the end of the editorial.
Deshpande, Yogesh; Murugesan, San; Ginige, Athula; Hansen, Steve; Schwabe, Daniel; Gaedke, Martin; White, Bebo
Web Engineering is the application of systematic, disciplined and quantifiable approaches to development, operation, and maintenance of Web-based applications. It is both a pro-active approach and a growing collection of theoretical and empirical research in Web application development. This paper gives an overview of Web Engineering by addressing the questions: a) why is it needed? b) what is its domain of operation? c) how does it help and what should it do to improve Web application develo...
Rodriguez, Jose Manuel; Carro, Angel; Valencia, Alfonso; Tress, Michael L
This paper introduces the APPRIS WebServer (http://appris.bioinfo.cnio.es) and WebServices (http://apprisws.bioinfo.cnio.es). Both are based around the APPRIS Database, which presently houses annotations of splice isoforms for five vertebrate genomes. The APPRIS WebServer and WebServices provide access to the computational methods implemented in the APPRIS Database, while the WebServices also allow retrieval of the annotations. The APPRIS WebServer and WebServices annotate splice isoforms with protein structural and functional features, and with data from cross-species alignments. In addition, they can use the annotations of structure, function and conservation to select a single reference isoform for each protein-coding gene (the principal protein isoform). APPRIS principal isoforms have been shown to agree overwhelmingly with the main protein isoform detected in proteomics experiments. The APPRIS WebServer allows the annotation of splice isoforms for individual genes, and provides a range of visual representations and tools to help researchers identify the likely effect of splicing events. The APPRIS WebServices permit users to generate annotations automatically in high-throughput mode and to interrogate the annotations in the APPRIS Database. The APPRIS WebServices have been implemented using a REST architecture to be flexible, modular and automatic. PMID:25990727
Gandhimathi K; Vijaya MS
The World Wide Web (WWW) is a system of interlinked hypertext documents accessed via the Internet. Web structure mining is based on the graph structure of hyperlinks and extracts useful information from the structure of web data. It aims to generate structural summaries of web sites and web pages; identifying web communities is one of its goals. This paper presents an alternate web community identification approach to detect the similar...
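The structure-mining idea above can be illustrated with a minimal sketch: treat hyperlinks as undirected edges and take connected components of the link graph as candidate communities. This is a deliberate simplification, not the paper's method; the page names are invented.

```python
from collections import defaultdict

def communities(edges):
    """Group pages into communities: connected components of the
    undirected hyperlink graph (a simplification of structure mining)."""
    adj = defaultdict(set)
    for a, b in edges:
        adj[a].add(b)
        adj[b].add(a)
    seen, groups = set(), []
    for start in adj:
        if start in seen:
            continue
        stack, comp = [start], set()
        while stack:
            node = stack.pop()
            if node in comp:
                continue
            comp.add(node)
            stack.extend(adj[node] - comp)
        seen |= comp
        groups.append(comp)
    return groups

links = [("a", "b"), ("b", "c"), ("x", "y")]
print(sorted(sorted(g) for g in communities(links)))
# → [['a', 'b', 'c'], ['x', 'y']]
```

Real community-detection methods weight links and allow overlapping groups; connected components are only the coarsest first cut.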
Sherif Kamel Shaheen
The research aims at evaluating Arabic libraries' Web-based catalogues in the light of the principles and recommendations published in IFLA's Guidelines for OPAC Displays (September 30, 2003, Draft for Worldwide Review). The 38 recommendations were categorized under three main titles: User Needs (12 recommendations), Content and Arrangement Principle (25 recommendations), and Standardization Principle (1 recommendation). That number increased to 88 elements when the recommendations were formulated as evaluative criteria and included in the study's checklist.
Fok, Chien Liang; Sun, Fei; Mangum, Matt; Mok, Al; He, Binghan; Sentis, Luis
The Cloud-based Advanced Robotics Laboratory (CARL) integrates a whole body controller and web-based teleoperation to enable any device with a web browser to access and control a humanoid robot. By integrating humanoid robots with the cloud, they are accessible from any Internet-connected device. Increased accessibility is important because few people have access to state-of-the-art humanoid robots limiting their rate of development. CARL's implementation is based on modern software libraries...
Apolinario-Hagen Jennifer Anette
Background: In view of the debate about regional and socio-structural gaps in psychotherapeutic care, interest is currently growing in e-mental-health interventions such as internet-based psychotherapy, online self-help, and new approaches to self-empowerment. Health professionals could benefit with regard to informed decision-making if they are familiar with the latest developments. However, if this "digital revolution" cannot reach patients who are insufficiently familiar with Web 2.0, access to psychotherapy will hardly improve. This review therefore aims to clarify whether, and to what extent, internet therapies can be recommended as an effective alternative to conventional psychotherapy in primary care.
Pro Access 2010 Development is a fundamental resource for developing business applications that take advantage of the features of Access 2010 and the many sources of data available to your business. In this book, you'll learn how to build database applications, create Web-based databases, develop macros and Visual Basic for Applications (VBA) tools for Access applications, integrate Access with SharePoint and other business systems, and much more. Using a practical, hands-on approach, this book will take you through all the facets of developing Access-based solutions, such as data modeling, co
Ulrich Fuller, Laurie
The easy guide to Microsoft Access returns with updates on the latest version! Microsoft Access allows you to store, organize, view, analyze, and share data; the new Access 2013 release enables you to build even more powerful, custom database solutions that integrate with the web and enterprise data sources. Access 2013 For Dummies covers all the new features of the latest version of Access and serves as an ideal reference, combining the latest Access features with the basics of building usable databases. You'll learn how to create an app from the Welcome screen, get support
Traditional call centers can be accessed via speech only, while a Web-based call center provides both data and speech access but requires a powerful computer terminal. By analyzing traditional call centers and Web-based call centers, this paper presents the framework of an advanced call center supporting WAP access. A typical service is also described in detail.
In this thesis, we investigate the path towards a focused web harvesting approach which can automatically and efficiently query websites, navigate through results, download data, store it and track data changes over time. Such an approach can also help users access a complete collection of
Archuleta, Christy-Ann M.; Eames, Deanna R.
The Rio Grande Civil Works and Restoration Projects Web Application, developed by the U.S. Geological Survey in cooperation with the U.S. Army Corps of Engineers (USACE) Albuquerque District, is designed to provide publicly available information through the Internet about civil works and restoration projects in the Rio Grande Basin. Since 1942, USACE Albuquerque District responsibilities have included building facilities for the U.S. Army and U.S. Air Force, providing flood protection, supplying water for power and public recreation, participating in fire remediation, protecting and restoring wetlands and other natural resources, and supporting other government agencies with engineering, contracting, and project management services. In the process of conducting this vast array of engineering work, the need arose for easily tracking the locations of and providing information about projects to stakeholders and the public. This fact sheet introduces a Web application developed to enable users to visualize locations and search for information about USACE (and some other Federal, State, and local) projects in the Rio Grande Basin in southern Colorado, New Mexico, and Texas.
A database is a collection of many types of occurrences of logical records, containing relationships between records and elementary data aggregates. A database management system (DBMS) is a set of programs for creating and operating a database. In theory, any relational DBMS can be used to store the data needed by a Web server. In practice, simple DBMSs such as FoxPro or Access are not suitable for heavily used Web sites; large-scale Web applications need high-performance DBMSs able to run multiple applications simultaneously. Hypertext Markup Language (HTML) is used to create hypertext documents for Web pages. The purpose of HTML is the presentation of information (paragraphs, fonts, tables) rather than the semantic description of the document.
Martins, Wellington Santos; Soares Lucas, Divino César; de Souza Neves, Kelligton Fabricio; Bertioli, David John
Simple sequence repeats (SSR), also known as microsatellites, have been extensively used as molecular markers due to their abundance and high degree of polymorphism. We have developed a simple to use web software, called WebSat, for microsatellite molecular marker prediction and development. WebSat is accessible through the Internet, requiring no program installation. Although a web solution, it makes use of Ajax techniques, providing a rich, responsive user interface. WebSat allows the submi...
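WebSat's core task, locating microsatellites, can be sketched with a regular expression that finds tandem repeats of short motifs. This is not WebSat's actual algorithm, only a minimal illustration; the thresholds (motif length 2-6, at least 3 tandem copies) are common defaults, not necessarily WebSat's.

```python
import re

def find_ssrs(seq, min_unit=2, max_unit=6, min_repeats=3):
    """Locate simple sequence repeats (microsatellites): a short motif
    (min_unit..max_unit bases) repeated at least min_repeats times in a row.
    Returns (start, motif, copies) tuples."""
    hits = []
    for unit in range(min_unit, max_unit + 1):
        # Group 1 captures a candidate motif; \1{n,} demands further tandem copies.
        pattern = re.compile(r"((?:[ACGT]{%d}))\1{%d,}" % (unit, min_repeats - 1))
        for m in pattern.finditer(seq):
            motif = m.group(1)
            hits.append((m.start(), motif, len(m.group(0)) // unit))
    return hits

print(find_ssrs("TTACACACACACGG"))
# → [(2, 'AC', 5)]
```

A production tool would also merge overlapping hits for different motif lengths and handle ambiguity codes; this sketch skips both.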
景帅; 王颖纯; 刘燕权
A survey of research on website accessibility for people with disabilities shows more theoretical articles than empirical studies, and a lack of empirical data on the websites of top university libraries. This study therefore evaluates the library websites of the eight US Ivy League universities for accessibility by users with disabilities, to determine whether they comply with the accessibility standards established by the Americans with Disabilities Act (ADA) of 1990. Using the WAVE web accessibility evaluator and an email survey of sixteen selected websites at the eight universities, the author found that each site has good usability and operability, and that all the library websites offer service descriptions for people with disabilities and links to disability services; in particular, they support screen readers and other assistive technologies to enhance access for visually impaired users. However, every website exhibited one or more of the six categories of guideline violations identified by WAVE. The most common problems were missing document language (44%), redundant links (69%), suspicious links (50%), and skipped navigation headings (44%).
A striking feature of web-building spiders is the use of silk to make webs, mainly for prey capture. However, building a web is energetically expensive and increases the risk of predation. To reduce such costs and still have access to abundant prey, some web-building spiders have evolved web invasion behaviour. In general, no consistent patterns of web invasion have emerged and the factors determining web invasion remain largely unexplored. Here we report web invasion among conspecifics in seven nocturnal species of orb-web spiders, and examined the factors determining the probability that a web would be invaded and taken over by conspecifics. About 36% of webs were invaded by conspecifics, and 25% of invaded webs were taken over by the invaders. A web that was built higher and intercepted more prey was more likely to be invaded. Once a web was invaded, the smaller the resident spider, the more likely its web would be taken over by the invader. This study suggests that web invasion, as a possible way of reducing costs, may be widespread in nocturnal orb-web spiders.
Huurdeman, H.C.; Ben David, A.; Samar, T.
Web archives provide access to snapshots of the Web of the past and could be valuable for research purposes. However, access to these archives is often limited, both in terms of data availability and interfaces to this data. This paper explores new methods to overcome these limitations. It presents
Hawkins, I.; Battle, R.; Miller-Bagwell, A.
We describe a partnership approach in use at UC Berkeley's Center for EUV Astrophysics (CEA) that facilitates the adaptation of astrophysics data and information---in particular from NASA's EUVE satellite---for use in the K--12 classroom. Our model is founded on a broad collaboration of personnel from research institutions, centers of informal science teaching, schools of education, and K--12 schools. Several CEA-led projects follow this model of collaboration and have yielded multimedia, Internet-based, lesson plans for grades 6 through 12 that are created and distributed on the World Wide Web (http://www.cea.berkeley.edu/Education). Use of technology in the classroom can foster an environment that more closely reflects the processes scientists use in doing research (Linn, diSessa, Pea, & Songer 1994, J.Sci.Ed.Tech., ``Can Research on Science Learning and Instruction Inform Standards for Science Education?"). For instance, scientists rely on technological tools to model, analyze, and ultimately store data. Linn et al. suggest introducing technological tools to students from the earliest years to facilitate scientific modeling, scientific collaborations, and electronic communications in the classroom. Our investigation aims to construct and evaluate a methodology for effective participation of scientists in K--12 education, thus facilitating fruitful interactions with teachers and other educators and increasing effective use of technology in the classroom. We describe several team-based strategies emerging from these project collaborations. These strategies are particular to the use of the Internet and World Wide Web as relatively new media for authoring K--12 curriculum materials. This research has been funded by NASA contract NAS5-29298, NASA grant ED-90033.01-94A to SSL/UCB, and NASA grants NAG5-2875 and NAGW-4174 to CEA/UCB.
Gabriel Fontanet Nadal
Accessible tourism is tourism specially dedicated to disabled people. It refers to the removal of the physical elements that hinder the mobility of disabled people at the destination. Accessible tourism should take care of both physical and Web accessibility. The Web accessibility of a website is defined as the capability of that website to be accessed by people with any kind of disability. Some organizations issue guidelines to improve Web accessibility. This document presents an analysis of the Web accessibility of tourist websites.
V.Chitraa; Dr. Antony Selvdoss Davamani
The World Wide Web is a huge repository of web pages and links, providing an abundance of information for Internet users. The Web's growth is tremendous: approximately one million pages are added daily. Users' accesses are recorded in web logs, and because of the Web's tremendous usage, the log files are growing at a fast rate and becoming huge. Web data mining is the application of data mining techniques to web data. Web Usage Mining applies mining techniques to log data to ...
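The raw material of Web Usage Mining described above can be illustrated with a short sketch that parses Common Log Format entries and counts successful page requests. The sample log lines are invented.

```python
import re
from collections import Counter

# Common Log Format: host ident user [time] "method path proto" status bytes
LOG_RE = re.compile(
    r'(?P<host>\S+) \S+ \S+ \[(?P<time>[^\]]+)\] '
    r'"(?P<method>\S+) (?P<path>\S+) \S+" (?P<status>\d{3}) \S+'
)

def page_hits(lines):
    """Count successful (2xx) GET requests per path from log lines."""
    hits = Counter()
    for line in lines:
        m = LOG_RE.match(line)
        if m and m["method"] == "GET" and m["status"].startswith("2"):
            hits[m["path"]] += 1
    return hits

sample = [
    '10.0.0.1 - - [01/Jan/2024:10:00:00 +0000] "GET /index.html HTTP/1.0" 200 512',
    '10.0.0.2 - - [01/Jan/2024:10:00:05 +0000] "GET /index.html HTTP/1.0" 200 512',
    '10.0.0.1 - - [01/Jan/2024:10:00:09 +0000] "GET /missing HTTP/1.0" 404 100',
]
print(page_hits(sample))
# → Counter({'/index.html': 2})
```

Real log miners go on to reconstruct sessions and discover access patterns; hit counting is only the first preprocessing step.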
Webometrics is concerned with measuring aspects of the web: web sites, web pages, parts of web pages, words in web pages, hyperlinks, web search engine results. The importance of the web itself as a communication medium and for hosting an increasingly wide array of documents, from journal articles to holiday brochures, needs no introduction. Given this huge and easily accessible source of information, there are limitless possibilities for measuring or counting on a huge scale (e.g., the number of web sites, the number of web pages, the number of blogs) or on a smaller scale (e.g., the number o
Bush, Nigel E.; Bowen, Deborah J.; Jean Wooldridge; Abi Ludwig; Hendrika Meischke; Robert Robbins
Much is written about Internet access, Web access, Web site accessibility, and access to online health information. The term access has, however, a variety of meanings to authors in different contexts when applied to the Internet, the Web, and interactive health communication. We have summarized those varied uses and definitions and consolidated them into a framework that defines Internet and Web access issues for health researchers. We group issues into two categories: connectivity and human...
Calì, Andrea; Martinenghi, D.; Torlone, R.
The Deep Web is constituted by data accessible through Web pages, but not readily indexable by search engines, as they are returned in dynamic pages. In this paper we propose a framework for accessing Deep Web sources, represented as relational tables with so-called access limitations, with keyword-based queries. We formalize the notion of optimal answer and investigate methods for query processing. To our knowledge, this problem has never been studied in ...
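The notion of access limitations can be made concrete with a toy sketch: each Deep Web "form" only returns tuples once its required input attribute is bound, so answering a keyword query means chaining accesses, binding the output of one source as the input of the next. The source names and data below are invented; this is not the paper's formal framework.

```python
# Toy sources with access limitations: each lookup table stands in for a
# Web form that demands an input binding before returning tuples.
AUTHORS_BY_TOPIC = {"databases": ["Codd"], "webs": ["Berners-Lee"]}
PAPERS_BY_AUTHOR = {"Codd": ["A Relational Model of Data"]}

def answer(keyword):
    """Chain limited accesses: topic -> authors -> papers. A source can
    only be queried once its input attribute is bound by a prior result."""
    results = []
    for author in AUTHORS_BY_TOPIC.get(keyword, []):        # first access: bind topic
        for paper in PAPERS_BY_AUTHOR.get(author, []):      # second access: bind author
            results.append((author, paper))
    return results

print(answer("databases"))
# → [('Codd', 'A Relational Model of Data')]
```

The hard part the paper studies, which this sketch sidesteps, is choosing an optimal chain of accesses when many sources and binding patterns are available.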
Mining the web is defined as discovering knowledge from hypertext and the World Wide Web. The Web is one of the fastest-growing areas of intelligence gathering. Today there are billions of web pages and HTML archives accessible via the internet, and the number is still increasing. However, given the vast diversity of the web, retrieving interesting web content has become a very complex task. Because of the large amount of heterogeneous data, complex formats, high-dimensional data and the web's lack of structure, knowledge mining is challenging. This paper proposes a new framework for handling unstructured, complex data: an XML-based distributed data mining architecture in which XML is used to create well-structured data. The web knowledge mining framework attempts to extract useful knowledge from derived data, complex formats, and high-dimensional data obtained from users' interactions with the Web.
Predicting current and potential species distributions and abundance is critical for managing invasive species, preserving threatened and endangered species, and conserving native species and habitats. Accurate predictive models are needed at local, regional, and national scales to guide field surveys, improve monitoring, and set priorities for conservation and restoration. Modeling capabilities, however, are often limited by access to software and environmental data required for predictions. To address these needs, we built a comprehensive web-based system that: (1) maintains a large database of field data; (2) provides access to field data and a wealth of environmental data; (3) accesses values in rasters representing environmental characteristics; (4) runs statistical spatial models; and (5) creates maps that predict the potential species distribution. The system is available online at www.niiss.org, and provides web-based tools for stakeholders to create potential species distribution models and maps under current and future climate scenarios.
Ansari, Zahid; Ahmed, Waseem; Azeem, M. F.; Babu, A. Vinaya
The explosive growth of the World Wide Web (WWW) has necessitated the development of Web personalization systems in order to understand user preferences and dynamically serve customized content to individual users. To reveal information about user preferences from Web usage data, Web Usage Mining (WUM) techniques are extensively applied to Web log data. Clustering techniques are widely used in WUM to capture similar interests and trends among users accessing a Web site. Clustering ai...
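Grouping users with similar interests, as described, is often done with k-means over per-session feature vectors. A stdlib-only sketch, with sessions as invented (news, sports, finance) visit-count vectors; this illustrates the idea, not any particular WUM system:

```python
import math, random

def kmeans(points, k, iters=20, seed=0):
    """Tiny k-means over session vectors, as WUM systems use to group
    visitors with similar interests."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iters):
        # Assignment step: each session joins its nearest center.
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k), key=lambda c: math.dist(p, centers[c]))
            clusters[i].append(p)
        # Update step: recompute centers (keep old center if cluster empties).
        centers = [
            tuple(sum(x) / len(cl) for x in zip(*cl)) if cl else centers[i]
            for i, cl in enumerate(clusters)
        ]
    return clusters

# Sessions as (news, sports, finance) visit counts -- toy data.
sessions = [(5, 0, 0), (4, 1, 0), (0, 5, 1), (0, 4, 0)]
for cl in kmeans(sessions, 2):
    print(cl)
```

Production systems cluster thousands of sparse, high-dimensional sessions and typically normalize vectors first; the two-cluster toy data here separates cleanly regardless of initialization.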
This paper deals with the application of LiveConnect for the remote control of real devices/stations over the Web. In this context, both the concept of Lean Web Automation and a flexible Java-based application tool have been developed, ensuring fast and secure process-data transfer between device server and Web browser via the subscriber/publisher principle. Index Terms: Web-based remote control, Lean Web Automation, teletechnology, Web Access Kit.
This paper advocates a novel approach for mobile web browsing based on cooperation among wireless devices within close proximity operating in a cellular environment. In the current state of the art, mobile phones can access the web using different cellular technologies. However, the supported data rates are not sufficient to cope with the ever-increasing traffic requirements resulting from advanced and rich content services. Extending the state of the art, higher data rates can only be achieved by increasing the complexity, cost, and energy consumption of mobile phones. In contrast to this linear extension of current technology, we propose a novel architecture where mobile phones are grouped together in clusters, using a short-range communication technology such as Bluetooth, to share and accumulate their cellular capacity. The accumulated data rate resulting from collaborative interactions over short-range links can then be used for cooperative mobile web browsing. By implementing cooperative web browsing on commercial mobile phones, it is shown that better performance is achieved in terms of increased data rate and therefore reduced access times, resulting in a significantly enhanced web browsing user experience on mobile phones.
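The core trade-off (accumulated cellular capacity versus what the short-range link can redistribute) can be shown with back-of-envelope arithmetic; the rates below are illustrative assumptions, not measured values from the paper.

```python
def cluster_rate(cellular_rates_mbps, short_range_cap_mbps):
    """Accumulated download rate of a cooperating cluster: the sum of the
    members' cellular rates, capped by the short-range link that must
    redistribute the data within the cluster."""
    return min(sum(cellular_rates_mbps), short_range_cap_mbps)

# Four phones at 2 Mbit/s each, sharing over a 20 Mbit/s short-range link:
print(cluster_rate([2, 2, 2, 2], 20))
# → 8
```

So a member effectively sees the aggregate 8 Mbit/s rather than its own 2 Mbit/s; the short-range cap only starts to bind for large clusters or slow local links.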
Given that Semantic Web realization depends on a critical mass of accessible metadata and on representing data with formal knowledge, it requires metadata that is specific, easy to understand and well defined. Semantic annotation of web documents is the way to make the Semantic Web vision a reality. This paper introduces the Semantic Web and its vision (stack layers) with regard to some concept definitions that help the understanding of semantic a...
Jagli, Mrs. Dhanamma; Oswal, Sangeeta
Web usage mining is the automatic discovery of patterns in clickstreams and associated data collected or generated as a result of user interactions with one or more Web sites. This paper describes web usage mining of our college's log files to analyze the behavioral patterns and profiles of users interacting with a Web site. The discovered patterns are represented as clusters of pages frequently accessed by groups of visitors with common interests. In this paper, the visitors and hits were forecaste...
Sundaravel, A.; Wilkinson, D. C.
The Geostationary Operational Environmental Satellite-R Series (GOES-R) makes use of advanced instruments and technologies to monitor the Earth's surface and provide accurate space weather data. The first GOES-R series satellite is scheduled to be launched in 2015. The data from the satellite will be widely used by scientists for space weather modeling and predictions. This project investigates how these datasets can be made available to scientists on the Web and how to assist them in their research. We are developing a prototype web-based system that allows users to browse, search and download these data. The GOES-R datasets will be archived in NetCDF (Network Common Data Form) and CSV (Comma Separated Values) format. NetCDF is a self-describing data format that contains both the metadata information and the data, stored in an array-oriented fashion. The web-based system will offer services in two ways: via a web application (portal) and via web services. Using the web application, users can download data in NetCDF or CSV format and can also plot a graph of the data. The web page displays the various categories of data and the time intervals for which the data is available. The web application (client) sends the user query to the server, which then connects to the data sources to retrieve the data and delivers it to the users. Data access will also be provided via SOAP (Simple Object Access Protocol) and REST (Representational State Transfer) web services. These provide functions which can be used by other applications to fetch data and use the data for further processing. To build the prototype system, we are making use of proxy data from existing GOES and POES space weather datasets. Java is the programming language used in developing tools that format data to NetCDF and CSV. For the web technology we have chosen Grails to develop both the web application and the services. Grails is an open source web application
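The CSV delivery path can be illustrated with a stdlib sketch that parses a small export and picks out the peak value. The column names and values are hypothetical placeholders, not the real GOES-R product schema.

```python
import csv, io

# Hypothetical columns -- actual GOES-R product schemas differ.
SAMPLE = """time,xray_flux
2024-01-01T00:00Z,1.2e-6
2024-01-01T00:05Z,3.4e-6
2024-01-01T00:10Z,2.1e-6
"""

def peak_flux(csv_text):
    """Return (time, flux) of the maximum flux in a CSV export."""
    rows = csv.DictReader(io.StringIO(csv_text))
    best = max(rows, key=lambda r: float(r["xray_flux"]))
    return best["time"], float(best["xray_flux"])

print(peak_flux(SAMPLE))
# → ('2024-01-01T00:05Z', 3.4e-06)
```

A NetCDF client would instead read named arrays plus their metadata from one self-describing file, which is why the project offers both formats.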
The rapid development of the Internet has made the World Wide Web (WWW) a vast, distributed information space containing potentially valuable knowledge. Data mining discovers the regularities hidden in large amounts of data, addresses data-quality problems in applications, and makes full use of the useful data to help decision makers adjust strategies, reduce risk, and make correct decisions; it is a highly forward-looking technology. Applied in the Web environment, data mining collects server log information, builds a Web log mining model, and analyzes frequently accessed information sequences, providing decision support for website administrators and operators.
AGRIS is the International System for Agricultural Science and Technology. It is supported by a large community of data providers, partners and users. AGRIS is a database that aggregates bibliographic data and, through this core data, retrieves related content across online information systems by taking advantage of Semantic Web capabilities. AGRIS is a global public good, and its vision is to be a service responsive to user needs by facilitating contributions and feedback regarding the AGRIS core knowledgebase, AGRIS's future and its continuous development. Periodic AGRIS e-consultations, partner meetings and user feedback feed into the development of the AGRIS application and content coverage. This paper outlines the current AGRIS technical set-up, its network of partners, data providers and users, as well as how AGRIS's responsiveness to clients' needs inspires the continuous technical development of the application. The paper concludes with a use case of how AGRIS stakeholder input and the subsequent AGRIS e-consultation results influence the development of the AGRIS application, knowledgebase and service delivery.
Fuertes Castro, José Luis; Pérez Pérez, Aurora
Many websites have a serious accessibility problem, because their design has not taken into account the great functional diversity of their potential users. The Web Content Accessibility Guidelines, developed by the World Wide Web Consortium, consist of a series of recommendations so that a web page can be used by anyone. One of the main problems arises when checking the accessibility of a web page, given that,...
The University of Arizona Artificial Intelligence Lab (AI Lab) Dark Web project is a long-term scientific research program that aims to study and understand the international terrorism (Jihadist) phenomena via a computational, data-centric approach. We aim to collect "ALL" web content generated by international terrorist groups, including web sites, forums, chat rooms, blogs, social networking sites, videos, virtual world, etc. We have developed various multilingual data mining, text mining, and web mining techniques to perform link analysis, content analysis, web metrics (technical
Hall, Wendy; Tiropanis, Thanassis
This paper examines the evolution of the World Wide Web as a network of networks and discusses the emergence of Web Science as an interdisciplinary area that can provide us with insights on how the Web developed, and how it has affected and is affected by society. Through its different stages of evolution, the Web has gradually changed from a technological network of documents to a network where documents, data, people and organisations are interlinked in various and often unexpected ways. It...
杨镇雄; 蔡祖锐; 陈国华; 汤庸; 张龙
Open access (OA) journals are deep Web resources scattered across the Internet; traditional search engines cannot index them, so users cannot obtain OA journal resources through search engines, and these open resources go to waste. To collect the open access journal resources scattered throughout the Internet, this paper proposes a focused Web crawler with a distributed master-slave architecture: a master control center coordinates multiple crawler nodes that can be added or removed dynamically, and academic information is extracted from OA journal pages using user-predefined rules. The distributed crawling nodes use a Chrome-browser-based plug-in mechanism to achieve scalability and deployment flexibility.
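The rule-based extraction step can be sketched with Python's html.parser: user-defined rules map metadata tag names to output fields. The meta names, field names and sample page are invented for illustration; this is not the paper's implementation.

```python
from html.parser import HTMLParser

class MetaExtractor(HTMLParser):
    """Pull article metadata from <meta> tags according to simple
    user-defined rules (meta name -> output field), as a crawler's
    rule-based extraction step might."""
    def __init__(self, rules):
        super().__init__()
        self.rules = rules          # e.g. {"citation_title": "title"}
        self.record = {}
    def handle_starttag(self, tag, attrs):
        if tag != "meta":
            return
        a = dict(attrs)
        field = self.rules.get(a.get("name", ""))
        if field:
            self.record.setdefault(field, a.get("content", ""))

page = """<html><head>
<meta name="citation_title" content="Open Access and the Deep Web">
<meta name="citation_author" content="Li, Wei">
</head><body>...</body></html>"""

ex = MetaExtractor({"citation_title": "title", "citation_author": "author"})
ex.feed(page)
print(ex.record)
# → {'title': 'Open Access and the Deep Web', 'author': 'Li, Wei'}
```

Keeping the rules as data rather than code is what lets crawler operators adapt extraction to new journal sites without redeploying the nodes.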
Smokefree.gov is committed to providing access to all individuals—disabled or not—who are seeking information on its Web sites. To provide this information, the smokefree.gov Web site has been designed to comply with Section 508 of the Rehabilitation Act (as amended). Section 508 requires that all individuals with disabilities (whether they are federal government employees or members of the general public) have access to and use of information and data comparable to that provided to individuals without disabilities, unless an undue burden would be imposed.
With the wide adoption of container technology in cloud computing, container engines such as Docker have become prominent in PaaS, and many cloud-computing startups build their services on Docker. However, container-based cloud platforms generally do not assign fixed IP addresses to containers, so users cannot directly access and control the containers in the platform, which makes operations such as adding custom services inconvenient. ContainerSSh is a solution designed specifically for this problem.
V. Lakshmi Praba; T. Vasantha
The World Wide Web provides every internet citizen with access to an abundance of information, but it becomes increasingly difficult to identify the relevant pieces of information. Research in web mining tries to address this problem by applying techniques from data mining and machine learning to web data and documents. Web content mining and web structure mining have important roles in identifying the relevant web page. The relevancy of a web page denotes how well a retrieved web page or set of we...
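A minimal stand-in for the content-mining side of relevancy: rank pages by term overlap with the query, normalized by document length. The pages and query are toy data; real systems combine this with structure-mining signals such as link analysis.

```python
import math
from collections import Counter

def relevance(query, pages):
    """Rank pages by a cosine-like overlap between query terms and
    page terms -- a toy content-based relevancy score."""
    q = Counter(query.lower().split())
    scores = {}
    for name, text in pages.items():
        d = Counter(text.lower().split())
        dot = sum(q[t] * d[t] for t in q)                    # shared-term weight
        norm = math.sqrt(sum(v * v for v in d.values())) or 1.0
        scores[name] = dot / norm                            # length-normalized
    return sorted(scores, key=scores.get, reverse=True)

pages = {
    "a": "web mining applies data mining to web data",
    "b": "holiday photos and travel notes",
}
print(relevance("web data mining", pages))
# → ['a', 'b']
```

Even this crude score captures the paper's point that content features alone can order pages by topical fit, which structure mining then refines.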
Los catálogos en línea de acceso público del Mercosur disponibles en entorno web: características del Proyecto UBACYT F054 Online public access catalogs of Mercosur in a web environment: characteristics of UBACYT F054 Project
Elsa E. Barber
The theoretical-methodological aspects of the research project UBACYT F054 (Universidad de Buenos Aires Scientific and Technical Program, 2004-2007) are outlined. Online public access catalogs (OPACs) in a web environment at the national, academic, public and special libraries of the Mercosur countries are analyzed. Aspects related to operational control, search formulation, access points, output control and user assistance are studied. The project aims, both quantitatively and qualitatively, to make a situation diagnosis valid for the catalogs of the region. It also offers a comparative study in order to identify the existing tendencies on the subject in similar libraries in Argentina, Brazil, Paraguay and Uruguay.
André, N.; Cecconi, B.; Renard, B.; Budnik, E.; Genot, V.; Jacquey, C.; Hitier, R.; Bourrel, N.; Gangloff, M.; Pallier, E.; Bouchemit, M.; Besson, B.; Topf, F.; Baumjohann, W.; Khodachenko, M.; Rucker, H.; Zhang, T.
The field of planetary sciences has greatly expanded in recent years with space missions orbiting most of the planets of our Solar System. The growing amount and wealth of data available make it difficult for scientists to exploit data coming from many sources that can initially be heterogeneous in their organization, description and format. It is an important objective of the Europlanet-RI and IMPEx projects (supported by the EU within FP7) to add value to space missions by significantly contributing to the effective scientific exploitation of collected data, enabling space researchers to take full advantage of the potential value of data sets. To this end, and to enhance the science return from space missions, innovative tools have to be developed and offered to the community. AMDA (Automated Multi-Dataset Analysis, http://cdpp-amda.cesr.fr/) is a web-based facility developed at CDPP Toulouse in France (http://cdpp.cesr.fr) for online analysis of space physics data (heliosphere, magnetospheres, planetary environments) coming from either its local database or distant ones. AMDA has recently been integrated as a service to the scientific community for the Plasma Physics thematic node of the Europlanet-RI IDIS (Integrated and Distributed Information Service, http://www.europlanet-idis.fi/) activities, in close cooperation with IWF Graz (http://europlanetplasmanode.oeaw.ac.at/index.php?id=9). We will report the status of our current technical and scientific efforts to integrate into the local database of AMDA various planetary plasma datasets (at Mercury, Venus, Mars, the Earth and Moon, Jupiter, and Saturn) from heterogeneous sources, including the NASA Planetary Data System (http://ppi.pds.nasa.gov/). We will also present our prototype Virtual Observatory activities to connect the AMDA tool to the IVOA Aladin astrophysical tool to enable pluridisciplinary studies of giant planet auroral emissions.
YİRMİBEŞOĞLU, Eda; ÖZTÜRK, Ayşen Sevgi; Erkal, Haldun Şükrü; EGEHAN, İbrahim
OBJECTIVES: The contents of web pages from radiation oncology centers in Turkey were evaluated. Accessibility of the web pages through search engines was also assessed. METHODS: A search was made for the presence of web sites for radiation oncology centers of 44 hospitals and for the accessibility of these sites through actively forwarding links using the “Google” search engine. RESULTS: All centers had web sites. Twenty-nine centers had actively forwarding links. Web pages from 23 centers incl...
Earth science information is important to decision-makers who formulate public policy related to mineral resource sustainability, land stewardship, environmental hazards, the economy, and public health. To meet the growing demand for easily accessible data, the Mineral Resources Program has developed, in cooperation with other Federal and State agencies, an Internet-based, data-delivery system that allows interested customers worldwide to download accurate, up-to-date mineral resource-related data at any time. All data in the system are spatially located, and customers with Internet access and a modern Web browser can easily produce maps having user-defined overlays for any region of interest.
The rapid growth of the web and the lack of structure or an integrated schema create various issues for users trying to access information. Every access to web information is saved in the corresponding server log files, and these files serve as a resource for finding patterns in user behavior. Web mining is a subset of data mining and means the mining of related data from the WWW; based on the part of the data that is mined, it is categorized into three parts: web content mining, web structure mining, and web usage mining. A technique is needed that is capable of learning users' interests and, based on those interests, can automatically filter out unrelated content or offer related information to the user in a reasonable amount of time. Web usage mining builds a profile of users to recognize them and is directly related to web personalization. The primary objective of personalization systems is to provide what users require without asking them explicitly. Formal models, in turn, make it possible to model a system's behavior; Petri nets and queueing nets, as examples of such models, can analyze user behavior on the web. The primary objective of this paper is to present a colored Petri net that models users' interactions in order to offer them a list of recommended pages on the web. Estimating user behavior is applied in cases such as offering suitable pages to continue browsing, e-commerce, and targeted advertising. Preliminary results indicate that the proposed method improves the accuracy criterion by 8.3% over the static method.
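The paper itself models user interactions with a colored Petri net; as a far simpler illustration of how server-log sessions can drive page recommendation, one can count page-to-page transitions and suggest the most frequent successors (a hypothetical baseline, not the proposed method):

```python
from collections import defaultdict

def build_transitions(sessions):
    """Count page-to-page transitions across user sessions
    (each session is an ordered list of visited page paths)."""
    counts = defaultdict(lambda: defaultdict(int))
    for session in sessions:
        for cur, nxt in zip(session, session[1:]):
            counts[cur][nxt] += 1
    return counts

def recommend(counts, page, k=3):
    """Recommend the k pages most frequently visited right after `page`."""
    successors = counts.get(page, {})
    return [p for p, _ in sorted(successors.items(), key=lambda kv: -kv[1])[:k]]
```

A Petri-net model additionally captures concurrency and state, which this frequency count cannot.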
Yoshida, Catherine E; Kruczkiewicz, Peter; Laing, Chad R; Lingohr, Erika J; Gannon, Victor P J; Nash, John H E; Taboada, Eduardo N
-based methods of sub-typing allows for continuity with historical serotyping data as we transition towards the increasing adoption of genomic analyses in epidemiology. The SISTR platform is freely available on the web at https://lfz.corefacility.ca/sistr-app/. PMID:26800248
WebFTS is a web-delivered file transfer and management solution which allows users to invoke reliable, managed data transfers on distributed infrastructures. The fully open source solution offers a simple graphical interface through which the power of the FTS3 service can be accessed without the installation of any special grid tools. Created following simplicity and efficiency criteria, WebFTS allows the user to access and interact with multiple grid and cloud storage systems. The "transfer engine" used is FTS3, the service responsible for distributing the majority of LHC data across the WLCG infrastructure. This provides WebFTS with reliable, multi-protocol, adaptively optimised data transfers. The talk will focus on the recent development which allows transfers from/to Dropbox and CERNBox (the CERN ownCloud deployment).
A Hybrid Evaluation System Framework (Shell & Web) with Standardized Access to Climate Model Data and Verification Tools for a Clear Climate Science Infrastructure on Big Data High Performance Computers
Kadow, Christopher; Illing, Sebastian; Kunst, Oliver; Ulbrich, Uwe; Cubasch, Ulrich
The project 'Integrated Data and Evaluation System for Decadal Scale Prediction' (INTEGRATION), part of the German decadal prediction project MiKlip, develops a central evaluation system. The fully operational hybrid features HPC shell access and a user-friendly web interface. It employs one common system with a variety of verification tools and validation data from different projects inside and outside of MiKlip. The evaluation system is located at the German Climate Computing Centre (DKRZ) and has direct access to the bulk of its ESGF node, including millions of climate model data sets, e.g. from CMIP5 and CORDEX. The database is organized by the international CMOR standard using the meta information of the self-describing model, reanalysis, and observational data sets. Apache Solr is used for indexing the different data projects into one common search environment. This metadata system, with its advanced but easy-to-handle search tool, supports users, developers, and their tools in retrieving the required information. A generic application programming interface (API) allows scientific developers to connect their analysis tools with the evaluation system independently of the programming language used. Users of the evaluation techniques benefit from the common interface of the evaluation system without any need to understand the different scripting languages. Facilitating the provision and usage of tools and climate data automatically increases the number of scientists working with the data sets and identifying discrepancies. Additionally, the history and configuration sub-system stores every analysis performed with the evaluation system in a MySQL database. Configurations and results of the tools can be shared among scientists via the shell or web system, so plugged-in tools automatically gain transparency and reproducibility. Furthermore, when configurations match while starting an evaluation tool, the system suggests using results already produced.
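The language-independent plug-in API described above can be illustrated with a toy registry that dispatches named tools and records every run; the names and the bias metric below are illustrative assumptions, not the MiKlip implementation:

```python
class EvaluationSystem:
    """Toy registry: plug-in analysis tools share one calling convention,
    and every run is recorded, as in a history sub-system."""
    def __init__(self):
        self._tools = {}
        self.history = []

    def register(self, name, func):
        """Plug a tool in under a name; any callable taking keyword config."""
        self._tools[name] = func

    def run(self, name, **config):
        """Run a registered tool and log (name, config, result)."""
        result = self._tools[name](**config)
        self.history.append((name, config, result))
        return result

# example plug-in: a trivial mean-bias metric between model and reference series
def bias(model, reference):
    return sum(m - r for m, r in zip(model, reference)) / len(model)
```

In the real system the same idea is exposed over an API so tools written in any scripting language can participate.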
A large amount of data on the WWW remains inaccessible to crawlers of Web search engines because it can only be exposed on demand as users fill out and submit forms. The Hidden Web refers to the collection of Web data which can be accessed by a crawler only through interaction with a Web-based search form and not simply by traversing hyperlinks. Research on the Hidden Web emerged almost a decade ago, with the main line being exploring ways to access the content in online databases that are usually hidden behind search forms. Efforts in the area mainly focus on designing hidden Web crawlers that learn forms and fill them with meaningful values. The paper gives an insight into the various Hidden Web crawlers developed for this purpose, noting the advantages and shortcomings of the techniques employed in each.
O'Hara, Kieron; Hall, Wendy
The Semantic Web is a vision of a web of linked data, allowing querying, integration and sharing of data from distributed sources in heterogeneous formats, using ontologies to provide an associated and explicit semantic interpretation. The article describes the series of layered formalisms and standards that underlie this vision, and chronicles their historical and ongoing development. A number of applications, scientific and otherwise, academic and commercial, are reviewed. The Semantic Web ...
Thuraisingham, Bhavani; Clifton, Chris; Gupta, Amar; Bertino, Elisa; Ferrari, Elena
This paper provides directions for web and e-commerce applications security. In particular, access control policies, workflow security, XML security and federated database security issues pertaining to the web and ecommerce applications are discussed.
When designing for e-learning, the objective is to design for learning: the technology supporting the learning activity should aid and support the learning process and be an arena where learning is likely to occur. To achieve this when designing e-learning for the workplace, the author argues that it is important to have knowledge of how users actually access and use e-learning systems. In order to gain this knowledge, web logs from a web lecture developed for a Scandinavian public body were analyzed. During a period of two and a half months, 15 learners visited the web lecture 74 times. The web lecture consisted of streaming video with exercises and additional links to resources on the WWW to provide an opportunity to investigate the topic from multiple perspectives, and took approximately one hour to finish. Using web usage mining for the analysis, seven groups or interaction patterns emerged: peaking, one go, partial order, partial unordered, single module, mixed modules, and non-video modules. Furthermore, the web logs paint a picture of the learning activities being interrupted. This suggests that modules need to be fine-grained (e.g. less than 8 minutes per video clip) so learners do not need to waste time watching parts of a video clip while waiting for the part of interest to appear, or having to fast forward. A clear and logical structure is also important to help learners find their way back accurately and fast.
Jain, Ratnesh Kumar; Kasana, Dr. R. S.; Jain, Dr. Suresh
The World Wide Web is a huge data repository and is growing at the explosive rate of about 1 million pages a day. As the information available on the World Wide Web grows, the usage of web sites also grows. A web log records each access of a web page, and the number of entries in the web logs is increasing rapidly. These web logs, when mined properly, can provide useful information for decision-making. The designers of web sites, analysts, and management executives are interested in extracti...
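Mining such logs starts with parsing each access record; a minimal parser for the common NCSA log line format (illustrative, not tied to any particular site's logs):

```python
import re

# NCSA Common Log Format: host ident user [time] "request" status size
LOG_RE = re.compile(
    r'(?P<host>\S+) \S+ \S+ \[(?P<time>[^\]]+)\] '
    r'"(?P<method>\S+) (?P<path>\S+) \S+" (?P<status>\d{3}) (?P<size>\d+|-)'
)

def parse_line(line):
    """Parse one Common Log Format entry into a dict, or None if malformed."""
    m = LOG_RE.match(line)
    if not m:
        return None
    rec = m.groupdict()
    rec["status"] = int(rec["status"])
    return rec
```

Parsed records can then be grouped into sessions per host before any pattern mining.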
Hall, Wendy; O'Hara, Kieron
The Semantic Web is a proposed extension to the World Wide Web (WWW) that aims to provide a common framework for sharing and reusing data across applications. The most common interfaces to the World Wide Web present it as a Web of Documents, linked in various ways including hyperlinks. But from the data point of view, each document is a black box – the data are not given independently of their representation in the document. This reduces its power, and also (as most information needs to be ex...
In this thesis, we investigate the path towards a focused web harvesting approach which can automatically and efficiently query websites, navigate through results, download data, store it, and track data changes over time. Such an approach can also help users access a complete collection of data relevant to their topics of interest and monitor it over time. To realize such a harvester, we focus on the following obstacles. First, we try to find methods that can achieve the best coverag...
Modraj Bhavsar; Mrs. P. M. Chavan
On the World Wide Web, various kinds of content are generated in huge amounts, so web recommendation has become an important part of web applications for delivering relevant results to users. Different kinds of web recommendations are made available to users every day, including images, video, audio, query suggestions, and web pages. In this paper we aim to provide a framework for web page recommendation. 1) First we describe the basics of web mining and the types of web mining. 2) Details of each...
L.Saoudi; A.Boukerram; S.Mhamedi
Traditional search engines deal with the Surface Web, the set of Web pages directly accessible through hyperlinks, and ignore a large part of the Web called the Hidden Web: a great amount of valuable information in online databases which is “hidden” behind query forms. To access this information, a crawler has to fill the forms with valid data; for this reason we propose a new approach which uses an SQLI technique in order to find the most promising keywords of a specific dom...
Problem statement: In the internet era, web sites are a useful source of information for almost every activity, so the World Wide Web is growing rapidly in its volume of traffic and in the size and complexity of web sites. Web mining is the application of data mining, artificial intelligence, chart technology and so on to web data; it traces users' visiting behaviors and extracts their interests using patterns. Because of its direct application in e-commerce, web analytics, e-learning, and information retrieval, web mining has become one of the important areas in computer and information science. Several techniques such as web usage mining exist, but each has its own disadvantages. This study focuses on providing techniques for better data cleaning and transaction identification from the web log. Approach: Log data is usually noisy and ambiguous, and preprocessing is an important step for an efficient mining process. In preprocessing, the data cleaning process includes removal of records of graphics, videos and format information, records with failed HTTP status codes, and robot requests. Sessions are reconstructed and paths are completed by appending missing pages during preprocessing. The transactions which depict the behavior of users are also constructed accurately in preprocessing, by calculating the reference lengths of user accesses taking the byte rate into account. Results: When the number of records is considered, for example, for 1000 records, only 350 records remain after data cleaning. When execution time is considered, the initial log takes 119 seconds for execution, whereas only 52 seconds are required by the proposed technique. Conclusion: The experimental results show the performance of the proposed algorithm; comparatively, it gives good results for web usage mining compared to existing approaches.
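The cleaning step described in the approach (dropping graphics records, failed status codes, and robots) can be sketched as a filter over parsed log records; the media suffixes and robot tokens below are illustrative assumptions, not the study's exact lists:

```python
# assumed set of non-page resource suffixes to discard
MEDIA_SUFFIXES = (".gif", ".jpg", ".jpeg", ".png", ".css", ".js")

def clean(records, robot_tokens=("bot", "crawler", "spider")):
    """Keep only successful, non-media, non-robot page requests."""
    kept = []
    for r in records:
        if r["status"] != 200:
            continue                                    # failed HTTP status codes
        if r["path"].lower().endswith(MEDIA_SUFFIXES):
            continue                                    # graphics/format records
        agent = r.get("agent", "").lower()
        if any(tok in agent for tok in robot_tokens):
            continue                                    # robot cleaning
        kept.append(r)
    return kept
```

Session reconstruction and reference-length transaction building would run on the records this filter keeps.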
Perhaps the most significant tool of our internet age is the web search engine, providing a powerful interface for accessing the vast amount of information available on the world wide web and beyond. While still in its infancy compared to the knowledge tools that precede it - such as the dictionary or encyclopedia - the impact of web search engines on society and culture has already received considerable attention from a variety of academic disciplines and perspectives. This article aims to organize a meta-discipline of “web search studies,” centered around a nucleus of major research on web search engines from five key perspectives: technical foundations and evaluations; transaction log analyses; user studies; political, ethical, and cultural critiques; and legal and policy analyses.
Rodríguez-Palchevich, Diana Rosa
This paper briefly describes Web accessibility and usability and suggests best practices and good examples for web sites for kids and school libraries. The article concludes with a reflection on the impact of applying Web accessibility and usability to such web sites.
Background This study aims to rank policy concerns and policy-related research issues in order to identify policy and research gaps on access to medicines (ATM) in low- and middle-income countries in Latin America and the Caribbean (LAC), as perceived by policy makers, researchers, NGO and international organization representatives, as part of a global prioritization exercise. Methods Data collection, conducted between January and May 2011, involved face-to-face interviews in El Salvador, Colombia, Dominican Republic, and Suriname, and an e-mail survey with key-stakeholders. Respondents were asked to choose the five most relevant criteria for research prioritization and to score policy/research items according to the degree to which they represented current policies, desired policies, current research topics, and/or desired research topics. Mean scores and summary rankings were obtained. Linear regressions were performed to contrast rankings concerning current and desired policies (policy gaps), and current and desired research (research gaps). Results Relevance, feasibility, and research utilization were the top ranked criteria for prioritizing research. Technical capacity, research and development for new drugs, and responsiveness, were the main policy gaps. Quality assurance, staff technical capacity, price regulation, out-of-pocket payments, and cost containment policies, were the main research gaps. There was high level of coherence between current and desired policies: coefficients of determination (R2) varied from 0.46 (Health system structure; r = 0.68, P <0.01) to 0.86 (Sustainable financing; r = 0.93, P <0.01). There was also high coherence between current and desired research on Rational selection and use of medicines (r = 0.71, P <0.05, R2 = 0.51), Pricing/affordability (r = 0.82, P <0.01, R2 = 0.67), and Sustainable financing (r = 0.76, P <0.01, R2 = 0.58). Coherence was less for Health system structure (r = 0.61, P <0.01, R2 = 0.38). Conclusions This
Falquet, Laurent; Bordoli, Lorenza; Ioannidis, Vassilios; Pagni, Marco; Jongeneel, C. Victor
EMBnet is a consortium of collaborating bioinformatics groups located mainly within Europe (http://www.embnet.org). Each member country is represented by a ‘node’, a group responsible for the maintenance of local services for their users (e.g. education, training, software, database distribution, technical support, helpdesk). Among these services a web portal with links and access to locally developed and maintained software is essential and different for each node. Our web portal targets bio...
Solomon, David J.
Web-based surveying is becoming widely used in social science and educational research. The Web offers significant advantages over more traditional survey techniques; however, there are still serious methodological challenges with this approach. Currently, coverage bias, the fact that significant numbers of people do not have access to the Internet or choose not to use it, is of most concern to researchers. Survey researchers also have much to learn concerning the most effective ways to conduct s...
We describe a relatively new effort within CMS to converge on a set of web-based tools, using state-of-the-art industry techniques, to engage with the CMS offline computing system. CMS collaborators require tools to monitor various components of the computing system and interact with the system itself. The current state of the various CMS web tools is described alongside currently planned developments. The CMS collaboration comprises nearly 3000 people from all over the world. As well as its collaborators, its computing resources are spread all over the globe and are accessed via the LHC grid to run analysis, large scale production and data transfer tasks. Due to the distributed nature of the collaboration, effective provision of collaborative tools is essential to maximise physics exploitation of the CMS experiment, especially when the size of the CMS data set is considered. CMS has chosen to provide such tools over the world wide web as a top level service, enabling all members of the collaboration to interact with the various offline computing components. Traditionally, web interfaces have been added in HEP experiments as an afterthought. In the CMS offline project we have decided to put web interfaces, and the development of a common CMS web framework, on an equal footing with the rest of the offline development. Tools exist within CMS to transfer and catalogue data (PhEDEx and DBS/DLS), run Monte Carlo production (ProdAgent) and submit analysis (CRAB). Effective human interfaces to these systems are required for users with different agendas and practical knowledge of the systems to use the CMS computing system effectively. The CMS web tools project aims to provide a consistent interface to all these tools.
Cetl, V.; Kliment, T.; Kliment, M.
Effective access to and use of geospatial information (GI) resources acquires critical importance in a modern knowledge-based society. Standard web services defined by the Open Geospatial Consortium (OGC) are frequently used within implementations of spatial data infrastructures (SDIs) to facilitate discovery and use of geospatial data. This data is stored in databases located in a layer called the invisible web and is thus ignored by search engines. An SDI uses a catalogue (discovery) service for the web as a gateway to the GI world through metadata defined by ISO standards, which are structurally different from OGC metadata. Therefore, a crosswalk needs to be implemented to bridge the OGC resources discovered on the mainstream web with those documented by metadata in an SDI, in order to enrich its information extent. A public, global, and user-friendly portal of OGC resources available on the web ensures and enhances the use of GI within a multidisciplinary context, bridges the geospatial web from the end-user perspective, and thus opens its borders to everybody. The project "Crosswalking the layers of geospatial information resources to enable a borderless geospatial web", with the acronym BOLEGWEB, is ongoing as a postdoctoral research project at the Faculty of Geodesy, University of Zagreb in Croatia (http://bolegweb.geof.unizg.hr/). The research leading to the results of the project has received funding from the European Union Seventh Framework Programme (FP7 2007-2013) under Marie Curie FP7-PEOPLE-2011-COFUND. The project started in November 2014 and is planned to finish by the end of 2016. This paper provides an overview of the project, its research questions and methodology, the results achieved so far, and future steps.
Ramesh, C; Govardhan, A
With the rapid growth of internet technologies, Web has become a huge repository of information and keeps growing exponentially under no editorial control. However the human capability to read, access and understand Web content remains constant. This motivated researchers to provide Web personalized online services such as Web recommendations to alleviate the information overload problem and provide tailored Web experiences to the Web users. Recent studies show that Web usage mining has emerged as a popular approach in providing Web personalization. However conventional Web usage based recommender systems are limited in their ability to use the domain knowledge of the Web application. The focus is only on Web usage data. As a consequence the quality of the discovered patterns is low. In this paper, we propose a novel framework integrating semantic information in the Web usage mining process. Sequential Pattern Mining technique is applied over the semantic space to discover the frequent sequential patterns. Th...
In today's high-tech environment, every organization and individual computer user uses the internet for accessing web data. To maintain high confidentiality and security of the data, secure web solutions are required. In this paper we describe dedicated anonymous web browsing solutions which make browsing faster and more secure. Web applications, which play an important role in transferring our secret information such as email, need more and more attention to security. This paper also describes how to choose safe web hosting solutions and which main functions provide more security for server data. Along with browser security, network security is also important; it can be implemented using cryptography solutions, VPNs, and by implementing firewalls on the network. Hackers always try to steal our identity and data; they track our activities using network application software and perform harmful activities. So in this paper we also describe how to monitor them for security purposes.
With the constant spread of internet access, the world of software is steadily transforming products into services delivered via web browsers. Modern next-generation web applications change the way browsers and users interact with servers. Many world-scale services have already been delivered by top companies as Single Page Applications. Moving services online demands close attention to data protection and web application security. Single Page Applications are exposed to server-side web application security issues in a new way. Also, having application logic executed by an untrusted client environment requires close attention to client application security. Single Page Applications are vulnerable to the same security threats as server-side web applications, which does not make them less secure. Defending techniques can be readily adapted to guard against hacker attacks.
Thorlund Jepsen, Erik; Seiden, Piet; Ingwersen, Peter Emil Rerup
Because of the increasing presence of scientific publications on the Web, combined with the existing difficulties in easily verifying and retrieving these publications, research on techniques and methods for retrieval of scientific Web publications is called for. In this article, we report on the initial steps taken toward the construction of a test collection of scientific Web publications within the subject domain of plant biology. The steps reported are those of data gathering and data analysis aiming at identifying characteristics of scientific Web publications. The data used in this article were generated based on specifically selected domain topics that are searched for in three publicly accessible search engines (Google, AllTheWeb, and AltaVista). A sample of the retrieved hits was analyzed with regard to how various publication attributes correlated with the scientific quality...
Nowadays people spend most of their days using mobiles and tablets to access web sites. It is really important to ensure that all customers are able to access a web site no matter where they are and which devices they use. That is why interest in creating responsive design is increasing. Responsive Web Design is an approach in which the web site responds to the environment from which it is accessed. The thesis shows how to create a responsive UI for a web store based on...
Discusses efforts by the Federal Depository Library Program to make information accessible more or mostly by electronic means. Topics include Web-based locator tools; collection development; digital archives; bibliographic metadata; and access tools and user interfaces. (Author/LRW)
Roman, J. H. (Jorge H.)
As programmers we have worked with many Application Programming Interface (API) development kits. They are well suited for interaction with a particular system. A vast source of information can be made accessible by using the HTTP protocol through the web as an API. This setup has many advantages, including the vast knowledge available on setting up web servers and services. Also, these tools are available on most hardware and operating system combinations. In this paper I will cover the various types of systems that can be developed this way, their advantages, and some drawbacks of this approach. Index Terms--Application Programming Interface, Distributed applications, Hyper Text Transfer Protocol, Web.
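Treating HTTP as the API amounts to building request URLs and decoding structured responses; a minimal stdlib sketch, where the endpoint is hypothetical:

```python
import json
from urllib.parse import urlencode
from urllib.request import urlopen, Request

def build_url(base, params):
    """Encode query parameters onto a base URL (sorted for determinism)."""
    return base + "?" + urlencode(sorted(params.items()))

def call_api(base, params):
    """GET the endpoint and decode its JSON body (requires network access)."""
    req = Request(build_url(base, params), headers={"Accept": "application/json"})
    with urlopen(req) as resp:
        return json.loads(resp.read().decode("utf-8"))
```

Because the transport is plain HTTP, any client on any platform can consume the same service.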
Snell, James L; Kulchenko, Pavel
The web services architecture provides a new way to think about and implement application-to-application integration and interoperability that makes the development platform irrelevant. Two applications, regardless of operating system, programming language, or any other technical implementation detail, communicate using XML messages over open Internet protocols such as HTTP or SMTP. The Simple Object Access Protocol (SOAP) is a specification that details how to encode that information and has become the messaging protocol of choice for Web services. Programming Web Services with SOAP is a detail
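A SOAP request is just an XML envelope, which is why the platform is irrelevant; a sketch that serializes a SOAP 1.1 envelope (the method name and namespace in the test are hypothetical):

```python
import xml.etree.ElementTree as ET

SOAP_NS = "http://schemas.xmlsoap.org/soap/envelope/"

def soap_envelope(method, ns, **args):
    """Serialize a SOAP 1.1 request envelope calling `method` in namespace `ns`,
    with each keyword argument as a child parameter element."""
    env = ET.Element(f"{{{SOAP_NS}}}Envelope")
    body = ET.SubElement(env, f"{{{SOAP_NS}}}Body")
    call = ET.SubElement(body, f"{{{ns}}}{method}")
    for name, value in args.items():
        ET.SubElement(call, name).text = str(value)
    return ET.tostring(env, encoding="unicode")
```

The resulting string is what gets POSTed over HTTP (or mailed over SMTP) to the service.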
Hrivnac, J; The ATLAS collaboration
High Energy Physics experiments have started using Web Service style applications to access the functionality of their main frameworks. Those frameworks, however, are not ready to be executed in a standard Web Service environment, as they are too complex and monolithic and use non-standard, non-portable technologies. The ATLAS Tag Browser is one such Web Service. To provide the possibility of extracting full ATLAS events from the standard Web Service, we need access to the full ATLAS offline framework, Athena. As Athena cannot run directly within any Web Service, a client-server approach has been chosen: the Web Service calls Athena remotely over an XML-RPC connection using the Athenaeum framework.
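The client-server split over XML-RPC can be sketched with Python's standard library; the service and method names below are illustrative, not the actual Athenaeum interface:

```python
import threading
from xmlrpc.client import ServerProxy
from xmlrpc.server import SimpleXMLRPCServer

def start_server(port):
    """Server side: expose a toy 'framework' function over XML-RPC."""
    server = SimpleXMLRPCServer(("localhost", port), logRequests=False)
    server.register_function(
        lambda run, event: f"event {event} of run {run}",  # stand-in for the framework call
        "extract_event",
    )
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server

def fetch_event(port, run, event):
    """Client side: call the remote framework as if it were local."""
    proxy = ServerProxy(f"http://localhost:{port}")
    return proxy.extract_event(run, event)
```

The heavyweight framework stays in its own process; the Web Service only speaks this lightweight RPC protocol to it.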
LIU Wei; LI Xian; LING Yanyan; ZHANG Xiaoyu; MENG Xiaofeng
With the rapid development of the Web, there are more and more Web databases available for users to access. At the same time, job seekers often have difficulties in first finding the right sources and then querying over them, so providing an integrated job search system over Web databases has become a Web application in high demand. Based on this consideration, we have built a deep Web data integration system that supports unified access for users to multiple job Web sites as a job meta-search engine. In this paper, the architecture of the system is given first, and then the key components of the system are introduced.
In this thesis we developed a prototype robot which can be controlled by the user via a web interface and is accessible through a web browser. The web interface updates sensor data and streams video captured with the web-cam mounted on the robot in real time. A Raspberry Pi computer runs the back-end code of the thesis. The general-purpose input-output header on the Raspberry Pi communicates with the motor driver and sensors. A wireless dongle and web-cam connected through USB ensure wireless communication and vid...
Web testing is the name given to software testing that focuses on web applications. It covers issues such as the security of the web application, the basic functionality of the site, its accessibility to both disabled and fully able users, as well as its readiness for the expected traffic and number of users and its ability to survive a massive spike in user traffic, the latter two being the concern of load testing. In this paper, web testing tools, challenges, and methods are discussed, which will help in handling some challenges during website development. This paper presents the best methods for testing a web application.
The tremendous growth of Web resources has made information organization and retrieval more and more difficult. As one approach to this problem, metadata schemas have been developed to characterize Web resources. However, many questions have been raised about the use of metadata schemas, such as: Which metadata schemas have been used on the Web? How do they describe Web-accessible information? What is the distribution of these metadata schemas among Web pages? Do certain schemas dominate the o...
Goodrich, John W.
Two families of finite difference algorithms for computational aeroacoustics are presented and compared. All of the algorithms are single-step explicit methods; they have the same order of accuracy in both space and time, with examples up to eleventh order, and they have multidimensional extensions. One of the algorithm families has spectral-like high resolution. Propagation with high-order, high-resolution algorithms can produce accurate results after O(10^6) periods of propagation with eight grid points per wavelength.
On the World Wide Web, various kinds of content are generated in huge amounts, so web recommendation has become an important part of web applications for giving relevant results to users. Different kinds of web recommendations are made available to users every day, including images, video, audio, query suggestions, and web pages. In this paper we aim at providing a framework for web page recommendation. (1) First we describe the basics of web mining and the types of web mining. (2) We detail each web mining technique. (3) We propose the architecture for personalized web page recommendation.
Semantic Web Services for Web Databases introduces an end-to-end framework for querying Web databases using novel Web service querying techniques. This includes a detailed framework for the query infrastructure for Web databases and services. Case studies are covered in the last section of this book. Semantic Web Services For Web Databases is designed for practitioners and researchers focused on service-oriented computing and Web databases.
Rocco, D; Liu, L; Critchlow, T
Dynamic Web data sources--sometimes known collectively as the Deep Web--increase the utility of the Web by providing intuitive access to data repositories anywhere that Web access is available. Deep Web services provide access to real-time information, like entertainment event listings, or present a Web interface to large databases or other data repositories. Recent studies suggest that the size and growth rate of the dynamic Web greatly exceed that of the static Web, yet dynamic content is often ignored by existing search engine indexers owing to the technical challenges that arise when attempting to search the Deep Web. To address these challenges, we present DynaBot, a service-centric crawler for discovering and clustering Deep Web sources offering dynamic content. DynaBot has three unique characteristics. First, DynaBot utilizes a service class model of the Web implemented through the construction of service class descriptions (SCDs). Second, DynaBot employs a modular, self-tuning system architecture for focused crawling of the Deep Web using service class descriptions. Third, DynaBot incorporates methods and algorithms for efficient probing of the Deep Web and for discovering and clustering Deep Web sources and services through SCD-based service matching analysis. Our experimental results demonstrate the effectiveness of the service class discovery, probing, and matching algorithms and suggest techniques for efficiently managing service discovery in the face of the immense scale of the Deep Web.
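The SCD-based matching idea can be sketched very simply: describe a service class by the input fields its query forms must expose, then test whether a discovered form covers them. The field names and the "flight-search" class below are hypothetical; DynaBot's actual SCD format is not reproduced here.

```python
# Hypothetical, simplified service class description (SCD): a discovered
# Web form matches the class when it exposes all required input fields.
FLIGHT_SEARCH_SCD = {
    "name": "flight-search",
    "required_fields": {"origin", "destination", "date"},
}

def matches_service_class(form_fields, scd):
    """True if the form's field names cover the SCD's required fields."""
    return scd["required_fields"] <= set(form_fields)

discovered_form = ["origin", "destination", "date", "passengers"]
print(matches_service_class(discovered_form, FLIGHT_SEARCH_SCD))  # True
```

A focused crawler can use such a predicate to decide whether a dynamic page belongs to the service class it is hunting for, before spending effort probing it.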
Delin, Kevin A. (Inventor); Jackson, Shannon P. (Inventor)
A Sensor Web formed of a number of different sensor pods. Each of the sensor pods includes a clock which is synchronized with a master clock, so that all of the sensor pods in the Web have a synchronized clock. The synchronization is carried out by first using a coarse synchronization, which takes less power, and subsequently carrying out a fine synchronization of all the pods on the Web. After the synchronization, the pods ping their neighbors to determine which pods are listening and have responded, and then only listen during time slots corresponding to those pods which respond.
Accessing web resources (information) is an essential facility provided by web applications to everybody. The Semantic Web is one of the systems that provide a facility to access resources through web service applications. The Semantic Web and Web Services are newly emerging web-based technologies. An automatic information processing system can be developed using the Semantic Web and web services, each having its own contribution within the context of developing web-based information systems and applications. This combination, called Semantic Web Services (SWS), provides several potential opportunities and challenges in e-business. Web services provide a variety of dynamic services for accessing web resources, but until now, they have been managed separately from conventional web content resources. A new system is proposed here for semantic web information retrieval, which incorporates the Semantic Web, web services, and J2EE technologies to dynamically locate web resources that include homogeneous or heterogeneous web contents and web services. In this multi-tier architecture system, the middle-tier components contain the semantic web services.
Traditional call centers can be accessed via speech only, while a web-based call center provides both data and speech access, but it needs a powerful terminal computer. By analyzing traditional call centers and web-based call centers, this paper presents the framework of an advanced call center supporting WAP access. A typical service is also described in detail.
Brescia, Massimo; Cavuoti, Stefano; Esposito, Francesco; Fiore, Michelangelo; Garofalo, Mauro; Guglielmo, Marisa; Longo, Giuseppe; Manna, Francesco; Nocella, Alfonso; Vellucci, Civita
Astronomy is undergoing a methodological revolution triggered by an unprecedented wealth of complex and accurate data. DAMEWARE (DAta Mining & Exploration Web Application and REsource) is a general-purpose, Web-based, Virtual Observatory compliant, distributed data mining framework specialized in the exploration of massive data sets with machine learning methods. We present DAMEWARE, which allows the scientific community to perform data...
Cohen, Andrew; Vitányi, Paul
Normalized web distance (NWD) is a similarity or normalized semantic distance based on the World Wide Web or any other large electronic database, for instance Wikipedia, and a search engine that returns reliable aggregate page counts. For sets of search terms the NWD gives a similarity on a scale from 0 (identical) to 1 (completely different). The NWD approximates the similarity according to all (upper semi)computable properties. We develop the theory and give applications. The derivation of ...
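The NWD described above has a closed form in terms of aggregate page counts. The sketch below uses the standard NWD/NGD formula; the counts fed to it are invented for illustration, not measurements from a real search engine.

```python
from math import log

def nwd(f_x, f_y, f_xy, n):
    """Normalized web distance from aggregate page counts (standard
    NWD/NGD formula).
    f_x, f_y: pages containing each term; f_xy: pages containing both;
    n: total number of indexed pages."""
    lx, ly, lxy = log(f_x), log(f_y), log(f_xy)
    return (max(lx, ly) - lxy) / (log(n) - min(lx, ly))

# Two terms that co-occur on half of their pages score near 0 (similar);
# identical co-occurrence counts give exactly 0.
d = nwd(1000, 1000, 500, 10**9)
print(round(d, 3))  # 0.05
```

As the text says, the scale runs from 0 (identical usage) toward 1 (unrelated): shrinking `f_xy` relative to `f_x` and `f_y` drives the distance up.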
From "blogs" to "wikis", the Web is now more than a mere repository of information. Martin Griffiths investigates how this new interactivity is affecting the way physicists communicate and access information. (5 pages)
U.S. Department of Health & Human Services — A search-based Web service that provides access to disease, condition and wellness information via MedlinePlus health topic data in XML format. The service accepts...
This thesis builds a foundation for the philosophy of the Web by examining the crucial question: What does a Uniform Resource Identifier (URI) mean? Does it have a sense, and can it refer to things? A philosophical and historical introduction to the Web explains the primary purpose of the Web as a universal information space for naming and accessing information via URIs. A terminology, based on distinctions in philosophy, is employed to define precisely what is meant by informati...
Filipe Silva; Gabriel David
Currently, information systems are usually supported by databases (DB) and accessed through a Web interface. Pages in such Web sites are not drawn from HTML files but are generated on the fly upon request. Indexing and searching such dynamic pages raises several extra difficulties not solved by most search engines, which were designed for static contents. In this paper we describe the development of a search engine that overcomes most of the problems for a specific Web site, how the limitatio...
Szomszor, Martin; Cattuto, Ciro; Alani, Harith; O'Hara, Kieron; Baldassarri, Andrea; Loreto, Vittorio; Vito D. P. Servedio
While the Semantic Web has evolved to support the meaningful exchange of heterogeneous data through shared and controlled conceptualisations, Web 2.0 has demonstrated that large-scale community tagging sites can enrich the semantic web with readily accessible and valuable knowledge. In this paper, we investigate the integration of a movies folksonomy with a semantic knowledge base about user-movie rentals. The folksonomy is used to enrich the knowledge base with descriptions and categorisatio...
Computer Science This thesis examines methods for accessing information stored in a relational database from a Web Page. The stateless and connectionless nature of the Web's Hypertext Transport Protocol as well as the open nature of the Internet Protocol pose problems in the areas of database concurrency, security, speed, and performance. We examined the Common Gateway Interface, Server API, Oracle's Web/database architecture, and the Java Database Connectivity interface in terms of p...
Jain, Ratnesh Kumar; Jain, Dr Suresh
The World Wide Web is a huge data repository and is growing at the explosive rate of about 1 million pages a day. As the information available on the World Wide Web grows, the usage of web sites also grows. Web logs record each access of a web page, and the number of entries in the web logs is increasing rapidly. These web logs, when mined properly, can provide useful information for decision making. Web site designers, analysts, and management executives are interested in extracting this hidden information from web logs for decision making. The web access pattern, which is a frequently used sequence of accesses, is one of the important kinds of information that can be mined from web logs. This information can be used to gather business intelligence to improve sales and advertisement, to personalize content for a user, to analyze system performance, and to improve web site organization. There exist many techniques to mine access patterns from web logs. This paper describes a powerful algorithm that mine...
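The kind of pattern the abstract describes can be illustrated by grouping log entries per user and counting consecutive page transitions. This is a hedged sketch of the general idea only, not the paper's own algorithm; the log entries are invented.

```python
from collections import Counter

# Invented log records: (user, requested page), in arrival order.
log = [
    ("user1", "/home"), ("user1", "/products"), ("user1", "/cart"),
    ("user2", "/home"), ("user2", "/products"),
    ("user3", "/home"), ("user3", "/products"), ("user3", "/cart"),
]

# Rebuild each user's click sequence from the interleaved log.
sessions = {}
for user, page in log:
    sessions.setdefault(user, []).append(page)

# Count consecutive page pairs across all sessions: the most frequent
# pair is a (length-2) web access pattern.
pairs = Counter()
for seq in sessions.values():
    pairs.update(zip(seq, seq[1:]))

print(pairs.most_common(1))  # [(('/home', '/products'), 3)]
```

Longer frequent sequences are mined the same way in principle, with algorithms that prune the exponential space of candidate sequences.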
Li, R.; Shen, Y.; Huang, W.; Wu, H.
Meulenhoff, P.J.; Ostendorf, D.R.; Živković, M.; Meeuwissen, H.B.; Gijsen, B.M.M.
In this paper, we analyze overload control for composite web services in service-oriented architectures with an orchestrating broker, and propose two practical access control rules which effectively mitigate the effects of severe overloads at some web services in the composite service. These two rules
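An admission-control rule of this general shape can be sketched as follows. The paper's two actual rules are not given in full above, so this is only an illustrative stand-in: the broker sheds new requests once the number of outstanding requests reaches a capacity threshold.

```python
# Illustrative broker-side admission control (not the paper's exact rules):
# reject new composite-service requests once outstanding work hits capacity,
# so the bottleneck service is never driven into severe overload.
class Broker:
    def __init__(self, capacity):
        self.capacity = capacity
        self.outstanding = 0

    def admit(self):
        if self.outstanding >= self.capacity:
            return False          # shed load instead of queueing into overload
        self.outstanding += 1
        return True

    def complete(self):
        self.outstanding -= 1     # a request finished; free one slot

broker = Broker(capacity=2)
decisions = [broker.admit(), broker.admit(), broker.admit()]
print(decisions)  # [True, True, False]
```

Rejecting early at the broker keeps response times bounded for the requests that are admitted, which is the point of overload control.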
Ginsburg, Jane C.
Cautions professors and universities on Web-related copyright infringement. Explores the copyright implications of scenarios such as the posting of class readings on the Web (including issues of access and purpose, institutional liability, and permission), the posting of course notes, and ownership of lectures. (EV)
Herman, I.; Gylling, M.
Although using advanced Web technologies at their core, e-books represent a parallel universe to everyday Web documents. Their production workflows, user interfaces, security, access, and privacy models, etc., are all distinct. There is a lack of a vision on how to unify Digital Publishing and t
Karreman, Joyce; Geest, van der Thea; Buursink, Esmee
Background: The W3C Web Accessibility Initiative has issued guidelines for making websites better and easier to access for people with various disabilities (W3C Web Accessibility Initiative guidelines 1999). Method: The usability of two versions of a website (a non-adapted site and a site that was adapted on the basis of easy-to-read guidelines)…
Baranov, P. A.; BEYBUTOV E.R.
This paper provides an overview of the core technologies implemented by comparatively new products on the information security market: web application firewalls. Web applications are a widely used and convenient way of giving remote users access to corporate information resources. They can, however, become a single point of failure, rendering all the information infrastructure inaccessible to legitimate clients. To prevent malicious access attempts to endpoint information resources and, in...
Web Archives of ATLAS Plenary Sessions, Workshops, Meetings, and Tutorials recorded over the past two years are available via the University of Michigan portal here. The most recent additions include the ROOT Workshop held at CERN on March 26-27, the Physics Analysis Tools Workshop held in Bergen, Norway on April 23-27, and the CTEQ Workshop "Physics at the LHC: Early Challenges" held at Michigan State University on May 14-15. Viewing requires a standard web browser with the RealPlayer plug-in (included in most browsers automatically) and works on any major platform. Lectures can be viewed directly over the web or downloaded locally. In addition, you will find access to a variety of general tutorials and events via the portal. Suggestions for events or tutorials to record in 2007, as well as feedback on existing archives, are always welcome. Please contact us at firstname.lastname@example.org. Thank you and enjoy the lectures! The Michigan Web Lecture Team Tushar Bhatnagar, Steven Goldfarb, Jeremy Herr, Mitch McLachlan, Homer A....
上超望; 刘清堂; 赵呈领; 童名文
Business process access control is a difficult problem in composite web services security applications. Considering the deficiencies of current research, an Activity Authorization Based Dynamic Access Control Model for BPEL4WS (AACBP) is proposed. By decoupling the organization model from the business process model, AACBP uses activity authorization as the basic unit for implementing BPEL4WS (Business Process Execution Language for Web Services) access control. Through activity instances, the model implements fine-grained access control of activities and synchronizes authorization with business process execution. Finally, the paper describes the implementation architecture of the AACBP model in secure web services composition.
The Internet offers multiple solutions to link companies with their partners, customers, or suppliers using IT solutions, with a special focus on Web services. Web services are able to solve the problems related to the exchange of data between business partners, markets that can use each other's services, and incompatibility between IT applications. As web services are programs described, discovered, and accessed based on XML vocabularies and Web protocols, they represent Web-based technology solutions for small and medium-sized enterprises (SMEs). This paper presents a web service framework for economic applications. A prototype of this IT solution using web services was also implemented in a few companies from the IT, commerce, and consulting fields, measuring the impact of the solution on business environment development.
Trupti B. Mane , Prof. Girish P. Potdar
The World Wide Web (WWW) is getting a lot of attention as it is becoming a huge repository of information. A web page gets deployed on a website by its web template system. Those templates can be used by any individual or organization to set up their website. The templates also provide readers ease of access to the contents, guided by consistent structures. Hence, template detection techniques are emerging as web templates become more and more important. Earlier systems assume that all documents are guaranteed to conform to a common template, and template extraction is done under that assumption. However, this is not feasible in real applications. Our focus is on extracting templates from heterogeneous web pages. Due to the large variety of web documents, there is a need to manage an unknown number of templates. This can be achieved by clustering web documents using a good partition method. The correctness of the extracted templates depends on the quality of the clustering.
Spink, Amanda; Koricich, Andrew; Jansen, B J; Cole, Charles
Sexual information seeking is an important element within human information behavior. Seeking sexually related information on the Internet takes many forms and channels, including chat rooms discussions, accessing Websites or searching Web search engines for sexual materials. The study of sexual Web queries provides insight into sexually-related information-seeking behavior, of value to Web users and providers alike. We qualitatively analyzed queries from logs of 1,025,910 Alta Vista and AlltheWeb.com Web user queries from 2001. We compared the differences in sexually-related Web searching between Alta Vista and AlltheWeb.com users. Differences were found in session duration, query outcomes, and search term choices. Implications of the findings for sexual information seeking are discussed. PMID:15006171
Pazos Arias, José J; Díaz Redondo, Rebeca P
The recommendation of products, content, and services cannot be considered newly born, although its widespread application is still in full swing. Despite its growing success in numerous sectors, the progress of the Social Web has revolutionized the architecture of participation and relationship in the Web, making it necessary to restate recommendation and reconcile it with Collaborative Tagging, as the popularization of authoring in the Web, and Social Networking, as the translation of personal relationships to the Web. Precisely, the convergence of recommendation with the above Social Web pillars is what motivates this book, which has collected contributions from well-known experts in academia and industry to provide a broader view of the problems that Social Recommenders might face. If recommender systems have proven their key role in facilitating user access to resources on the Web, when sharing resources has become social, it is natural for recommendation strategies in the Social Web...
Web application vulnerabilities allow attackers to perform malicious actions that range from gaining unauthorized account access to obtaining sensitive data. The number of reported web application vulnerabilities has increased dramatically in the last decade. Most vulnerabilities result from improper input validation and sanitization. The most important vulnerabilities based on improper input validation and sanitization are SQL injection (SQLI), Cross-Site Scripting (XSS), and Buffer Overflow (BOF). In order to address these vulnerabilities we designed and developed WAPTT (Web Application Penetration Testing Tool). Unlike other web application penetration testing tools, this tool is modular and can be easily extended by the end user. In order to improve the efficiency of SQLI vulnerability detection, WAPTT uses an efficient algorithm for page similarity detection. The proposed tool showed promising results compared to six well-known web application scanners in detecting various web application vulnerabilities.
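The page-similarity idea behind SQLI detection can be sketched as follows: compare the page returned for a benign input with the page returned for an injected one; a large similarity drop suggests the input altered the underlying query. WAPTT's actual algorithm is not reproduced here, and the two pages below are invented.

```python
from difflib import SequenceMatcher

def page_similarity(a, b):
    """Similarity ratio in [0, 1] between two response bodies."""
    return SequenceMatcher(None, a, b).ratio()

# Invented responses: a normal page versus the page after an injected quote.
normal_page = "<html><body>Welcome back, alice. You have 3 messages.</body></html>"
error_page = "<html><body>SQL syntax error near ''' at line 1</body></html>"

# A similarity well below a threshold (0.8 here, an assumed value) flags
# the parameter as suspicious and worth deeper probing.
suspicious = page_similarity(normal_page, error_page) < 0.8
print(suspicious)  # True
```

Comparing whole pages rather than grepping for error strings makes the check robust to applications that hide database errors behind custom pages.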
The National Academy Press is the publisher for the National Academy of Sciences, the National Academy of Engineering, the Institute of Medicine, and the National Research Council. Through this web site, you have access to a virtual treasure trove of books, reports and publicatio...
Greeshma G. Vijayan
As the Internet continues to grow in size and popularity, web traffic and network bottlenecks are major issues in the network world. The continued increase in demand for objects on the Internet causes severe overloading of many sites and network links. Many users have no patience to wait more than a few seconds for a web page to download. Web traffic reduction techniques are necessary for accessing web sites efficiently over existing networks. Web pre-fetching and web caching techniques reduce the web latency that we face on the Internet today. This paper describes various pre-fetching and caching techniques, how they predict the web object to be pre-fetched, and the issues and challenges involved when these techniques are applied to a mobile environment.
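A common building block of the caching techniques surveyed here is least-recently-used (LRU) replacement. The following is a minimal sketch of an LRU web-object cache, not a specific system from the paper; URLs and capacity are illustrative.

```python
from collections import OrderedDict

class LRUCache:
    """Minimal LRU cache for web objects: recently accessed URLs stay,
    the least recently used entry is evicted when capacity is exceeded."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.store = OrderedDict()

    def get(self, url):
        if url not in self.store:
            return None                      # cache miss
        self.store.move_to_end(url)          # mark as most recently used
        return self.store[url]

    def put(self, url, body):
        self.store[url] = body
        self.store.move_to_end(url)
        if len(self.store) > self.capacity:
            self.store.popitem(last=False)   # evict least recently used

cache = LRUCache(2)
cache.put("/a", "page a")
cache.put("/b", "page b")
cache.get("/a")                  # /a becomes most recent
cache.put("/c", "page c")        # capacity exceeded: /b is evicted
print(cache.get("/b"), cache.get("/a"))  # None page a
```

Pre-fetching then amounts to calling `put` speculatively for URLs a predictor expects the user to request next, trading bandwidth for latency.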
Vidya S. Dandagi
The Semantic Web is a system that allows machines to understand complex human requests and reply based on meaning. Semantics is the study of the meanings of linguistic expressions and is a main branch of contemporary linguistics; it concerns the meanings of words, texts, or phrases and the relations between them. RDF provides essential support to the Semantic Web: it was created to represent distributed information, and applications can process the RDF they create in an adaptive manner. Knowledge representation is done using RDF standards and is machine understandable. This paper describes the creation of a semantic web using RDF and the retrieval of accurate results using the SPARQL query language.
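RDF models knowledge as (subject, predicate, object) triples, and SPARQL queries match patterns over them. This pure-Python sketch mimics that idea without an RDF library (rdflib would be the usual choice); the `ex:` names and the triples are invented for illustration.

```python
# Invented triples in (subject, predicate, object) form.
triples = {
    ("ex:Alice", "ex:knows", "ex:Bob"),
    ("ex:Bob", "ex:knows", "ex:Carol"),
    ("ex:Alice", "ex:age", "30"),
}

def match(pattern):
    """Return triples matching a pattern; None acts like a SPARQL variable."""
    s, p, o = pattern
    return [t for t in triples
            if (s is None or t[0] == s)
            and (p is None or t[1] == p)
            and (o is None or t[2] == o)]

# Rough analogue of: SELECT ?who WHERE { ex:Alice ex:knows ?who }
answers = [o for _, _, o in match(("ex:Alice", "ex:knows", None))]
print(answers)  # ['ex:Bob']
```

Real SPARQL adds joins across multiple patterns, filters, and graph-aware storage, but the variable-binding mechanism is essentially this.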
The Web browser has become one of the major application interfaces for remotely participating in magnetic fusion. Web portals are used to present very diverse sources of information in a unified way. While a web portal has several benefits over other software interfaces, such as providing single point of access for multiple computational services, and eliminating the need for client software installation, the design and development of a web portal has unique challenges. One of the challenges is that a web portal needs to be fast and interactive despite a high volume of tools and information that it presents. Another challenge is the visual output on the web portal often is overwhelming due to the high volume of data generated by complex scientific instruments and experiments; therefore the applications and information should be customizable depending on the needs of users. An appropriate software architecture and web technologies can meet these problems. A web-portal has been designed to support the experimental activities of DIII-D researchers worldwide. It utilizes a multi-tier software architecture, and web 2.0 technologies, such as AJAX, Django, and Memcached, to develop a highly interactive and customizable user interface. It offers a customizable interface with personalized page layouts and list of services for users to select. Customizable services are: real-time experiment status monitoring, diagnostic data access, interactive data visualization. The web-portal also supports interactive collaborations by providing collaborative logbook, shared visualization and online instant message services. Furthermore, the web portal will provide a mechanism to allow users to create their own applications on the web portal as well as bridging capabilities to external applications such as Twitter and other social networks. In this series of slides, we describe the software architecture of this scientific web portal and our experiences in utilizing web 2.0 technologies. A
Sprimont, P.-G.; Ricci, D.; Nicastro, L.
Stumme, Gerd; Hotho, Andreas; Berendt, Bettina
Semantic Web Mining aims at combining the two fast-developing research areas Semantic Web and Web Mining. This survey analyzes the convergence of trends from both areas: an increasing number of researchers is working on improving the results of Web Mining by exploiting semantic structures in the Web, and they make use of Web Mining techniques for building the Semantic Web. Last but not least, these techniques can be used for mining the Semantic Web itself. The Semantic Web is t...
In this research paper we briefly present the current major Web problems and introduce Semantic Web technologies, which claim to solve the existing Web's problems. Furthermore, we describe ontology as the main building block of the Semantic Web, focusing on its contributions to Semantic Web progress and its current limitations.
Thomas, David A.; Li, Qing
The World Wide Web is evolving in response to users who demand faster and more efficient access to information, portability, and reusability of digital objects between Web-based and computer-based applications and powerful communication, publication, collaboration, and teaching and learning tools. This article reviews current uses of Web-based…
This paper deals with the study of the Deep Web, a hidden, unexplored part of the Internet. The characteristics and challenges of searching data in the Deep Web are discussed, along with methods to access such information over the Internet. Finally, a comparison is made of the various floating terms related to the Deep Web.
The surprising growth of the Internet, coupled with the rapid development of Web techniques and the ever-increasing emergence of web information systems and applications, is bringing great opportunities and big challenges to us. Since the Web provides cross-platform universal access to resources for a massive user population, there is an even greater demand to manage data and services effectively.
Describes database-driven Web pages that dynamically display different information each time the page is accessed in response to the user's needs. Highlights include information management; online assignments; grade tracking; updating Web pages; creating database-driven Web pages; and examples of how they have been used for a high school physics…
Sharples, Mike; Kloos, Carlos Delgado; Dimitriadis, Yannis; Garlatti, Serge; Specht, Marcus
Many modern web-based systems provide a "responsive" design that allows material and services to be accessed on mobile and desktop devices, with the aim of providing "ubiquitous access." Besides offering access to learning materials such as podcasts and videos across multiple locations, mobile, wearable and ubiquitous…
A web spider is an automated program or a script that independently crawls websites on the internet. At the same time its job is to pinpoint and extract desired data from websites. The data is then saved in a database and is later used for different purposes. Some spiders download websites which are then saved into large repositories, while others search for more specific data, such as emails or phone numbers. The most well known and the most important application of web crawlers is crawling ...
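The core of the spider described above is extracting the links to follow next. This sketch parses a static HTML snippet with the standard-library parser, so it involves no network access; the snippet itself is invented.

```python
from html.parser import HTMLParser

class LinkCollector(HTMLParser):
    """Collect href targets of <a> tags: the next URLs a spider would visit."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href":
                    self.links.append(value)

html = ('<html><body><a href="/about">About</a> '
        '<a href="/contact">Contact</a></body></html>')
collector = LinkCollector()
collector.feed(html)
print(collector.links)  # ['/about', '/contact']
```

A full crawler wraps this in a fetch loop with a frontier queue, a visited-set to avoid re-crawling, and politeness rules such as robots.txt handling.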
Subhra Prosun Paul
The Web is a progressively more important resource in many aspects of life: education, employment, government, commerce, healthcare, recreation, and more. It is essential that the Web be accessible to everyone, offering equal access and equal opportunity to people with disabilities. An accessible Web can also help people with disabilities contribute more actively to society. This paper concentrates on two things: first, it briefly examines accessibility guidelines, evaluation methods, and analysis tools; second, it analyzes and evaluates the web accessibility of e-Government websites of Bangladesh according to the W3C Web Content Accessibility Guidelines. We also present recommendations for improving e-Government website accessibility in Bangladesh.
Kishore, T Krishna; Narayana, N Lakshmi
Most web users' requirements are short search or navigation times and correctly matched results. These constraints can be satisfied with some additional modules attached to existing search engines and web servers. This paper proposes a powerful architecture for search engines under the title Probabilistic Semantic Web Mining, named after the methods used. With the increase of larger and larger collections of various data resources on the World Wide Web (WWW), web mining has become one of the most important requirements for web users. Web servers store various formats of data, including text, image, audio, and video, but servers cannot identify the contents of the data. These search techniques can be improved by adding special techniques, including semantic web mining and probabilistic analysis, to get more accurate results. Semantic web mining can provide meaningful search of data resources by eliminating useless information in the mining process. In this technique web servers...
Bell, Hudson; Tang, Nelson K. H.
A user survey of 60 company Web sites (electronic commerce, entertainment and leisure, financial and banking services, information services, retailing and travel, and tourism) determined that 30% had facilities for conducting online transactions and only 7% charged for site access. Overall, Web sites were rated high in ease of access, content, and…
Brescia, Massimo; Esposito, Francesco; Fiore, Michelangelo; Garofalo, Mauro; Guglielmo, Magda; Longo, Giuseppe; Manna, Francesco; Nocella, Alfonso; Vellucci, Civita
Astronomy is undergoing a methodological revolution triggered by an unprecedented wealth of complex and accurate data. DAMEWARE (DAta Mining & Exploration Web Application and REsource) is a general-purpose, Web-based, Virtual Observatory compliant, distributed data mining framework specialized in the exploration of massive data sets with machine learning methods. We present DAMEWARE, which allows the scientific community to perform data mining and exploratory experiments on massive data sets using a simple web browser. DAMEWARE offers several tools which can be seen as working environments in which to choose data analysis functionalities such as clustering, classification, regression, and feature extraction, together with models and algorithms.
Science plays a crucial role in modern society, and the popularization of science in its electronic form is closely related to the rise and development of the World Wide Web. Since the 1990s, when the Web was introduced as part of the Internet, science popularization has become more and more involved in the web-based society. The Web has therefore become an important technical support for science popularization. On the one hand, the Web has increased the accessibility, visibili...
Nigel E. Bush
Much is written about Internet access, Web access, Web site accessibility, and access to online health information. The term access has, however, a variety of meanings to authors in different contexts when applied to the Internet, the Web, and interactive health communication. We have summarized those varied uses and definitions and consolidated them into a framework that defines Internet and Web access issues for health researchers. We group issues into two categories: connectivity and human interface. Our focus is to conceptualize access as a multicomponent issue that can either reduce or enhance the public health utility of electronic communications.
Madhavan, J.; Afanasiev, L.; Antova, L.; Halevy, A.
Over the past few years, we have built a system that has exposed large volumes of Deep-Web content to Google.com users. The content that our system exposes contributes to more than 1000 search queries per second and spans over 50 languages and hundreds of domains. The Deep Web has long been acknowledged to be a major source of structured data on the web, and hence accessing Deep-Web content has long been a problem of interest in the data management community. In this paper, we report on where...
Development of the US Department of Energy (DOE) Nuclear Criticality Safety (NCS) Program (NCSP) Web site is the result of the efforts of many members of the NCS community and is maintained by Lawrence Livermore National Laboratory (LLNL) under the direction of the NCSP Management Team. This World-Wide-Web resource was developed as part of the DOE response to Defense Nuclear Facilities Safety Board (DNFSB) Recommendation 97-2, which reflected the need to make criticality safety information available to a wide audience. The NCSP Web site provides information of interest to NCS professionals and includes links to other sites actively involved in the collection and dissemination of criticality safety information. To the extent possible, the hyperlinks on this Web site direct the user to the original source of the referenced material to ensure access to the latest, most accurate version. This site is intended to provide a central location for access to relevant NCS information in a user-friendly environment for the criticality safety community.
Web usage mining performs mining on web usage data, or web logs. It is now possible to perform data mining on web log records collected from the web page history. A web log is a listing of page reference (click-stream) data, and the behavior of web page readers is imprinted in the web server log files. By looking at the sequence of pages a user accesses, a user profile can be developed, thus aiding personalization. With personalization, web access or the contents of a web page are modified to better fit the desires of the user. Identifying the browsing behavior of users can also improve system performance, enhance the quality and delivery of Internet information services to the end user, and identify the population of potential customers. With clustering, these desires are determined based on similarities. In this study, a fuzzy clustering algorithm is designed and implemented; meaningful behavior patterns are extracted by applying an efficient fuzzy clustering algorithm to log data. Performance of the proposed system is shown to be better than that of the best existing algorithm. The proposed fuzzy clustering w-miner algorithm can provide popular information to web page visitors.
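As a minimal illustration of the preprocessing step such studies rely on, a click-stream log can be grouped into per-user page sequences before any clustering is applied. The whitespace-separated `user page` record format below is an assumption for illustration, not the format used in the paper:

```python
from collections import defaultdict

def sessions_from_log(lines):
    """Group click-stream records into per-user page sequences.

    Assumes each log line is whitespace-separated with the user
    identifier first and the requested page second (an illustrative
    format, not the one used in the paper).
    """
    sessions = defaultdict(list)
    for line in lines:
        fields = line.split()
        if len(fields) < 2:
            continue                      # skip malformed records
        user, page = fields[0], fields[1]
        sessions[user].append(page)
    return dict(sessions)

log = [
    "u1 /home", "u2 /home", "u1 /news", "u1 /sports", "u2 /weather",
]
profiles = sessions_from_log(log)
# profiles["u1"] == ["/home", "/news", "/sports"]
```

The resulting page sequences are the raw material from which per-user profiles are built and then clustered.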
Traditional search engines deal with the Surface Web, the set of Web pages directly accessible through hyperlinks, and ignore a large part of the Web called the hidden Web: a great amount of valuable information in online databases that is “hidden” behind query forms. To access that information, a crawler has to fill the forms with valid data. For this reason, we propose a new approach which uses an SQLI technique to find the most promising keywords of a specific domain for automatic form submission. The effectiveness of the proposed framework has been evaluated through experiments using real web sites, and encouraging preliminary results were obtained.
Currently, computers are changing from single, isolated devices into entry points to a worldwide network of information exchange and business transactions called the World Wide Web (WWW). However, the success of the WWW has made it increasingly difficult to find, access, present and maintain the information required by a wide variety of users. In response to this problem, many new research initiatives and commercial enterprises have been set up to enrich the available information with machine-processable semantics. This Semantic Web will provide intelligent access to heterogeneous, distributed information, enabling software products (agents) to mediate between user needs and the available information sources. In this paper we describe some areas of application for this new technology, focusing on ongoing work in the fields of knowledge management and electronic commerce. We also take a perspective on semantic-web-enabled web services, which will help bring the Semantic Web to its full potential.
Casey, Maire; Pahl, Claus
Component-based software engineering on the Web differs from traditional component and software engineering. We investigate Web component engineering activities that are crucial for the development, composition, and deployment of components on the Web. The current Web Services and Semantic Web initiatives strongly influence our work. Focusing on Web component composition, we develop description and reasoning techniques that support a component developer in the composition activities, focusing...
In this paper we note that the clustering method is very sensitive to initial center values, places high requirements on the data set, and cannot handle noisy data. The proposed method uses information entropy to initialize the cluster centers and introduces weighting parameters to adjust the location of cluster centers and to address noise problems. The navigation datasets are sequential in nature. Clustering web data means finding groups which share common interests and behavior by analyzing the data collected in web servers; this is done efficiently using improved fuzzy c-means (FCM) clustering. Web usage mining is the application of data mining techniques to web log data repositories; it is used to find user access patterns from web access logs. Web data clusters are formed using the MSNBC web navigation dataset.
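For reference, a minimal sketch of the standard fuzzy c-means iteration that such improvements build on. This uses plain random initialization; the entropy-based center initialization and noise weighting proposed in the paper are not reproduced here:

```python
import math
import random

def fuzzy_c_means(points, c=2, m=2.0, iters=100, seed=0):
    """Plain fuzzy c-means iteration (a baseline sketch only; the
    paper's entropy-based initialization and noise weighting are
    NOT reproduced here)."""
    rng = random.Random(seed)
    centers = [list(p) for p in rng.sample(points, c)]  # c distinct data points
    dim = len(points[0])
    p_exp = 2.0 / (m - 1)
    U = []
    for _ in range(iters):
        # distance of every point to every center (epsilon avoids 0-division)
        dist = [[math.dist(p, ctr) + 1e-12 for ctr in centers] for p in points]
        # membership update: u_ik = 1 / sum_j (d_ik / d_ij)^(2/(m-1))
        U = [[1.0 / sum((dk / dj) ** p_exp for dj in drow) for dk in drow]
             for drow in dist]
        # center update: weighted mean of the points with weights u^m
        for k in range(c):
            w = [U[i][k] ** m for i in range(len(points))]
            tot = sum(w)
            centers[k] = [sum(w[i] * points[i][a] for i in range(len(points))) / tot
                          for a in range(dim)]
    return centers, U

# toy navigation vectors: two obvious behavior groups
clicks = [(0, 0), (0, 1), (1, 0), (10, 10), (10, 11), (11, 10)]
centers, U = fuzzy_c_means(clicks, c=2)
# every membership row sums to 1; well-separated points get near-crisp memberships
```

Each row of `U` is a point's soft membership across the `c` clusters, which is what makes the method tolerant of users whose behavior spans several groups.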
Mohammad Karim Saberi; Alireza Isfandyari-Moghaddam; Sedigheh Mohamadesmaeil
The aim of this research is to scrutinize the accessibility and decay of web citations (URLs) used in the refereed articles published by the Journal of Artificial Societies and Social Simulation (JASSS). To do this, we first downloaded all articles of JASSS from 1998 to 2007. After acquiring the articles, their web citations were extracted and analyzed from the accessibility and decay point of view. Moreover, for initially missed web citations, complementary pathways such as using internet expl...
Duquennoy, Simon; Grimaud, Gilles; Vandewalle, Jean-Jacques
Embedded systems such as smart cards or sensors are now widespread, but are often closed systems, only accessed via dedicated terminals. A new trend consists in embedding Web servers in small devices, making both access and application development easier. In this paper, we propose a TCP performance model in the context of embedded Web servers, and we introduce a taxonomy of the contents possibly served by Web applications. The main idea of this paper is to adapt the communication stack behavi...
Zhang, Huijing; Choffnes, David
Mobile devices that connect to the Internet via cellular networks are rapidly becoming the primary medium for accessing Web content. Cellular service providers (CSPs) commonly deploy Web proxies and other middleboxes for security, performance optimization and traffic engineering reasons. However, the prevalence and policies of these Web proxies are generally opaque to users and difficult to measure without privileged access to devices and servers. In this paper, we present a methodology to de...
The introduction of mobile devices with smaller screens motivates the need for web pages that work correctly across many different devices, referred to as responsive web design. Mobile access is a key feature for companies, both to reach new customers and to provide an enhanced service to existing customers. Testing the correct appearance of a responsive web page on different devices is not a trivial task because there are no standard rules for responsiveness, and the layout may need to ...
Traditionally, interaction between users and the Grid is done with command-line tools. However, these tools are difficult for non-expert users to use, providing minimal help and generating output that is not always easy to understand, especially in case of errors. Graphical user interfaces are typically limited to providing access to monitoring or accounting information, concentrate on particular aspects, and fail to cover the full spectrum of grid control tasks. To make the Grid more user friendly, more complete graphical interfaces are needed. Within the DIRAC project we have attempted to construct a Web-based user interface that provides means not only for monitoring system behavior but also for steering the main user activities on the grid. Using DIRAC's web interface a user can easily track jobs and data. It provides access to job information and allows performing actions on jobs, such as killing or deleting them. Data managers can define and monitor file transfer activity as well as check requests set by jobs. Production managers can define and follow large data productions and react if necessary by stopping or starting them. The Web Portal is built following all the grid security standards and using modern Web 2.0 technologies, which allow achieving a user experience similar to that of desktop applications. Details of the DIRAC Web Portal architecture and user interface will be presented and discussed.
The importance of accessibility of digital e-learning resources is widely acknowledged. The World Wide Web Consortium's Web Accessibility Initiative has played a leading role in promoting the importance of accessibility and developing guidelines that can help when developing accessible web resources. The accessibility of e-learning resources presents additional challenges. While it is important to consider the technical and resource-related aspects of e-learning when designing and developing resources for students with disabilities, there is a need to consider pedagogic and contextual issues as well. A holistic framework is therefore proposed and described which, in addition to accessibility issues, takes into account learner needs, learning outcomes, local factors, infrastructure, usability and quality assurance. The practical application and implementation of this framework is discussed and illustrated through the use of examples and case studies.
Nielsen, Jens Munk
Food webs are structured by intricate nodes of species interactions which govern the flow of organic matter in natural systems. Despite being long recognized as a key component in ecology, estimation of food web functioning is still challenging due to the difficulty in accurately measuring species interactions within a food web. Novel tracing methods that estimate species diet uptake and trophic position are therefore needed for assessing food web dynamics. The focus of this thesis is the use...
Collins, Trevor; Quick, Kevin; Joiner, Richard; Littleton, Karen
This paper presents the design and application of a web service architecture for providing shared access to web-based visual representations, such as dynamic models, simulations and visualizations. The Shared Representations (SR) system was created to facilitate the development of collaborative and co-operative learning activities over the web, and has been applied to provide shared group access to: a high-resolution image viewer, a virtual petrological microscope, and a forces and motion spr...
Bayu Kanigoro; Widodo Budiharto; Jurike V. Moniaga; Muhsin Shodiq
Once an individual has access to the Internet, there is a wide variety of methods of communication and information exchange over the network; one of them is the telepresence robot. This study presents a web framework for the web conference system of an intelligent telepresence robot. The robot is controlled through a web conference system built on Google App Engine, so a manager or supervisor at an office or plant can direct the robot to the intended person to start a discussion or inspection. We build a web...
Rathipriya, R.; Thangavel, K.; Bagyamani, J.
Web mining is the nontrivial process of discovering valid, novel, potentially useful knowledge from web data using data mining techniques or methods. It may give information that is useful for improving the services offered by web portals and information access and retrieval tools. With the rapid development of biclustering, more researchers have applied the biclustering technique to different fields in recent years. When the biclustering approach is applied to web usage data it automaticall...
The exponential expansion of the number of web sites and Internet users makes the WWW the most important global information resource. From information publishing and electronic commerce to entertainment and social networking, the Web allows inexpensive and efficient access to the services provided by individuals and institutions. The basic units for distributing these services are the web sites scattered throughout the world. However, the extreme fragility of web services and content, the hig...
Yogish H K; Dr. G. T. Raju; Manjunath T. N.
The World Wide Web serves as a huge, widely distributed, global information service centre for news, advertisements, consumer information, financial management, education, government, e-commerce and many other information services. The web also contains a rich and dynamic collection of hyperlink information and web page access and usage information, providing rich sources of data for data mining. Web usage mining is the area of data mining which deals with the discovery and analysis of usag...
S. Latha Shanmugavadivu; Dr. M. Rajaram
Caching is a technique first used in memory management to reduce bus traffic and the latency of data access. Web traffic has increased tremendously since the beginning of the 1990s. With this significant increase in Web traffic, caching techniques are applied to Web caching to reduce network traffic, user-perceived latency, and server load by caching documents in local proxies. In this paper, we analyze both the advantages and disadvantages of some current Web cache replacement algorithms, inc...
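As a concrete reference point for one of the classic replacement policies such surveys cover, here is a minimal sketch of Least-Recently-Used (LRU) eviction; it illustrates the general policy only and is not tied to any particular proxy implementation:

```python
from collections import OrderedDict

class LRUCache:
    """Least-Recently-Used replacement: evict the entry unused the longest."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.store = OrderedDict()       # iteration order == recency order

    def get(self, key):
        if key not in self.store:
            return None                  # cache miss
        self.store.move_to_end(key)      # mark as most recently used
        return self.store[key]

    def put(self, key, value):
        if key in self.store:
            self.store.move_to_end(key)
        self.store[key] = value
        if len(self.store) > self.capacity:
            self.store.popitem(last=False)   # evict the least recently used

cache = LRUCache(capacity=2)
cache.put("/a.html", "A")
cache.put("/b.html", "B")
cache.get("/a.html")              # touching /a.html makes /b.html the LRU entry
cache.put("/c.html", "C")         # evicts /b.html
# cache.get("/b.html") is None; /a.html and /c.html remain cached
```

Real proxy policies refine this by also weighing document size, fetch cost, and access frequency, which is where the trade-offs analyzed in the paper arise.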
Shaojun Zhong; Zhijuan Deng
A practical distributed web crawler architecture is designed. The distributed cooperative grasping algorithm is put forward to solve the problem of distributed Web Crawler grasping. Log structure and Hash structure are combined and a large-scale web store structure is devised, which can meet not only the need for a large number of random accesses, but also the need to store newly added pages. Experiment results have shown that the distributed Web Crawler's performance, scalability, and load balance are better.
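The combination of a log structure with a hash structure can be sketched in miniature: an append-only log keeps newly added pages cheap to store, while a hash index over URL digests gives O(1) random access and duplicate detection. This is an in-memory illustration of the general idea only, not the paper's on-disk layout:

```python
import hashlib

class WebStore:
    """Toy in-memory analogue of a combined log + hash store structure.

    An append-only log holds fetched pages; a hash index maps URL
    digests to log positions, so appends stay cheap while random
    access by URL is O(1). (A sketch of the idea only, not the
    paper's on-disk design.)
    """

    def __init__(self):
        self.log = []        # append-only list of (url, content) records
        self.index = {}      # url digest -> position in the log

    @staticmethod
    def digest(url):
        return hashlib.sha1(url.encode("utf-8")).hexdigest()

    def add(self, url, content):
        h = self.digest(url)
        if h in self.index:          # already stored: skip the duplicate
            return False
        self.index[h] = len(self.log)
        self.log.append((url, content))
        return True

    def get(self, url):
        pos = self.index.get(self.digest(url))
        return None if pos is None else self.log[pos][1]

store = WebStore()
store.add("http://example.com/", "<html>hello</html>")
store.add("http://example.com/", "<html>dup</html>")   # ignored as a duplicate
# store.get("http://example.com/") == "<html>hello</html>"
```

In a distributed crawler, the same digest can also be used to shard URLs across crawler nodes, so each node deduplicates only its own partition.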
Spam pages are designed to maliciously appear among the top search results through excessive usage of popular terms. Such pages should therefore be removed using an effective and efficient spam detection system. Previous methods for web spam classification used several features from various information sources (page contents, web graph, access logs, etc.) to detect web spam. In this paper, we follow a page-level classification approach to build fast and scalable spam filters. We show that each web ...
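A toy version of one page-content feature of the kind such classifiers consume, measuring how heavily a page leans on popular query terms. This is an illustration of the feature family only, not a feature taken from the paper:

```python
def popular_term_density(text, popular_terms):
    """Fraction of a page's words drawn from a set of popular query terms.

    A toy page-level content feature: keyword-stuffed spam pages score
    high, so a threshold on this density can feed a simple filter.
    (Illustrative only; not a feature from the paper.)
    """
    words = text.lower().split()
    if not words:
        return 0.0
    popular = {t.lower() for t in popular_terms}
    return sum(w in popular for w in words) / len(words)

terms = {"cheap", "free", "viagra", "loan"}
spammy = "cheap cheap free loan free cheap viagra"
normal = "weather forecast for the coming week in berlin"
# popular_term_density(spammy, terms) == 1.0; the normal text scores 0.0
```

In practice a classifier combines many such signals (term densities, link features, log statistics) rather than thresholding any single one.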
Umar, Azeem; Tatari, Kamran Khan
Web development is different from traditional software development. As in all software applications, usability is one of the core components of web applications. Usability engineering and web engineering are rapidly growing fields. Companies can improve their market position by making their products and services more accessible through usability engineering. User testing is often skipped when a deadline approaches; this is very much true in the case of web application development. Achieving good...