Ratnayake, Rakhitha Nimesh
This book is intended for WordPress developers and designers who want to develop quality web applications within a limited time frame and for maximum profit. Prior knowledge of basic web development and design is assumed.
Building a web application that attracts and retains regular visitors is tricky enough, but creating a social application that encourages visitors to interact with one another requires careful planning. This book provides practical solutions to the tough questions you'll face when building an effective community site -- one that makes visitors feel like they've found a new home on the Web. If your company is ready to take part in the social web, this book will help you get started. Whether you're creating a new site from scratch or reworking an existing site, Building Social Web Applications...
Information and services on the web are accessible for everyone. Users of the web differ in their background, culture, political and social environment, interests and so on. Ambient intelligence was envisioned as a concept for systems which are able to adapt to user actions and needs. With the gr...... suit the user profile the most. This paper summarizes the domain engineering framework for such adaptive web applications. The framework provides guidelines to develop adaptive web applications as members of a family. It suggests how to utilize the design artifacts as knowledge which can be used...
Casteleyn, Sven; Daniel, Florian; Dolog, Peter
Nowadays, Web applications are almost omnipresent. The Web has become a platform not only for information delivery, but also for eCommerce systems, social networks, mobile services, and distributed learning environments. Engineering Web applications involves many intrinsic challenges due to their distributed nature, content orientation, and the requirement to make them available to a wide spectrum of users who are unknown in advance. The authors discuss these challenges in the context of well-established engineering processes, covering the whole product lifecycle from requirements engineering through...
How do you create a mission-critical site that provides exceptional performance while remaining flexible, adaptable, and reliable 24/7? Written by the manager of a UI group at Yahoo!, Developing Large Web Applications offers practical steps for building rock-solid applications that remain effective even as you add features, functions, and users. You'll learn how to develop large web applications with the extreme precision required for other types of software. Avoid common coding and maintenance headaches as small websites add more pages, more code, and more programmers. Get comprehensive solutions...
Progressive Web Applications are native-like applications running inside a browser context. In my presentation I would like to describe their characteristics, benchmarks and building process, using a quick and simple case study example with a focus on the Service Workers API.
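The Service Workers API mentioned in this presentation is most often used to implement caching strategies. As an illustrative sketch (not the presentation's actual code), a cache-first strategy can be written with its cache and network dependencies passed in as plain functions, so the logic also runs outside a browser:

```javascript
// Hypothetical sketch of the cache-first strategy a Service Worker
// might apply in its fetch handler. The cache and network are injected
// as plain async functions so the logic is testable outside a browser.
async function cacheFirst(request, cache, network) {
  const cached = await cache.match(request);
  if (cached !== undefined) {
    return cached;                      // serve from cache when available
  }
  const response = await network(request);
  await cache.put(request, response);   // remember it for next time
  return response;
}

// In a real Service Worker this would be wired up roughly as:
//   self.addEventListener('fetch', (event) => {
//     event.respondWith(caches.open('v1').then(/* use cacheFirst */));
//   });
```

The dependency injection here is a sketching convenience; in a browser, `caches` and `fetch` play the roles of `cache` and `network`.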
The World Wide Web has expanded from a huge information storage repository into a worldwide application platform. Web applications have several benefits compared to desktop applications. An application can be used anywhere from any system and device, which means that only one version is needed; applications do not need to be installed, and developers can modify running applications. Despite all the benefits of the Web, web applications are suffering because they are developed using the same technologies...
Alexandru Dan CĂPRIŢĂ
Unit tests are a vital part of several software development practices and processes such as Test-First Programming, Extreme Programming and Test-Driven Development. This article briefly presents software quality and testing concepts as well as an introduction to an automated unit testing framework for PHP web-based applications.
The unique characteristic of web applications is that they are supposed to be used by a much bigger and more diverse set of users and stakeholders. An example application area is e-Learning or business-to-business interaction. In an eLearning environment, various users with different backgrounds use the eLearning......-based applications aim to leave some of their features at the design stage in the form of variables which are dependent on several criteria. The resolution of the variables is called adaptation and can be seen from two perspectives: adaptation by humans to the changed requirements of stakeholders, and dynamic system adaptation to the changed parameters of environments, user or context. Adaptation can be seen as an orthogonal concern or viewpoint in a design process. In this paper I will discuss design abstractions which are employed in current design methods for web applications. I will exemplify the use...
Maria Cristina ENACHE
Recently, a new web development technique for creating interactive web applications, dubbed AJAX, has emerged in response to the limited degree of interactivity in large-grain stateless web interactions. In this new model, the web interface is composed of individual components which can be...
Paulsworth, Ashley [Sunvestment Group, Frederick, MD (United States); Kurtz, Jim [Sunvestment Group, Frederick, MD (United States); Brun de Pontet, Stephanie [Sunvestment Group, Frederick, MD (United States)
Sunvestment Energy Group (previously called Sunvestment Group) was established to create a web application that brings together site hosts, those who will obtain the energy from the solar array, with project developers and funders, including affinity investors. Sunvestment Energy Group (SEG) uses a community-based model that engages with investors who have some affinity with the site host organization. In addition to a financial return, these investors receive non-financial value from their investments and are therefore willing to offer lower cost capital. This enables the site host to enjoy more savings from solar through these less expensive Community Power Purchase Agreements (CPPAs). The purpose of this award was to develop an online platform to bring site hosts and investors together virtually.
Hemel, Z.; Groenewegen, D.M.; Kats, L.C.L.; Visser, E.
Modern web application development frameworks provide web application developers with high-level abstractions to improve their productivity. However, their support for static verification of applications is limited. Inconsistencies in an application are often not detected statically, but appear as...
DOSPINESCU, Octavian; PERCA, Marian
Do you need to keep up with the latest hacks, attacks, and exploits affecting web applications? Then you need Seven Deadliest Web Application Attacks. This book pinpoints the most dangerous hacks and exploits specific to web applications, laying out the anatomy of these attacks, including how to make your system more secure. You will discover the best ways to defend against these vicious hacks with step-by-step instruction and learn techniques to make your computer and network impenetrable. Attacks detailed in this book include: Cross-Site Scripting (XSS); Cross-Site Request Forgery...
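The canonical defence against the first attack listed, Cross-Site Scripting, is to encode user input before echoing it into a page. A generic sketch (not the book's code):

```javascript
// Minimal HTML-escaping helper: neutralizes the characters that XSS
// payloads rely on before user input is rendered into a page.
function escapeHtml(input) {
  return String(input)
    .replace(/&/g, '&amp;')   // must run first, or entities get double-escaped
    .replace(/</g, '&lt;')
    .replace(/>/g, '&gt;')
    .replace(/"/g, '&quot;')
    .replace(/'/g, '&#39;');
}

// A classic XSS probe is rendered inert:
escapeHtml('<script>alert(1)</script>');
// → '&lt;script&gt;alert(1)&lt;/script&gt;'
```

Escaping like this is context-sensitive in practice: attribute values, URLs, and inline scripts each need their own encoding rules.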
Thalheim, Bernhard; Prinz, Andreas; Buchberger, Bruno
The papers in this volume aim at obtaining a common understanding of the challenging research questions in web applications comprising web information systems, web services, and web interoperability; obtaining a common understanding of verification needs in web applications; achieving a common understanding of the available rigorous approaches to system development, and the cases in which they have succeeded; identifying how rigorous software engineering methods can be exploited to develop suitable web applications; and at developing a European-scale research agenda combining theory, methods and...
Information and communication technologies are designed to support and anticipate the continuing changes of the information society, while outlining new economic, social and cultural dimensions. We see the growth of new business models whose aim is to remove traditional barriers and improve the value of goods and services. Information is a strategic resource and its manipulation raises new problems for all entities involved in the process. Information and communication technologies should be a stable support in managing the flow of data and should safeguard its integrity, confidentiality and availability. Concepts such as eBusiness, eCommerce, Software as a Service, Cloud Computing and Social Media are based on web technologies consisting of complex languages, protocols and standards, built around client-server architecture. One of the most used technologies in mobile applications is Web Services, defined as an application model supported by any operating system able to provide certain functionalities using Internet technologies to promote interoperability between various applications and platforms. Web services use HTTP, XML, SSL, SMTP and SOAP, because their stability has been proven over the years. Their functionalities are highly variable, with Web service applications offering exchange, weather, arithmetic or authentication services. In this article we will talk about SOAP and REST architectures for web services in mobile applications and we will also provide some practical examples based on the Android platform.
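The SOAP/REST contrast the article draws can be made concrete by building a SOAP 1.1 request envelope by hand. The method name, namespace and parameters below are invented for illustration; only the envelope structure and namespace URI follow the SOAP 1.1 standard:

```javascript
// Hypothetical sketch: constructing a SOAP 1.1 request envelope.
// 'GetWeather' and its namespace are made-up examples, not a real
// service from the article.
function soapEnvelope(method, namespace, params) {
  const body = Object.entries(params)
    .map(([name, value]) => `<${name}>${value}</${name}>`)
    .join('');
  return (
    '<?xml version="1.0" encoding="utf-8"?>' +
    '<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">' +
    '<soap:Body>' +
    `<${method} xmlns="${namespace}">${body}</${method}>` +
    '</soap:Body>' +
    '</soap:Envelope>'
  );
}

// The equivalent REST call would simply be an HTTP GET on a URL such
// as https://example.org/weather?city=Iasi -- no envelope needed,
// which is one reason REST is popular on mobile platforms.
```

In practice the envelope is sent as the body of an HTTP POST with a `SOAPAction` header, while a REST client just parses the JSON or XML returned by the plain HTTP request.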
Groenewegen, D.M.; Visser, E.
This paper is a pre-print of: Danny M. Groenewegen, and Eelco Visser. Weaving Web Applications with WebDSL (Demonstration). In: Gary T. Leavens (editor) Companion to the 24th ACM SIGPLAN Conference on Object-Oriented Programing, Systems, Languages, and Applications (OOPSLA 2009). WebDSL is a...
Vuorimaa, Petri; Laine, Markku; Litvinova, Evgenia; Shestakov, Denis
Web Applications have become an omnipresent part of our daily lives. They are easy to use, but hard to develop. WYSIWYG editors, form builders, mashup editors, and markup authoring tools ease the development of Web Applications. However, more advanced Web Applications require server-side programming, which is beyond the skills of end-user developers. In this paper, we discuss how declarative languages can simplify Web Application development and empower end-users as Web developers. We first...
Luján Mora, Sergio
Slides of the course "Web Application Programming with Google Maps" taught at the Politechnika Lubelska (Lublin, Poland) in November 2011.
Computer security has been an increasing concern for IT professionals for a number of years, yet despite all the efforts, computer systems and networks remain highly vulnerable to attacks of different kinds. Design flaws and security bugs in the underlying software are among the main reasons for this. This lecture focuses on security aspects of Web application development. Various vulnerabilities typical of web applications (such as cross-site scripting, SQL injection, cross-site request forgery, etc.) are introduced and discussed. Sebastian Lopienski is CERN's deputy Computer Security Officer. He works on security strategy and policies; offers internal consultancy and audit services; develops and maintains security tools for vulnerability assessment and intrusion detection; provides training and awareness raising; and does incident investigation and response. During his work at CERN since 2001, Sebastian has had various assignments, including designing and developing software to manage and support services...
Web application vulnerabilities allow attackers to perform malicious actions that range from gaining unauthorized account access to obtaining sensitive data. The number of reported web application vulnerabilities in the last decade has increased dramatically. Most vulnerabilities result from improper input validation and sanitization. The most important of these vulnerabilities are: SQL injection (SQLI), Cross-Site Scripting (XSS) and Buffer Overflow (BOF). In order to address these vulnerabilities we designed and developed WAPTT (Web Application Penetration Testing Tool). Unlike other web application penetration testing tools, this tool is modular and can be easily extended by the end-user. In order to improve the efficiency of SQLI vulnerability detection, WAPTT uses an efficient algorithm for page similarity detection. The proposed tool showed promising results as compared to six well-known web application scanners in detecting various web application vulnerabilities.
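The abstract does not disclose WAPTT's page-similarity algorithm. A common generic choice for this task is the Jaccard index over word tokens, sketched below; blind SQL-injection scanners compare responses in roughly this way to decide whether an injected input changed the page:

```javascript
// Generic page-similarity measure: Jaccard index over word tokens.
// This is a plausible stand-in, not WAPTT's published algorithm.
function pageSimilarity(htmlA, htmlB) {
  const tokens = (html) =>
    new Set(html.toLowerCase().split(/\W+/).filter((t) => t.length > 0));
  const a = tokens(htmlA);
  const b = tokens(htmlB);
  let shared = 0;
  for (const t of a) {
    if (b.has(t)) shared += 1;        // count tokens in both pages
  }
  const unionSize = a.size + b.size - shared;
  return unionSize === 0 ? 1 : shared / unionSize;  // 0 = disjoint, 1 = identical
}
```

A scanner would flag an injection point when the similarity between the baseline page and the page returned for a tautology payload (e.g. `' OR 1=1 --`) differs markedly from the similarity for a contradiction payload.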
Ibarra, A.; Kennedy, M.; Rodríguez, P.; Hernández, C.; Saxton, R.; Gabriel, C.
Web services have been opening a wide avenue for software integration. In this paper, we have reported our experiments with three applications that are built by utilizing and providing web services for Geographic Information Systems (GIS)...
Web-TLR is a Web verification engine that is based on the well-established Rewriting Logic--Maude/LTLR tandem for Web system specification and model checking. In Web-TLR, Web applications are expressed as rewrite theories that can be formally verified by using the Maude built-in LTLR model-checker. Whenever a property is refuted, a counterexample trace is delivered that reveals an undesired, erroneous navigation sequence. Unfortunately, the analysis (or even the simple inspection) of such counterexamples may be unfeasible because of the size and complexity of the traces under examination. In this paper, we endow Web-TLR with a new Web debugging facility that supports the efficient manipulation of counterexample traces. This facility is based on a backward trace-slicing technique for rewriting logic theories that allows the pieces of information that we are interested in to be traced back through inverse rewrite sequences. The slicing process drastically simplifies the computation trace by dropping useless data that do not influence the final result. By using this facility, the Web engineer can focus on the relevant fragments of the failing application, which greatly reduces the manual debugging effort and also decreases the number of iterative verifications.
As the Internet has evolved, so have the various vulnerabilities, which largely stem from the fact that developers are unaware of the importance of a robust application security program. This book aims to educate readers on application security and building secure web applications using the new Java Platform. The text details a secure web application development process from the risk assessment phase to the proof of concept phase. The authors detail such concepts as application risk assessment, secure SDLC, security compliance requirements, web application vulnerabilities and threats, security
Groeneveld, F.; Mesbah, A.; Van Deursen, A.
Stanciulescu, Adrian; Vanderdonckt, Jean; Proceedings of 6th Int. Conf. on Computer-Aided Design of User Interfaces CADUI'2006
There has been a lot of discussion within the Grid community about the use of Web Services technologies in building large-scale, loosely-coupled, cross-organisation applications. In this talk we are going to explore the principles that govern Service-Oriented Architectures and the promise of Web Services technologies for integrating applications that span administrative domains. We are going to see how existing Web Services specifications and practices could provide the necessary infrastructure for implementing Grid applications. Biography Dr. Savas Parastatidis is a Principal Research Associate at the School of Computing Science, University of Newcastle upon Tyne, UK. Savas is one of the authors of the "Grid Application Framework based on Web Services Specifications and Practices" document that was influential in the convergence between Grid and Web Services and the move away from OGSI (more information can be found at http://www.neresc.ac.uk/ws-gaf). He has done research on runtime support for distributed-m...
Vesterli, Sten E
Developing Web Applications with Oracle ADF Essentials covers the basics of Oracle ADF and then works through more complex topics such as debugging and logging features and JAAS security in JDeveloper as the reader gains more skills. This book follows a tutorial approach, using a practical example, with the content and tasks getting harder throughout. Developing Web Applications with Oracle ADF Essentials is for you if you want to build modern, user-friendly web applications for all kinds of data gathering, analysis, and presentation. You do not need to know any advanced HTML or JavaScript...
Ratnayake, Rakhitha Nimesh
An extensive, practical guide that explains how to adapt WordPress features, both conventional and trending, for web applications. This book is intended for WordPress developers and designers who have the desire to go beyond conventional website development to develop quality web applications within a limited time frame and for maximum profit. Experienced web developers who are looking for a framework for rapid application development will also find this to be a useful resource. Prior knowledge of WordPress is preferable, as the main focus will be on explaining methods for adapting WordPress...
Ren, Jingyuan; Williams, Nadya; Clementi, Luca; Krishnan, Sriram; Li, Wilfred W
Biomedical applications have become increasingly complex, and they often require large-scale high-performance computing resources with a large number of processors and memory. The complexity of application deployment and the advances in cluster, grid and cloud computing require new modes of support for biomedical research. Scientific Software as a Service (sSaaS) enables scalable and transparent access to biomedical applications through simple standards-based Web interfaces. Towards this end, we built a production web server (http://ws.nbcr.net) in August 2007 to support the bioinformatics application called MEME. The server has grown since to include docking analysis with AutoDock and AutoDock Vina, electrostatic calculations using PDB2PQR and APBS, and off-target analysis using SMAP. All the applications on the servers are powered by Opal, a toolkit that allows users to wrap scientific applications easily as web services without any modification to the scientific codes, by writing simple XML configuration files. Opal allows both web forms-based access and programmatic access of all our applications. The Opal toolkit currently supports SOAP-based Web service access to a number of popular applications from the National Biomedical Computation Resource (NBCR) and affiliated collaborative and service projects. In addition, Opal's programmatic access capability allows our applications to be accessed through many workflow tools, including Vision, Kepler, Nimrod/K and VisTrails. From mid-August 2007 to the end of 2009, we have successfully executed 239,814 jobs. The number of successfully executed jobs more than doubled from 205 to 411 per day between 2008 and 2009. The Opal-enabled service model is useful for a wide range of applications. It provides for interoperation with other applications with Web Service interfaces, and allows application developers to focus on the scientific tool and workflow development. Web server availability: http://ws.nbcr.net.
Schwarz, Mathias Romme
Numerous web application frameworks have been developed in recent years. These frameworks enable programmers to reuse common components and to avoid typical pitfalls in web application development. Although such frameworks help the programmer to avoid many common errors, we find...... some of the common errors through an API that is designed to be safe by default. Second, we present a novel technique for checking HTML validity for output that is generated by web applications. Through string analysis, we approximate the output of web applications as context-free grammars. We model the HTML validation algorithm and the DTD language, and we generalize the validation algorithm to work for context-free grammars. Third, we present a novel technique for identifying client-state manipulation vulnerabilities. The technique uses a combination of output analysis and information flow analysis...
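The thesis's grammar-based validation is beyond a short sketch, but the flavor of checking generated HTML can be conveyed with a much weaker stand-in: a stack-based well-formedness check over tags. Void elements and attributes are deliberately ignored here; this is not the thesis's algorithm, only an illustration of validating generated output:

```javascript
// Toy well-formedness check for HTML-like output: every opening tag
// must be closed in the right order. Far weaker than grammar-based
// DTD validation, but it shows the idea of checking generated markup.
function tagsBalanced(html) {
  const stack = [];
  const tagPattern = /<\/?([a-zA-Z][a-zA-Z0-9]*)[^>]*>/g;
  let match;
  while ((match = tagPattern.exec(html)) !== null) {
    if (match[0][1] === '/') {
      // closing tag must match the most recently opened tag
      if (stack.pop() !== match[1].toLowerCase()) return false;
    } else {
      stack.push(match[1].toLowerCase());
    }
  }
  return stack.length === 0;  // everything opened was closed
}
```

The thesis-level version replaces the concrete input string with a context-free grammar over-approximating *all* outputs the application can generate, so a single check covers every execution path.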
Sullivan, Bryan; Liu, Vincent
... Sullivan and Liu have created a savvy, essentials-based approach to web app security packed with immediately applicable tools for any information security practitioner sharpening his or her tools or just starting out...
The current paper tackles the issue of determining a method for estimating maintenance costs for web applications. The current state of research in the field of web application maintenance is summarized and leading theories and results are highlighted. The cost of web maintenance is determined by the number of man-hours invested in maintenance tasks. Web maintenance tasks are categorized into content maintenance and technical maintenance; research is centered on analyzing technical maintenance tasks. The research hypothesis is formulated on the assumption that the number of man-hours invested in maintenance tasks can be assessed based on the web application's user interaction level, complexity and content update effort. Data regarding the costs of maintenance tasks is collected from 24 maintenance projects implemented by a web development company that tackles a wide area of web applications. The homogeneity and diversity of the collected data are submitted for debate by presenting a sample of the data and depicting the overall size and comprehensive nature of the entire dataset. A set of metrics dedicated to estimating maintenance costs in web applications is defined based on conclusions formulated by analyzing the collected data and the theories and practices dominating the current state of research. The metrics are validated with regard to the initial research hypothesis, and conclusions are formulated on the topic of estimating the maintenance cost of web applications. The limits of the research process that formed the basis for the current paper are stated, and future research topics are submitted for debate.
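The abstract states the hypothesis (man-hours as a function of user interaction level, complexity, and content update effort) but not the fitted metrics. A purely hypothetical linear sketch of such a model, with invented weights, would look like:

```javascript
// Purely hypothetical cost model illustrating the paper's hypothesis:
// estimated maintenance man-hours as a weighted sum of the three
// factors named in the abstract. All weights are invented; the paper
// derives its actual metrics from 24 real maintenance projects.
function estimateManHours(interactionLevel, complexity, contentUpdateEffort) {
  const wInteraction = 1.5;  // hours per interaction-level point (assumed)
  const wComplexity = 2.0;   // hours per complexity point (assumed)
  const wContent = 0.5;      // hours per content-update point (assumed)
  return (
    wInteraction * interactionLevel +
    wComplexity * complexity +
    wContent * contentUpdateEffort
  );
}
```

In the paper's setting, the weights would be fitted against the man-hour records of the 24 collected projects rather than chosen by hand.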
The present study aims to introduce and analyze bibliometric applications within the Web, and also expounds on the status of link analysis in order to point out its application with respect to existing web-based information sources. Findings indicate that bibliometrics could have the required application in the area of digital resources available through the Net. Link analysis is a process by which one can make a statistical analysis of the correlation between hyperlinks and therefore understand the accuracy, veracity and efficacy of citations within a digital document. Link analysis, in effect, is counted as a part of information-ranking algorithms within the web environment. The number, linkage and quality of links given to a website are of utmost importance for its ranking/status in the Web. The tools applied in this topic include page ranking strategies, link analysis algorithms, latent semantic indexing and the classical input-output model. The present study analyzes Big Web and Small Web link analysis and explains the means for utilizing web charts in order to better understand the link analysis process.
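Link-analysis ranking of the kind discussed here (PageRank being the best-known instance) can be sketched as a simple power iteration over an adjacency list. The formulation below follows the standard textbook algorithm, not this particular study; dangling pages are ignored for brevity:

```javascript
// Minimal PageRank power iteration over an adjacency list of the form
// { page: [pages it links to] }. Uses the standard damping factor 0.85.
// Assumes every page has at least one outgoing link (no dangling pages).
function pageRank(links, iterations = 50, d = 0.85) {
  const pages = Object.keys(links);
  const n = pages.length;
  let rank = Object.fromEntries(pages.map((p) => [p, 1 / n]));
  for (let i = 0; i < iterations; i++) {
    // every page starts each round with the teleportation share
    const next = Object.fromEntries(pages.map((p) => [p, (1 - d) / n]));
    for (const p of pages) {
      const outLinks = links[p];
      for (const target of outLinks) {
        next[target] += (d * rank[p]) / outLinks.length;  // distribute rank
      }
    }
    rank = next;
  }
  return rank;
}

// In { a: ['c'], b: ['c'], c: ['a'] }, page c is linked from both a
// and b, so it ends up with the highest rank.
```

This captures the study's point that the number and origin of inbound links, not a page's own content, determines its rank.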
In recent years, many desktop applications have been ported to the world wide web in order to reduce (multiplatform) development, distribution and maintenance costs. However, there is little data concerning the usability of web applications, and the impact of their usability on the total cost of developing and using such applications. In this paper we present a comparison of web and desktop applications from the usability point of view. The comparison is based on an empirical study that investigates the performance of a group of users on two calendaring applications: Yahoo!Calendar and Microsoft Calendar. The study shows that in the case of web applications the performance of the users is significantly reduced, mainly because of the restricted interaction mechanisms provided by current web browsers.
The Social Web constitutes a shift in information flow from the traditional Web. Previously, content was provided by the owners of a website, for consumption by the end-user. Nowadays, these websites are being replaced by Social Web applications which are frameworks for the publication of user-provided content. Traditionally, Web content could be 'trusted' to some extent based on the site it originated from. Algorithms such as Google's PageRank were (and still are) used to compute the importance of a website, based on analysis of the underlying link topology. In the Social Web, analysis of link topology merely tells us about the importance of the information framework which hosts the content. Consumers of information still need to know about the importance/reliability of the content they are reading, and therefore about the reliability of the producers of that content. Research into trust and reputation of the producers of information in the Social Web is still very much in its infancy. Every day, people are forced to make trusting decisions about strangers on the Web based on a very limited amount of information. For example, purchasing a product from an eBay seller with a 'reputation' of 99%, downloading a file from a peer-to-peer application such as BitTorrent, or allowing Amazon.com to tell you what products you will like. Even something as simple as reading comments on a Web log requires the consumer to make a trusting decision about the quality of that information. In all of these example cases, and indeed throughout the Social Web, there is a pressing demand for increased information upon which we can make trusting decisions. This chapter examines the diversity of sources from which trust information can be harnessed within Social Web applications and discusses a high-level classification of those sources. Three different techniques for harnessing and using trust from a range of sources are presented. These techniques are deployed in two sample Social Web...
Zaplata, Sonja; Dreiling, Viktor; Lamersdorf, Winfried
Use of web services on mobile devices is becoming increasingly relevant. However, realizing such mobile web services based on the standard protocol stack is often inappropriate for resource-restricted mobile devices in dynamic networks. On the other hand, using specialized alternative protocols restricts compatibility with traditional service applications. Thus, existing approaches often do not allow heterogeneous service instances to be integrated dynamically, as is required, e.g., for executing mobile service-based business processes.
Stavros Ioannis Valsamidis
The usage of web applications can be measured with the use of metrics. In an LMS, a typical web application, there are no appropriate metrics which would facilitate qualitative and quantitative measurement. The purpose of this paper is to propose the use of existing techniques in a different way, in order to analyze the log file of a typical LMS and deduce useful conclusions. Three metrics for course usage measurement are used. The paper also describes two algorithms for course classification and suggested actions. The metrics and the algorithms were applied to Open eClass LMS tracking data of an academic institution. The results from 39 courses presented interesting insights. Although the case study concerns an LMS, the approach can also be applied to other web applications such as e-government, e-commerce, e-banking, blogs, etc.
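The paper's three course-usage metrics are not spelled out in the abstract. As a generic illustration of deriving one such metric from an LMS log, the sketch below computes actions per distinct user for each course; the log-entry shape is a simplified stand-in for a real Open eClass log line, and the metric itself is illustrative, not one of the paper's:

```javascript
// Generic course-usage metric from LMS log entries. Each entry is
// { course, user } -- a simplified, hypothetical log-line shape.
function actionsPerUser(logEntries) {
  const perCourse = {};
  for (const { course, user } of logEntries) {
    if (!perCourse[course]) {
      perCourse[course] = { actions: 0, users: new Set() };
    }
    perCourse[course].actions += 1;   // total actions in the course
    perCourse[course].users.add(user); // distinct users seen
  }
  const metric = {};
  for (const [course, { actions, users }] of Object.entries(perCourse)) {
    metric[course] = actions / users.size;  // average actions per user
  }
  return metric;
}
```

Ranking or classifying courses by such per-course numbers is the kind of step the paper's classification algorithms would then perform.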
Quan Liang Chen
This paper presents high-level functional Web components such as frames, framesets, and pivot tables, which conventional development environments for Web applications have not yet supported. Frameset Web components provide several editing facilities, such as adding, deleting, changing, and nesting framesets, to make it easier to develop Web applications that use frame facilities. Pivot table Web components sum up various kinds of data in two dimensions, greatly reducing the amount of code developers must write. The paper also describes the system that implements these high-level functional components as visual Web components. This system assists designers in the development of Web applications based on the page-transition framework, which models a Web application as a set of Web page transitions, and, by using visual Web components, makes it easier to write the processes to be executed when one Web page transfers to another.
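The pivot-table computation described (summing data over two dimensions) reduces to grouping rows by two keys and accumulating a value. A minimal sketch of that computation, with invented field names:

```javascript
// Minimal pivot-table computation: sum a value field over two
// dimensions (rowKey x colKey). Field names in the usage example
// below are invented for illustration.
function pivot(rows, rowKey, colKey, valueKey) {
  const table = {};
  for (const row of rows) {
    const r = row[rowKey];
    const c = row[colKey];
    if (!table[r]) table[r] = {};
    table[r][c] = (table[r][c] || 0) + row[valueKey];  // accumulate the cell
  }
  return table;
}

// Usage: pivot(sales, 'region', 'year', 'amount') groups sales rows
// into a region-by-year table of summed amounts.
```

A visual pivot-table component like the paper's would wrap this aggregation in rendering and editing facilities so developers never write the loop themselves.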
Umapathy, Karthikeyan; Wallace, F. Layne
Web applications have become commonplace in the Information Systems curriculum. Much of the discussion about Web development for capstone courses has centered on the scripting tools. Very little has been discussed about different ways to incorporate the Web server into Web application development courses. In this paper, three different ways of…
Changing consumer behavior drives the demand for convenient and easy-to-use mobile applications across industries. This also impacts the financial sector. Banks are eager to offer their services as mobile applications to match modern consumer needs. The mobile applications are not independently able to provide the required functionality; they interact with the existing core business functions by consuming secure Web Services over the Internet. The thesis analyses the...
We provide a description of work at the National Aeronautics and Space Administration (NASA) on building systems based on semantic-web concepts and technologies. NASA has been one of the early adopters of semantic-web technologies for practical applications. Indeed, there are several ongoing endeavors on building semantics-based systems for use in diverse NASA domains, ranging from collaborative scientific activity to accident and mishap investigation, enterprise search, scientific information gathering and integration, and aviation safety decision support. We provide a brief overview of many applications and ongoing work with the goal of informing the external community of these NASA endeavors.
The best way to learn anything is by doing. The author uses a friendly tone and fun examples to ensure that you learn the basics of application development. Once you have read this book, you should have the necessary skills to build your own applications. If you have no experience but want to learn how to create applications in HTML5, this book is the only help you'll need. Using practical examples, HTML5 Web Application Development by Example will develop your knowledge and confidence in application development.
If you are a Node.js developer who wants to take your Express skills to the next level and develop high performing, reliable web applications using best practices, this book is ideal for you. The only prerequisite is knowledge of Node.js.
Stanciulescu, Adrian; Vanderdonckt, Jean
The capabilities of multimodal applications running on the web are well delineated, since they are mainly constrained by what their underlying standard markup language offers, as opposed to hand-made multimodal applications. As experience in developing such multimodal web applications grows, the need arises to identify and define the major design options of such applications, to pave the way to a structured development life cycle. This paper provides a design space of independent design options for multimodal web applications based on graphical, vocal, and tactile modalities and their combinations. On the one hand, these design options may provide designers with explicit guidance on what to decide or not for their future user interface while exploring various design alternatives. On the other hand, these design options have been implemented as graph transformations performed on a user interface model represented as a graph. Thanks to a transformation engine, designers can play with the different values of each design option, preview the results of the transformation, and obtain the corresponding code on demand.
Thuraisingham, Bhavani; Clifton, Chris; Gupta, Amar; Bertino, Elisa; Ferrari, Elena
This paper provides directions for web and e-commerce applications security. In particular, access control policies, workflow security, XML security, and federated database security issues pertaining to web and e-commerce applications are discussed.
This paper discusses and proposes a framework and process, based on open source tools, that can be used to perform automated security regression testing of web applications.
Berger, Dillon Tanner
The purpose of this technical note is to give a brief explanation of the AWAKE Web Server, the current web applications it serves, and how to edit, maintain, and update the source code. The majority of this paper is dedicated to the development of the server and its web applications.
This paper presents an overview of the evaluation of risks and vulnerabilities in a web-based distributed application, emphasizing aspects of the security assessment process with regard to the audit field. In the audit process, an important activity is dedicated to measuring the characteristics taken into consideration for evaluation. From this point of view, the quality of the audit process depends on the quality of the assessment methods and techniques. By reviewing the fields involved in the research process, the approach reflects the main concerns affecting web-based distributed applications, using exploratory research techniques. The results show that many aspects must be handled carefully across a distributed system, and they can be revealed by an in-depth analysis of the information flow and internal processes that are part of the system. This paper reveals the limitations stemming from the absence of a unified security risk assessment model that could prevent the risks and vulnerabilities discussed. Based on such standardized models, secure web-based distributed applications can be easily audited, and many vulnerabilities that can appear due to the lack of access to information can be avoided.
Romaniuk, Ryszard S.
The XLth Wilga Summer 2017 Symposium on Photonics Applications and Web Engineering was held on 28 May-4 June 2017. The Symposium gathered over 350 participants, mainly young researchers active in optics, optoelectronics, photonics, modern optics, mechatronics, applied physics, and electronics technologies and applications. Around 300 oral and poster papers were presented in a few main topical tracks, which are traditional for Wilga, including: bio-photonics, optical sensory networks, photonics-electronics-mechatronics co-design and integration, large functional system design and maintenance, Internet of Things, measurement systems for astronomy, high energy physics experiments, and others. The paper is a traditional introduction to the 2017 WILGA Summer Symposium Proceedings and digests some of the Symposium's chosen key presentations. This year's Symposium was divided into the following topical sessions/conferences: Optics, Optoelectronics and Photonics; Computational and Artificial Intelligence; Biomedical Applications; Astronomical and High Energy Physics Experiments Applications; Material Research and Engineering; and Advanced Photonics and Electronics Applications in Research and Industry.
Electronic Commerce over the Internet aims to become a global conveyor belt of business transactions. Web applications of increasing sophistication emerge in almost every business sector, reflecting a variety of technical and technological approaches. In this paper we argue that system developers need to reconsider their professional practices in the context of these new technologies by taking advantage of opportunities like short response cycles and easy diffusion of system results, while recognising the limitations of traditional practice. We discuss a framework of IS development issues for Internet-based applications and propose guidelines towards new development practices.
Møller, Anders; Schwarz, Mathias Romme
Although numerous frameworks for web application programming have been developed in recent years, writing web applications remains a challenging task. Guided by a collection of classical design principles, we propose yet another framework. It is based on a simple but flexible server-oriented architecture that coherently supports general aspects of modern web applications, including dynamic XML construction, session management, data persistence, caching, and authentication, but it also simplifies programming of server-push communication and integration of XHTML-based applications and XML-based web services. The resulting framework provides a novel foundation for developing maintainable and secure web applications.
K.B. van der Vlist (Kevin)
Even though web application development is supported by professional tooling, debugging support is lacking. If one starts to debug a web application, hardly any tooling support exists. Only core components like server processes and a web browser are exposed. Developers need to
Weerd, I. van de
The development of complex, data-intensive web applications is becoming simpler due to the usage of content management systems. Conventional information systems development methods as well as web application development methods do not cover the specific needs of a method for web content
Symfony is a high performance PHP framework for developing MVC web applications. Symfony1 allowed for ease of use but its shortcoming was the difficulty of extending it. However, this difficulty has now been eradicated by the more powerful and extensible Symfony2. Information on more advanced techniques for extending Symfony can be difficult to find, so you need one resource that contains the advanced features in a way you can understand. This tutorial offers solutions to all your Symfony extension problems. You will get to grips with all the extension points that Symfony, Twig, and Doctrine o
Web engineering is the application of systematic and quantifiable approaches (concepts, methods, techniques, tools) to cost-effective requirements analysis, design, implementation, testing, operation, and maintenance of high quality web applications. Over the past years, Content Management Systems (CMS) have emerged as an important foundation for the web engineering process. CMS can be defined as a tool for the creation, editing and management of web information in an integral way. A CMS appe...
A digital library system contributes to the development of digital resources that can be accessed via the Internet. A library management system contributes to the automation of membership data processing, circulation, and cataloging. This thesis develops a new concept of the digital library system and library management system by integrating the two system architectures. The integration architecture is implemented by inserting library management system components into the digital library system architecture. Web application technology is required for these components in order to be integrated with the digital library system components. The new system makes the borrowing, membership, and cataloging functions sharable over the Internet, so the applications can be used together. Catalog information can be exchanged between libraries without losing the digital library's role in sharing digital resources uploaded by each librarian. Keywords: Digital library system; Library management system; Web application
Ahmed, M.A.; Van den Hoven, J.
Much of the literature on responsibility in the IT field addresses the responsibilities of members of the IT profession. In this paper, we investigate to what extent the responsibilities associated with computing practitioners apply to freelance web developers. The relevant moral question is not
Barenji Ali Vatankhah
This paper discusses an integration-driven framework that enables RFID-based identification of parts to perform robotic distribution operations in random-mix parts control via a web application. The RFID technology senses newly arriving parts so that the distribution robot is able to recognize them and perform cooperative distribution via a web-based application. The developed web application control system is implemented on an educational robotic arm. The RFID system sends real-time information about parts to the web application, the web-based application makes a control decision for the robot arm, and the robot controller controls the robot based on the decision from the web application. The proposed control system increases the reconfigurability and scalability of the robot system.
Carrazza, Stefano; Palazzo, Daniele; Rojo, Juan
We present APFEL Web, a web-based application designed to provide a flexible user-friendly tool for the graphical visualization of parton distribution functions (PDFs). In this note we describe the technical design of the APFEL Web application, motivating the choices and the framework used for the development of this project. We document the basic usage of APFEL Web and show how it can be used to provide useful input for a variety of collider phenomenological studies. Finally we provide some examples showing the output generated by the application.
The Internet has become a prominent platform for the deployment of computer applications. Web browsers are an important interface for e-mail, on-line shopping, and banking applications. Despite this popularity, the development of web applications is a difficult job due to their complex
Because web applications are complex software systems in constant evolution, they become real targets for hackers, as they provide direct access to corporate or personal data. Web application security is supposed to represent an essential priority for organizations in order to protect sensitive customer data, or that of a company's employees. Worldwide, many organizations report on the most common types of attacks on web applications and methods for their prevention. While the paper is an overview, it puts forward several typical examples of web application vulnerabilities that are due to programming errors; these may be used by attackers to take unauthorized control of computers.
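One of the typical programming-error vulnerabilities such overviews cover is SQL injection. The following Python sketch (an illustration, not code from the paper) contrasts a vulnerable string-concatenated query with a parameterized one:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin'), ('bob', 'user')")

def find_user_unsafe(name):
    # VULNERABLE: attacker-controlled input is concatenated into the SQL text
    return conn.execute(f"SELECT role FROM users WHERE name = '{name}'").fetchall()

def find_user_safe(name):
    # Parameterized query: the input is treated as data, never as SQL
    return conn.execute("SELECT role FROM users WHERE name = ?", (name,)).fetchall()

payload = "x' OR '1'='1"
print(find_user_unsafe(payload))  # every row comes back: the injection succeeds
print(find_user_safe(payload))    # []: the injection fails
```

The fix is purely a matter of discipline in the code, which is why such vulnerabilities are classed as programming errors rather than infrastructure flaws.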
Aaron M. French
Service Science is the basis of information systems and web services that ascribe to the provider/client model. This paper develops a methodology that can be used in the development of web services such as websites, web applications, and eCommerce. The goal is to develop a methodology that will add structure to a highly unstructured problem to assist in the development and success of web services. The new methodology proposed will be called the Web Development Life Cycle (WDLC) and adap...
Fatemeh Amoohosseini; Tahereh Aziminia
This paper analyzes the Web and raises a significant question: Does the Web save the time of the users? This question is analyzed in the context of Five Laws of the Web. What do these laws mean? The laws are meant to be elemental, to convey a deep understanding and capture the essential meaning of the World Wide Web. These laws may seem simplistic, but in fact they express a simple, crystal-clear vision of what the Web ought to be. Moreover, we intend to echo the simplicity of Ranganathan's Five Laws of Library Science which inspired them.
IBM WebSphere Application Server 8.0 Administration Guide is a highly practical, example-driven tutorial. You will be introduced to WebSphere Application Server 8.0, and guided through configuration, deployment, and tuning for optimum performance. If you are an administrator who wants to get up and running with IBM WebSphere Application Server 8.0, then this book is not to be missed. Experience with WebSphere and Java would be an advantage, but is not essential.
de Knikker, Remko; Guo, Youjun; Li, Jin-Long; Kwan, Albert K H; Yip, Kevin Y; Cheung, David W; Cheung, Kei-Hoi
Very often genome-wide data analysis requires the interoperation of multiple databases and analytic tools. A large number of genome databases and bioinformatics applications are available through the web, but it is difficult to automate interoperation because: 1) the platforms on which the applications run are heterogeneous, 2) their web interface is not machine-friendly, 3) they use a non-standard format for data input and output, 4) they do not exploit standards to define application interface and message exchange, and 5) existing protocols for remote messaging are often not firewall-friendly. To overcome these issues, web services have emerged as a standard XML-based model for message exchange between heterogeneous applications. Web services engines have been developed to manage the configuration and execution of a web services workflow. To demonstrate the benefit of using web services over traditional web interfaces, we compare the two implementations of HAPI, a gene expression analysis utility developed by the University of California San Diego (UCSD) that allows visual characterization of groups or clusters of genes based on the biomedical literature. This utility takes a set of microarray spot IDs as input and outputs a hierarchy of MeSH Keywords that correlates to the input and is grouped by Medical Subject Heading (MeSH) category. While the HTML output is easy for humans to visualize, it is difficult for computer applications to interpret semantically. To facilitate the capability of machine processing, we have created a workflow of three web services that replicates the HAPI functionality. These web services use document-style messages, which means that messages are encoded in an XML-based format. We compared three approaches to the implementation of an XML-based workflow: a hard coded Java application, Collaxa BPEL Server and Taverna Workbench. The Java program functions as a web services engine and interoperates with these web services using a web
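The document-style messaging that such a web services workflow relies on can be illustrated with a small Python sketch: the payload is a self-describing XML document rather than an HTML page. The element names below are hypothetical stand-ins, not HAPI's actual schema:

```python
import xml.etree.ElementTree as ET

def build_request(spot_ids):
    """Encode a list of microarray spot IDs as a document-style XML message."""
    root = ET.Element("AnalysisRequest")       # hypothetical element name
    for sid in spot_ids:
        ET.SubElement(root, "SpotID").text = sid
    return ET.tostring(root, encoding="unicode")

def parse_request(xml_text):
    """A receiving service can interpret the message semantically, unlike HTML."""
    root = ET.fromstring(xml_text)
    return [e.text for e in root.findall("SpotID")]

msg = build_request(["AA001", "AA002"])
print(msg)
print(parse_request(msg))  # ['AA001', 'AA002']
```

Because the structure is explicit, a workflow engine can route and transform such messages between services without any screen-scraping of HTML output.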
Van Deursen, A.; Mesbah, A.; Nederlof, A.
In this paper we review five years of research in the field of automated crawling and testing of web applications. We describe the open source Crawljax tool, and the various extensions that have been proposed in order to address such issues as cross-browser compatibility testing, web application
Lin, Sally; Second International Conference on Electronic Commerce, Web Application and Communication (ECWAC 2012)
ECWAC2012 is an integrated conference devoted to Electronic Commerce, Web Application and Communication. In these proceedings you can find the carefully reviewed scientific outcome of the second International Conference on Electronic Commerce, Web Application and Communication (ECWAC 2012), held on 17-18 March 2012 in Wuhan, China, bringing together researchers from all around the world in the field.
This fascinating book demonstrates how you can build web applications to mine the enormous amount of data created by people on the Internet. With the sophisticated algorithms in this book, you can write smart programs to access interesting datasets from other web sites, collect data from users of your own applications, and analyze and understand the data once you've found it.
Written as a practical, step-by-step tutorial, Creating HTML5 Apps with SproutCore is full of engaging examples to help you learn in a practical context. This book is for anyone looking to write software for the Web or already writing software for the Web. Whether your background is in web development or in software development, Creating HTML5 Apps with SproutCore will help you expand your skills so that you will be ready to apply software development principles in the web development space.
Mulone, Pablo Martin; Gordon, Richard
This is a cookbook and you may read the chapters in any order; the recipes need not be read sequentially. There is a good number of code examples and relevant screenshots to ease learning pains. The target audience is Python developers with basic knowledge of web2py who want to gain further knowledge of web2py.
Darwin, Peter Bacon
Nishi, Kentaro; Shintani, Toramatsu; Matsuo, Tokuro; Tashiro, Noriharu; Ito, Takayuki
The WWW has developed rapidly, and it is becoming easy to make personal web sites. In general, we create and edit web pages by using HTML authoring software or writing HTML source code in a text editor. Then we need to upload the pages to a web server. When we make and build our own web pages with existing tools, it takes a lot of time and effort to complete the necessary tasks. In this paper, we propose a home page authoring support system in which we directly edit and make web pages in a web browser. Current experimental results demonstrate that our system can effectively support novices in creating their web pages. We also show two real-world applications that effectively utilize our system.
Alin Zamfiroiu; Bogdan Vintila
Mobile applications are becoming increasingly used because of the multitude of existing mobile devices. Mobile application development is becoming more complex. For mobile devices there are native applications that run directly on the device, web applications accessed via mobile browsers, and hybrid applications. Mobile application development in any form should incorporate quality assurance from the moment the target group and the application architecture are determined. Management of mobile application...
Palmer, Grant; Arnold, James O. (Technical Monitor)
There are many advantages to developing web-based scientific applications. Any number of people can access the application concurrently. The application can be accessed from a remote location. The application becomes essentially platform-independent because it can be run from any machine that has internet access and can run a web browser. Maintenance and upgrades to the application are simplified since only one copy of the application exists in a centralized location. This paper details the creation of web-based applications using Java servlets. Java is a powerful, versatile programming language that is well suited to developing web-based programs. A Java servlet provides the interface between the central server and the remote client machines. The servlet accepts input data from the client, runs the application on the server, and sends the output back to the client machine. The type of servlet that supports the HTTP protocol will be discussed in depth. Among the topics the paper will discuss are how to write an HTTP servlet, how the servlet can run applications written in Java and other languages, and how to set up a Java web server. The entire process will be demonstrated by building a web-based application to compute stagnation point heat transfer.
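The request-handling pattern described (accept input from the client, run the computation on the server, send the output back) can be sketched with Python's standard library as an analogue of the Java servlet. The model function is a placeholder, not the paper's heat-transfer code:

```python
from http.server import BaseHTTPRequestHandler
from urllib.parse import urlparse, parse_qs

def run_model(velocity):
    # Placeholder computation standing in for the scientific code the
    # servlet would invoke server-side; the real model is not shown here.
    return velocity ** 2

class ComputeHandler(BaseHTTPRequestHandler):
    """Servlet-style handler: parse client input, compute on the server,
    and return the result to the remote client."""
    def do_GET(self):
        params = parse_qs(urlparse(self.path).query)
        result = run_model(float(params["velocity"][0]))
        body = str(result).encode()
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(body)

# The computation is exercised directly here, without starting a server:
print(run_model(3.0))  # 9.0
```

Serving `ComputeHandler` with `http.server.HTTPServer` would make the computation reachable from any machine with a browser, which is exactly the platform-independence benefit the paper highlights.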
Hampton, J.; Simons, R.
This document describes the application design philosophy for the Comprehensive Nuclear Test Ban Treaty Research & Development Web Site. This design incorporates object-oriented techniques to produce a flexible and maintainable system of applications that support the web site. These techniques will be discussed at length along with the issues they address. The overall structure of the applications and their relationships with one another will also be described. The current problems and future design changes will be discussed as well.
Doru E. TILIUTE
Web services are a technological solution for software interoperability that supports the seamless integration of diverse applications. In the vision of web service architecture, web services are described by the Web Service Description Language (WSDL), discovered through Universal Description, Discovery and Integration (UDDI), and communicate via the Simple Object Access Protocol (SOAP). Such a vision has never been fully accomplished. Although WSDL has been criticized for giving only a syntactic, not semantic, definition of web services, prior initiatives in semantic web services did not establish a correct methodology to resolve the problem. This paper examines the distinction and relationship between the syntactic and semantic definitions of web services, which serve different purposes in service computation. Further, this paper proposes that the semantics of a web service are neutral and independent of the service interface definition, data types, and platform. Such a conclusion can be a universal law in software engineering and service computing. Several use cases in GIScience applications are examined in this paper, while the formalization of geospatial services needs to be constructed by the GIScience community towards a comprehensive ontology of the conceptual definitions and relationships for geospatial computation. Advancements in semantic web services research will happen in domain science applications.
This study aims to determine the benefits of a web application in improving the efficiency and effectiveness of services to lecturers. The research method consists of a literature study and analysis of data collected through observation. After implementing the web application, an observation is conducted and the results are compared with data gathered prior to the implementation. The evaluation results show that implementing a web application improves efficiency and effectiveness in the use of time and resources when providing information-access services to lecturers.
Scripting languages require the use of high-level library functions to implement efficient image processing; thus, real-time image blur in web-based applications is a challenging task unless specific library functions are available for this purpose. We present a pyramid blur algorithm, which can be implemented using a subimage copy function, and evaluate its performance with various web browsers in comparison to an infinite impulse response filter. While this pyramid algorithm was first proposed for GPU-based image processing, its applicability to web-based applications indicates that some GPU...
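A pyramid blur of the kind described works by repeatedly downsampling and then upsampling an image, which averages neighbouring pixels at each level. A minimal pure-Python sketch, assuming square images with power-of-two dimensions (the paper's browser-based subimage-copy implementation is not reproduced here):

```python
def downsample(img):
    """One pyramid level down: halve each dimension by averaging 2x2 blocks."""
    h, w = len(img), len(img[0])
    return [[(img[2*r][2*c] + img[2*r][2*c+1] +
              img[2*r+1][2*c] + img[2*r+1][2*c+1]) / 4.0
             for c in range(w // 2)] for r in range(h // 2)]

def upsample(img):
    """One pyramid level up: double each dimension by pixel replication."""
    out = []
    for row in img:
        wide = [v for v in row for _ in (0, 1)]
        out.append(wide)
        out.append(list(wide))
    return out

def pyramid_blur(img, levels=1):
    """Blur by going `levels` down the pyramid and back up again."""
    for _ in range(levels):
        img = downsample(img)
    for _ in range(levels):
        img = upsample(img)
    return img

img = [[0, 0, 8, 8],
       [0, 0, 8, 8],
       [8, 8, 0, 0],
       [8, 8, 0, 0]]
print(pyramid_blur(img, levels=2))  # every pixel averaged to 4.0
```

The appeal for web use is that both steps map onto a scaled subimage copy, an operation browsers accelerate, avoiding per-pixel work in the scripting language.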
The highly successful security book returns with a new edition, completely updated. Web applications are the front door to most organizations, exposing them to attacks that may disclose personal information, execute fraudulent transactions, or compromise ordinary users. This practical book has been completely updated and revised to discuss the latest step-by-step techniques for attacking and defending the range of ever-evolving web applications. You'll explore the various new technologies employed in web applications that have appeared since the first edition and review the new attack technique
Babar, Shahzad; Mehmood, Aamer
Web accessibility emerged as a problem when disabled and elderly people started interacting with web content soon after the inception of the World Wide Web. When web-based GIS applications appeared on the web and the number of users of such applications increased, these applications faced a similar accessibility problem. The intensity of web accessibility problems in GIS-based applications has increased rapidly in recent years due to extensive user interaction with maps. Web Accessi...
Fluit, Christiaan; Sabou, Marta; Harmelen, Frank van
The Semantic Web is an extension of the current World Wide Web, based on the idea of exchanging information with explicit, formal, and machine-accessible descriptions of meaning. Providing information with such semantics will enable the construction of applications that have an increased awareness
Teerling, M.L.; Huizingh, Eelko K.R.E.
While mass customization is the tailoring of products and services to the needs and wants of individual customers, web site customization is the tailoring of web sites to individual customers’ preferences. Based on a review of site customization applications, the authors propose a model with four
Alternative browsers are gaining significant market share, and both Apple and Microsoft are releasing OS upgrades which portend some interesting changes in Web development. Of particular interest for language learning professionals may be new developments in the area of Web browser based applications, particularly using an approach dubbed "Ajax."…
Wei, L; Sengupta, S
In health care systems, users may access multiple applications during one session of interaction with the system. However, users must sign on to each application individually, and it is difficult to maintain a common context among these applications. We are developing a session management system for web-based applications using the LDAP directory service, which will allow single sign-on to multiple web-based applications and maintain a common context among those applications for the user. This paper discusses the motivations for building this system, the system architecture, and the challenges of our approach, such as managing session objects for the user and session security.
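The single sign-on idea can be sketched as a shared session store keyed by a sign-on token. This simplified in-memory Python analogue stands in for the LDAP-backed system the paper describes; the names and the sample context are hypothetical:

```python
import secrets
import time

class SessionManager:
    """Shared session store: one sign-on token grants access to several
    applications and carries a common context between them."""
    def __init__(self, ttl=3600):
        self.ttl = ttl
        self.sessions = {}

    def sign_on(self, user):
        token = secrets.token_hex(16)
        self.sessions[token] = {"user": user, "context": {}, "t": time.time()}
        return token

    def get(self, token):
        session = self.sessions.get(token)
        if session and time.time() - session["t"] < self.ttl:
            return session
        self.sessions.pop(token, None)   # expire stale or unknown tokens
        return None

mgr = SessionManager()
token = mgr.sign_on("dr_smith")
# Application A stores shared context; application B reads it with the same token
mgr.get(token)["context"]["patient_id"] = "P123"
print(mgr.get(token)["context"]["patient_id"])  # P123
```

In the paper's setting the store would live in an LDAP directory rather than process memory, so every web application in the health care system sees the same session objects.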
This paper presents the RMatlab-app2web tool, which enables the use of R or MATLAB scripts as CGI programs for generating dynamic web content. RMatlab-app2web is highly adjustable and can be run on both Windows and Unix-like systems. CGI scripts written in PHP take information entered on web-based forms in the client browser, pass it to R or MATLAB on the server, and display the output in the client browser. Depending on the server's requirements, the data transfer procedure can use either the GET or the POST routine. The application allows calling R or MATLAB to run previously written scripts; it does not allow running completely flexible user code. We run a multivariate OLS regression to demonstrate the use of the RMatlab-app2web tool.
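The GET/POST form handling that such a gateway performs can be sketched in Python. The parameter names and the whitelisted "mean" script below are hypothetical stand-ins for the tool's previously written R/MATLAB scripts:

```python
from urllib.parse import parse_qs

# Whitelist of previously written scripts the gateway may call; arbitrary
# user code is deliberately not executable, mirroring the tool's design.
SCRIPTS = {
    "mean": lambda vals: sum(vals) / len(vals),
}

def handle_request(method, query_string="", body=""):
    """Parse form data from GET (query string) or POST (request body)
    and dispatch to a registered script."""
    raw = query_string if method == "GET" else body
    params = parse_qs(raw)
    script = SCRIPTS[params["script"][0]]
    values = [float(v) for v in params["data"][0].split(",")]
    return script(values)

print(handle_request("GET", query_string="script=mean&data=1,2,3"))  # 2.0
print(handle_request("POST", body="script=mean&data=4,6"))           # 5.0
```

The only difference between the two routines is where the encoded parameters travel: GET carries them in the URL, POST in the request body, which keeps larger inputs out of the URL.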
Mariette, Jérôme; Escudié, Frédéric; Bardou, Philippe; Nabihoudine, Ibouniyamine; Noirot, Céline; Trotard, Marie-Stéphane; Gaspin, Christine; Klopp, Christophe
.... With Jflow, we introduce a Workflow Management System (WMS), composed of jQuery plug-ins which can easily be embedded in any web application and a Python library providing all requested features to setup, run and monitor workflows...
Heiderich, Mario; Heyes, Gareth; Lindsay, David
Web applications are used every day by millions of users, which is why they are one of the most popular vectors for attackers. Obfuscation of code has allowed hackers to take one attack and create hundreds, if not millions, of variants that can evade your security measures. Web Application Obfuscation takes a look at common web infrastructure and security controls from an attacker's perspective, allowing the reader to understand the shortcomings of their security systems. Find out how an attacker would bypass different types of security controls, and how these very security controls introduce new ty
Liviu Adrian COTFAS
Full Text Available Web service composition allows the development of easily reconfigurable applications that can be quickly adapted to business changes. Due to the shift in paradigm from traditional systems, new approaches are needed in order to evaluate the reliability of web service composition applications. In this paper we present an approach based on intelligent agents for semi-automatic composition, as well as methods for assessing reliability. Abstract web services, each corresponding to a group of services that accomplishes a specific functionality, are used as a means of ensuring better system reliability. The model can be extended with other Quality of Service (QoS) attributes.
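A minimal sketch of the reliability idea behind abstract services, under the standard assumption of independent failures (the figures are illustrative, not from the paper): a sequential composition fails if any step fails, while an abstract service backed by several concrete services fails only if all of them do.

```python
from math import prod

def sequential_reliability(reliabilities):
    """A sequential composition fails if any service in the chain fails."""
    return prod(reliabilities)

def redundant_reliability(reliabilities):
    """An abstract service backed by several concrete services
    fails only if all of them fail."""
    return 1 - prod(1 - r for r in reliabilities)

# Grouping two 0.9-reliable services behind one abstract service
# raises that step to 0.99, lifting the whole chain's reliability.
print(round(sequential_reliability([0.9, 0.95]), 4))
print(round(sequential_reliability([redundant_reliability([0.9, 0.9]), 0.95]), 4))
```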
Hasan, M. R.; Ibrahimy, M. I.; Motakabber, S. M. A.; Ferdaus, M. M.; Khan, M. N. H.; Mostafa, M. G.
The paper describes a technique for developing a web-based financial system that follows the latest technology and business needs. In the development of a web-based application, user friendliness and technology are both very important. The ASP.NET MVC 4 platform and SQL Server 2008 were used for the development of the web-based financial system, and the entry system and report monitoring of the application are shown to be user friendly. This paper also highlights critical situations in development, which will help in developing a quality product.
This book follows a standard tutorial-based approach which will teach you how to make a web app using R and Shiny quickly and easily. This book is for anybody who wants to produce interactive data summaries over the Web, whether you want to share them with a few colleagues or the whole world. You need no previous experience with R, Shiny, HTML, or CSS to begin using this book, although you will need at least a little previous experience with programming in a different language.
Lopes, Pedro; Oliveira, José Luís
As the "omics" revolution unfolds, the growth in data quantity and diversity is bringing about the need for pioneering bioinformatics software, capable of significantly improving the research workflow. To cope with these computer science demands, biomedical software engineers are adopting emerging semantic web technologies that better suit the life sciences domain. The latter's complex relationships are easily mapped into semantic web graphs, enabling a superior understanding of collected knowledge. Despite increased awareness of semantic web technologies in bioinformatics, their use is still limited. COEUS is a new semantic web framework, aiming at a streamlined application development cycle and following a "semantic web in a box" approach. The framework provides a single package including advanced data integration and triplification tools, base ontologies, a web-oriented engine and a flexible exploration API. Resources can be integrated from heterogeneous sources, including CSV and XML files or SQL and SPARQL query results, and mapped directly to one or more ontologies. Advanced interoperability features include REST services, a SPARQL endpoint and LinkedData publication. These enable the creation of multiple applications for web, desktop or mobile environments, and empower a new knowledge federation layer. The platform, targeted at biomedical application developers, provides a complete skeleton ready for rapid application deployment, enhancing the creation of new semantic information systems. COEUS is available as open source at http://bioinformatics.ua.pt/coeus/.
This book is a step-by-step, practical tutorial with a simple approach to help you build RESTful web applications and services on the .NET framework quickly and efficiently. This book is for ASP.NET web developers who want to explore REST-based services with C# 5. This book contains many real-world code examples with explanations whenever necessary. Some experience with C# and ASP.NET 4 is expected.
Tracy, Fran; Jordan, Katy
This paper draws upon the experience of an interdisciplinary research group in engaging undergraduate university students in the design and development of semantic web technologies. A flexible approach to participatory design challenged conventional distinctions between "designer" and "user" and allowed students to play a role…
Kim, Ji-Hyeon; Jung, Jae-Cheon; Chang, Young-Woo; Chang, Hoon-Seon; Kim, Jae-Cheol; Kim, Hang-Bae [Korea Power Engineering Company, Daejeon (Korea, Republic of); Kim, Kyu-Ho; Lee, Dong-Chul [Korea Electric Power Data Network, Daejeon (Korea, Republic of)
The purpose of Enterprise Application Integration (EAI) is to enable interoperability between two or more enterprise software systems, for example an Enterprise Resource Planning (ERP) system, an Enterprise Asset Management (EAM) system, or a condition monitoring system. The traditional EAI approach, based on point-to-point connections, is expensive and vendor specific, with limited modules and restricted interoperability with other ERPs and applications. To overcome these drawbacks, Web Service based EAI has emerged, allowing integration without point-to-point links and at lower cost. Many Web service based EAI approaches use ORACLE, SAP, PeopleSoft, WebSphere, SIEBEL, etc. as the system integration platform. This approach still has the restriction that only predefined clients can access the services: clients must know the exact protocol for calling the services, and without this access information they can never obtain them, because these Web services are based on syntactic service descriptions. In this paper, a semantics-based EAI approach that allows uninformed clients to access the services is introduced. The semantic EAI is designed with Web services that have semantic service descriptions. The Semantic Web Services (SWS) are described in the Web Ontology Language for Services (OWL-S), a semantic service ontology language, and advertised in Universal Description, Discovery and Integration (UDDI). Clients find desired services through the UDDI and obtain them from service providers through the Web Service Description Language (WSDL).
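A toy sketch of why semantic descriptions help uninformed clients, assuming a tiny is-a hierarchy standing in for the OWL-S ontology (all service and concept names are invented): discovery by ontological subsumption finds a service whose advertised output is a subclass of what the client asked for, which purely syntactic name matching would miss.

```python
# Toy is-a hierarchy standing in for a service ontology
SUBCLASS = {
    "WorkOrderReport": "Report",
    "Report": "Document",
}

def subsumes(general, specific):
    """True if `specific` is the same as, or a subclass of, `general`."""
    while specific is not None:
        if specific == general:
            return True
        specific = SUBCLASS.get(specific)
    return False

def discover(services, requested_output):
    """Semantic discovery: a client that only knows it wants a
    'Document' still finds a service advertising 'WorkOrderReport'."""
    return [name for name, output in services.items()
            if subsumes(requested_output, output)]

services = {"EAMReportService": "WorkOrderReport", "ERPStockService": "StockLevel"}
print(discover(services, "Document"))
```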
The U.S. Department of Energy (DOE) requires all employees who hold a security clearance and have access to classified information and/or special nuclear material to be trained in the area of Safeguards and Security. Since the advent of the World Wide Web, personnel responsible for training have capitalized on this communication medium to develop and deliver Web-based training. Unlike traditional computer-based training, where the student was required to find a workstation where the training program resided, one of Web-based training's strongest advantages is that the training can be delivered right to the worker's desktop computer. This paper addresses the driving forces behind the utilization of Web-based training at the Laboratory, with a brief explanation of the different types of training conducted. Also briefly discussed are the different types of distance learning used in conjunction with Web-based training. The implementation strategy is addressed, along with how the Laboratory utilized a Web-Based Standards Committee to develop standards for Web-based training applications. Problems resulting from little or no communication between training personnel across the Laboratory are touched on, as well as how they were solved. Also discussed is the development of a "Virtual Training Center" where personnel can shop on-line for their training needs. Web-based training programs within the Safeguards and Security arena are briefly discussed; specifically, Web-based training in the area of Materials Control and Accountability is explored. An example of what a student would experience during a Web-based training session is also discussed. A short closing statement on what the future holds for Web-based training is offered.
Web applications have become the most popular medium on the Internet. Their popularity, the ease of web application scripting languages and frameworks, and careless development together result in a high number of web application vulnerabilities and a high number of attacks. Several types of attacks are possible because of improper input validation: SQL injection, cross-site scripting, cross-site request forgery (CSRF), web spam in blogs, and others. To secure web applications, intrusion detection systems (IDS) and intrusion prevention systems (IPS) are used. Intrusion detection systems are divided into two groups: misuse detection (traditional IDS) and anomaly detection. This paper presents a data mining based algorithm for anomaly detection. The principle of this method is to compare the incoming HTTP traffic with a previously built profile that contains a representation of the "normal" or expected web application usage sequence patterns. The frequent sequence patterns are found with the GSP algorithm. A previously presented detection method was rewritten and improved. Tests show that the software catches malicious requests, especially long attack sequences; results are quite good for medium-length sequences, while for short sequences the method must be complemented with others.
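The detection principle can be sketched as follows (a simplified stand-in for the paper's method; the profile patterns are invented): GSP-style patterns are subsequences with gaps allowed, and a session matching too few of the mined normal patterns is flagged as anomalous.

```python
def is_subsequence(pattern, session):
    """True if `pattern` occurs in `session` in order, gaps allowed --
    the containment notion used by GSP-style sequence mining."""
    it = iter(session)
    return all(item in it for item in pattern)

def is_anomalous(session, normal_patterns, min_matches=1):
    """Flag a session whose request sequence matches fewer than
    `min_matches` of the mined normal usage patterns."""
    matches = sum(is_subsequence(p, session) for p in normal_patterns)
    return matches < min_matches

profile = [("/login", "/account"), ("/search", "/item", "/cart")]
print(is_anomalous(["/login", "/help", "/account"], profile))  # normal usage
print(is_anomalous(["/admin.php", "/etc/passwd"], profile))    # probing
```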
Asish Kumar Dalai; Sanjay Kumar Jena
Reports on web application security risks show that SQL injection is the top most vulnerability. The journey of static to dynamic web pages leads to the use of database in web applications. Due to the lack of secure coding techniques, SQL injection vulnerability prevails in a large set of web applications. A successful SQL injection attack imposes a serious threat to the database, web application, and the entire web server. In this article, the authors have proposed a novel method for prevent...
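The core of the vulnerability, and the standard prevention, can be shown with Python's sqlite3 module (a generic illustration, not the authors' proposed method): string concatenation lets a payload rewrite the query, while a parameterized query treats the payload as a plain value.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

malicious = "x' OR '1'='1"

# Vulnerable: concatenation lets the payload rewrite the WHERE clause
unsafe = conn.execute(
    "SELECT * FROM users WHERE name = '" + malicious + "'").fetchall()

# Safe: a parameterized query treats the payload as a literal value
safe = conn.execute(
    "SELECT * FROM users WHERE name = ?", (malicious,)).fetchall()

print(len(unsafe))  # 1 -- injection succeeded, the row leaked
print(len(safe))    # 0 -- the payload matched nothing
```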
The main goal of this diploma thesis is a presentation of the development of the Autocommerce web project. All technologies used are described. On the basis of the project, we also present agile methodologies for software development, which help us efficiently build the whole information system from the idea to the first version. Such methodologies are mostly used by startups, because they are well suited to building and testing products in a very short time and to adapting to the feedback provided from...
Full Text Available Web applications are becoming more and more complex. Testing such applications is an intricate, hard, and time-consuming activity; therefore, testing is often poorly performed or skipped by practitioners. Test automation can help to avoid this situation. Hence, this paper presents a novel approach to perform automated software testing for web applications based on their navigation. On the one hand, web navigation is the process of traversing a web application using a browser. On the other hand, functional requirements are actions that an application must do. Therefore, the evaluation of the correct navigation of web applications results in the assessment of the specified functional requirements. The proposed method performs the automation at four levels: test case generation, test data derivation, test case execution, and test case reporting. This method is driven by three kinds of inputs: (i) UML models; (ii) Selenium scripts; (iii) XML files. We have implemented our approach in an open-source testing framework named Automatic Testing Platform. The validation of this work has been carried out by means of a case study in which the target is a real invoice management system developed using a model-driven approach.
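The test case generation level can be sketched by treating the navigation model as a graph whose entry-to-target paths become click sequences (the page names are hypothetical; the paper's UML/Selenium/XML pipeline is far richer):

```python
def generate_test_cases(nav, start, end, max_len=6):
    """Derive test cases from a navigation model: every path from the
    entry page to the target page becomes one test-case click sequence."""
    paths, stack = [], [[start]]
    while stack:
        path = stack.pop()
        if path[-1] == end:
            paths.append(path)
            continue
        if len(path) >= max_len:
            continue
        for nxt in nav.get(path[-1], []):
            if nxt not in path:        # avoid revisiting pages
                stack.append(path + [nxt])
    return paths

# Hypothetical navigation model of a small invoice application
nav = {"login": ["invoices"], "invoices": ["new_invoice", "detail"],
       "new_invoice": ["invoices"], "detail": ["pdf"]}
for case in generate_test_cases(nav, "login", "pdf"):
    print(" -> ".join(case))
```

Each generated sequence would then be bound to test data and replayed in a browser, which corresponds to the derivation and execution levels of the method.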
Wang, Xin; Ye, Yan; Qi, Jiahui; Wu, Min
Due to restrictions of the HTTP protocol, traditional real-time web applications cannot push information from the server to the browser. Although this can be achieved through technical workarounds, they have obvious shortcomings. This paper introduces the design and server-side development of an English testing system based on the WebSocket protocol. Through WebSocket, a bidirectional communication channel can be established between the browser and the server, realizing real-time communication in the English testing system.
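The upgrade handshake that establishes the WebSocket channel can be illustrated with the server-side accept-key computation defined by RFC 6455 (a generic protocol detail, not specific to the paper's system):

```python
import base64, hashlib

# Fixed GUID from RFC 6455, the WebSocket protocol specification
WS_GUID = "258EAFA5-E914-47DA-95CA-C5AB0DC85B11"

def websocket_accept(client_key):
    """Compute the Sec-WebSocket-Accept value the server must return
    during the HTTP upgrade handshake; after this exchange the
    connection becomes a bidirectional message channel."""
    digest = hashlib.sha1((client_key + WS_GUID).encode()).digest()
    return base64.b64encode(digest).decode()

# Example handshake key from RFC 6455, section 1.3
print(websocket_accept("dGhlIHNhbXBsZSBub25jZQ=="))
# -> s3pPLMBiTxaQ9kYGzzhZRbK+xOo=
```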
This work presents the idea and the realization of a web application for monitoring the operation of a mainframe computer, servers running the Linux operating system, and application servers. The web application is intended for the administrators of these systems, as an aid to better understanding the current state, load, and operation of the individual components of the server systems.
Zhou, W.; Pierre, G.E.O.; Chi, C.-H.
Cloud Computing platforms provide scalability and high availability properties for web applications but they sacrifice data consistency at the same time. However, many applications cannot afford any data inconsistency. We present a scalable transaction manager for NoSQL cloud database services to
This report describes the modifications to CERN's web application for electron and laser beam technologies. There are updates to both the front end and the back end of the application. New electron and laser machines were added, and old machines were updated. There is also a new feature for printing needed information.
Patel, Sandeep Kumar
This book is a standard tutorial for web application developers presented in a comprehensive, step-by-step manner to explain the nuances involved. It has an abundance of code and examples supporting explanations of each feature. This book is intended for Java developers wanting to create rich and responsive applications using AJAX. Basic experience of using jQuery is assumed.
Hariadi, Bambang; Dewiyani Sunarto, M. J.; Sudarmaningtyas, Pantjawati
This study aimed to develop a web-based learning application as a form of learning revolution. The form of learning revolution includes the provision of unlimited teaching materials, real time class organization, and is not limited by time or place. The implementation of this application is in the form of hybrid learning by using Google Apps for…
M. Di Benedetto
Ngu, Phuc Huy
The explosion of mobile applications in both number and variety raises the need to shed light on their architecture, composition, and quality. Indeed, it is crucial to understand which mobile application paradigm fits best which type of application and usage. Such understanding has direct consequences for the user experience, the development cost, and the sales revenues of mobile apps. In this thesis, we identify four main mobile application paradigms and evaluate them from the viewpoints of ...
Full Text Available Various studies have been carried out since 2005 under the leadership of the Ministry of Environment and Urbanism of Turkey in order to observe the quality of air in Turkey, to develop new policies, and to develop a sustainable air quality management strategy. For this reason, a national air quality monitoring network providing air quality indices has been developed. Through this network, the quality of the air is continuously monitored, and an important information system has been constructed in order to take precautions against dangerous situations. The biggest handicap of the network is the problem of data access for instant and time-series data acquisition and processing, because of its proprietary structure. Currently, the air quality monitoring system offers no service for exchanging information with third-party applications. Within the context of this work, a web service has been developed to enable location-based querying of current and past air quality data in Turkey. This web service is built with up-to-date and widely preferred technologies; in other words, an architecture was chosen with which applications can easily integrate. In the second phase of the study, a web-based application was developed to test the web service; this testing application can perform location-based acquisition of air quality data. This makes it possible to easily carry out operations, such as screening and examination of an area in a given time frame, which cannot be done with the national monitoring network.
Şahin, K.; Işıkdağ, U.
Mao, Jian; Chen, Yue; Shi, Futian; Jia, Yaoqi; Liang, Zhenkai
Web applications have become the foundation of many types of systems, ranging from cloud services to Internet of Things (IoT) systems. Due to the large amount of sensitive data processed by web applications, user privacy emerges as a major concern in web security. Existing protection mechanisms in modern browsers, e.g., the same origin policy, prevent the users' browsing information on one website from being directly accessed by another website. However, web applications executed in the same browser share the same runtime environment. Such shared states provide side channels for malicious websites to indirectly figure out the information of other origins. Timing is a classic side channel and the root cause of many recent attacks, which rely on the variations in the time taken by the systems to process different inputs. In this paper, we propose an approach to expose the timing-based probing attacks in web applications. It monitors the browser behaviors and identifies anomalous timing behaviors to detect browser probing attacks. We have prototyped our system in the Google Chrome browser and evaluated the effectiveness of our approach by using known probing techniques. We have applied our approach on a large number of top Alexa sites and reported the suspicious behavior patterns with corresponding analysis results. Our theoretical analysis illustrates that the effectiveness of the timing-based probing attacks is dramatically limited by our approach.
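A much simplified stand-in for the anomaly detector described above: flag timing measurements whose robust z-score (based on the median absolute deviation) is large, as a burst of near-identical probes containing one revealing outlier might indicate (the numbers are illustrative, not from the paper's evaluation):

```python
from statistics import median

def anomalous_timings(samples_ms, threshold=3.5):
    """Flag timing measurements far from the typical behaviour using a
    robust z-score (median / median absolute deviation)."""
    med = median(samples_ms)
    mad = median(abs(x - med) for x in samples_ms) or 1e-9
    return [x for x in samples_ms if 0.6745 * abs(x - med) / mad > threshold]

# Mostly uniform cache probes, plus one slow outlier that could leak
# whether a cross-origin resource was cached
print(anomalous_timings([2.1, 2.0, 2.2, 1.9, 2.1, 48.0]))
```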
Bebo White is a Departmental Associate (retired) at SLAC and has spent considerable time at CERN. In addition, he holds faculty appointments at Hong Kong University, the University of San Francisco, and Contra Costa College. He is a frequent speaker at conferences, academic institutions, and for commercial organizations around the world. Bebo has been a member of the International World Wide Web Conference Committee (IW3C2) since 1996 and in that time has served as General Co-Chair of two of the conferences ...
Prandini, Marco; Faldella, Eugenio; Laschi, Roberto
"Hosting" represents a commonplace solution for the low-cost implementation of web sites through the efficient sharing of the resources of a single server. The arising security problems, however, are not always easily dealt with under the Discretionary Access Control model implemented by traditional operating systems. More robust separation between the hosted sites, as well as more robust protection of the host system, can be attained by exploiting the features typical of Mandatory Access Control systems. Recently, these systems have recently been made available to the vast Linux community through projects like SELinux and grsecurity. This paper describes the architecture of a secure hosting server, integrating SELinux functionalities into the Apache/PHP platform, designed with the goal of increasing security without adding administrative burdens or impacting performance.
Full Text Available This paper revisits the debate concerning which development environment should be used to teach server-side Web Application Development courses to undergraduate students. In 2002, following an industry-based survey of Web developers, a decision was made to adopt an open source platform consisting of PHP and MySQL rather than a Microsoft platform utilising Access and Active Server Pages. Since that date there have been a number of significant changes within the computing industry that suggest that it is perhaps appropriate to revisit the original decision. This paper investigates expert opinion by reviewing current literature regarding web development environments, looks at the results of a survey of web development companies, and examines current employment trends in the web development area. The paper concludes by examining the impact of a decision to change the development environment used to teach Web Application Development to a third-year computing degree class and describes the impact on course delivery that the change has brought about.
Full Text Available This paper proposes a relational constraint driven technique that synthesizes test cases automatically for web applications. Using a static analysis, servlets can be modeled as relational transducers, which manipulate backend databases. We present a synthesis algorithm that generates a sequence of HTTP requests for simulating a user session. The algorithm relies on backward symbolic image computation for reaching a certain database state, given a code coverage objective. With a slight adaptation, the technique can be used for discovering workflow attacks on web applications.
BABINCEV IVAN M.; VULETIC DEJAN V.
The Kali Linux operating system is described, as well as its purpose and capabilities. The groups of tools that Kali Linux provides are listed together with their methods of functioning, as well as the possibility of installing and using tools that are not an integral part of Kali. The final part shows practical testing of web applications using tools from the Kali Linux operating system. The paper thus shows part of the possibilities of this operating system in analysing web applications ...
Lanham, Clifton; Kallner, Shawn; Gernand, Jeffrey
A Web application facilitates collaborative development of the ground operations planning document. This will reduce costs and development time for new programs by incorporating the data governance, access control, and revision tracking of the ground operations planning data. Ground Operations Planning requires the creation and maintenance of detailed timelines and documentation. The GOPDb Web application was created using state-of-the-art Web 2.0 technologies, and was deployed as SaaS (Software as a Service), with an emphasis on data governance and security needs. Application access is managed using two-factor authentication, with data write permissions tied to user roles and responsibilities. Multiple instances of the application can be deployed on a Web server to meet the robust needs for multiple, future programs with minimal additional cost. This innovation features high availability and scalability, with no additional software that needs to be bought or installed. For data governance and security (data quality, management, business process management, and risk management for data handling), the software uses NAMS. No local copy/cloning of data is permitted. Data change log/tracking is addressed, as well as collaboration, work flow, and process standardization. The software provides on-line documentation and detailed Web-based help. There are multiple ways that this software can be deployed on a Web server to meet ground operations planning needs for future programs. The software could be used to support commercial crew ground operations planning, as well as commercial payload/satellite ground operations planning. The application source code and database schema are owned by NASA.
Feingold, Eric R.; Grevera, George J.; Mezrich, Reuben S.; Horii, Steven C.; Khalsa, Satjeet S.; Phan, Le
Guntram Graef; Martin Gaedke
The lifecycle of Web-based applications is characterized by frequent changes to content, user interface, and functionality. Updating content and improving the services provided to users drive further development of a Web-based application; its evolution therefore becomes the major goal for its success. However, development and maintenance of Web-based applications suffer from the underlying document-based implementation model. A disciplined evolution of Web-based app...
Sports and technology have always co-operated to bring better and more specific sports statistics. The collection of sports game data as well as the ability to generate valuable sports statistics of it is growing. This thesis investigates the development of a sports statistics application that should be able to collect sports game data, structure the data according to suitable data models and show statistics in a proper way. The application was set to be a web application that was developed u...
Ardizzone, Valeria; Bruno, Riccardo; Calanducci, Antonio; Carrubba, Carla; Fargetta, Marco; Ingrà, Elisa; Inserra, Giuseppina; La Rocca, Giuseppe; Monforte, Salvatore; Pistagna, Fabrizio; Ricceri, Rita; Rotondo, Riccardo; Scardaci, Diego; Barbera, Roberto
In this paper we present the architecture of a framework for building Science Gateways supporting official standards both for user authentication and authorization and for middleware-independent job and data management. Two use cases of the customization of the Science Gateway framework for Semantic-Web-based life science applications are also described.
van Zwol, Roelof; Fokkinga, M.M.; Jeronimus, V.; Jeronimus, V.N.; Apers, Peter M.G.; Lacroix, Z.
With large volumes of data being exchanged on the Internet, query languages are needed to bridge the gap between databases and the web. Furthermore, the differentiation in data types used by web-based applications is ever growing, despite all standardization efforts. The Data eXchange Language (DXL)
Jerzy Nogiec; Kelley Trombly-Freytag; Dana Walbridge
Although many general-purpose frameworks have been developed to aid in web application development, they typically tend to be both comprehensive and complex. To address this problem, a specialized server-side Java framework designed specifically for data retrieval and visualization has been developed. The framework's focus is on maintainability and data security. The functionality is rich with features necessary for simplifying data display design, deployment, user management and application debugging, yet the scope is deliberately kept limited to allow for easy comprehension and rapid application development. The system clearly decouples the application processing and visualization, which in turn allows for clean separation of layout and processing development. Duplication of standard web page features such as toolbars and navigational aids is therefore eliminated. The framework employs the popular Model-View-Controller (MVC) architecture, but it also uses the filter mechanism for several of its base functionalities, which permits easy extension of the provided core functionality of the system.
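The filter mechanism combined with MVC can be sketched in Python (the framework itself is server-side Java; all class names here are invented): each filter wraps the next element of the chain, so cross-cutting concerns such as authentication and logging stay out of the controller.

```python
class Filter:
    """Base filter: delegates to the rest of the chain, mirroring the
    servlet-style filter mechanism the framework builds on."""
    def process(self, request, chain):
        return chain(request)

class AuthFilter(Filter):
    def process(self, request, chain):
        if not request.get("user"):
            return {"status": 403, "body": "login required"}
        return chain(request)

class LoggingFilter(Filter):
    def process(self, request, chain):
        response = chain(request)
        response["logged"] = True     # e.g. append to an audit log
        return response

def build_chain(filters, controller):
    """Fold the filter list into a single callable ending at the
    controller (the 'C' of MVC)."""
    chain = controller
    for f in reversed(filters):
        chain = (lambda flt, nxt: lambda req: flt.process(req, nxt))(f, chain)
    return chain

controller = lambda req: {"status": 200, "body": f"data for {req['user']}"}
app = build_chain([LoggingFilter(), AuthFilter()], controller)
print(app({"user": "alice"}))
print(app({}))
```

Because the filters are composed outside the controller, core behaviours such as user management can be extended without touching the visualization or processing code, which is the decoupling the abstract emphasizes.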
Hermosillo, Gabriel; Gomez, Roberto; Seinturier, Lionel; Duchien, Laurence
Adding security functions to existing Web application servers is now vital for the information systems of companies and organizations. Writing crosscutting functions in complex software should take advantage of the modularity offered by new software development approaches. With Aspect-Oriented Programming (AOP), separating concerns when designing an application fosters reuse, parameterization, and maintenance. In this paper, we design a security aspect called AProSec for detecting SQL i...
Mesbah, A.; Van Deursen, A.
Recently, a new web development technique for creating interactive web applications, dubbed AJAX, has emerged. In this new model, the single-page web interface is composed of individual components which can be updated or replaced independently. With the rise of AJAX web applications, classical
Gray, Alasdair J. G.; Sadler, Jason; Kit, Oles; Kyzirakos, Kostis; Karpathiotakis, Manos; Calbimonte, Jean-Paul; Page, Kevin; García-Castro, Raúl; Frazer, Alex; Galpin, Ixent; Fernandes, Alvaro A. A.; Paton, Norman W.; Corcho, Oscar; Koubarakis, Manolis; De Roure, David; Martinez, Kirk; Gómez-Pérez, Asunción
Sensing devices are increasingly being deployed to monitor the physical world around us. One class of application for which sensor data is pertinent is environmental decision support systems, e.g., flood emergency response. For these applications, the sensor readings need to be put in context by integrating them with other sources of data about the surrounding environment. Traditional systems for predicting and detecting floods rely on methods that need significant human resources. In this paper we describe a semantic sensor web architecture for integrating multiple heterogeneous datasets, including live and historic sensor data, databases, and map layers. The architecture provides mechanisms for discovering datasets, defining integrated views over them, continuously receiving data in real-time, and visualising on screen and interacting with the data. Our approach makes extensive use of web service standards for querying and accessing data, and semantic technologies to discover and integrate datasets. We demonstrate the use of our semantic sensor web architecture in the context of a flood response planning web application that uses data from sensor networks monitoring the sea-state around the coast of England. PMID:22164110
Background: Metagenomics is a new field of research on natural microbial communities. High-throughput sequencing techniques like 454 or Solexa-Illumina promise new possibilities as they are able to produce huge amounts of data in much shorter time and with less effort and cost than the traditional Sanger technique. But the data produced comes in even shorter reads (35-100 basepairs with Illumina, 100-500 basepairs with 454-sequencing). CARMA is a new software pipeline for the characterisation of species composition and the genetic potential of microbial samples using short, unassembled reads. Results: In this paper, we introduce WebCARMA, a refined version of CARMA available as a web application for the taxonomic and functional classification of unassembled (ultra-short) reads from metagenomic communities. In addition, we have analysed the applicability of ultra-short reads in metagenomics. Conclusions: We show that unassembled reads as short as 35 bp can be used for the taxonomic classification of a metagenome. The web application is freely available at http://webcarma.cebitec.uni-bielefeld.de.
Although web applications have evolved into mature solutions providing a sophisticated user experience, they have also become complex for the same reason. Complexity primarily affects the server-side generation of dynamic pages, as they are aggregated from multiple sources and there are many possible processing paths depending on parameters. Browser-based tests are an adequate instrument to detect errors within generated web pages, considering the server-side process and path complexity a black box. However, these tests do not detect the cause of an error, which has to be located manually instead. This paper proposes to generate metadata on the paths and parts involved during server-side processing to facilitate backtracking the origins of detected errors at development time. While there are several possible points of interest to observe for backtracking, this paper focuses on the user interface components of web frameworks.
Lablans, Martin; Borg, Andreas; Ückert, Frank
Medical research networks rely on record linkage and pseudonymization to determine which records from different sources relate to the same patient. To establish informational separation of powers, the required identifying data are redirected to a trusted third party that has, in turn, no access to medical data. This pseudonymization service receives identifying data, compares them with a list of already reported patient records and replies with a (new or existing) pseudonym. We found existing solutions to be technically outdated, complex to implement or not suitable for internet-based research infrastructures. In this article, we propose a new RESTful pseudonymization interface tailored for use in web applications accessed by modern web browsers. The interface is modelled as a resource-oriented architecture, which is based on the representational state transfer (REST) architectural style. We translated typical use-cases into resources to be manipulated with well-known HTTP verbs. Patients can be re-identified in real-time by authorized users' web browsers using temporary identifiers. We encourage the use of PID strings for pseudonyms and the EpiLink algorithm for record linkage. As a proof of concept, we developed a Java Servlet as reference implementation. The following resources have been identified: Sessions allow data associated with a client to be stored beyond a single request while still maintaining statelessness. Tokens authorize for a specified action and thus allow the delegation of authentication. Patients are identified by one or more pseudonyms and carry identifying fields. Relying on HTTP calls alone, the interface is firewall-friendly. The reference implementation has proven to be production stable. The RESTful pseudonymization interface fits the requirements of web-based scenarios and allows building applications that make pseudonymization transparent to the user using ordinary web technology. The open-source reference implementation implements the
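The record-linkage-and-pseudonym idea described above can be sketched in Python. This is a minimal in-memory illustration only: the field names, the exact-match linkage key, and the pseudonym format are simplifying assumptions, not the PID strings or EpiLink algorithm the article recommends.

```python
import hashlib
import secrets

class PseudonymService:
    """Toy pseudonymization service: compares identifying data with
    already-reported records and replies with a new or existing pseudonym."""

    def __init__(self):
        self._records = {}  # linkage key -> pseudonym

    @staticmethod
    def _linkage_key(first, last, birthdate):
        # Simplified exact-match linkage on normalized fields; real systems
        # use probabilistic record linkage (e.g. EpiLink) to tolerate typos.
        normalized = f"{first.strip().lower()}|{last.strip().lower()}|{birthdate}"
        return hashlib.sha256(normalized.encode()).hexdigest()

    def pseudonymize(self, first, last, birthdate):
        key = self._linkage_key(first, last, birthdate)
        if key not in self._records:
            # Unknown patient: mint a fresh random pseudonym.
            self._records[key] = "PSN-" + secrets.token_hex(4)
        return self._records[key]

svc = PseudonymService()
p1 = svc.pseudonymize("Ada", "Lovelace", "1815-12-10")
p2 = svc.pseudonymize(" ada ", "LOVELACE", "1815-12-10")  # same patient, messy input
p3 = svc.pseudonymize("Alan", "Turing", "1912-06-23")      # different patient
```

In the RESTful design described above, this logic would sit behind HTTP resources (sessions, tokens, patients) rather than direct method calls.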
Alpuente, María; Ballis, Demis; Romero, Daniel
This paper presents a Rewriting Logic framework that formalizes the interactions between Web servers and Web browsers through a communicating protocol abstracting HTTP. The proposed framework includes a scripting language that is powerful enough to model the dynamics of complex Web applications by encompassing the main features of the most popular Web scripting languages (e.g. PHP, ASP, Java Servlets). We also provide a detailed characterization of browser actions (e.g. forward/backward navigation, page refresh, and new window/tab openings) via rewrite rules, and show how our models can be naturally model-checked by using the Linear Temporal Logic of Rewriting (LTLR), which is a Linear Temporal Logic specifically designed for model-checking rewrite theories. Our formalization is particularly suitable for verification purposes, since it allows one to perform in-depth analyses of many subtle aspects related to Web interaction. Finally, the framework has been completely implemented in Maude, and we report on some successful experiments that we conducted by using the Maude LTLR model-checker.
... From the Federal Register Online via the Government Publishing Office DEPARTMENT OF VETERANS AFFAIRS Proposed Information Collection (Internet Student CPR Web Registration Application); Comment... use of other forms of information technology. Title: Internet Student CPR Web Registration Application...
Kreutel, Jörn; Gerlach, Andrea; Klekamp, Stefanie; Schulz, Kristin
We describe the ideas and results of an applied research project that aims at leveraging the expressive power of semantic web technologies as a server-side backend for mobile applications that provide access to location and multimedia data and allow for a rich user experience in mobile scenarios, ranging from city and museum guides to multimedia enhancements of any kind of narrative content, including e-book applications. In particular, we will outline a reusable software architecture for both server-side functionality and native mobile platforms that is aimed at significantly decreasing the effort required for developing particular applications of that kind.
Tso, Kam S.; Pajevski, Michael J.
Cybersecurity has become a great concern as threats of service interruption, unauthorized access, stealing and altering of information, and spreading of viruses have become more prevalent and serious. Application layer access control of applications is a critical component in the overall security solution that also includes encryption, firewalls, virtual private networks, antivirus, and intrusion detection. An access control solution, based on an open-source access manager augmented with custom software components, was developed to provide protection to both Web-based and Java-based client and server applications. The DISA Security Service (DISA-SS) provides common access control capabilities for AMMOS software applications through a set of application programming interfaces (APIs) and network-accessible security services for authentication, single sign-on, authorization checking, and authorization policy management. The OpenAM access management technology designed for Web applications can be extended to meet the needs of Java thick clients and standalone servers that are commonly used in the JPL AMMOS environment. The DISA-SS reusable components have greatly reduced the effort for each AMMOS subsystem to develop its own access control strategy. The novelty of this work is that it leverages an open-source access management product that was designed for Web-based applications to provide access control for Java thick clients and Java standalone servers. Thick clients and standalone servers are still commonly used in businesses and government, especially for applications that require rich graphical user interfaces and high-performance visualization that cannot be met by thin clients running on Web browsers.
Stepien, Bernard; Peyton, Liam
Traditional approaches to integration testing typically use a variety of different test tools (such as HTTPUnit, JUnit, DBUnit) and manage data in a variety of formats (HTML, Java, SQL) in order to verify web application state at different points in the architecture of a web application. Managing test campaigns across these different tools and correlating intermediate results in different formats is a difficult problem which we address in this paper. In particular, the major contribution of this paper is to demonstrate that a specification-based approach to integration testing enables one to define integration test campaigns more succinctly and efficiently in a single language/tool and correlate intermediate results in a single data format. We also evaluate the effectiveness of TTCN-3 (a standards-based test specification language and framework) in supporting such an approach.
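TTCN-3 is its own language; the single-specification idea it enables can be roughly illustrated in Python: one declarative campaign, in one data format, drives checks at different architectural layers. The probe names and stubbed results here are hypothetical placeholders for HTTPUnit/DBUnit-style tools.

```python
# One declarative test campaign: each step names a layer probe and the
# expected intermediate result, all in a single data format (plain dicts).
campaign = [
    {"layer": "http", "probe": "GET /login", "expect": {"status": 200}},
    {"layer": "db",   "probe": "SELECT count(*) FROM sessions", "expect": {"rows": 1}},
]

def run_probe(layer, probe):
    """Stub standing in for per-layer test tooling; a real harness would
    issue the HTTP request or SQL query named by `probe`."""
    fake_results = {
        "http": {"status": 200},
        "db":   {"rows": 1},
    }
    return fake_results[layer]

def run_campaign(campaign):
    # Correlating results is trivial because every layer reports
    # in the same format as the expectation.
    verdicts = []
    for step in campaign:
        actual = run_probe(step["layer"], step["probe"])
        verdicts.append("pass" if actual == step["expect"] else "fail")
    return verdicts

verdicts = run_campaign(campaign)
```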
The contemporary organizations develop business processes in a very complex environment. The IT&C technologies are used by organizations to improve their competitive advantages. But, the IT&C technologies are not perfect. They are developed in an iterative process and their quality is the result of the lifecycle activities. The audit and evaluation processes are required by the increased complexity of the business processes supported by IT&C technologies. In order to organize and develop a high-quality audit process, the evaluation team must analyze the risks, threats and vulnerabilities of the information system. The paper highlights the security vulnerabilities in web applications and the processes of their detection. The web applications are used as IT&C tools to support the distributed information processes. They are a major component of the distributed information systems. The audit and evaluation processes are carried out in accordance with the international standards developed for information system security assurance.
Souer, J.; van de Weerd, I.|info:eu-repo/dai/nl/304836664; Versendaal, J.M.|info:eu-repo/dai/nl/07506104X; Brinkkemper, S.|info:eu-repo/dai/nl/07500707X
Web applications are evolving towards strong content-centered Web applications. The development processes and implementation of these applications are unlike the development and implementation of traditional information systems. In this paper we propose the WebEngineering Method: a method for developing
Gao, Sheng; Mioc, Darka; Boley, Harold
applications using a Web-based GIS. Recent progress on the database storage and geospatial Web Services has advanced the use of Web-based GIS for health applications, with various proprietary software, open source software, and Application Programming Interfaces (APIs) available. Current challenges in applying...
Keng, Tan Chin; Ching, Yeoh Kah
The use of web applications has become a trend in many disciplines including education. In view of the influence of web application in education, this study examines web application technologies that could enhance undergraduates' learning experiences, with focus on Quantity Surveying (QS) and Information Technology (IT) undergraduates. The…
Serban, Alexandru; Crisan-Vida, Mihaela; Mada, Leonard; Stoicu-Tivadar, Lacramioara
User interfaces are important to facilitate easy learning and operation of an IT application, especially in the medical world. An easy-to-use interface has to be simple and to accommodate the user's needs and mode of operation. The technology in the background is an important tool to accomplish this. The present work aims at creating a web interface using specific technology (HTML table design combined with CSS3) to provide an optimized responsive interface for a complex web application. In the first phase, the current icMED web medical application layout is analyzed, and its structure is designed using specific tools, on source files. In the second phase, a new graphical interface adaptable to different mobile terminals is proposed (using HTML table design (TD) and the CSS3 method) that uses no source files, just lines of code for layout design, improving the interaction in terms of speed and simplicity. For a complex medical software application, a new prototype layout was designed and developed using HTML tables. The method uses CSS code with only CSS classes applied to one or multiple HTML table elements, instead of CSS styles that can be applied to just one DIV tag at once. The technique has the advantage of a simplified CSS code, and a better adaptability to different media resolutions compared to the DIV-CSS style method. The presented work is proof that adaptive web interfaces can be developed just by using and combining different types of design methods and technologies, using HTML table design, resulting in an interface that is simpler to learn and use, suitable for healthcare services.
This article is the result of searching for and selecting new technologies that help programmers in developing web applications. It also represents a plea for using them, showing their advantages and disadvantages. The article also covers features regarding binding elements, modules, filters and directives. It is a synthesis and a guide of good practice for innovative programmers. All technical issues presented are supported by a case study.
Yuan, Ying; Mei, Kun; Bian, Fuling
With the growth of World Wide Web technologies, the access to and use of geospatial information has changed radically in the past decade. Previously, the data processed by a GIS, as well as its methods, resided locally and contained information that was sufficiently unambiguous in the respective information community. Now, both data and methods may be retrieved and combined from anywhere in the world, escaping their local contexts. The last few years have seen a growing interest in the field of the semantic geospatial web. With the development of semantic web technologies, we have seen the possibility of solving the heterogeneity/interoperation problem in the GIS community. Semantic geospatial web applications can support a wide variety of tasks including data integration, interoperability, knowledge reuse, spatial reasoning and many others. This paper proposes a flexible framework called GeoSWF (short for Geospatial Semantic Web Framework), which supports the semantic integration of distributed and heterogeneous geospatial information resources as well as semantic query and spatial relationship reasoning. We design the architecture of GeoSWF by extending the MVC pattern. GeoSWF uses the geo-2007.owl proposed by the W3C as the reference ontology for geospatial information and designs different application ontologies according to the situation of the heterogeneous geospatial information resources. A Geospatial Ontology Creating Algorithm (GOCA) is designed to convert geospatial information into ontology instances represented in RDF/OWL. On top of these ontology instances, GeoSWF carries out semantic reasoning using the rule set stored in the knowledge base to generate new system queries. Query results are ranked by the Euclidean distance of each ontology instance. Finally, the paper gives conclusions and future work.
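The final ranking step described above, ordering query results by the Euclidean distance of each ontology instance, can be sketched as follows. The instance URIs and plain x/y coordinates are hypothetical; GeoSWF's actual RDF/OWL representation is not reproduced here.

```python
import math

# Hypothetical query results: ontology instances with planar coordinates.
instances = [
    {"uri": "ex:ParkA", "x": 3.0, "y": 4.0},
    {"uri": "ex:ParkB", "x": 1.0, "y": 1.0},
    {"uri": "ex:ParkC", "x": 6.0, "y": 8.0},
]

def rank_by_distance(instances, qx, qy):
    """Order instances by Euclidean distance from the query point (qx, qy)."""
    return sorted(
        instances,
        key=lambda i: math.hypot(i["x"] - qx, i["y"] - qy),
    )

# Query point at the origin: ParkB (dist ~1.41) ranks before ParkA (5) and ParkC (10).
ranked = [i["uri"] for i in rank_by_distance(instances, 0.0, 0.0)]
```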
Daniluk, Paweł; Wilczyński, Bartek; Lesyng, Bogdan
One of the requirements for a successful scientific tool is its availability. Developing a functional web service, however, is usually considered a mundane and ungratifying task, and is quite often neglected. When publishing bioinformatic applications, such an attitude puts an additional burden on the reviewers, who have to cope with poorly designed interfaces in order to assess the quality of the presented methods, and it impairs the actual usefulness to the scientific community at large. In this note we present WeBIAS, a simple, self-contained solution for making command-line programs accessible through web forms. It comprises a web portal capable of serving several applications and backend schedulers which carry out computations. The server handles user registration and authentication, stores queries and results, and provides a convenient administrator interface. WeBIAS is implemented in Python and available under the GNU Affero General Public License. It has been developed and tested on GNU/Linux-compatible platforms, covering a vast majority of operational WWW servers. Since it is written in pure Python, it should also be easy to deploy on all other platforms supporting Python (e.g. Windows, Mac OS X). Documentation and source code, as well as a demonstration site, are available at http://bioinfo.imdik.pan.pl/webias . WeBIAS has been designed specifically with ease of installation and deployment of services in mind. Setting up a simple application requires minimal effort, yet it is possible to create visually appealing, feature-rich interfaces for query submission and presentation of results.
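The core task of such a system, turning validated web-form fields into a command line for a backend scheduler, can be sketched as below. This is not WeBIAS code: the program name, option style, and field names are assumed for illustration; the point is that proper quoting keeps form input from injecting shell commands.

```python
import shlex

def build_command(program, form_fields):
    """Translate validated web-form fields into a safely quoted
    command line for a backend scheduler to execute."""
    args = [program]
    for name, value in form_fields.items():
        args.append(f"--{name}")
        args.append(str(value))
    # shlex.join quotes each argument, so hostile form input cannot
    # break out and run extra shell commands.
    return shlex.join(args)

# A malicious "evalue" field stays a single, harmless argument.
cmd = build_command("blastp", {"query": "input.fasta", "evalue": "1e-5; rm -rf /"})
```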
Background: Normal mode analysis (NMA) has become the method of choice to investigate the slowest motions in macromolecular systems. NMA is especially useful for large biomolecular assemblies, such as transmembrane channels or virus capsids. NMA relies on the hypothesis that the vibrational normal modes having the lowest frequencies (also named soft modes) describe the largest movements in a protein and are the ones that are functionally relevant. Results: We developed a web-based server to perform normal mode calculations and different types of analyses. Starting from a structure file provided by the user in PDB format, the server calculates the normal modes and subsequently offers the user a series of automated calculations: normalized squared atomic displacements, vector field representation and animation of the first six vibrational modes. Each analysis is performed independently from the others, and results can be visualized using only a web browser. No additional plug-in or software is required. For users who would like to analyze the results with their favorite software, raw results can also be downloaded. The application is available at http://www.bioinfo.no/tools/normalmodes. We present here the underlying theory, the application architecture and an illustration of its features using a large transmembrane protein as an example. Conclusion: We built an efficient and modular web application for normal mode analysis of proteins. Non-specialists can easily and rapidly evaluate the degree of flexibility of multi-domain protein assemblies and characterize the large-amplitude movements of their domains.
Sernadela, Pedro; González-Castro, Lorena; Oliveira, José Luís
In recent years, we have witnessed an explosion of biological data resulting largely from the demands of life science research. The vast majority of these data are freely available via diverse bioinformatics platforms, including relational databases and conventional keyword search applications. This type of approach has achieved great results in the last few years, but proved to be infeasible when information needs to be combined or shared among different and scattered sources. During recent years, many of these data distribution challenges have been solved with the adoption of the semantic web. Despite the evident benefits of this technology, its adoption introduced new challenges related to the migration process from existent systems to the semantic level. To facilitate this transition, we have developed Scaleus, a semantic web migration tool that can be deployed on top of traditional systems in order to bring knowledge, inference rules, and query federation to the existent data. Targeted at the biomedical domain, this web-based platform offers, in a single package, straightforward data integration and semantic web services that help developers and researchers in the creation of new semantically enhanced information systems. Scaleus is available as open source at http://bioinformatics-ua.github.io/scaleus/ .
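The kind of relational-to-semantic migration such a tool automates can be illustrated very roughly: each table row becomes a set of subject-predicate-object triples. The base URI, table name, and vocabulary scheme below are made-up examples, not Scaleus's actual mapping.

```python
def row_to_triples(base_uri, table, pk, row):
    """Map one relational row to N-Triples-style (subject, predicate, object) tuples."""
    subject = f"<{base_uri}/{table}/{row[pk]}>"
    triples = []
    for column, value in row.items():
        if column == pk:
            continue  # the primary key is encoded in the subject URI
        predicate = f"<{base_uri}/vocab/{table}#{column}>"
        triples.append((subject, predicate, f'"{value}"'))
    return triples

row = {"id": 42, "name": "BRCA1", "chromosome": "17"}
triples = row_to_triples("http://example.org", "gene", "id", row)
```

Once data are in triple form, inference rules and federated queries can operate over them uniformly, which is the benefit the abstract describes.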
Hanwell, Marcus D; de Jong, Wibe A; Harris, Christopher J
In the software engineering world, many modelling notations and languages have been developed to aid application development. The technologies Java and Web services play an increasingly important role in web applications. However, because of issues of complexity, it is difficult to build multi-threaded Java applications and Web service applications, and even more difficult to model them. Furthermore, it is difficult to reconcile the directly coded application with the model-based application....
Romaniuk, Ryszard S.
The Wilga Summer 2015 Symposium on Photonics Applications and Web Engineering was held on 23-31 May. The Symposium gathered over 350 participants, mainly young researchers active in optics, optoelectronics, photonics, and electronics technologies and applications. Around 300 presentations were given in a few main topical tracks, including: bio-photonics, optical sensory networks, photonics-electronics-mechatronics co-design and integration, large functional system design and maintenance, the Internet of Things, and others. The paper is an introduction to the 2015 WILGA Summer Symposium Proceedings and digests some of the Symposium's chosen key presentations.
Brigham, Tara J
Google is a company that is constantly expanding and growing its services and products. While most librarians possess a "love/hate" relationship with Google, there are a number of reasons you should consider exploring some of the tools Google has created and made freely available. Applications and services such as Google Docs, Slides, and Google+ are functional and dynamic without the cost of comparable products. This column will address some of the issues users should be aware of before signing up to use Google's tools, and a description of some of Google's Web applications and services, plus how they can be useful to librarians in health care.
M-learning web-based applications are a particular case of web applications designed to be operated from mobile devices, whose purpose is to implement learning aspects. Project management of such applications takes into account the identified peculiarities. M-learning web-based application characteristics are identified. M-learning functionality covers the needs of an educational process. Development is described taking into account the mobile web and its influences on the analysis, design, construction and testing phases. Activities building up a work breakdown structure for the development of m-learning web-based applications are presented. Project monitoring and control techniques are proposed. Resources required for projects are discussed.
Telemedicine scenarios today include in-hospital care management, remote teleconsulting, collaborative diagnosis and emergency situation handling. Different types of information need to be accessed by means of heterogeneous client devices in different communication environments in order to enable high-quality continuous healthcare delivery wherever and whenever needed. In this paper, a Web-based telemedicine architecture based on Java, XML and XSL technologies is presented. By providing dynamic content delivery services and Java-based client applications for medical data consultation and modification, the system enables effective access to a standards-based Electronic Patient Record database by means of any device equipped with a Web browser, such as traditional Personal Computers and workstations as well as modern Personal Digital Assistants. The effectiveness of the proposed architecture has been evaluated in different scenarios, testing fixed and mobile clinical data transmission over Local Area Networks, wireless LANs and wide-coverage telecommunication networks including GSM and GPRS.
The purpose of this paper is to present a set of best practices for moving PHP web applications from traditional hosting to a Cloud-based one. PHP applications are widespread nowadays and they come in many shapes and sizes, which is why they require special attention. The paper goes beyond just moving the code into the Cloud and setting up the run-time environment, as architectural changes must usually be made at the application level as well. The decision of how and when to make these changes can make the difference between a successful migration and a failed one. It will be presented how to decouple and scale an application, and how to scale a database while following high-availability principles.
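One typical architectural change in such a migration is moving session state off the local filesystem into a store shared by all instances, so the application can scale horizontally. The sketch below is in Python for brevity (the same pattern applies to PHP session handlers backed by, e.g., Redis or memcached); the class and function names are illustrative only.

```python
class SharedSessionStore:
    """Stand-in for a shared store (e.g. Redis/memcached): any application
    instance can read sessions written by any other instance."""
    def __init__(self):
        self._data = {}

    def get(self, sid):
        return self._data.get(sid)

    def set(self, sid, value):
        self._data[sid] = value

store = SharedSessionStore()  # shared by all instances behind a load balancer

def handle_login(instance_id, sid, user):
    # Any instance may handle the login and record the session.
    store.set(sid, {"user": user, "instance": instance_id})

def handle_request(instance_id, sid):
    # A *different* instance can serve the follow-up request, because the
    # session no longer lives on one machine's local disk.
    session = store.get(sid)
    return session["user"] if session else None

handle_login("web-1", "abc123", "alice")
user = handle_request("web-2", "abc123")
```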
Mesbah, A.; Van Deursen, A.; Lenselink, S.
In the last few years the development of dynamic web applications, often called Web 2.0, has begun. From this development a technology called mashups was created. Mashups can easily combine huge amounts of data sources and functionalities of existing as well as future web applications and services. Therefore they are used to develop new applications which offer new possibilities of information usage. This technology provides the possibility of developing basic as well as robust web applications not only for IT or GIS specialists, but also for common users. Software companies have developed web projects for building mashup applications, also called mashup editors.
Tso, Kam S.; Pajevski, Michael J.; Johnson, Bryan
Cyber security has gained national and international attention as a result of near-continuous headlines from financial institutions, retail stores, government offices and universities reporting compromised systems and stolen data. Concerns continue to rise as threats of service interruption and the spreading of viruses become ever more prevalent and serious. Controlling access to application layer resources is a critical component in a layered security solution that includes encryption, firewalls, virtual private networks, antivirus, and intrusion detection. In this paper we discuss the development of an application-level access control solution, based on an open-source access manager augmented with custom software components, to provide protection to both Web-based and Java-based client and server applications.
Vlachos, Michael; Stassinopoulos, George
This paper presents a totally generic client-server model for accessing legacy and new databases according to three-tier architecture principles. It is based on an integrated environment that eases the dynamic creation and instantiation of secure web applications that access multiple database management systems. Emphasis is placed on the ability to query almost any type of relational database, and queries can simultaneously address a multitude of data sources. The information is collected, assembled and presented to users depending on a possible set of user profiles. These profiles originate from work on securing the conduct of clinical studies. This has been achieved in the context of the EU-funded project HARP (Harmonisation for the Security of Web Technologies and Applications). The generic character of the model is exploited through an accompanying set of development tools. This permits efficient and effective creation and maintenance of applications in several domains of health telematics and beyond. Its main merit is the lightweight character of the resulting platform, whereby all necessary instantiations are accomplished through a set of related XML documents.
Recently there has been a growing interest in the investigation and development of the next-generation web, the Semantic Web. While most current forms of web content are designed to be presented to humans and are barely understandable by computers, the content of the Semantic Web is structured in a semantic way so that it is meaningful to computers as well as to humans. In this paper, we report a survey of recent research on the Semantic Web. In particular, we present the opportunities that this revolution will bring to us: web services, agent-based distributed computing, semantics-based web search engines, and semantics-based digital libraries. We also discuss the technical and cultural challenges of realizing the Semantic Web: the development of ontologies, formal semantics of Semantic Web languages, and trust and proof models. We hope that this will shed some light on the direction of future work in this field.
It is more convenient to talk about changes in a domain-specific way than to formulate them at the programming construct level or, even worse, at a purely lexical level. Using aspect-oriented programming, changes can be modularized and made reapplicable. In this paper, selected change types in web applications are analyzed. They are expressed in terms of general change types which, in turn, are implemented using aspect-oriented programming. Some general change types match aspect-oriented design patterns or their combinations.
Chu, Binh Minh
E-commerce and e-commerce websites are among the popular terms nowadays. They exist all around the world, and people use them with or without awareness. From a developer's point of view, studying how an e-commerce website works is a compelling and, at the same time, challenging topic. This thesis report documents the process of developing an e-commerce web application from the beginning until the end. Two main goals of the development were to create a simple but fully functional e-commerce website and...
Cline, Owen; Van Sickel, Peter
WebSphere Application Server (WAS) is complex and multifaceted middleware used by huge enterprises as well as small businesses. In this book, the authors do an excellent job of covering the many aspects of the software. While other books merely cover installation and configuration, this book goes beyond that to cover the critical verification and management process to ensure a successful installation and implementation. It also addresses all of the different packages, from Express to Network, so that no matter what size your company is, you will be able to successfully implement WAS V6. To de
The purpose of this thesis is to demonstrate a systematic analysis of the quality of web applications for electronic banking. In the theoretical part we focused on the ISO/IEC 25000 standard, which deals with product quality. With the help of the SQuaRE set of standards we have reviewed the contents of some standards from the ISO/IEC 25000 family. In the end we used the ISO/IEC 25010 quality model and ISO/IEC 25040, which contains the procedure for assessing product quality. For the assessment of user in...
Deploying business applications on the internal Web is a priority at Oak Ridge National Laboratory (Lockheed Martin Energy Research) and Lockheed Martin Energy Systems, Inc., as with most corporations. Three separate applications chose the Oracle Application Server (OAS), using the PL/SQL cartridge, as a Web deployment method. This method was chosen primarily because the data was already stored in Oracle tables and developers knew PL/SQL or at least SQL. The Database Support group had the responsibility of installing, testing, and determining standard methods for interfacing with the PL/SQL cartridge of the OAS. Note that the term Web Application Server was used for version 3, but in this discussion, OAS will be used for both version 3 and version 4.
Okladnikov, Igor; Gordov, Evgeny; Titov, Alexander; Bogomolov, Vasily; Martynova, Yuliya; Shulgina, Tamara
Kobara, S.; Howard, M. K.; Simoniello, C.; Jochens, A. E.; Gulf Of Mexico Coastal Ocean Observing System Regional Association (Gcoos-Ra)
Spatial and temporal information on the ecology of marine species and the encompassing oceanographic environment is vital to the development of effective strategies for marine resource management and biodiversity conservation. Assembling data and generating products is a time-consuming and often laborious part of the workflow required of fisheries specialists, resource managers, marine scientists and other stakeholder groups for effective fishery management and marine spatial planning. Workflow costs for all groups can be significantly reduced through the use of interoperable networked data systems. The Gulf of Mexico Coastal Ocean Observing System Regional Association (GCOOS-RA) is one of 11 RAs comprising the non-Federal part of the U.S. Integrated Ocean Observing System (IOOS). The RAs serve the region’s needs for data and information: by working with data providers to offer their data in standardized ways following IOOS guidance, by gathering stakeholders’ needs and requirements, and by producing basic products or facilitating product generation by others to meet those needs. The GCOOS Data Portal aggregates regional near-real-time data and serves these data through standardized service interfaces suitable for automated machine access or in formats suitable for human consumption. The related Products Portal generates products in graphical displays for humans and in standard formats for importing into common software packages. Web map applications are created using the ArcGIS Server RESTful service, publicly available Open Geospatial Consortium (OGC) Web Map Service (WMS) layers, and the Web Coverage Service (WCS). Use of standardized interfaces allows us to construct seamless workflows that carry data from sensors through to products in an automated fashion. As a demonstration of the power of interoperable standards-based systems, we have developed tailored product web pages for recreational boaters and fishermen. This is a part of an ongoing project to provide an
Lossent, A.; Rodriguez Peon, A.; Wagner, A.
The CERN Web Frameworks team has deployed OpenShift Origin to facilitate the deployment of web applications and to improve efficiency in terms of computing resource usage. OpenShift leverages Docker containers and Kubernetes orchestration to provide a Platform-as-a-Service solution oriented toward web applications. We review use cases and how OpenShift was integrated with other services such as source control, web site management and authentication services.
Richardson, Emily J; Escalettes, Franck; Fotheringham, Ian; Wallace, Robert J; Watson, Mick
Whole-genome shotgun metagenomics experiments produce DNA sequence data from entire ecosystems, and provide a huge amount of novel information. Gene discovery projects require up-to-date information about sequence homology and domain structure for millions of predicted proteins to be presented in a simple, easy-to-use system. There is a lack of simple, open, flexible tools that allow the rapid sharing of metagenomics datasets with collaborators in a format they can easily interrogate. We present Meta4, a flexible and extensible web application that can be used to share and annotate metagenomic gene predictions. Proteins and predicted domains are stored in a simple relational database, with a dynamic front-end which displays the results in an internet browser. Web services are used to provide up-to-date information about the proteins from homology searches against public databases. Information about Meta4 can be found on the project website, code is available on Github, a cloud image is available, and an example implementation can be seen at.
Asish Kumar Dalai
Reports on web application security risks show that SQL injection is the topmost vulnerability. The journey from static to dynamic web pages leads to the use of databases in web applications. Due to the lack of secure coding techniques, the SQL injection vulnerability prevails in a large set of web applications. A successful SQL injection attack imposes a serious threat to the database, the web application, and the entire web server. In this article, the authors have proposed a novel method for the prevention of SQL injection attacks. The classification of SQL injection attacks has been done based on the methods used to exploit this vulnerability. The proposed method proves to be efficient in the context of its ability to prevent all types of SQL injection attacks. Some popular SQL injection attack tools and web application security datasets have been used to validate the model. The results obtained are promising, with a high accuracy rate for the detection of SQL injection attacks.
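The vulnerability class this abstract addresses can be contrasted with the standard baseline defense in a few lines. The sketch below is not the authors' proposed method, just the usual parameterized-query countermeasure: a classic tautology-based injection succeeds against string-concatenated SQL and fails against placeholders.

```python
import sqlite3

# In-memory database with a single user record.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, password TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

def login_unsafe(name, password):
    # Vulnerable: attacker input is spliced directly into the SQL string.
    q = "SELECT * FROM users WHERE name = '%s' AND password = '%s'" % (name, password)
    return conn.execute(q).fetchall()

def login_safe(name, password):
    # Safe: placeholders make the driver treat input strictly as data.
    q = "SELECT * FROM users WHERE name = ? AND password = ?"
    return conn.execute(q, (name, password)).fetchall()

attack = "' OR '1'='1"
print(len(login_unsafe("alice", attack)))  # 1 row: authentication bypassed
print(len(login_safe("alice", attack)))    # 0 rows: attack neutralized
```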
A web service is a service offered by an electronic device to communicate with other electronic devices over the World Wide Web. The smartphone is a device that almost everyone has; students and parents in particular use it to get information about the school. In the BINUS School Serpong mobile application, web services are used for getting data, such as student and menu data, from the web server. The problem faced by BINUS School Serpong today is the time-consuming application update required when using a native application, while application updates are very frequent. To resolve this problem, the BINUS School Serpong mobile application will use web services. This article shows the usage of web services with XML for retrieving student data. The result of this study is that by using web services, a smartphone can retrieve data consistently across multiple platforms.
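To illustrate the kind of XML-based retrieval the abstract describes, here is a minimal sketch of parsing a service response; the payload and element names are invented for illustration, not BINUS School Serpong's actual schema:

```python
import xml.etree.ElementTree as ET

# Hypothetical XML payload of the kind a school web service might return.
payload = """
<students>
  <student><id>1001</id><name>Andi</name><grade>10</grade></student>
  <student><id>1002</id><name>Budi</name><grade>11</grade></student>
</students>
"""

def parse_students(xml_text):
    # Extract each <student> element into a plain dictionary.
    root = ET.fromstring(xml_text)
    return [
        {"id": s.findtext("id"), "name": s.findtext("name"), "grade": s.findtext("grade")}
        for s in root.findall("student")
    ]

students = parse_students(payload)
print(students[0]["name"])  # Andi
```

Because the parsing depends only on the XML contract, not on the client platform, any smartphone or desktop client consuming the same service sees the same data, which is the consistency benefit the abstract highlights.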
Mortensen, Louise Hindborg; Qin, Jiayi; Cruz-Paredes, Carla
In 2006, the European Council established a mandatory target of 20 % renewable energy of consumption by 2020. Part of the replacement is burning biomass for heating and electricity. Whole-tree biomass harvesting for biofuel combustion intensifies the removal of nutrients from the forest; this may be compensated by applying ash from ... Here we investigate the consequences of returning wood ash to biofuel-producing coniferous forest. We hypothesize that the change in pH and the increased availability of nutrients after ash application to the forest floor can facilitate an increase in the bacteria-to-fungi ratio, with possible effects for the soil food web. Applying ash of different ... the nematodes appeared to be slightly negatively affected, though the bacteria-feeding nematodes a bit less so. The effects had not yet transferred to the lower soil layer (3-6 cm) at the site. Sampling in 2016 and 2017 will clarify the variability of both indirect effects of wood ash application to the terrestrial food web ...
Bjørnes, Charlotte D.; Cummings, Elizabeth; Nøhr, Christian
Based on a clinical intervention study, this paper adds to the significance of user involvement in design processes and substantiates the potential of online, flexible health informatics tools as useful components to accommodate the organizational changes that short-stay treatment demands. A dialogue-based web application was designed and implemented to accommodate patients' information and communication needs in short-stay hospital settings. To ensure the system meets the patients' needs, both patients and healthcare professionals were involved in the design process by applying various participatory methods. Contextualization of the new application was also central in all phases, to ensure a focus not only on the technology itself but also on the way it is used and in which relations and contexts. In the evaluation of the tool, the patients' descriptions as users substantiate that the use of the Internet ...
Ries, Kernell G.; Guthrie, John G.; Rea, Alan H.; Steeves, Peter A.; Stewart, David W.
Streamflow measurements are collected systematically over a period of years at partial-record stations to estimate peak-flow or low-flow statistics. Streamflow measurements usually are collected at miscellaneous-measurement stations for specific hydrologic studies with various objectives. StreamStats is a Web-based Geographic Information System (GIS) application that was created by the USGS, in cooperation with Environmental Systems Research Institute, Inc. (ESRI), to provide users with access to an assortment of analytical tools that are useful for water-resources planning and management. StreamStats functionality is based on ESRI’s ArcHydro Data Model and Tools, described on the Web at http://resources.arcgis.com/en/communities/hydro/01vn0000000s000000.htm. StreamStats allows users to easily obtain streamflow statistics, basin characteristics, and descriptive information for USGS data-collection stations and user-selected ungaged sites. It also allows users to identify stream reaches that are upstream and downstream from user-selected sites, and to identify and obtain information for locations along the streams where activities that may affect streamflow conditions are occurring. This functionality can be accessed through a map-based user interface that appears in the user’s Web browser, or individual functions can be requested remotely as Web services by other Web or desktop computer applications. StreamStats can perform these analyses much faster than historically used manual techniques. StreamStats was designed so that each state would be implemented as a separate application, with a reliance on local partnerships to fund the individual applications, and a goal of eventual full national implementation. Idaho became the first state to implement StreamStats in 2003. By mid-2008, 14 states had applications available to the public, and 18 other states were in various stages of implementation.
Velázquez Santana Eugenio César
The application of new information technologies such as Google Web Toolkit and App Engine is making a difference in the academic management of Higher Education Institutions (HEIs), which seek to streamline their processes as well as reduce infrastructure costs. However, they encounter problems regarding acquisition costs, the infrastructure necessary for their use, and the maintenance of the software. It is for this reason that the present research aims to describe the application of these new technologies in HEIs, as well as to identify their advantages and disadvantages and the key success factors in their implementation. SCRUM was used as the software development methodology, along with PMBOK as a project management tool. The main results were related to the application of these technologies in the development of customized software for teachers, students and administrators, as well as the weaknesses and strengths of using them in the cloud. On the other hand, it was also possible to describe the paradigm shift that data warehouses are generating with respect to today's relational databases.
Jaime Alberto Guzmán Luna
This article proposes applying semantic web and artificial intelligence planning techniques to a web services composition model dealing with problems of ambiguity in web service descriptions and the handling of incomplete web information. The model uses OWL-S services and implements a planning technique which handles open-world semantics in its reasoning process to resolve these problems. This resulted in a web services composition system incorporating a module for interpreting OWL-S services and converting them into a planning problem in PDDL, a planning module handling incomplete information, and an execution service module concurrently interacting with the planner for executing each composition plan service.
We propose a model for multi-user data-driven communicating Web applications. An arbitrary number of users may access the application concurrently through Web sites and Web services. A Web service may have an arbitrary number of instances. The interaction between users and Web application is data-driven. Synchronous communication is done by shared access to the database and global application state. Private information may be stored in a local state. Asynchronous communication is done by message passing. A version of first-order linear time temporal logic (LTL-FO) is proposed to express behavioral properties of Web applications. The model is used to formally specify a significant fragment of an e-business application. Some of its desirable properties are expressed as LTL-FO formulas. We study a decision problem, namely whether the model satisfies an LTL-FO formula. We show the undecidability of the unrestricted verification problem and discuss some restrictions that ensure decidability.
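To give a flavor of the behavioral properties involved, here is a toy, purely propositional check of a response property G(trigger → F response) over a finite trace of application states. Real LTL-FO verification as studied here quantifies over data and is far richer than this sketch; the e-business property below ("every submitted order is eventually shipped or cancelled") is invented for illustration.

```python
def eventually_from(pred, trace, i):
    # F pred, evaluated from position i of a finite trace.
    return any(pred(s) for s in trace[i:])

def responds(trigger, response, trace):
    # G(trigger -> F response): every trigger state is followed
    # (at or after its position) by a response state.
    return all(eventually_from(response, trace, i)
               for i, s in enumerate(trace) if trigger(s))

trace = [{"order": "submitted"}, {"order": "paid"}, {"order": "shipped"}]
print(responds(lambda s: s["order"] == "submitted",
               lambda s: s["order"] in ("shipped", "cancelled"),
               trace))  # True
```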
Today, web applications have become a necessity and many companies use them as a communication tool to keep in touch with their customers. The usage of web applications increases as the number of internet users rises. With the advent of Rich Internet Applications, desktop application developers have moved to web application development with AJAX technology. BINUS School Serpong is a Cambridge Curriculum-based international school that uses a web application to access all information about the school. By using AJAX, the performance of the web application should be improved and bandwidth usage decreased. The problem at BINUS School Serpong is that not all parts of the web application use AJAX. This paper introduces the usage of AJAX in ASP.NET with the C# programming language in the BINUS School Serpong web application. It is expected that by using ASP.NET AJAX, the BINUS School Serpong website's performance will be faster because of reduced web page reloads. The methodology used in this paper is a literature study. The results of this study prove that ASP.NET AJAX can be used easily and improves the BINUS School Serpong website's performance. The conclusion of this paper is that the implementation of ASP.NET AJAX improves the performance of the web application at BINUS School Serpong.
Hansen, Thomas Riisgaard
This demonstration presents a simple browser plug-in that grants web applications the ability to use multiple nearby devices for displaying web content. A web page can, e.g., be designed to present additional information on nearby devices. The demonstration introduces a lightweight peer-to-peer arc...
Дмитро Андрійович Скачков
The problem of studying and measuring web applications, including the architectural features of CMS/CMF Drupal, is considered. An experimental model to determine the modes of web applications is built. The requirements for a mathematical model of code parameter estimation, executed for process control optimization, are described. A model for estimating the resource consumption of the web application is developed. A software implementation of the system using the PHP and Python languages is presented.
Background: Protection of public health from rabies is informed by the analysis of surveillance data from human and animal populations. In Canada, public health, agricultural and wildlife agencies at the provincial and federal level are responsible for rabies disease control, and this has led to multiple agency-specific data repositories. Aggregation of agency-specific data into one database application would enable more comprehensive data analyses and effective communication among participating agencies. In Québec, RageDB was developed to house surveillance data for the raccoon rabies variant, representing the next generation in web-based database applications that provide a key resource for the protection of public health. Results: RageDB incorporates data from, and grants access to, all agencies responsible for the surveillance of raccoon rabies in Québec. Technological advancements of RageDB over earlier rabies surveillance databases include (1) automatic integration of multi-agency data and diagnostic results on a daily basis; (2) a web-based data editing interface that enables authorized users to add, edit and extract data; and (3) an interactive dashboard to help visualize data simply and efficiently, in table, chart, and cartographic formats. Furthermore, RageDB stores data from citizens who voluntarily report sightings of rabies-suspect animals. We also discuss how sightings data can indicate public perception of the risk of raccoon rabies and thus aid in directing the allocation of disease control resources for protecting public health. Conclusions: RageDB provides an example in the evolution of spatio-temporal database applications for the storage, analysis and communication of disease surveillance data. The database was fast and inexpensive to develop by using open-source technologies, simple and efficient design strategies, and shared web hosting. The database increases communication among agencies collaborating to protect human health from ...
Yamamoto, Daisuke; Nagao, Katashi
In this paper, we developed a Web-based video annotation system named iVAS (intelligent Video Annotation Server). Audiences can associate any video content on the Internet with annotations. The system analyzes video content in order to acquire cut/shot information and color histograms, and it also automatically generates a Web page for editing annotations. Audiences can then create annotation data by two methods. The first one helps users create text data such as person/object names, scene descriptions, and comments interactively. The second method facilitates users associating any video fragment with their subjective impressions by just clicking a mouse button. The generated annotation data are accumulated and managed by an XML database connected with iVAS. We also developed some application systems based on annotations, such as video retrieval, video simplification, and video-content-based community support. One of the major advantages of our approach is easy integration of hand-coded and automatically generated (such as color histograms and cut/shot information) annotations. Additionally, since our annotation system is open to the public, we must consider the reliability and correctness of annotation data. We therefore also developed an automatic evaluation method for annotation reliability using users' feedback. In the future, these fundamental technologies will contribute to the formation of new communities centered around video content.
Peña, Carlos; Malm, Tobias
There is an ever growing number of molecular phylogenetic studies published, due to, in part, the advent of new techniques that allow cheap and quick DNA sequencing. Hence, the demand for relational databases with which to manage and annotate the amassing DNA sequences, genes, voucher specimens and associated biological data is increasing. In addition, a user-friendly interface is necessary for easy integration and management of the data stored in the database back-end. Available databases allow management of a wide variety of biological data. However, most database systems are not specifically constructed with the aim of being an organizational tool for researchers working in phylogenetic inference. We here report a new software facilitating easy management of voucher and sequence data, consisting of a relational database as back-end for a graphic user interface accessed via a web browser. The application, VoSeq, includes tools for creating molecular datasets of DNA or amino acid sequences ready to be used in commonly used phylogenetic software such as RAxML, TNT, MrBayes and PAUP, as well as for creating tables ready for publishing. It also has inbuilt BLAST capabilities against all DNA sequences stored in VoSeq as well as sequences in NCBI GenBank. By using mash-ups and calls to web services, VoSeq allows easy integration with public services such as Yahoo! Maps, Flickr, Encyclopedia of Life (EOL) and GBIF (by generating data-dumps that can be processed with GBIF's Integrated Publishing Toolkit).
A web browser is one of the most important internet facilities for surfing the internet. A good web browser must incorporate literally tens of features, such as an integrated search engine, automatic updates, etc. Each year, ten web browsers are formally ranked as the best by various organizations. In this paper, we propose the implementation of the TOPSIS technique to rank ten web browsers. The proposed model of this paper uses five criteria including speed, features, security, technical support and supported configurations. In terms of speed, Safari is the best web browser followed by Google Chrome and Internet Explorer, while Opera is the best web browser when we look into 20 different features. We have also ranked these web browsers using all five categories together and the results indicate that Opera, Internet Explorer, Firefox and Google Chrome are the best web browsers to be chosen.
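For readers unfamiliar with TOPSIS, the sketch below implements the standard technique: vector-normalize the decision matrix, weight it, find the ideal-best and ideal-worst alternatives, and score each alternative by its relative closeness to the ideal. The criterion scores and the three-browser subset are invented for illustration, not the paper's data.

```python
import math

def topsis(matrix, weights, benefit):
    # Vector-normalize each column, then apply the criterion weights.
    ncols = len(matrix[0])
    norms = [math.sqrt(sum(row[j] ** 2 for row in matrix)) for j in range(ncols)]
    v = [[weights[j] * row[j] / norms[j] for j in range(ncols)] for row in matrix]
    # Ideal best/worst per criterion (direction depends on benefit vs cost).
    best = [max(col) if benefit[j] else min(col) for j, col in enumerate(zip(*v))]
    worst = [min(col) if benefit[j] else max(col) for j, col in enumerate(zip(*v))]
    scores = []
    for row in v:
        d_best = math.sqrt(sum((x - b) ** 2 for x, b in zip(row, best)))
        d_worst = math.sqrt(sum((x - w) ** 2 for x, w in zip(row, worst)))
        scores.append(d_worst / (d_best + d_worst))  # closeness in [0, 1]
    return scores

# Columns: speed, features, security (all benefit criteria), equal weights.
alternatives = ["Safari", "Chrome", "Opera"]
matrix = [[9, 6, 7], [8, 7, 8], [6, 9, 8]]
scores = topsis(matrix, [1 / 3] * 3, [True, True, True])
ranking = sorted(zip(alternatives, scores), key=lambda p: -p[1])
print(ranking)
```

A higher closeness score means the alternative is simultaneously nearer the ideal-best and farther from the ideal-worst across all weighted criteria.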
Background: Creating a user-friendly web-based application which executes an R script allows physicians, epidemiologists, and others unfamiliar with the statistical language to perform powerful statistical analyses easily. The geographic mapping of data is an important tool in spatial epidemiological analysis, and the R project includes many tools for such analyses, but few for visualization. Hence, web applications that run R for epidemiological analysis need to be able to present the results in a geographic format. Results: Rwui is a web application for creating web-based applications for running R scripts. We describe updates to Rwui that enable it to create web applications for R scripts which return the results of the analysis to the web page as geographic maps. Conclusions: Rwui enables statisticians to create web applications for R scripts without the need to learn web programming. Creating a web application provides users access to an R-based analysis without the need to learn R. Recent updates to Rwui have increased its applicability in the field of spatial epidemiological analysis.
Rick, Jochen; Guzdial, Mark
Since 1998, we have been developing and researching CoWeb, a version of Wiki designed to support collaborative learning. In this article, we summarize our results of situating CoWeb across the academic landscape of Georgia Tech. In architecture, CoWeb enabled faculty to serve more students in a design-based course. In English composition, a…
The lifecycle of Web-based applications is characterized by frequent changes to content, user interface, and functionality. Updating content and improving the services provided to users drive further development of a Web-based application. The major goal for the success of a Web-based application therefore becomes its evolution. However, development and maintenance of Web-based applications suffer from the underlying document-based implementation model. A disciplined evolution of Web-based applications requires the application of software engineering practice for systematic further development and reuse of software artifacts. In this contribution we suggest adopting the component paradigm for the development and evolution of Web-based applications. The approach is based on a dedicated component technology and component-software architecture. It allows abstracting from many technical aspects related to the Web as an application platform by introducing domain-specific markup languages. These languages allow the description of services, which represent domain components in our Web-component-software approach. Domain experts with limited knowledge of technical details can therefore describe application functionality, and the evolution of orthogonal aspects of the application can be de-coupled. The whole approach is based on XML to achieve the necessary standardization and economic efficiency for use in real-world projects.
When developing web applications using traditional methods, developers need to partition the application logic between client side and server side, then implement these two parts separately (often using two different programming languages) and write the communication code to synchronize the application's state between the two parts. CloudBrowser is a server-centric web framework that eliminates this need for partitioning applications entirely. In CloudBrowser, the application code is executed...
The web and the social web play an increasingly important role as an information source for Members of Parliament and their assistants, journalists, political analysts and researchers. It provides important and crucial background information, like reactions to political events and comments made by the general public. The case study presented in this paper is driven by two European parliaments (the Greek and the Austrian parliaments) and targets an effective exploration of political web archives. In this paper, we describe semantic technologies deployed to ease the exploration of the archived web and social web content and present evaluation results.
Claudia Elena Dinucă
Nowadays, the web is an important part of our daily life. The web is now the best medium for doing business. Large companies rethink their business strategy using the web to improve business. Business carried out on the Web offers potential customers or partners a place where their products and specific business can be found. Business presence through a company web site has several advantages, as it breaks the barrier of time and space compared with the existence of a physical office. To differentiate themselves in the Internet economy, winning companies have realized that e-commerce transactions are more than just buying/selling; appropriate strategies are key to improving competitive power. One effective technique used for this purpose is data mining. Data mining is the process of extracting interesting knowledge from data. Web mining is the use of data mining techniques to extract information from web data. This article presents the three components of web mining: web usage mining, web structure mining and web content mining.
Schatten, Markus; Kakulapati, Vijayalakshmi; Cubrilo, Mirko
Social semantic Web, or Web 3.0, applications have gained major attention from academia and industry in recent times. Such applications try to take advantage of user-supplied metadata, using ideas from the semantic Web initiative, in order to provide better services. An open problem is the formalization of such metadata, due to its complex and often inconsistent nature. A possible solution to inconsistencies is offered by string similarity metrics, which are explained and analyzed. A study of performance and ...
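As background on the class of metrics the abstract refers to, here is a sketch of two widely used string similarity measures, of the kind applied to reconcile inconsistent user-supplied metadata such as tags and names; the example strings are illustrative only.

```python
def levenshtein(a, b):
    # Classic dynamic-programming edit distance (insert/delete/substitute).
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,          # deletion
                           cur[j - 1] + 1,       # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]

def jaccard_bigrams(a, b):
    # Jaccard overlap of character-bigram sets, in [0, 1].
    ga = {a[i:i + 2] for i in range(len(a) - 1)}
    gb = {b[i:i + 2] for i in range(len(b) - 1)}
    return len(ga & gb) / len(ga | gb) if ga | gb else 1.0

print(levenshtein("semantic", "sematic"))            # 1 (one deletion)
print(round(jaccard_bigrams("web 3.0", "web 2.0"), 2))  # 0.5
```

Edit distance penalizes character-level typos, while bigram overlap is more forgiving of word reordering; metadata-matching pipelines commonly combine several such metrics.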
The paper focuses on security and performance concerns in mobile web development. The approach used in the study involved surveying journal publications to identify security and performance concerns. The paper highlights some of the contemporary issues currently faced by application developers as they create, update and maintain mobile web applications, including Cross-Site Scripting, cookie hijacking/theft, location hijacking, history theft, behaviour analysis, session hijacking, API design and security, and the type of web server used.
Cross-platform development frameworks make it possible to develop web applications for multiple platforms with just one code-base. The aim of this thesis is to determine which kind of framework is best suited to bring a web application targeting desktop browsers to the mobile platform. This includes being able to use already developed web components. The evaluation was done by first selecting six frameworks from two different choices of cross-platform technologies. Then a literature study was ...
Avoundjian, Tigran; Khosropour, Christine M; Golden, Matthew R; Barbee, Lindley A; Dombrowski, Julia C
Many health departments use a "reactor grid" to determine which laboratory-reported syphilis serologic test results require investigation. We developed a Web-based tool, the Syphilis Reactor Grid Evaluator (SRGE), to facilitate health department reactor grid evaluations and test the tool using data from Seattle & King County, Washington. We developed SRGE using the R Shiny Web application framework. When populated with a data set including titer results and final disposition codes, SRGE displays the percent of verified early syphilis cases by serologic titer result and patient age in each cell of the grid. The results can be optionally stratified by sex, test type, and previous rapid plasma reagin titer. The impact of closing laboratory results without investigation in cells selected by the user is dynamically computed. The SRGE calculates the percent of all laboratory reports closed ("efficiency gained"), the proportion of all early syphilis cases closed without investigation ("case finding loss"), and the ratio of percent of cases identified for investigation to percent of all laboratory reports investigated ("efficiency ratio"). After defining algorithms, users can compare them side-by-side, combine subgroup-specific algorithms, and export results. We used SRGE to compare the current Public Health-Seattle & King County (PHSKC) reactor grid to 5 alternate algorithms. Of 13,504 rapid plasma reagin results reported to PHSKC from January 1, 2006, to December 31, 2015, 1565 were linked to verified early syphilis cases. Updating PHSKC's current reactor grid could result in an efficiency gain of 4.8% to 25.2% (653-3403 laboratory reports) and case finding loss of 1% to 8.4% (10-99 fewer cases investigated). The Syphilis Reactor Grid Evaluator can be used to rapidly evaluate alternative approaches to optimizing the reactor grid. Changing the reactor grid in King County to close more laboratory results without investigation could improve efficiency with minimal impact on
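The three summary metrics SRGE reports can be reproduced from aggregate counts. The sketch below uses the totals quoted in the abstract (13,504 reports, 1,565 verified cases) together with one of the reported scenarios; the formulas are inferred from the abstract's plain-language definitions, not taken from the SRGE source code.

```python
def grid_metrics(total_reports, total_cases, closed_reports, closed_cases):
    # "Efficiency gained": share of all laboratory reports closed without investigation.
    efficiency_gained = closed_reports / total_reports
    # "Case finding loss": share of all early syphilis cases closed without investigation.
    case_finding_loss = closed_cases / total_cases
    # "Efficiency ratio": % of cases identified for investigation
    # relative to % of reports investigated.
    investigated_reports = total_reports - closed_reports
    investigated_cases = total_cases - closed_cases
    efficiency_ratio = ((investigated_cases / total_cases)
                        / (investigated_reports / total_reports))
    return efficiency_gained, case_finding_loss, efficiency_ratio

eg, cfl, er = grid_metrics(total_reports=13504, total_cases=1565,
                           closed_reports=653, closed_cases=10)
print(f"{eg:.1%}")   # 4.8%
print(f"{cfl:.1%}")  # 0.6%
```

An efficiency ratio above 1 means the algorithm keeps a larger fraction of cases than of the report workload, which is the trade-off the tool lets users explore cell by cell.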
This case-study research aims to evaluate the survey web application of PT. KalbeMorinaga Indonesia. The case study was done because of some performance issues in the application. The problems relate to the ability of the software to speed up data processing and reporting. Research was done on the performance of the software using a combined gray-box approach, that is, evaluating both the test results and the code of the application itself. In addition, test automation software was used, and the automated tests were run several times with different numbers of virtual users. The results obtained were analyzed to serve as a reference for which pages have performance problems; a solution was then applied, the pages were tested again, and the results compared. The final results of the analysis were compared with the results of the first test, and the resulting analysis and recommendations for improvement can be used by the application's developers in the future.
Objectives: The aim of this study was to develop and test a web-based application for the dietary management of patients with diabetic nephropathy. Design: Observational descriptive study. Settings and subjects: RenalSmart® is a web-based application used to assist dietitians in clinical practice, from tertiary to primary ...
Web applications are usually installed on and accessed through a Web server. For security reasons, these Web servers generally provide very few privileges to Web applications, defaulting to executing them in the realm of a guest account. In addition, performance is often a problem, as Web applications may need to be reinitialised with each access. Various solutions have been designed to address these security and performance issues, mostly independently of one another, but most have been language- or system-specific. The X-Switch system is proposed as an alternative Web application execution environment, with more secure user-based resource management, persistent application interpreters and support for arbitrary languages/interpreters. Thus it provides a general-purpose environment for developing and deploying Web applications. The X-Switch system's experimental results demonstrated that it can achieve a high level of performance. Furthermore, it was shown that X-Switch can provide functionality matching that of existing Web application servers, but with the added benefit of multi-user support. Finally, the X-Switch system showed that it is feasible to completely separate the deployment platform from the application code, thus ensuring that the developer does not need to modify his/her code to make it compatible with the deployment platform.
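The persistent-interpreter idea behind X-Switch can be illustrated with a toy pool that creates one long-lived interpreter per user and reuses it across requests, avoiding the per-access reinitialisation cost. All class and method names here are invented for the sketch; the real system manages OS-level processes, not Python objects.

```python
class Interpreter:
    """Stand-in for a language interpreter that is expensive to start."""
    def __init__(self, user):
        self.user = user       # interpreter runs with this user's privileges
        self.requests = 0      # imagine costly initialisation happened here

    def run(self, app):
        self.requests += 1
        return f"{self.user}:{app}:{self.requests}"

class InterpreterPool:
    """Keep one persistent interpreter per user, created on first use."""
    def __init__(self):
        self._pool = {}

    def dispatch(self, user, app):
        interp = self._pool.setdefault(user, Interpreter(user))
        return interp.run(app)

pool = InterpreterPool()
pool.dispatch("alice", "wiki")
print(pool.dispatch("alice", "wiki"))  # same interpreter serves request 2
```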
Prykhodko, Pavlo; Collins, Paula
A great interest of IT engineers at CERN is to simplify access to the Data Quality Monitoring (DQM) applications that usually lie behind several layers of security firewalls. In order to make access simple, and thus help save time for the scientists who rely on this data, an additional Web application had to be developed and tested. The goal of this thesis work was to develop such a Web DQM application for CERN. First, a Web Graphical User Interface (GUI) was developed. In parallel, an Apache server was installed and configured for testing. Moreover, a software program called ROOTJS, which processes and displays CERN data files on the Web, was presented. Through this thesis project, new functionalities were developed to meet the requirements. Furthermore, the ROOTJS program was merged with the Web GUI application and a series of tests was performed to showcase the capabilities of the application developed through this thesis work.
Scotch, Matthew; Yip, Kevin Y.; Cheung, Kei-Hoi
Development of public health informatics applications often requires the integration of multiple data sources. This process can be challenging due to issues such as different file formats, schemas, naming systems, and having to scrape the content of web pages. A potential solution to these system development challenges is the use of Web 2.0 technologies. In general, Web 2.0 technologies are new internet services that encourage and value information sharing and collaboration among individuals. In this case report, we describe the development and use of Web 2.0 technologies including Yahoo! Pipes within a public health application that integrates animal, human, and temperature data to assess the risk of West Nile Virus (WNV) outbreaks. The results of development and testing suggest that while Web 2.0 applications are reasonable environments for rapid prototyping, they are not mature enough for large-scale public health data applications. The application, in fact a “systems of systems,” often failed due to varied timeouts for application response across web sites and services, internal caching errors, and software added to web sites by administrators to manage the load on their servers. In spite of these concerns, the results of this study demonstrate the potential value of grid computing and Web 2.0 approaches in public health informatics. PMID:18755998
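The file-format heterogeneity this report identifies can be shown with a minimal normalisation step: records arriving as CSV and as JSON are mapped to one common schema before integration. The field names (`location`, `dead_birds`, etc.) are invented for illustration, not taken from the described application.

```python
import csv, io, json

def from_csv(text):
    """Normalise one hypothetical CSV source to {site, count} records."""
    return [{"site": r["location"], "count": int(r["n"])}
            for r in csv.DictReader(io.StringIO(text))]

def from_json(text):
    """Normalise one hypothetical JSON source to the same schema."""
    return [{"site": r["loc"], "count": r["dead_birds"]}
            for r in json.loads(text)]

csv_src = "location,n\nA,3\nB,1\n"
json_src = '[{"loc": "C", "dead_birds": 2}]'
merged = from_csv(csv_src) + from_json(json_src)
print(merged)
```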
Mac Vittie, Lori
The web application stack: a growing threat vector. Understand the threat and learn how to defend your organisation. This book is intended for application developers, system administrators and operators, as well as networking professionals who need a comprehensive top-level view of web application security in order to better defend and protect both the 'web' and the 'application' against potential attacks. This book examines the most common, fundamental attack vectors and shows readers the defence techniques used to combat them. Contents: Introduction; Attack Surface; Threat Vectors; Threat Mitigation
Harrison, Katherine J; de Crécy-Lagard, Valérie; Zallot, Rémi
The examination of gene neighborhoods is an integral part of comparative genomics, but no tools to produce publication-quality graphics of gene clusters have been available. Gene Graphics is a straightforward web application for creating such visuals. Supported inputs include National Center for Biotechnology Information (NCBI) gene and protein identifiers with automatic fetching of neighboring information, GenBank files and data extracted from the SEED database. Gene representations can be customized for many parameters, including gene and genome names, colors and sizes. Gene attributes can be copied and pasted for rapid and user-friendly customization of homologous genes between species. In addition to Portable Network Graphics (PNG) and Scalable Vector Graphics (SVG), produced representations can be exported as Tagged Image File Format (TIFF) or Encapsulated PostScript (EPS), formats that are standard for publication. Hands-on tutorials with real-life examples inspired by publications are available for training. Gene Graphics is freely available at https://katlabs.cc/genegraphics/ and source code is hosted at https://github.com/katlabs/genegraphics. Supplementary data are available at Bioinformatics online.
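A gene-neighborhood visual of the kind Gene Graphics produces boils down to emitting arrow shapes into an SVG document. The sketch below is a hypothetical, much-simplified rendering routine, not Gene Graphics' source; coordinates and styling are invented.

```python
def gene_arrow(x, width, strand, color):
    """Return an SVG polygon for one gene; strand +1 points right, -1 left."""
    y, h, head = 20, 16, 8
    if strand > 0:
        pts = [(x, y), (x + width - head, y), (x + width, y + h / 2),
               (x + width - head, y + h), (x, y + h)]
    else:
        pts = [(x + width, y), (x + head, y), (x, y + h / 2),
               (x + head, y + h), (x + width, y + h)]
    path = " ".join(f"{px},{py}" for px, py in pts)
    return f'<polygon points="{path}" fill="{color}"/>'

def render(genes):
    """genes: (x, width, strand, color) tuples -> one SVG document string."""
    body = "".join(gene_arrow(*g) for g in genes)
    return f'<svg xmlns="http://www.w3.org/2000/svg">{body}</svg>'

print(render([(0, 60, +1, "#c33"), (70, 40, -1, "#36c")]))
```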
For a long time, geo-spatial information services have failed to reflect live, on-the-spot conditions or to meet the need for integrated monitoring and real-time information. To tackle the problems of observation sharing and integrated management of space-borne, air-borne, and ground-based platforms, and of efficient spatio-temporal information services, an observation sharing model was proposed. The key technologies in real-time dynamic geographical information systems (GIS), including maximum spatio-temporal-coverage-based optimal layout of the earth-observation sensor Web, task-driven and feedback-based control, real-time access to streaming observations, dynamic simulation, warning and decision support, were detailed. A real-time dynamic Web geographical information system (WebGIS) named GeoSensor and its applications in sensing and management of spatio-temporal information of the Yangtze River basin, including navigation, flood prevention, and power generation, were also introduced.
Keshavan, Anisha; Datta, Esha; M McDonough, Ian; Madan, Christopher R; Jordan, Kesshi; Henry, Roland G
Tissue classification plays a crucial role in the investigation of normal neural development, brain-behavior relationships, and the disease mechanisms of many psychiatric and neurological illnesses. Ensuring the accuracy of tissue classification is important for quality research and, in particular, the translation of imaging biomarkers to clinical practice. Assessment with the human eye is vital to correct various errors inherent to all currently available segmentation algorithms. Manual quality assurance becomes methodologically difficult at a large scale - a problem of increasing importance as the number of data sets is on the rise. To make this process more efficient, we have developed Mindcontrol, an open-source web application for the collaborative quality control of neuroimaging processing outputs. The Mindcontrol platform consists of a dashboard to organize data, descriptive visualizations to explore the data, an imaging viewer, and an in-browser annotation and editing toolbox for data curation and quality control. Mindcontrol is flexible and can be configured for the outputs of any software package in any data organization structure. Example configurations for three large, open-source datasets are presented: the 1000 Functional Connectomes Project (FCP), the Consortium for Reliability and Reproducibility (CoRR), and the Autism Brain Imaging Data Exchange (ABIDE) Collection. These demo applications link descriptive quality control metrics, regional brain volumes, and thickness scalars to a 3D imaging viewer and editing module, resulting in an easy-to-implement quality control protocol that can be scaled for any size and complexity of study. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.
This paper assesses the possibilities of a new approach to controlling map applications on the screen without a locomotive system. At the Department of Geoinformatics at Palacky University there is a project on the usability of eye tracking systems in the geoinformatics and cartographic fields. An eye tracking system is a device for measuring eye/gaze position and eye/gaze movement ("where we are looking"). There are a number of methods and outputs, but the most common are "heat maps" of intensity and/or time. This method was used in the first part, where a number of common web map portals were analyzed, especially the distribution of their tools and functions on the screen. The aim of the research is to localize, by heat maps, the best distribution of control tools for moving the map (the "pan" function). It can reveal how sensitive people are to the placement of control tools across different web pages and platforms. It is a great experience to compare accurate survey data with personal interpretation and knowledge. Based on these results, the next step is the design of "control tools" commanded by the eye-tracking device. Rectangular areas have been placed on the edges of the map (AOIs, areas of interest), with a special function that has a defined time delay. When the user fixates one of these areas, the map automatically moves toward the edge the area is located on, and the time delay prevents accidental movement. The technology for recording eye movements on the screen offers this option: if the layout and function controls of the map are properly defined, only these two systems need to be connected. At this moment there is a technical constraint. The movement-control solution is based on data transmission between the eye-tracking device output and a converter in real time, and real-time transfer is not supported by every SMI (SensoMotoric Instruments) device. More precisely, it is a problem of money, because eye-tracking device and every
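The dwell-delay mechanism described above (a pan fires only after the gaze has rested inside an edge AOI for a defined time) can be sketched as a small state machine over gaze samples. AOI coordinates and the 400 ms threshold are invented values for illustration.

```python
# Edge AOIs as (x0, y0, x1, y1) rectangles on an 800x600 screen (invented).
AOIS = {"left": (0, 0, 50, 600), "right": (750, 0, 800, 600)}
DWELL_MS = 400  # required fixation time before a pan triggers

def aoi_at(x, y):
    """Return the name of the AOI containing the gaze point, or None."""
    for name, (x0, y0, x1, y1) in AOIS.items():
        if x0 <= x <= x1 and y0 <= y <= y1:
            return name
    return None

def pan_events(samples):
    """samples: (t_ms, x, y) gaze points; return pan directions triggered."""
    current, since, fired = None, 0, []
    for t, x, y in samples:
        name = aoi_at(x, y)
        if name != current:          # gaze entered a new region: reset timer
            current, since = name, t
        elif name and t - since >= DWELL_MS and (not fired or fired[-1][0] != since):
            fired.append((since, name))   # dwell satisfied: trigger pan once
    return [n for _, n in fired]

print(pan_events([(0, 10, 300), (200, 10, 300), (450, 10, 300)]))
```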
case, the ordering process is, of course, not fully automated. Standardized products, on the other hand, are easily identified and the cost charged to the print buyer can be retrieved from predefined price lists. Typically, higher volumes will result in more attractive prices. An additional advantage of this type of products is that they are often defined such that they can be produced in bulk using conventional printing techniques. If one wants to automate the ganging, a connection must be established between the on-line ordering and the production planning system. (For digital printing, there typically is no need to gang products since they can be produced more effectively separately.) Many of the on-line print solutions support additional features also available in general purpose e-commerce sites. We here think of the availability of virtual shopping baskets, the connectivity with payment gateways and the support of special facilities for interfacing with courier services (bar codes, connectivity to courier web sites for tracking shipments etc.). Supporting these features also assumes an intimate link with the print production system. Another development that goes beyond the on-line ordering of printed material and the submission of full pages and/or documents, is the interactive, on-line definition of the content itself. Typical applications in this respect are, e.g., the creation of business cards, leaflets, letter heads etc. On a more professional level, we also see that more and more publishing organizations start using on-line publishing platforms to organize their work. These professional platforms can also be connected directly to printing portals and thus enable extra automation. In this paper, we will discuss for each of the different applications presented above (traditional Print Portals, Web2Print applications and professional, on-line publishing platforms) how they interact with prepress and print production systems and how they contribute to the
Deshpande, Yogesh; Murugesan, San; Ginige, Athula; Hansen, Steve; Schwabe, Daniel; Gaedke, Martin; White, Bebo
Web Engineering is the application of systematic, disciplined and quantifiable approaches to development, operation, and maintenance of Web-based applications. It is both a pro-active approach and a growing collection of theoretical and empirical research in Web application development. This paper gives an overview of Web Engineering by addressing the questions: a) why is it needed? b) what is its domain of operation? c) how does it help and what should it do to improve Web application develo...
Moore, D R; Feurer, I D; Zavala, E Y; Shaffer, D; Karp, S; Hoy, H; Moore, D E
Most centers utilize phone or written surveys to screen candidates who self-refer to be living kidney donors. To increase efficiency and reduce resource utilization, we developed a web-based application to screen kidney donor candidates. The aim of this study was to evaluate the use of this web-based application. Method and time of referral were tabulated, and descriptive statistics summarized demographic characteristics. Time series analyses evaluated use over time. Between January 1, 2011 and March 31, 2012, 1200 candidates self-referred to be living kidney donors at our center. Eight hundred one candidates (67%) completed the web-based survey and 399 (33%) completed a phone survey. Thirty-nine percent of donors accessed the application on nights and weekends. Post-implementation of the web-based application, there was a statistically significant increase in use of the web-based application as opposed to telephone contact. Also, there was a significant increase (p = 0.025) in the total number of self-referrals post-implementation, from 61 to 116 per month. An interactive web-based application is an effective strategy for the initial screening of donor candidates. The web-based application increased the ability to interface with donors, process them efficiently and ultimately increased donor self-referral at our center. © Copyright 2012 The American Society of Transplantation and the American Society of Transplant Surgeons.
Web technologies represent one of the increasingly popular choices for developing different types of applications. In this diploma thesis we discuss the development of web, desktop and mobile applications created using the Angular, Electron and NativeScript frameworks, which support the preparation of cross-platform applications. The core field was television listings, so we present each of its processes in detail. We describe the actors involved in the process of cre...
Brown, Titus; Huggins, Jason
To date, the relation between multilingualism and the Semantic Web has not yet received enough attention in the research community. One major challenge for the Semantic Web community is to develop architectures, frameworks and systems that can help in overcoming national and language barriers, facilitating equal access to information produced in different cultures and languages. As such, this volume aims at documenting the state-of-the-art with regard to the vision of a Multilingual Semantic Web, in which semantic information will be accessible in and across multiple languages. The Multiling
Schäuble, Sascha; Stavrum, Anne-Kristin; Bockwoldt, Mathias; Puntervoll, Pål; Heiland, Ines
Systems Biology Markup Language (SBML) is the standard model representation and description language in systems biology. Enriching and analysing systems biology models by integrating the multitude of available data, increases the predictive power of these models. This may be a daunting task, which commonly requires bioinformatic competence and scripting. We present SBMLmod, a Python-based web application and service, that automates integration of high throughput data into SBML models. Subsequent steady state analysis is readily accessible via the web service COPASIWS. We illustrate the utility of SBMLmod by integrating gene expression data from different healthy tissues as well as from a cancer dataset into a previously published model of mammalian tryptophan metabolism. SBMLmod is a user-friendly platform for model modification and simulation. The web application is available at http://sbmlmod.uit.no , whereas the WSDL definition file for the web service is accessible via http://sbmlmod.uit.no/SBMLmod.wsdl . Furthermore, the entire package can be downloaded from https://github.com/MolecularBioinformatics/sbml-mod-ws . We envision that SBMLmod will make automated model modification and simulation available to a broader research community.
Frantz, Albert; Franco, Milvio
.... We used existing Semantic Web tools to construct an ATO knowledge base. The knowledge base is used to select potential air missions to reassign to strike time sensitive targets by the computer...
Huang, Wen-Hao David; Hood, Denice Ward; Yoo, Sun Joo
Situated in the gender digital divide framework, this survey study investigated the role of computer anxiety in influencing female college students' perceptions toward Web 2.0 applications for learning. Based on 432 college students' "Web 2.0 for learning" perception ratings collected by relevant categories of "Unified Theory of Acceptance and Use…
Voumard, Jérémie; Aye, Zar Chi; Derron, Marc-Henri; Jaboyedoff, Michel
Roads and railways are threatened throughout the year by several natural hazards around the world, leading to the closing of transportation corridors, loss of access, travel deviations, and potentially infrastructure damage, loss of human lives, and financial, social and economic consequences. Protection measures used to reduce exposure to natural hazards are usually expensive and cannot be deployed on an entire transportation network. It is thus necessary to choose priority areas where protection measures need to be built. The aim of this study is to propose a user-friendly tool to evaluate and understand the issues and consequences of section closures and the affected parts of a transportation network at a small regional scale. The proposed tool, currently in its design and building phase, will provide ways to simulate different closure scenarios and to analyze their consequences on the transportation network, such as deviating traffic onto other road and railway sections, additional travel time and distance, or accessibility for emergency services like police, firefighters and ambulances. The tool is based on the OpenGeo architecture, which is composed of open-source components. It integrates PostGIS for the database, GeoServer and GeoWebCache as application servers, and GeoExt and OpenLayers for the user interface. Users will be able to attribute quantitative (like road and railway type and closure consequences) and qualitative (like section unavailability duration, season, etc.) data to the different road and railway sections, based on their user rights. They will also be able to evaluate the consequences of different track closures in terms of different scenarios. Once finalized, the goal of this project, which combines natural hazards, traffic and geomatics, is to propose a decision support tool, for public authorities first and for specialists second, so that they can evaluate as easily and accurately as possible the weak points of the transportation
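The closure-scenario analysis described above reduces, at its core, to comparing shortest paths before and after removing a road section. A minimal sketch with a toy road graph (not data from the described tool):

```python
import heapq

def shortest(graph, src, dst):
    """Dijkstra; return total distance or None if dst is unreachable."""
    dist, seen = {src: 0}, set()
    pq = [(0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == dst:
            return d
        if u in seen:
            continue
        seen.add(u)
        for v, w in graph.get(u, []):
            if d + w < dist.get(v, float("inf")):
                dist[v] = d + w
                heapq.heappush(pq, (d + w, v))
    return None

def close_section(graph, a, b):
    """Return a copy of the graph with the undirected edge a-b removed."""
    return {u: [(v, w) for v, w in nbrs if {u, v} != {a, b}]
            for u, nbrs in graph.items()}

# Toy network: distances in km between four junctions.
roads = {"A": [("B", 5), ("C", 2)], "B": [("A", 5), ("D", 1)],
         "C": [("A", 2), ("D", 8)], "D": [("B", 1), ("C", 8)]}
print(shortest(roads, "A", "D"))                           # normal route
print(shortest(close_section(roads, "A", "B"), "A", "D"))  # forced detour
```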
Newton, Richard; Deonarine, Andrew; Wernisch, Lorenz
The R project includes a large variety of packages designed for spatial statistics. Google dynamic maps provide web based access to global maps and satellite imagery. We describe a method for displaying directly the spatial output from an R script on to a Google dynamic map. This is achieved by creating a Java based web application which runs the R script and then displays the results on the dynamic map. In order to make this method easy to implement by those unfamiliar with programming Java based web applications, we have added the method to the options available in the R Web User Interface (Rwui) application. Rwui is an established web application for creating web applications for running R scripts. A feature of Rwui is that all the code for the web application being created is generated automatically so that someone with no knowledge of web programming can make a fully functional web application for running an R script in a matter of minutes. Rwui can now be used to create web applications that will display the results from an R script on a Google dynamic map. Results may be displayed as discrete markers and/or as continuous overlays. In addition, users of the web application may select regions of interest on the dynamic map with mouse clicks and the coordinates of the region of interest will automatically be made available for use by the R script. This method of displaying R output on dynamic maps is designed to be of use in a number of areas. Firstly it allows statisticians, working in R and developing methods in spatial statistics, to easily visualise the results of applying their methods to real world data. Secondly, it allows researchers who are using R to study health geographics data, to display their results directly onto dynamic maps. Thirdly, by creating a web application for running an R script, a statistician can enable users entirely unfamiliar with R to run R coded statistical analyses of health geographics data. Fourthly, we envisage an
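The region-of-interest step described above (mouse clicks on the dynamic map becoming coordinates available to the R script) can be sketched as a bounding-box computation. The argument names passed to the script are invented for the example, not Rwui's actual interface.

```python
def bounding_box(clicks):
    """clicks: [(lat, lon), ...] from the dynamic map; return the box."""
    lats = [lat for lat, _ in clicks]
    lons = [lon for _, lon in clicks]
    return {"south": min(lats), "north": max(lats),
            "west": min(lons), "east": max(lons)}

def r_script_args(box):
    """Format the box as command-line arguments for e.g. `Rscript`."""
    return [f"--{k}={v}" for k, v in sorted(box.items())]

box = bounding_box([(51.5, -0.1), (52.2, 0.4), (51.8, 0.1)])
print(r_script_args(box))
```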
Knipp, D.; Kilcommons, L. M.; Damas, M. C.
We have created a simple and user-friendly web application to visualize output from empirical atmospheric models that describe the lower atmosphere and the Space-Atmosphere Interface Region (SAIR). The Atmospheric Model Web Explorer (AtModWeb) is a lightweight, multi-user, Python-driven application which uses standard web technology (jQuery, HTML5, CSS3) to give an in-browser interface that can produce plots of modeled quantities such as temperature and individual-species and total densities of the neutral and ionized upper atmosphere. Output may be displayed as: 1) a contour plot over a map projection, 2) a pseudo-color plot (heatmap) which allows visualization of a variable as a function of two spatial coordinates, or 3) a simple line plot of one spatial coordinate versus any number of desired model output variables. The application is designed around an abstraction of an empirical atmospheric model, essentially treating the model code as a black box, which makes it simple to add additional models without modifying the main body of the application. Currently implemented are the Naval Research Laboratory NRLMSISE00 model for the neutral atmosphere and the International Reference Ionosphere (IRI). These models are relevant to the Low Earth Orbit environment and the SAIR. The interface is simple and usable, allowing users (students and experts) to specify time and location, and choose between historical (i.e. the values for the given date) or manual specification of whichever solar or geomagnetic activity drivers are required by the model. We present a number of use-case examples from research and education: 1) How does atmospheric density between the surface and 1000 km vary with time of day, season and solar cycle? 2) How do ionospheric layers change with the solar cycle? 3) How does the composition of the SAIR vary between day and night at a fixed altitude?
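The "model as black box" abstraction AtModWeb is built around can be sketched as a common interface behind which each empirical model registers itself, so the application core needs no model-specific code. The classes and the density formula below are illustrative stand-ins, not NRLMSISE00 or IRI.

```python
class AtmosphericModel:
    """Common interface: subclasses implement run() for one location/time."""
    name = "base"

    def run(self, lat, lon, alt_km, **drivers):
        raise NotImplementedError

class ToyMSIS(AtmosphericModel):
    """Crude stand-in: density falls off exponentially with altitude."""
    name = "ToyMSIS"

    def run(self, lat, lon, alt_km, f107=150.0, **drivers):
        return {"density": 1.2 * 2.718 ** (-alt_km / 8.0) * (f107 / 150.0)}

# Registry of available models; adding a model means adding one entry here.
REGISTRY = {m.name: m for m in (ToyMSIS(),)}

def explore(model_name, lat, lon, alt_km, **drivers):
    """The application core: dispatch to whichever model was requested."""
    return REGISTRY[model_name].run(lat, lon, alt_km, **drivers)

print(explore("ToyMSIS", 40.0, -105.0, 0.0))
```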
Jeliazkova, Nina; Jeliazkov, Vedrin
The AMBIT web services package is one of the several existing independent implementations of the OpenTox Application Programming Interface and is built according to the principles of the Representational State Transfer (REST) architecture...
This article focuses on mobile development using Visual Studio 2005, web services and their connection to Oracle Server, aiming to help programmers create simple and useful mobile applications.
National Aeronautics and Space Administration — To finalize a comprehensive NASA Cis-Lunar / Earth-Moon Libration Orbit Reference and Web Application begun using FY13 IRAD funding approved in May 2013. This GSFC...
Hediger, Martin R; De Vico, Luca
We present a web interface for the BioFET-SIM program. The web interface allows one to conveniently set up calculations based on the BioFET-SIM multiple-charges model. As an illustration, two case studies are presented. In the first case, a generic peptide with opposite charges on its two ends is inverted in orientation on a semiconducting nanowire surface, leading to a corresponding change in sign of the computed sensitivity of the device. In the second case, the binding of an antibody/antigen complex on the nanowire surface is studied in terms of orientation and analyte/nanowire surface distance. We demonstrate how the BioFET-SIM web interface can aid in the understanding of experimental data and postulate alternative ways of antibody/antigen orientation on the nanowire surface.
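The sign flip described in the first case study can be illustrated with a toy charge-screening picture: whichever terminal charge sits nearest the wire dominates a distance-weighted sum, so inverting the peptide flips the sign of the response. All numbers are invented; this is not the BioFET-SIM model itself.

```python
from math import exp

def sensitivity(charges_with_distance, decay_nm=1.0):
    """Sum charges weighted by exponential screening with distance (nm)."""
    return sum(q * exp(-d / decay_nm) for q, d in charges_with_distance)

# Toy peptide: +1 on one end, -1 on the other, ends 2 nm apart.
up = [(+1, 0.5), (-1, 2.5)]    # positive end closest to the wire
down = [(-1, 0.5), (+1, 2.5)]  # inverted orientation

assert sensitivity(up) > 0 > sensitivity(down)
print(round(sensitivity(up), 3))
```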
User-centred design is an approach to the development of any kind of human product in which the main idea is to create the product for the end user. This article presents the user-centred design method as applied to developing web mapping services. The method can be split into four main phases: user research, creation of concepts, development with usability research, and launch of the product. The article describes each of these phases, with the aim of providing guidelines for developers and, primarily, of improving the usability of web mapping services.
Cao, Xinhua; Hoo, Kent S., Jr.; Zhang, Hong; Ching, Wan; Zhang, Ming; Wong, Stephen T. C.
We described a web-based data warehousing method for retrieving and analyzing neurological multimedia information. The web-based method supports convenient access, effective search and retrieval of clinical textual and image data, and on-line analysis. To improve the flexibility and efficiency of multimedia information query and analysis, a three-tier, multimedia data warehouse for epilepsy research has been built. The data warehouse integrates clinical multimedia data related to epilepsy from disparate sources and archives them into a well-defined data model.
Background As the “omics” revolution unfolds, the growth in data quantity and diversity is bringing about the need for pioneering bioinformatics software, capable of significantly improving the research workflow. To cope with these computer science demands, biomedical software engineers are adopting emerging semantic web technologies that better suit the life sciences domain. The latter’s complex relationships are easily mapped into semantic web graphs, enabling a superior understanding of collected knowledge. Despite increased awareness of semantic web technologies in bioinformatics, their use is still limited. Results COEUS is a new semantic web framework, aiming at a streamlined application development cycle and following a “semantic web in a box” approach. The framework provides a single package including advanced data integration and triplification tools, base ontologies, a web-oriented engine and a flexible exploration API. Resources can be integrated from heterogeneous sources, including CSV and XML files or SQL and SPARQL query results, and mapped directly to one or more ontologies. Advanced interoperability features include REST services, a SPARQL endpoint and LinkedData publication. These enable the creation of multiple applications for web, desktop or mobile environments, and empower a new knowledge federation layer. Conclusions The platform, targeted at biomedical application developers, provides a complete skeleton ready for rapid application deployment, enhancing the creation of new semantic information systems. COEUS is available as open source at http://bioinformatics.ua.pt/coeus/.
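The triplification step mentioned above (mapping tabular resources onto ontology-style triples) can be sketched with plain tuples. The base URI and the column-to-predicate mapping below are invented for the example, not COEUS's actual scheme.

```python
import csv, io

BASE = "http://example.org/"  # hypothetical namespace

def triplify(text, subject_col):
    """Map each CSV row to (subject, predicate, object) triples.

    The subject column becomes a URI; every other non-empty column
    becomes one predicate-object pair.
    """
    triples = []
    for row in csv.DictReader(io.StringIO(text)):
        subj = BASE + row[subject_col]
        for col, val in row.items():
            if col != subject_col and val:
                triples.append((subj, BASE + col, val))
    return triples

data = "gene,organism,function\nBRCA1,human,DNA repair\n"
for t in triplify(data, "gene"):
    print(t)
```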
Allen, David G; Mahto, Raj V; Otondo, Robert F
Recruitment theory and research show that objective characteristics, subjective considerations, and critical contact send signals to prospective applicants about the organization and available opportunities. In the generating applicants phase of recruitment, critical contact may consist largely of interactions with recruitment sources (e.g., newspaper ads, job fairs, organization Web sites); however, research has yet to fully address how all 3 types of signaling mechanisms influence early job pursuit decisions in the context of organizational recruitment Web sites. Results based on data from 814 student participants searching actual organization Web sites support and extend signaling and brand equity theories by showing that job information (directly) and organization information (indirectly) are related to intentions to pursue employment when a priori perceptions of image are controlled. A priori organization image is related to pursuit intentions when subsequent information search is controlled, but organization familiarity is not, and attitudes about a recruitment source also influence attraction and partially mediate the effects of organization information. Theoretical and practical implications for recruitment are discussed. (c) 2007 APA
The widespread use of the Internet and the World Wide Web has led to the availability of many platforms for developing dynamic Web applications, and to the problem of choosing the platform that is easiest to use for undergraduate students of web application development in tertiary institutions. Students beginning to learn web…
An XML based web application conception model. We present our approach to web application conception and modeling, which is based on the new XML standard for data representation and exchange. The design of a web site is not an easy task; it requires substantial analysis and conceptualization effort. It deals with ...
In this article, the author presents the history of human-to-computer interaction based upon the design of sophisticated computerized speech recognition algorithms. Advancements such as the arrival of cloud-based computing and software like Google's Web Speech API allows anyone with an Internet connection and Chrome browser to take advantage of…
industry-standard approach of Java 2 Enterprise Edition (J2EE) Enterprise JavaBeans (EJB) layered over a Microsoft SQL Server or Oracle database, while... (acronym glossary fragment: Domain Information Sharing; GWT, Google Web Toolkit; FabIL, Fabric Intermediate Language; J2EE, Java 2 Enterprise Edition; EJB, Enterprise JavaBeans; Jif, Java...)
Škuta, Ctibor; Bartůněk, Petr; Svozil, Daniel
Bajt, Susanne K.
The current generation of new students, referred to as the Millennial Generation, brings a new set of challenges to the community college. The influx of these technologically sophisticated students, who interact through the social phenomenon of Web 2.0 technology, bring expectations that may reshape institutions of higher learning. This chapter…
Ataide Martins, João Paulo; Rougeth de Oliveira, Marco Antônio; Oliveira de Queiroz, Mário Sérgio
A web-based application is developed to generate 4D-QSAR descriptors using the LQTA-QSAR methodology, based on molecular dynamics (MD) trajectories and topology information retrieved from the GROMACS package. The LQTAGrid module calculates the intermolecular interaction energies at each grid point, considering probes and all aligned conformations resulting from MD simulations. These interaction energies are the independent variables or descriptors employed in a QSAR analysis. A friendly front end web interface, built using the Django framework and Python programming language, integrates all steps of the LQTA-QSAR methodology in a way that is transparent to the user, and in the backend, GROMACS and LQTAGrid are executed to generate 4D-QSAR descriptors to be used later in the process of QSAR model building. © 2018 Wiley Periodicals, Inc.
Delipetrev, Blagoj; Jonoski, Andreja; Solomatine, Dimitri P.
Cezar Liviu CERVINSCHI
Starting from the idea that Web 2.0 represents “the era of the dynamic web”, this paper proposes to provide arguments (demonstrated by physical results) regarding the question at the foundation of this article. Based on our findings, we can affirm that Web 2.0 is a solution for building powerful and robust software, since the Internet has become more than just a simple presence on the user's desktop that provides easy access to information, services, entertainment, online transactions, e-commerce, e-learning and so on; basically every kind of human or institutional interaction can happen online. This paper studies the impact of two of these branches upon the user: e-commerce and e-testing. The statistical reports are made on different sets of people, while the conclusions are the result of detailed research and study of the applications' behaviour in the actual operating environment.
Jensen, Simon Holm; Madsen, Magnus; Møller, Anders
Feigenbaum, Lee; Martin, Sean; Roy, Matthew N; Szekely, Benjamin; Yung, Wing C
This article presents the design goals and features of the open-source Boca RDF server in the context of a community of cancer-tumor modeling investigators. Boca supplements the desirable data features of the Semantic Web with important enterprise and application features to power a new generation of Semantic-Web-based applications. The data features enable the integration and retrieval of tremendous quantities of diverse data. The enterprise features promote data integrity, fidelity, provenance and robustness. The application features provide for collaborative applications and dynamic user interfaces.
Antonio Vega Corona
The continuous growth of the Internet has driven people all around the globe to perform transactions on-line, search for information, or navigate using a browser. As more people feel comfortable using a Web browser, more software companies are trying to offer Web interfaces as an alternative way to provide access to their applications. The nature of the Web connection and the restrictions imposed by the available bandwidth make the successful integration of Web applications and database systems critical. Because popular database applications provide a user interface to edit and maintain the information in the database, and because each column in a database table maps to a graphic user interface control, the deployment of these applications can be time consuming; appropriate field validation and referential integrity rules must be observed. An object-oriented design is proposed to facilitate the development of applications that use database systems.
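The proposed object-oriented design can be sketched as a mapping from table columns to form fields, each carrying its own validator; the class and field names below are illustrative assumptions, not the paper's actual design:

```python
class Field:
    """One database column, mapped to one GUI control with a validator."""
    def __init__(self, name, validator=lambda v: v is not None):
        self.name = name
        self.validator = validator

class Form:
    """Aggregates the fields of one table and validates a record against them."""
    def __init__(self, fields):
        self.fields = fields

    def invalid_fields(self, record):
        """Return the names of fields whose values fail validation."""
        return [f.name for f in self.fields
                if not f.validator(record.get(f.name))]
```

Centralizing validation per field this way is what lets field rules and referential-integrity checks be reused across the generated forms rather than rewritten per screen.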
Andreeva, J; Dzhunov, I; Karavakis, E; Kokoszkiewicz, L; Nowotka, M; Saiz, P; Tuckett, D
Mortensen, L. H.; Qin, J.; Krogh, Paul Henning
In 2006, the European Council established a mandatory target of 20 % renewable energy of the total energy consumption by 2020. Part of the replacement is burning biomass for heating and electricity instead of fossil fuels. Whole-tree biomass harvesting for biofuel combustion intensifies removal...... can facilitate an increase in the bacteria to fungi ratio with possible cascading effects for the soil food web structure. This is tested by applying ash of different concentrations to experimental plots in a coniferous forest. During the course of the project soil samples will be collected...... with varying intervals and subsequently analyzed. The food web analysis includes several trophic levels; bacteria/fungi, protozoa, nematodes, enchytraeids, microarthropods and arthropods. The initial results indicate that bacteria and protozoa are stimulated in the uppermost soil layer (0-3 cm) two months...
Rodriguez-Gil, Luis; Orduna, Pablo; Bollen, Lars; Govaerts, Sten; Holzer, Adrian; Gillet, Dennis; Lopez-de-Ipina, Diego; Garcia-Zubia, Javier
Developing educational apps that cover a wide range of learning contexts and languages is a challenging task. In this paper, we introduce the AppComposer Web app to address this issue. The AppComposer aims at empowering teachers to easily translate and adapt existing apps that fit their educational
Georgiev, Martin; Jana, Suman; Shmatikov, Vitaly
Georgiev, Martin; Jana, Suman; Shmatikov, Vitaly
The Internet has now become one of the most effective options for disseminating information, because it encompasses a very large range of networks. Among the information widely available on the Internet is information relating to the world of communication, especially the field of on-line credit sales. This information is conveyed through Web sites, one of the media for storing information on the Internet. Developed countries in Europe implement cellular communication technology in...
Siddiqui, Ahmad Tasnim; Aljahdali, Sultan
Today the Web is the best medium of communication in modern business. Many companies are redefining their business strategies to improve their business output. Business over the Internet provides an opportunity for customers and partners to find specific products and businesses. Nowadays online business breaks the barriers of time and space, as compared to the physical office. Big companies around the world are realizing that e-commerce is not just buying and selling over the Internet, rather ...
Laoui, Abdel; Polyakov, Valery R
Web services are a technology that enables the integration of applications running on different platforms, primarily by using XML to enable communication among different computers over the Internet. A large number of applications were designed as stand-alone systems before the concept of Web services was introduced, and it is a challenge to integrate them into larger computational networks. A generally applicable method of wrapping stand-alone applications into Web services was developed and is described. To test the technology, it was applied to QikProp for DOS (Windows). Although the performance of the application did not change when it was delivered as a Web service, this form of deployment offered several advantages, such as simplified and centralized maintenance, a smaller number of licenses, and practically no training for the end user. Because almost any legacy application can be wrapped as a Web service using the described approach, this form of delivery may be recommended as a global alternative to traditional deployment solutions. Copyright © 2011 Wiley Periodicals, Inc.
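The wrapping pattern described amounts to invoking the stand-alone executable from a service handler and returning its output. A minimal sketch, using `echo` as a stand-in for the legacy program; the real QikProp invocation and the surrounding Web service plumbing would differ:

```python
import subprocess

def run_legacy(args):
    """Invoke a stand-alone command-line program and capture its stdout.

    'echo' stands in here for the legacy executable being wrapped; a real
    wrapper would substitute the program's path and argument conventions.
    """
    result = subprocess.run(["echo"] + list(args),
                            capture_output=True, text=True, check=True)
    return result.stdout

# A Web service endpoint would call run_legacy() with the parsed request
# payload and return the captured output as the response body, which is how
# centralized maintenance and licensing are achieved: only the server hosts
# the executable.
```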
The Library of Universitas Pendidikan Indonesia (UPI) has a quality policy commitment to continuous improvement in every area and process, which can be achieved by continuously optimizing organizational learning. Web 2.0 applications can support the organizational learning process because they are characterized by reading and writing and offer flexibility in time of use, but any application must accord with the culture and character of the organization. Therefore, this study aimed to find the Web 2.0 applications that can be applied to organizational learning in the Library of UPI. The method used is a mixed method with qualitative and quantitative approaches. The research stages follow the planning and support phases of the Web 2.0 Tools Implementation Model. The results showed that Web 2.0 can be applied to organizational learning in the UPI Library. This conclusion rests on the good organizational culture of the UPI Library and the very good attitude of its staff toward the use of the Internet and computers. The Web 2.0 applications that can be used by the UPI Library are blogs, online forums, and wikis as primary tools, with Facebook, YouTube, chat applications, Twitter, and Instagram as supporting tools.
In this paper we survey the recent activities and achievements of our research group in the deployment of XML-related technologies in Cultural Heritage applications concerning the encoding of temporal semantics in Web documents. In particular we review "The Valid Web", an XML/XSL infrastructure we defined and implemented for the definition and management of historical information within multimedia documents available on the Web, and its further extension to the effective encoding of advanced temporal features like indeterminacy, multiple granularities and calendars, enabling efficient processing in a user-friendly Web-based environment. Potential uses of the developed infrastructures include a broad range of applications in the cultural heritage domain, where the historical perspective is relevant, with potentially positive impacts on E-Education and E-Science.
Stocker, Gernot; Rieder, Dietmar; Trajanoski, Zlatko
ClusterControl is a web interface that simplifies distributing and monitoring bioinformatics applications on Linux cluster systems. We have developed a modular concept that enables the integration of command-line-oriented programs into the application framework of ClusterControl. The system facilitates the integration of different applications, accessed through one interface and executed on a distributed cluster system. The package is based on freely available technologies such as Apache as the web server, PHP as the server-side scripting language, and OpenPBS as the queuing system, and is available free of charge for academic and non-profit institutions: http://genome.tugraz.at/Software/ClusterControl
Russell, Glenn [Idaho National Lab. (INL), Idaho Falls, ID (United States)
Jensen, Casper Svenning; Møller, Anders; Su, Zhendong
Zhou, W.; Pierre, G.E.O.; Chi, C.-H.
NoSQL Cloud data services provide scalability and high availability properties for web applications but at the same time they sacrifice data consistency. However, many applications cannot afford any data inconsistency. CloudTPS is a scalable transaction manager to allow cloud database services to
Web applications need better user interfaces to be interactive and attractive. A new approach to dimensional enhancement is proposed: 2.5D, "a 2D display of a virtual 3D environment", which can be implemented in social networking sites and, further, in other system applications.
iLM is a Web based application for representation, management and sharing of IMS LIP conformant user profiles. The tool is developed using a service oriented architecture with emphasis on the easy data sharing. Data elicitation from user profiles is based on the utilization of XQuery scripts and sharing with other applications is achieved through…
Goodall, J. L.; Castronova, A. M.; Huynh, N.; Caicedo, J. M.
Management of water systems often requires the integration of data and models across a range of sources and disciplinary expertise. Service-Oriented Architectures (SOA) have emerged as a powerful paradigm for providing this integration. Including models within a SOA presents challenges because services are not well suited for applications that require state management and large data transfers. Despite these challenges, thoughtful inclusion of models as resources within a SOA could have distinct advantages that center on the idea of abstracting complex computer hardware and software from service consumers while, at the same time, providing powerful resources to client applications. With these advantages and challenges of using models within SOA in mind, this work explores the potential of a modeling service standard as a means for integrating models as resources within SOA. Specifically, we investigate the use of the Open Geospatial Consortium (OGC) Web Processing Service (WPS) standard for exposing models as web services. Through extension of a Python-based implementation of WPS (called pyWPS), we present a demonstration of the methodology through a case study involving a storm event that floods roads and disrupts travel in Columbia, SC. The case study highlights the benefit of an urban infrastructure system with its various subsystems (stormwater, transportation, and structures) interacting and exchanging data seamlessly.
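The idea of exposing a model through a WPS-style Execute request can be sketched without pyWPS itself: parse the request's inputs, run the model, and serialize the outputs. The element names and the toy runoff model below are simplifications for illustration, not the actual OGC WPS schema:

```python
import xml.etree.ElementTree as ET

def run_runoff_model(rainfall_mm, runoff_coeff):
    """Toy 'model' standing in for a real hydrologic simulation."""
    return rainfall_mm * runoff_coeff

def handle_execute(request_xml):
    """Parse a simplified WPS-style Execute request, run the model,
    and return a simplified response document."""
    root = ET.fromstring(request_xml)
    inputs = {e.get("id"): float(e.text) for e in root.iter("Input")}
    result = run_runoff_model(inputs["rainfall"], inputs["coeff"])
    return f'<ExecuteResponse><Output id="runoff">{result}</Output></ExecuteResponse>'
```

The benefit the paper points to is visible even at this scale: the client exchanges only small request/response documents, while the model's state and compute stay behind the service boundary.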
Moniz, L.J.; Cooch, E.G.; Ellner, S.P.; Nichols, J.D.; Nichols, J.M.
In this paper we use information theory techniques on time series of abundances to determine the topology of a food web. At the outset, the food web participants (two consumers, two resources) are known; in addition we know that each consumer prefers one of the resources over the other. However, we do not know which consumer prefers which resource, and if this preference is absolute (i.e., whether or not the consumer will consume the non-preferred resource). Although the consumers and resources are identified at the beginning of the experiment, we also provide evidence that the consumers are not resources for each other, and the resources do not consume each other. We do show that there is significant mutual information between resources; the model is seasonally forced and some shared information between resources is expected. Similarly, because the model is seasonally forced, we expect shared information between consumers as they respond to the forcing of the resources. The model that we consider does include noise, and in an effort to demonstrate that these methods may be of some use in other than model data, we show the efficacy of our methods with decreasing time series size; in this particular case we obtain reasonably clear results with a time series length of 400 points. This approaches ecological time series lengths from real systems.
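The core computation, estimating mutual information between two abundance time series, can be sketched with a simple histogram estimator; the numpy approach and bin count below are illustrative assumptions, not the paper's exact method:

```python
import numpy as np

def mutual_information(x, y, bins=8):
    """Histogram estimate of I(X;Y) in bits between two time series."""
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal distribution of x
    py = pxy.sum(axis=0, keepdims=True)   # marginal distribution of y
    nz = pxy > 0                          # avoid log(0) on empty cells
    return float((pxy[nz] * np.log2(pxy[nz] / (px * py)[nz])).sum())
```

With series of about 400 points, the length the paper finds sufficient, a consumer's abundances should share markedly more information with its preferred resource than with an unrelated series, which is the signal used to infer the web's topology.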
Dian Atnantomi Wiliyanto
This research was conducted to determine the effectiveness of a web-based expert system application for the identification of, and intervention for, children with special needs in inclusive schools. Forty teachers from inclusive schools in Surakarta participated in this research. The results showed that: (1) the application was suited to the needs of teachers/officers (50%, excellent criteria); (2) it was worthwhile for the identification of children with special needs (50%, excellent criteria); (3) it was easy to use (52.5%, good criteria); and (4) its decision making was accurate (52.5%, good criteria). This shows that the web-based expert system application is effective for teachers in inclusive schools conducting identification and intervention, with an average percentage above 50%.
Gomez, Fabinton Sotelo; Ordóñez, Armando
Previously a framework for integrating web resources providing educational services in dotLRN was presented. The present paper describes the application of this framework in a rural school in Cauca--Colombia. The case study includes two web resources about the topic of waves (physics) which is oriented in secondary education. Web classes and…
Mao, Jian; Chen, Yue; Shi, Futian; Jia, Yaoqi; Liang, Zhenkai
Web applications have become the foundation of many types of systems, ranging from cloud services to Internet of Things (IoT) systems. Due to the large amount of sensitive data processed by web applications, user privacy emerges as a major concern in web security. Existing protection mechanisms in modern browsers, e.g., the same origin policy, prevent the users’ browsing information on one website from being directly accessed by another website. However, web applications executed in the same browser share the same runtime environment. Such shared states provide side channels for malicious websites to indirectly figure out the information of other origins. Timing is a classic side channel and the root cause of many recent attacks, which rely on the variations in the time taken by the systems to process different inputs. In this paper, we propose an approach to expose the timing-based probing attacks in web applications. It monitors the browser behaviors and identifies anomalous timing behaviors to detect browser probing attacks. We have prototyped our system in the Google Chrome browser and evaluated the effectiveness of our approach by using known probing techniques. We have applied our approach on a large number of top Alexa sites and reported the suspicious behavior patterns with corresponding analysis results. Our theoretical analysis illustrates that the effectiveness of the timing-based probing attacks is dramatically limited by our approach. PMID:28245610
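A toy version of the monitoring idea: record when a page calls a high-resolution timer and flag bursts of calls. This is a crude stand-in for the browser-level anomaly detection the paper prototypes in Chrome; the window and threshold values are arbitrary assumptions:

```python
def detect_timing_probe(events, window=1.0, threshold=50):
    """Flag bursts of high-resolution timer calls within a sliding window.

    events: iterable of (api_name, timestamp_seconds) pairs observed from a
    page. Returns True if `threshold` or more timer calls fall inside any
    `window`-second span, a heuristic signature of timing-based probing.
    """
    times = sorted(t for name, t in events if name == "performance.now")
    start = 0
    for end, t in enumerate(times):
        while t - times[start] > window:
            start += 1
        if end - start + 1 >= threshold:
            return True
    return False
```

A real detector would of course instrument the browser itself and combine several behavioral features, rather than count one API.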
Chu, Samuel Kai Wah; Woo, Matsuko; King, Ronnel B; Choi, Stephen; Cheng, Miffy; Koo, Peggy
This study surveyed Web 2.0 application in three types of selected health or medical-related organisations such as university medical libraries, hospitals and non-profit medical-related organisations. Thirty organisations participated in an online survey on the perceived purposes, benefits and difficulties in using Web 2.0. A phone interview was further conducted with eight organisations (26.7%) to collect information on the use of Web 2.0. Data were analysed using both quantitative and qualitative approaches. Results showed that knowledge and information sharing and the provision of a better communication platform were rated as the main purposes of using Web 2.0. Time constraints and low staff engagement were the most highly rated difficulties. In addition, most participants found Web 2.0 to be beneficial to their organisations. Medical-related organisations that adopted Web 2.0 technologies have found them useful, with benefits outweighing the difficulties in the long run. The implications of this study are discussed to help medical-related organisations make decisions regarding the use of Web 2.0 technologies. © 2011 The authors. Health Information and Libraries Journal © 2011 Health Libraries Group.
Setton, Robert; Tierney, Christina; Tsai, Tony
To evaluate the validity of fertility web sites and applications (apps) by comparing the predicted fertile window of these modalities to the actual fertile window of a standard 28-day cycle. This was a descriptive study. The top resulting free web sites and electronic apps downloadable to a cellular phone that provide calendars for fertility and ovulation prediction were assessed. Cycles were standardized to 28 days in length, 4 days of menses, and the last menstrual period was set to January 1, 2015. The predicted date of ovulation and fertility window generated were compared with an actual estimated date of ovulation on cycle day 15, January 15, and a fertile window consisting of cycle day 10 to cycle day 15, the day of ovulation plus the preceding 5 cycle days, January 10-15. Data from 20 web sites and 33 apps were collected. Of all the web sites and apps used, one web site and three apps predicted the precise fertile window. Web sites and electronic apps used by the general public to predict fertile windows are generally inaccurate, although the clinical effect of this inaccuracy is unknown. Although they all include the most fertile cycle day, the range of the fertility window varies widely. Patients who are trying to conceive with the assistance of calendars generated from web sites and electronic apps should be counseled on the inaccuracy of these modalities.
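The study's comparison baseline is simple calendar arithmetic: ovulation on cycle day 15 of a 28-day cycle (14 days after the last menstrual period began), with the fertile window spanning that day plus the preceding five days. A sketch of that baseline:

```python
from datetime import date, timedelta

def fertile_window(lmp, cycle_length=28):
    """Return (window_start, ovulation_day) for a given last menstrual period.

    Ovulation is assumed to occur 14 days before the next period; the fertile
    window is the ovulation day plus the preceding 5 days, as in the study.
    """
    ovulation = lmp + timedelta(days=cycle_length - 14)
    return ovulation - timedelta(days=5), ovulation
```

For the standardized cycle in the study (last menstrual period January 1, 2015), this reproduces the January 10 to January 15 reference window against which the sites and apps were judged.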
Web Services based architectures have already been established as the preferred way to integrate SOA-specific components, from front-end to back-end business services. One of the key elements of such an architecture is data-based, or entity, services. In this context, the SDO standard and SDO-related technologies have been confirmed as a possible approach to aggregating such an enterprise-wide federation of data services, mainly backed by database servers but not limited to them. In what follows, we discuss an architectural proposal based on the SDO approach to seamlessly integrate presentation and data services within an enterprise SOA context. In this way we outline the benefits of a common end-to-end data integration strategy. We also argue that using HTML5-based clients as front-end services, in conjunction with SDO data services, could be an effective strategy for adopting mobile computing in the enterprise context.
Pritchett, Christal C.; Wohleb, Elisha C.; Pritchett, Christopher G.
This research study was designed to examine the degree of perceived importance of interactive technology applications among various groups of certified educators; the degree to which education professionals utilized interactive online technology applications and to determine if there was a significant difference between the different groups based…
Ruben Peredo Valderrama, Alejandro Canales Cruz
In this paper a new architecture for the development of Semantic Web applications for decision making using the paradigm of Web-Based Education (WBE) is presented. This architecture is based on the IEEE 1484 LTSA (Learning Technology System Architecture) specification, a Multi-Agent System (MAS), and software components named Intelligent Reusable Learning Components Object Oriented (IRLCOO). IRLCOO are a special type of Sharable Content Object (SCO) used as composition units under the Sharable Content Object Reference Model (SCORM). SCORM is used to create reusable and interoperable learning content. The new architecture is oriented toward offering interoperability at the application level under the philosophy of Service-Oriented Architecture (SOA).
Watanabe, Tetsuya; Araki, Kosuke; Yamaguchi, Toshimitsu; Minatani, Kazunori
We have developed software that uses the R statistics environment to automatically generate tactile graphs, i.e. graphs that can be read by blind people using their sense of touch. We released this software as a Web application to make it available to anyone, from anywhere. This Web application can automatically generate images for tactile graphs from numerical data in a CSV file. It is currently able to generate four types of graph: scatter plots, line graphs, bar charts and pie charts.
Friberg, P. A.; Luis, R. S.; Quintiliani, M.; Lisowski, S.; Hunter, S.
Recently, a novel set of modules has been included in the Open Source Earthworm seismic data processing system, supporting the use of web applications. These include the Mole sub-system, for storing relevant event data in a MySQL database (see M. Quintiliani and S. Pintore, SRL, 2013), and an embedded web server, Moleserv, for serving such data to web clients in QuakeML format. These modules have enabled, for the first time using Earthworm, the use of web applications for seismic data processing. These can greatly simplify the operation and maintenance of seismic data processing centers by having one or more servers provide the relevant data, as well as the data processing applications themselves, to client machines running arbitrary operating systems. Web applications with secure online web access allow operators to work anywhere, without the often cumbersome and bandwidth-hungry use of secure shell or virtual private networks. Furthermore, web applications can seamlessly access third-party data repositories to acquire additional information, such as maps. Finally, HTML email brought the possibility of specialized web applications to be used in email clients. This is the case of EWHTMLEmail, which produces event notification emails that are in fact simple web applications for plotting relevant seismic data. Providing web services as part of Earthworm has enabled a number of other tools as well. One is ISTI's EZ Earthworm, a web-based command and control system for an otherwise command-line-driven system; another is a waveform web service. The waveform web service serves Earthworm data to additional web clients for plotting, picking, and other web-based processing tools. The current Earthworm waveform web service hosts an advanced plotting capability for providing views of event-based waveforms from a Mole database served by Moleserv. The current trend towards the usage of cloud services supported by web applications is driving improvements in Java
Wan Yusryzal Wan Ibrahim
Spatiotemporal changes are very important information revealing the characteristics of the urbanization process. Sharing this information is beneficial for public awareness, which in turn improves public participation in adaptive management for the spatial planning process. Open-source software and web applications are freely available tools that can be the best medium for any individual or agency to share this important information. The objective of this paper is to discuss spatiotemporal land use change in Iskandar Malaysia using open-source GIS (Quantum GIS) and to publish the results through a web application (mash-up). Land use maps from 1994 to 2011 were developed and analyzed to show the landscape change of the region. Subsequently, a web application was set up to distribute the findings of the study. The results show significant changes of land use in the study area, especially the decline of agricultural and natural land converted to urban land uses. Residential and industrial areas largely replaced the agricultural and natural areas, particularly along the coastal zone of the region. This information is published through an interactive GIS web application in order to share it with the public and stakeholders. There are some limitations of web applications, but these do not outweigh the advantages of using them. The integration of open-source GIS and web applications is very helpful in sharing planning information, particularly in a study area that experiences rapid land use and land cover change. Basic information from this study is vital for further work, such as projecting future land use change and other related studies in the area.
Lobach, D F; Spell, R U; Hales, J W; Rabold, J S
The number of health-related Web sites on the Internet is increasing. Incorporating these sites into clinical decision support systems and other health care applications can significantly enhance the educational and instructional value of such systems. While search engines exist for finding sites and criteria are available for assessing site quality, few tools are available for managing Web-based health care information. Management of Web-based information is particularly challenging because the information is continually changing and new resources are continually being added. In this paper, we describe the development and use of a Web-link manager for health care applications. This system retains search strategies for repeated use, catalogues search results in a search results database, accommodates tracking of site review and use status, and provides periodic checking of link integrity for sites that are used in local applications. The Web-link manager is currently in use to manage the links used in a clinical decision support system that presents clinical practice guidelines interactively to clinicians at the point of care.
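Periodic link-integrity checking of the kind described can be sketched with a HEAD request per stored URL, using only the standard library; the real system's scheduling and database bookkeeping are omitted here:

```python
import urllib.error
import urllib.request

def link_is_live(url, timeout=5):
    """Return True if a HEAD request to url succeeds with a non-error status.

    Malformed URLs and network failures are reported as dead links rather
    than raised, so a periodic sweep over a results database never aborts.
    """
    try:
        req = urllib.request.Request(url, method="HEAD")
        with urllib.request.urlopen(req, timeout=timeout) as resp:
            return resp.status < 400
    except (urllib.error.URLError, ValueError):
        return False
```

A scheduler would run this over every site referenced by local applications and flag failures for review, which is the "periodic checking of link integrity" the system provides.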
Emily J Richardson
Whole-genome-shotgun (WGS) metagenomics experiments produce DNA sequence data from entire ecosystems and provide a huge amount of novel information. Gene discovery projects require up-to-date information about sequence homology and domain structure for millions of predicted proteins, presented in a simple, easy-to-use system. There is a lack of simple, open, flexible tools that allow the rapid sharing of metagenomics datasets with collaborators in a format they can easily interrogate. We present Meta4, a flexible and extensible web application that can be used to share and annotate metagenomic gene predictions. Proteins and predicted domains are stored in a simple relational database, with a dynamic front end which displays the results in an internet browser. Web services are used to provide up-to-date information about the proteins from homology searches against public databases. Information about Meta4 can be found on the project website (http://www.ark-genomics.org/bioinformatics/meta4), code is available on Github (https://github.com/mw55309/meta4), a cloud image is available, and an example implementation can be seen at http://www.ark-genomics.org/tools/meta4
Jeliazkova, Nina; Jeliazkov, Vedrin
The AMBIT web services package is one of several existing independent implementations of the OpenTox Application Programming Interface and is built according to the principles of the Representational State Transfer (REST) architecture. The Open Source Predictive Toxicology Framework, developed by the partners in the EC FP7 OpenTox project, aims at providing unified access to toxicity data and predictive models, as well as validation procedures. This is achieved by: i) an information model, based on a common OWL-DL ontology; ii) links to related ontologies; iii) data and algorithms, available through a standardized REST web services interface, where every compound, data set or predictive method has a unique web address, used to retrieve its Resource Description Framework (RDF) representation or initiate the associated calculations. The AMBIT web services package has been developed as an extension of AMBIT modules, adding the ability to create (Quantitative) Structure-Activity Relationship (QSAR) models and providing an OpenTox API compliant interface. The representation of data and processing resources in the W3C Resource Description Framework facilitates integrating the resources as Linked Data. By uploading datasets with chemical structures and an arbitrary set of properties, they become automatically available online in several formats. The services provide unified interfaces to several descriptor calculation, machine learning and similarity searching algorithms, as well as to applicability domain and toxicity prediction models. All Toxtree modules for predicting the toxicological hazard of chemical compounds are also integrated within this package. The complexity and diversity of the processing is reduced to the simple paradigm "read data from a web address, perform processing, write to a web address". The online service makes it easy to run predictions without installing any software, and to share datasets and models online. The downloadable web application...
The thesis presents the Oracle ADF framework, its technological background, architecture and general features. For user interface implementation, the ADF framework provides the ADF Faces RC framework, a key component for developing RIA applications that combine the best features of traditional desktop applications with access via the Internet. We also describe the integrated development environment Oracle JDeveloper, which covers the entire ADF framework and is the preferred integrated development environ...
The World Wide Web has changed research habits, and these changes were further expanded when "Web 2.0" became popular in 2005. Bibliometrics is a helpful tool used for describing patterns of publication, for interpreting progression over time, and the geographical distribution of research in a given field. Few studies employing bibliometrics, however, have been carried out on the correlative nature of scientific literature and Web 2.0. The aim of this bibliometric analysis was to provide an overview of Web 2.0 implications in the biomedical literature. The objectives were to assess the growth rate of literature, key journals, authors, and country contributions, and to evaluate whether the various Web 2.0 applications were expressed within this biomedical literature, and if so, how. A specific query with keywords chosen to be representative of Web 2.0 applications was built for the PubMed database. Articles related to Web 2.0 were downloaded in Extensible Markup Language (XML), processed through custom PHP (hypertext preprocessor) scripts, then imported into Microsoft Excel 2010 for data processing. A total of 1347 articles were included in this study. The number of articles related to Web 2.0 has been increasing from 2002 to 2012 (the average annual growth rate was 106.3%, with a maximum of 333% in 2005). The United States was by far the predominant country for authors, with 514 articles (54.0%; 514/952). The second and third most productive countries were the United Kingdom and Australia, with 87 (9.1%; 87/952) and 44 articles (4.6%; 44/952), respectively. Distribution of the number of articles per author showed that the core population of researchers working on Web 2.0 in the medical field could be estimated at approximately 75. In total, 614 journals were identified during this analysis. Using Bradford's law, 27 core journals were identified, among which three (Studies in Health Technology and Informatics, Journal of Medical Internet Research, and Nucleic Acids
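Growth-rate figures like those reported above are simple to reproduce for any yearly publication counts. The sketch below is illustrative only (it is not the study's PHP/Excel pipeline, and the sample counts in the test are made up); it computes year-over-year and average annual growth rates for consecutive years:

```python
# Sketch: year-over-year growth rates from yearly article counts.
def growth_rates(counts):
    """Return {year: percent growth vs the previous year}.
    Assumes the dict covers consecutive years."""
    years = sorted(counts)
    return {
        y: 100.0 * (counts[y] - counts[prev]) / counts[prev]
        for prev, y in zip(years, years[1:])
    }

def average_growth(counts):
    """Average annual growth rate, in percent."""
    rates = growth_rates(counts)
    return sum(rates.values()) / len(rates)
```

This mirrors how an "average annual growth rate" headline number condenses a decade of counts into one figure.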
Espinosa R, Alfredo; Silva F, Brisa M; Quintero R, Agustin [Instituto de Investigaciones Electricas, Cuernavaca, Morelos (Mexico)
This article describes a technique for developing a Web application for a real-time information system that allows remote, concurrent connection of different network equipment to the system's historical database, without installing any software component on the remote equipment of the user performing the query. It defines and establishes the software architecture that enables the development of the Web application, the analysis stages, and the operation of the technology to be used, as well as the design, development and implementation of the application. Finally, the results obtained with the development of the Web application for a real-time information system are described.
Ellis, Jarrett T.
The U.S. Geological Survey (USGS) maintains and operates more than 8,200 continuous streamgages nationwide. Types of data that may be collected, computed, and stored for streamgages include gage height (water-surface elevation), streamflow, and water quality. The streamflow data allow scientists and engineers to calculate streamflow statistics at each streamgage location, such as the 1-percent annual exceedance probability flood (also known as the 100-year flood), the mean flow, and the 7-day, 10-year low flow, which managers use to make informed water-resource decisions. Researchers, regulators, and managers also commonly need physical characteristics (basin characteristics) that describe the unique properties of a basin. Common uses for streamflow statistics and basin characteristics include hydraulic design, water-supply management, water-use appropriations, and flood-plain mapping for establishing flood-insurance rates and land-use zones. The USGS periodically publishes reports that update the values of basin characteristics and streamflow statistics at selected gaged locations (locations with streamgages), but these studies usually update only a subset of streamgages, making data retrieval difficult. Additionally, streamflow statistics and basin characteristics are most often needed at ungaged locations (locations without streamgages), for which published streamflow statistics and basin characteristics do not exist. Missouri StreamStats is a web-based geographic information system that was created by the USGS in cooperation with the Missouri Department of Natural Resources to provide users with access to an assortment of tools that are useful for water-resources planning and management. StreamStats allows users to easily obtain the most recent published streamflow statistics and basin characteristics for streamgage locations and to automatically calculate selected basin characteristics and estimate streamflow statistics at ungaged
The main objective of the article is to provide insight into technologies and approaches available to maintain consistent state on both the client and server sides. The article describes basic difficulties of RIA application state persistence and offers approaches to overcoming them using asynchronous data transmission, synchronization channels, and other browser capabilities available to the user.
This book is for professional PHP developers who wish to master the powerful Yii 2 application framework. It is assumed that you have knowledge of object-oriented programming. The previous version of the Yii framework is only briefly mentioned, but it'll be even easier to grasp Yii 2 with knowledge of Yii 1.1.x.
Artzi, Shay; Dolby, Julian; Jensen, Simon Holm
The study aims to investigate whether using Web 2.0 applications promotes reflective thinking skills for higher education students in a faculty of education. Although the literature reveals that technology integration is a trend in higher education, and researchers and educators have increasingly shared their ideas and examples of implementations of Web…
Amy C Robertson
Feedback, especially timely, specific, and actionable feedback, frequently does not occur. Efforts to better understand methods to improve the effectiveness of feedback are an important area of educational research. This study represents preliminary work as part of a plan to investigate the perceptions of a student-driven system to request feedback from faculty using a mobile device and Web-based application. We hypothesize that medical students will perceive learner-initiated, timely feedback to be an essential component of clinical education. Furthermore, we predict that students will recognize the use of a mobile device and Web application to be an advantageous and effective method when requesting feedback from supervising physicians. Focus group data from 18 students enrolled in a 4-week anesthesia clerkship revealed the following themes: (1) students often have to solicit feedback, (2) timely feedback is perceived as being advantageous, (3) feedback from faculty is perceived to be more effective, (4) requesting feedback from faculty physicians poses challenges, (5) the decision to request feedback may be influenced by the student's clinical performance, and (6) using a mobile device and Web application may not guarantee timely feedback. Students perceived using a mobile Web-based application to initiate feedback from supervising physicians to be a valuable method of assessment. However, challenges and barriers were identified.
von Franqué, Alexander; Tellioglu, Hilda
Many educational institutions use Learning Management Systems to provide e-learning content to their students. This often includes quizzes that can help students to prepare for exams. However, the content is usually web-optimized and not very usable on mobile devices. In this work a native mobile application ("UML Quiz") that imports…
Scheele, Christian Elling
Traditional information campaigns aimed at incentivising the kind of behaviour change that will lead to more sustainable levels of energy consumption have been proven inefficient. Politicians and government bodies could consider using green web applications as an alternative. However, there is li...
Silvia Trif; Adrian Visoiu
This paper presents a refined algorithm for choosing the appropriate security implementation for mobile applications connecting to web services. Common security scenarios are presented, each with several associated characteristics. The correlations between these characteristics are computed, and only the least-correlated characteristics are selected. The proposed algorithm inventories the available scenarios, inventories the requirements and selects the se...
The article presents the main aspects for configuring the httpd file in Solaris Unix operating system and the facilities by using the Qt cross-platform application for the Web server administration. The considerations are available for the configuring of the DNS server Bind 8 and 9.
Enokida, Kazumichi; Sakaue, Tatsuya; Morita, Mitsuhiro; Kida, Shusaku; Ohnishi, Akio
In this paper, the development of a web application for self-access English vocabulary courses at a national university in Japan will be reported upon. Whilst the basic concepts are inherited from an old Flash-based online vocabulary learning system that had been long used at the university, the new HTML5-based app comes with several new features…
Groenewegen, D.M.; Visser, E.
This paper is a pre-print of: Danny M. Groenewegen, Eelco Visser. Integration of Data Validation and User Interface Concerns in a DSL for Web Applications. In Mark G. J. van den Brand, Jeff Gray, editors, Software Language Engineering, Second International Conference, SLE 2009, Denver, USA, October,
Pritchett, Christopher G.; Pritchett, Christal C.; Wohleb, Elisha C.
This research study was designed to determine the degree of use of Web 2.0 technology applications by certified education professionals and examine differences among various groups as well as reasons for these differences. A quantitative survey instrument was developed to gather demographic information and data. Participants reported they would be…
Beijer, L.J.; Rietveld, T.C.; Beers, M.M. van; Slangen, R.M.; Heuvel, H. van den; Swart, B.J.M. de; Geurts, A.C.H.
In the Netherlands, a web application for speech training, E-learning-based Speech Therapy (EST), has been developed for patients with dysarthria, a speech disorder resulting from acquired neurological impairments such as stroke or Parkinson's disease. In this report, the EST infrastructure
Song, X.M., E-mail: email@example.com; Pan, W.; Chen, L.Y.; Song, X.; Li, X.D.
Highlights: • An original way to develop a web application with a new framework (jQuery + PHP + Matlab) is introduced. • A convenient but powerful application for electromagnetic calculation is implemented. • The web application can run in any popular browser, on any hardware and in any operating system. • No plugin is needed; no maintenance is required. - Abstract: Recently, many web tools [1–3] in the fusion community have been designed and demonstrated, and have proved powerful and convenient for fusion researchers. Many physicists and engineers need a tool to compute the poloidal magnetic field for various purposes (for example, the calibration of magnetic probes for EFIT, field-null structure analysis for control, and the design of plasma diagnostic systems), so developing a powerful and convenient web application for calculating the magnetic field and magnetic flux produced by PF coils is very important. In this paper, a web application tool for poloidal field analysis on HL-2M, with a totally original framework, is presented. The application has a dynamic, interactive interface and can run in any popular browser (IE, Safari, Firefox, Opera), on any hardware (smartphone, PC, iPad, Mac) and operating system (iOS, Android, Windows, Linux, Mac OS). No plugins are needed. The three layers (jQuery + PHP + Matlab) of this framework are introduced. The front-end client layer is developed in jQuery. The middle layer, which acts as a bridge connecting server and client through socket communication, is developed in PHP. The back-end server layer is developed in Matlab, which computes the magnetic field or magnetic flux using complete elliptic integrals and returns the results in the client's preferred form, either as a table or as a JPG image. The field-null structure and the vertical and radial field structure calculated by this tool are presented in detail. The idea to design a web
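The underlying calculation, the field of a circular coil expressed through complete elliptic integrals, can be sketched independently of the paper's Matlab back end. The following pure-Python version uses the standard current-loop formulas; the AGM-based elliptic integrals and all parameter values are illustrative, not taken from the HL-2M tool:

```python
import math

MU0 = 4e-7 * math.pi  # vacuum permeability [T*m/A]

def ellip_KE(m, tol=1e-15):
    """Complete elliptic integrals K(m), E(m) via the arithmetic-geometric mean."""
    a, b, c = 1.0, math.sqrt(1.0 - m), math.sqrt(m)
    s, n = 0.5 * c * c, 0
    while abs(c) > tol:
        a, b, c = 0.5 * (a + b), math.sqrt(a * b), 0.5 * (a - b)
        n += 1
        s += 2.0 ** (n - 1) * c * c
    K = math.pi / (2.0 * a)
    return K, K * (1.0 - s)

def loop_field(I, a, r, z):
    """(B_r, B_z) of a circular loop of radius a carrying current I,
    evaluated at cylindrical coordinates (r, z); loop in the z = 0 plane."""
    if r == 0.0:  # on-axis limit has a closed form
        return 0.0, MU0 * I * a * a / (2.0 * (a * a + z * z) ** 1.5)
    m = 4.0 * a * r / ((a + r) ** 2 + z * z)
    K, E = ellip_KE(m)
    pre = MU0 * I / (2.0 * math.pi * math.sqrt((a + r) ** 2 + z * z))
    d = (a - r) ** 2 + z * z
    Bz = pre * (K + (a * a - r * r - z * z) / d * E)
    Br = pre * (z / r) * (-K + (a * a + r * r + z * z) / d * E)
    return Br, Bz
```

Summing `loop_field` over the PF coil set would give the total poloidal field, which is the kind of computation the Matlab layer performs server-side.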
Kortüm, K; Reznicek, L; Leicht, S; Ulbig, M; Wolf, A
The importance and complexity of clinical trials are continuously increasing, especially in innovative specialties like ophthalmology. An efficient organisational structure for the clinical trial site is therefore essential. In the modern internet era, this can be accomplished with web-based applications. In total, three software applications (Vibe on Prem, SharePoint and open source software) were evaluated at a clinical trial site in ophthalmology. Assessment criteria were defined: reliability, ease of administration, usability, scheduling, task list, knowledge management, operating costs and worldwide availability. Vibe on Prem, customised by the local university, met the assessment criteria best; the other applications were not as strong. By introducing a web-based application for administering and organising an ophthalmological trial site, studies can be conducted in a more efficient and reliable manner. Georg Thieme Verlag KG Stuttgart · New York.
Tarek R. Sheltami
Sensor networks can be used in numerous applications. However, implementing wireless sensor networks presents new challenges compared with theoretical networks, and an implemented sensor network might give results different from those derived theoretically; some routing protocols fail to perform when implemented. In this paper, we implement three routing protocols: Dynamic MANET On-demand, Collection Tree, and Dissemination. To compare the performance of these protocols, they are implemented on a TelosB sensor network. Several performance metrics are measured to demonstrate the pros and cons of these protocols. A telemedicine application is tested on top of the implemented TelosB sensor network at the King Fahd University of Petroleum and Minerals clinic in Saudi Arabia, utilizing Alive ECG sensors.
Carlos Alberto Maliza Martinez
Aanensen, David M.; Huntley, Derek M.; Feil, Edward J.; al-Own, Fada'a; Spratt, Brian G.
Background Epidemiologists and ecologists often collect data in the field and, on returning to their laboratory, enter their data into a database for further analysis. The recent introduction of mobile phones that utilise the open source Android operating system, and which include (among other features) both GPS and Google Maps, provides new opportunities for developing mobile phone applications, which in conjunction with web applications, allow two-way communication between field workers and their project databases. Methodology Here we describe a generic framework, consisting of mobile phone software, EpiCollect, and a web application located within www.spatialepidemiology.net. Data collected by multiple field workers can be submitted by phone, together with GPS data, to a common web database and can be displayed and analysed, along with previously collected data, using Google Maps (or Google Earth). Similarly, data from the web database can be requested and displayed on the mobile phone, again using Google Maps. Data filtering options allow the display of data submitted by the individual field workers or, for example, those data within certain values of a measured variable or a time period. Conclusions Data collection frameworks utilising mobile phones with data submission to and from central databases are widely applicable and can give a field worker similar display and analysis tools on their mobile phone to those they would have if viewing the data in their laboratory via the web. We demonstrate their utility for epidemiological data collection and display, and briefly discuss their application in ecological and community data collection. Furthermore, such frameworks offer great potential for recruiting 'citizen scientists' to contribute data easily to central databases through their mobile phone. PMID:19756138
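The server-side filtering described (by field worker, by value range of a measured variable, by time period) can be sketched as a single function. This is an illustrative sketch only; the field names are hypothetical and not EpiCollect's actual schema:

```python
# Sketch: filter submitted field records the way the framework describes:
# by worker, by a value range on a measured variable, and by time period.
# Field names ("worker", "t", the measured variable) are illustrative.
def filter_records(records, worker=None, field=None, lo=None, hi=None,
                   t_start=None, t_end=None):
    out = []
    for rec in records:
        if worker is not None and rec.get("worker") != worker:
            continue
        if field is not None:
            v = rec.get(field)
            if v is None or (lo is not None and v < lo) \
                         or (hi is not None and v > hi):
                continue
        t = rec.get("t")
        if t_start is not None and (t is None or t < t_start):
            continue
        if t_end is not None and (t is None or t > t_end):
            continue
        out.append(rec)
    return out
```

Applied before rendering, such a filter is what lets the same web database drive both the laboratory map view and the reduced view sent back to a phone.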
Weerd, I. van de; Brinkkemper, S.; Souer, J.; Versendaal, J.M.
The usage of data-intensive web applications raises problems concerning consistency, navigation, and data duplication. Content management systems (CMSs) can overcome these problems. In this research, we focus on special types of web content management systems – web-based CMS applications. Currently,
Siewert, René; Specovius, Svenja; Wu, Jie; Krefting, Dagmar
Interactive visualization and correction of intermediate results are required in many medical image analysis pipelines. To enable such interaction during the remote execution of compute- and data-intensive applications, new features of HTML5 are used. They allow for transparent integration of user interaction into Grid- or Cloud-enabled scientific workflows. Both 2D and 3D visualization and data manipulation can be performed through a scientific gateway without the need to install specific software or web browser plugins. The possibilities of web-based visualization are presented along the FreeSurfer pipeline, a popular compute- and data-intensive software tool for quantitative neuroimaging.
Jongsawat, Nipat; Tungkasthan, Anunucha; Premchaiswadi, Wichian
GeNIe, the Graphical Network Interface, is designed for a Windows environment. It works well on the Windows platform, but it cannot be run on a web or Internet-based platform, which limits its use on a worldwide basis. It also does not support real-time data processing. To overcome these limitations of GeNIe, the SMILE web application was designed and implemented on the client/server architecture mentioned in section 3. GeNIe is an outer shell of SMILE. SMILE...
Karamanis, Nikiforos; Pignatelli, Miguel; Carvalho-Silva, Denise; Rowland, Francis; Cham, Jennifer A; Dunham, Ian
Here, we discuss how we designed the Open Targets Platform (www.targetvalidation.org), an intuitive application for bench scientists working in early drug discovery. To meet the needs of our users, we applied lean user experience (UX) design methods: we started engaging with users very early and carried out research, design and evaluation activities within an iterative development process. We also emphasize the collaborative nature of applying lean UX design, which we believe is a foundation for success in this and many other scientific projects. Copyright © 2018. Published by Elsevier Ltd.
Sensing devices are increasingly being deployed to monitor the physical world around us. One class of application for which sensor data is pertinent is environmental decision support systems, e.g., flood emergency response. For these applications, the sensor readings need to be put in context by integrating them with other sources of data about the surrounding environment. Traditional systems for predicting and detecting floods rely on methods that need significant human resources. In this paper we describe a semantic sensor web architecture for integrating multiple heterogeneous datasets, including live and historic sensor data, databases, and map layers. The architecture provides mechanisms for discovering datasets, defining integrated views over them, continuously receiving data in real-time, and visualising on screen and interacting with the data. Our approach makes extensive use of web service standards for querying and accessing data, and semantic technologies to discover and integrate datasets. We demonstrate the use of our semantic sensor web architecture in the context of a flood response planning web application that uses data from sensor networks monitoring the sea-state around the coast of England.
Wolpin, Seth; Berry, Donna L; Kurth, Ann; Lober, William B
The Internet is increasingly used as a medium for gathering and exchanging health information. Healthcare professionals and organizations need to consider barriers that may exist within their patient-oriented Web applications. One approach to making the Web more accessible for those with lower health literacy may be to supplement textual content with audio annotation using text-to-speech engines, allowing for the creation of a virtual surrogate reader. One challenge is that with numerous text-to-speech engines on the market, objective measures of quality are difficult to obtain. To facilitate comparisons of text-to-speech engines, we developed an open-source Web application that measures user reaction times, subjective quality ratings, and accuracy in completing tasks across different audio files created by text-to-speech engines. Our research endeavor was successful in building and piloting this Web application; significant differences were found for subjective ratings of quality across three text-to-speech engines priced at different levels. However, no significant differences were found with reaction times or accuracy between these text-to-speech engines. Future avenues of research include exploring more complex tasks, usability issues related to implementing text-to-speech features, and applied health promotion and education opportunities among vulnerable populations.
Lehmann Miotto, Giovanna; Magnoni, Luca; Sloper, John Erik
The ATLAS Trigger and Data Acquisition (TDAQ) infrastructure is responsible for filtering and transferring ATLAS experimental data from detectors to mass storage systems. It relies on a large, distributed computing system composed of thousands of software applications running concurrently. In such a complex environment, information sharing is fundamental for controlling application behavior, error reporting and operational monitoring. During data taking, the streams of messages sent by applications and the data published via information services are constantly monitored by experts to verify the correctness of running operations and to understand problematic situations. To simplify and improve system analysis and error detection tasks, we developed the TDAQ Analytics Dashboard, a web application that aims to collect, correlate and effectively visualize this real-time flow of information. The TDAQ Analytics Dashboard is composed of two main entities that reflect the twofold scope of the application. The first is the engine, a Java service that performs aggregation, processing and filtering of the real-time data stream and computes statistical correlations on sliding windows of time. The results are made available to clients via a simple web interface supporting an SQL-like query syntax. The second is the visualization, provided by an Ajax-based web application that runs in the client's browser. The dashboard approach allows information to be presented in a clear and customizable structure. Several types of interactive graphs are offered as widgets that can be dynamically added to and removed from visualization panels. Each widget acts as a client for the engine, querying the web interface to retrieve data with the desired criteria. In this paper we present the design, development and evolution of the TDAQ Analytics Dashboard. We also present the statistical analysis computed by the application in this first period of high energy data taking operations for the ATLAS experiment.
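The engine's core idea, aggregating a message stream over sliding windows of time and computing statistical correlations on the resulting series, can be sketched compactly. This is a simplified illustration of the technique, not the actual Java engine:

```python
from collections import deque

class SlidingWindowCounter:
    """Count events per stream within the last `window` seconds."""
    def __init__(self, window):
        self.window = window
        self.events = deque()          # (timestamp, stream) pairs, in order

    def add(self, t, stream):
        self.events.append((t, stream))

    def counts(self, now):
        # Evict everything older than the window, then tally per stream.
        while self.events and self.events[0][0] <= now - self.window:
            self.events.popleft()
        tally = {}
        for _, s in self.events:
            tally[s] = tally.get(s, 0) + 1
        return tally

def pearson(xs, ys):
    """Pearson correlation of two equal-length numeric series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs) ** 0.5
    vy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (vx * vy)
```

Correlating, say, a per-application error-message rate against a published load metric over matching windows is exactly the kind of derived series a dashboard widget would query for.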
Célia Maria Dias Sales
This paper presents the metric-frequency calculator (MF Calculator), an online application to analyze similarity. The MF Calculator implements a metric-frequency similarity algorithm for the quantitative assessment of similarity in ill-structured data sets. It is widely applicable, as it can be used with nominal, ordinal, or interval data when there is little prior control over the variables to be observed regarding number or content. The MF Calculator generates a proximity matrix in CSV, XML or DOC format that can be used as input to traditional statistical techniques such as hierarchical clustering, additive trees, or multidimensional scaling. The MF Calculator also displays a graphical representation of outputs using additive similarity trees. A simulated example illustrates the implementation of the MF Calculator. An additional example with real data is presented, in order to illustrate the potential of combining the MF Calculator with cluster analysis. The MF Calculator is a user-friendly tool available free of charge. It can be accessed from http://mfcalculator.celiasales.org/Calculator.aspx, and it can be used by non-experts from a wide range of social sciences.
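Feeding the exported proximity matrix into clustering software usually means converting similarities into distances first. The sketch below assumes a plain labelled CSV layout and the common conversion d = max_similarity − s; both are illustrative assumptions, not the MF Calculator's documented export format:

```python
import csv
import io

def proximity_csv_to_distances(text):
    """Parse a labelled similarity matrix from CSV text and convert it
    to a distance matrix via d = max_similarity - s (a common convention;
    the MF Calculator's own export layout may differ)."""
    rows = list(csv.reader(io.StringIO(text)))
    labels = rows[0][1:]                       # header: corner cell, then labels
    sim = [[float(x) for x in row[1:]] for row in rows[1:]]
    smax = max(max(r) for r in sim)
    dist = [[smax - s for s in row] for row in sim]
    return labels, dist
```

The resulting matrix can then go straight into a hierarchical clustering or multidimensional scaling routine, as the paper suggests.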
Sendall, D M
The World-Wide Web was first developed as a tool for collaboration in the high energy physics community. From there it spread rapidly to other fields, and grew to its present impressive size. As an easy way to access information, it has been a great success, and a huge number of medical applications have taken advantage of it. But there is another side to the Web, its potential as a tool for collaboration between people. Medical examples include telemedicine and teaching. New technical developments offer still greater potential in medical and other fields. This paper gives some background to the early development of the World-Wide Web, a brief overview of its present state with some examples relevant to medicine, and a look at the future.
Decision Support Systems (DSS) form a specific class of computerized information systems that support business and managerial decision-making activities. Making the right decision in business primarily depends on the quality of data. It also depends on the ability to analyze the data with a view to identifying trends that can suggest solutions and strategies. A “cooperative” decision support system means the data are collected, analyzed, and then provided to a human agent who can help the system to revise or refine the data. It means that both a human component and a computer component work together to come up with the best solution. This paper describes the application of a software product (Vanguard System) to a specific economic problem (evaluating the financial risk when the rate of economic profitability may fall below the interest rate).
Romaniuk, Ryszard S.
For twenty years, young researchers from the Institute of Electronic Systems, Warsaw University of Technology, have organized twice a year, with only marginal supervision from senior faculty members and under the patronage of WEiTI PW, KEiT PAN, SPIE, IEEE, PKOpto SEP and PSF, the WILGA Symposium on advanced, integrated functional electronic, photonic and mechatronic systems [1-5]. All aspects are considered: research and development, theory and design, material and construction technology, software and hardware, commissioning and tests, as well as pilot and practical applications. The applications concern mostly Internet engineering, high energy physics experiments, new power industry including fusion, the nuclear industry, space and satellite technologies, telecommunications, smart municipal environments, as well as biology and medicine, which after several years has turned into a proud specialization of the WILGA Symposium [6-8]. The XXXVIIth WILGA Symposium was held on 29-31 January 2016 and gathered a few tens of young researchers active in the mentioned research areas. A few tens of technical papers were presented, which will be published in Proc. SPIE together with the accepted articles from the Summer Edition of the WILGA Symposium, scheduled for 29.05-06.06.2016. This article is a digest of chosen presentations from the WILGA Symposium 2016 Winter Edition. The survey is narrowed to a few chosen main topical tracks, such as electronics and photonics design using industrial standards like ATCA/MTCA, and particular designs of functional systems using this series of industrial standards. The paper, which traditionally summarizes the WILGA Symposium organized by young researchers from Warsaw University of Technology, is also the next part of a cycle of papers concerning their participation in the design of new generations of electronic systems used in discovery experiments in Poland and in leading research laboratories of the world.
Hale, Richard Edward [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Cetiner, Sacit M. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Fugate, David L. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Batteh, John J [Modelon Corporation (Sweden); Tiller, Michael M. [Xogeny Corporation (United States)
Previous reports focused on the development of component and system models as well as end-to-end system models using Modelica and Dymola for two advanced reactor architectures: (1) Advanced Liquid Metal Reactor and (2) fluoride high-temperature reactor (FHR). The focus of this report is the release of the first beta version of the web-based application for model use and collaboration, as well as an update on the FHR model. The web-based application allows novice users to configure end-to-end system models from preconfigured choices to investigate the instrumentation and controls implications of these designs and allows for the collaborative development of individual component models that can be benchmarked against test systems for potential inclusion in the model library. A description of this application is provided along with examples of its use and a listing and discussion of all the models that currently exist in the library.
Dee, Fred R; Haugen, Thomas H; Kreiter, Clarence D
The goal of mechanistic case diagraming (MCD) is to provide students with more in-depth understanding of cause and effect relationships and basic mechanistic pathways in medicine. This will enable them to better explain how observed clinical findings develop from preceding pathogenic and pathophysiological events. The pedagogic function of MCD is in relating risk factors, disease entities and morphology, signs and symptoms, and test and procedure findings in a specific case scenario with etiologic pathogenic and pathophysiological sequences within a flow diagram. In this paper, we describe the addition of automation and predetermined lists to further develop the original concept of MCD as described by Engelberg in 1992 and Guerrero in 2001. We demonstrate that with these modifications, MCD is effective and efficient in small group case-based teaching for second-year medical students (ratings of ~3.4 on a 4.0 scale). There was also a significant correlation with other measures of competency, with a 'true' score correlation of 0.54. A traditional calculation of reliability showed promising results (α =0.47) within a low stakes, ungraded environment. Further, we have demonstrated MCD's potential for use in independent learning and TBL. Future studies are needed to evaluate MCD's potential for use in medium stakes assessment or self-paced independent learning and assessment. MCD may be especially relevant in returning students to the application of basic medical science mechanisms in the clinical years.
Zhu, Jun; Xie, Jing; Lipford, Heather Richter; Chu, Bill
Many security incidents are caused by software developers' failure to adhere to secure programming practices. Static analysis tools have been used to detect software vulnerabilities. However, their wide usage by developers is limited by the special training required to write rules customized to application-specific logic. Our approach is interactive static analysis, to integrate static analysis into Integrated Development Environment (IDE) and provide in-situ secure programming support to help developers prevent vulnerabilities during code construction. No additional training is required nor are there any assumptions on ways programs are built. Our work is motivated in part by the observation that many vulnerabilities are introduced due to failure to practice secure programming by knowledgeable developers. We implemented a prototype interactive static analysis tool as a plug-in for Java in Eclipse. Our technical evaluation of our prototype detected multiple zero-day vulnerabilities in a large open source project. Our evaluations also suggest that false positives may be limited to a very small class of use cases.
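A heavily simplified illustration of the kind of in-situ check such a tool might run is sketched below. The real plug-in targets Java in Eclipse; this hedged sketch instead flags string-built SQL in Python source using only the standard `ast` module, and the `.execute()` pattern it looks for is an assumption chosen for the example, not the paper's actual rule set.

```python
import ast

def find_sql_concat(source):
    """Flag calls to .execute() whose first argument builds SQL dynamically.

    Illustrative rule only: a BinOp (string concatenation / formatting)
    or a JoinedStr (f-string) as the query argument is treated as a
    potential injection point. Returns the offending line numbers.
    """
    findings = []
    tree = ast.parse(source)
    for node in ast.walk(tree):
        if (isinstance(node, ast.Call)
                and isinstance(node.func, ast.Attribute)
                and node.func.attr == "execute"
                and node.args):
            arg = node.args[0]
            if isinstance(arg, (ast.BinOp, ast.JoinedStr)):
                findings.append(node.lineno)
    return findings
```

An IDE plug-in would run a check like this on each save and annotate the flagged lines, which is the "in-situ secure programming support" idea the abstract describes.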
Ye, Zhenqing; Ma, Tao; Kalmbach, Michael T; Dasari, Surendra; Kocher, Jean-Pierre A; Wang, Liguo
Hannah B Vander Zanden
The measurement of stable carbon (δ13C) and nitrogen (δ15N) isotopes in tissues of organisms has formed the foundation of isotopic food web reconstructions, as these values directly reflect assimilated diet. In contrast, stable hydrogen (δ2H) and oxygen (δ18O) isotope measurements have typically been reserved for studies of migratory origin and paleoclimate reconstruction, based on systematic relationships between organismal tissue and local environmental water. Recently, innovative applications using δ2H and, to a lesser extent, δ18O values have demonstrated the potential for these elements to provide novel insights in modern food web studies. We explore the advantages and challenges associated with three applications of δ2H and δ18O values in food web studies. First, large δ2H differences between aquatic and terrestrial ecosystem end members can permit the quantification of energy inputs and nutrient fluxes between these two sources, with potential applications for determining allochthonous vs. autochthonous nutrient sources in freshwater systems and relative aquatic habitat utilization by terrestrial organisms. Next, some studies have identified a relationship between δ2H values and trophic position, which suggests that this marker may serve as a trophic indicator in addition to the more commonly used δ15N values. Finally, coupled measurements of δ2H and δ18O values are increasing as a result of reduced analytical challenges in measuring both simultaneously, and may provide additional ecological information over single-element measurements. In some organisms, the isotopic ratios of these two elements are tightly coupled, whereas the isotopic disequilibrium in other organisms may offer insight into the diet and physiology of individuals. Although a coherent framework for interpreting δ2H and δ18O data in the context of food web studies is emerging, many fundamental uncertainties remain. We highlight directions for targeted research that
The Packt Beginner's Guide format is designed to make you as comfortable as possible. Using practical examples, this guide will walk you through the ins and outs of web application development with easy, step-by-step instructions. If you want to build your own application but don't know where to start, then this is the book for you. With easy-to-follow, step-by-step, real-life examples, you will be building your own applications in a matter of weeks, not years.
Saavedra-Duarte, L. A.; Angarita-Jerardino, A.; Ruiz, P. A.; Dulce-Moreno, H. J.; Vera-Rivera, F. H.; V-Niño, E. D.
Information and Communication Technologies (ICT) are essential in the transfer of knowledge, and Web tools, as part of ICT, are important for institutions seeking greater visibility for the products developed by their researchers. For this reason, we implemented an application that supports the information management of the FORISTOM Foundation (Foundation of Researchers in Science and Technology of Materials). The application shows a detailed description not only of all its members but also of all the scientific production they carry out, such as technological developments, research projects, articles, and presentations, among others. This application can be adopted by other entities committed to scientific dissemination and the transfer of technology and knowledge.
de Gruttola, Michele; Di Guida, Salvatore; Innocente, Vincenzo; Pierro, Antonio
In the upcoming LHC era, databases have become an essential part of the experiments collecting data from the LHC, in order to safely store, and consistently retrieve, the large amount of data produced by different sources. In the CMS experiment at CERN, all this information is stored in ORACLE databases hosted on several servers, both inside and outside the CERN network. In this scenario, the task of monitoring different databases is a crucial database administration issue, since different information may be required depending on the users' tasks, such as data transfer, inspection, planning, and security issues. We present here a web application based on a Python web framework and Python modules for data mining purposes. To customize the GUI, we record traces of user interactions that are used to build use-case models. In addition, the application detects errors in database transactions (for example, identifying any mistake made by a user, an application failure, an unexpected network shutdown, or a Structured Query Language (SQL) statement error) and provides warning messages from the different users' perspectives. Finally, in order to fulfill the requirements of the CMS experiment community and to keep pace with new developments in Web client tools, our application was further developed and new features were deployed.
Duarte, L; Teodoro, A C; Gonçalves, J A; Soares, D; Cunha, M
Soil erosion is a serious environmental problem. An estimate of the expected soil loss from water-caused erosion can be calculated using the Revised Universal Soil Loss Equation (RUSLE). Geographical Information Systems (GIS) provide different tools to create categorical maps of soil erosion risk, which help to study the risk assessment of soil loss. The objective of this study was to develop a GIS open source application (in QGIS) using the RUSLE methodology for estimating the erosion rate at the watershed scale (desktop application) and to provide the same application via web access (web application). The applications developed allow one to generate all the maps necessary to evaluate the soil erosion risk. Several libraries and algorithms from SEXTANTE were used to develop these applications. The applications were tested in the Montalegre municipality (Portugal). The maps involved in the RUSLE method (soil erosivity factor, soil erodibility factor, topographic factor, cover management factor, and support practices) were created. The estimated mean value of the soil loss obtained was 220 ton km(-2) year(-1), ranging from 0.27 to 1283 ton km(-2) year(-1). The results indicated that most of the study area (80 %) is characterized by a very low soil erosion level (soil erosion was higher than 962 ton km(-2) year(-1). It was also concluded that areas with high slope values and bare soil are associated with high levels of erosion, and that the higher the P and C values, the higher the soil erosion percentage. The RUSLE web and desktop applications are freely available.
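The RUSLE computation underlying both applications is a per-cell product of five factors, A = R × K × LS × C × P. A minimal sketch follows; the function names are hypothetical, and the real applications operate on raster layers in QGIS rather than Python lists.

```python
def rusle(r, k, ls, c, p):
    """Annual soil loss A = R*K*LS*C*P.

    r: rainfall erosivity, k: soil erodibility, ls: topographic factor,
    c: cover management, p: support practices. Units follow the chosen
    factor system (e.g. ton km-2 year-1 in the study above).
    """
    return r * k * ls * c * p

def soil_loss_map(r_map, k_map, ls_map, c_map, p_map):
    """Apply RUSLE cell-wise over five equally shaped 2D grids."""
    return [[rusle(r, k, ls, c, p)
             for r, k, ls, c, p in zip(*rows)]
            for rows in zip(r_map, k_map, ls_map, c_map, p_map)]
```

In the actual applications these grids come from the generated factor maps; the categorical risk map is then a reclassification of the resulting A values.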
Andreeva, J.; Dzhunov, I.; Karavakis, E.; Kokoszkiewicz, L.; Nowotka, M.; Saiz, P.; Tuckett, D.
Ianevski, Aleksandr; He, Liye; Aittokallio, Tero; Tang, Jing
Rational design of drug combinations has become a promising strategy to tackle the drug sensitivity and resistance problem in cancer treatment. To systematically evaluate the pre-clinical significance of pairwise drug combinations, functional screening assays that probe combination effects in a dose-response matrix assay are commonly used. To facilitate the analysis of such drug combination experiments, we implemented a web application that uses key functions of R-package SynergyFinder, and provides not only the flexibility of using multiple synergy scoring models, but also a user-friendly interface for visualizing the drug combination landscapes in an interactive manner. The SynergyFinder web application is freely accessible at https://synergyfinder.fimm.fi ; The R-package and its source-code are freely available at http://bioconductor.org/packages/release/bioc/html/synergyfinder.html . firstname.lastname@example.org.
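As an illustration of the kind of synergy scoring SynergyFinder supports, below is a sketch of one common model, Bliss independence, applied to a dose-response matrix. The actual application wraps the R package and offers multiple models; these Python function names are hypothetical.

```python
def bliss_excess(e_a, e_b, e_ab):
    """Observed combination effect minus the Bliss-independence expectation.

    e_a, e_b: fractional inhibition (0..1) of each drug alone;
    e_ab: observed fractional inhibition of the combination.
    Positive values suggest synergy, negative values antagonism.
    """
    expected = e_a + e_b - e_a * e_b
    return e_ab - expected

def bliss_matrix(single_a, single_b, observed):
    """Score a dose-response matrix.

    Rows index doses of drug A, columns doses of drug B;
    observed[i][j] is the measured combination effect at (dose_a[i], dose_b[j]).
    """
    return [[bliss_excess(a, b, observed[i][j])
             for j, b in enumerate(single_b)]
            for i, a in enumerate(single_a)]
```

The "drug combination landscape" the abstract mentions is essentially a heat map of such a score matrix over the dose grid.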
Burger, Albert; Paschke, Adrian; Romano, Paolo; Marshall, M Scott; Splendiani, Andrea
As Semantic Web technologies mature and new releases of key elements, such as SPARQL 1.1 and OWL 2.0, become available, the Life Sciences continue to push the boundaries of these technologies with ever more sophisticated tools and applications. Unsurprisingly, therefore, interest in the SWAT4LS (Semantic Web Applications and Tools for the Life Sciences) activities has remained high, as was evident during the third international SWAT4LS workshop held in Berlin in December 2010. Contributors to this workshop were invited to submit extended versions of their papers, the best of which are now made available in this special supplement of BMC Bioinformatics. The papers reflect the wide range of work in this area, covering the storage and querying of Life Sciences data in RDF triple stores, tools for the development of biomedical ontologies, and the semantics-based integration of Life Sciences as well as clinical data.
Sreenivasaiah, Pradeep Kumar; Kim, Do Han
The dynamic and rapidly evolving nature of systems-driven research imposes special requirements on the technology, approach, design, and architecture of computational infrastructure, including databases and web applications. Several solutions have been proposed to meet these expectations, and novel methods have been developed to address the persisting problems of data integration. It is important for researchers to understand the different technologies and approaches. Having familiarized themselves with the pros and cons of the existing technologies, researchers can exploit their capabilities to the maximum potential for integrating data. In this review we discuss the architecture, design, and key technologies underlying some of the prominent databases (DBs) and web applications. We mention their roles in the integration of biological data and investigate some of the emerging design concepts and computational technologies that are likely to have a key role in the future of systems-driven biomedical research.
This paper covers my work during my assignment as a participant in the CERN Summer Students 2015 programme. The project was aimed at refactoring and publishing the Web Application Detection tool, which was developed at CERN and previously used internally by the Computer Security team. The tasks performed ranged from an initial refactoring of the code, which had been developed as a script rather than a Python package, through extracting components that were not specific to CERN usage, to the final release of the source code on GitHub and its integration with third-party software, i.e., the w3af tool. Ultimately, the Web Application Detection software received positive responses, having been downloaded ca. 1500 times at the time of writing this report.
Web Engineering is the application of systematic, disciplined and quantifiable approaches to development, operation, and maintenance of Web-based applications. It is both a pro-active approach and a growing collection of theoretical and empirical research in Web application development. This paper gives an overview of Web Engineering by addressing the questions: (a) why is it needed? (b) what is its domain of operation? (c) how does it help and what should it do to improve Web application development? and (d) how should it be incorporated in education and training? The paper discusses the significant differences that exist between Web applications and conventional software, the taxonomy of Web applications, the progress made so far and the research issues and experience of creating a specialization at the master's level. The paper reaches a conclusion that Web Engineering at this stage is a moving target since Web technologies are constantly evolving, making new types of applications possible, which in turn may require innovations in how they are built, deployed and maintained.
Title: Web application for teaching higher-degree algebraic equations at secondary school Author: Kristýna Podhajská Department: Department of Mathematics Education Supervisor: RNDr. Jarmila Robová, CSc. E-mail supervisor: Abstract: This bachelor thesis is devoted to the subject of algebraic equations of higher degrees. The work should serve mainly as extension teaching material for secondary school students or as an accompanying textbook for teachers of mathematics...
Wang, Shu-Lin; Kuo, Mu-Hsing; Shiu, Yi-Shiang; Huang, Hsiu-Mei
Information overload and irrelevant information are major obstacles to drawing conclusions about one's personal health status and taking adequate medical action. The objective of this study is to design a recommendation-based mobile web application to help patients efficiently search online health information at any time, anywhere, and via any device. In the system, we use a collaborative filtering approach to recommend health information to users.
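A minimal sketch of user-based collaborative filtering with cosine similarity, one common form of the general approach named above (the paper does not specify this exact variant, and all names and the rating scale here are hypothetical):

```python
from math import sqrt

def cosine(u, v):
    """Cosine similarity over the items both users rated (dicts item -> rating)."""
    shared = set(u) & set(v)
    if not shared:
        return 0.0
    num = sum(u[i] * v[i] for i in shared)
    du = sqrt(sum(u[i] ** 2 for i in shared))
    dv = sqrt(sum(v[i] ** 2 for i in shared))
    return num / (du * dv) if du and dv else 0.0

def recommend(target, others, top_n=3):
    """Rank items the target user has not seen by similarity-weighted ratings.

    target: the active user's ratings; others: dict user -> ratings.
    """
    scores = {}
    for ratings in others.values():
        sim = cosine(target, ratings)
        for item, r in ratings.items():
            if item not in target:
                scores[item] = scores.get(item, 0.0) + sim * r
    return sorted(scores, key=scores.get, reverse=True)[:top_n]
```

In the described system the "items" would be online health information resources, and the ranked list would be served to the mobile client.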
Background and Purpose: In a complex, strictly hierarchical organizational structure, undesired oscillations may occur, which have not yet been adequately addressed. Therefore, the parameter values that define fluctuations and transitions from one state to another need to be optimized to prevent oscillations and to keep parameter values between lower and upper bounds. The objective was to develop a simulation model of a hierarchical organizational structure as a web application to help solve the aforementioned problem.
Ahmad, Faudziah; Baharom, Fauziah; Husni, Moath
This paper discusses issues in current development and measurement practices that were identified in a pilot study of Jordanian small software firms. The study investigated whether developers follow development and measurement best practices in web application development. The analysis was conducted in two stages: first, grouping the development and measurement practices using variable clustering, and second, identifying the degree of acceptance. Mean interval was used to dete...
Breunig, M.; Kuper, P. V.; Dittrich, A.; Wild, P.; Butwilowski, E.; Al-Doori, M.
The object-oriented database architecture DB4GeO was originally designed to support sub-surface applications in the geo-sciences. This is reflected in DB4GeO's geometric data model as well as in its import and export functions. Initially, these functions were designed for communication with 3D geological modeling and visualization tools such as GOCAD or MeshLab. However, it soon became clear that DB4GeO was suitable for a much wider range of applications. It is therefore natural to move away from a standalone solution and to open access to DB4GeO data via standardized OGC web services. Though REST and OGC services seem incompatible at first sight, the implementation in DB4GeO shows that an OGC-based implementation of web services may reuse parts of the DB4GeO REST implementation. Starting with initial solutions in the history of DB4GeO, this paper introduces the design, adaptation (i.e., model transformation), and first steps in the implementation of OGC Web Feature Services (WFS) and Web Processing Services (WPS) as new interfaces to DB4GeO data and operations. Among its capabilities, DB4GeO can provide data in different formats such as GML, GOCAD, or DB3D XML through a WFS, and can run operations such as a 3D-to-2D service or mesh simplification (Progressive Meshes) through a WPS. We then demonstrate an Android-based mobile 3D augmented reality viewer for DB4GeO that uses the Web Feature Service to visualize 3D geo-database query results. Finally, we explore future research work considering DB4GeO in the framework of the research group "Computer-Aided Collaborative Subway Track Planning in Multi-Scale 3D City and Building Models".
Braga, Rodolpho C; Alves, Vinicius M; Muratov, Eugene N; Strickland, Judy; Kleinstreuer, Nicole; Trospsha, Alexander; Andrade, Carolina Horta
Chemically induced skin sensitization is a complex immunological disease with a profound impact on quality of life and working ability. Despite some progress in developing alternative methods for assessing the skin sensitization potential of chemical substances, there is no in vitro test that correlates well with human data. Computational QSAR models provide a rapid screening approach and contribute valuable information for the assessment of chemical toxicity. We describe the development of a freely accessible web-based and mobile application for the identification of potential skin sensitizers. The application is based on previously developed binary QSAR models of skin sensitization potential from human (109 compounds) and murine local lymph node assay (LLNA, 515 compounds) data with good external correct classification rate (0.70-0.81 and 0.72-0.84, respectively). We also included a multiclass skin sensitization potency model based on LLNA data (accuracy ranging between 0.73 and 0.76). When a user evaluates a compound in the web app, the outputs are (i) binary predictions of human and murine skin sensitization potential; (ii) multiclass prediction of murine skin sensitization; and (iii) probability maps illustrating the predicted contribution of chemical fragments. The app is the first tool available that incorporates quantitative structure-activity relationship (QSAR) models based on human data as well as multiclass models for LLNA. The Pred-Skin web app version 1.0 is freely available for the web, iOS, and Android (in development) at the LabMol web portal ( http://labmol.com.br/predskin/ ), in the Apple Store, and on Google Play, respectively. We will continuously update the app as new skin sensitization data and respective models become available.
Stefan, Catalin; Junghanns, Ralf; Glaß, Jana; Sallwey, Jana; Fatkhutdinov, Aybulat; Fichtner, Thomas; Barquero, Felix; Moreno, Miguel; Bonilla, José; Kwoyiga, Lydia
Managed aquifer recharge (MAR) represents a valuable instrument for sustainable water resources management. The concept implies the purposeful infiltration of surface water into the underground for later recovery or for environmental benefits. Over the decades, MAR schemes have been successfully installed worldwide for a variety of reasons: to maximize the natural storage capacity of aquifers, and for physical aquifer management, water quality management, and ecological benefits. The INOWAS-DSS platform provides a collection of free web-based tools for the planning, management, and optimization of the main components of MAR schemes. The tools are grouped into 13 specific applications that cover the most relevant challenges encountered at MAR sites, from both quantitative and qualitative perspectives. The applications include, among others, the optimization of MAR site location, the assessment of saltwater intrusion, the restoration of groundwater levels in overexploited aquifers, the maximization of the natural storage capacity of aquifers, the improvement of water quality, the design and operational optimization of MAR schemes, clogging development, and risk assessment. The platform contains a collection of about 35 web-based tools of various degrees of complexity, which are either included in application-specific workflows or used as standalone modelling instruments. Among them are simple tools derived from data mining and empirical equations, analytical groundwater-related equations, as well as complex numerical flow and transport models (MODFLOW, MT3DMS and SEAWAT). Up to now, the simulation core of the INOWAS-DSS, which is based on the finite-differences groundwater flow model MODFLOW, is implemented and runs on the web. A scenario analyser helps to easily set up and evaluate new management options as well as future developments such as land use and climate change, and to compare them to previous scenarios. Additionally, simple tools such as analytical equations to assess saltwater intrusion are already running online.
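As an example of the simple analytical tools mentioned, below is a sketch of the classic Ghyben-Herzberg estimate of the freshwater/saltwater interface depth. This equation is assumed here purely as an illustration of the category; the platform's actual saltwater-intrusion equations are not specified in the abstract.

```python
def interface_depth(h, rho_f=1000.0, rho_s=1025.0):
    """Ghyben-Herzberg estimate of the fresh/salt interface depth below sea level.

    h: freshwater head above sea level (m);
    rho_f, rho_s: freshwater and seawater densities (kg/m3).
    With the default densities, the interface sits ~40*h below sea level.
    """
    return rho_f / (rho_s - rho_f) * h
```

A web tool built on this relation lets a user see, for instance, how lowering the freshwater head by pumping raises the saltwater interface under a coastal MAR site.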
Ahmed, M. Imran; Maruf Hassan, Md; Bhuyian, Touhid
Almost all public-sector organisations in Bangladesh now offer online services through web applications, along with the existing channels, in their endeavour to realise the dream of a 'Digital Bangladesh'. Nations across the world have joined the online environment thanks to training and awareness initiatives by their governments. File sharing and downloading activities using web applications have now become very common, not only ensuring the easy distribution of different types of files and documents but also enormously reducing the time and effort of users. Although the online services that are frequently used have made users' lives easier, they have increased the risk of exploitation of local file disclosure (LFD) vulnerability in the web applications of different public-sector organisations due to insecure design and careless coding. This paper analyses the root cause of LFD vulnerability, its exploitation techniques, and its impact on 129 public-sector websites in Bangladesh, using a manual black-box testing approach.
Paquette, Suzanne M; Leinonen, Kalle; Longabaugh, William J R
Dagher, A P; Fitzpatrick, M; Flanders, A E; Eng, J
Java is a relatively new programming language that has been used to develop a World Wide Web-based tool for estimating magnetic resonance (MR) imaging relaxation times, thereby demonstrating how Java may be used for Web-based radiology applications beyond improving the user interface of teaching files. A standard processing algorithm coded with Java is downloaded along with the hypertext markup language (HTML) document. The user (client) selects the desired pulse sequence and inputs data obtained from a region of interest on the MR images. The algorithm is used to modify selected MR imaging parameters in an equation that models the phenomenon being evaluated. MR imaging relaxation times are estimated, and confidence intervals and a P value expressing the accuracy of the final results are calculated. Design features such as simplicity, object-oriented programming, and security restrictions allow Java to expand the capabilities of HTML by offering a more versatile user interface that includes dynamic annotations and graphics. Java also allows the client to perform more sophisticated information processing and computation than is usually associated with Web applications. Java is likely to become a standard programming option, and the development of stand-alone Java applications may become more common as Java is integrated into future versions of computer operating systems.
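The core computation the applet performs can be illustrated with a minimal sketch (in Python rather than Java, with hypothetical names): fitting the mono-exponential spin-echo signal model S(TE) = S0·exp(-TE/T2) by linear regression on the log signal. The paper's tool additionally reports confidence intervals and a P value, which are omitted here.

```python
import math

def estimate_t2(te_values, signals):
    """Estimate T2 from spin-echo signals S(TE) = S0 * exp(-TE / T2)
    by linear regression of ln(S) against TE - a common simple approach
    (hypothetical sketch, not the paper's Java code)."""
    ys = [math.log(s) for s in signals]
    n = len(te_values)
    xbar = sum(te_values) / n
    ybar = sum(ys) / n
    slope = sum((x - xbar) * (y - ybar) for x, y in zip(te_values, ys)) / \
            sum((x - xbar) ** 2 for x in te_values)
    return -1.0 / slope  # since slope = -1/T2

# Synthetic check: noiseless signals generated with T2 = 80 ms
tes = [20.0, 40.0, 60.0, 80.0]
t2 = estimate_t2(tes, [math.exp(-te / 80.0) for te in tes])
```

With noisy clinical data, the region-of-interest averaging and the confidence intervals described in the abstract become essential; this sketch only shows the fitting step.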
Suryanto, Wiwit; Irnaka, Theodosius Marwan
One-dimensional modeling of magnetotelluric (MT) data has been performed using an online application on a web-based virtual private server. The application was developed in the Python language using the Django framework with HTML and CSS components. The input data, including the apparent resistivity and phase as a function of period or frequency with standard deviation, can be entered through an interactive web page that can be freely accessed at https://komputasi.geofisika.ugm.ac.id. The subsurface models, represented by resistivity as a function of depth, are iteratively improved by changing the model parameters, such as the resistivity and the layer depth, based on the observed apparent resistivity and phase data. The output of the application, displayed on the screen, presents resistivity as a function of depth and includes the RMS error for each iteration. Synthetic and real data were used in comparative tests of the application's performance, and it is shown that the application produces accurate subsurface resistivity models. Hence, this application can be used for practical one-dimensional modeling of MT data.
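The forward problem underlying such 1-D MT inversion is the standard layered-earth impedance recursion. The sketch below (an illustration of the kind of computation involved, not the application's actual code) computes apparent resistivity and phase for a stack of layers over a half-space:

```python
import cmath
import math

MU0 = 4e-7 * math.pi  # magnetic permeability of free space, H/m

def mt1d_forward(resistivities, thicknesses, period):
    """1-D magnetotelluric forward model via the layered-earth impedance
    recursion. resistivities: ohm-m, one per layer (last = half-space);
    thicknesses: m, one per layer except the bottom half-space.
    Returns (apparent_resistivity, phase_deg)."""
    omega = 2.0 * math.pi / period
    # Intrinsic impedance of the basement half-space
    k = cmath.sqrt(1j * omega * MU0 / resistivities[-1])
    Z = 1j * omega * MU0 / k
    # Recurse upward through the finite layers
    for rho, h in zip(reversed(resistivities[:-1]), reversed(thicknesses)):
        k = cmath.sqrt(1j * omega * MU0 / rho)
        Zo = 1j * omega * MU0 / k
        t = cmath.tanh(k * h)
        Z = Zo * (Z + Zo * t) / (Zo + Z * t)
    rho_app = abs(Z) ** 2 / (omega * MU0)
    phase_deg = math.degrees(cmath.phase(Z))
    return rho_app, phase_deg

# Sanity check: a uniform 100 ohm-m half-space must return
# apparent resistivity 100 ohm-m and a 45-degree phase.
rho_a, phi = mt1d_forward([100.0], [], 10.0)
```

An inversion routine like the one in the web application would call such a forward model repeatedly, adjusting layer resistivities and depths to reduce the RMS misfit against the observed curves.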
Borkum, Mark I; Frey, Jeremy G
The drug discovery process is now highly dependent on the management, curation and integration of large amounts of potentially useful data. Semantics are necessary in order to interpret the information and derive knowledge. Advances in recent years have mitigated concerns that the lack of robust, usable tools has inhibited the adoption of methodologies based on semantics. This paper presents three examples of how Semantic Web techniques and technologies can be used to support chemistry research: a controlled vocabulary for quantities, units and symbols in physical chemistry; a controlled vocabulary for the classification and labelling of chemical substances and mixtures; and a database of chemical identifiers. This paper also presents a Web-based service that uses the datasets in order to assist with the completion of risk assessment forms, along with a discussion of the legal implications and value-proposition for the use of such a service. We have introduced the Semantic Web concepts, technologies, and methodologies that can be used to support chemistry research, and have demonstrated the application of those techniques in three areas very relevant to modern chemistry research, generating three new datasets that we offer as exemplars of an extensible portfolio of advanced data integration facilities. We have thereby established the importance of Semantic Web techniques and technologies for meeting Wild's fourth "grand challenge".
Blin, Kai; Pedersen, Lasse Ebdrup; Weber, Tilmann; Lee, Sang Yup
CRISPR/Cas9-based genome editing has been one of the major achievements of molecular biology, allowing the targeted engineering of a wide range of genomes. The system originally evolved in prokaryotes as an adaptive immune system against bacteriophage infections. It now sees widespread application in genome engineering workflows, especially using the Streptococcus pyogenes endonuclease Cas9. To utilize Cas9, so-called single guide RNAs (sgRNAs) need to be designed for each target gene. While there are many tools available to design sgRNAs for the popular model organisms, only few tools that allow designing sgRNAs for non-model organisms exist. Here, we present CRISPy-web (http://crispy.secondarymetabolites.org/), an easy to use web tool based on CRISPy to design sgRNAs for any user-provided microbial genome. CRISPy-web allows researchers to interactively select a region of their genome of interest to scan for possible sgRNAs. After checks for potential off-target matches, the resulting sgRNA sequences are displayed graphically and can be exported to text files. All steps and information are accessible from a web browser without the requirement to install and use command line scripts.
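The core scan such a tool performs is conceptually simple: find every candidate spacer immediately 5' of an NGG PAM, on both strands. The following is a simplified sketch of that idea (CRISPy-web's actual pipeline also performs off-target checks, which are omitted here):

```python
def find_sgrna_sites(genome: str, spacer_len: int = 20):
    """Enumerate candidate SpCas9 target sites: a spacer of `spacer_len`
    nucleotides immediately 5' of an NGG PAM, scanned on both strands.
    Simplified sketch; off-target filtering is not included."""
    comp = str.maketrans("ACGT", "TGCA")

    def scan(seq, strand):
        hits = []
        for i in range(spacer_len, len(seq) - 2):
            if seq[i + 1:i + 3] == "GG":  # PAM = NGG at positions i..i+2
                hits.append((strand, i - spacer_len, seq[i - spacer_len:i]))
        return hits

    fwd = genome.upper()
    rev = fwd.translate(comp)[::-1]  # reverse complement
    return scan(fwd, "+") + scan(rev, "-")

# Toy genome with exactly one forward-strand site (PAM "TGG")
sites = find_sgrna_sites("A" * 20 + "TGG")
```

A real design tool would then score each spacer for GC content and count near-matches elsewhere in the genome before presenting the candidates graphically.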
As the internet fast migrates from static web pages to dynamic web pages, users with visual impairment find it confusing and challenging to access content on the web. There is evidence that dynamic web applications pose accessibility challenges for visually impaired users. This study shows that a difference can be made through a basic understanding of the technical requirements of users with visual impairment, and addresses a number of issues pertinent to the accessibility needs of such users. We propose that only by designing a framework that is structurally flexible, removing unnecessary extras and thereby making every bit useful (fit-for-purpose), will visually impaired users be given an increased capacity to intuitively access e-contents. This theory is implemented in a dynamic website for the visually impaired designed in this study. Designers should be aware of how screen-reading software works, to enable them to make reasonable adjustments or provide alternative content that still corresponds to the objective content, increasing the possibility of offering a faultless service to such users. The result of our research reveals that materials can be added to a content repository, or re-used from existing ones, by identifying the content types and then transforming them into a flexible and accessible form that fits the requirements of the visually impaired through our method (no-frills + agile methodology), rather than computing in advance or designing according to a given specification.
Mobile applications are becoming an integral part of daily life and of businesses' marketing plans. They are helpful in promoting a business and in attracting and retaining customers. Software testing is vital to ensure the delivery of high-quality mobile applications that can be accessed across different platforms and meet business and technical requirements. This paper proposes a web-based tool, namely Pons, for the distribution of pre-release mobile applications for the purpose of manual testing. Pons facilitates building, running, and manually testing Android applications directly in the browser. It gets developers and end users engaged in testing the applications in one place, alleviates the tester's burden of installing and maintaining testing environments, and provides a platform for developers to rapidly iterate on the software and integrate changes over time. Thus, it speeds up the pre-release testing process, reduces its cost and increases customer satisfaction.
Magnoni, L; The ATLAS collaboration; Sloper, J E
The ATLAS Trigger and Data Acquisition (TDAQ) infrastructure is responsible for filtering and transferring ATLAS experimental data from detectors to mass storage systems. It relies on a large, distributed computing environment composed of thousands of software applications running concurrently. In such a complex environment, information sharing is fundamental for controlling application behavior, error reporting and operational monitoring. During data-taking runs, the streams of messages sent by applications and the data published via information services are constantly monitored by experts to verify the correctness of running operations and to understand problematic situations. To simplify and improve system analysis and error detection tasks, we developed the TDAQ Analytics Dashboard, a web application that aims to collect, correlate and effectively visualize this real-time flow of information. The TDAQ Analytics Dashboard is composed of two main entities that reflect the twofold scope of the application. The fi...
Muñoz, Andrés; Botía, Juan A.
Multi-agent systems (MAS) are being adopted in multiple areas to deal with knowledge-based applications. On the other hand, Semantic Web technologies such as OWL and SWRL have been shown to be useful in managing knowledge and reasoning about it. This paper proposes an architecture based on these technologies to develop an intelligent parking management application, where agents interact to reach a consensus about the assignment of a parking area to a vehicle. Moreover, this paper tackles the inherent problem of the rise of conflicts in MAS by integrating an argumentation system called ASBO (part of our previous work) into the proposed architecture.
Mantas, V. M.; Liu, Z.; Pereira, A. J. S. C.
The full potential of Satellite Rainfall Estimates (SRE) can only be realized if timely access to the datasets is possible. Existing data distribution web portals are often focused on global products and offer limited customization options, especially for the purpose of routine regional monitoring. Furthermore, most online systems are designed to meet the needs of desktop users, limiting the compatibility with mobile devices. In response to the growing demand for SRE and to address the current limitations of available web portals a project was devised to create a set of freely available applications and services, available at a common portal that can: (1) simplify cross-platform access to Tropical Rainfall Measuring Mission Online Visualization and Analysis System (TOVAS) data (including from Android mobile devices), (2) provide customized and continuous monitoring of SRE in response to user demands and (3) combine data from different online data distribution services, including rainfall estimates, river gauge measurements or imagery from Earth Observation missions at a single portal, known as the Tropical Rainfall Measuring Mission (TRMM) Explorer. The TRMM Explorer project suite includes a Python-based web service and Android applications capable of providing SRE and ancillary data in different intuitive formats with the focus on regional and continuous analysis. The outputs include dynamic plots, tables and data files that can also be used to feed downstream applications and services. A case study in Southern Angola is used to describe the potential of the TRMM Explorer for SRE distribution and analysis in the context of ungauged watersheds. The development of a collection of data distribution instances helped to validate the concept and identify the limitations of the program, in a real context and based on user feedback. The TRMM Explorer can successfully supplement existing web portals distributing SRE and provide a cost-efficient resource to small and medium
McCann, M. P.
Using the STOQS Web Application for Access to in situ Oceanographic Data. Mike McCann, 7 August 2012. With increasing measurement and sampling capabilities of autonomous oceanographic platforms (e.g. Gliders, Autonomous Underwater Vehicles, Wavegliders), the need to efficiently access and visualize the data they collect is growing. The Monterey Bay Aquarium Research Institute has designed and built the Spatial Temporal Oceanographic Query System (STOQS) specifically to address this issue. The need for STOQS arises from inefficiencies discovered from using CF-NetCDF point observation conventions for these data. The problem is that access efficiency decreases with decreasing dimension of CF-NetCDF data. For example, the Trajectory Common Data Model feature type has only one coordinate dimension, usually time; positions of the trajectory (depth, latitude, longitude) are stored as non-indexed record variables within the NetCDF file. If client software needs to access data between two depth values or from a bounded geographic area, then the whole data set must be read and the selection made within the client software. This is very inefficient. What is needed is a way to easily select data of interest from an archive given any number of spatial, temporal, or other constraints. Geospatial relational database technology provides this capability. The full STOQS application consists of a Postgres/PostGIS database, Mapserver, and Python-Django running on a server, and Web 2.0 technology (jQuery, OpenLayers, Twitter Bootstrap) running in a modern web browser. The web application provides faceted search capabilities allowing a user to quickly drill into the data of interest. Data selection can be constrained by spatial, temporal, and depth selections as well as by parameter value and platform name. The web application layer also provides a REST (Representational State Transfer) Application Programming Interface allowing tools such as the Matlab stoqstoolbox to retrieve data
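The constrained-selection idea described above can be sketched in a few lines. This hypothetical example shows the kind of filtered query a spatial database answers with indexes, which a client would otherwise have to perform by reading an entire trajectory file:

```python
from dataclasses import dataclass

@dataclass
class Measurement:
    """One trajectory observation (hypothetical sketch, not STOQS's schema)."""
    time: float   # seconds since some epoch
    depth: float  # metres
    lat: float
    lon: float
    value: float  # the measured parameter

def select(measurements, depth_range=None, bbox=None):
    """Return only measurements inside the given depth range and
    (min_lon, min_lat, max_lon, max_lat) bounding box - the kind of
    selection a PostGIS-backed server performs with spatial indexes."""
    out = []
    for m in measurements:
        if depth_range and not (depth_range[0] <= m.depth <= depth_range[1]):
            continue
        if bbox and not (bbox[0] <= m.lon <= bbox[2]
                         and bbox[1] <= m.lat <= bbox[3]):
            continue
        out.append(m)
    return out

shallow = select(
    [Measurement(0.0, 5.0, 36.8, -121.9, 14.1),
     Measurement(1.0, 50.0, 36.8, -121.9, 13.2)],
    depth_range=(0.0, 10.0),
)
```

In STOQS itself this filtering happens server-side in SQL against indexed columns, which is what makes the faceted drill-down responsive; the Python loop here is only to make the semantics of the query concrete.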
Samwald, Matthias; Lim, Ernest; Masiar, Peter; Marenco, Luis; Chen, Huajun; Morse, Thomas; Mutalik, Pradeep; Shepherd, Gordon; Miller, Perry; Cheung, Kei-Hoi
The amount of biomedical data available in Semantic Web formats has been rapidly growing in recent years. While these formats are machine-friendly, user-friendly web interfaces allowing easy querying of these data are typically lacking. We present "Entrez Neuron", a pilot neuron-centric interface that allows for keyword-based queries against a coherent repository of OWL ontologies. These ontologies describe neuronal structures, physiology, mathematical models and microscopy images. The returned query results are organized hierarchically according to brain architecture. Where possible, the application makes use of entities from the Open Biomedical Ontologies (OBO) and the 'HCLS knowledgebase' developed by the W3C Interest Group for Health Care and Life Science. It makes use of the emerging RDFa standard to embed ontology fragments and semantic annotations within its HTML-based user interface. The application and underlying ontologies demonstrate how Semantic Web technologies can be used for information integration within a curated information repository and between curated information repositories. It also demonstrates how information integration can be accomplished on the client side, through simple copying and pasting of portions of documents that contain RDFa markup.
The Internet of Things (IoT) is evolving with connected objects at an unprecedented rate, bringing about enormous opportunities for future IoT applications as well as challenges. One of the major challenges is to handle the complexity generated by the interconnection of billions of objects. However, the Social Internet of Things (SIoT), emerging from the conglomeration of the IoT and social networks, has realized an efficient way to facilitate the development of complex future IoT applications. Nevertheless, to fully utilize the benefits of the SIoT, a platform that can provide efficient services using social relations among heterogeneous objects is highly required. The web-objects-enabled IoT environment promotes SIoT features by enabling virtualization using virtual objects and supporting modularity with microservices. To realize SIoT services, this article proposes an architecture that provides a foundation for the development of lightweight microservices based on socially connected web objects. To efficiently discover web objects and reduce the complexity of service provisioning processes, a social relationship model is presented. To realize interoperable service operations, a semantic ontology model has been developed. Finally, to evaluate the proposed design, a prototype has been implemented based on a use case scenario.
Zhang, Xiaojun; Yu, Ping; Yan, Jun; Hu, Hongxiang; Goureia, Niraj
This paper presents the preliminary findings of a case study of patients' acceptance and usage of a web self-service - an online appointment system - in a primary health care centre in a regional area of Australia. After two months of implementation, structured interviews were undertaken over three months to ascertain patients' perceptions of the web self-service application. The findings indicate that patients' acceptance of the web self-service application may be hindered by their relatively low computer ownership, inadequate computer skills and access to the internet, their preference for flexible personal communication when making appointments, and the inadequate flexibility of the appointment system compared to phone calls. Our preliminary findings suggest that more than half of the healthcare consumers in this area are likely to accept the PCEHR initiative; however, the decision makers of the PCEHR system need to carefully design the strategies and practices for the introduction of the innovation to overcome the substantial barriers to consumers' ability to access internet-based e-health solutions.
Nauman, Mohammad; Ali, Tamleek
Smartphones are increasingly being used to store personal information as well as to access sensitive data from the Internet and the cloud. Establishment of the identity of a user requesting information from smartphones is a prerequisite for secure systems in such scenarios. In the past, keystroke-based user identification has been successfully deployed on production-level mobile devices to mitigate the risks associated with naïve username/password based authentication. However, these approaches have two major limitations: they are not applicable to services where authentication occurs outside the domain of the mobile device - such as web-based services; and they often overly tax the limited computational capabilities of mobile devices. In this paper, we propose a protocol for keystroke dynamics analysis which allows web-based applications to make use of remote attestation and delegated keystroke analysis. The end result is an efficient keystroke-based user identification mechanism that strengthens traditional password protected services while mitigating the risks of user profiling by collaborating malicious web services.
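Keystroke dynamics analysis of the kind the protocol delegates to a remote service typically works on timing features. The sketch below (a hypothetical illustration, not the paper's protocol) extracts the standard dwell and flight times and compares a sample against an enrolled template:

```python
import math

def timing_features(press_times, release_times):
    """Dwell times (how long each key is held) and flight times (gaps
    between consecutive key presses) - the standard keystroke-dynamics
    feature vector. Times are in seconds."""
    dwell = [r - p for p, r in zip(press_times, release_times)]
    flight = [p2 - p1 for p1, p2 in zip(press_times, press_times[1:])]
    return dwell + flight

def matches_template(sample, template, threshold):
    """Accept the user if the sample is within `threshold` Euclidean
    distance of the enrolled template (a deliberately simple classifier;
    deployed systems use statistical or learned models)."""
    dist = math.sqrt(sum((a - b) ** 2 for a, b in zip(sample, template)))
    return dist <= threshold

# Three keystrokes: press and release timestamps for each key
feats = timing_features([0.00, 0.30, 0.55], [0.12, 0.41, 0.66])
```

In the remote-attestation setting described in the abstract, the feature extraction would run on the attested device while the template comparison is delegated to the web service, so the raw typing rhythm never needs to be profiled by untrusted parties.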
Italia, Michael J; Pennington, Jeffrey W; Ruth, Byron; Wrazien, Stacey; Loutrel, Jennifer G; Crenshaw, E Bryan; Miller, Jeffrey; White, Peter S
Biomedical researchers share a common challenge of making complex data understandable and accessible. This need is increasingly acute as investigators seek opportunities for discovery amidst an exponential growth in the volume and complexity of laboratory and clinical data. To address this need, we developed Harvest, an open source framework that provides a set of modular components to aid the rapid development and deployment of custom data discovery software applications. Harvest incorporates visual representations of multidimensional data types in an intuitive, web-based interface that promotes a real-time, iterative approach to exploring complex clinical and experimental data. The Harvest architecture capitalizes on standards-based, open source technologies to address multiple functional needs critical to a research and development environment, including domain-specific data modeling, abstraction of complex data models, and a customizable web client.
Dominkovics, Pau; Granell, Carlos; Pérez-Navarro, Antoni; Casals, Martí; Orcau, Angels; Caylà, Joan A
Health professionals and authorities strive to cope with heterogeneous data, services, and statistical models to support decision making on public health. Sophisticated analysis and distributed processing capabilities over geocoded epidemiological data are seen as driving factors to speed up control and decision making in these health risk situations. In this context, recent Web technologies and standards-based web services deployed on geospatial information infrastructures have rapidly become an efficient way to access, share, process, and visualize geocoded health-related information. Data used on this study is based on Tuberculosis (TB) cases registered in Barcelona city during 2009. Residential addresses are geocoded and loaded into a spatial database that acts as a backend database. The web-based application architecture and geoprocessing web services are designed according to the Representational State Transfer (REST) principles. These web processing services produce spatial density maps against the backend database. The results are focused on the use of the proposed web-based application to the analysis of TB cases in Barcelona. The application produces spatial density maps to ease the monitoring and decision making process by health professionals. We also include a discussion of how spatial density maps may be useful for health practitioners in such contexts. In this paper, we developed web-based client application and a set of geoprocessing web services to support specific health-spatial requirements. Spatial density maps of TB incidence were generated to help health professionals in analysis and decision-making tasks. The combined use of geographic information tools, map viewers, and geoprocessing services leads to interesting possibilities in handling health data in a spatial manner. In particular, the use of spatial density maps has been effective to identify the most affected areas and its spatial impact. This study is an attempt to demonstrate how web
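The spatial density maps described above are typically produced by kernel density estimation over geocoded case locations. The following is a hypothetical sketch of that computation with planar coordinates (a real geoprocessing service would project the addresses first and choose the bandwidth epidemiologically):

```python
import math

def density_surface(cases, grid, bandwidth):
    """Gaussian kernel density over geocoded case locations, evaluated at
    a set of grid points. `cases` and `grid` are (x, y) pairs in planar
    units; `bandwidth` controls smoothing. Unnormalized, for illustration."""
    def kernel(d):
        return math.exp(-0.5 * (d / bandwidth) ** 2)
    return [
        sum(kernel(math.hypot(gx - cx, gy - cy)) for cx, cy in cases)
        for gx, gy in grid
    ]

# Two nearby cases: the grid point at the cluster sees high density,
# the distant point sees essentially none.
dens = density_surface([(0.0, 0.0), (0.0, 0.1)],
                       [(0.0, 0.0), (5.0, 5.0)],
                       bandwidth=0.5)
```

Exposing such a computation as a REST geoprocessing service, as the paper does, lets the map viewer request a density raster for any area of interest without shipping the confidential case-level data to the client.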
Today, the Google Maps API, an Ajax-based standard web service, enables users to publish interactive web maps, thus opening new possibilities relative to classical analogue maps. CORINE land cover databases are recognized as fundamental reference data sets for numerous spatial analyses. The theoretical and applicable aspects of the Google Maps API cartographic service are considered in the case of creating a web map of change in urban areas in Belgrade and its surroundings from 2000 to 2006, obtained from CORINE databases.
Lang, Jeremy S.; Irving, James R.
Weaver, J. Curtis; Terziotti, Silvia; Kolb, Katharine R.; Wagner, Chad R.
A statewide StreamStats application for North Carolina was developed in cooperation with the North Carolina Department of Transportation following completion of a pilot application for the upper French Broad River basin in western North Carolina (Wagner and others, 2009). StreamStats for North Carolina, available at http://water.usgs.gov/osw/streamstats/north_carolina.html, is a Web-based Geographic Information System (GIS) application developed by the U.S. Geological Survey (USGS) in consultation with Environmental Systems Research Institute, Inc. (Esri) to provide access to an assortment of analytical tools that are useful for water-resources planning and management (Ries and others, 2008). The StreamStats application provides an accurate and consistent process that allows users to easily obtain streamflow statistics, basin characteristics, and descriptive information for USGS data-collection sites and user-selected ungaged sites. In the North Carolina application, users can compute 47 basin characteristics and peak-flow frequency statistics (Weaver and others, 2009; Robbins and Pope, 1996) for a delineated drainage basin. Selected streamflow statistics and basin characteristics for data-collection sites have been compiled from published reports and also are immediately accessible by querying individual sites from the web interface. Examples of basin characteristics that can be computed in StreamStats include drainage area, stream slope, mean annual precipitation, and percentage of forested area (Ries and others, 2008). Examples of streamflow statistics that were previously available only through published documents include peak-flow frequency, flow-duration, and precipitation data. These data are valuable for making decisions related to bridge design, floodplain delineation, water-supply permitting, and sustainable stream quality and ecology. The StreamStats application also allows users to identify stream reaches upstream and downstream from user-selected sites
Ferry Yudhitama Putra
English private lesson institutes now widely help people develop English speaking and writing skills. Currently, users book private English lessons manually, either by coming directly to the premises or through a telephone operator, who still has difficulty validating users, which takes a long time. To facilitate reservations, a system was built for the web and Android. The reservation application was developed with the PHP and Java programming languages, using the CodeIgniter framework on the web side and Eclipse tools on the Android side, with MySQL as the database storage medium. The application lets students of Easyspeak reserve a time and tutor; on the tutor's side, it provides information about the students to be taught; and on the operator's side, it eases the management of lesson bookings, because the process is computerized rather than manual as before. The application is also equipped with reminders, implemented on the Android side using the AlarmManager system.
Wages, Nolan A; Varhegyi, Nikole
In evaluating the performance of Phase I dose-finding designs, simulation studies are typically conducted to assess how often a method correctly selects the true maximum tolerated dose under a set of assumed dose-toxicity curves. A necessary component of the evaluation process is to have some concept for how well a design can possibly perform. The notion of an upper bound on the accuracy of maximum tolerated dose selection is often omitted from the simulation study, and the aim of this work is to provide researchers with accessible software to quickly evaluate the operating characteristics of Phase I methods using a benchmark. The non-parametric optimal benchmark is a useful theoretical tool for simulations that can serve as an upper limit for the accuracy of maximum tolerated dose identification based on a binary toxicity endpoint. It offers researchers a sense of the plausibility of a Phase I method's operating characteristics in simulation. We have developed an R shiny web application for simulating the benchmark. The web application has the ability to quickly provide simulation results for the benchmark and requires no programming knowledge. The application is free to access and use on any device with an Internet browser. The application provides the percentage of correct selection of the maximum tolerated dose and an accuracy index, operating characteristics typically used in evaluating the accuracy of dose-finding designs. We hope this software will facilitate the use of the non-parametric optimal benchmark as an evaluation tool in dose-finding simulation.
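The non-parametric optimal benchmark itself is simple to simulate: each virtual patient receives a latent tolerance that determines their toxicity outcome at every dose simultaneously (complete information), and the benchmark selects the dose whose observed toxicity rate is closest to the target. The sketch below illustrates this idea (a hypothetical illustration, not the shiny application's code):

```python
import random

def benchmark_trial(true_tox, target, n_patients, rng):
    """One simulated trial of the non-parametric optimal benchmark:
    each patient's latent tolerance u ~ U(0,1) determines toxicity at
    every dose (toxicity occurs at dose d iff u <= true_tox[d]), and the
    selected MTD is the dose whose observed toxicity rate is closest to
    the target. Returns the selected dose index."""
    tolerances = [rng.random() for _ in range(n_patients)]
    rates = [sum(u <= p for u in tolerances) / n_patients for p in true_tox]
    return min(range(len(true_tox)), key=lambda d: abs(rates[d] - target))

# Hypothetical dose-toxicity curve; the true MTD (target 0.25) is dose 3
# (index 2). Estimate the benchmark's percentage of correct selection.
rng = random.Random(42)
true_tox = [0.05, 0.12, 0.25, 0.40, 0.55]
n_sims = 2000
n_correct = sum(benchmark_trial(true_tox, 0.25, 30, rng) == 2
                for _ in range(n_sims))
pcs = n_correct / n_sims  # benchmark percentage of correct selection
```

Because the benchmark uses complete toxicity information that no actual design can observe, its percentage of correct selection serves as the upper reference against which a Phase I method's simulated accuracy is judged.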
In biomedical studies, patients are often evaluated numerous times and a large number of variables are recorded at each time-point. Data entry and manipulation of longitudinal data can be performed using spreadsheet programs, which usually include some data plotting and analysis capabilities and are straightforward to use, but are not designed for the analysis of complex longitudinal data. Specialized statistical software offers more flexibility and capabilities, but first-time users with a biomedical background often find its use difficult. We developed medplot, an interactive web application that simplifies the exploration and analysis of longitudinal data. The application can be used to summarize, visualize and analyze data by researchers who are not familiar with statistical programs and whose knowledge of statistics is limited. The summary tools produce publication-ready tables and graphs. The analysis tools include features that are seldom available in spreadsheet software, such as correction for multiple testing, repeated-measurement analyses and flexible non-linear modeling of the association of the numerical variables with the outcome. medplot is freely available and open source; it has an intuitive graphical user interface (GUI), it is accessible via the Internet and can be used within a web browser, without the need to install and maintain programs locally on the user's computer. This paper describes the application and gives detailed examples describing how to use the application on real data from a clinical study including patients with early Lyme borreliosis.
Eguchi, S.; Kawasaki, W.; Shirasaki, Y.; Komiya, Y.; Kosugi, G.; Ohishi, M.; Mizumoto, Y.
van Nas, Atila
The Systems Genetics Resource (SGR; http://systems.genetics.ucla.edu) is a new open-access web application and database that contains genotypes and clinical and intermediate phenotypes from both human and mouse studies. The mouse data include studies using crosses between specific inbred strains and studies using the Hybrid Mouse Diversity Panel (HMDP). SGR is designed to assist researchers studying genes and pathways contributing to complex disease traits, including obesity, diabetes, atherosclerosis, heart failure, osteoporosis, and lipoprotein metabolism. Over the next few years, we hope to add data relevant to deafness, addiction, hepatic steatosis, toxin responses, and vascular injury. The intermediate phenotypes include expression array data for a variety of tissues and cultured cells, metabolite levels, and protein levels. Pre-computed tables of genetic loci controlling intermediate and clinical phenotypes, as well as phenotype correlations, are accessed via a user-friendly web interface. The web site includes detailed protocols for all of the studies. Data from published studies are freely available; unpublished studies have restricted access during their embargo period.
Kogawa, Noriko; Ito, Reiko; Gon, Yasuhiro; Maruoka, Shuichiro; Hashimoto, Shu
Instruction on inhalation techniques for chronic obstructive pulmonary disease (COPD) and asthma patients being treated with inhalants is important to achieve sufficient therapeutic effects and to maintain adherence. However, problems continue to exist, including the time constraints of medical staff who have a large number of patients and a lack of knowledge of inhalation instruction methods. A web application, "Inhalation Lessons," has been developed for the iPad. It explains inhalation methods and consists of videos and review tests. Instruction on inhalation techniques was performed using this application for patients who use a Diskus, and the effects were examined. As a result, there were significant improvements in the inhalation techniques of patients after viewing the "Inhalation Lessons" application. Uniform instruction on inhalation techniques can be performed even in the field of home care.
Leader, David P; Milner-White, E James
Small loop-shaped motifs are common constituents of the three-dimensional structure of proteins. Typically they comprise between three and seven amino acid residues, and are defined by a combination of dihedral angles and hydrogen bonding partners. The most abundant of these are alphabeta-motifs, asx-motifs, asx-turns, beta-bulges, beta-bulge loops, beta-turns, nests, niches, Schellmann loops, ST-motifs, ST-staples and ST-turns. We have constructed a database of such motifs from a range of high-quality protein structures and built a web application as a visual interface to this. The web application, Motivated Proteins, provides access to these 12 motifs (with 48 sub-categories) in a database of over 400 representative proteins. Queries can be made for specific categories or sub-categories of motif, motifs in the vicinity of ligands, motifs which include part of an enzyme active site, overlapping motifs, or motifs which include a particular amino acid sequence. Individual proteins can be specified, or, where appropriate, motifs for all proteins listed. The results of queries are presented in textual form as an (X)HTML table, and may be saved as parsable plain text or XML. Motifs can be viewed and manipulated either individually or in the context of the protein in the Jmol applet structural viewer. Cartoons of the motifs imposed on a linear representation of protein secondary structure are also provided. Summary information for the motifs is available, as are histograms of amino acid distribution, and graphs of dihedral angles at individual positions in the motifs. Motivated Proteins is a publicly and freely accessible web application that enables protein scientists to study small three-dimensional motifs without requiring knowledge of either Structured Query Language or the underlying database schema.
Weaver, Steven; Shank, Stephen D; Spielman, Stephanie J; Li, Michael; Muse, Spencer V; Kosakovsky Pond, Sergei L
Rossi, Lorenzo; Margola, Lorenzo; Manzelli, Vacia; Bandera, Alessandra
wHospital is the result of an information technology research project based on a web-based application for managing hospital drug dispensing. A key part of the wHospital backbone, and its key distinguishing characteristic, is the adoption of the digital signature system initially deployed by the Government of Lombardia, a Northern Italy region, through the distribution of smart cards to all healthcare and hospital staff. The developed system is a web-based application with a proposed Health Records Digital Signature (HReDS) handshake to comply with national law and with the Joint Commission International Standards. The prototype application, for a single hospital Operative Unit (OU), focused on data and process management related to drug therapy. Following a multi-faceted selection process, the Infective Disease OU of the Hospital in Busto Arsizio, Lombardia, was chosen for the development and prototype implementation. The project lead time, from user requirement analysis to training and deployment, was approximately 8 months. This paper highlights the applied project methodology, the system architecture, and the preliminary results achieved.
Subash C.B. Gopinath
Systematic Evolution of Ligands by EXponential enrichment (SELEX) is the method used to select specific aptamers against a wide range of targets. For this process, the initial library usually contains random sequences ranging in length from ∼25 to over 100 bases. Lengthy sequences have disadvantages: they are difficult to prepare, less stable, and expensive. It is therefore wise to prefer a shorter version of an aptamer for a wide range of applications, including drug delivery. It is common practice to shorten a full-length aptamer by mapping analyses, but this is tedious. Here, we used a crawling method to shorten aptamers by sequential deletion of bases from both the 5′ and 3′ ends, assisted by the Mfold web server application. Two different aptamers of varied lengths (randomized regions of 30 and 74 bases) were chosen for this study, generated against Influenza A/Panama/2007/1999 (H3N2) and the gD protein of Herpes Simplex Virus-1. It was found that shortening the aptamer length by the crawling pattern is possible with the assistance of the Mfold web server application. The obtained results resemble the shortened aptamers derived by mapping analyses. The proposed strategy is recommended for predicting shorter aptamers without any wet-lab experimental work.
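The crawling strategy described above can be sketched programmatically. The following fragment is an illustrative sketch, not the authors' code, and the aptamer sequence is made up: it enumerates truncation candidates by stepwise deletion from both ends, which in practice would each be folded (e.g. with Mfold) to check that the core secondary structure is retained.

```python
# Illustrative sketch of the "crawling" truncation strategy (not the
# authors' code): enumerate shortened aptamer candidates by stepwise
# deletion of bases from the 5' and 3' ends. Real work would fold each
# candidate (e.g. with Mfold); here we only generate the variants.

def crawl_truncations(seq, step=4, min_len=20):
    """Yield (bases_cut_5prime, bases_cut_3prime, subsequence) tuples."""
    n = len(seq)
    for cut5 in range(0, n, step):
        for cut3 in range(0, n - cut5, step):
            sub = seq[cut5:n - cut3]
            if len(sub) >= min_len:
                yield cut5, cut3, sub

aptamer = "GGGAGCTCAGAATAAACGCTCAAGTCGTTGC"  # made-up 31-mer, for illustration
variants = list(crawl_truncations(aptamer))
```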
Newton, Richard; Hinds, Jason; Wernisch, Lorenz
Whole genome DNA microarray genomotyping experiments compare the gene content of different species or strains of bacteria. A statistical approach to analysing the results of these experiments was developed, based on a Hidden Markov model (HMM), which takes adjacency of genes along the genome into account when calling genes present or absent. The model was implemented in the statistical language R and applied to three datasets. The method is numerically stable with good convergence properties. Error rates are reduced compared with approaches that ignore spatial information. Moreover, the HMM circumvents a problem encountered in a conventional analysis: determining the cut-off value to use to classify a gene as absent. An Apache Struts web interface for the R script was created for the benefit of users unfamiliar with R. The application may be found at http://hmmgd.cryst.bbk.ac.uk/hmmgd. The source code illustrating how to run R scripts from an Apache Struts-based web application is available from the corresponding author on request. The application is also available for local installation if required.
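The idea behind the HMM, that adjacent genes tend to share presence/absence status and so should be called jointly rather than by thresholding each log-ratio independently, can be sketched as follows. The Gaussian emission model, the means, and the transition probability below are invented for illustration and are not the paper's fitted parameters.

```python
import math

# Illustrative two-state HMM sketch (not the paper's R code): Viterbi
# decoding of present/absent calls from log-ratios, where a high STAY
# probability encodes that neighbouring genes rarely switch status.
# Emission means, SD and STAY are invented parameters.

STATES = ("present", "absent")
MEANS = {"present": 0.0, "absent": -2.0}   # assumed mean log2-ratios
SD = 0.7                                    # assumed common std deviation
STAY = 0.95                                 # probability of keeping the same state

def log_gauss(x, mu, sd):
    return -0.5 * math.log(2 * math.pi * sd * sd) - (x - mu) ** 2 / (2 * sd * sd)

def trans(a, b):
    return math.log(STAY if a == b else 1 - STAY)

def viterbi_calls(log_ratios):
    """Most likely present/absent call for each gene, in genome order."""
    score = {s: math.log(0.5) + log_gauss(log_ratios[0], MEANS[s], SD) for s in STATES}
    back = []
    for x in log_ratios[1:]:
        new_score, ptr = {}, {}
        for s in STATES:
            prev = max(STATES, key=lambda p: score[p] + trans(p, s))
            ptr[s] = prev
            new_score[s] = score[prev] + trans(prev, s) + log_gauss(x, MEANS[s], SD)
        back.append(ptr)
        score = new_score
    state = max(STATES, key=score.get)
    path = [state]
    for ptr in reversed(back):
        state = ptr[state]
        path.append(state)
    return path[::-1]
```

With well-separated emission means, a single discordant log-ratio flips a call only when its evidence outweighs the two transition penalties, which is precisely the smoothing effect that removes the need for a hard cut-off.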
Wolf, N.; Fuchsgruber, V.; Riembauer, G.; Siegmund, A.
Satellite images have great educational potential for teaching on environmental issues and can promote the motivation of young people to enter careers in natural science and technology. Due to the importance and ubiquity of remote sensing in science, industry and the public sphere, the use of satellite imagery has been included in many school curricula in Germany. However, its implementation in school practice is still hesitant, mainly due to a lack of teachers' know-how and of education materials that align with the curricula. In the project "Space4Geography", a web-based learning platform is being developed with the aim of facilitating the application of satellite imagery in secondary school teaching and of fostering effective student learning experiences in geography and other related subjects in an interdisciplinary way. The platform features ten learning modules demonstrating the exemplary application of original high-spatial-resolution remote sensing data (RapidEye and TerraSAR-X) to examine current environmental issues such as droughts, deforestation and urban sprawl. In this way, students will be introduced to the versatile applications of spaceborne earth observation and geospatial technologies. The integrated web-based remote sensing software "BLIF" equips students with a toolset to explore, process and analyze the satellite images, thereby fostering their competence to work on geographical and environmental questions without requiring prior knowledge of remote sensing. This contribution presents the educational concept of the learning environment and its realization, using the example of the learning module "Deforestation of the rainforest in Brazil".
Dorsch, Michael P; Farris, Karen B; Bleske, Barry E; Koelling, Todd M
The objective of this study was to determine if a Web application that promoted mindfulness of the progress of the chronic disease through self-monitoring improved quality of life in heart failure. This was a prospective single-center single-group study. Participants were instructed how to use the Web application and to perform self-monitoring daily for 12 weeks. A comprehensive physical exam, assessment of New York Heart Association (NYHA) class, the Minnesota Living with Heart Failure Questionnaire (MLHFQ), and an evaluation of self-management were performed in person at baseline and at 12 weeks. Participants consisted of older (mean, 59 years), predominantly female (63%) adults with NYHA class II or III symptoms. NYHA classification (preintervention versus postintervention, 2.5±0.13 versus 2.0±0.13; p=0.0032) and MLHFQ score (55.7±4.6 versus 42.6±5.1, respectively; p=0.0078) improved over 12 weeks of self-monitoring. A trend toward improvement was also demonstrated in weight (preintervention versus postintervention, 209±9.6 pounds versus 207±9.4 pounds; by paired t test, p=0.389), number of times exercised per week (1.29±0.5 versus 2.5±0.6, respectively; p=0.3), and walk distance (572±147 yards versus 845±187 yards, respectively; p=0.119). Jugular venous distention (preintervention versus postintervention, 8.1±0.6 cm versus 6.7±0.3 cm; p=0.083) and peripheral edema (29.2% versus 16.7%, respectively; p=0.375) decreased after 12 weeks of self-monitoring via the Web application. A Web application for self-monitoring heart failure over 12 weeks improved both NYHA classification and MLHFQ score. The trend in improved physical activity and physical exam support these outcomes. The number of patients reporting a sodium-restricted diet increased over the 12 weeks, which may have led to the positive findings.
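The before/after comparisons above rest on paired tests; as a minimal sketch, the paired t statistic can be computed as below. The NYHA-style scores are invented for illustration and are not the study's data.

```python
import math
import statistics

# Minimal sketch of the paired t statistic underlying the reported
# preintervention-versus-postintervention p-values. The scores below
# are invented and are not the study's data.

def paired_t(before, after):
    """t statistic for paired samples: mean difference over its standard error."""
    diffs = [b - a for b, a in zip(before, after)]
    se = statistics.stdev(diffs) / math.sqrt(len(diffs))
    return statistics.mean(diffs) / se

nyha_before = [3, 2, 3, 2, 3, 3]   # hypothetical NYHA classes at baseline
nyha_after  = [2, 2, 2, 1, 2, 3]   # hypothetical classes after 12 weeks
t_stat = paired_t(nyha_before, nyha_after)
```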
Attali, Dean; Bidshahri, Roza; Haynes, Charles; Bryan, Jennifer
Droplet digital polymerase chain reaction (ddPCR) is a novel platform for exact quantification of DNA which holds great promise in clinical diagnostics. It is increasingly popular due to its digital nature, which provides more accurate quantification and higher sensitivity than traditional real-time PCR. However, clinical adoption has been slowed in part by the lack of software tools available for analyzing ddPCR data. Here, we present ddpcr - a new R package for ddPCR visualization and analysis. In addition, ddpcr includes a web application (powered by the Shiny R package) that allows users to analyze ddPCR data using an interactive graphical interface.
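The quantification step that such tools automate rests on a Poisson correction: the fraction of negative droplets gives the mean number of template copies per droplet, which is then scaled by droplet volume. This sketch is not part of the ddpcr package, and the droplet volume is the commonly quoted nominal value, treated here as an assumption.

```python
import math

# Poisson correction at the heart of ddPCR quantification (illustrative
# sketch, not ddpcr package code). The droplet volume is the commonly
# quoted nominal value, treated here as an assumption.

DROPLET_VOLUME_UL = 0.00085  # ~0.85 nL per droplet (assumed)

def copies_per_ul(positive, total):
    """Estimated template concentration from droplet counts."""
    if positive >= total:
        raise ValueError("saturated: no negative droplets, cannot quantify")
    lam = -math.log((total - positive) / total)  # mean copies per droplet
    return lam / DROPLET_VOLUME_UL

conc = copies_per_ul(4000, 15000)  # e.g. 4000 positives of 15000 accepted droplets
```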
Aye, Zar Chi; Nicolet, Pierrick; Jaboyedoff, Michel; Derron, Marc-Henri; Gerber, Christian; Lévy, Sebastien
Following changes in the Swiss subsidy system in January 2008, the Swiss cantons and the Federal Office for the Environment (FOEN) were forced to prioritize different natural hazard protection projects based on their cost-effectiveness, as a response to limited financial resources (Bründl et al., 2009). For this purpose, applications such as EconoMe (OFEV, 2016) and Valdorisk (DGE, 2016) were developed for risk evaluation and the prioritization of mitigation projects. These tools serve as a useful decision-making instrument for the community of practitioners and the authorities responsible for natural hazard risk management in Switzerland. However, several aspects could be improved, in particular the integration and interactive visualization of spatial information through a web-GIS interface for better risk planning and evaluation. Therefore, in this study, we aim to develop an interactive web-GIS application based on the risk concepts applied in Switzerland. The purpose of this tool is to provide a rapid evaluation of risk before and after protection measures, and to test the efficiency of measures by using a simplified cost-benefit analysis within the context of different protection projects. This application allows users to integrate the different layers which are necessary to calculate risk, in particular hazard intensity (vector) maps for different scenarios (such as return periods of 30, 100 and 300 years, based on Swiss guidelines), exposed objects (such as buildings) and vulnerability information for these objects. Based on the provided information and additional parameters, risk is calculated automatically and the results are visualized within the web-GIS interface of the application. Users can modify this input information and these parameters to create different risk scenarios. Based on the resulting risk scenarios, users can propose and visualize (preliminary) risk reduction measures before realizing the actual design and dimensions of such protective
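The simplified risk arithmetic such a tool automates can be sketched as follows. This is an illustrative sketch, not the EconoMe or Valdorisk formulas, and all probabilities, object values, vulnerabilities and costs are invented.

```python
# Illustrative sketch (not EconoMe/Valdorisk): annual risk as the
# frequency-weighted sum, over hazard scenarios, of expected damage to
# exposed objects; a measure is judged by risk reduction per unit of
# annual cost. All numbers below are invented.

def annual_risk(scenarios):
    """scenarios: list of (annual_probability, [(object_value, vulnerability), ...])."""
    return sum(prob * sum(value * vuln for value, vuln in objects)
               for prob, objects in scenarios)

before = [
    (1 / 30,  [(500_000, 0.30), (800_000, 0.10)]),  # frequent, moderate scenario
    (1 / 300, [(500_000, 0.80), (800_000, 0.50)]),  # rare, intense scenario
]
after = [
    (1 / 30,  [(500_000, 0.05), (800_000, 0.02)]),  # same scenarios with measure
    (1 / 300, [(500_000, 0.40), (800_000, 0.20)]),
]

risk_reduction = annual_risk(before) - annual_risk(after)
cost_effectiveness = risk_reduction / 50_000  # assumed annual cost of the measure
```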
Oostra, D.; Chambers, L. H.; Lewis, P. M.; Moore, S. W.
The Atmospheric Science Data Center (ASDC) at the NASA Langley Research Center in Virginia houses almost three petabytes of data, a collection that increases every day. To put it into perspective, it is estimated that three petabytes of data storage could store a digitized copy of all printed material in U.S. research libraries. There are more than ten other NASA data centers like the ASDC. Scientists and the public use this data for research, science education, and to understand our environment. Most importantly, these data provide the potential for all of us to make new discoveries. NASA is about making discoveries. Galileo was quoted as saying, "All discoveries are easy to understand once they are discovered. The point is to discover them." To that end, NASA stores vast amounts of publicly available data. This paper examines an approach to create web applications that serve NASA data in ways that specifically address the mobile web application technologies that are quickly emerging. Mobile data is not a new concept. What is new is that user-driven tools have recently become available that allow users to create their own mobile applications. Through the use of these cloud-based tools, users can produce complete native mobile applications. Thus, mobile apps can now be created by everyone, regardless of their programming experience or expertise. This work will explore standards and methods for creating dynamic and malleable application programming interfaces (APIs) that allow users to access and use NASA science data for their own needs. The focus will be on experiences that broaden and increase the scope and usage of NASA science data sets.
Alberti, Koko; Hiemstra, Paul; de Jong, Kor; Karssenberg, Derek
Numerical ensemble models are used in the analysis and forecasting of a wide range of environmental processes. Common use cases include assessing the consequences of nuclear accidents, pollution releases into the ocean or atmosphere, forest fires, or volcanic eruptions, and identifying areas at risk from such hazards. In addition to the increased use of scenario analyses and model forecasts, the availability of supplementary data describing errors and model uncertainties is increasingly commonplace. Unfortunately, most current visualization routines are not capable of properly representing uncertain information. As a result, uncertainty information is either not provided at all, not readily accessible, or not communicated effectively to model users such as domain experts, decision makers, policy makers, or novice users. In an attempt to address these issues, a lightweight and interactive web application has been developed. It makes clear and concise uncertainty visualizations available in a web-based mapping and visualization environment, incorporating aggregation (upscaling) techniques to adjust uncertainty information to the zoom level. The application has been built on a web mapping stack of open source software, and can quantify and visualize uncertainties in numerical ensemble models in such a way that both expert and novice users can investigate the uncertainties present in a simple ensemble dataset. As a test case, a dataset was used which forecasts the spread of an airborne tracer across Western Europe. Extrinsic uncertainty representations are used, in which dynamic circular glyphs are overlaid on model attribute maps to convey various uncertainty concepts. The application supports both basic uncertainty metrics, such as standard deviation, standard error, width of the 95% confidence interval and interquartile range, and more experimental ones aimed at novice users. Ranges of attribute values can be specified, and the circular glyphs dynamically change size to
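The basic per-cell uncertainty metrics listed above can be computed directly from ensemble members. In this illustrative sketch the eight-member ensemble for a single grid cell is invented, and the 95% confidence interval of the mean uses a normal approximation (z = 1.96).

```python
import statistics

# Per-cell uncertainty metrics of the kind the glyphs encode, computed
# from ensemble members. The ensemble values are invented; the 95%
# interval uses a normal approximation.

def uncertainty_metrics(members):
    n = len(members)
    sd = statistics.stdev(members)
    se = sd / n ** 0.5
    quartiles = statistics.quantiles(members, n=4)
    return {
        "std": sd,
        "stderr": se,
        "ci95_width": 2 * 1.96 * se,       # normal approximation
        "iqr": quartiles[2] - quartiles[0],
    }

cell = [3.1, 2.8, 3.5, 2.9, 3.3, 3.0, 3.6, 2.7]  # tracer concentration members
m = uncertainty_metrics(cell)
```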
Granitz, Neil; Koernig, Stephen K.
Although both experiential learning and Web 2.0 tools focus on creativity, sharing, and collaboration, sparse research has been published integrating a Web 2.0 paradigm with experiential learning in marketing. In this article, Web 2.0 concepts are explained. Web 2.0 is then positioned as a philosophy that can advance experiential learning through…
Short rotation woody crops (SRWC), such as hybrid poplar, have the potential to serve as a valuable feedstock for cellulosic biofuels. Spatial estimates of biomass yields under different management regimes are required for assisting stakeholders in making better management decisions and to establish viable woody cropping systems for biofuel production. To support stakeholders in their management decisions, we have developed a GIS-based web interface using a modified 3PG model for spatially predicting poplar biomass yields under different management and climate conditions in the U.S. Pacific Northwest region. The application is implemented with standard HTML5 components, allowing its use in a modern browser and dynamically adjusting to the client screen size and device. In addition, cloud storage of the results makes them accessible on any Internet-enabled device. The web interface appears simple, but is powerful in parameter manipulation and in visualizing and sharing the results. Overall, this application comprises dynamic features that enable users to run SRWC crop growth simulations based on GIS information and contributes significantly to choosing appropriate feedstock growing locations, anticipating the desired physiological properties of the feedstock and incorporating the management and policy analysis needed for growing hybrid poplar plantations.
Cheung, Kei-Hoi; Hager, Janet; Pan, Deyun; Srivastava, Ranjana; Mane, Shrikant; Li, Yuli; Miller, Perry; Williams, Kenneth R
We have developed a universal web server application (KARMA) that allows comparison and annotation of user-defined pairs of microarray platforms based on diverse types of genome annotation data (across different species) collected from multiple sources. The application is an effective tool for diverse microarray platforms, including arrays that are provided by (i) the Keck Microarray Resource at Yale, (ii) commercially available Affymetrix GeneChips and spotted arrays and (iii) custom arrays made by individual academics. The tool provides a web interface that allows users to input pairs of test files that represent diverse array platforms for either single or multiple species. The program dynamically identifies analogous DNA fragments spotted or synthesized on multiple microarray platforms based on the following types of information: (i) NCBI-Unigene identifiers, if the platforms being compared are within the same species or (ii) NCBI-Homologene data, if they are cross-species. The single-species comparison is implemented based on set operations: intersection, union and difference. Other forms of retrievable annotation data, including LocusLink, SwissProt and Gene Ontology (GO), are collected from multiple remote sites and stored in an integrated fashion using an Oracle database. The KARMA database, which is updated periodically, is available on line at the following URL: http://ymd.med.yale.edu/karma/cgi-bin/karma.pl.
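The single-species comparison described above reduces to standard set operations over shared identifiers; a minimal sketch using invented UniGene-style IDs (not KARMA code):

```python
# Intersection, union and difference over shared identifiers, as in the
# single-species platform comparison. The IDs below are invented.

platform_a = {"Hs.1234", "Hs.5678", "Hs.9012", "Hs.3456"}
platform_b = {"Hs.5678", "Hs.9012", "Hs.7777"}

common   = platform_a & platform_b   # intersection: measurable on both
combined = platform_a | platform_b   # union: covered by either platform
a_only   = platform_a - platform_b   # difference: unique to platform A
```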
The NASA GSFC Space Weather Center (http://swc.gsfc.nasa.gov) is committed to providing forecasts, alerts, research, and educational support to address NASA's space weather needs - in addition to the needs of the general space weather community. We provide a host of services including spacecraft anomaly resolution, historical impact analysis, real-time monitoring and forecasting, custom space weather alerts and products, weekly summaries and reports, and most recently - video casts. There are many challenges in providing accurate descriptions of past, present, and expected space weather events - and the Space Weather Center at NASA GSFC employs several innovative solutions to provide access to a comprehensive collection of both observational data, as well as space weather model/simulation data. We'll describe the challenges we've faced with managing hundreds of data streams, running models in real-time, data storage, and data dissemination. We'll also highlight several systems and tools that are utilized by the Space Weather Center in our daily operations, all of which are available to the general community as well. These systems and services include a web-based application called the Integrated Space Weather Analysis System (iSWA http://iswa.gsfc.nasa.gov), two mobile space weather applications for both IOS and Android devices, an external API for web-service style access to data, google earth compatible data products, and a downloadable client-based visualization tool.
Elmeligy Abdelhamid, Sherif H; Kuhlman, Chris J; Marathe, Madhav V; Mortveit, Henning S; Ravi, S S
Discrete dynamical systems are used to model various realistic systems in network science, from social unrest in human populations to regulation in biological networks. A common approach is to model the agents of a system as vertices of a graph, and the pairwise interactions between agents as edges. Agents are in one of a finite set of states at each discrete time step and are assigned functions that describe how their states change based on neighborhood relations. Full characterization of state transitions of one system can give insights into fundamental behaviors of other dynamical systems. In this paper, we describe a discrete graph dynamical systems (GDSs) application called GDSCalc for computing and characterizing system dynamics. It is an open access system that is used through a web interface. We provide an overview of GDS theory. This theory is the basis of the web application; i.e., an understanding of GDS provides an understanding of the software features, while abstracting away implementation details. We present a set of illustrative examples to demonstrate its use in education and research. Finally, we compare GDSCalc with other discrete dynamical system software tools. Our perspective is that no single software tool will perform all computations that may be required by all users; tools typically have particular features that are more suitable for some tasks. We situate GDSCalc within this space of software tools.
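The GDS setup described above (states on vertices, neighborhood-based update functions, synchronous time steps) can be illustrated with a minimal sketch. The graph and the threshold rule below are invented for illustration and are not GDSCalc code.

```python
# Minimal synchronous graph dynamical system: Boolean states on vertices,
# each vertex updating by a majority rule over its closed neighborhood.
# Graph and rule are invented; this is not GDSCalc code.

GRAPH = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}  # adjacency lists

def step(state):
    """One synchronous update: a vertex becomes 1 iff a strict majority of
    its closed neighborhood (itself plus neighbors) is 1."""
    new = {}
    for v, nbrs in GRAPH.items():
        ones = state[v] + sum(state[u] for u in nbrs)
        new[v] = 1 if 2 * ones > len(nbrs) + 1 else 0
    return new

def orbit(state, max_steps=16):
    """Iterate until a previously seen state recurs (fixed point or cycle)."""
    seen = [state]
    for _ in range(max_steps):
        state = step(state)
        if state in seen:
            return seen + [state]
        seen.append(state)
    return seen
```

Enumerating such orbits over all 2^n initial states is exactly the kind of full characterization of the state-transition structure that the tool computes.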
Dereeper, Alexis; Argout, Xavier; Billot, Claire; Rami, Jean-François; Ruiz, Manuel
Simple Sequence Repeats (SSRs), or microsatellites, are among the most powerful genetic markers known. A common method for the development of SSR markers is the construction of genomic DNA libraries enriched for SSR sequences, followed by DNA sequencing. However, designing optimal SSR markers from bulk sequence data is a laborious and time-consuming process. SAT (SSR Analysis Tool) is a user-friendly Web application developed to minimize tedious manual operations and reduce errors. This tool facilitates the integration, analysis and display of sequence data from SSR-enriched libraries. SAT is designed to successively perform base calling and quality evaluation of chromatograms, eliminate cloning vector, adaptors and low quality sequences, detect chimera or partially digested sequences, search for SSR motifs, cluster and assemble the redundant sequences, and design SSR primer pairs. An additional virtual PCR step establishes primer specificity. Users may modify the different parameters of each step of the SAT analysis. Although certain steps are compulsory, such as SSR motifs search and sequence assembly, users do not have to run the entire pipeline, and they can choose selectively which steps to perform. A database allows users to store and query results, and to redo individual steps of the workflow. The SAT Web application is available at http://sat.cirad.fr/sat, and a standalone command-line version is also freely downloadable. Users must send an email to the SAT administrator firstname.lastname@example.org to request a login and password.
Disaster management is the responsibility of the central government and local governments. The principles of disaster management include speed and precision, prioritization, coordination and cohesion, and efficiency and effectiveness. The help most needed by affected communities is logistical assistance covering people's everyday needs, such as food, instant noodles, fast food, blankets, and mattresses. Logistical assistance is essential for disaster management, especially in times of disaster. The support of logistical assistance must be timely and delivered to the right location, target, quality, quantity, and needs. The purpose of this study is to build a web application to monitor the logistics distribution of disaster relief using the CodeIgniter framework. Through this application, the mechanisms of aid delivery to the disaster site will be easily controlled.
O'Halloran, Damien M
Overlapping PCR is routinely used in a wide number of molecular applications. These include stitching PCR fragments together, generating fluorescent transcriptional and translational fusions, inserting mutations, making deletions, and PCR cloning. Overlapping PCR is also used for genotyping by traditional PCR techniques and in detection experiments using techniques such as loop-mediated isothermal amplification (LAMP). STITCHER is a web tool providing a central resource for researchers conducting all types of overlapping PCR experiments, with an intuitive interface for automated primer design that is fast, easy to use, and freely available online (http://ohalloranlab.net/STITCHER.html). STITCHER can handle both single-sequence and multi-sequence input, and specific features facilitate numerous other PCR applications, including assembly PCR, adapter PCR, and primer walking. Field PCR, and in particular LAMP, offers promise as an on-site tool for pathogen detection in underdeveloped areas, and STITCHER includes off-target detection features for pathogens commonly targeted using LAMP technology.
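The core trick in stitching two fragments by overlap PCR can be sketched as follows. This is an illustrative fragment, not STITCHER's algorithm, and the sequences and primer lengths are invented: the reverse primer for fragment A carries a 5′ tail complementary to the start of fragment B, so the two first-round products overlap and can be fused in a second round.

```python
# Illustrative sketch (not STITCHER's algorithm) of designing a stitching
# reverse primer: a 5' tail encoding the start of fragment B is prepended
# to a region that anneals to the 3' end of fragment A. Sequences and
# lengths below are invented.

COMP = str.maketrans("ACGT", "TGCA")

def revcomp(seq):
    """Reverse complement of a DNA sequence."""
    return seq.translate(COMP)[::-1]

def stitching_reverse_primer(frag_a, frag_b, anneal_len=18, tail_len=15):
    """Reverse primer on frag_a whose 5' tail encodes the start of frag_b."""
    anneal = revcomp(frag_a[-anneal_len:])  # anneals to the 3' end of A
    tail = revcomp(frag_b[:tail_len])       # becomes the A/B junction
    return tail + anneal

frag_a = "ATGGCTAGCTAGGATCCAAGCTTGGCTAA"   # made-up fragment A
frag_b = "ATGGTACCGAGCTCGAATTCACTGG"       # made-up fragment B
primer = stitching_reverse_primer(frag_a, frag_b)
```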
Knoll, P; Höll, K; Mirzaei, S; Koriska, K; Köhn, H
At present, medical applications applying World Wide Web (WWW) technology are mainly used to view static images and to retrieve information. The Java platform is a relatively new way of computing, especially designed for network computing and distributed applications, which enables interactive connection between user and information via the WWW. The Java 2 Software Development Kit (SDK), including the Java2D API, Java Remote Method Invocation (RMI) technology, Object Serialization and the Java Advanced Imaging (JAI) extension, was used to achieve a robust, platform-independent and network-centric solution. Medical image processing software based on this technology is presented, and the adequate performance capability of Java is demonstrated by an iterative reconstruction algorithm for single photon emission computerized tomography (SPECT).
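Iterative reconstruction of the kind demonstrated is commonly implemented as MLEM (maximum-likelihood expectation maximization), whose multiplicative update is x ← x · Aᵀ(y/Ax) / Aᵀ1. The toy sketch below uses an invented 3×3 system matrix and noiseless data; it is illustrative only and unrelated to the paper's Java implementation.

```python
# Toy MLEM sketch (a standard emission-tomography algorithm; not the
# paper's Java code). System matrix A, true activity x_true and the
# projections y are invented; y is noiseless, so MLEM should recover
# projections that match y closely.

A = [
    [1.0, 0.5, 0.0],
    [0.5, 1.0, 0.5],
    [0.0, 0.5, 1.0],
]
x_true = [2.0, 1.0, 3.0]
y = [sum(a * xt for a, xt in zip(row, x_true)) for row in A]  # noiseless data

def mlem(A, y, iters=1000):
    """Multiplicative MLEM update: x <- x * A^T(y/Ax) / A^T(1)."""
    n = len(A[0])
    x = [1.0] * n
    sens = [sum(row[j] for row in A) for j in range(n)]            # A^T 1
    for _ in range(iters):
        proj = [sum(a * xj for a, xj in zip(row, x)) for row in A] # A x
        ratio = [yi / pi for yi, pi in zip(y, proj)]               # y / Ax
        back = [sum(A[i][j] * ratio[i] for i in range(len(A)))     # A^T ratio
                for j in range(n)]
        x = [xj * bj / sj for xj, bj, sj in zip(x, back, sens)]
    return x

x_hat = mlem(A, y)
```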
Domínguez, César; Heras, Jónathan; Mata, Eloy; Pascual, Vico
Fungi have diverse biotechnological applications in agriculture, bioenergy generation, and the remediation of polluted soil and water, among others. In this context, culture media based on color change in response to the degradation of dyes are particularly relevant; but measuring the dye decolorisation of fungal strains mainly relies on a visual and semiquantitative classification of color intensity changes. Such classification is a subjective, time-consuming and difficult-to-reproduce process. DecoFungi is, to the best of our knowledge, the first application to automatically characterise the dye decolorisation level of fungal strains from images of inoculated plates. To deal with this task, DecoFungi employs a deep-learning model, accessible through a user-friendly web interface, with an accuracy of 96.5%. DecoFungi is an easy-to-use system for characterising the dye decolorisation level of fungal strains from images of inoculated plates.
Pocta, P.; Beerends, J.G.
This paper investigates the impact on the quality perceived by the end user of the different audio codecs typically deployed in current digital audio broadcasting (DAB) systems and web-casting applications, which represent a main source of quality impairment in these systems and applications. Both
Ludovici, Alessandro; Calveras, Anna
In this paper, we present the design of a Constrained Application Protocol (CoAP) proxy able to interconnect Web applications based on Hypertext Transfer Protocol (HTTP) and WebSocket with CoAP based Wireless Sensor Networks. Sensor networks are commonly used to monitor and control physical objects or environments. Smart Cities represent applications of such a nature. Wireless Sensor Networks gather data from their surroundings and send them to a remote application. This data flow may be short or long lived. The traditional HTTP long-polling used by Web applications may not be adequate in long-term communications. To overcome this problem, we include the WebSocket protocol in the design of the CoAP proxy. We evaluate the performance of the CoAP proxy in terms of latency and memory consumption. The tests consider long and short-lived communications. In both cases, we evaluate the performance obtained by the CoAP proxy according to the use of WebSocket and HTTP long-polling. PMID:25585107
Jae Eun Lee
Purpose: There is abundant evidence that neighborhood characteristics are significantly linked to the health of the inhabitants of a given space within a given time frame. This study statistically validates a web-based GIS application designed to support cardiovascular-related research, developed by the NIH-funded Research Centers in Minority Institutions (RCMI) Translational Research Network (RTRN) Data Coordinating Center (DCC), and discusses its applicability to cardiovascular studies. Methods: Geo-referencing, geocoding and geospatial analyses were conducted for 500 randomly selected home addresses in a U.S. southeastern metropolitan area. The correlation coefficient, factor analysis and Cronbach’s alpha (α) were estimated to quantify measures of the internal consistency, reliability and construct/criterion/discriminant validity of the cardiovascular-related geospatial variables (walk score; numbers of hospitals, fast food restaurants, parks and sidewalks). Results: Cronbach’s α for the cardiovascular geospatial variables was 95.5%, implying successful internal consistency. Walk scores were significantly correlated with the number of hospitals (r = 0.715; p < 0.0001), fast food restaurants (r = 0.729; p < 0.0001), parks (r = 0.773; p < 0.0001) and sidewalks (r = 0.648; p < 0.0001) within a mile from homes. Walk score was also significantly associated with diversity index (r = 0.138; p = 0.0023), median household income (r = −0.181; p < 0.0001), and owner-occupied rates (r = −0.440; p < 0.0001). However, no significant correlation was found with median age, vulnerability, unemployment rate, labor force, or population growth rate. Conclusion: Our data demonstrate that geospatial data generated by the web-based application were internally consistent and showed satisfactory validity. Therefore, the GIS application may be useful in cardiovascular-related studies aimed at investigating the potential impact of geospatial factors on diseases and/or the long
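Cronbach's α, the internal-consistency measure reported in this study, can be computed with a short sketch; the indicator values below are invented toy data, not the study's:

```python
from statistics import pvariance

def cronbach_alpha(items):
    """Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance
    of per-respondent total scores). `items` is a list of score columns."""
    k = len(items)
    totals = [sum(vals) for vals in zip(*items)]       # per-respondent total
    item_var = sum(pvariance(col) for col in items)    # sum of item variances
    return k / (k - 1) * (1 - item_var / pvariance(totals))

# Hypothetical values of three geospatial indicators at five locations
walk_score = [70, 55, 80, 60, 75]
hospitals  = [7, 5, 8, 6, 7]
parks      = [14, 10, 16, 12, 15]
alpha = cronbach_alpha([walk_score, hospitals, parks])
print(round(alpha, 3))
```

Values close to 1 indicate that the indicators vary together, i.e. high internal consistency; real analyses would standardize the items first.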
Baier, Rosa R; Cooper, Emily; Wysocki, Andrea; Gravenstein, Stefan; Clark, Melissa
Despite the investment in public reporting for a number of healthcare settings, evidence indicates that consumers do not routinely use available data to select providers. This suggests that existing reports do not adequately incorporate recommendations for consumer-facing reports or web applications. Healthcentric Advisors and Brown University undertook a multi-phased approach to create a consumer-facing home health web application in Rhode Island. This included reviewing the evidence base to identify design recommendations and then creating a paper prototype and wireframe. We performed qualitative research to iteratively test our proposed user interface with two user groups, home health consumers and hospital case managers, refining our design to create the final web application. To test our prototype, we conducted two focus groups, with a total of 13 consumers, and 28 case manager interviews. Both user groups responded favorably to the prototype, with the majority commenting that they felt this type of tool would be useful. Case managers suggested revisions to ensure the application conformed to laws requiring Medicare patients to have the freedom to choose among providers and could be incorporated into hospital workflow. After incorporating changes and creating the wireframe, we conducted usability testing interviews with 14 home health consumers and six hospital case managers. We found that consumers needed prompting to navigate through the wireframe; they demonstrated confusion through both their words and body language. As a result, we modified the web application's sequence, navigation, and function to provide additional instructions and prompts. Although we designed our web application for low literacy and low health literacy, using recommendations from the evidence base, we overestimated the extent to which older adults were familiar with using computers. Some of our key learnings and recommendations run counter to general web design principles
There is a great deal of excitement about using the internet and the World Wide Web in education. There are exciting possibilities, and there is a wealth and variety of material on the web. There are, however, many problems: problems of access and resources, problems of quality (for every excellent resource there are many poor ones), and insufficiently explored problems of teacher training and motivation. For example, Wiesenmayer and Meadows report on a study of 347 West Virginia science teachers. These teachers were enrolled in a week-long summer workshop to introduce them to the internet and its educational potential. The teachers were asked to review science sites as to overall quality and then as to their usefulness in their own classrooms. The teachers were enthusiastic about the web, and gave two-thirds of the sites high ratings and essentially all the rest average ratings. But alarmingly, over 80% of these sites were viewed as having no direct applicability in the teachers' own classrooms. This summer I was assigned to work on the Amphion project in the Automated Software Engineering Group under the leadership of Michael Lowry. I wished to find educational applications of the Amphion system, which in its current implementation can be used to create Fortran programs and animations using the SPICE libraries created by the NAIF group at JPL. I wished to find an application which provided real added educational value, which was in line with educational curriculum standards and which would serve a documented need of the educational community. The application selected was teaching about the causes of the seasons, at approximately the fourth, fifth, and sixth grade level. This topic was chosen because it is in line with national curriculum standards. The fourth through sixth grade level was selected to coincide with the grade level served by the Ames Aerospace Encounter, which serves 10,000 children a year on field trips. The hope is that
Today's Web 2.0 applications (think Facebook and Twitter) go far beyond the confines of the desktop and are widely used on mobile devices. The mobile Web has become incredibly popular given the success of the iPhone and BlackBerry, the importance of Windows Mobile, and the emergence of Palm Pre (and its webOS platform). At Apress, we are fortunate to have Gail Frederick of the well-known training site Learn the Mobile Web offer her expert advice in Beginning Smartphone Web Development. In this book, Gail teaches the web standards and fundamentals specific to smartphones and other feature-drive
Routing services for outdoor areas are omnipresent, and three-dimensional (3D) visualization is quite common within this area. Recent research efforts are now trying to adapt well-known outdoor routing services to complex indoor environments. However, most current indoor routing systems focus only on two-dimensional visualization, so only one level can be depicted; multi-level routes therefore lack visualization. Also, most of the few existing 3D indoor routing services utilize proprietary software or plugins, so widespread access to those services from common computers or mobile devices is not feasible. This paper therefore describes the development of a web-based 3D routing system based on a new HTML extension. The visualization of rooms as well as of the computed routes is realized with XML3D. Since this emerging technology is based on WebGL and will likely be integrated into the HTML5 standard, the developed system is already compatible with most common browsers such as Google Chrome or Firefox. Another key difference of the approach presented in this paper is that all utilized data is crowdsourced geodata from OpenStreetMap (OSM). Such data is collaboratively collected by both amateurs and professionals and can be used at no charge under the Open Data Commons Open Database License (ODbL). Our research combines user-generated geo content of the Web 2.0 with future Internet technology for the provision of a ubiquitously accessible 3D indoor routing application.
Jiang, Wenping; Zou, Ziming
independent modules according to different business needs is applied to solve the problem of the independence of the physical space between multiple models. The classic MVC (Model-View-Controller) software design pattern is used to build the architecture of the space physics multi-model application integrated system. JSP + servlet + JavaBean technology is used to integrate the web application programs of the space physics multi-models; it solves the problem of multiple users requesting the same model-computing job and effectively balances the computing tasks across servers. In addition, we completed the following tasks: establishing a standard graphical user interface based on a Java Applet application program; designing the interface between model computing and the visualization of model-computing results; realizing three-dimensional network visualization without plug-ins; using Java3D technology to achieve interaction with a three-dimensional network scene; and improving the ability to interact with web pages and dynamic execution capabilities, including rendering of three-dimensional graphics and control of fonts and color. Through the design and implementation of the web-based SPMAIS, we provide an online computing and application runtime environment for space physics multi-models. Practical application shows that researchers can benefit from our system in space physics research and engineering applications.
Maso, Joan; Díaz, Paula; Riverola, Anna; Pons, Xavier
Currently, the discovery and sharing of geospatial information over the web still presents difficulties. News distribution through website content was simplified by the use of the Really Simple Syndication (RSS) and Atom syndication formats. This communication presents an extension of Atom to redistribute references to geospatial information in a distributed Spatial Data Infrastructure environment. A geospatial client can save the status of an application that involves several OGC services of different kinds and direct data, and share this status with other users who need the same information and use different client vendor products in an interoperable way. The extensibility of the Atom format was essential to define a format that could be used in RSS-enabled web browsers, mass-market map viewers and emerging geospatially enabled integrated clients that support Open Geospatial Consortium (OGC) services. Since OWS Context has been designed as an Atom extension, it is possible to view the document in common places where Atom documents are valid. Internet web browsers are able to present the document as a list of items with title, abstract, time, description and downloading features. OWS Context uses GeoRSS so that the document can be interpreted by both Google Maps and Bing Maps as items whose extent is represented on a dynamic map. Another way to exploit an OWS Context is to develop an XSLT to transform the Atom feed into an HTML5 document that shows the exact status of the client view window that saved the context document. To do so, we use the width and height of the client window, and the extent of the view in world (geographic) coordinates, in order to calculate the scale of the map. Then, we can mix elements in world coordinates (such as CF-NetCDF files or GML) with elements in pixel coordinates (such as WMS maps, WMTS tiles and direct SVG content). A smarter map browser application called MiraMon Map Browser is able to write a context document and read
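The scale calculation described above (window size in pixels plus the view extent in world coordinates) can be sketched as follows; the function name and the default DPI are assumptions for illustration, not part of the OWS Context specification:

```python
def map_scale(extent_width_m, window_width_px, dpi=96):
    """Approximate map scale denominator: ground width covered by the view
    divided by the physical width of the window, converting pixels to
    metres via the display DPI (1 inch = 0.0254 m)."""
    window_width_m = window_width_px * 0.0254 / dpi
    return extent_width_m / window_width_m

# A 1024-px-wide view showing 10 km of ground on a 96-dpi display:
print(round(map_scale(10_000, 1024)))
```

The result (roughly 1:37,000 here) lets the transformed HTML5 document reproduce the saved view at the same nominal scale.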
Dolog, Peter; Nejdl, Wolfgang
Ontologies and reasoning are the key terms brought into focus by the semantic web community. Formal representation of ontologies in a common data model on the web can be taken as a foundation for adaptive web technologies as well. This chapter describes how ontologies shared on the semantic web ... are crucial to be formalized by the semantic web ontologies for the adaptive web. We use examples from an eLearning domain to illustrate the principles, which are broadly applicable to any information domain on the web.
King, Zachary A; Dräger, Andreas; Ebrahim, Ali; Sonnenschein, Nikolaus; Lewis, Nathan E; Palsson, Bernhard O
Escher is a web application for visualizing data on biological pathways. Three key features make Escher a uniquely effective tool for pathway visualization. First, users can rapidly design new pathway maps. Escher provides pathway suggestions based on user data and genome-scale models, so users can draw pathways in a semi-automated way. Second, users can visualize data related to genes or proteins on the associated reactions and pathways, using rules that define which enzymes catalyze each reaction. Thus, users can identify trends in common genomic data types (e.g. RNA-Seq, proteomics, ChIP)--in conjunction with metabolite- and reaction-oriented data types (e.g. metabolomics, fluxomics). Third, Escher harnesses the strengths of web technologies (SVG, D3, developer tools) so that visualizations can be rapidly adapted, extended, shared, and embedded. This paper provides examples of each of these features and explains how the development approach used for Escher can be used to guide the development of future visualization tools.
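The gene-reaction rules mentioned above, which define which enzymes catalyze each reaction, can be illustrated with a small evaluator; the AND→min / OR→sum convention is a common modeling assumption, not necessarily Escher's exact rule semantics:

```python
def reaction_value(rule, expr, and_fn=min, or_fn=sum):
    """Map gene-level data onto a reaction via its gene-reaction rule.
    `rule` is a nested tuple ('and', a, b) / ('or', a, b) or a gene id.
    Assumed convention: AND -> min (an enzyme complex is limited by its
    scarcest subunit), OR -> sum (isozymes contribute additively)."""
    if isinstance(rule, str):
        return expr[rule]
    op, *args = rule
    vals = [reaction_value(a, expr, and_fn, or_fn) for a in args]
    return and_fn(vals) if op == 'and' else or_fn(vals)

expr = {'geneA': 5.0, 'geneB': 2.0, 'geneC': 1.0}
# Rule: (geneA and geneB) or geneC
print(reaction_value(('or', ('and', 'geneA', 'geneB'), 'geneC'), expr))
```

With this rule, RNA-Seq values for individual genes collapse to a single number that can be painted onto the reaction in the pathway map.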
Saputra, Dhany; Rasmussen, Simon; Larsen, Mette Voldby
Identification of bacteria may be based on sequencing and molecular analysis of a specific locus such as 16S rRNA, or of a set of loci, as in multilocus sequence typing. In the near future, healthcare institutions and routine diagnostic microbiology laboratories may need to sequence the entire genome of microbial isolates. Therefore we have developed Reads2Type, a web-based tool for taxonomy identification based on whole bacterial genome sequence data. Raw sequencing data provided by the user are mapped against a set of marker probes that are derived from currently available bacteria complete ... The entire computational analysis is done on the computer of whoever utilizes the web application; this also prevents data privacy issues from arising. The Reads2Type tool is available at http://www.cbs.dtu.dk/~dhany/reads2type.html.
The mission of the United States Nuclear Data Program (USNDP) is to provide current, accurate, and authoritative data for use in pure and applied areas of nuclear science and engineering. This is accomplished by compiling, evaluating, and disseminating extensive datasets. Our main products include the Evaluated Nuclear Structure File (ENSDF) containing information on nuclear structure and decay properties and the Evaluated Nuclear Data File (ENDF) containing information on neutron-induced reactions. The National Nuclear Data Center (NNDC), through the website www.nndc.bnl.gov, provides web-based retrieval systems for these and many other databases. In addition, the NNDC hosts several on-line physics tools, useful for calculating various quantities relating to basic nuclear physics. In this talk, I will first introduce the quantities which are evaluated and recommended in our databases. I will then outline the searching capabilities which allow one to quickly and efficiently retrieve data. Finally, I will demonstrate how the database searches and web applications can provide effective teaching tools concerning the structure of nuclei and how they interact. Work supported by the Office of Nuclear Physics, Office of Science of the U.S. Department of Energy under Contract No. DE-AC02-98CH10886.
A web-based database management system developed for collecting, managing and analysing information on diabetes patients is described here. It is a searchable, client-server, relational database application, developed on the Windows platform using Oracle, Active Server Pages (ASP), Visual Basic Script (VBScript) and JavaScript. The software is menu-driven and allows authorised healthcare providers to access, enter, update and analyse patient information. Graphical representations of data can be generated by the system using bar charts and pie charts. An interactive web interface allows users to query the database and generate reports. Alpha- and beta-testing of the system was carried out; the system at present holds records of 500 diabetes patients and has proved useful in diagnosis and treatment. In addition to providing patient data on a continuous basis in a simple format, the system is used in population and comparative analyses. It has proved to be of significant advantage to the healthcare provider compared to the paper-based system.
Kammerer, Ferdinand J; Hammon, Matthias; Schlechtweg, Philipp M; Uder, Michael; Schwab, Siegfried A
The growing complexity of radiologic examinations and interventional procedures requires frequent exchange of knowledge. Consequently, a simple way to share and discuss patient images between radiology experts and with colleagues from other medical disciplines is needed. The aims of this work were the development and initial performance evaluation of a fast, user-friendly, platform-independent teleconsultation system for medical imaging. A local back-end system receives DICOM images and generates anonymized JPEG files that are uploaded to an internet webserver. The front end running on that webserver comprises an image viewer with a specially developed pointer element for indicating findings to collaborating partners. The front end, which uses only standard web technologies, works on a variety of platforms, mobile devices and desktop computers. Images can be accessed by simply calling up a special internet address in a web browser, which may be exchanged between users (e.g. via email). A speed evaluation of the system showed good results: for example, the preparation and upload of a standard head CT took less than 21 seconds. The data volume of the same series and the viewer application could be transferred to a mobile phone in less than 42 seconds via a UMTS network or in less than 3 seconds via an HSPA network. The presented system, with its minimal hardware and software requirements, its simplicity and its platform independence, might be a promising tool in the increasingly important area of teleconsultation. © The Author(s) 2015.
Watson, Kara M.; Janowicz, Jon A.
StreamStats is an interactive, map-based web application from the U.S. Geological Survey (USGS) that allows users to easily obtain streamflow statistics and watershed characteristics for both gaged and ungaged sites on streams throughout New Jersey. Users can determine flood magnitude and frequency, monthly flow-duration, monthly low-flow frequency statistics, and watershed characteristics for ungaged sites by selecting a point along a stream, or they can obtain this information for streamgages by selecting a streamgage location on the map. StreamStats provides several additional tools useful for water-resources planning and management, as well as for engineering purposes. StreamStats is available for most states and some river basins through a single web portal.Streamflow statistics for water resources professionals include the 1-percent annual chance flood flow (100-year peak flow) used to define flood plain areas and the monthly 7-day, 10-year low flow (M7D10Y) used in water supply management and studies of recreation, wildlife conservation, and wastewater dilution. Additionally, watershed or basin characteristics, including drainage area, percent area forested, and average percent of impervious areas, are commonly used in land-use planning and environmental assessments. These characteristics are easily derived through StreamStats.
1. Introduction. The Master's Thesis titled „Web based users applications for NA61/SHINE experiment at CERN” presents the World Wide Web technologies that have been used during development of the software suite for the NA61/SHINE experiment. The presented software was implemented and is used by a group of approximately sixty users. NA61/SHINE is one of many projects carried out at the European Organization for Nuclear Research (CERN), located near Geneva. 1.1. About CERN. CERN (French: Organisation européenne pour la recherche nucléaire) was established on 29 September 1954. Poland has been a member state since 1991; however, for a long time before joining CERN, Poland, as the only country of the Communist Bloc, had observer status. Nowadays Polish scientists take part in CERN's main experiments such as ALICE, ATLAS and CMS. CERN's essential scientific facilities are the particle physics accelerators and detectors. The beam provided by the accelerator or collider by interacting w...
Kim, Changsik; Choi, Jiwon; Lee, Seong Joon; Welsh, William J.; Yoon, Sukjoon
The calculation of contact-dependent secondary structure propensity (CSSP) is a unique and sensitive method that detects non-native secondary structure propensities in protein sequences. This method has applications in predicting local conformational change, which typically is observed in core sequences of protein aggregation and amyloid fibril formation. NetCSSP implements the latest version of the CSSP algorithm and provides a Flash chart-based graphic interface that enables an interactive calculation of CSSP values for any user-selected region in a given protein sequence. This feature can also quantitatively estimate the mutational effect on changes in native or non-native secondary structural propensities in local sequences. In addition, this web tool provides precalculated non-native secondary structure propensities for over 1,400,000 fragments, seven residues long, collected from PDB structures. They are searchable for chameleon subsequences that can serve as the core of amyloid fibril formation. The NetCSSP web tool is available at http://cssp2.sookmyung.ac.kr/. PMID:19468045
Molinaro, Marco; Knapic, Cristina; Smareglia, Riccardo
The Italian center for Astronomical Archives (IA2, http://ia2.oats.inaf.it) is a national infrastructure project of the Italian National Institute for Astrophysics (Istituto Nazionale di AstroFisica, INAF) that provides services for the astronomical community. Besides data hosting for the Large Binocular Telescope (LBT) Corporation, the Galileo National Telescope (Telescopio Nazionale Galileo, TNG) Consortium and other telescopes and instruments, IA2 offers proprietary and public data access through user portals (both developed and mirrored) and deploys resources complying with the Virtual Observatory (VO) standards. Archiving systems and web interfaces are developed to be extremely flexible about adding new instruments from other telescopes. VO resource publishing, along with the data access portals, implements the International Virtual Observatory Alliance (IVOA) protocols, providing astronomers with new ways of analyzing data. Given the large variety of data flavours and IVOA standards, the need arises for tools to easily accomplish data ingestion and publishing. This paper describes the VO-Dance tool, which IA2 started developing to address VO resource publishing in a dynamic way from already existing database tables or views. The tool consists of a Java web application, potentially DBMS- and platform-independent, that internally stores the services' metadata and information, exposes RESTful endpoints to accept VO queries for these services, and dynamically translates calls to these endpoints into SQL queries coherent with the published table or view. In response to each call, VO-Dance translates the database answer back into a VO-compliant form.
The revision of the environmental management system standard to its latest version, ISO 14001:2015, may change the data and information needed for decision making and for achieving objectives across the organization. Information management is the organization's responsibility: it must ensure effectiveness and efficiency from the creation, storage and processing of information through to its distribution, in order to support operations and effective decision making in environmental performance management. The objective of this research was to set up an information management program, with supporting technology, for the PTFI Concentrating Division, so that it aligns with the organization's objectives for environmental management under the ISO 14001:2015 standard. The materials and methods cover the technical aspects of information management, namely web-based application development using usage-centered design. The results show that the use of Single Sign-On makes it easy for users to interact further with the environmental management system. The web-based application was developed by creating an entity-relationship diagram (ERD) and by information extraction focused on attributes, keys and the determination of constraints; the ERD was obtained from the relational database schemas of a number of environmental performance databases in the Concentrating Division.
Pietrobon, Ricardo; Nielsen, Karen C; Steele, Susan M; Menezes, Andreia P; Martins, Henrique; Jacobs, Danny O
Although scientific writing plays a central role in the communication of clinical research findings and consumes a significant amount of time from clinical researchers, few Web applications have been designed to systematically improve the writing process. This application had as its main objective the separation of the multiple tasks associated with scientific writing into smaller components. It was also aimed at providing a mechanism where sections of the manuscript (text blocks) could be assigned to different specialists. Manuscript Architect was built using Java language in conjunction with the classic lifecycle development method. The interface was designed for simplicity and economy of movements. Manuscripts are divided into multiple text blocks that can be assigned to different co-authors by the first author. Each text block contains notes to guide co-authors regarding the central focus of each text block, previous examples, and an additional field for translation when the initial text is written in a language different from the one used by the target journal. Usability was evaluated using formal usability tests and field observations. The application presented excellent usability and integration with the regular writing habits of experienced researchers. Workshops were developed to train novice researchers, presenting an accelerated learning curve. The application has been used in over 20 different scientific articles and grant proposals. The current version of Manuscript Architect has proven to be very useful in the writing of multiple scientific texts, suggesting that virtual writing by interdisciplinary groups is an effective manner of scientific writing when interdisciplinary work is required.
Garcia-Zapirain, Begoña; de la Torre Díez, Isabel; Sainz de Abajo, Beatriz; López-Coronado, Miguel
Dafli, Eleni; Antoniou, Panagiotis; Ioannidis, Lazaros; Dombros, Nicholas; Topps, David; Bamidis, Panagiotis D
and filtering of resources. Usability weaknesses were primarily related to the ease-of-use provisions of standard computer applications. Most evaluators provided positive feedback regarding educational experiences on both content and system usability. Evaluation results were replicated across several independent evaluation events. The OpenLabyrinth extension, as part of the semantic mEducator3.0 approach, is a virtual patient sharing approach that builds on a collection of Semantic Web services and federates existing sources of clinical and educational data. It is an effective sharing tool for virtual patients and has been merged into the next version of the app (OpenLabyrinth 3.3). Such tool extensions may enhance the medical education arsenal with capacities for creating simulation/game-based learning episodes, massive open online courses, curricular transformations, and a future robust infrastructure for enabling mobile learning.
Kim, D; Cho, H.; Onof, C.; Choi, M
Patel, Ahmed; Al-Janabi, Samaher; AlShourbaji, Ibrahim
A mashup is a web-based application developed through the aggregation of data from different public external or internal sources (including trusted and untrusted ones). A mashup introduces an open environment that is exposed to many security vulnerabilities, threats and risks; these weaknesses invite intrusions. The framework presented here is based on risk analysis and mashup source classification that examines, analyzes and evaluates the data transitions between the server side and the client side. For risk filtering, a new data mining technique is also utilized to enhance the quality of the risk analysis by removing most of the false risks. This approach is called the Risk Filtering Data Mining (RFDM) algorithm. The RFDM framework deals with three types of clusters (trusted, untrusted, and hesitation or unknown) to handle the hesitation clusters. Our proposal is to employ Atanassov...
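The three-cluster classification (trusted, untrusted, hesitation) could be sketched, in its simplest thresholding form, as follows; the thresholds and source names are hypothetical, and the real RFDM algorithm clusters on mined risk features rather than a single score:

```python
def classify_source(risk, low=0.3, high=0.7):
    """Assign a mashup data source to one of three RFDM-style clusters by
    its risk score in [0, 1]. The thresholds are illustrative assumptions;
    the 'hesitation' band covers sources that are neither clearly safe
    nor clearly risky."""
    if risk < low:
        return 'trusted'
    if risk > high:
        return 'untrusted'
    return 'hesitation'

scores = {'internal-api': 0.1, 'partner-feed': 0.5, 'unknown-widget': 0.9}
print({src: classify_source(r) for src, r in scores.items()})
```

Hesitation-cluster sources would then be the candidates for further analysis (e.g. with intuitionistic fuzzy membership degrees) rather than an immediate allow/deny decision.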
The paper presents the advantages of using genetic techniques for web-oriented problems. The specific area of genetic programming application that the paper approaches is content modeling. The analyzed digital content is formed through the accumulation of targeted, geometrically structured entities with specific characteristics and behavior. The accumulated digital content is analyzed and specific features are extracted in order to develop an analysis system through the use of genetic programming. An experiment is presented which evolves a model based on specific features of each geometrically structured entity in the digital content base. The results show promising expectations, with a low error rate that provides fair approximations of the analyzed geometrically structured entities.
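A minimal genetic algorithm of the kind the experiment relies on (evolving a model over entity features) might look like the sketch below; the bit-string encoding, operators and toy fitness are illustrative assumptions, not the paper's system:

```python
import random

def evolve(fitness, genome_len=8, pop_size=20, generations=30, seed=1):
    """Minimal genetic algorithm sketch: evolve bit-string genomes by
    tournament selection, one-point crossover and point mutation."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(genome_len)]
           for _ in range(pop_size)]
    for _ in range(generations):
        def pick():  # tournament of two
            a, b = rng.sample(pop, 2)
            return a if fitness(a) >= fitness(b) else b
        nxt = []
        for _ in range(pop_size):
            p1, p2 = pick(), pick()
            cut = rng.randrange(1, genome_len)
            child = p1[:cut] + p2[cut:]          # one-point crossover
            if rng.random() < 0.1:               # point mutation
                i = rng.randrange(genome_len)
                child[i] ^= 1
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)

# Toy fitness: number of enabled features (each bit = one geometric feature)
best = evolve(lambda g: sum(g))
print(sum(best))
```

A real content-modeling fitness would score how well the encoded model approximates the extracted entity features instead of simply counting bits.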
Allred, B. W.; Naugle, D.; Donnelly, P.; Tack, J.; Jones, M. O.
In 2010, the USDA Natural Resources Conservation Service (NRCS) launched the Sage Grouse Initiative (SGI) to voluntarily reduce threats facing sage-grouse and rangelands on private lands. Over the past five years, SGI has matured into a primary catalyst for rangeland and wildlife conservation across the North American west, focusing on the shared vision of wildlife conservation through sustainable working landscapes and providing win-win solutions for producers, sage grouse, and 350 other sagebrush obligate species. SGI and its partners have invested a total of $750 million into rangeland and wildlife conservation. Moving forward, SGI continues to focus on rangeland conservation. Partnering with Google Earth Engine, SGI has developed outcome monitoring and conservation planning tools at continental scales. The SGI science team is currently developing assessment and monitoring algorithms of key conservation indicators. The SGI web application utilizes Google Earth Engine for user defined analysis and planning, putting the appropriate information directly into the hands of managers and conservationists.
Neumann, Ursula; Genze, Nikita; Heider, Dominik
Feature selection methods aim at identifying a subset of features that improve the prediction performance of subsequent classification models and thereby also simplify their interpretability. Preceding studies demonstrated that single feature selection methods can have specific biases, whereas an ensemble feature selection has the advantage of alleviating and compensating for these biases. The software EFS (Ensemble Feature Selection) makes use of multiple feature selection methods and combines their normalized outputs into a quantitative ensemble importance. Currently, eight different feature selection methods have been integrated in EFS, which can be used separately or combined in an ensemble. EFS identifies relevant features while compensating for specific biases of single methods due to its ensemble approach. Thereby, EFS can improve the prediction accuracy and interpretability in subsequent binary classification models. EFS can be downloaded as an R-package from CRAN or used via a web application at http://EFS.heiderlab.de.
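The combination step behind such an ensemble can be sketched roughly as follows. This is a minimal Python illustration of the general idea (min-max normalization of each method's scores followed by averaging), not the actual EFS R package; the method names and gene scores are invented:

```python
# Minimal sketch of ensemble feature selection: normalize per-method scores
# to a common [0, 1] scale, then average them into one ensemble importance.

def minmax_normalize(scores):
    """Scale a dict of feature scores to [0, 1]."""
    lo, hi = min(scores.values()), max(scores.values())
    span = (hi - lo) or 1.0
    return {f: (s - lo) / span for f, s in scores.items()}

def ensemble_importance(method_scores):
    """Average the normalized scores of several selection methods per feature."""
    normalized = [minmax_normalize(s) for s in method_scores]
    features = normalized[0].keys()
    return {f: sum(n[f] for n in normalized) / len(normalized) for f in features}

# Hypothetical outputs of three single feature selection methods
scores = [
    {"geneA": 0.9, "geneB": 0.2, "geneC": 0.5},    # e.g. correlation-based
    {"geneA": 12.0, "geneB": 3.0, "geneC": 9.0},   # e.g. chi-squared statistic
    {"geneA": 0.30, "geneB": 0.05, "geneC": 0.25}, # e.g. tree-based importance
]
imp = ensemble_importance(scores)
ranking = sorted(imp, key=imp.get, reverse=True)
print(ranking)  # geneA ranked first
```

Because each method's raw scores live on different scales, normalizing before averaging prevents any single method from dominating the ensemble importance.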
Developing web applications using geographically distributed team members has seen increased popularity in recent years, mainly because of the rise of open source technologies, the fast penetration of the Internet in emerging economies, the continuous quest for reduced costs, and the fast adoption of online platforms and services which successfully address project planning, coordination and other development tasks. This paper identifies general software process stages for both collocated and distributed development and analyses the impact that the use of planning, management and testing online services has on the duration, cost and quality of each stage. Given that quality assurance is one of the most important concerns in Geographically Distributed Software Development (GDSD), the focus is on software quality validation.
Stracquadanio, Giovanni; Yang, Kun; Boeke, Jef D; Bader, Joel S
Synthetic biology has become a widely used technology, and expanding applications in research, education and industry require progress tracking for team-based DNA synthesis projects. Although some vendors are beginning to supply multi-kilobase sequence-verified constructs, synthesis workflows starting with short oligos remain important for cost savings and pedagogical benefit. We developed BioPartsDB as an open source, extendable workflow management system for synthetic biology projects, with entry points for oligos and larger DNA constructs and ending with sequence-verified clones. BioPartsDB is released under the MIT license and available for download at https://github.com/baderzone/biopartsdb. Additional documentation and video tutorials are available at https://github.com/baderzone/biopartsdb/wiki. An Amazon Web Services image is available from the AWS Market Place (ami-a01d07c8). email@example.com. © The Author 2016. Published by Oxford University Press.
With the accelerated implementation of e-learning systems in educational institutions, it has become possible in recent years to record learners' study logs. It must be admitted that little research has been conducted on the analysis of the study logs obtained. In addition, there is no software that traces the mouse movements of learners during their learning processes, which the authors believe would enable teachers to better understand their students' behaviors. The objective of this study is to develop a Web application that records students' study logs, including their mouse trajectories, and to devise an IR tool that can summarize such diversified data. The results of an experiment are also scrutinized to provide an analysis of the relationship between learners' activities and their study logs.
The aim of this research is to propose a quantitative risk modeling method that reduces the guesswork and uncertainty in the vulnerability and risk assessment activities of web-based applications, while providing users the flexibility to assess risk according to their risk appetite and tolerance with a high degree of assurance. The research method is based on work done by the OWASP Foundation on this subject, but their risk rating methodology needed debugging and updates in key areas that are presented in this paper. The modified risk modeling method uses Monte Carlo simulations to model risk characteristics that can't be determined without guesswork. It was tested in vulnerability assessment activities on real production systems and, in theory, by assigning discrete uniform assumptions to all risk characteristics (risk attributes) and evaluating the results after 1.5 million rounds of Monte Carlo simulations.
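The core of such a simulation can be sketched as below. This is a minimal illustration assuming OWASP-style 0-9 factor scales sampled from a discrete uniform distribution; the number of factors and the smaller round count (versus the paper's 1.5 million) are assumptions made for brevity, not the authors' exact model:

```python
# Minimal Monte Carlo sketch of OWASP-style risk rating under discrete
# uniform assumptions: risk = mean(likelihood factors) * mean(impact factors).
import random

random.seed(42)  # reproducible runs

def sample_factor():
    # discrete uniform assumption over the OWASP 0-9 factor scale
    return random.randint(0, 9)

def simulate_risk(rounds=150_000):
    """Return the simulated distribution of risk scores over many rounds."""
    risks = []
    for _ in range(rounds):
        likelihood = sum(sample_factor() for _ in range(8)) / 8  # 8 likelihood factors
        impact = sum(sample_factor() for _ in range(8)) / 8      # 8 impact factors
        risks.append(likelihood * impact)
    return risks

risks = simulate_risk()
mean_risk = sum(risks) / len(risks)
print(round(mean_risk, 2))  # ~20.25 for independent uniform 0-9 factors (4.5 * 4.5)
```

In practice one would replace the uniform assumptions with distributions fitted to each risk attribute and read quantiles of `risks` rather than just the mean.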
Kafatos, M.; Boybeyi, Z.; Cervone, G.; di, L.; Sun, D.; Yang, C.; Yang, R.
The ever-growing large volumes of Earth system science data, collected by Earth observing platforms, in situ stations and as model output data, are increasingly being used by discipline scientists and by wider classes of users. In particular, applications of Earth system science data to environmental and hazards as well as other national applications require tailored or specialized data, as well as web-based tools and infrastructure. The latter are driven by applications and usage drivers which include ease of access, visualization of complex data, ease of producing value-added data, GIS and open source analysis usage, metadata, etc. Here we present different aspects of such web-based services and access, and discuss several applications in the hazards and environmental areas, including earthquake signatures and observations and model runs of hurricanes. Examples and lessons learned from the Mid-Atlantic Geospatial Information Consortium will be presented. We discuss a NASA-funded, open source on-line data analysis system that is being applied to climate studies for the ESIP Federation. Since enhanced, this project and the next-generation Metadata Integrated Data Analysis System allow users not only to identify data but also to generate new data products on-the-fly. The functionalities extend from limited predefined functions to sophisticated functions described by general-purpose GrADS (Grid Analysis and Display System) commands. The Federation system also allows third party data products to be combined with local data. Software components are available for converting the output from MIDAS (OPeNDAP) into OGC compatible software. The on-going Grid efforts at CEOSR and LAITS in the School of Computational Sciences (SCS) include enhancing the functions of Globus to provide support for a geospatial system so the system can share the computing power to handle problems with different peak access times and improve the stability and flexibility of a rapid
Checkpoints of the Web Content Accessibility Guidelines 1.0 (WCAG 1.0) (May 5, 1999), published by the Web Accessibility Initiative of the World Wide Web Consortium, are mapped to Section 1194.22: for example, paragraph (a) corresponds to WCAG 1.0 checkpoint 1.1. Paragraphs (l), (m), (n), (o), and (p) of this section are different from WCAG 1.0. Web pages that conform to WCAG 1.0…
D'Antonio, Mattia; D'Onorio De Meo, Paolo; Pallocca, Matteo; Picardi, Ernesto; D'Erchia, Anna Maria; Calogero, Raffaele A; Castrignanò, Tiziana; Pesole, Graziano
The study of RNA has been dramatically improved by the introduction of Next Generation Sequencing platforms allowing massive and cheap sequencing of selected RNA fractions, also providing information on strand orientation (RNA-Seq). The complexity of transcriptomes and of their regulative pathways makes RNA-Seq one of the most complex fields of NGS applications, addressing several aspects of the expression process (e.g. identification and quantification of expressed genes and transcripts, alternative splicing and polyadenylation, fusion genes and trans-splicing, post-transcriptional events, etc.). In order to provide researchers with an effective and friendly resource for analyzing RNA-Seq data, we present here RAP (RNA-Seq Analysis Pipeline), a cloud computing web application implementing a complete but modular analysis workflow. This pipeline integrates both state-of-the-art bioinformatics tools for RNA-Seq analysis and in-house developed scripts to offer the user a comprehensive strategy for data analysis. RAP is able to perform quality checks (adopting FastQC and NGS QC Toolkit), identify and quantify expressed genes and transcripts (with TopHat, Cufflinks and HTSeq), detect alternative splicing events (using SpliceTrap) and chimeric transcripts (with ChimeraScan). This pipeline is also able to identify splicing junctions and constitutive or alternative polyadenylation sites (implementing custom analysis modules) and to call statistically significant differences in gene and transcript expression, splicing pattern and polyadenylation site usage (using Cuffdiff2 and DESeq). Through a user-friendly web interface, the RAP workflow can be suitably customized by the user and is automatically executed on our cloud computing environment. This strategy allows access to bioinformatics tools and computational resources without specific bioinformatics and IT skills. RAP provides a set of tabular and graphical results that can be helpful to browse, filter and export
Candey, R. M.; Chimiak, R. A.; Harris, B. T.; Kovalick, T. J.; McGuire, R. E.
The Satellite Situation Center Web (SSCWeb) is a browser-based service to provide geocentric spacecraft location information and cast it into a framework of (empirical) geophysical regions and mappings of spacecraft locations along lines of the Earth's magnetic field. While backed by a substantial and maintained database of spacecraft orbit information and extensive software logic, several shortcomings in the present service are the inability of the architecture to support externally-developed clients and the limitation of the orbit graphics to static, 2-D plots. This talk introduces a new distributed programming interface to the SSCWeb software based on SOAP (Simple Object Access Protocol), a modern, industry-standard technology. This interface, SSC XML Web Services, allows systems to communicate with SSC over the open protocols of the Internet. This flexible architecture will enable new SPDF-developed client applications, as well as externally developed clients, to access the SSCWeb data and logic to bring new services and capabilities to the SEC community. The first such client application is TIPSOD (Tool for Interactive Plotting, Sonification and 3-D Orbit Display). Implemented in Java 3D, TIPSOD extends the existing SSCWeb 2-D static orbit graphics with 3-D interactive and animated displays linking sets of spacecraft positions as a function of time. Additional capability and functional enhancements to SSCWeb services and TIPSOD, as well as the extension of this technology to the CDAWeb service, are being considered to further the relevance and usefulness of this work to the science community. SSCWeb is a joint effort of the NASA GSFC Space Physics Data Facility (SPDF) and the National Space Science Data Center (NSSDC).
Ries, Kernell G.; Horn, Marilee A.; Nardi, Mark R.; Tessler, Steven
Approximately 25,000 new households and thousands of new jobs will be established in an area that extends from southwest to northeast of Baltimore, Maryland, as a result of the Federal Base Realignment and Closure (BRAC) process, with consequent new demands on the water resources of the area. The U.S. Geological Survey, in cooperation with the Maryland Department of the Environment, has extended the area of implementation and added functionality to an existing map-based Web application named StreamStats to provide an improved tool for planning and managing the water resources in the BRAC-affected areas. StreamStats previously was implemented for only a small area surrounding Baltimore, Maryland, and it was extended to cover all BRAC-affected areas. StreamStats can provide previously published streamflow statistics, such as the 1-percent probability flood and the 7-day, 10-year low flow, for U.S. Geological Survey data-collection stations and estimates of streamflow statistics for any user-selected point on a stream within the implemented area. The application was modified for this study to also provide summaries of water withdrawals and discharges upstream from any user-selected point on a stream. This new functionality was made possible by creating a Web service that accepts a drainage-basin delineation from StreamStats, overlays it on a spatial layer of water withdrawal and discharge points, extracts the water-use data for the identified points, and sends it back to StreamStats, where it is summarized for the user. The underlying water-use data were extracted from the U.S. Geological Survey's Site-Specific Water-Use Database System (SWUDS) and placed into a Microsoft Access database that was created for this study for easy linkage to the Web service and StreamStats. This linkage of StreamStats with water-use information from SWUDS should enable Maryland regulators and planners to make more informed decisions on the use of water resources in the BRAC area, and
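The water-use summary step described above (overlay a basin delineation on a layer of withdrawal and discharge points, then total the volumes inside the basin) can be sketched as follows. The polygon test, site records, and units are simplified illustrations, not the actual StreamStats or SWUDS interfaces:

```python
# Minimal sketch: keep the water-use sites that fall inside a basin polygon
# and total their reported volumes by type.

def point_in_polygon(x, y, poly):
    """Ray-casting point-in-polygon test; poly is a list of (x, y) vertices."""
    inside = False
    j = len(poly) - 1
    for i in range(len(poly)):
        xi, yi = poly[i]
        xj, yj = poly[j]
        if (yi > y) != (yj > y) and x < (xj - xi) * (y - yi) / (yj - yi) + xi:
            inside = not inside
        j = i
    return inside

def summarize_water_use(basin, sites):
    """Sum withdrawals and discharges (Mgal/d, illustrative) inside the basin."""
    totals = {"withdrawal": 0.0, "discharge": 0.0}
    for x, y, kind, mgd in sites:
        if point_in_polygon(x, y, basin):
            totals[kind] += mgd
    return totals

basin = [(0, 0), (10, 0), (10, 8), (0, 8)]  # toy basin outline
sites = [
    (2, 3, "withdrawal", 1.2),
    (9, 7, "discharge", 0.4),
    (12, 5, "withdrawal", 5.0),  # outside the basin, excluded
]
print(summarize_water_use(basin, sites))
```

A production service would of course use real projected coordinates and a spatial index rather than a linear scan, but the overlay-then-aggregate logic is the same.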
Jaca-Garcia, M. C.; Serrano-Barcena, N.
The term Web 2.0 is associated with the development of Web-based technology applications and tools used by communities of users. Those applications let users access and produce information in a simple way, without the need for complicated software on their computers. This technology can also be used by small and medium companies to improve any type of project involving collaborative work. This paper presents different Web 2.0 applications that can be used by SMEs (small and medium enterprises) and the steps that should be taken to implement them. Different examples of their uses are also explained. (Author) 15 refs.
Pennington, Jeffrey W; Ruth, Byron; Italia, Michael J; Miller, Jeffrey; Wrazien, Stacey; Loutrel, Jennifer G; Crenshaw, E Bryan; White, Peter S
Biomedical researchers share a common challenge of making complex data understandable and accessible as they seek inherent relationships between attributes in disparate data types. Data discovery in this context is limited by a lack of query systems that efficiently show relationships between individual variables, but without the need to navigate underlying data models. We have addressed this need by developing Harvest, an open-source framework of modular components, and using it for the rapid development and deployment of custom data discovery software applications. Harvest incorporates visualizations of highly dimensional data in a web-based interface that promotes rapid exploration and export of any type of biomedical information, without exposing researchers to underlying data models. We evaluated Harvest with two cases: clinical data from pediatric cardiology and demonstration data from the OpenMRS project. Harvest's architecture and public open-source code offer a set of rapid application development tools to build data discovery applications for domain-specific biomedical data repositories. All resources, including the OpenMRS demonstration, can be found at http://harvest.research.chop.edu.
This research aims to develop and evaluate a geospatial application for groundwater resource management in Karanganyar Regency. The system development methodology of Whitten and Bentley (2007) was used in this research. To initiate the project, we held discussions with stakeholders from Karanganyar Regency, drawn from various related agencies, followed by a focus group discussion (FGD) to analyse the system. Computational design and experiments were conducted to design the system prototype. Finally, we implemented the system in the Regency. The results show that the system is complex, not only due to the managerial procedures but also the number of users (stakeholders) involved in the system. To address the requirements that emerged from the FGD, we propose and develop a web-based GIS application with current open source technology and the Google Maps API, which can be used for collaboration among stakeholders as well as for supporting decision support purposes in groundwater management. Currently, Air-tanah, the prototype of the application, is available at http://geografi.ums.ac.id/air-tanah/. Both quantitative and qualitative evaluation of the system resulted in good responses from the users.
Suralkar, Sunita; Joshi, Nilambari; Meshram, B B
This paper describes the need for Web project management and the fundamentals of project management for web projects: what it is, why projects go wrong, and what's different about web projects. We also discuss cost estimation techniques based on size metrics. Though Web project development is similar to traditional software development applications, the special characteristics of Web application development require the adaptation of many software engineering approaches or even the development of comple...
Dinov, Ivo D.; Siegrist, Kyle; Pearl, Dennis K.; Kalinin, Alexandr; Christou, Nicolas
Probability distributions are useful for modeling, simulation, analysis, and inference on varieties of natural processes and physical phenomena. There are uncountably many probability distributions. However, a few dozen families of distributions are commonly defined and are frequently used in practice for problem solving, experimental applications, and theoretical studies. In this paper, we present a new computational and graphical infrastructure, the Distributome, which facilitates the discovery, exploration and application of diverse spectra of probability distributions. The extensible Distributome infrastructure provides interfaces for (human and machine) traversal, search, and navigation of all common probability distributions. It also enables distribution modeling, applications, investigation of inter-distribution relations, as well as their analytical representations and computational utilization. The entire Distributome framework is designed and implemented as an open-source, community-built, and Internet-accessible infrastructure. It is portable, extensible and compatible with HTML5 and Web2.0 standards (http://Distributome.org). We demonstrate two types of applications of the probability Distributome resources: computational research and science education. The Distributome tools may be employed to address five complementary computational modeling applications (simulation, data-analysis and inference, model-fitting, examination of the analytical, mathematical and computational properties of specific probability distributions, and exploration of the inter-distributional relations). Many high school and college science, technology, engineering and mathematics (STEM) courses may be enriched by the use of modern pedagogical approaches and technology-enhanced methods. The Distributome resources provide enhancements for blended STEM education by improving student motivation, augmenting the classical curriculum with interactive webapps, and overhauling the
Lee, Jae Eun; Sung, Jung Hye; Malouhi, Mohamad
There is abundant evidence that neighborhood characteristics are significantly linked to the health of the inhabitants of a given space within a given time frame. This study statistically validates a web-based GIS application designed to support cardiovascular-related research, developed by the NIH-funded Research Centers in Minority Institutions (RCMI) Translational Research Network (RTRN) Data Coordinating Center (DCC), and discusses its applicability to cardiovascular studies. Geo-referencing, geocoding and geospatial analyses were conducted for 500 randomly selected home addresses in a U.S. southeastern metropolitan area. The correlation coefficient, factor analysis and Cronbach's alpha (α) were estimated to quantify measures of the internal consistency, reliability and construct/criterion/discriminant validity of the cardiovascular-related geospatial variables (walk score, number of hospitals, fast food restaurants, parks and sidewalks). Cronbach's α for the CVD geospatial variables was 95.5%, implying successful internal consistency. Walk scores were significantly correlated with the number of hospitals (r = 0.715; p …). The web-based GIS application may be useful for cardiovascular-related studies aimed at investigating the potential impact of geospatial factors on diseases and/or the long-term effects of clinical trials.
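The internal-consistency measure reported above, Cronbach's alpha, can be computed as in this minimal sketch over an invented matrix of item scores (rows are addresses, columns are items such as walk score, hospitals, and parks; the values are illustrative, not the study's data):

```python
# Minimal Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of totals).

def variance(xs):
    """Sample variance (n-1 denominator)."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

def cronbach_alpha(items):
    """items: list of columns, each a list of scores for one item."""
    k = len(items)
    item_vars = sum(variance(col) for col in items)
    totals = [sum(row) for row in zip(*items)]
    return k / (k - 1) * (1 - item_vars / variance(totals))

# toy scores for 5 addresses on 3 deliberately correlated items
walk  = [80, 65, 90, 40, 70]
hosp  = [78, 60, 88, 45, 72]
parks = [82, 66, 85, 42, 69]
alpha = cronbach_alpha([walk, hosp, parks])
print(round(alpha, 3))
```

With strongly correlated items the totals' variance dwarfs the per-item variances, so alpha approaches 1, which is the pattern behind a high internal-consistency figure like the 95.5% reported above.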
Lim, Cherry; Wannapinij, Prapass; White, Lisa; Day, Nicholas P J; Cooper, Ben S; Peacock, Sharon J; Limmathurotsakul, Direk
Estimates of the sensitivity and specificity for new diagnostic tests based on evaluation against a known gold standard are imprecise when the accuracy of the gold standard is imperfect. Bayesian latent class models (LCMs) can be helpful under these circumstances, but the necessary analysis requires expertise in computational programming. Here, we describe open-access web-based applications that allow non-experts to apply Bayesian LCMs to their own data sets via a user-friendly interface. Applications for Bayesian LCMs were constructed on a web server using R and WinBUGS programs. The models provided (http://mice.tropmedres.ac) include two Bayesian LCMs: the two-tests in two-population model (Hui and Walter model) and the three-tests in one-population model (Walter and Irwig model). Both models are available with simplified and advanced interfaces. In the former, all settings for Bayesian statistics are fixed as defaults. Users input their data set into a table provided on the webpage. Disease prevalence and accuracy of diagnostic tests are then estimated using the Bayesian LCM, and provided on the web page within a few minutes. With the advanced interfaces, experienced researchers can modify all settings in the models as needed. These settings include correlation among diagnostic test results and prior distributions for all unknown parameters. The web pages provide worked examples with both models using the original data sets presented by Hui and Walter in 1980, and by Walter and Irwig in 1988. We also illustrate the utility of the advanced interface using the Walter and Irwig model on a data set from a recent melioidosis study. The results obtained from the web-based applications were comparable to those published previously. The newly developed web-based applications are open-access and provide an important new resource for researchers worldwide to evaluate new diagnostic tests.
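The motivating problem, bias in naive accuracy estimates when the gold standard is imperfect, can be illustrated with a short calculation. The prevalence and test characteristics below are invented numbers, and conditional independence between the two tests is assumed:

```python
# Sketch of gold-standard bias: the apparent sensitivity of a new test,
# computed naively as P(new+ | gold+), differs from its true sensitivity
# whenever the gold standard itself misclassifies.

def apparent_sensitivity(prev, se_new, sp_new, se_gold, sp_gold):
    """P(new test + | gold test +), assuming conditional independence."""
    true_pos = prev * se_new * se_gold                     # diseased, both positive
    false_pos = (1 - prev) * (1 - sp_new) * (1 - sp_gold)  # healthy, both positive
    gold_pos = prev * se_gold + (1 - prev) * (1 - sp_gold)
    return (true_pos + false_pos) / gold_pos

# a new test with true sensitivity 0.95, judged against an imperfect gold standard
naive = apparent_sensitivity(prev=0.10, se_new=0.95, sp_new=0.98,
                             se_gold=0.80, sp_gold=0.95)
print(round(naive, 3))  # well below the true 0.95
```

Bayesian latent class models such as the Hui-Walter model avoid this bias by treating the true disease status as a latent variable rather than trusting either test outright.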
Aye, Zar Chi; Jaboyedoff, Michel; Derron, Marc-Henri
Probabilistic Risk Assessment (PRA) model and the knowledge collected from experts. The visualization of the risk reduction scenarios can also be shared among the users on the web to support the on-line participatory process. In addition, cost-benefit ratios of the different risk reduction scenarios can be prepared in order to serve as inputs for high-level decision makers. The most appropriate risk reduction scenarios will be chosen using the Multi-Criteria Evaluation (MCE) method by weighting different parameters according to the preferences and criteria defined by the users. The role of public participation has been changing from one-way communication between authorities, experts, stakeholders and citizens towards more intensive two-way interaction. Involving the affected public and interest groups can enhance the level of legitimacy, transparency, and confidence in the decision-making process. Due to its important part in decision making, an online participatory tool is included in the DSS in order to allow the involved stakeholders to participate interactively in risk reduction and to be aware of the existing vulnerability conditions of the community. Moreover, it aims to achieve a more transparent and better informed decision-making process. The system is in progress and the first tools implemented will be presented, showing the wide possibilities of new web technologies, which can have a great impact on the decision-making process. It will be applied in four pilot areas in Europe: the French Alps, North Eastern Italy, Romania and Poland. Nevertheless, the framework will be designed and implemented in a way to be applicable in any other region.
Langella, Giuliano; Basile, Angelo; Giannecchini, Simone; Iamarino, Michela; Munafò, Michele; Terribile, Fabio
Soil sealing is one of the most important causes of land degradation and desertification. In Europe, soil covered by impermeable materials has increased by about 80% from the Second World War till nowadays, while population has only grown by one third. There is increasing concern at high political levels about the need to attenuate imperviousness itself and its effects on soil functions. The European Commission promulgated a roadmap (COM(2011) 571) by which net land take would be zero by 2050. Furthermore, the European Commission also published a report in 2011 providing best practices and guidelines for limiting soil sealing and imperviousness. In this scenario, we developed an open source based Soil Sealing Geospatial Cyber Infrastructure (SS-GCI) named "Soil Monitor". This tool merges a webGIS with parallel geospatial computation in a fast and dynamic fashion in order to provide real-time assessments of soil sealing at high spatial resolution (20 meters and below) over the whole of Italy. Common open source webGIS packages, such as GeoServer and MapStore, are used to implement both the data management and visualization infrastructures. The high-speed geospatial computation is ensured by GPU parallelism using the CUDA (Compute Unified Device Architecture) framework by NVIDIA®. This kind of parallelism required writing, from scratch, all the code needed to fulfil the geospatial computation behind the soil sealing toolbox. The combination of GPU computing with webGIS infrastructures is relatively novel and required particular attention at the Java-CUDA programming interface. As a result, Soil Monitor is smart because it can perform very time-consuming calculations (querying, for instance, an Italian administrative region as the area of interest) in less than one minute. The web application runs in a web browser and nothing must be installed before using it. Potentially everybody can use it, but the main targets are the
Web application for the control and management of radioprotection equipment in the Cadarache centre
The author describes a Web 2.0-type application which has been developed for the periodic calibration controls of radioprotection equipment at Cadarache. This application aims at offering easy, immediate and even remote access to information; at selecting information with respect to uses (radioprotection department, administrator, and so on); at securing and safeguarding homogeneous data; and at editing control statistics. The different functionalities are briefly presented with their displayed interfaces.
Akanbi, Adeyinka K.; Agunbiade, Olusanya Y.
Geospatial applications are becoming an indispensable part of information systems; they provide detailed information regarding the attribute data of spatial objects in the real world. Due to rapid technological developments in web-based geographical information systems, the uses of web-based geospatial applications range from geotagging to geolocation capabilities. Therefore, effective utilization of a web-based information system can only be realized by representing the...
Chu, Larry F; Young, Chelsea A; Zamora, Abby K; Lowe, Derek; Hoang, Dan B; Pearl, Ronald G; Macario, Alex
Despite the use of web-based information resources by both anesthesia departments and applicants, little research has been done to assess these resources and determine whether they are meeting applicant needs. Evidence is needed to guide anesthesia informatics research in developing high-quality anesthesia residency program Web sites (ARPWs). We used an anonymous web-based program (SurveyMonkey, Portland, OR) to distribute a survey investigating the information needs and perceived usefulness of ARPWs to all 572 Stanford anesthesia residency program applicants. A quantitative scoring system was then created to assess the quality of ARPWs in meeting the information needs of these applicants. Two researchers independently analyzed all 131 ARPWs in the United States to determine whether the ARPWs met the needs of applicants based on the scoring system. Finally, a qualitative assessment of the overall user experience of ARPWs was developed to account for the subjective elements of each Web site's presentation. Ninety-eight percent of respondents reported having used ARPWs during the application process. Fifty-six percent reported first visiting the Stanford ARPW when deciding whether to apply to Stanford's anesthesia residency program. Multimedia and Web 2.0 technologies were "very" or "most" useful in "learning intangible aspects of a program, like how happy people are" (42% multimedia and Web 2.0 versus 14% text and photos). ARPWs, on average, contained only 46% of the content items identified as important by applicants. The average (SD) quality score among all ARPWs was 2.06 (0.59) of a maximum of 4.0 points. The mean overall qualitative score for all 131 ARPWs was 4.97 (1.92) of 10 points. Only 2% of applicants indicated that the majority (75%-100%) of Web sites they visited provided a complete experience. Anesthesia residency applicants rely heavily on ARPWs to research programs, prepare for interviews, and formulate a rank list. Anesthesia departments can improve their
Wollbrett, Julien; Larmande, Pierre; de Lamotte, Frédéric; Ruiz, Manuel
In recent years, a large amount of "-omics" data have been produced. However, these data are stored in many different species-specific databases that are managed by different institutes and laboratories. Biologists often need to find and assemble data from disparate sources to perform certain analyses. Searching for these data and assembling them is a time-consuming task. The Semantic Web helps to facilitate interoperability across databases. A common approach involves the development of wrapper systems that map a relational database schema onto existing domain ontologies. However, few attempts have been made to automate the creation of such wrappers. We developed a framework, named BioSemantic, for the creation of Semantic Web Services that are applicable to relational biological databases. This framework makes use of both Semantic Web and Web Services technologies and can be divided into two main parts: (i) the generation and semi-automatic annotation of an RDF view; and (ii) the automatic generation of SPARQL queries and their integration into Semantic Web Services backbones. We have used our framework to integrate genomic data from different plant databases. BioSemantic is a framework that was designed to speed integration of relational databases. We present how it can be used to speed the development of Semantic Web Services for existing relational biological databases. Currently, it creates and annotates RDF views that enable the automatic generation of SPARQL queries. Web Services are also created and deployed automatically, and the semantic annotations of our Web Services are added automatically using SAWSDL attributes. BioSemantic is downloadable at http://southgreen.cirad.fr/?q=content/Biosemantic.
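The automatic SPARQL-generation step can be pictured with a small sketch in Python; the table, column, and ontology names below are invented for illustration and are not taken from BioSemantic itself.

```python
# Toy illustration of generating a SPARQL SELECT query from a
# relational-to-RDF mapping, in the spirit of a wrapper over an RDF view.
# The class and property URIs here are hypothetical placeholders.
def build_sparql(table_class, column_to_property):
    vars_ = [f"?{col}" for col in column_to_property]
    patterns = [
        f"?row <{prop}> ?{col} ." for col, prop in column_to_property.items()
    ]
    return (
        f"SELECT {' '.join(vars_)} WHERE {{\n"
        f"  ?row a <{table_class}> .\n  " + "\n  ".join(patterns) + "\n}"
    )

query = build_sparql(
    "http://example.org/onto#Gene",
    {"name": "http://example.org/onto#geneName",
     "chromosome": "http://example.org/onto#locatedOn"},
)
print(query)
```

A real wrapper would derive `column_to_property` from the semi-automatically annotated RDF view rather than hard-coding it, and wrap the query in a deployable Web Service.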
Oracle Application Express has taken another big leap towards becoming a true next-generation RAD tool. It has entered its fifth version to build robust web applications. One of the most significant features in this release is a new page designer that helps developers create and edit page elements within a single page design view, which enormously maximizes developer productivity. Without involving the audience too much in the boring bits, this full-color edition adopts an inspiring approach that helps beginners practically evaluate almost every feature of Oracle Application Express, including all features new to version 5. The most convincing way to explore a technology is to apply it to a real-world problem. In this book, you'll develop a sales application that demonstrates almost every feature to practically expose the anatomy of Oracle Application Express 5. The short list below presents some main topics of Oracle APEX covered in this book: rapid web application development for desktops, la...
Aljraiwi, Seham Salman
The current study proposes a web-applications-based learning environment to promote teaching and learning activities in classrooms. It also helps teachers facilitate learners' contributions to the process of learning and improve their motivation and performance. The case study illustrated that female students were more interested in learning…
About This Book Learn how to propagate DOM changes across the website without writing extensive jQuery callback code. Learn how to achieve reactivity and easily compose views with Vue.js and understand what it does behind the scenes. Explore the core features of Vue.js with small examples, learn how to build dynamic content into preexisting web applications, and build Vue.js applications from scratch. Who This Book Is For This book is perfect for novice web developers seeking to learn new technologies or frameworks and also for webdev gurus eager to enrich their experience. Whatever your level of expertise, this book is a great introduction to the wonderful world of reactive web apps. What You Will Learn Build a fully functioning reactive web application in Vue.js from scratch. The importance of the MVVM architecture and how Vue.js compares with other frameworks such as Angular.js and React.js. How to bring reactivity to an existing static application using Vue.js. How to use p...
The objective of the paper is the verification of the fulfilment of the purposes of Basel II, Pillar 3 (market discipline) during the recent financial crisis. The paper describes the current state of a project focused on analysing market participants' interest in the mandatory disclosure of financial information by a commercial bank by means of advanced methods of web log mining. The output of the project will be the verification of the assumptions related to the purposes of Basel III by means of web mining methods, recommendations for a possible reduction of mandatory disclosure under Basel II and III, a proposed methodology for data preparation for web log mining in this application domain, and a generalised procedure for modelling users' behaviour over time. The schedule of the project has been divided into three phases. The paper deals with the first phase, which focuses on data pre-processing, analysis, and evaluation of the information required under Basel II, Pillar 3 since 2008 and its disclosure on the web site of a commercial bank. The authors introduce methodologies for data preparation and known heuristic methods for path completion in web log files with respect to the particularities of the investigated application domain. They propose scientific methods for modelling users' behaviour on the webpages related to Pillar 3 with respect to time.
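The path-completion heuristics mentioned above can be sketched minimally in Python. This is a classic referrer-based heuristic (not necessarily the exact method the authors adopt): when a request's referrer differs from the previously logged page, the user presumably navigated back through cached pages, which the heuristic re-inserts into the path.

```python
# Hedged sketch of referrer-based path completion for web log mining.
# Page names are invented; a real log would carry URLs and timestamps.
def complete_path(requests):
    """requests: list of (page, referrer) tuples in log order."""
    path = []
    for page, referrer in requests:
        if path and referrer is not None and path[-1] != referrer:
            # The referrer is not the last logged page: assume Back-button
            # use and re-add each revisited (cached) page until we reach it.
            for earlier in reversed(path[:-1]):
                path.append(earlier)
                if earlier == referrer:
                    break
        path.append(page)
    return path

log = [("A", None), ("B", "A"), ("C", "B"), ("D", "A")]
print(complete_path(log))  # ['A', 'B', 'C', 'B', 'A', 'D']
```

The request for D arrives with referrer A even though C was the last logged page, so the completed path records the cached backtrack C → B → A before D.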
Katayama, Toshiaki; Wilkinson, Mark D; Vos, Rutger; Kawashima, Takeshi; Kawashima, Shuichi; Nakao, Mitsuteru; Yamamoto, Yasunori; Chun, Hong-Woo; Yamaguchi, Atsuko; Kawano, Shin; Aerts, Jan; Aoki-Kinoshita, Kiyoko F; Arakawa, Kazuharu; Aranda, Bruno; Bonnal, Raoul Jp; Fernández, José M; Fujisawa, Takatomo; Gordon, Paul Mk; Goto, Naohisa; Haider, Syed; Harris, Todd; Hatakeyama, Takashi; Ho, Isaac; Itoh, Masumi; Kasprzyk, Arek; Kido, Nobuhiro; Kim, Young-Joo; Kinjo, Akira R; Konishi, Fumikazu; Kovarskaya, Yulia; von Kuster, Greg; Labarga, Alberto; Limviphuvadh, Vachiranee; McCarthy, Luke; Nakamura, Yasukazu; Nam, Yunsun; Nishida, Kozo; Nishimura, Kunihiro; Nishizawa, Tatsuya; Ogishima, Soichi; Oinn, Tom; Okamoto, Shinobu; Okuda, Shujiro; Ono, Keiichiro; Oshita, Kazuki; Park, Keun-Joon; Putnam, Nicholas; Senger, Martin; Severin, Jessica; Shigemoto, Yasumasa; Sugawara, Hideaki; Taylor, James; Trelles, Oswaldo; Yamasaki, Chisato; Yamashita, Riu; Satoh, Noriyuki; Takagi, Toshihisa
The interaction between biological researchers and the bioinformatics tools they use is still hampered by incomplete interoperability between such tools. To ensure interoperability initiatives are effectively deployed, end-user applications need to be aware of, and support, best practices and standards. Here, we report on an initiative in which software developers and genome biologists came together to explore and raise awareness of these issues: BioHackathon 2009. Developers in attendance came from diverse backgrounds, with experts in Web services, workflow tools, text mining and visualization. Genome biologists provided expertise and exemplar data from the domains of sequence and pathway analysis and glyco-informatics. One goal of the meeting was to evaluate the ability to address real world use cases in these domains using the tools that the developers represented. This resulted in i) a workflow to annotate 100,000 sequences from an invertebrate species; ii) an integrated system for analysis of the transcription factor binding sites (TFBSs) enriched based on differential gene expression data obtained from a microarray experiment; iii) a workflow to enumerate putative physical protein interactions among enzymes in a metabolic pathway using protein structure data; iv) a workflow to analyze glyco-gene-related diseases by searching for human homologs of glyco-genes in other species, such as fruit flies, and retrieving their phenotype-annotated SNPs. Beyond deriving prototype solutions for each use-case, a second major purpose of the BioHackathon was to highlight areas of insufficiency. We discuss the issues raised by our exploration of the problem/solution space, concluding that there are still problems with the way Web services are modeled and annotated, including: i) the absence of several useful data or analysis functions in the Web service "space"; ii) the lack of documentation of methods; iii) lack of compliance with the SOAP/WSDL specification among and
Gurjar, Anoop Kishor Singh; Panwar, Abhijeet Singh; Gupta, Rajinder; Mantri, Shrikant S
High-throughput small RNA (sRNA) sequencing technology enables an entirely new perspective for plant microRNA (miRNA) research and has immense potential to unravel regulatory networks. Novel insights gained through data mining in the publicly available rich resource of sRNA data will help in designing biotechnology-based approaches for crop improvement to enhance plant yield and nutritional value. Bioinformatics resources enabling meta-analysis of miRNA expression across multiple plant species are still evolving. Here, we report PmiRExAt, a new online database resource that serves as a plant miRNA expression atlas. The web-based repository comprises miRNA expression profiles and a query tool for 1859 wheat, 2330 rice and 283 maize miRNAs. The database interface offers open and easy access to miRNA expression profiles and helps in identifying tissue-preferential, differential and constitutively expressing miRNAs. A feature enabling expression study of conserved miRNAs across multiple species is also implemented. A custom expression analysis feature enables expression analysis of novel miRNAs in a total of 117 datasets. New sRNA datasets can also be uploaded for analysing miRNA expression profiles for 73 plant species. The PmiRExAt application program interface, a Simple Object Access Protocol (SOAP) web service, allows other programmers to remotely invoke the methods written for performing programmatic search operations on the PmiRExAt database. Database URL: http://pmirexat.nabi.res.in. © The Author(s) 2016. Published by Oxford University Press.
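Since the PmiRExAt API is a SOAP web service, a client request can be sketched as below. The operation name "searchMiRNA", the parameter names, and the namespace are hypothetical placeholders (the service's WSDL defines the real interface), and the sketch only builds the request envelope without touching the network.

```python
# Hedged sketch of a SOAP request body for a programmatic miRNA search.
# Operation, parameters, and namespace are assumptions for illustration;
# consult the actual WSDL at pmirexat.nabi.res.in for the real interface.
def soap_envelope(operation, params, ns="http://pmirexat.example/ws"):
    body = "".join(f"<{k}>{v}</{k}>" for k, v in params.items())
    return (
        '<?xml version="1.0" encoding="UTF-8"?>'
        '<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">'
        f'<soap:Body><m:{operation} xmlns:m="{ns}">{body}</m:{operation}>'
        "</soap:Body></soap:Envelope>"
    )

envelope = soap_envelope("searchMiRNA", {"species": "wheat", "mirnaId": "miR156"})
print(envelope)
```

In practice a SOAP client library would generate this envelope from the WSDL and POST it to the service endpoint.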
The project describes the process of converting printed books into a web map and a mobile application. The goal of the project is to make the spatial data in the books accessible to the wide public using GIS, especially on the web, in order to spread information about this topic. Moreover, as a result of the analysis and of the new perspectives gained from the data context, historians will be able to find new connections. The books that serve as sources for the project (two books with a scope of about 1400 pages, featuring hundreds of locations where each location is associated with several events of different types) refer to places with many addresses in Prague and some villages in the Czech Republic which are related to events that took place during the Second World War. The paper describes the steps of the conversion, the design of the data model in an Esri geodatabase, and examples of outputs. The historical data are connected to actual addresses, and thanks to such a combination of historical and current locations, the project will help to discover a part of the history of the Czech Republic and show new context in the data via GIS capabilities. This project is a continuation of a project which recorded a death march on a map. It is a unique project created in cooperation with Academia Publishing. The outputs of the project will serve as a core resource for a multimedia history portal. The author of the books is currently writing sequels from the post-war period and at least two other books are envisioned, so the future of the project is ensured.
Menezes, Andreia P
Abstract Background Although scientific writing plays a central role in the communication of clinical research findings and consumes a significant amount of time from clinical researchers, few Web applications have been designed to systematically improve the writing process. This application had as its main objective the separation of the multiple tasks associated with scientific writing into smaller components. It was also aimed at providing a mechanism whereby sections of the manuscript (text blocks) could be assigned to different specialists. Manuscript Architect was built using the Java language in conjunction with the classic lifecycle development method. The interface was designed for simplicity and economy of movements. Manuscripts are divided into multiple text blocks that can be assigned to different co-authors by the first author. Each text block contains notes to guide co-authors regarding the central focus of each text block, previous examples, and an additional field for translation when the initial text is written in a language different from the one used by the target journal. Usability was evaluated using formal usability tests and field observations. Results The application presented excellent usability and integration with the regular writing habits of experienced researchers. Workshops were developed to train novice researchers, presenting an accelerated learning curve. The application has been used in over 20 different scientific articles and grant proposals. Conclusion The current version of Manuscript Architect has proven to be very useful in the writing of multiple scientific texts, suggesting that virtual writing by interdisciplinary groups is an effective manner of scientific writing when interdisciplinary work is required.
Habib, Shahid; Talabac, Stephen J.
There is significant interest in the Earth Science research and remote sensing user community to substantially increase the number of useful observations relative to the current frequency of collection. The obvious reason for such a push is to improve the temporal, spectral, and spatial coverage of the area(s) under investigation. However, there is little analysis available in terms of the benefits, costs, and the optimal set of sensors needed to make the necessary observations. Classic observing system solutions may no longer be applicable because of their point design philosophy. Instead, a new intelligent data collection system paradigm employing both reactive and proactive measurement strategies with adaptability to the dynamics of the phenomena should be developed. This is a complex problem that should be carefully studied and balanced across various boundaries including: science, modeling, applications, and technology. Modeling plays a crucial role in making useful predictions about naturally occurring or human-induced phenomena. In particular, modeling can serve to mitigate the potentially deleterious impacts a phenomenon may have on human life, property, and the economy. This is especially significant when one is interested in learning about the dynamics of, for example, the spread of forest fires, regional to large-scale air quality issues, the spread of harmful invasive species, or the atmospheric transport of volcanic plumes and ash. This paper identifies and examines these challenging issues and presents architectural alternatives for an integrated sensor web to provide observing scenarios driving the requisite dynamic spatial, spectral, and temporal characteristics to address these key application areas. A special emphasis is placed on the observing systems and their operational aspects in serving the multiple users and stakeholders and providing societal benefits. We also address how such systems will take advantage of technological advancement in
Bjoernes, Charlotte D
Abstract Background In today's short-stay hospital settings, the contact time for patients is reduced. However, it seems to be all the more important for patients that the healthcare professionals are easy to contact during the whole course of treatment, and that there is the opportunity to exchange information as a basis for obtaining individualized information and support. Therefore, the aim was to explore the ability of a dialogue-based application to contribute to the accessibility of healthcare professionals and the exchangeability of information. Method An application for online written and asynchronous contacts was developed, implemented in clinical practice, and evaluated. The qualitative effect of the online contact was explored using a Web-based survey comprising open-ended questions. Results Patients valued the online contacts and experienced feelings of partnership in dialogue, in a flexible and calm environment, which supported their ability to be active partners and their feelings of freedom and security. Conclusion The online asynchronous written environment can contribute to accessibility and exchangeability, and add new possibilities for dialogues from which patients can benefit. The individualized information obtained via online contact empowers the patients. Internet-based contacts are a way to differentiate and expand the possibilities for contact outside the few scheduled face-to-face hospital contacts.
Scarselli, Franco; Tsoi, Ah Chung; Hagenbuchner, Markus; Noi, Lucia Di
This paper proposes the combination of two state-of-the-art algorithms for processing graph input data, viz., the probabilistic mapping graph self-organizing map, an unsupervised learning approach, and the graph neural network, a supervised learning approach. We organize these two algorithms in a cascade architecture containing a probabilistic mapping graph self-organizing map followed by a graph neural network. We show that this combined approach helps to limit the long-term dependency problem that exists when training the graph neural network, resulting in an overall improvement in performance. This is demonstrated in an application to a benchmark problem requiring the detection of spam in a relatively large set of web sites. It is found that the proposed method produces results which reach the state of the art when compared with some of the best results obtained by others using quite different approaches. A particular strength of our method is its applicability to any input domain which can be represented as a graph. Copyright © 2013 Elsevier Ltd. All rights reserved.
Kim, Minsung; Kim, Kamyoung; Lee, Sang-Il
This article examines the pedagogical potential of a Web-based GIS application, Population Migration Web Service (PMWS), in which students can examine population geography in an interactive and exploratory manner. This article introduces PMWS, a tailored, unique Internet GIS application that provides functions for visualizing spatial interaction…
Wagner, Michael M; Levander, John D; Brown, Shawn; Hogan, William R; Millett, Nicholas; Hanna, Josh
This paper describes the Apollo Web Services and Apollo-SV, its related ontology. The Apollo Web Services give an end-user application a single point of access to multiple epidemic simulators. An end user can specify an analytic problem-which we define as a configuration and a query of results-exactly once and submit it to multiple epidemic simulators. The end user represents the analytic problem using a standard syntax and vocabulary, not the native languages of the simulators. We have demonstrated the feasibility of this design by implementing a set of Apollo services that provide access to two epidemic simulators and two visualizer services.
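The "specify the analytic problem once, submit to many simulators" idea can be sketched as follows; the shape of the problem record and the simulator stand-ins are invented for illustration and do not reflect the actual Apollo schema or Apollo-SV vocabulary.

```python
# Minimal sketch of dispatching one analytic problem (configuration +
# query) to several epidemic-simulator adapters. All names hypothetical.
from dataclasses import dataclass

@dataclass
class AnalyticProblem:
    config: dict   # e.g. population size, transmissibility
    query: dict    # e.g. which output series to return

def run_on_all(problem, simulators):
    """Each adapter is a callable that takes the standard problem record
    and translates it to its simulator's native input language."""
    return {name: sim(problem) for name, sim in simulators.items()}

# Two stand-in "simulators" that merely derive a number from the config.
def sim_a(p): return {"peak_infected": p.config["population"] * 0.10}
def sim_b(p): return {"peak_infected": p.config["population"] * 0.12}

problem = AnalyticProblem(config={"population": 1000, "r0": 1.8},
                          query={"series": "infected"})
results = run_on_all(problem, {"simA": sim_a, "simB": sim_b})
print(results)
```

The point of the design is that the end user touches only `AnalyticProblem`; each adapter hides its simulator's native language behind the shared interface.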
The World Wide Web has become an important platform for developing and running applications. A vital process while developing web applications is the choice of web technologies on which the application will be built. Developers face a dizzying array of platforms, languages, frameworks, and technical artifacts to choose from, and the decision carries consequences for most other decisions in the development process. The thesis contains analysis, classification, and comparison of web technologies s...
This article emerged from an academic initiative in which it is observed that software development under the object-oriented programming (OOP) paradigm is confronted with a relational data storage model, raising two different scenarios that developers try to mitigate through conversions between types or intermediate tools such as object-relational mappers, which bring certain advantages and disadvantages; the project therefore raised the possibility of using a non-relational, or NoSQL, storage engine. With the design and development of the framework for generating Web applications, the user can define the objects to include in the application, which will be stored in the MongoDB engine, which arranges the data in the form of documents. The dynamic structure of these documents can be used in many projects, including many that would traditionally work on relational databases. Aiming to socialize and evaluate the work done, instruments were designed to collect information from users with experience in the field of databases and software development. The results highlight that software developers have clear concepts of object persistence through object-relational mapping (ORM), that learning these software development techniques by implementing their own code or using APIs has a high degree of complexity, and that most respondents (60%) are aware that these implementations yield low performance in applications. In addition, the respondents highlight their openness to choosing alternatives for organizing and storing information, different from the relational approach used for several years.
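The document-oriented storage idea can be illustrated with a short sketch; the object and field names are hypothetical, and the MongoDB driver is deliberately omitted so the example stays self-contained.

```python
# Sketch: a user-defined object becomes a schema-free document, the way
# a generator framework would hand it to MongoDB. Names are invented.
def to_document(obj_type, fields):
    """Build a MongoDB-style document; unlike a relational row, each
    document may carry a different set of fields."""
    doc = {"_type": obj_type}
    doc.update(fields)
    return doc

invoice = to_document("invoice", {"number": 42, "lines": [
    {"item": "paper", "qty": 3},
    {"item": "ink", "qty": 1, "discount": 0.1},  # extra field, no schema change
]})
print(invoice["_type"], len(invoice["lines"]))
```

The second line item carries a `discount` field the first lacks, which is exactly the kind of dynamic structure a relational schema would need a migration to accommodate.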
Rudolph, Abby; Tobin, Karin; Rudolph, Jonathan; Latkin, Carl
Although studies that characterize the risk environment by linking contextual factors with individual-level data have advanced infectious disease and substance use research, there are opportunities to refine how we define relevant neighborhood exposures; this can in turn reduce the potential for exposure misclassification. For example, for those who do not inject at home, injection risk behaviors may be more influenced by the environment where they inject than where they live. Similarly, among those who spend more time away from home, a measure that accounts for different neighborhood exposures by weighting each unique location proportional to the percentage of time spent there may be more correlated with health behaviors than one's residential environment. This study aimed to develop a Web-based application that interacts with Google Maps application program interfaces (APIs) to collect contextually relevant locations and the amount of time spent in each. Our analysis examined the extent of overlap across different location types and compared different approaches for classifying neighborhood exposure. Between May 2014 and March 2017, 547 participants enrolled in a Baltimore HIV care and prevention study completed an interviewer-administered Web-based survey that collected information about where participants were recruited, worked, lived, socialized, injected drugs, and spent most of their time. For each location, participants gave an address or intersection which they confirmed using Google Maps and Street View. Geographic coordinates (and hours spent in each location) were joined to neighborhood indicators by Community Statistical Area (CSA). We computed a weighted exposure based on the proportion of time spent in each unique location. We compared neighborhood exposures based on each of the different location types with one another and with the weighted exposure using analysis of variance with Bonferroni corrections to account for multiple comparisons. Participants
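The time-weighted exposure measure described above can be sketched directly: each unique location contributes its neighborhood indicator value in proportion to the hours spent there. The indicator values and hours below are made up for illustration.

```python
# Sketch of the weighted neighborhood-exposure measure: exposure is the
# hours-weighted mean of the CSA indicator across unique locations.
def weighted_exposure(locations):
    """locations: list of (hours_spent, indicator_value) per unique place."""
    total_hours = sum(h for h, _ in locations)
    if total_hours == 0:
        return None
    return sum(h * v for h, v in locations) / total_hours

places = [(60, 0.30),   # home CSA indicator (hypothetical weekly hours)
          (40, 0.10),   # work CSA indicator
          (20, 0.50)]   # socializing CSA indicator
print(round(weighted_exposure(places), 3))  # 32/120 -> 0.267
```

Compared with a residence-only measure (0.30 here), the weighted value (0.267) reflects the lower-indicator work environment where this hypothetical participant spends a third of their time.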
The purpose of this study was to determine the extent to which the use of a web application improves the efficiency and effectiveness of administrative work. The research method consisted of an analysis phase, a literature study to support the application design, and data collection based on observations. Evaluation results and observation data before and after implementation of the system showed that the use of a web application improves the efficiency and effectiveness of administrative work, e.g. by decreasing the time needed to produce a letter, increasing the percentage of letters produced on time, improving the accuracy of data validation, and providing convenience both for students ordering letters and for staff producing them.
Huanlong, Liu; Zengtao, Wang; Wenlong, Zhang; Youmao, Zheng
To investigate the application of the dorsal metacarpal perforator sliding flap for web-space reconstruction in congenital syndactyly. A corresponding intermetacarpal perforator sliding flap was designed according to the size and shape of the skin defect at the web space after the division operation. The edge of the flap was incised, but its underlying tissue was not dissected. From May 2007 to November 2012, 28 web spaces in 15 patients with syndactyly (10 male and 5 female) were reconstructed. All 28 flaps survived completely. The flap size ranged from 1.5 cm x 1.0 cm to 3 cm x 2 cm. 14 cases with 26 flaps were followed up for 10-22 months (average, 14.5 months). The reconstructed web spaces had normal appearance and range of movement. The 2-point discrimination distance was 9-13 mm (average, 11 mm). According to the Swanson Standard, 18 fingers were graded as excellent, 8 as good, and 2 as fair (excellent and good, 92.9%, 26/28). Reconstruction of the web space in syndactyly with the dorsal metacarpal perforator flap has the advantages of easy handling and good cosmetic and functional results.
Steeman, Gerald; Connell, Christopher
Many librarians may feel that dynamic Web pages are out of their reach, financially and technically. Yet we are reminded in library and Web design literature that static home pages are a thing of the past. This paper describes how librarians at the Institute for Defense Analyses (IDA) library developed a database-driven, dynamic intranet site using commercial off-the-shelf applications. Administrative issues include surveying a library users group for interest and needs evaluation; outlining metadata elements; and committing resources, from making time to populate the database to training staff in Microsoft FrontPage and Web-to-database design. Technical issues covered include Microsoft Access database fundamentals and lessons learned in the Web-to-database process (including setting up Data Source Names (DSNs), redesigning queries to accommodate the Web interface, and understanding the Access 97 query language vs. Structured Query Language (SQL)). This paper also offers tips on editing Active Server Pages (ASP) scripting to create desired results. A how-to annotated resource list closes out the paper.
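As a hedged aside, the DSN setup the authors describe maps onto ODBC connection strings; the helper below is illustrative only (the driver name matches common Access ODBC installs, but it and the function name are assumptions, not taken from the paper).

```python
# Illustrative sketch: building an ODBC connection string for a
# Microsoft Access database, either via a configured Data Source
# Name (DSN) or DSN-less with an explicit driver reference.
def access_conn_str(db_path, dsn=None):
    """Return an ODBC connection string for an Access database."""
    if dsn:
        # A System DSN configured in the ODBC administrator, as in
        # the paper-era setup, is referenced by name alone.
        return f"DSN={dsn};"
    # DSN-less alternative: name the driver and the database file.
    return ("Driver={Microsoft Access Driver (*.mdb, *.accdb)};"
            f"DBQ={db_path};")
```

A library like pyodbc would accept either string; the DSN form keeps credentials and file paths out of page scripts, which was one motivation for DSNs in classic ASP sites.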
Ronchetti, Marco; Rizzi, Matteo
The idea of annotating Web pages is not a new one: early proposals date back to 1994. A tool providing the ability to add notes to a Web page, and to share the notes with other users, seems to be particularly well suited to an e-learning environment. Although several tools already provide this capability, none has become widely popular. This paper…
Miller, Christopher T.
This paper provides a review of literature that relates research on Carl Rogers' person-centered learning theory to Web-based learning. Based on the review of the literature, a set of criteria is described that can be used to determine how closely a Web-based course matches the different components of Rogers' person-centered learning theory. Using…
The system has been configured at the School of Civil and Environmental Engineering and in the Office of the Road Fund. It was tested and found to be functional; populating it with appropriate data would be the next step. Key words: Road Network, ERNIS, WebGIS, Open-Source, OpenLayers, Web ...
Probst, Thomas; Pryss, Rüdiger C; Langguth, Berthold; Spiliopoulou, Myra; Landgrebe, Michael; Vesala, Markku; Harrison, Stephen; Schobel, Johannes; Reichert, Manfred; Stach, Michael; Schlee, Winfried
For understanding the heterogeneity of tinnitus, large samples are required. However, investigations into how samples recruited by different methods differ from each other are lacking. In the present study, three large samples, each recruited by different means, were compared: N = 5017 individuals registered at a self-help web platform for tinnitus (crowdsourcing platform Tinnitus Talk), N = 867 users of a smart mobile application for tinnitus (crowdsensing platform TrackYourTinnitus), and N = 3786 patients contacting an outpatient tinnitus clinic (Tinnitus Center of the University Hospital Regensburg). The three samples were compared regarding age, gender, and duration of tinnitus (months or years perceiving tinnitus; subjective report) using chi-squared tests. The three samples significantly differed from each other in age, gender, and tinnitus duration. Users of the TrackYourTinnitus crowdsensing platform were younger, users of the Tinnitus Talk crowdsourcing platform were more often female, and users of both newer technologies (crowdsourcing and crowdsensing) more frequently had acute/subacute tinnitus (20 years). The implications of these findings for clinical research are that newer technologies such as crowdsourcing and crowdsensing platforms offer the possibility of reaching individuals who are hard to contact through an outpatient tinnitus clinic. Depending on the aims and the inclusion/exclusion criteria of a given study, different recruiting strategies (clinic and/or newer technologies) offer different advantages and disadvantages. In general, the representativeness of study results might be increased when tinnitus study samples are recruited in the clinic as well as via crowdsourcing and crowdsensing.
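The chi-squared comparison used above can be illustrated with a minimal hand-rolled test statistic; the counts in the usage note are invented for illustration, not the study's data.

```python
# Generic sketch of the chi-squared test of independence used to
# compare categorical distributions (e.g. gender) across samples.
def chi_squared(table):
    """table: list of rows of observed counts (samples x categories).

    Returns the X^2 statistic: sum over cells of (O - E)^2 / E,
    where E is the expected count under independence.
    """
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    n = sum(row_totals)
    x2 = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            expected = row_totals[i] * col_totals[j] / n
            x2 += (observed - expected) ** 2 / expected
    return x2
```

For example, `chi_squared([[30, 10], [10, 30]])` yields a large statistic because the two rows have opposite category proportions, whereas rows with identical proportions yield 0; in practice the statistic is compared to a chi-squared distribution with (rows-1)(cols-1) degrees of freedom.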
Hester, Reid K; Delaney, Harold D; Campbell, William; Handmaker, Nancy
Eighty-four heavy drinkers who responded to a newspaper recruitment advertisement were randomly assigned to receive either (a) training in a Moderate Drinking protocol via an Internet-based program (www.moderatedrinking.com) and use of the online resources of Moderation Management (MM; www.moderation.org) or (b) use of the online resources of MM alone. Follow-ups are being conducted at 3, 6, and 12 months. Results of the recently completed 3-month follow-up (86% follow-up) indicated both groups significantly reduced their drinking based on these variables: standard drinks per week, percent days abstinent, and mean estimated blood alcohol concentration (BAC) per drinking day. Both groups also significantly reduced their alcohol-related problems. Relative to the control group, the experimental group had better outcomes on percent days abstinent and log drinks per drinking day. These short-term outcome data provide evidence for the effectiveness of both the Moderate Drinking Web application and of the resources available online at MM in helping heavy drinkers reduce their drinking and alcohol-related problems.
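The mean estimated BAC outcome reported above is conventionally computed with a Widmark-style formula; the sketch below is a generic illustration under assumed constants (14 g of ethanol per US standard drink, 0.015 %/h elimination), not the study's actual instrument.

```python
# Hedged illustration of estimated blood alcohol concentration (eBAC)
# via the Widmark formula; constants are common textbook values.
def estimated_bac(std_drinks, weight_kg, hours, female=False):
    """Return eBAC in g/100 mL (i.e. percent), floored at zero."""
    r = 0.55 if female else 0.68        # Widmark body-water ratio
    grams_ethanol = std_drinks * 14.0   # US standard drink assumption
    bac = grams_ethanol / (weight_kg * 1000 * r) * 100
    bac -= 0.015 * hours                # assumed elimination rate, %/h
    return max(bac, 0.0)
```

For an 80 kg man, four standard drinks over two hours works out to roughly 0.07 percent under these assumptions; such per-drinking-day estimates are what outcome variables like "mean estimated BAC per drinking day" aggregate.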
Feinberg, J. M.; Maxbauer, D.; Fox, D. L.
Magnetic minerals are present in a wide variety of natural systems and are often indicative of the natural or anthropogenic processes that led to their deposition, formation, or transformation. Unmixing the contributions of magnetic components to bulk field-dependent magnetization curves has become increasingly common in environmental and rock magnetic studies and has enhanced our ability to fingerprint the magnetic signatures of magnetic minerals with distinct compositions, grain sizes, and origins. A variety of programs have been developed over the past two decades to allow researchers to deconvolve field-dependent magnetization curves for these purposes; however, many of these programs are either outdated or have obstacles that inhibit their usability. MAX UnMix is a new web application (available online at http://www.irm.umn.edu/maxunmix) built using the `shiny' package for R-studio that can be used to process coercivity distributions derived from magnetization curves (acquisition, demagnetization, or backfield data) via an online user interface. Here, we use example datasets from lake sediments and paleosols to present details of the MAX UnMix model and the program's functionality. MAX UnMix is designed to be accessible and user friendly, and should serve as a useful resource for future research.
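Unmixing a coercivity spectrum amounts to fitting a sum of components in log-field space; the minimal forward model below is a generic illustration with invented parameters, not MAX UnMix's actual parameterization (which uses skewed distribution functions).

```python
import math

# Generic sketch: a coercivity distribution modelled as a sum of
# Gaussian components in log10(field) space. Each component stands
# for one magnetic mineral population.
def coercivity_model(log_b, components):
    """components: list of (amplitude, mean_log_b, dispersion)."""
    total = 0.0
    for amplitude, mu, sigma in components:
        total += amplitude * math.exp(-((log_b - mu) ** 2)
                                      / (2 * sigma ** 2))
    return total
```

An unmixing program optimizes the component parameters so this model matches the measured derivative of the magnetization curve; the fitted means and dispersions then fingerprint the contributing mineral populations.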
Madhyastha, Tara M; Koh, Natalie; Day, Trevor K M; Hernández-Fernández, Moises; Kelley, Austin; Peterson, Daniel J; Rajan, Sabreena; Woelfer, Karl A; Wolf, Jonathan; Grabowski, Thomas J
The contribution of this paper is to identify and describe current best practices for using Amazon Web Services (AWS) to execute neuroimaging workflows "in the cloud." Neuroimaging offers a vast set of techniques by which to interrogate the structure and function of the living brain. However, many of the scientists for whom neuroimaging is an extremely important tool have limited training in parallel computation. At the same time, the field is experiencing a surge in computational demands, driven by a combination of data-sharing efforts, improvements in scanner technology that allow acquisition of images with higher image resolution, and by the desire to use statistical techniques that stress processing requirements. Most neuroimaging workflows can be executed as independent parallel jobs and are therefore excellent candidates for running on AWS, but the overhead of learning to do so and determining whether it is worth the cost can be prohibitive. In this paper we describe how to identify neuroimaging workloads that are appropriate for running on AWS, how to benchmark execution time, and how to estimate cost of running on AWS. By benchmarking common neuroimaging applications, we show that cloud computing can be a viable alternative to on-premises hardware. We present guidelines that neuroimaging labs can use to provide a cluster-on-demand type of service that should be familiar to users, and scripts to estimate cost and create such a cluster.
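The cost-estimation step described above reduces to simple arithmetic when jobs are independent; the instance count, runtimes, and hourly rate in the example are placeholders, not current AWS prices.

```python
import math

# Hedged sketch: back-of-the-envelope estimate for running N
# independent subject-level jobs on a fixed-size pool of cloud
# instances; the pool is billed for the full wall-clock time.
def estimate_cost(n_jobs, hours_per_job, instances, price_per_hour):
    """Return (wall_clock_hours, usd_cost), assuming jobs run in
    waves of size `instances` and each job takes hours_per_job."""
    waves = math.ceil(n_jobs / instances)
    wall_hours = waves * hours_per_job
    cost = instances * wall_hours * price_per_hour
    return wall_hours, cost
```

For instance, 100 two-hour jobs on a 10-instance pool at a hypothetical $0.50/hour finish in 20 wall-clock hours for about $100; doubling the pool roughly halves the wall-clock time at the same total cost, which is the core trade-off when benchmarking workloads for the cloud.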
Østergaard, Søren; Ettema, Jehan Frans; Kudahl, Anne Braad
SimHerd is a computer model of a dairy herd. For more than a decade the model has been applied and further developed within research on herd management and animal health economics in dairy herds. Analyses with the SimHerd model have been included in 2 doctoral theses, 9 PhD theses, 4 MSc theses, and a total of 28 papers in scientific journals. Minor attempts to introduce the model as a herd health advisory tool have been made over the years. However, the model has not yet been used broadly as a herd health advisory tool, which has mainly been explained by the complexity of the model... This aim was fulfilled at the end of 2009 by a beta version of a SimHerd web application, tested by veterinary practitioners in real herd health advisory situations and approved as the core product for starting up, in early 2010, a commercial spin-off company owned by NOVI Innovation and Aarhus University...
Panizzoni, Giulio; Debiasi, Alberto; Eccher, Matteo; De Amicis, Raffaele
Global warming and rapid climatic changes are producing dramatic effects on the coastal areas of Mediterranean countries. Italian coastal areas are among the most urbanized zones of southwestern Europe, and the extensive use of soil is having a considerable impact on the hydrogeological context. Moreover, soil consumption, combined with extreme meteorological events, facilitates the occurrence of hazardous landslide events. Environmental policy makers and data managers in territorial planning need appropriate tools to face such emergencies. We present an application service that supports users through environmental analysis of landslide and soil-consumption impact. The service also aims to improve the sharing of harmonized environmental datasets/metadata across different organizations and to create a collaborative environment where stakeholders and environmental experts can share their data and work cooperatively. We developed a set of processing services providing functionality to assess the impact of landslides on the territory and the impact of land take and soil sealing. Among others, the service can evaluate the environmental impacts of landslide events on Cultural Heritage sites. We also designed a 3D WebGL client customized to execute the processing services and visualize their outputs, providing high usability in terms of navigation and data visualization. In this way the service provides not only a Spatial Data Infrastructure to access and visualize data but a complete Decision Support System for more effective environmental planning of coastal areas.
Tekman, Mehmet; Medlar, Alan; Mozere, Monika; Kleta, Robert; Stanescu, Horia
Haplotype reconstruction is an important tool for understanding the aetiology of human disease. Haplotyping infers the most likely phase of observed genotypes conditional on constraints imposed by the genotypes of other pedigree members. The results of haplotype reconstruction, when visualised appropriately, show which alleles are identical by descent despite the presence of untyped individuals. When used in concert with linkage analysis, haplotyping can help delineate a locus of interest and provide a succinct explanation for the transmission of the trait locus. Unfortunately, the design choices made by existing haplotype visualisation programs do not scale to large numbers of markers. Indeed, following haplotypes from generation to generation requires excessive scrolling back and forth. In addition, the most widely-used program for haplotype visualisation produces inconsistent recombination artefacts for the X chromosome. To resolve these issues, we developed HaploForge, a novel web application for haplotype visualisation and pedigree drawing. HaploForge takes advantage of HTML5 to be fast, portable and avoid the need for local installation. It can accurately visualise autosomal and X-linked haplotypes from both outbred and consanguineous pedigrees. Haplotypes are coloured based on identity by descent using a novel A* search algorithm and we provide a flexible viewing mode to aid visual inspection. HaploForge can currently process haplotype reconstruction output from Allegro, GeneHunter, Merlin and Simwalk. HaploForge is licensed under GPLv3 and is hosted and maintained via GitHub. Supplementary data is available from Bioinformatics online.
Ahearn, Elizabeth A.; Ries, Kernell G.; Steeves, Peter A.
An important mission of the U.S. Geological Survey (USGS) is to provide information on streamflow in the Nation's rivers. Streamflow statistics are used by water managers, engineers, scientists, and others to protect people and property during floods and droughts, and to manage land, water, and biological resources. Common uses for streamflow statistics include dam, bridge, and culvert design; water-supply planning and management; water-use appropriations and permitting; wastewater and industrial discharge permitting; hydropower-facility design and regulation; and flood-plain mapping for establishing flood-insurance rates and land-use zones. In an effort to improve access to published streamflow statistics, and to make the process of computing streamflow statistics for ungaged stream sites easier, more accurate, and more consistent, the USGS and the Environmental Systems Research Institute, Inc. (ESRI) developed StreamStats (Ries and others, 2004). StreamStats is a Geographic Information System (GIS)-based Web application for serving previously published streamflow statistics and basin characteristics for USGS data-collection stations, and for computing streamflow statistics and basin characteristics for ungaged stream sites. The USGS, in cooperation with the Connecticut Department of Environmental Protection and the Connecticut Department of Transportation, has implemented StreamStats for Connecticut.
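Regional regression equations of the kind StreamStats serves for ungaged sites typically take a log-linear product form; the helper below is a generic illustration with invented coefficients, not Connecticut's published equations.

```python
# Generic sketch: a regional regression estimate of a streamflow
# statistic, Q = a * X1^b1 * X2^b2 * ..., where the X values are
# basin characteristics (e.g. drainage area, mean precipitation).
def regression_flow(coefficient, basin_chars):
    """basin_chars: list of (characteristic_value, exponent) pairs."""
    q = coefficient
    for value, exponent in basin_chars:
        q *= value ** exponent
    return q
```

With a hypothetical coefficient of 2.0 and a single characteristic of 100 square miles raised to the 0.5 power, the estimate is 20 (in whatever units the fitted equation uses); the GIS part of StreamStats automates measuring those basin characteristics for a clicked stream site.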
Wakefield, Bonnie; Pham, Kassie; Scherubel, Melody
Symptom recognition and reporting by patients with heart failure are critical to avoid hospitalization. This project evaluated a patient symptom tracking application. Fourteen end users (nine patients, five clinicians) from a Midwestern Veterans Affairs Medical Center evaluated the website using a think aloud protocol. A structured observation protocol was used to assess success or failure for each task. Measures included task time, success, and satisfaction. Patients had a mean age of 70 years; clinicians averaged 42 years in age. Patients took 9.3 min and clinicians took less than 3 min per scenario. Most patients needed some assistance, but few patients were completely unable to complete some tasks. Clinicians demonstrated few problems navigating the site. Patient System Usability Scale item scores ranged from 2.0 to 3.6; clinician item scores ranged from 1.8 to 4.0. Further work is needed to determine whether using the web-based tool improves symptom recognition and reporting. © The Author(s) 2015.
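For context on the System Usability Scale item scores reported above, the SUS is conventionally scored by rescaling ten 1-5 Likert responses to a 0-100 overall score; the function below implements that standard scoring, with the caveat that the study reports item-level means rather than this overall score.

```python
# Conventional System Usability Scale (SUS) scoring: odd-numbered
# items are positively worded (contribute response - 1), even-numbered
# items are negatively worded (contribute 5 - response); the sum of
# contributions (0-40) is scaled by 2.5 to give a 0-100 score.
def sus_score(responses):
    """responses: ten 1-5 Likert ratings, item 1 first."""
    if len(responses) != 10:
        raise ValueError("SUS has exactly 10 items")
    total = 0
    for i, r in enumerate(responses):
        total += (r - 1) if i % 2 == 0 else (5 - r)
    return total * 2.5
```

All-neutral responses (3 on every item) score 50, the midpoint of the scale, while the best possible pattern scores 100.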
De Leon, Marlene M.; Estuar, Maria Regina E.; Lim, Hadrian Paulo; Victorino, John Noel C.; Co, Jerelyn; Saddi, Ivan Lester; Paelmo, Sharlene Mae; Dela Cruz, Bon Lemuel
Environment and agriculture related applications have been gaining ground for the past several years and have been the context for research in ubiquitous and pervasive computing. This study is part of a bigger study that uses artificial intelligence in developing models to detect, monitor, and forecast the spread of Fusarium oxysporum cubense TR4 (FOC TR4) on Cavendish bananas cultivated in the Philippines. To implement an Intelligent Farming system, 1) wireless sensor nodes (WSNs) are deployed in Philippine banana plantations to collect soil parameter data that is considered to affect the health of Cavendish bananas, 2) a custom-built smartphone application is used for collecting, storing, and transmitting soil data, plant images, and plant status data to cloud storage, and 3) a custom-built web application is used to load and display results of physico-chemical analysis of soil, analysis of data models, and geographic locations of plants being monitored. This study discusses the issues, considerations, and solutions implemented in the development of an asynchronous communication channel to ensure that all data collected by WSNs and smartphone applications are transmitted with a high degree of accuracy and reliability. From a design standpoint, standard API documentation on data-type usage is required to avoid inconsistencies in parameter passing. From a technical standpoint, error-handling mechanisms are needed, especially for delays in data transmission, along with a generalized method for parsing through multidimensional arrays of data. Strategies are presented in the paper.
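The transmission-delay handling described above is commonly implemented as retry with exponential backoff; this sketch is illustrative (the `send` callable and all names are assumptions, not the study's API).

```python
import time

# Hedged sketch: retry a sensor-data upload on transient connection
# failures, doubling the delay between attempts (1s, 2s, 4s, ...).
def upload_with_retry(send, payload, max_attempts=4, base_delay=1.0):
    """send: callable taking the payload and returning a response;
    raises the last error if all attempts fail."""
    for attempt in range(max_attempts):
        try:
            return send(payload)
        except ConnectionError:
            if attempt == max_attempts - 1:
                raise
            time.sleep(base_delay * 2 ** attempt)
```

Backoff avoids hammering an intermittently reachable gateway from the field, and bounding the attempts lets the caller queue the payload locally for a later sync, which matches the asynchronous channel the paper argues for.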