WorldWideScience

Sample records for retrieval web applications

  1. A Specialized Framework for Data Retrieval Web Applications

    Directory of Open Access Journals (Sweden)

    Jerzy Nogiec

    2005-06-01

    Although many general-purpose frameworks have been developed to aid in web application development, they typically tend to be both comprehensive and complex. To address this problem, a specialized server-side Java framework designed specifically for data retrieval and visualization has been developed. The framework's focus is on maintainability and data security. The functionality is rich with features necessary for simplifying data display design, deployment, user management and application debugging, yet the scope is deliberately kept limited to allow for easy comprehension and rapid application development. The system clearly decouples the application processing and visualization, which in turn allows for clean separation of layout and processing development. Duplication of standard web page features such as toolbars and navigational aids is therefore eliminated. The framework employs the popular Model-View-Controller (MVC) architecture, but it also uses the filter mechanism for several of its base functionalities, which permits easy extension of the provided core functionality of the system.
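
    The filter mechanism mentioned above can be pictured with a generic sketch. The TypeScript fragment below is only an analogy of the idea (the actual framework is server-side Java, and the record gives no API details): cross-cutting features such as authentication or request logging are added as filters wrapped around a core handler, so the core stays small and extensible. All names here are invented for the illustration.

    ```typescript
    // Generic filter-chain sketch: each filter may handle the request
    // itself or delegate to the next stage. Names are illustrative only.
    type Req = { user?: string; path: string };
    type Handler = (req: Req) => string;
    type Filter = (req: Req, next: Handler) => string;

    // Compose filters around a core handler, outermost filter first.
    function chain(filters: Filter[], handler: Handler): Handler {
      return filters.reduceRight<Handler>(
        (next, f) => (req) => f(req, next),
        handler
      );
    }

    const authFilter: Filter = (req, next) =>
      req.user ? next(req) : "403 Forbidden";

    const logFilter: Filter = (req, next) => {
      console.log("request:", req.path); // debugging aid, as in the record
      return next(req);
    };

    const app = chain([logFilter, authFilter], (req) => `data for ${req.path}`);
    console.log(app({ user: "alice", path: "/magnet/readings" }));
    ```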

  2. A specialized framework for data retrieval Web applications

    International Nuclear Information System (INIS)

    Jerzy Nogiec; Kelley Trombly-Freytag; Dana Walbridge

    2004-01-01

    Although many general-purpose frameworks have been developed to aid in web application development, they typically tend to be both comprehensive and complex. To address this problem, a specialized server-side Java framework designed specifically for data retrieval and visualization has been developed. The framework's focus is on maintainability and data security. The functionality is rich with features necessary for simplifying data display design, deployment, user management and application debugging, yet the scope is deliberately kept limited to allow for easy comprehension and rapid application development. The system clearly decouples the application processing and visualization, which in turn allows for clean separation of layout and processing development. Duplication of standard web page features such as toolbars and navigational aids is therefore eliminated. The framework employs the popular Model-View-Controller (MVC) architecture, but it also uses the filter mechanism for several of its base functionalities, which permits easy extension of the provided core functionality of the system.

  3. Web information retrieval for health professionals.

    Science.gov (United States)

    Ting, S L; See-To, Eric W K; Tse, Y K

    2013-06-01

    This paper presents a Web Information Retrieval System (WebIRS), which is designed to assist healthcare professionals in obtaining up-to-date medical knowledge and information via the World Wide Web (WWW). The system leverages document classification and text summarization techniques to deliver highly correlated medical information to physicians. The system architecture of the proposed WebIRS is first discussed, and then a case study on an application of the proposed system in a Hong Kong medical organization is presented to illustrate the adoption process; a questionnaire was administered to collect feedback on the operation and performance of WebIRS in comparison with conventional information retrieval on the WWW. A prototype system has been constructed and implemented on a trial basis in a medical organization. It has proven to be of benefit to healthcare professionals through its automatic classification and summarization of the medical information that physicians need and are interested in. The results of the case study show that with the use of the proposed WebIRS, a significant reduction of searching time and effort, together with retrieval of highly relevant materials, can be attained.

  4. Web application for recording learners’ mouse trajectories and retrieving their study logs for data analysis

    Directory of Open Access Journals (Sweden)

    Yoshinori Miyazaki

    2012-03-01

    With the accelerated implementation of e-learning systems in educational institutions, it has become possible to record learners' study logs in recent years. However, little research has been conducted on the analysis of the study logs that are obtained. In addition, there is no software that traces the mouse movements of learners during their learning processes, which the authors believe would enable teachers to better understand their students' behaviors. The objective of this study is to develop a Web application that records students' study logs, including their mouse trajectories, and to devise an IR tool that can summarize such diversified data. The results of an experiment are also scrutinized to provide an analysis of the relationship between learners' activities and their study logs.
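
    As an illustration of the kind of client-side instrumentation the record describes, the sketch below buffers mouse positions and periodically posts them to a logging endpoint. The endpoint path, flush interval and payload format are assumptions for the example, not details from the paper.

    ```typescript
    // Minimal sketch of browser-side mouse-trajectory logging.
    // "/log" and the 500 ms flush interval are illustrative assumptions.
    type Sample = { x: number; y: number; t: number };

    const buffer: Sample[] = [];

    document.addEventListener("mousemove", (e: MouseEvent) => {
      buffer.push({ x: e.clientX, y: e.clientY, t: Date.now() });
    });

    // Periodically flush buffered samples to the server as JSON.
    setInterval(() => {
      if (buffer.length === 0) return;
      const payload = JSON.stringify(buffer.splice(0, buffer.length));
      navigator.sendBeacon("/log", payload);
    }, 500);
    ```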

  5. Emergent web intelligence advanced information retrieval

    CERN Document Server

    Badr, Youakim; Abraham, Ajith; Hassanien, Aboul-Ella

    2010-01-01

    Web Intelligence explores the impact of artificial intelligence and advanced information technologies representing the next generation of Web-based systems, services, and environments, and designing hybrid web systems that serve wired and wireless users more efficiently. Multimedia and XML-based data are produced regularly and in increasing volumes in our daily digital activities, and their retrieval must be explored and studied in this emergent web-based era. 'Emergent Web Intelligence: Advanced Information Retrieval' provides reviews of the related cutting-edge technologies and insights.

  6. Application of Google Maps API service for creating web map of information retrieved from CORINE land cover databases

    Directory of Open Access Journals (Sweden)

    Kilibarda Milan

    2010-01-01

    Today, the Google Maps API, an Ajax-based standard web service, facilitates the publication of interactive web maps, thus opening new possibilities in relation to classical analogue maps. CORINE land cover databases are recognized as fundamental reference data sets for numerous spatial analyses. The theoretical and applicable aspects of the Google Maps API cartographic service are considered in the case of creating a web map of change in urban areas in Belgrade and its surroundings from 2000 to 2006, obtained from CORINE databases.
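
    A minimal sketch of the approach described above, assuming the Google Maps JavaScript API script has already been loaded on the page; the element id, map coordinates and the sample polygon are placeholders, not data from the CORINE databases.

    ```typescript
    // Sketch: initialize a Google Maps web map and overlay one
    // CORINE-derived polygon. Coordinates and styling are illustrative.
    declare const google: any; // provided by the Maps JavaScript API script

    const map = new google.maps.Map(document.getElementById("map"), {
      center: { lat: 44.82, lng: 20.46 }, // Belgrade
      zoom: 10,
    });

    // One urban-change polygon (vertices would come from the CORINE data).
    const urbanChange = new google.maps.Polygon({
      paths: [
        { lat: 44.83, lng: 20.4 },
        { lat: 44.85, lng: 20.47 },
        { lat: 44.8, lng: 20.5 },
      ],
      fillColor: "#c00",
      fillOpacity: 0.4,
    });
    urbanChange.setMap(map);
    ```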

  7. An Effective Combined Feature For Web Based Image Retrieval

    Directory of Open Access Journals (Sweden)

    H.M.R.B Herath

    2015-08-01

    Technology advances, the emergence of large-scale multimedia applications and the revolution of the World Wide Web have changed the world into a digital age. Anybody can use their mobile phone to take a photo at any time anywhere and upload that image to ever-growing image databases. The development of effective techniques for visual and multimedia retrieval systems is one of the most challenging and important directions of future research. This paper proposes an effective combined feature for web-based image retrieval. Frequently used colour and texture features are explored in order to develop a combined feature for this purpose. Three widely used colour features (colour moments, colour coherence vector and colour correlogram) and three texture features (grey-level co-occurrence matrix, Tamura features and Gabor filter) were analyzed for their performance. Precision and recall were used to evaluate the performance of each of these techniques. By comparing precision and recall values, the methods that performed best were taken and combined to form a hybrid feature. The developed combined feature was evaluated by developing a web-based CBIR system. A web crawler was used to first crawl through web sites; images found in those sites were downloaded, and the combined feature representation technique was used to extract image features. The test results indicated that this web system can be used to index web images with the combined feature representation schema and to find similar images. Random image retrievals using the web system show that the combined feature can be used to retrieve images belonging to the general image domain. Accuracy of the retrieval is notably high for natural images like outdoor scenes, images of flowers, etc. Also, images which have a similar colour and texture distribution were retrieved as similar even though they belonged to different semantic categories. This can be ideal for an artist who wants
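
    The evaluation measures named above are straightforward to compute. The sketch below shows precision and recall for a single query over illustrative sets of image identifiers.

    ```typescript
    // Precision and recall for one query; the sets are illustrative.
    function precisionRecall(retrieved: Set<string>, relevant: Set<string>) {
      let hits = 0;
      retrieved.forEach((id) => {
        if (relevant.has(id)) hits++;
      });
      return {
        precision: retrieved.size ? hits / retrieved.size : 0,
        recall: relevant.size ? hits / relevant.size : 0,
      };
    }

    // Example: 10 retrieved images, 4 of them among the 8 relevant ones.
    const res = precisionRecall(
      new Set(["a", "b", "c", "d", "e", "f", "g", "h", "i", "j"]),
      new Set(["a", "c", "e", "g", "x", "y", "z", "w"])
    );
    console.log(res); // { precision: 0.4, recall: 0.5 }
    ```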

  8. Geospatial metadata retrieval from web services

    Directory of Open Access Journals (Sweden)

    Ivanildo Barbosa

    Nowadays, producers of geospatial data in either raster or vector formats are able to make them available on the World Wide Web by deploying web services that enable users to access and query those contents even without specific software for geoprocessing. Several providers around the world have deployed instances of WMS (Web Map Service), WFS (Web Feature Service) and WCS (Web Coverage Service), all of them specified by the Open Geospatial Consortium (OGC). In consequence, metadata about the available contents can be retrieved to be compared with similar offline datasets from other sources. This paper presents a brief summary and describes the matching process between the specifications for OGC web services (WMS, WFS and WCS) and the metadata specifications required by ISO 19115 - adopted as the reference for several national metadata profiles, including the Brazilian one. This process focuses on retrieving metadata about the identification and data quality packages, and also indicates directions for retrieving metadata related to other packages. Therefore, users are able to assess whether the provided contents fit their purposes.
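
    Metadata retrieval from an OGC web service starts with the standard GetCapabilities request. The sketch below fetches a WMS capabilities document and reads the service title; the endpoint URL is a placeholder, and namespace handling may need adjustment for a particular server.

    ```typescript
    // Sketch: retrieve service-level metadata via the standard OGC
    // GetCapabilities request. The endpoint URL is a placeholder.
    async function wmsServiceTitle(endpoint: string): Promise<string | null> {
      const url = endpoint + "?SERVICE=WMS&VERSION=1.3.0&REQUEST=GetCapabilities";
      const xml = await (await fetch(url)).text();
      // Parse the capabilities document (browser DOMParser) and read the
      // <Title> of the <Service> section; selector may need tweaking for
      // servers that emit namespace-prefixed elements.
      const doc = new DOMParser().parseFromString(xml, "application/xml");
      const title = doc.querySelector("Service > Title");
      return title ? title.textContent : null;
    }

    wmsServiceTitle("https://example.org/wms").then(console.log);
    ```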

  9. Web information retrieval based on ontology

    Science.gov (United States)

    Zhang, Jian

    2013-03-01

    The purpose of Information Retrieval (IR) is to find a set of documents that are relevant for a specific information need of a user. The traditional information retrieval model commonly used in commercial search engines is based on keyword indexing and Boolean logic queries. One big drawback of traditional information retrieval systems is that they typically retrieve information without an explicitly defined domain of interest to the user, so a lot of irrelevant information is returned, burdening users with picking useful answers out of irrelevant results. In order to tackle this issue, many semantic web information retrieval models have been proposed recently. The main advantage of the Semantic Web is to enhance search mechanisms with the use of ontology mechanisms. In this paper, we present our approach to personalizing a web search engine based on ontology. In addition, key techniques are also discussed. Compared to previous research, our work concentrates on semantic similarity and the whole process, including query submission and information annotation.
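
    Since the approach centres on semantic similarity, a toy illustration may help. The sketch below computes a Wu-Palmer-style similarity over a tiny hand-made concept hierarchy; the hierarchy and the exact formula are assumptions for demonstration, not the paper's method.

    ```typescript
    // Illustrative path-based concept similarity over a toy ontology:
    // sim(a, b) = 2 * depth(lca) / (depth(a) + depth(b)).
    const parent: Record<string, string | null> = {
      thing: null,
      animal: "thing",
      dog: "animal",
      cat: "animal",
    };

    function pathToRoot(c: string): string[] {
      const path: string[] = [];
      for (let cur: string | null = c; cur !== null; cur = parent[cur]) {
        path.push(cur);
      }
      return path; // concept first, root last
    }

    function similarity(a: string, b: string): number {
      const pa = pathToRoot(a);
      const pb = new Set(pathToRoot(b));
      const lca = pa.find((c) => pb.has(c))!; // deepest common ancestor
      const depth = (c: string) => pathToRoot(c).length;
      return (2 * depth(lca)) / (depth(a) + depth(b));
    }

    console.log(similarity("dog", "cat")); // 2*2/(3+3) = 0.666...
    ```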

  10. Blueprint of a Cross-Lingual Web Retrieval Collection

    NARCIS (Netherlands)

    Sigurbjörnsson, B.; Kamps, J.; de Rijke, M.; van Zwol, R.

    2005-01-01

    The world wide web is a natural setting for cross-lingual information retrieval; web content is essentially multilingual, and web searchers are often polyglots. Even though English has emerged as the lingua franca of the web, planning for a business trip or holiday usually involves digesting pages

  11. Web Application Vulnerabilities

    OpenAIRE

    Yadav, Bhanu

    2014-01-01

    Web application security has been a major issue in information technology since the advent of dynamic web applications. The main objective of this project was to carry out a detailed study on the top three web application vulnerabilities: injection, cross-site scripting, and broken authentication and session management; to present situations where an application can be vulnerable to these web threats; and finally to provide preventative measures against them. ...

  12. Wordpress web application development

    CERN Document Server

    Ratnayake, Rakhitha Nimesh

    2015-01-01

    This book is intended for WordPress developers and designers who want to develop quality web applications within a limited time frame and for maximum profit. Prior knowledge of basic web development and design is assumed.

  13. Quantifying retrieval bias in Web archive search

    NARCIS (Netherlands)

    Samar, Thaer; Traub, Myriam C.; van Ossenbruggen, Jacco; Hardman, Lynda; de Vries, Arjen P.

    2018-01-01

    A Web archive usually contains multiple versions of documents crawled from the Web at different points in time. One possible way for users to access a Web archive is through full-text search systems. However, previous studies have shown that these systems can induce a bias, known as the

  14. Building Social Web Applications

    CERN Document Server

    Bell, Gavin

    2009-01-01

    Building a web application that attracts and retains regular visitors is tricky enough, but creating a social application that encourages visitors to interact with one another requires careful planning. This book provides practical solutions to the tough questions you'll face when building an effective community site -- one that makes visitors feel like they've found a new home on the Web. If your company is ready to take part in the social web, this book will help you get started. Whether you're creating a new site from scratch or reworking an existing site, Building Social Web Applications

  15. Towards an Intelligent Possibilistic Web Information Retrieval Using Multiagent System

    Science.gov (United States)

    Elayeb, Bilel; Evrard, Fabrice; Zaghdoud, Montaceur; Ahmed, Mohamed Ben

    2009-01-01

    Purpose: The purpose of this paper is to make a scientific contribution to web information retrieval (IR). Design/methodology/approach: A multiagent system for web IR is proposed based on new technologies: Hierarchical Small-Worlds (HSW) and Possibilistic Networks (PN). This system is based on a possibilistic qualitative approach which extends the…

  16. Engineering Web Applications

    DEFF Research Database (Denmark)

    Casteleyn, Sven; Daniel, Florian; Dolog, Peter

    Nowadays, Web applications are almost omnipresent. The Web has become a platform not only for information delivery, but also for eCommerce systems, social networks, mobile services, and distributed learning environments. Engineering Web applications involves many intrinsic challenges due to their distributed nature, content orientation, and the requirement to make them available to a wide spectrum of users who are unknown in advance. The authors discuss these challenges in the context of well-established engineering processes, covering the whole product lifecycle from requirements engineering through design and implementation to deployment and maintenance. They stress the importance of models in Web application development, and they compare well-known Web-specific development processes like WebML, WSDM and OOHDM to traditional software development approaches like the waterfall model and the spiral model.

  17. Engineering Adaptive Web Applications

    DEFF Research Database (Denmark)

    Dolog, Peter

    2007-01-01

    Information and services on the web are accessible for everyone. Users of the web differ in their background, culture, political and social environment, interests and so on. Ambient intelligence was envisioned as a concept for systems which are able to adapt to user actions and needs. With the growing amount of information and services, web applications become natural candidates to adopt the concepts of ambient intelligence. Such applications can deal with diverse user intentions and actions based on the user profile and can suggest the combination of information content and services which suit the user profile the most. This paper summarizes the domain engineering framework for such adaptive web applications. The framework provides guidelines to develop adaptive web applications as members of a family. It suggests how to utilize the design artifacts as knowledge which can be used...

  18. Web multimedia information retrieval using improved Bayesian algorithm.

    Science.gov (United States)

    Yu, Yi-Jun; Chen, Chun; Yu, Yi-Min; Lin, Huai-Zhong

    2003-01-01

    The main thrust of this paper is the application of a novel data mining approach to the logs of users' feedback to improve web multimedia information retrieval performance. A user space model was constructed based on data mining and then integrated into the original information space model to improve the accuracy of the new information space model. It can remove clutter and irrelevant text information and help to eliminate the mismatch between the page author's expression and the user's understanding and expectation. The user space model was also utilized to discover the relationship between high-level and low-level features for assigning weights. The authors propose an improved Bayesian algorithm for data mining. Experiments showed that the proposed algorithm is efficient.
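
    The abstract does not spell out the improved Bayesian algorithm, so the sketch below shows only the baseline idea: a plain naive Bayes relevance score learned from user feedback logs, with add-one smoothing. The training data and term model are illustrative assumptions.

    ```typescript
    // Sketch of Bayesian relevance scoring from user feedback logs
    // (plain naive Bayes; the paper's "improved" variant is not given).
    type Feedback = { terms: string[]; relevant: boolean };

    function scoreRelevance(logs: Feedback[], queryTerms: string[]): number {
      const rel = logs.filter((l) => l.relevant);
      const irr = logs.filter((l) => !l.relevant);
      const count = (docs: Feedback[], term: string) =>
        docs.filter((d) => d.terms.includes(term)).length;

      // log-odds of relevance: prior plus per-term likelihood ratios,
      // each with add-one smoothing.
      let logOdds = Math.log((rel.length + 1) / (irr.length + 1));
      for (const t of queryTerms) {
        const pRel = (count(rel, t) + 1) / (rel.length + 2);
        const pIrr = (count(irr, t) + 1) / (irr.length + 2);
        logOdds += Math.log(pRel / pIrr);
      }
      return logOdds; // > 0 favours relevance
    }

    const logs: Feedback[] = [
      { terms: ["jazz", "audio"], relevant: true },
      { terms: ["jazz", "video"], relevant: true },
      { terms: ["spam", "video"], relevant: false },
    ];
    console.log(scoreRelevance(logs, ["jazz"])); // positive
    ```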

  19. Express web application development

    CERN Document Server

    Yaapa, Hage

    2013-01-01

    Express Web Application Development is a practical introduction to learning about Express. Each chapter introduces you to a different area of Express, using screenshots and examples to get you up and running as quickly as possible. If you are looking to use Express to build your next web application, "Express Web Application Development" will help you get started and take you right through to Express' advanced features. You will need to have an intermediate knowledge of JavaScript to get the most out of this book.

  20. Network and User-Perceived Performance of Web Page Retrievals

    Science.gov (United States)

    Kruse, Hans; Allman, Mark; Mallasch, Paul

    1998-01-01

    The development of the HTTP protocol has been driven by the need to improve the network performance of the protocol by allowing the efficient retrieval of multiple parts of a web page without the need for multiple simultaneous TCP connections between a client and a server. We suggest that the retrieval of multiple page elements sequentially over a single TCP connection may result in a degradation of the perceived performance experienced by the user. We attempt to quantify this perceived degradation through the use of a model which combines a web retrieval simulation and an analytical model of TCP operation. Starting with the current HTTP/1.1 specification, we first suggest a client-side heuristic to improve the perceived transfer performance. We show that the perceived speed of the page retrieval can be increased without sacrificing data transfer efficiency. We then propose a new client/server extension to the HTTP/1.1 protocol to allow for the interleaving of page element retrievals. We finally address the issue of the display of advertisements on web pages, and in particular suggest a number of mechanisms which can make efficient use of IP multicast to send advertisements to a number of clients within the same network.
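
    The perceived-performance argument can be made concrete with a toy arithmetic model: with sequential retrieval over one connection, small page elements wait behind large ones, whereas idealised interleaving lets them finish early while total transfer time stays the same. The sizes and bandwidth below are invented numbers, not figures from the paper.

    ```typescript
    // Toy model: finish times of page elements fetched sequentially over
    // one connection vs. interleaved (equal bandwidth sharing).
    const sizesKB = [40, 5, 5, 5]; // HTML document plus three small images
    const rateKBs = 20;

    // Sequential: element i completes after all earlier transfers.
    const sequential = sizesKB.map(
      (_, i) => sizesKB.slice(0, i + 1).reduce((a, b) => a + b, 0) / rateKBs
    );

    // Interleaved: equal sharing; the smallest remaining element finishes first.
    function interleaved(sizes: number[], rate: number): number[] {
      const jobs = sizes.map((s, i) => ({ i, s })).sort((a, b) => a.s - b.s);
      const finish = new Array(sizes.length).fill(0);
      let t = 0;
      while (jobs.length) {
        const share = rate / jobs.length; // KB/s per remaining element
        const dt = jobs[0].s / share;     // time until the smallest finishes
        t += dt;
        jobs.forEach((j) => (j.s -= share * dt));
        while (jobs.length && jobs[0].s <= 1e-9) finish[jobs.shift()!.i] = t;
      }
      return finish;
    }

    console.log(sequential);                    // [2, 2.25, 2.5, 2.75]
    console.log(interleaved(sizesKB, rateKBs)); // [2.75, 1, 1, 1]
    // Same total time (2.75 s), but the small images appear much earlier.
    ```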

  1. Developing Large Web Applications

    CERN Document Server

    Loudon, Kyle

    2010-01-01

    How do you create a mission-critical site that provides exceptional performance while remaining flexible, adaptable, and reliable 24/7? Written by the manager of a UI group at Yahoo!, Developing Large Web Applications offers practical steps for building rock-solid applications that remain effective even as you add features, functions, and users. You'll learn how to develop large web applications with the extreme precision required for other types of software. Avoid common coding and maintenance headaches as small websites add more pages, more code, and more programmers; get comprehensive soluti

  2. Progressive Web applications

    CERN Multimedia

    CERN. Geneva

    2017-01-01

    Progressive Web Applications are native-like applications running inside of a browser context. In my presentation I would like to describe their characteristics, benchmarks and building process, using a quick and simple case study example with a focus on the Service Workers API.

  3. Retrieving top-k prestige-based relevant spatial web objects

    DEFF Research Database (Denmark)

    Cao, Xin; Cong, Gao; Jensen, Christian S.

    2010-01-01

    The location-aware keyword query returns ranked objects that are near a query location and that have textual descriptions that match query keywords. This query occurs inherently in many types of mobile and traditional web services and applications, e.g., Yellow Pages and Maps services. Previous work … The paper proposes the concept of prestige-based relevance to capture both the textual relevance of an object to a query and the effects of nearby objects. Based on this, a new type of query, the Location-aware top-k Prestige-based Text retrieval (LkPT) query, is proposed that retrieves the top-k spatial web objects ranked according to both prestige-based relevance and location proximity. We propose two algorithms that compute LkPT queries. Empirical studies with real-world spatial data demonstrate that LkPT queries are more effective in retrieving web objects than a previous approach that does not consider the effects of nearby objects.
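
    The snippet does not define prestige-based relevance formally, so the following is a heavily hedged sketch of the general idea only: an object's textual relevance is boosted by nearby relevant objects, and the final score mixes this prestige with proximity to the query location. The propagation rule and weights are assumptions, not the paper's definitions.

    ```typescript
    // Hedged sketch in the spirit of the LkPT query; not the actual model.
    type SpatialObj = { id: string; x: number; y: number; textRel: number };

    function lkptScores(
      objs: SpatialObj[],
      q: { x: number; y: number },
      alpha = 0.5,   // assumed mixing weight
      radius = 1.0   // assumed neighbourhood radius
    ): [string, number][] {
      // Prestige: boost text relevance by nearby relevant objects.
      const prestige = objs.map((o) => {
        let p = o.textRel;
        for (const n of objs) {
          if (n.id === o.id) continue;
          const d = Math.hypot(o.x - n.x, o.y - n.y);
          if (d < radius) p += n.textRel * (1 - d / radius);
        }
        return p;
      });
      return objs
        .map((o, i): [string, number] => {
          const dq = Math.hypot(o.x - q.x, o.y - q.y);
          const proximity = 1 / (1 + dq);
          return [o.id, alpha * prestige[i] + (1 - alpha) * proximity];
        })
        .sort((a, b) => b[1] - a[1]); // descending: top-k come first
    }
    ```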

  4. Designing Adaptive Web Applications

    DEFF Research Database (Denmark)

    Dolog, Peter

    2008-01-01

    The unique characteristic of web applications is that they are supposed to be used by a much bigger and more diverse set of users and stakeholders. An example application area is e-Learning or business to business interaction. In an e-Learning environment, various users with different backgrounds use the e-Learning system to study a discipline. In business to business interaction, different requirements and parameters of exchanged business requests might be served by different services from third parties. Such applications require certain intelligence and a slightly different approach to design. Adaptive web-based applications aim to leave some of their features at the design stage in the form of variables which are dependent on several criteria. The resolution of the variables is called adaptation and can be seen from two perspectives: adaptation by humans to the changed requirements of stakeholders and dynamic system...

  5. Development of a Web Application: Recording Learners' Mouse Trajectories and Retrieving their Study Logs to Identify the Occurrence of Hesitation in Solving Word-Reordering Problems

    Directory of Open Access Journals (Sweden)

    Mitsumasa Zushi

    2014-04-01

    Most computer marking systems evaluate the results of the answers reached by learners without looking into the process by which the answers are produced, which is insufficient to ascertain learners' understanding level because correct answers may well include lucky hunches, namely accidentally correct but not confident answers. In order to differentiate these lucky answers from confident correct ones, we have developed a Web application that can record mouse trajectories during the performance of tasks. Mathematical analyses of these trajectories have revealed that some parameters of mouse movements can be useful indicators for identifying the occurrence of hesitation resulting from lack of knowledge or confidence in solving problems.
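
    One simple trajectory parameter of the kind the study analyzes is the total time the mouse stays nearly still. The sketch below computes such a pause measure; the thresholds are assumptions, not the parameters derived in the paper.

    ```typescript
    // Illustrative hesitation indicator: total time the mouse pauses.
    type TrajSample = { x: number; y: number; t: number }; // t in ms

    function pauseTime(
      traj: TrajSample[],
      minPauseMs = 800, // assumed: a gap this long counts as a pause
      maxDistPx = 4     // assumed: movement below this counts as "still"
    ): number {
      let total = 0;
      for (let i = 1; i < traj.length; i++) {
        const dt = traj[i].t - traj[i - 1].t;
        const d = Math.hypot(
          traj[i].x - traj[i - 1].x,
          traj[i].y - traj[i - 1].y
        );
        if (d <= maxDistPx && dt >= minPauseMs) total += dt;
      }
      return total; // longer pause time may indicate hesitation
    }
    ```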

  6. Web User Profile Using XUL and Information Retrieval Techniques

    Directory of Open Access Journals (Sweden)

    Dan MUNTEANU

    2008-12-01

    This paper presents the importance of the user profile in information retrieval, information filtering and recommender systems using explicit and implicit feedback. A Firefox extension (based on XUL) used for gathering data needed to infer a web user profile and an example file with collected data are presented. Also presented is an algorithm for creating and updating the user profile and keeping track of a fixed number k of subjects of interest.

  7. Improving life sciences information retrieval using semantic web technology.

    Science.gov (United States)

    Quan, Dennis

    2007-05-01

    The ability to retrieve relevant information is at the heart of every aspect of research and development in the life sciences industry. Information is often distributed across multiple systems and recorded in a way that makes it difficult to piece together the complete picture. Differences in data formats, naming schemes and network protocols amongst information sources, both public and private, must be overcome, and user interfaces not only need to be able to tap into these diverse information sources but must also assist users in filtering out extraneous information and highlighting the key relationships hidden within an aggregated set of information. The Semantic Web community has made great strides in proposing solutions to these problems, and many efforts are underway to apply Semantic Web techniques to the problem of information retrieval in the life sciences space. This article gives an overview of the principles underlying a Semantic Web-enabled information retrieval system: creating a unified abstraction for knowledge using the RDF semantic network model; designing semantic lenses that extract contextually relevant subsets of information; and assembling semantic lenses into powerful information displays. Furthermore, concrete examples of how these principles can be applied to life science problems including a scenario involving a drug discovery dashboard prototype called BioDash are provided.
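
    The "semantic lens" principle can be pictured as a filter over an RDF-style triple set that keeps only statements relevant to one viewing context. The sketch below is a deliberately simplified illustration; the predicates and data are invented, not taken from BioDash.

    ```typescript
    // Illustrative "semantic lens" over RDF-style triples: select the
    // subset of statements that a given viewing context cares about.
    type Triple = { s: string; p: string; o: string };

    const graph: Triple[] = [
      { s: "drug:X", p: "targets", o: "protein:P53" },
      { s: "drug:X", p: "hasSideEffect", o: "nausea" },
      { s: "drug:X", p: "patentedBy", o: "org:Acme" },
    ];

    // A lens keeps only predicates relevant to one context.
    function lens(g: Triple[], predicates: Set<string>): Triple[] {
      return g.filter((t) => predicates.has(t.p));
    }

    // A "pharmacology" view hides legal metadata like patents.
    console.log(lens(graph, new Set(["targets", "hasSideEffect"])));
    ```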

  8. Web-based information search and retrieval: effects of strategy use and age on search success.

    Science.gov (United States)

    Stronge, Aideen J; Rogers, Wendy A; Fisk, Arthur D

    2006-01-01

    The purpose of this study was to investigate the relationship between strategy use and search success on the World Wide Web (i.e., the Web) for experienced Web users. An additional goal was to extend understanding of how the age of the searcher may influence strategy use. Current investigations of information search and retrieval on the Web have provided an incomplete picture of Web strategy use because participants have not been given the opportunity to demonstrate their knowledge of Web strategies while also searching for information on the Web. Using both behavioral and knowledge-engineering methods, we investigated searching behavior and system knowledge for 16 younger adults (M = 20.88 years of age) and 16 older adults (M = 67.88 years). Older adults were less successful than younger adults in finding correct answers to the search tasks. Knowledge engineering revealed that the age-related effect resulted from ineffective search strategies and amount of Web experience rather than age per se. Our analysis led to the development of a decision-action diagram representing search behavior for both age groups. Older adults had more difficulty than younger adults when searching for information on the Web. However, this difficulty was related to the selection of inefficient search strategies, which may have been attributable to a lack of knowledge about available Web search strategies. Actual or potential applications of this research include training Web users to search more effectively and suggestions to improve the design of search engines.

  9. Developing Web Applications

    CERN Document Server

    Moseley, Ralph

    2007-01-01

    Building applications for the Internet is a complex and fast-moving field which utilizes a variety of continually evolving technologies. Whether your perspective is from the client or server side, there are many languages to master - X(HTML), JavaScript, PHP, XML and CSS to name but a few. These languages have to work together cleanly, logically and in harmony with the systems they run on, and be compatible with any browsers with which they interact. Developing Web Applications presents script writing and good programming practice but also allows students to see how the individual technologi

  10. Tennessee StreamStats: A Web-Enabled Geographic Information System Application for Automating the Retrieval and Calculation of Streamflow Statistics

    Science.gov (United States)

    Ladd, David E.; Law, George S.

    2007-01-01

    The U.S. Geological Survey (USGS) provides streamflow and other stream-related information needed to protect people and property from floods, to plan and manage water resources, and to protect water quality in the streams. Streamflow statistics provided by the USGS, such as the 100-year flood and the 7-day 10-year low flow, frequently are used by engineers, land managers, biologists, and many others to help guide decisions in their everyday work. In addition to streamflow statistics, resource managers often need to know the physical and climatic characteristics (basin characteristics) of the drainage basins for locations of interest to help them understand the mechanisms that control water availability and water quality at these locations. StreamStats is a Web-enabled geographic information system (GIS) application that makes it easy for users to obtain streamflow statistics, basin characteristics, and other information for USGS data-collection stations and for ungaged sites of interest. If a user selects the location of a data-collection station, StreamStats will provide previously published information for the station from a database. If a user selects a location where no data are available (an ungaged site), StreamStats will run a GIS program to delineate a drainage basin boundary, measure basin characteristics, and estimate streamflow statistics based on USGS streamflow prediction methods. A user can download a GIS feature class of the drainage basin boundary with attributes including the measured basin characteristics and streamflow estimates.

  11. An Implementation of Semantic Web System for Information retrieval using J2EE Technologies.

    OpenAIRE

    B. Hemanth Kumar; Prof. M. Surendra Prasad Babu

    2011-01-01

    Accessing web resources (information) is an essential facility provided by web applications to everybody. The semantic web is one of the systems that provide a facility to access resources through web service applications. Semantic web and web services are new emerging web-based technologies. An automatic information processing system can be developed by using semantic web and web services, each having its own contribution within the context of developing web-based information systems and ap...

  12. Web Platform Application

    Energy Technology Data Exchange (ETDEWEB)

    Paulsworth, Ashley [Sunvestment Group, Frederick, MD (United States); Kurtz, Jim [Sunvestment Group, Frederick, MD (United States); Brun de Pontet, Stephanie [Sunvestment Group, Frederick, MD (United States)

    2016-06-15

    Sunvestment Energy Group (previously called Sunvestment Group) was established to create a web application that brings together site hosts, those who will obtain the energy from the solar array, with project developers and funders, including affinity investors. Sunvestment Energy Group (SEG) uses a community-based model that engages with investors who have some affinity with the site host organization. In addition to a financial return, these investors receive non-financial value from their investments and are therefore willing to offer lower cost capital. This enables the site host to enjoy more savings from solar through these less expensive Community Power Purchase Agreements (CPPAs). The purpose of this award was to develop an online platform to bring site hosts and investors together virtually.

  13. Mobile Application Development: Component Retrieval System

    Data.gov (United States)

    National Aeronautics and Space Administration — The purpose of this project was to investigate requirements to develop an innovative mobile application to retrieve components’ detailed information from the Stennis...

  14. The Nuclear Science References (NSR) database and Web Retrieval System

    International Nuclear Information System (INIS)

    Pritychenko, B.; Betak, E.; Kellett, M.A.; Singh, B.; Totans, J.

    2011-01-01

    The Nuclear Science References (NSR) database together with its associated Web interface is the world's only comprehensive source of easily accessible low- and intermediate-energy nuclear physics bibliographic information for more than 200,000 articles since the beginning of nuclear science. The weekly updated NSR database provides essential support for nuclear data evaluation, compilation and research activities. The principles of the database and Web application development and maintenance are described. Examples of nuclear structure, reaction and decay applications are specifically included. The complete NSR database is freely available at the websites of the National Nuclear Data Center (http://www.nndc.bnl.gov/nsr) and the International Atomic Energy Agency (http://www-nds.iaea.org/nsr).

  15. Value of Information Web Application

    Science.gov (United States)

    2015-04-01

    …their understanding of VoI attributes (source reliability, information content, and latency). The VoI web application emulates many features of a … only when using the Firefox web browser on those computers (Internet Explorer was not viable due to unchangeable user settings). During testing, the …

  16. Introduction to the JASIST Special Topic Issue on Web Retrieval and Mining: A Machine Learning Perspective.

    Science.gov (United States)

    Chen, Hsinchun

    2003-01-01

    Discusses information retrieval techniques used on the World Wide Web. Topics include machine learning in information extraction; relevance feedback; information filtering and recommendation; text classification and text clustering; Web mining, based on data mining techniques; hyperlink structure; and Web size. (LRW)

  17. Multimedia database retrieval technology and applications

    CERN Document Server

    Muneesawang, Paisarn; Guan, Ling

    2014-01-01

    This book explores multimedia applications that emerged from computer vision and machine learning technologies. These state-of-the-art applications include MPEG-7, interactive multimedia retrieval, multimodal fusion, annotation, and database re-ranking. The application-oriented approach maximizes reader understanding of this complex field. Established researchers explain the latest developments in multimedia database technology and offer a glimpse of future technologies. The authors emphasize the crucial role of innovation, inspiring users to develop new applications in multimedia technologies

  18. Sigma: Web Retrieval Interface for Nuclear Reaction Data

    International Nuclear Information System (INIS)

    Pritychenko, B.; Sonzogni, A.A.

    2008-01-01

    The authors present Sigma, a Web-rich application which provides user-friendly access in processing and plotting of the evaluated and experimental nuclear reaction data stored in the ENDF-6 and EXFOR formats. The main interface includes browsing using a periodic table and a directory tree, basic and advanced search capabilities, interactive plots of cross sections, angular distributions and spectra, comparisons between evaluated and experimental data, computations between different cross section sets. Interactive energy-angle, neutron cross section uncertainties plots and visualization of covariance matrices are under development. Sigma is publicly available at the National Nuclear Data Center website at www.nndc.bnl.gov/sigma

  19. Web-based control application using WebSocket

    International Nuclear Information System (INIS)

    Furukawa, Y.

    2012-01-01

    WebSocket allows asynchronous full-duplex communication between a Web-based (i.e. JavaScript-based) application and a Web server. WebSocket started as a part of HTML5 standardization but has since been separated from HTML5 and developed independently. Using WebSocket, it becomes easy to develop platform-independent presentation-layer applications for accelerator and beamline control software. In addition, a Web browser is the only application program that needs to be installed on the client computer. WebSocket-based applications communicate with the WebSocket server using simple text-based messages, so WebSocket is applicable to message-based control systems like MADOCA, which was developed for the SPring-8 control system. A simple WebSocket server for the MADOCA control system and a simple motor control application were successfully made as a first trial of WebSocket control applications. Using Google Chrome (version 13.0) on Debian/Linux and Windows 7, Opera (version 11.0) on Debian/Linux, and Safari (version 5.0.3) on Mac OS X as clients, the motors could be controlled using a WebSocket-based Web application. A diffractometer control application for use in synchrotron radiation diffraction experiments was also developed. (author)
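
    A browser-side client for such a system needs only the standard WebSocket API. The sketch below sends a text-based command and prints the reply; the server URL and message syntax are assumptions, since the abstract does not give the MADOCA message format.

    ```typescript
    // Sketch of a browser WebSocket client exchanging simple text messages
    // with a control server. URL and command syntax are illustrative.
    const ws = new WebSocket("ws://control.example.org/madoca");

    ws.onopen = () => {
      // Text-based command; the actual MADOCA message syntax is not
      // specified in the abstract.
      ws.send("put/motor1/position 1000");
    };

    ws.onmessage = (event: MessageEvent) => {
      console.log("server reply:", event.data);
    };

    ws.onerror = (e) => console.error("websocket error", e);
    ```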

  20. Correct software in web applications and web services

    CERN Document Server

    Thalheim, Bernhard; Prinz, Andreas; Buchberger, Bruno

    2015-01-01

    The papers in this volume aim at obtaining a common understanding of the challenging research questions in web applications comprising web information systems, web services, and web interoperability; obtaining a common understanding of verification needs in web applications; achieving a common understanding of the available rigorous approaches to system development, and the cases in which they have succeeded; identifying how rigorous software engineering methods can be exploited to develop suitable web applications; and at developing a European-scale research agenda combining theory, methods a

  1. Web Services in Mobile Applications

    Directory of Open Access Journals (Sweden)

    Octavian DOSPINESCU

    2013-01-01

    Information and communication technologies are designed to support and anticipate the continuing changes of the information society, while outlining new economic, social and cultural dimensions. We see the growth of new business models whose aim is to remove traditional barriers and improve the value of goods and services. Information is a strategic resource and its manipulation raises new problems for all entities involved in the process. Information and communication technologies should be a stable support in managing the flow of data and should support integrity, confidentiality and availability. Concepts such as eBusiness, eCommerce, Software as a Service, Cloud Computing and Social Media are based on web technologies consisting of complex languages, protocols and standards, built around client-server architecture. One of the most used technologies in mobile applications is Web Services, defined as an application model supported by any operating system able to provide certain functionalities using Internet technologies to promote interoperability between various applications and platforms. Web services use HTTP, XML, SSL, SMTP and SOAP, because their stability has been proven over the years. Their functionalities are highly variable, ranging from data exchange to weather, arithmetic or authentication services. In this article we discuss SOAP and REST architectures for web services in mobile applications and also provide some practical examples based on the Android platform.
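
    As a minimal companion to the discussion of REST, the sketch below issues an HTTP request from a client and decodes a JSON response, in the style of the weather services mentioned; the endpoint and response shape are invented for the example.

    ```typescript
    // Minimal REST call as a mobile/web client might issue it; the
    // endpoint and response schema are illustrative assumptions.
    interface Weather {
      city: string;
      tempC: number;
    }

    async function getWeather(city: string): Promise<Weather> {
      const resp = await fetch(
        `https://api.example.org/weather?city=${encodeURIComponent(city)}`
      );
      if (!resp.ok) throw new Error(`HTTP ${resp.status}`);
      return (await resp.json()) as Weather;
    }

    getWeather("Iasi").then((w) => console.log(w.tempC));
    ```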

  2. Project Assessment Skills Web Application

    Science.gov (United States)

    Goff, Samuel J.

    2013-01-01

    The purpose of this project is to utilize Ruby on Rails to create a web application that will replace a spreadsheet keeping track of training courses and tasks. The goal is to create a fast and easy to use web application that will allow users to track progress on training courses. This application will allow users to update and keep track of all of the training required of them. The training courses will be organized by group and by user, making readability easier. This will also allow group leads and administrators to get a sense of how everyone is progressing in training. Currently, updating and finding information from this spreadsheet is a long and tedious task. By upgrading to a web application, finding and updating information will be easier than ever as well as adding new training courses and tasks. Accessing this data will be much easier in that users just have to go to a website and log in with NDC credentials rather than request the relevant spreadsheet from the holder. In addition to Ruby on Rails, I will be using JavaScript, CSS, and jQuery to help add functionality and ease of use to my web application. This web application will include a number of features that will help update and track progress on training. For example, one feature will be to track progress of a whole group of users to be able to see how the group as a whole is progressing. Another feature will be to assign tasks to either a user or a group of users. All of these together will create a user friendly and functional web application.

  3. Web tools for effective retrieval, visualization, and evaluation of cardiology medical images and records

    Science.gov (United States)

    Masseroli, Marco; Pinciroli, Francesco

    2000-12-01

    To provide easy retrieval, integration and evaluation of multimodal cardiology images and data in a web browser environment, distributed application technologies and Java programming were used to implement a client-server architecture based on software agents. The server side manages secure connections and queries to heterogeneous remote databases and file systems containing patient personal and clinical data. The client side is a Java applet running in a web browser that provides a friendly medical user interface to perform queries on patient and medical test data and to integrate and properly visualize the various query results. A set of tools based on the Java Advanced Imaging API enables processing and analysis of the retrieved cardiology images, and quantification of their features in different regions of interest. The platform independence of Java technology makes the developed prototype easy to manage in a centralized form and to provide at every site with an intranet or Internet connection. By giving healthcare providers effective tools for querying, visualizing and evaluating cardiology medical images and records comprehensively in all the locations where they may need them - i.e. emergency, operating theaters, wards, or even outpatient clinics - the developed prototype represents an important aid in providing more efficient diagnoses and medical treatments.

  4. Millennial Undergraduate Research Strategies in Web and Library Information Retrieval Systems

    Science.gov (United States)

    Porter, Brandi

    2011-01-01

    This article summarizes the author's dissertation regarding search strategies of millennial undergraduate students in Web and library online information retrieval systems. Millennials bring a unique set of search characteristics and strategies to their research since they have never known a world without the Web. Through the use of search engines,…

  5. Improving Web Page Retrieval using Search Context from Clicked Domain Names

    NARCIS (Netherlands)

    Li, R.

    Search context is a crucial factor that helps to understand a user’s information need in ad-hoc Web page retrieval. A query log of a search engine contains rich information on issued queries and their corresponding clicked Web pages. The clicked data implies its relevance to the query and can be

  6. JavaScript Web Applications

    CERN Document Server

    MacCaw, Alex

    2011-01-01

    Building rich JavaScript applications that bring a desktop experience to the Web requires moving state from the server to the client side - not a simple task. This hands-on book takes proficient JavaScript developers through all the steps necessary to create state-of-the-art applications, including structure, templating, frameworks, communicating with the server, and many other issues. Throughout the book, you'll work with real-world example applications to help you grasp the concepts involved. Learn how to create JavaScript applications that offer a more responsive and improved experience.

  7. Usage of Web Service in Mobile Application for Parents and Students in Binus School Serpong

    Directory of Open Access Journals (Sweden)

    Karto Iskandar

    2016-09-01

    A web service is a service offered by an electronic device to communicate with other electronic devices using the World Wide Web. The smartphone is an electronic device that almost everyone has, and students and parents in particular use it to get information about their school. In the BINUS School Serpong mobile application, web services are used for getting data, such as student and menu data, from the web server. The problem faced by BINUS School Serpong today is that application updates are time-consuming when using the native application, while application updates are very frequent. To resolve this problem, the BINUS School Serpong mobile application uses web services. This article shows the usage of web services with XML for retrieving student data. The result of this study is that by using web services, smartphones can retrieve data consistently across multiple platforms.
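
    Consuming such an XML web service response on the client side can look like the following sketch; the XML schema here is an assumption, not the actual BINUS School Serpong service format.

    ```typescript
    // Sketch: parse an XML web service response for student data using
    // the browser DOMParser. The schema below is illustrative only.
    const xml = `
      <students>
        <student><id>S001</id><name>Alice</name></student>
        <student><id>S002</id><name>Bob</name></student>
      </students>`;

    const doc = new DOMParser().parseFromString(xml, "application/xml");
    const students = Array.from(doc.getElementsByTagName("student")).map(
      (el) => ({
        id: el.getElementsByTagName("id")[0]?.textContent ?? "",
        name: el.getElementsByTagName("name")[0]?.textContent ?? "",
      })
    );
    console.log(students); // [{ id: "S001", name: "Alice" }, { id: "S002", name: "Bob" }]
    ```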

  8. Web Application Development Utilizing Cloud Virtual Machine

    OpenAIRE

    Muukka, Olli

    2014-01-01

    The thesis goes through a development project where a web application was implemented to support a start-up company's business operations. The main reason to implement a web application was that the company needed a system where business data is centrally managed with a cost-efficient, simple and easy tool. The deployed cloud service provided a platform for the web application. The alternative to the web application development was to deploy a commercial customer relationship management tool, but the ...

  9. Forensics Investigation of Web Application Security Attacks

    OpenAIRE

    Amor Lazzez; Thabet Slimani

    2015-01-01

    Nowadays, web applications are popular targets for security attackers. Using specific security mechanisms, we can prevent or detect a security attack on a web application, but we cannot find out the criminal who carried out the attack. Being unable to trace back an attack encourages hackers to launch new attacks on the same system. Web application forensics aims to trace back and attribute a web application security attack to its originator. This may significantly reduce the sec...

  10. A survey on web modeling approaches for ubiquitous web applications

    NARCIS (Netherlands)

    Schwinger, W.; Retschitzegger, W.; Schauerhuber, A.; Kappel, G.; Wimmer, M.; Pröll, B.; Cachero Castro, C.; Casteleyn, S.; De Troyer, O.; Fraternali, P.; Garrigos, I.; Garzotto, F.; Ginige, A.; Houben, G.J.P.M.; Koch, N.; Moreno, N.; Pastor, O.; Paolini, P.; Pelechano Ferragud, V.; Rossi, G.; Schwabe, D.; Tisi, M.; Vallecillo, A.; Sluijs, van der K.A.M.; Zhang, G.

    2008-01-01

    Purpose – Ubiquitous web applications (UWA) are a new type of web applications which are accessed in various contexts, i.e. through different devices, by users with various interests, at anytime from anyplace around the globe. For such full-fledged, complex software systems, a methodologically sound

  11. Lecture 3: Web Application Security

    CERN Multimedia

    CERN. Geneva

    2013-01-01

    Computer security has been an increasing concern for IT professionals for a number of years, yet despite all the efforts, computer systems and networks remain highly vulnerable to attacks of different kinds. Design flaws and security bugs in the underlying software are among the main reasons for this. This lecture focuses on security aspects of Web application development. Various vulnerabilities typical to web applications (such as Cross-site scripting, SQL injection, cross-site request forgery etc.) are introduced and discussed. Sebastian Lopienski is CERN’s deputy Computer Security Officer. He works on security strategy and policies; offers internal consultancy and audit services; develops and maintains security tools for vulnerability assessment and intrusion detection; provides training and awareness raising; and does incident investigation and response. During his work at CERN since 2001, Sebastian has had various assignments, including designing and developing software to manage and support servic...

  12. MedlinePlus Connect: Web Application

    Science.gov (United States)

    MedlinePlus Connect: Web Application. URL of this page: https://medlineplus.gov/connect/application.html. The page lists old and new Web Application service URLs (e.g., https://apps.nlm.nih.gov/medlineplus/services/mpconnect.cfm).

  13. Advanced express web application development

    CERN Document Server

    Keig, Andrew

    2013-01-01

    A practical book, guiding the reader through the development of a single-page application using a feature-driven approach. If you are an experienced JavaScript developer who wants to build highly scalable, real-world applications using Express, this book is ideal for you. This book is an advanced title and assumes that the reader has some experience with Node, JavaScript MVC web development frameworks, and has heard of Express before, or is familiar with it. You should also have a basic understanding of Redis and MongoDB. This book is not a tutorial on Node, but aims to explore some of the more

  14. A novel architecture for information retrieval system based on semantic web

    Science.gov (United States)

    Zhang, Hui

    2011-12-01

    Nowadays, the web has enabled an explosive growth of information sharing (there are currently over 4 billion pages covering most areas of human endeavor), so the web faces a new challenge of information overload. The challenge now before us is not only to help people locate relevant information precisely but also to access and aggregate a variety of information from different resources automatically. Current web documents are in human-oriented formats; they are suitable for presentation, but machines cannot understand their meaning. To address this issue, Berners-Lee proposed the concept of the semantic web. With semantic web technology, web information can be understood and processed by machines. It provides new possibilities for automatic web information processing. A main problem of semantic web information retrieval is that when there is not enough knowledge in such an information retrieval system, the system returns a large number of meaningless results to users. In this paper, we present the architecture of an information retrieval system based on the semantic web. In addition, key techniques are also discussed. Our system employs an inference engine to check whether a query should be posed to the keyword-based search engine or to the semantic search engine.
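
    The routing step described in the last sentence can be sketched as follows: a stand-in "inference" check decides whether a query is covered by the ontology and therefore worth sending to the semantic engine. The concept list is invented for the illustration.

    ```typescript
    // Sketch of the routing idea: decide per query whether the semantic
    // engine or the keyword engine should handle it. The "knownConcepts"
    // set stands in for the real inference engine and ontology.
    const knownConcepts = new Set(["protein", "gene", "enzyme"]);

    type Engine = "semantic" | "keyword";

    function routeQuery(query: string): Engine {
      const terms = query.toLowerCase().split(/\s+/);
      // If the ontology covers any query term, semantic search can help;
      // otherwise fall back to plain keyword retrieval.
      return terms.some((t) => knownConcepts.has(t)) ? "semantic" : "keyword";
    }

    console.log(routeQuery("protein folding")); // "semantic"
    console.log(routeQuery("cheap flights"));   // "keyword"
    ```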

  15. A framework for efficient spatial web object retrieval

    DEFF Research Database (Denmark)

    Wu, Dinging; Cong, Gao; Jensen, Christian S.

    2012-01-01

    The conventional Internet is acquiring a geospatial dimension. Web documents are being geo-tagged and geo-referenced objects such as points of interest are being associated with descriptive text documents. The resulting fusion of geo-location and documents enables new kinds of queries that take...

  16. Comparing the Scale of Web Subject Directories Precision in Technical-Engineering Information Retrieval

    Directory of Open Access Journals (Sweden)

    Mehrdokht Wazirpour Keshmiri

    2012-07-01

    The main purpose of this research was to compare the precision of web subject directories in information retrieval for technical-engineering science. Information gathering was documentary and webometric. Keywords of technical-engineering science were chosen in twenty different subjects from IEEE (Institute of Electrical and Electronics Engineers) and engineering magazines hosted on the ScienceDirect site. These keywords were used in five heavily used web subject directories: Yahoo, Google, Infomine, Intute and Dmoz. Since the first results in search tools are usually the most connected to the search keywords, the first ten results were evaluated in every search. These assessments consist of the scale of precision, the scale of error, and the ratio of retrieved items in technical-engineering categories to all retrieved items. The criteria used for determining precision, according to widely used standards in different documents, consist of the presence of the keywords in the title, the appearance of keywords in parts of the retrieved web pages, keyword adjacency, the URL of the page, the page description and subject categories. Information analysis was performed with the Kruskal-Wallis test and Fisher's L.S.D. Results revealed a meaningful difference in the precision of web subject directories in information retrieval of technical-engineering science; therefore this hypothesis was confirmed. Ranked by precision, the web subject directories are: Google, Yahoo, Intute, Dmoz and Infomine. The scale of observed error in the first results was another criterion used for comparing web subject directories; in this research, Yahoo had the minimum error and Infomine the most. This research also compared the ratio of retrieved items in all categories of the web subject directories to retrieved items in technical-engineering categories; results revealed a meaningful difference between them. And

  17. WAPTT - Web Application Penetration Testing Tool

    Directory of Open Access Journals (Sweden)

    DURIC, Z.

    2014-02-01

    Web application vulnerabilities allow attackers to perform malicious actions that range from gaining unauthorized account access to obtaining sensitive data. The number of reported web application vulnerabilities has increased dramatically in the last decade. Most vulnerabilities result from improper input validation and sanitization. The most important of these vulnerabilities are SQL injection (SQLI), Cross-Site Scripting (XSS) and Buffer Overflow (BOF). In order to address these vulnerabilities we designed and developed WAPTT (Web Application Penetration Testing Tool), a web application penetration testing tool. Unlike other web application penetration testing tools, this tool is modular and can be easily extended by the end user. In order to improve the efficiency of SQLI vulnerability detection, WAPTT uses an efficient algorithm for page similarity detection. The proposed tool showed promising results as compared to six well-known web application scanners in detecting various web application vulnerabilities.
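
    Page similarity detection for SQLI probing can be approximated with a set-overlap measure: if an injected request returns a page almost identical to the normal one, the injection likely had no effect. The sketch below uses a Jaccard coefficient over word tokens; the tokenisation and threshold are assumptions, not WAPTT's actual algorithm.

    ```typescript
    // Jaccard similarity over word tokens of two HTML responses.
    function jaccard(a: string, b: string): number {
      const tok = (s: string) =>
        new Set(s.toLowerCase().split(/\W+/).filter(Boolean));
      const A = tok(a);
      const B = tok(b);
      let inter = 0;
      A.forEach((t) => {
        if (B.has(t)) inter++;
      });
      const union = A.size + B.size - inter;
      return union === 0 ? 1 : inter / union;
    }

    const normalPage = "<html><body>Welcome Alice</body></html>";
    const probedPage = "<html><body>Welcome Alice</body></html>";
    // Nearly identical responses suggest the injected input changed nothing.
    console.log(jaccard(normalPage, probedPage) > 0.95); // true
    ```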

  18. Integrating Web Services into Map Image Applications

    National Research Council Canada - National Science Library

    Tu, Shengru

    2003-01-01

    Web services have been opening a wide avenue for software integration. In this paper, we have reported our experiments with three applications that are built by utilizing and providing web services for Geographic Information Systems (GIS...

  19. Effective Web and Desktop Retrieval with Enhanced Semantic Spaces

    Science.gov (United States)

    Daoud, Amjad M.

    We describe the design and implementation of the NETBOOK prototype system for collecting, structuring and efficiently creating semantic vectors for concepts, noun phrases, and documents from a corpus of free full-text ebooks available on the World Wide Web. Automatic generation of concept maps from correlated index terms and extracted noun phrases is used to build a powerful conceptual index of individual pages. To ensure scalability of our system, dimension reduction is performed using Random Projection [13]. Furthermore, we present a complete evaluation of the relative effectiveness of the NETBOOK system versus the Google Desktop [8].
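
    Random Projection, cited above as [13], reduces dimensionality by multiplying each high-dimensional vector by a fixed random matrix. A minimal sketch, with illustrative dimensions and a simple uniform random matrix rather than whatever distribution NETBOOK actually uses:

    ```typescript
    // Random Projection sketch: map d-dimensional vectors to k dimensions
    // via a fixed k x d random matrix scaled by 1/sqrt(k).
    function randomMatrix(rows: number, cols: number): number[][] {
      return Array.from({ length: rows }, () =>
        Array.from(
          { length: cols },
          () => (Math.random() * 2 - 1) / Math.sqrt(rows)
        )
      );
    }

    function project(vec: number[], R: number[][]): number[] {
      // R has dimensions k x d; the output has length k.
      return R.map((row) => row.reduce((acc, r, j) => acc + r * vec[j], 0));
    }

    const d = 1000;
    const k = 50;
    const R = randomMatrix(k, d); // fixed once, reused for every vector
    const docVector = Array.from({ length: d }, () => Math.random());
    console.log(project(docVector, R).length); // 50
    ```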

  20. Folksonomies indexing and retrieval in web 2.0

    CERN Document Server

    Peters, Isabella

    2009-01-01

    In Web 2.0 users not only make heavy use of Collaborative Information Services in order to create, publish and share digital information resources - what is more, they index and represent these resources via their own keywords, so-called tags. The sum of this user-generated metadata of a Collaborative Information Service is also called a Folksonomy. In contrast to professionally created and highly structured metadata, e.g. subject headings, thesauri, classification systems or ontologies, which are applied in libraries, corporate information architectures or commercial databases and which were deve

  1. Information Retrieval Strategies of Millennial Undergraduate Students in Web and Library Database Searches

    Science.gov (United States)

    Porter, Brandi

    2009-01-01

    Millennial students make up a large portion of undergraduate students attending colleges and universities, and they have a variety of online resources available to them to complete academically related information searches, primarily Web based and library-based online information retrieval systems. The content, ease of use, and required search…

  2. The Role of the Medical Students’ Emotional Mood in Information Retrieval from the Web

    Directory of Open Access Journals (Sweden)

    Marzieh Yari Zanganeh

    2018-04-01

    Full Text Available Background: Online information retrieval is a process whose outcome is influenced by changes in the emotional mood of the user. It seems reasonable to include emotional aspects in developing information retrieval systems in order to optimize the experience of the users. Therefore, this study aimed to identify the role of positive and negative affects in the information seeking process on the web among students of medical sciences. Methods: From the methodological perspective, the present study was an experimental and applied research. In line with the experimental method, observation and a questionnaire were used. The participants were students of various fields of medical sciences. The research sample included 50 students of Shiraz University of Medical Sciences selected through a purposeful sampling method; they regularly used the World Wide Web and the Google engine for information retrieval in educational, research, personal, or managerial activities. In order to collect the data, search tasks were characterized by topic, sequence in a search process, difficulty level, and the searcher's interest in a task. Face and content validity of the questionnaire were confirmed by the experts. Reliability of the questionnaire was tested with Cronbach's alpha; the coefficients (PA=0.777, NA=0.754) showed a high rate of reliability for the PANAS questionnaire. The collected data were analyzed using SPSS, version 20.0; also, to test the research hypothesis, the t-test and paired-samples t-test were used, with P < 0.05 considered statistically significant. Conclusion: Information retrieval systems on the Web should identify positive and negative affects in the information seeking process from a set of perceived signs in human-computer interaction. The automatic identification of the users' affect opens new dimensions in modeling users and in information retrieval systems for successful retrieval from the Web.
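
    Cronbach's alpha, the reliability coefficient reported above for the PANAS scales, can be computed from the item variances and the variance of the total score. A minimal sketch with hypothetical questionnaire data, not the study's:

        import numpy as np

        def cronbach_alpha(items: np.ndarray) -> float:
            """items: respondents x questionnaire-items matrix of scores."""
            k = items.shape[1]
            item_vars = items.var(axis=0, ddof=1).sum()
            total_var = items.sum(axis=1).var(ddof=1)
            return (k / (k - 1)) * (1 - item_vars / total_var)

        scores = np.array([[4, 5, 4, 3], [2, 3, 2, 2], [5, 5, 4, 4], [3, 4, 3, 3]])
        print(round(cronbach_alpha(scores), 3))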

  3. Design and Analysis of Web Application Frameworks

    DEFF Research Database (Denmark)

    Schwarz, Mathias Romme

    Numerous web application frameworks have been developed in recent years. These frameworks enable programmers to reuse common components and to avoid typical pitfalls in web application development. Although such frameworks help the programmer to avoid many common errors, we find... state manipulation vulnerabilities. The hypothesis of this dissertation is that we can design frameworks and static analyses that aid the programmer to avoid such errors. First, we present the JWIG web application framework for writing secure and maintainable web applications. We discuss how this framework solves... some of the common errors through an API that is designed to be safe by default. Second, we present a novel technique for checking HTML validity for output that is generated by web applications. Through string analysis, we approximate the output of web applications as context-free grammars. We model...

  4. Secure Java For Web Application Development

    CERN Document Server

    Bhargav, Abhay

    2010-01-01

    As the Internet has evolved, so have the various vulnerabilities, which largely stem from the fact that developers are unaware of the importance of a robust application security program. This book aims to educate readers on application security and building secure web applications using the new Java Platform. The text details a secure web application development process from the risk assessment phase to the proof of concept phase. The authors detail such concepts as application risk assessment, secure SDLC, security compliance requirements, web application vulnerabilities and threats, security

  5. A Holistic Approach to Securing Web Applications

    OpenAIRE

    Stankovic, Srdjan; Simic, Dejan

    2010-01-01

    Protection of Web applications is an activity that requires constant monitoring of security threats as well as looking for solutions in this field. Since protection has moved from the lower layers of the OSI model to the application layer, and bearing in mind that 75% of all attacks are performed at the application layer, special attention should be paid to it. It is possible to improve protection of Web applications at the level of the system architecture by introdu...

  6. A Web Service Framework for Economic Applications

    Directory of Open Access Journals (Sweden)

    Dan BENTA

    2010-01-01

    Full Text Available The Internet offers multiple solutions to link companies with their partners, customers or suppliers using IT solutions, including a special focus on Web services. Web services are able to solve problems related to the exchange of data between business partners, markets that can use each other's services, and incompatibility between IT applications. As web services are described, discovered and accessed programmatically based on XML vocabularies and Web protocols, web services represent solutions for Web-based technologies for small and medium-sized enterprises (SMEs). This paper presents a web service framework for economic applications. A prototype of this IT solution using web services was also presented and implemented in a few companies from the IT, commerce and consulting fields, measuring the impact of the solution on the development of the business environment.

  7. Applying Semantic Web technologies to improve the retrieval, credibility and use of health-related web resources.

    Science.gov (United States)

    Mayer, Miguel A; Karampiperis, Pythagoras; Kukurikos, Antonis; Karkaletsis, Vangelis; Stamatakis, Kostas; Villarroel, Dagmar; Leis, Angela

    2011-06-01

    The number of health-related websites is increasing day-by-day; however, their quality is variable and difficult to assess. Various "trust marks" and filtering portals have been created in order to assist consumers in retrieving quality medical information. Consumers are using search engines as the main tool to get health information; however, the major problem is that the meaning of the web content is not machine-readable in the sense that computers cannot understand words and sentences as humans can. In addition, trust marks are invisible to search engines, thus limiting their usefulness in practice. During the last five years there have been different attempts to use Semantic Web tools to label health-related web resources to help internet users identify trustworthy resources. This paper discusses how Semantic Web technologies can be applied in practice to generate machine-readable labels and display their content, as well as to empower end-users by providing them with the infrastructure for expressing and sharing their opinions on the quality of health-related web resources.
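
    As an illustration of such machine-readable labelling, a quality label can be expressed as RDF triples attached to a site's URI so that software, not only humans, can read it. A minimal sketch using Python's rdflib; the vocabulary URI and property names are hypothetical, not the labelling schema the paper describes:

        from rdflib import Graph, Literal, Namespace, URIRef

        EX = Namespace("http://example.org/quality-label#")  # hypothetical vocabulary
        g = Graph()
        site = URIRef("http://example.org/health-site")

        g.add((site, EX.accreditedBy, Literal("Example Trust Mark")))
        g.add((site, EX.lastReviewed, Literal("2011-01-15")))
        g.add((site, EX.targetAudience, Literal("patients")))

        print(g.serialize(format="turtle"))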

  8. Maintenance-Ready Web Application Development

    Directory of Open Access Journals (Sweden)

    Ion IVAN

    2016-01-01

    Full Text Available The current paper tackles the subject of developing maintenance-ready web applications. Maintenance is presented as a core stage in a web application's lifecycle. The concept of maintenance-ready is defined in the context of web application development. Web application maintenance task types are enunciated and suitable task types are identified for further analysis. The research hypothesis is formulated based on a direct link between tackling maintenance in the development stage and reducing overall maintenance costs. A live maintenance-ready web application is presented and maintenance-related aspects are highlighted. The web application features that render it maintenance-ready are emphasized. The cost of designing and building the web application to be maintenance-ready is disclosed, as are the savings in maintenance development effort facilitated by maintenance-ready features. Maintenance data is collected from 40 projects implemented by a web development company. Homogeneity and diversity of the collected data are evaluated. A data sample is presented and the size and comprehensive nature of the entire dataset are depicted. The research hypothesis is validated and conclusions are formulated on the topic of developing maintenance-ready web applications. The limits of the research process which represented the basis for the current paper are enunciated. Future research topics are submitted for debate.

  9. Automatic invariant detection in dynamic web applications

    NARCIS (Netherlands)

    Groeneveld, F.; Mesbah, A.; Van Deursen, A.

    2010-01-01

    The complexity of modern web applications increases as client-side JavaScript and dynamic DOM programming are used to offer a more interactive web experience. In this paper, we focus on improving the dependability of such applications by automatically inferring invariants from the client-side and
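
    The idea of an inferred invariant can be illustrated as a predicate that must hold in every observed DOM state; a violation points to a potential fault. A minimal sketch with a hypothetical invariant and hypothetical state snapshots, not the paper's inference technique:

        def invariant_single_h1(dom_snapshot: str) -> bool:
            """Inferred invariant: every state renders exactly one <h1>."""
            return dom_snapshot.count("<h1") == 1

        observed_states = [
            "<html><body><h1>Inbox</h1></body></html>",
            "<html><body><h1>Inbox</h1><h1>Oops</h1></body></html>",  # violation
        ]
        for i, state in enumerate(observed_states):
            if not invariant_single_h1(state):
                print(f"state {i}: invariant violated")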

  10. Building Grid applications using Web Services

    CERN Multimedia

    CERN. Geneva

    2004-01-01

    There has been a lot of discussion within the Grid community about the use of Web Services technologies in building large-scale, loosely-coupled, cross-organisation applications. In this talk we are going to explore the principles that govern Service-Oriented Architectures and the promise of Web Services technologies for integrating applications that span administrative domains. We are going to see how existing Web Services specifications and practices could provide the necessary infrastructure for implementing Grid applications. Biography Dr. Savas Parastatidis is a Principal Research Associate at the School of Computing Science, University of Newcastle upon Tyne, UK. Savas is one of the authors of the "Grid Application Framework based on Web Services Specifications and Practices" document that was influential in the convergence between Grid and Web Services and the move away from OGSI (more information can be found at http://www.neresc.ac.uk/ws-gaf). He has done research on runtime support for distributed-m...

  11. Developing web applications with Oracle ADF essentials

    CERN Document Server

    Vesterli, Sten E

    2013-01-01

    Developing Web Applications with Oracle ADF Essentials covers the basics of Oracle ADF and then works through more complex topics such as debugging and logging features and JAAS Security in JDeveloper as the reader gains more skills. This book will follow a tutorial approach, using a practical example, with the content and tasks getting harder throughout. "Developing Web Applications with Oracle ADF Essentials" is for you if you want to build modern, user-friendly web applications for all kinds of data gathering, analysis, and presentations. You do not need to know any advanced HTML or JavaSc

  12. WordPress web application development

    CERN Document Server

    Ratnayake, Rakhitha Nimesh

    2013-01-01

    An extensive, practical guide that explains how to adapt WordPress features, both conventional and trending, for web applications. This book is intended for WordPress developers and designers who have the desire to go beyond conventional website development to develop quality web applications within a limited time frame and for maximum profit. Experienced web developers who are looking for a framework for rapid application development will also find this to be a useful resource. Prior knowledge of WordPress is preferable as the main focus will be on explaining methods for adapting WordPres

  13. System Testing of Desktop and Web Applications

    Science.gov (United States)

    Slack, James M.

    2011-01-01

    We want our students to experience system testing of both desktop and web applications, but the cost of professional system-testing tools is far too high. We evaluate several free tools and find that AutoIt makes an ideal educational system-testing tool. We show several examples of desktop and web testing with AutoIt, starting with simple…

  14. Retrieval of very large numbers of items in the Web of Science: an exercise to develop accurate search strategies

    NARCIS (Netherlands)

    Arencibia-Jorge, R.; Leydesdorff, L.; Chinchilla-Rodríguez, Z.; Rousseau, R.; Paris, S.W.

    2009-01-01

    The Web of Science interface counts at most 100,000 retrieved items from a single query. If the query results in a dataset containing more than 100,000 items the number of retrieved items is indicated as >100,000. The problem studied here is how to find the exact number of items in a query that

  15. Opal web services for biomedical applications.

    Science.gov (United States)

    Ren, Jingyuan; Williams, Nadya; Clementi, Luca; Krishnan, Sriram; Li, Wilfred W

    2010-07-01

    Biomedical applications have become increasingly complex, and they often require large-scale high-performance computing resources with a large number of processors and memory. The complexity of application deployment and the advances in cluster, grid and cloud computing require new modes of support for biomedical research. Scientific Software as a Service (sSaaS) enables scalable and transparent access to biomedical applications through simple standards-based Web interfaces. Towards this end, we built a production web server (http://ws.nbcr.net) in August 2007 to support the bioinformatics application called MEME. The server has grown since to include docking analysis with AutoDock and AutoDock Vina, electrostatic calculations using PDB2PQR and APBS, and off-target analysis using SMAP. All the applications on the servers are powered by Opal, a toolkit that allows users to wrap scientific applications easily as web services without any modification to the scientific codes, by writing simple XML configuration files. Opal allows both web forms-based access and programmatic access of all our applications. The Opal toolkit currently supports SOAP-based Web service access to a number of popular applications from the National Biomedical Computation Resource (NBCR) and affiliated collaborative and service projects. In addition, Opal's programmatic access capability allows our applications to be accessed through many workflow tools, including Vision, Kepler, Nimrod/K and VisTrails. From mid-August 2007 to the end of 2009, we have successfully executed 239,814 jobs. The number of successfully executed jobs more than doubled from 205 to 411 per day between 2008 and 2009. The Opal-enabled service model is useful for a wide range of applications. It provides for interoperation with other applications with Web Service interfaces, and allows application developers to focus on the scientific tool and workflow development. Web server availability: http://ws.nbcr.net.

  16. Web application security: a beginner's guide

    National Research Council Canada - National Science Library

    Sullivan, Bryan; Liu, Vincent

    2012-01-01

    .... Sullivan and Liu have created a savvy, essentials-based approach to web app security packed with immediately applicable tools for any information security practitioner sharpening his or her tools or just starting...

  17. [Application of spaced retrieval training on patients with dementia].

    Science.gov (United States)

    Wu, Hua-Shan; Lin, Li-Chan

    2012-10-01

    Dementia causes semantic and episodic memory impairments that limit patients' activities of daily living (ADL) and increase caregiver burden. Spaced retrieval training uses repetitive retrieval to strengthen cognitive and motor skills intuitively in mild/moderate dementia patients who retain preserved implicit/non-declarative memory. This article describes and discusses the operative mechanism, influencing variables, and practical applications of spaced retrieval training. We hope this article increases professional understanding and application of this training approach, improving dementia patients' ADL and the quality of life of both caregivers and patients.

  18. Continuous Integration in PHP web applications development

    OpenAIRE

    Hujer, Martin

    2011-01-01

    This work deals with the continuous integration of web applications, especially those written in PHP. The main objective is the selection of a server for continuous integration, and its deployment and configuration for continuous integration of PHP web applications. The first chapter describes the concept of continuous integration and its individual techniques. The second chapter deals with the choice of server for continuous integration and its basic settings. The third chapter contains an overvi...

  19. Bat-Inspired Algorithm Based Query Expansion for Medical Web Information Retrieval.

    Science.gov (United States)

    Khennak, Ilyes; Drias, Habiba

    2017-02-01

    With the increasing amount of medical data available on the Web, looking for health information has become one of the most widely searched topics on the Internet. Patients and people of several backgrounds are now using Web search engines to acquire medical information, including information about a specific disease, medical treatment or professional advice. Nonetheless, due to a lack of medical knowledge, many laypeople have difficulties in forming appropriate queries to articulate their inquiries, and their search queries end up imprecise due to the use of unclear keywords. The use of these ambiguous and vague queries to describe the patients' needs has resulted in a failure of Web search engines to retrieve accurate and relevant information. One of the most natural and promising methods to overcome this drawback is Query Expansion. In this paper, an original approach based on the Bat Algorithm is proposed to improve the retrieval effectiveness of query expansion in the medical field. In contrast to the existing literature, the proposed approach uses the Bat Algorithm to find the best expanded query among a set of expanded query candidates, while maintaining low computational complexity. Moreover, this new approach allows the length of the expanded query to be determined empirically. Numerical results on MEDLINE, the on-line medical information database, show that the proposed approach is more effective and efficient compared to the baseline.
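
    The essence of the approach, greatly simplified, is to search the space of candidate expanded queries for the one with the best retrieval fitness. The sketch below stands in for the Bat Algorithm with naive random sampling and a dummy fitness function; the real system scores candidates against MEDLINE:

        import random

        random.seed(0)

        def fitness(expanded_query):
            # Stand-in for retrieval effectiveness of the expanded query.
            return -abs(len(expanded_query) - 4) + random.random()

        original = ("chest", "pain")
        terms = ["angina", "cardiac", "myocardial", "treatment", "diagnosis"]

        best, best_fit = original, fitness(original)
        for _ in range(50):  # each iteration: one "bat" samples a candidate
            candidate = original + tuple(random.sample(terms, random.randint(1, 3)))
            f = fitness(candidate)
            if f > best_fit:
                best, best_fit = candidate, f
        print(best)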

  20. Estimating Maintenance Cost for Web Applications

    Directory of Open Access Journals (Sweden)

    Ion IVAN

    2016-01-01

    Full Text Available The current paper tackles the issue of determining a method for estimating maintenance costs for web applications. The current state of research in the field of web application maintenance is summarized and leading theories and results are highlighted. The cost of web maintenance is determined by the number of man-hours invested in maintenance tasks. Web maintenance tasks are categorized into content maintenance and technical maintenance, and the research centers on analyzing technical maintenance tasks. The research hypothesis is formulated on the assumption that the number of man-hours invested in maintenance tasks can be assessed based on the web application's user interaction level, complexity and content update effort. Data regarding the costs of maintenance tasks is collected from 24 maintenance projects implemented by a web development company that tackles a wide area of web applications. Homogeneity and diversity of the collected data are submitted for debate by presenting a sample of the data and depicting the overall size and comprehensive nature of the entire dataset. A set of metrics dedicated to estimating maintenance costs in web applications is defined based on conclusions formulated by analyzing the collected data and the theories and practices dominating the current state of research. The metrics are validated with regard to the initial research hypothesis. The research hypothesis is validated and conclusions are formulated on the topic of estimating the maintenance cost of web applications. The limits of the research process which represented the basis for the current paper are enunciated. Future research topics are submitted for debate.
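
    The kind of estimation model the paper works towards can be illustrated as a function of the three factors named in the hypothesis. The linear form and the weights below are illustrative assumptions, not the paper's actual metrics:

        def estimated_maintenance_hours(interaction_level, complexity, content_update_effort):
            # Hypothetical weights; the paper derives its metrics from 24 projects.
            w_ui, w_cx, w_cu = 1.5, 2.0, 1.0
            return (w_ui * interaction_level
                    + w_cx * complexity
                    + w_cu * content_update_effort)

        print(estimated_maintenance_hours(interaction_level=3,
                                          complexity=5,
                                          content_update_effort=2))  # 16.5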

  1. Geant4 application in a Web browser

    International Nuclear Information System (INIS)

    Garnier, Laurent

    2014-01-01

    Geant4 is a toolkit for the simulation of the passage of particles through matter. The Geant4 visualization system supports many drivers including OpenGL[1], OpenInventor, HepRep[2], DAWN[3], VRML, RayTracer, gMocren[4] and ASCIITree, with diverse and complementary functionalities. Web applications have an increasing role in our work, and thanks to emerging frameworks such as Wt [5], a web application can be built on top of a C++ application without rewriting all the code. Because the Geant4 toolkit's visualization and user interface modules are well decoupled from the rest of Geant4, it is straightforward to adapt these modules to render in a web application instead of a computer's native window manager. The API of the Wt framework closely matches that of Qt [6], so our experience in building the Qt driver benefits the Wt driver. Porting a Geant4 application to a web application is easy: with minimal effort, Geant4 users can replicate this process to share their own Geant4 applications in a web browser.

  2. Efficient Retrieval of the Top-k Most Relevant Spatial Web Objects

    DEFF Research Database (Denmark)

    Cong, Gao; Jensen, Christian Søndergaard; Wu, Dingming

    2009-01-01

    The conventional Internet is acquiring a geo-spatial dimension. Web documents are being geo-tagged, and geo-referenced objects such as points of interest are being associated with descriptive text documents. The resulting fusion of geo-location and documents enables a new kind of top-k query that takes into account both location proximity and text relevancy. To our knowledge, only naive techniques exist that are capable of computing a general web information retrieval query while also taking location into account. This paper proposes a new indexing framework for location-aware top-k text... both text relevancy and location proximity to prune the search space. Results of empirical studies with an implementation of the framework demonstrate that the paper’s proposal offers scalability and is capable of excellent performance...
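
    The query type at stake can be illustrated by ranking objects on a weighted combination of text relevancy and location proximity and keeping the top k. A minimal sketch with hypothetical data and weighting; the paper's contribution is the index that prunes this search, which the sketch omits:

        import heapq, math

        def score(obj, query_terms, q_loc, alpha=0.5):
            text_rel = len(query_terms & obj["terms"]) / len(query_terms)
            proximity = 1.0 / (1.0 + math.dist(q_loc, obj["loc"]))
            return alpha * text_rel + (1 - alpha) * proximity

        objects = [
            {"id": "cafe", "terms": {"coffee", "wifi"}, "loc": (0.0, 1.0)},
            {"id": "library", "terms": {"books", "wifi"}, "loc": (5.0, 5.0)},
            {"id": "diner", "terms": {"coffee", "food"}, "loc": (0.5, 0.2)},
        ]
        query = {"coffee", "wifi"}
        top2 = heapq.nlargest(2, objects, key=lambda o: score(o, query, (0.0, 0.0)))
        print([o["id"] for o in top2])  # ['cafe', 'diner']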

  3. Comparing Web Applications with Desktop Applications: An Empirical Study

    DEFF Research Database (Denmark)

    Pop, Paul

    2002-01-01

    In recent years, many desktop applications have been ported to the world wide web in order to reduce (multiplatform) development, distribution and maintenance costs. However, there is little data concerning the usability of web applications, and the impact of their usability on the total cost of developing and using such applications. In this paper we present a comparison of web and desktop applications from the usability point of view. The comparison is based on an empirical study that investigates the performance of a group of users on two calendaring applications: Yahoo!Calendar and Microsoft Calendar. The study shows that in the case of web applications the performance of the users is significantly reduced, mainly because of the restricted interaction mechanisms provided by current web browsers.

  4. Capturing Trust in Social Web Applications

    Science.gov (United States)

    O'Donovan, John

    The Social Web constitutes a shift in information flow from the traditional Web. Previously, content was provided by the owners of a website, for consumption by the end-user. Nowadays, these websites are being replaced by Social Web applications which are frameworks for the publication of user-provided content. Traditionally, Web content could be 'trusted' to some extent based on the site it originated from. Algorithms such as Google's PageRank were (and still are) used to compute the importance of a website, based on analysis of underlying link topology. In the Social Web, analysis of link topology merely tells us about the importance of the information framework which hosts the content. Consumers of information still need to know about the importance/reliability of the content they are reading, and therefore about the reliability of the producers of that content. Research into trust and reputation of the producers of information in the Social Web is still very much in its infancy. Every day, people are forced to make trusting decisions about strangers on the Web based on a very limited amount of information. For example, purchasing a product from an eBay seller with a 'reputation' of 99%, downloading a file from a peer-to-peer application such as BitTorrent, or allowing Amazon.com to tell you what products you will like. Even something as simple as reading comments on a Web-blog requires the consumer to make a trusting decision about the quality of that information. In all of these example cases, and indeed throughout the Social Web, there is a pressing demand for increased information upon which we can make trusting decisions. This chapter examines the diversity of sources from which trust information can be harnessed within Social Web applications and discusses a high level classification of those sources. Three different techniques for harnessing and using trust from a range of sources are presented. These techniques are deployed in two sample Social Web

  5. Retrieval operators of remote sensing applications

    International Nuclear Information System (INIS)

    Ahmad, T.; Shah, A.

    2014-01-01

    A set of operators for remote sensing applications has been proposed to fulfill most of the Functional Requirements (FR). These operators capture the functions of the applications, which can be considered as the services provided by the applications. In general, a good application meets the maximum of the user's functional requirements. In this paper, we define a remote sensing application by a set containing all images created at dissimilar time instances, with each image categorized into a set of different layers. (author)

  6. Modelling Safe Interface Interactions in Web Applications

    Science.gov (United States)

    Brambilla, Marco; Cabot, Jordi; Grossniklaus, Michael

    Current Web applications embed sophisticated user interfaces and business logic. The original interaction paradigm of the Web based on static content pages that are browsed by hyperlinks is, therefore, not valid anymore. In this paper, we advocate a paradigm shift for browsers and Web applications, that improves the management of user interaction and browsing history. Pages are replaced by States as basic navigation nodes, and Back/Forward navigation along the browsing history is replaced by a full-fledged interactive application paradigm, supporting transactions at the interface level and featuring Undo/Redo capabilities. This new paradigm offers a safer and more precise interaction model, protecting the user from unexpected behaviours of the applications and the browser.

  7. Specification framework for engineering adaptive web applications

    NARCIS (Netherlands)

    Frasincar, F.; Houben, G.J.P.M.; Vdovják, R.

    2002-01-01

    The growing demand for data-driven Web applications has led to the need for a structured and controlled approach to the engineering of such applications. Both designers and developers need a framework that in all stages of the engineering process allows them to specify the relevant aspects of the

  8. Two Algorithms for Web Applications Assessment

    Directory of Open Access Journals (Sweden)

    Stavros Ioannis Valsamidis

    2011-09-01

    Full Text Available The usage of web applications can be measured with the use of metrics. In an LMS, a typical web application, there are no appropriate metrics which would facilitate qualitative and quantitative measurement. The purpose of this paper is to propose the use of existing techniques in a different way, in order to analyze the log file of a typical LMS and deduce useful conclusions. Three metrics for course usage measurement are used. The paper also describes two algorithms, for course classification and for suggesting actions. The metrics and the algorithms were applied to Open eClass LMS tracking data of an academic institution. The results from 39 courses presented interesting insights. Although the case study concerns an LMS, the approach can also be applied to other web applications such as e-government, e-commerce, e-banking, blogs, etc.
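
    The flavor of such log-derived usage metrics can be illustrated by counting hits and unique users per course from raw log lines. The log format and metric choices below are hypothetical, not Open eClass's actual log schema:

        from collections import Counter

        log_lines = [
            "2011-09-01 10:02 user1 COURSE_A view_page",
            "2011-09-01 10:05 user2 COURSE_A download_file",
            "2011-09-01 11:00 user1 COURSE_B view_page",
        ]

        visits = Counter(line.split()[3] for line in log_lines)  # hits per course
        users = {}
        for line in log_lines:
            _, _, user, course, _ = line.split()
            users.setdefault(course, set()).add(user)

        for course in visits:
            print(course, "hits:", visits[course], "unique users:", len(users[course]))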

  9. Memory versus logic: two models of organizing information and their influences on web retrieval strategies

    Directory of Open Access Journals (Sweden)

    Teresa Numerico

    2008-07-01

    Full Text Available We can find the first anticipation of the World Wide Web's hypertextual structure in Bush's 1945 paper, where he described a "selection" and storage machine called the Memex, capable of keeping the useful information of a user and connecting it to other relevant material present in the machine or added by other users. We will argue that Vannevar Bush, who conceived this type of machine, did so because of his involvement with analogue devices. During the 1930s, in fact, he invented and built the Differential Analyzer, a powerful analogue machine used to calculate various relevant mathematical functions. The model of the Memex is not a digital one, because it relies on another form of data representation, one that emulates the procedures of memory more than the attitude of the logic used by the intellect. Memory seems to select and arrange information according to association strategies, i.e., using analogies and connections that are very often arbitrary, sometimes even chaotic and completely subjective. The organization of information and the knowledge creation process suggested by logic and the symbolic formal representation of data is deeply different from the former, though the logic approach is at the core of the birth of computer science (i.e., the Turing Machine and the Von Neumann Machine). We will discuss the issues raised by these two "visions" of information management and the influences of the philosophical tradition of the theory of knowledge on the hypertextual organization of content. We will also analyze the consequences of these different attitudes with respect to information retrieval techniques in a hypertextual environment such as the web. Our position is that it is necessary to take into account the nature and the dynamic social topology of the network when we choose information retrieval methods for the network; otherwise, we risk creating a misleading service for the end user of web search tools (i.e., search engines).

  10. The Role of the Web Server in a Capstone Web Application Course

    Science.gov (United States)

    Umapathy, Karthikeyan; Wallace, F. Layne

    2010-01-01

    Web applications have become commonplace in the Information Systems curriculum. Much of the discussion about Web development for capstone courses has centered on the scripting tools. Very little has been discussed about different ways to incorporate the Web server into Web application development courses. In this paper, three different ways of…

  11. Significant Benefits from Libraries in Web 3.0 Environment

    African Journals Online (AJOL)

    pc

    2018-03-05

    Keywords - Web 3.0, Library 3.0, Web 3.0 Applications, Semantic Web ... providing virtual information services, and other services cannot be ... web third generation, definition, beginning, and retrieval system. The study ...

  12. Semantic-Web Technology: Applications at NASA

    Science.gov (United States)

    Ashish, Naveen

    2004-01-01

    We provide a description of work at the National Aeronautics and Space Administration (NASA) on building systems based on semantic-web concepts and technologies. NASA has been one of the early adopters of semantic-web technologies for practical applications. Indeed there are several ongoing endeavors on building semantics based systems for use in diverse NASA domains ranging from collaborative scientific activity to accident and mishap investigation to enterprise search to scientific information gathering and integration to aviation safety decision support. We provide a brief overview of many applications and ongoing work with the goal of informing the external community of these NASA endeavors.

  13. OntoTrader: An Ontological Web Trading Agent Approach for Environmental Information Retrieval

    Directory of Open Access Journals (Sweden)

    Luis Iribarne

    2014-01-01

    Full Text Available Modern Web-based Information Systems (WIS) are becoming increasingly necessary to provide support for users who are in different places with different types of information, by facilitating their access to the information, decision making, workgroups, and so forth. Design of these systems requires the use of standardized methods and techniques that enable a common vocabulary to be defined to represent the underlying knowledge. Thus, mediation elements such as traders enrich the interoperability of web components in open distributed systems. These traders must operate with other third-party traders and/or agents in the system, which must also use a common vocabulary for communication between them. This paper presents the OntoTrader architecture, an Ontological Web Trading agent based on the OMG ODP trading standard. It also presents the ontology needed by some system agents to communicate with the trading agent and the behavioral framework for the SOLERES OntoTrader agent, an Environmental Management Information System (EMIS). This framework implements a "Query-Searching/Recovering-Response" information retrieval model using a trading service, SPARQL notation, and the JADE platform. The paper also presents reflection, delegation and federation mediation models and describes formalization, an experimental testing environment in three scenarios, and a tool which allows our proposal to be evaluated and validated.

  14. SWHi system description : A case study in information retrieval, inference, and visualization in the Semantic Web

    NARCIS (Netherlands)

    Fahmi, Ismail; Zhang, Junte; Ellermann, Henk; Bouma, Gosse; Franconi, E; Kifer, M; May, W

    2007-01-01

    Search engines have become the most popular tools for finding information on the Internet. A real-world Semantic Web application can benefit from this by combining its features with some features from search engines. In this paper, we describe methods for indexing and searching a populated ontology

  15. HTML5 web application development by example

    CERN Document Server

    Gustafson, JM

    2013-01-01

    The best way to learn anything is by doing. The author uses a friendly tone and fun examples to ensure that you learn the basics of application development. Once you have read this book, you should have the necessary skills to build your own applications.If you have no experience but want to learn how to create applications in HTML5, this book is the only help you'll need. Using practical examples, HTML5 Web Application Development by Example will develop your knowledge and confidence in application development.

  16. Distributed Systems and Applications of Information Filtering and Retrieval

    CERN Document Server

    Giuliani, Alessandro; Semeraro, Giovanni; DART 2012

    2014-01-01

    This volume focuses on new challenges in distributed Information Filtering and Retrieval. It collects invited chapters and extended research contributions from the special session on Information Filtering and Retrieval: Novel Distributed Systems and Applications (DART) of the 4th International Conference on Knowledge Discovery and Information Retrieval (KDIR 2012), held in Barcelona, Spain, on 4-7 October 2012. The main focus of DART was to discuss and compare suitable novel solutions based on intelligent techniques and applied to real-world applications. The chapters of this book present a comprehensive review of related works and the state of the art. Authors, both practitioners and researchers, shared their results in several topics such as "Multi-Agent Systems", "Natural Language Processing", "Automatic Advertisement", "Customer Interaction Analytics", "Opinion Mining". Contributions have been carefully reviewed by experts in the area, who also gave useful suggestions to improve the quality of the volume.

  17. Mastering web application development with Express

    CERN Document Server

    Vlăduțu, Alexandru

    2014-01-01

    If you are a Node.js developer who wants to take your Express skills to the next level and develop high performing, reliable web applications using best practices, this book is ideal for you. The only prerequisite is knowledge of Node.js.

  18. Web-based applications for virtual laboratories

    NARCIS (Netherlands)

    Bier, H.H.

    2011-01-01

    Web-based applications for academic education usually facilitate the exchange of multimedia files, while design-oriented domains such as architectural and urban design require additional support in collaborative real-time drafting and modeling. In this context, multi-user interactive interfaces

  19. Web-Scale Discovery Services Retrieve Relevant Results in Health Sciences Topics Including MEDLINE Content

    Directory of Open Access Journals (Sweden)

    Elizabeth Margaret Stovold

    2017-06-01

    Full Text Available A Review of: Hanneke, R., & O'Brien, K. K. (2016). Comparison of three web-scale discovery services for health sciences research. Journal of the Medical Library Association, 104(2), 109-117. http://dx.doi.org/10.3163/1536-5050.104.2.004 Abstract Objective – To compare the results of health sciences search queries in three web-scale discovery (WSD) services for relevance, duplicate detection, and retrieval of MEDLINE content. Design – Comparative evaluation and bibliometric study. Setting – Six university libraries in the United States of America. Subjects – Three commercial WSD services: Primo, Summon, and EBSCO Discovery Service (EDS). Methods – The authors collected data at six universities, including their own. They tested each of the three WSDs at two data collection sites. However, since one of the sites was using a legacy version of Summon that was due to be upgraded, data collected for Summon at this site were considered obsolete and excluded from the analysis. The authors generated three questions for each of six major health disciplines, then designed simple keyword searches to mimic typical student search behaviours. They captured the first 20 results from each query run at each test site, to represent the first "page" of results, giving a total of 2,086 search results. These were independently assessed for relevance to the topic. The authors resolved disagreements by discussion, and calculated a kappa inter-observer score. They retained duplicate records within the results so that duplicate detection by the WSDs could be compared. They assessed MEDLINE coverage by the WSDs in several ways. Using precise strategies to generate a relevant set of articles, they conducted one search from each of the six disciplines in PubMed so that they could compare retrieval of MEDLINE content. These results were cross-checked against the first 20 results from the corresponding query in the WSDs. To aid investigation of overall

  20. Leveraging Web Services in Providing Efficient Discovery, Retrieval, and Integration of NASA-Sponsored Observations and Predictions

    Science.gov (United States)

    Bambacus, M.; Alameh, N.; Cole, M.

    2006-12-01

    The Applied Sciences Program at NASA focuses on extending the results of NASA's Earth-Sun system science research beyond the science and research communities to contribute to national priority applications with societal benefits. By employing a systems engineering approach, supporting interoperable data discovery and access, and developing partnerships with federal agencies and national organizations, the Applied Sciences Program facilitates the transition from research to operations in national applications. In particular, the Applied Sciences Program identifies twelve national applications, listed at http://science.hq.nasa.gov/earth-sun/applications/, which can be best served by the results of NASA aerospace research and development of science and technologies. The ability to use and integrate NASA data and science results into these national applications results in enhanced decision support and significant socio-economic benefits for each of the applications. This paper focuses on leveraging the power of interoperability and specifically open standard interfaces in providing efficient discovery, retrieval, and integration of NASA's science research results. Interoperability (the ability to access multiple, heterogeneous geoprocessing environments, either local or remote by means of open and standard software interfaces) can significantly increase the value of NASA-related data by increasing the opportunities to discover, access and integrate that data in the twelve identified national applications (particularly in non-traditional settings). Furthermore, access to data, observations, and analytical models from diverse sources can facilitate interdisciplinary and exploratory research and analysis. To streamline this process, the NASA GeoSciences Interoperability Office (GIO) is developing the NASA Earth-Sun System Gateway (ESG) to enable access to remote geospatial data, imagery, models, and visualizations through open, standard web protocols. The gateway (online

  1. Determining Data Entry Points For Javascript-rich Web applications

    Directory of Open Access Journals (Sweden)

    George Maksimovich Noseevich

    2013-02-01

    Full Text Available The paper is devoted to the task of automatically crawling JavaScript-rich web applications for data entry points. A new technique is proposed which combines dynamic and static JavaScript code analysis. Testing the proposed technique on real-world web applications such as Twitter, YouTube and Reddit has confirmed its applicability for the analysis of modern web applications.

  2. Using the open Web as an information resource and scholarly Web search engines as retrieval tools for academic and research purposes

    Directory of Open Access Journals (Sweden)

    Filistea Naude

    2010-08-01

    Full Text Available This study provided insight into the significance of the open Web as an information resource and Web search engines as research tools amongst academics. The academic staff establishment of the University of South Africa (Unisa was invited to participate in a questionnaire survey and included 1188 staff members from five colleges. This study culminated in a PhD dissertation in 2008. One hundred and eighty seven respondents participated in the survey which gave a response rate of 15.7%. The results of this study show that academics have indeed accepted the open Web as a useful information resource and Web search engines as retrieval tools when seeking information for academic and research work. The majority of respondents used the open Web and Web search engines on a daily or weekly basis to source academic and research information. The main obstacles presented by using the open Web and Web search engines included lack of time to search and browse the Web, information overload, poor network speed and the slow downloading speed of webpages.

  3. Using the open Web as an information resource and scholarly Web search engines as retrieval tools for academic and research purposes

    Directory of Open Access Journals (Sweden)

    Filistea Naude

    2010-12-01

    Full Text Available This study provided insight into the significance of the open Web as an information resource and Web search engines as research tools amongst academics. The academic staff establishment of the University of South Africa (Unisa was invited to participate in a questionnaire survey and included 1188 staff members from five colleges. This study culminated in a PhD dissertation in 2008. One hundred and eighty seven respondents participated in the survey which gave a response rate of 15.7%. The results of this study show that academics have indeed accepted the open Web as a useful information resource and Web search engines as retrieval tools when seeking information for academic and research work. The majority of respondents used the open Web and Web search engines on a daily or weekly basis to source academic and research information. The main obstacles presented by using the open Web and Web search engines included lack of time to search and browse the Web, information overload, poor network speed and the slow downloading speed of webpages.

  4. Design and Application of an Intelligent Agent for Web Information Discovery

    Institute of Scientific and Technical Information of China (English)

    闵君; 冯珊; 唐超; 许立达

    2003-01-01

    With the propagation of applications on the internet, the internet has become a great information source which supplies users with valuable information. But it is hard for users to quickly acquire the right information on the web. This paper presents an intelligent agent for internet applications to retrieve and extract web information under the user's guidance. The intelligent agent is made up of a retrieval script to identify web sources, an extraction script based on the document object model to express the extraction process, a data translator to export the extracted information into knowledge bases with frame structures, and a data reasoner to reply to users' questions. A GUI tool named Script Writer helps to generate the extraction script visually, and knowledge rule databases help to extract wanted information and to generate the answers to questions.
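
    The extraction-script idea, expressing what to pull from the parsed document tree, can be illustrated with Python's standard html.parser; the target markup and the extracted field are hypothetical, not the agent's actual script language:

        from html.parser import HTMLParser

        class TitleExtractor(HTMLParser):
            """Collect the text of every <h2 class="title"> element."""
            def __init__(self):
                super().__init__()
                self.in_title = False
                self.titles = []

            def handle_starttag(self, tag, attrs):
                if tag == "h2" and ("class", "title") in attrs:
                    self.in_title = True

            def handle_data(self, data):
                if self.in_title:
                    self.titles.append(data.strip())
                    self.in_title = False

        page = '<html><body><h2 class="title">Item one</h2><h2 class="title">Item two</h2></body></html>'
        parser = TitleExtractor()
        parser.feed(page)
        print(parser.titles)  # ['Item one', 'Item two']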

  5. Information Retrieval Models

    NARCIS (Netherlands)

    Hiemstra, Djoerd; Göker, Ayse; Davies, John

    2009-01-01

    Many applications that handle information on the internet would be completely inadequate without the support of information retrieval technology. How would we find information on the world wide web if there were no web search engines? How would we manage our email without spam filtering? Much of the

  6. Directions for Web and E-Commerce Applications Security

    OpenAIRE

    Thuraisingham, Bhavani; Clifton, Chris; Gupta, Amar; Bertino, Elisa; Ferrari, Elena

    2003-01-01

    This paper provides directions for web and e-commerce applications security. In particular, access control policies, workflow security, XML security and federated database security issues pertaining to the web and e-commerce applications are discussed.

  7. Development of Content Management System-based Web Applications

    NARCIS (Netherlands)

    Souer, J.

    2012-01-01

    Web engineering is the application of systematic and quantifiable approaches (concepts, methods, techniques, tools) to cost-effective requirements analysis, design, implementation, testing, operation, and maintenance of high quality web applications. Over the past years, Content Management Systems

  8. The experimental nuclear reaction data (EXFOR): Extended computer database and Web retrieval system

    Science.gov (United States)

    Zerkin, V. V.; Pritychenko, B.

    2018-04-01

    The EXchange FORmat (EXFOR) experimental nuclear reaction database and the associated Web interface provide access to the wealth of low- and intermediate-energy nuclear reaction physics data. This resource is based on numerical data sets and bibliographical information of ∼22,000 experiments since the beginning of nuclear science. The principles of the computer database organization, its extended contents and Web application development are described. New capabilities for data set uploads, renormalization, covariance matrices, and inverse reaction calculations are presented. The EXFOR database, updated monthly, provides essential support for nuclear data evaluation, application development, and research activities. It is publicly available at the websites of the International Atomic Energy Agency Nuclear Data Section, http://www-nds.iaea.org/exfor, the U.S. National Nuclear Data Center, http://www.nndc.bnl.gov/exfor, and the mirror sites in China, India and the Russian Federation.

  9. Domainwise Web Page Optimization Based On Clustered Query Sessions Using Hybrid Of Trust And ACO For Effective Information Retrieval

    Directory of Open Access Journals (Sweden)

    Dr. Suruchi Chawla

    2015-08-01

    Full Text Available Abstract In this paper a hybrid of Ant Colony Optimization (ACO) and trust has been used for domainwise web page optimization in clustered query sessions for effective information retrieval. The trust of a web page identifies its degree of relevance in satisfying the specific information need of the user. The trusted web pages, when optimized using pheromone updates in ACO, identify the trusted colonies of web pages relevant to the user's information need in a given domain. Hence in this paper the hybrid of trust and ACO has been used on clustered query sessions to identify a larger number of relevant documents in a given domain in order to better satisfy the information need of the user. An experiment was conducted on a data set of web query sessions to test the effectiveness of the proposed approach in three selected domains (Academics, Entertainment and Sports), and the results confirm the improvement in the precision of search results.
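
    The pheromone-update idea can be illustrated with trust scaling the deposit, so that trusted, relevant pages accumulate pheromone into "colonies". The update rule, constants, and data below are illustrative assumptions, not the paper's formulation:

        EVAPORATION = 0.1

        def update_pheromone(pheromone: float, trust: float, relevance: float) -> float:
            deposit = trust * relevance  # trust scales the pheromone deposit
            return (1 - EVAPORATION) * pheromone + deposit

        pages = {"p1": {"tau": 1.0, "trust": 0.9, "rel": 0.8},
                 "p2": {"tau": 1.0, "trust": 0.2, "rel": 0.8}}
        for _ in range(5):  # a few ACO iterations
            for p in pages.values():
                p["tau"] = update_pheromone(p["tau"], p["trust"], p["rel"])
        print({k: round(v["tau"], 2) for k, v in pages.items()})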

  10. Technical Note: On The Usage and Development of the AWAKE Web Server and Web Applications

    CERN Document Server

    Berger, Dillon Tanner

    2017-01-01

    The purpose of this technical note is to give a brief explanation of the AWAKE Web Server, the current web applications it serves, and how to edit, maintain, and update the source code. The majority of this paper is dedicated to the development of the server and its web applications.

  11. JWIG: Yet Another Framework for Maintainable and Secure Web Applications

    DEFF Research Database (Denmark)

    Møller, Anders; Schwarz, Mathias Romme

    2009-01-01

    Although numerous frameworks for web application programming have been developed in recent years, writing web applications remains a challenging task. Guided by a collection of classical design principles, we propose yet another framework. It is based on a simple but flexible server-oriented arch... services. The resulting framework provides a novel foundation for developing maintainable and secure web applications.

  12. Web Application for Actuarial Calculations for Insurance

    OpenAIRE

    Dobrev, Hristo; Kyurkchiev, Nikolay

    2013-01-01

    Report published in the Proceedings of the National Conference on "Education in the Information Society", Plovdiv, May, 2013. During the last 10 years, a growing interest in the modernization of the vocational education of actuaries and in the content of actuarial study programs, consistent with global traditions and trends, has been indicated. A web application for insurance actuarial calculations is explored. Association for the Development of the Information Society, Institute of Mathematics and...

  13. Security Assessment of Web Based Distributed Applications

    Directory of Open Access Journals (Sweden)

    Catalin BOJA

    2010-01-01

    Full Text Available This paper presents an overview of the evaluation of risks and vulnerabilities in a web based distributed application, emphasizing aspects concerning the process of security assessment with regard to the audit field. In the audit process, an important activity is dedicated to the measurement of the characteristics taken into consideration for evaluation. From this point of view, the quality of the audit process depends on the quality of assessment methods and techniques. By reviewing the fields involved in the research process, the approach aims to reflect the main concerns that address web based distributed applications using exploratory research techniques. The results show that there are many aspects which must be carefully worked with across a distributed system, and they can be revealed by an in-depth introspective analysis of the information flow and internal processes that are part of the system. This paper reveals the limitations of a non-existing unified security risk assessment model that could prevent such risks and vulnerabilities. Based on such standardized models, secure web based distributed applications can be easily audited and many vulnerabilities which can appear due to the lack of access to information can be avoided.

  14. Life Cycle Project Plan Outline: Web Sites and Web-based Applications

    Science.gov (United States)

    This tool is a guideline for planning and checking for 508 compliance on web sites and web based applications. Determine which EIT components are covered or excepted, which 508 standards and requirements apply, and how to implement them.

  15. Creation of web applications by Rich Internet Application Adobe Flex

    OpenAIRE

    PEKA, Karel

    2011-01-01

    This bachelor thesis focuses on explaining the functions and development of interactive applications in Adobe Flex RIA, also compared to similar web technologies such as AJAX, Microsoft Silverlight or Adobe Flash. It explains the difference between "ordinary" sites and Rich Internet Applications (RIA), and shows the difference in a series of demonstration examples processed in Adobe Flash Builder (the environment for building Flex applications). A large-scale application will also be created for comprehensive ...

  16. Photonics Applications and Web Engineering: WILGA 2017

    Science.gov (United States)

    Romaniuk, Ryszard S.

    2017-08-01

    The XLth Wilga Summer 2017 Symposium on Photonics Applications and Web Engineering was held on 28 May-4 June 2017. The Symposium gathered over 350 participants, mainly young researchers active in optics, optoelectronics, photonics, modern optics, mechatronics, applied physics, electronics technologies and applications. Around 300 oral and poster papers were presented in a few main topical tracks, which are traditional for Wilga, including: bio-photonics, optical sensory networks, photonics-electronics-mechatronics co-design and integration, large functional system design and maintenance, Internet of Things, measurement systems for astronomy, high energy physics experiments, and others. The paper is a traditional introduction to the 2017 WILGA Summer Symposium Proceedings, and digests some of the Symposium's chosen key presentations. This year's Symposium was divided into the following topical sessions/conferences: Optics, Optoelectronics and Photonics; Computational and Artificial Intelligence; Biomedical Applications; Astronomical and High Energy Physics Experiments Applications; Material Research and Engineering; and Advanced Photonics and Electronics Applications in Research and Industry.

  17. Towards New Web Application Development Practices

    Directory of Open Access Journals (Sweden)

    Angeliki Poulymenakou

    1998-11-01

    Full Text Available Electronic Commerce over the Internet aims to become a global conveyor belt of business transactions. Web applications of increasing sophistication emerge in almost every business sector, reflecting a variety of technical and technological approaches. In this paper we argue that system developers need to reconsider their professional practices in the context of these new technologies by taking advantage of opportunities like short response cycles and easy diffusion of system results, while recognising the limitations of traditional practice. We discuss a framework of IS development issues for Internet based applications and propose guidelines towards new development practices.

  18. JWIG: Yet Another Framework for Maintainable and Secure Web Applications

    DEFF Research Database (Denmark)

    Møller, Anders; Schwarz, Mathias Romme

    2009-01-01

    Although numerous frameworks for web application programming have been developed in recent years, writing web applications remains a challenging task. Guided by a collection of classical design principles, we propose yet another framework. It is based on a simple but flexible server-oriented architecture that coherently supports general aspects of modern web applications, including dynamic XML construction, session management, data persistence, caching, and authentication, but it also simplifies programming of server-push communication and integration of XHTML-based applications and XML-based web services. The resulting framework provides a novel foundation for developing maintainable and secure web applications.

  19. Design and development of semantic web-based system for computer science domain-specific information retrieval

    Directory of Open Access Journals (Sweden)

    Ritika Bansal

    2016-09-01

    Full Text Available In a semantic web-based system, the concept of ontology is used to search results by the contextual meaning of the input query instead of keyword matching. The research literature suggests a need for a tool that provides an easy interface for complex queries in natural language and can retrieve domain-specific information from an ontology. This research paper proposes the IRSCSD system (Information retrieval system for computer science domain) as a solution. This system offers advanced querying and browsing of structured data, with search results automatically aggregated and rendered directly in a consistent user interface, thus reducing the manual effort of users. The main objective of this research is therefore the design and development of a semantic web-based system integrating ontology for domain-specific retrieval support. The methodology followed is piecemeal research involving the following stages. The first stage designs the framework for the semantic web-based system. The second stage builds the prototype for the framework using the Protégé tool. The third stage deals with converting natural language queries into the SPARQL query language using the Python-based QUEPY framework. The fourth stage fires the converted SPARQL queries at the ontology through Apache's Jena API to fetch the results. Lastly, the prototype has been evaluated to ensure its efficiency and usability. Thus, this research paper throws light on framework development for a semantic web-based system that assists in efficient retrieval of domain-specific information, interpretation of natural language queries into a semantic web language, and creation of a domain-specific ontology and its mapping to related ontologies. The paper also provides approaches and metrics for ontology evaluation, applied to the prototype ontology to study performance in terms of accessibility of the required domain-related information.
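    The pipeline described above (natural-language question, SPARQL conversion, query against an ontology) can be illustrated with a minimal, hypothetical Python sketch using rdflib; the tiny ontology, namespace and property names are invented, and a hand-written query stands in for QUEPY's output.

```python
# A minimal, hypothetical sketch (not the IRSCSD code): answer a
# domain-specific question with a SPARQL query over a small ontology.
# The namespace, classes and properties are invented for illustration.
from rdflib import Graph

TTL = """
@prefix cs: <http://example.org/cs#> .
cs:Python a cs:ProgrammingLanguage ; cs:usedIn cs:MachineLearning .
cs:Prolog a cs:ProgrammingLanguage ; cs:usedIn cs:LogicProgramming .
"""

g = Graph()
g.parse(data=TTL, format="turtle")

# Hand-written SPARQL standing in for the output of a natural-language
# converter such as QUEPY ("which languages are used in machine learning?").
QUERY = """
PREFIX cs: <http://example.org/cs#>
SELECT ?lang
WHERE { ?lang a cs:ProgrammingLanguage ; cs:usedIn cs:MachineLearning . }
"""
for row in g.query(QUERY):
    print(row.lang)  # -> http://example.org/cs#Python
```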

  20. Bifröst: debugging web applications as a whole

    NARCIS (Netherlands)

    K.B. van der Vlist (Kevin)

    2013-01-01

    Even though web application development is supported by professional tooling, debugging support is lacking. If one starts to debug a web application, hardly any tooling support exists. Only the core components like server processes and a web browser are exposed. Developers need to

  1. Extending Symfony 2 web application framework

    CERN Document Server

    Armand, Sébastien

    2014-01-01

    Symfony is a high performance PHP framework for developing MVC web applications. Symfony1 allowed for ease of use but its shortcoming was the difficulty of extending it. However, this difficulty has now been eradicated by the more powerful and extensible Symfony2. Information on more advanced techniques for extending Symfony can be difficult to find, so you need one resource that contains the advanced features in a way you can understand. This tutorial offers solutions to all your Symfony extension problems. You will get to grips with all the extension points that Symfony, Twig, and Doctrine o

  2. Development of Content Management System-based Web Applications

    OpenAIRE

    Souer, J.

    2012-01-01

    Web engineering is the application of systematic and quantifiable approaches (concepts, methods, techniques, tools) to cost-effective requirements analysis, design, implementation, testing, operation, and maintenance of high quality web applications. Over the past years, Content Management Systems (CMS) have emerged as an important foundation for the web engineering process. CMS can be defined as a tool for the creation, editing and management of web information in an integral way. A CMS appe...

  3. Content-based multimedia retrieval: indexing and diversification

    NARCIS (Netherlands)

    van Leuken, R.H.

    2009-01-01

    The demand for efficient systems that facilitate searching in multimedia databases and collections is vastly increasing. Application domains include criminology, musicology, trademark registration, medicine and image or video retrieval on the web. This thesis discusses content-based retrieval

  4. SIRW: A web server for the Simple Indexing and Retrieval System that combines sequence motif searches with keyword searches.

    Science.gov (United States)

    Ramu, Chenna

    2003-07-01

    SIRW (http://sirw.embl.de/) is a World Wide Web interface to the Simple Indexing and Retrieval System (SIR) that is capable of parsing and indexing various flat file databases. In addition it provides a framework for doing sequence analysis (e.g. motif pattern searches) for selected biological sequences through keyword search. SIRW is an ideal tool for the bioinformatics community for searching as well as analyzing biological sequences of interest.
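    As a rough illustration of the kind of combined search SIRW offers (keyword filtering plus motif pattern matching over sequences), the following hypothetical Python sketch uses a regular expression as the motif; the records and the pattern are invented, and this is not the SIR/SIRW codebase.

```python
# Illustrative sketch of combining keyword search with sequence motif
# matching, in the spirit of SIRW; the records and the motif pattern are
# invented, and this is not the SIR/SIRW codebase.
import re

RECORDS = [
    {"id": "P1", "keywords": "kinase human", "seq": "MKTAYIAKQRQISFVKSHFSRQ"},
    {"id": "P2", "keywords": "phosphatase yeast", "seq": "MSDNGPQNQRNAPRITFGGP"},
]

def search(keyword, motif_regex):
    """Return ids of keyword-matching records whose sequence contains the motif."""
    motif = re.compile(motif_regex)
    return [r["id"] for r in RECORDS
            if keyword in r["keywords"] and motif.search(r["seq"])]

print(search("kinase", r"K.R"))  # -> ['P1'] (matches 'KQR')
```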

  5. Retrieval.

    OpenAIRE

    Clay, Allyson

    1990-01-01

    Allyson Clay’s "Traces of a City in the Spaces Between Some People" is a series of twenty diptychs contrasting fabricated faux finishing with expressionist painting and text. The fabricated paint applications evoke city surfaces like concrete and granite; they also evoke modernist painting.  Unlike modernist painting, however, the faux surfaces are decorative and mechanically painted. The choice to have the surfaces fabricated serves to disrupt the egoism of modern abstraction and the im...

  6. LUNARINFO: A Data Archiving and Retrieving System for the Circumlunar Explorer Based on XML/Web Services

    Institute of Scientific and Technical Information of China (English)

    ZUO Wei; LI Chunlai; OUYANG Ziyuan; LIU Jianjun; XU Tao

    2004-01-01

    It is essential to build a modern information management system to store and manage data of our circumlunar explorer in order to realize the scientific objectives. It is difficult for an information system based on traditional distributed technology to communicate information and work together among heterogeneous systems in order to meet the new requirements of Internet development. XML and Web Services, because of their open standards and self-containing properties, have changed the mode of information organization and data management. They can now provide a good solution for building an open, extendable, and compatible information management system, and facilitate interchanging and transferring of data among heterogeneous systems. On the basis of a three-tiered browser/server architecture and the Oracle 9i Database as an information storage platform, we have designed and implemented a data archiving and retrieval system for the circumlunar explorer, LUNARINFO. We have also successfully realized the integration between LUNARINFO and the cosmic dust database system. LUNARINFO consists of five function modules for data management, information publishing, system management, data retrieval, and interface integration. Based on XML and Web Services, it not only is an information database system for archiving, long-term storing, retrieving and publication of lunar reference data related to the circumlunar explorer, but also provides data web services which can be easily developed by various expert groups and connected to the common information system to realize data resource integration.

  7. Application of object modeling technique to medical image retrieval system

    International Nuclear Information System (INIS)

    Teshima, Fumiaki; Abe, Takeshi

    1993-01-01

    This report describes the results of discussions on the object-oriented analysis methodology, which is one of the object-oriented paradigms. In particular, we considered application of the object modeling technique (OMT) to the analysis of a medical image retrieval system. The object-oriented methodology places emphasis on the construction of an abstract model from real-world entities. The effectiveness of and future improvements to OMT are discussed from the standpoint of the system's expandability. These discussions have elucidated that the methodology is sufficiently well-organized and practical to be applied to commercial products, provided that it is applied to the appropriate problem domain. (author)

  8. Web Application-Based Library Information System

    Directory of Open Access Journals (Sweden)

    Yudie Irawan

    2014-01-01

    Full Text Available Digital library systems contribute to the development of digital resources that can be accessed via the Internet. Library management systems contribute to the development of automated processing of membership data, circulation and cataloging. This thesis develops a new concept of digital library system and library management system by integrating the two system architectures. The integration architecture is implemented by inserting library management system components into the digital library system architecture. Web application technology is required for these components so that they can be integrated with the digital library system components. The new system has the advantage that the borrowing, membership and cataloging applications become sharable over the internet, so the applications can be used together. Catalog information can be exchanged between libraries without losing the digital library function of sharing digital resources uploaded by each librarian. Keywords: Digital library system; Library management system; Web application

  9. SAMP: Application Messaging for Desktop and Web Applications

    Science.gov (United States)

    Taylor, M. B.; Boch, T.; Fay, J.; Fitzpatrick, M.; Paioro, L.

    2012-09-01

    SAMP, the Simple Application Messaging Protocol, is a technology which allows tools to communicate. It is deployed in a number of desktop astronomy applications including ds9, Aladin, TOPCAT, World Wide Telescope and numerous others, and makes it straightforward for a user to treat a selection of these tools as a loosely-integrated suite, combining the most powerful features of each. It has been widely used within Virtual Observatory contexts, but is equally suitable for non-VO use. Enabling SAMP communication from web-based content has long been desirable. An obvious use case is arranging for a click on a web page link to deliver an image, table or spectrum to a desktop viewer, but more sophisticated two-way interaction with rich internet applications would also be possible. Use from the web however presents some problems related to browser sandboxing. We explain how the SAMP Web Profile, introduced in version 1.3 of the SAMP protocol, addresses these issues, and discuss the resulting security implications.
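    On the desktop side, sending a SAMP message looks roughly like the following Python sketch using astropy's SAMP client; it assumes a SAMP hub is already running (for example one started by TOPCAT), and the table URL is a placeholder.

```python
# A minimal sketch of the desktop side of SAMP with astropy's client.
# It assumes a SAMP hub is already running (e.g. one started by TOPCAT);
# the table URL is a placeholder.
from astropy.samp import SAMPIntegratedClient

client = SAMPIntegratedClient(name="demo-sender")
client.connect()  # register with the running hub
try:
    client.notify_all({
        "samp.mtype": "table.load.votable",  # standard SAMP message type
        "samp.params": {"url": "http://example.org/catalog.vot",
                        "name": "demo catalog"},
    })
finally:
    client.disconnect()
```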

  10. APFEL Web a web-based application for the graphical visualization of parton distribution functions

    CERN Document Server

    Carrazza, Stefano; Palazzo, Daniele; Rojo, Juan

    2015-01-01

    We present APFEL Web, a web-based application designed to provide a flexible user-friendly tool for the graphical visualization of parton distribution functions (PDFs). In this note we describe the technical design of the APFEL Web application, motivating the choices and the framework used for the development of this project. We document the basic usage of APFEL Web and show how it can be used to provide useful input for a variety of collider phenomenological studies. Finally we provide some examples showing the output generated by the application.

  11. APFEL Web: a web-based application for the graphical visualization of parton distribution functions

    International Nuclear Information System (INIS)

    Carrazza, Stefano; Ferrara, Alfio; Palazzo, Daniele; Rojo, Juan

    2015-01-01

    We present APFEL Web, a Web-based application designed to provide a flexible user-friendly tool for the graphical visualization of parton distribution functions. In this note we describe the technical design of the APFEL Web application, motivating the choices and the framework used for the development of this project. We document the basic usage of APFEL Web and show how it can be used to provide useful input for a variety of collider phenomenological studies. Finally we provide some examples showing the output generated by the application. (note)

  12. General Aspects of some Causes of Web Application Vulnerabilities

    Directory of Open Access Journals (Sweden)

    Mironela Pîrnău

    2015-10-01

    Full Text Available Because web applications are complex software systems in constant evolution, they become real targets for hackers as they provide direct access to corporate or personal data. Web application security is supposed to represent an essential priority for organizations in order to protect sensitive customer data, or those of the employees of a company. Worldwide, there are many organizations that report the most common types of attacks on Web applications and methods for their prevention. While the paper is an overview, it puts forward several typical examples of web application vulnerabilities that are due to programming errors; these may be used by attackers to take unauthorized control over computers.

  13. The application of similar image retrieval in electronic commerce.

    Science.gov (United States)

    Hu, YuPing; Yin, Hua; Han, Dezhi; Yu, Fei

    2014-01-01

    Traditional online shopping platform (OSP), which searches product information by keywords, faces three problems: indirect search mode, large search space, and inaccuracy in search results. For solving these problems, we discuss and research the application of similar image retrieval in electronic commerce. Aiming at improving the network customers' experience and providing merchants with the accuracy of advertising, we design a reasonable and extensive electronic commerce application system, which includes three subsystems: image search display subsystem, image search subsystem, and product information collecting subsystem. This system can provide seamless connection between information platform and OSP, on which consumers can automatically and directly search similar images according to the pictures from information platform. At the same time, it can be used to provide accuracy of internet marketing for enterprises. The experiment shows the efficiency of constructing the system.

  14. The Application of Similar Image Retrieval in Electronic Commerce

    Directory of Open Access Journals (Sweden)

    YuPing Hu

    2014-01-01

    Full Text Available Traditional online shopping platform (OSP, which searches product information by keywords, faces three problems: indirect search mode, large search space, and inaccuracy in search results. For solving these problems, we discuss and research the application of similar image retrieval in electronic commerce. Aiming at improving the network customers’ experience and providing merchants with the accuracy of advertising, we design a reasonable and extensive electronic commerce application system, which includes three subsystems: image search display subsystem, image search subsystem, and product information collecting subsystem. This system can provide seamless connection between information platform and OSP, on which consumers can automatically and directly search similar images according to the pictures from information platform. At the same time, it can be used to provide accuracy of internet marketing for enterprises. The experiment shows the efficiency of constructing the system.

  15. The Application of Similar Image Retrieval in Electronic Commerce

    Science.gov (United States)

    Hu, YuPing; Yin, Hua; Han, Dezhi; Yu, Fei

    2014-01-01

    Traditional online shopping platform (OSP), which searches product information by keywords, faces three problems: indirect search mode, large search space, and inaccuracy in search results. For solving these problems, we discuss and research the application of similar image retrieval in electronic commerce. Aiming at improving the network customers' experience and providing merchants with the accuracy of advertising, we design a reasonable and extensive electronic commerce application system, which includes three subsystems: image search display subsystem, image search subsystem, and product information collecting subsystem. This system can provide seamless connection between information platform and OSP, on which consumers can automatically and directly search similar images according to the pictures from information platform. At the same time, it can be used to provide accuracy of internet marketing for enterprises. The experiment shows the efficiency of constructing the system. PMID:24883411
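    The underlying idea of the image search subsystem described in the records above, retrieving catalogue images similar to a query picture, can be sketched with a toy content-based comparison; real systems use far richer features than the grey-level histograms assumed here.

```python
# A toy sketch of content-based similar-image retrieval: catalogue images
# are ranked by grey-level histogram distance to the query. Real systems
# use far richer features; this only illustrates the principle.
import numpy as np

def histogram(img, bins=8):
    """Normalised grey-level histogram of an image given as a 2-D array."""
    h, _ = np.histogram(img, bins=bins, range=(0, 256))
    return h / h.sum()

def most_similar(query, catalogue):
    """Index of the catalogue image with the smallest L1 histogram distance."""
    qh = histogram(query)
    dists = [np.abs(qh - histogram(img)).sum() for img in catalogue]
    return int(np.argmin(dists))

rng = np.random.default_rng(0)
catalogue = [rng.integers(0, 256, (32, 32)) for _ in range(5)]
print(most_similar(catalogue[2], catalogue))  # -> 2 (the query itself)
```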

  16. Nuclear data retrieval for PC applications, PCNuDat

    International Nuclear Information System (INIS)

    Kinsey, R.R.

    1996-01-01

    The PCNuDat program for IBM-PC compatibles is similar to the NuDat program available through the NNDC Online Nuclear Data Service. Both provide a user with access to nuclear data in a convenient, menu-driven system. These data are useful in both basic and applied research. The nuclear data base used by NuDat is extracted from several data bases maintained at the National Nuclear Data Center (NNDC). The program is an extended DOS program which uses 32-bit addressing. It can run in a DOS window on all current Windows operating systems. The program and its data base are currently available on CD-ROM or electronically over the Internet. Electronic access can be made through the NNDC's Web home page. The files may also be FTP'd from the public area under the [pc prog] directory on bnlnd2.dne.bnl.gov. The CD-ROM version also contains the Nuclear Science References (NSR) data base and its retrieval program, Papyrus NSR

  17. WebViz: A web browser based application for collaborative analysis of 3D data

    Science.gov (United States)

    Ruegg, C. S.

    2011-12-01

    In the age of high-speed Internet, where people can interact instantly, scientific tools have lacked technology that can incorporate this concept of communication using the web. To solve this issue, a web application for geological studies has been created, tentatively titled WebViz. This web application utilizes tools provided by the Google Web Toolkit to create an AJAX web application capable of features found in non-web-based software. Using these tools, a web application can be created that acts as a piece of software accessible from anywhere in the globe with a reasonably speedy Internet connection. An application of this technology can be seen with data regarding the recent tsunami from the major Japan earthquakes. After constructing the appropriate data to fit the computer rendering software called HVR, WebViz can request images of the tsunami data and display them to anyone who has access to the application. This convenience alone makes WebViz a viable solution, but the option to interact with this data with others around the world causes WebViz to be taken seriously as a computational tool. WebViz can also be used on any JavaScript-enabled browser, such as those found on modern tablets and smart phones, over a fast wireless connection. Because WebViz is currently built using the Google Web Toolkit, the portability of the application is in its most efficient form. Though many developers have been involved with the project, each person has contributed to increasing the usability and speed of the application. In the project's most recent form, a dramatic speed increase has been designed as well as a more efficient user interface. The speed increase has been informally noticed in recent uses of the application in China and Australia, with the hosting server being located at the University of Minnesota. The user interface has been improved to not only look better but also function better. Major functions of the application include rotating the 3D object using buttons

  18. A new measurement of workload in Web application reliability assessment

    Directory of Open Access Journals (Sweden)

    CUI Xia

    2015-02-01

    Full Text Available Web applications have become popular in various fields of social life, and it is increasingly important to study their reliability. This paper first defines Web application failure and then Web application reliability. By analyzing data in IIS server logs and selecting the corresponding usage and information-delivery failure data, the paper studies the feasibility of Web application reliability assessment from the perspective of the Web software system, based on IIS server logs. Because the usage of a Web site often shows certain regularities, a new measurement of workload in Web application reliability assessment is proposed. In this method, units are removed by a weighted-average technique, and the weights are assessed by setting an objective function and optimizing it. Finally, an experiment was conducted for validation. The experimental results show that the assessment of Web application reliability based on the new workload measure is better.
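    A hedged sketch of the workload measure as described above: several log-derived usage figures with different units are made dimensionless by scaling against their averages and then combined with weights. The figures and weights below are invented; the paper fits the weights by optimising an objective function.

```python
# A hedged sketch of the workload idea as described above: log-derived
# usage measures with different units are made dimensionless and combined
# with weights. The measures and weights are invented; the paper fits the
# weights by optimising an objective function.
import numpy as np

# columns: requests, bytes sent, distinct sessions (one row per time window)
usage = np.array([[1200, 5.1e6,  80],
                  [ 900, 3.9e6,  65],
                  [1600, 7.4e6, 110]], dtype=float)

dimensionless = usage / usage.mean(axis=0)  # remove units by average-scaling

weights = np.array([0.5, 0.3, 0.2])  # placeholder; would be fitted, not chosen
workload = dimensionless @ weights   # one workload value per time window
print(workload)
```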

  19. IBM WebSphere Application Server 8.0 Administration Guide

    CERN Document Server

    Robinson, Steve

    2011-01-01

    IBM WebSphere Application Server 8.0 Administration Guide is a highly practical, example-driven tutorial. You will be introduced to WebSphere Application Server 8.0, and guided through configuration, deployment, and tuning for optimum performance. If you are an administrator who wants to get up and running with IBM WebSphere Application Server 8.0, then this book is not to be missed. Experience with WebSphere and Java would be an advantage, but is not essential.

  20. A web services choreography scenario for interoperating bioinformatics applications

    Directory of Open Access Journals (Sweden)

    Cheung David W

    2004-03-01

    Full Text Available Abstract Background Very often genome-wide data analysis requires the interoperation of multiple databases and analytic tools. A large number of genome databases and bioinformatics applications are available through the web, but it is difficult to automate interoperation because: (1) the platforms on which the applications run are heterogeneous, (2) their web interface is not machine-friendly, (3) they use a non-standard format for data input and output, (4) they do not exploit standards to define application interface and message exchange, and (5) existing protocols for remote messaging are often not firewall-friendly. To overcome these issues, web services have emerged as a standard XML-based model for message exchange between heterogeneous applications. Web services engines have been developed to manage the configuration and execution of a web services workflow. Results To demonstrate the benefit of using web services over traditional web interfaces, we compare the two implementations of HAPI, a gene expression analysis utility developed by the University of California San Diego (UCSD) that allows visual characterization of groups or clusters of genes based on the biomedical literature. This utility takes a set of microarray spot IDs as input and outputs a hierarchy of MeSH Keywords that correlates to the input and is grouped by Medical Subject Heading (MeSH) category. While the HTML output is easy for humans to visualize, it is difficult for computer applications to interpret semantically. To facilitate the capability of machine processing, we have created a workflow of three web services that replicates the HAPI functionality. These web services use document-style messages, which means that messages are encoded in an XML-based format. We compared three approaches to the implementation of an XML-based workflow: a hard coded Java application, Collaxa BPEL Server and Taverna Workbench. The Java program functions as a web services engine and interoperates

  1. Just-in-time Database-Driven Web Applications

    Science.gov (United States)

    2003-01-01

    "Just-in-time" database-driven Web applications are inexpensive, quickly-developed software that can be put to many uses within a health care organization. Database-driven Web applications garnered 73873 hits on our system-wide intranet in 2002. They enabled collaboration and communication via user-friendly Web browser-based interfaces for both mission-critical and patient-care-critical functions. Nineteen database-driven Web applications were developed. The application categories that comprised 80% of the hits were results reporting (27%), graduate medical education (26%), research (20%), and bed availability (8%). The mean number of hits per application was 3888 (SD = 5598; range, 14-19879). A model is described for just-in-time database-driven Web application development and an example given with a popular HTML editor and database program. PMID:14517109

  2. Invariant-Based Automatic Testing of Modern Web Applications

    NARCIS (Netherlands)

    Mesbah, A.; Van Deursen, A.; Roest, D.

    2011-01-01

    AJAX-based Web 2.0 applications rely on stateful asynchronous client/server communication, and client-side run-time manipulation of the DOM tree. This not only makes them fundamentally different from traditional web applications, but also more error-prone and harder to test. We propose a method for

  3. Programming Collective Intelligence Building Smart Web 2.0 Applications

    CERN Document Server

    Segaran, Toby

    2008-01-01

    This fascinating book demonstrates how you can build web applications to mine the enormous amount of data created by people on the Internet. With the sophisticated algorithms in this book, you can write smart programs to access interesting datasets from other web sites, collect data from users of your own applications, and analyze and understand the data once you've found it.

  4. Advances in Electronic Commerce, Web Application and Communication v.1

    CERN Document Server

    Lin, Sally; Second International Conference on Electronic Commerce, Web Application and Communication (ECWAC 2012)

    2012-01-01

    ECWAC2012 is an integrated conference devoted to Electronic Commerce, Web Application and Communication. In these proceedings you can find the carefully reviewed scientific outcome of the second International Conference on Electronic Commerce, Web Application and Communication (ECWAC 2012) held on March 17-18, 2012 in Wuhan, China, bringing together researchers from all around the world in the field.

  5. Crawl-Based Analysis of Web Applications : Prospects and Challenges

    NARCIS (Netherlands)

    Van Deursen, A.; Mesbah, A.; Nederlof, A.

    2014-01-01

    In this paper we review five years of research in the field of automated crawling and testing of web applications. We describe the open source Crawljax tool, and the various extensions that have been proposed in order to address such issues as cross-browser compatibility testing, web application

  6. Engineering semantic-based interactive multi-device web applications

    NARCIS (Netherlands)

    Bellekens, P.A.E.; Sluijs, van der K.A.M.; Aroyo, L.M.; Houben, G.J.P.M.; Baresi, L.; Fraternali, P.; Houben, G.J.

    2007-01-01

    To build high-quality personalized Web applications developers have to deal with a number of complex problems. We look at the growing class of personalized Web Applications that share three characteristic challenges. Firstly, the semantic problem of how to enable content reuse and integration.

  7. Advances in Electronic Commerce, Web Application and Communication v.2

    CERN Document Server

    Lin, Sally; Second International Conference on Electronic Commerce, Web Application and Communication (ECWAC 2012)

    2012-01-01

    ECWAC2012 is an integrated conference devoted to Electronic Commerce, Web Application and Communication. In these proceedings you can find the carefully reviewed scientific outcome of the second International Conference on Electronic Commerce, Web Application and Communication (ECWAC 2012) held on March 17-18, 2012 in Wuhan, China, bringing together researchers from all around the world in the field.

  8. SproutCore web application development

    CERN Document Server

    Keating, Tyler

    2013-01-01

    Written as a practical, step-by-step tutorial, Creating HTML5 Apps with SproutCore is full of engaging examples to help you learn in a practical context.This book is for any person looking to write software for the Web or already writing software for the Web. Whether your background is in web development or in software development, Creating HTML5 Apps with SproutCore will help you expand your skills so that you will be ready to apply the software development principles in the web development space.

  9. New nuclear data service at CNEA: retrieval of the update libraries from a local Web-Server

    International Nuclear Information System (INIS)

    Suarez, Patricia M.; Pepe, Maria E.; Sbaffoni, Maria M.

    2000-01-01

    A new On-line Nuclear Data Service was implemented at the National Atomic Energy Commission (CNEA) Web site. The information usually issued by the Nuclear Data Section of the IAEA (NDS-IAEA) on CD-ROM, as well as complementary libraries periodically downloaded from a mirror server of the NDS-IAEA Service located at IPEN, Brazil, is available on the new CNEA Web page. On the site, users can find numerical data on neutron, charged-particle, and photonuclear reactions, nuclear structure, and decay data, with related bibliographic information. This data server is permanently maintained and updated by CNEA staff members. This crew also offers assistance on the use and retrieval of nuclear data to local users. (author)

  10. SWORS: a system for the efficient retrieval of relevant spatial web objects

    DEFF Research Database (Denmark)

    Cao, Xin; Cong, Gao; Jensen, Christian S.

    2012-01-01

    Spatial web objects that possess both a geographical location and a textual description are gaining in prevalence. This gives prominence to spatial keyword queries that exploit both location and textual arguments. Such queries are used in many web services such as yellow pages and maps services....

  11. A contribution to semantic indexing and retrieval based on FCA - An application to song datasets

    OpenAIRE

    Codocedo , Victor; Lykourentzou , Ioanna; Napoli , Amedeo

    2012-01-01

    Semantic indexing and retrieval is an important research area, as the available amount of information on the Web is growing more and more. In this paper, we introduce an original approach to semantic indexing and retrieval based on Formal Concept Analysis. The concept lattice is used as a semantic index and we propose an original algorithm for traversing the lattice and answering user queries. This framework has been used and evaluated on a song dataset.
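    To make the Formal Concept Analysis machinery concrete, here is a tiny hypothetical Python sketch of the two derivation operators over an invented song/tag context; a formal concept is a pair (extent, intent) closed under both operators, and the lattice of all such pairs is what the paper uses as an index.

```python
# A tiny, hypothetical illustration of the FCA machinery behind the index:
# the two derivation operators over an invented song/tag context. A formal
# concept is a pair (extent, intent) closed under both operators.
CONTEXT = {                     # object -> set of attributes
    "song1": {"rock", "80s"},
    "song2": {"rock", "90s"},
    "song3": {"jazz", "80s"},
}

def common_attributes(objects):
    """A': attributes shared by all the given objects."""
    sets = [CONTEXT[o] for o in objects]
    return set.intersection(*sets) if sets else set()

def matching_objects(attributes):
    """B': objects having all the given attributes."""
    return {o for o, attrs in CONTEXT.items() if attributes <= attrs}

intent = common_attributes({"song1", "song2"})  # {'rock'}
extent = matching_objects(intent)               # {'song1', 'song2'}
print(extent, intent)  # one formal concept of the lattice
```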

  12. web2py Application Development Cookbook

    CERN Document Server

    Mulone, Pablo Martin; Gordon, Richard

    2012-01-01

    This is a cookbook and you may read the chapters in any order; the recipes need not be read sequentially. There is a good number of code examples and relevant screenshots to ease learning pains. The target audience is Python developers with basic knowledge of web2py who want to gain further knowledge of web2py.

  13. AngularJS web application development

    CERN Document Server

    Darwin, Peter Bacon

    2013-01-01

    The book is a step-by-step guide showing readers how to build a complete web app with AngularJS. It is aimed at JavaScript developers who want to learn AngularJS for developing web apps. Knowledge of JavaScript and HTML is expected. No knowledge of AngularJS is required.

  14. The Semantics of Web Services: An Examination in GIScience Applications

    Directory of Open Access Journals (Sweden)

    Xuan Shi

    2013-09-01

    Full Text Available Web services are a technological solution for software interoperability that supports the seamless integration of diverse applications. In the vision of the web service architecture, web services are described by the Web Service Description Language (WSDL), discovered through Universal Description, Discovery and Integration (UDDI) and communicate by the Simple Object Access Protocol (SOAP). This vision has never been fully accomplished. Although WSDL has been criticized for giving only a syntactic, not semantic, definition of web services, prior initiatives in semantic web services did not establish a correct methodology to resolve the problem. This paper examines the distinction and relationship between the syntactic and semantic definitions of web services that characterize different purposes in service computation. Further, this paper proposes that the semantics of a web service are neutral and independent of the service interface definition, data types and platform. Such a conclusion can be a universal law in software engineering and service computing. Several use cases in GIScience applications are examined in this paper, while the formalization of geospatial services still needs to be constructed by the GIScience community towards a comprehensive ontology of the conceptual definitions and relationships for geospatial computation. Advancements in semantic web services research will happen in domain science applications.

  15. A Web Based Financial and Accounting Software Application

    Directory of Open Access Journals (Sweden)

    Doru E. TILIUTE

    2010-01-01

    Full Text Available Client-server applications are becoming more attractive in comparison with their desktop-type counterparts due to some incontestable advantages. Among client-server applications, some use the Web environment, providing full access from anywhere and at any time to all application features. The present work presents the first results in the development of a web-based financial and accounting application using open-source technologies and programming languages (Apache, MySQL, PHP and JavaScript).

  16. Asset Identification for Security Risk Assessment in Web Applications

    OpenAIRE

    Hisham M. Haddad; Brunil D. Romero

    2009-01-01

    As software applications become more complex, they require more security, allowing them to reach an appropriate level of quality to manage information and therefore achieve business objectives. Web applications represent one segment of the software industry where security risk assessment is essential. Web engineering must address new challenges to provide new techniques and tools that guarantee high-quality application development. This work focuses on asset identification, the initial step in sec...

  17. Web Application Design Using Server-Side JavaScript

    Energy Technology Data Exchange (ETDEWEB)

    Hampton, J.; Simons, R.

    1999-02-01

    This document describes the application design philosophy for the Comprehensive Nuclear Test Ban Treaty Research & Development Web Site. This design incorporates object-oriented techniques to produce a flexible and maintainable system of applications that support the web site. These techniques will be discussed at length along with the issues they address. The overall structure of the applications and their relationships with one another will also be described. The current problems and future design changes will be discussed as well.

  18. ANALYSIS OF WEB MINING APPLICATIONS AND BENEFICIAL AREAS

    Directory of Open Access Journals (Sweden)

    Khaleel Ahmad

    2011-10-01

    Full Text Available The main purpose of this paper is to study the process of Web mining techniques, their features, applications (e-commerce and e-business) and beneficial areas. Web mining has become more popular and is widely used in various application areas (such as business intelligence systems, e-commerce and e-business). E-commerce and e-business results are improved by the application of mining techniques such as data mining and text mining; among all the mining techniques, web mining is the better fit.

  19. The Web Application Hacker's Handbook Finding and Exploiting Security Flaws

    CERN Document Server

    Stuttard, Dafydd

    2011-01-01

    The highly successful security book returns with a new edition, completely updated Web applications are the front door to most organizations, exposing them to attacks that may disclose personal information, execute fraudulent transactions, or compromise ordinary users. This practical book has been completely updated and revised to discuss the latest step-by-step techniques for attacking and defending the range of ever-evolving web applications. You'll explore the various new technologies employed in web applications that have appeared since the first edition and review the new attack technique

  20. Improving the Efficiency and Effectiveness of Lecturer Services through the Use of a Web Application

    Directory of Open Access Journals (Sweden)

    Reina Reina

    2013-06-01

    Full Text Available This study aims to determine the benefits of a web application in improving the efficiency and effectiveness of services to lecturers. The research method consists of a literature study and the analysis of data collected through observation. After implementing the web application, an observation was conducted and the results compared with data gathered prior to the implementation. The evaluation results show that implementing the web application improves efficiency and effectiveness in the use of time and resources when providing information-access services to lecturers.

  1. Web application development with Laravel PHP Framework version 4

    OpenAIRE

    Armel, Jamal

    2014-01-01

    The purpose of this thesis work was to learn a new PHP framework and use it efficiently to build an eCommerce web application for a small start-up freelancing company that lets potential customers browse products by category and place orders securely. To fulfil this set of requirements, a system consisting of a web application with a backend was designed and implemented using built-in Laravel features such as Composer, Eloquent, Blade and Artisan, and a WAMP stack. The web application wa...

  2. A Sample WebQuest Applicable in Teaching Topological Concepts

    Science.gov (United States)

    Yildiz, Sevda Goktepe; Korpeoglu, Seda Goktepe

    2016-01-01

    In recent years, WebQuests have received a great deal of attention and have been used effectively in teaching-learning process in various courses. In this study, a WebQuest that can be applicable in teaching topological concepts for undergraduate level students was prepared. A number of topological concepts, such as countability, infinity, and…

  3. Ontology-Based Information Visualization: Toward Semantic Web Applications

    NARCIS (Netherlands)

    Fluit, Christiaan; Sabou, Marta; Harmelen, Frank van

    2006-01-01

    The Semantic Web is an extension of the current World Wide Web, based on the idea of exchanging information with explicit, formal, and machine-accessible descriptions of meaning. Providing information with such semantics will enable the construction of applications that have an increased awareness

  4. Ajax and Firefox: New Web Applications and Browsers

    Science.gov (United States)

    Godwin-Jones, Bob

    2005-01-01

    Alternative browsers are gaining significant market share, and both Apple and Microsoft are releasing OS upgrades which portend some interesting changes in Web development. Of particular interest for language learning professionals may be new developments in the area of Web browser based applications, particularly using an approach dubbed "Ajax."…

  5. Exploring the concept of web site customization : applications and antecedents

    NARCIS (Netherlands)

    Teerling, M.L.; Huizingh, Eelko K.R.E.

    2006-01-01

    While mass customization is the tailoring of products and services to the needs and wants of individual customers, web site customization is the tailoring of web sites to individual customers’ preferences. Based on a review of site customization applications, the authors propose a model with four

  6. Publication and Retrieval of Computational Chemical-Physical Data Via the Semantic Web. Final Technical Report

    Energy Technology Data Exchange (ETDEWEB)

    Ostlund, Neil [Chemical Semantics, Inc., Gainesville, FL (United States)

    2017-07-20

    This research showed the feasibility of applying the concepts of the Semantic Web to Computational Chemistry. We have created the first web portal (www.chemsem.com) that allows data created in quantum chemistry and other such chemistry calculations to be placed on the web in a way that makes the data accessible to scientists in a semantic form never before possible. The semantic web nature of the portal allows data to be searched, found, and used, as an advance over the usual approach of a relational database. The semantic data on our portal has the nature of a Giant Global Graph (GGG) that can be easily merged with related data and searched globally via the SPARQL Protocol and RDF Query Language (SPARQL), which makes global searches for data easier than with traditional methods. Our Semantic Web Portal requires that the data be understood by a computer and hence defined by an ontology (vocabulary). This ontology is used by the computer in understanding the data. We have created such an ontology for computational chemistry (purl.org/gc) that encapsulates a broad knowledge of the field of computational chemistry. We refer to this ontology as the Gainesville Core. While it is perhaps the first ontology for computational chemistry and is used by our portal, it is only the start of what must be a long multi-partner effort to define computational chemistry. In conjunction with the above efforts we have defined a new potential file standard (Common Standard for eXchange, CSX) for computational chemistry data. This CSX file is the precursor of data in the Resource Description Framework (RDF) form that the semantic web requires. Our portal translates CSX files (as well as other computational chemistry data files) into RDF files that are part of the graph database that the semantic web employs. We propose a CSX file as a convenient way to encapsulate computational chemistry data.
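    Retrieving such data through a SPARQL endpoint might look like the following minimal Python sketch using the SPARQLWrapper library; the endpoint URL and the property name are placeholders, not the actual chemsem.com service or Gainesville Core terms.

```python
# A minimal sketch of querying a SPARQL endpoint of the kind such a portal
# exposes, using the SPARQLWrapper library. The endpoint URL and property
# name are placeholders, not the actual chemsem.com service or Gainesville
# Core terms.
from SPARQLWrapper import SPARQLWrapper, JSON

sparql = SPARQLWrapper("https://example.org/sparql")
sparql.setQuery("""
    SELECT ?calc ?energy
    WHERE { ?calc <http://example.org/gc#scfEnergy> ?energy . }
    LIMIT 5
""")
sparql.setReturnFormat(JSON)

results = sparql.query().convert()
for b in results["results"]["bindings"]:
    print(b["calc"]["value"], b["energy"]["value"])
```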

  7. Building rich and interactive web applications with CoverageJSON

    OpenAIRE

    Blower, Jon; Riechert, Maik; Griffiths, Guy; Kumar, Mridul; Williams, Riley

    2017-01-01

    Web browsers are becoming increasingly capable as visualisation and analysis platforms. Lots of tools and libraries are built around images and "simple features" (GeoJSON, KML, OpenLayers, Leaflet ...), but formats and tools for scientific / meteorological data are not always web-friendly: they are complex, binary and desktop-oriented, and come in a large variety that is usually community-specific. As a result, lots of people are building ad-hoc solutions for web applications. We want to bring scientific data within the reach of more Web and mobile app deve...
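    Reading a CoverageJSON-like document in a client is plain JSON handling, as in this simplified Python sketch; the fragment below only loosely follows the format's Coverage/domain/ranges layout and is not a complete, validated example.

```python
# A simplified sketch of reading a CoverageJSON-like document; the fragment
# only loosely follows the format's Coverage/domain/ranges layout and is not
# a complete, validated example.
import json

DOC = json.loads("""
{
  "type": "Coverage",
  "domain": { "axes": { "t": { "values": ["2017-01-01T00:00Z", "2017-01-01T06:00Z"] } } },
  "ranges": { "temperature": { "values": [280.1, 281.4] } }
}
""")

times = DOC["domain"]["axes"]["t"]["values"]
temps = DOC["ranges"]["temperature"]["values"]
for t, v in zip(times, temps):
    print(t, v, "K")
```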

  8. WEB STRUCTURE MINING

    Directory of Open Access Journals (Sweden)

    CLAUDIA ELENA DINUCĂ

    2011-01-01

    Full Text Available The World Wide Web has become one of the most valuable resources for information retrieval and knowledge discovery due to the permanently increasing amount of data available online. Given the web's dimension, users easily get lost in its rich hyperstructure. Applying data mining methods is the right solution for knowledge discovery on the Web. The knowledge extracted from the Web can be used to improve the performance of Web information retrieval, question answering and Web-based data warehousing. In this paper, I provide an introduction to the categories of Web mining and focus on one of them: Web structure mining. Web structure mining, one of the three categories of web mining, is a tool used to identify the relationships between Web pages linked by information or direct link connections. It offers information about how different pages are linked together to form this huge web. Web structure mining finds hidden basic structures and uses hyperlinks for further web applications such as web search.
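    The classic instance of web structure mining is PageRank, which derives page importance from the hyperlink graph alone; the following short Python sketch runs it over an invented toy site using the networkx library.

```python
# PageRank over a toy hyperlink graph: the classic example of deriving page
# importance from link structure alone. The pages and links are invented.
import networkx as nx

g = nx.DiGraph([("home", "about"), ("home", "blog"),
                ("about", "home"), ("blog", "home"), ("blog", "post")])
ranks = nx.pagerank(g)  # power-iteration PageRank
for page, score in sorted(ranks.items(), key=lambda kv: -kv[1]):
    print(f"{page}: {score:.3f}")
```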

  9. Effects of Diacritics on Web Search Engines’ Performance for Retrieval of Yoruba Documents

    Directory of Open Access Journals (Sweden)

    Toluwase Victor Asubiaro

    2014-06-01

    Full Text Available This paper aims to find out the possible effect of the use or nonuse of diacritics in Yoruba search queries on the performance of major search engines, AOL, Bing, Google and Yahoo!, in retrieving documents. 30 Yoruba queries created from the most searched keywords from Nigeria on Google search logs were submitted to the search engines. The search queries were posed to the search engines without diacritics and then with diacritics. All of the search engines retrieved more sites in response to the queries without diacritics. Also, they all retrieved more precise results for queries without diacritics. The search engines also answered more queries without diacritics. There was no significant difference in the precision values of any two of the four search engines for diacritized and undiacritized queries. There was a significant difference in the effectiveness of AOL and Yahoo when diacritics were applied and when they were not applied. The findings of the study indicate that the search engines do not find a relationship between the diacritized Yoruba words and the undiacritized versions. Therefore, there is a need for search engines to add normalization steps to pre-process Yoruba queries and indexes. This study concentrates on a problem with search engines that has not been previously investigated.
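    The normalisation step the study argues search engines are missing, treating diacritized and undiacritized Yoruba forms as the same term, can be sketched in a few lines of Python with Unicode decomposition; the sample word is illustrative.

```python
# A sketch of the normalisation step the study suggests search engines lack:
# stripping diacritics so that marked and unmarked Yoruba query forms index
# identically. The sample word is illustrative.
import unicodedata

def strip_diacritics(text):
    """Remove combining marks after canonical decomposition (NFD)."""
    decomposed = unicodedata.normalize("NFD", text)
    return "".join(ch for ch in decomposed if not unicodedata.combining(ch))

print(strip_diacritics("Yorùbá"))  # -> 'Yoruba'
```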

  10. Web application for marketing of digital art works and services

    OpenAIRE

    Vatovec, Jan

    2016-01-01

    The aim of the diploma thesis is to create a web application for marketing digital artworks and services. The decision to undertake this task is based on the author's understanding of the field and the assessment that current solutions do not fully satisfy the needs of digital artists who work on the market of online subcultures built on fantasy characters (commercial and artists' own creations). The final application comprises an interactive web gallery, an auction-based marketing system for s...

  11. Web application with R using Shiny

    CERN Document Server

    Beeley, Chris

    2013-01-01

    This book follows a standard tutorial-based approach which will teach you how to make a web app using R and Shiny quickly and easily.This book is for anybody who wants to produce interactive data summaries over the Web, whether you want to share them with a few colleagues or the whole world. You need no previous experience with R, Shiny, HTML, or CSS to begin using this book, although you will need at least a little previous experience with programming in a different language.

  12. BUILDING A WEB APPLICATION WITH LARAVEL 5

    OpenAIRE

    Nguyen, Quang

    2015-01-01

    In the modern IT industry, it is essential for web developers to know at least one battle-proven framework. Laravel is one of the most successful PHP frameworks in 2015, based on an annual framework popularity survey conducted by SitePoint (SitePoint, The Best PHP Framework for 2015: SitePoint Survey Results, cited 25.10.2015). There are several advantages and benefits to using a web framework in general and Laravel in particular. A framework is a product of collective intelligence, comprising many ...

  13. Web Application Obfuscation '-WAFsEvasionFiltersalert(Obfuscation)-'

    CERN Document Server

    Heiderich, Mario; Heyes, Gareth; Lindsay, David

    2010-01-01

    Web applications are used every day by millions of users, which is why they are one of the most popular vectors for attackers. Obfuscation of code has allowed hackers to take one attack and create hundreds-if not millions-of variants that can evade your security measures. Web Application Obfuscation takes a look at common Web infrastructure and security controls from an attacker's perspective, allowing the reader to understand the shortcomings of their security systems. Find out how an attacker would bypass different types of security controls, how these very security controls introduce new ty

  14. Development of a Web-based financial application System

    Science.gov (United States)

    Hasan, M. R.; Ibrahimy, M. I.; Motakabber, S. M. A.; Ferdaus, M. M.; Khan, M. N. H.; Mostafa, M. G.

    2013-12-01

    The paper describes a technique for developing a web-based financial system that follows the latest technology and business needs. In the development of web-based applications, both user friendliness and technology are very important. The ASP.NET MVC 4 platform and SQL Server 2008 are used for the development of the web-based financial system. The entry system and report monitoring of the application are shown to be user friendly. This paper also highlights critical situations in development, which will help in developing a quality product.

  15. COEUS: "semantic web in a box" for biomedical applications.

    Science.gov (United States)

    Lopes, Pedro; Oliveira, José Luís

    2012-12-17

    As the "omics" revolution unfolds, the growth in data quantity and diversity is bringing about the need for pioneering bioinformatics software, capable of significantly improving the research workflow. To cope with these computer science demands, biomedical software engineers are adopting emerging semantic web technologies that better suit the life sciences domain. The latter's complex relationships are easily mapped into semantic web graphs, enabling a superior understanding of collected knowledge. Despite increased awareness of semantic web technologies in bioinformatics, their use is still limited. COEUS is a new semantic web framework, aiming at a streamlined application development cycle and following a "semantic web in a box" approach. The framework provides a single package including advanced data integration and triplification tools, base ontologies, a web-oriented engine and a flexible exploration API. Resources can be integrated from heterogeneous sources, including CSV and XML files or SQL and SPARQL query results, and mapped directly to one or more ontologies. Advanced interoperability features include REST services, a SPARQL endpoint and LinkedData publication. These enable the creation of multiple applications for web, desktop or mobile environments, and empower a new knowledge federation layer. The platform, targeted at biomedical application developers, provides a complete skeleton ready for rapid application deployment, enhancing the creation of new semantic information systems. COEUS is available as open source at http://bioinformatics.ua.pt/coeus/.

  16. Project Management Data Retrieval and Integration (PMDRI) Application

    Data.gov (United States)

    Department of Veterans Affairs — The Project Management Data Retrieval and Integration Database (PMDRI) is a system that presents data from the VA Financial Management System (FMS) in a structured...

  17. ASP.NET web API build RESTful web applications and services on the .NET framework

    CERN Document Server

    Kanjilal, Joydip

    2013-01-01

    This book is a step-by-step, practical tutorial with a simple approach to help you build RESTful web applications and services on the .NET framework quickly and efficiently.This book is for ASP.NET web developers who want to explore REST-based services with C# 5. This book contains many real-world code examples with explanations whenever necessary. Some experience with C# and ASP.NET 4 is expected.

  18. Large-Scale Partial-Duplicate Image Retrieval and Its Applications

    Science.gov (United States)

    2016-04-23

    Final Report (23-Jan-2012 to 22-Jan-2016; distribution unlimited): in tree-based image retrieval, a semantic-aware co-indexing algorithm is proposed to jointly embed two strong cues into the inverted indexes: 1) local...

  19. Students as Designers of Semantic Web Applications

    Science.gov (United States)

    Tracy, Fran; Jordan, Katy

    2012-01-01

    This paper draws upon the experience of an interdisciplinary research group in engaging undergraduate university students in the design and development of semantic web technologies. A flexible approach to participatory design challenged conventional distinctions between "designer" and "user" and allowed students to play a role…

  20. Development of a Web Service Architecture for Enterprise Application Integration

    International Nuclear Information System (INIS)

    Kim, Ji-Hyeon; Jung, Jae-Cheon; Chang, Young-Woo; Chang, Hoon-Seon; Kim, Jae-Cheol; Kim, Hang-Bae; Kim, Kyu-Ho; Lee, Dong-Chul

    2007-01-01

    The purpose of Enterprise Application Integration (EAI) is to enable interoperability between two or more enterprise software systems. These systems, for example, can be an Enterprise Resource Planning (ERP) system, an Enterprise Asset Management (EAM) system or a Condition Monitoring system. The traditional EAI approach, based on point-to-point connections, is expensive and vendor specific, with limited modules and restricted interoperability with other ERPs and applications. To overcome these drawbacks, Web Service based EAI has emerged. It allows integration without point-to-point linking and at lower cost. Many Web service based EAI approaches are combined with ORACLE, SAP, PeopleSoft, WebSphere, SIEBEL etc. as a system integration platform. These approaches still have the restriction that only predefined clients can access the services. This means clients must know exactly the protocol for calling the services, and if they don't have the access information they can never get the services. This is because these Web services are based on syntactic service descriptions. In this paper, a semantic-based EAI approach that allows uninformed clients to access the services is introduced. The semantic EAI is designed with Web services that have semantic service descriptions. The Semantic Web Services (SWS) are described in the Web Ontology Language for Services (OWL-S), a semantic service ontology language, and advertised in Universal Description, Discovery and Integration (UDDI). Clients find desired services through the UDDI and get services from service providers through the Web Service Description Language (WSDL)

  1. Development of a Web Service Architecture for Enterprise Application Integration

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Ji-Hyeon; Jung, Jae-Cheon; Chang, Young-Woo; Chang, Hoon-Seon; Kim, Jae-Cheol; Kim, Hang-Bae [Korea Power Engineering Company, Daejeon (Korea, Republic of); Kim, Kyu-Ho; Lee, Dong-Chul [Korea Electric Power Data Network, Daejeon (Korea, Republic of)

    2007-07-01

    The purpose of Enterprise Application Integration (EAI) is to enable interoperability between two or more enterprise software systems. These systems, for example, can be an Enterprise Resource Planning (ERP) system, an Enterprise Asset Management (EAM) system or a Condition Monitoring system. The traditional EAI approach, based on point-to-point connections, is expensive and vendor specific, with limited modules and restricted interoperability with other ERPs and applications. To overcome these drawbacks, Web Service based EAI has emerged. It allows integration without point-to-point linking and at lower cost. Many Web service based EAI approaches are combined with ORACLE, SAP, PeopleSoft, WebSphere, SIEBEL etc. as a system integration platform. These approaches still have the restriction that only predefined clients can access the services. This means clients must know exactly the protocol for calling the services, and if they don't have the access information they can never get the services. This is because these Web services are based on syntactic service descriptions. In this paper, a semantic-based EAI approach that allows uninformed clients to access the services is introduced. The semantic EAI is designed with Web services that have semantic service descriptions. The Semantic Web Services (SWS) are described in the Web Ontology Language for Services (OWL-S), a semantic service ontology language, and advertised in Universal Description, Discovery and Integration (UDDI). Clients find desired services through the UDDI and get services from service providers through the Web Service Description Language (WSDL)

  2. QuickEval: a web application for psychometric scaling experiments

    Science.gov (United States)

    Van Ngo, Khai; Storvik, Jehans J.; Dokkeberg, Christopher A.; Farup, Ivar; Pedersen, Marius

    2015-01-01

    QuickEval is a web application for carrying out psychometric scaling experiments. It offers the possibility of running controlled experiments in a laboratory, or large-scale experiments over the web for people all over the world. It is a one-of-a-kind web application that fills a software need in the image quality field. It is also, to the best of our knowledge, the first software that supports the three most common scaling methods: paired comparison, rank order, and category judgement. Hopefully, a side effect of this newly created software is that it will lower the threshold for performing psychometric experiments, improve the quality of the experiments being carried out, make it easier to reproduce experiments, and increase research on image quality both in academia and industry. The web application is available at www.colourlab.no/quickeval.
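
    To make the scaling step concrete, the sketch below turns a paired-comparison count matrix into interval scale values with Thurstone's Case V model (z-scores of preference proportions, averaged per stimulus). This is the textbook recipe rather than QuickEval's own code, and the counts are invented.

        # Thurstone Case V scaling from a paired-comparison matrix (illustrative).
        from statistics import NormalDist

        # wins[i][j] = number of observers preferring stimulus i over stimulus j
        wins = [[0, 18, 25],
                [12, 0, 20],
                [5, 10, 0]]
        z = NormalDist().inv_cdf  # proportion -> standard normal z-score

        def scale_values(wins):
            n = len(wins)
            scores = []
            for i in range(n):
                zs = []
                for j in range(n):
                    if i == j:
                        continue
                    p = wins[i][j] / (wins[i][j] + wins[j][i])
                    p = min(max(p, 0.01), 0.99)  # clip to avoid infinite z-scores
                    zs.append(z(p))
                scores.append(sum(zs) / len(zs))
            return scores

        print(scale_values(wins))  # one interval-scale value per stimulus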

  3. Data mining approach to web application intrusions detection

    Science.gov (United States)

    Kalicki, Arkadiusz

    2011-10-01

    Web applications have become the most popular medium on the Internet. Their popularity, the ease of web application scripting languages and frameworks, and careless development together result in a high number of web application vulnerabilities and a high number of attacks performed. Several types of attacks are possible because of improper input validation: SQL injection, cross-site scripting, cross-site request forgery (CSRF), web spam in blogs, and others. In order to secure web applications, intrusion detection systems (IDS) and intrusion prevention systems (IPS) are used. Intrusion detection systems are divided into two groups: misuse detection (traditional IDS) and anomaly detection. This paper presents a data mining based algorithm for anomaly detection. The principle of this method is the comparison of the incoming HTTP traffic with a previously built profile that contains a representation of the "normal" or expected web application usage sequence patterns. The frequent sequence patterns are found with the GSP algorithm. A previously presented detection method was rewritten and improved. Tests show that the software catches malicious requests, especially long attack sequences; results are quite good for medium-length sequences, while short sequences must be complemented with other methods.
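
    The comparison step can be reduced to a few lines: assume a profile of frequent request-sequence patterns has already been mined offline (e.g., with GSP), then score an incoming session against it. The patterns and requests below are invented for illustration; the paper's method is more elaborate.

        # Illustrative session check against a profile of frequent patterns
        # (the offline mining step, e.g. GSP, is assumed to have run already).

        PROFILE = [  # hypothetical frequent patterns of "normal" navigation
            ["/login", "/account", "/logout"],
            ["/search", "/item", "/cart"],
        ]

        def is_subsequence(pattern, session):
            """True if 'pattern' occurs in 'session' in order (gaps allowed)."""
            it = iter(session)
            return all(step in it for step in pattern)

        def check(session, profile=PROFILE):
            vocab = {step for pattern in profile for step in pattern}
            return {
                "unknown_requests": [r for r in session if r not in vocab],
                "matches_normal_pattern": any(is_subsequence(p, session)
                                              for p in profile),
            }

        print(check(["/login", "/account", "/admin'--", "/logout"]))
        # {'unknown_requests': ["/admin'--"], 'matches_normal_pattern': True}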

  4. AMBIT RESTful web services: an implementation of the OpenTox application programming interface

    Directory of Open Access Journals (Sweden)

    Jeliazkova Nina

    2011-05-01

    Full Text Available The AMBIT web services package is one of several existing independent implementations of the OpenTox Application Programming Interface and is built according to the principles of the Representational State Transfer (REST) architecture. The Open Source Predictive Toxicology Framework, developed by the partners in the EC FP7 OpenTox project, aims at providing unified access to toxicity data and predictive models, as well as validation procedures. This is achieved by (i) an information model, based on a common OWL-DL ontology; (ii) links to related ontologies; (iii) data and algorithms, available through a standardized REST web services interface, where every compound, data set or predictive method has a unique web address, used to retrieve its Resource Description Framework (RDF) representation, or initiate the associated calculations. The AMBIT web services package has been developed as an extension of AMBIT modules, adding the ability to create (Quantitative) Structure-Activity Relationship (QSAR) models and providing an OpenTox API compliant interface. The representation of data and processing resources in the W3C Resource Description Framework facilitates integrating the resources as Linked Data. By uploading datasets with chemical structures and an arbitrary set of properties, they become automatically available online in several formats. The services provide unified interfaces to several descriptor calculation, machine learning and similarity searching algorithms, as well as to applicability domain and toxicity prediction models. All Toxtree modules for predicting the toxicological hazard of chemical compounds are also integrated within this package. The complexity and diversity of the processing is reduced to the simple paradigm "read data from a web address, perform processing, write to a web address". The online service makes it possible to easily run predictions, without installing any software, as well as to share online datasets and models. The

  5. AMBIT RESTful web services: an implementation of the OpenTox application programming interface.

    Science.gov (United States)

    Jeliazkova, Nina; Jeliazkov, Vedrin

    2011-05-16

    The AMBIT web services package is one of several existing independent implementations of the OpenTox Application Programming Interface and is built according to the principles of the Representational State Transfer (REST) architecture. The Open Source Predictive Toxicology Framework, developed by the partners in the EC FP7 OpenTox project, aims at providing unified access to toxicity data and predictive models, as well as validation procedures. This is achieved by i) an information model, based on a common OWL-DL ontology; ii) links to related ontologies; iii) data and algorithms, available through a standardized REST web services interface, where every compound, data set or predictive method has a unique web address, used to retrieve its Resource Description Framework (RDF) representation, or initiate the associated calculations. The AMBIT web services package has been developed as an extension of AMBIT modules, adding the ability to create (Quantitative) Structure-Activity Relationship (QSAR) models and providing an OpenTox API compliant interface. The representation of data and processing resources in the W3C Resource Description Framework facilitates integrating the resources as Linked Data. By uploading datasets with chemical structures and an arbitrary set of properties, they become automatically available online in several formats. The services provide unified interfaces to several descriptor calculation, machine learning and similarity searching algorithms, as well as to applicability domain and toxicity prediction models. All Toxtree modules for predicting the toxicological hazard of chemical compounds are also integrated within this package. The complexity and diversity of the processing is reduced to the simple paradigm "read data from a web address, perform processing, write to a web address". The online service makes it possible to easily run predictions, without installing any software, as well as to share online datasets and models. The downloadable web application
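
    The "read data from a web address" half of that paradigm is plain HTTP content negotiation. The sketch below asks for the RDF representation of a compound resource; the base URL and identifier are placeholders, so a live AMBIT/OpenTox deployment must be substituted before it will run.

        # Minimal REST client sketch for an OpenTox-style service (Python).
        import urllib.request

        BASE = "https://example.org/ambit2"   # placeholder deployment URL
        url = f"{BASE}/compound/12345"        # placeholder compound id

        req = urllib.request.Request(url,
                                     headers={"Accept": "application/rdf+xml"})
        with urllib.request.urlopen(req) as resp:  # needs a live server
            rdf = resp.read().decode("utf-8")
        print(rdf[:200])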

  6. Neutralizing SQL Injection Attack Using Server Side Code Modification in Web Applications

    OpenAIRE

    Dalai, Asish Kumar; Jena, Sanjay Kumar

    2017-01-01

    Reports on web application security risks show that SQL injection is the topmost vulnerability. The journey from static to dynamic web pages has led to the use of databases in web applications. Due to the lack of secure coding techniques, SQL injection vulnerability prevails in a large set of web applications. A successful SQL injection attack imposes a serious threat to the database, the web application, and the entire web server. In this article, the authors have proposed a novel method for prevent...
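
    Whatever the specific server-side modification proposed (the abstract does not detail it), the heart of most SQL injection fixes is to stop splicing user input into SQL text. A minimal contrast, using Python's built-in sqlite3 purely for illustration:

        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE users (name TEXT, pw TEXT)")
        conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

        name = "' OR '1'='1"  # classic injection payload

        # VULNERABLE: user input concatenated into the SQL text
        rows = conn.execute(
            "SELECT * FROM users WHERE name = '" + name + "'").fetchall()
        print(len(rows))  # 1 -- the WHERE clause was subverted

        # SAFE: parameterized query; input is bound as data, never parsed as SQL
        rows = conn.execute(
            "SELECT * FROM users WHERE name = ?", (name,)).fetchall()
        print(len(rows))  # 0 -- no user is literally named that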

  7. [A systematic evaluation of application of the web-based cancer database].

    Science.gov (United States)

    Huang, Tingting; Liu, Jialin; Li, Yong; Zhang, Rui

    2013-10-01

    In order to support the theory and practice of web-based cancer database development in China, we carried out a systematic evaluation to assess the state of development of web-based cancer databases in China and abroad. We performed computer-based retrieval of the Ovid-MEDLINE, Springerlink, EBSCOhost, Wiley Online Library and CNKI databases for papers published between Jan. 1995 and Dec. 2011, and hand-searched the references of these papers. We selected qualified papers according to pre-established inclusion and exclusion criteria, and carried out information extraction and analysis. Searching the online databases yielded 1244 papers, and checking the reference lists yielded another 19 articles. Thirty-one articles met the inclusion and exclusion criteria, and we extracted and assessed the evidence from them. The analysis showed that the U.S.A. ranked first, accounting for 26%. Thirty-nine percent of these web-based cancer databases are comprehensive cancer databases. As for single-cancer databases, breast cancer and prostatic cancer are at the top, each accounting for 10%. Thirty-two percent of the cancer databases are associated with cancer gene information. As for technical applications, MySQL and PHP were the most widely applied, at nearly 23% each.

  8. Virtual patients on the semantic Web: a proof-of-application study.

    Science.gov (United States)

    Dafli, Eleni; Antoniou, Panagiotis; Ioannidis, Lazaros; Dombros, Nicholas; Topps, David; Bamidis, Panagiotis D

    2015-01-22

    Virtual patients are interactive computer simulations that are increasingly used as learning activities in modern health care education, especially in teaching clinical decision making. A key challenge is how to retrieve and repurpose virtual patients as unique types of educational resources between different platforms because of the lack of standardized content-retrieving and repurposing mechanisms. Semantic Web technologies provide the capability, through structured information, for easy retrieval, reuse, repurposing, and exchange of virtual patients between different systems. An attempt to address this challenge has been made through the mEducator Best Practice Network, which provisioned frameworks for the discovery, retrieval, sharing, and reuse of medical educational resources. We have extended the OpenLabyrinth virtual patient authoring and deployment platform to facilitate the repurposing and retrieval of existing virtual patient material. A standalone Web distribution and Web interface, which contains an extension for the OpenLabyrinth virtual patient authoring system, was implemented. This extension was designed to semantically annotate virtual patients to facilitate intelligent searches, complex queries, and easy exchange between institutions. The OpenLabyrinth extension enables OpenLabyrinth authors to integrate and share virtual patient case metadata within the mEducator3.0 network. Evaluation included 3 successive steps: (1) expert reviews; (2) evaluation of the ability of health care professionals and medical students to create, share, and exchange virtual patients through specific scenarios in extended OpenLabyrinth (OLabX); and (3) evaluation of the repurposed learning objects that emerged from the procedure. We evaluated 30 repurposed virtual patient cases. The evaluation, with a total of 98 participants, demonstrated the system's main strength: the core repurposing capacity. The extensive metadata schema presentation facilitated user exploration

  9. A Fuzzy Semantic Information Retrieval System for Transactional Applications

    Directory of Open Access Journals (Sweden)

    A O Ajayi

    2009-09-01

    Full Text Available In this paper, we present an information retrieval system based on the concept of fuzzy logic to relate vague and uncertain objects with un-sharp boundaries. The simple but comprehensive user interface of the system permits the entering of uncertain specifications in query forms. The system was modelled and simulated in a Matlab environment; its implementation was carried out using Borland C++ Builder. The results of the system's performance measurements using precision and recall rates are encouraging. Similarly, the smaller amount of more precise information retrieved by the system will positively impact the response time perceived by users.
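
    A toy version of such fuzzy matching: a vague criterion like "price around 100" becomes a triangular membership function that grades every record, instead of the crisp pass/fail of Boolean retrieval. The data and set shapes are invented, not taken from the paper.

        # Illustrative fuzzy retrieval for a vague criterion ("around 100").

        def triangular(x, low, peak, high):
            """Membership degree in [0, 1] for a triangular fuzzy set."""
            if x <= low or x >= high:
                return 0.0
            if x <= peak:
                return (x - low) / (peak - low)
            return (high - x) / (high - peak)

        records = [("A", 80), ("B", 95), ("C", 140), ("D", 100)]

        # "price around 100" as a fuzzy set supported on [70, 130]
        ranked = sorted(records,
                        key=lambda r: triangular(r[1], 70, 100, 130),
                        reverse=True)
        for name, price in ranked:
            print(name, price, round(triangular(price, 70, 100, 130), 2))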

  10. Automated Functional Testing based on the Navigation of Web Applications

    Directory of Open Access Journals (Sweden)

    Boni García

    2011-08-01

    Full Text Available Web applications are becoming more and more complex. Testing such applications is an intricate, hard and time-consuming activity. Therefore, testing is often poorly performed or skipped by practitioners. Test automation can help to avoid this situation. Hence, this paper presents a novel approach to perform automated software testing for web applications based on their navigation. On the one hand, web navigation is the process of traversing a web application using a browser. On the other hand, functional requirements are actions that an application must perform. Therefore, evaluating the correct navigation of a web application amounts to assessing the specified functional requirements. The proposed method performs the automation at four levels: test case generation, test data derivation, test case execution, and test case reporting. This method is driven by three kinds of inputs: (i) UML models; (ii) Selenium scripts; (iii) XML files. We have implemented our approach in an open-source testing framework named Automatic Testing Platform. The validation of this work has been carried out by means of a case study, in which the target is a real invoice management system developed using a model-driven approach.
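
    Of the three input kinds, the Selenium scripts are the most directly executable. A navigation-level functional test typically looks like the sketch below, written here with Selenium's Python bindings against a hypothetical invoice application (URL and element names are invented):

        # Navigation-driven functional test sketch (Selenium 4, Python bindings).
        from selenium import webdriver
        from selenium.webdriver.common.by import By

        driver = webdriver.Chrome()
        try:
            driver.get("https://example.org/invoices")  # placeholder app
            driver.find_element(By.NAME, "user").send_keys("tester")
            driver.find_element(By.NAME, "password").send_keys("secret")
            driver.find_element(By.ID, "login").click()
            # Functional requirement: logging in must lead to the invoice list
            assert "Invoice list" in driver.page_source
        finally:
            driver.quit()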

  11. G-Bean: an ontology-graph based web tool for biomedical literature retrieval.

    Science.gov (United States)

    Wang, James Z; Zhang, Yuanyuan; Dong, Liang; Li, Lin; Srimani, Pradip K; Yu, Philip S

    2014-01-01

    Currently, most people use NCBI's PubMed to search the MEDLINE database, an important bibliographical information source for life science and biomedical information. However, PubMed has some drawbacks that make it difficult to find relevant publications pertaining to users' individual intentions, especially for non-expert users. To ameliorate the disadvantages of PubMed, we developed G-Bean, a graph-based biomedical search engine, to search biomedical articles in the MEDLINE database more efficiently. G-Bean addresses PubMed's limitations with three innovations: (1) Parallel document index creation: a multithreaded index creation strategy is employed to generate the document index for G-Bean in parallel; (2) Ontology-graph based query expansion: an ontology graph is constructed by merging four major UMLS (Version 2013AA) vocabularies, MeSH, SNOMEDCT, CSP and AOD, to cover all concepts in the National Library of Medicine (NLM) database; a Personalized PageRank algorithm is used to compute concept relevance in this ontology graph, and the Term Frequency-Inverse Document Frequency (TF-IDF) weighting scheme is used to re-rank the concepts. The top 500 ranked concepts are selected for expanding the initial query to retrieve more accurate and relevant information; (3) Retrieval and re-ranking of documents based on the user's search intention: after the user selects any article from the existing search results, G-Bean analyzes the user's selections to determine his/her true search intention and then uses more relevant and more specific terms to retrieve additional related articles. The new articles are presented to the user in order of their relevance to the already selected articles. Performance evaluation with 106 OHSUMED benchmark queries shows that G-Bean returns more relevant results than PubMed does when using these queries to search the MEDLINE database. PubMed could not even return any search result for some OHSUMED queries because it failed to form the appropriate Boolean
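
    The TF-IDF re-ranking mentioned above is the standard weighting scheme; a compact sketch over a toy corpus (tokens standing in for UMLS concepts, not G-Bean's actual index):

        # Standard TF-IDF weighting over a toy corpus (illustrative).
        import math

        docs = {
            "d1": ["aspirin", "pain", "fever"],
            "d2": ["aspirin", "heart"],
            "d3": ["fever", "infection", "fever"],
        }

        def tf_idf(term, doc_id):
            doc = docs[doc_id]
            tf = doc.count(term) / len(doc)
            df = sum(1 for d in docs.values() if term in d)
            idf = math.log(len(docs) / df)
            return tf * idf

        ranking = sorted(docs, key=lambda d: tf_idf("fever", d), reverse=True)
        print(ranking)  # d3 first: highest relative frequency of "fever"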

  12. An Image Retrieval and Processing Expert System for the World Wide Web

    Science.gov (United States)

    Rodriguez, Ricardo; Rondon, Angelica; Bruno, Maria I.; Vasquez, Ramon

    1998-01-01

    This paper presents a system that is being developed in the Laboratory of Applied Remote Sensing and Image Processing at the University of P.R. at Mayaguez, and describes the components that constitute its architecture. The main elements are a Data Warehouse, an Image Processing Engine, and an Expert System. Together, they provide a complete solution for researchers from different fields who make use of images in their investigations. Also, since it is available on the World Wide Web, it provides remote access to and processing of images.

  13. Development of a 3D WebGIS System for Retrieving and Visualizing CityGML Data Based on their Geometric and Semantic Characteristics by Using Free and Open Source Technology

    Science.gov (United States)

    Pispidikis, I.; Dimopoulou, E.

    2016-10-01

    CityGML is considered an optimal standard for representing 3D city models. However, international experience has shown that visualization of such models is quite difficult to implement on the web, due to the large size of the data and the complexity of CityGML. As a result, in the context of this paper, a 3D WebGIS application is developed in order to successfully retrieve and visualize CityGML data in accordance with their respective geometric and semantic characteristics. Furthermore, the available web technologies and the architecture of WebGIS systems are investigated, as documented in international experience, in order to be utilized in the most appropriate way for the purposes of this paper. Specifically, a PostgreSQL/PostGIS database is used, in compliance with the 3DCityDB schema. At the server tier, Apache HTTP Server and GeoServer are utilized, with PHP as the server-side programming language. At the client tier, which implements the interface of the application, the following technologies were used: JQuery, AJAX, JavaScript, HTML5, WebGL and Ol3-Cesium. Finally, it is worth mentioning that the application's primary objectives are a user-friendly interface and a fully open source development.

  14. Protein Annotators' Assistant: A Novel Application of Information Retrieval Techniques.

    Science.gov (United States)

    Wise, Michael J.

    2000-01-01

    Protein Annotators' Assistant (PAA) is a software system which assists protein annotators in assigning functions to newly sequenced proteins. PAA employs a number of information retrieval techniques in a novel setting and is thus related to text categorization, where multiple categories may be suggested, except that in this case none of the…

  15. The TDAQ Analytics Dashboard: a real-time web application for the ATLAS TDAQ control infrastructure

    International Nuclear Information System (INIS)

    Miotto, Giovanna Lehmann; Magnoni, Luca; Sloper, John Erik

    2011-01-01

    The ATLAS Trigger and Data Acquisition (TDAQ) infrastructure is responsible for filtering and transferring ATLAS experimental data from detectors to mass storage systems. It relies on a large, distributed computing system composed of thousands of software applications running concurrently. In such a complex environment, information sharing is fundamental for controlling application behavior, error reporting and operational monitoring. During data taking, the streams of messages sent by applications and the data published via information services are constantly monitored by experts to verify the correctness of running operations and to understand problematic situations. To simplify and improve system analysis and error detection tasks, we developed the TDAQ Analytics Dashboard, a web application that aims to collect, correlate and visualize this real-time flow of information effectively. The TDAQ Analytics Dashboard is composed of two main entities that reflect the twofold scope of the application. The first is the engine, a Java service that performs aggregation, processing and filtering of the real-time data stream and computes statistical correlation on sliding windows of time. The results are made available to clients via a simple web interface supporting an SQL-like query syntax. The second is the visualization, provided by an Ajax-based web application that runs in the client's browser. The dashboard approach allows information to be presented in a clear and customizable structure. Several types of interactive graphs are offered as widgets that can be dynamically added and removed from visualization panels. Each widget acts as a client for the engine, querying the web interface to retrieve data with the desired criteria. In this paper we present the design, development and evolution of the TDAQ Analytics Dashboard. We also present the statistical analysis computed by the application in this first period of high energy data taking operations for the ATLAS experiment.
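
    The engine's "statistical correlation on sliding windows of time" can be pictured in a few lines: given two synchronized metric streams, compute the Pearson correlation over the most recent N samples and alert when it is strong. This is a generic sketch in Python (the dashboard engine itself is a Java service) with invented thresholds.

        # Pearson correlation over a sliding window of two metric streams.
        from collections import deque
        from statistics import correlation  # Python 3.10+

        WINDOW = 30
        temps = deque(maxlen=WINDOW)
        net = deque(maxlen=WINDOW)

        def on_sample(temperature, utilization):
            temps.append(temperature)
            net.append(utilization)
            if len(temps) == WINDOW:
                r = correlation(temps, net)
                if abs(r) > 0.9:  # arbitrary alerting threshold
                    print(f"strong correlation in current window: r = {r:.2f}")

        for t in range(100):  # fake synchronized samples
            on_sample(40 + 0.1 * t, 50 + 0.5 * t)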

  16. Les frameworks au coeur des applications web

    OpenAIRE

    Moro, Arielle; Daehne, Peter

    2010-01-01

    In recent years, the Internet has truly become part of everyday life, both in companies and in every household. Indeed, the Internet makes it possible to communicate across the world in a few seconds, to sell all kinds of products by easily deploying e-commerce solutions, and much more. The Internet is therefore a true vector of communication and commerce and now, with Web 2.0, a real cradle of information (personal information as well as informati...

  17. SIGMA WEB INTERFACE FOR REACTOR DATA APPLICATIONS

    Energy Technology Data Exchange (ETDEWEB)

    Pritychenko,B.; Sonzogni, A.A.

    2010-05-09

    We present the Sigma Web interface, which provides user-friendly access for online analysis and plotting of the evaluated and experimental nuclear reaction data stored in the ENDF-6 and EXFOR formats. The interface includes advanced browsing and search capabilities, interactive plots of cross sections, angular distributions and spectra, nubars, comparisons between evaluated and experimental data, computations for cross section data sets, pre-calculated integral quantities, neutron cross section uncertainty plots and visualization of covariance matrices. Sigma is publicly available at the National Nuclear Data Center website at http://www.nndc.bnl.gov/sigma.

  18. Sigma Web Interface For Reactor Data Applications

    International Nuclear Information System (INIS)

    Pritychenko, B.; Sonzogni, A.A.

    2010-01-01

    We present the Sigma Web interface, which provides user-friendly access for online analysis and plotting of the evaluated and experimental nuclear reaction data stored in the ENDF-6 and EXFOR formats. The interface includes advanced browsing and search capabilities, interactive plots of cross sections, angular distributions and spectra, nubars, comparisons between evaluated and experimental data, computations for cross section data sets, pre-calculated integral quantities, neutron cross section uncertainty plots and visualization of covariance matrices. Sigma is publicly available at the National Nuclear Data Center website at http://www.nndc.bnl.gov/sigma.

  19. AN AUTOMATIC AND METHODOLOGICAL APPROACH FOR ACCESSIBLE WEB APPLICATIONS

    Directory of Open Access Journals (Sweden)

    Lourdes Moreno

    2007-06-01

    Full Text Available Semantic Web approaches aim to achieve interoperability and communication among technologies and organizations. Nevertheless, it is sometimes forgotten that the Web must be useful for every user, so tools and techniques are needed to make the Semantic Web accessible. Accessibility and usability are two commonly associated concepts widely used in web application development; however, their meanings are different. Usability concerns ease of use, whereas accessibility concerns the possibility of access. For the former, there are many well-proven approaches in real cases. However, the accessibility field requires deeper research to make access feasible for disabled people, and also for novice non-disabled people, given the cost of automating and maintaining accessible applications. In this paper, we propose an architecture to achieve accessibility in web environments that complies with the WAI accessibility standard and the Universal Design paradigm. This architecture controls accessibility throughout the web application development life cycle, following a methodology that starts from a semantic conceptual model and relies on description languages and controlled vocabularies.

  20. Scalable Transactions for Web Applications in the Cloud

    NARCIS (Netherlands)

    Zhou, W.; Pierre, G.E.O.; Chi, C.-H.

    2009-01-01

    Cloud Computing platforms provide scalability and high availability properties for web applications but they sacrifice data consistency at the same time. However, many applications cannot afford any data inconsistency. We present a scalable transaction manager for NoSQL cloud database services to

  1. Developing responsive web applications with Ajax and jQuery

    CERN Document Server

    Patel, Sandeep Kumar

    2014-01-01

    This book is a standard tutorial for web application developers presented in a comprehensive, step-by-step manner to explain the nuances involved. It has an abundance of code and examples supporting explanations of each feature. This book is intended for Java developers wanting to create rich and responsive applications using AJAX. Basic experience of using jQuery is assumed.

  2. Development of Web-Based Learning Application for Generation Z

    Science.gov (United States)

    Hariadi, Bambang; Dewiyani Sunarto, M. J.; Sudarmaningtyas, Pantjawati

    2016-01-01

    This study aimed to develop a web-based learning application as a form of learning revolution. The form of learning revolution includes the provision of unlimited teaching materials, real time class organization, and is not limited by time or place. The implementation of this application is in the form of hybrid learning by using Google Apps for…

  3. CERN's web application updates for electron and laser beam technologies

    CERN Document Server

    Sigas, Christos

    2017-01-01

    This report describes the modifications to CERN's web application for electron and laser beam technologies. There are updates to both the front end and the back end of the application. New electron and laser machines were added, and old machines were updated. There is also a new feature for printing the needed information.

  4. SPIDERGL: A GRAPHICS LIBRARY FOR 3D WEB APPLICATIONS

    Directory of Open Access Journals (Sweden)

    M. Di Benedetto

    2012-09-01

    Full Text Available The recent introduction of the WebGL API for leveraging the power of 3D graphics accelerators within Web browsers opens the possibility of developing advanced graphics applications without the need for an ad-hoc plug-in. There are several contexts in which this new technology can be exploited to enhance user experience and data exploration, such as e-commerce applications, games and, in particular, Cultural Heritage. In fact, it is now possible to use the Web platform to present virtual reconstruction hypotheses of the ancient past, to show detailed 3D models of artefacts of interest to a wide public, and to create virtual museums. We introduce SpiderGL, a JavaScript library for developing 3D graphics Web applications. SpiderGL provides data structures and algorithms to ease the use of WebGL, to define and manipulate shapes, to import 3D models in various formats, and to handle asynchronous data loading. We show the potential of this novel library with a number of demo applications and give details about its future use in the context of Cultural Heritage applications.

  5. Web application for monitoring mainframe computer, Linux operating systems and application servers

    OpenAIRE

    Dimnik, Tomaž

    2016-01-01

    This work presents the idea and the realization of a web application for monitoring the operation of a mainframe computer, servers running the Linux operating system, and application servers. The web application is intended for the administrators of these systems, as an aid to better understand the current state, load and operation of the individual components of the server systems.

  6. The ADAM project: a generic web interface for retrieval and display of ATLAS TDAQ information

    Science.gov (United States)

    Harwood, A.; Lehmann Miotto, G.; Magnoni, L.; Vandelli, W.; Savu, D.

    2012-06-01

    This paper describes a new approach to the visualization of information about the operation of the ATLAS Trigger and Data Acquisition system. ATLAS is one of the two general purpose detectors positioned along the Large Hadron Collider at CERN. Its data acquisition system consists of several thousand computers interconnected via multiple gigabit Ethernet networks that are constantly monitored via different tools. Operational parameters ranging from the temperature of the computers to the network utilization are stored in several databases for later analysis. Although the ability to view these data-sets individually is already in place, currently there is no way to view this data together, in a uniform format, from one location. The ADAM project has been launched in order to overcome this limitation. It defines a uniform web interface to collect data from multiple providers that have different structures. It is capable of aggregating and correlating the data according to user-defined criteria. Finally, it visualizes the collected data using a flexible and interactive front-end web system. Structurally, the project comprises 3 main levels of the data collection cycle: The Level 0 represents the information sources within ATLAS. These providers do not store information in a uniform fashion. The first step of the project was to define a common interface with which to expose stored data. The interface designed for the project originates from the Google Data Protocol API. The idea is to allow read-only access to data providers, through HTTP requests similar in format to the SQL query structure. This provides a standardized way to access these different information sources within ATLAS. The Level 1 can be considered the engine of the system. The primary task of the Level 1 is to gather data from multiple data sources via the common interface, to correlate this data together, or over a defined time series, and expose the combined data as a whole to the Level 2 web
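
    A query to such a Level 0 provider interface is an ordinary read-only HTTP GET whose parameters mimic an SQL SELECT. The endpoint and parameter names below are invented to illustrate the Google Data Protocol-style pattern the record describes:

        # Hypothetical read-only query against an ADAM-style provider interface.
        import urllib.parse
        import urllib.request

        BASE = "https://example.cern.ch/adam/provider"  # placeholder endpoint
        params = {
            "select": "hostname,temperature",
            "where": "rack='Y.25-05' AND time > '2011-06-01T00:00'",
            "format": "json",
        }
        url = BASE + "?" + urllib.parse.urlencode(params)
        with urllib.request.urlopen(url) as resp:  # needs a live provider
            print(resp.read()[:200])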

  7. The ADAM project: a generic web interface for retrieval and display of ATLAS TDAQ information

    International Nuclear Information System (INIS)

    Harwood, A; Miotto, G Lehmann; Magnoni, L; Vandelli, W; Savu, D

    2012-01-01

    This paper describes a new approach to the visualization of information about the operation of the ATLAS Trigger and Data Acquisition system. ATLAS is one of the two general purpose detectors positioned along the Large Hadron Collider at CERN. Its data acquisition system consists of several thousand computers interconnected via multiple gigabit Ethernet networks that are constantly monitored via different tools. Operational parameters ranging from the temperature of the computers to the network utilization are stored in several databases for later analysis. Although the ability to view these data-sets individually is already in place, currently there is no way to view this data together, in a uniform format, from one location. The ADAM project has been launched in order to overcome this limitation. It defines a uniform web interface to collect data from multiple providers that have different structures. It is capable of aggregating and correlating the data according to user-defined criteria. Finally, it visualizes the collected data using a flexible and interactive front-end web system. Structurally, the project comprises 3 main levels of the data collection cycle: The Level 0 represents the information sources within ATLAS. These providers do not store information in a uniform fashion. The first step of the project was to define a common interface with which to expose stored data. The interface designed for the project originates from the Google Data Protocol API. The idea is to allow read-only access to data providers, through HTTP requests similar in format to the SQL query structure. This provides a standardized way to access these different information sources within ATLAS. The Level 1 can be considered the engine of the system. The primary task of the Level 1 is to gather data from multiple data sources via the common interface, to correlate this data together, or over a defined time series, and expose the combined data as a whole to the Level 2 web

  8. The ADAM project: a generic web interface for retrieval and display of ATLAS TDAQ information.

    CERN Document Server

    Harwood, A; The ATLAS collaboration; Magnoni, L; Vandelli, W; Savu, D

    2011-01-01

    This paper describes a new approach to the visualization of stored information about the operation of the ATLAS Trigger and Data Acquisition system. ATLAS is one of the two general purpose detectors positioned along the Large Hadron Collider at CERN. Its data acquisition system consists of several thousand computers interconnected via multiple gigabit Ethernet networks, that are constantly monitored via different tools. Operational parameters ranging from the temperature of the computers to the network utilization are stored in several databases for later analysis. Although the ability to view these data-sets individually is already in place, currently there is no way to view this data together, in a uniform format, from one location. The ADAM project has been launched in order to overcome this limitation. It defines a uniform web interface to collect data from multiple providers that have different structures. It is capable of aggregating and correlating the data according to user defined criteria. Finally, ...

  9. ADAM Project – A generic web interface for retrieval and display of ATLAS TDAQ information.

    CERN Document Server

    Harwood, A; The ATLAS collaboration; Lehmann Miotto, G

    2011-01-01

    This paper describes a new approach to the visualization of stored information about the operation of the ATLAS Trigger and Data Acquisition system. ATLAS is one of the two general purpose detectors positioned along the Large Hadron Collider at CERN. Its data acquisition system consists of several thousand computers interconnected via multiple gigabit Ethernet networks, that are constantly monitored via different tools. Operational parameters ranging from the temperature of the computers to the network utilization are stored in several databases for later analysis. Although the ability to view these data-sets individually is already in place, there currently is no way to view this data together, in a uniform format, from one location. The ADAM project has been launched in order to overcome this limitation. It defines a uniform web interface to collect data from multiple, diversely structured providers. It is capable of aggregating and correlating the data according to user-defined criteria. Finally it v...

  10. Recent trends in print portals and Web2Print applications

    Science.gov (United States)

    Tuijn, Chris

    2009-01-01

    case, the ordering process is, of course, not fully automated. Standardized products, on the other hand, are easily identified and the cost charged to the print buyer can be retrieved from predefined price lists. Typically, higher volumes will result in more attractive prices. An additional advantage of this type of products is that they are often defined such that they can be produced in bulk using conventional printing techniques. If one wants to automate the ganging, a connection must be established between the on-line ordering and the production planning system. (For digital printing, there typically is no need to gang products since they can be produced more effectively separately.) Many of the on-line print solutions support additional features also available in general purpose e-commerce sites. We here think of the availability of virtual shopping baskets, the connectivity with payment gateways and the support of special facilities for interfacing with courier services (bar codes, connectivity to courier web sites for tracking shipments etc.). Supporting these features also assumes an intimate link with the print production system. Another development that goes beyond the on-line ordering of printed material and the submission of full pages and/or documents, is the interactive, on-line definition of the content itself. Typical applications in this respect are, e.g., the creation of business cards, leaflets, letter heads etc. On a more professional level, we also see that more and more publishing organizations start using on-line publishing platforms to organize their work. These professional platforms can also be connected directly to printing portals and thus enable extra automation. In this paper, we will discuss for each of the different applications presented above (traditional Print Portals, Web2Print applications and professional, on-line publishing platforms) how they interact with prepress and print production systems and how they contribute to the

  11. A WEB API AND WEB APPLICATION DEVELOPMENT FOR DISSEMINATION OF AIR QUALITY INFORMATION

    Directory of Open Access Journals (Sweden)

    K. Şahin

    2017-11-01

    Full Text Available Various studies have been carried out since 2005 under the leadership of the Ministry of Environment and Urbanism of Turkey, in order to observe the quality of air in Turkey, to develop new policies and to develop a sustainable air quality management strategy. For this reason, a national air quality monitoring network providing air quality indices has been developed. Through this network, air quality has been continuously monitored, and an important information system has been constructed in order to take precautions against dangerous situations. The biggest handicap of the network is the data access problem for instant and time-series data acquisition and processing, owing to its proprietary structure. Currently, there is no service offered by the current air quality monitoring system for exchanging information with third-party applications. Within the context of this work, a web service has been developed to enable location-based querying of current and past air quality data in Turkey. This web service is equipped with up-to-date and widely preferred technologies; in other words, an architecture is chosen with which applications can easily integrate. In the second phase of the study, a web-based application was developed to test the web service; this testing application can perform location-based acquisition of air quality data. This makes it possible to easily carry out operations such as screening and examination of an area in a given time frame, which cannot be done with the national monitoring network.

  12. a Web Api and Web Application Development for Dissemination of Air Quality Information

    Science.gov (United States)

    Şahin, K.; Işıkdağ, U.

    2017-11-01

    Various studies have been carried out since 2005 under the leadership of the Ministry of Environment and Urbanism of Turkey, in order to observe the quality of air in Turkey, to develop new policies and to develop a sustainable air quality management strategy. For this reason, a national air quality monitoring network providing air quality indices has been developed. Through this network, air quality has been continuously monitored, and an important information system has been constructed in order to take precautions against dangerous situations. The biggest handicap of the network is the data access problem for instant and time-series data acquisition and processing, owing to its proprietary structure. Currently, there is no service offered by the current air quality monitoring system for exchanging information with third-party applications. Within the context of this work, a web service has been developed to enable location-based querying of current and past air quality data in Turkey. This web service is equipped with up-to-date and widely preferred technologies; in other words, an architecture is chosen with which applications can easily integrate. In the second phase of the study, a web-based application was developed to test the web service; this testing application can perform location-based acquisition of air quality data. This makes it possible to easily carry out operations such as screening and examination of an area in a given time frame, which cannot be done with the national monitoring network.
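
    The service side of such an API can be sketched in a few lines. Here a toy Flask endpoint answers a location-based query from an in-memory table; the station list, parameter names, and nearest-station logic are all invented for illustration and are not the system described above.

        # Toy location-based air-quality endpoint (Flask).
        from flask import Flask, jsonify, request

        app = Flask(__name__)

        STATIONS = [  # (name, lat, lon, PM10 index) -- invented values
            ("Kadikoy", 40.99, 29.03, 42),
            ("Besiktas", 41.04, 29.00, 57),
        ]

        @app.route("/airquality")
        def airquality():
            lat = float(request.args["lat"])
            lon = float(request.args["lon"])
            # crude planar distance is enough for a sketch
            nearest = min(STATIONS,
                          key=lambda s: (s[1] - lat) ** 2 + (s[2] - lon) ** 2)
            return jsonify({"station": nearest[0], "pm10": nearest[3]})

        if __name__ == "__main__":
            app.run()  # then: GET /airquality?lat=41.0&lon=29.0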

  13. Web 2.0 and the Semantic Web: Implications for Future HEP Web Applications

    CERN Multimedia

    CERN. Geneva

    2006-01-01

    Bebo White is a Departmental Associate (retired) at SLAC and has spent considerable time at CERN. In addition, he holds faculty appointments at Hong Kong University, the University of San Francisco, and Contra Costa College. He is a frequent speaker at conferences, academic institutions, and for commercial organizations around the world. Bebo has been a member of the International World Wide Web Conference Committee (IW3C2) since 1996 and in that time has served as General Co-Chair of two of the conferences ...

  14. Toward Exposing Timing-Based Probing Attacks in Web Applications.

    Science.gov (United States)

    Mao, Jian; Chen, Yue; Shi, Futian; Jia, Yaoqi; Liang, Zhenkai

    2017-02-25

    Web applications have become the foundation of many types of systems, ranging from cloud services to Internet of Things (IoT) systems. Due to the large amount of sensitive data processed by web applications, user privacy emerges as a major concern in web security. Existing protection mechanisms in modern browsers, e.g., the same origin policy, prevent the users' browsing information on one website from being directly accessed by another website. However, web applications executed in the same browser share the same runtime environment. Such shared states provide side channels for malicious websites to indirectly figure out the information of other origins. Timing is a classic side channel and the root cause of many recent attacks, which rely on the variations in the time taken by the systems to process different inputs. In this paper, we propose an approach to expose the timing-based probing attacks in web applications. It monitors the browser behaviors and identifies anomalous timing behaviors to detect browser probing attacks. We have prototyped our system in the Google Chrome browser and evaluated the effectiveness of our approach by using known probing techniques. We have applied our approach on a large number of top Alexa sites and reported the suspicious behavior patterns with corresponding analysis results. Our theoretical analysis illustrates that the effectiveness of the timing-based probing attacks is dramatically limited by our approach.
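
    The detection idea (watch browser behavior for anomalous timing activity) can be reduced to a toy heuristic: count timing measurements per origin inside a short sliding interval and flag bursts. The event format and thresholds below are invented; the authors' Chrome instrumentation is far more involved.

        # Toy rate-based detector for timing-probe bursts (illustrative).
        from collections import defaultdict

        WINDOW_S = 1.0   # sliding interval in seconds
        MAX_PROBES = 50  # tolerated measurements per origin per interval

        def flag_probing(events):
            """events: iterable of (origin, timestamp_seconds) records."""
            recent = defaultdict(list)
            flagged = set()
            for origin, ts in sorted(events, key=lambda e: e[1]):
                q = recent[origin]
                q.append(ts)
                while q and ts - q[0] > WINDOW_S:
                    q.pop(0)
                if len(q) > MAX_PROBES:
                    flagged.add(origin)
            return flagged

        events = [("https://evil.example", i * 0.01) for i in range(100)]
        print(flag_probing(events))  # {'https://evil.example'}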

  15. Toward Exposing Timing-Based Probing Attacks in Web Applications

    Directory of Open Access Journals (Sweden)

    Jian Mao

    2017-02-01

    Full Text Available Web applications have become the foundation of many types of systems, ranging from cloud services to Internet of Things (IoT systems. Due to the large amount of sensitive data processed by web applications, user privacy emerges as a major concern in web security. Existing protection mechanisms in modern browsers, e.g., the same origin policy, prevent the users’ browsing information on one website from being directly accessed by another website. However, web applications executed in the same browser share the same runtime environment. Such shared states provide side channels for malicious websites to indirectly figure out the information of other origins. Timing is a classic side channel and the root cause of many recent attacks, which rely on the variations in the time taken by the systems to process different inputs. In this paper, we propose an approach to expose the timing-based probing attacks in web applications. It monitors the browser behaviors and identifies anomalous timing behaviors to detect browser probing attacks. We have prototyped our system in the Google Chrome browser and evaluated the effectiveness of our approach by using known probing techniques. We have applied our approach on a large number of top Alexa sites and reported the suspicious behavior patterns with corresponding analysis results. Our theoretical analysis illustrates that the effectiveness of the timing-based probing attacks is dramatically limited by our approach.

  16. Teaching web application development: Microsoft proprietary or open systems?

    Directory of Open Access Journals (Sweden)

    Stephen Corich

    Full Text Available This paper revisits the debate concerning which development environment should be used to teach server-side Web Application Development courses to undergraduate students. In 2002, following an industry-based survey of Web developers, a decision was made to adopt an open source platform consisting of PHP and MySQL rather than a Microsoft platform utilising Access and Active Server Pages. Since that date there have been a number of significant changes within the computing industry that suggest it is appropriate to revisit the original decision. This paper investigates expert opinion by reviewing current literature on web development environments, looks at the results of a survey of web development companies, and examines current employment trends in the web development area. The paper concludes by examining the impact of a decision to change the development environment used to teach Web Application Development to a third-year computing degree class, and describes the effect the change has had on course delivery.

  17. A Fuzzy Semantic Information Retrieval System for Transactional Applications

    OpenAIRE

    A O Ajayi; H A Soriyan; G A Aderounmu

    2009-01-01

    In this paper, we present an information retrieval system based on the concept of fuzzy logic to relate vague and uncertain objects with un-sharp boundaries. The simple but comprehensive user interface of the system permits the entering of uncertain specifications in query forms. The system was modelled and simulated in a Matlab environment; its implementation was carried out using Borland C++ Builder. The result of the performance measure of the system using precision and recall rates is enc...

  18. Remote data retrieval for bioinformatics applications: an agent migration approach.

    Directory of Open Access Journals (Sweden)

    Lei Gao

    Full Text Available Several approaches have been developed to retrieve data automatically from one or multiple remote biological data sources. However, most of them require researchers to remain online and wait for returned results. This not only requires a highly available network connection, but may also cause network overload. Moreover, none of the existing approaches has been designed to address the following problems when retrieving remote data in a mobile network environment: (1) the resources of mobile devices are limited; (2) the network connection is of relatively low quality; and (3) mobile users are not always online. To address the aforementioned problems, we integrate an agent migration approach with a multi-agent system to overcome the high-latency or limited-bandwidth problem by moving computations to the required resources or services. More importantly, the approach is well suited to mobile computing environments. Also presented in this paper are the system architecture, the migration strategy, and the security authentication of agent migration. As a demonstration, remote data retrieval from GenBank is used to illustrate the feasibility of the proposed approach.
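
    For contrast with the migrating-agent scheme, a conventional synchronous retrieval from GenBank (the data source used in the demonstration) blocks until the result returns, which is precisely the behavior the paper avoids. A sketch using Biopython's Entrez module, with placeholder e-mail and accession values:

        # Conventional blocking GenBank retrieval via Biopython's Entrez module.
        from Bio import Entrez

        Entrez.email = "you@example.org"  # NCBI requires a contact address
        handle = Entrez.efetch(db="nucleotide", id="NM_000518",
                               rettype="gb", retmode="text")
        record_text = handle.read()  # the caller waits here for the network
        handle.close()
        print(record_text[:200])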

  19. U.S. Geological Survey (USGS) Earthquake Web Applications

    Science.gov (United States)

    Fee, J.; Martinez, E.

    2015-12-01

    USGS Earthquake web applications provide access to earthquake information from USGS and other Advanced National Seismic System (ANSS) contributors. One of the primary goals of these applications is to provide a consistent experience for accessing both near-real-time information as soon as it is available and historic information after it is thoroughly reviewed. Millions of people use these applications every month, including people who feel an earthquake, emergency responders looking for the latest information about a recent event, and scientists researching historic earthquakes and their effects. Information from multiple catalogs and contributors is combined by the ANSS Comprehensive Catalog into one composite catalog, identifying the most preferred information from any source for each event. A web service and near-real-time feeds provide access to all contributed data, and are used by a number of users and software packages. The Latest Earthquakes application displays summaries of many events, either near-real-time feeds or custom searches, and the Event Page application shows detailed information for each event. Because all data is accessed through the web service, it can also be downloaded by users. The applications are maintained as open source projects on github, and use mobile-first and responsive-web-design approaches to work well on both mobile devices and desktop computers. http://earthquake.usgs.gov/earthquakes/map/
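
    The web service mentioned above follows the FDSN event specification, so a catalog search is a single parameterized GET returning GeoJSON. The parameter values below are illustrative; the endpoint is the publicly documented one at earthquake.usgs.gov.

        # Query the USGS earthquake catalog web service (FDSN event interface).
        import json
        import urllib.parse
        import urllib.request

        params = urllib.parse.urlencode({
            "format": "geojson",
            "starttime": "2015-01-01",
            "endtime": "2015-01-31",
            "minmagnitude": 6,
        })
        url = "https://earthquake.usgs.gov/fdsnws/event/1/query?" + params
        with urllib.request.urlopen(url) as resp:
            quakes = json.load(resp)
        for feature in quakes["features"][:5]:
            print(feature["properties"]["mag"], feature["properties"]["place"])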

  20. Relational Constraint Driven Test Case Synthesis for Web Applications

    Directory of Open Access Journals (Sweden)

    Xiang Fu

    2010-09-01

    Full Text Available This paper proposes a relational constraint driven technique that synthesizes test cases automatically for web applications. Using a static analysis, servlets can be modeled as relational transducers, which manipulate backend databases. We present a synthesis algorithm that generates a sequence of HTTP requests for simulating a user session. The algorithm relies on backward symbolic image computation for reaching a certain database state, given a code coverage objective. With a slight adaptation, the technique can be used for discovering workflow attacks on web applications.

  1. Web application security analysis using the Kali Linux operating system

    OpenAIRE

    BABINCEV IVAN M.; VULETIC DEJAN V.

    2016-01-01

    The Kali Linux operating system is described, as well as its purpose and possibilities. The groups of tools that Kali Linux contains are listed, together with the methods of their functioning, as well as the possibility of installing and using tools that are not an integral part of Kali. The final part shows practical testing of web applications using the tools from the Kali Linux operating system. The paper thus shows a part of the possibilities of this operating system in analysing web applications ...

  2. Design of a web portal for interdisciplinary image retrieval from multiple online image resources.

    Science.gov (United States)

    Kammerer, F J; Frankewitsch, T; Prokosch, H-U

    2009-01-01

    Images play an important role in medicine. Finding the desired images within the multitude of online image databases is a time-consuming and frustrating process, and existing websites do not meet all the requirements for an ideal learning environment for medical students. This work intends to establish a new web portal providing a centralized access point to a selected number of online image databases. A back-end system locates images on given websites and extracts the relevant metadata. The images are indexed using the UMLS and the MetaMap system provided by the US National Library of Medicine. Specially developed functions allow the creation of individual navigation structures. The front-end system suits the specific needs of medical students. A navigation structure consisting of several medical fields, university curricula and the ICD-10 was created. The images may be accessed via the given navigation structure or using different search functions. Cross-references are provided by the semantic relations of the UMLS. Over 25,000 images were identified and indexed. A pilot evaluation among medical students showed good first results concerning the acceptance of the developed navigation structures and search features. The integration of the images from different sources into the UMLS semantic network offers a quick and easy-to-use learning environment.

  3. Web Application Software for Ground Operations Planning Database (GOPDb) Management

    Science.gov (United States)

    Lanham, Clifton; Kallner, Shawn; Gernand, Jeffrey

    2013-01-01

    A Web application facilitates collaborative development of the ground operations planning document. This will reduce costs and development time for new programs by incorporating the data governance, access control, and revision tracking of the ground operations planning data. Ground Operations Planning requires the creation and maintenance of detailed timelines and documentation. The GOPDb Web application was created using state-of-the-art Web 2.0 technologies, and was deployed as SaaS (Software as a Service), with an emphasis on data governance and security needs. Application access is managed using two-factor authentication, with data write permissions tied to user roles and responsibilities. Multiple instances of the application can be deployed on a Web server to meet the robust needs for multiple, future programs with minimal additional cost. This innovation features high availability and scalability, with no additional software that needs to be bought or installed. For data governance and security (data quality, management, business process management, and risk management for data handling), the software uses NAMS. No local copy/cloning of data is permitted. Data change log/tracking is addressed, as well as collaboration, work flow, and process standardization. The software provides on-line documentation and detailed Web-based help. There are multiple ways that this software can be deployed on a Web server to meet ground operations planning needs for future programs. The software could be used to support commercial crew ground operations planning, as well as commercial payload/satellite ground operations planning. The application source code and database schema are owned by NASA.

  4. Decomposition of monolithic web application to microservices

    OpenAIRE

    Zaymus, Mikulas

    2017-01-01

    Solteq Oyj has an internal Wellbeing project for massage reservations. The task of this thesis was to transform the monolithic architecture of this application to microservices. The thesis starts with a detailed comparison between microservices and monolithic application. It points out the benefits and disadvantages microservice architecture can bring to the project. Next, it describes the theory and possible strategies that can be used in the process of decomposition of an existing monoli...

  5. Techniques for Finding Vulnerabilities in Web Applications

    OpenAIRE

    Mihai Sandulescu

    2014-01-01

    The current trend is to move everything to the Internet. Because many companies store sensitive user information, security has become mandatory. Usually, software developers do not follow some basic practices for securing their applications. In its second chapter, this paper presents the white-box, black-box and gray-box methods which can be used to test applications for possible vulnerabilities. It focuses on fuzz testing, which is a black-box testing method, presented ...
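
    A minimal illustration of the black-box fuzzing idea the paper focuses on: take a valid input, apply random byte mutations, and watch the target for unexpected failures. The target below is a stand-in JSON parser; real fuzzers add coverage feedback, corpus management, and crash triage.

        # Minimal mutation-based fuzzing loop (illustrative).
        import json
        import random

        def target(data: bytes):
            """Stand-in for the application code under test."""
            json.loads(data.decode("utf-8", errors="replace"))

        seed = b'{"user": "alice", "age": 30}'
        random.seed(1)

        for _ in range(1000):
            data = bytearray(seed)
            for _ in range(random.randint(1, 4)):  # flip a few random bytes
                data[random.randrange(len(data))] = random.randrange(256)
            try:
                target(bytes(data))
            except ValueError:
                pass  # an ordinary parse rejection is not interesting
            except Exception as exc:
                print("unexpected failure on input", bytes(data), exc)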

  6. Total Bregman Divergence and its Applications to Shape Retrieval.

    Science.gov (United States)

    Liu, Meizhu; Vemuri, Baba C; Amari, Shun-Ichi; Nielsen, Frank

    2010-01-01

    Shape database search is ubiquitous in the world of biometric systems, CAD systems, etc. Shape data in these domains is experiencing explosive growth, and a variety of tasks usually require searching whole shape databases to retrieve the best matches with accuracy and efficiency. In this paper, we present a novel divergence measure between any two given points in $\mathbb{R}^n$ or two distribution functions. This divergence measures the orthogonal distance between the tangent to the convex function (used in the definition of the divergence) at one of its input arguments and its second argument. This is in contrast to the ordinate distance taken in the usual definition of the Bregman class of divergences [4]. We use this orthogonal distance to redefine the Bregman class of divergences and develop a new theory for estimating the center of a set of vectors as well as of probability distribution functions. The new class of divergences is dubbed the total Bregman divergence (TBD). We present the $\ell_1$-norm based TBD center, dubbed the t-center, which is then used as the cluster center of a class of shapes. The t-center is a weighted mean whose weights are small for noise and outliers. We present a shape retrieval scheme using TBD and the t-center for representing the classes of shapes from the MPEG-7 database and compare the results with other state-of-the-art methods in the literature.
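
    For reference, the construction the abstract describes can be written out explicitly: the ordinary Bregman divergence measures the ordinate gap between f and its tangent, and the total variant rescales that gap by the slope of the tangent, which yields the orthogonal distance. A sketch of the two definitions, following the authors' published formulation (worth checking against the paper itself):

        % Ordinary Bregman divergence for a convex, differentiable f:
        d_f(x, y) = f(x) - f(y) - \langle x - y, \nabla f(y) \rangle

        % Total Bregman divergence: the same gap rescaled by the tangent's
        % slope, giving the orthogonal distance to the tangent:
        \delta_f(x, y) = \frac{d_f(x, y)}{\sqrt{1 + \| \nabla f(y) \|^2}}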

  7. A Secure Web Application Providing Public Access to High-Performance Data Intensive Scientific Resources - ScalaBLAST Web Application

    International Nuclear Information System (INIS)

    Curtis, Darren S.; Peterson, Elena S.; Oehmen, Chris S.

    2008-01-01

    This work presents the ScalaBLAST Web Application (SWA), a web-based application implemented using the PHP scripting language, MySQL DBMS, and Apache web server under a GNU/Linux platform. SWA was built as part of the Data Intensive Computing for Complex Biological Systems (DICCBS) project at the Pacific Northwest National Laboratory (PNNL). SWA delivers accelerated throughput of bioinformatics analysis via high-performance computing through a convenient, easy-to-use web interface. This approach greatly enhances emerging fields of study in biology, such as ontology-based homology and multiple whole-genome comparisons, which, in the absence of a tool like SWA, require a heroic effort to overcome the computational bottleneck associated with genome analysis. The current version of SWA includes a user account management system, a web-based user interface, and a backend process that generates the files necessary for the Internet scientific community to submit a ScalaBLAST parallel processing job on a dedicated cluster.

  8. Efficient Image Blur in Web-Based Applications

    DEFF Research Database (Denmark)

    Kraus, Martin

    2010-01-01

    Scripting languages require the use of high-level library functions to implement efficient image processing; thus, real-time image blur in web-based applications is a challenging task unless specific library functions are available for this purpose. We present a pyramid blur algorithm, which can ...
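
    Although the abstract is cut off, the pyramid idea can be sketched: approximate a wide blur by repeatedly downsampling and then upsampling, which bulk array operations express efficiently. The NumPy version below is a generic sketch, not the paper's implementation.

    ```python
    import numpy as np

    def pyramid_blur(img, levels=3):
        """Approximate a wide blur: average 2x2 blocks `levels` times,
        then expand back; each level roughly doubles the blur radius."""
        small = img.astype(float)
        for _ in range(levels):  # downsample by block averaging
            h, w = small.shape[0] // 2 * 2, small.shape[1] // 2 * 2
            small = small[:h, :w].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
        for _ in range(levels):  # upsample by nearest-neighbour expansion
            small = small.repeat(2, axis=0).repeat(2, axis=1)
        return small[:img.shape[0], :img.shape[1]]

    print(pyramid_blur(np.eye(8)).shape)  # (8, 8)
    ```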

  9. Functionality for learning networks: lessons learned from social web applications

    NARCIS (Netherlands)

    Berlanga, Adriana; Sloep, Peter; Brouns, Francis; Van Rosmalen, Peter; Bitter-Rijpkema, Marlies; Koper, Rob

    2007-01-01

    Berlanga, A. J., Sloep, P., Brouns, F., Van Rosmalen, P., Bitter-Rijpkema, M., & Koper, R. (2007). Functionality for learning networks: lessons learned from social web applications. Proceedings of the ePortfolio 2007 Conference. October, 18-19, 2007, Maastricht, The Netherlands. [See also

  10. Empower the patients with a dialogue-based web application

    DEFF Research Database (Denmark)

    Bjørnes, Charlotte D.; Cummings, Elizabeth; Nøhr, Christian

    2012-01-01

    A dialogue-based web application was designed and implemented to accommodate patients' information and communication needs in short-stay hospital settings. To ensure the system meets the patients' needs, both patients and healthcare professionals were involved in the design process by applying various participatory...

  11. Development of WEB Applications of The Component – Open Source

    Directory of Open Access Journals (Sweden)

    Arturo Sergio Medina Castillo

    2013-06-01

    Nowadays software development does not start from scratch; it already has a set of tools provided by frameworks, which enable faster application development, a relevant and indispensable factor in supporting the continuous improvement processes that seek higher levels of competitiveness in a global society. In all respects, the development of Web applications, whether open source or proprietary, is advancing rapidly, providing levels of communication, interoperability, and access for internal and external customers that allow management to support different business processes.

  12. Migrating Multi-page Web Applications to Single-page AJAX Interfaces

    NARCIS (Netherlands)

    Mesbah, A.; Van Deursen, A.

    2006-01-01

    Recently, a new web development technique for creating interactive web applications, dubbed AJAX, has emerged. In this new model, the single-page web interface is composed of individual components which can be updated or replaced independently. With the rise of AJAX web applications, classical

  13. A STUDY ON RANKING METHOD IN RETRIEVING WEB PAGES BASED ON CONTENT AND LINK ANALYSIS: COMBINATION OF FOURIER DOMAIN SCORING AND PAGERANK SCORING

    Directory of Open Access Journals (Sweden)

    Diana Purwitasari

    2008-01-01

    The ranking module is an important component of the search process, sorting through relevant pages. Since a collection of Web pages has additional information inherent in the hyperlink structure of the Web, this can be represented as a link score and then combined with the content score of the usual information retrieval techniques. In this paper we report our studies on a ranking score for Web pages combined from link analysis, PageRank Scoring, and content analysis, Fourier Domain Scoring. Our experiments use a collection of Web pages related to the Statistics subject from Wikipedia, with the objectives of checking correctness and evaluating the performance of the combined ranking method. Evaluation of PageRank Scoring shows that the highest score does not always relate to Statistics. Since links within Wikipedia articles exist so that users are always one click away from more information on any point that has a link attached, it is possible that topics unrelated to Statistics are frequently mentioned in the collection. The combination method shows that a link score given a weight proportional to the content score of Web pages does affect the retrieval results.
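
    The weighted combination studied in the paper can be sketched as follows; the weight w and the toy scores are illustrative assumptions, not the paper's tuned values.

    ```python
    def combined_score(content_score, link_score, w=0.5):
        """Blend a content score (e.g. Fourier Domain Scoring) with a link
        score (PageRank) into a single ranking score."""
        return (1 - w) * content_score + w * link_score

    # Toy example: (content score, link score) per page.
    pages = {"Statistics": (0.9, 0.2), "Probability": (0.7, 0.6)}
    ranking = sorted(pages, key=lambda p: combined_score(*pages[p]), reverse=True)
    print(ranking)  # ['Probability', 'Statistics'] with w = 0.5
    ```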

  14. Distributed nuclear medicine applications using World Wide Web and Java technology

    International Nuclear Information System (INIS)

    Knoll, P.; Hoell, K.; Koriska, K.; Mirzaei, S.; Koehn, H.

    2000-01-01

    At present, medical applications applying World Wide Web (WWW) technology are mainly used to view static images and to retrieve some information. The Java platform is a relatively new way of computing, especially designed for network computing and distributed applications, which enables interactive connection between user and information via the WWW. The Java 2 Software Development Kit (SDK), including the Java2D API, Java Remote Method Invocation (RMI) technology, Object Serialization and the Java Advanced Imaging (JAI) extension, was used to achieve a robust, platform-independent and network-centric solution. Medical image processing software based on this technology is presented, and adequate performance capability of Java is demonstrated by an iterative reconstruction algorithm for single photon emission computerized tomography (SPECT). (orig.)

  15. Open chemistry: RESTful web APIs, JSON, NWChem and the modern web application

    OpenAIRE

    Hanwell, Marcus D.; de Jong, Wibe A.; Harris, Christopher J.

    2017-01-01

    An end-to-end platform for chemical science research has been developed that integrates data from computational and experimental approaches through a modern web-based interface. The platform offers an interactive visualization and analytics environment that functions well on mobile, laptop and desktop devices. It offers pragmatic solutions to ensure that large and complex data sets are more accessible. Existing desktop applications/frameworks were extended to integrate w...

  16. A Semantic Sensor Web for Environmental Decision Support Applications

    Science.gov (United States)

    Gray, Alasdair J. G.; Sadler, Jason; Kit, Oles; Kyzirakos, Kostis; Karpathiotakis, Manos; Calbimonte, Jean-Paul; Page, Kevin; García-Castro, Raúl; Frazer, Alex; Galpin, Ixent; Fernandes, Alvaro A. A.; Paton, Norman W.; Corcho, Oscar; Koubarakis, Manolis; De Roure, David; Martinez, Kirk; Gómez-Pérez, Asunción

    2011-01-01

    Sensing devices are increasingly being deployed to monitor the physical world around us. One class of application for which sensor data is pertinent is environmental decision support systems, e.g., flood emergency response. For these applications, the sensor readings need to be put in context by integrating them with other sources of data about the surrounding environment. Traditional systems for predicting and detecting floods rely on methods that need significant human resources. In this paper we describe a semantic sensor web architecture for integrating multiple heterogeneous datasets, including live and historic sensor data, databases, and map layers. The architecture provides mechanisms for discovering datasets, defining integrated views over them, continuously receiving data in real-time, and visualising on screen and interacting with the data. Our approach makes extensive use of web service standards for querying and accessing data, and semantic technologies to discover and integrate datasets. We demonstrate the use of our semantic sensor web architecture in the context of a flood response planning web application that uses data from sensor networks monitoring the sea-state around the coast of England. PMID:22164110

  17. A RESTful interface to pseudonymization services in modern web applications.

    Science.gov (United States)

    Lablans, Martin; Borg, Andreas; Ückert, Frank

    2015-02-07

    Medical research networks rely on record linkage and pseudonymization to determine which records from different sources relate to the same patient. To establish informational separation of powers, the required identifying data are redirected to a trusted third party that has, in turn, no access to medical data. This pseudonymization service receives identifying data, compares them with a list of already reported patient records and replies with a (new or existing) pseudonym. We found existing solutions to be technically outdated, complex to implement or not suitable for internet-based research infrastructures. In this article, we propose a new RESTful pseudonymization interface tailored for use in web applications accessed by modern web browsers. The interface is modelled as a resource-oriented architecture, which is based on the representational state transfer (REST) architectural style. We translated typical use-cases into resources to be manipulated with well-known HTTP verbs. Patients can be re-identified in real-time by authorized users' web browsers using temporary identifiers. We encourage the use of PID strings for pseudonyms and the EpiLink algorithm for record linkage. As a proof of concept, we developed a Java Servlet as reference implementation. The following resources have been identified: Sessions allow data associated with a client to be stored beyond a single request while still maintaining statelessness. Tokens authorize for a specified action and thus allow the delegation of authentication. Patients are identified by one or more pseudonyms and carry identifying fields. Relying on HTTP calls alone, the interface is firewall-friendly. The reference implementation has proven to be production stable. The RESTful pseudonymization interface fits the requirements of web-based scenarios and allows building applications that make pseudonymization transparent to the user using ordinary web technology. The open-source reference implementation implements the
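
    The resource flow described above (session, then token, then patient) might look as follows from a web client; the base URL, paths, and JSON fields are hypothetical, since the article's reference implementation defines its own.

    ```python
    import requests

    BASE = "https://pseudonymizer.example.org"  # hypothetical service

    # 1. Open a session, then obtain a token authorizing one "addPatient" action.
    session_id = requests.post(f"{BASE}/sessions").json()["sessionId"]
    token = requests.post(f"{BASE}/sessions/{session_id}/tokens",
                          json={"allowedUses": ["addPatient"]}).json()["tokenId"]

    # 2. Submit identifying fields; the service answers with a pseudonym,
    #    newly created or already on record for this patient.
    reply = requests.post(f"{BASE}/patients", params={"tokenId": token},
                          json={"firstName": "Jane", "lastName": "Doe",
                                "birthDate": "1970-01-01"})
    print(reply.json())  # e.g. {"pseudonym": "ABC123XY"}
    ```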

  18. Specification and Verification of Web Applications in Rewriting Logic

    Science.gov (United States)

    Alpuente, María; Ballis, Demis; Romero, Daniel

    This paper presents a Rewriting Logic framework that formalizes the interactions between Web servers and Web browsers through a communicating protocol abstracting HTTP. The proposed framework includes a scripting language that is powerful enough to model the dynamics of complex Web applications by encompassing the main features of the most popular Web scripting languages (e.g. PHP, ASP, Java Servlets). We also provide a detailed characterization of browser actions (e.g. forward/backward navigation, page refresh, and new window/tab openings) via rewrite rules, and show how our models can be naturally model-checked by using the Linear Temporal Logic of Rewriting (LTLR), which is a Linear Temporal Logic specifically designed for model-checking rewrite theories. Our formalization is particularly suitable for verification purposes, since it allows one to perform in-depth analyses of many subtle aspects related to Web interaction. Finally, the framework has been completely implemented in Maude, and we report on some successful experiments that we conducted by using the Maude LTLR model-checker.

  19. Neural network retrieval of soil moisture: application to SMOS

    Science.gov (United States)

    Rodriguez-Fernandez, Nemesio; Richaume, Philippe; Aires, Filipe; Prigent, Catherine; Kerr, Yann; Kolasssa, Jana; Jimenez, Carlos; Cabot, Francois; Mahmoodi, Ali

    2014-05-01

    We present an efficient statistical soil moisture (SM) retrieval method using SMOS brightness temperatures (BTs) complemented with MODIS NDVI and ASCAT backscattering data. The method is based on a feed-forward neural network (hereafter NN) trained with SM from ECMWF model predictions or from the SMOS operational algorithm. The best compromise to retrieve SM with NNs from SMOS brightness temperatures in a large fraction of the swath (~ 670 km) is to use incidence angles from 25 to 60 degrees (in 7 bins of 5 deg width) for both H and V polarizations. The correlation coefficient (R) of the SM retrieved by the NN and the reference SM dataset (ECMWF or SMOS L3) is 0.8. The correlation coefficient increases to 0.91 when adding as input MODIS NDVI, ECOCLIMAP sand and clay fractions and one of the following data: (i) active microwaves observations (ASCAT backscattering coefficient at 40 deg incidence angle), (ii) ECMWF soil temperature. Finally, the correlation coefficient increases to R=0.94 when using a normalization index computed locally for each latitude-longitude point with the maximum and minimum BTs and the associated SM values from the local time series. Global maps of SM obtained with NNs reproduce well the spatial structures present in the reference SM datasets, implying that the NN works well for a wide range of ecosystems and physical conditions. In addition, the results of the NNs have been evaluated at selected locations for which in situ measurements are available such as the USDA-ARS watersheds (USA), the OzNet network (AUS) and USDA-NRCS SCAN network (USA). The time series of SM obtained with NNs reproduce the temporal behavior measured with in situ sensors. For well known sites where the in situ measurement is representative of a 40 km scale like the Little Washita watershed, the NN models show a very high correlation of (R = 0.8-0.9) and a low standard deviation of 0.02-0.04 m3/m3 with respect to the in situ measurements. When comparing with all the in

  20. Access Control of Web- and Java-Based Applications

    Science.gov (United States)

    Tso, Kam S.; Pajevski, Michael J.

    2013-01-01

    Cybersecurity has become a great concern as threats of service interruption, unauthorized access, stealing and altering of information, and spreading of viruses have become more prevalent and serious. Application-layer access control of applications is a critical component in the overall security solution that also includes encryption, firewalls, virtual private networks, antivirus, and intrusion detection. An access control solution, based on an open-source access manager augmented with custom software components, was developed to provide protection to both Web-based and Java-based client and server applications. The DISA Security Service (DISA-SS) provides common access control capabilities for AMMOS software applications through a set of application programming interfaces (APIs) and network-accessible security services for authentication, single sign-on, authorization checking, and authorization policy management. The OpenAM access management technology designed for Web applications can be extended to meet the needs of Java thick clients and standalone servers that are commonly used in the JPL AMMOS environment. The DISA-SS reusable components have greatly reduced the effort for each AMMOS subsystem to develop its own access control strategy. The novelty of this work is that it leverages an open-source access management product that was designed for Web-based applications to provide access control for Java thick clients and Java standalone servers. Thick clients and standalone servers are still commonly used in businesses and government, especially for applications that require rich graphical user interfaces and high-performance visualization that cannot be met by thin clients running on Web browsers.

  1. Detection of the Security Vulnerabilities in Web Applications

    Directory of Open Access Journals (Sweden)

    2009-01-01

    Contemporary organizations develop their business processes in a very complex environment. IT&C technologies are used by organizations to improve their competitive advantage. But IT&C technologies are not perfect; they are developed in an iterative process, and their quality is the result of the lifecycle activities. Audit and evaluation processes are required by the increased complexity of the business processes supported by IT&C technologies. In order to organize and develop a high-quality audit process, the evaluation team must analyze the risks, threats and vulnerabilities of the information system. The paper highlights the security vulnerabilities in web applications and the processes of their detection. Web applications are used as IT&C tools to support distributed information processes. They are a major component of distributed information systems. The audit and evaluation processes are carried out in accordance with the international standards developed for information system security assurance.

  2. Situational Requirements Engineering for the Development of Content Management System-based Web Applications

    NARCIS (Netherlands)

    Souer, J.; van de Weerd, I.; Versendaal, J.M.; Brinkkemper, S.

    2005-01-01

    Web applications are evolving towards strongly content-centered applications. The development processes and implementation of these applications are unlike those of traditional information systems. In this paper we propose the WebEngineering Method; a method for developing

  3. User Interface Design in Medical Distributed Web Applications.

    Science.gov (United States)

    Serban, Alexandru; Crisan-Vida, Mihaela; Mada, Leonard; Stoicu-Tivadar, Lacramioara

    2016-01-01

    User interfaces are important for facilitating easy learning and operation of an IT application, especially in the medical world. An easy-to-use interface has to be simple and to accommodate the user's needs and mode of operation; the technology in the background is an important tool for accomplishing this. The present work aims to create a web interface using specific technology (HTML table design combined with CSS3) to provide an optimized, responsive interface for a complex web application. In the first phase, the current icMED web medical application layout is analyzed and its structure is designed using specific tools on source files. In the second phase, a new graphical interface adaptable to different mobile terminals is proposed (using HTML table design (TD) and the CSS3 method) that uses no source files, just lines of code for layout design, improving the interaction in terms of speed and simplicity. For a complex medical software application, a new prototype layout was designed and developed using HTML tables. The method uses CSS code with only CSS classes applied to one or multiple HTML table elements, instead of CSS styles that can be applied to just one DIV tag at once. The technique has the advantage of simplified CSS code and better adaptability to different media resolutions compared to the DIV-CSS style method. The presented work is proof that adaptive web interfaces can be developed just by using and combining different types of design methods and technologies, such as HTML table design, resulting in an interface that is simpler to learn and use, suitable for healthcare services.

  4. Angular JS – The Newest Technology in Creating Web Applications

    Directory of Open Access Journals (Sweden)

    Radu BUCEA-MANEA-TONIS

    2016-09-01

    This article is the result of searching for and selecting new technologies that help programmers develop web applications. It also represents a plea for using AngularJS, showing its advantages and disadvantages. The article covers features regarding data binding, modules, filters and directives. It is a synthesis and a guide of good practice for innovative programmers. All technical issues presented are supported by a case study.

  5. WeBIAS: a web server for publishing bioinformatics applications.

    Science.gov (United States)

    Daniluk, Paweł; Wilczyński, Bartek; Lesyng, Bogdan

    2015-11-02

    One of the requirements for a successful scientific tool is its availability. Developing a functional web service, however, is usually considered a mundane and ungratifying task, and quite often neglected. When publishing bioinformatics applications, such an attitude puts an additional burden on the reviewers, who have to cope with poorly designed interfaces in order to assess the quality of the presented methods, and it impairs the actual usefulness to the scientific community at large. In this note we present WeBIAS, a simple, self-contained solution to make command-line programs accessible through web forms. It comprises a web portal capable of serving several applications and backend schedulers which carry out computations. The server handles user registration and authentication, stores queries and results, and provides a convenient administrator interface. WeBIAS is implemented in Python and available under the GNU Affero General Public License. It has been developed and tested on GNU/Linux-compatible platforms covering a vast majority of operational WWW servers. Since it is written in pure Python, it should be easy to deploy also on all other platforms supporting Python (e.g. Windows, Mac OS X). Documentation and source code, as well as a demonstration site, are available at http://bioinfo.imdik.pan.pl/webias. WeBIAS has been designed specifically with ease of installation and deployment of services in mind. Setting up a simple application requires minimal effort, yet it is possible to create visually appealing, feature-rich interfaces for query submission and presentation of results.

  6. WEBnm@: a web application for normal mode analyses of proteins

    Directory of Open Access Journals (Sweden)

    Reuter Nathalie

    2005-03-01

    Background: Normal mode analysis (NMA) has become the method of choice to investigate the slowest motions in macromolecular systems. NMA is especially useful for large biomolecular assemblies, such as transmembrane channels or virus capsids. NMA relies on the hypothesis that the vibrational normal modes having the lowest frequencies (also named soft modes) describe the largest movements in a protein and are the ones that are functionally relevant. Results: We developed a web-based server to perform normal mode calculations and different types of analyses. Starting from a structure file provided by the user in PDB format, the server calculates the normal modes and subsequently offers the user a series of automated calculations: normalized squared atomic displacements, vector field representation and animation of the first six vibrational modes. Each analysis is performed independently from the others, and results can be visualized using only a web browser. No additional plug-in or software is required. For users who would like to analyze the results with their favorite software, raw results can also be downloaded. The application is available at http://www.bioinfo.no/tools/normalmodes. We present here the underlying theory, the application architecture and an illustration of its features using a large transmembrane protein as an example. Conclusion: We built an efficient and modular web application for normal mode analysis of proteins. Non-specialists can easily and rapidly evaluate the degree of flexibility of multi-domain protein assemblies and characterize the large-amplitude movements of their domains.
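
    The computational core such a server wraps can be sketched in a few lines: diagonalize a (mass-weighted) Hessian and keep the softest eigenvectors. The random matrix below is a stand-in for a real force-constant matrix, so only the mechanics of the calculation are meaningful.

    ```python
    import numpy as np

    n_atoms = 4
    rng = np.random.default_rng(0)
    A = rng.standard_normal((3 * n_atoms, 3 * n_atoms))
    hessian = A @ A.T  # symmetric positive semi-definite stand-in

    # eigh sorts ascending: eigenvalues relate to squared frequencies,
    # eigenvectors are the normal modes, softest first.
    eigvals, eigvecs = np.linalg.eigh(hessian)

    # In a real molecule the first six modes are rigid-body motions;
    # the following soft modes are the functionally interesting ones.
    soft = eigvecs[:, 6:12]
    disp = (soft.T.reshape(6, n_atoms, 3) ** 2).sum(axis=2)
    print(disp)  # squared atomic displacements, one row per soft mode
    ```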

  7. AN INNOVATIVE WEB MINING APPLICATION ON BLOGS - A LAYOUT

    Directory of Open Access Journals (Sweden)

    S. Prakash

    2012-01-01

    Blogs and Web services allow users to express their opinions and interests in the form of small text messages that give abbreviated and highly personalized remarks in real time. Recognizing emotion is really significant for a text-based communication tool such as blogs. Nowadays, user opinions in the form of comments and reviews in blogs have been utilized by researchers for various purposes. Among them, the application of sentiment analysis techniques to these opinions is an interesting one. This paper proposes a software architecture for constructing Web mining applications in the blog world. The design includes blog crawling and data mining algorithms, to offer a full-fledged and flexible solution for constructing general-purpose Web mining applications. The architecture allows some significant customizations, such as the construction of adapters for reading text from different blogs, and the utilization of different pre-processing methods and data mining procedures. The core of this paper explains the innovative software architecture of the general framework, offering thorough information about the data mining sub-framework.

  8. SCALEUS: Semantic Web Services Integration for Biomedical Applications.

    Science.gov (United States)

    Sernadela, Pedro; González-Castro, Lorena; Oliveira, José Luís

    2017-04-01

    In recent years, we have witnessed an explosion of biological data resulting largely from the demands of life science research. The vast majority of these data are freely available via diverse bioinformatics platforms, including relational databases and conventional keyword search applications. This type of approach has achieved great results in the last few years, but proved to be unfeasible when information needs to be combined or shared among different and scattered sources. During recent years, many of these data distribution challenges have been solved with the adoption of the semantic web. Despite the evident benefits of this technology, its adoption introduced new challenges related to the migration process from existing systems to the semantic level. To facilitate this transition, we have developed Scaleus, a semantic web migration tool that can be deployed on top of traditional systems in order to bring knowledge, inference rules, and query federation to the existent data. Targeted at the biomedical domain, this web-based platform offers, in a single package, straightforward data integration and semantic web services that help developers and researchers in the creation of new semantically enhanced information systems. SCALEUS is available as open source at http://bioinformatics-ua.github.io/scaleus/.

  9. Open chemistry: RESTful web APIs, JSON, NWChem and the modern web application.

    Science.gov (United States)

    Hanwell, Marcus D; de Jong, Wibe A; Harris, Christopher J

    2017-10-30

    An end-to-end platform for chemical science research has been developed that integrates data from computational and experimental approaches through a modern web-based interface. The platform offers an interactive visualization and analytics environment that functions well on mobile, laptop and desktop devices. It offers pragmatic solutions to ensure that large and complex data sets are more accessible. Existing desktop applications/frameworks were extended to integrate with high-performance computing resources, and offer command-line tools to automate interaction, connecting distributed teams to this software platform on their own terms. The platform was developed openly, with all source code hosted on the GitHub platform and automated deployment possible using Ansible coupled with standard Ubuntu-based machine images deployed to cloud machines. The platform is designed to enable teams to reap the benefits of the connected web, going beyond what conventional search and analytics platforms offer in this area. It also has the goal of offering federated instances that can be customized to the sites/research performed. Data gets stored using JSON, extending upon previous approaches using XML, building structures that support computational chemistry calculations. These structures were developed to make it easy to process data across different languages, and send data to a JavaScript-based web client.

  10. Taking advantage of Google's Web-based applications and services.

    Science.gov (United States)

    Brigham, Tara J

    2014-01-01

    Google is a company that is constantly expanding and growing its services and products. While most librarians possess a "love/hate" relationship with Google, there are a number of reasons to consider exploring some of the tools Google has created and made freely available. Applications and services such as Google Docs, Slides, and Google+ are functional and dynamic without the cost of comparable products. This column addresses some of the issues users should be aware of before signing up for Google's tools, and describes some of Google's Web applications and services, plus how they can be useful to librarians in health care.

  11. Photonics applications and web engineering: WILGA Summer 2016

    Science.gov (United States)

    Romaniuk, Ryszard S.

    2016-09-01

    Wilga Summer 2016 Symposium on Photonics Applications and Web Engineering was held on 29 May - 06 June. The Symposium gathered over 350 participants, mainly young researchers active in optics, optoelectronics, photonics, and electronics technologies and applications. Around 300 presentations were given in a few main topical tracks, including: bio-photonics, optical sensory networks, photonics-electronics-mechatronics co-design and integration, large functional system design and maintenance, the Internet of Things, and others. The paper is an introduction to the 2016 WILGA Summer Symposium Proceedings, and digests some of the Symposium's chosen key presentations.

  12. Photonics applications and web engineering: WILGA Summer 2015

    Science.gov (United States)

    Romaniuk, Ryszard S.

    2015-09-01

    Wilga Summer 2015 Symposium on Photonics Applications and Web Engineering was held on 23-31 May. The Symposium gathered over 350 participants, mainly young researchers active in optics, optoelectronics, photonics, and electronics technologies and applications. Around 300 presentations were given in a few main topical tracks, including: bio-photonics, optical sensory networks, photonics-electronics-mechatronics co-design and integration, large functional system design and maintenance, the Internet of Things, and others. The paper is an introduction to the 2015 WILGA Summer Symposium Proceedings, and digests some of the Symposium's chosen key presentations.

  13. Project Management Methodology for the Development of M-Learning Web Based Applications

    Directory of Open Access Journals (Sweden)

    Adrian VISOIU

    2010-01-01

    M-learning web-based applications are a particular case of web applications designed to be operated from mobile devices, with the purpose of implementing learning. Project management of such applications takes the identified peculiarities into account. M-learning web-based application characteristics are identified. M-learning functionality covers the needs of an educational process. Development is described taking into account the mobile web and its influence over the analysis, design, construction and testing phases. Activities building up a work breakdown structure for the development of m-learning web-based applications are presented. Project monitoring and control techniques are proposed. Resources required for projects are discussed.

  14. FASH: A web application for nucleotides sequence search

    Directory of Open Access Journals (Sweden)

    Chew Paul

    2008-05-01

    FASH (Fourier Alignment Sequence Heuristics) is a web application, based on the Fast Fourier Transform, for finding remote homologs within a long nucleic acid sequence. Given a query sequence and a long text sequence (e.g., the human genome), FASH detects subsequences within the text that are remotely similar to the query. FASH offers an alternative approach to Blast/Fasta for querying long RNA/DNA sequences. FASH differs from these other approaches in that it does not depend on the existence of contiguous seed sequences in its initial detection phase. The FASH web server is user-friendly and very easy to operate. Availability: FASH can be accessed at https://fash.bgu.ac.il:8443/fash/default.jsp (secured website).
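
    The FFT trick underlying alignment heuristics of this kind can be illustrated in a toy form (this is the generic technique, not FASH's algorithm): encode each base as an indicator vector and use FFT-based correlation to count matches at every offset at once.

    ```python
    import numpy as np

    def match_counts(text, query):
        """Matching-base counts for every alignment of query against text;
        index k corresponds to the query starting at text offset k - len(query) + 1."""
        n = len(text) + len(query) - 1
        size = 1 << (n - 1).bit_length()  # FFT length: next power of two
        total = np.zeros(n)
        for base in "ACGT":
            t = np.array([c == base for c in text], dtype=float)
            q = np.array([c == base for c in query], dtype=float)
            # Correlation via convolution: reverse the query first.
            spec = np.fft.rfft(t, size) * np.fft.rfft(q[::-1], size)
            total += np.fft.irfft(spec, size)[:n]
        return np.rint(total)

    print(match_counts("ACGTACGTTT", "ACGT"))  # peaks of 4 at full matches
    ```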

  15. A Web-based Architecture Enabling Multichannel Telemedicine Applications

    Directory of Open Access Journals (Sweden)

    Fabrizio Lamberti

    2003-02-01

    Telemedicine scenarios today include in-hospital care management, remote teleconsulting, collaborative diagnosis and the handling of emergency situations. Different types of information need to be accessed by means of heterogeneous client devices in different communication environments in order to enable the delivery of high-quality, continuous healthcare assistance wherever and whenever needed. In this paper, a Web-based telemedicine architecture based on Java, XML and XSL technologies is presented. By providing dynamic content delivery services and Java-based client applications for medical data consultation and modification, the system enables effective access to a standard database built on the Electronic Patient Record by means of any device equipped with a Web browser, such as traditional Personal Computers and workstations as well as modern Personal Digital Assistants. The effectiveness of the proposed architecture has been evaluated in different scenarios, experiencing fixed and mobile clinical data transmission over Local Area Networks, wireless LANs and wide-coverage telecommunication networks including GSM and GPRS.

  16. Tracking Outfield Employees using GPS in Web Applications

    Directory of Open Access Journals (Sweden)

    Kasinathan Vinothini

    2018-01-01

    This paper presents e-Track, a web-based tracking system for outfield employees that caters for various business activities as demanded by business owners. Such demands may range from a simple task assignment, to employee location tracking and remote observation of the employees' task progress. The objective of the proposed system is two-fold: first, to let employees access the application and clock in for work; second, to give employers a standalone web system for determining the approximate location of staff assigned outfield duties. IP address recognition ensures that no buddy punching takes place. e-Track is expected to increase efficiency among employees by saving time travelling between branches during outfield duties. In the future, e-Track will be integrated with claim and payment modules to support arrangements for outfield duties.

  17. Datamart use for complex data retrieval in an ArcIMS application

    Energy Technology Data Exchange (ETDEWEB)

    Scherma, S. (Steven); Bolivar, Stephen L.

    2004-01-01

    This paper describes the use of datamarts and data warehousing concepts to expedite the retrieval and display of complex attribute data from multi-million-record databases. Los Alamos National Laboratory has developed an Internet application (SMART) using ArcIMS that relies on datamarts to quickly retrieve attribute data associated with, but not contained within, GIS layers. The volume of data and the complex relationships within the transactional database made data display within ArcIMS impractical without the use of datamarts. The technical issues and solutions involved in the development are discussed.

  18. Validating Satellite-Retrieved Cloud Properties for Weather and Climate Applications

    Science.gov (United States)

    Minnis, P.; Bedka, K. M.; Smith, W., Jr.; Yost, C. R.; Bedka, S. T.; Palikonda, R.; Spangenberg, D.; Sun-Mack, S.; Trepte, Q.; Dong, X.; Xi, B.

    2014-12-01

    Cloud properties determined from satellite imager radiances are increasingly used in weather and climate applications, particularly in nowcasting, model assimilation and validation, trend monitoring, and precipitation and radiation analyses. The value of using the satellite-derived cloud parameters is determined by the accuracy of the particular parameter for a given set of conditions, such as viewing and illumination angles, surface background, and cloud type and structure. Because of the great variety of those conditions and of the sensors used to monitor clouds, determining the accuracy or uncertainties in the retrieved cloud parameters is a daunting task. Sensitivity studies of the retrieved parameters to the various inputs for a particular cloud type are helpful for understanding the errors associated with the retrieval algorithm relative to the plane-parallel world assumed in most of the model clouds that serve as the basis for the retrievals. Real-world clouds, however, rarely fit the plane-parallel mold and generate radiances that likely produce much greater errors in the retrieved parameters than can be inferred from sensitivity analyses. Thus, independent, empirical methods are used to provide a more reliable uncertainty analysis. At NASA Langley, cloud properties have been retrieved from both geostationary (GEO) and low-Earth-orbiting (LEO) satellite imagers for climate monitoring and model validation as part of the NASA CERES project since 2000, and from AVHRR data since 1978 as part of the NOAA CDR program. Cloud properties are also being retrieved in near-real time globally from both GEO and LEO satellites for weather model assimilation and nowcasting for hazards such as aircraft icing. This paper discusses the various independent datasets and approaches that are used to assess the imager-based satellite cloud retrievals. These include, but are not limited to, data from ARM sites, CloudSat, and CALIPSO. This paper discusses the use of the various

  19. Migrating Existing PHP Web Applications to the Cloud

    Directory of Open Access Journals (Sweden)

    Ionut VODA

    2014-01-01

    The purpose of this paper is to present a set of best practices for moving PHP web applications from traditional hosting to Cloud-based hosting. PHP applications are widespread nowadays, and they come in many shapes and sizes; that is why they require special attention. The paper goes beyond just moving the code into the Cloud and setting up the run-time environment, as some architectural changes must be made at the application level most of the time. The decision of how and when to make these changes can make the difference between a successful migration and a failed one. The paper presents how to decouple and scale an application, and how to scale a database while following high-availability principles.

  20. Crawling Ajax-based Web Applications through Dynamic Analysis of User Interface State Changes

    NARCIS (Netherlands)

    Mesbah, A.; Van Deursen, A.; Lenselink, S.

    2011-01-01

    Using JavaScript and dynamic DOM manipulation on the client-side of web applications is becoming a widespread approach for achieving rich interactivity and responsiveness in modern web applications. At the same time, such techniques, collectively known as Ajax, shatter the metaphor of web ‘pages’

  1. Modern tools for development of interactive web map applications for visualization spatial data on the internet

    Directory of Open Access Journals (Sweden)

    Horáková Bronislava

    2009-11-01

    In the last few years, the development of dynamic web applications, often called Web 2.0, has begun. From this development a technology called mashups was created. Mashups may easily combine huge numbers of data sources and the functionalities of existing as well as future web applications and services. Therefore they are used to develop new tools which offer new possibilities of information usage. This technology provides the possibility of developing basic as well as robust web applications, not only for IT or GIS specialists but also for common users. Software companies have developed web projects for building mashup applications, also called mashup editors.

  2. The Semantic Web: opportunities and challenges for next-generation Web applications

    Directory of Open Access Journals (Sweden)

    2002-01-01

    Recently there has been a growing interest in the investigation and development of the next-generation web, the Semantic Web. While most current forms of web content are designed to be presented to humans and are barely understandable by computers, the content of the Semantic Web is structured in a semantic way so that it is meaningful to computers as well as to humans. In this paper, we report a survey of recent research on the Semantic Web. In particular, we present the opportunities that this revolution will bring to us: web services, agent-based distributed computing, semantics-based web search engines, and semantics-based digital libraries. We also discuss the technical and cultural challenges of realizing the Semantic Web: the development of ontologies, formal semantics of Semantic Web languages, and trust and proof models. We hope that this will shed some light on the direction of future work in this field.

  3. Access Control of Web and Java Based Applications

    Science.gov (United States)

    Tso, Kam S.; Pajevski, Michael J.; Johnson, Bryan

    2011-01-01

    Cyber security has gained national and international attention as a result of near-continuous headlines from financial institutions, retail stores, government offices and universities reporting compromised systems and stolen data. Concerns continue to rise as threats of service interruption and the spreading of viruses become ever more prevalent and serious. Controlling access to application-layer resources is a critical component in a layered security solution that includes encryption, firewalls, virtual private networks, antivirus, and intrusion detection. In this paper we discuss the development of an application-level access control solution, based on an open-source access manager augmented with custom software components, to provide protection to both Web-based and Java-based client and server applications.

  4. Cloud retrievals from satellite data using optimal estimation: evaluation and application to ATSR

    Directory of Open Access Journals (Sweden)

    C. A. Poulsen

    2012-08-01

    Clouds play an important role in balancing the Earth's radiation budget. Hence, it is vital that cloud climatologies are produced that quantify cloud macro- and microphysical parameters and the associated uncertainty. In this paper, we present an algorithm, ORAC (Oxford-RAL Retrieval of Aerosol and Cloud), which is based on fitting a physically consistent cloud model to satellite observations simultaneously from the visible to the mid-infrared, thereby ensuring that the resulting cloud properties provide a good representation of both the short-wave and long-wave radiative effects of the observed cloud. The advantages of the optimal estimation method are that it enables rigorous error propagation and the inclusion of all measurements and any a priori information, with associated errors, in a rigorous mathematical framework. The algorithm provides a measure of the consistency between the retrieved representation of the cloud and the satellite radiances. The cloud parameters retrieved are the cloud-top pressure, cloud optical depth, cloud effective radius, cloud fraction and cloud phase.

    The algorithm can be applied to most visible/infrared satellite instruments. In this paper, we demonstrate its applicability to the Along-Track Scanning Radiometers ATSR-2 and AATSR. Examples of applying the algorithm to ATSR-2 flight data are presented and the sensitivity of the retrievals assessed; in particular, the algorithm is evaluated for a number of simulated single-layer and multi-layer conditions. The algorithm was found to perform well for single-layer cloud except when the cloud was very thin, i.e., less than one optical depth. For multi-layer cloud, the algorithm was robust except when the upper ice cloud layer is less than five optical depths. In these cases the retrieved cloud-top pressure and cloud effective radius become a weighted average of the two layers. The summed optical depth of multi-layer cloud is retrieved well until the cloud becomes thick
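
    For readers unfamiliar with optimal estimation, the retrieval minimizes a cost function of the following standard form (a generic statement of the method, not copied from the paper):

    ```latex
    J(\mathbf{x}) = \left[\mathbf{y} - F(\mathbf{x})\right]^{\mathrm{T}}
                    \mathbf{S}_{y}^{-1}
                    \left[\mathbf{y} - F(\mathbf{x})\right]
                  + \left(\mathbf{x} - \mathbf{x}_{a}\right)^{\mathrm{T}}
                    \mathbf{S}_{a}^{-1}
                    \left(\mathbf{x} - \mathbf{x}_{a}\right)
    ```

    Here x is the state vector (cloud-top pressure, optical depth, effective radius, ...), y the measured radiances, F the forward model, x_a the a priori state, and S_y and S_a the measurement and a priori error covariance matrices; the minimized value of J provides the consistency measure mentioned above.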

  5. WebSphere Application Server Step by Step

    CERN Document Server

    Cline, Owen; Van Sickel, Peter

    2012-01-01

    WebSphere Application Server (WAS) is complex and multifaceted middleware used by huge enterprises as well as small businesses. In this book, the authors do an excellent job of covering the many aspects of the software. While other books merely cover installation and configuration, this book goes beyond that to cover the critical verification and management process to ensure a successful installation and implementation. It also addresses all of the different packages, from Express to Network, so that no matter what size your company is, you will be able to successfully implement WAS V6. To de

  6. Application of web-GIS approach for climate change study

    Science.gov (United States)

    Okladnikov, Igor; Gordov, Evgeny; Titov, Alexander; Bogomolov, Vasily; Martynova, Yuliya; Shulgina, Tamara

    2013-04-01

    Georeferenced datasets are currently actively used in numerous applications, including modeling, interpretation and forecasting of climatic and ecosystem changes on various spatial and temporal scales. Due to the inherent heterogeneity of environmental datasets, as well as their huge size, which might constitute up to tens of terabytes for a single dataset, present-day studies of climate and environmental change require special software support. A dedicated web-GIS information-computational system for analysis of georeferenced climatological and meteorological data has been created. It is based on OGC standards and involves many modern solutions such as an object-oriented programming model, modular composition, and JavaScript libraries based on the GeoExt library, the ExtJS framework and OpenLayers software. The main advantage of the system lies in the possibility to perform mathematical and statistical data analysis, graphical visualization of results with GIS functionality, and preparation of binary output files with only a modern graphical web browser installed on a common desktop computer connected to the Internet. Several geophysical datasets, represented by two editions of the NCEP/NCAR Reanalysis, the JMA/CRIEPI JRA-25 Reanalysis, the ECMWF ERA-40 Reanalysis, the ECMWF ERA Interim Reanalysis, the MRI/JMA APHRODITE's Water Resources Project Reanalysis, the DWD Global Precipitation Climatology Centre's data, the GMAO Modern Era-Retrospective analysis for Research and Applications, meteorological observational data for the territory of the former USSR for the 20th century, results of modeling by global and regional climatological models, and others, are available for processing by the system. And this list is growing. Functionality to run the WRF and "Planet Simulator" models was also implemented in the system. Due to many preset parameters and the limited time and spatial ranges set in the system, these models have low computational power requirements and could be used in educational workflow for better

  7. PaaS for web applications with OpenShift Origin

    OpenAIRE

    Lossent, A; Rodriguez Peon, A; Wagner, A

    2017-01-01

    The CERN Web Frameworks team has deployed OpenShift Origin to facilitate the deployment of web applications and to improve efficiency in terms of computing resource usage. OpenShift leverages Docker containers and Kubernetes orchestration to provide a Platform-as-a-Service solution oriented towards web applications. We review use cases and how OpenShift was integrated with other services such as source control, web site management and authentication services.

  8. PaaS for web applications with OpenShift Origin

    Science.gov (United States)

    Lossent, A.; Rodriguez Peon, A.; Wagner, A.

    2017-10-01

    The CERN Web Frameworks team has deployed OpenShift Origin to facilitate the deployment of web applications and to improve efficiency in terms of computing resource usage. OpenShift leverages Docker containers and Kubernetes orchestration to provide a Platform-as-a-Service solution oriented towards web applications. We review use cases and how OpenShift was integrated with other services such as source control, web site management and authentication services.

  9. Usage of Web Service in Mobile Application for Parents and Students in Binus School Serpong

    OpenAIRE

    Karto Iskandar; Andrew Thejo Putrantob

    2016-01-01

    A web service is a service offered electronically by a device to communicate with other electronic devices using the World Wide Web. A smartphone is an electronic device that almost everyone has, especially students and parents, who use it for getting information about the school. In the BINUS School Serpong mobile application, web services are used for getting data, such as student and menu data, from the web server. The problem faced by BINUS School Serpong today is the time-consuming application update when using the native ap...

  10. Mobile medical image retrieval

    Science.gov (United States)

    Duc, Samuel; Depeursinge, Adrien; Eggel, Ivan; Müller, Henning

    2011-03-01

    Images are an integral part of medical practice for diagnosis, treatment planning and teaching. Image retrieval has gained in importance mainly as a research domain over the past 20 years. Both textual and visual retrieval of images are essential. As mobile devices have become reliable, with functionality equaling that of former desktop clients, mobile computing has gained ground and many applications have been explored. This creates a new field of mobile information search and access, and in this context images can play an important role, as they often allow understanding complex scenarios much more quickly and easily than free text. Mobile information retrieval in general has skyrocketed over the past year, with many new applications and tools being developed and all sorts of interfaces being adapted to mobile clients. This article describes the constraints of an information retrieval system including visual and textual information retrieval from the medical literature of BioMedCentral and of the RSNA journals Radiology and Radiographics. Solutions for mobile data access, with an example on an iPhone in a web-based environment, are presented, as iPhones are frequently used and the operating system is bound to become the most frequent smartphone operating system in 2011. A web-based scenario was chosen to allow for use by other smartphone platforms, such as Android, as well. The constraints of small screens and navigation with touch screens are taken into account in the development of the application. A hybrid approach had to be taken to allow taking pictures with the cell phone camera and uploading them for visual similarity search, as most producers of smartphones block this functionality for web applications. Mobile information access, and in particular access to images, can be surprisingly efficient and effective on smaller screens. Images can be read on screen much faster and the relevance of documents can be identified quickly through the use of images contained in

  11. Using the open Web as an information resource and scholarly Web search engines as retrieval tools for academic and research purposes

    OpenAIRE

    Filistea Naude; Chris Rensleigh; Adeline S.A. du Toit

    2010-01-01

    This study provided insight into the significance of the open Web as an information resource and Web search engines as research tools amongst academics. The academic staff establishment of the University of South Africa (Unisa) was invited to participate in a questionnaire survey and included 1188 staff members from five colleges. This study culminated in a PhD dissertation in 2008. One hundred and eighty-seven respondents participated in the survey, which gave a response rate of 15.7%. The re...

  12. Meta4: a web application for sharing and annotating metagenomic gene predictions using web services.

    Science.gov (United States)

    Richardson, Emily J; Escalettes, Franck; Fotheringham, Ian; Wallace, Robert J; Watson, Mick

    2013-01-01

    Whole-genome shotgun metagenomics experiments produce DNA sequence data from entire ecosystems, and provide a huge amount of novel information. Gene discovery projects require up-to-date information about sequence homology and domain structure for millions of predicted proteins to be presented in a simple, easy-to-use system. There is a lack of simple, open, flexible tools that allow the rapid sharing of metagenomics datasets with collaborators in a format they can easily interrogate. We present Meta4, a flexible and extensible web application that can be used to share and annotate metagenomic gene predictions. Proteins and predicted domains are stored in a simple relational database, with a dynamic front-end which displays the results in an internet browser. Web services are used to provide up-to-date information about the proteins from homology searches against public databases. Information about Meta4 can be found on the project website, code is available on Github, a cloud image is available, and an example implementation can be seen at.

  13. A sea surface reflectance model for (A)ATSR, and application to aerosol retrievals

    Directory of Open Access Journals (Sweden)

    A. M. Sayer

    2010-07-01

    A model of the sea surface bidirectional reflectance distribution function (BRDF) is presented for the visible and near-IR channels (over the spectral range 550 nm to 1.6 μm) of the dual-viewing Along-Track Scanning Radiometers (ATSRs). The intended application is as part of the Oxford-RAL Aerosols and Clouds (ORAC) retrieval scheme. The model accounts for contributions to the observed reflectance from whitecaps, sun-glint and underlight. Uncertainties in the parametrisations used in the BRDF model are propagated through into the forward model and retrieved state. The new BRDF model offers improved coverage over previous methods, as retrievals are possible into the sun-glint region through the ATSR dual-viewing system. The new model has been applied in the ORAC aerosol retrieval algorithm to process Advanced ATSR (AATSR) data from September 2004 over the south-eastern Pacific. The assumed error budget is shown to be generally appropriate, meaning the retrieved states are consistent with the measurements and a priori assumptions. The resulting field of aerosol optical depth (AOD) is compared with colocated MODIS-Terra observations, AERONET observations at Tahiti, and cruises over the oceanic region. MODIS and AATSR show similar spatial distributions of AOD, although MODIS reports values which are larger and more variable. It is suggested that assumptions in the MODIS aerosol retrieval algorithm may lead to a positive bias in MODIS AOD of order 0.01 at 550 nm over ocean regions where the wind speed is high.
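
    As a rough schematic of the three contributions named in the abstract, the surface reflectance in such models is often composed as below; this is a generic textbook-style decomposition, and the paper's actual parametrisation differs in detail.

    ```latex
    \rho_{\mathrm{surf}} = f_{\mathrm{wc}}\,\rho_{\mathrm{wc}}
      + \left(1 - f_{\mathrm{wc}}\right)
        \left(\rho_{\mathrm{glint}} + \rho_{\mathrm{underlight}}\right)
    ```

    Here f_wc is the whitecap fraction (typically parametrised from wind speed), \rho_glint the specular sun-glint term (for example from Cox-Munk wave-slope statistics), and \rho_underlight the water-leaving contribution from scattering within the water body.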

  14. Neutralizing SQL Injection Attack Using Server Side Code Modification in Web Applications

    Directory of Open Access Journals (Sweden)

    Asish Kumar Dalai

    2017-01-01

    Reports on web application security risks show that SQL injection is the topmost vulnerability. The journey from static to dynamic web pages led to the use of databases in web applications. Due to the lack of secure coding techniques, the SQL injection vulnerability prevails in a large set of web applications. A successful SQL injection attack imposes a serious threat to the database, the web application, and the entire web server. In this article, the authors propose a novel method for the prevention of SQL injection attacks. The classification of SQL injection attacks has been done based on the methods used to exploit this vulnerability. The proposed method proves to be efficient in its ability to prevent all types of SQL injection attacks. Some popular SQL injection attack tools and web application security datasets have been used to validate the model. The results obtained are promising, with a high accuracy rate for the detection of SQL injection attacks.
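
    The article's classifier-based prevention method is its own contribution, but the standard server-side remedy such work builds on is the parameterized query. A minimal sketch with Python's sqlite3 module (standing in for any database driver):

    ```python
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT, password TEXT)")
    conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

    user_input = "alice' OR '1'='1"  # classic injection payload

    # Vulnerable pattern: string concatenation lets the payload rewrite the SQL.
    #   conn.execute("SELECT * FROM users WHERE name = '" + user_input + "'")

    # Safe pattern: the placeholder binds the payload as data, never as SQL.
    rows = conn.execute("SELECT * FROM users WHERE name = ?", (user_input,))
    print(rows.fetchall())  # [] - the payload matches no user name
    ```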

  15. A Web System Trace Model and Its Application to Web Design

    OpenAIRE

    Kong, Xiaoying; Liu, Li; Lowe, David

    2007-01-01

    Traceability analysis is crucial to the development of web-centric systems, particularly those with frequent system changes, fine-grained evolution and maintenance, and high level of requirements uncertainty. A trace model at the level of the web system architecture is presented in this paper to address the specific challenges of developing web-centric systems. The trace model separates the concerns of different stakeholders in the web development life cycle into viewpoints; and c...

  16. Application of Tikhonov regularization method to wind retrieval from scatterometer data II: cyclone wind retrieval with consideration of rain

    International Nuclear Information System (INIS)

    Zhong Jian; Huang Si-Xun; Fei Jian-Fang; Du Hua-Dong; Zhang Liang

    2011-01-01

    According to the conclusions of the simulation experiments in paper I, the Tikhonov regularization method is applied to cyclone wind retrieval with a rain-effect-considering geophysical model function (called GMF+Rain). The GMF+Rain model, which is based on the NASA scatterometer-2 (NSCAT2) GMF, is presented to compensate for the effects of rain on cyclone wind retrieval. With the multiple solution scheme (MSS), the noise of the wind retrieval is effectively suppressed, but the influence of the background increases; this causes a large wind direction error in ambiguity removal when the background error is large. However, this can be mitigated by the new ambiguity removal method of Tikhonov regularization, as proved in the simulation experiments. A case study on an extratropical cyclone of hurricane strength observed with SeaWinds at 25-km resolution shows that, for the GMF+Rain model, the retrieved wind speed in areas with rain is in better agreement with that derived from the best-track analysis, whereas the wind direction obtained with the two-dimensional variational (2DVAR) ambiguity removal is incorrect. The new method of Tikhonov regularization effectively improves the performance of wind direction ambiguity removal through the choice of appropriate regularization parameters, and the retrieved wind speed is almost the same as that obtained from the 2DVAR. (electromagnetism, optics, acoustics, heat transfer, classical mechanics, and fluid dynamics)
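
    The record does not reproduce the cost function, but a standard Tikhonov-regularized retrieval (a sketch; the symbols are generic rather than taken from paper I) minimizes a data misfit plus a weighted penalty on departures from a background:

        \[
        J(\mathbf{x}) = \lVert \mathbf{y} - H(\mathbf{x}) \rVert^{2}
          + \lambda \,\lVert L(\mathbf{x} - \mathbf{x}_{b}) \rVert^{2},
        \]

    where $\mathbf{x}$ is the wind field being retrieved, $\mathbf{y}$ the scatterometer measurements, $H$ the forward model (here GMF+Rain), $\mathbf{x}_{b}$ a background state, $L$ a regularization operator, and $\lambda$ the regularization parameter whose choice governs the trade-off exploited in ambiguity removal.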

  17. Information Retrieval and Graph Analysis Approaches for Book Recommendation

    OpenAIRE

    Chahinez Benkoussas; Patrice Bellot

    2015-01-01

    A combination of multiple information retrieval approaches is proposed for the purpose of book recommendation. In this paper, book recommendation is based on complex user queries. We used different theoretical retrieval models: probabilistic models such as InL2 (a Divergence from Randomness model) and a language model, and tested their interpolated combination. Graph analysis algorithms such as PageRank have been successful in Web environments. We consider the application of this algorithm in a new retrieval ...
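
    As a sketch of combining a graph score with a retrieval score (the interpolation below is illustrative; the record does not give the authors' exact combination), using Python and networkx:

        # Re-rank retrieved books with PageRank; the graph, scores and
        # interpolation weight are invented for the demonstration.
        import networkx as nx

        # Edges might encode shared authors, citations or co-purchases.
        G = nx.DiGraph([("b1", "b2"), ("b2", "b3"), ("b3", "b1"), ("b4", "b1")])
        pr = nx.pagerank(G, alpha=0.85)

        # Normalized scores from a text retrieval model such as InL2.
        retrieval = {"b1": 0.9, "b2": 0.4, "b3": 0.7, "b4": 0.2}

        lam = 0.7  # weight on retrieval evidence versus graph evidence
        combined = {b: lam * retrieval.get(b, 0.0) + (1 - lam) * pr.get(b, 0.0)
                    for b in set(retrieval) | set(pr)}
        for book, score in sorted(combined.items(), key=lambda kv: -kv[1]):
            print(book, round(score, 3))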

  18. Bed management team with Kanban web-based application.

    Science.gov (United States)

    Rocha, Hermano Alexandre Lima; Santos, Ana Kelly Lima da Cruz; Alcântara, Antônia Celia de Castro; Lima, Carmen Sulinete Suliano da Costa; Rocha, Sabrina Gabriele Maia Oliveira; Cardoso, Roberto Melo; Cremonin, Jair Rodrigues

    2018-05-15

    To measure the effectiveness of a bed management process that uses a web-based application with the Kanban methodology to reduce the hospitalization time of hospitalized patients. A before-and-after study was performed. The study was conducted between July 2013 and July 2017 at the Unimed Regional Hospital of Fortaleza, which has 300 beds, of which 60 are in the intensive care unit (ICU). It is accredited by the International Society for Quality in Healthcare. Patients hospitalized in the referred period. Bed management with an application that uses color logic to signal at which stage of the discharge flow each patient is, in which each patient is interpreted as a card in classical Kanban theory. It has an automatic user signaling system for process movement, and a system for monitoring and analyzing discharge forecasts. Length of hospital stay, number of customer complaints related to bed availability. After the intervention, the hospital's overall length of stay was reduced from 5.6 days to 4.9 days (P = 0.001). The units with the greatest reduction were the ICUs, with a reduction from 6.0 days to 2.0 days (P = 0.001). The relative percentage of complaints regarding bed availability in the hospital fell from 27% to 0%. We conclude that the use of an electronic tool based on the Kanban methodology and accessed via the web by a bed management team is effective in reducing patients' hospital stay time.
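
    A minimal sketch of the card logic described (the stage names and colours below are assumptions; the record does not enumerate the hospital's actual stages), in Python:

        # Each hospitalized patient is a Kanban card whose colour encodes
        # the stage of the discharge flow (stages invented for the sketch).
        from dataclasses import dataclass
        from enum import Enum

        class Stage(Enum):
            IN_TREATMENT = "green"         # no discharge forecast yet
            DISCHARGE_FORECAST = "yellow"  # discharge predicted, tasks pending
            READY_FOR_DISCHARGE = "red"    # bed should be released now

        @dataclass
        class BedCard:
            patient_id: str
            bed: str
            stage: Stage = Stage.IN_TREATMENT

            def advance(self) -> None:
                # Move the card to the next stage; user signalling omitted.
                order = list(Stage)
                i = order.index(self.stage)
                if i + 1 < len(order):
                    self.stage = order[i + 1]

        card = BedCard("p-001", "ICU-12")
        card.advance()
        print(card.bed, card.stage.value)  # ICU-12 yellow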

  19. web cellHTS2: A web-application for the analysis of high-throughput screening data

    Directory of Open Access Journals (Sweden)

    Boutros Michael

    2010-04-01

    Full Text Available Abstract Background The analysis of high-throughput screening data sets is an expanding field in bioinformatics. High-throughput screens by RNAi generate large primary data sets which need to be analyzed and annotated to identify relevant phenotypic hits. Large-scale RNAi screens are frequently used to identify novel factors that influence a broad range of cellular processes, including signaling pathway activity, cell proliferation, and host cell infection. Here, we present a web-based application utility for the end-to-end analysis of large cell-based screening experiments by cellHTS2. Results The software guides the user through the configuration steps that are required for the analysis of single or multi-channel experiments. The web-application provides options for various standardization and normalization methods, annotation of data sets and a comprehensive HTML report of the screening data analysis, including a ranked hit list. Sessions can be saved and restored for later re-analysis. The web frontend for the cellHTS2 R/Bioconductor package interacts with it through an R-server implementation that enables highly parallel analysis of screening data sets. web cellHTS2 further provides a file import and configuration module for common file formats. Conclusions The implemented web-application facilitates the analysis of high-throughput data sets and provides a user-friendly interface. web cellHTS2 is accessible online at http://web-cellHTS2.dkfz.de. A standalone version as a virtual appliance and source code for platforms supporting Java 1.5.0 can be downloaded from the web cellHTS2 page. web cellHTS2 is freely distributed under GPL.
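
    cellHTS2 itself is an R/Bioconductor package; purely as a language-neutral illustration of the plate-wise normalization step such pipelines perform (the data and method here are invented for the sketch), a per-plate z-score in Python:

        # Per-plate z-score normalization of raw well intensities,
        # followed by a naive ranked hit list across plates.
        import numpy as np

        plates = {
            "plate1": np.array([1.2, 0.9, 1.1, 5.3]),
            "plate2": np.array([2.1, 2.0, 2.3, 0.4]),
        }
        normalized = {
            name: (vals - vals.mean()) / vals.std(ddof=1)
            for name, vals in plates.items()
        }
        hits = sorted(
            ((p, well, z) for p, zs in normalized.items()
             for well, z in enumerate(zs)),
            key=lambda t: -abs(t[2]),
        )
        print(hits[0])  # strongest outlier well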

  20. WEB APPLICATION TO MANAGE DOCUMENTS USING THE GOOGLE WEB TOOLKIT AND APP ENGINE TECHNOLOGIES

    Directory of Open Access Journals (Sweden)

    Velázquez Santana Eugenio César

    2017-12-01

    Full Text Available The application of new information technologies such as Google Web Toolkit and App Engine is making a difference in the academic management of Higher Education Institutions (HEIs), which seek to streamline their processes as well as reduce infrastructure costs. However, they encounter problems with regard to acquisition costs, the infrastructure necessary for use, and the maintenance of the software. It is for this reason that the present research aims to describe the application of these new technologies in HEIs, as well as to identify their advantages and disadvantages and the key success factors in their implementation. As the software development methodology, SCRUM was used, together with PMBOK as a project management tool. The main results were related to the application of these technologies in the development of customized software for teachers, students and administrators, as well as the weaknesses and strengths of using them in the cloud. On the other hand, it was also possible to describe the paradigm shift that data warehouses are generating with respect to today's relational databases.

  1. Soil food web structure after wood ash application

    DEFF Research Database (Denmark)

    Mortensen, Louise Hindborg; Qin, Jiayi; Cruz-Paredes, Carla

    the consequences of returning wood ash to biofuel-producing coniferous forest. We hypothesize that the change in pH and increased availability of nutrients after ash application to the forest floor can facilitate an increase in the bacteria-to-fungi ratio, with possible effects for the soil food web...... By applying ash of different concentrations to experimental plots in a coniferous forest, the soil will be collected at varying intervals and subsequently analyzed. The food web included several trophic levels: bacteria/fungi, protozoa, nematodes, enchytraeids, and microarthropods and other arthropods. Results from 2014 indicated that bacteria...... and protozoa were stimulated in the uppermost soil layer (0-3 cm) two months after ash application, whereas the enchytraeids seemed to be slightly negatively affected. Generally, nematodes also appeared to be negatively affected, although it differed between feeding groups. On the higher trophic levels, no effect...

  2. Application of electron beam curing in web-offset printing

    International Nuclear Information System (INIS)

    Rodrigues, A.M.; Newcomb, W.T.

    1984-01-01

    Four years ago, the first commercial installation of an electron beam processor coupled with a high-speed web offset printing press was described. This line has been in operation since then, and its success demonstrates the advantages and feasibility of such an application. Just recently, another company announced that it is using electron beam curing in its printing lines. Judging from the number of inquiries and opportunities being actively pursued, one can state that the use of electron beam systems in printing applications has come of age. This paper describes the advantages of this process and the characteristics of the equipment that are important for industrial use in a multishift environment, and addresses its economics through an analysis of some major cost elements

  3. StreamStats: A water resources web application

    Science.gov (United States)

    Ries, Kernell G.; Guthrie, John G.; Rea, Alan H.; Steeves, Peter A.; Stewart, David W.

    2008-01-01

    ... Streamflow measurements are collected systematically over a period of years at partial-record stations to estimate peak-flow or low-flow statistics. Streamflow measurements usually are collected at miscellaneous-measurement stations for specific hydrologic studies with various objectives. StreamStats is a Web-based Geographic Information System (GIS) application that was created by the USGS, in cooperation with Environmental Systems Research Institute, Inc. (ESRI), to provide users with access to an assortment of analytical tools that are useful for water-resources planning and management. StreamStats functionality is based on ESRI’s ArcHydro Data Model and Tools, described on the Web at http://resources.arcgis.com/en/communities/hydro/01vn0000000s000000.htm. StreamStats allows users to easily obtain streamflow statistics, basin characteristics, and descriptive information for USGS data-collection stations and user-selected ungaged sites. It also allows users to identify stream reaches that are upstream and downstream from user-selected sites, and to identify and obtain information for locations along the streams where activities that may affect streamflow conditions are occurring. This functionality can be accessed through a map-based user interface that appears in the user’s Web browser, or individual functions can be requested remotely as Web services by other Web or desktop computer applications. StreamStats can perform these analyses much faster than historically used manual techniques. StreamStats was designed so that each state would be implemented as a separate application, with a reliance on local partnerships to fund the individual applications, and a goal of eventual full national implementation. Idaho became the first state to implement StreamStats in 2003. By mid-2008, 14 states had applications available to the public, and 18 other states were in various stages of implementation.
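
    The record notes that individual StreamStats functions can be requested remotely as web services; a purely hypothetical sketch of such a call in Python (the endpoint, parameters and response fields are invented for illustration and are not the USGS API):

        # Hypothetical web-service request for basin characteristics.
        import requests

        resp = requests.get(
            "https://example.usgs.gov/streamstats/basinchars",  # placeholder
            params={"state": "ID", "lat": 43.62, "lon": -116.21,
                    "format": "json"},
            timeout=30,
        )
        resp.raise_for_status()
        for char in resp.json().get("characteristics", []):
            print(char.get("code"), char.get("value"), char.get("unit"))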

  4. Web service composition: a semantic web and automated planning technique application

    Directory of Open Access Journals (Sweden)

    Jaime Alberto Guzmán Luna

    2008-09-01

    Full Text Available This article proposes applying semantic web and artificial intelligence planning techniques to a web services composition model dealing with problems of ambiguity in web service descriptions and the handling of incomplete web information. The model uses OWL-S service descriptions and implements a planning technique which handles open-world semantics in its reasoning process to resolve these problems. This resulted in a web services composition system incorporating a module for interpreting OWL-S services and converting them into a planning problem in PDDL, a planning module handling incomplete information, and an execution service module concurrently interacting with the planner for executing each composition plan service.

  5. An Application for Data Preprocessing and Models Extractions in Web Usage Mining

    Directory of Open Access Journals (Sweden)

    Claudia Elena DINUCA

    2011-11-01

    Full Text Available Web servers worldwide generate a vast amount of information on web users’ browsing activities. Several researchers have studied these so-called clickstream or web access log data to better understand and characterize web users. The goal of this application is to analyze user behaviour by mining enriched web access log data. With the continued growth and proliferation of e-commerce, Web services, and Web-based information systems, the volumes of clickstream and user data collected by Web-based organizations in their daily operations have reached astronomical proportions. This information can be exploited in various ways, such as enhancing the effectiveness of websites or developing directed web marketing campaigns. The discovered patterns are usually represented as collections of pages, objects, or resources that are frequently accessed by groups of users with common needs or interests. In this paper we focus on how the application for data preprocessing and for extracting different data models from web log data was implemented, using association rules as a data mining technique to extract potentially useful knowledge from web usage data. We find different navigation-pattern data models by analysing the log files of the website. I implemented the application in Java using the NetBeans IDE. For exemplification, I used the log file data from a commercial web site, www.nice-layouts.com.
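
    The paper's application is written in Java; as a language-neutral sketch of the first preprocessing step (parsing raw access-log lines; Apache combined-log format assumed), in Python:

        # Parse an Apache combined-format log line into the fields used
        # for session and navigation-pattern analysis.
        import re

        LOG_RE = re.compile(
            r'(?P<ip>\S+) \S+ \S+ \[(?P<time>[^\]]+)\] '
            r'"(?P<method>\S+) (?P<url>\S+) [^"]*" (?P<status>\d{3}) \S+ '
            r'"(?P<referrer>[^"]*)" "(?P<agent>[^"]*)"'
        )

        line = ('203.0.113.7 - - [10/Oct/2011:13:55:36 +0200] '
                '"GET /index.html HTTP/1.1" 200 2326 '
                '"http://www.nice-layouts.com/" "Mozilla/5.0"')

        m = LOG_RE.match(line)
        if m and m.group("status").startswith("2"):
            print(m.group("ip"), m.group("url"), m.group("referrer"))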

  6. Missouri StreamStats—A water-resources web application

    Science.gov (United States)

    Ellis, Jarrett T.

    2018-01-31

    The U.S. Geological Survey (USGS) maintains and operates more than 8,200 continuous streamgages nationwide. Types of data that may be collected, computed, and stored for streamgages include streamgage height (water-surface elevation), streamflow, and water quality. The streamflow data allow scientists and engineers to calculate streamflow statistics, such as the 1-percent annual exceedance probability flood (also known as the 100-year flood), the mean flow, and the 7-day, 10-year low flow, which are used by managers to make informed water resource management decisions, at each streamgage location. Researchers, regulators, and managers also commonly need physical characteristics (basin characteristics) that describe the unique properties of a basin. Common uses for streamflow statistics and basin characteristics include hydraulic design, water-supply management, water-use appropriations, and flood-plain mapping for establishing flood-insurance rates and land-use zones. The USGS periodically publishes reports that update the values of basin characteristics and streamflow statistics at selected gaged locations (locations with streamgages), but these studies usually only update a subset of streamgages, making data retrieval difficult. Additionally, streamflow statistics and basin characteristics are most often needed at ungaged locations (locations without streamgages) for which published streamflow statistics and basin characteristics do not exist. Missouri StreamStats is a web-based geographic information system that was created by the USGS in cooperation with the Missouri Department of Natural Resources to provide users with access to an assortment of tools that are useful for water-resources planning and management. StreamStats allows users to easily obtain the most recent published streamflow statistics and basin characteristics for streamgage locations and to automatically calculate selected basin characteristics and estimate streamflow statistics at ungaged

  7. [Development and evaluation of the medical imaging distribution system with dynamic web application and clustering technology].

    Science.gov (United States)

    Yokohama, Noriya; Tsuchimoto, Tadashi; Oishi, Masamichi; Itou, Katsuya

    2007-01-20

    It has been noted that the downtime of medical informatics systems is often long. Many systems encounter downtimes of hours or even days, which can have a critical effect on daily operations. Such systems remain especially weak in the areas of database and medical imaging data. The schematic design shows the three-layer architecture of the system: application, database, and storage layers. The application layer uses the DICOM protocol (Digital Imaging and Communication in Medicine) and HTTP (Hyper Text Transport Protocol) with AJAX (Asynchronous JavaScript+XML). The database is designed to be decentralized in parallel using cluster technology. Consequently, restoration of the database can be done not only with ease but also with improved retrieval speed. In the storage layer, with a network RAID (Redundant Array of Independent Disks) system, it is possible to construct exabyte-scale parallel file systems that exploit storage spread across the network. Development and evaluation of the test-bed have been successful in medical information data backup and recovery in a network environment. This paper presents a schematic design of the new medical informatics system that accommodates recovery, together with the dynamic Web application for medical imaging distribution using AJAX.

  8. Usage Of Asp.Net Ajax for Binus School Serpong Web Applications

    Directory of Open Access Journals (Sweden)

    Karto Iskandar

    2016-03-01

    Full Text Available Today web applications have become a necessity, and many companies use them as a communication tool to keep in touch with their customers. The usage of web applications increases as the number of internet users rises. For the sake of Rich Internet Applications, desktop application developers have moved to web application development with AJAX technology. BINUS School Serpong is a Cambridge-curriculum-based international school that uses a web application for access to all information about the school. By using AJAX, the performance of a web application should be improved and the bandwidth usage decreased. The problem that occurs at BINUS School Serpong is that not all parts of the web application use AJAX. This paper introduces the usage of AJAX in ASP.NET with the C# programming language in the BINUS School Serpong web application. It is expected that by using ASP.NET AJAX, BINUS School Serpong website performance will be faster because of reduced web page reloads. The methodology used in this paper is a literature study. The results of this study prove that ASP.NET AJAX can be used easily and improves BINUS School Serpong website performance. The conclusion of this paper is that the implementation of ASP.NET AJAX improves the performance of the web application at BINUS School Serpong.

  9. Demonstration: SpaceExplorer - A Tool for Designing Ubiquitous Web Applications for Collections of Displays

    DEFF Research Database (Denmark)

    Hansen, Thomas Riisgaard

    2007-01-01

    This demonstration presents a simple browser plug-in that grants web applications the ability to use multiple nearby devices for displaying web content. A web page can e.g. be designed to present additional information on nearby devices. The demonstration introduces a light-weight peer-to-peer arc...

  10. Developing BP-driven web application through the use of MDE techniques

    OpenAIRE

    Torres Bosch, Maria Victoria; Giner Blasco, Pau; Pelechano Ferragud, Vicente

    2012-01-01

    Model driven engineering (MDE) is a suitable approach for performing the construction of software systems (in particular in the Web application domain). There are different types of Web applications depending on their purpose (i.e., document-centric, interactive, transactional, workflow/business process-based, collaborative, etc). This work focusses on business process-based Web applications in order to be able to understand business processes in a broad sense, from the lightweight business p...

  11. Advancements in web-database applications for rabies surveillance

    Directory of Open Access Journals (Sweden)

    Bélanger Denise

    2011-08-01

    Full Text Available Abstract Background Protection of public health from rabies is informed by the analysis of surveillance data from human and animal populations. In Canada, public health, agricultural and wildlife agencies at the provincial and federal level are responsible for rabies disease control, and this has led to multiple agency-specific data repositories. Aggregation of agency-specific data into one database application would enable more comprehensive data analyses and effective communication among participating agencies. In Québec, RageDB was developed to house surveillance data for the raccoon rabies variant, representing the next generation in web-based database applications that provide a key resource for the protection of public health. Results RageDB incorporates data from, and grants access to, all agencies responsible for the surveillance of raccoon rabies in Québec. Technological advancements of RageDB over previous rabies surveillance databases include (1) automatic integration of multi-agency data and diagnostic results on a daily basis; (2) a web-based data editing interface that enables authorized users to add, edit and extract data; and (3) an interactive dashboard to help visualize data simply and efficiently, in table, chart, and cartographic formats. Furthermore, RageDB stores data from citizens who voluntarily report sightings of rabies-suspect animals. We also discuss how sightings data can indicate public perception of the risk of raccoon rabies and thus aid in directing the allocation of disease-control resources for protecting public health. Conclusions RageDB provides an example in the evolution of spatio-temporal database applications for the storage, analysis and communication of disease surveillance data. The database was fast and inexpensive to develop by using open-source technologies, simple and efficient design strategies, and shared web hosting. The database increases communication among agencies collaborating to protect human health from

  12. Development and challenges of using web-based GIS for health applications

    DEFF Research Database (Denmark)

    Gao, Sheng; Mioc, Darka; Boley, Harold

    2011-01-01

    Web-based GIS is increasingly used in health applications. It has the potential to provide critical information in a timely manner, support health care policy development, and educate decision makers and the general public. This paper describes the trends and recent development of health...... applications using a Web-based GIS. Recent progress on database storage and geospatial Web Services has advanced the use of Web-based GIS for health applications, with various proprietary software, open source software, and Application Programming Interfaces (APIs) available. Current challenges in applying...... care planning, and public health participation....

  13. Solving Guesstimation Problems Using the Semantic Web:Four Lessons from an Application

    OpenAIRE

    Bundy, Alan; Sasnauskas, Gintautas; Chan, Michael

    2013-01-01

    We draw on our experience of implementing a semi-automated guesstimation application of the Semantic Web, gort, to draw four lessons, which we claim are of general applicability. These are: 1. Inference can unleash the Semantic Web: the full power of the web will only be realised when we can use it to infer new knowledge from old. 2. The Semantic Web does not constrain the inference mechanisms: since we must anyway curate the knowledge we extract from the web, we can take the opportunity to tra...

  14. Integrating Data Warehouses with Web Data

    DEFF Research Database (Denmark)

    Perez, Juan Manuel; Berlanga, Rafael; Aramburu, Maria Jose

    This paper surveys the most relevant research on combining Data Warehouse (DW) and Web data. It studies the XML technologies that are currently being used to integrate, store, query and retrieve web data, and their application to data warehouses. The paper addresses the problem of integrating...

  15. An application of TOPSIS for ranking internet web browsers

    Directory of Open Access Journals (Sweden)

    Shahram Rostampour

    2012-07-01

    Full Text Available The web browser is one of the most important facilities for surfing the internet. A good web browser must incorporate literally tens of features, such as an integrated search engine, automatic updates, etc. Each year, ten web browsers are formally ranked as the best by some reviewing organizations. In this paper, we propose the implementation of the TOPSIS technique to rank ten web browsers. The proposed model uses five criteria: speed, features, security, technical support and supported configurations. In terms of speed, Safari is the best web browser, followed by Google Chrome and Internet Explorer, while Opera is the best web browser when we look into 20 different features. We have also ranked these web browsers using all five categories together, and the results indicate that Opera, Internet Explorer, Firefox and Google Chrome are the best web browsers to be chosen.
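
    A compact sketch of the TOPSIS procedure itself (the decision matrix and weights below are invented; the paper's actual scores are not reproduced in this record), in Python:

        # TOPSIS: rank alternatives by relative closeness to the ideal.
        import numpy as np

        # Rows: browsers; columns: criteria (all treated as benefit-type).
        X = np.array([[8.0, 7.0, 9.0],
                      [9.0, 6.0, 7.0],
                      [7.0, 9.0, 8.0]])
        w = np.array([0.4, 0.3, 0.3])              # weights, summing to 1

        R = X / np.linalg.norm(X, axis=0)          # vector normalization
        V = R * w                                   # weighted normalized matrix
        ideal, anti = V.max(axis=0), V.min(axis=0)
        d_plus = np.linalg.norm(V - ideal, axis=1)
        d_minus = np.linalg.norm(V - anti, axis=1)
        closeness = d_minus / (d_plus + d_minus)    # 1 = best, 0 = worst
        print(np.argsort(-closeness))               # ranking of alternatives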

  16. VennDiagramWeb: a web application for the generation of highly customizable Venn and Euler diagrams.

    Science.gov (United States)

    Lam, Felix; Lalansingh, Christopher M; Babaran, Holly E; Wang, Zhiyuan; Prokopec, Stephenie D; Fox, Natalie S; Boutros, Paul C

    2016-10-03

    Visualization of data generated by high-throughput, high-dimensionality experiments is rapidly becoming a rate-limiting step in computational biology. There is an ongoing need to quickly develop high-quality visualizations that can be easily customized or incorporated into automated pipelines. This often requires an interface for manual plot modification, rapid cycles of tweaking visualization parameters, and the generation of graphics code. To facilitate this process for the generation of highly-customizable, high-resolution Venn and Euler diagrams, we introduce VennDiagramWeb: a web application for the widely used VennDiagram R package. VennDiagramWeb is hosted at http://venndiagram.res.oicr.on.ca/ . VennDiagramWeb allows real-time modification of Venn and Euler diagrams, with parameter setting through a web interface and immediate visualization of results. It allows customization of essentially all aspects of figures, but also supports integration into computational pipelines via download of R code. Users can upload data and download figures in a range of formats, and there is exhaustive support documentation. VennDiagramWeb allows the easy creation of Venn and Euler diagrams for computational biologists, and indeed many other fields. Its ability to support real-time graphics changes that are linked to downloadable code that can be integrated into automated pipelines will greatly facilitate the improved visualization of complex datasets. For application support please contact Paul.Boutros@oicr.on.ca.

  17. Specification of application logic in web information systems

    NARCIS (Netherlands)

    Barna, P.

    2007-01-01

    The importance of the World Wide Web has grown tremendously over the past decade (or decade and a half). With a quickly growing amount of information published on the Web and its rapidly growing audience, requirements put on Web-based Information Systems (WIS), their developers and maintainers have

  18. Empower the patients with a dialogue-based web application.

    Science.gov (United States)

    Bjørnes, Charlotte D; Cummings, Elizabeth; Nøhr, Christian

    2012-01-01

    Based on a clinical intervention study, this paper adds to the significance of user involvement in design processes and substantiates the potential of online, flexible health informatics tools as useful components to accommodate the organizational changes that short-stay treatment demands. A dialogue-based web application was designed and implemented to accommodate patients' information and communication needs in short-stay hospital settings. To ensure the system met the patients' needs, both patients and healthcare professionals were involved in the design process by applying various participatory methods. Contextualization of the new application was also central in all phases, to ensure a focus not only on the technology itself, but also on the way it is used and in which relations and contexts. In the evaluation of the tool, the patients' descriptions as users substantiate that the use of Internet applications can expand the time for dialogue between the individual patient and healthcare professionals. The patients experience being partners in an ongoing dialogue, and are thereby empowered, e.g. in managing their care even at home, as these dialogues generate individualized information.

  19. Analysing and Enriching Focused Semantic Web Archives for Parliament Applications

    Directory of Open Access Journals (Sweden)

    Elena Demidova

    2014-07-01

    Full Text Available The web and the social web play an increasingly important role as an information source for Members of Parliament and their assistants, journalists, political analysts and researchers. They provide important and crucial background information, like reactions to political events and comments made by the general public. The case study presented in this paper is driven by two European parliaments (the Greek and the Austrian parliaments) and targets an effective exploration of political web archives. In this paper, we describe semantic technologies deployed to ease the exploration of the archived web and social web content and present evaluation results.

  20. USING WEB MINING IN E-COMMERCE APPLICATIONS

    Directory of Open Access Journals (Sweden)

    Claudia Elena Dinucă

    2011-09-01

    Full Text Available Nowadays, the web is an important part of our daily life. The web is now the best medium for doing business. Large companies rethink their business strategies, using the web to improve business. Business carried out on the Web offers potential customers or partners the opportunity to find a company's products and specific business. A business presence through a company web site has several advantages, as it breaks the barriers of time and space compared with the existence of a physical office. To differentiate themselves in the Internet economy, winning companies have realized that e-commerce transactions are more than just buying/selling; appropriate strategies are key to improving competitive power. One effective technique used for this purpose is data mining. Data mining is the process of extracting interesting knowledge from data. Web mining is the use of data mining techniques to extract information from web data. This article presents the three components of web mining: web usage mining, web structure mining and web content mining.

  1. Using ChEMBL web services for building applications and data processing workflows relevant to drug discovery.

    Science.gov (United States)

    Nowotka, Michał M; Gaulton, Anna; Mendez, David; Bento, A Patricia; Hersey, Anne; Leach, Andrew

    2017-08-01

    ChEMBL is a manually curated database of bioactivity data on small drug-like molecules, used by drug discovery scientists. Among many access methods, a REST API provides programmatic access, allowing the remote retrieval of ChEMBL data and its integration into other applications. This approach allows scientists to move from a world where they go to the ChEMBL web site to search for relevant data, to one where ChEMBL data can be simply integrated into their everyday tools and work environment. Areas covered: This review highlights some of the audiences who may benefit from using the ChEMBL API, and the goals they can address, through the description of several use cases. The examples cover a team communication tool (Slack), a data analytics platform (KNIME), batch job management software (Luigi) and Rich Internet Applications. Expert opinion: The advent of web technologies, cloud computing and microservices-oriented architectures has made REST APIs an essential ingredient of modern software development models. The widespread availability of tools consuming RESTful resources has made them useful for many groups of users. The ChEMBL API is a valuable resource of drug discovery bioactivity data for professional chemists, chemistry students, data scientists, and scientific and web developers.
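
    A minimal sketch of the programmatic access the review describes (the endpoint pattern follows the public ChEMBL REST documentation as generally known; treat the exact URL and field names as assumptions to verify), in Python:

        # Fetch one molecule record from the ChEMBL REST API.
        import requests

        url = "https://www.ebi.ac.uk/chembl/api/data/molecule/CHEMBL25.json"
        resp = requests.get(url, timeout=30)
        resp.raise_for_status()
        mol = resp.json()
        print(mol.get("pref_name"))  # preferred name of the compound
        print(mol.get("molecule_properties", {}).get("full_mwt"))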

  2. VoSeq: a voucher and DNA sequence web application.

    Directory of Open Access Journals (Sweden)

    Carlos Peña

    Full Text Available There is an ever-growing number of molecular phylogenetic studies published, due in part to the advent of new techniques that allow cheap and quick DNA sequencing. Hence, the demand for relational databases with which to manage and annotate the amassing DNA sequences, genes, voucher specimens and associated biological data is increasing. In addition, a user-friendly interface is necessary for easy integration and management of the data stored in the database back-end. Available databases allow management of a wide variety of biological data. However, most database systems are not specifically constructed with the aim of being an organizational tool for researchers working in phylogenetic inference. We here report new software facilitating easy management of voucher and sequence data, consisting of a relational database as back-end for a graphic user interface accessed via a web browser. The application, VoSeq, includes tools for creating molecular datasets of DNA or amino acid sequences ready to be used in commonly used phylogenetic software such as RAxML, TNT, MrBayes and PAUP, as well as for creating tables ready for publishing. It also has inbuilt BLAST capabilities against all DNA sequences stored in VoSeq as well as sequences in NCBI GenBank. By using mash-ups and calls to web services, VoSeq allows easy integration with public services such as Yahoo! Maps, Flickr, Encyclopedia of Life (EOL) and GBIF (by generating data-dumps that can be processed with GBIF's Integrated Publishing Toolkit).

  3. Improvements to TOVS retrievals over sea ice and applications to estimating Arctic energy fluxes

    Science.gov (United States)

    Francis, Jennifer A.

    1994-01-01

    Modeling studies suggest that polar regions play a major role in modulating the Earth's climate and that they may be more sensitive than lower latitudes to climate change. Until recently, however, data from meteorological stations poleward of 70 degrees have been sparse, and consequently, our understanding of air-sea-ice interaction processes is relatively poor. Satellite-borne sensors now offer a promising opportunity to observe polar regions and ultimately to improve parameterizations of energy transfer processes in climate models. This study focuses on the application of the TIROS-N operational vertical sounder (TOVS) to sea-ice-covered regions in the nonmelt season. TOVS radiances are processed with the improved initialization inversion ('3I') algorithm, providing estimates of layer-average temperature and moisture, cloud conditions, and surface characteristics at a horizontal resolution of approximately 100 km x 100 km. Although TOVS has flown continuously on polar-orbiting satellites since 1978, its potential has not been realized in high latitudes because the quality of retrievals is often significantly lower over sea ice and snow than over other surfaces. The recent availability of three Arctic data sets has provided an opportunity to validate TOVS retrievals: the first from the Coordinated Eastern Arctic Experiment (CEAREX) in winter 1988/1989, the second from the LeadEx field program in spring 1992, and the third from Russian drifting ice stations. Comparisons with these data reveal deficiencies in TOVS retrievals over sea ice during the cold season; e.g., ice surface temperature is often 5 to 15 K too warm, microwave emissivity is approximately 15% too low at large view angles, clear/cloudy scenes are sometimes misidentified, and low-level inversions are often not captured. In this study, methods to reduce these errors are investigated. Improvements to the ice surface temperature retrieval have reduced rms errors from approximately 7 K to 3 K; correction of

  4. 77 FR 74278 - Proposed Information Collection (Internet Student CPR Web Registration Application); Comment Request

    Science.gov (United States)

    2012-12-13

    ... (Internet Student CPR Web Registration Application); Comment Request AGENCY: Veterans Health Administration.... Title: Internet Student CPR Web Registration Application, VA Form 10-0468. OMB Control Number: 2900-0746... Minneapolis VA Medical Center Education Service. Students will be able to identify and register for a training...

  5. X-Switch: An Efficient , Multi-User, Multi-Language Web Application Server

    Directory of Open Access Journals (Sweden)

    Mayumbo Nyirenda

    2010-07-01

    Full Text Available Web applications are usually installed on and accessed through a Web server. For security reasons, these Web servers generally provide very few privileges to Web applications, defaulting to executing them in the realm of a guest account. In addition, performance often is a problem as Web applications may need to be reinitialised with each access. Various solutions have been designed to address these security and performance issues, mostly independently of one another, but most have been language- or system-specific. The X-Switch system is proposed as an alternative Web application execution environment, with more secure user-based resource management, persistent application interpreters and support for arbitrary languages/interpreters. Thus it provides a general-purpose environment for developing and deploying Web applications. The X-Switch system's experimental results demonstrated that it can achieve a high level of performance. Furthermore it was shown that X-Switch can provide functionality matching that of existing Web application servers but with the added benefit of multi-user support. Finally the X-Switch system showed that it is feasible to completely separate the deployment platform from the application code, thus ensuring that the developer does not need to modify his/her code to make it compatible with the deployment platform.

  6. Nuclear expert web mining system: monitoring and analysis of nuclear acceptance by information retrieval and opinion extraction on the Internet

    Energy Technology Data Exchange (ETDEWEB)

    Reis, Thiago; Barroso, Antonio C.O.; Imakuma, Kengo, E-mail: thiagoreis@usp.br, E-mail: barroso@ipen.br, E-mail: kimakuma@ipen.br [Instituto de Pesquisas Energeticas e Nucleares (IPEN/CNEN-SP), Sao Paulo, SP (Brazil)]

    2011-07-01

    This paper presents a research initiative that aims to collect nuclear related information and to analyze opinionated texts by mining the hypertextual data environment and social networks web sites on the Internet. Different from previous approaches that employed traditional statistical techniques, it is being proposed a novel Web Mining approach, built using the concept of Expert Systems, for massive and autonomous data collection and analysis. The initial step has been accomplished, resulting in a framework design that is able to gradually encompass a set of evolving techniques, methods, and theories in such a way that this work will build a platform upon which new researches can be performed more easily by just substituting modules or plugging in new ones. Upon completion it is expected that this research will contribute to the understanding of the population views on nuclear technology and its acceptance. (author)

  8. Security Testing in Agile Web Application Development - A Case Study Using the EAST Methodology

    CERN Document Server

    Erdogan, Gencer

    2010-01-01

    There is a need for improved security testing methodologies specialized for Web applications and their agile development environment. The number of web application vulnerabilities is drastically increasing, while security testing tends to be given a low priority. In this paper, we analyze and compare Agile Security Testing with two other common methodologies for Web application security testing, and then present an extension of this methodology. We present a case study showing how our Extended Agile Security Testing (EAST) performs compared to a more ad hoc approach used within an organization. Our working hypothesis is that the detection of vulnerabilities in Web applications will be significantly more efficient when using a structured security testing methodology specialized for Web applications, compared to existing ad hoc ways of performing security tests. Our results show a clear indication that our hypothesis is on the right track.

  9. ChemiRs: a web application for microRNAs and chemicals.

    Science.gov (United States)

    Su, Emily Chia-Yu; Chen, Yu-Sing; Tien, Yun-Cheng; Liu, Jeff; Ho, Bing-Ching; Yu, Sung-Liang; Singh, Sher

    2016-04-18

    MicroRNAs (miRNAs) are non-coding RNAs of about 22 nucleotides that affect various cellular functions and play a regulatory role in different organisms, including humans. Until now, more than 2500 mature miRNAs in human have been discovered and registered, but there is still a lack of information and algorithms to reveal the relations among miRNAs, environmental chemicals and human health. Chemicals in the environment affect our health and daily life, and some of them can lead to diseases by interfering with biological pathways. We have developed a creditable online web server, ChemiRs, for predicting interactions and relations among miRNAs, chemicals and pathways. The database not only compares gene lists affected by chemicals and miRNAs, but also incorporates curated pathways to identify possible interactions. Here, we manually retrieved associations of miRNAs and chemicals from the biomedical literature. We developed an online system, ChemiRs, which contains miRNAs, diseases, Medical Subject Heading (MeSH) terms, chemicals, genes, pathways and PubMed IDs. We connected each miRNA to miRBase, and every current gene symbol to the HUGO Gene Nomenclature Committee (HGNC) for genome annotation. Human pathway information is also provided from the KEGG and REACTOME databases. Information about Gene Ontology (GO) is queried from the GO Online SQL Environment (GOOSE). With a user-friendly interface, the web application is easy to use. Multiple query results can be easily integrated and exported as report documents in PDF format. Association analysis of miRNAs and chemicals can help us understand the pathogenesis of chemical components. ChemiRs is freely available for public use at http://omics.biol.ntnu.edu.tw/ChemiRs .

  10. Medication-use evaluation with a Web application.

    Science.gov (United States)

    Burk, Muriel; Moore, Von; Glassman, Peter; Good, Chester B; Emmendorfer, Thomas; Leadholm, Thomas C; Cunningham, Francesca

    2013-12-15

    A Web-based application for coordinating medication-use evaluation (MUE) initiatives within the Veterans Affairs (VA) health care system is described. The MUE Tracker (MUET) software program was created to improve VA's ability to conduct national medication-related interventions throughout its network of 147 medical centers. MUET initiatives are centrally coordinated by the VA Center for Medication Safety (VAMedSAFE), which monitors the agency's integrated databases for indications of suboptimal prescribing or drug therapy monitoring and adverse treatment outcomes. When a pharmacovigilance signal is detected, VAMedSAFE identifies "trigger groups" of at-risk veterans and uploads patient lists to the secure MUET application, where locally designated personnel (typically pharmacists) can access and use the data to target risk-reduction efforts. Local data on patient-specific interventions are stored in a centralized database and regularly updated to enable tracking and reporting for surveillance and quality-improvement purposes; aggregated data can be further analyzed for provider education and benchmarking. In a three-year pilot project, the MUET program was found effective in promoting improved prescribing of erythropoiesis-stimulating agents (ESAs) and enhanced laboratory monitoring of ESA-treated patients in all specified trigger groups. The MUET initiative has since been expanded to target other high-risk drugs, and efforts are underway to refine the tool for broader utility. The MUET application has enabled the increased standardization of medication safety initiatives across the VA system and may serve as a useful model for the development of pharmacovigilance tools by other large integrated health care systems.

  11. DEVELOPMENT OF A WEB-BASED PROXIMITY BASED MEDIA SHARING APPLICATION

    OpenAIRE

    Erol Ozan

    2016-01-01

    This article reports the development of Vissou, which is a location based web application that enables media recording and sharing among users who are in close proximity to each other. The application facilitates the automated hand-over of the recorded media files from one user to another. There are many social networking applications and web sites that provide digital media sharing and editing functionalities. What differentiates Vissou from other similar applications is the functions and us...

  12. BioPortal: enhanced functionality via new Web services from the National Center for Biomedical Ontology to access and use ontologies in software applications.

    Science.gov (United States)

    Whetzel, Patricia L; Noy, Natalya F; Shah, Nigam H; Alexander, Paul R; Nyulas, Csongor; Tudorache, Tania; Musen, Mark A

    2011-07-01

    The National Center for Biomedical Ontology (NCBO) is one of the National Centers for Biomedical Computing funded under the NIH Roadmap Initiative. Contributing to the national computing infrastructure, NCBO has developed BioPortal, a web portal that provides access to a library of biomedical ontologies and terminologies (http://bioportal.bioontology.org) via the NCBO Web services. BioPortal enables community participation in the evaluation and evolution of ontology content by providing features to add mappings between terms, to add comments linked to specific ontology terms and to provide ontology reviews. The NCBO Web services (http://www.bioontology.org/wiki/index.php/NCBO_REST_services) enable this functionality and provide a uniform mechanism to access ontologies from a variety of knowledge representation formats, such as Web Ontology Language (OWL) and Open Biological and Biomedical Ontologies (OBO) format. The Web services provide multi-layered access to the ontology content, from getting all terms in an ontology to retrieving metadata about a term. Users can easily incorporate the NCBO Web services into software applications to generate semantically aware applications and to facilitate structured data collection.
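
    A minimal sketch of consuming such REST services from an application (the host and API-key header follow the public BioPortal documentation as generally known; verify both before relying on them), in Python:

        # List a few ontologies from a BioPortal-style REST service.
        import requests

        API_KEY = "YOUR-API-KEY"  # BioPortal issues per-user keys
        resp = requests.get(
            "https://data.bioontology.org/ontologies",
            headers={"Authorization": f"apikey token={API_KEY}"},
            timeout=30,
        )
        resp.raise_for_status()
        for ontology in resp.json()[:5]:
            print(ontology.get("acronym"), "-", ontology.get("name"))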

  13. MyLabStocks: a web-application to manage molecular biology materials.

    Science.gov (United States)

    Chuffart, Florent; Yvert, Gaël

    2014-05-01

    Laboratory stocks are the hardware of research. They must be stored and managed with minimum loss of material and information. Plasmids, oligonucleotides and strains are regularly exchanged between collaborators within and between laboratories. Managing and sharing information about every item is crucial for retrieval of reagents, for planning experiments and for reproducing past experimental results. We have developed a web-based application to manage stocks commonly used in a molecular biology laboratory. Its functionalities include user-defined privileges, visualization of plasmid maps directly from their sequence and the capacity to search items from fields of annotation or directly from a query sequence using BLAST. It is designed to handle records of plasmids, oligonucleotides, yeast strains, antibodies, pipettes and notebooks. Based on PHP/MySQL, it can easily be extended to handle other types of stocks and it can be installed on any server architecture. MyLabStocks is freely available from: https://forge.cbp.ens-lyon.fr/redmine/projects/mylabstocks under an open source licence. © 2014 Laboratoire de Biologie Moleculaire de la Cellule CNRS. Yeast published by John Wiley & Sons, Ltd.

  14. Development of grid-like applications for public health using Web 2.0 mashup techniques.

    Science.gov (United States)

    Scotch, Matthew; Yip, Kevin Y; Cheung, Kei-Hoi

    2008-01-01

    Development of public health informatics applications often requires the integration of multiple data sources. This process can be challenging due to issues such as different file formats, schemas, naming systems, and having to scrape the content of web pages. A potential solution to these system development challenges is the use of Web 2.0 technologies. In general, Web 2.0 technologies are new internet services that encourage and value information sharing and collaboration among individuals. In this case report, we describe the development and use of Web 2.0 technologies, including Yahoo! Pipes, within a public health application that integrates animal, human, and temperature data to assess the risk of West Nile Virus (WNV) outbreaks. The results of development and testing suggest that while Web 2.0 applications are reasonable environments for rapid prototyping, they are not mature enough for large-scale public health data applications. The application, in fact a "system of systems," often failed due to varied timeouts for application response across web sites and services, internal caching errors, and software added to web sites by administrators to manage the load on their servers. In spite of these concerns, the results of this study demonstrate the potential value of grid computing and Web 2.0 approaches in public health informatics.

  15. Research of web application based on B/S structure testing

    International Nuclear Information System (INIS)

    Ou Ge; Zhang Hongmei; Song Liming

    2007-01-01

    Software testing is a very important method used to assure the quality of Web applications. With the fast development of Web applications, the old testing techniques can no longer satisfy the requirements. Because of this, people have begun to classify the different parts of an application, find out which content can be tested by the test tools, and study the structure of testing to enhance its efficiency. This paper analyses testing based on the features of Web applications, sums up the testing methods and gives some improvements of them. (authors)

  16. Web application security is a stack how to CYA (cover your apps) completely

    CERN Document Server

    Mac Vittie, Lori

    2015-01-01

    The web application stack - a growing threat vector. Understand the threat and learn how to defend your organisation. This book is intended for application developers, system administrators and operators, as well as networking professionals who need a comprehensive top-level view of web application security in order to better defend and protect both the 'web' and the 'application' against potential attacks. This book examines the most common, fundamental attack vectors and shows readers the defence techniques used to combat them. Contents: Introduction; Attack Surface; Threat Vectors; Threat Mitigation

  17. Developing Dynamic Single Page Web Applications Using Meteor : Comparing JavaScript Frameworks: Blaze and React

    OpenAIRE

    Yetayeh, Asabeneh

    2017-01-01

    This paper studies Meteor, a JavaScript full-stack framework used to develop interactive single-page web applications. Meteor allows building web applications entirely in JavaScript. Meteor uses Blaze, React or AngularJS as a view layer, and Node.js and MongoDB as a back-end. The main purpose of this study is to compare the performance of Blaze and React. Multi-user Blaze and React web applications with similar HTML and CSS were developed. Both applications were deployed on Heroku’s w...

  18. A Web of applicant attraction: person-organization fit in the context of Web-based recruitment.

    Science.gov (United States)

    Dineen, Brian R; Ash, Steven R; Noe, Raymond A

    2002-08-01

    Applicant attraction was examined in the context of Web-based recruitment. A person-organization (P-O) fit framework was adopted to examine how the provision of feedback to individuals regarding their potential P-O fit with an organization related to attraction. Objective and subjective P-O fit, agreement with fit feedback, and self-esteem also were examined in relation to attraction. Results of an experiment that manipulated fit feedback level after a self-assessment provided by a fictitious company Web site found that both feedback level and objective P-O fit were positively related to attraction. These relationships were fully mediated by subjective P-O fit. In addition, attraction was related to the interaction of objective fit, feedback, and agreement and objective fit, feedback, and self-esteem. Implications and future Web-based recruitment research directions are discussed.

  19. Key Technologies and Applications of Satellite and Sensor Web-coupled Real-time Dynamic Web Geographic Information System

    Directory of Open Access Journals (Sweden)

    CHEN Nengcheng

    2017-10-01

    Full Text Available The geospatial information service has long failed to reflect the live status of monitored spots and to meet the needs of integrated monitoring and real-time information. To tackle the problems in observation sharing and in the integrated management of space-borne, air-borne, and ground-based platforms, as well as the efficient service of spatio-temporal information, an observation sharing model was proposed. The key technologies in a real-time dynamic geographical information system (GIS), including maximum spatio-temporal-coverage-based optimal layout of an earth-observation sensor web, task-driven and feedback-based control, real-time access to streaming observations, dynamic simulation, and warning and decision support, were detailed. A real-time dynamic web geographical information system (WebGIS) named GeoSensor, and its applications in sensing and managing spatio-temporal information of the Yangtze River basin, including navigation, flood prevention, and power generation, were also introduced.

  20. THE DIFFERENCE BETWEEN DEVELOPING SINGLE PAGE APPLICATION AND TRADITIONAL WEB APPLICATION BASED ON MECHATRONICS ROBOT LABORATORY ONAFT APPLICATION

    Directory of Open Access Journals (Sweden)

    V. Solovei

    2018-04-01

    Full Text Available Today most desktop and mobile applications have analogues in the form of web-based applications. With the evolution of development and web technologies, web applications have grown to rival desktop applications in functionality. A web application consists of two parts: the client part and the server part. The client part is responsible for providing the user with visual information through the browser. The server part is responsible for processing and storing data. MPAs (multiple-page applications) appeared simultaneously with the Internet and work in a "traditional" way: every change, e.g. displaying data or submitting data back to the server, requests a new page. With the advent of AJAX, MPAs learned to load not the whole page but only a part of it, which eventually led to the appearance of the SPA. SPA (single page application) is a development principle in which only one page is transferred to the client part and content is loaded into a certain part of the page without reloading it, which speeds up the application and simplifies the user experience to the level of desktop applications. Based on the SPA principle, the Mechatronics Robot Laboratory ONAFT application was designed to automate the management process. The application implements a client-server architecture. The server part consists of a RESTful API, which provides unified access to the application functionality, and a database for storing information. Since the client part is a SPA, this reduces the load on the connection to the server and improves the user experience.

  1. Efficient development of web applications for remote participation using Ruby on Rails

    International Nuclear Information System (INIS)

    Emoto, M.; Yoshida, M.; Iwata, C.; Inagaki, S.; Nagayama, Y.

    2010-01-01

    Large-scale experiments such as ITER require international collaboration, and remote participation plays an important role in carrying out such experiments. Web-based applications are useful tools for scientists participating in experiments remotely using personal computers. Since the participants typically download web-based applications to their computer each time they access the web servers, they do not need to install extra software in order to use these applications. In addition, the application developers do not need to distribute the latest program files each time they are modified, thus reducing maintenance costs for remote participation systems. For these reasons, we have been developing web-based applications for the LHD experiments at NIFS. In a previous study, we showed the benefits of using Ruby on Rails (RoR) to develop web-based applications for analysis code. We thought this approach would also be useful for developing applications for remote participation. Therefore, we have developed several web-based applications using RoR for participating in the LHD experiments. These applications include a data viewer and a scheduler of experiments. The main reason to adopt RoR for this purpose is its efficiency for developing web-based applications. For example, to develop a data viewer, we used an existing program running on an X-Windows System. Using RoR, we could minimize the modifications of the existing programs to add web interfaces. In this paper, we will report a web-based application developed using RoR for the LHD experiment. We will also discuss the benefits of using RoR in developing remote participation tools.

  2. Mindcontrol: A web application for brain segmentation quality control.

    Science.gov (United States)

    Keshavan, Anisha; Datta, Esha; M McDonough, Ian; Madan, Christopher R; Jordan, Kesshi; Henry, Roland G

    2018-04-15

    Tissue classification plays a crucial role in the investigation of normal neural development, brain-behavior relationships, and the disease mechanisms of many psychiatric and neurological illnesses. Ensuring the accuracy of tissue classification is important for quality research and, in particular, the translation of imaging biomarkers to clinical practice. Assessment with the human eye is vital to correct various errors inherent to all currently available segmentation algorithms. Manual quality assurance becomes methodologically difficult at a large scale - a problem of increasing importance as the number of data sets is on the rise. To make this process more efficient, we have developed Mindcontrol, an open-source web application for the collaborative quality control of neuroimaging processing outputs. The Mindcontrol platform consists of a dashboard to organize data, descriptive visualizations to explore the data, an imaging viewer, and an in-browser annotation and editing toolbox for data curation and quality control. Mindcontrol is flexible and can be configured for the outputs of any software package in any data organization structure. Example configurations for three large, open-source datasets are presented: the 1000 Functional Connectomes Project (FCP), the Consortium for Reliability and Reproducibility (CoRR), and the Autism Brain Imaging Data Exchange (ABIDE) Collection. These demo applications link descriptive quality control metrics, regional brain volumes, and thickness scalars to a 3D imaging viewer and editing module, resulting in an easy-to-implement quality control protocol that can be scaled for any size and complexity of study. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.

  3. AmWeb: a novel interactive web tool for antimicrobial resistance surveillance, applicable to both community and hospital patients.

    Science.gov (United States)

    Ironmonger, Dean; Edeghere, Obaghe; Gossain, Savita; Bains, Amardeep; Hawkey, Peter M

    2013-10-01

    Antimicrobial resistance (AMR) is recognized as one of the most significant threats to human health. Local and regional AMR surveillance enables the monitoring of temporal changes in susceptibility to antibiotics and can provide prescribing guidance to healthcare providers to improve patient management and help slow the spread of antibiotic resistance in the community. There is currently a paucity of routine community-level AMR surveillance information. The HPA in England sponsored the development of an AMR surveillance system (AmSurv) to collate local laboratory reports. In the West Midlands region of England, routine reporting of AMR data has been established via the AmSurv system from all diagnostic microbiology laboratories. The HPA Regional Epidemiology Unit developed a web-enabled database application (AmWeb) to provide microbiologists, pharmacists and other stakeholders with timely access to AMR data using user-configurable reporting tools. AmWeb was launched in the West Midlands in January 2012 and is used by microbiologists and pharmacists to monitor resistance profiles, perform local benchmarking and compile data for infection control reports. AmWeb is now being rolled out to all English regions. It is expected that AmWeb will become a valuable tool for monitoring the threat from newly emerging or currently circulating resistant organisms and helping antibiotic prescribers to select the best treatment options for their patients.

  4. Possibilities of contactless control of web map applications by sight

    Directory of Open Access Journals (Sweden)

    Rostislav Netek

    2012-03-01

    Full Text Available This paper assesses the possibilities of a new approach to controlling on-screen map applications by sight alone. At the Department of Geoinformatics at Palacky University there is a project on the usability of eye tracking systems in the geoinformatics and cartographic fields. An eye tracking system is a device for measuring eye/gaze positions and eye/gaze movement ("where we are looking"). There are a number of methods and outputs, but the most common are "heat-maps" of intensity and/or time. This method was used in the first part of the study, in which a number of common web map portals were analyzed, especially the distribution of their tools and functions on the screen. The aim of the research is to localize, by means of heat-maps, the best distribution of control tools for moving the map (the "pan" function). It can reveal how sensitive people are to the placement of control tools in different web pages and platforms. It is a valuable experience to compare accurate survey data with personal interpretation and knowledge. Based on these results, the next step is the design of "control tools" commanded by the eye-tracking device. Rectangular areas located on the edges of the map (AOI – areas of interest) were selected, each with a special function and a defined time delay. When the user fixates on one of these areas, the map automatically pans toward the corresponding edge, and the time delay prevents accidental movement. The technology for recording eye movements on the screen offers this option: if the layout and function controls of the map are properly defined, the two systems only need to be connected. At this moment there is a technical constraint: the solution for movement control is based on real-time data transmission between the eye-tracking device output and a converter, and real-time transfer is not supported by every SMI (SensoMotoric Instruments) device. More precisely, it is a problem of money, because the eye-tracking device and every

  5. Application of simulated annealing for simultaneous retrieval of particle size distribution and refractive index

    International Nuclear Information System (INIS)

    Ma, Lin; Kranendonk, Laura; Cai, Weiwei; Zhao, Yan; Baba, Justin S.

    2009-01-01

    This paper describes the application of the simulated annealing technique for the simultaneous retrieval of particle size distribution and refractive index based on polarization modulated scattering (PMS) measurements. The PMS technique is a well-established method to measure multiple elements of the Mueller scattering matrix. However, the inference of the scatterers' properties (e.g., the size distribution function and refractive index) from such measurements involves solving an ill-conditioned inverse problem. In this paper, a new inversion technique is demonstrated to infer particle properties from PMS measurements. The new technique formulates the inverse problem as a minimization problem, which is then solved by simulated annealing. Both numerical and experimental investigations of the new inversion technique are presented. The results demonstrate the robustness and reliability of the new algorithm and support its expanded application in scientific and technological areas involving particulates/aerosols.
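
    The retrieval above casts parameter estimation as minimization of a measurement misfit. The following minimal sketch (not the authors' code) shows the generic simulated-annealing loop applied to a toy two-parameter misfit; the forward model, parameter names and all numbers are illustrative placeholders.

```python
import math
import random

def simulated_annealing(objective, x0, step, t0=1.0, cooling=0.95, iters=2000):
    """Generic simulated-annealing minimizer.

    objective: maps a parameter vector to a scalar misfit
    x0:        initial guess (list of floats)
    step:      per-parameter perturbation scale
    """
    x, fx = x0[:], objective(x0)
    best, fbest = x[:], fx
    t = t0
    for _ in range(iters):
        # Propose a random perturbation of the current solution.
        cand = [xi + random.uniform(-s, s) for xi, s in zip(x, step)]
        fc = objective(cand)
        # Always accept improvements; accept worse moves with
        # probability exp(-delta/t) so the search can escape local minima.
        if fc < fx or random.random() < math.exp(-(fc - fx) / t):
            x, fx = cand, fc
            if fx < fbest:
                best, fbest = x[:], fx
        t *= cooling  # geometric cooling schedule
    return best, fbest

# Hypothetical misfit: squared error between "measured" scattering values
# and a toy forward model parameterized by (mean size, refractive index).
measured = [0.42, 0.17]

def forward(params):
    mean_size, ref_index = params
    return [0.1 * mean_size * ref_index, 0.05 * mean_size + 0.02 * ref_index]

def misfit(params):
    return sum((m - d) ** 2 for m, d in zip(forward(params), measured))

params, err = simulated_annealing(misfit, x0=[1.0, 1.4], step=[0.1, 0.01])
```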

  6. Development of Grid-like Applications for Public Health Using Web 2.0 Mashup Techniques

    OpenAIRE

    Scotch, Matthew; Yip, Kevin Y.; Cheung, Kei-Hoi

    2008-01-01

    Development of public health informatics applications often requires the integration of multiple data sources. This process can be challenging due to issues such as different file formats, schemas, naming systems, and having to scrape the content of web pages. A potential solution to these system development challenges is the use of Web 2.0 technologies. In general, Web 2.0 technologies are new internet services that encourage and value information sharing and collaboration among individuals....

  7. Information Retrieval and Graph Analysis Approaches for Book Recommendation.

    Science.gov (United States)

    Benkoussas, Chahinez; Bellot, Patrice

    2015-01-01

    A combination of multiple information retrieval approaches is proposed for the purpose of book recommendation. In this paper, book recommendation is based on complex user queries. We used different theoretical retrieval models, probabilistic models such as InL2 (a Divergence from Randomness model) and a language model, and tested their interpolated combination. Graph analysis algorithms such as PageRank have been successful in Web environments. We consider the application of this algorithm in a new retrieval approach to a related-document network composed of social links. We call the network constructed from documents and the social information provided by each of them a Directed Graph of Documents (DGD). Specifically, this work tackles the problem of book recommendation in the context of the INEX (Initiative for the Evaluation of XML retrieval) Social Book Search track. A series of reranking experiments demonstrates that combining retrieval models yields significant improvements in terms of standard ranked retrieval metrics. These results extend the applicability of link analysis algorithms to different environments.
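
    To make the graph-analysis step concrete, here is a minimal, self-contained PageRank sketch over a toy "Directed Graph of Documents", followed by a simple linear reranking of a retrieval run. The graph, scores and the 0.7/0.3 mixing weights are illustrative assumptions, not the paper's actual configuration.

```python
def pagerank(graph, damping=0.85, iters=50):
    """Iterative PageRank over {node: [outgoing neighbours]} -> {node: score}."""
    nodes = list(graph)
    n = len(nodes)
    rank = {u: 1.0 / n for u in nodes}
    for _ in range(iters):
        new = {u: (1.0 - damping) / n for u in nodes}
        for u, outs in graph.items():
            if outs:
                share = damping * rank[u] / len(outs)
                for v in outs:
                    new[v] += share
            else:
                # Dangling node: spread its rank uniformly over all nodes.
                for v in nodes:
                    new[v] += damping * rank[u] / n
        rank = new
    return rank

# Toy DGD: book ids connected by social links.
dgd = {"b1": ["b2", "b3"], "b2": ["b3"], "b3": ["b1"], "b4": []}
scores = pagerank(dgd)

# Rerank an initial retrieval run by interpolating retrieval score and PageRank.
run = {"b1": 2.1, "b2": 1.7, "b3": 1.5}
reranked = sorted(run, key=lambda d: 0.7 * run[d] + 0.3 * scores[d], reverse=True)
```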

  8. The 2nd DBCLS BioHackathon: interoperable bioinformatics Web services for integrated applications

    Directory of Open Access Journals (Sweden)

    Katayama Toshiaki

    2011-08-01

    Full Text Available Abstract Background The interaction between biological researchers and the bioinformatics tools they use is still hampered by incomplete interoperability between such tools. To ensure interoperability initiatives are effectively deployed, end-user applications need to be aware of, and support, best practices and standards. Here, we report on an initiative in which software developers and genome biologists came together to explore and raise awareness of these issues: BioHackathon 2009. Results Developers in attendance came from diverse backgrounds, with experts in Web services, workflow tools, text mining and visualization. Genome biologists provided expertise and exemplar data from the domains of sequence and pathway analysis and glyco-informatics. One goal of the meeting was to evaluate the ability to address real world use cases in these domains using the tools that the developers represented. This resulted in i) a workflow to annotate 100,000 sequences from an invertebrate species; ii) an integrated system for analysis of the transcription factor binding sites (TFBSs) enriched based on differential gene expression data obtained from a microarray experiment; iii) a workflow to enumerate putative physical protein interactions among enzymes in a metabolic pathway using protein structure data; iv) a workflow to analyze glyco-gene-related diseases by searching for human homologs of glyco-genes in other species, such as fruit flies, and retrieving their phenotype-annotated SNPs. Conclusions Beyond deriving prototype solutions for each use-case, a second major purpose of the BioHackathon was to highlight areas of insufficiency. We discuss the issues raised by our exploration of the problem/solution space, concluding that there are still problems with the way Web services are modeled and annotated, including: i) the absence of several useful data or analysis functions in the Web service "space"; ii) the lack of documentation of methods; iii) lack of

  9. The 2nd DBCLS BioHackathon: interoperable bioinformatics Web services for integrated applications

    Science.gov (United States)

    2011-01-01

    Background The interaction between biological researchers and the bioinformatics tools they use is still hampered by incomplete interoperability between such tools. To ensure interoperability initiatives are effectively deployed, end-user applications need to be aware of, and support, best practices and standards. Here, we report on an initiative in which software developers and genome biologists came together to explore and raise awareness of these issues: BioHackathon 2009. Results Developers in attendance came from diverse backgrounds, with experts in Web services, workflow tools, text mining and visualization. Genome biologists provided expertise and exemplar data from the domains of sequence and pathway analysis and glyco-informatics. One goal of the meeting was to evaluate the ability to address real world use cases in these domains using the tools that the developers represented. This resulted in i) a workflow to annotate 100,000 sequences from an invertebrate species; ii) an integrated system for analysis of the transcription factor binding sites (TFBSs) enriched based on differential gene expression data obtained from a microarray experiment; iii) a workflow to enumerate putative physical protein interactions among enzymes in a metabolic pathway using protein structure data; iv) a workflow to analyze glyco-gene-related diseases by searching for human homologs of glyco-genes in other species, such as fruit flies, and retrieving their phenotype-annotated SNPs. Conclusions Beyond deriving prototype solutions for each use-case, a second major purpose of the BioHackathon was to highlight areas of insufficiency. We discuss the issues raised by our exploration of the problem/solution space, concluding that there are still problems with the way Web services are modeled and annotated, including: i) the absence of several useful data or analysis functions in the Web service "space"; ii) the lack of documentation of methods; iii) lack of compliance with the SOAP

  10. A web-based application for initial screening of living kidney donors: development, implementation and evaluation.

    Science.gov (United States)

    Moore, D R; Feurer, I D; Zavala, E Y; Shaffer, D; Karp, S; Hoy, H; Moore, D E

    2013-02-01

    Most centers utilize phone or written surveys to screen candidates who self-refer to be living kidney donors. To increase efficiency and reduce resource utilization, we developed a web-based application to screen kidney donor candidates. The aim of this study was to evaluate the use of this web-based application. Method and time of referral were tabulated and descriptive statistics summarized demographic characteristics. Time series analyses evaluated use over time. Between January 1, 2011 and March 31, 2012, 1200 candidates self-referred to be living kidney donors at our center. Eight hundred one candidates (67%) completed the web-based survey and 399 (33%) completed a phone survey. Thirty-nine percent of donors accessed the application on nights and weekends. Post-implementation of the web-based application, there was a statistically significant increase in use of the web-based application as opposed to telephone contact. Also, there was a significant increase (p = 0.025) in the total number of self-referrals post-implementation, from 61 to 116 per month. An interactive web-based application is an effective strategy for the initial screening of donor candidates. The web-based application increased the ability to interface with donors, process them efficiently and ultimately increased donor self-referral at our center. © Copyright 2012 The American Society of Transplantation and the American Society of Transplant Surgeons.

  11. SBMLmod: a Python-based web application and web service for efficient data integration and model simulation.

    Science.gov (United States)

    Schäuble, Sascha; Stavrum, Anne-Kristin; Bockwoldt, Mathias; Puntervoll, Pål; Heiland, Ines

    2017-06-24

    Systems Biology Markup Language (SBML) is the standard model representation and description language in systems biology. Enriching and analysing systems biology models by integrating the multitude of available data increases the predictive power of these models. This may be a daunting task, which commonly requires bioinformatic competence and scripting. We present SBMLmod, a Python-based web application and service that automates the integration of high-throughput data into SBML models. Subsequent steady state analysis is readily accessible via the web service COPASIWS. We illustrate the utility of SBMLmod by integrating gene expression data from different healthy tissues as well as from a cancer dataset into a previously published model of mammalian tryptophan metabolism. SBMLmod is a user-friendly platform for model modification and simulation. The web application is available at http://sbmlmod.uit.no , whereas the WSDL definition file for the web service is accessible via http://sbmlmod.uit.no/SBMLmod.wsdl . Furthermore, the entire package can be downloaded from https://github.com/MolecularBioinformatics/sbml-mod-ws . We envision that SBMLmod will make automated model modification and simulation available to a broader research community.
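
    The record lists a WSDL endpoint for the SBMLmod web service. As a hedged illustration of how such a SOAP service could be explored from Python, the sketch below uses the third-party zeep library; the operation names are deliberately not assumed here and must be read from the WSDL dump.

```python
# A minimal sketch, assuming the zeep SOAP library is installed
# (pip install zeep). The WSDL URL comes from the record itself;
# operation names and parameters are whatever the WSDL declares.
from zeep import Client

client = Client("http://sbmlmod.uit.no/SBMLmod.wsdl")
client.wsdl.dump()  # print the services and operations the WSDL declares
# A declared operation could then be invoked as client.service.<OperationName>(...)
```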

  12. An Introduction to Testing Web Applications with twill and Selenium

    CERN Document Server

    Brown, Titus; Huggins, Jason

    2007-01-01

    This Short Cut is an introduction to building automated web tests using two tools, twill and Selenium. twill is a simple web scripting language that can be used to automate web tests, while Selenium is a web testing framework that runs in any browser and can be used to test complex web sites that make extensive use of JavaScript. The best way to use this Short Cut is to run through the examples. We expect that within an hour you can start writing your own functional tests in either twill or Selenium, and within a day you will understand most, if not all, of the possibilities and the limitations of these tools.
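
    As a flavour of the browser-driven tests the Short Cut teaches, here is a minimal Selenium sketch in Python (Selenium 4 API); the target page and assertions are illustrative, and a matching browser driver such as geckodriver must be installed.

```python
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Firefox()  # requires geckodriver on PATH
try:
    driver.get("https://example.org/")
    # Functional checks: page title and heading match expectations.
    assert "Example Domain" in driver.title
    heading = driver.find_element(By.TAG_NAME, "h1")
    assert "Example Domain" in heading.text
finally:
    driver.quit()
```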

  13. Towards the multilingual semantic web principles, methods and applications

    CERN Document Server

    Buitelaar, Paul

    2014-01-01

    To date, the relation between multilingualism and the Semantic Web has not yet received enough attention in the research community. One major challenge for the Semantic Web community is to develop architectures, frameworks and systems that can help in overcoming national and language barriers, facilitating equal access to information produced in different cultures and languages. As such, this volume aims at documenting the state-of-the-art with regard to the vision of a Multilingual Semantic Web, in which semantic information will be accessible in and across multiple languages. The Multiling

  14. Search-based Tier Assignment for Optimising Offline Availability in Multi-tier Web Applications

    OpenAIRE

    Philips, Laure; De Koster, Joeri; De Meuter, Wolfgang; De Roover, Coen

    2017-01-01

    Web programmers are often faced with several challenges in the development process of modern, rich internet applications. Technologies for the different tiers of the application have to be selected: a server-side language, a combination of JavaScript, HTML and CSS for the client, and a database technology. Meeting the expectations of contemporary web applications requires even more effort from the developer: many state of the art libraries must be mastered and glued together. This leads to an...

  15. Using Semantic Web Services for Context-Aware Mobile Applications

    OpenAIRE

    Sheshagiri , Mithun; Sadeh , Norman; Gandon , Fabien

    2004-01-01

    One way of overcoming the challenges associated with mobile and pervasive computing environments involves providing users with higher levels of automation. This in turn requires capturing the context within which the user operates. In this paper, we describe ongoing research aimed at leveraging Semantic Web Services in support of context awareness. This includes modeling sources of contextual information as web services that can be automatically discovered and accessed by...

  16. Gender Divide and Acceptance of Collaborative Web 2.0 Applications for Learning in Higher Education

    Science.gov (United States)

    Huang, Wen-Hao David; Hood, Denice Ward; Yoo, Sun Joo

    2013-01-01

    Situated in the gender digital divide framework, this survey study investigated the role of computer anxiety in influencing female college students' perceptions toward Web 2.0 applications for learning. Based on 432 college students' "Web 2.0 for learning" perception ratings collected by relevant categories of "Unified Theory of Acceptance and Use…

  17. Displaying R spatial statistics on Google dynamic maps with web applications created by Rwui

    Science.gov (United States)

    2012-01-01

    Background The R project includes a large variety of packages designed for spatial statistics. Google dynamic maps provide web based access to global maps and satellite imagery. We describe a method for displaying directly the spatial output from an R script on to a Google dynamic map. Methods This is achieved by creating a Java based web application which runs the R script and then displays the results on the dynamic map. In order to make this method easy to implement by those unfamiliar with programming Java based web applications, we have added the method to the options available in the R Web User Interface (Rwui) application. Rwui is an established web application for creating web applications for running R scripts. A feature of Rwui is that all the code for the web application being created is generated automatically so that someone with no knowledge of web programming can make a fully functional web application for running an R script in a matter of minutes. Results Rwui can now be used to create web applications that will display the results from an R script on a Google dynamic map. Results may be displayed as discrete markers and/or as continuous overlays. In addition, users of the web application may select regions of interest on the dynamic map with mouse clicks and the coordinates of the region of interest will automatically be made available for use by the R script. Conclusions This method of displaying R output on dynamic maps is designed to be of use in a number of areas. Firstly it allows statisticians, working in R and developing methods in spatial statistics, to easily visualise the results of applying their methods to real world data. Secondly, it allows researchers who are using R to study health geographics data, to display their results directly onto dynamic maps. Thirdly, by creating a web application for running an R script, a statistician can enable users entirely unfamiliar with R to run R coded statistical analyses of health geographics

  18. CellBase, a comprehensive collection of RESTful web services for retrieving relevant biological information from heterogeneous sources.

    Science.gov (United States)

    Bleda, Marta; Tarraga, Joaquin; de Maria, Alejandro; Salavert, Francisco; Garcia-Alonso, Luz; Celma, Matilde; Martin, Ainoha; Dopazo, Joaquin; Medina, Ignacio

    2012-07-01

    During the past years, the advances in high-throughput technologies have produced an unprecedented growth in the number and size of repositories and databases storing relevant biological data. Today, there is more biological information than ever but, unfortunately, the current status of many of these repositories is far from optimal. Some of the most common problems are that the information is spread out over many small databases; frequently there are different standards among repositories, and some databases are no longer supported or contain information that is too specific and unconnected. In addition, data size is increasingly becoming an obstacle when accessing or storing biological data. All these issues make it very difficult to extract and integrate information from different sources, to analyze experiments, or to access and query this information in a programmatic way. CellBase provides a solution to the growing need for integration by easing access to biological data. CellBase implements a set of RESTful web services that query a centralized database containing the most relevant biological data sources. The database is hosted on our servers and is regularly updated. CellBase documentation can be found at http://docs.bioinfo.cipf.es/projects/cellbase.
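
    To illustrate the programmatic, RESTful access style that CellBase advocates, the sketch below issues a hypothetical gene query with the requests library; the service root and endpoint path are placeholders for illustration, not the documented CellBase API.

```python
# Illustrative only: BASE and the /gene/<symbol> path are hypothetical
# placeholders standing in for a documented REST endpoint.
import requests

BASE = "http://example.org/cellbase/rest"  # hypothetical service root

def fetch_gene(gene_symbol):
    resp = requests.get(f"{BASE}/gene/{gene_symbol}",
                        params={"format": "json"}, timeout=30)
    resp.raise_for_status()  # fail loudly on HTTP errors
    return resp.json()

info = fetch_gene("BRCA2")
```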

  19. A Retrieval Optimized Surveillance Video Storage System for Campus Application Scenarios

    Directory of Open Access Journals (Sweden)

    Shengcheng Ma

    2018-01-01

    Full Text Available This paper investigates and analyzes the characteristics of video data and puts forward a campus surveillance video storage system with the university campus as the specific application environment. To address the challenge of long content-based video retrieval response times, a key-frame index subsystem is designed. The key frames of a video reflect its main content. Extracted from the video, key frames are associated with metadata information to establish the storage index. The key-frame index is used in lookup operations while querying. This method greatly reduces the amount of video data read and effectively improves query efficiency. Building on this, we model the storage system with a stochastic Petri net (SPN) and verify the improvement in query performance by quantitative analysis.
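
    A toy sketch of the indexing idea, assuming key frames are reduced to compact feature signatures so a query touches only the index rather than the raw video; the signature format and file names are invented for illustration.

```python
# Key-frame index: signature -> list of (video_id, offset_seconds).
key_frame_index = {}

def add_key_frame(signature, video_id, offset):
    """Register a key frame extracted from a video with its metadata."""
    key_frame_index.setdefault(signature, []).append((video_id, offset))

def query(signature):
    # Lookup reads only the index, never the stored video data.
    return key_frame_index.get(signature, [])

add_key_frame("sig:entrance-cam:person", "cam12_20180105.mp4", 374.0)
add_key_frame("sig:entrance-cam:person", "cam12_20180106.mp4", 128.5)
print(query("sig:entrance-cam:person"))
```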

  20. Collaboration Expertise in Medicine - No Evidence for Cross-Domain Application from a Memory Retrieval Study.

    Directory of Open Access Journals (Sweden)

    Jan Kiesewetter

    Full Text Available Is there evidence for expertise on collaboration and, if so, is there evidence for cross-domain application? Recall of stimuli was used to measure so-called internal collaboration scripts of novices and experts in two studies. Internal collaboration scripts refer to an individual's knowledge about how to interact with others in a social situation. METHOD—Ten collaboration experts and ten novices of the content domain social science were presented with four pictures of people involved in collaborative activities. The recall texts were coded, distinguishing between superficial and collaboration script information. RESULTS—Experts recalled significantly more collaboration script information (M = 25.20; SD = 5.88) than did novices (M = 13.80; SD = 4.47). Differences in superficial information were not found. Study 2 tested whether the differences found in Study 1 could be replicated. Furthermore, the cross-domain application of internal collaboration scripts was explored. METHOD—Twenty collaboration experts and 20 novices of the content domain medicine were presented with four pictures and four videos of their content domain and a video and picture of another content domain. All stimuli showed collaborative activities typical for the respective content domains. RESULTS—As in Study 1, experts recalled significantly more collaboration script information of their content domain (M = 71.65; SD = 33.23) than did novices (M = 54.25; SD = 15.01). For the novices, no differences were found for the superficial information nor for the retrieval of collaboration script information recalled after the other content domain stimuli. There is evidence for expertise on collaboration in memory tasks. The results show that experts hold substantially more collaboration script information than did novices. Furthermore, the differences between collaboration novices and collaboration experts occurred only in their own content domain, indicating that internal

  1. Collaboration Expertise in Medicine - No Evidence for Cross-Domain Application from a Memory Retrieval Study.

    Science.gov (United States)

    Kiesewetter, Jan; Fischer, Frank; Fischer, Martin R

    2016-01-01

    Is there evidence for expertise on collaboration and, if so, is there evidence for cross-domain application? Recall of stimuli was used to measure so-called internal collaboration scripts of novices and experts in two studies. Internal collaboration scripts refer to an individual's knowledge about how to interact with others in a social situation. METHOD— Ten collaboration experts and ten novices of the content domain social science were presented with four pictures of people involved in collaborative activities. The recall texts were coded, distinguishing between superficial and collaboration script information. RESULTS— Experts recalled significantly more collaboration script information (M = 25.20; SD = 5.88) than did novices (M = 13.80; SD = 4.47). Differences in superficial information were not found. Study 2 tested whether the differences found in Study 1 could be replicated. Furthermore, the cross-domain application of internal collaboration scripts was explored. METHOD— Twenty collaboration experts and 20 novices of the content domain medicine were presented with four pictures and four videos of their content domain and a video and picture of another content domain. All stimuli showed collaborative activities typical for the respective content domains. RESULTS— As in Study 1, experts recalled significantly more collaboration script information of their content domain (M = 71.65; SD = 33.23) than did novices (M = 54.25; SD = 15.01). For the novices, no differences were found for the superficial information nor for the retrieval of collaboration script information recalled after the other content domain stimuli. There is evidence for expertise on collaboration in memory tasks. The results show that experts hold substantially more collaboration script information than did novices. Furthermore, the differences between collaboration novices and collaboration experts occurred only in their own content domain, indicating that internal collaboration scripts

  2. A boosting framework for visuality-preserving distance metric learning and its application to medical image retrieval.

    Science.gov (United States)

    Yang, Liu; Jin, Rong; Mummert, Lily; Sukthankar, Rahul; Goode, Adam; Zheng, Bin; Hoi, Steven C H; Satyanarayanan, Mahadev

    2010-01-01

    Similarity measurement is a critical component in content-based image retrieval systems, and learning a good distance metric can significantly improve retrieval performance. However, despite extensive study, there are several major shortcomings with the existing approaches for distance metric learning that can significantly affect their application to medical image retrieval. In particular, "similarity" can mean very different things in image retrieval: resemblance in visual appearance (e.g., two images that look like one another) or similarity in semantic annotation (e.g., two images of tumors that look quite different yet are both malignant). Current approaches for distance metric learning typically address only one goal without consideration of the other. This is problematic for medical image retrieval where the goal is to assist doctors in decision making. In these applications, given a query image, the goal is to retrieve similar images from a reference library whose semantic annotations could provide the medical professional with greater insight into the possible interpretations of the query image. If the system were to retrieve images that did not look like the query, then users would be less likely to trust the system; on the other hand, retrieving images that appear superficially similar to the query but are semantically unrelated is undesirable because that could lead users toward an incorrect diagnosis. Hence, learning a distance metric that preserves both visual resemblance and semantic similarity is important. We emphasize that, although our study is focused on medical image retrieval, the problem addressed in this work is critical to many image retrieval systems. We present a boosting framework for distance metric learning that aims to preserve both visual and semantic similarities. The boosting framework first learns a binary representation using side information, in the form of labeled pairs, and then computes the distance as a weighted Hamming
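
    The abstract breaks off at "weighted Hamming" distance. A minimal sketch of that final step, under the assumption that images are first mapped to binary codes and each bit position carries a learned weight (the codes and weights below are stand-ins, not the paper's learned model):

```python
def weighted_hamming(code_a, code_b, weights):
    """Sum the weights of the bit positions where the two codes differ."""
    return sum(w for a, b, w in zip(code_a, code_b, weights) if a != b)

query   = [1, 0, 1, 1, 0]
library = {"img1": [1, 0, 0, 1, 0], "img2": [0, 1, 1, 0, 1]}
weights = [0.9, 0.4, 0.7, 0.2, 0.5]  # stand-ins for boosting-learned bit weights

# Retrieve library images in order of increasing weighted Hamming distance.
ranked = sorted(library, key=lambda k: weighted_hamming(query, library[k], weights))
```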

  3. Web Services for public cosmological surveys: the VVDS-CDFS application

    Science.gov (United States)

    Paioro, L.; Garilli, B.; Le Brun, V.; Franzetti, P.; Fumana, M.; Scodeggio, M.

    2007-08-01

    Cosmological surveys (like VVDS, GOODS, DEEP2, COSMOS, etc.) aim at providing a complete census of the universe over a broad redshift range. Often different pieces of information are gathered with different instruments (e.g., spectrographs, HST, X-ray telescopes, etc.) and it is only by correctly assembling and easily manipulating such wide sets of data that astronomers can attempt to describe the universe; many different scientific goals can be tackled by grouping and filtering the different data sets. When dealing with the huge databases resulting from public cosmological surveys, what is needed is: (a) a versatile system of queries, to allow searches by different parameters (like redshifts, magnitude, colors, etc.) according to the specific scientific goal to be tackled; (b) a cross-matching system to verify or redefine the identification of the sources; and (c) a data products retrieving system to download data related images and spectra. The Virtual Observatory Alliance defines a set of services which can satisfy the needs described above, exploiting Web Services technology. Having in mind the exploitation of cosmological surveys, we have implemented what we consider the most fundamental VO Web Services for our scientific interests: ConeSearch (retrieves physical data values from a cone centered on one point in the sky - the simplest query), SkyNode (allows filtering on the physical quantities in the database in order to select a well defined data subset), SIAP (retrieves all the images contained in a sky region of interest), SSAP (retrieves 1D spectra). Our testing bench is the VVDS-CDFS data set, made public in 2004, which contains photometric and spectroscopic information for 1599 sources (Le Fèvre et al., 2004, A&A, 428, 1043). On this data set, we have implemented and published on the US NVO registry the first three services mentioned above, to demonstrate the viability of this approach and its usefulness to the astronomical community. Implementation of SSAP
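
    ConeSearch is the simplest of the listed services: an HTTP GET carrying the standard IVOA Simple Cone Search parameters RA, DEC and SR (all in degrees), returning a VOTable. A minimal client sketch; the service URL is a placeholder and the coordinates merely approximate the CDFS field.

```python
import requests

def cone_search(service_url, ra, dec, sr):
    """Query a cone of radius sr degrees around (ra, dec); returns VOTable XML."""
    resp = requests.get(service_url,
                        params={"RA": ra, "DEC": dec, "SR": sr}, timeout=60)
    resp.raise_for_status()
    return resp.text

# Placeholder URL; RA/DEC roughly at the Chandra Deep Field South.
votable_xml = cone_search("http://example.org/vvds-cdfs/conesearch",
                          ra=53.1, dec=-27.8, sr=0.05)
```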

  4. C#: Connecting a Mobile Application to Oracle Server via Web Services

    Directory of Open Access Journals (Sweden)

    Daniela Ilea

    2008-01-01

    Full Text Available This article is focused on mobile development using Visual Studio 2005, web services and their connection to Oracle Server, aiming to help programmers build simple and useful mobile applications.

  5. Nuun: A System for Developing Platform and Browser Independent Arabic Web Applications

    National Research Council Canada - National Science Library

    Habash, Nizar Y

    2001-01-01

    .... Arabic web applications are far from this state of ubiquitous support. Full support is available only under Arabic Windows, while little support is provided under other versions of Windows, and no support at all under UNIX systems...

  6. Launch of Village Blue Web Application Shares Water Monitoring Data with Baltimore Community

    Science.gov (United States)

    EPA and the U.S. Geological Survey (USGS) have launched their mobile-friendly web application for Village Blue, a project that provides real-time water quality monitoring data to the Baltimore, Maryland community.

  7. Real-time web application development with Vert.x 2.0

    CERN Document Server

    Parviainen, Tero

    2013-01-01

    A quick, clear, and concise tutorial-guide-based approach that helps you to develop a web application based on Vert.x. Real-time Web Application Development with Vert.x is written for web developers who want to take the next step and dive into real-time web application development. This book uses JavaScript (and some Java) to introduce the Vert.x platform, so basic JavaScript knowledge is expected. If you're planning to write your applications using some of the other Vert.x languages, all the techniques and concepts will translate to them directly. All you need to do is refer to the Vert.x API r

  8. Comprehensive NASA Cis-Lunar Earth Moon Libration Orbit Reference and Web Application

    Data.gov (United States)

    National Aeronautics and Space Administration — This work will provide research and trajectory design analysis to develop a NASA Cis-Lunar / Earth-Moon Libration Orbit Reference and Web Application. A compendium...

  9. AMP: a science-driven web-based application for the TeraGrid

    Science.gov (United States)

    Woitaszek, M.; Metcalfe, T.; Shorrock, I.

    The Asteroseismic Modeling Portal (AMP) provides a web-based interface for astronomers to run and view simulations that derive the properties of Sun-like stars from observations of their pulsation frequencies. In this paper, we describe the architecture and implementation of AMP, highlighting the lightweight design principles and tools used to produce a functional fully-custom web-based science application in less than a year. Targeted as a TeraGrid science gateway, AMP's architecture and implementation are intended to simplify its orchestration of TeraGrid computational resources. AMP's web-based interface was developed as a traditional standalone database-backed web application using the Python-based Django web development framework, allowing us to leverage the Django framework's capabilities while cleanly separating the user interface development from the grid interface development. We have found this combination of tools flexible and effective for rapid gateway development and deployment.
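
    As a rough illustration of the "database-backed web application" pattern AMP follows, here is a minimal Django sketch; the model and field names are invented for illustration and are not AMP's actual schema. In a real project the model lives in an app's models.py and the view is wired up in urls.py.

```python
# Illustrative Django app code (models.py / views.py); not AMP's schema.
from django.db import models
from django.http import JsonResponse

class SimulationJob(models.Model):
    star_name = models.CharField(max_length=100)
    status = models.CharField(max_length=20, default="queued")
    submitted = models.DateTimeField(auto_now_add=True)

def job_list(request):
    # Serve the 20 most recent jobs as JSON for the portal front end.
    jobs = SimulationJob.objects.order_by("-submitted")[:20]
    return JsonResponse(
        {"jobs": [{"star": j.star_name, "status": j.status} for j in jobs]}
    )
```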

  10. Use and utility of Web-based residency program information: a survey of residency applicants.

    Science.gov (United States)

    Embi, Peter J; Desai, Sima; Cooney, Thomas G

    2003-01-01

    The Internet has become essential to the residency application process. In recent years, applicants and residency programs have used the Internet-based tools of the National Residency Matching Program (NRMP, the Match) and the Electronic Residency Application Service (ERAS) to process and manage application and Match information. In addition, many residency programs have moved their recruitment information from printed brochures to Web sites. Despite this change, little is known about how applicants use residency program Web sites and what constitutes optimal residency Web site content, information that is critical to developing and maintaining such sites. To study the use and perceived utility of Web-based residency program information by surveying applicants to an internal medicine program. Our sample population was the applicants to the Oregon Health & Science University Internal Medicine Residency Program who were invited for an interview. We solicited participation using the group e-mail feature available through the Electronic Residency Application Service Post-Office application. To minimize the possibility for biased responses, the study was confined to the period between submission of National Residency Matching Program rank-order lists and release of Match results. Applicants could respond using an anonymous Web-based form or by reply to the e-mail solicitation. We tabulated responses, calculated percentages for each, and performed a qualitative analysis of comments. Of the 431 potential participants, 218 responded (51%) during the study period. Ninety-nine percent reported comfort browsing the Web; 52% accessed the Web primarily from home. Sixty-nine percent learned about residency Web sites primarily from residency-specific directories while 19% relied on general directories. Eighty percent found these sites helpful when deciding where to apply, 69% when deciding where to interview, and 36% when deciding how to rank order programs for the Match. Forty

  11. A NEW APPROACH FOR IMPROVING QUALITY OF WEB APPLICATIONS USING DESIGN PATTERNS

    OpenAIRE

    J. Srikanth R. Savithri

    2012-01-01

    Design patterns are descriptions of communicating objects and classes that are customized to solve a general design problem in a particular context; they describe the problem and its corresponding solution. Professional software engineers routinely use design patterns to introduce abstractions in software, and in this way they can build complex web applications. The right adoption of design patterns while designing web applications can promote factors like reusability and consistency of th...

  12. Analysis and Design of Web-Based Database Application for Culinary Community

    OpenAIRE

    Huda, Choirul; Awang, Osel Dharmawan; Raymond, Raymond; Raynaldi, Raynaldi

    2017-01-01

    This research is motivated by the rapid development of the culinary field and of information technology. Difficulties in communicating with culinary experts and in documenting recipes make proper media support very important. Therefore, a web-based database application for the public is important to help the culinary community with communication, searching and recipe management. The aim of the research was to design a web-based database application that could be used as social media for the cu...

  13. Soil food web structure after wood ash application

    DEFF Research Database (Denmark)

    Mortensen, L. H.; Qin, J.; Krogh, Paul Henning

    with varying intervals and subsequently analyzed. The food web analysis includes several trophic levels; bacteria/fungi, protozoa, nematodes, enchytraeids, microarthropods and arthropods. The initial results indicate that bacteria and protozoa are stimulated in the uppermost soil layer (0-3 cm) two months...... can facilitate an increase in the bacteria to fungi ratio with possible cascading effects for the soil food web structure. This is tested by applying ash of different concentrations to experimental plots in a coniferous forest. During the course of the project soil samples will be collected...

  14. Developing web map application based on user centered design

    Directory of Open Access Journals (Sweden)

    Petr Voldan

    2012-03-01

    Full Text Available User centred design is an approach to the development of any kind of human product in which the main idea is to create the product for the end user. This article presents the user centred design method in developing web mapping services. The method can be split into four main phases – user research, creation of concepts, development with usability research, and launch of the product. The article describes each of these phases, with the aim of providing guidelines for developers and, primarily, of improving the usability of web mapping services.

  15. PROPUESTA DE MODELO EN CINCO CAPAS PARA APLICACIONES WEB | PROPOSAL OF A FIVE LAYERS MODEL FOR WEB APPLICATIONS

    Directory of Open Access Journals (Sweden)

    Loly Valentina Gómez Fermín

    2018-04-01

    Full Text Available The ability to create cross-platform programs is a goal for many software developers; however, it is complex to achieve. At present, web applications have been introduced as the chosen way to achieve this desired goal, but when a project is tied to a software or hardware platform it is, inevitably, subject to obsolescence. This is not unique to the client application; it also applies to the server side, so that when developing software, even unwillingly, there are strings attached to a database and other applications. Because of this, developing systems capable of operating with multiple databases and not dependent on a specific platform becomes a Herculean task for a development team. Therefore, this research proposes a working model that enables web applications to operate in a layered structure, where each defined layer can be replaced without the need to rewrite the others, allowing real abstraction of the application on any software or hardware platform, on both the client and server sides. For this purpose, the research is supported by Sommerville (2005), who presents a detailed classification of models of software development according to their organization and modular decomposition, which laid the basis for the development of this proposal. This research is documentary in nature, since it is based on the collection of bibliographic material relating to existing software architectures.

  16. Web-based recruitment: effects of information, organizational brand, and attitudes toward a Web site on applicant attraction.

    Science.gov (United States)

    Allen, David G; Mahto, Raj V; Otondo, Robert F

    2007-11-01

    Recruitment theory and research show that objective characteristics, subjective considerations, and critical contact send signals to prospective applicants about the organization and available opportunities. In the generating applicants phase of recruitment, critical contact may consist largely of interactions with recruitment sources (e.g., newspaper ads, job fairs, organization Web sites); however, research has yet to fully address how all 3 types of signaling mechanisms influence early job pursuit decisions in the context of organizational recruitment Web sites. Results based on data from 814 student participants searching actual organization Web sites support and extend signaling and brand equity theories by showing that job information (directly) and organization information (indirectly) are related to intentions to pursue employment when a priori perceptions of image are controlled. A priori organization image is related to pursuit intentions when subsequent information search is controlled, but organization familiarity is not, and attitudes about a recruitment source also influence attraction and partially mediate the effects of organization information. Theoretical and practical implications for recruitment are discussed. (c) 2007 APA

  17. Using Web Speech Technology with Language Learning Applications

    Science.gov (United States)

    Daniels, Paul

    2015-01-01

    In this article, the author presents the history of human-to-computer interaction based upon the design of sophisticated computerized speech recognition algorithms. Advancements such as the arrival of cloud-based computing and software like Google's Web Speech API allows anyone with an Internet connection and Chrome browser to take advantage of…

  18. InCHlib - interactive cluster heatmap for web applications

    Czech Academy of Sciences Publication Activity Database

    Škuta, Ctibor; Bartůněk, Petr; Svozil, Daniel

    2014-01-01

    Roč. 6, č. 44 (2014) ISSN 1758-2946 R&D Projects: GA MŠk LO1220 Institutional support: RVO:68378050 Keywords : Data clustering * Cluster heatmap * Scientific visualization * Web integration * Client-side scripting * JavaScript library * Big data * Exploration Subject RIV: EB - Genetics ; Molecular Biology Impact factor: 4.547, year: 2014

  19. Web 2.0 Technologies: Applications for Community Colleges

    Science.gov (United States)

    Bajt, Susanne K.

    2011-01-01

    The current generation of new students, referred to as the Millennial Generation, brings a new set of challenges to the community college. The influx of these technologically sophisticated students, who interact through the social phenomenon of Web 2.0 technology, bring expectations that may reshape institutions of higher learning. This chapter…

  20. Evolution of Web Applications with Aspect-Oriented Design Patterns

    DEFF Research Database (Denmark)

    Bebjak, Michal; Vranic, Valentino; Dolog, Peter

    2007-01-01

    It is more convenient to talk about changes in a domain-specific way than to formulate them at the programming construct level or, even worse, at a purely lexical level. Using aspect-oriented programming, changes can be modularized and made reapplicable. In this paper, selected change types in web...

  1. Establishing and Applying Criteria for Evaluating the Ease of Use of Dynamic Platforms for Teaching Web Application Development

    Science.gov (United States)

    Dehinbo, Johnson

    2011-01-01

    The widespread use of the Internet and the World Wide Web has led to the availability of many platforms for developing dynamic Web applications, and to the problem of choosing the most appropriate platform that will be easy to use for undergraduate students of web application development in tertiary institutions. Students beginning to learn web…

  2. Modeling the HTML DOM and Browser API in Static Analysis of JavaScript Web Applications

    DEFF Research Database (Denmark)

    Jensen, Simon Holm; Madsen, Magnus; Møller, Anders

    2011-01-01

    of reasoning about the flow of control and data in modern JavaScript applications that interact with the HTML DOM and browser API. One application of such a static analysis is to detect type-related and dataflow-related programming errors. We report on experiments with a range of modern web applications...

  3. Integration of Web Technologies in Software Applications. Is Web 2.0 a Solution?

    Directory of Open Access Journals (Sweden)

    Cezar Liviu CERVINSCHI

    2010-12-01

    Full Text Available Starting from the idea that Web 2.0 represents "the era of the dynamic web", this paper proposes to provide arguments (demonstrated by physical results) regarding the question that is at the foundation of this article. Based on the findings we can affirm that Web 2.0 is a solution for building powerful and robust software, since the Internet has become more than just a simple presence on the users' desktop: it provides easy access to information, services, entertainment, online transactions, e-commerce, e-learning and so on, and basically every kind of human or institutional interaction can happen online. This paper seeks to study the impact of two of these branches upon the user – e-commerce and e-testing. The statistical reports were made on different sets of people, while the conclusions are the result of detailed research and study of the applications' behaviour in the actual operating environment.

  4. Retrieval and impact of scientific production in google era: a comparative analysis between google scholar and web of science

    Directory of Open Access Journals (Sweden)

    Rogério Mugnaini

    2008-04-01

    Full Text Available The changes caused by the development of information technologies with respect to the visibility of scientific publications and to the production of impact indicators are discussed. Questions related to the use of citation data and the ISI-Thomson Scientific Impact Factor are first considered, and then the resources offered by Google Scholar to measure the relevance of scientific works are analysed. It concludes with considerations on the importance of studies on the application of bibliometric and webometric indicators for the analysis of scientific production as a means of establishing an evaluation system adapted to diverse contexts.

  5. Development of a web application for water resources based on open source software

    Science.gov (United States)

    Delipetrev, Blagoj; Jonoski, Andreja; Solomatine, Dimitri P.

    2014-01-01

    This article presents research and development of a prototype web application for water resources using the latest advancements in Information and Communication Technologies (ICT), open source software and web GIS. The web application has three web services for: (1) managing, presenting and storing geospatial data, (2) supporting water resources modeling and (3) water resources optimization. The web application is developed using several programming languages (PHP, Ajax, JavaScript, Java), libraries (OpenLayers, JQuery) and open source software components (GeoServer, PostgreSQL, PostGIS). The presented web application has several main advantages: it is available all the time, it is accessible from everywhere, it creates a real-time multi-user collaboration platform, the programming language code and components are interoperable and designed to work in a distributed computer environment, it is flexible for adding additional components and services, and it is scalable depending on the workload. The application was successfully tested on a case study with concurrent multi-user access.
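
    To give a concrete flavour of the geospatial service layer such a GeoServer/PostgreSQL/PostGIS stack provides, the sketch below runs a PostGIS proximity query through psycopg2; the connection string, table and column names are illustrative assumptions, not the paper's schema.

```python
import psycopg2

# Hypothetical connection parameters for a PostGIS-enabled database.
conn = psycopg2.connect("dbname=waterres user=webapp")
with conn.cursor() as cur:
    # Find monitoring stations within 5 km of a point (lon, lat in WGS84).
    cur.execute(
        """
        SELECT name, ST_AsGeoJSON(geom)
        FROM stations
        WHERE ST_DWithin(
            geom::geography,
            ST_SetSRID(ST_MakePoint(%s, %s), 4326)::geography,
            5000)
        """,
        (20.97, 41.03),
    )
    rows = cur.fetchall()
conn.close()
```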

  6. Grid-optimized Web 3D applications on wide area network

    Science.gov (United States)

    Wang, Frank; Helian, Na; Meng, Lingkui; Wu, Sining; Zhang, Wen; Guo, Yike; Parker, Michael Andrew

    2008-08-01

    Geographical information systems have now entered the era of Web Services. In this paper, Web3D applications have been developed based on our GridJet platform, which provides a more effective solution for massive 3D geo-dataset sharing in distributed environments. Web3D services enable web users to access services such as 3D scenes, virtual geographical environments and so on. However, Web3D services must be shared by thousands of users who are inherently distributed across different geographic locations. Large 3D geo-datasets need to be transferred to distributed clients via conventional HTTP, NFS and FTP protocols, which often entails long waits and frustration in distributed wide area network environments. GridJet is used as the underlying engine between the Web3D application node and the geo-data server; it utilizes a wide range of technologies, including parallelized remote file access, and is a WAN/Grid-optimized protocol that provides "local-like" access to remote 3D geo-datasets. No change in the way of using software is required, since the multi-streamed GridJet protocol remains fully compatible with existing IP infrastructures. Our recent progress includes a real-world test in which Web3D applications such as Google Earth running over the GridJet protocol beat those over the classic protocols by a factor of 2-7 when the transfer distance is over 10,000 km.

  7. A Method to Ease the Deployment of Web Applications that Involve Database Systems

    Directory of Open Access Journals (Sweden)

    Antonio Vega Corona

    2012-02-01

    Full Text Available The continuous growth of the Internet has driven people all around the globe to perform transactions on-line, search for information or navigate using a browser. As more people feel comfortable using a Web browser, more software companies are trying to offer Web interfaces as an alternative way to provide access to their applications. The nature of the Web connection and the restrictions imposed by the available bandwidth make the successful integration of Web applications and database systems critical. Because database applications provide a graphical interface to edit the information in the database, and because each column in a database table corresponds to a control in a graphical interface, the development of these applications can be time consuming; appropriate field validation and referential integrity rules must be observed. An object-oriented design is proposed to facilitate the development of applications that use database systems.
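
    The abstract's core observation, that each table column maps to one form control plus its validation rules, suggests declaring the column once and deriving everything else from it. The sketch below is a minimal illustration of that idea under our own naming, not the authors' actual design.

```python
# Illustrative sketch: each database column is described once, and both the
# generated form control and its validation derive from that description.
# All names are hypothetical, not the authors' actual design.
from dataclasses import dataclass
from typing import Any, Callable, Optional

@dataclass
class Column:
    name: str
    py_type: type
    required: bool = True
    validate: Optional[Callable[[Any], bool]] = None  # extra rule, e.g. range

    def check(self, value: Any) -> None:
        if value is None:
            if self.required:
                raise ValueError(f"{self.name} is required")
            return
        if not isinstance(value, self.py_type):
            raise TypeError(f"{self.name} must be {self.py_type.__name__}")
        if self.validate and not self.validate(value):
            raise ValueError(f"{self.name} failed validation")

# One declaration drives both the form control and the integrity check:
columns = [
    Column("customer_id", int),
    Column("email", str, validate=lambda s: "@" in s),
    Column("age", int, required=False, validate=lambda n: 0 < n < 150),
]

form_input = {"customer_id": 7, "email": "a@b.com", "age": 31}
for col in columns:
    col.check(form_input.get(col.name))  # raises on invalid input
```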

  8. Great Basin land managers provide detailed feedback about usefulness of two climate information web applications

    Directory of Open Access Journals (Sweden)

    Chad Zanocco

    Full Text Available Land managers in the Great Basin are working to maintain or restore sagebrush ecosystems as climate change exacerbates existing threats. Web applications delivering climate change and climate impacts information have the potential to assist their efforts. Although many web applications containing climate information currently exist, few have been co-produced with land managers or have incorporated information specifically focused on land managers’ needs. Through surveys and interviews, we gathered detailed feedback from federal, state, and tribal sagebrush land managers in the Great Basin on climate information web applications targeting land management. We found that (a) managers are searching for weather and climate information they can incorporate into their current management strategies and plans; (b) they are willing to be educated on how to find and understand climate-related web applications; (c) both field and administrative-type managers want data for timescales ranging from seasonal to decadal; (d) managers want multiple levels of climate information, from simple summaries to detailed descriptions accessible through the application; and (e) managers are interested in applications that evaluate uncertainty and provide projected climate impacts. Keywords: Great Basin, Sagebrush, Land management, Climate change, Web application, Co-production

  9. Evaluation of a Web-based Online Grant Application Review Solution

    Directory of Open Access Journals (Sweden)

    Marius Daniel PETRISOR

    2013-12-01

    Full Text Available This paper focuses on the evaluation of a web-based application used in grant application evaluations, software developed in our university, and underlines the need for simple solutions based on recent technology and specifically tailored to one’s needs. We asked the reviewers to answer a short questionnaire in order to assess their satisfaction with such a web-based grant application evaluation solution. All 20 reviewers accepted to answer the questionnaire, which contained 8 closed items (YES/NO answers) related to the reviewers’ previous experience in evaluating grant applications, previous use of such software solutions, and their familiarity with using computer systems. The presented web-based application, evaluated by the users, showed a high level of acceptance, and the respondents stated that they are willing to use such a solution in the future.

  10. Learning to rank for information retrieval

    CERN Document Server

    Liu, Tie-Yan

    2011-01-01

    Due to the fast growth of the Web and the difficulties in finding desired information, efficient and effective information retrieval systems have become more important than ever, and the search engine has become an essential tool for many people. The ranker, a central component in every search engine, is responsible for the matching between processed queries and indexed documents. Because of its central role, great attention has been paid to the research and development of ranking technologies. In addition, ranking is also pivotal for many other information retrieval applications, such as coll
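
    To make the role of the ranker concrete, the following sketch trains a minimal pairwise ranker of the kind surveyed in such books: a linear scoring function fit with a hinge loss over document preference pairs. The data are synthetic and the setup is illustrative, not taken from the book.

```python
# Minimal pairwise learning-to-rank sketch: learn a linear score w.x so that
# the preferred document of each pair outscores the other by a margin.
# Synthetic data; illustrative only.
import numpy as np

rng = np.random.default_rng(0)
n_docs, dim = 200, 5
w_true = rng.normal(size=dim)                  # hidden "ideal" ranker
X = rng.normal(size=(n_docs, dim))             # document feature vectors
relevance = X @ w_true + 0.1 * rng.normal(size=n_docs)

pairs = []                                     # (preferred, other) index pairs
for i in range(n_docs - 1):
    j = i + 1
    pairs.append((i, j) if relevance[i] > relevance[j] else (j, i))

w = np.zeros(dim)
lr = 0.01
for _ in range(20):                            # a few epochs of hinge updates
    for hi, lo in pairs:
        if (X[hi] - X[lo]) @ w < 1.0:          # margin violated
            w += lr * (X[hi] - X[lo])          # widen the score gap

accuracy = np.mean([(X[hi] - X[lo]) @ w > 0 for hi, lo in pairs])
print(f"pairwise accuracy on training pairs: {accuracy:.2f}")
```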

  11. Promoting Your Web Site.

    Science.gov (United States)

    Raeder, Aggi

    1997-01-01

    Discussion of ways to promote sites on the World Wide Web focuses on how search engines work and how they retrieve and identify sites. Appropriate Web links for submitting new sites and for Internet marketing are included. (LRW)

  12. The AppComposer Web application for school teachers: A platform for translating and adapting educational web applications

    NARCIS (Netherlands)

    Rodriguez-Gil, Luis; Orduna, Pablo; Bollen, Lars; Govaerts, Sten; Holzer, Adrian; Gillet, Dennis; Lopez-de-Ipina, Diego; Garcia-Zubia, Javier

    2015-01-01

    Developing educational apps that cover a wide range of learning contexts and languages is a challenging task. In this paper, we introduce the AppComposer Web app to address this issue. The AppComposer aims at empowering teachers to easily translate and adapt existing apps that fit their educational

  13. Online Shopping With Spoofing Detection Using Web and Mobile Application

    OpenAIRE

    M.Poovizhi; K.Karthika; S.Nandhini; Dr.P.Gomathi

    2018-01-01

    Abstract— E-commerce is the buying and selling of products and services, or the sending of funds or data, over the web, offering straightforward access to immense stores of reference material, email, and new avenues for advertising and data distribution, to name a few. Like most technological advances, there is also another side: criminal hackers. Governments, companies, and private citizens around the world are anxious to be a part of this revolution, however, they are afraid th...

  14. Breaking and Fixing Origin-Based Access Control in Hybrid Web/Mobile Application Frameworks

    Science.gov (United States)

    Georgiev, Martin; Jana, Suman; Shmatikov, Vitaly

    2014-01-01

    Hybrid mobile applications (apps) combine the features of Web applications and “native” mobile apps. Like Web applications, they are implemented in portable, platform-independent languages such as HTML and JavaScript. Like native apps, they have direct access to local device resources—file system, location, camera, contacts, etc. Hybrid apps are typically developed using hybrid application frameworks such as PhoneGap. The purpose of the framework is twofold. First, it provides an embedded Web browser (for example, WebView on Android) that executes the app's Web code. Second, it supplies “bridges” that allow Web code to escape the browser and access local resources on the device. We analyze the software stack created by hybrid frameworks and demonstrate that it does not properly compose the access-control policies governing Web code and local code, respectively. Web code is governed by the same origin policy, whereas local code is governed by the access-control policy of the operating system (for example, user-granted permissions in Android). The bridges added by the framework to the browser have the same local access rights as the entire application, but are not correctly protected by the same origin policy. This opens the door to fracking attacks, which allow foreign-origin Web content included into a hybrid app (e.g., ads confined in iframes) to drill through the layers and directly access device resources. Fracking vulnerabilities are generic: they affect all hybrid frameworks, all embedded Web browsers, all bridge mechanisms, and all platforms on which these frameworks are deployed. We study the prevalence of fracking vulnerabilities in free Android apps based on the PhoneGap framework. Each vulnerability exposes sensitive local resources—the ability to read and write contacts list, local files, etc.—to dozens of potentially malicious Web domains. We also analyze the defenses deployed by hybrid frameworks to prevent resource access by foreign-origin Web content.
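
    The missing check the authors describe can be pictured as a policy composition: a bridge call should be granted only when the OS-level permission and an origin-based whitelist both allow it. The sketch below is purely schematic, with invented names; it is not the API of PhoneGap or any other real framework.

```python
# Schematic of the composed policy check the paper argues is missing:
# a bridge call should succeed only if BOTH the OS-level permission was
# granted to the app AND the calling frame's web origin is whitelisted.
# All names here are hypothetical, not a real framework API.
APP_PERMISSIONS = {"camera", "contacts"}        # user-granted (OS policy)
TRUSTED_ORIGINS = {"https://app.example.com"}   # developer whitelist (SOP side)

def bridge_call_allowed(resource: str, frame_origin: str) -> bool:
    os_ok = resource in APP_PERMISSIONS         # local access-control policy
    web_ok = frame_origin in TRUSTED_ORIGINS    # origin-based policy
    return os_ok and web_ok                     # compose, don't bypass

# A foreign-origin ad iframe is denied even though the app has the permission:
assert not bridge_call_allowed("camera", "https://ads.evil.example")
assert bridge_call_allowed("camera", "https://app.example.com")
```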

  16. Nonmaterialized Relations and the Support of Information Retrieval Applications by Relational Database Systems.

    Science.gov (United States)

    Lynch, Clifford A.

    1991-01-01

    Describes several aspects of the problem of supporting information retrieval system query requirements in the relational database management system (RDBMS) environment and proposes an extension to query processing called nonmaterialized relations. User interactions with information retrieval systems are discussed, and nonmaterialized relations are…

  17. A Formal Approach to Exploiting Multi-Stage Attacks based on File-System Vulnerabilities of Web Applications (Extended Version)

    OpenAIRE

    De Meo, Federico; Viganò, Luca

    2017-01-01

    Web applications require access to the file-system for many different tasks. When analyzing the security of a web application, security analysts should thus consider the impact that file-system operations have on the security of the whole application. Moreover, the analysis should take into consideration how file-system vulnerabilities might interact with other vulnerabilities, leading an attacker to breach the web application. In this paper, we first propose a classification of file-...

  18. The semantic web : research and applications : 7th extended semantic web conference, ESWC 2010, Heraklion, Crete, Greece, May 30 - June 3, 2010 : proceedings

    NARCIS (Netherlands)

    Aroyo, L.M.; Antoniou, G.; Hyvönen, E.; Teije, ten A.; Stuckenschmidt, H.; Cabral, L.; Tudorache, T.

    2010-01-01

    Preface. This volume contains papers from the technical program of the 7th Extended Semantic Web Conference (ESWC 2010), held from May 30 to June 3, 2010, in Heraklion, Greece. ESWC 2010 presented the latest results in research and applications of Semantic Web technologies. ESWC 2010 built on the

  19. Web services as applications' integration tool: QikProp case study.

    Science.gov (United States)

    Laoui, Abdel; Polyakov, Valery R

    2011-07-15

    Web services are a new technology that enables the integration of applications running on different platforms, primarily by using XML to enable communication among different computers over the Internet. A large number of applications were designed as stand-alone systems before the concept of Web services was introduced, and it is a challenge to integrate them into larger computational networks. A generally applicable method of wrapping stand-alone applications into Web services was developed and is described. To test the technology, it was applied to QikProp for DOS (Windows). Although the performance of the application did not change when it was delivered as a Web service, this form of deployment offered several advantages such as simplified and centralized maintenance, a smaller number of licenses, and practically no training for the end user. Because by using the described approach almost any legacy application can be wrapped as a Web service, this form of delivery may be recommended as a global alternative to traditional deployment solutions. Copyright © 2011 Wiley Periodicals, Inc.
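
    The wrapping pattern the paper describes can be reduced to: receive input over the network, feed it to the legacy executable, and return the program's output. The sketch below shows that skeleton using only the Python standard library; the executable name is a placeholder, and the actual QikProp wrapper used XML-based Web services rather than this bare HTTP form.

```python
# Minimal sketch of wrapping a legacy command-line program as a web service:
# POST the program's input, run the executable server-side, return its output.
# "./legacy_app" is a placeholder for the wrapped executable.
import subprocess
from http.server import BaseHTTPRequestHandler, HTTPServer

class WrapperHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        stdin_data = self.rfile.read(length)
        # Run the legacy executable, feeding the request body to stdin.
        result = subprocess.run(
            ["./legacy_app"], input=stdin_data,
            capture_output=True, timeout=60,
        )
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(result.stdout)

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8000), WrapperHandler).serve_forever()
```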

  20. PLANNING APPLICATION OF WEB 2.0 FOR ORGANIZATIONAL LEARNING IN UNIVERSITAS PENDIDIKAN INDONESIA LIBRARY

    Directory of Open Access Journals (Sweden)

    Santi Santika

    2017-07-01

    Full Text Available The Library of Universitas Pendidikan Indonesia (UPI) has a quality policy commitment to continuous improvement in every area and process, which can be achieved by continuously optimizing organizational learning. Web 2.0 applications can support the organizational learning process because they are characterized by reading and writing and offer flexibility in time of use, but any application must fit the culture and character of the organization. Therefore, this study aimed to find out which Web 2.0 applications can be applied to organizational learning in the Library of UPI. The method used is a mixed method with qualitative and quantitative approaches. The research stages follow the planning and support phases of the Web 2.0 Tools Implementation Model. The results showed that Web 2.0 applications can be applied to organizational learning in the UPI Library. This reflects the good organizational culture of the UPI Library and the very positive attitude of its staff toward the use of the Internet and computers. The Web 2.0 applications that can be used by the UPI Library are blogs, online forums, and wikis as primary tools, with Facebook, YouTube, chat applications, Twitter and Instagram as supporting tools.

  1. XML representation and management of temporal information for web-based cultural heritage applications

    Directory of Open Access Journals (Sweden)

    Fabio Grandi

    2006-01-01

    Full Text Available In this paper we survey the recent activities and achievements of our research group in the deployment of XML-related technologies in Cultural Heritage applications concerning the encoding of temporal semantics in Web documents. In particular, we review "The Valid Web", an XML/XSL infrastructure we defined and implemented for the definition and management of historical information within multimedia documents available on the Web, and its further extension to the effective encoding of advanced temporal features such as indeterminacy, multiple granularities and calendars, enabling efficient processing in a user-friendly Web-based environment. Potential uses of the developed infrastructures include a broad range of applications in the cultural heritage domain, where the historical perspective is relevant, with potentially positive impacts on E-Education and E-Science.

  2. Application of the Tikhonov regularization method to wind retrieval from scatterometer data I. Sensitivity analysis and simulation experiments

    International Nuclear Information System (INIS)

    Zhong Jian; Huang Si-Xun; Du Hua-Dong; Zhang Liang

    2011-01-01

    The scatterometer is an instrument which provides all-day and large-scale wind field information, and its application, especially to wind retrieval, has always attracted meteorologists. Certain factors cause large direction errors, so it is important to find where the error mainly comes from. Does it mainly result from the background field, the normalized radar cross-section (NRCS), or the method of wind retrieval? This is worth investigating. First, based on SDP2.0, the simulated ‘true’ NRCS is calculated from the simulated ‘true’ wind through the geophysical model function NSCAT2. The simulated background field is configured by adding noise to the simulated ‘true’ wind under a non-divergence constraint, and the simulated ‘measured’ NRCS is formed by adding noise to the simulated ‘true’ NRCS. Then, sensitivity experiments are carried out, and the new regularization method is used to improve the ambiguity removal in simulation experiments. The results show that the accuracy of wind retrieval is more sensitive to noise in the background than in the measured NRCS; compared with the two-dimensional variational (2DVAR) ambiguity removal method, the accuracy of wind retrieval can be improved with the new Tikhonov regularization method by choosing an appropriate regularization parameter, especially in the case of large background error. This work provides important information and a new method for wind retrieval with real data.
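
    In generic form (symbols chosen here for illustration, not taken from the paper), a Tikhonov-regularized wind retrieval selects the wind field w that minimizes

```latex
\min_{\mathbf{w}}\; J(\mathbf{w}) =
  \left\| \boldsymbol{\sigma}^{0}_{\mathrm{meas}} - M(\mathbf{w}) \right\|^{2}
  + \lambda \left\| \mathbf{w} - \mathbf{w}_{b} \right\|^{2}
```

    where M is the geophysical model function (NSCAT2 above) mapping wind to NRCS, w_b is the background wind field, and λ is the regularization parameter whose choice governs the trade-off that the sensitivity experiments explore.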

  3. Application of a regularized model inversion system (REGFLEC) to multi-temporal RapidEye imagery for retrieving vegetation characteristics

    KAUST Repository

    Houborg, Rasmus

    2015-10-14

    Accurate retrieval of canopy biophysical and leaf biochemical constituents from space observations is critical to diagnosing the functioning and condition of vegetation canopies across spatio-temporal scales. Retrieved vegetation characteristics may serve as important inputs to precision farming applications and as constraints in spatially and temporally distributed model simulations of water and carbon exchange processes. However significant challenges remain in the translation of composite remote sensing signals into useful biochemical, physiological or structural quantities and treatment of confounding factors in spectrum-trait relations. Bands in the red-edge spectrum have particular potential for improving the robustness of retrieved vegetation properties. The development of observationally based vegetation retrieval capacities, effectively constrained by the enhanced information content afforded by bands in the red-edge, is a needed investment towards optimizing the benefit of current and future satellite sensor systems. In this study, a REGularized canopy reFLECtance model (REGFLEC) for joint leaf chlorophyll (Chll) and leaf area index (LAI) retrieval is extended to sensor systems with a band in the red-edge region for the first time. Application to time-series of 5 m resolution multi-spectral RapidEye data is demonstrated over an irrigated agricultural region in central Saudi Arabia, showcasing the value of satellite-derived crop information at this fine scale for precision management. Validation against in-situ measurements in fields of alfalfa, Rhodes grass, carrot and maize indicate improved accuracy of retrieved vegetation properties when exploiting red-edge information in the model inversion process. © (2015) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE).

  4. A Novel Method for Live Debugging of Production Web Applications by Dynamic Resource Replacement

    OpenAIRE

    Khalid Al-Tahat; Khaled Zuhair Mahmoud; Ahmad Al-Mughrabi

    2014-01-01

    This paper proposes a novel methodology for enabling debugging and tracing of production web applications without affecting their normal flow and functionality. This method of debugging enables developers and maintenance engineers to replace a set of existing resources, such as images, server-side scripts and cascading style sheets, with another set of resources per web session. The new resources are only active in the debug session, and other sessions are not affected. T...

  5. System and Method for Providing Web-Based Remote Application Service

    OpenAIRE

    Shuen-Tai Wang; Yu-Ching Lin; Hsi-Ya Chang

    2017-01-01

    With the development of virtualization technologies, a new type of service, the cloud computing service, has emerged. Cloud users usually encounter the problem of how to use the virtualized platform easily over the web without requiring plug-ins or the installation of special software. The objective of this paper is to develop a system and a method enabling process interfacing within an automation scenario for accessing remote applications through the web browser. To meet this challenge, we have ...

  6. Web Page Recommendation Using Web Mining

    OpenAIRE

    Modraj Bhavsar; Mrs. P. M. Chavan

    2014-01-01

    On the World Wide Web, various kinds of content are generated in huge amounts, so giving relevant results to users makes web recommendation an important part of web applications. Different kinds of web recommendations are made available to users every day, including images, video, audio, query suggestions and web pages. In this paper we aim at providing a framework for web page recommendation. 1) First we describe the basics of web mining and the types of web mining. 2) Details of each...

  7. Server Interface Descriptions for Automated Testing of JavaScript Web Applications

    DEFF Research Database (Denmark)

    Jensen, Casper Svenning; Møller, Anders; Su, Zhendong

    2013-01-01

    Automated testing of JavaScript web applications is complicated by the communication with servers. Specifically, it is difficult to test the JavaScript code in isolation from the server code and database contents. We present a practical solution to this problem. First, we demonstrate that formal server interface descriptions are useful in automated testing of JavaScript web applications for separating the concerns of the client and the server. Second, to support the construction of server interface descriptions for existing applications, we introduce an effective inference technique that learns communication patterns from sample data. By incorporating interface descriptions into the testing tool Artemis, our experimental results show that we increase the level of automation for high-coverage testing on a collection of JavaScript web applications that exchange JSON data between the clients and servers.

  8. Regional Geology Web Map Application Development: Javascript v2.0

    International Nuclear Information System (INIS)

    Russell, Glenn

    2017-01-01

    This is a milestone report for the FY2017 continuation of the Spent Fuel Storage and Waste Technology (SFSWT) program (formerly the Used Fuel Disposal (UFD) program) development of the Regional Geology Web Mapping Application by the Idaho National Laboratory Geospatial Science and Engineering group. This application was developed for general public use and is an interactive web-based application built in JavaScript to visualize, reference, and analyze geological features of the continental US pertinent to the SFSWT program. This tool is a version upgrade from Adobe Flex technology. It is designed to facilitate informed decision making regarding the geology of the continental US relevant to the SFSWT program.

  10. ClusterControl: a web interface for distributing and monitoring bioinformatics applications on a Linux cluster.

    Science.gov (United States)

    Stocker, Gernot; Rieder, Dietmar; Trajanoski, Zlatko

    2004-03-22

    ClusterControl is a web interface that simplifies distributing and monitoring bioinformatics applications on Linux cluster systems. We have developed a modular concept that enables the integration of command-line-oriented programs into the application framework of ClusterControl. The system facilitates the integration of different applications accessed through one interface and executed on a distributed cluster system. The package is based on freely available technologies such as Apache as the web server, PHP as the server-side scripting language, and OpenPBS as the queuing system, and is available free of charge for academic and non-profit institutions. http://genome.tugraz.at/Software/ClusterControl
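
    Behind such an interface, dispatching a job typically comes down to handing a generated batch script to the queuing system. The sketch below shows that step for OpenPBS via its qsub command; the script contents and resource requests are invented placeholders, not ClusterControl's actual internals (which are PHP-based).

```python
# Sketch of the server-side submission step behind such a web interface:
# write a generated PBS script to disk, submit it with "qsub", and capture
# the job identifier that qsub prints. All job details are placeholders.
import subprocess
import tempfile

JOB_TEMPLATE = """#!/bin/sh
#PBS -l nodes=1
#PBS -N example_job
cd $PBS_O_WORKDIR
./run_analysis.sh input.fasta
"""

def submit_job(script_text: str) -> str:
    with tempfile.NamedTemporaryFile("w", suffix=".pbs", delete=False) as f:
        f.write(script_text)
        path = f.name
    out = subprocess.run(["qsub", path], capture_output=True,
                         text=True, check=True)
    return out.stdout.strip()  # qsub prints the new job id

print("submitted:", submit_job(JOB_TEMPLATE))
```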

  11. Distributed Two-Dimensional Fourier Transforms on DSPs with an Application for Phase Retrieval

    Science.gov (United States)

    Smith, Jeffrey Scott

    2006-01-01

    Many applications of two-dimensional Fourier Transforms require fixed timing as defined by system specifications. One example is image-based wavefront sensing. The image-based approach has many benefits, yet it is a computationally intensive solution for adaptive optic correction, where optical adjustments are made in real-time to correct for external (atmospheric turbulence) and internal (stability) aberrations, which cause image degradation. For phase retrieval, a type of image-based wavefront sensing, numerous two-dimensional Fast Fourier Transforms (FFTs) are used. To meet the required real-time specifications, a distributed system is needed, and thus the 2-D FFT necessitates an all-to-all communication among the computational nodes. The 1-D floating point FFT is very efficient on a digital signal processor (DSP). For this study, several architectures that address the all-to-all communication with DSPs are presented and analyzed. Emphasis of this research is on a 64-node cluster of Analog Devices TigerSharc TS-101 DSPs.
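
    The reason the 2-D FFT forces an all-to-all exchange is the classic row-column decomposition: each node can transform its own rows locally, but the intermediate matrix must be transposed, i.e. redistributed across nodes, before the second 1-D pass. The numpy sketch below demonstrates the decomposition in a single process; on a DSP cluster the transpose is the communication step.

```python
# Row-column decomposition of a 2-D FFT. On a DSP cluster the rows would be
# spread across nodes, and the transpose between the two 1-D passes is the
# all-to-all exchange discussed above. Here numpy stands in on one process.
import numpy as np

rng = np.random.default_rng(1)
image = rng.normal(size=(64, 64))

step1 = np.fft.fft(image, axis=1)      # 1-D FFT of every row (node-local)
step2 = step1.T                        # transpose = all-to-all communication
step3 = np.fft.fft(step2, axis=1)      # 1-D FFT of the former columns
result = step3.T                       # undo the transpose

assert np.allclose(result, np.fft.fft2(image))  # matches the direct 2-D FFT
```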

  12. CloudTPS: Scalable Transactions for Web Applications in the Cloud

    NARCIS (Netherlands)

    Zhou, W.; Pierre, G.E.O.; Chi, C.-H.

    2010-01-01

    NoSQL Cloud data services provide scalability and high availability properties for web applications but at the same time they sacrifice data consistency. However, many applications cannot afford any data inconsistency. CloudTPS is a scalable transaction manager to allow cloud database services to

  13. A novel 2.5D approach for interfacing with web applications

    OpenAIRE

    Sarkar, Saurabh

    2012-01-01

    Web applications need better user interfaces to be interactive and attractive. We propose a new approach to dimensional enhancement: 2.5D, "a 2D display of a virtual 3D environment", which can be implemented in social networking sites and further in other system applications.

  14. A Service Oriented Web Application for Learner Knowledge Representation, Management and Sharing Conforming to IMS LIP

    Science.gov (United States)

    Lazarinis, Fotis

    2014-01-01

    iLM is a Web-based application for the representation, management and sharing of IMS LIP conformant user profiles. The tool is developed using a service-oriented architecture with emphasis on easy data sharing. Data elicitation from user profiles is based on the utilization of XQuery scripts, and sharing with other applications is achieved through…

  15. THREE-DIMENSIONAL WEB-BASED PHYSICS SIMULATION APPLICATION FOR PHYSICS LEARNING TOOL

    Directory of Open Access Journals (Sweden)

    William Salim

    2012-10-01

    Full Text Available The purpose of this research is to present a multimedia application for performing simulations in physics. The application is a web-based simulator implemented with HTML5, WebGL, and JavaScript. The objects and the environment are rendered in three-dimensional views. It is hoped that this application will become a substitute for practicum activities. The current version covers only Newtonian mechanics. Questionnaires and literature study were used as the data collection methods, while the Waterfall Method was used as the design method. The result is the Three-Dimensional Physics Simulator, an online web application. Three-dimensional design and the mentor-mentee relationship are the key features of this application. The conclusion is that the Three-Dimensional Physics Simulator already fulfils user expectations in both design and functionality, and helps users understand Newtonian mechanics through simulation. Improvements are needed, because the application only covers Newtonian mechanics; in the future the simulation could also cover other physics topics, such as optics, energy, or electricity. Keywords: Simulation, Physics, Learning Tool, HTML5, WebGL

  16. Using declarative workflow languages to develop process-centric web applications

    NARCIS (Netherlands)

    Bernardi, M.L.; Cimitile, M.; Di Lucca, G.A.; Maggi, F.M.

    2012-01-01

    Nowadays, process-centric Web Applications (WAs) are extensively used in contexts where multi-user, coordinated work is required. Recently, Model Driven Engineering (MDE) techniques have been investigated for the development of this kind of application. However, there are still some open issues.

  17. Near-Real Time Satellite-Retrieved Cloud and Surface Properties for Weather and Aviation Safety Applications

    Science.gov (United States)

    Minnis, P.; Smith, W., Jr.; Bedka, K. M.; Nguyen, L.; Palikonda, R.; Hong, G.; Trepte, Q.; Chee, T.; Scarino, B. R.; Spangenberg, D.; Sun-Mack, S.; Fleeger, C.; Ayers, J. K.; Chang, F. L.; Heck, P. W.

    2014-12-01

    Cloud properties determined from satellite imager radiances provide a valuable source of information for nowcasting and weather forecasting. In recent years, it has been shown that assimilation of cloud top temperature, optical depth, and total water path can increase the accuracies of weather analyses and forecasts. Aircraft icing conditions can be accurately diagnosed from near-real time (NRT) retrievals of cloud effective particle size, phase, and water path, providing valuable data for pilots. NRT retrievals of surface skin temperature can also be assimilated in numerical weather prediction models to provide more accurate representations of solar heating and longwave cooling at the surface, where convective initiation occurs. These and other applications are being exploited more frequently as the value of NRT cloud data becomes recognized. At NASA Langley, cloud properties and surface skin temperature are being retrieved in near-real time globally from both geostationary (GEO) and low-earth orbiting (LEO) satellite imagers for weather model assimilation and nowcasting for hazards such as aircraft icing. Cloud data from GEO satellites over North America are disseminated through NCEP, while those data and global LEO and GEO retrievals are disseminated from a Langley website. This paper presents an overview of the various available datasets, provides examples of their application, and discusses the use of the various datasets downstream. Future challenges and areas of improvement are also presented.

  19. The Use of Web Based Expert System Application for Identification and Intervention of Children with Special Needs in Inclusive School

    Directory of Open Access Journals (Sweden)

    Dian Atnantomi Wiliyanto

    2017-11-01

    Full Text Available This research was conducted to determine the effectiveness of a web-based expert system application for the identification of, and intervention for, children with special needs in inclusive schools. 40 teachers of inclusive schools in Surakarta participated in this research. The results showed that: (1) the web-based expert system application was suitable for the needs of teachers/officers, at 50% (excellent criteria); (2) the application was worthwhile for the identification of children with special needs, at 50% (excellent criteria); (3) the application was easy to use, at 52.5% (good criteria); and (4) the application produced accurate results for decision making, at 52.5% (good criteria). This shows that the web-based expert system application is effective for use by teachers in inclusive schools in conducting identification and intervention, with percentages on average above 50%.

  20. Stratification-Based Outlier Detection over the Deep Web

    OpenAIRE

    Xian, Xuefeng; Zhao, Pengpeng; Sheng, Victor S.; Fang, Ligang; Gu, Caidong; Yang, Yuanfeng; Cui, Zhiming

    2016-01-01

    For many applications, finding rare instances or outliers can be more interesting than finding common patterns. Existing work in outlier detection does not consider the context of the deep web. In this paper, we argue that, for many scenarios, it is more meaningful to detect outliers over the deep web. In the context of the deep web, users must submit queries through a query interface to retrieve the corresponding data. Therefore, traditional data mining methods cannot be directly applied. The primary contribu...

  1. Examining the application of Web 2.0 in medical-related organisations.

    Science.gov (United States)

    Chu, Samuel Kai Wah; Woo, Matsuko; King, Ronnel B; Choi, Stephen; Cheng, Miffy; Koo, Peggy

    2012-03-01

    This study surveyed Web 2.0 applications in three types of selected health or medical-related organisations: university medical libraries, hospitals and non-profit medical-related organisations. Thirty organisations participated in an online survey on the perceived purposes, benefits and difficulties of using Web 2.0. A phone interview was further conducted with eight organisations (26.7%) to collect information on the use of Web 2.0. Data were analysed using both quantitative and qualitative approaches. Results showed that knowledge and information sharing and the provision of a better communication platform were rated as the main purposes of using Web 2.0. Time constraints and low staff engagement were the most highly rated difficulties. In addition, most participants found Web 2.0 to be beneficial to their organisations. Medical-related organisations that adopted Web 2.0 technologies have found them useful, with benefits outweighing the difficulties in the long run. The implications of this study are discussed to help medical-related organisations make decisions regarding the use of Web 2.0 technologies. © 2011 The authors. Health Information and Libraries Journal © 2011 Health Libraries Group.

  2. Insights into Tikhonov regularization: application to trace gas column retrieval and the efficient calculation of total column averaging kernels

    Directory of Open Access Journals (Sweden)

    T. Borsdorff

    2014-02-01

    Full Text Available Insights are given into Tikhonov regularization and its application to the retrieval of vertical column densities of atmospheric trace gases from remote sensing measurements. The study builds upon the equivalence of the least-squares profile-scaling approach and Tikhonov regularization method of the first kind with an infinite regularization strength. Here, the vertical profile is expressed relative to a reference profile. On the basis of this, we propose a new algorithm as an extension of the least-squares profile scaling which permits the calculation of total column averaging kernels on arbitrary vertical grids using an analytic expression. Moreover, we discuss the effective null space of the retrieval, which comprises those parts of a vertical trace gas distribution which cannot be inferred from the measurements. Numerically the algorithm can be implemented in a robust and efficient manner. In particular for operational data processing with challenging demands on processing time, the proposed inversion method in combination with highly efficient forward models is an asset. For demonstration purposes, we apply the algorithm to CO column retrieval from simulated measurements in the 2.3 μm spectral region and to O3 column retrieval from the UV. These represent ideal measurements of a series of spaceborne spectrometers such as SCIAMACHY, TROPOMI, GOME, and GOME-2. For both spectral ranges, we consider clear-sky and cloudy scenes where clouds are modelled as an elevated Lambertian surface. Here, the smoothing error for the clear-sky and cloudy atmosphere is significant and reaches several percent, depending on the reference profile which is used for scaling. This underlines the importance of the column averaging kernel for a proper interpretation of retrieved column densities. Furthermore, we show that the smoothing due to regularization can be underestimated by calculating the column averaging kernel on a too coarse vertical grid. For both

  3. Toward Exposing Timing-Based Probing Attacks in Web Applications

    Science.gov (United States)

    Mao, Jian; Chen, Yue; Shi, Futian; Jia, Yaoqi; Liang, Zhenkai

    2017-01-01

    Web applications have become the foundation of many types of systems, ranging from cloud services to Internet of Things (IoT) systems. Due to the large amount of sensitive data processed by web applications, user privacy emerges as a major concern in web security. Existing protection mechanisms in modern browsers, e.g., the same origin policy, prevent the users’ browsing information on one website from being directly accessed by another website. However, web applications executed in the same browser share the same runtime environment. Such shared states provide side channels for malicious websites to indirectly figure out the information of other origins. Timing is a classic side channel and the root cause of many recent attacks, which rely on the variations in the time taken by the systems to process different inputs. In this paper, we propose an approach to expose the timing-based probing attacks in web applications. It monitors the browser behaviors and identifies anomalous timing behaviors to detect browser probing attacks. We have prototyped our system in the Google Chrome browser and evaluated the effectiveness of our approach by using known probing techniques. We have applied our approach on a large number of top Alexa sites and reported the suspicious behavior patterns with corresponding analysis results. Our theoretical analysis illustrates that the effectiveness of the timing-based probing attacks is dramatically limited by our approach. PMID:28245610

  4. Robust image obfuscation for privacy protection in Web 2.0 applications

    Science.gov (United States)

    Poller, Andreas; Steinebach, Martin; Liu, Huajian

    2012-03-01

    We present two approaches to robust image obfuscation based on permutation of image regions and channel intensity modulation. The proposed concept of robust image obfuscation is a step towards end-to-end security in Web 2.0 applications. It helps to protect the privacy of the users against threats caused by internet bots and web applications that extract biometric and other features from images for data-linkage purposes. The approaches described in this paper consider that images uploaded to Web 2.0 applications pass several transformations, such as scaling and JPEG compression, until the receiver downloads them. In contrast to existing approaches, our focus is on usability, therefore the primary goal is not a maximum of security but an acceptable trade-off between security and resulting image quality.
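
    The region-permutation approach can be sketched as follows: cut the image into fixed-size blocks and shuffle them with a permutation seeded by a secret key, so that only key holders can invert the scrambling. This is a simplified stand-in for the paper's method, which additionally modulates channel intensities and is designed to survive scaling and JPEG compression; the block size and key below are arbitrary.

```python
# Keyed block-permutation sketch: shuffle 16x16 image blocks with a
# permutation derived from a secret key; the same key inverts it.
import numpy as np

def permute_blocks(img, key, block=16, inverse=False):
    h, w = img.shape[:2]
    bh, bw = h // block, w // block
    blocks = [img[r*block:(r+1)*block, c*block:(c+1)*block].copy()
              for r in range(bh) for c in range(bw)]
    perm = np.random.default_rng(key).permutation(bh * bw)
    out = np.empty_like(img)
    for i, p in enumerate(perm):
        src, dst = (p, i) if not inverse else (i, p)
        r, c = divmod(dst, bw)
        out[r*block:(r+1)*block, c*block:(c+1)*block] = blocks[src]
    return out

rng = np.random.default_rng(0)
image = rng.integers(0, 256, size=(128, 128), dtype=np.uint8)
scrambled = permute_blocks(image, key=1234)
restored = permute_blocks(scrambled, key=1234, inverse=True)
assert np.array_equal(image, restored)   # key holder recovers the original
```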

  5. HTSstation: a web application and open-access libraries for high-throughput sequencing data analysis.

    Science.gov (United States)

    David, Fabrice P A; Delafontaine, Julien; Carat, Solenne; Ross, Frederick J; Lefebvre, Gregory; Jarosz, Yohan; Sinclair, Lucas; Noordermeer, Daan; Rougemont, Jacques; Leleu, Marion

    2014-01-01

    The HTSstation analysis portal is a suite of simple web forms coupled to modular analysis pipelines for various applications of High-Throughput Sequencing including ChIP-seq, RNA-seq, 4C-seq and re-sequencing. HTSstation offers biologists the possibility to rapidly investigate their HTS data using an intuitive web application with heuristically pre-defined parameters. A number of open-source software components have been implemented and can be used to build, configure and run HTS analysis pipelines reactively. Besides, our programming framework empowers developers with the possibility to design their own workflows and integrate additional third-party software. The HTSstation web application is accessible at http://htsstation.epfl.ch.

  6. SOA based Data Architecture for HTML5 Web Applications

    Directory of Open Access Journals (Sweden)

    Catalin STRIMBEI

    2013-01-01

    Full Text Available Web Services based architectures have already been established as the preferred way to integrate SOA-specific components, from front-end to back-end business services. One of the key elements of such an architecture are data-based or entity services. In this context, the SDO standard and SDO-related technologies have been confirmed as a possible approach to aggregating such enterprise-wide federations of data services, mainly backed by database servers but not limited to them. In what follows, we discuss an architectural proposal based on the SDO approach to seamlessly integrate presentation and data services within an enterprise SOA context. In this way we outline the benefits of a common end-to-end data integration strategy. We also argue that using HTML5-based clients as front-end services in conjunction with SDO data services could be an effective strategy for adopting mobile computing in the enterprise context.

  7. CASAS: Cancer Survival Analysis Suite, a web based application.

    Science.gov (United States)

    Rupji, Manali; Zhang, Xinyan; Kowalski, Jeanne

    2017-01-01

    We present CASAS, a Shiny R-based tool for interactive survival analysis and visualization of results. The tool provides a web-based one-stop shop to perform the following types of survival analysis: quantile, landmark and competing risks, in addition to standard survival analysis. The interface makes it easy to perform such survival analyses and obtain results using the interactive Kaplan-Meier and cumulative incidence plots. Univariate analysis can be performed on one or several user-specified variable(s) simultaneously, the results of which are displayed in a single table that includes log-rank p-values and hazard ratios along with their significance. For several quantile survival analyses from multiple cancer types, a single summary grid is constructed. The CASAS package has been implemented in R and is available via http://shinygispa.winship.emory.edu/CASAS/. The developmental repository is available at https://github.com/manalirupji/CASAS/.
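
    CASAS itself is an R/Shiny tool; purely as an illustration of the standard estimate underneath such interfaces, the sketch below fits a Kaplan-Meier curve with the Python lifelines package on made-up toy data.

```python
# Illustration of the core estimate behind such a tool: a Kaplan-Meier fit.
# CASAS is an R/Shiny application; the Python "lifelines" package is used
# here only as a convenient stand-in, with invented toy data.
from lifelines import KaplanMeierFitter

durations = [5, 6, 6, 2, 4, 9, 12, 3, 7, 8]   # follow-up time (months)
observed  = [1, 0, 1, 1, 1, 0, 1, 1, 0, 1]    # 1 = event, 0 = censored

kmf = KaplanMeierFitter()
kmf.fit(durations, event_observed=observed, label="toy cohort")

print(kmf.survival_function_)                 # S(t) at each event time
print("median survival:", kmf.median_survival_time_)
```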

  8. Introduction to information retrieval

    CERN Document Server

    Manning, Christopher D; Schütze, Hinrich

    2008-01-01

    Class-tested and coherent, this textbook teaches classical and web information retrieval, including web search and the related areas of text classification and text clustering from basic concepts. It gives an up-to-date treatment of all aspects of the design and implementation of systems for gathering, indexing, and searching documents; methods for evaluating systems; and an introduction to the use of machine learning methods on text collections. All the important ideas are explained using examples and figures, making it perfect for introductory courses in information retrieval for advanced un
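
    A toy version of the book's central machinery, an inverted index queried with tf-idf scoring, can be written in a few lines; the three "documents" below are invented for the example.

```python
# Toy inverted index with tf-idf scoring over three tiny documents.
import math
from collections import Counter, defaultdict

docs = {
    1: "web information retrieval systems",
    2: "web search engines rank documents",
    3: "text classification and text clustering",
}

index = defaultdict(dict)                  # term -> {doc_id: term frequency}
for doc_id, text in docs.items():
    for term, tf in Counter(text.split()).items():
        index[term][doc_id] = tf

def score(query: str):
    n = len(docs)
    scores = defaultdict(float)
    for term in query.split():
        postings = index.get(term, {})
        if not postings:
            continue
        idf = math.log(n / len(postings))  # rarer terms weigh more
        for doc_id, tf in postings.items():
            scores[doc_id] += tf * idf
    return sorted(scores.items(), key=lambda kv: -kv[1])

print(score("web retrieval"))              # doc 1 ranks above doc 2
```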

  9. Designing and developing portable large-scale JavaScript web applications within the Experiment Dashboard framework

    CERN Multimedia

    CERN. Geneva

    2012-01-01

    Improvements in web browser performance and web standards compliance, as well as the availability of comprehensive JavaScript libraries, provides an opportunity to develop functionally rich yet intuitive web applications that allow users to access, render and analyse data in novel ways. However, the development of such large-scale JavaScript web applications presents new challenges, in particular with regard to code sustainability and team-based work. We present an approach that meets the challenges of large-scale JavaScript web application design and development, including client-side model-view-controller architecture, design patterns, and JavaScript libraries. Furthermore, we show how the approach leads naturally to the encapsulation of the data source as a web API, allowing applications to be easily ported to new data sources. The Experiment Dashboard framework is used for the development of applications for monitoring the distributed computing activities of virtual organisations on the Worldwide LHC Co...

  10. Designing and developing portable large-scale JavaScript web applications within the Experiment Dashboard framework

    CERN Document Server

    Andreeva, J; Karavakis, E; Kokoszkiewicz, L; Nowotka, M; Saiz, P; Tuckett, D

    2012-01-01

    Improvements in web browser performance and web standards compliance, as well as the availability of comprehensive JavaScript libraries, provides an opportunity to develop functionally rich yet intuitive web applications that allow users to access, render and analyse data in novel ways. However, the development of such large-scale JavaScript web applications presents new challenges, in particular with regard to code sustainability and team-based work. We present an approach that meets the challenges of large-scale JavaScript web application design and development, including client-side model-view-controller architecture, design patterns, and JavaScript libraries. Furthermore, we show how the approach leads naturally to the encapsulation of the data source as a web API, allowing applications to be easily ported to new data sources. The Experiment Dashboard framework is used for the development of applications for monitoring the distributed computing activities of virtual organisations on the Worldwide LHC Comp...

  11. Web Applications That Promote Learning Communities in Today's Online Classrooms

    Science.gov (United States)

    Reigle, Rosemary R.

    2015-01-01

    The changing online learning environment requires that instructors depend less on the standard tools built into most educational learning platforms and turn their focus to use of Open Educational Resources (OERs) and free or low-cost commercial applications. These applications permit new and more efficient ways to build online learning communities…

  12. Educators' Perceived Importance of Web 2.0 Technology Applications

    Science.gov (United States)

    Pritchett, Christal C.; Wohleb, Elisha C.; Pritchett, Christopher G.

    2013-01-01

    This research study was designed to examine the degree of perceived importance of interactive technology applications among various groups of certified educators; the degree to which education professionals utilized interactive online technology applications and to determine if there was a significant difference between the different groups based…

  13. microCOMB web application for the identification of gene expression components

    OpenAIRE

    Skok, Boštjan

    2016-01-01

    The goal of this thesis is to develop a web application that functions as a user interface for microCOMB and manages its gene expression database. The main functions of the application are to enable the user to upload expression profiles to be analyzed and show the results, store the user's history of completed analyses, and keep the public database up to date. In the thesis we describe the technologies used, the architecture, the development process and the application functionality. During the development and ...

  14. The Ground Flash Fraction Retrieval Algorithm Employing Differential Evolution: Simulations and Applications

    Science.gov (United States)

    Koshak, William; Solakiewicz, Richard

    2012-01-01

    The ability to estimate the fraction of ground flashes in a set of flashes observed by a satellite lightning imager, such as the future GOES-R Geostationary Lightning Mapper (GLM), would likely improve operational and scientific applications (e.g., severe weather warnings, lightning nitrogen oxides studies, and global electric circuit analyses). A Bayesian inversion method, called the Ground Flash Fraction Retrieval Algorithm (GoFFRA), was recently developed for estimating the ground flash fraction. The method uses a constrained mixed exponential distribution model to describe a particular lightning optical measurement called the Maximum Group Area (MGA). To obtain the optimum model parameters (one of which is the desired ground flash fraction), a scalar function must be minimized. This minimization is difficult because of two problems: (1) Label Switching (LS), and (2) Parameter Identity Theft (PIT). The LS problem is well known in the literature on mixed exponential distributions, and the PIT problem was discovered in this study. Each problem occurs when one allows the numerical minimizer to freely roam through the parameter search space; this allows certain solution parameters to interchange roles which leads to fundamental ambiguities, and solution error. A major accomplishment of this study is that we have employed a state-of-the-art genetic-based global optimization algorithm called Differential Evolution (DE) that constrains the parameter search in such a way as to remove both the LS and PIT problems. To test the performance of the GoFFRA when DE is employed, we applied it to analyze simulated MGA datasets that we generated from known mixed exponential distributions. Moreover, we evaluated the GoFFRA/DE method by applying it to analyze actual MGAs derived from low-Earth orbiting lightning imaging sensor data; the actual MGA data were classified as either ground or cloud flash MGAs using National Lightning Detection Network[TM] (NLDN) data. Solution error
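
    The estimation problem can be sketched as follows: model the MGA population as a two-component exponential mixture and minimize its negative log-likelihood with differential evolution, using disjoint bounds on the two rate parameters so the components cannot swap roles (a simple stand-in for the constrained search that removes the label-switching and parameter-identity-theft problems). All numbers and bounds below are invented for illustration; this is not the GoFFRA code.

```python
# Fit a two-component exponential mixture to synthetic maximum group areas
# (MGAs) with differential evolution; theta = (f, lam_g, lam_c) where f is
# the ground flash fraction. Disjoint rate bounds prevent label switching.
import numpy as np
from scipy.optimize import differential_evolution

rng = np.random.default_rng(42)
f_true, lam_g, lam_c = 0.3, 1/50, 1/200        # invented "true" parameters
n = 2000
is_ground = rng.random(n) < f_true
mga = np.where(is_ground,
               rng.exponential(1/lam_g, n),    # one component: smaller MGAs
               rng.exponential(1/lam_c, n))    # the other: larger MGAs

def neg_log_like(theta, x):
    f, lg, lc = theta
    pdf = f * lg * np.exp(-lg * x) + (1 - f) * lc * np.exp(-lc * x)
    return -np.sum(np.log(pdf + 1e-300))       # guard against log(0)

result = differential_evolution(
    neg_log_like,
    bounds=[(0.0, 1.0), (1/100, 1.0), (1e-4, 1/100)],  # lam_g > lam_c
    args=(mga,), seed=0)

print(f"estimated ground flash fraction: {result.x[0]:.3f} (true {f_true})")
```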

  15. Popular song and lyrics synchronization and its application to music information retrieval

    Science.gov (United States)

    Chen, Kai; Gao, Sheng; Zhu, Yongwei; Sun, Qibin

    2006-01-01

    An automatic synchronization system for popular songs and their lyrics is presented in this paper. The system includes two main components: (a) automatically detecting vocal/non-vocal segments in the audio signal, and (b) automatically aligning the acoustic signal of the song with its lyrics using speech recognition techniques, positioning the boundaries of the lyrics in the acoustic realization at multiple levels simultaneously (e.g., the word/syllable level and the phrase level). GMM models and a set of HMM-based acoustic model units are carefully designed and trained for the detection and alignment. To eliminate the severe mismatch due to the diversity of musical signals and the sparse training data available, unsupervised adaptation techniques such as maximum likelihood linear regression (MLLR) are exploited to tailor the models to the real environment, which improves the robustness of the synchronization system. To further reduce the effect of missed non-vocal music on alignment, a novel grammar net is built to direct the alignment. To our knowledge, this is the first automatic synchronization system based only on low-level acoustic features such as MFCCs. We evaluate the system on a Chinese song dataset collected from 3 popular singers. We obtain 76.1% boundary accuracy at the syllable level (BAS) and 81.5% boundary accuracy at the phrase level (BAP) using fully automatic vocal/non-vocal detection and alignment. The synchronization system has many applications, such as multi-modality (audio and textual) content-based popular song browsing and retrieval. Through this study, we would like to open up the discussion of some challenging problems in developing a robust synchronization system for large-scale databases.

  16. Recent advancements on the development of web-based applications for the implementation of seismic analysis and surveillance systems

    Science.gov (United States)

    Friberg, P. A.; Luis, R. S.; Quintiliani, M.; Lisowski, S.; Hunter, S.

    2014-12-01

    Recently, a novel set of modules has been included in the open source Earthworm seismic data processing system, supporting the use of web applications. These include the Mole sub-system, for storing relevant event data in a MySQL database (see M. Quintiliani and S. Pintore, SRL, 2013), and an embedded web server, Moleserv, for serving such data to web clients in QuakeML format. These modules have enabled, for the first time using Earthworm, the use of web applications for seismic data processing. These can greatly simplify the operation and maintenance of seismic data processing centers by having one or more servers provide the relevant data, as well as the data processing applications themselves, to client machines running arbitrary operating systems. Web applications with secure online web access allow operators to work anywhere, without the often cumbersome and bandwidth-hungry use of secure shell or virtual private networks. Furthermore, web applications can seamlessly access third-party data repositories to acquire additional information, such as maps. Finally, the use of HTML email brought the possibility of specialized web applications to be used in email clients. This is the case of EWHTMLEmail, which produces event notification emails that are in fact simple web applications for plotting relevant seismic data. Providing web services as part of Earthworm has enabled a number of other tools as well. One is ISTI's EZ Earthworm, a web-based command and control system for an otherwise command-line driven system; another is a waveform web service. The waveform web service serves Earthworm data to additional web clients for plotting, picking, and other web-based processing tools. The current Earthworm waveform web service hosts an advanced plotting capability for providing views of event-based waveforms from a Mole database served by Moleserv. The current trend towards the usage of cloud services supported by web applications is driving improvements in Java

  17. Spatiotemporal Land Use Change Analysis Using Open-source GIS and Web Based Application

    Directory of Open Access Journals (Sweden)

    Wan Yusryzal Wan Ibrahim

    2015-05-01

    Full Text Available Spatiotemporal changes are very important information for revealing the characteristics of the urbanization process. Sharing this information is beneficial for public awareness, which in turn improves participation in adaptive management of the spatial planning process. Open-source software and web applications are freely available tools that can be the best medium for any individual or agency to share this important information. The objective of the paper is to discuss the spatiotemporal land use change in Iskandar Malaysia by using open-source GIS (Quantum GIS) and to publish the results through a web application (mash-up). Land use maps from 1994 to 2011 were developed and analyzed to show the landscape change of the region. Subsequently, a web application was set up to distribute the findings of the study. The results show significant changes of land use in the study area, especially the decline of agricultural and natural land, which was converted to urban land uses. Residential and industrial areas largely replaced the agricultural and natural areas, particularly along the coastal zone of the region. This information is published through an interactive web GIS in order to share it with the public and stakeholders. There are some limitations to the web application, but they do not diminish the advantages of using it. The integration of open-source GIS and web applications is very helpful in sharing planning information, particularly in the study area, which is experiencing rapid land use and land cover change. Basic information from this study is vital for further work, such as projecting future land use change and other related studies in the area.

  18. Linearization of the Principal Component Analysis method for radiative transfer acceleration: Application to retrieval algorithms and sensitivity studies

    International Nuclear Information System (INIS)

    Spurr, R.; Natraj, V.; Lerot, C.; Van Roozendael, M.; Loyola, D.

    2013-01-01

    Principal Component Analysis (PCA) is a promising tool for enhancing radiative transfer (RT) performance. When applied to binned optical property data sets, PCA exploits redundancy in the optical data, and restricts the number of full multiple-scatter calculations to those optical states corresponding to the most important principal components, yet still maintaining high accuracy in the radiance approximations. We show that the entire PCA RT enhancement process is analytically differentiable with respect to any atmospheric or surface parameter, thus allowing for accurate and fast approximations of Jacobian matrices, in addition to radiances. This linearization greatly extends the power and scope of the PCA method to many remote sensing retrieval applications and sensitivity studies. In the first example, we examine accuracy for PCA-derived UV-backscatter radiance and Jacobian fields over a 290–340 nm window. In a second application, we show that performance for UV-based total ozone column retrieval is considerably improved without compromising the accuracy. -- Highlights: •Principal Component Analysis (PCA) of spectrally-binned atmospheric optical properties. •PCA-based accelerated radiative transfer with 2-stream model for fast multiple-scatter. •Atmospheric and surface property linearization of this PCA performance enhancement. •Accuracy of PCA enhancement for radiances and bulk-property Jacobians, 290–340 nm. •Application of PCA speed enhancement to UV backscatter total ozone retrievals
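
    A minimal sketch of the core PCA idea, using assumed synthetic optical data: compress the spectrally-binned optical properties to a few principal components, so the expensive multiple-scatter model runs only at the mean state and at one-EOF perturbations around it (the cheap 2-stream correction step is omitted here).

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
# Stand-in for spectrally-binned optical properties: 500 bins x 40 layer values.
optical = rng.lognormal(mean=0.0, sigma=0.5, size=(500, 40))

pca = PCA(n_components=4)
scores = pca.fit_transform(np.log(optical))   # PCA on log optical properties

# The expensive multiple-scatter RT model is evaluated only at the mean state
# and at +/- one-EOF perturbations (9 calls instead of 500); cheap 2-stream
# calls at every bin would supply the correction baseline (not shown).
rt_states = [np.exp(pca.mean_)] + [
    np.exp(pca.mean_ + s * pca.components_[k])
    for k in range(pca.n_components_) for s in (+1.0, -1.0)
]
print(f"{optical.shape[0]} spectral bins -> {len(rt_states)} full RT calls")
```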

  19. A web application to support telemedicine services in Brazil.

    Science.gov (United States)

    Barbosa, Ana Karina P; de A Novaes, Magdala; de Vasconcelos, Alexandre M L

    2003-01-01

    This paper describes a system that has been developed to support telemedicine activities in Brazil, a country that has serious problems in the delivery of health services. The system is part of the broader Tele-health Project that has been developed to make health services more accessible to the low-income population in the northeast region. The HealthNet system is based upon a pilot area that uses fetal and pediatric cardiology. This article describes the system's conceptual model, including the tele-diagnosis and second medical opinion services, as well as its architecture and development stages. The system model covers both collaboration tools used asynchronously, such as discussion forums, and synchronous tools, such as videoconference services. Free web tools, such as Java and the MySQL database, are utilized for the implementation. Furthermore, an interface with Electronic Patient Record (EPR) systems using Extensible Markup Language (XML) technology is also proposed. Finally, considerations concerning the development and implementation process are presented.

  20. Soil Moisture Retrieval Using Convolutional Neural Networks: Application to Passive Microwave Remote Sensing

    Science.gov (United States)

    Hu, Z.; Xu, L.; Yu, B.

    2018-04-01

    An empirical model is established to analyse the daily retrieval of soil moisture from passive microwave remote sensing using convolutional neural networks (CNN). Soil moisture plays an important role in the water cycle. However, with the rapid growth of acquisition technology for remotely sensed data, it is a hard task for remote sensing practitioners to find a fast and convenient model to deal with the massive data. In this paper, AMSR-E brightness temperatures are used to train a CNN to predict the output of the European Centre for Medium-Range Weather Forecasts (ECMWF) model. Compared with classical inversion methods, the deep learning-based method is more suitable for global soil moisture retrieval. It is well supported by graphics processing unit (GPU) acceleration, which can meet the demand of massive data inversion. Once the model is trained, a global soil moisture map can be predicted in less than 10 seconds. Moreover, the deep learning-based soil moisture retrieval method can learn complex texture features from big remote sensing data. In this experiment, the results demonstrate that the CNN deployed to retrieve global soil moisture achieves better performance than support vector regression (SVR) for soil moisture retrieval.
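
    A minimal sketch of this kind of retrieval, with an assumed architecture and random stand-in data rather than the authors' network or the real AMSR-E/ECMWF grids:

```python
# Sketch: regress per-pixel soil moisture from multi-channel brightness
# temperatures with a small CNN in PyTorch. Channel count, layer sizes, and
# the random tensors are illustrative assumptions.
import torch
import torch.nn as nn

class SoilMoistureCNN(nn.Module):
    def __init__(self, channels=10):          # e.g. dual-pol channels; assumption
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 1, kernel_size=1),   # per-pixel volumetric soil moisture
        )

    def forward(self, tb):                     # tb: (batch, channels, H, W)
        return self.net(tb)

model = SoilMoistureCNN()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

tb = torch.randn(8, 10, 64, 64)      # stand-in batch; real data: AMSR-E grids
target = torch.rand(8, 1, 64, 64)    # stand-in labels; real data: ECMWF moisture
loss = loss_fn(model(tb), target)
loss.backward()
optimizer.step()
```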

  1. SOIL MOISTURE RETRIEVAL USING CONVOLUTIONAL NEURAL NETWORKS: APPLICATION TO PASSIVE MICROWAVE REMOTE SENSING

    Directory of Open Access Journals (Sweden)

    Z. Hu

    2018-04-01

    Full Text Available An empirical model is established to analyse the daily retrieval of soil moisture from passive microwave remote sensing using convolutional neural networks (CNN). Soil moisture plays an important role in the water cycle. However, with the rapid growth of acquisition technology for remotely sensed data, it is a hard task for remote sensing practitioners to find a fast and convenient model to deal with the massive data. In this paper, AMSR-E brightness temperatures are used to train a CNN to predict the output of the European Centre for Medium-Range Weather Forecasts (ECMWF) model. Compared with classical inversion methods, the deep learning-based method is more suitable for global soil moisture retrieval. It is well supported by graphics processing unit (GPU) acceleration, which can meet the demand of massive data inversion. Once the model is trained, a global soil moisture map can be predicted in less than 10 seconds. Moreover, the deep learning-based soil moisture retrieval method can learn complex texture features from big remote sensing data. In this experiment, the results demonstrate that the CNN deployed to retrieve global soil moisture achieves better performance than support vector regression (SVR) for soil moisture retrieval.

  2. Application of discriminative models for interactive query refinement in video retrieval

    Science.gov (United States)

    Srivastava, Amit; Khanwalkar, Saurabh; Kumar, Anoop

    2013-12-01

    The ability to quickly search large volumes of video for specific actions or events can provide a dramatic new capability to intelligence agencies. Example-based queries from video are a form of content-based information retrieval (CBIR) where the objective is to retrieve clips from a video corpus, or stream, using a representative query sample to find more clips "like this". Often, the accuracy of video retrieval is largely limited by the gap between the available video descriptors and the underlying query concept, and such exemplar queries return many irrelevant results along with the relevant ones. In this paper, we present an Interactive Query Refinement (IQR) system which acts as a powerful tool to leverage human feedback and allow intelligence analysts to iteratively refine search queries for improved precision in the retrieved results. In our approach to IQR, we leverage discriminative models that operate on high-dimensional features derived from low-level video descriptors in an iterative framework. Our IQR model solicits relevance feedback on examples selected from the region of uncertainty and updates the discriminating boundary to produce a relevance-ranked results list. We achieved a 358% relative improvement in Mean Average Precision (MAP) over the initial retrieval list, at a rank cutoff of 100, over 4 iterations. We compare our discriminative IQR model approach to a naïve IQR and show that our model-based approach yields a 49% relative improvement over the naïve, no-model system.
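
    The refinement loop can be illustrated with a generic discriminative model; the sketch below uses a linear SVM and synthetic descriptors as stand-ins for the paper's features and its human analyst.

```python
# Sketch of iterative query refinement: train a discriminative model on
# accumulated relevance feedback, rank the corpus by decision score, and
# solicit labels near the decision boundary (the "region of uncertainty").
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(1)
corpus = rng.normal(size=(2000, 128))                  # clip descriptor vectors
oracle = (corpus[:, 0] + 0.3 * corpus[:, 1] > 1).astype(int)  # hidden relevance

# Seed feedback from an initial retrieval (both classes guaranteed present).
labeled = list(np.where(oracle == 1)[0][:5]) + list(np.where(oracle == 0)[0][:15])

for _ in range(4):                                     # 4 refinement iterations
    clf = LinearSVC(C=1.0).fit(corpus[labeled], oracle[labeled])
    scores = clf.decision_function(corpus)
    ranked = np.argsort(-scores)                       # relevance-ranked list
    uncertain = np.argsort(np.abs(scores))             # region of uncertainty
    labeled += [i for i in uncertain if i not in labeled][:10]  # ask the analyst
```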

  3. Web 2.0 applications in medicine: trends and topics in the literature.

    Science.gov (United States)

    Boudry, Christophe

    2015-04-01

    The World Wide Web has changed research habits, and these changes were further expanded when "Web 2.0" became popular in 2005. Bibliometrics is a helpful tool for describing patterns of publication, interpreting progression over time, and mapping the geographical distribution of research in a given field. Few studies employing bibliometrics, however, have been carried out on the correlative nature of scientific literature and Web 2.0. The aim of this bibliometric analysis was to provide an overview of Web 2.0 implications in the biomedical literature. The objectives were to assess the growth rate of the literature, key journals, authors, and country contributions, and to evaluate whether the various Web 2.0 applications were expressed within this biomedical literature, and if so, how. A specific query with keywords chosen to be representative of Web 2.0 applications was built for the PubMed database. Articles related to Web 2.0 were downloaded in Extensible Markup Language (XML) and were processed through custom hypertext preprocessor (PHP) scripts, then imported into Microsoft Excel 2010 for data processing. A total of 1347 articles were included in this study. The number of articles related to Web 2.0 increased from 2002 to 2012 (the average annual growth rate was 106.3%, with a maximum of 333% in 2005). The United States was by far the predominant country of authorship, with 514 articles (54.0%; 514/952). The second and third most productive countries were the United Kingdom and Australia, with 87 (9.1%; 87/952) and 44 articles (4.6%; 44/952), respectively. The distribution of the number of articles per author showed that the core population of researchers working on Web 2.0 in the medical field could be estimated at approximately 75. In total, 614 journals were identified during this analysis. Using Bradford's law, 27 core journals were identified, among which three (Studies in Health Technology and Informatics, Journal of Medical Internet Research, and Nucleic Acids
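
    The harvesting step described here can be reproduced in outline against the public NCBI E-utilities API; the query string below is a simplified stand-in for the authors' Web 2.0 keyword query.

```python
# Sketch of PubMed harvesting via NCBI E-utilities (esearch for PMIDs,
# efetch for the XML records). Query and retmax are illustrative assumptions.
import requests

ESEARCH = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"
params = {
    "db": "pubmed",
    "term": '"web 2.0"[Title/Abstract] OR "social media"[Title/Abstract]',
    "retmax": 200,
    "retmode": "json",
}
ids = requests.get(ESEARCH, params=params, timeout=30).json()["esearchresult"]["idlist"]

# Fetch the matching records as XML for downstream processing (the paper used
# PHP scripts and Excel; any XML parser works).
EFETCH = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/efetch.fcgi"
xml = requests.get(EFETCH, params={"db": "pubmed", "id": ",".join(ids),
                                   "retmode": "xml"}, timeout=60).text
```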

  4. Hanford tank waste simulants specification and their applicability for the retrieval, pretreatment, and vitrification processes

    Energy Technology Data Exchange (ETDEWEB)

    GR Golcar; NG Colton; JG Darab; HD Smith

    2000-04-04

    A wide variety of waste simulants were developed over the past few years to test various retrieval, pretreatment and waste immobilization technologies and unit operations. Experiments can be performed cost-effectively using non-radioactive waste simulants in open laboratories. This document reviews the composition of many previously used waste simulants for remediation of tank wastes at the Hanford reservation. In this review, the simulants used in testing for the retrieval, pretreatment, and vitrification processes are compiled, and the representative chemical and physical characteristics of each simulant are specified. The retrieval and transport simulants may be useful for testing in-plant fluidic devices and in some cases for filtration technologies. The pretreatment simulants will be useful for filtration, Sr/TRU removal, and ion exchange testing. The vitrification simulants will be useful for testing melter, melter feed preparation technologies, and for waste form evaluations.

  5. Hanford tank waste simulants specification and their applicability for the retrieval, pretreatment, and vitrification processes

    International Nuclear Information System (INIS)

    GR Golcar; NG Colton; JG Darab; HD Smith

    2000-01-01

    A wide variety of waste simulants were developed over the past few years to test various retrieval, pretreatment and waste immobilization technologies and unit operations. Experiments can be performed cost-effectively using non-radioactive waste simulants in open laboratories. This document reviews the composition of many previously used waste simulants for remediation of tank wastes at the Hanford reservation. In this review, the simulants used in testing for the retrieval, pretreatment, and vitrification processes are compiled, and the representative chemical and physical characteristics of each simulant are specified. The retrieval and transport simulants may be useful for testing in-plant fluidic devices and in some cases for filtration technologies. The pretreatment simulants will be useful for filtration, Sr/TRU removal, and ion exchange testing. The vitrification simulants will be useful for testing melter, melter feed preparation technologies, and for waste form evaluations

  6. Some Programs Should Not Run on Laptops - Providing Programmatic Access to Applications Via Web Services

    Science.gov (United States)

    Gupta, V.; Gupta, N.; Gupta, S.; Field, E.; Maechling, P.

    2003-12-01

    Modern laptop computers, and personal computers, can provide capabilities that are, in many ways, comparable to workstations or departmental servers. However, this does not mean we should run all computations on our local computers. We have identified several situations in which it is preferable to implement our seismological application programs in a distributed, server-based computing model. In this model, application programs on the user's laptop, or local computer, invoke programs that run on an organizational server, and the results are returned to the invoking system. Situations in which a server-based architecture may be preferred include: (a) a program is written in a language, or written for an operating environment, that is unsupported on the local computer, (b) software libraries or utilities required to execute a program are not available on the user's computer, (c) a computational program is physically too large, or computationally too expensive, to run on a user's computer, (d) a user community wants to enforce a consistent method of performing a computation by standardizing on a single implementation of a program, and (e) the computational program may require current information that is not available to all client computers. Until recently, distributed, server-based computational capabilities were implemented using client/server architectures. In these architectures, client programs were often written in the same language, and executed in the same computing environment, as the servers. Recently, a new distributed computational model, called Web Services, has been developed. Web Services are based on Internet standards such as XML, SOAP, WSDL, and UDDI. Web Services offer the promise of platform- and language-independent distributed computing. To investigate this new computational model, and to provide useful services to the SCEC Community, we have implemented several computational and utility programs using a Web Service architecture. We have
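
    The paper's services were SOAP/WSDL-based; as a minimal analogous illustration of the server-based model (a thin client invoking a computation hosted on an organizational server), the sketch below uses XML-RPC from the Python standard library, with an invented example function.

```python
# Analogous sketch only: the organizational server hosts the one blessed
# implementation; clients invoke it remotely instead of running it locally.
from xmlrpc.server import SimpleXMLRPCServer

def travel_time(distance_km: float, velocity_km_s: float) -> float:
    """Stand-in for a computation too heavy, or too version-sensitive,
    to run on every client machine."""
    return distance_km / velocity_km_s

server = SimpleXMLRPCServer(("localhost", 8000), allow_none=True)
server.register_function(travel_time)
server.serve_forever()
```

    A client on any platform would then call, e.g., `xmlrpc.client.ServerProxy("http://localhost:8000").travel_time(100.0, 3.5)` and receive the result over HTTP.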

  7. Development of a Web application for a real time information system; Desarrollo de una aplicacion web para un sistema de informacion en tiempo real

    Energy Technology Data Exchange (ETDEWEB)

    Espinosa R, Alfredo; Silva F, Brisa M; Quintero R, Agustin [Instituto de Investigaciones Electricas, Cuernavaca, Morelos (Mexico)

    2007-07-01

    This article describes a technique for the development of a Web application for a real-time information system that allows the remote and concurrent connection of different pieces of equipment on the network to the system's historical database, without the need to install any software component on the remote equipment of the user who performs the query. It defines and establishes the software architecture that enables the development of the Web application, the analysis stages, and the operation of the technology to be used, as well as the design, development and implementation of the application. Finally, the accomplishments obtained with the development of the Web application for a real-time information system are described.

  8. Development of a multichemical food web model: application to PBDEs in Lake Ellasjoen, Bear Island, Norway.

    Science.gov (United States)

    Gandhi, Nilima; Bhavsar, Satyendra P; Gewurtz, Sarah B; Diamond, Miriam L; Evenset, Anita; Christensen, Guttorm N; Gregor, Dennis

    2006-08-01

    A multichemical food web model has been developed to estimate the biomagnification of interconverting chemicals in aquatic food webs. We extended a fugacity-based food web model for single chemicals to account for reversible and irreversible biotransformation among a parent chemical and its transformation products, by simultaneously solving the mass balance equations of the chemicals using a matrix solution. The model can be applied to any number of chemicals and organisms or taxonomic groups in a food web. The model was illustratively applied to four PBDE congeners, BDE-47, -99, -100, and -153, in the food web of Lake Ellasjøen, Bear Island, Norway. In Ellasjøen arctic char (Salvelinus alpinus), the multichemical model estimated PBDE biotransformation from higher- to lower-brominated congeners and improved the correspondence between estimated and measured concentrations in comparison to estimates from the single-chemical food web model. The underestimation of BDE-47, even after considering its formation from biotransformation of the other three congeners, suggests formation from additional biotransformation pathways not considered in this application. The model estimates approximate congener-specific biotransformation half-lives of 5.7, 0.8, 1.14, and 0.45 years for BDE-47, -99, -100, and -153, respectively, in large arctic char (S. alpinus) of Lake Ellasjøen.
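
    The matrix solution can be sketched for a single organism at steady state; all rate coefficients below are invented for illustration and are not the Lake Ellasjøen parameterization.

```python
# Conceptual sketch: steady-state mass balances for interconverting chemicals,
# (loss_i + transformation out of i) * f_i - sum_j formation_ij * f_j = input_i,
# solved simultaneously as a linear system A f = b.
import numpy as np

# 4 congeners: BDE-47, -99, -100, -153 (all D-values are made-up numbers).
uptake = np.array([2.0, 1.5, 1.2, 0.9])    # uptake inputs (e.g. D_uptake * f_water)
loss = np.array([1.0, 1.4, 1.3, 1.8])      # elimination + growth dilution
biotrans = np.array([                       # biotrans[i][j]: formation of i from j
    [0.0, 0.3, 0.2, 0.1],                   # -99/-100/-153 debrominate to -47
    [0.0, 0.0, 0.0, 0.0],
    [0.0, 0.0, 0.0, 0.0],
    [0.0, 0.0, 0.0, 0.0],
])
A = np.diag(loss + biotrans.sum(axis=0)) - biotrans   # losses on the diagonal,
fugacities = np.linalg.solve(A, uptake)               # formation off-diagonal
print(dict(zip(["BDE-47", "BDE-99", "BDE-100", "BDE-153"], fugacities.round(3))))
```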

  9. Study of Query Expansion Techniques and Their Application in the Biomedical Information Retrieval

    Directory of Open Access Journals (Sweden)

    A. R. Rivas

    2014-01-01

    … retrieval systems. These techniques help to overcome vocabulary mismatch issues by expanding the original query with additional relevant terms and reweighting the terms in the expanded query. In this paper, different text preprocessing and query expansion approaches are combined to improve the set of documents initially retrieved by a query in a scientific document database. A corpus belonging to MEDLINE, called Cystic Fibrosis, is used as a knowledge source. Experimental results show that the proposed combinations of techniques greatly enhance the efficiency obtained by traditional queries.
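
    One classical expansion approach consistent with this theme is pseudo-relevance feedback in the Rocchio style; the toy corpus below stands in for the Cystic Fibrosis MEDLINE collection.

```python
# Sketch: Rocchio-style pseudo-relevance feedback over TF-IDF vectors.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

docs = [
    "cystic fibrosis patients show chronic lung infection",
    "sweat chloride test diagnoses cystic fibrosis",
    "influenza vaccination in elderly populations",
    "lung transplantation outcomes in fibrosis patients",
]
vec = TfidfVectorizer(stop_words="english")
D = vec.fit_transform(docs)

q = vec.transform(["cystic fibrosis lung"])
sims = cosine_similarity(q, D).ravel()
top = sims.argsort()[::-1][:2]              # assume the top-2 docs are relevant

alpha, beta = 1.0, 0.75                     # standard Rocchio weights
q_expanded = alpha * q + beta * D[top].mean(axis=0)   # reweighted, expanded query
new_sims = cosine_similarity(np.asarray(q_expanded), D).ravel()
print("before:", sims.round(2), " after:", new_sims.round(2))
```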

  10. Workflow and web application for annotating NCBI BioProject transcriptome data.

    Science.gov (United States)

    Vera Alvarez, Roberto; Medeiros Vidal, Newton; Garzón-Martínez, Gina A; Barrero, Luz S; Landsman, David; Mariño-Ramírez, Leonardo

    2017-01-01

    The volume of transcriptome data is growing exponentially due to rapid improvement of experimental technologies. In response, large central resources such as those of the National Center for Biotechnology Information (NCBI) are continually adapting their computational infrastructure to accommodate this large influx of data. New and specialized databases, such as Transcriptome Shotgun Assembly Sequence Database (TSA) and Sequence Read Archive (SRA), have been created to aid the development and expansion of centralized repositories. Although the central resource databases are under continual development, they do not include automatic pipelines to increase annotation of newly deposited data. Therefore, third-party applications are required to achieve that aim. Here, we present an automatic workflow and web application for the annotation of transcriptome data. The workflow creates secondary data such as sequencing reads and BLAST alignments, which are available through the web application. They are based on freely available bioinformatics tools and scripts developed in-house. The interactive web application provides a search engine and several browser utilities. Graphical views of transcript alignments are available through SeqViewer, an embedded tool developed by NCBI for viewing biological sequence data. The web application is tightly integrated with other NCBI web applications and tools to extend the functionality of data processing and interconnectivity. We present a case study for the species Physalis peruviana with data generated from BioProject ID 67621. URL: http://www.ncbi.nlm.nih.gov/projects/physalis/. Published by Oxford University Press 2017. This work is written by US Government employees and is in the public domain in the US.

  11. Overview and sample applications of SMILES and Odin-SMR retrievals of upper tropospheric humidity and cloud ice mass

    Directory of Open Access Journals (Sweden)

    P. Eriksson

    2014-12-01

    Full Text Available Retrievals of cloud ice mass and humidity from the Superconducting Submillimeter-Wave Limb-Emission Sounder (SMILES) and the Odin-SMR (Sub-Millimetre Radiometer) limb sounder are presented and example applications of the data are given. SMILES data give an unprecedented view of the diurnal variation of cloud ice mass. Mean regional diurnal cycles are reported and compared to some global climate models. Some improvements in the models regarding diurnal timing and relative amplitude were noted, but the models' mean ice mass around 250 hPa is still low compared to the observations. The influence of the ENSO (El Niño–Southern Oscillation) state on the upper troposphere is demonstrated using 12 years of Odin-SMR data. The same retrieval scheme is applied for both sensors, and gives low systematic differences between the two data sets. A special feature of this Bayesian retrieval scheme, of Monte Carlo integration type, is that values are produced for all measurements, but for some atmospheric states retrieved values only reflect a priori assumptions. However, this "all-weather" capability allows a direct statistical comparison to model data, in contrast to many other satellite data sets. Another strength of the retrievals is the detailed treatment of "beam filling" that otherwise would cause large systematic biases for these passive cloud ice mass retrievals. The main retrieval inputs are spectra around 635/525 GHz from tangent altitudes below 8/9 km for SMILES/Odin-SMR, respectively. For both sensors, the data cover the upper troposphere between 30° S and 30° N. Humidity is reported as both relative humidity and volume mixing ratio. The vertical coverage of SMILES is restricted to a single layer, while Odin-SMR gives some profiling capability between 300 and 150 hPa. Ice mass is given as the partial ice water path above 260 hPa, but for Odin-SMR ice water content estimates are also provided. Besides a smaller contrast between most dry and wet
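
    A Bayesian retrieval of Monte Carlo integration type can be sketched in a few lines: the retrieved state is the likelihood-weighted mean over a precomputed a priori database of (state, simulated radiance) pairs. The database and noise level below are synthetic stand-ins, not the SMILES/Odin-SMR setup.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 50_000
iwp = rng.lognormal(mean=3.0, sigma=1.2, size=n)         # a priori ice water paths
radiance = 200.0 - 20.0 * np.log1p(iwp) + rng.normal(0, 2.0, n)  # forward-modelled Tb

y_obs, sigma = 130.0, 2.0                                # measured Tb and noise
w = np.exp(-0.5 * ((radiance - y_obs) / sigma) ** 2)     # Gaussian likelihood

iwp_hat = np.sum(w * iwp) / np.sum(w)                    # posterior-mean estimate
iwp_err = np.sqrt(np.sum(w * (iwp - iwp_hat) ** 2) / np.sum(w))
print(f"retrieved IWP ~ {iwp_hat:.1f} +/- {iwp_err:.1f} g m^-2")
```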

  12. State Synchronization Approaches in Web-based Applications

    Directory of Open Access Journals (Sweden)

    Grocevs Aleksejs

    2014-12-01

    Full Text Available The main objective of the article is to provide insight into the technologies and approaches available to maintain a consistent state on both the client and server sides. The article describes basic state persistence difficulties in rich internet applications (RIAs) and offers approaches to overcoming such problems using asynchronous data transmission, synchronization channels, and other browser capabilities available to the user.

  13. Web application development with Yii 2 and PHP

    CERN Document Server

    Safronov, Mark

    2014-01-01

    This book is for professional PHP developers who wish to master the powerful Yii 2 application framework. It is assumed that you have knowledge of object-oriented programming. The previous version of the Yii framework is only briefly mentioned, but it'll be even easier to grasp Yii 2 with knowledge of Yii 1.1.x.

  14. Back-End of the web application for the scientific journal Studia Kinanthropologica

    OpenAIRE

    ŠIMÁK, Lubomír

    2017-01-01

    The bachelor thesis deals with the creation of the server-side part of the web application for the peer-reviewed scientific journal Studia Kinanthropologica, which will be used for reviewing articles for publication. The thesis describes the work on this system and how the problems that arose during the work were solved.

  15. Adaptation of the RenalSmart ® web-based application for the ...

    African Journals Online (AJOL)

    Adaptation of the RenalSmart ® web-based application for the dietary management of patients with diabetic nephropathy. ... and enhanced to include functions for the nutritional assessment of a patient with diabetic nephropathy, the formulation of a dietary prescription and the development of a meal plan and sample menu.

  16. Development of a Simherd web application for herd health advisors - experiences and perspectives

    DEFF Research Database (Denmark)

    Østergaard, Søren; Ettema, Jehan Frans; Kudahl, Anne Braad

    2010-01-01

    … and the lack of user friendliness. This was challenged in a project supported by the Danish Law of Innovation during 2007 to 2009, where the specific aim was to develop a user-friendly web application of the SimHerd model in collaboration between StrateKo, the Danish Cattle Federation and Aarhus University…

  17. Analysis and Design of Web-Based Database Application for Culinary Community

    Directory of Open Access Journals (Sweden)

    Choirul Huda

    2017-03-01

    Full Text Available This research is motivated by the rapid development of the culinary field and of information technology. The difficulty of communicating with culinary experts and of documenting recipes makes proper media support very important. Therefore, a web-based database application for the public is important to help the culinary community with communication, searching, and recipe management. The aim of the research was to design a web-based database application that could be used as social media for the culinary community. This research used a literature review, user interviews, and questionnaires. Moreover, the database system development life cycle was used as a guide for designing the database, especially the conceptual, logical, and physical database designs. The web-based application design used the eight golden rules of user interface design. The result of this research is a web-based database application that fulfills the needs of users in the culinary field related to communication and recipe management.

  18. Integration of data validation and user interface concerns in a DSL for web applications

    NARCIS (Netherlands)

    Groenewegen, D.M.; Visser, E.

    2009-01-01

    This paper is a pre-print of: Danny M. Groenewegen, Eelco Visser. Integration of Data Validation and User Interface Concerns in a DSL for Web Applications. In Mark G. J. van den Brand, Jeff Gray, editors, Software Language Engineering, Second International Conference, SLE 2009, Denver, USA, October,

  19. Learning and Teaching with Web 2.0 Applications in Saudi K-12 Schools

    Science.gov (United States)

    Bingimlas, Khalid Abdullah

    2017-01-01

    This study aims to understand teachers' perspectives of the use of Web 2.0 applications in learning and teaching and to explore the barriers to their use. The sample of this study involved teachers from primary, middle, and secondary schools in the Kharj region. The total sample consisted of 352 teachers. A quantitative survey instrument was…

  20. Usage, Barriers, and Training of Web 2.0 Technology Applications

    Science.gov (United States)

    Pritchett, Christopher G.; Pritchett, Christal C.; Wohleb, Elisha C.

    2013-01-01

    This research study was designed to determine the degree of use of Web 2.0 technology applications by certified education professionals and examine differences among various groups as well as reasons for these differences. A quantitative survey instrument was developed to gather demographic information and data. Participants reported they would be…

  1. M3D: A tool for the model driven development of web applications

    NARCIS (Netherlands)

    Bernardi, M.L.; Cimitile, M.; Di Lucca, G.A.; Maggi, F.M.; Fletcher, G.H.L.; Mitra, P.

    2012-01-01

    Nowadays, Web Applications (WAs) are complex software systems, used by multiple users with different roles and often developed to support and manage business processes. Due to the changing nature of the supported processes, WAs need to be easily and quickly modified, to adapt and align them to

  2. A novel methodology towards a trusted environment in mashup web applications

    DEFF Research Database (Denmark)

    Patel, Ahmed; Al-Janabi, Samaher; AlShourbaji, Ibrahim

    2015-01-01

    A mashup is a web-based application developed through the aggregation of data from different public external or internal sources (including trusted and untrusted). A mashup introduces an open environment that is exposed to many security vulnerabilities, threats and risks. These weaknesses will bring se

  3. Medical Student Perceptions of Learner-Initiated Feedback Using a Mobile Web Application

    Directory of Open Access Journals (Sweden)

    Amy C Robertson

    2017-12-01

    Full Text Available Feedback, especially timely, specific, and actionable feedback, frequently does not occur. Efforts to better understand methods to improve the effectiveness of feedback are an important area of educational research. This study represents preliminary work as part of a plan to investigate the perceptions of a student-driven system to request feedback from faculty using a mobile device and Web-based application. We hypothesize that medical students will perceive learner-initiated, timely feedback to be an essential component of clinical education. Furthermore, we predict that students will recognize the use of a mobile device and Web application to be an advantageous and effective method when requesting feedback from supervising physicians. Focus group data from 18 students enrolled in a 4-week anesthesia clerkship revealed the following themes: (1) students often have to solicit feedback, (2) timely feedback is perceived as being advantageous, (3) feedback from faculty is perceived to be more effective, (4) requesting feedback from faculty physicians poses challenges, (5) the decision to request feedback may be influenced by the student's clinical performance, and (6) using a mobile device and Web application may not guarantee timely feedback. Students perceived using a mobile Web-based application to initiate feedback from supervising physicians to be a valuable method of assessment. However, challenges and barriers were identified.

  4. Developing a Cross-Platform Web Application for Online EFL Vocabulary Learning Courses

    Science.gov (United States)

    Enokida, Kazumichi; Sakaue, Tatsuya; Morita, Mitsuhiro; Kida, Shusaku; Ohnishi, Akio

    2017-01-01

    In this paper, the development of a web application for self-access English vocabulary courses at a national university in Japan will be reported upon. Whilst the basic concepts are inherited from an old Flash-based online vocabulary learning system that had been long used at the university, the new HTML5-based app comes with several new features…

  5. A Framework for Automated Testing of JavaScript Web Applications

    DEFF Research Database (Denmark)

    Artzi, Shay; Dolby, Julian; Jensen, Simon Holm

    2011-01-01

    Current practice in testing JavaScript web applications requires manual construction of test cases, which is difficult and tedious. We present a framework for feedback-directed automated test generation for JavaScript in which execution is monitored to collect information that directs the test...

  6. Promoting Reflective Thinking Skills by Using Web 2.0 Application

    Science.gov (United States)

    Abdullah, Mohamed

    2015-01-01

    The study aims to investigate whether using Web 2.0 applications promotes reflective thinking skills for higher education students in a faculty of education. Although the literature reveals that technology integration is a trend in higher education, and researchers and educators have increasingly shared their ideas and examples of implementations of Web…

  7. "UML Quiz": Automatic Conversion of Web-Based E-Learning Content in Mobile Applications

    Science.gov (United States)

    von Franqué, Alexander; Tellioglu, Hilda

    2014-01-01

    Many educational institutions use Learning Management Systems to provide e-learning content to their students. This often includes quizzes that can help students to prepare for exams. However, the content is usually web-optimized and not very usable on mobile devices. In this work a native mobile application ("UML Quiz") that imports…

  8. Authoring support in concept-based web information systems for educational applications

    NARCIS (Netherlands)

    Aroyo, L.M.; Dicheva, D.

    2004-01-01

    The increasing complexity of concept-based web information systems (WIS) and their educational applications requires more intelligent support for their authoring. We propose an ontological approach towards a common authoring framework for such systems to formally describe the overall authoring

  9. A web application for poloidal field analysis on HL-2M

    Energy Technology Data Exchange (ETDEWEB)

    Song, X.M., E-mail: songxm@swip.ac.cn; Pan, W.; Chen, L.Y.; Song, X.; Li, X.D.

    2014-05-15

    Highlights: • An original way to develop a web application with a new framework (jQuery + PHP + Matlab) is introduced. • A convenient but powerful application for electromagnetic calculation is implemented. • The web application can run in any popular browser, on any hardware and in any operating system. • No plugins are needed; no maintenance is required. - Abstract: Recently, many web tools [1–3] in the fusion community have been designed and demonstrated, and they have proved to be powerful and convenient for fusion researchers. Many physicists and engineers need a tool to compute the poloidal magnetic field for various purposes (for example, the calibration of magnetic probes for EFIT, field null structure analysis for control, or the design of some plasma diagnostic systems), so developing a powerful and convenient web application for the calculation of the magnetic field and magnetic flux produced by PF coils is very important. In this paper, a web application tool for poloidal field analysis on HL-2M with a totally original framework is presented. This web application has a dynamic and interactive interface, and can run in any popular browser (IE, Safari, Firefox, Opera), on any hardware (smart phone, PC, iPad, Mac) and operating system (iOS, Android, Windows, Linux, Mac OS). No plugins are needed. The three layers (jQuery + PHP + Matlab) of this framework are introduced. The front client layer is developed in jQuery. The middle layer, which acts as a bridge connecting the server and client through socket communication, is developed in PHP. The back-end server layer is developed in Matlab, which computes the magnetic field or magnetic flux through a special function, the complete elliptic integral, and returns the results in the client's preferred way, either as a table or as a JPG image. The field null structure and the vertical and radial field structure calculated by this tool are introduced in detail. The idea to design a web
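
    The server-side computation the abstract attributes to Matlab can be sketched with the standard current-loop solution in complete elliptic integrals; the coil geometry and current below are example values, not HL-2M PF coil parameters. Note that SciPy's ellipk/ellipe take the parameter m = k².

```python
# Poloidal field (B_r, B_z) of one circular coil at point (r, z), from the
# standard current-loop solution in complete elliptic integrals.
import numpy as np
from scipy.special import ellipk, ellipe

MU0 = 4e-7 * np.pi

def loop_field(a, I, r, z):
    """B_r, B_z (tesla) of a loop of radius a (m) carrying I (A), at (r, z)."""
    m = 4 * a * r / ((a + r) ** 2 + z ** 2)           # parameter m = k^2
    K, E = ellipk(m), ellipe(m)
    common = MU0 * I / (2 * np.pi * np.sqrt((a + r) ** 2 + z ** 2))
    Bz = common * (K + (a**2 - r**2 - z**2) / ((a - r) ** 2 + z**2) * E)
    if r == 0.0:                                      # on-axis: B_r vanishes
        return 0.0, Bz
    Br = common * z / r * (-K + (a**2 + r**2 + z**2) / ((a - r) ** 2 + z**2) * E)
    return Br, Bz

print(loop_field(a=1.0, I=1.0e5, r=0.5, z=0.2))       # example coil, example point
```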

  10. A web application for poloidal field analysis on HL-2M

    International Nuclear Information System (INIS)

    Song, X.M.; Pan, W.; Chen, L.Y.; Song, X.; Li, X.D.

    2014-01-01

    Highlights: • An original way to develop a web application with a new framework (jQuery + PHP + Matlab) is introduced. • A convenient but powerful application for electromagnetic calculation is implemented. • The web application can run in any popular browser, on any hardware and in any operating system. • No plugins are needed; no maintenance is required. - Abstract: Recently, many web tools [1–3] in the fusion community have been designed and demonstrated, and they have proved to be powerful and convenient for fusion researchers. Many physicists and engineers need a tool to compute the poloidal magnetic field for various purposes (for example, the calibration of magnetic probes for EFIT, field null structure analysis for control, or the design of some plasma diagnostic systems), so developing a powerful and convenient web application for the calculation of the magnetic field and magnetic flux produced by PF coils is very important. In this paper, a web application tool for poloidal field analysis on HL-2M with a totally original framework is presented. This web application has a dynamic and interactive interface, and can run in any popular browser (IE, Safari, Firefox, Opera), on any hardware (smart phone, PC, iPad, Mac) and operating system (iOS, Android, Windows, Linux, Mac OS). No plugins are needed. The three layers (jQuery + PHP + Matlab) of this framework are introduced. The front client layer is developed in jQuery. The middle layer, which acts as a bridge connecting the server and client through socket communication, is developed in PHP. The back-end server layer is developed in Matlab, which computes the magnetic field or magnetic flux through a special function, the complete elliptic integral, and returns the results in the client's preferred way, either as a table or as a JPG image. The field null structure and the vertical and radial field structure calculated by this tool are introduced in detail. The idea to design a web

  11. [Evaluation of Web-based software applications for administrating and organising an ophthalmological clinical trial site].

    Science.gov (United States)

    Kortüm, K; Reznicek, L; Leicht, S; Ulbig, M; Wolf, A

    2013-07-01

    The importance and complexity of clinical trials are continuously increasing, especially in innovative specialties like ophthalmology. Therefore, an efficient organisational structure for a clinical trial site is essential. In the modern internet era, this can be accomplished by web-based applications. In total, 3 software applications (Vibe on Prem, SharePoint and open source software) were evaluated at a clinical trial site in ophthalmology. Assessment criteria were set: reliability, ease of administration, usability, scheduling, task list, knowledge management, operating costs and worldwide availability. Vibe on Prem, customised by the local university, met the assessment criteria best. The other applications were not as strong. By introducing a web-based application for administrating and organising an ophthalmological trial site, studies can be conducted in a more efficient and reliable manner. Georg Thieme Verlag KG Stuttgart · New York.

  12. Analysis of Decision Making and Incentives in Danish Green Web Applications

    DEFF Research Database (Denmark)

    Scheele, Christian Elling

    2013-01-01

    Traditional information campaigns aimed at incentivising the kind of behaviour change that will lead to more sustainable levels of energy consumption have been proven inefficient. Politicians and government bodies could consider using green web applications as an alternative. However, there is little research documenting how such applications actually motivate behaviour change. There is a need for a better understanding of how such applications work and whether they are effective. This paper addresses the first question by demonstrating how three Danish green web applications employ different … normative or behavioural gains. The third approach is based on a socio-psychological decision model in which values, attitudes and norms affect the choices we make. All three theoretical approaches aim at explaining decision-making in the context of energy consumption.

  13. The clinical application of the implantation of retrievable filters in superior vena cava

    International Nuclear Information System (INIS)

    Tian Yulong; Zhang Xitong; Hong Duo

    2011-01-01

    Objective: To investigate the safety of the placement of the Tulip retrievable filter in the superior vena cava and to discuss the prevention of pulmonary embolism (PE). Methods: Implantation of the Tulip retrievable filter in the superior vena cava was performed in ten patients (6 males and 4 females, aged 42-60 years) with acute or subacute deep venous thrombosis of the upper extremity or cephalo-cervical region. After placement of the filter, local via-catheter thrombolysis was conducted. Clinical results, such as the improvement of venous obstruction symptoms in the upper extremity or cephalo-cervical region, were recorded. The filter's shape and location were checked, and the possible occurrence of pulmonary embolism was observed. Results: The filter was successfully implanted in the superior vena cava in all patients, and the deep venous thrombosis of the upper extremity and cephalo-cervical region responded well to local via-catheter thrombolysis. The filters showed no displacement or tilting. The swelling of the upper extremity and cephalo-cervical region markedly faded away. No symptomatic pulmonary embolism occurred. The filter was successfully retrieved via the femoral vein in four patients. Conclusion: The Tulip filter can be safely implanted in the superior vena cava and can be smoothly retrieved. The occurrence of pulmonary embolism can be effectively prevented if corresponding local via-catheter thrombolysis is carried out. (authors)

  14. WEB-BASED APPLICATIONS FOR GUIDELINE IMPLEMENTATION IN PRIMARY CARE

    Directory of Open Access Journals (Sweden)

    Matteo Capobussi

    2013-01-01

    Full Text Available Efforts in developing guidelines have to be supported by investments in their application. Medical software may have a role in these initiatives. Two computer programs were developed: one regarding chronic kidney disease (CKD) and one about chronic pain management. For six months their use by 104 general practitioners was monitored. At the study's conclusion, a questionnaire of 13 multiple-choice questions was emailed to all participating doctors. To evaluate the clinical benefits for patients, a GP regularly used the CKD program and provided patients' outcomes and clinical data. The application recorded 108 accesses during 66 work sessions. In the clinical outcomes section of this study, 7 patients out of 21 were diagnosed with CKD. Our study shows a need for programs of the "expert system" kind: sources devoted to a narrow field of competence, accessed only when needed, in a way that resembles a traditional specialist consultation.

  15. Lexical Link Analysis (LLA) Application: Improving Web Service to Defense Acquisition Visibility Environment (DAVE)

    Science.gov (United States)

    2015-05-01

    LEXICAL LINK ANALYSIS (LLA) APPLICATION: IMPROVING WEB SERVICE TO DEFENSE ACQUISITION VISIBILITY ENVIRONMENT (DAVE). May 13-14, 2015. Dr. Ying… Methods: Lexical Link Analysis (LLA) Core – LLA Reports and Visualizations; Collaborative Learning Agents (CLA) for…

  16. Frame of reference of software architecture for web applications and mobile

    Directory of Open Access Journals (Sweden)

    Carlos Alberto Maliza Martinez

    2016-08-01

    Full Text Available Given the need for a guide for the implementation of informatics applications, and thus for automating tasks and improving user response times, a frame of reference was designed for the software architecture of web and mobile applications using free and open source software. The technology used is object-oriented programming (OOP) with the Java programming language, a client/server architecture, and a multi-tier architectural style, which allows us to create scalable, robust and stable systems, together with the Java Platform, Enterprise Edition (JEE), which helps us implement business applications thanks to the JPA and EJB APIs. On the server side, for handling transactions, security, scalability and concurrency, we have the WildFly application server. On the client side, for creating graphical interfaces, we use the ExtJS and Sencha Touch frameworks, which are lightweight, high-performance libraries based on HTML5, JavaScript and CSS3. The report generator is JasperReports, because it can deliver rich content to the display and the printer. The database engine is MySQL, because its connectivity, speed and security make it a very appropriate server for access from the web. Finally, as the editor for web and mobile applications, we have the Eclipse IDE, an open source integrated development environment. In this paper we make a critical analysis of such applications and formulate the Frame of Reference of Software Architecture for the development and implementation of web and mobile applications, which was implemented at ECU911 Babahoyo and at the Instituto Tecnologico Superior Babahoyo, proving through its application its effectiveness and efficiency in the implementation of integrated systems

  17. A mobile and web application-based recommendation system using color quantization and collaborative filtering

    OpenAIRE

    KAYA, FİDAN; YILDIZ, GÜREL; KAVAK, ADNAN

    2015-01-01

    In this paper, a recommendation system based on a mobile and web application is proposed for indoor decoration. The main contribution of this work is to apply two-stage filtering using linear matching and collaborative filtering to make recommendations. In the mobile application part, the image of the medium captured by a mobile phone is analyzed using color quantization methods, and these color analysis results along with other user-defined parameters such as height, width, and type of the p...

  18. Application of World Wide Web (W3) Technologies in Payload Operations

    Science.gov (United States)

    Sun, Charles; Windrem, May; Picinich, Lou

    1996-01-01

    World Wide Web (W3) technologies are considered in relation to their application to space missions. It is considered that such technologies, including the hypertext transfer protocol and the Java object-oriented language, offer a powerful and relatively inexpensive framework for distributed application software development. The suitability of these technologies for payload monitoring systems development is discussed, and the experience gained from the development of an insect habitat monitoring system based on W3 technologies is reported.

  19. EpiCollect: linking smartphones to web applications for epidemiology, ecology and community data collection.

    Directory of Open Access Journals (Sweden)

    David M Aanensen

    2009-09-01

    Full Text Available Epidemiologists and ecologists often collect data in the field and, on returning to their laboratory, enter their data into a database for further analysis. The recent introduction of mobile phones that utilise the open source Android operating system, and which include (among other features) both GPS and Google Maps, provide new opportunities for developing mobile phone applications, which in conjunction with web applications, allow two-way communication between field workers and their project databases. Here we describe a generic framework, consisting of mobile phone software, EpiCollect, and a web application located within www.spatialepidemiology.net. Data collected by multiple field workers can be submitted by phone, together with GPS data, to a common web database and can be displayed and analysed, along with previously collected data, using Google Maps (or Google Earth). Similarly, data from the web database can be requested and displayed on the mobile phone, again using Google Maps. Data filtering options allow the display of data submitted by the individual field workers or, for example, those data within certain values of a measured variable or a time period. Data collection frameworks utilising mobile phones with data submission to and from central databases are widely applicable and can give a field worker similar display and analysis tools on their mobile phone that they would have if viewing the data in their laboratory via the web. We demonstrate their utility for epidemiological data collection and display, and briefly discuss their application in ecological and community data collection. Furthermore, such frameworks offer great potential for recruiting 'citizen scientists' to contribute data easily to central databases through their mobile phone.

  20. EpiCollect: linking smartphones to web applications for epidemiology, ecology and community data collection.

    Science.gov (United States)

    Aanensen, David M; Huntley, Derek M; Feil, Edward J; al-Own, Fada'a; Spratt, Brian G

    2009-09-16

    Epidemiologists and ecologists often collect data in the field and, on returning to their laboratory, enter their data into a database for further analysis. The recent introduction of mobile phones that utilise the open source Android operating system, and which include (among other features) both GPS and Google Maps, provide new opportunities for developing mobile phone applications, which in conjunction with web applications, allow two-way communication between field workers and their project databases. Here we describe a generic framework, consisting of mobile phone software, EpiCollect, and a web application located within www.spatialepidemiology.net. Data collected by multiple field workers can be submitted by phone, together with GPS data, to a common web database and can be displayed and analysed, along with previously collected data, using Google Maps (or Google Earth). Similarly, data from the web database can be requested and displayed on the mobile phone, again using Google Maps. Data filtering options allow the display of data submitted by the individual field workers or, for example, those data within certain values of a measured variable or a time period. Data collection frameworks utilising mobile phones with data submission to and from central databases are widely applicable and can give a field worker similar display and analysis tools on their mobile phone that they would have if viewing the data in their laboratory via the web. We demonstrate their utility for epidemiological data collection and display, and briefly discuss their application in ecological and community data collection. Furthermore, such frameworks offer great potential for recruiting 'citizen scientists' to contribute data easily to central databases through their mobile phone.
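
    The two-way pattern described (submit GPS-tagged records by phone, query filtered records back for map display) reduces to plain HTTP in outline; the endpoint and fields below are assumptions, not EpiCollect's actual API.

```python
# Hypothetical sketch of field-to-database round-tripping over HTTP.
import requests

record = {
    "project": "mosquito_survey",
    "lat": 51.4816, "lon": -0.1910,          # from the phone's GPS fix
    "timestamp": "2009-09-16T10:42:00Z",
    "species_count": 12,
}
requests.post("https://db.example.org/api/records", json=record, timeout=30)

# Filtered retrieval for display on the phone (e.g. one field worker's data).
points = requests.get("https://db.example.org/api/records",
                      params={"project": "mosquito_survey", "worker": "dm_aanensen"},
                      timeout=30).json()
```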

  1. Reads2Type: a web application for rapid microbial taxonomy identification

    DEFF Research Database (Denmark)

    Saputra, Dhany; Rasmussen, Simon; Larsen, Mette Voldby

    2015-01-01

    … genome of microbial isolates. Therefore we have developed Reads2Type, a web-based tool for taxonomy identification based on whole bacterial genome sequence data. Raw sequencing data provided by the user are mapped against a set of marker probes that are derived from currently available complete bacterial genomes. … as the entire computational analysis is done on the computer of the user of the web application. This also prevents data privacy issues from arising. The Reads2Type tool is available at http://www.cbs.dtu.dk/~dhany/reads2type.html.

  2. Designing an intuitive web application for drug discovery scientists.

    Science.gov (United States)

    Karamanis, Nikiforos; Pignatelli, Miguel; Carvalho-Silva, Denise; Rowland, Francis; Cham, Jennifer A; Dunham, Ian

    2018-01-11

    We discuss how we designed the Open Targets Platform (www.targetvalidation.org), an intuitive application for bench scientists working in early drug discovery. To meet the needs of our users, we applied lean user experience (UX) design methods: we started engaging with users very early and carried out research, design and evaluation activities within an iterative development process. We also emphasize the collaborative nature of applying lean UX design, which we believe is a foundation for success in this and many other scientific projects. Copyright © 2018 The Authors. Published by Elsevier Ltd. All rights reserved.

  3. A Semantic Sensor Web for Environmental Decision Support Applications

    Directory of Open Access Journals (Sweden)

    Raúl García-Castro

    2011-09-01

    Full Text Available Sensing devices are increasingly being deployed to monitor the physical world around us. One class of application for which sensor data is pertinent is environmental decision support systems, e.g., flood emergency response. For these applications, the sensor readings need to be put in context by integrating them with other sources of data about the surrounding environment. Traditional systems for predicting and detecting floods rely on methods that need significant human resources. In this paper we describe a semantic sensor web architecture for integrating multiple heterogeneous datasets, including live and historic sensor data, databases, and map layers. The architecture provides mechanisms for discovering datasets, defining integrated views over them, continuously receiving data in real-time, and visualising on screen and interacting with the data. Our approach makes extensive use of web service standards for querying and accessing data, and semantic technologies to discover and integrate datasets. We demonstrate the use of our semantic sensor web architecture in the context of a flood response planning web application that uses data from sensor networks monitoring the sea-state around the coast of England.

  4. The World-Wide Web past present and future, and its application to medicine

    CERN Document Server

    Sendall, D M

    1997-01-01

    The World-Wide Web was first developed as a tool for collaboration in the high energy physics community. From there it spread rapidly to other fields, and grew to its present impressive size. As an easy way to access information, it has been a great success, and a huge number of medical applications have taken advantage of it. But there is another side to the Web, its potential as a tool for collaboration between people. Medical examples include telemedicine and teaching. New technical developments offer still greater potential in medical and other fields. This paper gives some background to the early development of the World-Wide Web, a brief overview of its present state with some examples relevant to medicine, and a look at the future.

  5. Critical Reflectance Derived from MODIS: Application for the Retrieval of Aerosol Absorption over Desert Regions

    Science.gov (United States)

    Wells, Kelley C.; Martins, J. Vanderlei; Remer, Lorraine A.; Kreidenweis, Sonia M.; Stephens, Graeme L.

    2012-01-01

    Aerosols are tiny suspended particles in the atmosphere that scatter and absorb sunlight. Smoke particles are aerosols, as are sea salt, particulate pollution and airborne dust. When you look down at the earth from space, sometimes you can see vast palls of whitish smoke or brownish dust being transported by winds. The reason that you can see these aerosols is that they are reflecting incoming sunlight back to the viewer in space. The reason for the difference in color between the different types of aerosol is that the particles are also absorbing sunlight at different wavelengths. Dust appears brownish or reddish because it absorbs light in the blue wavelengths and scatters more reddish light to space. Knowing how much light is scattered versus how much is absorbed, and knowing that as a function of wavelength, is essential to being able to quantify the role aerosols play in the energy balance of the earth and in climate change. It is not easy measuring the absorption properties of aerosols when they are suspended in the atmosphere. People have been doing this one substance at a time in the laboratory, but substances mix when they are in the atmosphere, and the net absorption effect of all the particles in a column of air is a goal of remote sensing that has not yet been completely successful. In this paper we use a technique based on observing the point at which aerosols change from brightening the surface beneath to darkening it. If aerosols brighten a surface, they must scatter more light to space. If they darken the surface, they must be absorbing more. That cross-over point is called the critical reflectance, and in this paper we show that critical reflectance is a monotonic function of the intrinsic absorption properties of the particles. This parameter we call the single scattering albedo. We apply the technique to MODIS imagery over the Sahara and Sahel regions to retrieve the single scattering albedo in seven wavelengths, compare these retrievals to ground ...
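
    A cartoon of the crossover idea (the coefficients of this toy linear model are invented, not the paper's radiative transfer): approximate top-of-atmosphere reflectance as a linear function of surface reflectance for a light and a heavy aerosol loading; the surface reflectance where the two curves cross, where adding aerosol neither brightens nor darkens the scene, is the critical reflectance.

    ```python
    import numpy as np

    def toa_reflectance(surface, aod, ssa):
        """Invented linear TOA model: path term grows, transmission shrinks with AOD."""
        path = 0.10 * aod * ssa
        transmission = 1.0 - 0.15 * aod * (2.0 - ssa)
        return path + transmission * surface

    surface = np.linspace(0.0, 0.6, 601)
    clean = toa_reflectance(surface, 0.1, ssa=0.90)   # light aerosol loading
    hazy = toa_reflectance(surface, 0.8, ssa=0.90)    # heavy aerosol loading

    critical = surface[np.argmin(np.abs(hazy - clean))]
    print(f"critical reflectance ~ {critical:.2f}")   # brightening below, darkening above
    ```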

  6. New nuclear data service at CNEA: retrieval of updated libraries from a local Web server

    Energy Technology Data Exchange (ETDEWEB)

    Suarez, Patricia M [Comision Nacional de Energia Atomica, Ezeiza (Argentina). Centro Atomico Ezeiza; Pepe, Maria E [Comision Nacional de Energia Atomica, General San Martin (Argentina). Centro Atomico Constituyentes; Sbaffoni, Maria M [Comision Nacional de Energia Atomica, Buenos Aires (Argentina). Gerencia de Tecnologia

    2000-07-01

    A new on-line nuclear data service was implemented on the National Atomic Energy Commission (CNEA) Web site. The information usually issued by the Nuclear Data Section of the IAEA (NDS-IAEA) on CD-ROM, as well as complementary libraries periodically downloaded from a mirror server of the NDS-IAEA Service located at IPEN, Brazil, is available on the new CNEA Web page. On the site, users can find numerical data on neutron, charged-particle, and photonuclear reactions, nuclear structure, and decay data, with related bibliographic information. This data server is permanently maintained and updated by CNEA staff members. This team also offers assistance on the use and retrieval of nuclear data to local users. (author)

  7. SU-E-J-114: Web-Browser Medical Physics Applications Using HTML5 and Javascript.

    Science.gov (United States)

    Bakhtiari, M

    2012-06-01

    Since 2010, HTML5 has received great attention. Application developers and browser makers fully embrace and support the web of the future. Consumers have started to embrace HTML5, especially as more users understand the benefits and potential that HTML5 can mean for the future. Modern browsers such as Firefox, Google Chrome, and Safari are offering better and more robust support for HTML5, CSS3, and JavaScript. The idea is to introduce HTML5 to the medical physics community for open source software development. The benefit of using HTML5 is developing portable software systems. The HTML5, CSS, and JavaScript programming languages were used to develop several applications for Quality Assurance in radiation therapy. The canvas element of HTML5 was used for handling and displaying the images, and JavaScript was used to manipulate the data. Sample applications were developed to: 1. analyze the flatness and symmetry of radiotherapy fields in a web browser, 2. analyze the Dynalog files from Varian machines, 3. visualize animated dynamic MLC files, 4. simulate via Monte Carlo, and 5. interactively manipulate images. The programs showed great performance and speed in uploading the data and displaying the results. The flatness and symmetry program and the Dynalog file analyzer ran in a fraction of a second. The reason behind this performance is the use of JavaScript, which is a lower-level language than most scientific programming packages such as Matlab. The second reason is that JavaScript runs locally on client-side computers, not on web servers. HTML5 and JavaScript can be used to develop useful applications that can be run online or offline on different modern web browsers. The programming platform can also be one of the modern web browsers, which are mostly open source (such as Firefox). © 2012 American Association of Physicists in Medicine.
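
    As a rough illustration of the kind of profile analysis such a QA tool performs (in Python rather than the authors' in-browser JavaScript, and with common textbook definitions of flatness and symmetry rather than their exact formulas):

    ```python
    import numpy as np

    def flatness_symmetry(profile, central_fraction=0.8):
        """Flatness and symmetry of a 1-D beam profile, evaluated over the central region."""
        profile = np.asarray(profile, dtype=float)
        k = int(len(profile) * central_fraction)
        start = (len(profile) - k) // 2
        central = profile[start:start + k]

        d_max, d_min = central.max(), central.min()
        flatness = 100.0 * (d_max - d_min) / (d_max + d_min)       # variation in %

        mirrored = central[::-1]                                    # reflect about the axis
        symmetry = 100.0 * np.max(np.abs(central - mirrored)) / central.mean()
        return flatness, symmetry

    # Toy profile: a slightly tilted flat-top beam with shoulders
    x = np.linspace(-1.0, 1.0, 101)
    profile = np.where(np.abs(x) < 0.9, 100.0 - 2.0 * x, 20.0)
    print(flatness_symmetry(profile))
    ```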

  8. Web-Based Group Decision Support System: an Economic Application

    Directory of Open Access Journals (Sweden)

    Ion ISTUDOR

    2010-01-01

    Full Text Available Decision Support Systems (DSS) form a specific class of computerized information systems that support business and managerial decision-making activities. Making the right decision in business primarily depends on the quality of data. It also depends on the ability to analyze the data with a view to identifying trends that can suggest solutions and strategies. A "cooperative" decision support system means the data are collected, analyzed and then provided to a human agent who can help the system to revise or refine the data. It means that both a human component and a computer component work together to come up with the best solution. This paper describes the usage of a software product (the Vanguard System) for a specific economic application: evaluating the financial risk that the economic profitability rate falls below the interest rate.
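
    The Vanguard System is a commercial tool, but the risk question the record poses, the chance that the economic profitability rate ends up below the interest rate, can be sketched as a small Monte Carlo simulation; the distribution and its parameters are illustrative assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(42)

    # Illustrative assumptions: profitability is uncertain, the interest rate is fixed.
    profitability = rng.normal(loc=0.08, scale=0.03, size=100_000)  # mean 8%, sd 3%
    interest_rate = 0.06

    risk = np.mean(profitability < interest_rate)  # P(profitability < interest rate)
    print(f"Estimated financial risk: {risk:.1%}")
    ```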

  9. Photonics applications and web engineering: WILGA Winter 2016

    Science.gov (United States)

    Romaniuk, Ryszard S.

    2016-09-01

    For twenty years, young researchers from the Institute of Electronic Systems, Warsaw University of Technology, have organized twice a year, under only marginal supervision by senior faculty members and under the patronage of WEiTI PW, KEiT PAN, SPIE, IEEE, PKOpto SEP and PSF, the WILGA Symposium on advanced, integrated functional electronic, photonic and mechatronic systems [1-5]. All aspects are considered: research and development, theory and design, technology (materials and construction), software and hardware, commissioning and tests, as well as pilot and practical applications. The applications, which after several years became a proud specialization of the WILGA Symposium, mostly concern Internet engineering, high energy physics experiments, the new power industry including fusion, the nuclear industry, space and satellite technologies, telecommunications, the smart municipal environment, and biology and medicine [6-8]. The XXXVIIth WILGA Symposium was held on 29-31 January 2016 and gathered a few tens of young researchers active in the mentioned research areas. A few tens of technical papers were presented; they will be published in Proc. SPIE together with the accepted articles from the Summer Edition of the WILGA Symposium scheduled for 29.05-06.06.2016. This article is a digest of chosen presentations from the WILGA Symposium 2016 Winter Edition. The survey is narrowed to a few main topical tracks, such as electronics and photonics design using industrial standards like ATCA/MTCA, and particular designs of functional systems using this series of industrial standards. The paper, which traditionally summarizes the accomplished WILGA Symposium organized by young researchers from Warsaw University of Technology, is also the next part of a cycle of papers concerning their participation in the design of new generations of electronic systems used in discovery experiments in Poland and in leading research laboratories around the world.

  10. Update on Small Modular Reactors Dynamic System Modeling Tool: Web Application

    International Nuclear Information System (INIS)

    Hale, Richard Edward; Cetiner, Sacit M.; Fugate, David L.; Batteh, John J.; Tiller, Michael M.

    2015-01-01

    Previous reports focused on the development of component and system models as well as end-to-end system models using Modelica and Dymola for two advanced reactor architectures: (1) Advanced Liquid Metal Reactor and (2) fluoride high-temperature reactor (FHR). The focus of this report is the release of the first beta version of the web-based application for model use and collaboration, as well as an update on the FHR model. The web-based application allows novice users to configure end-to-end system models from preconfigured choices to investigate the instrumentation and controls implications of these designs and allows for the collaborative development of individual component models that can be benchmarked against test systems for potential inclusion in the model library. A description of this application is provided along with examples of its use and a listing and discussion of all the models that currently exist in the library.

  11. Security Guidelines for the Development of Accessible Web Applications through the implementation of intelligent systems

    Directory of Open Access Journals (Sweden)

    Luis Joyanes Aguilar

    2009-12-01

    Full Text Available The significant increase in threats, attacks and vulnerabilities affecting the Web in recent years has resulted in the development and implementation of tools and methods to ensure the privacy, confidentiality and integrity of the data of users and businesses. Even when these tools are implemented, however, the flow of information is not always passed in a secure manner. Many of these security tools and methods are not accessible to people who have disabilities or who use assistive technologies to access the Web efficiently. Among these inaccessible security tools are the virtual keyboard, CAPTCHAs and other technologies that help to some extent to ensure safety on the Internet and are used to combat malicious code and attacks, which have increased in recent times on the Web. Intelligent systems can detect, recover and receive information on the characteristics and properties of the tools, hardware devices or software with which the user is accessing a web application; through analysis and interpretation, these systems can infer and automatically adjust the characteristics these tools need in order to be accessible to anyone, regardless of disability or navigation context. This paper defines a set of guidelines and specific features that security tools and methods should have to ensure Web accessibility through the implementation of intelligent systems.

  12. Research on Techniques of Multifeatures Extraction for Tongue Image and Its Application in Retrieval

    Directory of Open Access Journals (Sweden)

    Liyan Chen

    2017-01-01

    Full Text Available Tongue diagnosis is one of the important methods in traditional Chinese medicine. Doctors can judge the disease's situation by observing the patient's tongue color and texture. This paper presents a novel approach to extract color and texture features of tongue images. First, we use an improved GLA (Generalized Lloyd Algorithm) to extract the main color of the tongue image. Considering that the color feature cannot fully express the tongue image information, the paper analyzes the texture features of the tongue edge and proposes an algorithm to extract them. Then, we integrate the two features in retrieval with different weights. Experimental results show that the proposed method can improve the detection rate of lesions in tongue images relative to single-feature retrieval.
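
    The paper's exact features and weights are not given in the record, so the sketch below only illustrates the general pattern of weighted two-feature retrieval: rank database images by a weighted sum of a color distance and a texture distance. The feature vectors and weights are placeholders.

    ```python
    import numpy as np

    def retrieve(query_color, query_texture, db, w_color=0.6, w_texture=0.4, top_k=5):
        """Rank database entries by a weighted sum of color and texture distances.

        db is a list of (image_id, color_vec, texture_vec); vectors are assumed
        to be normalized so the two distances are comparable.
        """
        scores = []
        for image_id, color_vec, texture_vec in db:
            d_color = np.linalg.norm(query_color - color_vec)
            d_texture = np.linalg.norm(query_texture - texture_vec)
            scores.append((w_color * d_color + w_texture * d_texture, image_id))
        return [image_id for _, image_id in sorted(scores)[:top_k]]

    rng = np.random.default_rng(0)
    db = [(f"img{i}", rng.random(8), rng.random(16)) for i in range(100)]
    print(retrieve(rng.random(8), rng.random(16), db))
    ```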

  13. Wind Retrieval Algorithms for the IWRAP and HIWRAP Airborne Doppler Radars with Applications to Hurricanes

    Science.gov (United States)

    Guimond, Stephen Richard; Tian, Lin; Heymsfield, Gerald M.; Frasier, Stephen J.

    2013-01-01

    Algorithms for the retrieval of atmospheric winds in precipitating systems from downward-pointing, conically scanning airborne Doppler radars are presented. The focus of the paper is on two radars: the Imaging Wind and Rain Airborne Profiler (IWRAP) and the High-altitude IWRAP (HIWRAP). The IWRAP is a dual-frequency (C and Ku band), multi-beam (incidence angles of 30-50) system that flies on the NOAA WP-3D aircraft at altitudes of 2-4 km. The HIWRAP is a dual-frequency (Ku and Ka band), dual-beam (incidence angles of 30 and 40) system that flies on the NASA Global Hawk aircraft at altitudes of 18-20 km. Retrievals of the three Cartesian wind components over the entire radar sampling volume are described, which can be determined using either a traditional least squares or a variational solution procedure. The random errors in the retrievals are evaluated using both an error propagation analysis and a numerical simulation of a hurricane. These analyses show that the vertical and along-track wind errors have strong across-track dependence, with values ranging from 0.25 m s⁻¹ at nadir to 2.0 m s⁻¹ and 1.0 m s⁻¹, respectively, at the swath edges. The across-track wind errors also have across-track structure and are, on average, 3.0-3.5 m s⁻¹, or 10% of the hurricane wind speed. For typical rotated figure-four flight patterns through hurricanes, the zonal and meridional wind speed errors are 2-3 m s⁻¹. Examples of measured data retrievals from IWRAP during an eyewall replacement cycle in Hurricane Isabel (2003) and from HIWRAP during the development of Tropical Storm Matthew (2010) are shown.
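
    A schematic of the least-squares variant only (the paper also develops a variational solution and a full error analysis, not reproduced here): each Doppler radial velocity is approximately the projection of the wind vector onto the beam's unit pointing vector, so several looks at one volume give an overdetermined linear system for (u, v, w). The geometry, sign conventions and noise level below are made up.

    ```python
    import numpy as np

    # Unit pointing vectors for several conically scanned looks (made-up geometry)
    azimuths = np.deg2rad([0, 60, 120, 180, 240, 300])
    incidence = np.deg2rad(40.0)
    A = np.column_stack([
        np.sin(incidence) * np.sin(azimuths),          # east component of each look
        np.sin(incidence) * np.cos(azimuths),          # north component
        -np.cos(incidence) * np.ones_like(azimuths),   # downward-pointing vertical component
    ])

    true_wind = np.array([20.0, 5.0, -1.0])               # (u, v, w) in m/s
    v_radial = A @ true_wind + 0.5 * np.random.randn(6)   # simulated Doppler radials + noise

    u, v, w = np.linalg.lstsq(A, v_radial, rcond=None)[0]
    print(u, v, w)
    ```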

  14. Application of information-retrieval methods to the classification of physical data

    Science.gov (United States)

    Mamotko, Z. N.; Khorolskaya, S. K.; Shatrovskiy, L. I.

    1975-01-01

    Scientific data received from satellites are characterized as a multi-dimensional time series, whose terms are vector functions of a vector of measurement conditions. Information retrieval methods are used to construct lower dimensional samples on the basis of the condition vector, in order to obtain these data and to construct partial relations. The methods are applied to the joint Soviet-French Arkad project.

  15. New web-based applications for mechanistic case diagramming

    Directory of Open Access Journals (Sweden)

    Fred R. Dee

    2014-07-01

    Full Text Available The goal of mechanistic case diagramming (MCD) is to provide students with a more in-depth understanding of cause-and-effect relationships and basic mechanistic pathways in medicine. This will enable them to better explain how observed clinical findings develop from preceding pathogenic and pathophysiological events. The pedagogic function of MCD is in relating risk factors, disease entities and morphology, signs and symptoms, and test and procedure findings in a specific case scenario with etiologic, pathogenic, and pathophysiological sequences within a flow diagram. In this paper, we describe the addition of automation and predetermined lists to further develop the original concept of MCD as described by Engelberg in 1992 and Guerrero in 2001. We demonstrate that with these modifications, MCD is effective and efficient in small group case-based teaching for second-year medical students (ratings of ~3.4 on a 4.0 scale). There was also a significant correlation with other measures of competency, with a 'true' score correlation of 0.54. A traditional calculation of reliability showed promising results (α = 0.47) within a low-stakes, ungraded environment. Further, we have demonstrated MCD's potential for use in independent learning and team-based learning (TBL). Future studies are needed to evaluate MCD's potential for use in medium-stakes assessment or self-paced independent learning and assessment. MCD may be especially relevant in returning students to the application of basic medical science mechanisms in the clinical years.

  16. Supporting secure programming in web applications through interactive static analysis

    Science.gov (United States)

    Zhu, Jun; Xie, Jing; Lipford, Heather Richter; Chu, Bill

    2013-01-01

    Many security incidents are caused by software developers’ failure to adhere to secure programming practices. Static analysis tools have been used to detect software vulnerabilities. However, their wide usage by developers is limited by the special training required to write rules customized to application-specific logic. Our approach is interactive static analysis, to integrate static analysis into Integrated Development Environment (IDE) and provide in-situ secure programming support to help developers prevent vulnerabilities during code construction. No additional training is required nor are there any assumptions on ways programs are built. Our work is motivated in part by the observation that many vulnerabilities are introduced due to failure to practice secure programming by knowledgeable developers. We implemented a prototype interactive static analysis tool as a plug-in for Java in Eclipse. Our technical evaluation of our prototype detected multiple zero-day vulnerabilities in a large open source project. Our evaluations also suggest that false positives may be limited to a very small class of use cases. PMID:25685513
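
    The authors' plug-in targets Java in Eclipse; purely to illustrate the kind of rule an in-IDE analyzer can run during code construction (this is not their tool), here is a toy detector that flags SQL assembled by string concatenation, a classic injection-prone pattern:

    ```python
    import ast

    SQL_PREFIXES = ("select ", "insert ", "update ", "delete ")

    class SqlConcatChecker(ast.NodeVisitor):
        """Flag string concatenation where the left operand looks like SQL."""
        def visit_BinOp(self, node):
            if isinstance(node.op, ast.Add) and isinstance(node.left, ast.Constant) \
                    and isinstance(node.left.value, str) \
                    and node.left.value.lower().startswith(SQL_PREFIXES):
                print(f"line {node.lineno}: possible SQL injection via concatenation")
            self.generic_visit(node)

    source = 'query = "SELECT * FROM users WHERE name = \'" + user_input + "\'"\n'
    SqlConcatChecker().visit(ast.parse(source))
    ```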

  19. Development of dynamic Bayesian models for web application test management

    Science.gov (United States)

    Azarnova, T. V.; Polukhin, P. V.; Bondarenko, Yu V.; Kashirina, I. L.

    2018-03-01

    The mathematical apparatus of dynamic Bayesian networks is an effective and technically proven tool that can be used to model complex stochastic dynamic processes. According to the results of the research, mathematical models and methods of dynamic Bayesian networks provide a high coverage of stochastic tasks associated with error testing in multiuser software products operated in a dynamically changing environment. Formalized representation of the discrete test process as a dynamic Bayesian model allows us to organize the logical connection between individual test assets for multiple time slices. This approach gives an opportunity to present testing as a discrete process with set structural components responsible for the generation of test assets. Dynamic Bayesian network-based models allow us to combine in one management area individual units and testing components with different functionalities and a direct influence on each other in the process of comprehensive testing of various groups of computer bugs. The application of the proposed models provides an opportunity to use a consistent approach to formalize test principles and procedures, methods used to treat situational error signs, and methods used to produce analytical conclusions based on test results.
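
    The record does not give the authors' network structure, so the sketch below shows only the core mechanics of a discrete dynamic Bayesian network applied to testing: a hidden state (say, whether a module is defective) evolves between time slices via a transition matrix, and each slice's test outcome updates the belief. All probabilities are invented.

    ```python
    import numpy as np

    # States: 0 = module OK, 1 = module defective (illustrative)
    transition = np.array([[0.95, 0.05],    # P(next state | OK)
                           [0.10, 0.90]])   # P(next state | defective)
    emission = np.array([[0.90, 0.10],      # P(test passes/fails | OK)
                         [0.30, 0.70]])     # P(test passes/fails | defective)

    belief = np.array([0.8, 0.2])           # prior over the initial slice
    observations = [1, 1, 0]                # test outcomes per slice: 1 = fail, 0 = pass

    for obs in observations:                # standard forward (filtering) recursion
        belief = belief @ transition        # predict the next slice
        belief = belief * emission[:, obs]  # weight by the observed test outcome
        belief = belief / belief.sum()      # renormalize
        print(belief)
    ```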

  20. X-band COSMO-SkyMed wind field retrieval, with application to coastal circulation modeling

    Directory of Open Access Journals (Sweden)

    A. Montuori

    2013-02-01

    Full Text Available In this paper, X-band COSMO-SkyMed© synthetic aperture radar (SAR wind field retrieval is investigated, and the obtained data are used to force a coastal ocean circulation model. The SAR data set consists of 60 X-band Level 1B Multi-Look Ground Detected ScanSAR Huge Region COSMO-SkyMed© SAR data, gathered in the southern Tyrrhenian Sea during the summer and winter seasons of 2010. The SAR-based wind vector field estimation is accomplished by resolving both the SAR-based wind speed and wind direction retrieval problems independently. The sea surface wind speed is retrieved by means of a SAR wind speed algorithm based on the azimuth cut-off procedure, while the sea surface wind direction is provided by means of a SAR wind direction algorithm based on the discrete wavelet transform multi-resolution analysis. The obtained wind fields are compared with ground truth data provided by both ASCAT scatterometer and ECMWF model wind fields. SAR-derived wind vector fields and ECMWF model wind data are used to construct a blended wind product regularly sampled in both space and time, which is then used to force a coastal circulation model of a southern Tyrrhenian coastal area to simulate wind-driven circulation processes. The modeling results show that X-band COSMO-SkyMed© SAR data can be valuable in providing effective wind fields for coastal circulation modeling.
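
    Purely to caricature the wind-speed half of the retrieval (the ACF model, fitting setup and linear speed relation below are placeholders, not the paper's calibrated azimuth cut-off algorithm): fit a Gaussian to the azimuth autocorrelation function of the image to estimate the cut-off wavelength, then map the cut-off to wind speed with an empirical relation.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def gaussian(x, cutoff):
        # ACF model whose width is set by the azimuth cut-off wavelength
        return np.exp(-np.pi**2 * x**2 / cutoff**2)

    # Illustrative azimuth autocorrelation estimated from a SAR sub-image
    lags = np.linspace(0, 600, 40)                     # metres
    acf = gaussian(lags, 250.0) + 0.02 * np.random.randn(40)

    cutoff, _ = curve_fit(gaussian, lags, acf, p0=[200.0])
    wind_speed = 0.04 * cutoff[0] - 2.0                # placeholder linear relation
    print(f"cut-off ~ {cutoff[0]:.0f} m, wind ~ {wind_speed:.1f} m/s")
    ```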

  1. Qualitative risk assessment of subsurface barriers in applications supporting retrieval of SST waste

    International Nuclear Information System (INIS)

    Treat, R.L.

    1994-04-01

    This report provides a brief, qualitative assessment of the risks associated with the potential use of impermeable subsurface barriers installed around and beneath Hanford Site single-shell tanks (SSTs) to support the retrieval of wastes from those tanks. These risks are compared to a qualitative assessment of the costs and risks associated with a case in which barriers are not used. A quantitative assessment of the costs and risks associated with these two cases will be prepared and documented in a companion report. The companion report will compare quantitatively the costs and risks of several retrieval options with varying parameters, such as the effectiveness of retrieval, the effectiveness of subsurface barriers, and the use of surface barriers. For ease of comparison of qualitative risks, a case in which impermeable subsurface barriers are used in conjunction with another technology to remove tank waste is referred to in this report as the Barrier Case. A case in which waste removal technologies are used without employing a subsurface barrier is referred to as the No Barrier Case. The technologies associated with each case are described in the following sections.

  2. Retrieving Storm Electric Fields from Aircraft Field Mill Data: Part II: Applications

    Science.gov (United States)

    Koshak, William; Mach, D. M.; Christian, H. J.; Stewart, M. F.; Bateman, M. G.

    2006-01-01

    The Lagrange multiplier theory developed in Part I of this study is applied to complete a relative calibration of a Citation aircraft that is instrumented with six field mill sensors. When side constraints related to average fields are used, the Lagrange multiplier method performs well in computer simulations. For mill measurement errors of 1 V m⁻¹ and a 5 V m⁻¹ error in the mean fair-weather field function, the 3D storm electric field is retrieved to within an error of about 12%. A side constraint that involves estimating the detailed structure of the fair-weather field was also tested using computer simulations. For mill measurement errors of 1 V m⁻¹, the method retrieves the 3D storm field to within an error of about 8% if the fair-weather field estimate is typically within 1 V m⁻¹ of the true fair-weather field. Using this type of side constraint and data from fair-weather field maneuvers taken on 29 June 2001, the Citation aircraft was calibrated. Absolute calibration was completed using the pitch down method developed in Part I, and conventional analyses. The resulting calibration matrices were then used to retrieve storm electric fields during a Citation flight on 2 June 2001. The storm field results are encouraging and agree favorably in many respects with results derived from earlier (iterative) techniques of calibration.
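
    In outline (the actual calibration solves for the matrix itself with Lagrange multipliers and side constraints, which this sketch takes as given): each mill output is modeled as a linear combination of the three storm-field components, so with a known calibration matrix the field follows from least squares. The matrix entries and noise below are invented.

    ```python
    import numpy as np

    # Calibration matrix: response of 6 mills to (Ex, Ey, Ez); values are invented
    C = np.array([[ 0.9,  0.1,  0.3],
                  [ 0.8, -0.2,  0.4],
                  [-0.7,  0.3,  0.5],
                  [ 0.6,  0.9, -0.2],
                  [-0.5, -0.8,  0.3],
                  [ 0.4,  0.7,  0.6]])

    E_true = np.array([1500.0, -300.0, 5000.0])     # storm field in V/m
    m = C @ E_true + 10.0 * np.random.randn(6)      # mill readings + 10 V/m noise

    E_hat, *_ = np.linalg.lstsq(C, m, rcond=None)
    print(E_hat)   # retrieved (Ex, Ey, Ez)
    ```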

  3. Retrieving Storm Electric Fields From Aircraft Field Mill Data. Part 2; Applications

    Science.gov (United States)

    Koshak, W. J.; Mach, D. M.; Christian, H. J.; Stewart, M. F.; Bateman, M. G.

    2005-01-01

    The Lagrange multiplier theory and "pitch down method" developed in Part I of this study are applied to complete the calibration of a Citation aircraft that is instrumented with six field mill sensors. When side constraints related to average fields are used, the method performs well in computer simulations. For mill measurement errors of 1 V/m and a 5 V/m error in the mean fair weather field function, the 3-D storm electric field is retrieved to within an error of about 12%. A side constraint that involves estimating the detailed structure of the fair weather field was also tested using computer simulations. For mill measurement errors of 1 V/m, the method retrieves the 3-D storm field to within an error of about 8% if the fair weather field estimate is typically within 1 V/m of the true fair weather field. Using this side constraint and data from fair weather field maneuvers taken on 29 June 2001, the Citation aircraft was calibrated. The resulting calibration matrix was then used to retrieve storm electric fields during a Citation flight on 2 June 2001. The storm field results are encouraging and agree favorably with the results obtained from earlier calibration analyses that were based on iterative techniques.

  4. Data-driven Green's function retrieval and application to imaging with multidimensional deconvolution

    Science.gov (United States)

    Broggini, Filippo; Wapenaar, Kees; van der Neut, Joost; Snieder, Roel

    2014-01-01

    An iterative method is presented that allows one to retrieve the Green's function originating from a virtual source located inside a medium using reflection data measured only at the acquisition surface. In addition to the reflection response, an estimate of the travel times corresponding to the direct arrivals is required. However, no detailed information about the heterogeneities in the medium is needed. The iterative scheme generalizes the Marchenko equation for inverse scattering to the seismic reflection problem. To give insight into the mechanism of the iterative method, its steps for a simple layered medium are analyzed using physical arguments based on the stationary phase method. The retrieved Green's wavefield is shown to correctly contain the multiples due to the inhomogeneities present in the medium. Additionally, a variant of the iterative scheme enables decomposition of the retrieved wavefield into its downgoing and upgoing components. These wavefields then enable creation of a ghost-free image of the medium with either cross correlation or multidimensional deconvolution, presenting an advantage over standard prestack migration.
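
    Purely as a schematic of the iteration's shape, not the actual Marchenko scheme (which distinguishes up- and downgoing focusing functions and applies the measured reflection response as an integral operator): one can picture a fixed-point update in which the reflection data act on the time-reversed current estimate and a time window keeps only energy arriving before the direct wave. Everything below, the toy reflection series included, is invented.

    ```python
    import numpy as np

    nt = 512
    R = np.zeros(nt); R[80] = 0.4; R[200] = -0.2      # toy 1-D reflection series
    t_direct = 60                                      # direct arrival of the virtual source
    window = np.arange(nt) < t_direct                  # passes only pre-direct-arrival times

    def apply_R(f):
        """Reflection response as a causal convolution operator (caricature)."""
        return np.convolve(R, f)[:nt]

    f0 = np.zeros(nt); f0[t_direct] = 1.0              # initial estimate: the direct wave
    f = f0.copy()
    for _ in range(10):                                # schematic fixed-point iteration
        f = f0 + window * apply_R(f[::-1])[::-1]

    green = apply_R(f)                                 # schematic retrieved response
    ```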

  5. Effects of customization on application decisions and applicant pool characteristics in a web-based recruitment context.

    Science.gov (United States)

    Dineen, Brian R; Noe, Raymond A

    2009-01-01

    The authors examined 2 forms of customization in a Web-based recruitment context. Hypotheses were tested in a controlled study in which participants viewed multiple Web-based job postings that each included information about multiple fit categories. Results indicated that customization of information regarding person-organization (PO), needs-supplies, and demands-abilities (DA) fit (fit information customization) and customization of the order in which these fit categories were presented (configural customization) had differential effects on outcomes. Specifically, (a) applicant pool PO and DA fit were greater when fit information customization was provided, (b) applicant pool fit in high- versus low-relevance fit categories was better differentiated when configural customization was provided, and (c) overall application rates were lower when either or both forms of customization were provided. (PsycINFO Database Record (c) 2009 APA, all rights reserved).

  6. Implementation of a scalable, web-based, automated clinical decision support risk-prediction tool for chronic kidney disease using C-CDA and application programming interfaces.

    Science.gov (United States)

    Samal, Lipika; D'Amore, John D; Bates, David W; Wright, Adam

    2017-11-01

    Clinical decision support tools for risk prediction are readily available, but they typically require workflow interruptions and manual data entry and so are rarely used. Due to new data interoperability standards for electronic health records (EHRs), other options are available. As a clinical case study, we sought to build a scalable, web-based system that would automate the calculation of kidney failure risk and display clinical decision support to users in primary care practices. We developed a single-page application, web server, database, and application programming interface to calculate and display kidney failure risk. Data were extracted from the EHR using the Consolidated Clinical Document Architecture interoperability standard for Continuity of Care Documents (CCDs). EHR users were presented with a noninterruptive alert on the patient's summary screen and a hyperlink to details and recommendations provided through a web application. Clinic schedules and CCDs were retrieved using existing application programming interfaces to the EHR, and we provided a clinical decision support hyperlink to the EHR as a service. We worked through a series of terminology and technical issues. The application was validated with data from 255 patients and subsequently deployed to 10 primary care clinics where, over the course of 1 year, 569 533 CCD documents were processed. We validated the use of interoperable documents and open-source components to develop a low-cost tool for automated clinical decision support. Since Consolidated Clinical Document Architecture-based data extraction extends to any certified EHR, this demonstrates a successful modular approach to clinical decision support. © The Author 2017. Published by Oxford University Press on behalf of the American Medical Informatics Association.
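
    A minimal sketch of the service pattern the record describes, a web API that computes a risk score for the EHR to hyperlink to; the route, field names and the risk function are placeholders (the deployed system uses a validated kidney failure risk equation, whose coefficients are not reproduced here):

    ```python
    from flask import Flask, jsonify, request

    app = Flask(__name__)

    def kidney_failure_risk(age, male, egfr, acr):
        """Placeholder score; the real tool uses a validated risk equation."""
        score = 0.5 * (60.0 - min(egfr, 60.0)) + 0.1 * acr ** 0.5 + 0.05 * max(age - 50, 0)
        if male:
            score += 1.0
        return min(score / 100.0, 1.0)

    @app.route("/risk", methods=["POST"])
    def risk():
        # In the deployed system these fields would be extracted from the patient's CCD
        p = request.get_json()
        r = kidney_failure_risk(p["age"], p["male"], p["egfr"], p["acr"])
        return jsonify({"risk": r, "alert": r > 0.1})   # threshold is illustrative

    if __name__ == "__main__":
        app.run()
    ```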

  7. Expanding the isotopic toolbox: Applications of hydrogen and oxygen stable isotope ratios to food web studies

    Directory of Open Access Journals (Sweden)

    Hannah B Vander Zanden

    2016-03-01

    Full Text Available The measurement of stable carbon (δ13C) and nitrogen (δ15N) isotopes in tissues of organisms has formed the foundation of isotopic food web reconstructions, as these values directly reflect assimilated diet. In contrast, stable hydrogen (δ2H) and oxygen (δ18O) isotope measurements have typically been reserved for studies of migratory origin and paleoclimate reconstruction based on systematic relationships between organismal tissue and local environmental water. Recently, innovative applications using δ2H and, to a lesser extent, δ18O values have demonstrated the potential for these elements to provide novel insights in modern food web studies. We explore the advantages and challenges associated with three applications of δ2H and δ18O values in food web studies. First, large δ2H differences between aquatic and terrestrial ecosystem end members can permit the quantification of energy inputs and nutrient fluxes between these two sources, with potential applications for determining allochthonous vs. autochthonous nutrient sources in freshwater systems and relative aquatic habitat utilization by terrestrial organisms. Next, some studies have identified a relationship between δ2H values and trophic position, which suggests that this marker may serve as a trophic indicator in addition to the more commonly used δ15N values. Finally, coupled measurements of δ2H and δ18O values are increasing as a result of reduced analytical challenges to measuring both simultaneously, and may provide additional ecological information over single-element measurements. In some organisms, the isotopic ratios of these two elements are tightly coupled, whereas the isotopic disequilibrium in other organisms may offer insight into the diet and physiology of individuals. Although a coherent framework for interpreting δ2H and δ18O data in the context of food web studies is emerging, many fundamental uncertainties remain. We highlight directions for targeted research that ...

  8. Building mobile applications using Kendo UI mobile and ASP.NET web API

    CERN Document Server

    Nair, Nishanth

    2013-01-01

    The Packt Beginner's Guide format is designed to make you as comfortable as possible. Using practical examples, this guide will walk you through the ins and outs of web application development with easy step-by-step instructions. If you want to build your own application but don't know where to start, then this is the book for you. With easy-to-follow, step-by-step and real-life examples, you will be building your own applications in a matter of weeks, not years.

  9. ForistomApp a Web application for scientific and technological information management of Forsitom foundation

    Science.gov (United States)

    Saavedra-Duarte, L. A.; Angarita-Jerardino, A.; Ruiz, P. A.; Dulce-Moreno, H. J.; Vera-Rivera, F. H.; V-Niño, E. D.

    2017-12-01

    Information and Communication Technologies (ICT) are essential in the transfer of knowledge, and Web tools, as part of ICT, are important for institutions seeking greater visibility for the products developed by their researchers. For this reason, we implemented an application that supports the information management of the FORISTOM Foundation (Foundation of Researchers in Science and Technology of Materials). The application shows a detailed description not only of all its members but also of all the scientific production they carry out, such as technological developments, research projects, articles and presentations, among others. This application can be implemented by other entities committed to scientific dissemination and the transfer of technology and knowledge.

  11. Harnessing modern web application technology to create intuitive and efficient data visualization and sharing tools.

    Science.gov (United States)

    Wood, Dylan; King, Margaret; Landis, Drew; Courtney, William; Wang, Runtang; Kelly, Ross; Turner, Jessica A; Calhoun, Vince D

    2014-01-01

    Neuroscientists increasingly need to work with big data in order to derive meaningful results in their field. Collecting, organizing and analyzing this data can be a major hurdle on the road to scientific discovery. This hurdle can be lowered using the same technologies that are currently revolutionizing the way that cultural and social media sites represent and share information with their users. Web application technologies and standards such as RESTful webservices, HTML5 and high-performance in-browser JavaScript engines are being utilized to vastly improve the way that the world accesses and shares information. The neuroscience community can also benefit tremendously from these technologies. We present here a web application that allows users to explore and request the complex datasets that need to be shared among the neuroimaging community. The COINS (Collaborative Informatics and Neuroimaging Suite) Data Exchange uses web application technologies to facilitate data sharing in three phases: Exploration, Request/Communication, and Download. This paper will focus on the first phase, and how intuitive exploration of large and complex datasets is achieved using a framework that centers around asynchronous client-server communication (AJAX) and also exposes a powerful API that can be utilized by other applications to explore available data. First opened to the neuroscience community in August 2012, the Data Exchange has already provided researchers with over 2500 GB of data.

  12. Assessing soil erosion risk using RUSLE through a GIS open source desktop and web application.

    Science.gov (United States)

    Duarte, L; Teodoro, A C; Gonçalves, J A; Soares, D; Cunha, M

    2016-06-01

    Soil erosion is a serious environmental problem. An estimate of the expected soil loss from water-caused erosion can be calculated using the Revised Universal Soil Loss Equation (RUSLE). Geographical Information Systems (GIS) provide tools to create categorical maps of soil erosion risk which help to assess the risk of soil loss. The objective of this study was to develop a GIS open source application (in QGIS) using the RUSLE methodology for estimating the erosion rate at the watershed scale (desktop application) and to provide the same application via web access (web application). The applications developed allow one to generate all the maps necessary to evaluate the soil erosion risk. Several libraries and algorithms from SEXTANTE were used to develop these applications. The applications were tested in the Montalegre municipality (Portugal). The maps involved in the RUSLE method (soil erosivity factor, soil erodibility factor, topographic factor, cover management factor, and support practices) were created. The estimated mean soil loss was 220 ton km⁻² year⁻¹, with values ranging from 0.27 to 1283 ton km⁻² year⁻¹. The results indicated that most of the study area (80 %) is characterized by a very low soil erosion level, while in some areas soil erosion was higher than 962 ton km⁻² year⁻¹. It was also concluded that areas with high slope values and bare soil are associated with high levels of erosion, and that the higher the P and C values, the higher the soil erosion percentage. The RUSLE web and desktop applications are freely available.
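
    RUSLE itself is a cell-by-cell product of five factor rasters, A = R · K · LS · C · P; a minimal sketch with made-up uniform rasters (a real application derives each factor from rainfall, soil, terrain and land-cover layers, and units depend on the factor conventions used):

    ```python
    import numpy as np

    shape = (100, 100)                      # toy raster grid
    R = np.full(shape, 700.0)               # rainfall erosivity
    K = np.full(shape, 0.30)                # soil erodibility
    LS = np.full(shape, 1.2)                # slope length and steepness
    C = np.full(shape, 0.05)                # cover management
    P = np.full(shape, 1.0)                 # support practices

    A = R * K * LS * C * P                  # expected soil loss per cell
    print(A.mean())                         # e.g. mean expected soil loss
    ```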

  14. Designing and developing portable large-scale JavaScript web applications within the Experiment Dashboard framework

    Science.gov (United States)

    Andreeva, J.; Dzhunov, I.; Karavakis, E.; Kokoszkiewicz, L.; Nowotka, M.; Saiz, P.; Tuckett, D.

    2012-12-01

    Improvements in web browser performance and web standards compliance, as well as the availability of comprehensive JavaScript libraries, provides an opportunity to develop functionally rich yet intuitive web applications that allow users to access, render and analyse data in novel ways. However, the development of such large-scale JavaScript web applications presents new challenges, in particular with regard to code sustainability and team-based work. We present an approach that meets the challenges of large-scale JavaScript web application design and development, including client-side model-view-controller architecture, design patterns, and JavaScript libraries. Furthermore, we show how the approach leads naturally to the encapsulation of the data source as a web API, allowing applications to be easily ported to new data sources. The Experiment Dashboard framework is used for the development of applications for monitoring the distributed computing activities of virtual organisations on the Worldwide LHC Computing Grid. We demonstrate the benefits of the approach for large-scale JavaScript web applications in this context by examining the design of several Experiment Dashboard applications for data processing, data transfer and site status monitoring, and by showing how they have been ported for different virtual organisations and technologies.

  15. Web Engineering

    Energy Technology Data Exchange (ETDEWEB)

    White, Bebo

    2003-06-23

    Web Engineering is the application of systematic, disciplined and quantifiable approaches to development, operation, and maintenance of Web-based applications. It is both a pro-active approach and a growing collection of theoretical and empirical research in Web application development. This paper gives an overview of Web Engineering by addressing the questions: (a) why is it needed? (b) what is its domain of operation? (c) how does it help and what should it do to improve Web application development? and (d) how should it be incorporated in education and training? The paper discusses the significant differences that exist between Web applications and conventional software, the taxonomy of Web applications, the progress made so far and the research issues and experience of creating a specialization at the master's level. The paper reaches a conclusion that Web Engineering at this stage is a moving target since Web technologies are constantly evolving, making new types of applications possible, which in turn may require innovations in how they are built, deployed and maintained.

  16. Advancing the Implementation of Hydrologic Models as Web-based Applications

    Science.gov (United States)

    Dahal, P.; Tarboton, D. G.; Castronova, A. M.

    2017-12-01

    Advanced computer simulations are required to understand hydrologic phenomena such as rainfall-runoff response, groundwater hydrology and snow hydrology. Building a hydrologic model instance to simulate a watershed requires investment in data (diverse geospatial datasets such as terrain and soil) and computer resources, typically demands a wide skill set from the analyst, and involves a workflow that is often difficult to reproduce. This work introduces a prototype infrastructure in the form of a web application that provides researchers with easy-to-use access to complete hydrologic modeling functionality. This includes creating the necessary geospatial and forcing data, preparing input files for a model by applying complex data preprocessing, running the model for a user-defined watershed, and saving the results to a web repository. The open source Tethys Platform was used to develop the web app front-end graphical user interface (GUI). We used HydroDS, a web service that provides data preparation processing capability to support backend computations used by the app. Results are saved in HydroShare, a hydrologic information system that supports the sharing of hydrologic data, models and analysis tools. The TOPographic Kinematic APproximation and Integration (TOPKAPI) model served as the example for which we developed a complete hydrologic modeling service to demonstrate the approach. The final product is a complete modeling system, accessible through the web, to create input files and run the TOPKAPI hydrologic model for a watershed of interest. We are investigating similar functionality for the preparation of input to the Regional Hydro-Ecological Simulation System (RHESSys). Key Words: hydrologic modeling, web services, hydrologic information system, HydroShare, HydroDS, Tethys Platform
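
    The pattern is a thin web front end delegating heavy geospatial preprocessing to a data service. Hypothetically, a client call might look like the following; the host, endpoint and parameters are invented for illustration and are not HydroDS's actual API:

    ```python
    import requests

    # Hypothetical data-preparation request; endpoint and parameters are invented
    payload = {
        "service": "delineate_watershed",
        "outlet_lat": 41.74, "outlet_lon": -111.83,
        "dem_source": "national_30m",
    }
    resp = requests.post("https://hydro-ds.example.org/api/jobs", json=payload, timeout=60)
    resp.raise_for_status()

    job = resp.json()
    print(job["status"], job.get("result_url"))   # poll or fetch prepared inputs
    ```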

  17. Semantic Web applications and tools for the life sciences: SWAT4LS 2010.

    Science.gov (United States)

    Burger, Albert; Paschke, Adrian; Romano, Paolo; Marshall, M Scott; Splendiani, Andrea

    2012-01-25

    As Semantic Web technologies mature and new releases of key elements, such as SPARQL 1.1 and OWL 2.0, become available, the Life Sciences continue to push the boundaries of these technologies with ever more sophisticated tools and applications. Unsurprisingly, therefore, interest in the SWAT4LS (Semantic Web Applications and Tools for the Life Sciences) activities has remained high, as was evident during the third international SWAT4LS workshop held in Berlin in December 2010. Contributors to this workshop were invited to submit extended versions of their papers, the best of which are now made available in this special supplement of BMC Bioinformatics. The papers reflect the wide range of work in this area, covering the storage and querying of Life Sciences data in RDF triple stores, tools for the development of biomedical ontologies, and the semantics-based integration of Life Sciences as well as clinical data.

  18. Lexical Link Analysis Application: Improving Web Service to Acquisition Visibility Portal Phase III

    Science.gov (United States)

    2015-04-30

    Annual Acquisition Research Symposium, Thursday Sessions, Volume II: Lexical Link Analysis Application: Improving Web Service to Acquisition Visibility Portal, Phase III. ... processes. Lexical Link Analysis (LLA) can help, by applying automation to reveal and depict, to decision-makers, the correlations, associations, and ...

  19. The QuakeSim Project: Web Services for Managing Geophysical Data and Applications

    Science.gov (United States)

    Pierce, Marlon E.; Fox, Geoffrey C.; Aktas, Mehmet S.; Aydin, Galip; Gadgil, Harshawardhan; Qi, Zhigang; Sayar, Ahmet

    2008-04-01

    We describe our distributed systems research efforts to build the “cyberinfrastructure” components that constitute a geophysical Grid, or more accurately, a Grid of Grids. Service-oriented computing principles are used to build a distributed infrastructure of Web accessible components for accessing data and scientific applications. Our data services fall into two major categories: Archival, database-backed services based around Geographical Information System (GIS) standards from the Open Geospatial Consortium, and streaming services that can be used to filter and route real-time data sources such as Global Positioning System data streams. Execution support services include application execution management services and services for transferring remote files. These data and execution service families are bound together through metadata information and workflow services for service orchestration. Users may access the system through the QuakeSim scientific Web portal, which is built using a portlet component approach.

  20. Current trends and new challenges of databases and web applications for systems driven biological research

    Directory of Open Access Journals (Sweden)

    Pradeep Kumar Sreenivasaiah

    2010-12-01

    Full Text Available The dynamic and rapidly evolving nature of systems-driven research imposes special requirements on the technology, approach, design and architecture of computational infrastructure, including databases and web applications. Several solutions have been proposed to meet these expectations, and novel methods have been developed to address the persisting problems of data integration. It is important for researchers to understand the different technologies and approaches. Having familiarized themselves with the pros and cons of the existing technologies, researchers can exploit their capabilities to the maximum potential for integrating data. In this review we discuss the architecture, design and key technologies underlying some of the prominent databases (DBs) and web applications. We mention their roles in the integration of biological data and investigate some of the emerging design concepts and computational technologies that are likely to have a key role in the future of systems-driven biomedical research.