WorldWideScience

Sample records for retrieval web applications

  1. A Specialized Framework for Data Retrieval Web Applications

    Directory of Open Access Journals (Sweden)

    Jerzy Nogiec

    2005-06-01

    Although many general-purpose frameworks have been developed to aid in web application development, they typically tend to be both comprehensive and complex. To address this problem, a specialized server-side Java framework designed specifically for data retrieval and visualization has been developed. The framework's focus is on maintainability and data security. The functionality is rich with features necessary for simplifying data display design, deployment, user management and application debugging, yet the scope is deliberately kept limited to allow for easy comprehension and rapid application development. The system clearly decouples the application processing and visualization, which in turn allows for clean separation of layout and processing development. Duplication of standard web page features such as toolbars and navigational aids is therefore eliminated. The framework employs the popular Model-View-Controller (MVC) architecture, but it also uses the filter mechanism for several of its base functionalities, which permits easy extension of the provided core functionality of the system.
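
    The abstract names the two structural ideas (MVC plus a filter mechanism) without code. As a rough, hypothetical illustration of the filter idea in Python, not the authors' Java framework, a request can pass through a chain of filters before reaching a controller; all names below are invented:

    ```python
    # Hypothetical sketch of a filter chain in front of an MVC controller,
    # mirroring the filter-based extension mechanism described above.
    class Filter:
        def process(self, request, next_step):
            raise NotImplementedError

    class AuthFilter(Filter):
        # Base functionality (here: user management) packaged as a filter.
        def process(self, request, next_step):
            if not request.get("user"):
                return {"status": 403, "body": "login required"}
            return next_step(request)

    class LoggingFilter(Filter):
        # Debugging support as another pluggable filter.
        def process(self, request, next_step):
            print(f"request for {request['path']}")
            return next_step(request)

    def build_chain(filters, controller):
        """Compose filters so each one wraps the remainder of the chain."""
        def step(request, remaining=tuple(filters)):
            if not remaining:
                return controller(request)
            head, tail = remaining[0], remaining[1:]
            return head.process(request, lambda r: step(r, tail))
        return step

    def data_controller(request):
        # The processing part; a separate view/layout would render the result.
        return {"status": 200, "body": f"data for {request['path']}"}

    handler = build_chain([LoggingFilter(), AuthFilter()], data_controller)
    print(handler({"path": "/magnet/current", "user": "alice"}))
    ```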

  2. A specialized framework for data retrieval Web applications

    International Nuclear Information System (INIS)

    Jerzy Nogiec; Kelley Trombly-Freytag; Dana Walbridge

    2004-01-01

    Although many general-purpose frameworks have been developed to aid in web application development, they typically tend to be both comprehensive and complex. To address this problem, a specialized server-side Java framework designed specifically for data retrieval and visualization has been developed. The framework's focus is on maintainability and data security. The functionality is rich with features necessary for simplifying data display design, deployment, user management and application debugging, yet the scope is deliberately kept limited to allow for easy comprehension and rapid application development. The system clearly decouples the application processing and visualization, which in turn allows for clean separation of layout and processing development. Duplication of standard web page features such as toolbars and navigational aids is therefore eliminated. The framework employs the popular Model-View-Controller (MVC) architecture, but it also uses the filter mechanism for several of its base functionalities, which permits easy extension of the provided core functionality of the system.

  3. Web information retrieval for health professionals.

    Science.gov (United States)

    Ting, S L; See-To, Eric W K; Tse, Y K

    2013-06-01

    This paper presents a Web Information Retrieval System (WebIRS), which is designed to assist healthcare professionals in obtaining up-to-date medical knowledge and information via the World Wide Web (WWW). The system leverages document classification and text summarization techniques to deliver highly correlated medical information to physicians. The system architecture of the proposed WebIRS is first discussed, and then a case study on an application of the proposed system in a Hong Kong medical organization is presented to illustrate the adoption process; a questionnaire was administered to collect feedback on the operation and performance of WebIRS in comparison with conventional information retrieval on the WWW. A prototype system has been constructed and implemented on a trial basis in a medical organization. It has proven to be of benefit to healthcare professionals through its automatic classification and summarization of the medical information that physicians need and are interested in. The results of the case study show that with the use of the proposed WebIRS, a significant reduction in searching time and effort, along with retrieval of highly relevant materials, can be attained.
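
    The abstract does not include the underlying algorithms; a minimal sketch of the two techniques WebIRS leverages (document classification plus extractive summarization) might look like the following, where the categories, keyword lists and scoring are invented for illustration:

    ```python
    # Minimal classify-then-summarize sketch; categories, keyword lists and
    # the scoring scheme are invented, not the paper's implementation.
    from collections import Counter
    import re

    CATEGORIES = {
        "cardiology": {"heart", "cardiac", "artery"},
        "oncology": {"tumor", "cancer", "chemotherapy"},
    }

    def tokenize(text):
        return re.findall(r"[a-z]+", text.lower())

    def classify(text):
        words = set(tokenize(text))
        # Pick the category sharing the most keywords with the document.
        return max(CATEGORIES, key=lambda c: len(CATEGORIES[c] & words))

    def summarize(text, n_sentences=2):
        sentences = re.split(r"(?<=[.!?])\s+", text.strip())
        freq = Counter(tokenize(text))
        # Score each sentence by the frequency of its words in the whole text.
        scored = sorted(sentences, key=lambda s: -sum(freq[w] for w in tokenize(s)))
        return " ".join(scored[:n_sentences])

    doc = ("Cardiac arrhythmia affects the heart rhythm. Treatment depends on "
           "the artery condition. Follow-up visits are recommended.")
    print(classify(doc), "->", summarize(doc, 1))
    ```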

  4. An Effective Combined Feature For Web Based Image Retrieval

    Directory of Open Access Journals (Sweden)

    H.M.R.B Herath

    2015-08-01

    Technology advances, the emergence of large-scale multimedia applications and the revolution of the World Wide Web have changed the world into a digital age. Anybody can use their mobile phone to take a photo at any time anywhere and upload that image to ever-growing image databases. Development of effective techniques for visual and multimedia retrieval systems is one of the most challenging and important directions of future research. This paper proposes an effective combined feature for web-based image retrieval. Frequently used colour and texture features are explored in order to develop a combined feature for this purpose. Three widely used colour features (colour moments, colour coherence vector and colour correlogram) and three texture features (grey level co-occurrence matrix, Tamura features and Gabor filter) were analyzed for their performance. Precision and recall were used to evaluate the performance of each of these techniques. By comparing precision and recall values, the methods that performed best were taken and combined to form a hybrid feature. The developed combined feature was evaluated by developing a web-based CBIR system. A web crawler was used to first crawl through web sites; images found in those sites were downloaded, and the combined feature representation technique was used to extract image features. The test results indicated that this web system can be used to index web images with the combined feature representation schema and to find similar images. Random image retrievals using the web system show that the combined feature can be used to retrieve images belonging to the general image domain. Accuracy of the retrieval is notably high for natural images such as outdoor scenes, images of flowers, etc. Images which have a similar colour and texture distribution were also retrieved as similar even though the images belonged to different semantic categories. This can be ideal for an artist who wants
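
    Precision and recall, the evaluation measures used above, and the normalize-then-concatenate idea behind a combined feature can be sketched briefly; the normalization and equal weighting below are assumptions, not necessarily the authors' exact scheme:

    ```python
    # Sketch of the evaluation metrics and the feature-combination idea;
    # L2 normalization and equal weighting are assumed here.
    import numpy as np

    def precision_recall(retrieved, relevant):
        hits = len(set(retrieved) & set(relevant))
        return hits / len(retrieved), hits / len(relevant)

    def combine(colour_feat, texture_feat):
        # Normalize each feature vector, then concatenate into one hybrid
        # vector so neither modality dominates the distance computation.
        c = np.asarray(colour_feat) / (np.linalg.norm(colour_feat) or 1.0)
        t = np.asarray(texture_feat) / (np.linalg.norm(texture_feat) or 1.0)
        return np.concatenate([c, t])

    p, r = precision_recall(retrieved=[1, 2, 3, 4], relevant=[2, 4, 7])
    print(f"precision={p:.2f} recall={r:.2f}")
    print(combine([0.2, 0.5, 0.3], [1.0, 2.0]).round(3))
    ```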

  5. Emergent web intelligence advanced information retrieval

    CERN Document Server

    Badr, Youakim; Abraham, Ajith; Hassanien, Aboul-Ella

    2010-01-01

    Web Intelligence explores the impact of artificial intelligence and advanced information technologies representing the next generation of Web-based systems, services, and environments, and designing hybrid web systems that serve wired and wireless users more efficiently. Multimedia and XML-based data are produced regularly and in increasing volumes in our daily digital activities, and their retrieval must be explored and studied in this emergent web-based era. 'Emergent Web Intelligence: Advanced Information Retrieval' provides reviews of the related cutting-edge technologies and insights.

  6. Retrieving top-k prestige-based relevant spatial web objects

    DEFF Research Database (Denmark)

    Cao, Xin; Cong, Gao; Jensen, Christian S.

    2010-01-01

    The location-aware keyword query returns ranked objects that are near a query location and that have textual descriptions that match query keywords. This query occurs inherently in many types of mobile and traditional web services and applications, e.g., Yellow Pages and Maps services. Previous … of prestige-based relevance to capture both the textual relevance of an object to a query and the effects of nearby objects. Based on this, a new type of query, the Location-aware top-k Prestige-based Text retrieval (LkPT) query, is proposed that retrieves the top-k spatial web objects ranked according to both prestige-based relevance and location proximity. We propose two algorithms that compute LkPT queries. Empirical studies with real-world spatial data demonstrate that LkPT queries are more effective in retrieving web objects than a previous approach that does not consider the effects of nearby …
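
    The ellipses above mark truncation in the source record. Based on what the abstract does state, a toy scoring function combining prestige-based text relevance (relevance boosted by relevant nearby objects) with location proximity could look like this; the exponential decay and the boost factor are assumptions, not the paper's LkPT algorithm:

    ```python
    # Toy ranking of spatial web objects by prestige-boosted relevance times
    # location proximity; the formulas are stand-ins, not the paper's.
    import heapq, math

    def proximity(query_loc, loc, scale=1.0):
        return math.exp(-math.dist(query_loc, loc) / scale)  # nearer is higher

    def prestige_relevance(obj, neighbours, alpha=0.5):
        # Relevance boosted by relevant nearby objects (the "prestige" effect).
        return obj["rel"] + alpha * sum(n["rel"] for n in neighbours)

    def lkpt_top_k(query_loc, objects, k=2, radius=1.0):
        scored = []
        for o in objects:
            neigh = [n for n in objects
                     if n is not o and math.dist(o["loc"], n["loc"]) < radius]
            score = prestige_relevance(o, neigh) * proximity(query_loc, o["loc"])
            scored.append((score, o["name"]))
        return heapq.nlargest(k, scored)

    objs = [
        {"name": "cafe A", "loc": (0.1, 0.2), "rel": 0.9},
        {"name": "cafe B", "loc": (0.2, 0.1), "rel": 0.7},
        {"name": "cafe C", "loc": (5.0, 5.0), "rel": 0.95},
    ]
    print(lkpt_top_k((0.0, 0.0), objs))
    ```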

  7. Geospatial metadata retrieval from web services

    Directory of Open Access Journals (Sweden)

    Ivanildo Barbosa

    Nowadays, producers of geospatial data in either raster or vector formats are able to make them available on the World Wide Web by deploying web services that enable users to access and query those contents even without specific geoprocessing software. Several providers around the world have deployed instances of WMS (Web Map Service), WFS (Web Feature Service) and WCS (Web Coverage Service), all of them specified by the Open Geospatial Consortium (OGC). In consequence, metadata about the available contents can be retrieved to be compared with similar offline datasets from other sources. This paper presents a brief summary and describes the matching process between the specifications for OGC web services (WMS, WFS and WCS) and the metadata specifications required by ISO 19115, adopted as the reference for several national metadata profiles, including the Brazilian one. This process focuses on retrieving metadata about the identification and data quality packages, and also indicates directions for retrieving metadata related to other packages. Therefore, users are able to assess whether the provided contents fit their purposes.
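
    The retrieval step the paper builds on is the standard OGC GetCapabilities operation; a minimal sketch of parsing WMS identification metadata follows. A canned response is parsed here; in practice it would be fetched from the service endpoint, which is a placeholder below:

    ```python
    # Minimal sketch: a WMS GetCapabilities response is XML from which
    # ISO 19115-style identification metadata (title, abstract) can be read.
    # In practice the XML comes from a request such as
    #   https://example.org/ows?service=WMS&request=GetCapabilities  (placeholder)
    import xml.etree.ElementTree as ET

    capabilities = """\
    <WMS_Capabilities xmlns="http://www.opengis.net/wms" version="1.3.0">
      <Service>
        <Title>Sample map server</Title>
        <Abstract>Raster and vector layers for testing.</Abstract>
      </Service>
    </WMS_Capabilities>"""

    ns = {"wms": "http://www.opengis.net/wms"}
    root = ET.fromstring(capabilities)
    print("Title:   ", root.findtext("wms:Service/wms:Title", namespaces=ns))
    print("Abstract:", root.findtext("wms:Service/wms:Abstract", namespaces=ns))
    ```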

  8. Web information retrieval based on ontology

    Science.gov (United States)

    Zhang, Jian

    2013-03-01

    The purpose of Information Retrieval (IR) is to find a set of documents that are relevant to a specific information need of a user. The traditional information retrieval model commonly used in commercial search engines is based on keyword indexing and Boolean logic queries. One big drawback of traditional information retrieval is that it typically retrieves information without an explicitly defined domain of interest to the user, so that a lot of irrelevant information is returned, burdening the user with picking useful answers out of these irrelevant results. In order to tackle this issue, many semantic web information retrieval models have been proposed recently. The main advantage of the Semantic Web is to enhance search mechanisms with the use of ontology mechanisms. In this paper, we present our approach to personalizing a web search engine based on ontology. In addition, key techniques are also discussed in our paper. Compared to previous research, our work concentrates on semantic similarity and the whole process, including query submission and information annotation.
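
    The paper's own similarity measure is not reproduced in the abstract; as a generic illustration of ontology-based semantic similarity, one common family of measures scores two concepts by their distance in the ontology's is-a hierarchy, as in this toy sketch (the ontology and the 1/(1+distance) formula are illustrative choices):

    ```python
    # Toy illustration: concepts closer together in a small is-a hierarchy
    # score higher. Ontology and measure are invented, not the paper's.
    PARENT = {                       # child -> parent ("is-a" edges)
        "laptop": "computer", "server": "computer",
        "computer": "device", "phone": "device", "device": "thing",
    }

    def ancestors(c):
        path = [c]
        while c in PARENT:
            c = PARENT[c]
            path.append(c)
        return path

    def similarity(a, b):
        pa, pb = ancestors(a), ancestors(b)
        common = next(x for x in pa if x in pb)       # lowest common ancestor
        dist = pa.index(common) + pb.index(common)    # edges via the LCA
        return 1.0 / (1.0 + dist)

    print(similarity("laptop", "server"))   # 0.33: siblings under "computer"
    print(similarity("laptop", "phone"))    # 0.25: related only via "device"
    ```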

  9. Blueprint of a Cross-Lingual Web Retrieval Collection

    NARCIS (Netherlands)

    Sigurbjörnsson, B.; Kamps, J.; de Rijke, M.; van Zwol, R.

    2005-01-01

    The world wide web is a natural setting for cross-lingual information retrieval; web content is essentially multilingual, and web searchers are often polyglots. Even though English has emerged as the lingua franca of the web, planning for a business trip or holiday usually involves digesting pages

  10. Network and User-Perceived Performance of Web Page Retrievals

    Science.gov (United States)

    Kruse, Hans; Allman, Mark; Mallasch, Paul

    1998-01-01

    The development of the HTTP protocol has been driven by the need to improve the network performance of the protocol by allowing the efficient retrieval of multiple parts of a web page without the need for multiple simultaneous TCP connections between a client and a server. We suggest that the retrieval of multiple page elements sequentially over a single TCP connection may result in a degradation of the perceived performance experienced by the user. We attempt to quantify this perceived degradation through the use of a model which combines a web retrieval simulation and an analytical model of TCP operation. Starting with the current HTTP/1.1 specification, we first suggest a client-side heuristic to improve the perceived transfer performance. We show that the perceived speed of the page retrieval can be increased without sacrificing data transfer efficiency. We then propose a new client/server extension to the HTTP/1.1 protocol to allow for the interleaving of page element retrievals. We finally address the issue of the display of advertisements on web pages, and in particular suggest a number of mechanisms which can make efficient use of IP multicast to send advertisements to a number of clients within the same network.
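
    To make the perceived-performance trade-off concrete, a back-of-envelope model (element sizes and bandwidth invented, much simpler than the paper's simulation) compares completion times of page elements under sequential versus perfectly interleaved transfer over one connection:

    ```python
    # Back-of-envelope model: sequential transfer completes early elements
    # sooner, while interleaving spreads progress across all elements, so
    # partial content for every element appears earlier. Numbers invented.
    sizes_kb = [30, 30, 30]     # three page elements
    bandwidth_kbps = 60         # shared connection throughput

    # Sequential over one connection: element i completes after all earlier ones.
    seq_done, t = [], 0.0
    for s in sizes_kb:
        t += s / bandwidth_kbps
        seq_done.append(t)

    # Perfect interleaving: all elements progress together, all finish at the end.
    total = sum(sizes_kb) / bandwidth_kbps
    inter_done = [total] * len(sizes_kb)

    print("sequential completion times:", seq_done)    # [0.5, 1.0, 1.5] s
    print("interleaved completion times:", inter_done) # [1.5, 1.5, 1.5] s
    ```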

  11. Usage of Web Service in Mobile Application for Parents and Students in Binus School Serpong

    Directory of Open Access Journals (Sweden)

    Karto Iskandar

    2016-09-01

    A web service is a service offered electronically by one device to communicate with other electronic devices using the World Wide Web. The smartphone is an electronic device that almost everyone has, and students and parents in particular use it for getting information about the school. In the BINUS School Serpong mobile application, web services are used for getting data, such as student and menu data, from the web server. The problem faced by BINUS School Serpong today is that application updates are time-consuming when using a native application, while application updates are very frequent. To resolve this problem, the BINUS School Serpong mobile application will use web services. This article shows the usage of web services with XML for retrieving student data. The result of this study is that by using web services, a smartphone can retrieve data consistently across multiple platforms.

  12. Comparing the Scale of Web Subject Directories Precision in Technical-Engineering Information Retrieval

    Directory of Open Access Journals (Sweden)

    Mehrdokht Wazirpour Keshmiri

    2012-07-01

    The main purpose of this research was to compare the precision of web subject directories in information retrieval for technical-engineering science. Information gathering was documentary and webometric. Keywords in technical-engineering science were chosen across twenty different subjects from the IEEE (Institute of Electrical and Electronics Engineers) and from engineering magazines hosted on the ScienceDirect site. These keywords were used in five high-utilization web subject directories: Yahoo, Google, Infomine, Intute and Dmoz. Since the first results in search tools are usually the most closely connected to the search keywords, the first ten results were evaluated in every search. These assessments consisted of the scale of precision, the scale of error, and the ratio of retrieved items in technical-engineering categories to all retrieved items. The criteria used for determining the scale of precision, based on widely used standards in the literature, consisted of presence of the keywords in the title, appearance of keywords in parts of the retrieved web pages, keyword adjacency, the URL of the page, the page description and subject categories. Analysis was carried out with the Kruskal-Wallis test and Fisher's LSD. Results revealed a meaningful difference in the precision of web subject directories in technical-engineering information retrieval, so the hypothesis was confirmed. The web subject directories ranked by precision as follows: Google, Yahoo, Intute, Dmoz and Infomine. The scale of observed error in the first results was another criterion used for comparing web subject directories; Yahoo had the lowest error and Infomine the highest. This research also compared the ratio of retrieved items across all categories of the web subject directories to retrieved items in technical-engineering categories, and results revealed a meaningful difference between them.

  13. Web multimedia information retrieval using improved Bayesian algorithm.

    Science.gov (United States)

    Yu, Yi-Jun; Chen, Chun; Yu, Yi-Min; Lin, Huai-Zhong

    2003-01-01

    The main thrust of this paper is the application of a novel data mining approach to the log of users' feedback to improve web multimedia information retrieval performance. A user space model was constructed based on data mining and then integrated into the original information space model to improve the accuracy of the new information space model. It can remove clutter and irrelevant text information and help to eliminate the mismatch between the page author's expression and the user's understanding and expectation. The user space model was also utilized to discover the relationship between high-level and low-level features for assigning weights. The authors propose an improved Bayesian algorithm for data mining. Experiments proved that the proposed algorithm is efficient.
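
    The improved Bayesian algorithm itself is not given in the abstract; as a baseline sketch of the general idea, a plain naive Bayes scorer trained on relevant versus irrelevant feedback logs could look like this (the toy logs are invented):

    ```python
    # Baseline sketch only: plain naive Bayes over user-feedback logs,
    # standing in for the paper's improved Bayesian algorithm.
    import math
    from collections import Counter

    relevant_docs = [["heart", "surgery", "video"], ["surgery", "clip"]]
    irrelevant_docs = [["music", "video"], ["music", "clip", "mix"]]

    def train(docs):
        counts = Counter(w for d in docs for w in d)
        return counts, sum(counts.values())

    rel_counts, rel_total = train(relevant_docs)
    irr_counts, irr_total = train(irrelevant_docs)
    vocab = set(rel_counts) | set(irr_counts)

    def log_prob(words, counts, total):
        # Laplace smoothing so unseen words don't zero out the product.
        return sum(math.log((counts[w] + 1) / (total + len(vocab))) for w in words)

    def relevance_score(words):
        return (log_prob(words, rel_counts, rel_total)
                - log_prob(words, irr_counts, irr_total))

    print(relevance_score(["surgery", "video"]))   # > 0: leans relevant
    print(relevance_score(["music", "mix"]))       # < 0: leans irrelevant
    ```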

  14. Design and Application of an Intelligent Agent for Web Information Discovery

    Institute of Scientific and Technical Information of China (English)

    闵君; 冯珊; 唐超; 许立达

    2003-01-01

    With the propagation of applications on the internet, the internet has become a great information source which supplies users with valuable information. But it is hard for users to quickly acquire the right information on the web. This paper presents an intelligent agent for internet applications to retrieve and extract web information under the user's guidance. The intelligent agent is made up of a retrieval script to identify web sources, an extraction script based on the document object model to express the extraction process, a data translator to export the extracted information into knowledge bases with frame structures, and a data reasoner to answer users' questions. A GUI tool named Script Writer helps to generate the extraction script visually, and knowledge rule databases help to extract the wanted information and to generate answers to questions.

  15. Improving life sciences information retrieval using semantic web technology.

    Science.gov (United States)

    Quan, Dennis

    2007-05-01

    The ability to retrieve relevant information is at the heart of every aspect of research and development in the life sciences industry. Information is often distributed across multiple systems and recorded in a way that makes it difficult to piece together the complete picture. Differences in data formats, naming schemes and network protocols amongst information sources, both public and private, must be overcome, and user interfaces not only need to be able to tap into these diverse information sources but must also assist users in filtering out extraneous information and highlighting the key relationships hidden within an aggregated set of information. The Semantic Web community has made great strides in proposing solutions to these problems, and many efforts are underway to apply Semantic Web techniques to the problem of information retrieval in the life sciences space. This article gives an overview of the principles underlying a Semantic Web-enabled information retrieval system: creating a unified abstraction for knowledge using the RDF semantic network model; designing semantic lenses that extract contextually relevant subsets of information; and assembling semantic lenses into powerful information displays. Furthermore, concrete examples of how these principles can be applied to life science problems including a scenario involving a drug discovery dashboard prototype called BioDash are provided.
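
    As an illustration of the first two principles (RDF as the unified knowledge abstraction, and lenses that extract contextually relevant subsets), here is a small sketch using the Python rdflib package; the vocabulary and facts are invented:

    ```python
    # Small illustration of the principles above: knowledge in an RDF graph,
    # and a "semantic lens" extracting a contextually relevant subset.
    # Requires the rdflib package; vocabulary and data are invented.
    from rdflib import Graph, Literal, Namespace, RDF

    EX = Namespace("http://example.org/bio#")
    g = Graph()
    g.add((EX.aspirin, RDF.type, EX.Compound))
    g.add((EX.aspirin, EX.inhibits, EX.COX1))
    g.add((EX.aspirin, EX.meltingPoint, Literal("135 C")))
    g.add((EX.COX1, RDF.type, EX.Protein))

    def target_lens(graph, compound):
        """Lens: keep only the compound's protein-interaction facts,
        dropping physico-chemical details irrelevant to this context."""
        return list(graph.triples((compound, EX.inhibits, None)))

    for triple in target_lens(g, EX.aspirin):
        print(triple)
    ```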

  16. A novel architecture for information retrieval system based on semantic web

    Science.gov (United States)

    Zhang, Hui

    2011-12-01

    Nowadays, the web has enabled an explosive growth of information sharing (there are currently over 4 billion pages covering most areas of human endeavor), so the web faces a new challenge of information overload. The challenge now before us is not only to help people locate relevant information precisely but also to access and aggregate a variety of information from different resources automatically. Current web documents are in human-oriented formats; they are suitable for presentation, but machines cannot understand the meaning of the documents. To address this issue, Berners-Lee proposed the concept of the semantic web. With semantic web technology, web information can be understood and processed by machines, providing new possibilities for automatic web information processing. A main problem of semantic web information retrieval is that when there is not enough knowledge in such an information retrieval system, the system returns a large number of meaningless results to users because of the huge amount of information. In this paper, we present the architecture of an information retrieval system based on the semantic web. In addition, our system employs an inference engine to check whether a query should be posed to the keyword-based search engine or to the semantic search engine.

  17. Towards an Intelligent Possibilistic Web Information Retrieval Using Multiagent System

    Science.gov (United States)

    Elayeb, Bilel; Evrard, Fabrice; Zaghdoud, Montaceur; Ahmed, Mohamed Ben

    2009-01-01

    Purpose: The purpose of this paper is to make a scientific contribution to web information retrieval (IR). Design/methodology/approach: A multiagent system for web IR is proposed based on new technologies: Hierarchical Small-Worlds (HSW) and Possibilistic Networks (PN). This system is based on a possibilistic qualitative approach which extends the…

  18. Millennial Undergraduate Research Strategies in Web and Library Information Retrieval Systems

    Science.gov (United States)

    Porter, Brandi

    2011-01-01

    This article summarizes the author's dissertation regarding search strategies of millennial undergraduate students in Web and library online information retrieval systems. Millennials bring a unique set of search characteristics and strategies to their research since they have never known a world without the Web. Through the use of search engines,…

  19. Improving Web Page Retrieval using Search Context from Clicked Domain Names

    NARCIS (Netherlands)

    Li, R.

    Search context is a crucial factor that helps to understand a user’s information need in ad-hoc Web page retrieval. A query log of a search engine contains rich information on issued queries and their corresponding clicked Web pages. The clicked data implies its relevance to the query and can be

  20. Information Retrieval Models

    NARCIS (Netherlands)

    Hiemstra, Djoerd; Göker, Ayse; Davies, John

    2009-01-01

    Many applications that handle information on the internet would be completely inadequate without the support of information retrieval technology. How would we find information on the world wide web if there were no web search engines? How would we manage our email without spam filtering? Much of the

  1. WEB STRUCTURE MINING

    Directory of Open Access Journals (Sweden)

    CLAUDIA ELENA DINUCĂ

    2011-01-01

    The World Wide Web has become one of the most valuable resources for information retrieval and knowledge discovery due to the constantly increasing amount of data available online. Given the web's dimensions, users easily get lost in its rich hyper structure. Application of data mining methods is the right solution for knowledge discovery on the Web. The knowledge extracted from the Web can be used to improve the performance of Web information retrieval, question answering and Web-based data warehousing. In this paper, I provide an introduction to the categories of Web mining and focus on one of them: Web structure mining. Web structure mining, one of the three categories of web data mining, is a tool used to identify the relationships between Web pages linked by information or direct link connections. It offers information about how different pages are linked together to form this huge web. Web structure mining finds hidden basic structures and uses hyperlinks for further web applications such as web search.
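
    Hyperlink-based structure mining is typified by PageRank; as a generic illustration (not specific to this paper), a compact power-iteration version on a toy link graph:

    ```python
    # Compact power-iteration PageRank on a toy link graph, illustrating
    # the hyperlink-based structure mining discussed above.
    def pagerank(links, damping=0.85, iterations=50):
        pages = list(links)
        rank = {p: 1.0 / len(pages) for p in pages}
        for _ in range(iterations):
            new = {}
            for p in pages:
                incoming = sum(rank[q] / len(links[q])
                               for q in pages if p in links[q])
                new[p] = (1 - damping) / len(pages) + damping * incoming
            rank = new
        return rank

    # A links to B and C; B links to C; C links back to A.
    toy_web = {"A": ["B", "C"], "B": ["C"], "C": ["A"]}
    for page, score in sorted(pagerank(toy_web).items(), key=lambda kv: -kv[1]):
        print(page, round(score, 3))
    ```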

  2. Web-based control application using WebSocket

    International Nuclear Information System (INIS)

    Furukawa, Y.

    2012-01-01

    WebSocket allows asynchronous full-duplex communication between a Web-based (i.e. JavaScript-based) application and a Web server. WebSocket started as a part of HTML5 standardization but has now been separated from HTML5 and is being developed independently. Using WebSocket, it becomes easy to develop platform-independent presentation-layer applications for accelerator and beamline control software. In addition, a Web browser is the only application program that needs to be installed on the client computer. WebSocket-based applications communicate with the WebSocket server using simple text-based messages, so WebSocket is applicable to message-based control systems like MADOCA, which was developed for the SPring-8 control system. A simple WebSocket server for the MADOCA control system and a simple motor control application were successfully made as a first trial of a WebSocket control application. Using Google Chrome (version 13.0) on Debian/Linux and Windows 7, Opera (version 11.0) on Debian/Linux, and Safari (version 5.0.3) on Mac OS X as clients, the motors could be controlled using a WebSocket-based Web application. A diffractometer control application for use in synchrotron radiation diffraction experiments was also developed. (author)
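
    A minimal Python sketch of the simple text-message pattern described above, using the third-party "websockets" package; the command format and device name are invented, not MADOCA's protocol:

    ```python
    # Minimal sketch of WebSocket-based control with simple text messages.
    # Requires the "websockets" package (version 10.1 or newer); the command
    # format and device name are invented, not MADOCA's.
    import asyncio
    import websockets

    async def control_server(ws):
        async for message in ws:                 # e.g. "move theta 10.5"
            cmd, device, value = message.split()
            # A real server would forward this to the control system here.
            await ws.send(f"ok {device} {value}")

    async def main():
        async with websockets.serve(control_server, "localhost", 8765):
            async with websockets.connect("ws://localhost:8765") as client:
                await client.send("move theta 10.5")
                print(await client.recv())       # -> "ok theta 10.5"

    asyncio.run(main())
    ```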

  3. The Role of the Medical Students’ Emotional Mood in Information Retrieval from the Web

    Directory of Open Access Journals (Sweden)

    Marzieh Yari Zanganeh

    2018-04-01

    Background: Online information retrieval is a process whose result is influenced by changes in the emotional mood of the user. It seems reasonable to include emotional aspects in developing information retrieval systems in order to optimize the experience of the users. Therefore, this study aimed to identify the role of positive and negative affect in the information seeking process on the web among students of medical sciences. Methods: Methodologically, the present study was experimental and applied research. In line with the experimental method, observation and a questionnaire were used. The participants were students of various fields of medical sciences. The research sample included 50 students of Shiraz University of Medical Sciences selected through purposeful sampling; they regularly used the World Wide Web and the Google engine for information retrieval in educational, research, personal, or managerial activities. In order to collect the data, search tasks were characterized by topic, sequence in the search process, difficulty level, and the searcher's interest in the task. Face and content validity of the questionnaire were confirmed by experts. Reliability of the questionnaire was tested with Cronbach's alpha; the coefficients (PA = 0.777, NA = 0.754) showed a high rate of reliability for the PANAS questionnaire. The collected data were analyzed using SPSS, version 20.0; to test the research hypothesis, t-tests and paired-samples t-tests were used, with P < 0.05 considered significant. Conclusion: Information retrieval systems on the Web should identify positive and negative affect in the information seeking process from perceivable signs in human-computer interaction. The automatic identification of users' affect opens new dimensions for user modeling and information retrieval systems aiming at successful retrieval from the Web.

  4. Introduction to the JASIST Special Topic Issue on Web Retrieval and Mining: A Machine Learning Perspective.

    Science.gov (United States)

    Chen, Hsinchun

    2003-01-01

    Discusses information retrieval techniques used on the World Wide Web. Topics include machine learning in information extraction; relevance feedback; information filtering and recommendation; text classification and text clustering; Web mining, based on data mining techniques; hyperlink structure; and Web size. (LRW)

  5. Web-based information search and retrieval: effects of strategy use and age on search success.

    Science.gov (United States)

    Stronge, Aideen J; Rogers, Wendy A; Fisk, Arthur D

    2006-01-01

    The purpose of this study was to investigate the relationship between strategy use and search success on the World Wide Web (i.e., the Web) for experienced Web users. An additional goal was to extend understanding of how the age of the searcher may influence strategy use. Current investigations of information search and retrieval on the Web have provided an incomplete picture of Web strategy use because participants have not been given the opportunity to demonstrate their knowledge of Web strategies while also searching for information on the Web. Using both behavioral and knowledge-engineering methods, we investigated searching behavior and system knowledge for 16 younger adults (M = 20.88 years of age) and 16 older adults (M = 67.88 years). Older adults were less successful than younger adults in finding correct answers to the search tasks. Knowledge engineering revealed that the age-related effect resulted from ineffective search strategies and amount of Web experience rather than age per se. Our analysis led to the development of a decision-action diagram representing search behavior for both age groups. Older adults had more difficulty than younger adults when searching for information on the Web. However, this difficulty was related to the selection of inefficient search strategies, which may have been attributable to a lack of knowledge about available Web search strategies. Actual or potential applications of this research include training Web users to search more effectively and suggestions to improve the design of search engines.

  6. Web Application Vulnerabilities

    OpenAIRE

    Yadav, Bhanu

    2014-01-01

    Web application security has been a major issue in information technology since the advent of dynamic web applications. The main objective of this project was to carry out a detailed study of the top three web application vulnerabilities, namely injection, cross-site scripting, and broken authentication and session management; to present the situations in which an application can be vulnerable to these web threats; and finally to provide preventative measures against them. ...

  7. Web tools for effective retrieval, visualization, and evaluation of cardiology medical images and records

    Science.gov (United States)

    Masseroli, Marco; Pinciroli, Francesco

    2000-12-01

    To provide easy retrieval, integration and evaluation of multimodal cardiology images and data in a web browser environment, distributed application technologies and Java programming were used to implement a client-server architecture based on software agents. The server side manages secure connections and queries to heterogeneous remote databases and file systems containing patient personal and clinical data. The client side is a Java applet running in a web browser, providing a friendly medical user interface to perform queries on patient and medical test data and to integrate and visualize the various query results properly. A set of tools based on the Java Advanced Imaging API enables processing and analysis of the retrieved cardiology images and quantification of their features in different regions of interest. The platform independence of Java technology makes the developed prototype easy to manage in a centralized form and to provide at any site with an intranet or internet connection. By giving healthcare providers effective tools for querying, visualizing and comprehensively evaluating cardiology medical images and records in all locations where they may need them, i.e. emergency rooms, operating theaters, wards, or even outpatient clinics, the developed prototype represents an important aid in providing more efficient diagnoses and medical treatments.

  8. Content-based multimedia retrieval: indexing and diversification

    NARCIS (Netherlands)

    van Leuken, R.H.

    2009-01-01

    The demand for efficient systems that facilitate searching in multimedia databases and collections is vastly increasing. Application domains include criminology, musicology, trademark registration, medicine and image or video retrieval on the web. This thesis discusses content-based retrieval

  9. Web User Profile Using XUL and Information Retrieval Techniques

    Directory of Open Access Journals (Sweden)

    Dan MUNTEANU

    2008-12-01

    This paper presents the importance of the user profile in information retrieval, information filtering and recommender systems using explicit and implicit feedback. A Firefox extension (based on XUL) used for gathering the data needed to infer a web user profile is presented, together with an example file of collected data. An algorithm for creating and updating the user profile, keeping track of a fixed number k of subjects of interest, is also presented.
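
    The abstract does not spell out the algorithm; a simple sketch of one way to keep a profile limited to a fixed number k of weighted subjects, updated from implicit feedback events, is shown below (the decay scheme is an assumption, not the article's exact algorithm):

    ```python
    # Simple sketch of a user profile limited to k subjects of interest,
    # updated from implicit feedback; decay and weights are assumed.
    def update_profile(profile, subject, weight=1.0, k=3, decay=0.9):
        profile = {s: w * decay for s, w in profile.items()}  # age old interests
        profile[subject] = profile.get(subject, 0.0) + weight
        # Keep only the k strongest subjects.
        top = sorted(profile.items(), key=lambda kv: -kv[1])[:k]
        return dict(top)

    profile = {}
    for visited in ["python", "python", "tennis", "cooking", "python", "jazz"]:
        profile = update_profile(profile, visited)
    print(profile)   # at most 3 subjects, dominated by "python"
    ```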

  10. Applying Semantic Web technologies to improve the retrieval, credibility and use of health-related web resources.

    Science.gov (United States)

    Mayer, Miguel A; Karampiperis, Pythagoras; Kukurikos, Antonis; Karkaletsis, Vangelis; Stamatakis, Kostas; Villarroel, Dagmar; Leis, Angela

    2011-06-01

    The number of health-related websites is increasing day-by-day; however, their quality is variable and difficult to assess. Various "trust marks" and filtering portals have been created in order to assist consumers in retrieving quality medical information. Consumers are using search engines as the main tool to get health information; however, the major problem is that the meaning of the web content is not machine-readable in the sense that computers cannot understand words and sentences as humans can. In addition, trust marks are invisible to search engines, thus limiting their usefulness in practice. During the last five years there have been different attempts to use Semantic Web tools to label health-related web resources to help internet users identify trustworthy resources. This paper discusses how Semantic Web technologies can be applied in practice to generate machine-readable labels and display their content, as well as to empower end-users by providing them with the infrastructure for expressing and sharing their opinions on the quality of health-related web resources.

  11. Design and development of semantic web-based system for computer science domain-specific information retrieval

    Directory of Open Access Journals (Sweden)

    Ritika Bansal

    2016-09-01

    In a semantic web-based system, the concept of ontology is used to search results by the contextual meaning of the input query instead of by keyword matching. From the research literature, there seems to be a need for a tool which can provide an easy interface for complex queries in natural language and retrieve domain-specific information from an ontology. This research paper proposes the IRSCSD system (Information Retrieval System for Computer Science Domain) as a solution. This system offers advanced querying and browsing of structured data, with search results automatically aggregated and rendered directly in a consistent user interface, thus reducing the manual effort of users. So, the main objective of this research is the design and development of a semantic web-based system integrating ontology for domain-specific retrieval support. The methodology followed is piecemeal research involving the following stages. The first stage involves designing the framework for the semantic web-based system. The second stage builds the prototype for the framework using the Protégé tool. The third stage deals with natural language query conversion into the SPARQL query language using the Python-based QUEPY framework. The fourth stage involves firing the converted SPARQL queries at the ontology through Apache's Jena API to fetch the results. Lastly, the prototype has been evaluated in order to ensure its efficiency and usability. Thus, this research paper throws light on framework development for a semantic web-based system that assists in efficient retrieval of domain-specific information, natural language query interpretation into a semantic web language, creation of a domain-specific ontology and its mapping with related ontologies. This research paper also provides approaches and metrics for ontology evaluation, applied to the prototype ontology developed, to study performance based on accessibility of the required domain-related information.
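
    As a flavor of the pipeline's middle stages, here is a hand-written SPARQL query of the kind a natural-language converter such as QUEPY emits, executed with the Python rdflib package rather than the Jena API the prototype uses; the ontology terms are invented:

    ```python
    # Flavor of the NL-to-SPARQL pipeline's output stage, run with rdflib
    # (the prototype itself uses Apache Jena). Ontology terms are invented.
    from rdflib import Graph, Literal, Namespace, RDF

    CS = Namespace("http://example.org/cs#")
    g = Graph()
    g.add((CS.quicksort, RDF.type, CS.SortingAlgorithm))
    g.add((CS.quicksort, CS.averageComplexity, Literal("O(n log n)")))
    g.add((CS.bubblesort, RDF.type, CS.SortingAlgorithm))
    g.add((CS.bubblesort, CS.averageComplexity, Literal("O(n^2)")))

    # "Which sorting algorithms run in O(n log n) on average?" as SPARQL:
    query = """
    PREFIX cs: <http://example.org/cs#>
    SELECT ?alg WHERE {
        ?alg a cs:SortingAlgorithm ;
             cs:averageComplexity "O(n log n)" .
    }
    """
    for row in g.query(query):
        print(row.alg)
    ```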

  12. Correct software in web applications and web services

    CERN Document Server

    Thalheim, Bernhard; Prinz, Andreas; Buchberger, Bruno

    2015-01-01

    The papers in this volume aim at obtaining a common understanding of the challenging research questions in web applications comprising web information systems, web services, and web interoperability; obtaining a common understanding of verification needs in web applications; achieving a common understanding of the available rigorous approaches to system development, and the cases in which they have succeeded; identifying how rigorous software engineering methods can be exploited to develop suitable web applications; and at developing a European-scale research agenda combining theory, methods a

  13. Engineering Web Applications

    DEFF Research Database (Denmark)

    Casteleyn, Sven; Daniel, Florian; Dolog, Peter

    Nowadays, Web applications are almost omnipresent. The Web has become a platform not only for information delivery, but also for eCommerce systems, social networks, mobile services, and distributed learning environments. Engineering Web applications involves many intrinsic challenges due to their distributed nature, content orientation, and the requirement to make them available to a wide spectrum of users who are unknown in advance. The authors discuss these challenges in the context of well-established engineering processes, covering the whole product lifecycle from requirements engineering through design and implementation to deployment and maintenance. They stress the importance of models in Web application development, and they compare well-known Web-specific development processes like WebML, WSDM and OOHDM to traditional software development approaches like the waterfall model and the spiral …

  14. Bat-Inspired Algorithm Based Query Expansion for Medical Web Information Retrieval.

    Science.gov (United States)

    Khennak, Ilyes; Drias, Habiba

    2017-02-01

    With the increasing amount of medical data available on the Web, looking for health information has become one of the most widely searched topics on the Internet. Patients and people of several backgrounds are now using Web search engines to acquire medical information, including information about a specific disease, medical treatment or professional advice. Nonetheless, due to a lack of medical knowledge, many laypeople have difficulties in forming appropriate queries to articulate their inquiries, which renders their search queries imprecise due to the use of unclear keywords. The use of these ambiguous and vague queries to describe patients' needs has resulted in a failure of Web search engines to retrieve accurate and relevant information. One of the most natural and promising methods to overcome this drawback is query expansion. In this paper, an original approach based on the Bat Algorithm is proposed to improve the retrieval effectiveness of query expansion in the medical field. In contrast to the existing literature, the proposed approach uses the Bat Algorithm to find the best expanded query among a set of expanded query candidates, while maintaining low computational complexity. Moreover, this new approach allows the length of the expanded query to be determined empirically. Numerical results on MEDLINE, the online medical information database, show that the proposed approach is more effective and efficient compared to the baseline.
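
    The abstract gives the idea but not the equations; a heavily simplified, toy version of a bat-style population search that selects a subset of expansion terms is sketched below, with the fitness function and update rules standing in for the paper's actual formulation:

    ```python
    # Heavily simplified, toy bat-style population search over subsets of
    # candidate expansion terms; fitness and updates are stand-ins for the
    # paper's actual Bat Algorithm.
    import random
    random.seed(0)

    CANDIDATES = ["cardiac", "myocardial", "infarction", "therapy", "guitar"]
    RELEVANT = {"cardiac", "myocardial", "infarction"}   # oracle for toy fitness

    def fitness(mask):
        chosen = {t for t, bit in zip(CANDIDATES, mask) if bit}
        if not chosen:
            return 0.0
        # Reward overlap with relevant terms, penalize noise (toy F1-like score).
        return 2 * len(chosen & RELEVANT) / (len(chosen) + len(RELEVANT))

    def bat_search(n_bats=10, n_iter=30, flip_rate=0.3):
        bats = [[random.randint(0, 1) for _ in CANDIDATES] for _ in range(n_bats)]
        best = max(bats, key=fitness)
        for _ in range(n_iter):
            for i, bat in enumerate(bats):
                # Move toward the best solution, with occasional random flips
                # (a discrete analogue of a bat's velocity and loudness).
                trial = [b if random.random() > flip_rate else g
                         for b, g in zip(bat, best)]
                if random.random() < flip_rate:
                    j = random.randrange(len(trial))
                    trial[j] ^= 1
                if fitness(trial) >= fitness(bat):
                    bats[i] = trial
            best = max(bats + [best], key=fitness)
        return [t for t, bit in zip(CANDIDATES, best) if bit]

    print("expanded query terms:", bat_search())
    ```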

  15. Application of Google Maps API service for creating web map of information retrieved from CORINE land cover databases

    Directory of Open Access Journals (Sweden)

    Kilibarda Milan

    2010-01-01

    Today, the Google Maps API, an application based on Ajax technology offered as a standard web service, facilitates the publication of interactive web maps, thus opening new possibilities in relation to classical analogue maps. CORINE land cover databases are recognized as fundamental reference data sets for numerous spatial analyses. The theoretical and applicable aspects of the Google Maps API cartographic service are considered through the case of creating a web map of change in urban areas in Belgrade and its surroundings from 2000 to 2006, obtained from CORINE databases.

  16. Significant Benefits from Libraries in Web 3.0 Environment

    African Journals Online (AJOL)

    pc

    2018-03-05

    Keywords: Web 3.0, Library 3.0, Web 3.0 applications, Semantic Web ... providing virtual information services, and other services cannot be ... the web's third generation: definition, beginning, and retrieval system. The study ...

  17. Mobile medical image retrieval

    Science.gov (United States)

    Duc, Samuel; Depeursinge, Adrien; Eggel, Ivan; Müller, Henning

    2011-03-01

    Images are an integral part of medical practice for diagnosis, treatment planning and teaching. Image retrieval has gained in importance mainly as a research domain over the past 20 years. Both textual and visual retrieval of images are essential. As mobile devices have become reliable, with functionality equaling that of former desktop clients, mobile computing has gained ground and many applications have been explored. This creates a new field of mobile information search and access, and in this context images can play an important role as they often allow understanding complex scenarios much more quickly and easily than free text. Mobile information retrieval in general has skyrocketed over the past year, with many new applications and tools being developed and all sorts of interfaces being adapted to mobile clients. This article describes constraints of an information retrieval system including visual and textual information retrieval from the medical literature of BioMedCentral and of the RSNA journals Radiology and Radiographics. Solutions for mobile data access, with an example on an iPhone in a web-based environment, are presented, as iPhones are frequently used and the operating system is bound to become the most frequent smartphone operating system in 2011. A web-based scenario was chosen to allow for use by other smartphone platforms such as Android as well. Constraints of small screens and navigation with touch screens are taken into account in the development of the application. A hybrid choice had to be made to allow taking pictures with the cell phone camera and uploading them for visual similarity search, as most producers of smartphones block this functionality for web applications. Mobile information access, and in particular access to images, can be surprisingly efficient and effective on smaller screens. Images can be read on screen much faster and relevance of documents can be identified quickly through the use of images contained in

  18. Using the open Web as an information resource and scholarly Web search engines as retrieval tools for academic and research purposes

    Directory of Open Access Journals (Sweden)

    Filistea Naude

    2010-08-01

    This study provided insight into the significance of the open Web as an information resource and of Web search engines as research tools amongst academics. The academic staff establishment of the University of South Africa (Unisa) was invited to participate in a questionnaire survey, covering 1188 staff members from five colleges. This study culminated in a PhD dissertation in 2008. One hundred and eighty-seven respondents participated in the survey, giving a response rate of 15.7%. The results of this study show that academics have indeed accepted the open Web as a useful information resource and Web search engines as retrieval tools when seeking information for academic and research work. The majority of respondents used the open Web and Web search engines on a daily or weekly basis to source academic and research information. The main obstacles to using the open Web and Web search engines included lack of time to search and browse the Web, information overload, poor network speed and the slow downloading speed of webpages.

  19. Using the open Web as an information resource and scholarly Web search engines as retrieval tools for academic and research purposes

    Directory of Open Access Journals (Sweden)

    Filistea Naude

    2010-12-01

    This study provided insight into the significance of the open Web as an information resource and of Web search engines as research tools amongst academics. The academic staff establishment of the University of South Africa (Unisa) was invited to participate in a questionnaire survey, covering 1188 staff members from five colleges. This study culminated in a PhD dissertation in 2008. One hundred and eighty-seven respondents participated in the survey, giving a response rate of 15.7%. The results of this study show that academics have indeed accepted the open Web as a useful information resource and Web search engines as retrieval tools when seeking information for academic and research work. The majority of respondents used the open Web and Web search engines on a daily or weekly basis to source academic and research information. The main obstacles to using the open Web and Web search engines included lack of time to search and browse the Web, information overload, poor network speed and the slow downloading speed of webpages.

  20. Information Retrieval Strategies of Millennial Undergraduate Students in Web and Library Database Searches

    Science.gov (United States)

    Porter, Brandi

    2009-01-01

    Millennial students make up a large portion of undergraduate students attending colleges and universities, and they have a variety of online resources available to them to complete academically related information searches, primarily Web based and library-based online information retrieval systems. The content, ease of use, and required search…

  1. [A systematic evaluation of application of the web-based cancer database].

    Science.gov (United States)

    Huang, Tingting; Liu, Jialin; Li, Yong; Zhang, Rui

    2013-10-01

    In order to support the theory and practice of web-based cancer database development in China, we applied a systematic evaluation to assess the state of development of web-based cancer databases at home and abroad. We performed computer-based retrieval of the Ovid-MEDLINE, SpringerLink, EBSCOhost, Wiley Online Library and CNKI databases for papers published between Jan. 1995 and Dec. 2011, and retrieved the references of these papers by hand. We selected qualified papers according to pre-established inclusion and exclusion criteria, and carried out information extraction and analysis of the papers. Searching the online databases, we obtained 1244 papers, and checking the reference lists, we found another 19 articles. Thirty-one articles met the inclusion and exclusion criteria; we extracted the evidence from them and assessed it. Analysis of this evidence showed that the U.S.A. ranked first, accounting for 26%. Thirty-nine percent of these web-based cancer databases are comprehensive cancer databases. As for single-cancer databases, breast cancer and prostate cancer are at the top, each accounting for 10%. Thirty-two percent of the cancer databases are associated with cancer gene information. As for technical applications, MySQL and PHP were the most widely applied, at nearly 23% each.

  2. MedlinePlus Connect: Web Application

    Science.gov (United States)

    https://medlineplus.gov/connect/application.html

  3. Engineering Adaptive Web Applications

    DEFF Research Database (Denmark)

    Dolog, Peter

    2007-01-01

    Information and services on the web are accessible for everyone. Users of the web differ in their background, culture, political and social environment, interests and so on. Ambient intelligence was envisioned as a concept for systems which are able to adapt to user actions and needs. With the growing amount of information and services, web applications become natural candidates to adopt the concepts of ambient intelligence. Such applications can deal with diverse user intentions and actions based on the user profile and can suggest the combination of information content and services which suit the user profile the most. This paper summarizes the domain engineering framework for such adaptive web applications. The framework provides guidelines to develop adaptive web applications as members of a family. It suggests how to utilize the design artifacts as knowledge which can be used …

  4. Building Social Web Applications

    CERN Document Server

    Bell, Gavin

    2009-01-01

    Building a web application that attracts and retains regular visitors is tricky enough, but creating a social application that encourages visitors to interact with one another requires careful planning. This book provides practical solutions to the tough questions you'll face when building an effective community site -- one that makes visitors feel like they've found a new home on the Web. If your company is ready to take part in the social web, this book will help you get started. Whether you're creating a new site from scratch or reworking an existing site, Building Social Web Applications

  5. Maintenance-Ready Web Application Development

    Directory of Open Access Journals (Sweden)

    Ion IVAN

    2016-01-01

    The current paper tackles the subject of developing maintenance-ready web applications. Maintenance is presented as a core stage in a web application's lifecycle. The concept of maintenance-readiness is defined in the context of web application development. Web application maintenance task types are enunciated and suitable task types are identified for further analysis. The research hypothesis is formulated based on a direct link between tackling maintenance in the development stage and reducing overall maintenance costs. A live maintenance-ready web application is presented and maintenance-related aspects are highlighted. The web application features that render it maintenance-ready are emphasized. The costs of designing and building the web application to be maintenance-ready are disclosed, as are the savings in maintenance development effort facilitated by maintenance-ready features. Maintenance data were collected from 40 projects implemented by a web development company. Homogeneity and diversity of the collected data are evaluated. A data sample is presented, and the size and comprehensive nature of the entire dataset are depicted. The research hypothesis is validated and conclusions are formulated on the topic of developing maintenance-ready web applications. The limits of the research process which formed the basis for the current paper are enunciated. Future research topics are submitted for debate.

  6. Adopting and adapting a commercial view of web services for the Navy

    Science.gov (United States)

    Warner, Elizabeth; Ladner, Roy; Katikaneni, Uday; Petry, Fred

    2005-05-01

    Web Services are being adopted as the enabling technology to provide net-centric capabilities for many Department of Defense operations. The Navy Enterprise Portal, for example, is Web Services-based, and the Department of the Navy is promulgating guidance for developing Web Services. Web Services, however, only constitute a baseline specification that provides the foundation on which users, under current approaches, write specialized applications in order to retrieve data over the Internet. Application development may increase dramatically as the number of different available Web Services increases. Reasons for specialized application development include XML schema versioning differences, adoption/use of diverse business rules, security access issues, and time/parameter naming constraints, among others. We are currently developing for the US Navy a system which will improve delivery of timely and relevant meteorological and oceanographic (MetOc) data to the warfighter. Our objective is to develop an Advanced MetOc Broker (AMB) that leverages Web Services technology to identify, retrieve and integrate relevant MetOc data in an automated manner. The AMB will utilize a Mediator, which will be developed by applying ontological research and schema matching techniques to MetOc forms of data. The AMB, using the Mediator, will support a new, advanced approach to the use of Web Services; namely, the automated identification, retrieval and integration of MetOc data. Systems based on this approach will then not require extensive end-user application development for each Web Service from which data can be retrieved. Users anywhere on the globe will be able to receive timely environmental data that fits their particular needs.

  7. Efficient Retrieval of the Top-k Most Relevant Spatial Web Objects

    DEFF Research Database (Denmark)

    Cong, Gao; Jensen, Christian Søndergaard; Wu, Dingming

    2009-01-01

    The conventional Internet is acquiring a geo-spatial dimension. Web documents are being geo-tagged, and geo-referenced objects such as points of interest are being associated with descriptive text documents. The resulting fusion of geo-location and documents enables a new kind of top-k query that takes into account both location proximity and text relevancy. To our knowledge, only naive techniques exist that are capable of computing a general web information retrieval query while also taking location into account. This paper proposes a new indexing framework for location-aware top-k text … both text relevancy and location proximity to prune the search space. Results of empirical studies with an implementation of the framework demonstrate that the paper's proposal offers scalability and is capable of excellent performance.

  8. Express web application development

    CERN Document Server

    Yaapa, Hage

    2013-01-01

    Express Web Application Development is a practical introduction to learning about Express. Each chapter introduces you to a different area of Express, using screenshots and examples to get you up and running as quickly as possible. If you are looking to use Express to build your next web application, "Express Web Application Development" will help you get started and take you right through to Express's advanced features. You will need to have an intermediate knowledge of JavaScript to get the most out of this book.

  9. Integrating Data Warehouses with Web Data

    DEFF Research Database (Denmark)

    Perez, Juan Manuel; Berlanga, Rafael; Aramburu, Maria Jose

    This paper surveys the most relevant research on combining Data Warehouse (DW) and Web data. It studies the XML technologies that are currently being used to integrate, store, query and retrieve web data, and their application to data warehouses. The paper addresses the problem of integrating...

  10. WAPTT - Web Application Penetration Testing Tool

    Directory of Open Access Journals (Sweden)

    DURIC, Z.

    2014-02-01

    Web application vulnerabilities allow attackers to perform malicious actions that range from gaining unauthorized account access to obtaining sensitive data. The number of reported web application vulnerabilities has increased dramatically in the last decade. Most vulnerabilities result from improper input validation and sanitization. The most important vulnerabilities based on improper input validation and sanitization are SQL injection (SQLI), cross-site scripting (XSS) and buffer overflow (BOF). In order to address these vulnerabilities, we designed and developed WAPTT (Web Application Penetration Testing Tool), a web application penetration testing tool. Unlike other web application penetration testing tools, this tool is modular and can be easily extended by the end user. In order to improve the efficiency of SQLI vulnerability detection, WAPTT uses an efficient algorithm for page similarity detection. The proposed tool showed promising results compared to six well-known web application scanners in detecting various web application vulnerabilities.
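
    WAPTT's specific page-similarity algorithm is not given in the abstract; a generic sketch of how page similarity supports blind SQLI detection, using a sequence-similarity ratio over canned example responses, follows:

    ```python
    # Rough sketch of page-similarity-based blind SQLI probing: compare the
    # response to a "true" and a "false" injected condition; a large
    # difference suggests the input reaches the database query. Generic
    # technique, not WAPTT's algorithm; the pages below are canned examples.
    from difflib import SequenceMatcher

    def similarity(page_a, page_b):
        return SequenceMatcher(None, page_a, page_b).ratio()

    baseline   = "<html><body>Welcome back, user! 3 items found.</body></html>"
    true_case  = "<html><body>Welcome back, user! 3 items found.</body></html>"
    false_case = "<html><body>No items found.</body></html>"

    # id=1 AND 1=1 should look like the baseline; id=1 AND 1=2 should not.
    if similarity(baseline, true_case) > 0.95 > similarity(baseline, false_case):
        print("response depends on injected condition: possible SQL injection")
    ```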

  11. An Implementation of Semantic Web System for Information retrieval using J2EE Technologies.

    OpenAIRE

    B.Hemanth kumar,; Prof. M.Surendra Prasad Babu

    2011-01-01

    Accessing web resources (information) is an essential facility provided by web applications to everybody. The Semantic Web is one of the systems that provide a facility to access resources through web service applications. The Semantic Web and Web Services are newly emerging web-based technologies. An automatic information processing system can be developed by using the Semantic Web and Web Services, each having its own contribution within the context of developing web-based information systems and applications...

  12. Comparing Web Applications with Desktop Applications: An Empirical Study

    DEFF Research Database (Denmark)

    Pop, Paul

    2002-01-01

    In recent years, many desktop applications have been ported to the world wide web in order to reduce (multiplatform) development, distribution and maintenance costs. However, there is little data concerning the usability of web applications, and the impact of their usability on the total cost … of developing and using such applications. In this paper we present a comparison of web and desktop applications from the usability point of view. The comparison is based on an empirical study that investigates the performance of a group of users on two calendaring applications: Yahoo!Calendar and Microsoft … Calendar. The study shows that in the case of web applications the performance of the users is significantly reduced, mainly because of the restricted interaction mechanisms provided by current web browsers.

  13. Information Retrieval and Graph Analysis Approaches for Book Recommendation

    OpenAIRE

    Chahinez Benkoussas; Patrice Bellot

    2015-01-01

    A combination of multiple information retrieval approaches is proposed for the purpose of book recommendation. In this paper, book recommendation is based on complex user's query. We used different theoretical retrieval models: probabilistic as InL2 (Divergence from Randomness model) and language model and tested their interpolated combination. Graph analysis algorithms such as PageRank have been successful in Web environments. We consider the application of this algorithm in a new retrieval ...

  14. Virtual patients on the semantic Web: a proof-of-application study.

    Science.gov (United States)

    Dafli, Eleni; Antoniou, Panagiotis; Ioannidis, Lazaros; Dombros, Nicholas; Topps, David; Bamidis, Panagiotis D

    2015-01-22

    Virtual patients are interactive computer simulations that are increasingly used as learning activities in modern health care education, especially in teaching clinical decision making. A key challenge is how to retrieve and repurpose virtual patients as unique types of educational resources between different platforms because of the lack of standardized content-retrieving and repurposing mechanisms. Semantic Web technologies provide the capability, through structured information, for easy retrieval, reuse, repurposing, and exchange of virtual patients between different systems. An attempt to address this challenge has been made through the mEducator Best Practice Network, which provisioned frameworks for the discovery, retrieval, sharing, and reuse of medical educational resources. We have extended the OpenLabyrinth virtual patient authoring and deployment platform to facilitate the repurposing and retrieval of existing virtual patient material. A standalone Web distribution and Web interface, which contains an extension for the OpenLabyrinth virtual patient authoring system, was implemented. This extension was designed to semantically annotate virtual patients to facilitate intelligent searches, complex queries, and easy exchange between institutions. The OpenLabyrinth extension enables OpenLabyrinth authors to integrate and share virtual patient case metadata within the mEducator3.0 network. Evaluation included 3 successive steps: (1) expert reviews; (2) evaluation of the ability of health care professionals and medical students to create, share, and exchange virtual patients through specific scenarios in extended OpenLabyrinth (OLabX); and (3) evaluation of the repurposed learning objects that emerged from the procedure. We evaluated 30 repurposed virtual patient cases. The evaluation, with a total of 98 participants, demonstrated the system's main strength: the core repurposing capacity. The extensive metadata schema presentation facilitated user exploration

  15. LUNARINFO: A Data Archiving and Retrieving System for the Circumlunar Explorer Based on XML/Web Services

    Institute of Scientific and Technical Information of China (English)

    ZUO Wei; LI Chunlai; OUYANG Ziyuan; LIU Jianjun; XU Tao

    2004-01-01

    It is essential to build a modern information management system to store and manage data from our circumlunar explorer in order to realize its scientific objectives. It is difficult for an information system based on traditional distributed technology to communicate information and work together among heterogeneous systems in order to meet the new requirements of Internet development. XML and Web Services, because of their open standards and self-describing properties, have changed the mode of information organization and data management. They can now provide a good solution for building an open, extendable, and compatible information management system, and facilitate the interchange and transfer of data among heterogeneous systems. On the basis of a three-tiered browser/server architecture, with an Oracle 9i database as the information storage platform, we have designed and implemented a data archiving and retrieval system for the circumlunar explorer, LUNARINFO. We have also successfully realized the integration between LUNARINFO and the cosmic dust database system. LUNARINFO consists of five function modules for data management, information publishing, system management, data retrieval, and interface integration. Based on XML and Web Services, it not only is an information database system for archiving, long-term storage, retrieval and publication of lunar reference data related to the circumlunar explorer, but also provides data web services which can be easily developed by various expert groups and connected to the common information system to realize data resource integration.

  16. An Efficient Approach for Web Indexing of Big Data through Hyperlinks in Web Crawling

    Science.gov (United States)

    Devi, R. Suganya; Manjula, D.; Siddharth, R. K.

    2015-01-01

    Web Crawling has acquired tremendous significance in recent times and it is aptly associated with the substantial development of the World Wide Web. Web Search Engines face new challenges due to the availability of vast amounts of web documents, thus making the retrieved results less applicable to the analysers. Recently, however, Web Crawling has focused solely on obtaining the links of the corresponding documents. Today, there exist various algorithms and software which are used to crawl links from the web, which then have to be further processed for future use, thereby increasing the overload of the analyser. This paper concentrates on crawling the links and retrieving all information associated with them to facilitate easy processing for other uses. In this paper, firstly the links are crawled from the specified uniform resource locator (URL) using a modified version of the Depth First Search Algorithm which allows for complete hierarchical scanning of corresponding web links. The links are then accessed via the source code and their metadata such as title, keywords, and description are extracted. This content is very essential for any type of analyser work to be carried out on the Big Data obtained as a result of Web Crawling. PMID:26137592
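
    A compact sketch of the described approach: depth-first crawling from a seed URL with a visited set, extracting each page's title and meta description. The regex-based extraction, depth limit, and seed URL are simplifying assumptions, not the paper's exact algorithm.

```typescript
// Depth-first crawl with metadata extraction (sketch; regexes are a
// simplification of real HTML parsing, and relative URLs are ignored).

const visited = new Set<string>();

async function crawl(url: string, depth: number): Promise<void> {
  if (depth === 0 || visited.has(url)) return;
  visited.add(url);
  const html = await (await fetch(url)).text();

  // Extract metadata from the page source.
  const title = /<title[^>]*>([^<]*)<\/title>/i.exec(html)?.[1] ?? "";
  const description =
    /<meta[^>]+name=["']description["'][^>]+content=["']([^"']*)["']/i.exec(html)?.[1] ?? "";
  console.log(url, "|", title.trim(), "|", description.trim());

  // Follow absolute links depth-first.
  for (const m of html.matchAll(/href=["'](https?:\/\/[^"']+)["']/gi)) {
    await crawl(m[1], depth - 1);
  }
}

crawl("https://example.org", 2).catch(console.error); // hypothetical seed URL
```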

  17. Retrieval of very large numbers of items in the Web of Science: an exercise to develop accurate search strategies

    NARCIS (Netherlands)

    Arencibia-Jorge, R.; Leydesdorff, L.; Chinchilla-Rodríguez, Z.; Rousseau, R.; Paris, S.W.

    2009-01-01

    The Web of Science interface counts at most 100,000 retrieved items from a single query. If the query results in a dataset containing more than 100,000 items the number of retrieved items is indicated as >100,000. The problem studied here is how to find the exact number of items in a query that
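
    One strategy consistent with the problem described above is to partition an over-large query by publication year and sum the per-slice counts. The countItems() helper is hypothetical; a real implementation would have to issue each year-restricted query against the Web of Science interface.

```typescript
// countItems() is a hypothetical helper standing in for issuing one
// year-restricted query to the interface and reading its hit count.
declare function countItems(query: string): Promise<number>;

async function exactCount(query: string, fromYear: number, toYear: number): Promise<number> {
  let total = 0;
  for (let y = fromYear; y <= toYear; y++) {
    // Each single-year slice is assumed to stay below the 100,000-item ceiling;
    // a slice that still exceeds it would need further partitioning.
    total += await countItems(`${query} AND PY=${y}`);
  }
  return total;
}
```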

  18. The Role of the Web Server in a Capstone Web Application Course

    Science.gov (United States)

    Umapathy, Karthikeyan; Wallace, F. Layne

    2010-01-01

    Web applications have become commonplace in the Information Systems curriculum. Much of the discussion about Web development for capstone courses has centered on the scripting tools. Very little has been discussed about different ways to incorporate the Web server into Web application development courses. In this paper, three different ways of…

  19. Secure Java For Web Application Development

    CERN Document Server

    Bhargav, Abhay

    2010-01-01

    As the Internet has evolved, so have the various vulnerabilities, which largely stem from the fact that developers are unaware of the importance of a robust application security program. This book aims to educate readers on application security and building secure web applications using the new Java Platform. The text details a secure web application development process from the risk assessment phase to the proof of concept phase. The authors detail such concepts as application risk assessment, secure SDLC, security compliance requirements, web application vulnerabilities and threats, security

  20. Opal web services for biomedical applications.

    Science.gov (United States)

    Ren, Jingyuan; Williams, Nadya; Clementi, Luca; Krishnan, Sriram; Li, Wilfred W

    2010-07-01

    Biomedical applications have become increasingly complex, and they often require large-scale high-performance computing resources with a large number of processors and memory. The complexity of application deployment and the advances in cluster, grid and cloud computing require new modes of support for biomedical research. Scientific Software as a Service (sSaaS) enables scalable and transparent access to biomedical applications through simple standards-based Web interfaces. Towards this end, we built a production web server (http://ws.nbcr.net) in August 2007 to support the bioinformatics application called MEME. The server has grown since to include docking analysis with AutoDock and AutoDock Vina, electrostatic calculations using PDB2PQR and APBS, and off-target analysis using SMAP. All the applications on the servers are powered by Opal, a toolkit that allows users to wrap scientific applications easily as web services without any modification to the scientific codes, by writing simple XML configuration files. Opal allows both web forms-based access and programmatic access of all our applications. The Opal toolkit currently supports SOAP-based Web service access to a number of popular applications from the National Biomedical Computation Resource (NBCR) and affiliated collaborative and service projects. In addition, Opal's programmatic access capability allows our applications to be accessed through many workflow tools, including Vision, Kepler, Nimrod/K and VisTrails. From mid-August 2007 to the end of 2009, we have successfully executed 239,814 jobs. The number of successfully executed jobs more than doubled from 205 to 411 per day between 2008 and 2009. The Opal-enabled service model is useful for a wide range of applications. It provides for interoperation with other applications with Web Service interfaces, and allows application developers to focus on the scientific tool and workflow development. Web server availability: http://ws.nbcr.net.
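
    The service pattern the abstract describes, submit a job, monitor it, retrieve outputs, can be sketched roughly as below. The endpoint paths and JSON fields are illustrative assumptions only; Opal's real interface is SOAP-based and defined by its WSDL, so this shows the shape of the workflow, not Opal's API.

```typescript
// Illustrative workflow only: endpoint paths and JSON fields are assumptions.
async function runJob(base: string, args: string): Promise<string> {
  // Submit the job to the wrapped application (hypothetical endpoint).
  const { jobId } = await (await fetch(`${base}/submit`, {
    method: "POST",
    body: new URLSearchParams({ args }),
  })).json();

  // Poll until the service reports completion.
  for (;;) {
    const { status } = await (await fetch(`${base}/status/${jobId}`)).json();
    if (status === "DONE") break;
    await new Promise(resolve => setTimeout(resolve, 5000));
  }

  // Retrieve the job's output.
  return (await fetch(`${base}/output/${jobId}`)).text();
}
```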

  1. Design and Analysis of Web Application Frameworks

    DEFF Research Database (Denmark)

    Schwarz, Mathias Romme

    Numerous web application frameworks have been developed in recent years. These frameworks enable programmers to reuse common components and to avoid typical pitfalls in web application development. Although such frameworks help the programmer to avoid many common errors, we find … state manipulation vulnerabilities. The hypothesis of this dissertation is that we can design frameworks and static analyses that aid the programmer to avoid such errors. First, we present the JWIG web application framework for writing secure and maintainable web applications. We discuss how this framework solves some of the common errors through an API that is designed to be safe by default. Second, we present a novel technique for checking HTML validity for output that is generated by web applications. Through string analysis, we approximate the output of web applications as context-free grammars. We model …

  2. Wordpress web application development

    CERN Document Server

    Ratnayake, Rakhitha Nimesh

    2015-01-01

    This book is intended for WordPress developers and designers who want to develop quality web applications within a limited time frame and for maximum profit. Prior knowledge of basic web development and design is assumed.

  3. Project Assessment Skills Web Application

    Science.gov (United States)

    Goff, Samuel J.

    2013-01-01

    The purpose of this project is to utilize Ruby on Rails to create a web application that will replace a spreadsheet keeping track of training courses and tasks. The goal is to create a fast and easy to use web application that will allow users to track progress on training courses. This application will allow users to update and keep track of all of the training required of them. The training courses will be organized by group and by user, making readability easier. This will also allow group leads and administrators to get a sense of how everyone is progressing in training. Currently, updating and finding information from this spreadsheet is a long and tedious task. By upgrading to a web application, finding and updating information will be easier than ever as well as adding new training courses and tasks. Accessing this data will be much easier in that users just have to go to a website and log in with NDC credentials rather than request the relevant spreadsheet from the holder. In addition to Ruby on Rails, I will be using JavaScript, CSS, and jQuery to help add functionality and ease of use to my web application. This web application will include a number of features that will help update and track progress on training. For example, one feature will be to track progress of a whole group of users to be able to see how the group as a whole is progressing. Another feature will be to assign tasks to either a user or a group of users. All of these together will create a user friendly and functional web application.

  4. Learning to rank for information retrieval

    CERN Document Server

    Liu, Tie-Yan

    2011-01-01

    Due to the fast growth of the Web and the difficulties in finding desired information, efficient and effective information retrieval systems have become more important than ever, and the search engine has become an essential tool for many people. The ranker, a central component in every search engine, is responsible for the matching between processed queries and indexed documents. Because of its central role, great attention has been paid to the research and development of ranking technologies. In addition, ranking is also pivotal for many other information retrieval applications, such as coll
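
    The ranker's job can be illustrated with a toy example: score each document from a feature vector of query-document signals and sort. Real learning-to-rank methods learn the weights from labeled data; the features and weights here are invented for illustration.

```typescript
// Toy ranker: weights would normally be learned from labeled data.
type Features = { bm25: number; pageRank: number; clicks: number };

const weights: Features = { bm25: 0.6, pageRank: 0.3, clicks: 0.1 }; // invented

const score = (f: Features) =>
  f.bm25 * weights.bm25 + f.pageRank * weights.pageRank + f.clicks * weights.clicks;

function rank(docs: { id: string; features: Features }[]) {
  return [...docs].sort((a, b) => score(b.features) - score(a.features));
}

console.log(rank([
  { id: "d1", features: { bm25: 0.8, pageRank: 0.2, clicks: 0.5 } },
  { id: "d2", features: { bm25: 0.5, pageRank: 0.9, clicks: 0.1 } },
]));
```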

  5. WordPress web application development

    CERN Document Server

    Ratnayake, Rakhitha Nimesh

    2013-01-01

    An extensive, practical guide that explains how to adapt WordPress features, both conventional and trending, for web applications. This book is intended for WordPress developers and designers who have the desire to go beyond conventional website development to develop quality web applications within a limited time frame and for maximum profit. Experienced web developers who are looking for a framework for rapid application development will also find this to be a useful resource. Prior knowledge of WordPress is preferable as the main focus will be on explaining methods for adapting WordPress...

  6. Forensics Investigation of Web Application Security Attacks

    OpenAIRE

    Amor Lazzez; Thabet Slimani

    2015-01-01

    Nowadays, web applications are popular targets for security attackers. Using specific security mechanisms, we can prevent or detect a security attack on a web application, but we cannot find out the criminal who has carried out the attack. Being unable to trace back an attack encourages hackers to launch new attacks on the same system. Web application forensics aims to trace back and attribute a web application security attack to its originator. This may significantly reduce the sec...

  7. Web Application Development Utilizing Cloud Virtual Machine

    OpenAIRE

    Muukka, Olli

    2014-01-01

    The thesis goes through a development project in which a web application was implemented to support the start-up company's business operations. The main reason to implement a web application was that the company needed a system in which business data is centrally managed with a cost-efficient, simple and easy tool. The deployed cloud service provided a platform for the web application. The alternative to web application development was to deploy a commercial customer relationship management tool, but the ...

  8. Developing Large Web Applications

    CERN Document Server

    Loudon, Kyle

    2010-01-01

    How do you create a mission-critical site that provides exceptional performance while remaining flexible, adaptable, and reliable 24/7? Written by the manager of a UI group at Yahoo!, Developing Large Web Applications offers practical steps for building rock-solid applications that remain effective even as you add features, functions, and users. You'll learn how to develop large web applications with the extreme precision required for other types of software. Avoid common coding and maintenance headaches as small websites add more pages, more code, and more programmers. Get comprehensive solutions...

  9. Information Retrieval and Graph Analysis Approaches for Book Recommendation

    Directory of Open Access Journals (Sweden)

    Chahinez Benkoussas

    2015-01-01

    A combination of multiple information retrieval approaches is proposed for the purpose of book recommendation. In this paper, book recommendation is based on complex user queries. We used different theoretical retrieval models: probabilistic ones such as InL2 (a Divergence from Randomness model) and a language model, and tested their interpolated combination. Graph analysis algorithms such as PageRank have been successful in Web environments. We consider the application of this algorithm in a new retrieval approach to a related-document network composed of social links. We call Directed Graph of Documents (DGD) a network constructed from documents and the social information provided with each of them. Specifically, this work tackles the problem of book recommendation in the context of the INEX (Initiative for the Evaluation of XML retrieval) Social Book Search track. A series of reranking experiments demonstrates that combining retrieval models yields significant improvements in terms of standard ranked-retrieval metrics. These results extend the applicability of link analysis algorithms to different environments.

  10. Information Retrieval and Graph Analysis Approaches for Book Recommendation.

    Science.gov (United States)

    Benkoussas, Chahinez; Bellot, Patrice

    2015-01-01

    A combination of multiple information retrieval approaches is proposed for the purpose of book recommendation. In this paper, book recommendation is based on complex user queries. We used different theoretical retrieval models: probabilistic ones such as InL2 (a Divergence from Randomness model) and a language model, and tested their interpolated combination. Graph analysis algorithms such as PageRank have been successful in Web environments. We consider the application of this algorithm in a new retrieval approach to a related-document network composed of social links. We call Directed Graph of Documents (DGD) a network constructed from documents and the social information provided with each of them. Specifically, this work tackles the problem of book recommendation in the context of the INEX (Initiative for the Evaluation of XML retrieval) Social Book Search track. A series of reranking experiments demonstrates that combining retrieval models yields significant improvements in terms of standard ranked-retrieval metrics. These results extend the applicability of link analysis algorithms to different environments.
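
    The interpolated combination can be sketched as follows: normalize each model's scores, then mix them with a weight lambda. Min-max normalization and lambda = 0.7 are assumptions for illustration, not the paper's tuned settings.

```typescript
// Interpolated score combination (sketch): mix a retrieval model's scores
// with graph-analysis scores after min-max normalization.

function normalize(scores: Map<string, number>): Map<string, number> {
  const vals = [...scores.values()];
  const min = Math.min(...vals), max = Math.max(...vals);
  const out = new Map<string, number>();
  for (const [doc, s] of scores) out.set(doc, max > min ? (s - min) / (max - min) : 0);
  return out;
}

function interpolate(retrieval: Map<string, number>, graph: Map<string, number>, lambda = 0.7) {
  const r = normalize(retrieval), g = normalize(graph);
  const combined: [string, number][] = [];
  for (const [doc, s] of r) combined.push([doc, lambda * s + (1 - lambda) * (g.get(doc) ?? 0)]);
  return combined.sort((a, b) => b[1] - a[1]); // reranked result list
}
```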

  11. The PSI web interface to the EPICS channel archiver

    International Nuclear Information System (INIS)

    Gaudenz Jud; Luedeke, A.; Portmann, W.

    2012-01-01

    The EPICS (Experimental Physics and Industrial Control System) channel archiver is used at different facilities at PSI (Paul Scherrer Institute), such as the Swiss Light Source or the medical cyclotron. The EPICS channel archiver is a powerful tool to collect control system data from thousands of EPICS process variables, at rates of many hertz each, into an archive for later retrieval. The channel archiver version 2 package includes a Java application for graphical data retrieval and a command line tool for data extraction into different file formats. At PSI we wanted the possibility to retrieve the archived data from a web interface. It was desired to have flexible retrieval functions and to allow interchanging data references by e-mail. This web interface has been implemented by the PSI controls group and has now been in operation for several years. This paper highlights the special features of the PSI web interface to the EPICS channel archiver

  12. Estimating Maintenance Cost for Web Applications

    Directory of Open Access Journals (Sweden)

    Ion IVAN

    2016-01-01

    The current paper tackles the issue of determining a method for estimating maintenance costs for web applications. The current state of research in the field of web application maintenance is summarized and leading theories and results are highlighted. The cost of web maintenance is determined by the number of man-hours invested in maintenance tasks. Web maintenance tasks are categorized into content maintenance and technical maintenance. Research is centered on analyzing technical maintenance tasks. The research hypothesis is formulated on the assumption that the number of man-hours invested in maintenance tasks can be assessed based on the web application’s user interaction level, complexity and content update effort. Data regarding the costs of maintenance tasks was collected from 24 maintenance projects implemented by a web development company that tackles a wide area of web applications. The homogeneity and diversity of the collected data are submitted for debate by presenting a sample of the data and depicting the overall size and comprehensive nature of the entire dataset. A set of metrics dedicated to estimating maintenance costs in web applications is defined based on conclusions formulated by analyzing the collected data and the theories and practices dominating the current state of research. The metrics are validated with regard to the initial research hypothesis. The research hypothesis is validated and conclusions are formulated on the topic of estimating the maintenance cost of web applications. The limits of the research process which formed the basis for the current paper are stated. Future research topics are submitted for debate.
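
    The hypothesis can be made concrete with an illustrative cost model: estimated man-hours as a weighted function of interaction level, complexity, and content update effort. The linear form and the weights below are invented to convey the idea; the paper derives its own metrics from the collected project data.

```typescript
// Illustrative linear model; the functional form and weights are invented.
interface WebAppProfile {
  interactionLevel: number;    // e.g. on a 1-10 scale
  complexity: number;          // e.g. on a 1-10 scale
  contentUpdateEffort: number; // e.g. expected updates per month
}

function estimateManHours(app: WebAppProfile, w = { i: 2.0, c: 3.5, u: 1.2 }): number {
  return w.i * app.interactionLevel + w.c * app.complexity + w.u * app.contentUpdateEffort;
}

console.log(estimateManHours({ interactionLevel: 4, complexity: 7, contentUpdateEffort: 10 }));
```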

  13. Progressive Web applications

    CERN Multimedia

    CERN. Geneva

    2017-01-01

    Progressive Web Applications are native-like applications running inside of a browser context. In my presentation I would like to describe their characteristics, benchmarks and building process, using a quick and simple case study example with a focus on the Service Workers API.
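
    A minimal sketch of the kind of Service Worker setup such a presentation would walk through: register the worker, pre-cache an app shell at install time, and answer fetches cache-first so the app keeps working offline. File names and the cache name are illustrative.

```typescript
// In the page: register the worker (file name is illustrative).
navigator.serviceWorker.register("/sw.js");

// In sw.js — `self` is the worker's global scope:
declare const self: ServiceWorkerGlobalScope;

self.addEventListener("install", (event) => {
  // Pre-cache a minimal app shell so the application loads offline.
  event.waitUntil(
    caches.open("app-shell-v1").then((cache) => cache.addAll(["/", "/app.js", "/app.css"]))
  );
});

self.addEventListener("fetch", (event) => {
  // Cache-first: serve from cache when possible, fall back to the network.
  event.respondWith(
    caches.match(event.request).then((hit) => hit ?? fetch(event.request))
  );
});
```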

  14. A survey on web modeling approaches for ubiquitous web applications

    NARCIS (Netherlands)

    Schwinger, W.; Retschitzegger, W.; Schauerhuber, A.; Kappel, G.; Wimmer, M.; Pröll, B.; Cachero Castro, C.; Casteleyn, S.; De Troyer, O.; Fraternali, P.; Garrigos, I.; Garzotto, F.; Ginige, A.; Houben, G.J.P.M.; Koch, N.; Moreno, N.; Pastor, O.; Paolini, P.; Pelechano Ferragud, V.; Rossi, G.; Schwabe, D.; Tisi, M.; Vallecillo, A.; Sluijs, van der K.A.M.; Zhang, G.

    2008-01-01

    Purpose – Ubiquitous web applications (UWA) are a new type of web applications which are accessed in various contexts, i.e. through different devices, by users with various interests, at anytime from anyplace around the globe. For such full-fledged, complex software systems, a methodologically sound

  15. Distributed nuclear medicine applications using World Wide Web and Java technology

    International Nuclear Information System (INIS)

    Knoll, P.; Hoell, K.; Koriska, K.; Mirzaei, S.; Koehn, H.

    2000-01-01

    At present, medical applications applying World Wide Web (WWW) technology are mainly used to view static images and to retrieve some information. The Java platform is a relatively new way of computing, especially designed for network computing and distributed applications, which enables interactive connection between user and information via the WWW. The Java 2 Software Development Kit (SDK), including the Java2D API, Java Remote Method Invocation (RMI) technology, Object Serialization and the Java Advanced Imaging (JAI) extension, was used to achieve a robust, platform-independent and network-centric solution. Medical image processing software based on this technology is presented, and the adequate performance capability of Java is demonstrated by an iterative reconstruction algorithm for single photon emission computerized tomography (SPECT). (orig.)

  16. Just-in-time Database-Driven Web Applications

    Science.gov (United States)

    2003-01-01

    "Just-in-time" database-driven Web applications are inexpensive, quickly-developed software that can be put to many uses within a health care organization. Database-driven Web applications garnered 73873 hits on our system-wide intranet in 2002. They enabled collaboration and communication via user-friendly Web browser-based interfaces for both mission-critical and patient-care-critical functions. Nineteen database-driven Web applications were developed. The application categories that comprised 80% of the hits were results reporting (27%), graduate medical education (26%), research (20%), and bed availability (8%). The mean number of hits per application was 3888 (SD = 5598; range, 14-19879). A model is described for just-in-time database-driven Web application development and an example given with a popular HTML editor and database program. PMID:14517109

  17. Web Services in Mobile Applications

    Directory of Open Access Journals (Sweden)

    Octavian DOSPINESCU

    2013-01-01

    Information and communication technologies are designed to support and anticipate the continuing changes of the information society, while outlining new economic, social and cultural dimensions. We see the growth of new business models whose aim is to remove traditional barriers and improve the value of goods and services. Information is a strategic resource and its manipulation raises new problems for all entities involved in the process. Information and communication technologies should be a stable support in managing the flow of data while upholding its integrity, confidentiality and availability. Concepts such as eBusiness, eCommerce, Software as a Service, Cloud Computing and Social Media are based on web technologies consisting of complex languages, protocols and standards, built around client-server architecture. One of the most used technologies in mobile applications is Web Services, defined as an application model supported by any operating system that can provide certain functionalities using Internet technologies to promote interoperability between various applications and platforms. Web services use HTTP, XML, SSL, SMTP and SOAP because their stability has been proven over the years. Their functionalities are highly variable, ranging from data exchange to weather, arithmetic or authentication services. In this article we will talk about the SOAP and REST architectures for web services in mobile applications, and we will also provide some practical examples based on the Android platform.

  18. WebViz: A web browser based application for collaborative analysis of 3D data

    Science.gov (United States)

    Ruegg, C. S.

    2011-12-01

    In the age of high-speed Internet, where people can interact instantly, scientific tools have lacked technology that incorporates this concept of communication using the web. To solve this issue, a web application for geological studies has been created, tentatively titled WebViz. This web application utilizes tools provided by the Google Web Toolkit to create an AJAX web application capable of features found in non-web-based software. Using these tools, a web application can be created that acts as a piece of software accessible from anywhere in the globe with a reasonably speedy Internet connection. An application of this technology can be seen with data regarding the tsunami from the recent major Japan earthquakes. After constructing the appropriate data to fit the computer rendering software HVR, WebViz can request images of the tsunami data and display them to anyone who has access to the application. This convenience alone makes WebViz a viable solution, but the option to interact with this data with others around the world establishes WebViz as a serious computational tool. WebViz can also be used on any JavaScript-enabled browser, such as those found on modern tablets and smart phones, over a fast wireless connection. Because WebViz's current state is built using the Google Web Toolkit, the portability of the application is in its most efficient form. Though many developers have been involved with the project, each person has contributed to increasing the usability and speed of the application. In the project's most recent form, a dramatic speed increase has been designed, as well as a more efficient user interface. The speed increase has been informally noticed in recent uses of the application in China and Australia, with the hosting server located at the University of Minnesota. The user interface has been improved to not only look better but also function better. Major functions of the application are rotating the 3D object using buttons

  19. Building Grid applications using Web Services

    CERN Multimedia

    CERN. Geneva

    2004-01-01

    There has been a lot of discussion within the Grid community about the use of Web Services technologies in building large-scale, loosely-coupled, cross-organisation applications. In this talk we are going to explore the principles that govern Service-Oriented Architectures and the promise of Web Services technologies for integrating applications that span administrative domains. We are going to see how existing Web Services specifications and practices could provide the necessary infrastructure for implementing Grid applications. Biography Dr. Savas Parastatidis is a Principal Research Associate at the School of Computing Science, University of Newcastle upon Tyne, UK. Savas is one of the authors of the "Grid Application Framework based on Web Services Specifications and Practices" document that was influential in the convergence between Grid and Web Services and the move away from OGSI (more information can be found at http://www.neresc.ac.uk/ws-gaf). He has done research on runtime support for distributed-m...

  20. TRIP: An interactive retrieving-inferring data imputation approach

    KAUST Repository

    Li, Zhixu

    2016-06-25

    Data imputation aims at filling in missing attribute values in databases. Existing imputation approaches to nonquantitative string data can be roughly put into two categories: (1) inferring-based approaches [2], and (2) retrieving-based approaches [1]. Specifically, the inferring-based approaches find substitutes or estimations for the missing ones from the complete part of the data set. However, they typically fall short in filling in unique missing attribute values which do not exist in the complete part of the data set [1]. The retrieving-based approaches resort to external resources for help by formulating proper web search queries to retrieve web pages containing the missing values from the Web, and then extracting the missing values from the retrieved web pages [1]. This web-based retrieving approach reaches a high imputation precision and recall, but on the other hand, issues a large number of web search queries, which brings a large overhead [1]. © 2016 IEEE.

  1. TRIP: An interactive retrieving-inferring data imputation approach

    KAUST Repository

    Li, Zhixu; Qin, Lu; Cheng, Hong; Zhang, Xiangliang; Zhou, Xiaofang

    2016-01-01

    Data imputation aims at filling in missing attribute values in databases. Existing imputation approaches to nonquantitative string data can be roughly put into two categories: (1) inferring-based approaches [2], and (2) retrieving-based approaches [1]. Specifically, the inferring-based approaches find substitutes or estimations for the missing ones from the complete part of the data set. However, they typically fall short in filling in unique missing attribute values which do not exist in the complete part of the data set [1]. The retrieving-based approaches resort to external resources for help by formulating proper web search queries to retrieve web pages containing the missing values from the Web, and then extracting the missing values from the retrieved web pages [1]. This web-based retrieving approach reaches a high imputation precision and recall, but on the other hand, issues a large number of web search queries, which brings a large overhead [1]. © 2016 IEEE.
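
    The retrieving-based approach can be sketched as follows: formulate a web search query from the record's complete attributes, then extract a candidate value for the missing one from the retrieved pages. The searchWeb() helper and the naive extraction step are hypothetical stand-ins for TRIP's actual components.

```typescript
// Hypothetical retrieving-based imputation: query the web with the record's
// known values and extract a candidate for the missing attribute.

declare function searchWeb(query: string): Promise<string[]>; // hypothetical: page texts

async function imputeMissingValue(
  record: Record<string, string | null>,
  missingAttr: string
): Promise<string | null> {
  // Known attribute values become quoted query terms.
  const known = Object.entries(record)
    .filter(([attr, value]) => attr !== missingAttr && value !== null)
    .map(([, value]) => `"${value}"`);
  const pages = await searchWeb(`${known.join(" ")} ${missingAttr}`);

  // Placeholder extraction: the first capitalized token after the attribute name.
  for (const text of pages) {
    const m = new RegExp(`${missingAttr}\\s*:?\\s*([A-Z][\\w-]+)`).exec(text);
    if (m) return m[1];
  }
  return null; // fall back to inferring-based imputation, as the abstract contrasts
}
```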

  2. AMBIT RESTful web services: an implementation of the OpenTox application programming interface

    Directory of Open Access Journals (Sweden)

    Jeliazkova Nina

    2011-05-01

    The AMBIT web services package is one of several existing independent implementations of the OpenTox Application Programming Interface and is built according to the principles of the Representational State Transfer (REST) architecture. The Open Source Predictive Toxicology Framework, developed by the partners in the EC FP7 OpenTox project, aims at providing unified access to toxicity data and predictive models, as well as validation procedures. This is achieved by i) an information model, based on a common OWL-DL ontology; ii) links to related ontologies; iii) data and algorithms, available through a standardized REST web services interface, where every compound, data set or predictive method has a unique web address, used to retrieve its Resource Description Framework (RDF) representation, or initiate the associated calculations. The AMBIT web services package has been developed as an extension of AMBIT modules, adding the ability to create (Quantitative) Structure-Activity Relationship (QSAR) models and providing an OpenTox API compliant interface. The representation of data and processing resources in W3C Resource Description Framework facilitates integrating the resources as Linked Data. By uploading datasets with chemical structures and an arbitrary set of properties, they become automatically available online in several formats. The services provide unified interfaces to several descriptor calculation, machine learning and similarity searching algorithms, as well as to applicability domain and toxicity prediction models. All Toxtree modules for predicting the toxicological hazard of chemical compounds are also integrated within this package. The complexity and diversity of the processing is reduced to the simple paradigm "read data from a web address, perform processing, write to a web address". The online service allows users to easily run predictions without installing any software, as well as to share online datasets and models. The

  3. AMBIT RESTful web services: an implementation of the OpenTox application programming interface.

    Science.gov (United States)

    Jeliazkova, Nina; Jeliazkov, Vedrin

    2011-05-16

    The AMBIT web services package is one of the several existing independent implementations of the OpenTox Application Programming Interface and is built according to the principles of the Representational State Transfer (REST) architecture. The Open Source Predictive Toxicology Framework, developed by the partners in the EC FP7 OpenTox project, aims at providing a unified access to toxicity data and predictive models, as well as validation procedures. This is achieved by i) an information model, based on a common OWL-DL ontology ii) links to related ontologies; iii) data and algorithms, available through a standardized REST web services interface, where every compound, data set or predictive method has a unique web address, used to retrieve its Resource Description Framework (RDF) representation, or initiate the associated calculations.The AMBIT web services package has been developed as an extension of AMBIT modules, adding the ability to create (Quantitative) Structure-Activity Relationship (QSAR) models and providing an OpenTox API compliant interface. The representation of data and processing resources in W3C Resource Description Framework facilitates integrating the resources as Linked Data. By uploading datasets with chemical structures and arbitrary set of properties, they become automatically available online in several formats. The services provide unified interfaces to several descriptor calculation, machine learning and similarity searching algorithms, as well as to applicability domain and toxicity prediction models. All Toxtree modules for predicting the toxicological hazard of chemical compounds are also integrated within this package. The complexity and diversity of the processing is reduced to the simple paradigm "read data from a web address, perform processing, write to a web address". The online service allows to easily run predictions, without installing any software, as well to share online datasets and models. The downloadable web application
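
    The "read data from a web address, perform processing, write to a web address" paradigm can be sketched with plain HTTP calls. The URIs and the parameter name below are placeholders in the style the abstract describes (every dataset or model has its own address); they are assumptions, not documented AMBIT endpoints.

```typescript
// URIs and the parameter name are placeholders, not documented endpoints.
async function readProcessWrite(datasetUri: string, modelUri: string): Promise<void> {
  // 1. Read: every resource is retrievable from its own web address.
  const dataset = await (await fetch(datasetUri, {
    headers: { Accept: "application/rdf+xml" },
  })).text();
  console.log("dataset bytes:", dataset.length);

  // 2. Process: initiate the associated calculation by POSTing the dataset's
  //    address to the model's address (illustrative parameter name).
  const result = await fetch(modelUri, {
    method: "POST",
    body: new URLSearchParams({ dataset_uri: datasetUri }),
  });

  // 3. Write: the service answers with the web address of the result resource.
  console.log("result resource:", result.headers.get("Location") ?? (await result.text()));
}
```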

  4. Development of a 3D WebGIS System for Retrieving and Visualizing CityGML Data Based on their Geometric and Semantic Characteristics by Using Free and Open Source Technology

    Science.gov (United States)

    Pispidikis, I.; Dimopoulou, E.

    2016-10-01

    CityGML is considered an optimal standard for representing 3D city models. However, international experience has shown that visualization of the latter is quite difficult to implement on the web, due to the large size of the data and the complexity of CityGML. As a result, in the context of this paper, a 3D WebGIS application is developed in order to successfully retrieve and visualize CityGML data in accordance with their respective geometric and semantic characteristics. Furthermore, the available web technologies and the architecture of WebGIS systems are investigated, as documented by international experience, in order to be utilized in the most appropriate way for the purposes of this paper. Specifically, a PostgreSQL/PostGIS database is used, in compliance with the 3DCityDB schema. At the server tier, the Apache HTTP Server and GeoServer are utilized, with PHP as the server-side programming language. At the client tier, which implements the interface of the application, the following technologies are used: jQuery, AJAX, JavaScript, HTML5, WebGL and Ol3-Cesium. Finally, it is worth mentioning that the application's primary objectives are a user-friendly interface and fully open source development.

  5. Mobile Application Development: Component Retrieval System

    Data.gov (United States)

    National Aeronautics and Space Administration — The purpose of this project was to investigate requirements to develop an innovative mobile application to retrieve components’ detailed information from the Stennis...

  6. [Application of spaced retrieval training on patients with dementia].

    Science.gov (United States)

    Wu, Hua-Shan; Lin, Li-Chan

    2012-10-01

    Dementia causes semantic and episodic memory impairments that limit patients' activities of daily living (ADL) and increase caregiver burden. Spaced retrieval training uses repetitive retrieval to strengthen cognitive and motor skills intuitively in mild / moderate dementia patients who retain preserved implicit / non-declarative memory. This article describes and discusses the operative mechanism, influencing variables, and practical applications of spaced retrieval training. We hope this article increases professional understanding and application of this training approach to improve dementia patient ADL and improve quality of life for both caregivers and patients.

  7. The Nuclear Science References (NSR) database and Web Retrieval System

    International Nuclear Information System (INIS)

    Pritychenko, B.; Betak, E.; Kellett, M.A.; Singh, B.; Totans, J.

    2011-01-01

    The Nuclear Science References (NSR) database together with its associated Web interface is the world's only comprehensive source of easily accessible low- and intermediate-energy nuclear physics bibliographic information for more than 200,000 articles since the beginning of nuclear science. The weekly updated NSR database provides essential support for nuclear data evaluation, compilation and research activities. The principles of the database and Web application development and maintenance are described. Examples of nuclear structure, reaction and decay applications are specifically included. The complete NSR database is freely available at the websites of the National Nuclear Data Center (http://www.nndc.bnl.gov/nsr) and the International Atomic Energy Agency (http://www-nds.iaea.org/nsr).

  8. SAMP: Application Messaging for Desktop and Web Applications

    Science.gov (United States)

    Taylor, M. B.; Boch, T.; Fay, J.; Fitzpatrick, M.; Paioro, L.

    2012-09-01

    SAMP, the Simple Application Messaging Protocol, is a technology which allows tools to communicate. It is deployed in a number of desktop astronomy applications including ds9, Aladin, TOPCAT, World Wide Telescope and numerous others, and makes it straightforward for a user to treat a selection of these tools as a loosely-integrated suite, combining the most powerful features of each. It has been widely used within Virtual Observatory contexts, but is equally suitable for non-VO use. Enabling SAMP communication from web-based content has long been desirable. An obvious use case is arranging for a click on a web page link to deliver an image, table or spectrum to a desktop viewer, but more sophisticated two-way interaction with rich internet applications would also be possible. Use from the web however presents some problems related to browser sandboxing. We explain how the SAMP Web Profile, introduced in version 1.3 of the SAMP protocol, addresses these issues, and discuss the resulting security implications.

  9. OntoTrader: An Ontological Web Trading Agent Approach for Environmental Information Retrieval

    Directory of Open Access Journals (Sweden)

    Luis Iribarne

    2014-01-01

    Modern Web-based Information Systems (WIS) are becoming increasingly necessary to provide support for users who are in different places with different types of information, by facilitating their access to the information, decision making, workgroups, and so forth. Design of these systems requires the use of standardized methods and techniques that enable a common vocabulary to be defined to represent the underlying knowledge. Thus, mediation elements such as traders enrich the interoperability of web components in open distributed systems. These traders must operate with other third-party traders and/or agents in the system, which must also use a common vocabulary for communication between them. This paper presents the OntoTrader architecture, an Ontological Web Trading agent based on the OMG ODP trading standard. It also presents the ontology needed by some system agents to communicate with the trading agent and the behavioral framework for the SOLERES OntoTrader agent, an Environmental Management Information System (EMIS). This framework implements a “Query-Searching/Recovering-Response” information retrieval model using a trading service, SPARQL notation, and the JADE platform. The paper also presents reflection, delegation and federation mediation models and describes formalization, an experimental testing environment in three scenarios, and a tool which allows our proposal to be evaluated and validated.
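
    Since the abstract indicates the retrieval model uses SPARQL notation, the query step can be sketched with the standard SPARQL-over-HTTP protocol. The endpoint URL, prefix, and properties are illustrative assumptions, not SOLERES's actual vocabulary.

```typescript
const endpoint = "https://example.org/soleres/sparql"; // hypothetical endpoint

// Illustrative query; the prefix and properties are not SOLERES's real vocabulary.
const query = `
  PREFIX emis: <http://example.org/emis#>
  SELECT ?service ?description WHERE {
    ?service emis:offersDataOn emis:WaterQuality ;
             emis:description ?description .
  } LIMIT 10`;

async function trade(): Promise<void> {
  // Standard SPARQL protocol: send the query, ask for JSON results.
  const res = await fetch(`${endpoint}?query=${encodeURIComponent(query)}`, {
    headers: { Accept: "application/sparql-results+json" },
  });
  const { results } = await res.json();
  for (const b of results.bindings) console.log(b.service.value, "-", b.description.value);
}

trade().catch(console.error);
```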

  10. Geant4 application in a Web browser

    International Nuclear Information System (INIS)

    Garnier, Laurent

    2014-01-01

    Geant4 is a toolkit for the simulation of the passage of particles through matter. The Geant4 visualization system supports many drivers, including OpenGL[1], OpenInventor, HepRep[2], DAWN[3], VRML, RayTracer, gMocren[4] and ASCIITree, with diverse and complementary functionalities. Web applications have an increasing role in our work, and thanks to emerging frameworks such as Wt [5], a web application can be built on top of a C++ application without rewriting all the code. Because the Geant4 toolkit's visualization and user interface modules are well decoupled from the rest of Geant4, it is straightforward to adapt these modules to render in a web application instead of a computer's native window manager. The API of the Wt framework closely matches that of Qt [6], so our experience in building the Qt driver benefits the Wt driver. Porting a Geant4 application to a web application is easy: with minimal effort, Geant4 users can replicate this process to share their own Geant4 applications in a web browser.

  11. Nuclear expert web search and crawler algorithm

    International Nuclear Information System (INIS)

    Reis, Thiago; Barroso, Antonio C.O.; Baptista, Benedito Filho D.

    2013-01-01

    In this paper we present preliminary research on a web search and crawling algorithm applied specifically to nuclear-related web information. We designed a web-based nuclear-oriented expert system, guided by a web crawler algorithm and a neural network, able to search and retrieve nuclear-related hypertextual web information in an autonomous and massive fashion. Preliminary experimental results show a retrieval precision of 80% for web pages related to any nuclear theme and a retrieval precision of 72% for web pages related only to the nuclear power theme. (author)

  12. Nuclear expert web search and crawler algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Reis, Thiago; Barroso, Antonio C.O.; Baptista, Benedito Filho D., E-mail: thiagoreis@usp.br, E-mail: barroso@ipen.br, E-mail: bdbfilho@ipen.br [Instituto de Pesquisas Energeticas e Nucleares (IPEN/CNEN-SP), Sao Paulo, SP (Brazil)

    2013-07-01

    In this paper we present preliminary research on a web search and crawling algorithm applied specifically to nuclear-related web information. We designed a web-based nuclear-oriented expert system, guided by a web crawler algorithm and a neural network, able to search and retrieve nuclear-related hypertextual web information in an autonomous and massive fashion. Preliminary experimental results show a retrieval precision of 80% for web pages related to any nuclear theme and a retrieval precision of 72% for web pages related only to the nuclear power theme. (author)

  13. PageRank without hyperlinks: Reranking with PubMed related article networks for biomedical text retrieval

    Directory of Open Access Journals (Sweden)

    Lin Jimmy

    2008-06-01

    Background: Graph analysis algorithms such as PageRank and HITS have been successful in Web environments because they are able to extract important inter-document relationships from manually-created hyperlinks. We consider the application of these techniques to biomedical text retrieval. In the current PubMed® search interface, a MEDLINE® citation is connected to a number of related citations, which are in turn connected to other citations. Thus, a MEDLINE record represents a node in a vast content-similarity network. This article explores the hypothesis that these networks can be exploited for text retrieval, in the same manner as hyperlink graphs on the Web. Results: We conducted a number of reranking experiments using the TREC 2005 genomics track test collection in which scores extracted from PageRank and HITS analysis were combined with scores returned by an off-the-shelf retrieval engine. Experiments demonstrate that incorporating PageRank scores yields significant improvements in terms of standard ranked-retrieval metrics. Conclusion: The link structure of content-similarity networks can be exploited to improve the effectiveness of information retrieval systems. These results generalize the applicability of graph analysis algorithms to text retrieval in the biomedical domain.

  14. PageRank without hyperlinks: reranking with PubMed related article networks for biomedical text retrieval.

    Science.gov (United States)

    Lin, Jimmy

    2008-06-06

    Graph analysis algorithms such as PageRank and HITS have been successful in Web environments because they are able to extract important inter-document relationships from manually-created hyperlinks. We consider the application of these techniques to biomedical text retrieval. In the current PubMed® search interface, a MEDLINE® citation is connected to a number of related citations, which are in turn connected to other citations. Thus, a MEDLINE record represents a node in a vast content-similarity network. This article explores the hypothesis that these networks can be exploited for text retrieval, in the same manner as hyperlink graphs on the Web. We conducted a number of reranking experiments using the TREC 2005 genomics track test collection in which scores extracted from PageRank and HITS analysis were combined with scores returned by an off-the-shelf retrieval engine. Experiments demonstrate that incorporating PageRank scores yields significant improvements in terms of standard ranked-retrieval metrics. The link structure of content-similarity networks can be exploited to improve the effectiveness of information retrieval systems. These results generalize the applicability of graph analysis algorithms to text retrieval in the biomedical domain.
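
    The graph-analysis step generalizes readily; a compact PageRank sketch over a content-similarity network of this kind is shown below, where nodes are citations and directed edges are related-article links. Scores from such an analysis would then be combined with the retrieval engine's scores for reranking. Uniform teleportation, a fixed iteration count, and ignoring dangling-node mass are simplifications.

```typescript
// PageRank by power iteration over a citation similarity graph (sketch).
function pageRank(links: Map<string, string[]>, damping = 0.85, iters = 20): Map<string, number> {
  const nodes = [...links.keys()];
  const n = nodes.length;
  let rank = new Map<string, number>();
  nodes.forEach(id => rank.set(id, 1 / n));

  for (let i = 0; i < iters; i++) {
    const next = new Map<string, number>();
    nodes.forEach(id => next.set(id, (1 - damping) / n)); // teleportation term
    for (const [src, outs] of links) {
      const share = (rank.get(src) ?? 0) / Math.max(outs.length, 1);
      for (const dst of outs) next.set(dst, (next.get(dst) ?? 0) + damping * share);
    }
    rank = next;
  }
  return rank;
}

// A tiny "related citations" graph (identifiers are illustrative).
const related = new Map<string, string[]>([
  ["pmid:1", ["pmid:2", "pmid:3"]],
  ["pmid:2", ["pmid:3"]],
  ["pmid:3", ["pmid:1"]],
]);
console.log(pageRank(related));
```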

  15. An Integrated Information Retrieval Support System for Campus Network

    Institute of Scientific and Technical Information of China (English)

    2006-01-01

    This paper presents a new integrated information retrieval support system (IIRSS) which can help Web search engines retrieve cross-lingual information from heterogeneous resources stored in multiple databases on an intranet. The IIRSS, with a three-layer architecture, can cooperate with other application servers running on the intranet. By using intelligent agents to collect information and to create indexes on-the-fly, using an access-control strategy to confine each user to the documents accessible to him/her through a single portal, and using a new cross-lingual translation tool to help the search engine retrieve documents, the new system provides controllable information access with different authorizations, personalized services, and real-time information retrieval.

  16. Multimedia database retrieval technology and applications

    CERN Document Server

    Muneesawang, Paisarn; Guan, Ling

    2014-01-01

    This book explores multimedia applications that emerged from computer vision and machine learning technologies. These state-of-the-art applications include MPEG-7, interactive multimedia retrieval, multimodal fusion, annotation, and database re-ranking. The application-oriented approach maximizes reader understanding of this complex field. Established researchers explain the latest developments in multimedia database technology and offer a glimpse of future technologies. The authors emphasize the crucial role of innovation, inspiring users to develop new applications in multimedia technologies

  17. Stratification-Based Outlier Detection over the Deep Web

    OpenAIRE

    Xian, Xuefeng; Zhao, Pengpeng; Sheng, Victor S.; Fang, Ligang; Gu, Caidong; Yang, Yuanfeng; Cui, Zhiming

    2016-01-01

    For many applications, finding rare instances or outliers can be more interesting than finding common patterns. Existing work in outlier detection never considers the context of the deep web. In this paper, we argue that, for many scenarios, it is more meaningful to detect outliers over the deep web. In the context of the deep web, users must submit queries through a query interface to retrieve corresponding data. Therefore, traditional data mining methods cannot be directly applied. The primary contribu...

  18. Quality issues in the management of web information

    CERN Document Server

    Bordogna, Gloria; Jain, Lakhmi

    2013-01-01

    This research volume presents a sample of recent contributions related to the issue of quality assessment for Web-based information in the context of information access, retrieval, and filtering systems. The advent of the Web and the uncontrolled process of document generation have raised the problem of assessing the quality of information on the Web, by considering both the nature of documents (texts, images, video, sounds, and so on), the genre of documents (news, geographic information, ontologies, medical records, product records, and so on), the reputation of information sources and sites, and, last but not least, the actions performed on documents (content indexing, retrieval and ranking, collaborative filtering, and so on). The volume constitutes a compendium of both heterogeneous approaches and sample applications focusing on specific aspects of quality assessment for Web-based information, for researchers, PhD students and practitioners carrying out their research activity in the field of W...

  19. Capturing Trust in Social Web Applications

    Science.gov (United States)

    O'Donovan, John

    The Social Web constitutes a shift in information flow from the traditional Web. Previously, content was provided by the owners of a website, for consumption by the end-user. Nowadays, these websites are being replaced by Social Web applications which are frameworks for the publication of user-provided content. Traditionally, Web content could be 'trusted' to some extent based on the site it originated from. Algorithms such as Google's PageRank were (and still are) used to compute the importance of a website, based on analysis of underlying link topology. In the Social Web, analysis of link topology merely tells us about the importance of the information framework which hosts the content. Consumers of information still need to know about the importance/reliability of the content they are reading, and therefore about the reliability of the producers of that content. Research into trust and reputation of the producers of information in the Social Web is still very much in its infancy. Every day, people are forced to make trusting decisions about strangers on the Web based on a very limited amount of information. For example, purchasing a product from an eBay seller with a 'reputation' of 99%, downloading a file from a peer-to-peer application such as BitTorrent, or allowing Amazon.com to tell you what products you will like. Even something as simple as reading comments on a Web-blog requires the consumer to make a trusting decision about the quality of that information. In all of these example cases, and indeed throughout the Social Web, there is a pressing demand for increased information upon which we can make trusting decisions. This chapter examines the diversity of sources from which trust information can be harnessed within Social Web applications and discusses a high-level classification of those sources. Three different techniques for harnessing and using trust from a range of sources are presented. These techniques are deployed in two sample Social Web

  20. Designing Adaptive Web Applications

    DEFF Research Database (Denmark)

    Dolog, Peter

    2008-01-01

    The unique characteristic of web applications is that they are supposed to be used by a much bigger and more diverse set of users and stakeholders. An example application area is e-Learning or business to business interaction. In an e-Learning environment, various users with different backgrounds use the e-Learning system to study a discipline. In business to business interaction, different requirements and parameters of exchanged business requests might be served by different services from third parties. Such applications require certain intelligence and a slightly different approach to design. Adaptive web-based applications aim to leave some of their features at the design stage in the form of variables which are dependent on several criteria. The resolution of the variables is called adaptation and can be seen from two perspectives: adaptation by humans to the changed requirements of stakeholders and dynamic system...

  1. Enhancing UCSF Chimera through web services.

    Science.gov (United States)

    Huang, Conrad C; Meng, Elaine C; Morris, John H; Pettersen, Eric F; Ferrin, Thomas E

    2014-07-01

    Integrating access to web services with desktop applications allows for an expanded set of application features, including performing computationally intensive tasks and convenient searches of databases. We describe how we have enhanced UCSF Chimera (http://www.rbvi.ucsf.edu/chimera/), a program for the interactive visualization and analysis of molecular structures and related data, through the addition of several web services (http://www.rbvi.ucsf.edu/chimera/docs/webservices.html). By streamlining access to web services, including the entire job submission, monitoring and retrieval process, Chimera makes it simpler for users to focus on their science projects rather than data manipulation. Chimera uses Opal, a toolkit for wrapping scientific applications as web services, to provide scalable and transparent access to several popular software packages. We illustrate Chimera's use of web services with an example workflow that interleaves use of these services with interactive manipulation of molecular sequences and structures, and we provide an example Python program to demonstrate how easily Opal-based web services can be accessed from within an application. Web server availability: http://webservices.rbvi.ucsf.edu/opal2/dashboard?command=serviceList. © The Author(s) 2014. Published by Oxford University Press on behalf of Nucleic Acids Research.
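    The abstract above mentions an example Python program showing how Opal-based web services can be accessed from within an application. The sketch below illustrates the general submit/poll/retrieve pattern such services expose; the endpoint paths, form fields, and JSON keys are assumptions for illustration, not the actual Opal API.

```python
# Submit a job to an Opal-style web service, poll until it finishes, and
# fetch the results. All endpoint details below are hypothetical.
import time

import requests

SERVICE_URL = "http://webservices.example.org/opal2/rest/blast"  # placeholder


def run_job(sequence: str) -> str:
    # Job submission typically returns an identifier for later polling.
    resp = requests.post(f"{SERVICE_URL}/submit", data={"input": sequence})
    resp.raise_for_status()
    job_id = resp.json()["jobID"]  # assumed response field

    # Poll the status resource until the service reports completion.
    while True:
        status = requests.get(f"{SERVICE_URL}/status/{job_id}").json()["status"]
        if status in ("DONE", "FAILED"):
            break
        time.sleep(5)

    # Retrieve whatever output the job produced.
    return requests.get(f"{SERVICE_URL}/results/{job_id}").text
```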

  2. A Web Service Framework for Economic Applications

    Directory of Open Access Journals (Sweden)

    Dan BENTA

    2010-01-01

    Full Text Available The Internet offers multiple solutions to link companies with their partners, customers or suppliers using IT solutions, including a special focus on Web services. Web services are able to solve problems related to the exchange of data between business partners, markets that can use each other's services, and incompatibility between IT applications. As web services are programs described, discovered and accessed based on XML vocabularies and Web protocols, Web services represent solutions for Web-based technologies for small and medium-sized enterprises (SMEs). This paper presents a web service framework for economic applications. A prototype of this IT solution using web services was also presented and implemented in a few companies from the IT, commerce and consulting fields, measuring the impact of the solution on the development of the business environment.

  3. The TDAQ Analytics Dashboard: a real-time web application for the ATLAS TDAQ control infrastructure

    International Nuclear Information System (INIS)

    Miotto, Giovanna Lehmann; Magnoni, Luca; Sloper, John Erik

    2011-01-01

    The ATLAS Trigger and Data Acquisition (TDAQ) infrastructure is responsible for filtering and transferring ATLAS experimental data from detectors to mass storage systems. It relies on a large, distributed computing system composed of thousands of software applications running concurrently. In such a complex environment, information sharing is fundamental for controlling application behavior, error reporting and operational monitoring. During data taking, the streams of messages sent by applications and the data published via information services are constantly monitored by experts to verify the correctness of running operations and to understand problematic situations. To simplify and improve system analysis and error detection tasks, we developed the TDAQ Analytics Dashboard, a web application that aims to collect, correlate and effectively visualize this real-time flow of information. The TDAQ Analytics Dashboard is composed of two main entities that reflect the twofold scope of the application. The first is the engine, a Java service that performs aggregation, processing and filtering of the real-time data stream and computes statistical correlations on sliding windows of time. The results are made available to clients via a simple web interface supporting an SQL-like query syntax. The second is the visualization, provided by an Ajax-based web application that runs in the client's browser. The dashboard approach allows information to be presented in a clear and customizable structure. Several types of interactive graphs are offered as widgets that can be dynamically added to and removed from visualization panels. Each widget acts as a client for the engine, querying the web interface to retrieve data with the desired criteria. In this paper we present the design, development and evolution of the TDAQ Analytics Dashboard. We also present the statistical analysis computed by the application in this first period of high energy data taking operations for the ATLAS experiment.
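    As a rough sketch of the client/engine interaction described above: each widget queries the engine's web interface with an SQL-like expression and renders the aggregated result. The URL, parameter name, and query dialect below are assumptions for illustration, not the actual TDAQ interface.

```python
# A widget-like client querying the dashboard engine's web interface.
import requests

ENGINE_URL = "http://tdaq-dashboard.example.cern.ch/engine/query"  # placeholder


def fetch_error_counts(window_minutes: int = 10):
    # Hypothetical SQL-like query over a sliding window of the message stream.
    query = ("SELECT application, COUNT(*) AS errors FROM messages "
             "WHERE severity = 'ERROR' GROUP BY application "
             f"LAST {window_minutes} MINUTES")
    resp = requests.get(ENGINE_URL, params={"q": query})
    resp.raise_for_status()
    return resp.json()  # e.g. [{"application": "ROS-1", "errors": 12}, ...]
```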

  4. IBM WebSphere Application Server 8.0 Administration Guide

    CERN Document Server

    Robinson, Steve

    2011-01-01

    IBM WebSphere Application Server 8.0 Administration Guide is a highly practical, example-driven tutorial. You will be introduced to WebSphere Application Server 8.0, and guided through configuration, deployment, and tuning for optimum performance. If you are an administrator who wants to get up and running with IBM WebSphere Application Server 8.0, then this book is not to be missed. Experience with WebSphere and Java would be an advantage, but is not essential.

  5. APFEL Web: a web-based application for the graphical visualization of parton distribution functions

    CERN Document Server

    Carrazza, Stefano; Palazzo, Daniele; Rojo, Juan

    2015-01-01

    We present APFEL Web, a web-based application designed to provide a flexible user-friendly tool for the graphical visualization of parton distribution functions (PDFs). In this note we describe the technical design of the APFEL Web application, motivating the choices and the framework used for the development of this project. We document the basic usage of APFEL Web and show how it can be used to provide useful input for a variety of collider phenomenological studies. Finally we provide some examples showing the output generated by the application.

  6. APFEL Web: a web-based application for the graphical visualization of parton distribution functions

    International Nuclear Information System (INIS)

    Carrazza, Stefano; Ferrara, Alfio; Palazzo, Daniele; Rojo, Juan

    2015-01-01

    We present APFEL Web, a Web-based application designed to provide a flexible user-friendly tool for the graphical visualization of parton distribution functions. In this note we describe the technical design of the APFEL Web application, motivating the choices and the framework used for the development of this project. We document the basic usage of APFEL Web and show how it can be used to provide useful input for a variety of collider phenomenological studies. Finally we provide some examples showing the output generated by the application. (note)

  7. Value of Information Web Application

    Science.gov (United States)

    2015-04-01

    their understanding of VoI attributes (source reliability, information content, and latency). The VoI web application emulates many features of a...only when using the Firefox web browser on those computers (Internet Explorer was not viable due to unchangeable user settings). During testing, the

  8. New nuclear data service at CNEA: retrieval of the update libraries from a local Web-Server

    International Nuclear Information System (INIS)

    Suarez, Patricia M.; Pepe, Maria E.; Sbaffoni, Maria M.

    2000-01-01

    A new on-line Nuclear Data Service was implemented at the National Atomic Energy Commission (CNEA) Web site. The information usually issued by the Nuclear Data Section of the IAEA (NDS-IAEA) on CD-ROM, as well as complementary libraries periodically downloaded from a mirror server of the NDS-IAEA Service located at IPEN, Brazil, is available on the new CNEA Web page. On the site, users can find numerical data on neutron, charged-particle, and photonuclear reactions, nuclear structure, and decay data, with related bibliographic information. This data server is permanently maintained and updated by CNEA staff members. This crew also offers assistance on the use and retrieval of nuclear data to local users. (author)

  9. A STUDY ON RANKING METHOD IN RETRIEVING WEB PAGES BASED ON CONTENT AND LINK ANALYSIS: COMBINATION OF FOURIER DOMAIN SCORING AND PAGERANK SCORING

    Directory of Open Access Journals (Sweden)

    Diana Purwitasari

    2008-01-01

    Full Text Available A ranking module is an important component of the search process, sorting through relevant pages. Since a collection of Web pages carries additional information inherent in the hyperlink structure of the Web, this information can be represented as a link score and combined with the usual information retrieval techniques of content scoring. In this paper we report our studies on a ranking score for Web pages that combines link analysis, PageRank Scoring, with content analysis, Fourier Domain Scoring. Our experiments use a collection of Web pages related to the subject of Statistics from Wikipedia, with the objectives of checking correctness and evaluating the performance of the combined ranking method. Evaluation of PageRank Scoring shows that the highest score does not always relate to Statistics. Since links within Wikipedia articles exist so that users are always one click away from more information on any point that has a link attached, it is possible that topics unrelated to Statistics are frequently mentioned in the collection. The combination method shows that giving the link score a weight proportional to the content score of Web pages does affect the retrieval results.
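    A minimal sketch of the weighted combination described above, assuming both scores have already been normalized to comparable ranges; the mixing weight alpha and the field names are illustrative, not values taken from the paper.

```python
# Blend a link-analysis score (PageRank) with a content score (Fourier
# Domain Scoring) and rank pages by the combined value.
def combined_score(pagerank: float, content_score: float, alpha: float = 0.5) -> float:
    # alpha = 1.0 ranks purely by links; alpha = 0.0 purely by content.
    return alpha * pagerank + (1.0 - alpha) * content_score


def rank_pages(pages: list[dict], alpha: float = 0.5) -> list[dict]:
    # Each page dict is assumed to carry "pr" and "fds" score fields.
    return sorted(pages,
                  key=lambda p: combined_score(p["pr"], p["fds"], alpha),
                  reverse=True)
```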

  10. A new measurement of workload in Web application reliability assessment

    Directory of Open Access Journals (Sweden)

    CUI Xia

    2015-02-01

    Full Text Available Web applications have become popular in various fields of social life, and it is becoming more and more important to study the reliability of Web applications. In this paper the definition of Web application failure is first brought out, followed by the definition of Web application reliability. By analyzing data in IIS server logs and selecting the corresponding usage and information delivery failure data, the paper studies the feasibility of Web application reliability assessment from the perspective of a Web software system based on IIS server logs. Because the usage of a Web site often has certain regularity, a new measurement of workload in Web application reliability assessment is proposed. In this method, the unit is removed by a weighted average technique, and the weights are assessed by setting an objective function and optimizing it. Finally an experiment was conducted for validation. The experiment result shows that the assessment of Web application reliability based on the new workload measure is better.
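    One plausible reading of the unit-removal step, sketched under explicit assumptions: each raw usage channel (hits, sessions, bytes, and so on) is divided by its own mean to make it dimensionless, and the channels are then combined with a weighted average whose weights would come from the optimization step. This is an illustration, not the paper's exact formulation.

```python
# Dimensionless, weighted workload measure over a series of time periods.
def workload(channels: dict[str, list[float]], weights: dict[str, float]) -> list[float]:
    names = list(channels)
    # Divide each channel by its mean so the unit cancels out.
    means = {n: sum(channels[n]) / len(channels[n]) for n in names}
    periods = len(channels[names[0]])
    return [sum(weights[n] * channels[n][t] / means[n] for n in names)
            for t in range(periods)]
```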

  11. A Secure Web Application Providing Public Access to High-Performance Data Intensive Scientific Resources - ScalaBLAST Web Application

    International Nuclear Information System (INIS)

    Curtis, Darren S.; Peterson, Elena S.; Oehmen, Chris S.

    2008-01-01

    This work presents the ScalaBLAST Web Application (SWA), a web-based application implemented using the PHP scripting language, MySQL DBMS, and Apache web server under a GNU/Linux platform. SWA is an application built as part of the Data Intensive Computing for Complex Biological Systems (DICCBS) project at the Pacific Northwest National Laboratory (PNNL). SWA delivers accelerated throughput of bioinformatics analysis via high-performance computing through a convenient, easy-to-use web interface. This approach greatly enhances emerging fields of study in biology such as ontology-based homology and multiple whole-genome comparisons which, in the absence of a tool like SWA, require a heroic effort to overcome the computational bottleneck associated with genome analysis. The current version of SWA includes a user account management system, a web-based user interface, and a backend process that generates the files necessary for the Internet scientific community to submit a ScalaBLAST parallel processing job on a dedicated cluster

  12. Using centrality to rank web snippets

    NARCIS (Netherlands)

    Jijkoun, V.; de Rijke, M.; Peters, C.; Jijkoun, V.; Mandl, T.; Müller, H.; Oard, D.W.; Peñas, A.; Petras, V.; Santos, D.

    2008-01-01

    We describe our participation in the WebCLEF 2007 task, targeted at snippet retrieval from web data. Our system ranks snippets based on a simple similarity-based centrality, inspired by the web page ranking algorithms. We experimented with retrieval units (sentences and paragraphs) and with the
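    A minimal sketch of similarity-based centrality for snippet ranking, with a toy term-overlap cosine similarity standing in for whatever similarity function the actual system used: each candidate snippet is scored by its summed similarity to all other candidates, so snippets central to the retrieved set rank first.

```python
# Rank snippets by their centrality within the candidate set.
from collections import Counter
from math import sqrt


def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)  # Counter returns 0 for missing terms
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0


def rank_snippets(snippets: list[str]) -> list[str]:
    vectors = [Counter(s.lower().split()) for s in snippets]
    centrality = [sum(cosine(v, w) for w in vectors if w is not v)
                  for v in vectors]
    order = sorted(range(len(snippets)), key=lambda i: centrality[i], reverse=True)
    return [snippets[i] for i in order]
```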

  13. Large-Scale Partial-Duplicate Image Retrieval and Its Applications

    Science.gov (United States)

    2016-04-23

    tree based image retrieval, a semantic-aware co-indexing algorithm is proposed to jointly embed two strong cues into the inverted indexes: 1) local... Final Report: Large-Scale Partial-Duplicate Image Retrieval and Its Applications, 23-Jan-2012 to 22-Jan-2016. Distribution Unlimited.

  14. Introduction to information retrieval

    CERN Document Server

    Manning, Christopher D; Schütze, Hinrich

    2008-01-01

    Class-tested and coherent, this textbook teaches classical and web information retrieval, including web search and the related areas of text classification and text clustering from basic concepts. It gives an up-to-date treatment of all aspects of the design and implementation of systems for gathering, indexing, and searching documents; methods for evaluating systems; and an introduction to the use of machine learning methods on text collections. All the important ideas are explained using examples and figures, making it perfect for introductory courses in information retrieval for advanced un

  15. BioPortal: enhanced functionality via new Web services from the National Center for Biomedical Ontology to access and use ontologies in software applications.

    Science.gov (United States)

    Whetzel, Patricia L; Noy, Natalya F; Shah, Nigam H; Alexander, Paul R; Nyulas, Csongor; Tudorache, Tania; Musen, Mark A

    2011-07-01

    The National Center for Biomedical Ontology (NCBO) is one of the National Centers for Biomedical Computing funded under the NIH Roadmap Initiative. Contributing to the national computing infrastructure, NCBO has developed BioPortal, a web portal that provides access to a library of biomedical ontologies and terminologies (http://bioportal.bioontology.org) via the NCBO Web services. BioPortal enables community participation in the evaluation and evolution of ontology content by providing features to add mappings between terms, to add comments linked to specific ontology terms and to provide ontology reviews. The NCBO Web services (http://www.bioontology.org/wiki/index.php/NCBO_REST_services) enable this functionality and provide a uniform mechanism to access ontologies from a variety of knowledge representation formats, such as Web Ontology Language (OWL) and Open Biological and Biomedical Ontologies (OBO) format. The Web services provide multi-layered access to the ontology content, from getting all terms in an ontology to retrieving metadata about a term. Users can easily incorporate the NCBO Web services into software applications to generate semantically aware applications and to facilitate structured data collection.
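    A hedged sketch of calling such REST services from Python: the search resource plus API-key parameter below follows BioPortal's published conventions, but the exact paths and response fields should be checked against the current NCBO documentation.

```python
# Query the BioPortal search service and list matching ontology terms.
import requests

API_ROOT = "https://data.bioontology.org"
API_KEY = "your-api-key"  # obtained from a BioPortal account


def search_terms(text: str) -> list[tuple[str, str]]:
    resp = requests.get(f"{API_ROOT}/search", params={"q": text, "apikey": API_KEY})
    resp.raise_for_status()
    # Each hit carries a term IRI and, usually, a preferred label.
    return [(hit["@id"], hit.get("prefLabel", "")) for hit in resp.json()["collection"]]
```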

  16. Life Cycle Project Plan Outline: Web Sites and Web-based Applications

    Science.gov (United States)

    This tool is a guideline for planning and checking for 508 compliance on web sites and web based applications. Determine which EIT components are covered or excepted, which 508 standards and requirements apply, and how to implement them.

  17. Determining Data Entry Points For Javascript-rich Web applications

    Directory of Open Access Journals (Sweden)

    George Maksimovich Noseevich

    2013-02-01

    Full Text Available The paper is devoted to the task of automatically crawling JavaScript-rich web applications for data entry points. A new technique is proposed which combines dynamic and static JavaScript code analysis. Testing the proposed technique on real-world web applications such as Twitter, Youtube and Reddit has confirmed its applicability to the analysis of modern web applications.

  18. Web-Scale Discovery Services Retrieve Relevant Results in Health Sciences Topics Including MEDLINE Content

    Directory of Open Access Journals (Sweden)

    Elizabeth Margaret Stovold

    2017-06-01

    Full Text Available A Review of: Hanneke, R., & O’Brien, K. K. (2016). Comparison of three web-scale discovery services for health sciences research. Journal of the Medical Library Association, 104(2), 109-117. http://dx.doi.org/10.3163/1536-5050.104.2.004 Abstract Objective – To compare the results of health sciences search queries in three web-scale discovery (WSD) services for relevance, duplicate detection, and retrieval of MEDLINE content. Design – Comparative evaluation and bibliometric study. Setting – Six university libraries in the United States of America. Subjects – Three commercial WSD services: Primo, Summon, and EBSCO Discovery Service (EDS). Methods – The authors collected data at six universities, including their own. They tested each of the three WSDs at two data collection sites. However, since one of the sites was using a legacy version of Summon that was due to be upgraded, data collected for Summon at this site were considered obsolete and excluded from the analysis. The authors generated three questions for each of six major health disciplines, then designed simple keyword searches to mimic typical student search behaviours. They captured the first 20 results from each query run at each test site, to represent the first “page” of results, giving a total of 2,086 search results. These were independently assessed for relevance to the topic. The authors resolved disagreements by discussion and calculated a kappa inter-observer score. They retained duplicate records within the results so that duplicate detection by the WSDs could be compared. They assessed MEDLINE coverage by the WSDs in several ways. Using precise strategies to generate a relevant set of articles, they conducted one search from each of the six disciplines in PubMed so that they could compare retrieval of MEDLINE content. These results were cross-checked against the first 20 results from the corresponding query in the WSDs. To aid investigation of overall

  19. Unipept web services for metaproteomics analysis.

    Science.gov (United States)

    Mesuere, Bart; Willems, Toon; Van der Jeugt, Felix; Devreese, Bart; Vandamme, Peter; Dawyndt, Peter

    2016-06-01

    Unipept is an open source web application that is designed for metaproteomics analysis with a focus on interactive data visualization. It is underpinned by a fast index built from UniProtKB and the NCBI taxonomy that enables quick retrieval of all UniProt entries in which a given tryptic peptide occurs. Unipept version 2.4 introduced web services that provide programmatic access to the metaproteomics analysis features. This enables integration of Unipept functionality in custom applications and data processing pipelines. The web services are freely available at http://api.unipept.ugent.be and are open sourced under the MIT license. Unipept@ugent.be Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  20. Developing web applications with Oracle ADF essentials

    CERN Document Server

    Vesterli, Sten E

    2013-01-01

    Developing Web Applications with Oracle ADF Essentials covers the basics of Oracle ADF and then works through more complex topics such as debugging and logging features and JAAS Security in JDeveloper as the reader gains more skills. This book will follow a tutorial approach, using a practical example, with the content and tasks getting harder throughout.""Developing Web Applications with Oracle ADF Essentials"" is for you if you want to build modern, user-friendly web applications for all kinds of data gathering, analysis, and presentations. You do not need to know any advanced HTML or JavaSc

  1. Modelling Safe Interface Interactions in Web Applications

    Science.gov (United States)

    Brambilla, Marco; Cabot, Jordi; Grossniklaus, Michael

    Current Web applications embed sophisticated user interfaces and business logic. The original interaction paradigm of the Web based on static content pages that are browsed by hyperlinks is, therefore, not valid anymore. In this paper, we advocate a paradigm shift for browsers and Web applications, that improves the management of user interaction and browsing history. Pages are replaced by States as basic navigation nodes, and Back/Forward navigation along the browsing history is replaced by a full-fledged interactive application paradigm, supporting transactions at the interface level and featuring Undo/Redo capabilities. This new paradigm offers a safer and more precise interaction model, protecting the user from unexpected behaviours of the applications and the browser.

  2. Domainwise Web Page Optimization Based On Clustered Query Sessions Using Hybrid Of Trust And ACO For Effective Information Retrieval

    Directory of Open Access Journals (Sweden)

    Dr. Suruchi Chawla

    2015-08-01

    Full Text Available Abstract In this paper a hybrid of Ant Colony Optimization (ACO) and trust has been used for domainwise web page optimization in clustered query sessions for effective Information Retrieval. The trust of a web page identifies its degree of relevance in satisfying the specific information need of the user. The trusted web pages, when optimized using pheromone updates in ACO, identify trusted colonies of web pages relevant to the user's information need in a given domain. Hence in this paper the hybrid of trust and ACO has been used on clustered query sessions for identifying more and more relevant documents in a given domain in order to better satisfy the information need of the user. An experiment was conducted on a data set of web query sessions to test the effectiveness of the proposed approach in three selected domains, Academics, Entertainment and Sports, and the results confirm the improvement in the precision of search results.
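    A toy illustration of the hybrid idea, with assumed parameter names: pheromone on each page evaporates over time and is reinforced in proportion to the page's trust score, so trusted, repeatedly selected pages accumulate relevance within a domain.

```python
# Trust-weighted pheromone update for pages selected in a query session.
def update_pheromone(pheromone: dict, selected_pages: list, trust: dict,
                     rho: float = 0.1, q: float = 1.0) -> None:
    # Evaporation: stale relevance decays on every page.
    for page in pheromone:
        pheromone[page] *= (1.0 - rho)
    # Deposit: selected pages gain pheromone scaled by their trust score.
    for page in selected_pages:
        pheromone[page] = pheromone.get(page, 0.0) + q * trust.get(page, 0.0)
```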

  3. Data mining approach to web application intrusions detection

    Science.gov (United States)

    Kalicki, Arkadiusz

    2011-10-01

    Web applications have become the most popular medium on the Internet. Their popularity and the ease of web application scripting languages and frameworks, together with careless development, result in a high number of web application vulnerabilities and a high number of attacks performed. Several types of attacks are possible because of improper input validation: SQL injection, cross-site scripting, Cross-Site Request Forgery (CSRF), web spam in blogs and others. In order to secure web applications, intrusion detection systems (IDS) and intrusion prevention systems (IPS) are being used. Intrusion detection systems are divided into two groups: misuse detection (traditional IDS) and anomaly detection. This paper presents a data mining based algorithm for anomaly detection. The principle of this method is the comparison of the incoming HTTP traffic with a previously built profile that contains a representation of the "normal" or expected web application usage sequence patterns. The frequent sequence patterns are found with the GSP algorithm. A previously presented detection method was rewritten and improved. Tests show that the software catches malicious requests, especially long attack sequences; results are quite good with medium-length sequences, while for short sequences it must be complemented with other methods.
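    A minimal sketch of the profile-based check, assuming the mined profile is a list of frequent request-type sequences: an incoming session is flagged anomalous when it matches none of the normal patterns as an order-preserving, gap-allowing subsequence, the matching notion used in sequence pattern mining.

```python
# Compare an incoming request sequence against mined "normal" patterns.
def is_subsequence(pattern: list, sequence: list) -> bool:
    # Order-preserving match that allows gaps between matched items.
    it = iter(sequence)
    return all(item in it for item in pattern)


def is_anomalous(request_sequence: list, normal_patterns: list) -> bool:
    # Anomalous if no frequent normal pattern occurs in the session.
    return not any(is_subsequence(p, request_sequence) for p in normal_patterns)
```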

  4. Development of Content Management System-based Web Applications

    OpenAIRE

    Souer, J.

    2012-01-01

    Web engineering is the application of systematic and quantifiable approaches (concepts, methods, techniques, tools) to cost-effective requirements analysis, design, implementation, testing, operation, and maintenance of high quality web applications. Over the past years, Content Management Systems (CMS) have emerged as an important foundation for the web engineering process. CMS can be defined as a tool for the creation, editing and management of web information in an integral way. A CMS appe...

  5. Automatic invariant detection in dynamic web applications

    NARCIS (Netherlands)

    Groeneveld, F.; Mesbah, A.; Van Deursen, A.

    2010-01-01

    The complexity of modern web applications increases as client-side JavaScript and dynamic DOM programming are used to offer a more interactive web experience. In this paper, we focus on improving the dependability of such applications by automatically inferring invariants from the client-side and

  6. A web services choreography scenario for interoperating bioinformatics applications

    Directory of Open Access Journals (Sweden)

    Cheung David W

    2004-03-01

    Full Text Available Abstract Background Very often genome-wide data analysis requires the interoperation of multiple databases and analytic tools. A large number of genome databases and bioinformatics applications are available through the web, but it is difficult to automate interoperation because: (1) the platforms on which the applications run are heterogeneous, (2) their web interface is not machine-friendly, (3) they use a non-standard format for data input and output, (4) they do not exploit standards to define application interface and message exchange, and (5) existing protocols for remote messaging are often not firewall-friendly. To overcome these issues, web services have emerged as a standard XML-based model for message exchange between heterogeneous applications. Web services engines have been developed to manage the configuration and execution of a web services workflow. Results To demonstrate the benefit of using web services over traditional web interfaces, we compare two implementations of HAPI, a gene expression analysis utility developed by the University of California San Diego (UCSD) that allows visual characterization of groups or clusters of genes based on the biomedical literature. This utility takes a set of microarray spot IDs as input and outputs a hierarchy of MeSH Keywords that correlates to the input and is grouped by Medical Subject Heading (MeSH) category. While the HTML output is easy for humans to visualize, it is difficult for computer applications to interpret semantically. To facilitate the capability of machine processing, we have created a workflow of three web services that replicates the HAPI functionality. These web services use document-style messages, which means that messages are encoded in an XML-based format. We compared three approaches to the implementation of an XML-based workflow: a hard-coded Java application, Collaxa BPEL Server and Taverna Workbench. The Java program functions as a web services engine and interoperates

  7. Leveraging Web Services in Providing Efficient Discovery, Retrieval, and Integration of NASA-Sponsored Observations and Predictions

    Science.gov (United States)

    Bambacus, M.; Alameh, N.; Cole, M.

    2006-12-01

    The Applied Sciences Program at NASA focuses on extending the results of NASA's Earth-Sun system science research beyond the science and research communities to contribute to national priority applications with societal benefits. By employing a systems engineering approach, supporting interoperable data discovery and access, and developing partnerships with federal agencies and national organizations, the Applied Sciences Program facilitates the transition from research to operations in national applications. In particular, the Applied Sciences Program identifies twelve national applications, listed at http://science.hq.nasa.gov/earth-sun/applications/, which can be best served by the results of NASA aerospace research and development of science and technologies. The ability to use and integrate NASA data and science results into these national applications results in enhanced decision support and significant socio-economic benefits for each of the applications. This paper focuses on leveraging the power of interoperability and specifically open standard interfaces in providing efficient discovery, retrieval, and integration of NASA's science research results. Interoperability (the ability to access multiple, heterogeneous geoprocessing environments, either local or remote by means of open and standard software interfaces) can significantly increase the value of NASA-related data by increasing the opportunities to discover, access and integrate that data in the twelve identified national applications (particularly in non-traditional settings). Furthermore, access to data, observations, and analytical models from diverse sources can facilitate interdisciplinary and exploratory research and analysis. To streamline this process, the NASA GeoSciences Interoperability Office (GIO) is developing the NASA Earth-Sun System Gateway (ESG) to enable access to remote geospatial data, imagery, models, and visualizations through open, standard web protocols. The gateway (online

  8. A Holistic Approach to Securing Web Applications

    OpenAIRE

    Stankovic, Srdjan; Simic, Dejan

    2010-01-01

    Protection of Web applications is an activity that requires constant monitoring of security threats as well as looking for solutions in this field. Since protection has moved from the lower layers of the OSI model to the application layer, and having in mind the fact that 75% of all attacks are performed at the application layer, special attention should be paid to the application layer. It is possible to improve protection of Web applications at the level of the system architecture by introdu...

  9. Technical Note: On The Usage and Development of the AWAKE Web Server and Web Applications

    CERN Document Server

    Berger, Dillon Tanner

    2017-01-01

    The purpose of this technical note is to give a brief explanation of the AWAKE Web Server, the current web applications it serves, and how to edit, maintain, and update the source code. The majority of this paper is dedicated to the development of the server and its web applications.

  10. JWIG: Yet Another Framework for Maintainable and Secure Web Applications

    DEFF Research Database (Denmark)

    Møller, Anders; Schwarz, Mathias Romme

    2009-01-01

    Although numerous frameworks for web application programming have been developed in recent years, writing web applications remains a challenging task. Guided by a collection of classical design principles, we propose yet another framework. It is based on a simple but flexible server-oriented architecture that coherently supports general aspects of modern web applications, including dynamic XML construction, session management, data persistence, caching, and authentication, but it also simplifies programming of server-push communication and integration of XHTML-based applications and XML-based web services. The resulting framework provides a novel foundation for developing maintainable and secure web applications.

  11. Quantifying retrieval bias in Web archive search

    NARCIS (Netherlands)

    Samar, Thaer; Traub, Myriam C.; van Ossenbruggen, Jacco; Hardman, Lynda; de Vries, Arjen P.

    2018-01-01

    A Web archive usually contains multiple versions of documents crawled from the Web at different points in time. One possible way for users to access a Web archive is through full-text search systems. However, previous studies have shown that these systems can induce a bias, known as the

  12. Two Algorithms for Web Applications Assessment

    Directory of Open Access Journals (Sweden)

    Stavros Ioannis Valsamidis

    2011-09-01

    Full Text Available The usage of web applications can be measured with the use of metrics. In an LMS, a typical web application, there are no appropriate metrics which would facilitate qualitative and quantitative measurement. The purpose of this paper is to propose the use of existing techniques in a different way, in order to analyze the log file of a typical LMS and deduce useful conclusions. Three metrics for course usage measurement are used. The paper also describes two algorithms, for course classification and for suggesting actions. The metrics and the algorithms were applied to Open eClass LMS tracking data of an academic institution. The results from 39 courses presented interesting insights. Although the case study concerns an LMS, the approach can also be applied to other web applications such as e-government, e-commerce, e-banking, blogs, etc.

  13. Directions for Web and E-Commerce Applications Security

    OpenAIRE

    Thuraisingham, Bhavani; Clifton, Chris; Gupta, Amar; Bertino, Elisa; Ferrari, Elena

    2003-01-01

    This paper provides directions for web and e-commerce application security. In particular, access control policies, workflow security, XML security and federated database security issues pertaining to the web and e-commerce applications are discussed.

  14. WE-E-BRB-11: RIVIEW: A Web-Based Viewer for Radiotherapy.

    Science.gov (United States)

    Apte, A; Wang, Y; Deasy, J

    2012-06-01

    Collaborations involving radiotherapy data collection, such as the recently proposed international radiogenomics consortium, require robust, web-based tools to facilitate reviewing treatment planning information. We present the architecture and prototype characteristics for a web-based radiotherapy viewer. The web-based environment developed in this work consists of the following components: 1) Import of DICOM/RTOG data: CERR was leveraged to import DICOM/RTOG data and to convert it to database-friendly RT objects. 2) Extraction and Storage of RT objects: The scan and dose distributions were stored as .png files per slice and view plane. The file locations were written to the MySQL database. Structure contours and DVH curves were written to the database as numeric data. 3) Web interfaces to query, retrieve and visualize the RT objects: The Web application was developed using HTML 5 and Ruby on Rails (RoR) technology following the MVC philosophy. The open source ImageMagick library was utilized to overlay scan, dose and structures. The application allows users to (i) QA the treatment plans associated with a study, (ii) Query and Retrieve patients matching anonymized ID and study, (iii) Review up to 4 plans simultaneously in 4 window panes, and (iv) Plot DVH curves for the selected structures and dose distributions. A subset of data for lung cancer patients was used to prototype the system. Five user accounts were created to have access to this study. The scans, doses, structures and DVHs for 10 patients were made available via the web application. A web-based system to facilitate QA and support Query, Retrieve and Visualization of RT data was prototyped. The RIVIEW system was developed using open source and free technology like MySQL and RoR. We plan to extend the RIVIEW system further to be useful in clinical trial data collection, outcomes research, cohort plan review and evaluation. © 2012 American Association of Physicists in Medicine.

  15. A contribution to semantic indexing and retrieval based on FCA - An application to song datasets

    OpenAIRE

    Codocedo , Victor; Lykourentzou , Ioanna; Napoli , Amedeo

    2012-01-01

    Semantic indexing and retrieval is an important research area, as the amount of information available on the Web keeps growing. In this paper, we introduce an original approach to semantic indexing and retrieval based on Formal Concept Analysis. The concept lattice is used as a semantic index and we propose an original algorithm for traversing the lattice and answering user queries. This framework has been used and evaluated on a song dataset.

  16. ANALYSIS OF WEB MINING APPLICATIONS AND BENEFICIAL AREAS

    Directory of Open Access Journals (Sweden)

    Khaleel Ahmad

    2011-10-01

    Full Text Available The main purpose of this paper is to study the process of Web mining techniques, their features, applications (e-commerce and e-business) and beneficial areas. Web mining has become more popular and is widely used in various application areas (such as business intelligence systems, e-commerce and e-business). E-commerce and e-business results are improved by the application of mining techniques such as data mining and text mining; among all the mining techniques, web mining is the most suitable.

  17. Bifröst: debugging web applications as a whole

    NARCIS (Netherlands)

    K.B. van der Vlist (Kevin)

    2013-01-01

    Even though web application development is supported by professional tooling, debugging support is lacking. If one starts to debug a web application, hardly any tooling support exists. Only the core components like server processes and a web browser are exposed. Developers need to

  18. Development of Content Management System-based Web Applications

    NARCIS (Netherlands)

    Souer, J.

    2012-01-01

    Web engineering is the application of systematic and quantifiable approaches (concepts, methods, techniques, tools) to cost-effective requirements analysis, design, implementation, testing, operation, and maintenance of high quality web applications. Over the past years, Content Management Systems

  19. Database and Expert Systems Applications

    DEFF Research Database (Denmark)

    Viborg Andersen, Kim; Debenham, John; Wagner, Roland

    This book constitutes the refereed proceedings of the 16th International Conference on Database and Expert Systems Applications, DEXA 2005, held in Copenhagen, Denmark, in August 2005. The 92 revised full papers presented together with 2 invited papers were carefully reviewed and selected from 390 submissions. The papers are organized in topical sections on workflow automation, database queries, data classification and recommendation systems, information retrieval in multimedia databases, Web applications, implementational aspects of databases, multimedia databases, XML processing, security, XML schemata, query evaluation, semantic processing, information retrieval, temporal and spatial databases, querying XML, organisational aspects of databases, natural language processing, ontologies, Web data extraction, semantic Web, data stream management, data extraction, distributed database systems...

  20. Web Application Obfuscation '-WAFsEvasionFiltersalert(Obfuscation)-'

    CERN Document Server

    Heiderich, Mario; Heyes, Gareth; Lindsay, David

    2010-01-01

    Web applications are used every day by millions of users, which is why they are one of the most popular vectors for attackers. Obfuscation of code has allowed hackers to take one attack and create hundreds, if not millions, of variants that can evade your security measures. Web Application Obfuscation takes a look at common Web infrastructure and security controls from an attacker's perspective, allowing the reader to understand the shortcomings of their security systems. Find out how an attacker would bypass different types of security controls, how these very security controls introduce new ty

  1. Migrating Multi-page Web Applications to Single-page AJAX Interfaces

    NARCIS (Netherlands)

    Mesbah, A.; Van Deursen, A.

    2006-01-01

    Recently, a new web development technique for creating interactive web applications, dubbed AJAX, has emerged. In this new model, the single-page web interface is composed of individual components which can be updated/replaced independently. With the rise of AJAX web applications classical

  2. Usage Of Asp.Net Ajax for Binus School Serpong Web Applications

    Directory of Open Access Journals (Sweden)

    Karto Iskandar

    2016-03-01

    Full Text Available Today web applications have become a necessity and many companies use them as a communication tool to keep in touch with their customers. The usage of web applications increases as the number of internet users has risen. By reason of Rich Internet Applications, desktop application developers have moved to web application development with AJAX technology. BINUS School Serpong is a Cambridge Curriculum based International School that uses a web application to access all information about the school. By using AJAX, the performance of the web application should be improved and bandwidth usage decreased. The problem that occurs at BINUS School Serpong is that not all parts of the web application use AJAX. This paper introduces the usage of AJAX in ASP.NET with the C# programming language in the BINUS School Serpong web application. It is expected that by using ASP.NET AJAX, BINUS School Serpong website performance will be faster because of reduced web page reloads. The methodology used in this paper is literature study. Results from this study prove that ASP.NET AJAX can be used easily and improves BINUS School Serpong website performance. The conclusion of this paper is that the implementation of ASP.NET AJAX improves the performance of the web application at BINUS School Serpong.

  3. General Aspects of some Causes of Web Application Vulnerabilities

    Directory of Open Access Journals (Sweden)

    Mironela Pîrnău

    2015-10-01

    Full Text Available Because web applications are complex software systems in constant evolution, they become real targets for hackers as they provide direct access to corporate or personal data. Web application security is supposed to represent an essential priority for organizations in order to protect sensitive customer data, or those of the employees of a company. Worldwide, there are many organizations that report the most common types of attacks on Web applications and methods for their prevention. While the paper is an overview, it puts forward several typical examples of web application vulnerabilities that are due to programming errors; these may be used by attackers to take unauthorized control over computers.

  4. Distributed Systems and Applications of Information Filtering and Retrieval

    CERN Document Server

    Giuliani, Alessandro; Semeraro, Giovanni; DART 2012

    2014-01-01

    This volume focuses on new challenges in distributed Information Filtering and Retrieval. It collects invited chapters and extended research contributions from the special session on Information Filtering and Retrieval: Novel Distributed Systems and Applications (DART) of the 4th International Conference on Knowledge Discovery and Information Retrieval (KDIR 2012), held in Barcelona, Spain, on 4-7 October 2012. The main focus of DART was to discuss and compare suitable novel solutions based on intelligent techniques and applied to real-world applications. The chapters of this book present a comprehensive review of related works and the state of the art. Authors, both practitioners and researchers, shared their results on several topics such as "Multi-Agent Systems", "Natural Language Processing", "Automatic Advertisement", "Customer Interaction Analytics", "Opinion Mining". Contributions have been carefully reviewed by experts in the area, who also gave useful suggestions to improve the quality of the volume.

  5. PaaS for web applications with OpenShift Origin

    OpenAIRE

    Lossent, A; Rodriguez Peon, A; Wagner, A

    2017-01-01

    The CERN Web Frameworks team has deployed OpenShift Origin to facilitate the deployment of web applications and to improve efficiency in terms of computing resource usage. OpenShift leverages Docker containers and Kubernetes orchestration to provide a Platform-as-a-Service solution oriented towards web applications. We review use cases and how OpenShift was integrated with other services such as source control, web site management and authentication services.

  6. PaaS for web applications with OpenShift Origin

    Science.gov (United States)

    Lossent, A.; Rodriguez Peon, A.; Wagner, A.

    2017-10-01

    The CERN Web Frameworks team has deployed OpenShift Origin to facilitate the deployment of web applications and to improve efficiency in terms of computing resource usage. OpenShift leverages Docker containers and Kubernetes orchestration to provide a Platform-as-a-Service solution oriented towards web applications. We review use cases and how OpenShift was integrated with other services such as source control, web site management and authentication services.

  7. Using ChEMBL web services for building applications and data processing workflows relevant to drug discovery.

    Science.gov (United States)

    Nowotka, Michał M; Gaulton, Anna; Mendez, David; Bento, A Patricia; Hersey, Anne; Leach, Andrew

    2017-08-01

    ChEMBL is a manually curated database of bioactivity data on small drug-like molecules, used by drug discovery scientists. Among many access methods, a REST API provides programmatic access, allowing the remote retrieval of ChEMBL data and its integration into other applications. This approach allows scientists to move from a world where they go to the ChEMBL web site to search for relevant data, to one where ChEMBL data can be simply integrated into their everyday tools and work environment. Areas covered: This review highlights some of the audiences who may benefit from using the ChEMBL API, and the goals they can address, through the description of several use cases. The examples cover a team communication tool (Slack), a data analytics platform (KNIME), batch job management software (Luigi) and Rich Internet Applications. Expert opinion: The advent of web technologies, cloud computing and microservice-oriented architectures has made REST APIs an essential ingredient of modern software development models. The widespread availability of tools consuming RESTful resources has made them useful for many groups of users. The ChEMBL API is a valuable resource of drug discovery bioactivity data for professional chemists, chemistry students, data scientists, scientific and web developers.
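    A small example of the programmatic access described above; the resource layout and field names follow the ChEMBL REST API as documented at the time of writing and should be verified against the current service.

```python
# Fetch a molecule record from the ChEMBL REST API and pick out a few fields.
import requests

BASE = "https://www.ebi.ac.uk/chembl/api/data"


def molecule_info(chembl_id: str) -> dict:
    resp = requests.get(f"{BASE}/molecule/{chembl_id}.json")
    resp.raise_for_status()
    mol = resp.json()
    return {
        "pref_name": mol.get("pref_name"),
        "max_phase": mol.get("max_phase"),
        "smiles": (mol.get("molecule_structures") or {}).get("canonical_smiles"),
    }


# Example: molecule_info("CHEMBL25") should return the record for aspirin.
```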

  8. JavaScript Web Applications

    CERN Document Server

    MacCaw, Alex

    2011-01-01

    Building rich JavaScript applications that bring a desktop experience to the Web requires moving state from the server to the client side-not a simple task. This hands-on book takes proficient JavaScript developers through all the steps necessary to create state-of-the-art applications, including structure, templating, frameworks, communicating with the server, and many other issues. Throughout the book, you'll work with real-world example applications to help you grasp the concepts involved. Learn how to create JavaScript applications that offer a more responsive and improved experience. U

  9. FedWeb Greatest Hits: Presenting the New Test Collection for Federated Web Search

    NARCIS (Netherlands)

    Demeester, Thomas; Trieschnigg, Rudolf Berend; Zhou, Ke; Nguyen, Dong-Phuong; Hiemstra, Djoerd

    This paper presents 'FedWeb Greatest Hits', a large new test collection for research in web information retrieval. As a combination and extension of the datasets used in the TREC Federated Web Search Track, this collection opens up new research possibilities on federated web search challenges, as

  10. Web Services for public cosmological surveys: the VVDS-CDFS application

    Science.gov (United States)

    Paioro, L.; Garilli, B.; Le Brun, V.; Franzetti, P.; Fumana, M.; Scodeggio, M.

    2007-08-01

    Cosmological surveys (like VVDS, GOODS, DEEP2, COSMOS, etc.) aim at providing a complete census of the universe over a broad redshift range. Often different pieces of information are gathered with different instruments (e.g., spectrographs, HST, X-ray telescopes, etc.) and it is only by correctly assembling and easily manipulating such wide sets of data that astronomers can attempt to describe the universe; many different scientific goals can be tackled by grouping and filtering the different data sets. When dealing with the huge databases resulting from public cosmological surveys, what is needed is: (a) a versatile system of queries, to allow searches by different parameters (like redshift, magnitude, colors, etc.) according to the specific scientific goal to be tackled; (b) a cross-matching system to verify or redefine the identification of the sources; and (c) a data products retrieving system to download data related images and spectra. The Virtual Observatory Alliance defines a set of services which can satisfy the needs described above, exploiting Web Services technology. Having in mind the exploitation of cosmological surveys, we have implemented what we consider the most fundamental VO Web Services for our scientific interests: ConeSearch (retrieves physical data values from a cone centered on one point in the sky - the simplest query), SkyNode (allows filtering on the physical quantities in the database in order to select a well defined data subset), SIAP (retrieves all the images contained in a sky region of interest), and SSAP (retrieves 1D spectra). Our testing bench is the VVDS-CDFS data set, made public in 2004, which contains photometric and spectroscopic information for 1599 sources (Le Fèvre et al., 2004, A&A, 428, 1043). On this data set, we have implemented and published on the US NVO registry the first three services mentioned above, to demonstrate the viability of this approach and its usefulness to the astronomical community. Implementation of SSAP
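    A sketch of querying a Cone Search service like the one described: the IVOA Simple Cone Search protocol takes a position (RA, DEC) and a search radius SR, all in decimal degrees, and returns a VOTable of matching sources. The base URL is a placeholder, not the actual VVDS-CDFS endpoint.

```python
# Issue a Simple Cone Search request and return the VOTable response.
import requests

CONESEARCH_URL = "http://example.org/vvds-cdfs/conesearch"  # placeholder


def cone_search(ra_deg: float, dec_deg: float, radius_deg: float) -> str:
    resp = requests.get(CONESEARCH_URL,
                        params={"RA": ra_deg, "DEC": dec_deg, "SR": radius_deg})
    resp.raise_for_status()
    return resp.text  # VOTable XML listing sources inside the cone
```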

  11. Web application for monitoring mainframe computer, Linux operating systems and application servers

    OpenAIRE

    Dimnik, Tomaž

    2016-01-01

    This work presents the idea and the realization of a web application for monitoring the operation of a mainframe computer, servers with the Linux operating system, and application servers. The web application is intended for administrators of these systems, as an aid to better understand the current state, load and operation of the individual components of the server systems.

  12. AN EFFICIENT WEB PERSONALIZATION APPROACH TO DISCOVER USER INTERESTED DIRECTORIES

    Directory of Open Access Journals (Sweden)

    M. Robinson Joel

    2014-04-01

    Full Text Available Web Usage Mining is the application of data mining techniques used to retrieve web usage from web proxy log files. Web Usage Mining consists of three major stages: preprocessing, clustering and pattern analysis. This paper explains each of these stages in detail. In the proposed approach, web directories are discovered based on the user's interest. The web proxy log file undergoes a preprocessing phase to improve the quality of the data. A fuzzy clustering algorithm is used to group users and sessions into disjoint clusters. In this paper, an effective approach is presented for Web personalization based on an Advanced Apriori algorithm, which is used to select the user-interested web directories. The proposed method is compared with existing web personalization methods like Objective Probabilistic Directory Miner (OPDM), Objective Community Directory Miner (OCDM) and Objective Clustering and Probabilistic Directory Miner (OCPDM). The results show that the proposed approach provides better results than the aforementioned existing approaches. Finally, an application is developed with the user-interested directories and web usage details.
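    A toy sketch of the selection step under stated assumptions: after sessions have been clustered, directories whose support within a cluster clears a minimum threshold are kept as user-interested directories. The data structures and threshold are illustrative; they are not the paper's Advanced Apriori algorithm.

```python
# Keep directories frequent within a cluster of sessions; this is the
# frequent-1-itemset step an Apriori-style miner would start from.
from collections import Counter


def interested_directories(sessions: list[set], min_support: float) -> list[str]:
    counts = Counter(d for session in sessions for d in session)
    n = len(sessions)
    return sorted(d for d, c in counts.items() if c / n >= min_support)
```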

  13. Engineering semantic-based interactive multi-device web applications

    NARCIS (Netherlands)

    Bellekens, P.A.E.; Sluijs, van der K.A.M.; Aroyo, L.M.; Houben, G.J.P.M.; Baresi, L.; Fraternali, P.; Houben, G.J.

    2007-01-01

    To build high-quality personalized Web applications, developers have to deal with a number of complex problems. We look at the growing class of personalized Web applications that share three characteristic challenges. Firstly, the semantic problem of how to enable content reuse and integration.

  14. Web application for recording learners’ mouse trajectories and retrieving their study logs for data analysis

    Directory of Open Access Journals (Sweden)

    Yoshinori Miyazaki

    2012-03-01

    Full Text Available With the accelerated implementation of e-learning systems in educational institutions, it has become possible to record learners’ study logs in recent years. It must be admitted that little research has been conducted upon the analysis of the study logs that are obtained. In addition, there is no software that traces the mouse movements of learners during their learning processes, which the authors believe would enable teachers to better understand their students’ behaviors. The objective of this study is to develop a Web application that records students’ study logs, including their mouse trajectories, and to devise an IR tool that can summarize such diversified data. The results of an experiment are also scrutinized to provide an analysis of the relationship between learners’ activities and their study logs.

  15. Memory versus logic: two models of organizing information and their influences on web retrieval strategies

    Directory of Open Access Journals (Sweden)

    Teresa Numerico

    2008-07-01

    Full Text Available We can find the first anticipation of the World Wide Web hypertextual structure in Bush paper of 1945, where he described a “selection” and storage machine called the Memex, capable of keeping the useful information of a user and connecting it to other relevant material present in the machine or added by other users. We will argue that Vannevar Bush, who conceived this type of machine, did it because its involvement with analogical devices. During the 1930s, in fact, he invented and built the Differential Analyzer, a powerful analogue machine, used to calculate various relevant mathematical functions. The model of the Memex is not the digital one, because it relies on another form of data representation that emulates more the procedures of memory than the attitude of the logic used by the intellect. Memory seems to select and arrange information according to association strategies, i.e., using analogies and connections that are very often arbitrary, sometimes even chaotic and completely subjective. The organization of information and the knowledge creation process suggested by logic and symbolic formal representation of data is deeply different from the former one, though the logic approach is at the core of the birth of computer science (i.e., the Turing Machine and the Von Neumann Machine. We will discuss the issues raised by these two “visions” of information management and the influences of the philosophical tradition of the theory of knowledge on the hypertextual organization of content. We will also analyze all the consequences of these different attitudes with respect to information retrieval techniques in a hypertextual environment, as the web. Our position is that it necessary to take into accounts the nature and the dynamic social topology of the network when we choose information retrieval methods for the network; otherwise, we risk creating a misleading service for the end user of web search tools (i.e., search engines.

  16. Integrating Web Services into Map Image Applications

    National Research Council Canada - National Science Library

    Tu, Shengru

    2003-01-01

    Web services have been opening a wide avenue for software integration. In this paper, we have reported our experiments with three applications that are built by utilizing and providing web services for Geographic Information Systems (GIS...

  17. Invariant-Based Automatic Testing of Modern Web Applications

    NARCIS (Netherlands)

    Mesbah, A.; Van Deursen, A.; Roest, D.

    2011-01-01

    AJAX-based Web 2.0 applications rely on stateful asynchronous client/server communication, and client-side run-time manipulation of the DOM tree. This not only makes them fundamentally different from traditional web applications, but also more error-prone and harder to test. We propose a method for
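
    The abstract is truncated, but the core idea named in the title, checking invariants over the DOM states a crawler reaches, can be sketched generically. The invariants and snapshots below are illustrative, not those of the authors' tool:

    ```python
    # Sketch of the general idea (not the authors' tool): express invariants as
    # predicates over DOM snapshots and check every state reached while crawling
    # an AJAX application.
    from typing import Callable, List, Tuple

    Invariant = Tuple[str, Callable[[str], bool]]

    invariants: List[Invariant] = [
        ("no stray markup", lambda dom: "<undefined>" not in dom),
        ("single error container", lambda dom: dom.count('id="error"') <= 1),
    ]

    def check_state(dom: str) -> List[str]:
        """Return the names of all invariants violated by this DOM snapshot."""
        return [name for name, holds in invariants if not holds(dom)]

    # Usage: for each state the crawler reaches, report violations.
    for snapshot in ['<div id="error"></div><div id="error"></div>']:
        print(check_state(snapshot))   # ['single error container']
    ```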

  18. JWIG: Yet Another Framework for Maintainable and Secure Web Applications

    DEFF Research Database (Denmark)

    Møller, Anders; Schwarz, Mathias Romme

    2009-01-01

    Although numerous frameworks for web application programming have been developed in recent years, writing web applications remains a challenging task. Guided by a collection of classical design principles, we propose yet another framework. It is based on a simple but flexible server-oriented arch...... services. The resulting framework provides a novel foundation for developing maintainable and secure web applications....

  19. X-Switch: An Efficient , Multi-User, Multi-Language Web Application Server

    Directory of Open Access Journals (Sweden)

    Mayumbo Nyirenda

    2010-07-01

    Full Text Available Web applications are usually installed on and accessed through a Web server. For security reasons, these Web servers generally provide very few privileges to Web applications, defaulting to executing them in the realm of a guest account. In addition, performance often is a problem as Web applications may need to be reinitialised with each access. Various solutions have been designed to address these security and performance issues, mostly independently of one another, but most have been language or system-specific. The X-Switch system is proposed as an alternative Web application execution environment, with more secure user-based resource management, persistent application interpreters and support for arbitrary languages/interpreters. Thus it provides a general-purpose environment for developing and deploying Web applications. The X-Switch system's experimental results demonstrated that it can achieve a high level of performance. Furthermore it was shown that X-Switch can provide functionality matching that of existing Web application servers but with the added benefit of multi-user support. Finally the X-Switch system showed that it is feasible to completely separate the deployment platform from the application code, thus ensuring that the developer does not need to modify his/her code to make it compatible with the deployment platform.

  20. University Presentation to Potential Students Using Web 2.0 Environments

    Directory of Open Access Journals (Sweden)

    Andrius Eidimtas

    2013-02-01

    Full Text Available Choosing what to study is, for school graduates, a compound and multi-stage process (Chapman, 1981; Hossler et al., 1999; Brennan, 2001; Shankle, 2009). In the information retrieval stage, future students have to gather and assimilate actual information and form a list of possible higher education institutions. Nowadays modern internet technologies enable universities to create conditions for attractive and interactive information retrieval. User-friendliness and accessibility of Web 2.0-based environments attract more young people to search for information in the web. Western universities noticed the great potential of Web 2.0 for information dissemination back in 2007. Meanwhile, Lithuanian universities began using Web 2.0 to assemble virtual communities only in 2010 (Valinevičienė, 2010). Purpose—to disclose possibilities to present universities to school graduates in Web 2.0 environments. Design/methodology/approach—a case study using the methods of scientific literature analysis, observation and quantitative content analysis. Findings—referring to the information retrieval types and the particularity of information retrieval by school graduates disclosed in the analysis of scientific literature, it has been identified that 76 per cent of Lithuanian universities apply at least one website created on the basis of Web 2.0 technology for their official presentation. The variety of Web 2.0 tools being used ranges only from 1 to 6 different tools, while scientific literature suggests more possibilities to apply Web 2.0 environments. Research limitations/implications—the empirical part of the case study is contextualized for Lithuania; however, the theoretical construct of possibilities to present universities in Web 2.0 environments can be used for analyzing the presentation of foreign universities in Web 2.0 environments. Practical implications—the work can become the recommendation to develop possibilities for Lithuanian

  2. Neutralizing SQL Injection Attack Using Server Side Code Modification in Web Applications

    OpenAIRE

    Dalai, Asish Kumar; Jena, Sanjay Kumar

    2017-01-01

    Reports on web application security risks show that SQL injection is the top most vulnerability. The journey of static to dynamic web pages leads to the use of database in web applications. Due to the lack of secure coding techniques, SQL injection vulnerability prevails in a large set of web applications. A successful SQL injection attack imposes a serious threat to the database, web application, and the entire web server. In this article, the authors have proposed a novel method for prevent...

  3. Developing Web Applications

    CERN Document Server

    Moseley, Ralph

    2007-01-01

    Building applications for the Internet is a complex and fast-moving field which utilizes a variety of continually evolving technologies. Whether your perspective is from the client or server side, there are many languages to master - X(HTML), JavaScript, PHP, XML and CSS to name but a few. These languages have to work together cleanly, logically and in harmony with the systems they run on, and be compatible with any browsers with which they interact. Developing Web Applications presents script writing and good programming practice but also allows students to see how the individual technologi

  4. A Web Based Financial and Accounting Software Application

    Directory of Open Access Journals (Sweden)

    Doru E. TILIUTE

    2010-01-01

    Full Text Available Client-server applications have become more attractive in comparison with their desktop counterparts due to some incontestable advantages. Among client-server applications, some use the Web environment, providing full access from anywhere and at any time to all application features. The present work presents the first results in the development of a web-based financial and accounting application using open-source technologies and programming languages (Apache, MySQL, PHP and JavaScript).

  5. Development of a Web-based financial application System

    Science.gov (United States)

    Hasan, M. R.; Ibrahimy, M. I.; Motakabber, S. M. A.; Ferdaus, M. M.; Khan, M. N. H.; Mostafa, M. G.

    2013-12-01

    The paper describes a technique to develop a web-based financial system, following the latest technology and business needs. In the development of a web-based application, user friendliness and technology are both very important. The ASP.NET MVC 4 platform and SQL Server 2008 were used for the development of the web-based financial system. The entry system and report monitoring of the application are shown to be user friendly. This paper also highlights the critical situations of development, which will help in developing a quality product.

  6. Continuous Integration in PHP web applications development

    OpenAIRE

    Hujer, Martin

    2011-01-01

    This work deals with continuous integration of web applications, especially those in PHP language. The main objective is the selection of the server for continuous integration, its deployment and configuration for continuous integration of PHP web applications. The first chapter describes the concept of continuous integration and its individual techniques. The second chapter deals with the choice of server for continuous integration and its basic settings. The third chapter contains an overvi...

  7. Web application development with Laravel PHP Framework version 4

    OpenAIRE

    Armel, Jamal

    2014-01-01

    The purpose of this thesis work was to learn a new PHP framework and use it efficiently to build an eCommerce web application for a small start-up freelancing company that will let potential customers check products by category and place orders securely. To fulfil this set of requirements, a system consisting of a web application with a backend was designed and implemented using built-in Laravel features such as Composer, Eloquent, Blade and Artisan, and a WAMP stack. The web application wa...

  8. Real-time web application development with Vert.x 2.0

    CERN Document Server

    Parviainen, Tero

    2013-01-01

    A quick, clear, and concise tutorial-guide-based approach that helps you to develop a web application based on Vert.x.Real-time Web Application Development with Vert.x is written for web developers who want to take the next step and dive into real-time web application development.This book uses JavaScript (and some Java) to introduce the Vert.x platform, so basic JavaScript knowledge is expected. If you're planning to write your applications using some of the other Vert.x languages, all the techniques and concepts will translate to them directly. All you need to do is refer to the Vert.x API r

  9. Sigma: Web Retrieval Interface for Nuclear Reaction Data

    International Nuclear Information System (INIS)

    Pritychenko, B.; Sonzogni, A.A.

    2008-01-01

    The authors present Sigma, a Web-rich application which provides user-friendly access in processing and plotting of the evaluated and experimental nuclear reaction data stored in the ENDF-6 and EXFOR formats. The main interface includes browsing using a periodic table and a directory tree, basic and advanced search capabilities, interactive plots of cross sections, angular distributions and spectra, comparisons between evaluated and experimental data, computations between different cross section sets. Interactive energy-angle, neutron cross section uncertainties plots and visualization of covariance matrices are under development. Sigma is publicly available at the National Nuclear Data Center website at www.nndc.bnl.gov/sigma

  10. SPIDERGL: A GRAPHICS LIBRARY FOR 3D WEB APPLICATIONS

    Directory of Open Access Journals (Sweden)

    M. Di Benedetto

    2012-09-01

    Full Text Available The recent introduction of the WebGL API for leveraging the power of 3D graphics accelerators within Web browsers opens the possibility to develop advanced graphics applications without the need for an ad-hoc plug-in. There are several contexts in which this new technology can be exploited to enhance user experience and access to data, such as e-commerce applications, games and, in particular, Cultural Heritage. In fact, it is now possible to use the Web platform to present virtual reconstruction hypotheses of the ancient past, to show detailed 3D models of artefacts of interest to a wide public, and to create virtual museums. We introduce SpiderGL, a JavaScript library for developing 3D graphics Web applications. SpiderGL provides data structures and algorithms to ease the use of WebGL, to define and manipulate shapes, to import 3D models in various formats, and to handle asynchronous data loading. We show the potential of this novel library with a number of demo applications and give details about its future uses in the context of Cultural Heritage applications.

  11. COEUS: "semantic web in a box" for biomedical applications.

    Science.gov (United States)

    Lopes, Pedro; Oliveira, José Luís

    2012-12-17

    As the "omics" revolution unfolds, the growth in data quantity and diversity is bringing about the need for pioneering bioinformatics software, capable of significantly improving the research workflow. To cope with these computer science demands, biomedical software engineers are adopting emerging semantic web technologies that better suit the life sciences domain. The latter's complex relationships are easily mapped into semantic web graphs, enabling a superior understanding of collected knowledge. Despite increased awareness of semantic web technologies in bioinformatics, their use is still limited. COEUS is a new semantic web framework, aiming at a streamlined application development cycle and following a "semantic web in a box" approach. The framework provides a single package including advanced data integration and triplification tools, base ontologies, a web-oriented engine and a flexible exploration API. Resources can be integrated from heterogeneous sources, including CSV and XML files or SQL and SPARQL query results, and mapped directly to one or more ontologies. Advanced interoperability features include REST services, a SPARQL endpoint and LinkedData publication. These enable the creation of multiple applications for web, desktop or mobile environments, and empower a new knowledge federation layer. The platform, targeted at biomedical application developers, provides a complete skeleton ready for rapid application deployment, enhancing the creation of new semantic information systems. COEUS is available as open source at http://bioinformatics.ua.pt/coeus/.

  12. Web services as applications' integration tool: QikProp case study.

    Science.gov (United States)

    Laoui, Abdel; Polyakov, Valery R

    2011-07-15

    Web services are a new technology that enables the integration of applications running on different platforms, primarily by using XML to enable communication among different computers over the Internet. A large number of applications were designed as stand-alone systems before the concept of Web services was introduced, and it is a challenge to integrate them into larger computational networks. A generally applicable method of wrapping stand-alone applications into Web services was developed and is described. To test the technology, it was applied to QikProp for DOS (Windows). Although the performance of the application did not change when it was delivered as a Web service, this form of deployment offered several advantages such as simplified and centralized maintenance, a smaller number of licenses, and practically no training for the end user. Because by using the described approach almost any legacy application can be wrapped as a Web service, this form of delivery may be recommended as a global alternative to traditional deployment solutions. Copyright © 2011 Wiley Periodicals, Inc.
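
    The article's wrapping method is not reproduced here; the following sketch only illustrates the general pattern of exposing a stand-alone command-line program over HTTP. The executable name and endpoint are hypothetical:

    ```python
    # Illustrative wrapper (names hypothetical): expose a legacy command-line
    # program as a web service so remote clients can call it over HTTP.
    import subprocess
    import tempfile
    from flask import Flask, request, jsonify

    app = Flask(__name__)

    @app.route("/run", methods=["POST"])
    def run_legacy_app():
        # Write the uploaded input to a temporary file the legacy program can read.
        with tempfile.NamedTemporaryFile(suffix=".in", delete=False) as f:
            f.write(request.data)
            infile = f.name
        # "legacyapp" stands in for the wrapped executable (e.g. QikProp).
        result = subprocess.run(["legacyapp", infile],
                                capture_output=True, text=True, timeout=60)
        return jsonify(stdout=result.stdout, returncode=result.returncode)

    if __name__ == "__main__":
        app.run()
    ```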

  13. Web-page Prediction for Domain Specific Web-search using Boolean Bit Mask

    OpenAIRE

    Sinha, Sukanta; Duttagupta, Rana; Mukhopadhyay, Debajyoti

    2012-01-01

    Search Engine is a Web-page retrieval tool. Nowadays Web searchers rely on efficient search engines to make the best use of their time. To improve the performance of the search engine, we introduce a unique mechanism which will give Web searchers more prominent search results. In this paper, we discuss a domain-specific Web search prototype which generates the predicted Web-page list for a user-given search string using a Boolean bit mask.
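
    As an illustration of the bit-mask idea (the paper's prototype is not reproduced), each domain can be assigned one bit, each page carries the OR of its domains' bits, and a query mask then selects matching pages with a single bitwise AND:

    ```python
    # Toy Boolean bit-mask filter for domain-specific page prediction.
    DOMAINS = {"sports": 1 << 0, "health": 1 << 1, "finance": 1 << 2}

    pages = {
        "page_a": DOMAINS["sports"] | DOMAINS["health"],
        "page_b": DOMAINS["finance"],
        "page_c": DOMAINS["health"],
    }

    def predicted_pages(query_domains):
        mask = 0
        for d in query_domains:
            mask |= DOMAINS[d]
        # A page matches if it shares at least one domain bit with the query.
        return [p for p, bits in pages.items() if bits & mask]

    print(predicted_pages(["health"]))   # ['page_a', 'page_c']
    ```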

  14. SIRW: A web server for the Simple Indexing and Retrieval System that combines sequence motif searches with keyword searches.

    Science.gov (United States)

    Ramu, Chenna

    2003-07-01

    SIRW (http://sirw.embl.de/) is a World Wide Web interface to the Simple Indexing and Retrieval System (SIR) that is capable of parsing and indexing various flat file databases. In addition it provides a framework for doing sequence analysis (e.g. motif pattern searches) for selected biological sequences through keyword search. SIRW is an ideal tool for the bioinformatics community for searching as well as analyzing biological sequences of interest.
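
    A toy sketch of the combined keyword-plus-motif search that SIRW performs conceptually: first narrow records by keyword, then scan the selected sequences with a regular-expression motif. The records and motif below are invented for illustration:

    ```python
    # Keyword search followed by a regex motif scan over the hits.
    import re

    records = {
        "P001": {"keywords": "kinase human", "seq": "MKTLLLTLVVVTIVCLDLGYT"},
        "P002": {"keywords": "phosphatase yeast", "seq": "MSDNGPQNQRNAPRITFGGP"},
    }

    def search(keyword: str, motif: str):
        pattern = re.compile(motif)
        for acc, rec in records.items():
            if keyword in rec["keywords"] and pattern.search(rec["seq"]):
                yield acc

    # A made-up motif: a run of three leucines.
    print(list(search("kinase", "L{3}")))   # ['P001']
    ```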

  15. Web Application Design Using Server-Side JavaScript

    Energy Technology Data Exchange (ETDEWEB)

    Hampton, J.; Simons, R.

    1999-02-01

    This document describes the application design philosophy for the Comprehensive Nuclear Test Ban Treaty Research & Development Web Site. This design incorporates object-oriented techniques to produce a flexible and maintainable system of applications that support the web site. These techniques will be discussed at length along with the issues they address. The overall structure of the applications and their relationships with one another will also be described. The current problems and future design changes will be discussed as well.

  16. Development and challenges of using web-based GIS for health applications

    DEFF Research Database (Denmark)

    Gao, Sheng; Mioc, Darka; Boley, Harold

    2011-01-01

    Web-based GIS is increasingly used in health applications. It has the potential to provide critical information in a timely manner, support health care policy development, and educate decision makers and the general public. This paper describes the trends and recent development of health...... applications using a Web-based GIS. Recent progress on the database storage and geospatial Web Services has advanced the use of Web-based GIS for health applications, with various proprietary software, open source software, and Application Programming Interfaces (APIs) available. Current challenges in applying...... care planning, and public health participation....

  17. web cellHTS2: A web-application for the analysis of high-throughput screening data

    Directory of Open Access Journals (Sweden)

    Boutros Michael

    2010-04-01

    Full Text Available Abstract Background The analysis of high-throughput screening data sets is an expanding field in bioinformatics. High-throughput screens by RNAi generate large primary data sets which need to be analyzed and annotated to identify relevant phenotypic hits. Large-scale RNAi screens are frequently used to identify novel factors that influence a broad range of cellular processes, including signaling pathway activity, cell proliferation, and host cell infection. Here, we present a web-based application utility for the end-to-end analysis of large cell-based screening experiments by cellHTS2. Results The software guides the user through the configuration steps that are required for the analysis of single or multi-channel experiments. The web-application provides options for various standardization and normalization methods, annotation of data sets and a comprehensive HTML report of the screening data analysis, including a ranked hit list. Sessions can be saved and restored for later re-analysis. The web frontend for the cellHTS2 R/Bioconductor package interacts with it through an R-server implementation that enables highly parallel analysis of screening data sets. web cellHTS2 further provides a file import and configuration module for common file formats. Conclusions The implemented web-application facilitates the analysis of high-throughput data sets and provides a user-friendly interface. web cellHTS2 is accessible online at http://web-cellHTS2.dkfz.de. A standalone version as a virtual appliance and source code for platforms supporting Java 1.5.0 can be downloaded from the web cellHTS2 page. web cellHTS2 is freely distributed under GPL.

  18. U.S. Geological Survey (USGS) Earthquake Web Applications

    Science.gov (United States)

    Fee, J.; Martinez, E.

    2015-12-01

    USGS Earthquake web applications provide access to earthquake information from USGS and other Advanced National Seismic System (ANSS) contributors. One of the primary goals of these applications is to provide a consistent experience for accessing both near-real time information as soon as it is available and historic information after it is thoroughly reviewed. Millions of people use these applications every month including people who feel an earthquake, emergency responders looking for the latest information about a recent event, and scientists researching historic earthquakes and their effects. Information from multiple catalogs and contributors is combined by the ANSS Comprehensive Catalog into one composite catalog, identifying the most preferred information from any source for each event. A web service and near-real time feeds provide access to all contributed data, and are used by a number of users and software packages. The Latest Earthquakes application displays summaries of many events, either near-real time feeds or custom searches, and the Event Page application shows detailed information for each event. Because all data is accessed through the web service, it can also be downloaded by users. The applications are maintained as open source projects on github, and use mobile-first and responsive-web-design approaches to work well on both mobile devices and desktop computers. http://earthquake.usgs.gov/earthquakes/map/
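
    Because all data is exposed through the web service, a few lines suffice to consume it. The sketch below reads the publicly documented GeoJSON summary feed for the past hour's events (one of several available feeds); field names follow the GeoJSON feed format:

    ```python
    # Fetch and print the past hour's earthquakes from the USGS GeoJSON feed.
    import requests

    FEED = "https://earthquake.usgs.gov/earthquakes/feed/v1.0/summary/all_hour.geojson"

    quakes = requests.get(FEED, timeout=10).json()
    for feature in quakes["features"]:
        props = feature["properties"]
        # "time" is an epoch timestamp in milliseconds.
        print(f'M{props["mag"]} {props["place"]} ({props["time"]})')
    ```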

  19. Semantic-Web Technology: Applications at NASA

    Science.gov (United States)

    Ashish, Naveen

    2004-01-01

    We provide a description of work at the National Aeronautics and Space Administration (NASA) on building systems based on semantic-web concepts and technologies. NASA has been one of the early adopters of semantic-web technologies for practical applications. Indeed there are several ongoing endeavors on building semantics-based systems for use in diverse NASA domains, ranging from collaborative scientific activity to accident and mishap investigation to enterprise search to scientific information gathering and integration to aviation safety decision support. We provide a brief overview of many applications and ongoing work with the goal of informing the external community of these NASA endeavors.

  20. Region-Based Color Image Indexing and Retrieval

    DEFF Research Database (Denmark)

    Kompatsiaris, Ioannis; Triantafyllou, Evangelia; Strintzis, Michael G.

    2001-01-01

    In this paper a region-based color image indexing and retrieval algorithm is presented. As a basis for the indexing, a novel K-Means segmentation algorithm is used, modified so as to take into account the coherence of the regions. A new color distance is also defined for this algorithm. Based on ....... Experimental results demonstrate the performance of the algorithm. The development of an intelligent image content-based search engine for the World Wide Web is also presented, as a direct application of the presented algorithm....
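
    The paper's modified, coherence-aware K-Means and its custom color distance are not reproduced here; the sketch below shows only the plain K-Means baseline on pixel colors that the algorithm builds upon:

    ```python
    # Plain K-Means on pixel colors, as a baseline for region-based indexing.
    import numpy as np

    def kmeans_colors(pixels, k=4, iters=20):
        """pixels: (N, 3) float array of RGB values; returns labels and centroids."""
        rng = np.random.default_rng(0)
        centroids = pixels[rng.choice(len(pixels), size=k, replace=False)]
        for _ in range(iters):
            # Assign every pixel to its nearest centroid (Euclidean color distance).
            dists = np.linalg.norm(pixels[:, None, :] - centroids[None, :, :], axis=2)
            labels = dists.argmin(axis=1)
            # Move each centroid to the mean color of its region.
            for j in range(k):
                if np.any(labels == j):
                    centroids[j] = pixels[labels == j].mean(axis=0)
        return labels, centroids

    pixels = np.random.default_rng(1).integers(0, 256, size=(500, 3)).astype(float)
    labels, centroids = kmeans_colors(pixels)
    print(centroids.round(1))
    ```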

  1. Efficient development of web applications for remote participation using Ruby on Rails

    International Nuclear Information System (INIS)

    Emoto, M.; Yoshida, M.; Iwata, C.; Inagaki, S.; Nagayama, Y.

    2010-01-01

    Large-scale experiments such as ITER require international collaboration, and remote participation plays an important role in carrying out such experiments. Web-based applications are useful tools for scientists participating in experiments remotely using personal computers. Since the participants typically download web-based applications to their computer each time they access the web servers, they do not need to install extra software in order to use these applications. In addition, the application developers do not need to distribute the latest program files each time they are modified, thus reducing maintenance costs for remote participation systems. For these reasons, we have been developing web-based applications for the LHD experiments at NIFS. In a previous study, we showed the benefits of using Ruby on Rails (RoR) to develop web-based applications for analysis code. We thought this approach would also be useful for developing applications for remote participation. Therefore, we have developed several web-based applications using RoR for participating in the LHD experiments. These applications include a data viewer and a scheduler of experiments. The main reason to adopt RoR for this purpose is its efficiency for developing web-based applications. For example, to develop a data viewer, we used an existing program running on an X-Windows System. Using RoR, we could minimize the modifications of the existing programs to add web interfaces. In this paper, we will report a web-based application developed using RoR for the LHD experiment. We will also discuss the benefits of using RoR in developing remote participation tools.

  2. Research of web application based on B/S structure testing

    International Nuclear Information System (INIS)

    Ou Ge; Zhang Hongmei; Song Liming

    2007-01-01

    Software testing is a very important method used to assure the quality of Web applications. With the fast development of Web applications, the old testing techniques can no longer satisfy the requirements. Because of this, people have begun to classify the different parts of an application, identify the content that can be tested by test tools, and study the structure of testing to enhance its efficiency. This paper analyses testing based on the features of Web applications, sums up the testing methods and gives some improvements of them. (authors)

  3. The Evolution of Web Searching.

    Science.gov (United States)

    Green, David

    2000-01-01

    Explores the interrelation between Web publishing and information retrieval technologies and lists new approaches to Web indexing and searching. Highlights include Web directories; search engines; portalisation; Internet service providers; browser providers; meta search engines; popularity based analysis; natural language searching; links-based…

  4. DEVELOPING WEB MAPPING APPLICATION USING ARCGIS SERVER WEB APPLICATION DEVELOPMENT FRAMEWORK (ADF) FOR GEOSPATIAL DATA GENERATED DURING REHABILITATION AND RECONSTRUCTION PROCESS OF POST-TSUNAMI 2004 DISASTER IN ACEH

    Directory of Open Access Journals (Sweden)

    Nizamuddin Nizamuddin

    2014-04-01

    Full Text Available ESRI ArcGIS Server is equipped with the ArcGIS Server Web Application Development Framework (ADF) and ArcGIS Web Controls integration for Visual Studio.NET. Both the ArcGIS Server Manager for .NET and the ArcGIS Web Controls can easily be utilized for developing ASP.NET-based ESRI Web mapping applications. In this study we implemented both tools for developing an ASP.NET-based ESRI Web mapping application for geospatial data generated during the rehabilitation and reconstruction process that followed the post-tsunami 2004 disaster in Aceh province. The rehabilitation and reconstruction process has produced a tremendous amount of geospatial data. This method was chosen in this study because, in the process of developing a web mapping application, one can easily and quickly create Mapping Services of huge geospatial data and also develop a Web mapping application without writing any code. However, when utilizing Visual Studio.NET 2008, one needs to have some coding ability.

  5. Web services interface of SSRF archive data analysis system

    International Nuclear Information System (INIS)

    Li Lin; Shen Liren; Zhu Qing; Wan Tianmin

    2007-01-01

    The accelerator database stores various static parameters and real-time data of the accelerator. SSRF (Shanghai Synchrotron Radiation Facility) adopts a relational database to save the data. We developed a data retrieval system based on XML Web Services for accessing the archive data. It includes a bottom-layer interface and an interface applicable for accelerator physics. Client samples exemplifying how to consume the interface are given. Users can browse, retrieve and plot data with the client samples. Also, we give a method to test its stability. The test result and performance are described. (authors)

  6. Project Management Methodology for the Development of M-Learning Web Based Applications

    Directory of Open Access Journals (Sweden)

    Adrian VISOIU

    2010-01-01

    Full Text Available M-learning web based applications are a particular case of web applications designed to be operated from mobile devices. Also, their purpose is to implement learning aspects. Project management of such applications takes into account the identified peculiarities. M-learning web based application characteristics are identified. M-learning functionality covers the needs of an educational process. Development is described taking into account the mobile web and its influences over the analysis, design, construction and testing phases. Activities building up a work breakdown structure for development of m-learning web based applications are presented. Project monitoring and control techniques are proposed. Resources required for projects are discussed.

  7. Hera : Development of semantic web information systems

    NARCIS (Netherlands)

    Houben, G.J.P.M.; Barna, P.; Frasincar, F.; Vdovják, R.; Cuella Lovelle, J.M.; et al., xx

    2003-01-01

    As a consequence of the success of the Web, methodologies for information system development need to consider systems that use the Web paradigm. These Web Information Systems (WIS) use Web technologies to retrieve information from the Web and to deliver information in a Web presentation to the

  8. Automated Functional Testing based on the Navigation of Web Applications

    Directory of Open Access Journals (Sweden)

    Boni García

    2011-08-01

    Full Text Available Web applications are becoming more and more complex. Testing such applications is an intricate, hard and time-consuming activity. Therefore, testing is often poorly performed or skipped by practitioners. Test automation can help to avoid this situation. Hence, this paper presents a novel approach to perform automated software testing for web applications based on their navigation. On the one hand, web navigation is the process of traversing a web application using a browser. On the other hand, functional requirements are actions that an application must do. Therefore, the evaluation of the correct navigation of web applications results in the assessment of the specified functional requirements. The proposed method performs the automation on four levels: test case generation, test data derivation, test case execution, and test case reporting. This method is driven by three kinds of inputs: (i) UML models; (ii) Selenium scripts; (iii) XML files. We have implemented our approach in an open-source testing framework named Automatic Testing Platform. The validation of this work has been carried out by means of a case study, in which the target is a real invoice management system developed using a model-driven approach.
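
    Since the method consumes Selenium scripts, a navigation-driven functional test might look like the following sketch. The URL and element locators are hypothetical; the paper's platform generates such tests from UML models rather than writing them by hand:

    ```python
    # Hand-written equivalent of a generated navigation test (names hypothetical).
    from selenium import webdriver
    from selenium.webdriver.common.by import By

    driver = webdriver.Firefox()
    try:
        driver.get("http://example.org/invoices")            # hypothetical app
        driver.find_element(By.ID, "username").send_keys("tester")
        driver.find_element(By.ID, "password").send_keys("secret")
        driver.find_element(By.ID, "login").click()
        # Asserting the navigation reached the expected page is the functional check.
        assert "Invoice list" in driver.title
    finally:
        driver.quit()
    ```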

  9. AN AUTOMATIC AND METHODOLOGICAL APPROACH FOR ACCESSIBLE WEB APPLICATIONS

    Directory of Open Access Journals (Sweden)

    Lourdes Moreno

    2007-06-01

    Full Text Available Semantic Web approaches try to achieve interoperability and communication among technologies and organizations. Nevertheless, it is sometimes forgotten that the Web must be useful for every user, so it is necessary to include tools and techniques that make the Semantic Web accessible. Accessibility and usability are two concepts widely used, and usually conflated, in web application development; however, their meanings are different. Usability means making use easy, whereas accessibility refers to the possibility of access. For the former, there are many well-proven approaches in real cases. However, the accessibility field requires deeper research to make access feasible for disabled people, and also for novice non-disabled people, given the cost of automating and maintaining accessible applications. In this paper, we propose an architecture to achieve accessibility in web environments that deals with the WAI accessibility standard and the Universal Design paradigm. This architecture tries to control accessibility across the web application development life-cycle, following a methodology that starts from a semantic conceptual model and leans on description languages and controlled vocabularies.

  10. Crawl-Based Analysis of Web Applications : Prospects and Challenges

    NARCIS (Netherlands)

    Van Deursen, A.; Mesbah, A.; Nederlof, A.

    2014-01-01

    In this paper we review five years of research in the field of automated crawling and testing of web applications. We describe the open source Crawljax tool, and the various extensions that have been proposed in order to address such issues as cross-browser compatibility testing, web application

  11. Promoting Your Web Site.

    Science.gov (United States)

    Raeder, Aggi

    1997-01-01

    Discussion of ways to promote sites on the World Wide Web focuses on how search engines work and how they retrieve and identify sites. Appropriate Web links for submitting new sites and for Internet marketing are included. (LRW)

  12. a Web Api and Web Application Development for Dissemination of Air Quality Information

    Science.gov (United States)

    Şahin, K.; Işıkdağ, U.

    2017-11-01

    Various studies have been carried out since 2005 under the leadership of the Ministry of Environment and Urbanism of Turkey, in order to observe the quality of air in Turkey, to develop new policies and to develop a sustainable air quality management strategy. For this reason, a national air quality monitoring network has been developed, providing air quality indices. Through this network, the quality of the air has been continuously monitored, and an important information system has been constructed in order to take precautions for preventing dangerous situations. The biggest handicap of the network is the data access problem for instant and time-series data acquisition and processing, because of its proprietary structure. Currently, there is no service offered by the current air quality monitoring system for exchanging information with third-party applications. Within the context of this work, a web service has been developed to enable location-based querying of current/past air quality data in Turkey. This web service is equipped with up-to-date and widely preferred technologies; in other words, an architecture is chosen with which applications can easily integrate. In the second phase of the study, a web-based application was developed to test the web service; this testing application can perform location-based acquisition of air quality data. This makes it possible to easily carry out operations such as screening and examination of an area in a given time-frame, which cannot be done with the national monitoring network.

  13. The Semantics of Web Services: An Examination in GIScience Applications

    Directory of Open Access Journals (Sweden)

    Xuan Shi

    2013-09-01

    Full Text Available Web service is a technological solution for software interoperability that supports the seamless integration of diverse applications. In the vision of web service architecture, web services are described by the Web Service Description Language (WSDL), discovered through Universal Description, Discovery and Integration (UDDI) and communicate by the Simple Object Access Protocol (SOAP). Such a vision has never been fully accomplished. Although WSDL has been criticized for giving only a syntactic, not semantic, definition of web services, prior initiatives in semantic web services did not establish a correct methodology to resolve the problem. This paper examines the distinction and relationship between the syntactic and semantic definitions of web services, which serve different purposes in service computation. Further, this paper proposes that the semantics of a web service are neutral and independent of the service interface definition, data types and platform. Such a conclusion can be a universal law in software engineering and service computing. Several use cases in GIScience applications are examined in this paper, while the formalization of geospatial services needs to be constructed by the GIScience community towards a comprehensive ontology of the conceptual definitions and relationships for geospatial computation. Advancements in semantic web services research will happen in domain science applications.

  14. Web Content Search and Adaptation for IDTV: One Step Forward in the Mediamorphosis Process toward Personal-TV

    Directory of Open Access Journals (Sweden)

    Stefano Ferretti

    2007-01-01

    Full Text Available We are on the threshold of a mediamorphosis that will revolutionize the way we interact with our TV sets. The combination between interactive digital TV (IDTV and the Web fosters the development of new interactive multimedia services enjoyable even through a TV screen and a remote control. Yet, several design constraints complicate the deployment of this new pattern of services. Prominent unresolved issues involve macro-problems such as collecting information on the Web based on users' preferences and appropriately presenting retrieved Web contents on the TV screen. To this aim, we propose a system able to dynamically convey contents from the Web to IDTV systems. Our system presents solutions both for personalized Web content search and automatic TV-format adaptation of retrieved documents. As we demonstrate through two case study applications, our system merges the best of IDTV and Web domains spinning the TV mediamorphosis toward the creation of the personal-TV concept.

  15. Neutralizing SQL Injection Attack Using Server Side Code Modification in Web Applications

    Directory of Open Access Journals (Sweden)

    Asish Kumar Dalai

    2017-01-01

    Full Text Available Reports on web application security risks show that SQL injection is the top most vulnerability. The journey of static to dynamic web pages leads to the use of database in web applications. Due to the lack of secure coding techniques, SQL injection vulnerability prevails in a large set of web applications. A successful SQL injection attack imposes a serious threat to the database, web application, and the entire web server. In this article, the authors have proposed a novel method for prevention of SQL injection attack. The classification of SQL injection attacks has been done based on the methods used to exploit this vulnerability. The proposed method proves to be efficient in the context of its ability to prevent all types of SQL injection attacks. Some popular SQL injection attack tools and web application security datasets have been used to validate the model. The results obtained are promising with a high accuracy rate for detection of SQL injection attack.
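
    The authors' server-side code modification is not reproduced here, but the baseline counter-measure any such scheme relates to is parameterization, which binds user input as data so it cannot alter the query structure. A self-contained demonstration with sqlite3:

    ```python
    # Contrast string-built SQL (injectable) with a parameterized query (safe).
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT, pw TEXT)")
    conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

    user_input = "alice' OR '1'='1"   # a classic injection payload

    # Vulnerable: concatenation lets the payload rewrite the query logic.
    vulnerable = f"SELECT * FROM users WHERE name = '{user_input}'"
    print(conn.execute(vulnerable).fetchall())   # [('alice', 's3cret')] -- injected

    # Safe: the placeholder binds the payload as plain data, so nothing matches.
    safe = conn.execute("SELECT * FROM users WHERE name = ?", (user_input,))
    print(safe.fetchall())                       # []
    ```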

  16. Improving the Efficiency and Effectiveness of Lecturer Services through the Use of a Web Application

    Directory of Open Access Journals (Sweden)

    Reina Reina

    2013-06-01

    Full Text Available This study aims to determine the benefits of a web application in improving the efficiency and effectiveness of services to lecturers. The research method consists of a literature study and the analysis of data collected through observation. After implementing the web application, an observation was conducted and the results were compared with data from before the implementation. The evaluation results show that implementing the web application improves efficiency and effectiveness in the use of time and resources when providing information-access services to lecturers.

  17. Asset Identification for Security Risk Assessment in Web Applications

    OpenAIRE

    Hisham M. Haddad; Brunil D. Romero

    2009-01-01

    As software applications become more complex, they require more security, allowing them to reach an appropriate level of quality to manage information and therefore achieve business objectives. Web applications represent one segment of the software industry where security risk assessment is essential. Web engineering must address new challenges to provide new techniques and tools that guarantee high-quality application development. This work focuses on asset identification, the initial step in sec...

  18. EuroGOV: Engineering a Multilingual Web Corpus

    NARCIS (Netherlands)

    Sigurbjörnsson, B.; Kamps, J.; de Rijke, M.

    2005-01-01

    EuroGOV is a multilingual web corpus that was created to serve as the document collection for WebCLEF, the CLEF 2005 web retrieval task. EuroGOV is a collection of web pages crawled from the European Union portal, European Union member state governmental web sites, and Russian government web sites.

  19. Lecture 3: Web Application Security

    CERN Multimedia

    CERN. Geneva

    2013-01-01

    Computer security has been an increasing concern for IT professionals for a number of years, yet despite all the efforts, computer systems and networks remain highly vulnerable to attacks of different kinds. Design flaws and security bugs in the underlying software are among the main reasons for this. This lecture focuses on security aspects of Web application development. Various vulnerabilities typical to web applications (such as Cross-site scripting, SQL injection, cross-site request forgery etc.) are introduced and discussed. Sebastian Lopienski is CERN’s deputy Computer Security Officer. He works on security strategy and policies; offers internal consultancy and audit services; develops and maintains security tools for vulnerability assessment and intrusion detection; provides training and awareness raising; and does incident investigation and response. During his work at CERN since 2001, Sebastian has had various assignments, including designing and developing software to manage and support servic...

  20. System Testing of Desktop and Web Applications

    Science.gov (United States)

    Slack, James M.

    2011-01-01

    We want our students to experience system testing of both desktop and web applications, but the cost of professional system-testing tools is far too high. We evaluate several free tools and find that AutoIt makes an ideal educational system-testing tool. We show several examples of desktop and web testing with AutoIt, starting with simple…

  1. Stratification-Based Outlier Detection over the Deep Web.

    Science.gov (United States)

    Xian, Xuefeng; Zhao, Pengpeng; Sheng, Victor S; Fang, Ligang; Gu, Caidong; Yang, Yuanfeng; Cui, Zhiming

    2016-01-01

    For many applications, finding rare instances or outliers can be more interesting than finding common patterns. Existing work in outlier detection never considers the context of deep web. In this paper, we argue that, for many scenarios, it is more meaningful to detect outliers over deep web. In the context of deep web, users must submit queries through a query interface to retrieve corresponding data. Therefore, traditional data mining methods cannot be directly applied. The primary contribution of this paper is to develop a new data mining method for outlier detection over deep web. In our approach, the query space of a deep web data source is stratified based on a pilot sample. Neighborhood sampling and uncertainty sampling are developed in this paper with the goal of improving recall and precision based on stratification. Finally, a careful performance evaluation of our algorithm confirms that our approach can effectively detect outliers in deep web.
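
    As a toy illustration of the stratification idea only (not the paper's algorithm, which also involves neighborhood and uncertainty sampling over a live query interface): bin values into strata, then flag records that are extreme within their own stratum rather than globally. The thresholds and data are invented:

    ```python
    # Stratified outlier flagging: extreme within a stratum, not globally.
    import statistics

    def stratified_outliers(records, strata_edges, z=2.5):
        """records: list of (key, value); strata_edges: sorted bin boundaries."""
        strata = {i: [] for i in range(len(strata_edges) + 1)}
        def stratum(v):
            return sum(v > e for e in strata_edges)
        for key, val in records:
            strata[stratum(val)].append((key, val))
        outliers = []
        for members in strata.values():
            if len(members) < 3:          # too small to score reliably
                continue
            vals = [v for _, v in members]
            mu, sd = statistics.mean(vals), statistics.pstdev(vals)
            if sd == 0:
                continue
            outliers += [k for k, v in members if abs(v - mu) / sd > z]
        return outliers

    data = [(f"r{i}", v) for i, v in
            enumerate([10, 11, 12, 13, 55, 100, 101, 102, 103, 300])]
    # Small strata bound the attainable z-score, hence the modest threshold.
    print(stratified_outliers(data, strata_edges=[50, 150], z=1.8))   # ['r4']
    ```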

  2. A semantic approach to concept lattice-based information retrieval

    OpenAIRE

    Codocedo , Victor; Lykourentzou , Ioanna; Napoli , Amedeo

    2014-01-01

    The volume of available information is growing, especially on the web, and in parallel the questions of users are changing and becoming harder to satisfy. Thus there is a need for organizing the available information in a meaningful way in order to guide and improve document indexing for information retrieval applications, taking into account more complex data such as semantic relations. In this paper we show that Formal Concept Analysis (FCA) and concept lattices p...

  3. QuickEval: a web application for psychometric scaling experiments

    Science.gov (United States)

    Van Ngo, Khai; Storvik, Jehans J.; Dokkeberg, Christopher A.; Farup, Ivar; Pedersen, Marius

    2015-01-01

    QuickEval is a web application for carrying out psychometric scaling experiments. It offers the possibility of running controlled experiments in a laboratory, or large-scale experiments over the web for people all over the world. It is a one-of-a-kind web application, and software that is needed in the image quality field. It is also, to the best of our knowledge, the first software that supports the three most common scaling methods: paired comparison, rank order, and category judgement. Hopefully, a side effect of this newly created software is that it will lower the threshold to perform psychometric experiments, improve the quality of the experiments being carried out, make it easier to reproduce experiments, and increase research on image quality both in academia and industry. The web application is available at www.colourlab.no/quickeval.

  4. WEB STRUCTURE MINING USING PAGERANK, IMPROVED PAGERANK – AN OVERVIEW

    Directory of Open Access Journals (Sweden)

    V. Lakshmi Praba

    2011-03-01

    Full Text Available Web Mining is the extraction of interesting and potentially useful patterns and information from the Web. It includes Web documents, hyperlinks between documents, and usage logs of web sites. The significant tasks for web mining can be listed as Information Retrieval, Information Selection/Extraction, Generalization and Analysis. Web information retrieval tools consider only the text on pages and ignore information in the links. The goal of Web structure mining is to explore the structural summary of the web. Web structure mining, focusing on link information, is an important aspect of web data. This paper presents an overview of PageRank and Improved PageRank and their working functionality in web structure mining.
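
    For reference, the basic PageRank computation that the surveyed variants build on can be written as a short power iteration (the Improved PageRank variant is not reproduced here):

    ```python
    # Standard PageRank power iteration over a toy link graph.
    def pagerank(links, d=0.85, iters=50):
        """links: dict mapping each node to the list of nodes it links to."""
        nodes = list(links)
        n = len(nodes)
        rank = {u: 1.0 / n for u in nodes}
        for _ in range(iters):
            new = {u: (1 - d) / n for u in nodes}
            for u in nodes:
                out = links[u] or nodes          # dangling nodes spread rank evenly
                share = d * rank[u] / len(out)
                for v in out:
                    new[v] += share
            rank = new
        return rank

    web = {"a": ["b", "c"], "b": ["c"], "c": ["a"], "d": ["c"]}
    for page, score in sorted(pagerank(web).items(), key=lambda kv: -kv[1]):
        print(page, round(score, 3))
    ```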

  5. The Web Application Hacker's Handbook Finding and Exploiting Security Flaws

    CERN Document Server

    Stuttard, Dafydd

    2011-01-01

    The highly successful security book returns with a new edition, completely updated Web applications are the front door to most organizations, exposing them to attacks that may disclose personal information, execute fraudulent transactions, or compromise ordinary users. This practical book has been completely updated and revised to discuss the latest step-by-step techniques for attacking and defending the range of ever-evolving web applications. You'll explore the various new technologies employed in web applications that have appeared since the first edition and review the new attack technique

  6. DEVELOPMENT OF AN ASSISTANT TOOL FOR CREATING CRUD APPLICATIONS IN JAVA FOR THE WEB

    Directory of Open Access Journals (Sweden)

    Carlos Renato de Souza Perri

    2010-12-01

    Full Text Available Due to the need for computerization of business processes, storage of relevant information in databases and making this information available on the Internet, this project proposes to develop a tool for generating Web applications written in Java that build functionalities to perform CRUD (Create, Retrieve, Update, Delete), i.e., storage, reading, updating and deletion of data. The software is a tool into which the programmer inserts the script that creates a database and, after setting the parameters in the tool, the source code of a Java Web application is generated. There is a need to create Web applications in Java with low production time, because building these applications using common development methods takes much time. The implementation of this tool is also intended to change that concept and the way of developing Java Web applications, because the tool will be used as an assistant, making it easier to create Java Web applications. The generated applications use the Servlets and JSP technologies, the Hibernate framework and the jQuery JavaScript library.

  7. A WEB API AND WEB APPLICATION DEVELOPMENT FOR DISSEMINATION OF AIR QUALITY INFORMATION

    Directory of Open Access Journals (Sweden)

    K. Şahin

    2017-11-01

    Full Text Available Various studies have been carried out since 2005 under the leadership of the Ministry of Environment and Urbanism of Turkey, in order to observe the quality of air in Turkey, to develop new policies and to develop a sustainable air quality management strategy. For this reason, a national air quality monitoring network has been developed, providing air quality indices. Through this network, the quality of the air has been continuously monitored, and an important information system has been constructed in order to take precautions for preventing dangerous situations. The biggest handicap of the network is the data access problem for instant and time-series data acquisition and processing, because of its proprietary structure. Currently, there is no service offered by the current air quality monitoring system for exchanging information with third-party applications. Within the context of this work, a web service has been developed to enable location-based querying of current/past air quality data in Turkey. This web service is equipped with up-to-date and widely preferred technologies; in other words, an architecture is chosen with which applications can easily integrate. In the second phase of the study, a web-based application was developed to test the web service; this testing application can perform location-based acquisition of air quality data. This makes it possible to easily carry out operations such as screening and examination of an area in a given time-frame, which cannot be done with the national monitoring network.

  8. Introduction to information retrieval

    CERN Document Server

    Manning, Christopher D; Schütze, Hinrich

    2008-01-01

    Class-tested and coherent, this textbook teaches classical and web information retrieval, including web search and the related areas of text classification and text clustering from basic concepts. It gives an up-to-date treatment of all aspects of the design and implementation of systems for gathering, indexing, and searching documents; methods for evaluating systems; and an introduction to the use of machine learning methods on text collections. All the important ideas are explained using examples and figures, making it perfect for introductory courses in information retrieval for advanced undergraduates and graduate students in computer science. Based on feedback from extensive classroom experience, the book has been carefully structured in order to make teaching more natural and effective. Slides and additional exercises (with solutions for lecturers) are also available through the book's supporting website to help course instructors prepare their lectures.

  9. Information Retrieval in Telemedicine: a Comparative Study on Bibliographic Databases.

    Science.gov (United States)

    Ahmadi, Maryam; Sarabi, Roghayeh Ershad; Orak, Roohangiz Jamshidi; Bahaadinbeigy, Kambiz

    2015-06-01

    The first step in each systematic review is the selection of the most valid database, one that can provide the highest number of relevant references. This study was carried out to determine the most suitable database for information retrieval in the telemedicine field. The CINAHL, PubMed, Web of Science and Scopus databases were searched for telemedicine matched with education, cost benefit and patient satisfaction. After analysis of the obtained results, the accuracy coefficient, sensitivity, uniqueness and overlap of the databases were calculated. The studied databases differed in the number of retrieved articles. PubMed was identified as the most suitable database for retrieving information on the selected topics, with accuracy and sensitivity ratios of 50.7% and 61.4% respectively. The uniqueness percentage of retrieved articles ranged from 38% for PubMed to 3.0% for CINAHL. The highest overlap rate (18.6%) was found between PubMed and Web of Science. Less than 1% of articles had been indexed in all searched databases. PubMed is suggested as the most suitable database for starting a search in telemedicine; after PubMed, Scopus and Web of Science can retrieve about 90% of the relevant articles.

  10. Developing BP-driven web application through the use of MDE techniques

    OpenAIRE

    Torres Bosch, Maria Victoria; Giner Blasco, Pau; Pelechano Ferragud, Vicente

    2012-01-01

    Model driven engineering (MDE) is a suitable approach for performing the construction of software systems (in particular in the Web application domain). There are different types of Web applications depending on their purpose (i.e., document-centric, interactive, transactional, workflow/business process-based, collaborative, etc.). This work focuses on business process-based Web applications in order to be able to understand business processes in a broad sense, from the lightweight business p...

  11. Ontology-Based Information Visualization: Toward Semantic Web Applications

    NARCIS (Netherlands)

    Fluit, Christiaan; Sabou, Marta; Harmelen, Frank van

    2006-01-01

    The Semantic Web is an extension of the current World Wide Web, based on the idea of exchanging information with explicit, formal, and machine-accessible descriptions of meaning. Providing information with such semantics will enable the construction of applications that have an increased awareness

  12. Engineering semantic web information systems in Hera

    NARCIS (Netherlands)

    Vdovják, R.; Frasincar, F.; Houben, G.J.P.M.; Barna, P.

    2003-01-01

    The success of the World Wide Web has caused the concept of information system to change. Web Information Systems (WIS) adopt from the Web its paradigm and technologies in order to retrieve information from sources on the Web, and to present the information in terms of a Web or hypermedia

  13. Web application security: a beginner's guide

    National Research Council Canada - National Science Library

    Sullivan, Bryan; Liu, Vincent

    2012-01-01

    .... Sullivan and Liu have created a savvy, essentials-based approach to web app security packed with immediately applicable tools for any information security practitioner sharpening his or her tools or just starting...

  14. Programming Collective Intelligence Building Smart Web 2.0 Applications

    CERN Document Server

    Segaran, Toby

    2008-01-01

    This fascinating book demonstrates how you can build web applications to mine the enormous amount of data created by people on the Internet. With the sophisticated algorithms in this book, you can write smart programs to access interesting datasets from other web sites, collect data from users of your own applications, and analyze and understand the data once you've found it.

  15. Toward Exposing Timing-Based Probing Attacks in Web Applications

    Directory of Open Access Journals (Sweden)

    Jian Mao

    2017-02-01

    Full Text Available Web applications have become the foundation of many types of systems, ranging from cloud services to Internet of Things (IoT systems. Due to the large amount of sensitive data processed by web applications, user privacy emerges as a major concern in web security. Existing protection mechanisms in modern browsers, e.g., the same origin policy, prevent the users’ browsing information on one website from being directly accessed by another website. However, web applications executed in the same browser share the same runtime environment. Such shared states provide side channels for malicious websites to indirectly figure out the information of other origins. Timing is a classic side channel and the root cause of many recent attacks, which rely on the variations in the time taken by the systems to process different inputs. In this paper, we propose an approach to expose the timing-based probing attacks in web applications. It monitors the browser behaviors and identifies anomalous timing behaviors to detect browser probing attacks. We have prototyped our system in the Google Chrome browser and evaluated the effectiveness of our approach by using known probing techniques. We have applied our approach on a large number of top Alexa sites and reported the suspicious behavior patterns with corresponding analysis results. Our theoretical analysis illustrates that the effectiveness of the timing-based probing attacks is dramatically limited by our approach.
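
    To make the attack class concrete, the sketch below shows the kind of cross-origin timing probe such a defense is meant to expose: a page measures how long another origin takes to serve a resource and infers state from the elapsed time. This is an illustrative TypeScript reconstruction for a browser environment; the probe URL is hypothetical and the paper's actual detection instrumentation is not shown.

```typescript
// Classic timing probe (illustrative, not the paper's code): time how long a
// cross-origin resource takes to load and infer state (cached vs. uncached,
// logged-in vs. logged-out) from the elapsed time.
function probeResourceTiming(url: string): Promise<number> {
  return new Promise((resolve) => {
    const start = performance.now();
    const img = new Image();
    // Both success and failure fire after the cross-origin fetch completes,
    // so the elapsed time leaks even though the content itself is opaque.
    img.onload = img.onerror = () => resolve(performance.now() - start);
    img.src = url + "?nocache=" + Math.random(); // hypothetical probe target
  });
}

// A detector in the spirit of the paper would monitor pages issuing many such
// tightly spaced cross-origin timing measurements and flag the pattern.
probeResourceTiming("https://victim.example/avatar.png").then((ms) =>
  console.log(`probe took ${ms.toFixed(1)} ms`));
```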

  16. Toward Exposing Timing-Based Probing Attacks in Web Applications.

    Science.gov (United States)

    Mao, Jian; Chen, Yue; Shi, Futian; Jia, Yaoqi; Liang, Zhenkai

    2017-02-25

    Web applications have become the foundation of many types of systems, ranging from cloud services to Internet of Things (IoT) systems. Due to the large amount of sensitive data processed by web applications, user privacy emerges as a major concern in web security. Existing protection mechanisms in modern browsers, e.g., the same origin policy, prevent the users' browsing information on one website from being directly accessed by another website. However, web applications executed in the same browser share the same runtime environment. Such shared states provide side channels for malicious websites to indirectly figure out the information of other origins. Timing is a classic side channel and the root cause of many recent attacks, which rely on the variations in the time taken by the systems to process different inputs. In this paper, we propose an approach to expose the timing-based probing attacks in web applications. It monitors the browser behaviors and identifies anomalous timing behaviors to detect browser probing attacks. We have prototyped our system in the Google Chrome browser and evaluated the effectiveness of our approach by using known probing techniques. We have applied our approach on a large number of top Alexa sites and reported the suspicious behavior patterns with corresponding analysis results. Our theoretical analysis illustrates that the effectiveness of the timing-based probing attacks is dramatically limited by our approach.

  17. A web-based approach to data imputation

    KAUST Repository

    Li, Zhixu; Sharaf, Mohamed Abdel Fattah; Sitbon, Laurianne; Sadiq, Shazia Wasim; Indulska, Marta; Zhou, Xiaofang

    2013-01-01

    principle. Moreover, WebPut extends effective Information Extraction (IE) methods for the purpose of formulating web search queries that are capable of effectively retrieving missing values with high accuracy. WebPut employs a confidence-based scheme

  18. TREC2002 Web, Novelty and Filtering Track Experiments Using PIRCS

    National Research Council Canada - National Science Library

    Kwok, K. L; Deng, P; Dinstl, N; Chan, M

    2006-01-01

    .... The Web track has two tasks: distillation and named-page retrieval. Distillation is a new utility concept for ranking documents, and needs new design on the output document ranked list after an ad-hoc retrieval from the web (.gov) collection...

  19. Situational Requirements Engineering for the Development of Content Management System-based Web Applications

    NARCIS (Netherlands)

    Souer, J.; van de Weerd, I.; Versendaal, J.M.; Brinkkemper, S.

    2005-01-01

    Web applications are evolving towards strong content-centered Web applications. The development processes and implementation of these applications are unlike the development and implementation of traditional information systems. In this paper we propose the WebEngineering Method, a method for developing

  20. Image Retrieval based on Integration between Color and Geometric Moment Features

    International Nuclear Information System (INIS)

    Saad, M.H.; Saleh, H.I.; Konbor, H.; Ashour, M.

    2012-01-01

    Content-based image retrieval (CBIR) is the retrieval of images based on visual features such as color, texture and shape. Current approaches to CBIR differ in terms of which image features are extracted; recent work deals with combining distances or scores from different, usually independent representations in an attempt to induce high-level semantics from the low-level descriptors of the images. Content-based image retrieval has many application areas, such as education, commerce, the military, searching, biomedicine and Web image classification. This paper proposes a new image retrieval system, which uses color and geometric moment features to form the feature vectors. Bhattacharyya distance and histogram intersection are used to perform feature matching. This framework integrates the color histogram, which represents the global feature, with geometric moments as a local descriptor to enhance the retrieval results. The proposed technique is suited to retrieving images precisely even in deformation cases such as geometric deformations and noise. It was tested on a standard dataset; the results show that a combination of our approach as a local image descriptor with other global descriptors outperforms other approaches.
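
    For illustration, histogram intersection, one of the two matching measures named above, can be sketched in a few lines (a toy TypeScript version assuming normalised histograms of equal bin count; the paper's exact feature pipeline is not reproduced):

```typescript
// Histogram intersection: the sum of bin-wise minima of two normalised
// histograms, 1.0 for identical histograms and 0.0 for disjoint ones.
function histogramIntersection(h1: number[], h2: number[]): number {
  if (h1.length !== h2.length) throw new Error("histograms must have equal bins");
  let score = 0;
  for (let i = 0; i < h1.length; i++) score += Math.min(h1[i], h2[i]);
  return score;
}

// Toy usage: a query histogram against two database histograms.
const query = [0.5, 0.3, 0.2];
console.log(histogramIntersection(query, [0.4, 0.4, 0.2])); // 0.9 (close match)
console.log(histogramIntersection(query, [0.0, 0.1, 0.9])); // 0.3 (poor match)
```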

  1. Teaching web application development: Microsoft proprietary or open systems?

    Directory of Open Access Journals (Sweden)

    Stephen Corich

    Full Text Available This paper revisits the debate concerning which development environment should be used to teach server-side Web Application Development courses to undergraduate students. In 2002, following an industry-based survey of Web developers, a decision was made to adopt an open source platform consisting of PHP and MySQL rather than a Microsoft platform utilising Access and Active Server Pages. Since that date there have been a number of significant changes within the computing industry that suggest it may be appropriate to revisit the original decision. This paper investigates expert opinion by reviewing current literature regarding web development environments; it looks at the results of a survey of web development companies and examines current employment trends in the web development area. The paper concludes by examining the impact of a decision to change the development environment used to teach Web Application Development to a third-year computing degree class, and describes the impact on course delivery that the change has brought about.

  2. A RESTful Web service interface to the ATLAS COOL database

    International Nuclear Information System (INIS)

    Roe, S A

    2010-01-01

    The COOL database in ATLAS is primarily used for storing detector conditions data, but also status flags: uploaded summaries of information indicating the detector reliability during a run. This paper introduces the use of CherryPy, a Python application server which acts as an intermediate layer between a web interface and the database, providing a simple means of storing to and retrieving from the COOL database which has found use in many web applications. The software layer is designed to be RESTful, implementing the common CRUD (Create, Read, Update, Delete) database methods by means of interpreting the HTTP method (POST, GET, PUT, DELETE) on the server along with a URL identifying the database resource to be operated on. The format of the data (text, XML, etc.) is also determined by the HTTP protocol. The details of this layer are described along with a popular application demonstrating its use, the ATLAS run list web page.
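
    A client of such a layer needs nothing beyond plain HTTP. The TypeScript sketch below illustrates the CRUD-to-HTTP mapping described above; the base URL, folder path and payload shape are placeholders, not the actual ATLAS endpoints:

```typescript
// Illustrative client of a RESTful conditions-data layer: the URL names the
// resource, the HTTP method names the operation.
const BASE = "https://example.cern.ch/cooldb"; // hypothetical server

async function createFlag(folder: string, payload: object): Promise<void> {
  await fetch(`${BASE}/${folder}`, {            // POST -> Create
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(payload),
  });
}

async function readFlag(folder: string): Promise<string> {
  const res = await fetch(`${BASE}/${folder}`); // GET -> Read
  return res.text();                            // format negotiated over HTTP
}

// PUT -> Update and DELETE -> Delete follow the same pattern.
```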

  3. The experimental nuclear reaction data (EXFOR): Extended computer database and Web retrieval system

    Science.gov (United States)

    Zerkin, V. V.; Pritychenko, B.

    2018-04-01

    The EXchange FORmat (EXFOR) experimental nuclear reaction database and the associated Web interface provide access to the wealth of low- and intermediate-energy nuclear reaction physics data. This resource is based on numerical data sets and bibliographical information of ∼22,000 experiments since the beginning of nuclear science. The principles of the computer database organization, its extended contents and Web applications development are described. New capabilities for the data sets uploads, renormalization, covariance matrix, and inverse reaction calculations are presented. The EXFOR database, updated monthly, provides an essential support for nuclear data evaluation, application development, and research activities. It is publicly available at the websites of the International Atomic Energy Agency Nuclear Data Section, http://www-nds.iaea.org/exfor, the U.S. National Nuclear Data Center, http://www.nndc.bnl.gov/exfor, and the mirror sites in China, India and Russian Federation.

  4. Creation of web applications by Rich Internet Application Adobe Flex

    OpenAIRE

    PEKA, Karel

    2011-01-01

    This bachelor thesis focuses on explaining the functionality and development of interactive applications in Adobe Flex RIA, also in comparison with similar web technologies such as AJAX, Microsoft Silverlight or Adobe Flash. It explains the difference between "ordinary" web sites and Rich Internet Applications (RIA), and illustrates this difference with a series of demonstration examples built in Adobe Flash Builder (the environment for building Flex applications). A large-scale application will also be created for comprehensive ...

  5. Evaluation of a Web-based Online Grant Application Review Solution

    Directory of Open Access Journals (Sweden)

    Marius Daniel PETRISOR

    2013-12-01

    Full Text Available This paper focuses on the evaluation of a web-based application used in grant application evaluations, software developed in our university, and underlines the need for simple solutions based on recent technology and specifically tailored to one's needs. We asked the reviewers to answer a short questionnaire in order to assess their satisfaction with such a web-based grant application evaluation solution. All 20 reviewers accepted to answer the questionnaire, which contained 8 closed items (YES/NO answers) related to the reviewer's previous experience in evaluating grant applications, previous use of such software solutions and familiarity with using computer systems. The presented web-based application, evaluated by the users, showed a high level of acceptance, and the respondents stated that they are willing to use such a solution in the future.

  6. IMPROVING PERSONALIZED WEB SEARCH USING BOOKSHELF DATA STRUCTURE

    Directory of Open Access Journals (Sweden)

    S.K. Jayanthi

    2012-10-01

    Full Text Available Search engines play a vital role in retrieving relevant information for the web user. In this research work a user-profile-based web search is proposed, so that web users from different domains may receive different sets of results. The main challenge is to provide relevant results at the right level of reading difficulty. Estimating user expertise and re-ranking the results are the main aspects of this paper. The retrieved results are arranged in a Bookshelf Data Structure for easy access. Better presentation of search results hence increases the usability of web search engines significantly in visual mode.
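
    The abstract does not publish its ranking formula; the TypeScript sketch below illustrates the general idea of profile-based re-ranking, combining engine relevance with a match between estimated reading difficulty and user expertise. All names and the equal weighting are assumptions, not the paper's model:

```typescript
// Hedged sketch of profile-based re-ranking: each result's final score mixes
// the engine's relevance with how well the page's estimated reading
// difficulty matches the user's expertise (all values in [0, 1]).
interface Result { url: string; relevance: number; difficulty: number }

function rerank(results: Result[], userExpertise: number, w = 0.5): Result[] {
  const score = (r: Result) =>
    w * r.relevance + (1 - w) * (1 - Math.abs(r.difficulty - userExpertise));
  return [...results].sort((a, b) => score(b) - score(a));
}
```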

  7. THREE-DIMENSIONAL WEB-BASED PHYSICS SIMULATION APPLICATION FOR PHYSICS LEARNING TOOL

    Directory of Open Access Journals (Sweden)

    William Salim

    2012-10-01

    Full Text Available The purpose of this research is to present a multimedia application for doing simulations in Physics. The application is a web-based simulator implemented with HTML5, WebGL, and JavaScript. The objects and the environment are presented in three-dimensional views. It is hoped that this application will become a substitute for practicum activity. The current development covers only Newtonian mechanics. Questionnaires and literature study were used as the data collection methods, while the Waterfall Method was used as the design method. The result is the Three-Dimensional Physics Simulator, an online web application. Three-dimensional design and a mentor-mentee relationship are the key features of this application. The conclusion is that the Three-Dimensional Physics Simulator already fulfils user expectations in both design and functionality, and helps users understand Newtonian mechanics through simulation. Improvements are needed, because this application only covers Newtonian mechanics; in the future the simulation could also cover other Physics topics, such as optics, energy, or electricity. Keywords: Simulation, Physics, Learning Tool, HTML5, WebGL

  8. User centered and ontology based information retrieval system for life sciences.

    Science.gov (United States)

    Sy, Mohameth-François; Ranwez, Sylvie; Montmain, Jacky; Regnault, Armelle; Crampes, Michel; Ranwez, Vincent

    2012-01-25

    Because of the increasing number of electronic resources, designing efficient tools to retrieve and exploit them is a major challenge. Some improvements have been offered by semantic Web technologies and applications based on domain ontologies. In life science, for instance, the Gene Ontology is widely exploited in genomic applications and the Medical Subject Headings are the basis of the biomedical publication indexation and information retrieval process proposed by PubMed. However, current search engines suffer from two main drawbacks: there is limited user interaction with the list of retrieved resources and no explanation for their adequacy to the query is provided. Users may thus be confused by the selection and have no idea of how to adapt their queries so that the results match their expectations. This paper describes an information retrieval system that relies on a domain ontology to widen the set of relevant documents that is retrieved and that uses a graphical rendering of query results to favor user interactions. Semantic proximities between ontology concepts and aggregating models are used to assess the adequacy of documents with respect to a query. The selection of documents is displayed in a semantic map that provides graphical indications making explicit to what extent they match the user's query; this man/machine interface favors a more interactive and iterative exploration of the data corpus by facilitating the weighting of query concepts and visual explanation. We illustrate the benefit of using this information retrieval system on two case studies, one of which aims at collecting human genes related to transcription factors involved in the hemopoiesis pathway. The ontology-based information retrieval system described in this paper (OBIRS) is freely available at: http://www.ontotoolkit.mines-ales.fr/ObirsClient/. This environment is a first step towards a user-centred application in which the system highlights relevant information to provide decision help.
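
    As an illustration of the scoring idea, the following TypeScript sketch aggregates semantic proximities between query and document concepts with a simple best-match average; this is a stand-in for the paper's aggregating models, not the published OBIRS formula:

```typescript
// Illustrative ontology-based document scoring: for each query concept take
// its closest concept indexing the document, then average the proximities.
type Concept = string;

function documentScore(
  queryConcepts: Concept[],
  docConcepts: Concept[],
  proximity: (a: Concept, b: Concept) => number, // in [0, 1], from the ontology
): number {
  if (queryConcepts.length === 0) return 0;
  const best = queryConcepts.map((q) =>
    Math.max(0, ...docConcepts.map((d) => proximity(q, d))));
  // Simple unweighted mean; the real system also supports concept weighting.
  return best.reduce((s, v) => s + v, 0) / best.length;
}
```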

  9. User centered and ontology based information retrieval system for life sciences

    Directory of Open Access Journals (Sweden)

    Sy Mohameth-François

    2012-01-01

    Full Text Available Abstract Background Because of the increasing number of electronic resources, designing efficient tools to retrieve and exploit them is a major challenge. Some improvements have been offered by semantic Web technologies and applications based on domain ontologies. In life science, for instance, the Gene Ontology is widely exploited in genomic applications and the Medical Subject Headings are the basis of the biomedical publication indexation and information retrieval process proposed by PubMed. However, current search engines suffer from two main drawbacks: there is limited user interaction with the list of retrieved resources and no explanation for their adequacy to the query is provided. Users may thus be confused by the selection and have no idea of how to adapt their queries so that the results match their expectations. Results This paper describes an information retrieval system that relies on a domain ontology to widen the set of relevant documents that is retrieved and that uses a graphical rendering of query results to favor user interactions. Semantic proximities between ontology concepts and aggregating models are used to assess the adequacy of documents with respect to a query. The selection of documents is displayed in a semantic map that provides graphical indications making explicit to what extent they match the user's query; this man/machine interface favors a more interactive and iterative exploration of the data corpus by facilitating the weighting of query concepts and visual explanation. We illustrate the benefit of using this information retrieval system on two case studies, one of which aims at collecting human genes related to transcription factors involved in the hemopoiesis pathway. Conclusions The ontology-based information retrieval system described in this paper (OBIRS) is freely available at: http://www.ontotoolkit.mines-ales.fr/ObirsClient/. This environment is a first step towards a user centred application in which the system enlightens

  10. Measurement of Web Usability: Web Page of Hacettepe University Department of Information Management

    OpenAIRE

    Nazan Özenç Uçak; Tolga Çakmak

    2009-01-01

    Today, information is increasingly produced in electronic form and retrieval of information is provided via web pages. As a result of the rise in the number of web pages, many of them seem to comprise similar contents but different designs. In this respect, presenting information over web pages according to user expectations and specifications is important in terms of effective usage of information. This study provides insight into web usability studies that are executed for measuring...

  11. Building rich and interactive web applications with CoverageJSON

    OpenAIRE

    Blower, Jon; Riechert, Maik; Griffiths, Guy; Kumar, Mridul; Williams, Riley

    2017-01-01

    Web browsers are becoming increasingly capable as visualisation and analysis platforms. Lots of tools and libraries are built around images and "simple features": GeoJSON, KML, OpenLayers, Leaflet, etc. Formats and tools for scientific / meteorological data, however, are not always web-friendly: they are complex, binary, desktop-oriented and come in a large variety, usually community-specific. As a result, lots of people are building ad-hoc solutions for web applications. We want to bring scientific data within the reach of more Web and mobile app deve...

  12. Ajax and Firefox: New Web Applications and Browsers

    Science.gov (United States)

    Godwin-Jones, Bob

    2005-01-01

    Alternative browsers are gaining significant market share, and both Apple and Microsoft are releasing OS upgrades which portend some interesting changes in Web development. Of particular interest for language learning professionals may be new developments in the area of Web browser based applications, particularly using an approach dubbed "Ajax."…

  13. DEVELOPMENT OF A WEB-BASED PROXIMITY BASED MEDIA SHARING APPLICATION

    OpenAIRE

    Erol Ozan

    2016-01-01

    This article reports the development of Vissou, which is a location based web application that enables media recording and sharing among users who are in close proximity to each other. The application facilitates the automated hand-over of the recorded media files from one user to another. There are many social networking applications and web sites that provide digital media sharing and editing functionalities. What differentiates Vissou from other similar applications is the functions and us...

  14. Web Application Software for Ground Operations Planning Database (GOPDb) Management

    Science.gov (United States)

    Lanham, Clifton; Kallner, Shawn; Gernand, Jeffrey

    2013-01-01

    A Web application facilitates collaborative development of the ground operations planning document. This will reduce costs and development time for new programs by incorporating the data governance, access control, and revision tracking of the ground operations planning data. Ground Operations Planning requires the creation and maintenance of detailed timelines and documentation. The GOPDb Web application was created using state-of-the-art Web 2.0 technologies, and was deployed as SaaS (Software as a Service), with an emphasis on data governance and security needs. Application access is managed using two-factor authentication, with data write permissions tied to user roles and responsibilities. Multiple instances of the application can be deployed on a Web server to meet the robust needs for multiple, future programs with minimal additional cost. This innovation features high availability and scalability, with no additional software that needs to be bought or installed. For data governance and security (data quality, management, business process management, and risk management for data handling), the software uses NAMS. No local copy/cloning of data is permitted. Data change log/tracking is addressed, as well as collaboration, work flow, and process standardization. The software provides on-line documentation and detailed Web-based help. There are multiple ways that this software can be deployed on a Web server to meet ground operations planning needs for future programs. The software could be used to support commercial crew ground operations planning, as well as commercial payload/satellite ground operations planning. The application source code and database schema are owned by NASA.
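
    The access rule described, write permissions tied to user roles, can be illustrated with a short TypeScript sketch; the role names and request shape below are hypothetical, since the article does not spell out GOPDb's role model:

```typescript
// Minimal sketch of role-based write control (illustrative names only).
type Role = "viewer" | "editor" | "admin";
interface User { name: string; roles: Role[] }

function canWrite(user: User): boolean {
  return user.roles.includes("editor") || user.roles.includes("admin");
}

function handleUpdate(user: User, apply: () => void): void {
  if (!canWrite(user)) throw new Error("403: write access denied");
  apply(); // in a GOPDb-like system the change would also be revision-tracked
}
```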

  15. Rare disease diagnosis as an information retrieval task

    DEFF Research Database (Denmark)

    Dragusin, Radu; Petcu, Paula; Lioma, Christina

    2011-01-01

    Increasingly more clinicians use web Information Retrieval (IR) systems to assist them in diagnosing difficult medical cases, for instance rare diseases that they may not be familiar with. However, web IR systems are not necessarily optimised for this task. For instance, clinicians’ queries tend...

  16. A Semantic Sensor Web for Environmental Decision Support Applications

    Science.gov (United States)

    Gray, Alasdair J. G.; Sadler, Jason; Kit, Oles; Kyzirakos, Kostis; Karpathiotakis, Manos; Calbimonte, Jean-Paul; Page, Kevin; García-Castro, Raúl; Frazer, Alex; Galpin, Ixent; Fernandes, Alvaro A. A.; Paton, Norman W.; Corcho, Oscar; Koubarakis, Manolis; De Roure, David; Martinez, Kirk; Gómez-Pérez, Asunción

    2011-01-01

    Sensing devices are increasingly being deployed to monitor the physical world around us. One class of application for which sensor data is pertinent is environmental decision support systems, e.g., flood emergency response. For these applications, the sensor readings need to be put in context by integrating them with other sources of data about the surrounding environment. Traditional systems for predicting and detecting floods rely on methods that need significant human resources. In this paper we describe a semantic sensor web architecture for integrating multiple heterogeneous datasets, including live and historic sensor data, databases, and map layers. The architecture provides mechanisms for discovering datasets, defining integrated views over them, continuously receiving data in real-time, and visualising on screen and interacting with the data. Our approach makes extensive use of web service standards for querying and accessing data, and semantic technologies to discover and integrate datasets. We demonstrate the use of our semantic sensor web architecture in the context of a flood response planning web application that uses data from sensor networks monitoring the sea-state around the coast of England. PMID:22164110

  17. An Application for Data Preprocessing and Models Extractions in Web Usage Mining

    Directory of Open Access Journals (Sweden)

    Claudia Elena DINUCA

    2011-11-01

    Full Text Available Web servers worldwide generate a vast amount of information on web users' browsing activities. Several researchers have studied these so-called clickstream or web access log data to better understand and characterize web users. The goal of this application is to analyze user behaviour by mining enriched web access log data. With the continued growth and proliferation of e-commerce, Web services, and Web-based information systems, the volumes of clickstream and user data collected by Web-based organizations in their daily operations have reached astronomical proportions. This information can be exploited in various ways, such as enhancing the effectiveness of websites or developing directed web marketing campaigns. The discovered patterns are usually represented as collections of pages, objects, or resources that are frequently accessed by groups of users with common needs or interests. In this paper we focus on presenting how the application for data preprocessing and for extracting different data models from web log data was implemented, using association rules as a data mining technique to extract potentially useful knowledge from web usage data. We find different data-model navigation patterns by analysing the log files of the web site. The application was implemented in Java using the NetBeans IDE. For exemplification, we used log file data from the commercial web site www.nice-layouts.com.
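
    As a concrete illustration of the preprocessing step, the TypeScript sketch below parses Common Log Format lines and counts page-to-page transitions per visitor, the raw material for navigation patterns; the log layout and the use of the IP address as a visitor key are simplifying assumptions, not details from the paper:

```typescript
// Parse classic Common Log Format lines and count "from -> to" page
// transitions per visitor IP (a crude session surrogate for illustration).
const CLF = /^(\S+) \S+ \S+ \[[^\]]+\] "(?:GET|POST) (\S+) [^"]*" \d{3} \S+/;

function countTransitions(lines: string[]): Map<string, number> {
  const lastPage = new Map<string, string>(); // ip -> previously seen page
  const counts = new Map<string, number>();   // "from -> to" -> count
  for (const line of lines) {
    const m = CLF.exec(line);
    if (!m) continue;                         // skip malformed entries
    const [, ip, page] = m;
    const prev = lastPage.get(ip);
    if (prev && prev !== page) {
      const key = `${prev} -> ${page}`;
      counts.set(key, (counts.get(key) ?? 0) + 1);
    }
    lastPage.set(ip, page);
  }
  return counts;                              // input to association mining
}
```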

  18. Validating Satellite-Retrieved Cloud Properties for Weather and Climate Applications

    Science.gov (United States)

    Minnis, P.; Bedka, K. M.; Smith, W., Jr.; Yost, C. R.; Bedka, S. T.; Palikonda, R.; Spangenberg, D.; Sun-Mack, S.; Trepte, Q.; Dong, X.; Xi, B.

    2014-12-01

    Cloud properties determined from satellite imager radiances are increasingly used in weather and climate applications, particularly in nowcasting, model assimilation and validation, trend monitoring, and precipitation and radiation analyses. The value of using the satellite-derived cloud parameters is determined by the accuracy of the particular parameter for a given set of conditions, such as viewing and illumination angles, surface background, and cloud type and structure. Because of the great variety of those conditions and of the sensors used to monitor clouds, determining the accuracy or uncertainties in the retrieved cloud parameters is a daunting task. Sensitivity studies of the retrieved parameters to the various inputs for a particular cloud type are helpful for understanding the errors associated with the retrieval algorithm relative to the plane-parallel world assumed in most of the model clouds that serve as the basis for the retrievals. Real-world clouds, however, rarely fit the plane-parallel mold and generate radiances that likely produce much greater errors in the retrieved parameters than can be inferred from sensitivity analyses. Thus, independent, empirical methods are used to provide a more reliable uncertainty analysis. At NASA Langley, cloud properties have been retrieved from both geostationary (GEO) and low-Earth-orbiting (LEO) satellite imagers for climate monitoring and model validation as part of the NASA CERES project since 2000, and from AVHRR data since 1978 as part of the NOAA CDR program. Cloud properties are also being retrieved in near-real time globally from both GEO and LEO satellites for weather model assimilation and nowcasting of hazards such as aircraft icing. This paper discusses the various independent datasets and approaches that are used to assess the imager-based satellite cloud retrievals. These include, but are not limited to, data from ARM sites, CloudSat, and CALIPSO. This paper discusses the use of the various

  19. Solving Guesstimation Problems Using the Semantic Web:Four Lessons from an Application

    OpenAIRE

    Bundy, Alan; Sasnauskas, Gintautas; Chan, Michael

    2013-01-01

    We draw on our experience of implementing a semi-automated guesstimation application of the Semantic Web, gort, to draw four lessons, which we claim are of general applicability. These are: 1. Inference can unleash the Semantic Web: The full power of the web will only be realised when we can use it to infer new knowledge from old. 2. The Semantic Web does not constrain the inference mechanisms: Since we must anyway curate the knowledge we extract from the web, we can take the opportunity to tra...

  20. Unit 148 - World Wide Web Basics

    OpenAIRE

    148, CC in GIScience; Yeung, Albert K.

    2000-01-01

    This unit explains the characteristics and the working principles of the World Wide Web as the most important protocol of the Internet. Topics covered in this unit include characteristics of the World Wide Web; using the World Wide Web for the dissemination of information on the Internet; and using the World Wide Web for the retrieval of information from the Internet.

  1. Web document clustering using hyperlink structures

    Energy Technology Data Exchange (ETDEWEB)

    He, Xiaofeng; Zha, Hongyuan; Ding, Chris H.Q; Simon, Horst D.

    2001-05-07

    With the exponential growth of information on the World Wide Web there is great demand for developing efficient and effective methods for organizing and retrieving the information available. Document clustering plays an important role in information retrieval and taxonomy management for the World Wide Web and remains an interesting and challenging problem in the field of web computing. In this paper we consider document clustering methods exploring textual information, hyperlink structure and co-citation relations. In particular, we apply the normalized-cut clustering method developed in computer vision to the task of hyperdocument clustering. We also explore some theoretical connections of the normalized-cut method to the K-means method. We then experiment with the normalized-cut method in the context of clustering query result sets for web search engines.
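
    For reference, the normalized-cut objective applied here partitions the document graph with vertex set V into parts A and B by minimising the following standard criterion from the computer-vision literature (restated here, not quoted from the paper):

```latex
% Standard normalized-cut criterion (restated):
\mathrm{Ncut}(A,B) = \frac{\mathrm{cut}(A,B)}{\mathrm{assoc}(A,V)} + \frac{\mathrm{cut}(A,B)}{\mathrm{assoc}(B,V)},
\quad \text{where } \mathrm{cut}(A,B) = \sum_{u \in A,\, v \in B} w(u,v), \quad
\mathrm{assoc}(A,V) = \sum_{u \in A,\, t \in V} w(u,t).
```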

  2. Web-based applications for virtual laboratories

    NARCIS (Netherlands)

    Bier, H.H.

    2011-01-01

    Web-based applications for academic education usually facilitate the exchange of multimedia files, while design-oriented domains such as architectural and urban design require additional support in collaborative real-time drafting and modeling. In this context, multi-user interactive interfaces

  3. Specification framework for engineering adaptive web applications

    NARCIS (Netherlands)

    Frasincar, F.; Houben, G.J.P.M.; Vdovják, R.

    2002-01-01

    The growing demand for data-driven Web applications has led to the need for a structured and controlled approach to the engineering of such applications. Both designers and developers need a framework that in all stages of the engineering process allows them to specify the relevant aspects of the

  4. Interoperable Multimedia Annotation and Retrieval for the Tourism Sector

    NARCIS (Netherlands)

    Chatzitoulousis, Antonios; Efraimidis, Pavlos S.; Athanasiadis, I.N.

    2015-01-01

    The Atlas Metadata System (AMS) employs semantic web annotation techniques in order to create an interoperable information annotation and retrieval platform for the tourism sector. AMS adopts state-of-the-art metadata vocabularies, annotation techniques and semantic web technologies.

  5. Introducing the PRIDE Archive RESTful web services.

    Science.gov (United States)

    Reisinger, Florian; del-Toro, Noemi; Ternent, Tobias; Hermjakob, Henning; Vizcaíno, Juan Antonio

    2015-07-01

    The PRIDE (PRoteomics IDEntifications) database is one of the world-leading public repositories of mass spectrometry (MS)-based proteomics data and it is a founding member of the ProteomeXchange Consortium of proteomics resources. In the original PRIDE database system, users could access data programmatically through the web services provided by the PRIDE BioMart interface. New REST (REpresentational State Transfer) web services have been developed to serve the most popular functionality provided by BioMart (now discontinued due to data scalability issues) and to address the data access requirements of the newly developed PRIDE Archive. Using the API (Application Programming Interface) it is now possible to programmatically query for and retrieve peptide and protein identifications, project and assay metadata and the originally submitted files. Searching and filtering are also possible by metadata information, such as sample details (e.g. species and tissues), instrumentation (mass spectrometer), keywords and other provided annotations. The PRIDE Archive web services were first made available in April 2014. The API has already been adopted by a few applications and standalone tools such as PeptideShaker, PRIDE Inspector, the Unipept web application and the Python-based BioServices package. This application is free and open to all users with no login requirement and can be accessed at http://www.ebi.ac.uk/pride/ws/archive/. © The Author(s) 2015. Published by Oxford University Press on behalf of Nucleic Acids Research.
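
    Programmatic access of the kind described reduces to plain HTTP requests against the base URL given above. The TypeScript sketch below is illustrative: the /project/list path and its query parameter are assumptions to be checked against the service documentation, not guaranteed endpoint names:

```typescript
// Illustrative REST client for the PRIDE Archive web services.
const PRIDE_WS = "http://www.ebi.ac.uk/pride/ws/archive";

async function searchProjects(keyword: string): Promise<unknown> {
  // Hypothetical search endpoint filtering project metadata by keyword.
  const res = await fetch(
    `${PRIDE_WS}/project/list?query=${encodeURIComponent(keyword)}`);
  if (!res.ok) throw new Error(`PRIDE request failed: ${res.status}`);
  return res.json(); // project metadata as JSON
}

searchProjects("human liver").then((projects) => console.log(projects));
```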

  6. CDAPubMed: a browser extension to retrieve EHR-based biomedical literature

    Directory of Open Access Journals (Sweden)

    Perez-Rey David

    2012-04-01

    Full Text Available Abstract Background Over the last few decades, the ever-increasing output of scientific publications has led to new challenges in keeping up to date with the literature. In the biomedical area, this growth has introduced new requirements for professionals, e.g., physicians, who have to locate the exact papers that they need for their clinical and research work amongst a huge number of publications. Against this backdrop, novel information retrieval methods are even more necessary. While web search engines are widespread in many areas, facilitating access to all kinds of information, additional tools are required to automatically link information retrieved from these engines to specific biomedical applications. In the case of clinical environments, this also means considering aspects such as patient data security and confidentiality or structured contents, e.g., electronic health records (EHRs). In this scenario, we have developed a new tool to facilitate query building to retrieve scientific literature related to EHRs. Results We have developed CDAPubMed, an open-source web browser extension to integrate EHR features in biomedical literature retrieval approaches. Clinical users can use CDAPubMed to: (i) load patient clinical documents, i.e., EHRs based on the Health Level 7-Clinical Document Architecture Standard (HL7-CDA), (ii) identify relevant terms for scientific literature search in these documents, i.e., Medical Subject Headings (MeSH), automatically driven by the CDAPubMed configuration, which advanced users can optimize to adapt to each specific situation, and (iii) generate and launch literature search queries to a major search engine, i.e., PubMed, to retrieve citations related to the EHR under examination. Conclusions CDAPubMed is a platform-independent tool designed to facilitate literature searching using keywords contained in specific EHRs. CDAPubMed is visually integrated, as an extension of a widespread web browser, within the standard
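
    The final step, turning extracted MeSH terms into a literature query, can be sketched against NCBI's public E-utilities interface; the esearch endpoint is real, but the term-joining convention below is an assumption rather than CDAPubMed's actual query builder:

```typescript
// Build a PubMed esearch URL from MeSH terms extracted from an EHR.
// Joining terms with [MeSH Terms] tags and AND is one common convention
// (an assumption here, not CDAPubMed's documented behaviour).
function buildPubMedQuery(meshTerms: string[]): string {
  const term = meshTerms.map((t) => `"${t}"[MeSH Terms]`).join(" AND ");
  return "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi" +
         `?db=pubmed&retmode=json&term=${encodeURIComponent(term)}`;
}

console.log(buildPubMedQuery(["Hypertension", "Telemedicine"]));
```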

  7. THE DIFFERENCE BETWEEN DEVELOPING SINGLE PAGE APPLICATION AND TRADITIONAL WEB APPLICATION BASED ON MECHATRONICS ROBOT LABORATORY ONAFT APPLICATION

    Directory of Open Access Journals (Sweden)

    V. Solovei

    2018-04-01

    Full Text Available Today most desktop and mobile applications have analogues in the form of web-based applications. With the evolution of development and web technologies, web applications have grown to rival desktop applications in functionality. A web application consists of two parts: the client part and the server part. The client part is responsible for providing the user with visual information through the browser. The server part is responsible for processing and storing data. Multi-page applications (MPA) appeared together with the Internet and work in a "traditional" way: every change, e.g. displaying or submitting data, requests a new page from the server. With the advent of AJAX, MPAs learned to load not the whole page but only a part of it, which eventually led to the appearance of the SPA. The SPA is a development principle in which only one page is transferred to the client, and content is downloaded into a certain part of that page without reloading it, which speeds up the application and raises the user experience to the level of desktop applications. Based on the SPA principle, the Mechatronics Robot Laboratory ONAFT application was designed to automate the management process. The application implements a client-server architecture. The server part consists of a RESTful API, which provides unified access to the application functionality, and a database for storing information. Since the client part is a SPA, the load on the connection to the server is reduced and the user experience improved.
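
    On the client, the SPA principle described above reduces to fetching content asynchronously and injecting it into one region of the already loaded page. A minimal TypeScript sketch follows (the API path and element id are placeholders, not taken from the ONAFT application):

```typescript
// Fetch content from the server part and inject it into one page region,
// with no full page reload -- the essence of the SPA principle.
async function loadInto(regionId: string, apiPath: string): Promise<void> {
  const res = await fetch(apiPath);        // request to the RESTful server part
  const html = await res.text();
  const region = document.getElementById(regionId);
  if (region) region.innerHTML = html;     // only this region changes
}

// e.g. a navigation click in the client part:
loadInto("content", "/api/robots/status");
```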

  8. Security Assessment of Web Based Distributed Applications

    Directory of Open Access Journals (Sweden)

    Catalin BOJA

    2010-01-01

    Full Text Available This paper presents an overview of the evaluation of risks and vulnerabilities in a web-based distributed application, emphasizing aspects concerning the process of security assessment with regard to the audit field. In the audit process, an important activity is dedicated to the measurement of the characteristics taken into consideration for evaluation. From this point of view, the quality of the audit process depends on the quality of the assessment methods and techniques. By reviewing the fields involved in the research process, the approach aims to reflect the main concerns that affect web-based distributed applications, using exploratory research techniques. The results show that there are many aspects that must be carefully addressed across a distributed system, and that they can be revealed by an in-depth analysis of the information flow and internal processes that are part of the system. This paper highlights the limitations caused by the absence of a unified security risk assessment model that could prevent the risks and vulnerabilities discussed. Based on such standardized models, secure web-based distributed applications can be easily audited, and many vulnerabilities that can appear due to the lack of access to information can be avoided.

  9. Information Retrieval for Education: Making Search Engines Language Aware

    Science.gov (United States)

    Ott, Niels; Meurers, Detmar

    2010-01-01

    Search engines have been a major factor in making the web the successful and widely used information source it is today. Generally speaking, they make it possible to retrieve web pages on a topic specified by the keywords entered by the user. Yet web searching currently does not take into account which of the search results are comprehensible for…

  10. Development of a web application for water resources based on open source software

    Science.gov (United States)

    Delipetrev, Blagoj; Jonoski, Andreja; Solomatine, Dimitri P.

    2014-01-01

    This article presents the research and development of a prototype web application for water resources using the latest advancements in Information and Communication Technologies (ICT), open source software and web GIS. The web application has three web services for: (1) managing, presenting and storing geospatial data, (2) supporting water resources modeling and (3) water resources optimization. The web application is developed using several programming languages (PHP, Ajax, JavaScript, Java), libraries (OpenLayers, JQuery) and open source software components (GeoServer, PostgreSQL, PostGIS). The presented web application has several main advantages: it is available all the time, it is accessible from everywhere, it creates a real-time multi-user collaboration platform, the programming-language code and components are interoperable and designed to work in a distributed computer environment, it is flexible for adding additional components and services, and it is scalable depending on the workload. The application was successfully tested on a case study with concurrent multi-user access.

  11. Retrieving high-resolution images over the Internet from an anatomical image database

    Science.gov (United States)

    Strupp-Adams, Annette; Henderson, Earl

    1999-12-01

    The Visible Human Data set is an important contribution to the national collection of anatomical images. To enhance the availability of these images, the National Library of Medicine has supported the design and development of a prototype object-oriented image database which imports, stores, and distributes high-resolution anatomical images in both pixel and voxel formats. One of the key database modules is its client-server Internet interface. This Web interface provides a query engine with retrieval access to high-resolution anatomical images that range in size from 100 KB for browser-viewable rendered images to 1 GB for anatomical structures in voxel file formats. The Web query and retrieval client-server system is composed of applet GUIs, servlets, and RMI application modules which communicate with each other to allow users to query for specific anatomical structures and retrieve image data, as well as associated anatomical images, from the database. Selected images can be downloaded individually as single files via HTTP or downloaded in batch mode over the Internet to the user's machine through an applet that uses Netscape's Object Signing mechanism. The image database uses ObjectDesign's object-oriented DBMS, ObjectStore, which has a Java interface. The query and retrieval system has been tested with a Java-CDE window system and on the x86 architecture using Windows NT 4.0. This paper describes the Java applet client search engine that queries the database; the Java client module that enables users to view anatomical images online; and the Java application server interface to the database, which organizes data returned to the user, and its distribution engine that allows users to download image files individually and/or in batch mode.

  12. Access Control of Web- and Java-Based Applications

    Science.gov (United States)

    Tso, Kam S.; Pajevski, Michael J.

    2013-01-01

    Cybersecurity has become a great concern as threats of service interruption, unauthorized access, stealing and altering of information, and spreading of viruses have become more prevalent and serious. Application-layer access control of applications is a critical component in the overall security solution that also includes encryption, firewalls, virtual private networks, antivirus, and intrusion detection. An access control solution, based on an open-source access manager augmented with custom software components, was developed to provide protection to both Web-based and Java-based client and server applications. The DISA Security Service (DISA-SS) provides common access control capabilities for AMMOS software applications through a set of application programming interfaces (APIs) and network-accessible security services for authentication, single sign-on, authorization checking, and authorization policy management. The OpenAM access management technology designed for Web applications can be extended to meet the needs of Java thick clients and standalone servers that are commonly used in the JPL AMMOS environment. The DISA-SS reusable components have greatly reduced the effort for each AMMOS subsystem to develop its own access control strategy. The novelty of this work is that it leverages an open-source access management product that was designed for Web-based applications to provide access control for Java thick clients and Java standalone servers. Thick clients and standalone servers are still commonly used in businesses and government, especially for applications that require rich graphical user interfaces and high-performance visualization that cannot be met by thin clients running on Web browsers.

  13. Mastering web application development with Express

    CERN Document Server

    Vlăduțu, Alexandru

    2014-01-01

    If you are a Node.js developer who wants to take your Express skills to the next level and develop high performing, reliable web applications using best practices, this book is ideal for you. The only prerequisite is knowledge of Node.js.

  14. Grid-optimized Web 3D applications on wide area network

    Science.gov (United States)

    Wang, Frank; Helian, Na; Meng, Lingkui; Wu, Sining; Zhang, Wen; Guo, Yike; Parker, Michael Andrew

    2008-08-01

    Geographical information systems have now entered the era of Web services. In this paper, Web3D applications have been developed based on our GridJet platform, which provides a more effective solution for massive 3D geo-dataset sharing in distributed environments. Web3D services enable web users to access services such as 3D scenes, virtual geographical environments and so on. However, Web3D services have to be shared by thousands of users that are inherently distributed across different geographic locations. Large 3D geo-datasets need to be transferred to distributed clients via conventional HTTP, NFS and FTP protocols, which often entails long waits and frustration in distributed wide-area network environments. GridJet is used as the underlying engine between the Web3D application node and the geo-data server; it utilizes a wide range of technologies, including parallelized remote file access, is a WAN/Grid-optimized protocol, and provides "local-like" access to remote 3D geo-datasets. No change in the way of using software is required, since the multi-streamed GridJet protocol remains fully compatible with existing IP infrastructures. Our recent progress includes a real-world test in which Web3D applications such as Google Earth over the GridJet protocol beat those over the classic protocols by a factor of 2-7 where the transfer distance is over 10,000 km.

  15. MyLabStocks: a web-application to manage molecular biology materials.

    Science.gov (United States)

    Chuffart, Florent; Yvert, Gaël

    2014-05-01

    Laboratory stocks are the hardware of research. They must be stored and managed with minimum loss of material and information. Plasmids, oligonucleotides and strains are regularly exchanged between collaborators within and between laboratories. Managing and sharing information about every item is crucial for retrieval of reagents, for planning experiments and for reproducing past experimental results. We have developed a web-based application to manage stocks commonly used in a molecular biology laboratory. Its functionalities include user-defined privileges, visualization of plasmid maps directly from their sequence and the capacity to search items from fields of annotation or directly from a query sequence using BLAST. It is designed to handle records of plasmids, oligonucleotides, yeast strains, antibodies, pipettes and notebooks. Based on PHP/MySQL, it can easily be extended to handle other types of stocks and it can be installed on any server architecture. MyLabStocks is freely available from: https://forge.cbp.ens-lyon.fr/redmine/projects/mylabstocks under an open source licence. © 2014 Laboratoire de Biologie Moleculaire de la Cellule CNRS. Yeast published by John Wiley & Sons, Ltd.

  16. Exploiting semantic linkages among multiple sources for semantic information retrieval

    Science.gov (United States)

    Li, JianQiang; Yang, Ji-Jiang; Liu, Chunchen; Zhao, Yu; Liu, Bo; Shi, Yuliang

    2014-07-01

    The vision of the Semantic Web is to build a global Web of machine-readable data to be consumed by intelligent applications. As the first step to make this vision come true, the initiative of linked open data has fostered many novel applications aimed at improving data accessibility in the public Web. Comparably, the enterprise environment is so different from the public Web that most potentially usable business information originates in an unstructured form (typically in free text), which poses a challenge for the adoption of semantic technologies in the enterprise environment. Considering that the business information in a company is highly specific and centred around a set of commonly used concepts, this paper describes a pilot study to migrate the concept of linked data into the development of a domain-specific application, i.e. the vehicle repair support system. The set of commonly used concepts, including the part name of a car and the phenomenon term on the car repairing, are employed to build the linkage between data and documents distributed among different sources, leading to the fusion of documents and data across source boundaries. Then, we describe the approaches of semantic information retrieval to consume these linkages for value creation for companies. The experiments on two real-world data sets show that the proposed approaches outperform the best baseline 6.3-10.8% and 6.4-11.1% in terms of top five and top 10 precisions, respectively. We believe that our pilot study can serve as an important reference for the development of similar semantic applications in an enterprise environment.
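
    The reported top-five and top-10 precisions are presumably the standard precision-at-k measure, restated here for reference (an assumption, since the abstract does not define it):

```latex
% Standard precision-at-k (assumed to be the reported measure):
P@k = \frac{\left|\{\text{relevant documents among the top } k \text{ results}\}\right|}{k}
```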

  17. Nuclear data retrieval for PC applications, PCNuDat

    International Nuclear Information System (INIS)

    Kinsey, R.R.

    1996-01-01

    The PCNuDat program for IBM-PC compatibles is similar to the NuDat program available through the NNDC Online Nuclear Data Service. Both provide a user with access to nuclear data in a convenient, menu-driven system. These data are useful in both basic and applied research. The nuclear data base used by NuDat is extracted from several data bases maintained at the National Nuclear Data Center (NNDC). The program is an extended DOS program which uses 32-bit addressing. It can run in a DOS window on all the current Windows operating systems. The program and its data base are currently available both on CD-ROM and electronically over the Internet. Electronic access can be made through the NNDC's Web home page. The files may also be FTP'd from the public area under the [pc prog] directory on bnlnd2.dne.bnl.gov. The CD-ROM version also contains the Nuclear Science References (NSR) data base and its retrieval program, Papyrus NSR.

  18. Advances in Electronic Commerce, Web Application and Communication v.1

    CERN Document Server

    Lin, Sally; Second International Conference on Electronic Commerce, Web Application and Communication (ECWAC 2012)

    2012-01-01

    ECWAC2012 is an integrated conference devoted to Electronic Commerce, Web Application and Communication. In these proceedings you can find the carefully reviewed scientific outcome of the second International Conference on Electronic Commerce, Web Application and Communication (ECWAC 2012), held on March 17-18, 2012 in Wuhan, China, bringing together researchers from all around the world in the field.

  19. Advances in Electronic Commerce, Web Application and Communication v.2

    CERN Document Server

    Lin, Sally; Second International Conference on Electronic Commerce, Web Application and Communication (ECWAC 2012)

    2012-01-01

    ECWAC2012 is an integrated conference devoted to Electronic Commerce, Web Application and Communication. In these proceedings you can find the carefully reviewed scientific outcome of the second International Conference on Electronic Commerce, Web Application and Communication (ECWAC 2012), held on March 17-18, 2012 in Wuhan, China, bringing together researchers from all around the world in the field.

  20. Toward Exposing Timing-Based Probing Attacks in Web Applications

    Science.gov (United States)

    Mao, Jian; Chen, Yue; Shi, Futian; Jia, Yaoqi; Liang, Zhenkai

    2017-01-01

    Web applications have become the foundation of many types of systems, ranging from cloud services to Internet of Things (IoT) systems. Due to the large amount of sensitive data processed by web applications, user privacy emerges as a major concern in web security. Existing protection mechanisms in modern browsers, e.g., the same origin policy, prevent the users’ browsing information on one website from being directly accessed by another website. However, web applications executed in the same browser share the same runtime environment. Such shared states provide side channels for malicious websites to indirectly figure out the information of other origins. Timing is a classic side channel and the root cause of many recent attacks, which rely on the variations in the time taken by the systems to process different inputs. In this paper, we propose an approach to expose the timing-based probing attacks in web applications. It monitors the browser behaviors and identifies anomalous timing behaviors to detect browser probing attacks. We have prototyped our system in the Google Chrome browser and evaluated the effectiveness of our approach by using known probing techniques. We have applied our approach on a large number of top Alexa sites and reported the suspicious behavior patterns with corresponding analysis results. Our theoretical analysis illustrates that the effectiveness of the timing-based probing attacks is dramatically limited by our approach. PMID:28245610

  1. Ajax, XSLT and SVG: Displaying ATLAS conditions data with new web technologies

    Energy Technology Data Exchange (ETDEWEB)

    Roe, S A, E-mail: shaun.roe@cern.c [CERN, CH-1211 Geneve 23 (Switzerland)

    2010-04-01

    The combination of three relatively recent technologies is described which allows an easy path from database retrieval to interactive web display. SQL queries on an Oracle database can be performed in a manner which directly return an XML description of the result, and Ajax techniques (Asynchronous JavaScript And XML) are used to dynamically inject the data into a web display accompanied by an XSLT transform template which determines how the data will be formatted. By tuning the transform to generate SVG (Scalable Vector Graphics) a direct graphical representation can be produced in the web page while retaining the database data as the XML source, allowing dynamic links to be generated in the web representation, but programmatic use of the data when used from a user application. With the release of the SVG 1.2 Tiny draft specification, the display can also be tailored for display on mobile devices. The technologies are described and a sample application demonstrated, showing conditions data from the ATLAS Semiconductor Tracker.
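
    The pipeline described, Ajax retrieval of XML plus an XSLT transform producing SVG, maps directly onto standard browser APIs. The TypeScript sketch below is illustrative; XSLTProcessor and DOMParser are real browser objects, while the two URLs and the target element id are placeholders:

```typescript
// Fetch the XML query result and an XSLT template, transform the XML to SVG
// in the browser, and inject the fragment into the page.
async function renderConditions(xmlUrl: string, xsltUrl: string): Promise<void> {
  const parser = new DOMParser();
  const [xmlText, xsltText] = await Promise.all(
    [xmlUrl, xsltUrl].map((u) => fetch(u).then((r) => r.text())));
  const xml = parser.parseFromString(xmlText, "application/xml");
  const xslt = parser.parseFromString(xsltText, "application/xml");

  const proc = new XSLTProcessor();
  proc.importStylesheet(xslt);                      // template controls formatting
  const svgFragment = proc.transformToFragment(xml, document);
  document.getElementById("display")?.appendChild(svgFragment); // inline SVG
}
```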

  2. Ajax, XSLT and SVG: Displaying ATLAS conditions data with new web technologies

    International Nuclear Information System (INIS)

    Roe, S A

    2010-01-01

    The combination of three relatively recent technologies is described which allows an easy path from database retrieval to interactive web display. SQL queries on an Oracle database can be performed in a manner which directly return an XML description of the result, and Ajax techniques (Asynchronous JavaScript And XML) are used to dynamically inject the data into a web display accompanied by an XSLT transform template which determines how the data will be formatted. By tuning the transform to generate SVG (Scalable Vector Graphics) a direct graphical representation can be produced in the web page while retaining the database data as the XML source, allowing dynamic links to be generated in the web representation, but programmatic use of the data when used from a user application. With the release of the SVG 1.2 Tiny draft specification, the display can also be tailored for display on mobile devices. The technologies are described and a sample application demonstrated, showing conditions data from the ATLAS Semiconductor Tracker.

  3. Ajax, XSLT and SVG: Displaying ATLAS conditions data with new web technologies

    CERN Document Server

    Roe, S A

    2010-01-01

    The combination of three relatively recent technologies is described which allows an easy path from database retrieval to interactive web display. SQL queries on an Oracle database can be performed in a manner which directly returns an XML description of the result, and Ajax techniques (Asynchronous JavaScript And XML) are used to dynamically inject the data into a web display accompanied by an XSLT transform template which determines how the data will be formatted. By tuning the transform to generate SVG (Scalable Vector Graphics) a direct graphical representation can be produced in the web page while retaining the database data as the XML source, allowing dynamic links to be generated in the web representation while still permitting programmatic use of the data from a user application. With the release of the SVG 1.2 Tiny draft specification, the display can also be tailored for display on mobile devices. The technologies are described and a sample application demonstrated, showing conditions data from the ATLAS Sem...

  4. Displaying R spatial statistics on Google dynamic maps with web applications created by Rwui

    Science.gov (United States)

    2012-01-01

    Background The R project includes a large variety of packages designed for spatial statistics. Google dynamic maps provide web based access to global maps and satellite imagery. We describe a method for displaying directly the spatial output from an R script on to a Google dynamic map. Methods This is achieved by creating a Java based web application which runs the R script and then displays the results on the dynamic map. In order to make this method easy to implement by those unfamiliar with programming Java based web applications, we have added the method to the options available in the R Web User Interface (Rwui) application. Rwui is an established web application for creating web applications for running R scripts. A feature of Rwui is that all the code for the web application being created is generated automatically so that someone with no knowledge of web programming can make a fully functional web application for running an R script in a matter of minutes. Results Rwui can now be used to create web applications that will display the results from an R script on a Google dynamic map. Results may be displayed as discrete markers and/or as continuous overlays. In addition, users of the web application may select regions of interest on the dynamic map with mouse clicks and the coordinates of the region of interest will automatically be made available for use by the R script. Conclusions This method of displaying R output on dynamic maps is designed to be of use in a number of areas. Firstly it allows statisticians, working in R and developing methods in spatial statistics, to easily visualise the results of applying their methods to real world data. Secondly, it allows researchers who are using R to study health geographics data, to display their results directly onto dynamic maps. Thirdly, by creating a web application for running an R script, a statistician can enable users entirely unfamiliar with R to run R coded statistical analyses of health geographics
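
    A sketch of the display step under stated assumptions: the R output is assumed to arrive as JSON points from a hypothetical /run-r endpoint (Rwui generates the real plumbing), and markers are placed with the Google Maps JavaScript API.

    // TypeScript (browser). Assumes the Google Maps JavaScript API script tag is loaded.
    declare const google: any;

    interface RPoint { lat: number; lng: number; label: string; }

    async function showRResults(): Promise<void> {
      const map = new google.maps.Map(document.getElementById("map"), {
        center: { lat: 52.2, lng: 0.12 }, // arbitrary initial view
        zoom: 8,
      });
      const points: RPoint[] = await fetch("/run-r").then(r => r.json());
      for (const p of points) {
        // Discrete markers; continuous overlays would use an overlay layer instead.
        new google.maps.Marker({ position: { lat: p.lat, lng: p.lng }, map, title: p.label });
      }
      // Region-of-interest selection: clicked coordinates could be posted back
      // for use by the R script, as the abstract describes.
      map.addListener("click", (e: any) => {
        console.log("ROI corner:", e.latLng.lat(), e.latLng.lng());
      });
    }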

  5. Analysis and Design of Web-Based Database Application for Culinary Community

    Directory of Open Access Journals (Sweden)

    Choirul Huda

    2017-03-01

    Full Text Available This research is motivated by the rapid development of the culinary field and of information technology. The difficulty of communicating with culinary experts and of documenting recipes makes proper media support very important. Therefore, a web-based database application for the public is important to help the culinary community with communication, searching and recipe management. The aim of the research was to design a web-based database application that could be used as social media for the culinary community. This research used literature review, user interviews, and questionnaires. Moreover, the database system development life cycle was used as a guide for designing the database, especially for conceptual database design, logical database design, and physical database design. The web-based application design followed the eight golden rules of user interface design. The result of this research is a web-based database application that fulfills the needs of users in the culinary field related to communication and recipe management.

  6. ASP.NET Web API: build RESTful web applications and services on the .NET framework

    CERN Document Server

    Kanjilal, Joydip

    2013-01-01

    This book is a step-by-step, practical tutorial with a simple approach to help you build RESTful web applications and services on the .NET framework quickly and efficiently. This book is for ASP.NET web developers who want to explore REST-based services with C# 5. It contains many real-world code examples with explanations whenever necessary. Some experience with C# and ASP.NET 4 is expected.

  7. Agricultural Library Information Retrieval Based on Improved Semantic Algorithm

    OpenAIRE

    Meiling, Xie

    2014-01-01

    International audience; To help users quickly access the information they need from the agricultural library's vast holdings, and to improve its low-intelligence query service, a model for intelligent library information retrieval was constructed. The semantic web mode was introduced and the information retrieval framework was designed. The model structure consists of three parts: information data integration, the user interface and information retrieval matching. The key method supporting retr...

  8. Crawling Ajax-based Web Applications through Dynamic Analysis of User Interface State Changes

    NARCIS (Netherlands)

    Mesbah, A.; Van Deursen, A.; Lenselink, S.

    2011-01-01

    Using JavaScript and dynamic DOM manipulation on the client-side of web applications is becoming a widespread approach for achieving rich interactivity and responsiveness in modern web applications. At the same time, such techniques, collectively known as Ajax, shatter the metaphor of web ‘pages’

  9. User Interface Design in Medical Distributed Web Applications.

    Science.gov (United States)

    Serban, Alexandru; Crisan-Vida, Mihaela; Mada, Leonard; Stoicu-Tivadar, Lacramioara

    2016-01-01

    User interfaces are important for easy learning and operation of an IT application, especially in the medical world. An easy-to-use interface has to be simple and tailored to the user's needs and mode of operation. The technology in the background is an important tool to accomplish this. The present work aims to create a web interface using specific technology (HTML table design combined with CSS3) to provide an optimized responsive interface for a complex web application. In the first phase, the current icMED web medical application layout is analyzed, and its structure is designed using specific tools, on source files. In the second phase, a new graphical interface adaptable to different mobile terminals is proposed (using HTML table design (TD) and the CSS3 method) that uses no source files, just lines of code for layout design, improving the interaction in terms of speed and simplicity. For a complex medical software application a new prototype layout was designed and developed using HTML tables. The method uses CSS code with only CSS classes applied to one or multiple HTML table elements, instead of CSS styles that can be applied to just one DIV tag at a time. The technique has the advantage of simplified CSS code and better adaptability to different media resolutions compared to the DIV-CSS style method. The presented work is proof that adaptive web interfaces can be developed just by using and combining different types of design methods and technologies, based on HTML table design, resulting in an interface that is simpler to learn and use, and suitable for healthcare services.

  10. Development of grid-like applications for public health using Web 2.0 mashup techniques.

    Science.gov (United States)

    Scotch, Matthew; Yip, Kevin Y; Cheung, Kei-Hoi

    2008-01-01

    Development of public health informatics applications often requires the integration of multiple data sources. This process can be challenging due to issues such as different file formats, schemas, naming systems, and having to scrape the content of web pages. A potential solution to these system development challenges is the use of Web 2.0 technologies. In general, Web 2.0 technologies are new internet services that encourage and value information sharing and collaboration among individuals. In this case report, we describe the development and use of Web 2.0 technologies including Yahoo! Pipes within a public health application that integrates animal, human, and temperature data to assess the risk of West Nile Virus (WNV) outbreaks. The results of development and testing suggest that while Web 2.0 applications are reasonable environments for rapid prototyping, they are not mature enough for large-scale public health data applications. The application, in fact a "system of systems," often failed due to varied timeouts for application response across web sites and services, internal caching errors, and software added to web sites by administrators to manage the load on their servers. In spite of these concerns, the results of this study demonstrate the potential value of grid computing and Web 2.0 approaches in public health informatics.

  11. Information management on the basis of semantic-web techniques, or a Google for developers; Informationsmanagement auf der Basis von Semantic-Web Techniken oder Ein Google fuer Entwickler

    Energy Technology Data Exchange (ETDEWEB)

    Thelen, B. [Schenck Pegasus GmbH, Darmstadt (Germany); Sevilmis, N.; Stork, A. [Fraunhofer Inst. fuer Graphische Datenverarbeitung, Darmstadt (Germany); Castro, R. [Centro de Computacao Grafica, Guimaraes (Portugal); Jimenez, I.; Marcos, G.; Posada, J.; Smithers, T. [VICOMTech, San Sebastian (Spain); Mauri, M.; Pianciamore, M.; Selvini, P. [CEFRIEL, Milano (Italy); Zecchino, V. [Italdesign - Giugiaro SpA, Moncalieri, Torino (Italy)

    2005-07-01

    Information retrieval often suffers from a lack of suitable search tools or from the complexity of queries. Searching for concrete information on the basis of file names or the coincidental occurrence of keywords in files is of little help, because the matches obtained are largely a matter of chance. An effective search must therefore be based on a semantic interpretation of the query and must additionally cast the query into the context of an application domain. Here the development of the search machine prototype WIDE is presented, which builds its query interpretation on Semantic Web techniques. The search machine can be configured for application domains and is able to map a query to different data sources in parallel. It presents the retrieved results graphically and associates the concepts used in the query with thematically related concepts. The search machine can be used to retrieve text documents or test bed results of experiments archived in ASAM-ODS data sources. (orig.)

  12. Scalable Transactions for Web Applications in the Cloud

    NARCIS (Netherlands)

    Zhou, W.; Pierre, G.E.O.; Chi, C.-H.

    2009-01-01

    Cloud Computing platforms provide scalability and high availability properties for web applications but they sacrifice data consistency at the same time. However, many applications cannot afford any data inconsistency. We present a scalable transaction manager for NoSQL cloud database services to

  13. Towards Second and Third Generation Web-Based Multimedia

    NARCIS (Netherlands)

    J.R. van Ossenbruggen (Jacco); J.P.T.M. Geurts (Joost); F.J. Cornelissen; L. Rutledge (Lloyd); L. Hardman (Lynda)

    2001-01-01

    First generation Web content encodes information in handwritten (HTML) Web pages. Second generation Web content generates HTML pages on demand, e.g. by filling in templates with content retrieved dynamically from a database or by transformation of structured documents using style sheets

  14. VennDiagramWeb: a web application for the generation of highly customizable Venn and Euler diagrams.

    Science.gov (United States)

    Lam, Felix; Lalansingh, Christopher M; Babaran, Holly E; Wang, Zhiyuan; Prokopec, Stephenie D; Fox, Natalie S; Boutros, Paul C

    2016-10-03

    Visualization of data generated by high-throughput, high-dimensionality experiments is rapidly becoming a rate-limiting step in computational biology. There is an ongoing need to quickly develop high-quality visualizations that can be easily customized or incorporated into automated pipelines. This often requires an interface for manual plot modification, rapid cycles of tweaking visualization parameters, and the generation of graphics code. To facilitate this process for the generation of highly customizable, high-resolution Venn and Euler diagrams, we introduce VennDiagramWeb: a web application for the widely used VennDiagram R package. VennDiagramWeb is hosted at http://venndiagram.res.oicr.on.ca/. VennDiagramWeb allows real-time modification of Venn and Euler diagrams, with parameter setting through a web interface and immediate visualization of results. It allows customization of essentially all aspects of figures, but also supports integration into computational pipelines via download of R code. Users can upload data and download figures in a range of formats, and there is exhaustive support documentation. VennDiagramWeb allows the easy creation of Venn and Euler diagrams for computational biologists, and indeed many other fields. Its ability to support real-time graphics changes that are linked to downloadable code that can be integrated into automated pipelines will greatly facilitate the improved visualization of complex datasets. For application support please contact Paul.Boutros@oicr.on.ca.

  15. Relational Constraint Driven Test Case Synthesis for Web Applications

    Directory of Open Access Journals (Sweden)

    Xiang Fu

    2010-09-01

    Full Text Available This paper proposes a relational constraint driven technique that synthesizes test cases automatically for web applications. Using a static analysis, servlets can be modeled as relational transducers, which manipulate backend databases. We present a synthesis algorithm that generates a sequence of HTTP requests for simulating a user session. The algorithm relies on backward symbolic image computation for reaching a certain database state, given a code coverage objective. With a slight adaptation, the technique can be used for discovering workflow attacks on web applications.

  16. HTML5 web application development by example

    CERN Document Server

    Gustafson, JM

    2013-01-01

    The best way to learn anything is by doing. The author uses a friendly tone and fun examples to ensure that you learn the basics of application development. Once you have read this book, you should have the necessary skills to build your own applications.If you have no experience but want to learn how to create applications in HTML5, this book is the only help you'll need. Using practical examples, HTML5 Web Application Development by Example will develop your knowledge and confidence in application development.

  17. Information retrieval implementing and evaluating search engines

    CERN Document Server

    Büttcher, Stefan; Cormack, Gordon V

    2016-01-01

    Information retrieval is the foundation for modern search engines. This textbook offers an introduction to the core topics underlying modern search technologies, including algorithms, data structures, indexing, retrieval, and evaluation. The emphasis is on implementation and experimentation; each chapter includes exercises and suggestions for student projects. Wumpus -- a multiuser open-source information retrieval system developed by one of the authors and available online -- provides model implementations and a basis for student work. The modular structure of the book allows instructors to use it in a variety of graduate-level courses, including courses taught from a database systems perspective, traditional information retrieval courses with a focus on IR theory, and courses covering the basics of Web retrieval. In addition to its classroom use, Information Retrieval will be a valuable reference for professionals in computer science, computer engineering, and software engineering.
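
    To illustrate the kind of core material such a text covers, here is a textbook-style TF-IDF ranking sketch; it is a generic illustration, not code from the book or from Wumpus.

    // TypeScript: rank documents against a query by summed tf * idf weights.
    function tfidfRank(docs: string[], query: string): number[] {
      const tokenize = (s: string) => s.toLowerCase().split(/\W+/).filter(Boolean);
      const docTokens = docs.map(tokenize);
      const N = docs.length;

      // Document frequency of each term (how many documents contain it).
      const df = new Map<string, number>();
      for (const tokens of docTokens) {
        for (const t of new Set(tokens)) df.set(t, (df.get(t) ?? 0) + 1);
      }

      // Score each document: sum over query terms of term frequency * inverse document frequency.
      const scores = docTokens.map(tokens => {
        let score = 0;
        for (const q of tokenize(query)) {
          const tf = tokens.filter(t => t === q).length;
          const idf = Math.log(N / ((df.get(q) ?? 0) + 1));
          score += tf * idf;
        }
        return score;
      });
      // Return document indices sorted by descending score.
      return scores.map((_, i) => i).sort((a, b) => scores[b] - scores[a]);
    }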

  18. ChemiRs: a web application for microRNAs and chemicals.

    Science.gov (United States)

    Su, Emily Chia-Yu; Chen, Yu-Sing; Tien, Yun-Cheng; Liu, Jeff; Ho, Bing-Ching; Yu, Sung-Liang; Singh, Sher

    2016-04-18

    MicroRNAs (miRNAs) are non-coding RNAs of about 22 nucleotides that affect various cellular functions and play a regulatory role in different organisms, including humans. Until now, more than 2500 mature human miRNAs have been discovered and registered, but there is still a lack of information and algorithms to reveal the relations among miRNAs, environmental chemicals and human health. Chemicals in the environment affect our health and daily life, and some of them can lead to diseases by interfering with biological pathways. We have developed a reliable online web server, ChemiRs, for predicting interactions and relations among miRNAs, chemicals and pathways. The database not only compares gene lists affected by chemicals and miRNAs, but also incorporates curated pathways to identify possible interactions. Here, we manually retrieved associations between miRNAs and chemicals from the biomedical literature. We developed an online system, ChemiRs, which contains miRNAs, diseases, Medical Subject Headings (MeSH) terms, chemicals, genes, pathways and PubMed IDs. We connected each miRNA to miRBase, and every current gene symbol to the HUGO Gene Nomenclature Committee (HGNC) for genome annotation. Human pathway information is also provided from the KEGG and REACTOME databases. Information about Gene Ontology (GO) is queried from the GO Online SQL Environment (GOOSE). With a user-friendly interface, the web application is easy to use. Multiple query results can be easily integrated and exported as report documents in PDF format. Association analysis of miRNAs and chemicals can help us understand the pathogenesis of chemical components. ChemiRs is freely available for public use at http://omics.biol.ntnu.edu.tw/ChemiRs.

  19. DNA barcode goes two-dimensions: DNA QR code web server.

    Science.gov (United States)

    Liu, Chang; Shi, Linchun; Xu, Xiaolan; Li, Huan; Xing, Hang; Liang, Dong; Jiang, Kun; Pang, Xiaohui; Song, Jingyuan; Chen, Shilin

    2012-01-01

    The DNA barcoding technology uses a standard region of DNA sequence for species identification and discovery. At present, "DNA barcode" actually refers to DNA sequences, which are not amenable to information storage, recognition, and retrieval. Our aim is to identify the best symbology that can represent DNA barcode sequences in practical applications. A comprehensive set of sequences for five DNA barcode markers ITS2, rbcL, matK, psbA-trnH, and CO1 was used as the test data. Fifty-three different types of one-dimensional and ten two-dimensional barcode symbologies were compared based on different criteria, such as coding capacity, compression efficiency, and error detection ability. The quick response (QR) code was found to have the largest coding capacity and a relatively high compression ratio. To facilitate the further usage of QR code-based DNA barcodes, a web server was developed and is accessible at http://qrfordna.dnsalias.org. The web server allows users to retrieve the QR code for a species of interest, convert a DNA sequence to and from a QR code, and perform species identification based on local and global sequence similarities. In summary, the first comprehensive evaluation of various barcode symbologies has been carried out. The QR code has been found to be the most appropriate symbology for DNA barcode sequences. A web server has also been constructed to allow biologists to utilize QR codes in practical DNA barcoding applications.

  20. DNA barcode goes two-dimensions: DNA QR code web server.

    Directory of Open Access Journals (Sweden)

    Chang Liu

    Full Text Available The DNA barcoding technology uses a standard region of DNA sequence for species identification and discovery. At present, "DNA barcode" actually refers to DNA sequences, which are not amenable to information storage, recognition, and retrieval. Our aim is to identify the best symbology that can represent DNA barcode sequences in practical applications. A comprehensive set of sequences for five DNA barcode markers ITS2, rbcL, matK, psbA-trnH, and CO1 was used as the test data. Fifty-three different types of one-dimensional and ten two-dimensional barcode symbologies were compared based on different criteria, such as coding capacity, compression efficiency, and error detection ability. The quick response (QR) code was found to have the largest coding capacity and a relatively high compression ratio. To facilitate the further usage of QR code-based DNA barcodes, a web server was developed and is accessible at http://qrfordna.dnsalias.org. The web server allows users to retrieve the QR code for a species of interest, convert a DNA sequence to and from a QR code, and perform species identification based on local and global sequence similarities. In summary, the first comprehensive evaluation of various barcode symbologies has been carried out. The QR code has been found to be the most appropriate symbology for DNA barcode sequences. A web server has also been constructed to allow biologists to utilize QR codes in practical DNA barcoding applications.

  1. Datamart use for complex data retrieval in an ArcIMS application

    Energy Technology Data Exchange (ETDEWEB)

    Scherma, S. (Steven); Bolivar, Stephen L.

    2004-01-01

    This paper describes the use of datamarts and data warehousing concepts to expedite the retrieval and display of complex attribute data from multi-million record databases. Los Alamos National Laboratory has developed an Internet application (SMART) using ArcIMS that relies on datamarts to quickly retrieve attribute data associated with, but not contained within, GIS layers. The volume of data and the complex relationships within the transactional database made data display within ArcIMS impractical without the use of datamarts. The technical issues and solutions involved in the development are discussed.

  2. Server Interface Descriptions for Automated Testing of JavaScript Web Applications

    DEFF Research Database (Denmark)

    Jensen, Casper Svenning; Møller, Anders; Su, Zhendong

    2013-01-01

    Automated testing of JavaScript web applications is complicated by the communication with servers. Specifically, it is difficult to test the JavaScript code in isolation from the server code and database contents. We present a practical solution to this problem. First, we demonstrate that formal server interface descriptions are useful in automated testing of JavaScript web applications for separating the concerns of the client and the server. Second, to support the construction of server interface descriptions for existing applications, we introduce an effective inference technique that learns communication patterns from sample data. By incorporating interface descriptions into the testing tool Artemis, our experimental results show that we increase the level of automation for high-coverage testing on a collection of JavaScript web applications that exchange JSON data between the clients and servers.
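
    A sketch of the idea, with an invented description format: a formal server interface description lets the JavaScript under test run against a mock instead of the real server. The paper's actual notation and its Artemis integration are not reproduced here.

    // TypeScript: a toy server interface description plus a fetch mock built from it.
    interface EndpointDescription {
      method: "GET" | "POST";
      path: RegExp;
      sampleResponse: unknown; // the JSON the server would return
    }

    const serverInterface: EndpointDescription[] = [
      { method: "GET",  path: /^\/api\/items$/, sampleResponse: [{ id: 1, name: "a" }] },
      { method: "POST", path: /^\/api\/items$/, sampleResponse: { id: 2 } },
    ];

    // Replace fetch with a mock that answers from the interface description, so the
    // client code under test runs in isolation from server code and database contents.
    function installMock(): void {
      (globalThis as any).fetch = async (url: string, init?: { method?: string }) => {
        const method = (init?.method ?? "GET").toUpperCase();
        const pathname = new URL(url, "http://test.local").pathname;
        const ep = serverInterface.find(e => e.method === method && e.path.test(pathname));
        if (!ep) throw new Error(`Unexpected request: ${method} ${url}`);
        return new Response(JSON.stringify(ep.sampleResponse), {
          headers: { "Content-Type": "application/json" },
        });
      };
    }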

  3. A Sample WebQuest Applicable in Teaching Topological Concepts

    Science.gov (United States)

    Yildiz, Sevda Goktepe; Korpeoglu, Seda Goktepe

    2016-01-01

    In recent years, WebQuests have received a great deal of attention and have been used effectively in teaching-learning process in various courses. In this study, a WebQuest that can be applicable in teaching topological concepts for undergraduate level students was prepared. A number of topological concepts, such as countability, infinity, and…

  4. A web application for poloidal field analysis on HL-2M

    International Nuclear Information System (INIS)

    Song, X.M.; Pan, W.; Chen, L.Y.; Song, X.; Li, X.D.

    2014-01-01

    Highlights: • An original way to develop a web application with a new framework (jQuery + PHP + Matlab) is introduced. • A convenient but powerful application for electromagnetic calculation is implemented. • The web application can run in any popular browser, on any hardware and in any operating system. • No plugins are needed; no maintenance is required. - Abstract: Recently, many web tools [1–3] in the fusion community have been designed and demonstrated, and they have proved to be powerful and convenient for fusion researchers. Many physicists and engineers need a tool to compute the poloidal magnetic field for various purposes (for example, the calibration of magnetic probes for EFIT, field null structure analysis for control, or the design of plasma diagnostic systems), so developing a powerful and convenient web application for calculating the magnetic field and magnetic flux produced by PF coils is very important. In this paper, a web application tool for poloidal field analysis on HL-2M with a totally original framework is presented. The web application has a fully dynamic and interactive interface, and can run in any popular browser (IE, Safari, Firefox, Opera), on any hardware (smart phone, PC, iPad, Mac) and operating system (iOS, Android, Windows, Linux, Mac OS). No plugins are needed. The three layers (jQuery + PHP + Matlab) of this framework are introduced. The front client layer is developed in jQuery. The middle layer, which acts as a bridge connecting the server and client through socket communication, is developed in PHP. The back-end server layer is developed in Matlab, which computes the magnetic field or magnetic flux using complete elliptic integrals and returns the results in the client's preferred form, either as a table or as a JPG image. The field null structure and the vertical and radial field structure calculated by this tool are presented in detail. The idea to design a web
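
    For reference, the field of a single circular coil of radius a carrying current I, evaluated at a point (r, z) in cylindrical coordinates, is given by the standard textbook expressions in terms of the complete elliptic integrals K(k) and E(k); these are presumably the quantities the Matlab layer evaluates, though the paper's exact formulation is not reproduced here:

    \[ k^2 = \frac{4ar}{(a+r)^2 + z^2}, \]
    \[ B_z = \frac{\mu_0 I}{2\pi\sqrt{(a+r)^2+z^2}}\left[K(k) + \frac{a^2 - r^2 - z^2}{(a-r)^2 + z^2}\,E(k)\right], \]
    \[ B_r = \frac{\mu_0 I\,z}{2\pi r\sqrt{(a+r)^2+z^2}}\left[-K(k) + \frac{a^2 + r^2 + z^2}{(a-r)^2 + z^2}\,E(k)\right]. \]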

  5. A web application for poloidal field analysis on HL-2M

    Energy Technology Data Exchange (ETDEWEB)

    Song, X.M., E-mail: songxm@swip.ac.cn; Pan, W.; Chen, L.Y.; Song, X.; Li, X.D.

    2014-05-15

    Highlights: • An original way to develop a web application with a new framework (jQuery + PHP + Matlab) is introduced. • A convenient but powerful application for electromagnetic calculation is implemented. • The web application can run in any popular browser, on any hardware and in any operating system. • No plugins are needed; no maintenance is required. - Abstract: Recently, many web tools [1–3] in the fusion community have been designed and demonstrated, and they have proved to be powerful and convenient for fusion researchers. Many physicists and engineers need a tool to compute the poloidal magnetic field for various purposes (for example, the calibration of magnetic probes for EFIT, field null structure analysis for control, or the design of plasma diagnostic systems), so developing a powerful and convenient web application for calculating the magnetic field and magnetic flux produced by PF coils is very important. In this paper, a web application tool for poloidal field analysis on HL-2M with a totally original framework is presented. The web application has a fully dynamic and interactive interface, and can run in any popular browser (IE, Safari, Firefox, Opera), on any hardware (smart phone, PC, iPad, Mac) and operating system (iOS, Android, Windows, Linux, Mac OS). No plugins are needed. The three layers (jQuery + PHP + Matlab) of this framework are introduced. The front client layer is developed in jQuery. The middle layer, which acts as a bridge connecting the server and client through socket communication, is developed in PHP. The back-end server layer is developed in Matlab, which computes the magnetic field or magnetic flux using complete elliptic integrals and returns the results in the client's preferred form, either as a table or as a JPG image. The field null structure and the vertical and radial field structure calculated by this tool are presented in detail. The idea to design a web

  6. Workflow and web application for annotating NCBI BioProject transcriptome data.

    Science.gov (United States)

    Vera Alvarez, Roberto; Medeiros Vidal, Newton; Garzón-Martínez, Gina A; Barrero, Luz S; Landsman, David; Mariño-Ramírez, Leonardo

    2017-01-01

    The volume of transcriptome data is growing exponentially due to rapid improvement of experimental technologies. In response, large central resources such as those of the National Center for Biotechnology Information (NCBI) are continually adapting their computational infrastructure to accommodate this large influx of data. New and specialized databases, such as Transcriptome Shotgun Assembly Sequence Database (TSA) and Sequence Read Archive (SRA), have been created to aid the development and expansion of centralized repositories. Although the central resource databases are under continual development, they do not include automatic pipelines to increase annotation of newly deposited data. Therefore, third-party applications are required to achieve that aim. Here, we present an automatic workflow and web application for the annotation of transcriptome data. The workflow creates secondary data such as sequencing reads and BLAST alignments, which are available through the web application. They are based on freely available bioinformatics tools and scripts developed in-house. The interactive web application provides a search engine and several browser utilities. Graphical views of transcript alignments are available through SeqViewer, an embedded tool developed by NCBI for viewing biological sequence data. The web application is tightly integrated with other NCBI web applications and tools to extend the functionality of data processing and interconnectivity. We present a case study for the species Physalis peruviana with data generated from BioProject ID 67621. URL: http://www.ncbi.nlm.nih.gov/projects/physalis/. Published by Oxford University Press 2017. This work is written by US Government employees and is in the public domain in the US.

  7. Use and utility of Web-based residency program information: a survey of residency applicants.

    Science.gov (United States)

    Embi, Peter J; Desai, Sima; Cooney, Thomas G

    2003-01-01

    The Internet has become essential to the residency application process. In recent years, applicants and residency programs have used the Internet-based tools of the National Residency Matching Program (NRMP, the Match) and the Electronic Residency Application Service (ERAS) to process and manage application and Match information. In addition, many residency programs have moved their recruitment information from printed brochures to Web sites. Despite this change, little is known about how applicants use residency program Web sites and what constitutes optimal residency Web site content, information that is critical to developing and maintaining such sites. To study the use and perceived utility of Web-based residency program information by surveying applicants to an internal medicine program. Our sample population was the applicants to the Oregon Health & Science University Internal Medicine Residency Program who were invited for an interview. We solicited participation using the group e-mail feature available through the Electronic Residency Application Service Post-Office application. To minimize the possibility for biased responses, the study was confined to the period between submission of National Residency Matching Program rank-order lists and release of Match results. Applicants could respond using an anonymous Web-based form or by reply to the e-mail solicitation. We tabulated responses, calculated percentages for each, and performed a qualitative analysis of comments. Of the 431 potential participants, 218 responded (51%) during the study period. Ninety-nine percent reported comfort browsing the Web; 52% accessed the Web primarily from home. Sixty-nine percent learned about residency Web sites primarily from residency-specific directories while 19% relied on general directories. Eighty percent found these sites helpful when deciding where to apply, 69% when deciding where to interview, and 36% when deciding how to rank order programs for the Match. Forty

  8. Simultaneous binary hash and features learning for image retrieval

    Science.gov (United States)

    Frantc, V. A.; Makov, S. V.; Voronin, V. V.; Marchuk, V. I.; Semenishchev, E. A.; Egiazarian, K. O.; Agaian, S.

    2016-05-01

    Content-based image retrieval systems have plenty of applications in the modern world, the most important being image search by query image or by semantic description. Approaches to this problem are employed in personal photo-collection management systems, web-scale image search engines, medical systems, etc. Automatic analysis of large unlabeled image datasets is virtually impossible without a satisfactory image-retrieval technique, which is the main reason this kind of automatic image processing has attracted so much attention in recent years. Despite considerable progress in the field, semantically meaningful image retrieval remains a challenging task. The main issue is the demand to provide reliable results in a short amount of time. This paper addresses the problem with a novel technique for the simultaneous learning of global image features and binary hash codes. Our approach maps a pixel-based image representation to the hash-value space while preserving as much of the semantic image content as possible. We use a deep learning methodology to generate image descriptions with the properties of similarity preservation and statistical independence. The main advantage of our approach over existing methods is the ability to fine-tune the retrieval procedure for a very specific application, which allows us to provide better results than general techniques. The framework presented in the paper for data-dependent image hashing is based on two different kinds of neural networks: convolutional neural networks for image description and an autoencoder for the feature-to-hash-space mapping. Experimental results confirm that our approach shows promising results in comparison with other state-of-the-art methods.

  9. Web application security is a stack how to CYA (cover your apps) completely

    CERN Document Server

    Mac Vittie, Lori

    2015-01-01

    The web application stack: a growing threat vector. Understand the threat and learn how to defend your organisation. This book is intended for application developers, system administrators and operators, as well as networking professionals who need a comprehensive top-level view of web application security in order to better defend and protect both the 'web' and the 'application' against potential attacks. This book examines the most common, fundamental attack vectors and shows readers the defence techniques used to combat them. Contents: Introduction; Attack Surface; Threat Vectors; Threat Mitigatio...

  10. Security Testing in Agile Web Application Development - A Case Study Using the EAST Methodology

    CERN Document Server

    Erdogan, Gencer

    2010-01-01

    There is a need for improved security testing methodologies specialized for Web applications and their agile development environment. The number of web application vulnerabilities is drastically increasing, while security testing tends to be given a low priority. In this paper, we analyze and compare Agile Security Testing with two other common methodologies for Web application security testing, and then present an extension of this methodology. We present a case study showing how our Extended Agile Security Testing (EAST) performs compared to a more ad hoc approach used within an organization. Our working hypothesis is that the detection of vulnerabilities in Web applications will be significantly more efficient when using a structured security testing methodology specialized for Web applications, compared to existing ad hoc ways of performing security tests. Our results show a clear indication that our hypothesis is on the right track.

  11. DEVELOPMENT OF WEB-BASED GEOGRAPHIC INFORMATION SYSTEM SOFTWARE

    Directory of Open Access Journals (Sweden)

    Budi Santosa

    2015-04-01

    Full Text Available Geospatial information can nowadays be displayed not only with stand-alone GIS software but also over the Internet, which serves as a medium for distributing geospatial information. Through the Internet, people all over the world can access geospatial information and process geographic information as desired, without being limited by location. Web-based GIS has evolved from web and client-server architectures into a unified distributed system. Internet technology provides a new form for all the functions of an information system: data collection, data storage, data retrieval, data analysis and data visualization. This paper reviews this latest technology, web-based GIS, with emphasis on the architecture and the stages of development of web-based GIS software, from needs analysis to the maintenance stage. The implementation phase of web-based GIS software development then yields the right web-based GIS product with the right process as well.

  12. Development of an Web Service Architecture for Enterprise Application Integration

    International Nuclear Information System (INIS)

    Kim, Ji-Hyeon; Jung, Jae-Cheon; Chang, Young-Woo; Chang, Hoon-Seon; Kim, Jae-Cheol; Kim, Hang-Bae; Kim, Kyu-Ho; Lee, Dong-Chul

    2007-01-01

    The purpose of Enterprise Application Integration (EAI) is to enable interoperability between two or more enterprise software systems. These systems can be, for example, an Enterprise Resource Planning (ERP) system, an Enterprise Asset Management (EAM) system or a condition monitoring system. The traditional EAI approach, based on point-to-point connections, is expensive, vendor specific, limited in modules and restricted in interoperability with other ERPs and applications. To overcome these drawbacks, Web Service based EAI has emerged. It allows integration without point-to-point linking and at lower cost. Many Web service based EAI approaches are combined with ORACLE, SAP, PeopleSoft, WebSphere, SIEBEL etc. as a system integration platform. The approach still has the restriction that only predefined clients can access the services. This means clients must know exactly the protocol for calling the services, and if they don't have the access information they can never get the services. This is because these Web services are based on syntactic service descriptions. In this paper, a semantics-based EAI approach that allows uninformed clients to access the services is introduced. The semantic EAI is designed with Web services that have semantic service descriptions. The Semantic Web Services (SWS) are described in the Web Ontology Language for Services (OWL-S), a semantic service ontology language, and advertised in Universal Description, Discovery and Integration (UDDI). Clients find desired services through the UDDI and get the services from service providers through the Web Service Description Language (WSDL).

  13. Uncovering Web search strategies in South African higher education

    Directory of Open Access Journals (Sweden)

    Surika Civilcharran

    2016-11-01

    Full Text Available Background: In spite of the enormous amount of information available on the Web and the fact that search engines are continuously evolving to enhance the search experience, students are nevertheless faced with the difficulty of effectively retrieving information. It is, therefore, imperative for the interaction between students and search tools to be understood and search strategies to be identified, in order to promote successful information retrieval. Objectives: This study identifies the Web search strategies used by postgraduate students and forms part of a wider study into information retrieval strategies used by postgraduate students at the University of KwaZulu-Natal (UKZN), Pietermaritzburg campus, South Africa. Method: Largely underpinned by Thatcher's cognitive search strategies, a mixed-methods approach was utilised for this study, in which questionnaires were employed in Phase 1 and structured interviews in Phase 2. This article reports and reflects on the findings of Phase 2, which focus on identifying the Web search strategies employed by postgraduate students. The Phase 1 results were reported in Civilcharran, Hughes and Maharaj (2015). Results: Findings reveal the Web search strategies used for academic information retrieval. In spite of easy access to the invisible Web and the advent of meta-search engines, the use of Web search engines still remains the preferred search tool. The UKZN online library databases, and especially the UKZN online library Online Public Access Catalogue system, are being underutilised. Conclusion: Being ranked in the top three percent of the world's universities, UKZN is investing in search tools that are not being used to their full potential. This evidence suggests an urgent need for students to be trained in Web searching and to have greater exposure to a variety of search tools. This article is intended to further contribute to the design of undergraduate training programmes in order to deal

  14. Statistical Language Models and Information Retrieval: Natural Language Processing Really Meets Retrieval

    NARCIS (Netherlands)

    Hiemstra, Djoerd; de Jong, Franciska M.G.

    2001-01-01

    Traditionally, natural language processing techniques for information retrieval have always been studied outside the framework of formal models of information retrieval. In this article, we introduce a new formal model of information retrieval based on the application of statistical language models.
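
    For context, the query-likelihood formulation with linear interpolation (Jelinek-Mercer) smoothing that underlies this family of models ranks a document D for query terms q_1, ..., q_n by

    \[ P(D \mid q_1, \ldots, q_n) \;\propto\; P(D) \prod_{i=1}^{n} \bigl( (1-\lambda)\, P(q_i \mid C) + \lambda\, P(q_i \mid D) \bigr), \]

    where P(q_i | D) is the document language model, P(q_i | C) the collection (background) model, and lambda the interpolation weight. This is the standard textbook form; the article's own notation may differ.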

  15. Cost reduction for web-based data imputation

    KAUST Repository

    Li, Zhixu; Shang, Shuo; Xie, Qing; Zhang, Xiangliang

    2014-01-01

    Web-based Data Imputation enables the completion of incomplete data sets by retrieving absent field values from the Web. In particular, complete fields can be used as keywords in imputation queries for absent fields. However, due to the ambiguity

  16. Advanced express web application development

    CERN Document Server

    Keig, Andrew

    2013-01-01

    A practical book, guiding the reader through the development of a single page application using a feature-driven approach.If you are an experienced JavaScript developer who wants to build highly scalable, real-world applications using Express, this book is ideal for you. This book is an advanced title and assumes that the reader has some experience with node, Javascript MVC web development frameworks, and has heard of Express before, or is familiar with it. You should also have a basic understanding of Redis and MongoDB. This book is not a tutorial on Node, but aims to explore some of the more

  17. CerebralWeb: a Cytoscape.js plug-in to visualize networks stratified by subcellular localization.

    Science.gov (United States)

    Frias, Silvia; Bryan, Kenneth; Brinkman, Fiona S L; Lynn, David J

    2015-01-01

    CerebralWeb is a light-weight JavaScript plug-in that extends Cytoscape.js to enable fast and interactive visualization of molecular interaction networks stratified based on subcellular localization or other user-supplied annotation. The application is designed to be easily integrated into any website and is configurable to support customized network visualization. CerebralWeb also supports the automatic retrieval of Cerebral-compatible localizations for human, mouse and bovine genes via a web service and enables the automated parsing of Cytoscape compatible XGMML network files. CerebralWeb currently supports embedded network visualization on the InnateDB (www.innatedb.com) and Allergy and Asthma Portal (allergen.innatedb.com) database and analysis resources. Database tool URL: http://www.innatedb.com/CerebralWeb © The Author(s) 2015. Published by Oxford University Press.
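
    A minimal Cytoscape.js sketch of the kind of stratified view described above: nodes are pinned to horizontal bands by a hand-supplied subcellular localization annotation. The band coordinates and elements are invented for illustration; CerebralWeb's own layout and options are richer.

    // TypeScript, using the Cytoscape.js library.
    import cytoscape from "cytoscape";

    // Assumed band positions per localization (illustrative values only).
    const bandY: Record<string, number> = {
      "extracellular": 0, "plasma membrane": 100, "cytoplasm": 200, "nucleus": 300,
    };

    const cy = cytoscape({
      container: document.getElementById("network"),
      elements: [
        { data: { id: "ligand", loc: "extracellular" } },
        { data: { id: "receptor", loc: "plasma membrane" } },
        { data: { id: "kinase", loc: "cytoplasm" } },
        { data: { id: "tf", loc: "nucleus" } },
        { data: { id: "e1", source: "ligand", target: "receptor" } },
        { data: { id: "e2", source: "receptor", target: "kinase" } },
        { data: { id: "e3", source: "kinase", target: "tf" } },
      ],
      layout: {
        name: "preset",
        positions: (node: any) => ({
          x: 100 + 50 * node.id().length,      // crude horizontal spread
          y: bandY[node.data("loc")] ?? 400,   // stratify by localization band
        }),
      } as any,
      style: [{ selector: "node", style: { label: "data(id)" } }],
    });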

  18. AN INNOVATIVE WEB MINING APPLICATION ON BLOGS - A LAYOUT

    Directory of Open Access Journals (Sweden)

    S. Prakash

    2012-01-01

    Full Text Available Blogs and Web services allow users to express their opinions and interests in the form of short text messages, which give abbreviated and highly personalized remarks in real time. Recognizing emotion is particularly significant for a text-based communication tool such as a blog. Nowadays, user opinions in the form of comments and reviews in blogs have been utilized by researchers for various purposes, among them the application of sentiment analysis techniques, which is an interesting one. This paper proposes a software architecture for constructing Web mining applications in the blog world. The design includes blog crawling and data mining algorithms, to offer a full-fledged and flexible solution for constructing general-purpose Web mining applications. The architecture allows some significant customizations, such as the construction of adapters for reading text from different blogs, and the utilization of different pre-processing methods and data mining procedures. The core of this paper explains the innovative software architecture of the general framework, offering thorough information about the data mining sub-framework.

  19. Finding Specification Pages from the Web

    Science.gov (United States)

    Yoshinaga, Naoki; Torisawa, Kentaro

    This paper presents a method of finding a specification page on the Web for a given object (e.g., "Ch. d'Yquem") and its class label (e.g., "wine"). A specification page for an object is a Web page which gives concise attribute-value information about the object (e.g., "county"-"Sauternes") in well formatted structures. A simple unsupervised method using layout and symbolic decoration cues was applied to a large number of Web pages to acquire candidate attributes for each class (e.g., "county" for the class "wine"). We then filter out irrelevant words from the putative attributes through an author-aware scoring function that we call site frequency. We used the acquired attributes to select a representative specification page for a given object from the Web pages retrieved by a normal search engine. Experimental results revealed that our system greatly outperformed the normal search engine in terms of this specification retrieval task.

  20. AMP: a science-driven web-based application for the TeraGrid

    Science.gov (United States)

    Woitaszek, M.; Metcalfe, T.; Shorrock, I.

    The Asteroseismic Modeling Portal (AMP) provides a web-based interface for astronomers to run and view simulations that derive the properties of Sun-like stars from observations of their pulsation frequencies. In this paper, we describe the architecture and implementation of AMP, highlighting the lightweight design principles and tools used to produce a functional fully-custom web-based science application in less than a year. Targeted as a TeraGrid science gateway, AMP's architecture and implementation are intended to simplify its orchestration of TeraGrid computational resources. AMP's web-based interface was developed as a traditional standalone database-backed web application using the Python-based Django web development framework, allowing us to leverage the Django framework's capabilities while cleanly separating the user interface development from the grid interface development. We have found this combination of tools flexible and effective for rapid gateway development and deployment.

  1. Web Engineering

    Energy Technology Data Exchange (ETDEWEB)

    White, Bebo

    2003-06-23

    Web Engineering is the application of systematic, disciplined and quantifiable approaches to development, operation, and maintenance of Web-based applications. It is both a pro-active approach and a growing collection of theoretical and empirical research in Web application development. This paper gives an overview of Web Engineering by addressing the questions: (a) why is it needed? (b) what is its domain of operation? (c) how does it help and what should it do to improve Web application development? and (d) how should it be incorporated in education and training? The paper discusses the significant differences that exist between Web applications and conventional software, the taxonomy of Web applications, the progress made so far and the research issues and experience of creating a specialization at the master's level. The paper reaches a conclusion that Web Engineering at this stage is a moving target since Web technologies are constantly evolving, making new types of applications possible, which in turn may require innovations in how they are built, deployed and maintained.

  2. Web application for marketing of digital art works and services

    OpenAIRE

    Vatovec, Jan

    2016-01-01

    The aim of the diploma thesis is to create a web application for marketing digital artworks and services. The decision to undertake this task is based on the author's understanding of the field and the assessment that current solutions do not completely satisfy the needs of digital artists who work on the market of online subcultures built around fantasy characters (commercial and artists' own creations). The final application comprises an interactive web gallery, auction-based marketing system for s...

  3. A Semantic Sensor Web for Environmental Decision Support Applications

    Directory of Open Access Journals (Sweden)

    Raúl García-Castro

    2011-09-01

    Full Text Available Sensing devices are increasingly being deployed to monitor the physical world around us. One class of application for which sensor data is pertinent is environmental decision support systems, e.g., flood emergency response. For these applications, the sensor readings need to be put in context by integrating them with other sources of data about the surrounding environment. Traditional systems for predicting and detecting floods rely on methods that need significant human resources. In this paper we describe a semantic sensor web architecture for integrating multiple heterogeneous datasets, including live and historic sensor data, databases, and map layers. The architecture provides mechanisms for discovering datasets, defining integrated views over them, continuously receiving data in real-time, and visualising on screen and interacting with the data. Our approach makes extensive use of web service standards for querying and accessing data, and semantic technologies to discover and integrate datasets. We demonstrate the use of our semantic sensor web architecture in the context of a flood response planning web application that uses data from sensor networks monitoring the sea-state around the coast of England.

  4. Analysis Tool Web Services from the EMBL-EBI.

    Science.gov (United States)

    McWilliam, Hamish; Li, Weizhong; Uludag, Mahmut; Squizzato, Silvano; Park, Young Mi; Buso, Nicola; Cowley, Andrew Peter; Lopez, Rodrigo

    2013-07-01

    Since 2004 the European Bioinformatics Institute (EMBL-EBI) has provided access to a wide range of databases and analysis tools via Web Services interfaces. This comprises services to search across the databases available from the EMBL-EBI and to explore the network of cross-references present in the data (e.g. EB-eye), services to retrieve entry data in various data formats and to access the data in specific fields (e.g. dbfetch), and analysis tool services, for example, sequence similarity search (e.g. FASTA and NCBI BLAST), multiple sequence alignment (e.g. Clustal Omega and MUSCLE), pairwise sequence alignment and protein functional analysis (e.g. InterProScan and Phobius). The REST/SOAP Web Services (http://www.ebi.ac.uk/Tools/webservices/) interfaces to these databases and tools allow their integration into other tools, applications, web sites, pipeline processes and analytical workflows. To get users started using the Web Services, sample clients are provided covering a range of programming languages and popular Web Service tool kits, and a brief guide to Web Services technologies, including a set of tutorials, is available for those wishing to learn more and develop their own clients. Users of the Web Services are informed of improvements and updates via a range of methods.
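
    As an illustration, a small client for one of the services mentioned above (dbfetch), following its documented URL conventions; the accession is just an example, and current parameter details should be checked against the EMBL-EBI documentation.

    // TypeScript: retrieve a database entry as FASTA via the dbfetch REST interface.
    async function fetchFasta(accession: string): Promise<string> {
      const url =
        "https://www.ebi.ac.uk/Tools/dbfetch/dbfetch" +
        `?db=uniprotkb&id=${encodeURIComponent(accession)}&format=fasta&style=raw`;
      const res = await fetch(url);
      if (!res.ok) throw new Error(`dbfetch failed: ${res.status}`);
      return res.text(); // raw FASTA record
    }

    fetchFasta("P05067").then(console.log); // example accession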

  5. Web Platform Application

    Energy Technology Data Exchange (ETDEWEB)

    Paulsworth, Ashley [Sunvestment Group, Frederick, MD (United States); Kurtz, Jim [Sunvestment Group, Frederick, MD (United States); Brun de Pontet, Stephanie [Sunvestment Group, Frederick, MD (United States)

    2016-06-15

    Sunvestment Energy Group (previously called Sunvestment Group) was established to create a web application that brings together site hosts, those who will obtain the energy from the solar array, with project developers and funders, including affinity investors. Sunvestment Energy Group (SEG) uses a community-based model that engages with investors who have some affinity with the site host organization. In addition to a financial return, these investors receive non-financial value from their investments and are therefore willing to offer lower cost capital. This enables the site host to enjoy more savings from solar through these less expensive Community Power Purchase Agreements (CPPAs). The purpose of this award was to develop an online platform to bring site hosts and investors together virtually.

  6. An architecture for diversity-aware search for medical web content.

    Science.gov (United States)

    Denecke, K

    2012-01-01

    The Web provides a huge source of information, also on medical and health-related issues. In particular, the content of medical social media data can be diverse due to the background of an author, the source or the topic. Diversity in this context means that a document covers different aspects of a topic or that a topic is described in different ways. In this paper, we introduce an approach that allows the diverse aspects of a search query to be considered when providing retrieval results to a user. We introduce a system architecture for a diversity-aware search engine that retrieves medical information from the web. The diversity of retrieval results is assessed by calculating diversity measures that rely upon semantic information derived from a mapping to concepts of a medical terminology. Considering these measures, the result set is diversified by ranking more diverse texts higher. The methods and system architecture are implemented in a retrieval engine for medical web content. The diversity measures reflect the diversity of aspects considered in a text and its type of information content. They are used for result presentation, filtering and ranking. In a user evaluation we assess user satisfaction with an ordering of retrieval results that considers the diversity measures. The evaluation shows that diversity-aware retrieval, considering diversity measures in ranking, could increase user satisfaction with retrieval results.
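
    The paper's own diversity measures derive from concept mappings to a medical terminology; as a stand-in, the sketch below diversifies a ranked result list with the generic Maximal Marginal Relevance (MMR) scheme, using concept overlap as the similarity. It illustrates the re-ranking idea only and is not the paper's algorithm.

    // TypeScript: greedy MMR re-ranking balancing relevance against redundancy.
    interface Doc { id: string; relevance: number; concepts: Set<string>; }

    function jaccard(a: Set<string>, b: Set<string>): number {
      const inter = [...a].filter(x => b.has(x)).length;
      const union = new Set([...a, ...b]).size;
      return union === 0 ? 0 : inter / union;
    }

    function mmrRank(docs: Doc[], lambda = 0.7): Doc[] {
      const remaining = [...docs];
      const selected: Doc[] = [];
      while (remaining.length > 0) {
        let bestIdx = 0, bestScore = -Infinity;
        remaining.forEach((d, i) => {
          // Penalize documents similar to ones already selected.
          const maxSim = Math.max(0, ...selected.map(s => jaccard(d.concepts, s.concepts)));
          const score = lambda * d.relevance - (1 - lambda) * maxSim;
          if (score > bestScore) { bestScore = score; bestIdx = i; }
        });
        selected.push(remaining.splice(bestIdx, 1)[0]);
      }
      return selected;
    }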

  7. Web Application-Based Library Information System

    Directory of Open Access Journals (Sweden)

    Yudie Irawan

    2014-01-01

    Full Text Available A digital library system contributes to the development of digital resources that can be accessed via the Internet. A library management system contributes to the development of automated membership data processing, circulation and cataloging. This thesis develops a new concept of the digital library system and the library management system by integrating the two system architectures. The integration architecture is implemented by inserting library management system components into the digital library system architecture. Web application technology is required for these components in order for them to be integrated with the digital library system components. The advantage of the new system is that the borrowing, membership and cataloging applications become sharable over the internet, so the applications can be used together. Catalog information can be exchanged between libraries without losing the digital library function of sharing digital resources uploaded by each librarian. Keywords: Digital library system; Library management system; Web application

  8. Exploring the concept of web site customization : applications and antecedents

    NARCIS (Netherlands)

    Teerling, M.L.; Huizingh, Eelko K.R.E.

    2006-01-01

    While mass customization is the tailoring of products and services to the needs and wants of individual customers, web site customization is the tailoring of web sites to individual customers’ preferences. Based on a review of site customization applications, the authors propose a model with four

  9. Designing and developing portable large-scale JavaScript web applications within the Experiment Dashboard framework

    Science.gov (United States)

    Andreeva, J.; Dzhunov, I.; Karavakis, E.; Kokoszkiewicz, L.; Nowotka, M.; Saiz, P.; Tuckett, D.

    2012-12-01

    Improvements in web browser performance and web standards compliance, as well as the availability of comprehensive JavaScript libraries, provide an opportunity to develop functionally rich yet intuitive web applications that allow users to access, render and analyse data in novel ways. However, the development of such large-scale JavaScript web applications presents new challenges, in particular with regard to code sustainability and team-based work. We present an approach that meets the challenges of large-scale JavaScript web application design and development, including client-side model-view-controller architecture, design patterns, and JavaScript libraries. Furthermore, we show how the approach leads naturally to the encapsulation of the data source as a web API, allowing applications to be easily ported to new data sources. The Experiment Dashboard framework is used for the development of applications for monitoring the distributed computing activities of virtual organisations on the Worldwide LHC Computing Grid. We demonstrate the benefits of the approach for large-scale JavaScript web applications in this context by examining the design of several Experiment Dashboard applications for data processing, data transfer and site status monitoring, and by showing how they have been ported for different virtual organisations and technologies.
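
    The portability claim rests on hiding each data source behind a small interface. A hedged sketch of that pattern follows, transposed to Python for brevity (the framework itself is JavaScript); the class names and endpoint are hypothetical:

    # Sketch of the "data source behind a web API" pattern described above.
    from abc import ABC, abstractmethod
    import json
    import urllib.parse
    import urllib.request

    class DataSource(ABC):
        """Anything a dashboard-style view can render."""
        @abstractmethod
        def fetch(self, **query) -> list:
            ...

    class HttpJsonSource(DataSource):
        """Concrete source: a JSON-over-HTTP endpoint."""
        def __init__(self, base_url: str):
            self.base_url = base_url

        def fetch(self, **query) -> list:
            url = f"{self.base_url}?{urllib.parse.urlencode(query)}"
            with urllib.request.urlopen(url) as resp:
                return json.load(resp)

    class StaticSource(DataSource):
        """Test double: lets views be developed without a live backend."""
        def __init__(self, rows: list):
            self.rows = rows

        def fetch(self, **query) -> list:
            return self.rows

    def render_table(source: DataSource, **query) -> None:
        # The view depends only on the DataSource interface, so swapping
        # the backend (new organisation, new technology) never touches it.
        for row in source.fetch(**query):
            print(row)

    render_table(StaticSource([{"site": "CERN", "status": "ok"}]))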

  10. Designing and developing portable large-scale JavaScript web applications within the Experiment Dashboard framework

    International Nuclear Information System (INIS)

    Andreeva, J; Dzhunov, I; Karavakis, E; Kokoszkiewicz, L; Nowotka, M; Saiz, P; Tuckett, D

    2012-01-01

    Improvements in web browser performance and web standards compliance, as well as the availability of comprehensive JavaScript libraries, provide an opportunity to develop functionally rich yet intuitive web applications that allow users to access, render and analyse data in novel ways. However, the development of such large-scale JavaScript web applications presents new challenges, in particular with regard to code sustainability and team-based work. We present an approach that meets the challenges of large-scale JavaScript web application design and development, including client-side model-view-controller architecture, design patterns, and JavaScript libraries. Furthermore, we show how the approach leads naturally to the encapsulation of the data source as a web API, allowing applications to be easily ported to new data sources. The Experiment Dashboard framework is used for the development of applications for monitoring the distributed computing activities of virtual organisations on the Worldwide LHC Computing Grid. We demonstrate the benefits of the approach for large-scale JavaScript web applications in this context by examining the design of several Experiment Dashboard applications for data processing, data transfer and site status monitoring, and by showing how they have been ported for different virtual organisations and technologies.

  11. Development of an Web Service Architecture for Enterprise Application Integration

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Ji-Hyeon; Jung, Jae-Cheon; Chang, Young-Woo; Chang, Hoon-Seon; Kim, Jae-Cheol; Kim, Hang-Bae [Korea Power Engineering Company, Daejeon (Korea, Republic of); Kim, Kyu-Ho; Lee, Dong-Chul [Korea Electric Power Data Network, Daejeon (Korea, Republic of)

    2007-07-01

    The purpose of Enterprise Application Integration (EAI) is to enable interoperability between two or more enterprise software systems. These systems can be, for example, an Enterprise Resource Planning (ERP) system, an Enterprise Asset Management (EAM) system or a Condition Monitoring system. The traditional EAI approach, based on point-to-point connections, is expensive and vendor specific, with limited modules and restricted interoperability with other ERPs and applications. To overcome these drawbacks, Web Service based EAI has emerged. It allows integration without point-to-point linking and at lower cost. Many Web service based EAI approaches are combined with ORACLE, SAP, PeopleSoft, WebSphere, SIEBEL etc. as a system integration platform. The approach still has the restriction that only predefined clients can access the services. This means clients must know exactly the protocol for calling the services, and if they do not have this access information they can never obtain the services. This is because these Web services are based on syntactic service descriptions. In this paper, a semantic-based EAI approach that allows uninformed clients to access the services is introduced. The semantic EAI is designed with Web services that have semantic service descriptions. The Semantic Web Services (SWS) are described in the Web Ontology Language for Services (OWL-S), a semantic service ontology language, and advertised in Universal Description, Discovery and Integration (UDDI). Clients find desired services through the UDDI and obtain services from service providers through the Web Service Description Language (WSDL).

  12. SBMLmod: a Python-based web application and web service for efficient data integration and model simulation.

    Science.gov (United States)

    Schäuble, Sascha; Stavrum, Anne-Kristin; Bockwoldt, Mathias; Puntervoll, Pål; Heiland, Ines

    2017-06-24

    Systems Biology Markup Language (SBML) is the standard model representation and description language in systems biology. Enriching and analysing systems biology models by integrating the multitude of available data increases the predictive power of these models. This may be a daunting task, which commonly requires bioinformatic competence and scripting. We present SBMLmod, a Python-based web application and service that automates the integration of high-throughput data into SBML models. Subsequent steady state analysis is readily accessible via the web service COPASIWS. We illustrate the utility of SBMLmod by integrating gene expression data from different healthy tissues as well as from a cancer dataset into a previously published model of mammalian tryptophan metabolism. SBMLmod is a user-friendly platform for model modification and simulation. The web application is available at http://sbmlmod.uit.no, whereas the WSDL definition file for the web service is accessible via http://sbmlmod.uit.no/SBMLmod.wsdl . Furthermore, the entire package can be downloaded from https://github.com/MolecularBioinformatics/sbml-mod-ws . We envision that SBMLmod will make automated model modification and simulation available to a broader research community.
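
    Since the record publishes the WSDL URL, a client can be generated on the fly with a SOAP library such as zeep; the operation and argument names in the sketch below are assumptions, so the service's real signatures should be read from the WSDL first:

    # Minimal sketch of generating a SOAP client from the published WSDL
    # with the zeep library (pip install zeep). The service's operations
    # can be inspected from the command line with `python -m zeep <wsdl-url>`.
    from zeep import Client

    client = Client("http://sbmlmod.uit.no/SBMLmod.wsdl")

    # Hypothetical invocation: the operation name and its parameters are
    # assumptions for illustration; replace them with an operation listed
    # by the inspection step above.
    # result = client.service.ModifyModel(sbml=model_xml, data=expression_tsv)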

  13. A World Wide Web Human Dimensions Framework and Database for Wildlife and Forest Planning

    Science.gov (United States)

    Michael A. Tarrant; Alan D. Bright; H. Ken Cordell

    1999-01-01

    The paper describes a human dimensions framework (HDF) for application in wildlife and forest planning. The HDF is delivered via the world wide web and retrieves data on-line from the Social, Economic, Environmental, Leisure, and Attitudes (SEELA) database. The proposed HDF is guided by ten fundamental HD principles, and is applied to wildlife and forest planning using...

  14. [Development and evaluation of the medical imaging distribution system with dynamic web application and clustering technology].

    Science.gov (United States)

    Yokohama, Noriya; Tsuchimoto, Tadashi; Oishi, Masamichi; Itou, Katsuya

    2007-01-20

    It has been noted that the downtime of medical informatics systems is often long. Many systems encounter downtimes of hours or even days, which can have a critical effect on daily operations. Such systems remain especially weak in the areas of databases and medical imaging data. The system design comprises three layers: application, database, and storage. The application layer uses the DICOM protocol (Digital Imaging and Communication in Medicine) and HTTP (Hyper Text Transport Protocol) with AJAX (Asynchronous JavaScript+XML). The database is decentralized in parallel using clustering technology; consequently, the database can be restored easily and with improved retrieval speed. The storage layer uses a network RAID (Redundant Array of Independent Disks) system, with which it is possible to construct exabyte-scale parallel file systems that exploit distributed storage. Development and evaluation of the test-bed have been successful for medical information data backup and recovery in a network environment. This paper presents a schematic design of the new medical informatics system, covering recovery and the dynamic Web application for medical imaging distribution using AJAX.

  15. Breaking and Fixing Origin-Based Access Control in Hybrid Web/Mobile Application Frameworks.

    Science.gov (United States)

    Georgiev, Martin; Jana, Suman; Shmatikov, Vitaly

    2014-02-01

    Hybrid mobile applications (apps) combine the features of Web applications and "native" mobile apps. Like Web applications, they are implemented in portable, platform-independent languages such as HTML and JavaScript. Like native apps, they have direct access to local device resources: file system, location, camera, contacts, etc. Hybrid apps are typically developed using hybrid application frameworks such as PhoneGap. The purpose of the framework is twofold. First, it provides an embedded Web browser (for example, WebView on Android) that executes the app's Web code. Second, it supplies "bridges" that allow Web code to escape the browser and access local resources on the device. We analyze the software stack created by hybrid frameworks and demonstrate that it does not properly compose the access-control policies governing Web code and local code, respectively. Web code is governed by the same origin policy, whereas local code is governed by the access-control policy of the operating system (for example, user-granted permissions in Android). The bridges added by the framework to the browser have the same local access rights as the entire application, but are not correctly protected by the same origin policy. This opens the door to fracking attacks, which allow foreign-origin Web content included into a hybrid app (e.g., ads confined in iframes) to drill through the layers and directly access device resources. Fracking vulnerabilities are generic: they affect all hybrid frameworks, all embedded Web browsers, all bridge mechanisms, and all platforms on which these frameworks are deployed. We study the prevalence of fracking vulnerabilities in free Android apps based on the PhoneGap framework. Each vulnerability exposes sensitive local resources (the ability to read and write the contacts list, local files, etc.) to dozens of potentially malicious Web domains. We also analyze the defenses deployed by hybrid frameworks to prevent resource access by foreign-origin Web content

  16. Breaking and Fixing Origin-Based Access Control in Hybrid Web/Mobile Application Frameworks

    Science.gov (United States)

    Georgiev, Martin; Jana, Suman; Shmatikov, Vitaly

    2014-01-01

    Hybrid mobile applications (apps) combine the features of Web applications and “native” mobile apps. Like Web applications, they are implemented in portable, platform-independent languages such as HTML and JavaScript. Like native apps, they have direct access to local device resources—file system, location, camera, contacts, etc. Hybrid apps are typically developed using hybrid application frameworks such as PhoneGap. The purpose of the framework is twofold. First, it provides an embedded Web browser (for example, WebView on Android) that executes the app's Web code. Second, it supplies “bridges” that allow Web code to escape the browser and access local resources on the device. We analyze the software stack created by hybrid frameworks and demonstrate that it does not properly compose the access-control policies governing Web code and local code, respectively. Web code is governed by the same origin policy, whereas local code is governed by the access-control policy of the operating system (for example, user-granted permissions in Android). The bridges added by the framework to the browser have the same local access rights as the entire application, but are not correctly protected by the same origin policy. This opens the door to fracking attacks, which allow foreign-origin Web content included into a hybrid app (e.g., ads confined in iframes) to drill through the layers and directly access device resources. Fracking vulnerabilities are generic: they affect all hybrid frameworks, all embedded Web browsers, all bridge mechanisms, and all platforms on which these frameworks are deployed. We study the prevalence of fracking vulnerabilities in free Android apps based on the PhoneGap framework. Each vulnerability exposes sensitive local resources—the ability to read and write contacts list, local files, etc.—to dozens of potentially malicious Web domains. We also analyze the defenses deployed by hybrid frameworks to prevent resource access by foreign

  17. Designing and developing portable large-scale JavaScript web applications within the Experiment Dashboard framework

    CERN Document Server

    Andreeva, J; Karavakis, E; Kokoszkiewicz, L; Nowotka, M; Saiz, P; Tuckett, D

    2012-01-01

    Improvements in web browser performance and web standards compliance, as well as the availability of comprehensive JavaScript libraries, provide an opportunity to develop functionally rich yet intuitive web applications that allow users to access, render and analyse data in novel ways. However, the development of such large-scale JavaScript web applications presents new challenges, in particular with regard to code sustainability and team-based work. We present an approach that meets the challenges of large-scale JavaScript web application design and development, including client-side model-view-controller architecture, design patterns, and JavaScript libraries. Furthermore, we show how the approach leads naturally to the encapsulation of the data source as a web API, allowing applications to be easily ported to new data sources. The Experiment Dashboard framework is used for the development of applications for monitoring the distributed computing activities of virtual organisations on the Worldwide LHC Comp...

  18. Designing and developing portable large-scale JavaScript web applications within the Experiment Dashboard framework

    CERN Multimedia

    CERN. Geneva

    2012-01-01

    Improvements in web browser performance and web standards compliance, as well as the availability of comprehensive JavaScript libraries, provide an opportunity to develop functionally rich yet intuitive web applications that allow users to access, render and analyse data in novel ways. However, the development of such large-scale JavaScript web applications presents new challenges, in particular with regard to code sustainability and team-based work. We present an approach that meets the challenges of large-scale JavaScript web application design and development, including client-side model-view-controller architecture, design patterns, and JavaScript libraries. Furthermore, we show how the approach leads naturally to the encapsulation of the data source as a web API, allowing applications to be easily ported to new data sources. The Experiment Dashboard framework is used for the development of applications for monitoring the distributed computing activities of virtual organisations on the Worldwide LHC Co...

  19. Robust image obfuscation for privacy protection in Web 2.0 applications

    Science.gov (United States)

    Poller, Andreas; Steinebach, Martin; Liu, Huajian

    2012-03-01

    We present two approaches to robust image obfuscation based on permutation of image regions and channel intensity modulation. The proposed concept of robust image obfuscation is a step towards end-to-end security in Web 2.0 applications. It helps to protect the privacy of users against threats caused by internet bots and web applications that extract biometric and other features from images for data-linkage purposes. The approaches described in this paper take into account that images uploaded to Web 2.0 applications pass through several transformations, such as scaling and JPEG compression, before the receiver downloads them. In contrast to existing approaches, our focus is on usability; therefore the primary goal is not maximum security but an acceptable trade-off between security and resulting image quality.

  20. Efficient Image Blur in Web-Based Applications

    DEFF Research Database (Denmark)

    Kraus, Martin

    2010-01-01

    Scripting languages require the use of high-level library functions to implement efficient image processing; thus, real-time image blur in web-based applications is a challenging task unless specific library functions are available for this purpose. We present a pyramid blur algorithm, which can ...
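
    The record is truncated, but the general pyramid-blur idea it names can be sketched: repeatedly halving an image and scaling it back up approximates an expensive wide-radius blur with a few cheap resampling passes. A hedged Python illustration (not the paper's algorithm):

    # Illustrative pyramid blur: downsample several times, then expand back.
    from PIL import Image  # pip install Pillow

    def pyramid_blur(img: Image.Image, levels: int = 3) -> Image.Image:
        w, h = img.size
        small = img
        for _ in range(levels):                      # build the pyramid
            small = small.resize((max(1, small.width // 2),
                                  max(1, small.height // 2)),
                                 Image.BILINEAR)
        return small.resize((w, h), Image.BILINEAR)  # expand to full size

    blurred = pyramid_blur(Image.open("photo.jpg"), levels=4)
    blurred.save("photo_blurred.jpg")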

  1. Regional Geology Web Map Application Development: Javascript v2.0

    International Nuclear Information System (INIS)

    Russell, Glenn

    2017-01-01

    This is a milestone report for the FY2017 continuation of the Spent Fuel, Storage, and Waste Technology (SFSWT) program (formerly the Used Fuel Disposal (UFD) program) development of the Regional Geology Web Mapping Application by the Idaho National Laboratory Geospatial Science and Engineering group. This application was developed for general public use and is an interactive web-based application built in Javascript to visualize, reference, and analyze pertinent US geological features for the SFSWT program. This tool is a version upgrade from Adobe FLEX technology. It is designed to facilitate informed decision making regarding the geology of the continental US relevant to the SFSWT program.

  2. Regional Geology Web Map Application Development: Javascript v2.0

    Energy Technology Data Exchange (ETDEWEB)

    Russell, Glenn [Idaho National Lab. (INL), Idaho Falls, ID (United States)

    2017-06-19

    This is a milestone report for the FY2017 continuation of the Spent Fuel, Storage, and Waste Technology (SFSWT) program (formerly the Used Fuel Disposal (UFD) program) development of the Regional Geology Web Mapping Application by the Idaho National Laboratory Geospatial Science and Engineering group. This application was developed for general public use and is an interactive web-based application built in Javascript to visualize, reference, and analyze pertinent US geological features for the SFSWT program. This tool is a version upgrade from Adobe FLEX technology. It is designed to facilitate informed decision making regarding the geology of the continental US relevant to the SFSWT program.

  3. A Domain Specific Lexicon Acquisition Tool for Cross-Language Information Retrieval

    NARCIS (Netherlands)

    Hiemstra, Djoerd; de Jong, Franciska M.G.; Kraaij, Wessel

    1997-01-01

    With the recent enormous increase of information dissemination via the web as an incentive, there is growing interest in supporting tools for cross-language retrieval. In this paper we describe a disclosure and retrieval approach that fulfils the needs of both information providers and users by

  4. Modern tools for development of interactive web map applications for visualization spatial data on the internet

    Directory of Open Access Journals (Sweden)

    Horáková Bronislava

    2009-11-01

    Full Text Available In the last few years the development of dynamic web applications, often called Web 2.0, has begun. From this development a technology called Mashups was created. Mashups can easily combine huge amounts of data sources and the functionality of existing as well as future web applications and services. They are therefore used to develop new tools, which offer new possibilities of information usage. This technology provides the possibility of developing basic as well as robust web applications, not only for IT or GIS specialists but also for common users. Software companies have developed web projects for building mashup applications, also called mashup editors.

  5. A Method to Ease the Deployment of Web Applications that Involve Database Systems

    Directory of Open Access Journals (Sweden)

    Antonio Vega Corona

    2012-02-01

    Full Text Available The continuous growth of the Internet has driven people all around the globe to perform transactions on-line, search for information or navigate using a browser. As more people feel comfortable using a Web browser, more software companies are trying to offer Web interfaces as an alternative way to provide access to their applications. The nature of the Web connection and the restrictions imposed by the available bandwidth make the successful integration of Web applications and database systems critical. Because popular database applications provide a user interface to edit and maintain the information in the database, and because each column in a database table maps to a graphic user interface control, the deployment of these applications can be time consuming; appropriate field validation and referential integrity rules must be observed. An object-oriented design is proposed to ease the development of applications that use database systems.

  6. Web application security analysis using the Kali Linux operating system

    OpenAIRE

    BABINCEV IVAN M.; VULETIC DEJAN V.

    2016-01-01

    The Kali Linux operating system is described, as well as its purpose and possibilities. The groups of tools that Kali Linux provides are listed, together with the methods of their functioning, as well as the possibility to install and use tools that are not an integral part of Kali. The final part shows practical testing of web applications using the tools from the Kali Linux operating system. The paper thus shows a part of the possibilities of this operating system in analysing web applications ...

  7. A web-based application for initial screening of living kidney donors: development, implementation and evaluation.

    Science.gov (United States)

    Moore, D R; Feurer, I D; Zavala, E Y; Shaffer, D; Karp, S; Hoy, H; Moore, D E

    2013-02-01

    Most centers utilize phone or written surveys to screen candidates who self-refer to be living kidney donors. To increase efficiency and reduce resource utilization, we developed a web-based application to screen kidney donor candidates. The aim of this study was to evaluate the use of this web-based application. Method and time of referral were tabulated, and descriptive statistics summarized demographic characteristics. Time series analyses evaluated use over time. Between January 1, 2011 and March 31, 2012, 1200 candidates self-referred to be living kidney donors at our center. Eight hundred one candidates (67%) completed the web-based survey and 399 (33%) completed a phone survey. Thirty-nine percent of donors accessed the application on nights and weekends. After implementation of the web-based application, there was a statistically significant increase in the number of candidates who completed the web-based application as opposed to telephone contact. There was also a significant increase (p = 0.025) in the total number of self-referrals post-implementation, from 61 to 116 per month. An interactive web-based application is an effective strategy for the initial screening of donor candidates. The web-based application increased the ability to interface with donors, process them efficiently and ultimately increased donor self-referral at our center. © Copyright 2012 The American Society of Transplantation and the American Society of Transplant Surgeons.

  8. Frame of reference of software architecture for web applications and mobile

    Directory of Open Access Journals (Sweden)

    Carlos Alberto Maliza Martinez

    2016-08-01

    Full Text Available Given the need for a guide to the implementation of informatics applications, and thus to automate tasks and improve user response times, a frame of reference for the software architecture of web and mobile applications was designed using free software and open source technology. The technology used is Object Oriented Programming (OOP) with the JAVA programming language, a client/server architecture and a multitier architectural style, which allow us to create scalable, robust and stable systems, together with the Java Platform Enterprise Edition (JEE), which helps us implement business applications thanks to the JPA and EJB APIs. On the server side, transactions, security, scalability and concurrency are handled by the Wildfly application server. On the client side, graphical interfaces are created using the ExtJS and Sencha Touch frameworks, which are lightweight, high-performance libraries based on HTML5, JavaScript and CSS3. The report generator is JasperReports, because of its ability to deliver rich content to the display and the printer. The database engine is MySQL, because its connectivity, speed, and security make it a very appropriate server for access from the web. Finally, the editor for web and mobile applications is the open source Eclipse IDE platform. In this paper we critically analyse such applications and formulate the Frame of Reference of Software Architecture for the development and implementation of Web and Mobile Applications, which was implemented at the ECU911 Babahoyo and at the Instituto Tecnologico Superior Babahoyo, proving through its application its effectiveness and efficiency in the implementation of integrated systems.

  9. Personalizing Web Search based on User Profile

    OpenAIRE

    Utage, Sharyu; Ahire, Vijaya

    2016-01-01

    Web search engines are the most widely used tools for information retrieval from the World Wide Web. These search engines help users find the most useful information. When different users search for the same information, a search engine provides the same results without understanding who submitted the query. Personalized web search is a search technique for providing more useful results. This paper models the preferences of users as hierarchical user profiles. A framework called UPS is proposed. It generalizes profile and m...

  10. Applications and methods utilizing the Simple Semantic Web Architecture and Protocol (SSWAP) for bioinformatics resource discovery and disparate data and service integration

    Directory of Open Access Journals (Sweden)

    Nelson Rex T

    2010-06-01

    Full Text Available Abstract Background Scientific data integration and computational service discovery are challenges for the bioinformatic community. This process is made more difficult by the separate and independent construction of biological databases, which makes the exchange of data between information resources difficult and labor intensive. A recently described semantic web protocol, the Simple Semantic Web Architecture and Protocol (SSWAP; pronounced "swap"), offers the ability to describe data and services in a semantically meaningful way. We report how three major information resources (Gramene, SoyBase and the Legume Information System [LIS]) used SSWAP to semantically describe selected data and web services. Methods We selected high-priority Quantitative Trait Locus (QTL), genomic mapping, trait, phenotypic, and sequence data and associated services such as BLAST for publication, data retrieval, and service invocation via semantic web services. Data and services were mapped to concepts and categories as implemented in legacy and de novo community ontologies. We used SSWAP to express these offerings in Web Ontology Language (OWL), Resource Description Framework (RDF) and eXtensible Markup Language (XML) documents, which are appropriate for their semantic discovery and retrieval. We implemented SSWAP services to respond to web queries and return data. These services are registered with the SSWAP Discovery Server and are available for semantic discovery at http://sswap.info. Results A total of ten services delivering QTL information from Gramene were created. From SoyBase, we created six services delivering information about soybean QTLs, and seven services delivering genetic locus information. For LIS we constructed three services, two of which allow the retrieval of DNA and RNA FASTA sequences, with the third service providing nucleic acid sequence comparison capability (BLAST). Conclusions The need for semantic integration technologies has preceded

  11. Specification and Verification of Web Applications in Rewriting Logic

    Science.gov (United States)

    Alpuente, María; Ballis, Demis; Romero, Daniel

    This paper presents a Rewriting Logic framework that formalizes the interactions between Web servers and Web browsers through a communicating protocol abstracting HTTP. The proposed framework includes a scripting language that is powerful enough to model the dynamics of complex Web applications by encompassing the main features of the most popular Web scripting languages (e.g. PHP, ASP, Java Servlets). We also provide a detailed characterization of browser actions (e.g. forward/backward navigation, page refresh, and new window/tab openings) via rewrite rules, and show how our models can be naturally model-checked by using the Linear Temporal Logic of Rewriting (LTLR), which is a Linear Temporal Logic specifically designed for model-checking rewrite theories. Our formalization is particularly suitable for verification purposes, since it allows one to perform in-depth analyses of many subtle aspects related to Web interaction. Finally, the framework has been completely implemented in Maude, and we report on some successful experiments that we conducted by using the Maude LTLR model-checker.

  12. Design, Development and Testing of Web Services for Multi-Sensor Snow Cover Mapping

    Science.gov (United States)

    Kadlec, Jiri

    This dissertation presents the design, development and validation of new data integration methods for mapping the extent of snow cover based on open access ground station measurements, remote sensing images, volunteer observer snow reports, and cross country ski track recordings from location-enabled mobile devices. The first step of the data integration procedure includes data discovery, data retrieval, and data quality control of snow observations at ground stations. The WaterML R package developed in this work enables hydrologists to retrieve and analyze data from multiple organizations that are listed in the Consortium of Universities for the Advancement of Hydrologic Sciences Inc (CUAHSI) Water Data Center catalog directly within the R statistical software environment. Use of the WaterML R package is demonstrated by running an energy balance snowpack model in R with data inputs from CUAHSI, and by automating uploads of real time sensor observations to CUAHSI HydroServer. The second step of the procedure requires efficient access to multi-temporal remote sensing snow images. The Snow Inspector web application developed in this research enables users to retrieve a time series of fractional snow cover from the Moderate Resolution Imaging Spectroradiometer (MODIS) for any point on Earth. The time series retrieval method is based on automated data extraction from tile images provided by a Web Map Tile Service (WMTS). The average required time for retrieving 100 days of data using this technique is 5.4 seconds, which is significantly faster than other methods that require the download of large satellite image files. The presented data extraction technique and space-time visualization user interface can be used as a model for working with other multi-temporal hydrologic or climate data WMTS services. The third and final step of the data integration procedure is generating continuous daily snow cover maps. A custom inverse distance weighting method has been developed
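
    A hedged sketch of the tile-based time-series idea: fetch one WMTS tile image per day and read the pixel under the point of interest. The URL template, layer name, and tile indices below are placeholders, not the Snow Inspector's actual endpoint:

    # Fetch one tile per day and sample a pixel to build a time series.
    import io
    import urllib.request
    from PIL import Image  # pip install Pillow

    TILE_URL = ("https://example.org/wmts/snow_cover/default/{date}/"
                "GoogleMapsCompatible/{z}/{y}/{x}.png")  # hypothetical layout

    def pixel_value(date: str, z: int, x: int, y: int, px: int, py: int):
        """Download one tile and return the value at pixel (px, py):
        an int for a paletted PNG, an RGB(A) tuple otherwise."""
        url = TILE_URL.format(date=date, z=z, x=x, y=y)
        with urllib.request.urlopen(url) as resp:
            tile = Image.open(io.BytesIO(resp.read()))
        return tile.getpixel((px, py))

    # One value per day yields the fractional-snow-cover time series:
    series = {d: pixel_value(d, 6, 12, 23, 128, 64)
              for d in ("2004-09-01", "2004-09-02")}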

  13. Towards New Web Application Development Practices

    Directory of Open Access Journals (Sweden)

    Angeliki Poulymenakou

    1998-11-01

    Full Text Available Electronic Commerce over the Internet aims to become a global conveyor belt of business transactions. Web applications of increasing sophistication emerge in almost every business sector, reflecting a variety of technical and technological approaches. In this paper we argue that system developers need to reconsider their professional practices in the context of these new technologies, taking advantage of opportunities like short response cycles and easy diffusion of system results while recognising the limitations of traditional practice. We discuss a framework of IS development issues for Internet-based applications and propose guidelines towards new development practices.

  14. Discovering Land Cover Web Map Services from the Deep Web with JavaScript Invocation Rules

    Directory of Open Access Journals (Sweden)

    Dongyang Hou

    2016-06-01

    Full Text Available Automatic discovery of isolated land cover web map services (LCWMSs) can potentially help in sharing land cover data. Currently, various search engine-based and crawler-based approaches have been developed for finding services dispersed throughout the surface web. In fact, with the prevalence of geospatial web applications, a considerable number of LCWMSs are hidden in JavaScript code, which belongs to the deep web. However, discovering LCWMSs from JavaScript code remains an open challenge. This paper aims to solve this challenge by proposing a focused deep web crawler for finding more LCWMSs from deep web JavaScript code and the surface web. First, the names of a group of JavaScript links are abstracted as initial judgements. Through name matching, these judgements are utilized to judge whether or not the fetched webpages contain predefined JavaScript links that may prompt JavaScript code to invoke WMSs. Secondly, some JavaScript invocation functions and URL formats for WMS are summarized as JavaScript invocation rules from prior knowledge of how WMSs are employed and coded in JavaScript. These invocation rules are used to identify the JavaScript code for extracting candidate WMSs through rule matching. The above two operations are incorporated into a traditional focused crawling strategy situated between the tasks of fetching webpages and parsing webpages. Thirdly, LCWMSs are selected by matching services with a set of land cover keywords. Moreover, a search engine for LCWMSs is implemented that uses the focused deep web crawler to retrieve and integrate the LCWMSs it discovers. In the first experiment, eight online geospatial web applications serve as seed URLs (Uniform Resource Locators) and crawling scopes; the proposed crawler addresses only the JavaScript code in these eight applications. All 32 available WMSs hidden in JavaScript code were found using the proposed crawler, while not one WMS was discovered through the focused crawler
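
    A minimal illustration of what such JavaScript invocation rules can look like, assuming generic regular expressions for WMS request strings and mapping-library constructor calls (the paper's actual rule set is not given in the record):

    # Scan fetched JavaScript code for URL patterns that look like WMS
    # endpoints. These two rules are generic illustrations only.
    import re

    # Rule 1: explicit WMS requests assembled in string literals.
    WMS_REQUEST = re.compile(
        r"""["'](https?://[^"']+?service=wms[^"']*)["']""", re.IGNORECASE)
    # Rule 2: endpoints passed to common mapping-library constructors.
    LIB_CALL = re.compile(
        r"""(?:TileWMS|WMSTileLayer|addWMS\w*)\s*\(\s*["']([^"']+)["']""",
        re.IGNORECASE)

    def extract_wms_candidates(js_code: str) -> set:
        urls = set(WMS_REQUEST.findall(js_code))
        urls |= set(LIB_CALL.findall(js_code))
        return urls

    js = 'var layer = new TileWMS("https://maps.example.org/wms", {});'
    print(extract_wms_candidates(js))  # {'https://maps.example.org/wms'}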

  15. A sea surface reflectance model for (A)ATSR, and application to aerosol retrievals

    Directory of Open Access Journals (Sweden)

    A. M. Sayer

    2010-07-01

    Full Text Available A model of the sea surface bidirectional reflectance distribution function (BRDF) is presented for the visible and near-IR channels (over the spectral range 550 nm to 1.6 μm) of the dual-viewing Along-Track Scanning Radiometers (ATSRs). The intended application is as part of the Oxford-RAL Aerosols and Clouds (ORAC) retrieval scheme. The model accounts for contributions to the observed reflectance from whitecaps, sun-glint and underlight. Uncertainties in the parametrisations used in the BRDF model are propagated through into the forward model and retrieved state. The new BRDF model offers improved coverage over previous methods, as retrievals are possible into the sun-glint region, through the ATSR dual-viewing system. The new model has been applied in the ORAC aerosol retrieval algorithm to process Advanced ATSR (AATSR) data from September 2004 over the south-eastern Pacific. The assumed error budget is shown to be generally appropriate, meaning the retrieved states are consistent with the measurements and a priori assumptions. The resulting field of aerosol optical depth (AOD) is compared with colocated MODIS-Terra observations, AERONET observations at Tahiti, and cruises over the oceanic region. MODIS and AATSR show similar spatial distributions of AOD, although MODIS reports values which are larger and more variable. It is suggested that assumptions in the MODIS aerosol retrieval algorithm may lead to a positive bias in MODIS AOD of order 0.01 at 550 nm over ocean regions where the wind speed is high.

  16. Implementation of a scalable, web-based, automated clinical decision support risk-prediction tool for chronic kidney disease using C-CDA and application programming interfaces.

    Science.gov (United States)

    Samal, Lipika; D'Amore, John D; Bates, David W; Wright, Adam

    2017-11-01

    Clinical decision support tools for risk prediction are readily available, but typically require workflow interruptions and manual data entry, so they are rarely used. Due to new data interoperability standards for electronic health records (EHRs), other options are available. As a clinical case study, we sought to build a scalable, web-based system that would automate calculation of kidney failure risk and display clinical decision support to users in primary care practices. We developed a single-page application, web server, database, and application programming interface to calculate and display kidney failure risk. Data were extracted from the EHR using the Consolidated Clinical Document Architecture interoperability standard for Continuity of Care Documents (CCDs). EHR users were presented with a noninterruptive alert on the patient's summary screen and a hyperlink to details and recommendations provided through a web application. Clinic schedules and CCDs were retrieved using existing application programming interfaces to the EHR, and we provided a clinical decision support hyperlink to the EHR as a service. We debugged a series of terminology and technical issues. The application was validated with data from 255 patients and subsequently deployed to 10 primary care clinics where, over the course of 1 year, 569,533 CCD documents were processed. We validated the use of interoperable documents and open-source components to develop a low-cost tool for automated clinical decision support. Since Consolidated Clinical Document Architecture-based data extraction extends to any certified EHR, this demonstrates a successful modular approach to clinical decision support. © The Author 2017. Published by Oxford University Press on behalf of the American Medical Informatics Association.
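
    A hedged sketch of the extraction step: pulling one lab value out of a CCD with XPath. The HL7 v3 namespace is standard; the LOINC code shown (2160-0, serum creatinine) and the document structure are illustrative assumptions, since real CCDs vary by EHR vendor:

    # Read a coded observation value from a C-CDA/CCD document.
    from lxml import etree  # pip install lxml

    NS = {"hl7": "urn:hl7-org:v3"}

    def lab_value(ccd_path: str, loinc_code: str):
        tree = etree.parse(ccd_path)
        # Find observations carrying the requested LOINC code and read the
        # magnitude and unit from the accompanying <value> element.
        for obs in tree.xpath(
                f'//hl7:observation[hl7:code/@code="{loinc_code}"]',
                namespaces=NS):
            value = obs.find("hl7:value", namespaces=NS)
            if value is not None:
                return value.get("value"), value.get("unit")
        return None

    print(lab_value("patient_ccd.xml", "2160-0"))  # e.g. ('1.4', 'mg/dL')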

  17. Recent advancements on the development of web-based applications for the implementation of seismic analysis and surveillance systems

    Science.gov (United States)

    Friberg, P. A.; Luis, R. S.; Quintiliani, M.; Lisowski, S.; Hunter, S.

    2014-12-01

    Recently, a novel set of modules has been included in the open source Earthworm seismic data processing system, supporting the use of web applications. These include the Mole sub-system, for storing relevant event data in a MySQL database (see M. Quintiliani and S. Pintore, SRL, 2013), and an embedded web server, Moleserv, for serving such data to web clients in QuakeML format. These modules have enabled, for the first time using Earthworm, the use of web applications for seismic data processing. These can greatly simplify the operation and maintenance of seismic data processing centers by having one or more servers provide the relevant data, as well as the data processing applications themselves, to client machines running arbitrary operating systems. Web applications with secure online web access allow operators to work anywhere, without the often cumbersome and bandwidth-hungry use of secure shell or virtual private networks. Furthermore, web applications can seamlessly access third party data repositories to acquire additional information, such as maps. Finally, the use of HTML email brought the possibility of specialized web applications to be used in email clients. This is the case of EWHTMLEmail, which produces event notification emails that are in fact simple web applications for plotting relevant seismic data. Providing web services as part of Earthworm has enabled a number of other tools as well. One is ISTI's EZ Earthworm, a web-based command and control system for an otherwise command-line driven system; another is a waveform web service. The waveform web service serves Earthworm data to additional web clients for plotting, picking, and other web-based processing tools. The current Earthworm waveform web service hosts an advanced plotting capability for providing views of event-based waveforms from a Mole database served by Moleserv. The current trend towards the usage of cloud services supported by web applications is driving improvements in Java

  18. Database retrieval systems for nuclear and astronomical data

    International Nuclear Information System (INIS)

    Suda, Takuma; Korennov, Sergei; Otuka, Naohiko; Yamada, Shimako; Katsuta, Yutaka; Ohnishi, Akira; Kato, Kiyoshi; Fujimoto, Masayuki Y.

    2006-01-01

    Data retrieval and plot systems of nuclear and astronomical data are constructed on a common platform. Web-based systems will soon be opened to the users of both fields of nuclear physics and astronomy. (author)

  19. Analysis and Design of Web-Based Database Application for Culinary Community

    OpenAIRE

    Huda, Choirul; Awang, Osel Dharmawan; Raymond, Raymond; Raynaldi, Raynaldi

    2017-01-01

    This research is motivated by the rapid development of the culinary field and of information technology. Difficulties in communicating with culinary experts and in documenting recipes make proper media support very important. Therefore, a web-based database application for the public is important to help the culinary community in communication, searching and recipe management. The aim of the research was to design a web-based database application that could be used as social media for the cu...

  20. A reverse engineering approach for automatic annotation of Web pages

    NARCIS (Netherlands)

    R. de Virgilio (Roberto); F. Frasincar (Flavius); W. Hop (Walter); S. Lachner (Stephan)

    2013-01-01

    The Semantic Web is gaining increasing interest to fulfill the need of sharing, retrieving, and reusing information. Since Web pages are designed to be read by people, not machines, searching and reusing information on the Web is a difficult task without human participation. To this aim

  1. Using the fuzzy modeling for the retrieval algorithms

    International Nuclear Information System (INIS)

    Mohamed, A.H

    2010-01-01

    A rapid growth in the number and size of images in databases and on the World Wide Web (WWW) has created a strong need for more efficient search and retrieval systems to exploit the benefits of this large amount of information. However, current image analysis techniques are limited, so most image retrieval systems must use some form of text description provided by users as the basis for indexing and retrieving images. To overcome this problem, the proposed system introduces fuzzy modeling to describe images using linguistic ambiguities. The proposed system can also include vague or fuzzy terms when modeling queries to match the image descriptions in the retrieval process. This facilitates the indexing and retrieval process, increases performance and decreases computational time. Therefore, the proposed system can improve the performance of traditional image retrieval algorithms.
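
    A minimal sketch of the fuzzy-matching idea, with invented linguistic terms and breakpoints: each term becomes a membership function over an image feature, and a query term scores images by degree of membership rather than exact match:

    # Fuzzy query matching for text-described images (illustrative only).

    def triangular(x: float, a: float, b: float, c: float) -> float:
        """Classic triangular membership function peaking at b."""
        if x <= a or x >= c:
            return 0.0
        return (x - a) / (b - a) if x < b else (c - x) / (c - b)

    # Fuzzy vocabulary over a normalized "object size" feature in [0, 1].
    TERMS = {
        "small":  lambda x: triangular(x, -0.01, 0.0, 0.5),
        "medium": lambda x: triangular(x, 0.2, 0.5, 0.8),
        "large":  lambda x: triangular(x, 0.5, 1.0, 1.01),
    }

    images = {"img1": 0.15, "img2": 0.45, "img3": 0.9}  # feature per image

    def retrieve(term: str, threshold: float = 0.3):
        mu = TERMS[term]
        return sorted(((mu(x), name) for name, x in images.items()
                       if mu(x) >= threshold), reverse=True)

    print(retrieve("medium"))  # only img2 clears the membership threshold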

  2. Developing Dynamic Single Page Web Applications Using Meteor : Comparing JavaScript Frameworks: Blaze and React

    OpenAIRE

    Yetayeh, Asabeneh

    2017-01-01

    This paper studies Meteor, a JavaScript full-stack framework for developing interactive single page web applications. Meteor allows building web applications entirely in JavaScript. Meteor uses Blaze, React or AngularJS as a view layer, and Node.js and MongoDB as a back-end. The main purpose of this study is to compare the performance of Blaze and React. Multi-user Blaze and React web applications with similar HTML and CSS were developed. Both applications were deployed on Heroku’s w...

  3. Great Basin land managers provide detailed feedback about usefulness of two climate information web applications

    Directory of Open Access Journals (Sweden)

    Chad Zanocco

    Full Text Available Land managers in the Great Basin are working to maintain or restore sagebrush ecosystems as climate change exacerbates existing threats. Web applications delivering climate change and climate impacts information have the potential to assist their efforts. Although many web applications containing climate information currently exist, few have been co-produced with land managers or have incorporated information specifically focused on land managers’ needs. Through surveys and interviews, we gathered detailed feedback from federal, state, and tribal sagebrush land managers in the Great Basin on climate information web applications targeting land management. We found that (a) managers are searching for weather and climate information they can incorporate into their current management strategies and plans; (b) they are willing to be educated on how to find and understand climate related web applications; (c) both field and administrative-type managers want data for timescales ranging from seasonal to decadal; (d) managers want multiple levels of climate information, from simple summaries to detailed descriptions accessible through the application; and (e) managers are interested in applications that evaluate uncertainty and provide projected climate impacts. Keywords: Great Basin, Sagebrush, Land management, Climate change, Web application, Co-production

  4. Web API Fragility : How Robust is Your Web API Client

    NARCIS (Netherlands)

    Espinha, T.; Zaidman, A.; Gross, H.G.

    2014-01-01

    Web APIs provide a systematic and extensible approach for application-to-application interaction. A large number of mobile applications make use of web APIs to integrate services into apps. Each web API’s evolution pace is determined by its respective developer, and mobile application developers

  5. Delve: A Data Set Retrieval and Document Analysis System

    KAUST Repository

    Akujuobi, Uchenna Thankgod

    2017-12-29

    Academic search engines (e.g., Google Scholar or Microsoft Academic) provide a medium for retrieving various information on scholarly documents. However, most of these popular scholarly search engines overlook the area of data set retrieval, which should provide information on relevant data sets used for academic research. Due to the increasing volume of publications, it has become a challenging task to locate suitable data sets on a particular research area for benchmarking or evaluation. We propose Delve, a web-based system for data set retrieval and document analysis. This system differs from other scholarly search engines in that it provides a medium for both data set retrieval and real-time visual exploration and analysis of data sets and documents.

  6. Web Page Recommendation Using Web Mining

    OpenAIRE

    Modraj Bhavsar; Mrs. P. M. Chavan

    2014-01-01

    On the World Wide Web, various kinds of content are generated in huge amounts, so web recommendation has become an important part of web applications for giving relevant results to users. On the web, different kinds of recommendations are made available to users every day, including images, video, audio, query suggestions and web pages. In this paper we aim at providing a framework for web page recommendation. 1) First we describe the basics of web mining and the types of web mining. 2) Details of each...

  7. Description and testing of the Geo Data Portal: Data integration framework and Web processing services for environmental science collaboration

    Science.gov (United States)

    Blodgett, David L.; Booth, Nathaniel L.; Kunicki, Thomas C.; Walker, Jordan I.; Viger, Roland J.

    2011-01-01

    Interest in sharing interdisciplinary environmental modeling results and related data is increasing among scientists. The U.S. Geological Survey Geo Data Portal project enables data sharing by assembling open-standard Web services into an integrated data retrieval and analysis Web application design methodology that streamlines time-consuming and resource-intensive data management tasks. Data-serving Web services allow Web-based processing services to access Internet-available data sources. The Web processing services developed for the project create commonly needed derivatives of data in numerous formats. Coordinate reference system manipulation and spatial statistics calculation components implemented for the Web processing services were confirmed using ArcGIS 9.3.1, a geographic information science software package. Outcomes of the Geo Data Portal project support the rapid development of user interfaces for accessing and manipulating environmental data.
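
    A hedged sketch of talking to a Web Processing Service of the kind assembled here, using the OGC WPS 1.0.0 key-value-pair convention; the endpoint URL and process identifier are placeholders, not the Geo Data Portal's real ones:

    # Discover and describe processes on a WPS endpoint via HTTP GET.
    import urllib.parse
    import urllib.request

    ENDPOINT = "https://example.org/gdp/process"  # hypothetical WPS endpoint

    def wps_get(request: str, **extra) -> str:
        params = {"service": "WPS", "version": "1.0.0",
                  "request": request, **extra}
        url = f"{ENDPOINT}?{urllib.parse.urlencode(params)}"
        with urllib.request.urlopen(url) as resp:
            return resp.read().decode("utf-8")

    # List which processes (e.g. area-weighted statistics) are offered...
    capabilities_xml = wps_get("GetCapabilities")
    # ...then ask for the signature of one of them (identifier is a
    # placeholder; take real ones from the capabilities document).
    description_xml = wps_get("DescribeProcess",
                              identifier="SomeAreaStatisticsProcess")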

  8. Usage of Web Service in Mobile Application for Parents and Students in Binus School Serpong

    OpenAIRE

    Karto Iskandar; Andrew Thejo Putrantob

    2016-01-01

    A web service is a service offered electronically by one device to communicate with other electronic devices over the World Wide Web. A smartphone is an electronic device that almost everyone has, and students and parents in particular use it to get information about the school. In the BINUS School Serpong mobile application, web services are used for getting data, such as student and menu data, from the web server. The problem faced by BINUS School Serpong today is the time-consuming application update when using the native ap...

  9. Towards Second and Third Generation Web-Based Multimedia

    OpenAIRE

    Ossenbruggen, Jacco; Geurts, Joost; Cornelissen, F.J.; Rutledge, Lloyd; Hardman, Lynda

    2001-01-01

    First generation Web-content encodes information in handwritten (HTML) Web pages. Second generation Web content generates HTML pages on demand, e.g. by filling in templates with content retrieved dynamically from a database or transformation of structured documents using style sheets (e.g. XSLT). Third generation Web pages will make use of rich markup (e.g. XML) along with metadata (e.g. RDF) schemes to make the content not only machine readable but also machine processable - a ne...

  10. SWHi system description : A case study in information retrieval, inference, and visualization in the Semantic Web

    NARCIS (Netherlands)

    Fahmi, Ismail; Zhang, Junte; Ellermann, Henk; Bouma, Gosse; Franconi, E; Kifer, M; May, W

    2007-01-01

    Search engines have become the most popular tools for finding information on the Internet. A real-world Semantic Web application can benefit from this by combining its features with some features from search engines. In this paper, we describe methods for indexing and searching a populated ontology

  11. Using declarative workflow languages to develop process-centric web applications

    NARCIS (Netherlands)

    Bernardi, M.L.; Cimitile, M.; Di Lucca, G.A.; Maggi, F.M.

    2012-01-01

    Nowadays, process-centric Web Applications (WAs) are extensively used in contexts where multi-user, coordinated work is required. Recently, Model Driven Engineering (MDE) techniques have been investigated for the development of this kind of application. However, there are still some open issues.

  12. Spatiotemporal Land Use Change Analysis Using Open-source GIS and Web Based Application

    Directory of Open Access Journals (Sweden)

    Wan Yusryzal Wan Ibrahim

    2015-05-01

    Full Text Available Spatiotemporal changes are very important information for revealing the characteristics of the urbanization process. Sharing this information is beneficial for public awareness, which in turn improves public participation in adaptive management for the spatial planning process. Open-source software and web applications are freely available tools that can be the best medium used by any individual or agency to share this important information. The objective of this paper is to discuss spatiotemporal land use change in Iskandar Malaysia using open-source GIS (Quantum GIS) and to publish the findings through a web application (mash-up). Land use maps from 1994 to 2011 were developed and analyzed to show the landscape change of the region. Subsequently, a web application was set up to distribute the findings of the study. The results show significant land use changes in the study area, especially the decline of agricultural and natural land, which was converted to urban land uses. Residential and industrial areas largely replaced agricultural and natural areas, particularly along the coastal zone of the region. This information is published through an interactive GIS web application in order to share it with the public and stakeholders. Web applications have some limitations, but these do not diminish the advantages of using them. The integration of open-source GIS and web applications is very helpful in sharing planning information, particularly in a study area that experiences rapid land use and land cover change. Basic information from this study is vital for further work, such as projecting future land use change and other related studies in the area.

  13. Innovative grout/retrieval demonstration final report

    International Nuclear Information System (INIS)

    Loomis, G.G.; Thompson, D.N.

    1995-01-01

    This report presents the results of an evaluation of an innovative retrieval technique for buried transuranic waste. This retrieval technique was originally designed for full pit retrieval; however, it applies equally to hot spot retrieval. The technique involves grouting the buried soil waste matrix with a jet grouting procedure, applying an expansive demolition grout to the matrix, and retrieving the debris. The grouted matrix agglomerates fine soil particles and contaminants, providing inherent contamination control during the dusty retrieval process. A full-scale field demonstration of this retrieval technique was performed on a simulated waste pit at the Idaho National Engineering Laboratory. Details are reported on all phases of this proof-of-concept demonstration, including pit construction, jet grouting activities, application of the demolition grout, and actual retrieval of the grouted pit. A quantitative evaluation of aerosolized soils and rare earth tracer spread is given for all phases of the demonstration, and these results are compared to a baseline retrieval activity using conventional retrieval means. 8 refs., 47 figs., 10 tabs

  14. 77 FR 74278 - Proposed Information Collection (Internet Student CPR Web Registration Application); Comment Request

    Science.gov (United States)

    2012-12-13

    ... (Internet Student CPR Web Registration Application); Comment Request AGENCY: Veterans Health Administration.... Title: Internet Student CPR Web Registration Application, VA Form 10-0468. OMB Control Number: 2900-0746... Minneapolis VA Medical Center Education Service. Students will be able to identify and register for a training...

  15. PLANNING APPLICATION OF WEB 2.0 FOR ORGANIZATIONAL LEARNING IN UNIVERSITAS PENDIDIKAN INDONESIA LIBRARY

    Directory of Open Access Journals (Sweden)

    Santi Santika

    2017-07-01

    Full Text Available The Library of Universitas Pendidikan Indonesia (UPI) has a quality policy commitment to continuous improvement in every area and process, which can be achieved by continuously optimizing organizational learning. Web 2.0 is a media application that can support the organizational learning process because it has read-and-write characteristics and flexibility of time use, but its application must accord with the culture and character of the organization. Therefore, this study aimed to find out which Web 2.0 applications can be applied to organizational learning in the Library of UPI. The method used is a mixed qualitative and quantitative approach, with research stages following the planning and support phases of the Web 2.0 Tools Implementation Model. The results showed that Web 2.0 can be applied to organizational learning in the UPI Library: the organizational culture of the library tends to be good, and staff attitudes toward the use of the Internet and computers are very good. The Web 2.0 applications that the UPI Library can use are blogs, online forums, and wikis as primary tools, with Facebook, YouTube, chat applications, Twitter and Instagram as supporting tools.

  16. Personal health records: retrieving contextual information with Google Custom Search.

    Science.gov (United States)

    Ahsan, Mahmud; Seldon, H Lee; Sayeed, Shohel

    2012-01-01

    Ubiquitous personal health records, which can accompany a person everywhere, are a necessary requirement for ubiquitous healthcare. Contextual information related to health events is important for the diagnosis and treatment of disease and for the maintenance of good health, yet it is seldom recorded in a health record. We describe a dual cellphone-and-Web-based personal health record system which can include 'external' contextual information. Much contextual information is available on the Internet, and ontologies can help identify relevant sites and information. Retrieving that information, however, requires a search engine, and developing a customized search engine is beyond our scope, so we use the Google Custom Search API Web service to get contextual data. In this paper we describe a framework which combines a health-and-environment 'knowledge base', or ontology, with the Google Custom Search API to retrieve relevant contextual information related to entries in a ubiquitous personal health record.
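
    As an illustration, a minimal sketch of querying the Google Custom Search JSON API from Python is shown below; the API key, search-engine ID and the ontology-derived query string are placeholders, and the use of the requests package is our assumption rather than part of the described system.

        import requests

        def retrieve_context(query, api_key, cx):
            """Query the Google Custom Search JSON API; return (title, link) pairs."""
            resp = requests.get(
                "https://www.googleapis.com/customsearch/v1",
                params={"key": api_key, "cx": cx, "q": query},
                timeout=10,
            )
            resp.raise_for_status()
            return [(item["title"], item["link"])
                    for item in resp.json().get("items", [])]

        # A query could be assembled from a health-record entry plus ontology
        # terms, e.g.: retrieve_context("asthma air quality", API_KEY, CX)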

  17. Development of Web-Based Learning Application for Generation Z

    Science.gov (United States)

    Hariadi, Bambang; Dewiyani Sunarto, M. J.; Sudarmaningtyas, Pantjawati

    2016-01-01

    This study aimed to develop a web-based learning application as a form of learning revolution. The form of learning revolution includes the provision of unlimited teaching materials, real time class organization, and is not limited by time or place. The implementation of this application is in the form of hybrid learning by using Google Apps for…

  18. A Web of applicant attraction: person-organization fit in the context of Web-based recruitment.

    Science.gov (United States)

    Dineen, Brian R; Ash, Steven R; Noe, Raymond A

    2002-08-01

    Applicant attraction was examined in the context of Web-based recruitment. A person-organization (P-O) fit framework was adopted to examine how the provision of feedback to individuals regarding their potential P-O fit with an organization related to attraction. Objective and subjective P-O fit, agreement with fit feedback, and self-esteem also were examined in relation to attraction. Results of an experiment that manipulated fit feedback level after a self-assessment provided by a fictitious company Web site found that both feedback level and objective P-O fit were positively related to attraction. These relationships were fully mediated by subjective P-O fit. In addition, attraction was related to the interaction of objective fit, feedback, and agreement and objective fit, feedback, and self-esteem. Implications and future Web-based recruitment research directions are discussed.

  19. Semantic Indexing and Retrieval based on Formal Concept Analysis

    OpenAIRE

    Codocedo , Victor; Lykourentzou , Ioanna; Napoli , Amedeo

    2012-01-01

    Semantic indexing and retrieval has become an important research area, as the amount of information available on the Web keeps growing. In this paper, we introduce an original approach to semantic indexing and retrieval based on Formal Concept Analysis. The concept lattice is used as a semantic index, and we propose an algorithm for traversing the lattice and answering user queries. This framework has been used and evaluated on song datasets.
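
    To make the Formal Concept Analysis machinery concrete, here is a minimal, self-contained sketch of the two derivation operators on a toy song-descriptor context; it only illustrates how a query's attribute set closes into a formal concept, and is not the paper's lattice-traversal algorithm.

        # Toy formal context: songs (objects) x descriptors (attributes).
        context = {
            "song1": {"rock", "guitar"},
            "song2": {"rock", "piano"},
            "song3": {"jazz", "piano"},
        }

        def extent(attrs):
            """Objects possessing all given attributes (attribute-set derivation)."""
            return {o for o, a in context.items() if attrs <= a}

        def intent(objs):
            """Attributes shared by all given objects (object-set derivation)."""
            sets = [context[o] for o in objs]
            return set.intersection(*sets) if sets else set()

        objs = extent({"piano"})         # {'song2', 'song3'}
        print((objs, intent(objs)))      # the formal concept answering the query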

  20. A NEW APPROACH FOR IMPROVING QUALITY OF WEB APPLICATIONS USING DESIGN PATTERNS

    OpenAIRE

    J. Srikanth; R. Savithri

    2012-01-01

    Design patterns are descriptions of communicating objects and classes that are customized to solve a general design problem in a particular context; they describe the problem and its corresponding solution. Professional software engineers routinely use design patterns to introduce abstractions in software and, in this way, build complex web applications. The right adoption of design patterns while designing web applications can promote factors like reusability and consistency of th...

  1. CloudTPS: Scalable Transactions for Web Applications in the Cloud

    NARCIS (Netherlands)

    Zhou, W.; Pierre, G.E.O.; Chi, C.-H.

    2010-01-01

    NoSQL Cloud data services provide scalability and high availability properties for web applications but at the same time they sacrifice data consistency. However, many applications cannot afford any data inconsistency. CloudTPS is a scalable transaction manager to allow cloud database services to

  2. Dancing with the Web: Students Bring Meaning to the Semantic Web

    Science.gov (United States)

    Brooks, Pauline

    2012-01-01

    This article will discuss the issues concerning the storage, retrieval and use of multimedia technology in dance, and how semantic web technologies can support those requirements. It will identify the key aims and outcomes of four international telematic dance projects, and review the use of reflective practice to engage students in their learning…

  3. Search-based Tier Assignment for Optimising Offline Availability in Multi-tier Web Applications

    OpenAIRE

    Philips, Laure; De Koster, Joeri; De Meuter, Wolfgang; De Roover, Coen

    2017-01-01

    Web programmers are often faced with several challenges in the development process of modern, rich internet applications. Technologies for the different tiers of the application have to be selected: a server-side language, a combination of JavaScript, HTML and CSS for the client, and a database technology. Meeting the expectations of contemporary web applications requires even more effort from the developer: many state of the art libraries must be mastered and glued together. This leads to an...

  4. Distributed Web-Scale Infrastructure For Crawling, Indexing And Search With Semantic Support

    Directory of Open Access Journals (Sweden)

    Stefan Dlugolinsky

    2012-01-01

    Full Text Available In this paper, we describe our work in progress in the scope of web-scale information extraction and information retrieval utilizing distributed computing. We present a distributed architecture built on top of the MapReduce paradigm for information retrieval, information processing and intelligent search supported by spatial capabilities. The proposed architecture is focused on crawling documents in several different formats, information extraction, lightweight semantic annotation of the extracted information, indexing of extracted information and finally on indexing of documents based on the geo-spatial information found in a document. We demonstrate the architecture on two use cases, where the first is search in job offers retrieved from the LinkedIn portal and the second is search in BBC news feeds, and we discuss several problems we had to face during the implementation. We also discuss spatial search applications for both cases, because both LinkedIn job offer pages and BBC news feeds contain a lot of spatial information to extract and process.
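
    As a minimal illustration of the MapReduce style of index construction mentioned above (a toy single-process sketch, not the authors' distributed pipeline):

        from collections import defaultdict

        def map_phase(doc_id, text):
            """Map step: emit (term, doc_id) postings."""
            for term in set(text.lower().split()):
                yield term, doc_id

        def reduce_phase(pairs):
            """Reduce step: group postings into an inverted index."""
            index = defaultdict(set)
            for term, doc_id in pairs:
                index[term].add(doc_id)
            return index

        docs = {"d1": "software engineer in London",
                "d2": "data engineer in Bratislava"}
        pairs = (p for d, t in docs.items() for p in map_phase(d, t))
        print(reduce_phase(pairs)["engineer"])   # {'d1', 'd2'}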

  5. A framework for efficient spatial web object retrieval

    DEFF Research Database (Denmark)

    Wu, Dinging; Cong, Gao; Jensen, Christian S.

    2012-01-01

    The conventional Internet is acquiring a geospatial dimension. Web documents are being geo-tagged and geo-referenced objects such as points of interest are being associated with descriptive text documents. The resulting fusion of geo-location and documents enables new kinds of queries that take...

  6. StreamStats: A water resources web application

    Science.gov (United States)

    Ries, Kernell G.; Guthrie, John G.; Rea, Alan H.; Steeves, Peter A.; Stewart, David W.

    2008-01-01

    Streamflow measurements are collected systematically over a period of years at partial-record stations to estimate peak-flow or low-flow statistics. Streamflow measurements usually are collected at miscellaneous-measurement stations for specific hydrologic studies with various objectives. StreamStats is a Web-based Geographic Information System (GIS) application that was created by the USGS, in cooperation with Environmental Systems Research Institute, Inc. (ESRI), to provide users with access to an assortment of analytical tools that are useful for water-resources planning and management. StreamStats functionality is based on ESRI's ArcHydro Data Model and Tools, described on the Web at http://resources.arcgis.com/en/communities/hydro/01vn0000000s000000.htm. StreamStats allows users to easily obtain streamflow statistics, basin characteristics, and descriptive information for USGS data-collection stations and user-selected ungaged sites. It also allows users to identify stream reaches that are upstream and downstream from user-selected sites, and to identify and obtain information for locations along the streams where activities that may affect streamflow conditions are occurring. This functionality can be accessed through a map-based user interface that appears in the user's Web browser, or individual functions can be requested remotely as Web services by other Web or desktop computer applications. StreamStats can perform these analyses much faster than historically used manual techniques. StreamStats was designed so that each state would be implemented as a separate application, with a reliance on local partnerships to fund the individual applications, and a goal of eventual full national implementation. Idaho became the first state to implement StreamStats in 2003. By mid-2008, 14 states had applications available to the public, and 18 other states were in various stages of implementation.

  7. SCALEUS: Semantic Web Services Integration for Biomedical Applications.

    Science.gov (United States)

    Sernadela, Pedro; González-Castro, Lorena; Oliveira, José Luís

    2017-04-01

    In recent years, we have witnessed an explosion of biological data resulting largely from the demands of life science research. The vast majority of these data are freely available via diverse bioinformatics platforms, including relational databases and conventional keyword search applications. This type of approach has achieved great results in the last few years, but proved to be unfeasible when information needs to be combined or shared among different and scattered sources. During recent years, many of these data distribution challenges have been solved with the adoption of the semantic web. Despite the evident benefits of this technology, its adoption introduced new challenges related to the migration process from existing systems to the semantic level. To facilitate this transition, we have developed Scaleus, a semantic web migration tool that can be deployed on top of traditional systems in order to bring knowledge, inference rules, and query federation to the existing data. Targeted at the biomedical domain, this web-based platform offers, in a single package, straightforward data integration and semantic web services that help developers and researchers in the creation of new semantically enhanced information systems. SCALEUS is available as open source at http://bioinformatics-ua.github.io/scaleus/.
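
    The flavour of what such a semantic layer adds can be sketched with the rdflib Python library; this is a generic illustration of storing triples and running a SPARQL query, not the SCALEUS API itself, and the URIs are invented for the example.

        from rdflib import Graph, Literal, Namespace, RDF

        EX = Namespace("http://example.org/")
        g = Graph()
        g.add((EX.patient1, RDF.type, EX.Patient))
        g.add((EX.patient1, EX.hasDiagnosis, Literal("asthma")))

        # Query federation and inference rules operate over triples like these.
        q = "SELECT ?p WHERE { ?p <http://example.org/hasDiagnosis> 'asthma' }"
        for row in g.query(q):
            print(row.p)   # http://example.org/patient1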

  8. Detection of the Security Vulnerabilities in Web Applications

    Directory of Open Access Journals (Sweden)

    2009-01-01

    Full Text Available Contemporary organizations develop business processes in a very complex environment. Organizations use IT&C technologies to improve their competitive advantage, but these technologies are not perfect: they are developed in an iterative process, and their quality is the result of lifecycle activities. The increased complexity of business processes supported by IT&C technologies calls for audit and evaluation processes. In order to organize and develop a high-quality audit process, the evaluation team must analyze the risks, threats and vulnerabilities of the information system. The paper highlights the security vulnerabilities in web applications and the processes for detecting them. Web applications are used as IT&C tools to support distributed information processes and are a major component of distributed information systems. The audit and evaluation processes are carried out in accordance with the international standards developed for information system security assurance.

  9. Web-based recruitment: effects of information, organizational brand, and attitudes toward a Web site on applicant attraction.

    Science.gov (United States)

    Allen, David G; Mahto, Raj V; Otondo, Robert F

    2007-11-01

    Recruitment theory and research show that objective characteristics, subjective considerations, and critical contact send signals to prospective applicants about the organization and available opportunities. In the generating applicants phase of recruitment, critical contact may consist largely of interactions with recruitment sources (e.g., newspaper ads, job fairs, organization Web sites); however, research has yet to fully address how all 3 types of signaling mechanisms influence early job pursuit decisions in the context of organizational recruitment Web sites. Results based on data from 814 student participants searching actual organization Web sites support and extend signaling and brand equity theories by showing that job information (directly) and organization information (indirectly) are related to intentions to pursue employment when a priori perceptions of image are controlled. A priori organization image is related to pursuit intentions when subsequent information search is controlled, but organization familiarity is not, and attitudes about a recruitment source also influence attraction and partially mediate the effects of organization information. Theoretical and practical implications for recruitment are discussed. (c) 2007 APA

  10. Analysis of Decision Making and Incentives in Danish Green Web Applications

    DEFF Research Database (Denmark)

    Scheele, Christian Elling

    2013-01-01

    Traditional information campaigns aimed at incentivising the kind of behaviour change that will lead to more sustainable levels of energy consumption have been proven inefficient. Politicians and government bodies could consider using green web applications as an alternative. However, there is little research documenting how such applications actually motivate behaviour change; there is a need for a better understanding of how such applications work and whether they are effective. This paper addresses the first question by demonstrating how three Danish green web applications employ different ... normative or behavioural gains. The third approach is based on a socio-psychological decision model in which values, attitudes and norms affect the choices we make. All three theoretical approaches aim at explaining decision-making in the context of energy consumption.

  11. Open chemistry: RESTful web APIs, JSON, NWChem and the modern web application.

    Science.gov (United States)

    Hanwell, Marcus D; de Jong, Wibe A; Harris, Christopher J

    2017-10-30

    An end-to-end platform for chemical science research has been developed that integrates data from computational and experimental approaches through a modern web-based interface. The platform offers an interactive visualization and analytics environment that functions well on mobile, laptop and desktop devices. It offers pragmatic solutions to ensure that large and complex data sets are more accessible. Existing desktop applications/frameworks were extended to integrate with high-performance computing resources, and the platform offers command-line tools to automate interaction, connecting distributed teams to this software platform on their own terms. The platform was developed openly, with all source code hosted on the GitHub platform and automated deployment possible using Ansible coupled with standard Ubuntu-based machine images deployed to cloud machines. The platform is designed to enable teams to reap the benefits of the connected web, going beyond what conventional search and analytics platforms offer in this area. It also has the goal of offering federated instances that can be customized to the sites/research performed. Data are stored using JSON, extending upon previous approaches using XML, building structures that support computational chemistry calculations. These structures were developed to make it easy to process data across different languages and to send data to a JavaScript-based web client.
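
    A hedged sketch of what consuming such a RESTful, JSON-based API might look like from Python; the endpoint layout and field names here are hypothetical, as the record does not spell out the platform's routes.

        import requests

        BASE = "https://example.org/api/v1"   # hypothetical deployment

        def fetch_calculation(calc_id):
            """Retrieve one computational-chemistry record as JSON over REST."""
            resp = requests.get(f"{BASE}/calculations/{calc_id}", timeout=10)
            resp.raise_for_status()
            return resp.json()

        calc = fetch_calculation("42")
        # JSON structures are easy to process across languages:
        print(calc.get("molecule", {}).get("formula"))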

  12. Developing responsive web applications with Ajax and jQuery

    CERN Document Server

    Patel, Sandeep Kumar

    2014-01-01

    This book is a standard tutorial for web application developers presented in a comprehensive, step-by-step manner to explain the nuances involved. It has an abundance of code and examples supporting explanations of each feature. This book is intended for Java developers wanting to create rich and responsive applications using AJAX. Basic experience of using jQuery is assumed.

  13. Open chemistry: RESTful web APIs, JSON, NWChem and the modern web application

    OpenAIRE

    Hanwell, Marcus D.; de Jong, Wibe A.; Harris, Christopher J.

    2017-01-01

    © 2017 The Author(s). An end-to-end platform for chemical science research has been developed that integrates data from computational and experimental approaches through a modern web-based interface. The platform offers an interactive visualization and analytics environment that functions well on mobile, laptop and desktop devices. It offers pragmatic solutions to ensure that large and complex data sets are more accessible. Existing desktop applications/frameworks were extended to integrate w...

  14. HTSstation: a web application and open-access libraries for high-throughput sequencing data analysis.

    Science.gov (United States)

    David, Fabrice P A; Delafontaine, Julien; Carat, Solenne; Ross, Frederick J; Lefebvre, Gregory; Jarosz, Yohan; Sinclair, Lucas; Noordermeer, Daan; Rougemont, Jacques; Leleu, Marion

    2014-01-01

    The HTSstation analysis portal is a suite of simple web forms coupled to modular analysis pipelines for various applications of High-Throughput Sequencing including ChIP-seq, RNA-seq, 4C-seq and re-sequencing. HTSstation offers biologists the possibility to rapidly investigate their HTS data using an intuitive web application with heuristically pre-defined parameters. A number of open-source software components have been implemented and can be used to build, configure and run HTS analysis pipelines reactively. Besides, our programming framework empowers developers with the possibility to design their own workflows and integrate additional third-party software. The HTSstation web application is accessible at http://htsstation.epfl.ch.

  15. Web 2.0 applications in medicine: trends and topics in the literature.

    Science.gov (United States)

    Boudry, Christophe

    2015-04-01

    The World Wide Web has changed research habits, and these changes were further expanded when "Web 2.0" became popular in 2005. Bibliometrics is a helpful tool used for describing patterns of publication, for interpreting progression over time, and for mapping the geographical distribution of research in a given field. Few studies employing bibliometrics, however, have been carried out on the correlative nature of scientific literature and Web 2.0. The aim of this bibliometric analysis was to provide an overview of Web 2.0 implications in the biomedical literature. The objectives were to assess the growth rate of the literature, key journals, authors, and country contributions, and to evaluate whether the various Web 2.0 applications were expressed within this biomedical literature, and if so, how. A specific query with keywords chosen to be representative of Web 2.0 applications was built for the PubMed database. Articles related to Web 2.0 were downloaded in Extensible Markup Language (XML) format, processed through custom PHP (hypertext preprocessor) scripts, and then imported to Microsoft Excel 2010 for data processing. A total of 1347 articles were included in this study. The number of articles related to Web 2.0 increased from 2002 to 2012 (the average annual growth rate was 106.3%, with a maximum of 333% in 2005). The United States was by far the predominant country for authors, with 514 articles (54.0%; 514/952). The second and third most productive countries were the United Kingdom and Australia, with 87 (9.1%; 87/952) and 44 articles (4.6%; 44/952), respectively. Distribution of the number of articles per author showed that the core population of researchers working on Web 2.0 in the medical field could be estimated at approximately 75. In total, 614 journals were identified during this analysis. Using Bradford's law, 27 core journals were identified, among which three (Studies in Health Technology and Informatics, Journal of Medical Internet Research, and Nucleic Acids
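
    The retrieval step described above can be approximated with NCBI's public E-utilities; the keyword list below is illustrative, not the authors' exact query, and the requests package is assumed.

        import requests

        resp = requests.get(
            "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi",
            params={"db": "pubmed", "term": "web 2.0 OR wiki OR blog",
                    "retmode": "json", "retmax": 100},
            timeout=10,
        )
        ids = resp.json()["esearchresult"]["idlist"]
        print(len(ids), "PMIDs retrieved")   # article IDs to fetch and process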

  16. MARS: Microarray analysis, retrieval, and storage system

    Directory of Open Access Journals (Sweden)

    Scheideler Marcel

    2005-04-01

    Full Text Available Abstract Background Microarray analysis has become a widely used technique for the study of gene-expression patterns on a genomic scale. As more and more laboratories are adopting microarray technology, there is a need for powerful and easy-to-use microarray databases facilitating array fabrication, labeling, hybridization, and data analysis. The wealth of data generated by this high-throughput approach renders adequate database and analysis tools crucial for the pursuit of insights into the transcriptomic behavior of cells. Results MARS (Microarray Analysis and Retrieval System) provides a comprehensive MIAME-supportive suite for storing, retrieving, and analyzing multi-color microarray data. The system comprises a laboratory information management system (LIMS), quality control management, as well as a sophisticated user management system. MARS is fully integrated into an analytical pipeline of microarray image analysis, normalization, gene expression clustering, and mapping of gene expression data onto biological pathways. The incorporation of ontologies and the use of MAGE-ML enable an export of studies stored in MARS to public repositories and other databases accepting these documents. Conclusion We have developed an integrated system tailored to serve the specific needs of microarray-based research projects using a unique fusion of Web-based and standalone applications connected to the latest J2EE application server technology. The presented system is freely available for academic and non-profit institutions. More information can be found at http://genome.tugraz.at.

  17. Cost reduction for web-based data imputation

    KAUST Repository

    Li, Zhixu

    2014-01-01

    Web-based Data Imputation enables the completion of incomplete data sets by retrieving absent field values from the Web. In particular, complete fields can be used as keywords in imputation queries for absent fields. However, due to the ambiguity of these keywords and the complexity of data on the Web, different queries may retrieve different answers to the same absent field value. To decide the most probable right answer for each absent field value, the existing method issues quite a few of the available imputation queries for each absent value and then votes to decide the most probable right answer. As a result, a large number of imputation queries must be issued to fill all absent values in an incomplete data set, which brings a large overhead. In this paper, we work on reducing the cost of Web-based Data Imputation in two respects: First, we propose a query execution scheme which can secure the most probable right answer to an absent field value by issuing as few imputation queries as possible. Second, we recognize and prune queries that will probably fail to return any answers a priori. Our extensive experimental evaluation shows that our proposed techniques substantially reduce the cost of Web-based Imputation without hurting its high imputation accuracy. © 2014 Springer International Publishing Switzerland.
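
    The early-stopping idea can be sketched as follows; this is a simplified reading of the query execution scheme (a fixed vote margin stands in for the paper's stopping criterion, and issue_query is a stand-in for web search plus answer extraction).

        from collections import Counter

        def impute(queries, issue_query, margin=2):
            """Issue imputation queries until one answer leads by `margin` votes."""
            votes = Counter()
            for q in queries:
                answer = issue_query(q)
                if answer is not None:
                    votes[answer] += 1
                top = votes.most_common(2)
                if top and top[0][1] - (top[1][1] if len(top) > 1 else 0) >= margin:
                    return top[0][0]   # answer secured early; stop issuing queries
            return votes.most_common(1)[0][0] if votes else None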

  18. XML representation and management of temporal information for web-based cultural heritage applications

    Directory of Open Access Journals (Sweden)

    Fabio Grandi

    2006-01-01

    Full Text Available In this paper we survey the recent activities and achievements of our research group in the deployment of XML-related technologies in Cultural Heritage applications concerning the encoding of temporal semantics in Web documents. In particular, we review "The Valid Web", an XML/XSL infrastructure we defined and implemented for the definition and management of historical information within multimedia documents available on the Web, and its further extension to the effective encoding of advanced temporal features like indeterminacy, multiple granularities and calendars, enabling efficient processing in a user-friendly Web-based environment. Potential uses of the developed infrastructures include a broad range of applications in the cultural heritage domain, where the historical perspective is relevant, with potentially positive impacts on E-Education and E-Science.
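
    An illustrative sketch of filtering Web content by valid time; the markup below is invented for the example and is not the actual "Valid Web" schema.

        import xml.etree.ElementTree as ET

        doc = ET.fromstring(
            '<event validFrom="1503" validTo="1506">'
            'Leonardo paints the Mona Lisa</event>')

        # Keep the fragment only if it is valid at the requested year.
        if int(doc.get("validFrom")) <= 1505 <= int(doc.get("validTo")):
            print(doc.text)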

  19. Medical Student Perceptions of Learner-Initiated Feedback Using a Mobile Web Application

    Directory of Open Access Journals (Sweden)

    Amy C Robertson

    2017-12-01

    Full Text Available Feedback, especially timely, specific, and actionable feedback, frequently does not occur. Efforts to better understand methods to improve the effectiveness of feedback are an important area of educational research. This study represents preliminary work as part of a plan to investigate the perceptions of a student-driven system to request feedback from faculty using a mobile device and Web-based application. We hypothesize that medical students will perceive learner-initiated, timely feedback to be an essential component of clinical education. Furthermore, we predict that students will recognize the use of a mobile device and Web application to be an advantageous and effective method when requesting feedback from supervising physicians. Focus group data from 18 students enrolled in a 4-week anesthesia clerkship revealed the following themes: (1) students often have to solicit feedback, (2) timely feedback is perceived as being advantageous, (3) feedback from faculty is perceived to be more effective, (4) requesting feedback from faculty physicians poses challenges, (5) the decision to request feedback may be influenced by the student's clinical performance, and (6) using a mobile device and Web application may not guarantee timely feedback. Students perceived using a mobile Web-based application to initiate feedback from supervising physicians to be a valuable method of assessment. However, challenges and barriers were identified.

  20. [Design and implementation of medical instrument standard information retrieval system based on ASP.NET].

    Science.gov (United States)

    Yu, Kaijun

    2010-07-01

    This paper analyses the design goals of a medical instrumentation standard information retrieval system. Based on the B/S structure, we established a medical instrumentation standard retrieval system in the .NET environment, using the ASP.NET C# programming language, the IIS Web server, and a SQL Server 2000 database. The paper also introduces the system structure, the retrieval system modules, the system development environment, and the detailed design of the system.

  1. WeBIAS: a web server for publishing bioinformatics applications.

    Science.gov (United States)

    Daniluk, Paweł; Wilczyński, Bartek; Lesyng, Bogdan

    2015-11-02

    One of the requirements for a successful scientific tool is its availability. Developing a functional web service, however, is usually considered a mundane and ungratifying task, and quite often neglected. When publishing bioinformatic applications, such an attitude puts additional burden on the reviewers, who have to cope with poorly designed interfaces in order to assess the quality of presented methods, and it impairs the actual usefulness to the scientific community at large. In this note we present WeBIAS, a simple, self-contained solution for making command-line programs accessible through web forms. It comprises a web portal capable of serving several applications and backend schedulers which carry out computations. The server handles user registration and authentication, stores queries and results, and provides a convenient administrator interface. WeBIAS is implemented in Python and available under the GNU Affero General Public License. It has been developed and tested on GNU/Linux-compatible platforms covering a vast majority of operational WWW servers. Since it is written in pure Python, it should be easy to deploy also on all other platforms supporting Python (e.g. Windows, Mac OS X). Documentation and source code, as well as a demonstration site, are available at http://bioinfo.imdik.pan.pl/webias. WeBIAS has been designed specifically with ease of installation and deployment of services in mind. Setting up a simple application requires minimal effort, yet it is possible to create visually appealing, feature-rich interfaces for query submission and presentation of results.
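
    The core idea, a web form in front of a command-line program, can be sketched in a few lines; this is a toy stand-in written with Flask (an assumed framework for the sketch, not necessarily what WeBIAS uses) rather than WeBIAS code.

        import subprocess
        from flask import Flask, request

        app = Flask(__name__)

        @app.route("/run", methods=["GET", "POST"])
        def run_tool():
            if request.method == "POST":
                seq = request.form["sequence"]
                # 'echo' stands in for a real command-line bioinformatics tool.
                out = subprocess.run(["echo", seq], capture_output=True, text=True)
                return f"<pre>{out.stdout}</pre>"
            return ('<form method="post"><input name="sequence">'
                    '<button>Submit</button></form>')

        if __name__ == "__main__":
            app.run()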

  2. The Semantic Web: opportunities and challenges for next-generation Web applications

    Directory of Open Access Journals (Sweden)

    2002-01-01

    Full Text Available Recently there has been a growing interest in the investigation and development of the next-generation web, the Semantic Web. While most current forms of web content are designed to be presented to humans and are barely understandable by computers, the content of the Semantic Web is structured semantically so that it is meaningful to computers as well as to humans. In this paper, we report a survey of recent research on the Semantic Web. In particular, we present the opportunities that this revolution will bring to us: web services, agent-based distributed computing, semantics-based web search engines, and semantics-based digital libraries. We also discuss the technical and cultural challenges of realizing the Semantic Web: the development of ontologies, formal semantics of Semantic Web languages, and trust and proof models. We hope that this will shed some light on the direction of future work in this field.

  3. Working with Data: Discovering Knowledge through Mining and Analysis; Systematic Knowledge Management and Knowledge Discovery; Text Mining; Methodological Approach in Discovering User Search Patterns through Web Log Analysis; Knowledge Discovery in Databases Using Formal Concept Analysis; Knowledge Discovery with a Little Perspective.

    Science.gov (United States)

    Qin, Jian; Jurisica, Igor; Liddy, Elizabeth D.; Jansen, Bernard J; Spink, Amanda; Priss, Uta; Norton, Melanie J.

    2000-01-01

    These six articles discuss knowledge discovery in databases (KDD). Topics include data mining; knowledge management systems; applications of knowledge discovery; text and Web mining; text mining and information retrieval; user search patterns through Web log analysis; concept analysis; data collection; and data structure inconsistency. (LRW)

  4. University of Glasgow at WebCLEF 2005

    DEFF Research Database (Denmark)

    Macdonald, C.; Plachouras, V.; He, B.

    2006-01-01

    We participated in the WebCLEF 2005 monolingual task. In this task, a search system aims to retrieve relevant documents from a multilingual corpus of Web documents from Web sites of European governments. Both the documents and the queries are written in a wide range of European languages. ... Each document is represented by three fields, namely content, title, and anchor text of incoming hyperlinks. We use a technique called per-field normalisation, which extends the Divergence From Randomness (DFR) framework, to normalise the term frequencies and to combine them across the three fields. We also employ the length of the URL path of Web...
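
    For reference, per-field normalisation in the DFR framework (as in models such as PL2F) computes a field-normalised term frequency and combines it across fields with field weights; a standard form of this, Normalisation 2, is stated here from the DFR literature rather than from the truncated abstract:

        \overline{tfn} = \sum_{f} w_f \cdot tf_f \cdot \log_2\left(1 + c_f \cdot \frac{\overline{l_f}}{l_f}\right)

    where tf_f is the term frequency in field f, l_f and \overline{l_f} are the field length and average field length, c_f is a per-field hyper-parameter, and w_f is the weight of field f.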

  5. Cross-Dataset Analysis and Visualization Driven by Expressive Web Services

    Science.gov (United States)

    Alexandru Dumitru, Mircea; Catalin Merticariu, Vlad

    2015-04-01

    The deluge of data that is hitting us every day from satellite and airborne sensors is changing the workflow of environmental data analysts and modelers. Web geo-services now play a fundamental role: the data no longer need to be downloaded and stored beforehand; instead, the services interact in real time with GIS applications. Due to the very large amount of data that is curated and made available by web services, it is crucial to deploy smart solutions for optimizing network bandwidth, reducing duplication of data and moving the processing closer to the data. In this context we have created a visualization application for analysis and cross-comparison of aerosol optical thickness datasets. The application aims to help researchers identify and visualize discrepancies between datasets coming from various sources, having different spatial and time resolutions. It also acts as a proof of concept for the integration of OGC Web Services under a user-friendly interface that provides beautiful visualizations of the explored data. The tool was built on top of the World Wind engine, a Java-based virtual globe built by NASA and the open-source community. For data retrieval and processing we exploited the potential of the OGC Web Coverage Service, the most exciting aspect being its processing extension, the OGC Web Coverage Processing Service (WCPS) standard. A WCPS-compliant service allows a client to execute a processing query on any coverage offered by the server. By exploiting a full grammar, several different kinds of information can be retrieved from one or more datasets together: scalar condensers, cross-sectional profiles, comparison maps and plots, etc. This combination of technology made the application versatile and portable. As the processing is done on the server side, we ensured that a minimal amount of data is transferred and that the processing is done on a fully capable server, leaving the client hardware resources to be used for rendering the visualization
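
    A hedged sketch of issuing a WCPS query from a client; the endpoint is hypothetical and the key-value parameter style follows common WCPS deployments (e.g. rasdaman), so the details may differ from the application described above.

        import requests

        endpoint = "https://example.org/rasdaman/ows"   # hypothetical server
        wcps = ('for c in (AerosolOpticalThickness) '
                'return encode(avg(c[ansi("2014-01":"2014-12")]), "json")')

        # The average is computed server-side; only the result crosses the network.
        resp = requests.get(endpoint, params={
            "service": "WCS", "version": "2.0.1",
            "request": "ProcessCoverages", "query": wcps,
        }, timeout=30)
        print(resp.text)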

  6. Local File Disclosure Vulnerability: A Case Study of Public-Sector Web Applications

    Science.gov (United States)

    Ahmed, M. Imran; Maruf Hassan, Md; Bhuyian, Touhid

    2018-01-01

    Almost all public-sector organisations in Bangladesh now offer online services through web applications, along with the existing channels, in their endeavour to realise the dream of a 'Digital Bangladesh'. Nations across the world have joined the online environment thanks to training and awareness initiatives by their governments. File sharing and downloading activities using web applications have now become very common, not only ensuring the easy distribution of different types of files and documents but also enormously reducing the time and effort of users. Although these online services have made users' lives easier, they have increased the risk of exploitation of local file disclosure (LFD) vulnerability in the web applications of different public-sector organisations due to insecure design and careless coding. This paper analyses the root cause of LFD vulnerability, its exploitation techniques, and its impact on 129 public-sector websites in Bangladesh, using a manual black-box testing approach.
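
    The root cause is typically an unsanitised file path taken from the request. A minimal illustration of both the flaw and a common fix, using hypothetical Flask handlers rather than code from the audited sites:

        import os
        from flask import Flask, request, send_file, abort

        app = Flask(__name__)
        BASE_DIR = "/var/www/files"

        @app.route("/download-unsafe")
        def download_unsafe():
            # Vulnerable: '?f=../../etc/passwd' escapes BASE_DIR (local file
            # disclosure via path traversal).
            return send_file(os.path.join(BASE_DIR, request.args["f"]))

        @app.route("/download-safe")
        def download_safe():
            path = os.path.realpath(os.path.join(BASE_DIR, request.args["f"]))
            if not path.startswith(BASE_DIR + os.sep):   # reject traversal
                abort(403)
            return send_file(path)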

  7. A RESTful interface to pseudonymization services in modern web applications.

    Science.gov (United States)

    Lablans, Martin; Borg, Andreas; Ückert, Frank

    2015-02-07

    Medical research networks rely on record linkage and pseudonymization to determine which records from different sources relate to the same patient. To establish informational separation of powers, the required identifying data are redirected to a trusted third party that has, in turn, no access to medical data. This pseudonymization service receives identifying data, compares them with a list of already reported patient records and replies with a (new or existing) pseudonym. We found existing solutions to be technically outdated, complex to implement or not suitable for internet-based research infrastructures. In this article, we propose a new RESTful pseudonymization interface tailored for use in web applications accessed by modern web browsers. The interface is modelled as a resource-oriented architecture, which is based on the representational state transfer (REST) architectural style. We translated typical use-cases into resources to be manipulated with well-known HTTP verbs. Patients can be re-identified in real-time by authorized users' web browsers using temporary identifiers. We encourage the use of PID strings for pseudonyms and the EpiLink algorithm for record linkage. As a proof of concept, we developed a Java Servlet as reference implementation. The following resources have been identified: Sessions allow data associated with a client to be stored beyond a single request while still maintaining statelessness. Tokens authorize for a specified action and thus allow the delegation of authentication. Patients are identified by one or more pseudonyms and carry identifying fields. Relying on HTTP calls alone, the interface is firewall-friendly. The reference implementation has proven to be production stable. The RESTful pseudonymization interface fits the requirements of web-based scenarios and allows building applications that make pseudonymization transparent to the user using ordinary web technology. The open-source reference implementation implements the
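
    A sketch of the resource flow from a client's perspective; the resource names and payloads here are illustrative guesses built from the description above, not the exact interface.

        import requests

        BASE = "https://ttp.example.org"   # hypothetical trusted third party

        # 1. Open a session; per REST, client state lives behind the returned URL.
        session_url = requests.post(f"{BASE}/sessions").headers["Location"]

        # 2. Obtain a token authorising a single 'addPatient' action.
        token_url = requests.post(f"{session_url}/tokens",
                                  json={"type": "addPatient"}).headers["Location"]

        # 3. Send identifying fields; only a (new or existing) pseudonym comes back.
        reply = requests.post(f"{BASE}/patients",
                              params={"token": token_url.rsplit("/", 1)[-1]},
                              json={"firstname": "Ada", "lastname": "Lovelace",
                                    "birthdate": "1815-12-10"})
        print(reply.json())   # e.g. {"pseudonym": "XY23Z"}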

  8. Development of a Web application for a real time information system; Desarrollo de una aplicacion web para un sistema de informacion en tiempo real

    Energy Technology Data Exchange (ETDEWEB)

    Espinosa R, Alfredo; Silva F, Brisa M; Quintero R, Agustin [Instituto de Investigaciones Electricas, Cuernavaca, Morelos (Mexico)

    2007-07-01

    This article describes a technique for the development of a Web application for a real-time information system that allows different pieces of equipment on the network to connect remotely and concurrently to the system's historical database, without the need to install any software component on the remote equipment of the user who performs the query. It defines and establishes the software architecture that enables the development of the Web application, covering the analysis stages, the operation of the technology to be used, and the design, development and implementation of the application. Finally, the achievements obtained with the development of the Web application for a real-time information system are described.

  9. Demonstration: SpaceExplorer - A Tool for Designing Ubiquitous Web Applications for Collections of Displays

    DEFF Research Database (Denmark)

    Hansen, Thomas Riisgaard

    2007-01-01

    This demonstration presents a simple browser plug-in that grants web applications the ability to use multiple nearby devices for displaying web content. A web page can, for example, be designed to present additional information on nearby devices. The demonstration introduces a lightweight peer-to-peer arc...

  10. Web Mining and Social Networking

    DEFF Research Database (Denmark)

    Xu, Guandong; Zhang, Yanchun; Li, Lin

    This book examines the techniques and applications involved in the Web Mining, Web Personalization and Recommendation, and Web Community Analysis domains, including a detailed presentation of the principles, developed algorithms, and systems of the research in these areas. The applications of web mining, and the issue of how to incorporate web mining into web personalization and recommendation systems, are also reviewed. Additionally, the volume explores web community mining and analysis to find the structural, organizational and temporal developments of web communities and reveal the societal sense of individuals or communities. The volume will benefit both academic and industry communities interested in the techniques and applications of web search, web data management, web mining and web knowledge discovery, as well as web community and social network analysis.

  11. System and Method for Providing Web-Based Remote Application Service

    OpenAIRE

    Shuen-Tai Wang; Yu-Ching Lin; Hsi-Ya Chang

    2017-01-01

    With the development of virtualization technologies, a new type of service, the cloud computing service, has emerged. Cloud users usually encounter the problem of how to use the virtualized platform easily over the web without requiring the plug-in or installation of special software. The object of this paper is to develop a system and a method enabling process interfacing within an automation scenario for accessing remote applications by using the web browser. To meet this challenge, we have ...

  12. CERN's web application updates for electron and laser beam technologies

    CERN Document Server

    Sigas, Christos

    2017-01-01

    This report describes the modifications to CERN's web application for electron and laser beam technologies. There are updates to both the front end and the back end of the application. New electron and laser machines were added, and old machines were updated. There is also a new feature for printing needed information.

  13. A boosting framework for visuality-preserving distance metric learning and its application to medical image retrieval.

    Science.gov (United States)

    Yang, Liu; Jin, Rong; Mummert, Lily; Sukthankar, Rahul; Goode, Adam; Zheng, Bin; Hoi, Steven C H; Satyanarayanan, Mahadev

    2010-01-01

    Similarity measurement is a critical component in content-based image retrieval systems, and learning a good distance metric can significantly improve retrieval performance. However, despite extensive study, there are several major shortcomings with the existing approaches for distance metric learning that can significantly affect their application to medical image retrieval. In particular, "similarity" can mean very different things in image retrieval: resemblance in visual appearance (e.g., two images that look like one another) or similarity in semantic annotation (e.g., two images of tumors that look quite different yet are both malignant). Current approaches for distance metric learning typically address only one goal without consideration of the other. This is problematic for medical image retrieval where the goal is to assist doctors in decision making. In these applications, given a query image, the goal is to retrieve similar images from a reference library whose semantic annotations could provide the medical professional with greater insight into the possible interpretations of the query image. If the system were to retrieve images that did not look like the query, then users would be less likely to trust the system; on the other hand, retrieving images that appear superficially similar to the query but are semantically unrelated is undesirable because that could lead users toward an incorrect diagnosis. Hence, learning a distance metric that preserves both visual resemblance and semantic similarity is important. We emphasize that, although our study is focused on medical image retrieval, the problem addressed in this work is critical to many image retrieval systems. We present a boosting framework for distance metric learning that aims to preserve both visual and semantic similarities. The boosting framework first learns a binary representation using side information, in the form of labeled pairs, and then computes the distance as a weighted Hamming
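
    The final distance computation the abstract describes, a weighted Hamming distance over learned binary codes, is simple to state; a small sketch follows, with toy codes and weights standing in for those learned by boosting.

        import numpy as np

        def weighted_hamming(x, y, w):
            """Sum the weights of the bit positions where codes x and y disagree."""
            x, y, w = np.asarray(x), np.asarray(y), np.asarray(w)
            return float(np.sum(w * (x != y)))

        # Toy 4-bit codes for a query image and a reference image:
        print(weighted_hamming([1, 0, 1, 1], [1, 1, 0, 1],
                               w=[0.5, 1.0, 2.0, 0.25]))   # -> 3.0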

  14. Web Application for Actuarial Calculations for Insurance

    OpenAIRE

    Dobrev, Hristo; Kyurkchiev, Nikolay

    2013-01-01

    Report published in the Proceedings of the National Conference on "Education in the Information Society", Plovdiv, May, 2013 During the last 10 years a growing interest in the modernization of vocational education of actuaries, the content of actuarial study programs, consistent with global traditions and trends is indicated. Web application for insurance actuarial calculations is explored. Association for the Development of the Information Society, Institute of Mathematics and...

  15. Consistency in the World Wide Web

    DEFF Research Database (Denmark)

    Thomsen, Jakob Grauenkjær

    Tim Berners-Lee envisioned that computers will behave as agents of humans on the World Wide Web, where they will retrieve, extract, and interact with information from the World Wide Web. A step towards this vision is to make computers capable of extracting this information in a reliable ... and consistent way. In this dissertation we study steps towards this vision by showing techniques for the specification, the verification and the evaluation of the consistency of information in the World Wide Web. We show how to detect certain classes of errors in a specification of information, and we show how ... the World Wide Web, in order to help perform consistent evaluations of web extraction techniques. These contributions are steps towards having computers reliably and consistently extract information from the World Wide Web, which in turn are steps towards achieving Tim Berners-Lee's vision.

  16. The effect of query complexity on Web searching results

    Directory of Open Access Journals (Sweden)

    B.J. Jansen

    2000-01-01

    Full Text Available This paper presents findings from a study of the effects of query structure on retrieval by Web search services. Fifteen queries were selected from the transaction log of a major Web search service in simple query form, with no advanced operators (e.g., Boolean operators, phrase operators, etc.), and submitted to 5 major search engines: Alta Vista, Excite, FAST Search, Infoseek, and Northern Light. The results from these queries became the baseline data. The original 15 queries were then modified using the various search operators supported by each of the 5 search engines, for a total of 210 queries. Each of these 210 queries was also submitted to the applicable search service. The results obtained were then compared to the baseline results. A total of 2,768 search results were returned by the set of all queries. In general, increasing the complexity of the queries had little effect on the results, with a greater than 70% overlap in results on average. Implications for the design of Web search services and directions for future research are discussed.

  17. Empower the patients with a dialogue-based web application

    DEFF Research Database (Denmark)

    Bjørnes, Charlotte D.; Cummings, Elizabeth; Nøhr, Christian

    2012-01-01

    A dialogue-based web application was designed and implemented to accommodate patients' information and communication needs in short-stay hospital settings. To ensure the system meets the patients' needs, both patients and healthcare professionals were involved in the design process by applying various participatory...

  18. SproutCore web application development

    CERN Document Server

    Keating, Tyler

    2013-01-01

    Written as a practical, step-by-step tutorial, Creating HTML5 Apps with SproutCore is full of engaging examples to help you learn in a practical context. This book is for any person looking to write software for the Web or already writing software for the Web. Whether your background is in web development or in software development, Creating HTML5 Apps with SproutCore will help you expand your skills so that you will be ready to apply the software development principles in the web development space.

  19. An Intelligent Web Digital Image Metadata Service Platform for Social Curation Commerce Environment

    Directory of Open Access Journals (Sweden)

    Seong-Yong Hong

    2015-01-01

    Full Text Available Information management includes multimedia data management, knowledge management, collaboration, and agents, all of which are supporting technologies for XML. XML technologies have an impact on multimedia databases as well as on collaborative technologies and knowledge management. That is, e-commerce documents are encoded in XML and are gaining much popularity for business-to-business or business-to-consumer transactions. Recently, Internet sites such as e-commerce and shopping mall sites have come to handle a great deal of image and multimedia information. This paper proposes an intelligent web digital image information retrieval platform, which adopts XML technology, for the social curation commerce environment. To support object-based content retrieval on product catalog images containing multiple objects, we describe multilevel metadata structures representing the local features, global features, and semantics of image data. To enable semantic-based and content-based retrieval on such image data, we design an XML Schema for the proposed metadata. We also describe how to automatically transform the retrieval results into forms suitable for various user environments, such as a web browser or mobile device, using XSLT. The proposed scheme can be utilized to enable efficient e-catalog metadata sharing between systems, and it will contribute to the improvement of retrieval correctness and user satisfaction in semantic-based web digital image information retrieval.
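
    The XSLT-based adaptation step can be illustrated with the lxml Python library; the record and stylesheet below are invented for the example rather than taken from the proposed XML Schema.

        from lxml import etree

        record = etree.fromstring(
            "<image><title>Red handbag</title><category>fashion</category></image>")

        xsl = etree.fromstring(
            '<xsl:stylesheet version="1.0" '
            'xmlns:xsl="http://www.w3.org/1999/XSL/Transform">'
            '<xsl:template match="/image">'
            '<li><b><xsl:value-of select="title"/></b> '
            '(<xsl:value-of select="category"/>)</li>'
            '</xsl:template></xsl:stylesheet>')

        # A different stylesheet would target a mobile layout instead.
        print(etree.XSLT(xsl)(record))   # serialized <li>...</li> snippet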

  20. Photonics Applications and Web Engineering: WILGA 2017

    Science.gov (United States)

    Romaniuk, Ryszard S.

    2017-08-01

    The XLth Wilga Summer 2017 Symposium on Photonics Applications and Web Engineering was held on 28 May-4 June 2017. The Symposium gathered over 350 participants, mainly young researchers active in optics, optoelectronics, photonics, modern optics, mechatronics, applied physics, electronics technologies and applications. Around 300 oral and poster papers were presented in a few main topical tracks, which are traditional for Wilga, including: bio-photonics, optical sensory networks, photonics-electronics-mechatronics co-design and integration, large functional system design and maintenance, Internet of Things, measurement systems for astronomy, high energy physics experiments, and others. The paper is a traditional introduction to the 2017 WILGA Summer Symposium Proceedings, and digests some of the Symposium's key presentations. This year the Symposium was divided into the following topical sessions/conferences: Optics, Optoelectronics and Photonics; Computational and Artificial Intelligence; Biomedical Applications; Astronomical and High Energy Physics Experiments Applications; Material Research and Engineering; and Advanced Photonics and Electronics Applications in Research and Industry.

  1. Examining the application of Web 2.0 in medical-related organisations.

    Science.gov (United States)

    Chu, Samuel Kai Wah; Woo, Matsuko; King, Ronnel B; Choi, Stephen; Cheng, Miffy; Koo, Peggy

    2012-03-01

    This study surveyed Web 2.0 applications in three types of selected health or medical-related organisations: university medical libraries, hospitals and non-profit medical-related organisations. Thirty organisations participated in an online survey on the perceived purposes, benefits and difficulties of using Web 2.0. A phone interview was further conducted with eight organisations (26.7%) to collect information on their use of Web 2.0. Data were analysed using both quantitative and qualitative approaches. Results showed that knowledge and information sharing and the provision of a better communication platform were rated as the main purposes of using Web 2.0. Time constraints and low staff engagement were the most highly rated difficulties. In addition, most participants found Web 2.0 to be beneficial to their organisations. Medical-related organisations that adopted Web 2.0 technologies have found them useful, with benefits outweighing the difficulties in the long run. The implications of this study are discussed to help medical-related organisations make decisions regarding the use of Web 2.0 technologies. © 2011 The authors. Health Information and Libraries Journal © 2011 Health Libraries Group.

  2. Near-Real Time Satellite-Retrieved Cloud and Surface Properties for Weather and Aviation Safety Applications

    Science.gov (United States)

    Minnis, P.; Smith, W., Jr.; Bedka, K. M.; Nguyen, L.; Palikonda, R.; Hong, G.; Trepte, Q.; Chee, T.; Scarino, B. R.; Spangenberg, D.; Sun-Mack, S.; Fleeger, C.; Ayers, J. K.; Chang, F. L.; Heck, P. W.

    2014-12-01

    Cloud properties determined from satellite imager radiances provide a valuable source of information for nowcasting and weather forecasting. In recent years, it has been shown that assimilation of cloud top temperature, optical depth, and total water path can increase the accuracies of weather analyses and forecasts. Aircraft icing conditions can be accurately diagnosed from near-real time (NRT) retrievals of cloud effective particle size, phase, and water path, providing valuable data for pilots. NRT retrievals of surface skin temperature can also be assimilated in numerical weather prediction models to provide more accurate representations of solar heating and longwave cooling at the surface, where convective initiation occurs. These and other applications are being exploited more frequently as the value of NRT cloud data becomes recognized. At NASA Langley, cloud properties and surface skin temperature are being retrieved in near-real time globally from both geostationary (GEO) and low-earth orbiting (LEO) satellite imagers for weather model assimilation and nowcasting of hazards such as aircraft icing. Cloud data from GEO satellites over North America are disseminated through NCEP, while those data and global LEO and GEO retrievals are disseminated from a Langley website. This paper presents an overview of the various available datasets, provides examples of their application, and discusses the use of the various datasets downstream. Future challenges and areas of improvement are also presented.

  3. Near-Real Time Satellite-Retrieved Cloud and Surface Properties for Weather and Aviation Safety Applications

    Science.gov (United States)

    Minnis, Patrick; Smith, William L., Jr.; Bedka, Kristopher M.; Nguyen, Louis; Palikonda, Rabindra; Hong, Gang; Trepte, Qing Z.; Chee, Thad; Scarino, Benjamin; Spangenberg, Douglas A.

    2014-01-01

    Cloud properties determined from satellite imager radiances provide a valuable source of information for nowcasting and weather forecasting. In recent years, it has been shown that assimilation of cloud top temperature, optical depth, and total water path can increase the accuracies of weather analyses and forecasts. Aircraft icing conditions can be accurately diagnosed from near-real time (NRT) retrievals of cloud effective particle size, phase, and water path, providing valuable data for pilots. NRT retrievals of surface skin temperature can also be assimilated in numerical weather prediction models to provide more accurate representations of solar heating and longwave cooling at the surface, factors that influence convective initiation. These and other applications are being exploited more frequently as the value of NRT cloud data becomes recognized. At NASA Langley, cloud properties and surface skin temperature are being retrieved in near-real time globally from both geostationary (GEO) and low-earth orbiting (LEO) satellite imagers for weather model assimilation and nowcasting for hazards such as aircraft icing. Cloud data from GEO satellites over North America are disseminated through NCEP, while those data and global LEO and GEO retrievals are disseminated from a Langley website. This paper presents an overview of the various available datasets, provides examples of their application, and discusses the use of the various datasets downstream. Future challenges and areas of improvement are also presented.

  4. Web Project Management

    OpenAIRE

    Suralkar, Sunita; Joshi, Nilambari; Meshram, B B

    2013-01-01

    This paper describes the need for Web project management and the fundamentals of project management for web projects: what it is, why projects go wrong, and what's different about web projects. We also discuss Cost Estimation Techniques based on Size Metrics. Though Web project development is similar to traditional software development applications, the special characteristics of Web Application development require adaptation of many software engineering approaches or even development of comple...

  5. A novel 2.5D approach for interfacing with web applications

    OpenAIRE

    Sarkar, Saurabh

    2012-01-01

    Web applications need better user interfaces to be interactive and attractive. This paper proposes a new approach/concept of dimensional enhancement - 2.5D, "a 2D display of a virtual 3D environment" - which can be implemented in social networking sites and, further, in other system applications.

  6. Web-based interventions in nursing.

    Science.gov (United States)

    Im, Eun-Ok; Chang, Sun Ju

    2013-02-01

    With recent advances in computer and Internet technologies and a high funding priority on technological aspects of nursing research, researchers at the field level began to develop, use, and test various types of Web-based interventions. Despite the high potential impact of Web-based interventions, little is yet known about Web-based interventions in nursing. In this article, to identify strengths and weaknesses of Web-based nursing interventions, a literature review was conducted using multiple databases with the combined keywords of "online," "Internet" or "Web," "intervention," and "nursing." A total of 95 articles were retrieved through the databases and sorted by research topic. These articles were then analyzed to identify strengths and weaknesses of Web-based interventions in nursing. A strength of the Web-based interventions was their coverage of various content areas. In addition, many of them were theory-driven. They had advantages in their flexibility and comfort. They could provide consistency in interventions and required less cost in intervention implementation. However, Web-based intervention studies had self-selected participants. They lacked controllability and had high dropout rates. They required technical expertise and high development costs. Based on these findings, directions for future Web-based intervention research are provided.

  7. The Application of Similar Image Retrieval in Electronic Commerce

    Science.gov (United States)

    Hu, YuPing; Yin, Hua; Han, Dezhi; Yu, Fei

    2014-01-01

    Traditional online shopping platforms (OSP), which search product information by keywords, face three problems: an indirect search mode, a large search space, and inaccuracy in search results. To solve these problems, we discuss and research the application of similar image retrieval in electronic commerce. Aiming at improving network customers' experience and providing merchants with accurate advertising, we design a reasonable and extensive electronic commerce application system, which includes three subsystems: an image search display subsystem, an image search subsystem, and a product information collecting subsystem. This system provides a seamless connection between the information platform and the OSP, on which consumers can automatically and directly search for similar images according to pictures from the information platform. At the same time, it can be used to improve the accuracy of Internet marketing for enterprises. The experiment shows the effectiveness of the constructed system. PMID:24883411

  8. The Application of Similar Image Retrieval in Electronic Commerce

    Directory of Open Access Journals (Sweden)

    YuPing Hu

    2014-01-01

    Full Text Available Traditional online shopping platforms (OSP), which search product information by keywords, face three problems: an indirect search mode, a large search space, and inaccuracy in search results. To solve these problems, we discuss and research the application of similar image retrieval in electronic commerce. Aiming at improving network customers' experience and providing merchants with accurate advertising, we design a reasonable and extensive electronic commerce application system, which includes three subsystems: an image search display subsystem, an image search subsystem, and a product information collecting subsystem. This system provides a seamless connection between the information platform and the OSP, on which consumers can automatically and directly search for similar images according to pictures from the information platform. At the same time, it can be used to improve the accuracy of Internet marketing for enterprises. The experiment shows the effectiveness of the constructed system.

  9. The application of similar image retrieval in electronic commerce.

    Science.gov (United States)

    Hu, YuPing; Yin, Hua; Han, Dezhi; Yu, Fei

    2014-01-01

    Traditional online shopping platforms (OSP), which search product information by keywords, face three problems: an indirect search mode, a large search space, and inaccuracy in search results. To solve these problems, we discuss and research the application of similar image retrieval in electronic commerce. Aiming at improving network customers' experience and providing merchants with accurate advertising, we design a reasonable and extensive electronic commerce application system, which includes three subsystems: an image search display subsystem, an image search subsystem, and a product information collecting subsystem. This system provides a seamless connection between the information platform and the OSP, on which consumers can automatically and directly search for similar images according to pictures from the information platform. At the same time, it can be used to improve the accuracy of Internet marketing for enterprises. The experiment shows the effectiveness of the constructed system.
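
    The records above do not specify which image features the similarity search uses. As a rough illustration of the underlying idea, the sketch below implements content-based matching with color histograms and histogram intersection; the feature choice, bin count, and function names are assumptions for illustration, not the authors' method.

        # A minimal content-based "similar image" search sketch. Color
        # histograms and histogram intersection are illustrative
        # assumptions; the papers above do not specify their features.
        import numpy as np
        from PIL import Image

        def histogram(path, bins=8):
            """Normalized 3D RGB color histogram of an image file."""
            rgb = np.asarray(Image.open(path).convert("RGB")).reshape(-1, 3)
            h, _ = np.histogramdd(rgb, bins=(bins,) * 3, range=((0, 256),) * 3)
            return h.ravel() / h.sum()

        def intersection(h1, h2):
            """Histogram intersection: 1.0 for identical histograms."""
            return np.minimum(h1, h2).sum()

        def most_similar(query_path, catalog_paths, top_k=5):
            """Rank catalog images by similarity to the query image."""
            q = histogram(query_path)
            scored = [(p, intersection(q, histogram(p))) for p in catalog_paths]
            return sorted(scored, key=lambda s: -s[1])[:top_k]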

  10. Designing a Web Spam Classifier Based on Feature Fusion in the Layered Multi-Population Genetic Programming Framework

    Directory of Open Access Journals (Sweden)

    Amir Hosein KEYHANIPOUR

    2013-11-01

    Full Text Available Nowadays, Web spam pages are a critical challenge for Web retrieval systems and have a drastic influence on the performance of such systems. Although these systems try to combat the impact of spam pages on their final result lists, spammers increasingly use more sophisticated techniques to increase the number of views for their intended pages in order to have more commercial success. This paper employs the recently proposed Layered Multi-population Genetic Programming model for the Web spam detection task, as well as correlation coefficient analysis for feature space reduction. Based on our tentative results, the designed classifier, which is based on a combination of easy-to-compute features, has a very reasonable performance in comparison with similar methods.
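
    The feature-reduction step mentioned in the abstract can be pictured as dropping one feature from every highly correlated pair. A minimal sketch follows; the 0.95 threshold is an assumption, and the LMGP classifier itself is outside its scope.

        # Correlation-coefficient-based feature-space reduction: drop one
        # feature from every highly correlated pair. Threshold is assumed.
        import numpy as np
        import pandas as pd

        def drop_correlated(features: pd.DataFrame, threshold: float = 0.95) -> pd.DataFrame:
            corr = features.corr().abs()
            # Keep only the upper triangle so each pair is examined once.
            upper = corr.where(np.triu(np.ones(corr.shape, dtype=bool), k=1))
            to_drop = [c for c in upper.columns if (upper[c] > threshold).any()]
            return features.drop(columns=to_drop)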

  11. FASH: A web application for nucleotides sequence search

    Directory of Open Access Journals (Sweden)

    Chew Paul

    2008-05-01

    Full Text Available Abstract FASH (Fourier Alignment Sequence Heuristics) is a web application, based on the Fast Fourier Transform, for finding remote homologs within a long nucleic acid sequence. Given a query sequence and a long text sequence (e.g., the human genome), FASH detects subsequences within the text that are remotely similar to the query. FASH offers an alternative approach to Blast/Fasta for querying long RNA/DNA sequences. FASH differs from these other approaches in that it does not depend on the existence of contiguous seed sequences in its initial detection phase. The FASH web server is user friendly and very easy to operate. Availability: FASH can be accessed at https://fash.bgu.ac.il:8443/fash/default.jsp (secured website)
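
    The core idea behind FFT-based sequence matching can be illustrated by cross-correlating one-hot encodings of the query and the text, so that peaks mark offsets with many matching bases. The sketch below shows this principle only; it is not the published FASH algorithm.

        # FFT cross-correlation of one-hot nucleotide encodings; the score
        # at offset k counts bases of the query matching the text at k.
        # Illustrates the principle only, not the FASH algorithm itself.
        import numpy as np

        def one_hot(seq):
            return np.array([[1.0 if b == a else 0.0 for b in seq] for a in "ACGT"])

        def match_scores(query, text):
            n = len(query) + len(text) - 1
            score = np.zeros(n)
            # Sum per-base correlations over A, C, G, T channels.
            for qa, ta in zip(one_hot(query), one_hot(text)):
                score += np.fft.irfft(np.fft.rfft(ta, n) * np.conj(np.fft.rfft(qa, n)), n)
            return score

        print(int(np.argmax(match_scores("ACGTACGT", "TTTTACGTACGTTTTT"))))  # -> 4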

  12. Using Open Web APIs in Teaching Web Mining

    Science.gov (United States)

    Chen, Hsinchun; Li, Xin; Chau, M.; Ho, Yi-Jen; Tseng, Chunju

    2009-01-01

    With the advent of the World Wide Web, many business applications that utilize data mining and text mining techniques to extract useful business information on the Web have evolved from Web searching to Web mining. It is important for students to acquire knowledge and hands-on experience in Web mining during their education in information systems…

  13. Identify, Organize, and Retrieve Items Using Zotero

    Science.gov (United States)

    Clark, Brian; Stierman, John

    2009-01-01

    Librarians build collections. To do this they use tools that help them identify, organize, and retrieve items for the collection. Zotero (zoh-TAIR-oh) is such a tool that helps the user build a library of useful books, articles, web sites, blogs, etc., discovered while surfing online. A visit to Zotero's homepage, www.zotero.org, shows a number of…

  14. Photonics applications and web engineering: WILGA Summer 2016

    Science.gov (United States)

    Romaniuk, Ryszard S.

    2016-09-01

    The Wilga Summer 2016 Symposium on Photonics Applications and Web Engineering was held on 29 May - 06 June. The Symposium gathered over 350 participants, mainly young researchers active in optics, optoelectronics, photonics, and electronics technologies and applications. Around 300 presentations were given in a few main topical tracks including: bio-photonics, optical sensory networks, photonics-electronics-mechatronics co-design and integration, large functional system design and maintenance, Internet of Things, and others. The paper is an introduction to the 2016 WILGA Summer Symposium Proceedings, and digests some of the Symposium's chosen key presentations.

  15. Photonics applications and web engineering: WILGA Summer 2015

    Science.gov (United States)

    Romaniuk, Ryszard S.

    2015-09-01

    The Wilga Summer 2015 Symposium on Photonics Applications and Web Engineering was held on 23-31 May. The Symposium gathered over 350 participants, mainly young researchers active in optics, optoelectronics, photonics, and electronics technologies and applications. Around 300 presentations were given in a few main topical tracks including: bio-photonics, optical sensory networks, photonics-electronics-mechatronics co-design and integration, large functional system design and maintenance, Internet of Things, and others. The paper is an introduction to the 2015 WILGA Summer Symposium Proceedings, and digests some of the Symposium's chosen key presentations.

  16. Use of information-retrieval languages in automated retrieval of experimental data from long-term storage

    Science.gov (United States)

    Khovanskiy, Y. D.; Kremneva, N. I.

    1975-01-01

    Problems and methods of automating information retrieval operations in a data bank used for long-term storage and retrieval of data from scientific experiments are discussed. Existing information retrieval languages are analyzed along with those being developed. The results of studies discussing the application of the descriptive 'Kristall' language used in the 'ASIOR' automated information retrieval system are presented. The development and use of a specialized language of the classification-descriptive type, using universal decimal classification indices as the main descriptors, is described.

  17. Free web-based modelling platform for managed aquifer recharge (MAR) applications

    Science.gov (United States)

    Stefan, Catalin; Junghanns, Ralf; Glaß, Jana; Sallwey, Jana; Fatkhutdinov, Aybulat; Fichtner, Thomas; Barquero, Felix; Moreno, Miguel; Bonilla, José; Kwoyiga, Lydia

    2017-04-01

    Managed aquifer recharge represents a valuable instrument for sustainable water resources management. The concept implies purposeful infiltration of surface water into the underground for later recovery or environmental benefits. Over decades, MAR schemes were successfully installed worldwide for a variety of reasons: to maximize the natural storage capacity of aquifers, physical aquifer management, water quality management, and ecological benefits. The INOWAS-DSS platform provides a collection of free web-based tools for planning, management and optimization of the main components of MAR schemes. The tools are grouped into 13 specific applications that cover the most relevant challenges encountered at MAR sites, from both quantitative and qualitative perspectives. The applications include, among others, the optimization of MAR site location, the assessment of saltwater intrusion, the restoration of groundwater levels in overexploited aquifers, the maximization of the natural storage capacity of aquifers, the improvement of water quality, the design and operational optimization of MAR schemes, clogging development and risk assessment. The platform contains a collection of about 35 web-based tools of various degrees of complexity, which are either included in application-specific workflows or used as standalone modelling instruments. Among them are simple tools derived from data mining and empirical equations, analytical groundwater-related equations, as well as complex numerical flow and transport models (MODFLOW, MT3DMS and SEAWAT). Up to now, the simulation core of the INOWAS-DSS, which is based on the finite differences groundwater flow model MODFLOW, is implemented and runs on the web. A scenario analyser helps to easily set up and evaluate new management options as well as future developments such as land use and climate change, and to compare them to previous scenarios. Additionally, simple tools such as analytical equations to assess saltwater intrusion are already running online
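
    As an example of the kind of simple analytical tool mentioned above, the classical Ghyben-Herzberg approximation relates freshwater head to the depth of the fresh/saltwater interface. The sketch below uses textbook density values; whether INOWAS implements exactly this form is an assumption.

        # Ghyben-Herzberg approximation: z = rho_f / (rho_s - rho_f) * h,
        # roughly 40 m of interface depth per metre of freshwater head.
        # Illustrative only; not necessarily the INOWAS implementation.
        def interface_depth(head_m, rho_fresh=1000.0, rho_salt=1025.0):
            """Depth (m) of the fresh/saltwater interface below sea level."""
            return rho_fresh / (rho_salt - rho_fresh) * head_m

        print(interface_depth(0.5))  # -> 20.0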

  18. A Formal Approach to Exploiting Multi-Stage Attacks based on File-System Vulnerabilities of Web Applications (Extended Version)

    OpenAIRE

    De Meo, Federico; Viganò, Luca

    2017-01-01

    Web applications require access to the file-system for many different tasks. When analyzing the security of a web application, security analysts should thus consider the impact that file-system operations have on the security of the whole application. Moreover, the analysis should take into consideration how file-system vulnerabilities might interact with other vulnerabilities leading an attacker to breach into the web application. In this paper, we first propose a classification of file-...

  19. Understanding human quality judgment in assessing online forum contents for thread retrieval purpose

    Science.gov (United States)

    Ismail, Zuriati; Salim, Naomie; Huspi, Sharin Hazlin

    2017-10-01

    Compared to traditional materials or journals, user-generated contents are not peer-reviewed. The lack of quality control and the explosive growth of web contents make the task of finding quality information on the web especially critical. The existence of new facilities for producing web contents, such as forums, makes this issue more significant. This study focuses on online forum threads or discussions, where the forums contain valuable human-generated information in the form of discussions. Due to the unique structure of online forum pages, special techniques are required to organize and search for information in these forums. Quality-biased retrieval is a retrieval approach that searches for relevant documents and prioritizes higher-quality documents. Despite the major concern over content quality and the recent development of quality-biased retrieval, there is an urgent need to understand how quality content is judged, for retrieval and performance evaluation purposes. Furthermore, even though there are various studies on the quality of information, no standard framework has been established. The primary aim of this paper is to contribute to the understanding of human quality judgment in assessing online forum contents. The foundation of this study is to compare and evaluate different frameworks (for quality-biased retrieval and information quality). This led to the finding that many quality dimensions are redundant and some dimensions are understood differently between different studies. We conducted a survey in a crowdsourcing community to measure the importance of each quality dimension found in the various frameworks. Accuracy and ease of understanding are among the most important dimensions, while thread popularity and content manipulability are among the least important. This finding is beneficial in evaluating the contents of online forums.
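
    One common way to operationalize quality-biased retrieval is to interpolate a topical relevance score with a quality prior built from weighted quality dimensions. The sketch below is illustrative; the linear combination, the weights, and the parameter lam are assumptions, not the surveyed frameworks' definitions.

        # Quality-biased ranking sketch: interpolate topical relevance with
        # a quality prior. Weights reflect the survey's finding that
        # accuracy and ease of understanding matter most; values assumed.
        def quality_biased_score(relevance, quality_dims, lam=0.7):
            weights = {"accuracy": 0.5, "ease_of_understanding": 0.4, "popularity": 0.1}
            quality = sum(w * quality_dims.get(d, 0.0) for d, w in weights.items())
            return lam * relevance + (1.0 - lam) * quality

        print(quality_biased_score(0.8, {"accuracy": 0.9, "ease_of_understanding": 0.7}))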

  20. Agent Based Knowledge Management Solution using Ontology, Semantic Web Services and GIS

    Directory of Open Access Journals (Sweden)

    Andreea DIOSTEANU

    2009-01-01

    Full Text Available The purpose of our research is to develop an agent-based knowledge management application framework using a specific type of ontology that is able to facilitate semantic web service search and automatic composition. This solution can later be used to develop complex solutions for location-based services, supply chain management, etc. This application for modeling knowledge highlights the importance of agent interaction that leads to efficient enterprise interoperability. Furthermore, it proposes an "agent communication language" ontology that extends the OWL Lite standard approach and makes it more flexible in retrieving the proper data for identifying the agents that can best communicate and negotiate.

  1. Semantic similarity measure in biomedical domain leverage web search engine.

    Science.gov (United States)

    Chen, Chi-Huang; Hsieh, Sheau-Ling; Weng, Yung-Ching; Chang, Wen-Yung; Lai, Feipei

    2010-01-01

    Semantic similarity measures play an essential role in Information Retrieval and Natural Language Processing. In this paper we propose a page-count-based semantic similarity measure and apply it in biomedical domains. Previous research in semantic web related applications has deployed various semantic similarity measures. Despite the usefulness of the measurements in those applications, measuring semantic similarity between two terms remains a challenging task. The proposed method exploits page counts returned by the Web Search Engine. We define various similarity scores for two given terms P and Q, using the page counts for querying P, Q and P AND Q. Moreover, we propose a novel approach to compute semantic similarity using lexico-syntactic patterns with page counts. These different similarity scores are integrated by adapting support vector machines, to leverage the robustness of semantic similarity measures. Experimental results on two datasets achieve correlation coefficients of 0.798 on the dataset provided by A. Hliaoutakis, 0.705 on the dataset provided by T. Pedersen with physician scores and 0.496 on the dataset provided by T. Pedersen et al. with expert scores.
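
    One well-known page-count score of the kind described above is the WebJaccard coefficient over the counts for P, Q, and "P AND Q". The sketch below illustrates it with made-up counts; the paper's own score definitions may differ, and a real implementation would query a search-engine API.

        # WebJaccard over page counts; the counts here are made up for
        # illustration -- a real page_count() would call a search engine.
        def page_count(query):
            fake = {"jaguar": 50_000_000, "cat": 200_000_000,
                    "jaguar AND cat": 10_000_000}
            return fake.get(query, 0)

        def web_jaccard(p, q, cutoff=5):
            cp, cq, cpq = page_count(p), page_count(q), page_count(f"{p} AND {q}")
            if cpq < cutoff:      # damp noise from near-zero co-occurrence
                return 0.0
            return cpq / (cp + cq - cpq)

        print(web_jaccard("jaguar", "cat"))  # 1e7 / 2.4e8 = 0.0416...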

  2. The Use of QBIC Content-Based Image Retrieval System

    Directory of Open Access Journals (Sweden)

    Ching-Yi Wu

    2004-03-01

    Full Text Available The fast increase in digital images has drawn increasing attention to the development of image retrieval technologies. Content-based image retrieval (CBIR) has become an important approach to retrieving image data from a large collection. This article reports our results on the use and user study of a CBIR system. Thirty-eight students majoring in art and design were invited to use IBM's QBIC (Query by Image Content) system through the Internet. Data on their information needs, behaviors, and retrieval strategies were collected through in-depth interviews, observation, and a self-described think-aloud process. Important conclusions are: (1) There are four types of information needs for image data: implicit, inspirational, ever-changing, and purposive; the types of needs may change during the retrieval process. (2) CBIR is suitable for example-type queries, text retrieval is suitable for scenario-type queries, and image browsing is suitable for symbolic queries. (3) Unlike text retrieval, a detailed description of the query condition may more easily lead to retrieval failure. (4) CBIR is suitable for domain-specific image collections, not for images on the World-Wide Web. [Article content in Chinese]

  3. Effects of customization on application decisions and applicant pool characteristics in a web-based recruitment context.

    Science.gov (United States)

    Dineen, Brian R; Noe, Raymond A

    2009-01-01

    The authors examined 2 forms of customization in a Web-based recruitment context. Hypotheses were tested in a controlled study in which participants viewed multiple Web-based job postings that each included information about multiple fit categories. Results indicated that customization of information regarding person-organization (PO), needs-supplies, and demands-abilities (DA) fit (fit information customization) and customization of the order in which these fit categories were presented (configural customization) had differential effects on outcomes. Specifically, (a) applicant pool PO and DA fit were greater when fit information customization was provided, (b) applicant pool fit in high- versus low-relevance fit categories was better differentiated when configural customization was provided, and (c) overall application rates were lower when either or both forms of customization were provided. (PsycINFO Database Record (c) 2009 APA, all rights reserved).

  4. Checklist of accessibility in Web informational environments

    Directory of Open Access Journals (Sweden)

    Christiane Gomes dos Santos

    2017-01-01

    Full Text Available This research deals with the process of search, navigation and retrieval of information by people with blindness in the web environment, drawing on knowledge from the areas of information retrieval and information architecture to understand the strategies these users employ to access information on the web. It aims to propose the construction of an accessibility verification instrument, a checklist, to be used to analyze the behavior of people with blindness in search, navigation and retrieval actions on sites and pages. It is exploratory and descriptive research of a qualitative nature; the research methodology is a case study, simulating search, navigation and information retrieval with the speech synthesis system NonVisual Desktop Access in an assistive technologies laboratory, to substantiate the construction of the checklist for accessibility verification. The reliability of the research performed and its importance for the evaluation of accessibility in the web environment are considered, with the goal of improving access to information for people with reading limitations, so that the checklist can be used in website and page accessibility analyses.

  5. The Use of Web Based Expert System Application for Identification and Intervention of Children with Special Needs in Inclusive School

    Directory of Open Access Journals (Sweden)

    Dian Atnantomi Wiliyanto

    2017-11-01

    Full Text Available This research is conducted to determine the effectiveness of a web based expert system application for identification and intervention of children with special needs in inclusive schools. 40 teachers of inclusive schools in Surakarta participated in this research. The results showed that: (1) the web based expert system application was suitable to the needs of teachers/officers, at 50% (excellence criteria); (2) the web based expert system application was worthwhile for identification of children with special needs, at 50% (excellence criteria); (3) the web based expert system application was easy to use, at 52.5% (good criteria); and (4) the web based expert system application had result accuracy in making decisions, at 52.5% (good criteria). This shows that the use of the web based expert system application is effective for teachers in inclusive schools conducting identification and intervention, with an average percentage of more than 50%.

  6. Access Control of Web and Java Based Applications

    Science.gov (United States)

    Tso, Kam S.; Pajevski, Michael J.; Johnson, Bryan

    2011-01-01

    Cyber security has gained national and international attention as a result of near continuous headlines from financial institutions, retail stores, government offices and universities reporting compromised systems and stolen data. Concerns continue to rise as threats of service interruption, and spreading of viruses become ever more prevalent and serious. Controlling access to application layer resources is a critical component in a layered security solution that includes encryption, firewalls, virtual private networks, antivirus, and intrusion detection. In this paper we discuss the development of an application-level access control solution, based on an open-source access manager augmented with custom software components, to provide protection to both Web-based and Java-based client and server applications.

  7. Web service composition: a semantic web and automated planning technique application

    Directory of Open Access Journals (Sweden)

    Jaime Alberto Guzmán Luna

    2008-09-01

    Full Text Available This article proposes applying semantic web and artificial intelligence planning techniques to a web services composition model, dealing with problems of ambiguity in web service descriptions and handling incomplete web information. The model uses OWL-S services and implements a planning technique which handles open-world semantics in its reasoning process to resolve these problems. This resulted in a web services composition system incorporating a module for interpreting OWL-S services and converting them into a planning problem in PDDL, a planning module handling incomplete information, and an execution service module concurrently interacting with the planner for executing each service of the composition plan.

  8. Comprehensive Analysis of Semantic Web Reasoners and Tools: A Survey

    Science.gov (United States)

    Khamparia, Aditya; Pandey, Babita

    2017-01-01

    Ontologies are emerging as the best representation techniques for knowledge-based context domains. The continuing need for interoperation, collaboration and effective information retrieval has led to the creation of the semantic web, with the help of tools and reasoners which manage personalized information. The future of the semantic web lies in an ontology…

  9. A Web System Trace Model and Its Application to Web Design

    OpenAIRE

    Kong, Xiaoying; Liu, Li; Lowe, David

    2007-01-01

    Traceability analysis is crucial to the development of web-centric systems, particularly those with frequent system changes, fine-grained evolution and maintenance, and high level of requirements uncertainty. A trace model at the level of the web system architecture is presented in this paper to address the specific challenges of developing web-centric systems. The trace model separates the concerns of different stakeholders in the web development life cycle into viewpoints; and c...

  10. Raising Reliability of Web Search Tool Research through Replication and Chaos Theory

    OpenAIRE

    Nicholson, Scott

    1999-01-01

    Because the World Wide Web is a dynamic collection of information, the Web search tools (or "search engines") that index the Web are dynamic. Traditional information retrieval evaluation techniques may not provide reliable results when applied to the Web search tools. This study is the result of ten replications of the classic 1996 Ding and Marchionini Web search tool research. It explores the effects that replication can have on transforming unreliable results from one iteration into replica...

  11. Extending Symfony 2 web application framework

    CERN Document Server

    Armand, Sébastien

    2014-01-01

    Symfony is a high performance PHP framework for developing MVC web applications. Symfony1 allowed for ease of use but its shortcoming was the difficulty of extending it. However, this difficulty has now been eradicated by the more powerful and extensible Symfony2. Information on more advanced techniques for extending Symfony can be difficult to find, so you need one resource that contains the advanced features in a way you can understand. This tutorial offers solutions to all your Symfony extension problems. You will get to grips with all the extension points that Symfony, Twig, and Doctrine o

  12. Migrating Existing PHP Web Applications to the Cloud

    Directory of Open Access Journals (Sweden)

    Ionut VODA

    2014-01-01

    Full Text Available The purpose of this paper is to present a set of best practices for moving PHP web applications from traditional hosting to Cloud based hosting. PHP applications are widespread nowadays and they come in many shapes and sizes, and that is why they require special attention. The paper goes beyond just moving the code to the Cloud and setting up the run-time environment, as some architectural changes must be made at the application level most of the time. The decision of how and when to make these changes can make the difference between a successful migration and a failed one. It will be shown how to decouple and scale an application and how to scale a database while following high-availability principles.

  13. Cloud retrievals from satellite data using optimal estimation: evaluation and application to ATSR

    Directory of Open Access Journals (Sweden)

    C. A. Poulsen

    2012-08-01

    Full Text Available Clouds play an important role in balancing the Earth's radiation budget. Hence, it is vital that cloud climatologies are produced that quantify cloud macro- and microphysical parameters and the associated uncertainty. In this paper, we present an algorithm, ORAC (Oxford-RAL Retrieval of Aerosol and Cloud), which is based on fitting a physically consistent cloud model to satellite observations simultaneously from the visible to the mid-infrared, thereby ensuring that the resulting cloud properties provide a good representation of both the short-wave and long-wave radiative effects of the observed cloud. The advantages of the optimal estimation method are that it enables rigorous error propagation and the inclusion of all measurements and any a priori information, with associated errors, in a rigorous mathematical framework. The algorithm provides a measure of the consistency between the retrieved representation of the cloud and the satellite radiances. The cloud parameters retrieved are the cloud top pressure, cloud optical depth, cloud effective radius, cloud fraction and cloud phase.

    The algorithm can be applied to most visible/infrared satellite instruments. In this paper, we demonstrate its applicability to the Along-Track Scanning Radiometers ATSR-2 and AATSR. Examples of applying the algorithm to ATSR-2 flight data are presented and the sensitivity of the retrievals is assessed; in particular, the algorithm is evaluated for a number of simulated single-layer and multi-layer conditions. The algorithm was found to perform well for single-layer cloud except when the cloud was very thin, i.e., less than 1 optical depth. For multi-layer cloud, the algorithm was robust except when the upper ice cloud layer is less than five optical depths. In these cases the retrieved cloud top pressure and cloud effective radius become a weighted average of the two layers. The total optical depth of multi-layer cloud is retrieved well until the cloud becomes thick
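
    For reference, optimal estimation retrievals of this kind minimize a cost function that balances the fit to the measured radiances against the a priori state. In standard (Rodgers-style) notation, which may differ from the paper's exact symbols:

        % Standard optimal-estimation cost function (notation assumed):
        J(\mathbf{x}) = [\mathbf{y} - F(\mathbf{x})]^{T}\,\mathbf{S}_{y}^{-1}\,[\mathbf{y} - F(\mathbf{x})]
                      + [\mathbf{x} - \mathbf{x}_{a}]^{T}\,\mathbf{S}_{a}^{-1}\,[\mathbf{x} - \mathbf{x}_{a}]

    Here x is the state vector (e.g. cloud top pressure, optical depth, effective radius), y the measured radiances, F the forward model, x_a the a priori state, and S_y and S_a the measurement and a priori error covariance matrices; the minimum of J supplies the retrieval and its curvature the propagated uncertainty.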

  14. EpiCollect: linking smartphones to web applications for epidemiology, ecology and community data collection.

    Directory of Open Access Journals (Sweden)

    David M Aanensen

    2009-09-01

    Full Text Available Epidemiologists and ecologists often collect data in the field and, on returning to their laboratory, enter their data into a database for further analysis. The recent introduction of mobile phones that utilise the open source Android operating system, and which include (among other features) both GPS and Google Maps, provide new opportunities for developing mobile phone applications, which in conjunction with web applications, allow two-way communication between field workers and their project databases. Here we describe a generic framework, consisting of mobile phone software, EpiCollect, and a web application located within www.spatialepidemiology.net. Data collected by multiple field workers can be submitted by phone, together with GPS data, to a common web database and can be displayed and analysed, along with previously collected data, using Google Maps (or Google Earth). Similarly, data from the web database can be requested and displayed on the mobile phone, again using Google Maps. Data filtering options allow the display of data submitted by the individual field workers or, for example, those data within certain values of a measured variable or a time period. Data collection frameworks utilising mobile phones with data submission to and from central databases are widely applicable and can give a field worker similar display and analysis tools on their mobile phone that they would have if viewing the data in their laboratory via the web. We demonstrate their utility for epidemiological data collection and display, and briefly discuss their application in ecological and community data collection. Furthermore, such frameworks offer great potential for recruiting 'citizen scientists' to contribute data easily to central databases through their mobile phone.

  15. EpiCollect: linking smartphones to web applications for epidemiology, ecology and community data collection.

    Science.gov (United States)

    Aanensen, David M; Huntley, Derek M; Feil, Edward J; al-Own, Fada'a; Spratt, Brian G

    2009-09-16

    Epidemiologists and ecologists often collect data in the field and, on returning to their laboratory, enter their data into a database for further analysis. The recent introduction of mobile phones that utilise the open source Android operating system, and which include (among other features) both GPS and Google Maps, provide new opportunities for developing mobile phone applications, which in conjunction with web applications, allow two-way communication between field workers and their project databases. Here we describe a generic framework, consisting of mobile phone software, EpiCollect, and a web application located within www.spatialepidemiology.net. Data collected by multiple field workers can be submitted by phone, together with GPS data, to a common web database and can be displayed and analysed, along with previously collected data, using Google Maps (or Google Earth). Similarly, data from the web database can be requested and displayed on the mobile phone, again using Google Maps. Data filtering options allow the display of data submitted by the individual field workers or, for example, those data within certain values of a measured variable or a time period. Data collection frameworks utilising mobile phones with data submission to and from central databases are widely applicable and can give a field worker similar display and analysis tools on their mobile phone that they would have if viewing the data in their laboratory via the web. We demonstrate their utility for epidemiological data collection and display, and briefly discuss their application in ecological and community data collection. Furthermore, such frameworks offer great potential for recruiting 'citizen scientists' to contribute data easily to central databases through their mobile phone.
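
    A minimal sketch of the submission side of such a framework is shown below, using Flask and SQLite; the route names and record fields are hypothetical and are not the actual EpiCollect API.

        # Hypothetical field-data endpoint: phones POST GPS-tagged records
        # to a central database; a web map can then query them back.
        # Demo only -- not the actual EpiCollect API.
        import sqlite3
        from flask import Flask, request, jsonify

        app = Flask(__name__)
        db = sqlite3.connect("records.db", check_same_thread=False)
        db.execute("CREATE TABLE IF NOT EXISTS records"
                   " (worker TEXT, lat REAL, lon REAL, value TEXT)")

        @app.route("/submit", methods=["POST"])
        def submit():
            rec = request.get_json()
            db.execute("INSERT INTO records VALUES (?, ?, ?, ?)",
                       (rec["worker"], rec["lat"], rec["lon"], rec["value"]))
            db.commit()
            return jsonify(status="ok")

        @app.route("/records")
        def records():  # what a Google Maps overlay would fetch
            rows = db.execute("SELECT * FROM records").fetchall()
            return jsonify(records=rows)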

  16. Establishing and Applying Criteria for Evaluating the Ease of Use of Dynamic Platforms for Teaching Web Application Development

    Science.gov (United States)

    Dehinbo, Johnson

    2011-01-01

    The widespread use of the Internet and the World Wide Web led to the availability of many platforms for developing dynamic Web application and the problem of choosing the most appropriate platform that will be easy to use for undergraduate students of web applications development in tertiary institutions. Students beginning to learn web…

  17. The design and implementation of web mining in web sites security

    Science.gov (United States)

    Li, Jian; Zhang, Guo-Yin; Gu, Guo-Chang; Li, Jian-Li

    2003-06-01

    Backdoors and information leaks in Web servers can be detected by applying Web mining techniques to abnormal Web log and Web application log data. The security of Web servers can thereby be enhanced and the damage of illegal access avoided. Firstly, a system for discovering the patterns of information leakages in CGI scripts from Web log data is proposed. Secondly, those patterns are provided to system administrators so they can modify their code and enhance their Web site security. The following aspects are described: one is to combine the web application log with the web log to extract more information, so web data mining can be used to mine the web log and discover information that a firewall and an Intrusion Detection System cannot find. Another approach is to propose an operation module of the web site to enhance Web site security. In the cluster server session, a density-based clustering technique is used to reduce resource cost and obtain better efficiency.
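
    The density-based clustering step can be pictured as grouping per-session log features and flagging low-density outliers. The sketch below uses scikit-learn's DBSCAN; the session features and parameters are illustrative assumptions, not the paper's configuration.

        # DBSCAN over per-session log features; label -1 marks outlier
        # sessions worth an administrator's attention. Features and
        # parameters are assumed for illustration.
        import numpy as np
        from sklearn.cluster import DBSCAN

        # One row per session: [requests/minute, error ratio, distinct URLs]
        sessions = np.array([
            [2.0, 0.00, 5],
            [2.5, 0.02, 6],
            [3.0, 0.01, 7],
            [90.0, 0.85, 300],   # e.g. a scanner probing CGI scripts
        ])

        print(DBSCAN(eps=3.0, min_samples=2).fit_predict(sessions))  # [0 0 0 -1]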

  18. Update on Small Modular Reactors Dynamic System Modeling Tool: Web Application

    International Nuclear Information System (INIS)

    Hale, Richard Edward; Cetiner, Sacit M.; Fugate, David L.; Batteh, John J; Tiller, Michael M.

    2015-01-01

    Previous reports focused on the development of component and system models as well as end-to-end system models using Modelica and Dymola for two advanced reactor architectures: (1) Advanced Liquid Metal Reactor and (2) fluoride high-temperature reactor (FHR). The focus of this report is the release of the first beta version of the web-based application for model use and collaboration, as well as an update on the FHR model. The web-based application allows novice users to configure end-to-end system models from preconfigured choices to investigate the instrumentation and controls implications of these designs and allows for the collaborative development of individual component models that can be benchmarked against test systems for potential inclusion in the model library. A description of this application is provided along with examples of its use and a listing and discussion of all the models that currently exist in the library.

  19. The 2nd DBCLS BioHackathon: interoperable bioinformatics Web services for integrated applications

    Directory of Open Access Journals (Sweden)

    Katayama Toshiaki

    2011-08-01

    Full Text Available Abstract Background The interaction between biological researchers and the bioinformatics tools they use is still hampered by incomplete interoperability between such tools. To ensure interoperability initiatives are effectively deployed, end-user applications need to be aware of, and support, best practices and standards. Here, we report on an initiative in which software developers and genome biologists came together to explore and raise awareness of these issues: BioHackathon 2009. Results Developers in attendance came from diverse backgrounds, with experts in Web services, workflow tools, text mining and visualization. Genome biologists provided expertise and exemplar data from the domains of sequence and pathway analysis and glyco-informatics. One goal of the meeting was to evaluate the ability to address real world use cases in these domains using the tools that the developers represented. This resulted in i) a workflow to annotate 100,000 sequences from an invertebrate species; ii) an integrated system for analysis of the transcription factor binding sites (TFBSs) enriched based on differential gene expression data obtained from a microarray experiment; iii) a workflow to enumerate putative physical protein interactions among enzymes in a metabolic pathway using protein structure data; iv) a workflow to analyze glyco-gene-related diseases by searching for human homologs of glyco-genes in other species, such as fruit flies, and retrieving their phenotype-annotated SNPs. Conclusions Beyond deriving prototype solutions for each use-case, a second major purpose of the BioHackathon was to highlight areas of insufficiency. We discuss the issues raised by our exploration of the problem/solution space, concluding that there are still problems with the way Web services are modeled and annotated, including: i) the absence of several useful data or analysis functions in the Web service "space"; ii) the lack of documentation of methods; iii) lack of

  20. The 2nd DBCLS BioHackathon: interoperable bioinformatics Web services for integrated applications

    Science.gov (United States)

    2011-01-01

    Background The interaction between biological researchers and the bioinformatics tools they use is still hampered by incomplete interoperability between such tools. To ensure interoperability initiatives are effectively deployed, end-user applications need to be aware of, and support, best practices and standards. Here, we report on an initiative in which software developers and genome biologists came together to explore and raise awareness of these issues: BioHackathon 2009. Results Developers in attendance came from diverse backgrounds, with experts in Web services, workflow tools, text mining and visualization. Genome biologists provided expertise and exemplar data from the domains of sequence and pathway analysis and glyco-informatics. One goal of the meeting was to evaluate the ability to address real world use cases in these domains using the tools that the developers represented. This resulted in i) a workflow to annotate 100,000 sequences from an invertebrate species; ii) an integrated system for analysis of the transcription factor binding sites (TFBSs) enriched based on differential gene expression data obtained from a microarray experiment; iii) a workflow to enumerate putative physical protein interactions among enzymes in a metabolic pathway using protein structure data; iv) a workflow to analyze glyco-gene-related diseases by searching for human homologs of glyco-genes in other species, such as fruit flies, and retrieving their phenotype-annotated SNPs. Conclusions Beyond deriving prototype solutions for each use-case, a second major purpose of the BioHackathon was to highlight areas of insufficiency. We discuss the issues raised by our exploration of the problem/solution space, concluding that there are still problems with the way Web services are modeled and annotated, including: i) the absence of several useful data or analysis functions in the Web service "space"; ii) the lack of documentation of methods; iii) lack of compliance with the SOAP

  1. Information Retrieval and Text Mining Technologies for Chemistry.

    Science.gov (United States)

    Krallinger, Martin; Rabal, Obdulia; Lourenço, Anália; Oyarzabal, Julen; Valencia, Alfonso

    2017-06-28

    Efficient access to chemical information contained in scientific literature, patents, technical reports, or the web is a pressing need shared by researchers and patent attorneys from different chemical disciplines. Retrieval of important chemical information in most cases starts with finding relevant documents for a particular chemical compound or family. Targeted retrieval of chemical documents is closely connected to the automatic recognition of chemical entities in the text, which commonly involves the extraction of the entire list of chemicals mentioned in a document, including any associated information. In this Review, we provide a comprehensive and in-depth description of fundamental concepts, technical implementations, and current technologies for meeting these information demands. A strong focus is placed on community challenges addressing systems performance, more particularly CHEMDNER and CHEMDNER patents tasks of BioCreative IV and V, respectively. Considering the growing interest in the construction of automatically annotated chemical knowledge bases that integrate chemical information and biological data, cheminformatics approaches for mapping the extracted chemical names into chemical structures and their subsequent annotation together with text mining applications for linking chemistry with biological information are also presented. Finally, future trends and current challenges are highlighted as a roadmap proposal for research in this emerging field.

  2. User Interface Composition with COTS-UI and Trading Approaches: Application for Web-Based Environmental Information Systems

    Science.gov (United States)

    Criado, Javier; Padilla, Nicolás; Iribarne, Luis; Asensio, Jose-Andrés

    Due to the globalization of the information and knowledge society on the Internet, modern Web-based Information Systems (WIS) must be flexible and prepared to be easily accessible and manageable in real time. In recent times, special interest has been paid to the globalization of information through a common vocabulary (i.e., ontologies), and to the standardized way in which information is retrieved on the Web (i.e., powerful search engines and intelligent software agents). These same principles of globalization and standardization should also be valid for the user interfaces of WIS, but they are still built on traditional development paradigms. In this paper we present an approach to reduce this globalization/standardization gap in the generation of WIS user interfaces by using a real-time "bottom-up" composition perspective with COTS-interface components (interface-widget-type components) and trading services.

  3. Advancements in web-database applications for rabies surveillance

    Directory of Open Access Journals (Sweden)

    Bélanger Denise

    2011-08-01

    Full Text Available Abstract Background Protection of public health from rabies is informed by the analysis of surveillance data from human and animal populations. In Canada, public health, agricultural and wildlife agencies at the provincial and federal level are responsible for rabies disease control, and this has led to multiple agency-specific data repositories. Aggregation of agency-specific data into one database application would enable more comprehensive data analyses and effective communication among participating agencies. In Québec, RageDB was developed to house surveillance data for the raccoon rabies variant, representing the next generation in web-based database applications that provide a key resource for the protection of public health. Results RageDB incorporates data from, and grants access to, all agencies responsible for the surveillance of raccoon rabies in Québec. Technological advancements of RageDB over previous rabies surveillance databases include 1) automatic integration of multi-agency data and diagnostic results on a daily basis; 2) a web-based data editing interface that enables authorized users to add, edit and extract data; and 3) an interactive dashboard to help visualize data simply and efficiently, in table, chart, and cartographic formats. Furthermore, RageDB stores data from citizens who voluntarily report sightings of rabies suspect animals. We also discuss how sightings data can indicate public perception of the risk of raccoon rabies and thus aid in directing the allocation of disease control resources for protecting public health. Conclusions RageDB provides an example in the evolution of spatio-temporal database applications for the storage, analysis and communication of disease surveillance data. The database was fast and inexpensive to develop by using open-source technologies, simple and efficient design strategies, and shared web hosting. The database increases communication among agencies collaborating to protect human health from

  4. Programming Web services with Perl

    CERN Document Server

    Ray, Randy J

    2003-01-01

    Given Perl's natural fit for web applications development, it's no surprise that Perl is also a natural choice for web services development. It's the most popular web programming language, with strong implementations of both SOAP and XML-RPC, the leading ways to distribute applications using web services. But books on web services focus on writing these applications in Java or Visual Basic, leaving Perl programmers with few resources to get them started. Programming Web Services with Perl changes that, bringing Perl users all the information they need to create web services using their favori

  5. Concept similarity and related categories in information retrieval using formal concept analysis

    Science.gov (United States)

    Eklund, P.; Ducrou, J.; Dau, F.

    2012-11-01

    The application of formal concept analysis to the problem of information retrieval has been shown useful but has lacked any real analysis of the idea of relevance ranking of search results. SearchSleuth is a program developed to experiment with the automated local analysis of Web search using formal concept analysis. SearchSleuth extends a standard search interface to include a conceptual neighbourhood centred on a formal concept derived from the initial query. This neighbourhood of the concept derived from the search terms is decorated with its upper and lower neighbours representing more general and special concepts, respectively. SearchSleuth is in many ways an archetype of search engines based on formal concept analysis with some novel features. In SearchSleuth, the notion of related categories - which are themselves formal concepts - is also introduced. This allows the retrieval focus to shift to a new formal concept called a sibling. This movement across the concept lattice needs to relate one formal concept to another in a principled way. This paper presents the issues concerning exploring, searching, and ordering the space of related categories. The focus is on understanding the use and meaning of proximity and semantic distance in the context of information retrieval using formal concept analysis.
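
    The formal concept that SearchSleuth centres its neighbourhood on can be derived with the two standard derivation operators over a formal context (objects = documents, attributes = terms). The sketch below uses toy data and is not the SearchSleuth implementation.

        # Deriving a formal concept (extent, intent) from query terms in a
        # toy document/term context; not the SearchSleuth implementation.
        context = {                     # document -> terms it contains
            "d1": {"jaguar", "car"},
            "d2": {"jaguar", "cat"},
            "d3": {"jaguar", "car", "dealer"},
        }

        def extent(terms):              # documents containing all the terms
            return {d for d, ts in context.items() if terms <= ts}

        def intent(docs):               # terms shared by all the documents
            ts = [context[d] for d in docs]
            return set.intersection(*ts) if ts else set()

        ext = extent({"jaguar", "car"})
        print(ext, intent(ext))         # extent {'d1','d3'}, intent {'jaguar','car'}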

  6. Intelligent Information Retrieval: An Introduction.

    Science.gov (United States)

    Gauch, Susan

    1992-01-01

    Discusses the application of artificial intelligence to online information retrieval systems and describes several systems: (1) CANSEARCH, from MEDLINE; (2) Intelligent Interface for Information Retrieval (I3R); (3) Gauch's Query Reformulation; (4) Environmental Pollution Expert (EP-X); (5) PLEXUS (gardening); and (6) SCISOR (corporate…

  7. Key Technologies and Applications of Satellite and Sensor Web-coupled Real-time Dynamic Web Geographic Information System

    Directory of Open Access Journals (Sweden)

    CHEN Nengcheng

    2017-10-01

    Full Text Available The geo-spatial information service has long failed to reflect the live status on the spot and to meet the needs of integrated monitoring and real-time information. To tackle the problems in observation sharing and integrated management of space-borne, air-borne, and ground-based platforms, and in the efficient service of spatio-temporal information, an observation sharing model was proposed. The key technologies in real-time dynamic geographical information systems (GIS), including maximum spatio-temporal coverage-based optimal layout of the earth-observation sensor Web, task-driven and feedback-based control, real-time access to streaming observations, dynamic simulation, and warning and decision support, were detailed. A real-time dynamic Web geographical information system (WebGIS) named GeoSensor, and its applications in sensing and management of spatio-temporal information of the Yangtze River basin, including navigation, flood prevention, and power generation, were also introduced.

  8. Internet: A place for patent retrieval | Mukesh | African Journal of ...

    African Journals Online (AJOL)

    -review, we are presenting some web links that will help any researcher to get acquainted with the rules and regulations of filing intellectual property in some countries, as the internet is now viewed as the place from where retrieval of information ...

  9. SWORS: a system for the efficient retrieval of relevant spatial web objects

    DEFF Research Database (Denmark)

    Cao, Xin; Cong, Gao; Jensen, Christian S.

    2012-01-01

    Spatial web objects that possess both a geographical location and a textual description are gaining in prevalence. This gives prominence to spatial keyword queries that exploit both location and textual arguments. Such queries are used in many web services such as yellow pages and maps services....

  10. CERN Web Application Detection. Refactoring and release as open source software

    CERN Document Server

    Lizonczyk, Piotr

    2015-01-01

    This paper covers my work during my assignment as a participant of the CERN Summer Students 2015 programme. The project was aimed at refactoring and publication of the Web Application Detection tool, which was developed at CERN and previously used internally by the Computer Security team. The tasks performed ranged from initial refactoring of the code, which had been developed as a script rather than a Python package, through extracting components that were not specific to CERN usage, to the final release of the source code on GitHub and its integration with third-party software, i.e. the w3af tool. Ultimately, the Web Application Detection software received positive responses, being downloaded ca. 1500 times at the time of writing this report.

  11. The semantic web : research and applications : 7th extended semantic web conference, ESWC 2010, Heraklion, Crete, Greece, May 30 - June 3, 2010 : proceedings

    NARCIS (Netherlands)

    Aroyo, L.M.; Antoniou, G.; Hyvönen, E.; Teije, ten A.; Stuckenschmidt, H.; Cabral, L.; Tudorache, T.

    2010-01-01

    Preface. This volume contains papers from the technical program of the 7th Extended Semantic Web Conference (ESWC 2010), held from May 30 to June 3, 2010, in Heraklion, Greece. ESWC 2010 presented the latest results in research and applications of Semantic Web technologies. ESWC 2010 built on the

  12. Web services foundations

    CERN Document Server

    Bouguettaya, Athman; Daniel, Florian

    2013-01-01

    Web services and Service-Oriented Computing (SOC) have become thriving areas of academic research, joint university/industry research projects, and novel IT products on the market. SOC is the computing paradigm that uses Web services as building blocks for the engineering of composite, distributed applications out of the reusable application logic encapsulated by Web services. Web services could be considered the best-known and most standardized technology in use today for distributed computing over the Internet.Web Services Foundations is the first installment of a two-book collection coverin

  13. USING WEB MINING IN E-COMMERCE APPLICATIONS

    Directory of Open Access Journals (Sweden)

    Claudia Elena Dinucă

    2011-09-01

    Full Text Available Nowadays, the web is an important part of our daily life and the best medium for doing business. Large companies rethink their business strategies, using the web to improve their business. Business carried out on the Web offers potential customers and partners a place where products and the specific business can be found. A business presence through a company web site has several advantages, as it breaks the barriers of time and space compared with a physical office. To differentiate themselves in the Internet economy, winning companies have realized that e-commerce is more than just buying and selling; appropriate strategies are key to improving competitive power. One effective technique used for this purpose is data mining, the process of extracting interesting knowledge from data. Web mining is the use of data mining techniques to extract information from web data. This article presents the three components of web mining: web usage mining, web structure mining and web content mining.

  14. Semantic Advertising for Web 3.0

    Science.gov (United States)

    Thomas, Edward; Pan, Jeff Z.; Taylor, Stuart; Ren, Yuan; Jekjantuk, Nophadol; Zhao, Yuting

    Advertising on the World Wide Web is based around automatically matching web pages with appropriate advertisements, in the form of banner ads, interactive adverts, or text links. Traditionally this has been done by manual classification of pages, or more recently using information retrieval techniques to find the most important keywords from the page, and match these to keywords being used by adverts. In this paper, we propose a new model for online advertising, based around lightweight embedded semantics. This will improve the relevancy of adverts on the World Wide Web and help to kick-start the use of RDFa as a mechanism for adding lightweight semantic attributes to the Web. Furthermore, we propose a system architecture for the proposed new model, based on our scalable ontology reasoning infrastructure TrOWL.

  15. A Proposed Smart E-Learning System Using Cloud Computing Services: PaaS, IaaS and Web 3.0

    Directory of Open Access Journals (Sweden)

    Dr.Mona M. Nasr

    2012-09-01

    Full Text Available E-learning systems need an improved infrastructure that can devote the required computation and storage resources to them. Microsoft cloud computing technologies, although in their early stages, have managed to change the way applications are developed and accessed. The objective of this paper is to combine various technologies to design an architecture for e-learning systems. Web 3.0 uses widget aggregation, intelligent retrieval, user interest modeling and semantic annotation. These technologies are aimed at running applications as services over the internet on a flexible infrastructure. Cloud computing provides a low-cost solution for the researchers, faculty and learners of academic institutions. In this paper we integrate cloud computing as a platform with Web 3.0 to build intelligent e-learning systems.

  16. Functionality for learning networks: lessons learned from social web applications

    NARCIS (Netherlands)

    Berlanga, Adriana; Sloep, Peter; Brouns, Francis; Van Rosmalen, Peter; Bitter-Rijpkema, Marlies; Koper, Rob

    2007-01-01

    Berlanga, A. J., Sloep, P., Brouns, F., Van Rosmalen, P., Bitter-Rijpkema, M., & Koper, R. (2007). Functionality for learning networks: lessons learned from social web applications. Proceedings of the ePortfolio 2007 Conference. October, 18-19, 2007, Maastricht, The Netherlands. [See also

  17. Tracking Outfield Employees using GPS in Web Applications

    Directory of Open Access Journals (Sweden)

    Kasinathan Vinothini

    2018-01-01

    Full Text Available This paper presents e-Track, a web-based tracking system for outfield employees that caters for various business activities demanded by business owners. Such demands range from simple task assignment to employee location tracking and remote observation of employees' task progress. The objective of the proposed system is two-fold: first, to let employees access the application and clock in for work; second, to provide a standalone web system for employers to determine the approximate location of staff assigned outfield duties. IP address recognition ensures that no buddy punching takes place. e-Track is hoped to increase efficiency among employees by saving time travelling between branches during outfield duties. In the future, e-Track will be integrated with claim and payment modules to support arrangements for outfield duties.
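
    The IP-based clock-in idea described above is straightforward to prototype. The following is a minimal sketch of such an endpoint in Python (Flask); the route name, payload fields, and the per-branch network check are illustrative assumptions, not part of e-Track's actual design.

```python
from flask import Flask, request, jsonify

app = Flask(__name__)

# Hypothetical per-branch network prefixes; real rules would be configured data.
BRANCH_NETWORKS = {"HQ": "203.0.113."}

@app.route("/clock-in", methods=["POST"])
def clock_in():
    data = request.get_json()
    # Behind a proxy, the client address arrives in X-Forwarded-For (first entry).
    ip = (request.headers.get("X-Forwarded-For") or request.remote_addr or "").split(",")[0].strip()
    branch = data.get("branch", "HQ")
    # A simple buddy-punching guard: the request must originate from the branch network.
    if not ip.startswith(BRANCH_NETWORKS.get(branch, "")):
        return jsonify({"status": "rejected", "reason": "unrecognized IP"}), 403
    return jsonify({"status": "clocked-in", "employee": data["employee_id"], "ip": ip})
```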

  18. SWS: accessing SRS sites contents through Web Services

    OpenAIRE

    Romano, Paolo; Marra, Domenico

    2008-01-01

    Background Web Services and Workflow Management Systems can support creation and deployment of network systems, able to automate data analysis and retrieval processes in biomedical research. Web Services have been implemented at bioinformatics centres and workflow systems have been proposed for biological data analysis. New databanks are often developed by taking into account these technologies, but many existing databases do not allow a programmatic access. Only a fraction of available datab...

  19. Atmospheric Retrievals from Exoplanet Observations and Simulations with BART

    Science.gov (United States)

    Harrington, Joseph

    the planet has uniform composition and the same temperature profile everywhere. We do not know this assumption's impact. While Spitzer and HST have few exoplanet observing modes, JWST will have over 20. Given the signal challenges and the complexity of retrieval, modeling the observations and data analysis is the best way to optimize an observing plan. Our project solves all of these problems. Using only open-source codes, with tools available to the community for their immediate application in JWST and HST proposals and analyses, we will produce a faithful simulator of 2D spectral and photometric frames from each JWST exoplanet mode (WFC3 spatial scan mode works already), including jitter and intrapixel effects. We will extract and calibrate data, analyzing them with BART. Given planetary input spectra for terrestrial, super-Earth, Neptune, and Jupiter-class planets, and a variety of stellar spectra, we will determine the best combination of observations to recover each atmosphere, and the limits where low SNR or spectral coverage produce deceptive results. To facilitate these analyses, we will adapt an existing cloud model to BART, add condensate code now being written to its thermochemical model, include scattering, add a 3D atmosphere module (for dayside occultation mapping and the 1D vs. 3D question), and improve performance and documentation, among other improvements. We will host a web site and community discussions online and at conferences about retrieval issues. We will develop validation tests for radiative-transfer and BART-style retrieval codes, and provide examples to validate others' codes. We will engage the retrieval community in data challenges. We will provide web-enabled tools to specify planets easily for modeling. We will make all of these tools, tests, and comparisons available online so everyone can use them to maximize NASA's investment in high-end observing capabilities to characterize exoplanets.

  20. C#: Connecting a Mobile Application to Oracle Server via Web Services

    Directory of Open Access Journals (Sweden)

    Daniela Ilea

    2008-01-01

    Full Text Available This article focuses on mobile development using Visual Studio 2005, web services and their connection to Oracle Server, aiming to help programmers create simple and useful mobile applications.

  1. WEBnm@: a web application for normal mode analyses of proteins

    Directory of Open Access Journals (Sweden)

    Reuter Nathalie

    2005-03-01

    Full Text Available Abstract Background Normal mode analysis (NMA) has become the method of choice to investigate the slowest motions in macromolecular systems. NMA is especially useful for large biomolecular assemblies, such as transmembrane channels or virus capsids. NMA relies on the hypothesis that the vibrational normal modes having the lowest frequencies (also named soft modes) describe the largest movements in a protein and are the ones that are functionally relevant. Results We developed a web-based server to perform normal mode calculations and different types of analyses. Starting from a structure file provided by the user in PDB format, the server calculates the normal modes and subsequently offers the user a series of automated calculations: normalized squared atomic displacements, vector field representation, and animation of the first six vibrational modes. Each analysis is performed independently of the others and results can be visualized using only a web browser. No additional plug-in or software is required. For users who would like to analyze the results with their favorite software, raw results can also be downloaded. The application is available at http://www.bioinfo.no/tools/normalmodes. We present here the underlying theory, the application architecture and an illustration of its features using a large transmembrane protein as an example. Conclusion We built an efficient and modular web application for normal mode analysis of proteins. Non-specialists can easily and rapidly evaluate the degree of flexibility of multi-domain protein assemblies and characterize the large amplitude movements of their domains.
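
    As a rough illustration of the computation behind such a server, the sketch below diagonalizes a mass-weighted Hessian with NumPy, discards the six trivial rigid-body modes, and derives normalized squared atomic displacements. This is a minimal sketch of standard NMA, not WEBnm@'s actual code.

```python
import numpy as np

def normal_modes(hessian, n_modes=6, n_trivial=6):
    """Return the lowest-frequency non-trivial modes of a mass-weighted
    Hessian plus normalized squared atomic displacements per mode."""
    eigvals, eigvecs = np.linalg.eigh(hessian)            # ascending eigenvalues
    # Skip the six zero modes (rigid-body translation and rotation).
    modes = eigvecs[:, n_trivial:n_trivial + n_modes]     # shape (3N, n_modes)
    freqs = np.sqrt(np.abs(eigvals[n_trivial:n_trivial + n_modes]))
    # Per-atom squared displacement for each mode (rows grouped as x, y, z).
    disp = modes.reshape(-1, 3, n_modes)
    sq = (disp ** 2).sum(axis=1)                          # shape (n_atoms, n_modes)
    sq /= sq.sum(axis=0, keepdims=True)                   # normalize each mode
    return freqs, sq
```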

  2. Application of a regularized model inversion system (REGFLEC) to multi-temporal RapidEye imagery for retrieving vegetation characteristics

    KAUST Repository

    Houborg, Rasmus

    2015-10-14

    Accurate retrieval of canopy biophysical and leaf biochemical constituents from space observations is critical to diagnosing the functioning and condition of vegetation canopies across spatio-temporal scales. Retrieved vegetation characteristics may serve as important inputs to precision farming applications and as constraints in spatially and temporally distributed model simulations of water and carbon exchange processes. However, significant challenges remain in the translation of composite remote sensing signals into useful biochemical, physiological or structural quantities and the treatment of confounding factors in spectrum-trait relations. Bands in the red-edge spectrum have particular potential for improving the robustness of retrieved vegetation properties. The development of observationally based vegetation retrieval capacities, effectively constrained by the enhanced information content afforded by bands in the red-edge, is a needed investment towards optimizing the benefit of current and future satellite sensor systems. In this study, a REGularized canopy reFLECtance model (REGFLEC) for joint leaf chlorophyll (Chll) and leaf area index (LAI) retrieval is extended to sensor systems with a band in the red-edge region for the first time. Application to time-series of 5 m resolution multi-spectral RapidEye data is demonstrated over an irrigated agricultural region in central Saudi Arabia, showcasing the value of satellite-derived crop information at this fine scale for precision management. Validation against in-situ measurements in fields of alfalfa, Rhodes grass, carrot and maize indicate improved accuracy of retrieved vegetation properties when exploiting red-edge information in the model inversion process. © (2015) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE).
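
    REGFLEC itself inverts a full canopy reflectance model, but the value of a red-edge band can be suggested with a much simpler proxy: the red-edge chlorophyll index CI = (NIR / red-edge) - 1 of Gitelson and colleagues. The sketch below assumes atmospherically corrected reflectance arrays; it is an illustration of the red-edge signal, not the REGFLEC inversion.

```python
import numpy as np

def ci_red_edge(nir, red_edge, eps=1e-6):
    """Red-edge chlorophyll index, roughly proportional to canopy chlorophyll."""
    return nir / np.maximum(red_edge, eps) - 1.0

# Toy RapidEye-like reflectances (band 4 = red edge, band 5 = NIR).
nir = np.array([0.42, 0.38, 0.30])
red_edge = np.array([0.18, 0.21, 0.25])
print(ci_red_edge(nir, red_edge))  # higher values suggest more chlorophyll
```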

  3. New nuclear data service at CNEA: retrieval of the updated libraries from a local Web-Server

    Energy Technology Data Exchange (ETDEWEB)

    Suarez, Patricia M [Comision Nacional de Energia Atomica, Ezeiza (Argentina). Centro Atomico Ezeiza; Pepe, Maria E [Comision Nacional de Energia Atomica, General San Martin (Argentina). Centro Atomico Constituyentes; Sbaffoni, Maria M [Comision Nacional de Energia Atomica, Buenos Aires (Argentina). Gerencia de Tecnologia

    2000-07-01

    A new on-line Nuclear Data Service was implemented on the National Atomic Energy Commission (CNEA) Web site. The information usually issued by the Nuclear Data Section of the IAEA (NDS-IAEA) on CD-ROM, as well as complementary libraries periodically downloaded from a mirror server of the NDS-IAEA Service located at IPEN, Brazil, is available on the new CNEA Web page. On the site, users can find numerical data on neutron, charged-particle, and photonuclear reactions, nuclear structure, and decay data, with related bibliographic information. This data server is permanently maintained and updated by CNEA staff members, who also offer local users assistance on the use and retrieval of nuclear data. (author)

  4. Web2Quests: Updating a Popular Web-Based Inquiry-Oriented Activity

    Science.gov (United States)

    Kurt, Serhat

    2009-01-01

    WebQuest is a popular inquiry-oriented activity in which learners use Web resources. Since the creation of the innovation, almost 15 years ago, the Web has changed significantly, while the WebQuest technique has changed little. This article examines possible applications of new Web trends on WebQuest instructional strategy. Some possible…

  5. A Framework for Automated Testing of JavaScript Web Applications

    DEFF Research Database (Denmark)

    Artzi, Shay; Dolby, Julian; Jensen, Simon Holm

    2011-01-01

    Current practice in testing JavaScript web applications requires manual construction of test cases, which is difficult and tedious. We present a framework for feedback-directed automated test generation for JavaScript in which execution is monitored to collect information that directs the test...

  6. Development of Grid-like Applications for Public Health Using Web 2.0 Mashup Techniques

    OpenAIRE

    Scotch, Matthew; Yip, Kevin Y.; Cheung, Kei-Hoi

    2008-01-01

    Development of public health informatics applications often requires the integration of multiple data sources. This process can be challenging due to issues such as different file formats, schemas, naming systems, and having to scrape the content of web pages. A potential solution to these system development challenges is the use of Web 2.0 technologies. In general, Web 2.0 technologies are new internet services that encourage and value information sharing and collaboration among individuals....

  7. BOWS (bioinformatics open web services) to centralize bioinformatics tools in web services.

    Science.gov (United States)

    Velloso, Henrique; Vialle, Ricardo A; Ortega, J Miguel

    2015-06-02

    Bioinformaticians face a range of difficulties in getting locally-installed tools running and producing results; they would greatly benefit from a system that could centralize most of the tools, using an easy interface for input and output. Web services, due to their universal nature and widely known interface, constitute a very good option to achieve this goal. Bioinformatics open web services (BOWS) is a system based on generic web services produced to allow programmatic access to applications running on high-performance computing (HPC) clusters. BOWS intermediates access to registered tools by providing front-end and back-end web services. Programmers can install applications on HPC clusters in any programming language and use the back-end service to check for new jobs and their parameters, and then to send the results to BOWS. Programs running on simple computers consume the BOWS front-end service to submit new processes and read results. BOWS compiles Java clients, which encapsulate the front-end web service requests, and automatically creates a web page that lists the registered applications and clients. Applications registered with Bioinformatics open web services can be accessed from virtually any programming language through web services, or using the standard Java clients. The back-end can run on HPC clusters, allowing bioinformaticians to remotely run high-processing-demand applications directly from their machines.
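
    The back-end contract described above (check for new jobs, run the tool, post results) can be pictured with a small polling loop. The endpoint paths and JSON fields below are assumptions for illustration only; BOWS's actual service interface should be consulted.

```python
import time
import requests

BASE = "https://bows.example.org/api"  # hypothetical endpoint, not BOWS's real URL

def run_tool(params):
    """Placeholder for the HPC application invoked by the back-end."""
    return {"output": f"processed {params}"}

def backend_loop(tool_id):
    # Poll for new jobs registered for this tool, run them, post results back.
    while True:
        resp = requests.get(f"{BASE}/tools/{tool_id}/jobs/next", timeout=30)
        job = resp.json() if resp.ok else None
        if job:
            requests.post(f"{BASE}/jobs/{job['id']}/result", json=run_tool(job["parameters"]))
        else:
            time.sleep(10)  # no pending jobs; wait before polling again
```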

  8. WEB 2.0 applications by SMES as tool for innovation and improvement; Aplicaciones de la WEB 2.0 en las PYMES como herramienta para la innovacion y mejora

    Energy Technology Data Exchange (ETDEWEB)

    Jaca-Garcia, M. C.; Serrano-Barcena, N.

    2010-07-01

    The term Web 2.0 is associated with web-based technology applications and tools used by communities of users. These applications let users access and produce information in a simple way, without the need for complicated software on their computers. This technology can also be used by small and medium enterprises to improve any type of project involving collaborative work. This paper presents different Web 2.0 applications that can be used by SMEs (small and medium enterprises) and the steps that should be taken to implement them. Different examples of their use are also explained. (Author) 15 refs.

  9. RETRIEVAL EVENTS EVALUATION

    International Nuclear Information System (INIS)

    Wilson, T.

    1999-01-01

    The purpose of this analysis is to evaluate impacts to the retrieval concept presented in the Design Analysis ''Retrieval Equipment and Strategy'' (Reference 6) from abnormal events based on Design Basis Events (DBE) and Beyond Design Basis Events (BDBE) as defined in two recent analyses: (1) DBE/Scenario Analysis for Preclosure Repository Subsurface Facilities (Reference 4); and (2) Preliminary Preclosure Design Basis Event Calculations for the Monitored Geologic Repository (Reference 5). The objective of this task is to determine what impacts the DBEs and BDBEs have on the equipment developed for retrieval. The analysis lists potential impacts and recommends changes to be analyzed in subsequent design analyses for the developed equipment, or recommends where additional equipment may be needed, to allow retrieval to be performed in all DBE or BDBE situations. This analysis supports License Application design and therefore complies with the requirements of the Systems Description Document input criteria comparison as presented in Section 7, Conclusions. In addition, the analysis discusses the impacts associated with not using concrete inverts in the emplacement drifts. The ''Retrieval Equipment and Strategy'' analysis was based on a concrete invert configuration in the emplacement drift. The scope of the analysis, as presented in ''Development Plan for Retrieval Events Evaluation'' (Reference 3), includes evaluation and criteria of the following: impacts to retrieval from the emplacement drift based on DBE/BDBEs, and changes to the invert configuration, for the preclosure period; and impacts to retrieval from the main drifts based on DBE/BDBEs for the preclosure period.

  10. VaProS: a database-integration approach for protein/genome information retrieval

    KAUST Repository

    Gojobori, Takashi; Ikeo, Kazuho; Katayama, Yukie; Kawabata, Takeshi; Kinjo, Akira R.; Kinoshita, Kengo; Kwon, Yeondae; Migita, Ohsuke; Mizutani, Hisashi; Muraoka, Masafumi; Nagata, Koji; Omori, Satoshi; Sugawara, Hideaki; Yamada, Daichi; Yura, Kei

    2016-01-01

    Life science research now heavily relies on all sorts of databases for genome sequences, transcription, protein three-dimensional (3D) structures, protein–protein interactions, phenotypes and so forth. The knowledge accumulated by all the omics research is so vast that a computer-aided search of data is now a prerequisite for starting a new study. In addition, a combinatory search throughout these databases has a chance to extract new ideas and new hypotheses that can be examined by wet-lab experiments. By virtually integrating the related databases on the Internet, we have built a new web application that helps life science researchers retrieve experts' knowledge stored in the databases and build new hypotheses about the research target. This web application, named VaProS, places emphasis on the interconnection between the functional information of genome sequences and protein 3D structures, such as the structural effects of gene mutations. In this manuscript, we present the notion of VaProS, the databases and tools that can be accessed without any knowledge of database locations and data formats, and the power of search exemplified in a quest for the molecular mechanisms of lysosomal storage disease. VaProS can be freely accessed at http://p4d-info.nig.ac.jp/vapros/.

  12. Characteristics of scientific web publications

    DEFF Research Database (Denmark)

    Thorlund Jepsen, Erik; Seiden, Piet; Ingwersen, Peter Emil Rerup

    2004-01-01

    were generated based on specifically selected domain topics that are searched for in three publicly accessible search engines (Google, AllTheWeb, and AltaVista). A sample of the retrieved hits was analyzed with regard to how various publication attributes correlated with the scientific quality of the content and whether this information could be employed to harvest, filter, and rank Web publications. The attributes analyzed were inlinks, outlinks, bibliographic references, file format, language, search engine overlap, structural position (according to site structure), and the occurrence of various types of metadata. As could be expected, the ranked output differs between the three search engines. Apparently, this is caused by differences in ranking algorithms rather than the databases themselves. In fact, because scientific Web content in this subject domain receives few inlinks, both Alta...

  13. Web Mining and Social Networking

    CERN Document Server

    Xu, Guandong; Li, Lin

    2011-01-01

    This book examines the techniques and applications involved in the Web Mining, Web Personalization and Recommendation and Web Community Analysis domains, including a detailed presentation of the principles, developed algorithms, and systems of the research in these areas. The applications of web mining, and the issue of how to incorporate web mining into web personalization and recommendation systems are also reviewed. Additionally, the volume explores web community mining and analysis to find the structural, organizational and temporal developments of web communities and reveal the societal s

  14. Folksonomies indexing and retrieval in web 2.0

    CERN Document Server

    Peters, Isabella

    2009-01-01

    In Web 2.0, users not only make heavy use of Collaborative Information Services in order to create, publish and share digital information resources - what is more, they index and represent these resources via their own keywords, so-called tags. The sum of this user-generated metadata of a Collaborative Information Service is also called a Folksonomy. In contrast to professionally created and highly structured metadata, e.g. subject headings, thesauri, classification systems or ontologies, which are applied in libraries, corporate information architectures or commercial databases and which were deve...

  15. Reads2Type: a web application for rapid microbial taxonomy identification

    DEFF Research Database (Denmark)

    Saputra, Dhany; Rasmussen, Simon; Larsen, Mette Voldby

    2015-01-01

    genome of microbial isolates. Therefore we have developed Reads2Type, a web-based tool for taxonomy identification based on whole bacterial genome sequence data. Raw sequencing data provided by the user are mapped against a set of marker probes that are derived from currently available complete bacterial genomes ..., as the entire computational analysis is done on the computer of the user of the web application. This also prevents data privacy issues from arising. The Reads2Type tool is available at http://www.cbs.dtu.dk/~dhany/reads2type.html.
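
    Conceptually, mapping reads against taxon-specific marker probes can be reduced to exact k-mer matching. The following toy sketch illustrates that idea only; it is not Reads2Type's implementation (which runs client-side in the browser), and the probe structure is an invented example.

```python
from collections import Counter

def classify(reads, marker_probes, k=21):
    """Count exact k-mer hits of sequencing reads against per-taxon probes
    and return the taxon with the most hits (or None)."""
    index = {}
    for taxon, probe in marker_probes.items():
        for i in range(len(probe) - k + 1):
            index.setdefault(probe[i:i + k], set()).add(taxon)
    hits = Counter()
    for read in reads:
        for i in range(len(read) - k + 1):
            for taxon in index.get(read[i:i + k], ()):
                hits[taxon] += 1
    return hits.most_common(1)[0][0] if hits else None
```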

  16. AngularJS web application development

    CERN Document Server

    Darwin, Peter Bacon

    2013-01-01

    The book will be a step-by-step guide showing the readers how to build a complete web app with AngularJS. It is aimed at JavaScript developers who want to learn AngularJS for developing web apps. Knowledge of JavaScript and HTML is expected. No knowledge of AngularJS is required.

  17. microCOMB web application for the identification of gene expression components

    OpenAIRE

    Skok, Boštjan

    2016-01-01

    The goal of this thesis is to develop a web application that functions as a user interface for microCOMB and manages its gene expression database. The main functions of the application are to enable the user to upload expression profiles to be analyzed and to show their results, to store the user's history of completed analyses, and to keep the public database up to date. In the thesis we describe the technologies used, the architecture, the development process and the application's functionality. During the development and ...

  18. Metadata Schema Used in OCLC Sampled Web Pages

    Directory of Open Access Journals (Sweden)

    Fei Yu

    2005-12-01

    Full Text Available The tremendous growth of Web resources has made information organization and retrieval more and more difficult. As one approach to this problem, metadata schemas have been developed to characterize Web resources. However, many questions have been raised about the use of metadata schemas, such as: Which metadata schemas have been used on the Web? How do they describe Web-accessible information? What is the distribution of these metadata schemas among Web pages? Do certain schemas dominate others? To address these issues, this study analyzed 16,383 Web pages with meta tags extracted from 200,000 OCLC-sampled Web pages in 2000. It found that only 8.19% of the Web pages used meta tags; description tags, keyword tags, and Dublin Core tags were the only three schemas used in these Web pages. This article reveals the use of meta tags in terms of their function distribution, syntax characteristics, granularity of the Web pages, and the length and word-number distributions of both description and keyword tags.
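
    A survey like this hinges on pulling meta tags out of raw HTML. A minimal collector using only the Python standard library might look as follows (attribute handling deliberately simplified).

```python
from html.parser import HTMLParser

class MetaTagCollector(HTMLParser):
    """Collect <meta name=... content=...> pairs, the raw material for
    surveying which metadata schemas a page uses."""
    def __init__(self):
        super().__init__()
        self.tags = []

    def handle_starttag(self, tag, attrs):
        if tag == "meta":
            a = dict(attrs)
            name = a.get("name") or a.get("property")
            if name:
                self.tags.append((name.lower(), a.get("content") or ""))

parser = MetaTagCollector()
parser.feed('<meta name="keywords" content="web, metadata">')
print(parser.tags)  # [('keywords', 'web, metadata')]
```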

  19. Development of spatial density maps based on geoprocessing web services: application to tuberculosis incidence in Barcelona, Spain

    Science.gov (United States)

    2011-01-01

    Background Health professionals and authorities strive to cope with heterogeneous data, services, and statistical models to support decision making on public health. Sophisticated analysis and distributed processing capabilities over geocoded epidemiological data are seen as driving factors to speed up control and decision making in these health risk situations. In this context, recent Web technologies and standards-based web services deployed on geospatial information infrastructures have rapidly become an efficient way to access, share, process, and visualize geocoded health-related information. Methods Data used in this study are based on tuberculosis (TB) cases registered in Barcelona city during 2009. Residential addresses are geocoded and loaded into a spatial database that acts as a backend database. The web-based application architecture and geoprocessing web services are designed according to the Representational State Transfer (REST) principles. These web processing services produce spatial density maps against the backend database. Results The results are focused on the use of the proposed web-based application for the analysis of TB cases in Barcelona. The application produces spatial density maps to ease the monitoring and decision-making process by health professionals. We also include a discussion of how spatial density maps may be useful for health practitioners in such contexts. Conclusions In this paper, we developed a web-based client application and a set of geoprocessing web services to support specific health-spatial requirements. Spatial density maps of TB incidence were generated to help health professionals in analysis and decision-making tasks. The combined use of geographic information tools, map viewers, and geoprocessing services leads to interesting possibilities in handling health data in a spatial manner. In particular, the use of spatial density maps has been effective in identifying the most affected areas and their spatial impact. This

  20. Development of spatial density maps based on geoprocessing web services: application to tuberculosis incidence in Barcelona, Spain.

    Science.gov (United States)

    Dominkovics, Pau; Granell, Carlos; Pérez-Navarro, Antoni; Casals, Martí; Orcau, Angels; Caylà, Joan A

    2011-11-29

    Health professionals and authorities strive to cope with heterogeneous data, services, and statistical models to support decision making on public health. Sophisticated analysis and distributed processing capabilities over geocoded epidemiological data are seen as driving factors to speed up control and decision making in these health risk situations. In this context, recent Web technologies and standards-based web services deployed on geospatial information infrastructures have rapidly become an efficient way to access, share, process, and visualize geocoded health-related information. Data used in this study are based on tuberculosis (TB) cases registered in Barcelona city during 2009. Residential addresses are geocoded and loaded into a spatial database that acts as a backend database. The web-based application architecture and geoprocessing web services are designed according to the Representational State Transfer (REST) principles. These web processing services produce spatial density maps against the backend database. The results are focused on the use of the proposed web-based application for the analysis of TB cases in Barcelona. The application produces spatial density maps to ease the monitoring and decision-making process by health professionals. We also include a discussion of how spatial density maps may be useful for health practitioners in such contexts. In this paper, we developed a web-based client application and a set of geoprocessing web services to support specific health-spatial requirements. Spatial density maps of TB incidence were generated to help health professionals in analysis and decision-making tasks. The combined use of geographic information tools, map viewers, and geoprocessing services leads to interesting possibilities in handling health data in a spatial manner. In particular, the use of spatial density maps has been effective in identifying the most affected areas and their spatial impact. This study is an attempt to demonstrate how web
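
    Density surfaces of the kind such geoprocessing services return are typically kernel density estimates over geocoded case points. The following is a minimal sketch of the underlying computation under the assumption of projected coordinates in metres; it is not the authors' REST implementation.

```python
import numpy as np
from scipy.stats import gaussian_kde

def density_grid(xs, ys, cell=100.0, bandwidth=None):
    """Kernel density surface over geocoded case coordinates.

    xs, ys: NumPy arrays of projected coordinates (metres);
    cell: grid spacing in the same units."""
    kde = gaussian_kde(np.vstack([xs, ys]), bw_method=bandwidth)
    gx = np.arange(xs.min(), xs.max(), cell)
    gy = np.arange(ys.min(), ys.max(), cell)
    grid_x, grid_y = np.meshgrid(gx, gy)
    density = kde(np.vstack([grid_x.ravel(), grid_y.ravel()]))
    return grid_x, grid_y, density.reshape(grid_x.shape)
```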

  1. Development of WEB Applications of The Component – Open Source

    Directory of Open Access Journals (Sweden)

    Arturo Sergio Medina Castillo

    2013-06-01

    Full Text Available Nowadays, software development does not start from scratch; it already relies on a set of tools provided by frameworks, which enables faster application development, a relevant and indispensable factor for supporting continuous improvement processes that seek higher levels of competitiveness in this global society. In all respects, the development of Web applications, whether open source or proprietary, is advancing rapidly, providing levels of communication, interoperability, and access for internal and external customers that allow management to support different business processes.

  2. AmWeb: a novel interactive web tool for antimicrobial resistance surveillance, applicable to both community and hospital patients.

    Science.gov (United States)

    Ironmonger, Dean; Edeghere, Obaghe; Gossain, Savita; Bains, Amardeep; Hawkey, Peter M

    2013-10-01

    Antimicrobial resistance (AMR) is recognized as one of the most significant threats to human health. Local and regional AMR surveillance enables the monitoring of temporal changes in susceptibility to antibiotics and can provide prescribing guidance to healthcare providers to improve patient management and help slow the spread of antibiotic resistance in the community. There is currently a paucity of routine community-level AMR surveillance information. The HPA in England sponsored the development of an AMR surveillance system (AmSurv) to collate local laboratory reports. In the West Midlands region of England, routine reporting of AMR data has been established via the AmSurv system from all diagnostic microbiology laboratories. The HPA Regional Epidemiology Unit developed a web-enabled database application (AmWeb) to provide microbiologists, pharmacists and other stakeholders with timely access to AMR data using user-configurable reporting tools. AmWeb was launched in the West Midlands in January 2012 and is used by microbiologists and pharmacists to monitor resistance profiles, perform local benchmarking and compile data for infection control reports. AmWeb is now being rolled out to all English regions. It is expected that AmWeb will become a valuable tool for monitoring the threat from newly emerging or currently circulating resistant organisms and helping antibiotic prescribers to select the best treatment options for their patients.

  3. BioModels.net Web Services, a free and integrated toolkit for computational modelling software.

    Science.gov (United States)

    Li, Chen; Courtot, Mélanie; Le Novère, Nicolas; Laibe, Camille

    2010-05-01

    Exchanging and sharing scientific results are essential for researchers in the field of computational modelling. BioModels.net defines agreed-upon standards for model curation. A fundamental one, MIRIAM (Minimum Information Requested in the Annotation of Models), standardises the annotation and curation process of quantitative models in biology. To support this standard, MIRIAM Resources maintains a set of standard data types for annotating models, and provides services for manipulating these annotations. Furthermore, BioModels.net creates controlled vocabularies, such as SBO (Systems Biology Ontology), which strictly indexes, defines and links terms used in Systems Biology. Finally, BioModels Database provides a free, centralised, publicly accessible database for storing, searching and retrieving curated and annotated computational models. Each resource provides a web interface to submit, search, retrieve and display its data. In addition, the BioModels.net team provides a set of Web Services which allows the community to programmatically access the resources. A user is then able to perform remote queries, such as retrieving a model and resolving all its MIRIAM Annotations, as well as getting the details about the associated SBO terms. These web services use established standards. Communications rely on SOAP (Simple Object Access Protocol) messages and the available queries are described in a WSDL (Web Services Description Language) file. Several libraries are provided in order to simplify the development of client software. BioModels.net Web Services take researchers one step further towards simulating and understanding a biological system in its entirety, by allowing them to retrieve biological models into their own tools, combine queries in workflows and efficiently analyse models.
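
    Because the services are described in WSDL and spoken over SOAP, a generic SOAP client suffices to consume them from Python. The sketch below uses the third-party zeep library; the WSDL location and operation name are assumptions for illustration, so the BioModels.net documentation should be consulted for the published interface.

```python
from zeep import Client  # third-party SOAP client: pip install zeep

# WSDL URL and operation name are illustrative assumptions; consult the
# BioModels.net documentation for the officially published interface.
WSDL = "https://www.ebi.ac.uk/biomodels-main/services/BioModelsWebServices?wsdl"

client = Client(WSDL)
# Retrieve a curated model in SBML by identifier (hypothetical operation name).
sbml = client.service.getModelSBMLById("BIOMD0000000012")
print(sbml[:200])
```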

  4. Novel applications of intelligent systems

    CERN Document Server

    Kasabov, Nikola; Filev, Dimitar; Jotsov, Vladimir

    2016-01-01

    In this carefully edited book, selected results of theoretical and applied research in the field of broadly perceived intelligent systems are presented. The problems vary from industrial to web and problem-independent applications. All this is united under the slogan: "Intelligent systems conquer the world". The book brings together innovation projects with analytical research, invention, retrieval and processing of knowledge, and logical applications in technology. The book is aimed at a wide circle of readers and particularly at the young generation of IT/ICT experts who will build the next generations of intelligent systems.

  5. Web and Mobile Based HIV Prevention and Intervention Programs Pros and Cons - A Review.

    Science.gov (United States)

    Niakan, Sharareh; Mehraeen, Esmaeil; Noori, Tayebeh; Gozali, Elahe

    2017-01-01

    With the increasing number of HIV-positive people, the use of information and communication technologies (ICT) can play an important role in controlling the spread of AIDS. The web and mobile devices are new technologies that young people take advantage of. In this study, a review of web- and mobile-based HIV prevention and intervention programs was carried out. A scoping review was conducted, including PubMed, ScienceDirect, Web of Science and ProQuest, to find relevant sources published from 2009 to 2016. To identify published original research that reported web- and mobile-based HIV prevention and intervention programs, an organized search was conducted with the following keywords in combination: HIV, AIDS, m-Health, mobile phone, cell phone, smartphone, mobile health, internet, and web. Using the employed strategies, 173 references were retrieved. The articles were compared based on their titles and abstracts, and 101 duplicated references were excluded. By going through the full text of the related papers, 35 articles were found to be most relevant to the questions of this paper, from which 72 were finally included. The advantages of web- and mobile-based interventions include the possibility of providing consistency in the delivery of an intervention, potentially low cost, and the ability to spread the intervention to an extensive community. Online programs such as chat-room-based education programs, web-based therapeutic education systems, and online information seeking can be used for HIV/AIDS prevention. For the use of mobile devices in HIV/AIDS prevention and intervention, programs including health-system-focused applications, population-health-focused applications, and health messaging can be used.

  6. Spatial Search Techniques for Mobile 3D Queries in Sensor Web Environments

    Directory of Open Access Journals (Sweden)

    James D. Carswell

    2013-03-01

    Full Text Available Developing mobile geo-information systems for sensor web applications involves technologies that can access linked geographical and semantically related Internet information. Additionally, in tomorrow’s Web 4.0 world, it is envisioned that trillions of inexpensive micro-sensors placed throughout the environment will also become available for discovery based on their unique geo-referenced IP address. Exploring these enormous volumes of disparate heterogeneous data on today’s location and orientation aware smartphones requires context-aware smart applications and services that can deal with “information overload”. 3DQ (Three Dimensional Query is our novel mobile spatial interaction (MSI prototype that acts as a next-generation base for human interaction within such geospatial sensor web environments/urban landscapes. It filters information using “Hidden Query Removal” functionality that intelligently refines the search space by calculating the geometry of a three dimensional visibility shape (Vista space at a user’s current location. This 3D shape then becomes the query “window” in a spatial database for retrieving information on only those objects visible within a user’s actual 3D field-of-view. 3DQ reduces information overload and serves to heighten situation awareness on constrained commercial off-the-shelf devices by providing visibility space searching as a mobile web service. The effects of variations in mobile spatial search techniques in terms of query speed vs. accuracy are evaluated and presented in this paper.
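
    The "query window" idea above reduces, in two dimensions, to intersecting objects with a field-of-view wedge anchored at the user's position and heading. The sketch below (using the shapely library) is a flattened 2D stand-in for 3DQ's full 3D Vista-space computation; the radius and angle parameters are invented defaults.

```python
import math
from shapely.geometry import Point, Polygon

def fov_wedge(x, y, heading_deg, fov_deg=60.0, radius=200.0, steps=16):
    """Approximate a field-of-view wedge as a polygon 'query window'."""
    a0 = math.radians(heading_deg - fov_deg / 2)
    a1 = math.radians(heading_deg + fov_deg / 2)
    # Sample the arc between the two edge bearings and close back to the apex.
    arc = [(x + radius * math.cos(a0 + t * (a1 - a0) / steps),
            y + radius * math.sin(a0 + t * (a1 - a0) / steps))
           for t in range(steps + 1)]
    return Polygon([(x, y)] + arc)

wedge = fov_wedge(0, 0, heading_deg=90)     # user at origin, facing "north"
print(wedge.contains(Point(5, 50)))         # True: object inside the view
print(wedge.contains(Point(100, -50)))      # False: object behind the user
```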

  7. Advanced web services

    CERN Document Server

    Bouguettaya, Athman; Daniel, Florian

    2013-01-01

    Web services and Service-Oriented Computing (SOC) have become thriving areas of academic research, joint university/industry research projects, and novel IT products on the market. SOC is the computing paradigm that uses Web services as building blocks for the engineering of composite, distributed applications out of the reusable application logic encapsulated by Web services. Web services could be considered the best-known and most standardized technology in use today for distributed computing over the Internet. This book is the second installment of a two-book collection covering the state-o

  8. Development of a Web Application: Recording Learners' Mouse Trajectories and Retrieving their Study Logs to Identify the Occurrence of Hesitation in Solving Word-Reordering Problems

    Directory of Open Access Journals (Sweden)

    Mitsumasa Zushi

    2014-04-01

    Full Text Available Most computer marking systems evaluate the answers reached by learners without looking into the process by which those answers are produced. This is insufficient to ascertain learners' level of understanding, because correct answers may well include lucky hunches, namely accidentally correct but unconfident answers. In order to differentiate these lucky answers from confident correct ones, we have developed a Web application that can record mouse trajectories during the performance of tasks. Mathematical analyses of these trajectories have revealed that some parameters of mouse movement can be useful indicators for identifying the occurrence of hesitation resulting from a lack of knowledge or confidence in solving problems.
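
    Once (timestamp, x, y) samples are collected, hesitation indicators reduce to simple kinematics over consecutive points. The sketch below illustrates that kind of analysis; the thresholds are illustrative assumptions, not the parameters reported by the authors.

```python
import math

def hesitation_metrics(points, pause_ms=300, slow_px_per_ms=0.05):
    """points: list of (t_ms, x, y) samples. Returns (pause_count, mean_speed);
    long gaps or very slow segments are counted as candidate hesitations."""
    pauses, speeds = 0, []
    for (t0, x0, y0), (t1, x1, y1) in zip(points, points[1:]):
        dt = t1 - t0
        if dt <= 0:
            continue
        speed = math.hypot(x1 - x0, y1 - y0) / dt  # pixels per millisecond
        speeds.append(speed)
        if dt >= pause_ms or speed < slow_px_per_ms:
            pauses += 1
    return pauses, (sum(speeds) / len(speeds) if speeds else 0.0)

print(hesitation_metrics([(0, 0, 0), (100, 40, 0), (600, 42, 1)]))  # (1, ...)
```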

  9. Enhanced reproducibility of SADI web service workflows with Galaxy and Docker.

    Science.gov (United States)

    Aranguren, Mikel Egaña; Wilkinson, Mark D

    2015-01-01

    Semantic Web technologies have been widely applied in the life sciences, for example by data providers such as OpenLifeData and through web services frameworks such as SADI. The recently reported OpenLifeData2SADI project offers access to the vast OpenLifeData data store through SADI services. This article describes how to merge data retrieved from OpenLifeData2SADI with other SADI services using the Galaxy bioinformatics analysis platform, thus making this semantic data more amenable to complex analyses. This is demonstrated using a working example, which is made distributable and reproducible through a Docker image that includes SADI tools, along with the data and workflows that constitute the demonstration. The combination of Galaxy and Docker offers a solution for faithfully reproducing and sharing complex data retrieval and analysis workflows based on the SADI Semantic web service design patterns.

  10. Usage, Barriers, and Training of Web 2.0 Technology Applications

    Science.gov (United States)

    Pritchett, Christopher G.; Pritchett, Christal C.; Wohleb, Elisha C.

    2013-01-01

    This research study was designed to determine the degree of use of Web 2.0 technology applications by certified education professionals and examine differences among various groups as well as reasons for these differences. A quantitative survey instrument was developed to gather demographic information and data. Participants reported they would be…

  11. Promoting Reflective Thinking Skills by Using Web 2.0 Application

    Science.gov (United States)

    Abdullah, Mohamed

    2015-01-01

    The study aims to investigate whether using Web 2.0 applications promotes reflective thinking skills for higher education students in a faculty of education. Although the literature reveals that technology integration is a trend in higher education, and researchers and educators have increasingly shared their ideas and examples of implementations of Web…

  12. BioTapestry now provides a web application and improved drawing and layout tools.

    Science.gov (United States)

    Paquette, Suzanne M; Leinonen, Kalle; Longabaugh, William J R

    2016-01-01

    Gene regulatory networks (GRNs) control embryonic development, and to understand this process in depth, researchers need to have a detailed understanding of both the network architecture and its dynamic evolution over time and space. Interactive visualization tools better enable researchers to conceptualize, understand, and share GRN models. BioTapestry is an established application designed to fill this role, and recent enhancements released in Versions 6 and 7 have targeted two major facets of the program. First, we introduced significant improvements for network drawing and automatic layout that have now made it much easier for the user to create larger, more organized network drawings. Second, we revised the program architecture so it could continue to support the current Java desktop Editor program, while introducing a new BioTapestry GRN Viewer that runs as a JavaScript web application in a browser. We have deployed a number of GRN models using this new web application. These improvements will ensure that BioTapestry remains viable as a research tool in the face of the continuing evolution of web technologies, and as our understanding of GRN models grows.

  13. Semantic web in the e-learning

    Directory of Open Access Journals (Sweden)

    Andrenizia Aquino Eluan

    2008-01-01

    Full Text Available With the evolution of information and communication technology, the Web is adding a diversity of resources that can facilitate the development of several areas of knowledge, because it promotes access to and use of globalised, accessible and borderless information. This paper discusses the semantic Web as a means of sharing information, adopting standards for interoperability in network communication. Among the concerns that surround the education area are strategies for search and information retrieval that are relevant and effective for knowledge construction and learning. In this context lies Distance Education, an area which can enjoy the resources of the Semantic Web and the advantages of using ontologies, which will be presented in this article.

  14. A comparison of clinicians' access to online knowledge resources using two types of information retrieval applications in an academic hospital setting.

    Science.gov (United States)

    Hunt, Sevgin; Cimino, James J; Koziol, Deloris E

    2013-01-01

    The research studied whether a clinician's preference for online health knowledge resources varied with the use of two applications that were designed for information retrieval in an academic hospital setting. The researchers analyzed a year's worth of computer log files to study differences in the ways that four clinician groups (attending physicians, housestaff physicians, nurse practitioners, and nurses) sought information using two types of information retrieval applications (health resource links or Infobutton icons) across nine resources while they reviewed patients' laboratory results. From a set of 14,979 observations, the authors found statistically significant differences among the 4 clinician groups for accessing resources using the health resources application (P…). The information-seeking behavior of clinicians may vary in relation to their role and the way in which the information is presented. Studying these behaviors can provide valuable insights to those tasked with maintaining information retrieval systems' links to appropriate online knowledge resources.

  15. A Web-based Architecture Enabling Multichannel Telemedicine Applications

    Directory of Open Access Journals (Sweden)

    Fabrizio Lamberti

    2003-02-01

    Full Text Available Telemedicine scenarios today include in-hospital care management, remote teleconsulting, collaborative diagnosis and the handling of emergency situations. Different types of information need to be accessed by means of heterogeneous client devices in different communication environments in order to enable high-quality, continuous healthcare delivery wherever and whenever needed. In this paper, a Web-based telemedicine architecture based on Java, XML and XSL technologies is presented. By providing dynamic content delivery services and Java-based client applications for medical data consultation and modification, the system enables effective access to a standard database of Electronic Patient Records by means of any device equipped with a Web browser, such as traditional personal computers and workstations as well as modern Personal Digital Assistants. The effectiveness of the proposed architecture has been evaluated in different scenarios, experiencing fixed and mobile clinical data transmission over Local Area Networks, wireless LANs and wide-coverage telecommunication networks including GSM and GPRS.

  16. Improving information retrieval with multiple health terminologies in a quality-controlled gateway.

    Science.gov (United States)

    Soualmia, Lina F; Sakji, Saoussen; Letord, Catherine; Rollin, Laetitia; Massari, Philippe; Darmoni, Stéfan J

    2013-01-01

    The Catalog and Index of French-language Health Internet resources (CISMeF) is a quality-controlled health gateway, primarily for Web resources in French (n=89,751). Recently, we achieved a major improvement in the structure of the catalogue by setting up multiple terminologies, based on twelve health terminologies available in French, to overcome the potential weaknesses of the MeSH thesaurus, which has been the main and pivotal terminology used for indexing and retrieval since 1995. The main aim of this study was to estimate the added value of exploiting several terminologies and their semantic relationships to improve Web resource indexing and retrieval in CISMeF, in order to provide additional health resources that meet users' expectations. Twelve terminologies were integrated into the CISMeF information system to set up multiple-terminology indexing and retrieval. The same set of thirty queries was run: (i) by exploiting the hierarchical structure of the MeSH, and (ii) by exploiting the additional twelve terminologies and their semantic links. The two search modes were evaluated and compared. The overall coverage of the multiple-terminology search mode was improved compared with the coverage of using the MeSH alone (16,283 vs. 14,159 results; +15%). These additional findings were estimated at 56.6% relevant results, 24.7% intermediate results and 18.7% irrelevant results. The multiple-terminology approach improved information retrieval. These results suggest that integrating additional health terminologies was able to improve recall. Since performing the study, 21 other terminologies have been added, which should enable us to make broader studies in multiple-terminology information retrieval.
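
    The mechanics of multiple-terminology retrieval can be suggested with a toy query-expansion step: a query concept is widened with the terms each vocabulary links to it before being matched against the index. The data structures below are invented examples, not CISMeF's internal representation.

```python
def expand_query(term, terminologies):
    """Union a query term's related terms across several terminologies.

    terminologies: mapping of vocabulary name -> {term: set of related terms}."""
    expanded = {term}
    for vocab in terminologies.values():
        expanded |= vocab.get(term, set())
    return expanded

# Toy fragments of two vocabularies linking synonyms to one concept.
mesh = {"myocardial infarction": {"heart attack"}}
snomed = {"myocardial infarction": {"MI", "cardiac infarction"}}
print(expand_query("myocardial infarction", {"MeSH": mesh, "SNOMED": snomed}))
```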

  17. Application of object modeling technique to medical image retrieval system

    International Nuclear Information System (INIS)

    Teshima, Fumiaki; Abe, Takeshi

    1993-01-01

    This report describes the results of discussions on the object-oriented analysis methodology, which is one of the object-oriented paradigms. In particular, we considered application of the object modeling technique (OMT) to the analysis of a medical image retrieval system. The object-oriented methodology places emphasis on the construction of an abstract model from real-world entities. The effectiveness of and future improvements to OMT are discussed from the standpoint of the system's expandability. These discussions have elucidated that the methodology is sufficiently well-organized and practical to be applied to commercial products, provided that it is applied to the appropriate problem domain. (author)

  18. Authoring support in concept-based web information systems for educational applications

    NARCIS (Netherlands)

    Aroyo, L.M.; Dicheva, D.

    2004-01-01

    The increasing complexity of concept-based web information systems (WIS) and their educational applications requires more intelligent support for their authoring. We propose an ontological approach towards a common authoring framework for such systems to formally describe the overall authoring

  19. Biomedical information retrieval across languages.

    Science.gov (United States)

    Daumke, Philipp; Markó, Kornél; Poprat, Michael; Schulz, Stefan; Klar, Rüdiger

    2007-06-01

    This work presents a new dictionary-based approach to biomedical cross-language information retrieval (CLIR) that addresses many of the general and domain-specific challenges in current CLIR research. Our method is based on a multilingual lexicon that was generated partly manually and partly automatically, and currently covers six European languages. It contains morphologically meaningful word fragments, termed subwords. Using subwords instead of entire words significantly reduces the number of lexical entries necessary to sufficiently cover a specific language and domain. Mediation between queries and documents is based on these subwords as well as on lists of word-n-grams that are generated from large monolingual corpora and constitute possible translation units. The translations are then sent to a standard Internet search engine. This process makes our approach an effective tool for searching the biomedical content of the World Wide Web in different languages. We evaluate this approach using the OHSUMED corpus, a large medical document collection, within a cross-language retrieval setting.
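
    The subword idea can be illustrated with greedy longest-match segmentation against a small lexicon; matching queries and documents on these fragments is what lets one lexicon entry cover many surface words across languages. A minimal sketch follows, with invented lexicon entries.

```python
def segment(word, subwords):
    """Greedy longest-match decomposition of a word into lexicon subwords."""
    parts, i = [], 0
    while i < len(word):
        for j in range(len(word), i, -1):      # try the longest span first
            if word[i:j] in subwords:
                parts.append(word[i:j])
                i = j
                break
        else:
            i += 1  # skip characters not covered by the lexicon (e.g. linking vowels)
    return parts

lex = {"gastr", "enter", "itis", "append", "ectomy"}
print(segment("gastroenteritis", lex))  # ['gastr', 'enter', 'itis']
```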

  20. Lexical Link Analysis Application: Improving Web Service to Acquisition Visibility Portal Phase III

    Science.gov (United States)

    2015-04-30

    Annual Acquisition Research Symposium, Thursday Sessions, Volume II: Lexical Link Analysis Application: Improving Web Service to Acquisition Visibility Portal (2015). ... processes. Lexical Link Analysis (LLA) can help, by applying automation to reveal and depict, to decision-makers, the correlations, associations, and