WorldWideScience

Sample records for generation web part

  1. Semiautomatic Web service generation

    OpenAIRE

    Fuentes, José María de; Corella, Miguel Ángel; Castells, Pablo; Rico, Mariano

    2005-01-01

    Proceedings of the IADIS International Conference WWW/Internet 2005, held in Lisbon (Portugal). The lack of a critical mass of actually deployed web services, semantic or not, is an important hurdle for the advancement and further innovation in web service technologies. In this paper we introduce Federica, a platform for semi-automatic generation and implementation of semantic web services by exploiting existing web applications published on the Internet. Federica generates semantical...

  2. Web Tools: The Second Generation

    Science.gov (United States)

    Pascopella, Angela

    2008-01-01

    Web 2.0 tools and technologies, or second-generation tools, help districts save time and money, and eliminate the need to transfer or move files back and forth across computers. Many Web 2.0 tools help students think critically and solve problems, which falls under 21st-century skills. The second-generation tools are growing in popularity…

  4. Web Page Design (Part Three).

    Science.gov (United States)

    Descy, Don E.

    1997-01-01

    Discusses fonts as well as design considerations that should be reviewed when designing World Wide Web pages and sites to make them easier for clients to use and easier to maintain. Also discusses the simplicity of names; organization of pages, folders, and files; and sites to help build Web sites. (LRW)

  5. Web Page Design (Part One).

    Science.gov (United States)

    Descy, Don E.

    1997-01-01

    Discusses rules for Web page design: consider audiences' Internet skills and equipment; know your content; outline the material; map or sketch the site; be consistent; regulate size of graphics to control download time; place eye-catching material in the first 300 pixels; moderate use of color to control file size and bandwidth; include a…

  6. Automatic generation of Web mining environments

    Science.gov (United States)

    Cibelli, Maurizio; Costagliola, Gennaro

    1999-02-01

    The main problem related to the retrieval of information from the world wide web is the enormous number of unstructured documents and resources, i.e., the difficulty of locating and tracking appropriate sources. This paper presents a web mining environment (WME), which is capable of finding, extracting and structuring information related to a particular domain from web documents, using general purpose indices. The WME architecture includes a web engine filter (WEF), to sort and reduce the answer set returned by a web engine, a data source pre-processor (DSP), which processes html layout cues in order to collect and qualify page segments, and a heuristic-based information extraction system (HIES), to finally retrieve the required data. Furthermore, we present a web mining environment generator, WMEG, that allows naive users to generate a WME specific to a given domain by providing a set of specifications.
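
    The web engine filter stage is described above only at a high level; as a rough illustration, here is a minimal Python sketch of re-ranking and pruning a web engine's answer set against a set of domain index terms. The scoring scheme, record fields, and term list are illustrative assumptions, not the WME's actual algorithm.

    ```python
    # Sketch of a web-engine-filter (WEF) stage: re-rank and prune an answer
    # set by counting hits of domain-specific index terms.
    from dataclasses import dataclass

    @dataclass
    class Page:
        url: str
        snippet: str

    DOMAIN_TERMS = {"volcano", "eruption", "lava", "magma"}  # hypothetical domain index

    def score(page: Page) -> int:
        # Count snippet words that match the domain index (naive tokenization).
        return sum(1 for w in page.snippet.lower().split()
                   if w.strip(".,;:") in DOMAIN_TERMS)

    def filter_answer_set(pages: list[Page], min_score: int = 1) -> list[Page]:
        # Sort by domain relevance, then drop pages below the threshold.
        return [p for p in sorted(pages, key=score, reverse=True)
                if score(p) >= min_score]

    pages = [Page("http://example.org/a", "Lava flows from the volcano summit"),
             Page("http://example.org/b", "Cooking recipes and travel deals")]
    print([p.url for p in filter_answer_set(pages)])  # keeps only the on-domain page
    ```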

  7. Ontology Enabled Generation of Embedded Web Services

    DEFF Research Database (Denmark)

    Hansen, Klaus Marius; Zhang, Weishan; Soares, Goncalo Teofilo Afonso Pinheiro

    2008-01-01

    and software platforms, and of device state and context changes. To address these challenges, we developed a Web service compiler, Limbo, in which Web Ontology Language (OWL) ontologies are used to make the Limbo compiler aware of its compilation context, such as targeted hardware and software. At the same time, knowledge on device details, platform dependencies, and resource/power consumption is built into the supporting ontologies, which are used to configure Limbo for generating resource-efficient web service code. A state machine ontology is used to generate stub code to facilitate handling of state...

  8. Hidden Web Data Extraction Using Dynamic Rule Generation

    Directory of Open Access Journals (Sweden)

    Anuradha

    2011-08-01

    The World Wide Web is a global information medium of interlinked hypertext documents accessed via computers connected to the Internet. Most users rely on traditional search engines to search for information on the Web. These search engines deal with the Surface Web, the set of Web pages directly accessible through hyperlinks, and ignore a large part of the Web called the Hidden Web, which is invisible to present-day search engines. It lies behind search forms, and this part of the Web, containing an almost endless number of sources providing high-quality information stored in specialized databases, can be found in the depths of the WWW. A large amount of this Hidden Web is structured, i.e., hidden websites present their information in the form of lists and tables. However, visiting dozens of these sites and analyzing the results is a very time-consuming task for the user. Hence, it is desirable to build a prototype which minimizes the user's effort and delivers high-quality information in integrated form. This paper proposes a novel method that extracts data records from the lists and tables of hidden websites of the same domain using dynamic rule generation and forms a repository which is used for later searching. By searching the data from this repository, the user finds the desired data in one place, reducing the effort of looking through the result pages of many different hidden websites.
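
    As a hedged illustration of extracting structured records from a result table, the sketch below derives a simple extraction rule from a table's header row and applies it to the data rows, assuming the BeautifulSoup library; the paper's dynamic rule generation is considerably more general.

    ```python
    # Sketch of rule-based record extraction from a hidden-web result table.
    # The "rule" here is simply the header row, reused as a template for every
    # data row -- a simplification of the paper's dynamic rule generation.
    from bs4 import BeautifulSoup

    html = """
    <table>
      <tr><th>Title</th><th>Price</th></tr>
      <tr><td>Book A</td><td>10</td></tr>
      <tr><td>Book B</td><td>12</td></tr>
    </table>
    """

    soup = BeautifulSoup(html, "html.parser")
    rows = soup.find("table").find_all("tr")
    rule = [th.get_text(strip=True) for th in rows[0].find_all("th")]  # derived rule

    # Apply the rule to each data row, yielding uniform records for a repository.
    records = [dict(zip(rule, (td.get_text(strip=True) for td in row.find_all("td"))))
               for row in rows[1:]]
    print(records)  # [{'Title': 'Book A', 'Price': '10'}, {'Title': 'Book B', 'Price': '12'}]
    ```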

  9. Autonomic html interface generator for web applications

    CERN Document Server

    Bassil, Youssef (doi:10.5121/ijwest.2012.3104)

    2012-01-01

    Recent advances in computing systems have led to a new digital era in which every area of life is nearly interrelated with information technology. However, with the trend towards large-scale IT systems, a new challenge has emerged. The complexity of IT systems is becoming an obstacle that hampers the manageability, operability, and maintainability of modern computing infrastructures. Autonomic computing emerged to provide an answer to these ever-growing pitfalls. Fundamentally, autonomic systems are self-configuring, self-healing, self-optimizing, and self-protecting; hence, they can automate complex IT processes without human intervention. This paper proposes an autonomic HTML web-interface generator based on XML Schema and Style Sheet specifications for self-configuring graphical user interfaces of web applications. The goal of this autonomic generator is to automate the process of customizing GUI web-interfaces according to the ever-changing business rules, policies, and operating environment with th...
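
    As a rough sketch of the generator idea, the code below derives HTML form fields from a toy XML Schema; the schema, the type-to-widget mapping, and the output shape are illustrative assumptions, not the paper's generator.

    ```python
    # Minimal sketch: walk the top-level elements of an XML Schema and emit a
    # matching HTML form, mapping XSD types to input widgets.
    import xml.etree.ElementTree as ET

    XSD = """<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema">
      <xs:element name="customer" type="xs:string"/>
      <xs:element name="age" type="xs:integer"/>
    </xs:schema>"""

    WIDGETS = {"xs:string": "text", "xs:integer": "number"}  # assumed mapping

    root = ET.fromstring(XSD)
    ns = {"xs": "http://www.w3.org/2001/XMLSchema"}
    fields = []
    for el in root.findall("xs:element", ns):
        name, xsd_type = el.get("name"), el.get("type")
        fields.append(f'<label>{name} <input name="{name}" '
                      f'type="{WIDGETS.get(xsd_type, "text")}"/></label>')
    print("<form>\n" + "\n".join(fields) + "\n</form>")
    ```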

  10. Next generation of weather generators on web service framework

    Science.gov (United States)

    Chinnachodteeranun, R.; Hung, N. D.; Honda, K.; Ines, A. V. M.

    2016-12-01

    A weather generator is a statistical model that synthesizes possible realizations of long-term historical weather for the future. It stochastically generates several tens to hundreds of realizations based on statistical analysis. Realizations are essential input to crop models for simulating crop growth and yield. Moreover, they contribute to analyzing the uncertainty that weather brings to crop development stages and to decision support systems for, e.g., water management and fertilizer management. Performing crop modeling requires multidisciplinary skills, which limits the usage of a weather generator to the research group that developed it and raises a barrier for newcomers. To improve the procedure of running weather generators, as well as the methodology for acquiring realizations in a standard way, we implemented a framework that provides weather generators as web services supporting service interoperability. Legacy weather generator programs were wrapped in the web service framework. The service interfaces were implemented based on an international standard, the Sensor Observation Service (SOS) defined by the Open Geospatial Consortium (OGC). Clients can request realizations generated by the model through the SOS web service. The hierarchical data preparation processes required for a weather generator are also implemented as web services and seamlessly wired. Analysts and applications can easily invoke the services over a network. The services facilitate the development of agricultural applications, reduce the analysts' workload of iterative data preparation, and handle legacy weather generator programs. This architectural design and implementation can serve as a prototype for constructing further services on top of an interoperable sensor network system. The framework opens an opportunity for other sectors, such as application developers and scientists in other fields, to utilize weather generators.
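
    A client request to such an SOS-wrapped weather generator might look roughly as follows; the endpoint URL, offering name, and observed property are placeholders, not the paper's actual service.

    ```python
    # Hypothetical key-value GetObservation request to a weather-generator
    # service exposed through OGC SOS 2.0.
    import requests

    params = {
        "service": "SOS",
        "version": "2.0.0",
        "request": "GetObservation",
        "offering": "WeatherRealizations",      # assumed offering name
        "observedProperty": "rainfall",         # assumed property
        "temporalFilter": "om:phenomenonTime,2016-01-01/2016-12-31",
    }
    resp = requests.get("http://example.org/sos", params=params, timeout=30)
    resp.raise_for_status()
    print(resp.text[:200])  # O&M/XML-encoded realizations, ready for a crop model
    ```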

  11. Web Services-Based Test Report Generation

    Institute of Scientific and Technical Information of China (English)

    LUO Ling; BAI Xiaoying

    2005-01-01

    Tests involving a large number of test cases and test scenarios are always time- and effort-intensive, and use ad hoc approaches. Test management is needed to control the complexity and the quality of the testing of large software systems. The reporting mechanism is critical for monitoring the testing progress, analyzing test results, and evaluating the test effectiveness for a disciplined testing process throughout the testing lifecycle. This paper presents an XML-based report generation method for large system testing. The service-oriented architecture enables flexible test report generation, presentation, and exchange to facilitate collaboration in a distributed environment. The results show that proper reporting can effectively improve the visibility of the testing process and that this web-based approach is critical to enhance communication among multiple testing groups.
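
    The XML-based reporting mechanism is described only in general terms; as a hedged sketch, report generation of this kind can be as simple as serializing test outcomes to XML. The element and attribute names below are invented for illustration, not the paper's schema.

    ```python
    # Sketch of XML-based test report generation from a list of test outcomes.
    import xml.etree.ElementTree as ET

    results = [("login_test", "pass", 0.8), ("checkout_test", "fail", 2.3)]

    report = ET.Element("testReport", suite="regression")
    for name, verdict, seconds in results:
        # One element per test case, carrying verdict and duration.
        ET.SubElement(report, "testCase", name=name,
                      verdict=verdict, duration=str(seconds))

    # Write the report to disk and echo it for inspection.
    ET.ElementTree(report).write("report.xml", encoding="utf-8", xml_declaration=True)
    print(ET.tostring(report, encoding="unicode"))
    ```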

  12. Lipids: Part of the tangled web

    Energy Technology Data Exchange (ETDEWEB)

    Krauss, R.M.

    1992-08-01

    Analysis of LDL subclasses by non-denaturing gradient gel electrophoresis has led to the identification of a subclass pattern characterized by predominance of small LDL, designated LDL subclass pattern B. The prevalence of pattern B in the general population is approximately 25%, but varies as a function of age and gender, being relatively uncommon in children and in premenopausal women. The remainder of the population has a predominance of larger LDL (pattern A) or an intermediate pattern. Our findings indicate that LDL subclass pattern B is an integral part of the "tangled web" of interrelated coronary disease risk factors associated with insulin resistance. It may be that the pathologic features of this lipoprotein profile, including the relative atherogenicity of small, dense LDL and IDL, contribute importantly to the increased risk of cardiovascular disease in subjects with insulin resistance and hypertension. Furthermore, pattern B serves as a marker for a common genetic trait which may underlie a substantial portion of the familial predisposition to coronary artery disease in the general population. Studies of hormonal, dietary, and pharmacologic influences on expression of this atherogenic phenotype should lead to more effective identification and management of high-risk individuals, and improved approaches to disease prevention in high-risk families.

  14. The Semantic Web: opportunities and challenges for next-generation Web applications

    Directory of Open Access Journals (Sweden)

    2002-01-01

    Recently there has been growing interest in the investigation and development of the next-generation web, the Semantic Web. While most current web content is designed to be presented to humans and is barely understandable by computers, the content of the Semantic Web is structured semantically so that it is meaningful to computers as well as to humans. In this paper, we report a survey of recent research on the Semantic Web. In particular, we present the opportunities that this revolution will bring: web services, agent-based distributed computing, semantics-based web search engines, and semantics-based digital libraries. We also discuss the technical and cultural challenges of realizing the Semantic Web: the development of ontologies, formal semantics of Semantic Web languages, and trust and proof models. We hope that this will shed some light on the direction of future work in this field.

  15. Generating Best Features for Web Page Classification

    Directory of Open Access Journals (Sweden)

    K. Selvakuberan

    2008-03-01

    As the Internet provides millions of web pages for each and every search term, getting interesting and relevant results quickly from the Web becomes very difficult. Automatic classification of web pages into relevant categories is a current research topic that helps search engines deliver relevant results. As web pages contain many irrelevant, infrequent, and stop words that reduce the performance of the classifier, extracting or selecting representative features from the web page is an essential pre-processing step. The goal of this paper is to find the minimum number of highly qualitative features by integrating feature selection techniques. We conducted experiments with various numbers of features selected by different feature selection algorithms on a well-defined initial set of features and show that the CFS subset evaluator combined with the term frequency method yields a minimal set of qualitative features sufficient to attain considerable classification accuracy.
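
    As a hedged sketch of such a feature-selection pipeline, the code below selects a small term set with scikit-learn. Note the paper pairs a CFS subset evaluator with term frequency; scikit-learn ships no CFS implementation, so chi-squared selection over term counts is swapped in here.

    ```python
    # Select a compact, discriminative feature set for web page classification.
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.feature_selection import SelectKBest, chi2

    pages = ["cheap flights hotel booking travel",
             "python code compiler programming bug",
             "travel hotel beach holiday flights",
             "programming language python debugging"]
    labels = ["travel", "tech", "travel", "tech"]

    # Term-frequency features with stop words removed, as a pre-processing step.
    counts = CountVectorizer(stop_words="english").fit(pages)
    X = counts.transform(pages)

    # Keep only the k terms most associated with the class labels.
    selector = SelectKBest(chi2, k=4).fit(X, labels)
    kept = selector.get_support(indices=True)
    print([counts.get_feature_names_out()[i] for i in kept])
    ```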

  16. Black Generation Y Students’ attitudes towards web advertising value

    OpenAIRE

    Ayesha Lian Bevan-Dye

    2013-01-01

    The study reported on in this article sought to determine black Generation Y students' attitudes towards Web advertising value. The black Generation Y cohort (individuals born between 1986 and 2005) represents 84 percent of the country's Generation Y and 33 percent of the total population, yet their consumer behaviour remains largely under-researched. A structured, self-administered questionnaire was used to gather data on attitudes towards Web advertising value and the value antecedents of in...

  17. Web matrices: structural properties and generating combinatorial identities

    CERN Document Server

    Dukes, Mark

    2016-01-01

    In this paper we present new results for the combinatorics of web diagrams and web worlds. These are discrete objects that arise in the physics of calculating scattering amplitudes in non-abelian gauge theories. Web-colouring and web-mixing matrices (collectively known as web matrices) are indexed by ordered pairs of web-diagrams and contain information relating the number of colourings of the first web diagram that will produce the second diagram. We introduce the black diamond product on power series and show how it determines the web-colouring matrix of disjoint web worlds. Furthermore, we show that combining known physical results with the black diamond product gives a new technique for generating combinatorial identities. Due to the complicated action of the product on power series, the resulting identities appear highly non-trivial. We present two results to explain repeated entries that appear in the web matrices. The first of these shows how diagonal web matrix entries will be the same if the comparab...

  18. The Aalborg Survey / Part 1 - Web Based Survey

    DEFF Research Database (Denmark)

    Harder, Henrik; Christensen, Cecilie Breinholm

    Background and purpose: The Aalborg Survey consists of four independent parts: a web-, GPS- and an interview-based survey and a literature study, which together form a consistent investigation and research into the use of urban space, and specifically into young people's use of urban space: what young… and the research focus within the cluster of Mobility and Tracking Technologies (MoTT), AAU. Summary / Part 1, Web Based Survey: The first part of the research project Diverse Urban Spaces (DUS) has been carried out during the period from December 1st 2007 to February 1st 2008 as a web-based survey of the 27,040 gross… people do in urban spaces, where they are in the urban spaces and when the young people are in the urban spaces. The answers to these questions form the framework and enable further academic discussions and conclusions in relation to the overall research project Diverse Urban Spaces (DUS). The primary…

  19. Next-Generation Web Frameworks in Python

    CERN Document Server

    Daly, Liza

    2007-01-01

    With its flexibility, readability, and mature code libraries, Python is a natural choice for developing agile and maintainable web applications. Several frameworks have emerged in the last few years that share ideas with Ruby on Rails and leverage the expressive nature of Python. This Short Cut will tell you what you need to know about the hottest full-stack frameworks: Django, Pylons, and TurboGears. Their philosophies, relative strengths, and development status are described in detail. What you won't find out is, "Which one should I use?" The short answer is that all of them can be used to build web appl…

  20. The Future of Web Maps in Next Generation Textbooks

    Science.gov (United States)

    DiBiase, D.; Prasad, S.

    2014-12-01

    The reformation of the "Object Formerly Known as Textbook" (a coinage of the Chronicle of Higher Education) toward a digital future is underway. Emerging nextgen texts look less like electronic books ("ebooks") and more like online courseware. In addition to text and illustrations, nextgen textbooks for STEM subjects are likely to combine quizzes, grade management tools, support for social learning, and interactive media including web maps. Web maps are interactive, multi-scale, online maps that enable teachers and learners to explore, interrogate, and mash up the wide variety of map layers available in the cloud. This presentation will show how web maps coupled with interactive quizzes enable students' purposeful explorations and interpretations of spatial patterns related to humankind's interactions with the earth. Attendees will also learn about Esri's offer to donate ArcGIS Online web mapping subscriptions to every U.S. school as part of President Obama's ConnectED initiative.

  1. Generating personalized web search using semantic context.

    Science.gov (United States)

    Xu, Zheng; Chen, Hai-Yan; Yu, Jie

    2015-01-01

    The "one size fits the all" criticism of search engines is that when queries are submitted, the same results are returned to different users. In order to solve this problem, personalized search is proposed, since it can provide different search results based upon the preferences of users. However, existing methods concentrate more on the long-term and independent user profile, and thus reduce the effectiveness of personalized search. In this paper, the method captures the user context to provide accurate preferences of users for effectively personalized search. First, the short-term query context is generated to identify related concepts of the query. Second, the user context is generated based on the click through data of users. Finally, a forgetting factor is introduced to merge the independent user context in a user session, which maintains the evolution of user preferences. Experimental results fully confirm that our approach can successfully represent user context according to individual user information needs.

  2. Generating Personalized Web Search Using Semantic Context

    Directory of Open Access Journals (Sweden)

    Zheng Xu

    2015-01-01

    The "one size fits all" criticism of search engines is that when queries are submitted, the same results are returned to different users. To solve this problem, personalized search has been proposed, since it can provide different search results based upon the preferences of users. However, existing methods concentrate more on the long-term and independent user profile, which reduces the effectiveness of personalized search. In this paper, our method captures the user context to provide accurate preferences of users for effective personalized search. First, the short-term query context is generated to identify related concepts of the query. Second, the user context is generated based on the click-through data of users. Finally, a forgetting factor is introduced to merge the independent user contexts in a user session, which maintains the evolution of user preferences. Experimental results confirm that our approach can successfully represent user context according to individual user information needs.
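
    The forgetting-factor merge described in these two records lends itself to a small worked example. The exponential decay form below is an assumption for illustration; the paper's exact merging formula is not reproduced here.

    ```python
    # Worked sketch of the forgetting-factor idea: older click-through evidence
    # in a session is down-weighted when merging per-query concept weights.
    def merge_session(contexts, decay=0.5):
        """contexts: per-query dicts {concept: weight}, oldest first."""
        merged = {}
        n = len(contexts)
        for age, ctx in enumerate(contexts):
            factor = decay ** (n - 1 - age)  # most recent query gets factor 1.0
            for concept, w in ctx.items():
                merged[concept] = merged.get(concept, 0.0) + factor * w
        return merged

    session = [{"jaguar:animal": 1.0},
               {"jaguar:car": 1.0},
               {"jaguar:car": 0.5, "dealer": 0.5}]
    print(merge_session(session))  # car-related concepts now dominate
    ```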

  3. Web Page Recommendation Using Web Mining

    Directory of Open Access Journals (Sweden)

    Modraj Bhavsar

    2014-07-01

    On the World Wide Web, various kinds of content are generated in huge amounts, so web recommendation has become an important part of web applications for delivering relevant results to users. Different kinds of web recommendations are made available to users every day, including images, video, audio, query suggestions, and web pages. In this paper we aim at providing a framework for web page recommendation. (1) First we describe the basics of web mining and the types of web mining; (2) we detail each web mining technique; and (3) we propose an architecture for personalized web page recommendation.

  4. Automated Generation of Web Services for Visualization Toolkits

    Science.gov (United States)

    Jensen, P. A.; Yuen, D. A.; Erlebacher, G.; Bollig, E. F.; Kigelman, D. G.; Shukh, E. A.

    2005-12-01

    The recent explosion in the size and complexity of geophysical data and an increasing trend for collaboration across large geographical areas demand the use of remote, full featured visualization toolkits. As the scientific community shifts toward grid computing to handle these increased demands, new web services are needed to assemble powerful distributed applications. Recent research has established the possibility of converting toolkits such as VTK [1] and Matlab [2] into remote visualization services. We are investigating an automated system to allow these toolkits to export their functions as web services under the standardized protocols SOAP and WSDL using pre-existing software (gSOAP [3]) and a custom compiler for Tcl-based scripts. The compiler uses a flexible parser and type inferring mechanism to convert the Tcl into a C++ program that allows the desired Tcl procedures to be exported as SOAP-accessible functions and the VTK rendering window to be captured offscreen and encapsulated for forwarding through a web service. Classes for a client-side Java applet to access the rendering window remotely are also generated. We will use this system to demonstrate the streamlined generation of a standards-compliant web service (suitable for grid deployment) from a Tcl script for VTK. References: [1] The Visualization Toolkit, http://www.vtk.org [2] Matlab, http://www.mathworks.com [3] gSOAP, http://www.cs.fsu.edu/~engelen/soap.html

  5. Ontodog: a web-based ontology community view generation tool.

    Science.gov (United States)

    Zheng, Jie; Xiang, Zuoshuang; Stoeckert, Christian J; He, Yongqun

    2014-05-01

    Biomedical ontologies are often very large and complex. Only a subset of the ontology may be needed for a specified application or community. For ontology end users, it is desirable to have community-based labels rather than the labels generated by ontology developers. Ontodog is a web-based system that can generate an ontology subset based on Excel input, and support generation of an ontology community view, which is defined as the whole or a subset of the source ontology with user-specified annotations including user-preferred labels. Ontodog allows users to easily generate community views with minimal ontology knowledge and no programming skills or installation required. Currently >100 ontologies including all OBO Foundry ontologies are available to generate the views based on user needs. We demonstrate the application of Ontodog for the generation of community views using the Ontology for Biomedical Investigations as the source ontology.

  6. Designing and Implementing Weather Generators as Web Services

    Directory of Open Access Journals (Sweden)

    Rassarin Chinnachodteeranun

    2016-12-01

    Climate and weather realizations are essential inputs for simulating crop growth and yields to analyze the risks associated with future conditions. To simplify the procedure of generating weather realizations and make them available over the Internet, we implemented novel mechanisms for providing weather generators as web services, as well as a mechanism for sharing identical weather realizations given climatological information. A web service for preparing long-term climate data was implemented based on an international standard, the Sensor Observation Service (SOS). The weather generator services, which are the core components of the framework, analyze climatological data and can take seasonal climate forecasts as inputs for generating weather realizations. The generated weather realizations are encoded in a standard format, ready for use in crop modeling. All outputs are generated in the SOS standard, which broadens the extent of data sharing and interoperability with other sectoral applications, e.g., water resources management. These services facilitate the development of other applications requiring input weather realizations, as these can be obtained easily by just calling the service. The workload of analysts related to data preparation and the handling of legacy weather generator programs can be reduced. The architectural design and implementation presented here can be used as a prototype for constructing further services on top of an interoperable sensor network system.

  7. Generating domain specific sentiment lexicons using the Web Directory

    Directory of Open Access Journals (Sweden)

    Akshay Minocha

    2012-10-01

    There has been a demand for the construction of generated and labeled sentiment lexicons. For data on the social web (e.g., tweets), methods which make use of the synonymy relation don't work well, as they completely ignore the significance of terms belonging to specific domains. Here we propose to generate a sentiment lexicon for any specified domain using a twofold method. First we build sentiment scores using the micro-blogging data, and then we use these scores on the ontological structure provided by the Open Directory Project [1] to build a custom sentiment lexicon for analyzing domain-specific micro-blogging data.
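
    As a rough sketch of the first step (building term sentiment scores from micro-blogging data), the code below scores words by their log-ratio of co-occurrence with positive versus negative emoticons; the scoring formula and the tiny corpus are illustrative assumptions, not the paper's method.

    ```python
    # Build naive per-term sentiment scores from emoticon-labeled micro-posts,
    # before projecting them onto a domain category (e.g., an ODP node).
    import math
    from collections import Counter

    tweets = [(":)", "great phone battery"), (":(", "phone screen broken"),
              (":)", "battery lasts long"), (":(", "broken charger")]

    pos, neg = Counter(), Counter()
    for mood, text in tweets:
        for word in text.split():
            (pos if mood == ":)" else neg)[word] += 1

    def sentiment(word: str) -> float:
        # Add-one smoothed log-ratio; positive means positively connoted.
        return math.log((pos[word] + 1) / (neg[word] + 1))

    domain_terms = ["battery", "broken", "phone"]  # hypothetical domain vocabulary
    print({w: round(sentiment(w), 2) for w in domain_terms})
    ```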

  8. Automatic Generation of Web Applications from Visual High-Level Functional Web Components

    Directory of Open Access Journals (Sweden)

    Quan Liang Chen

    2009-01-01

    This paper presents high-level functional Web components such as frames, framesets, and pivot tables, which conventional development environments for Web applications have not yet supported. Frameset Web components provide several editing facilities, such as adding, deleting, changing, and nesting framesets, to make it easier to develop Web applications that use frame facilities. Pivot table Web components sum up various kinds of data in two dimensions; they greatly reduce the amount of code to be written by developers. The paper also describes the system that implements these high-level functional components as visual Web components. This system assists designers in the development of Web applications based on the page-transition framework, which models a Web application as a set of Web page transitions, and, by using visual Web components, makes it easier to write the processes to be executed when one Web page transfers to another.
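
    As a loose illustration of what a pivot-table component computes (not the paper's implementation), pandas can summarize data in two dimensions in a few lines:

    ```python
    # Two-dimensional summary in the spirit of a pivot table Web component:
    # rows are regions, columns are products, cells are summed sales.
    import pandas as pd

    sales = pd.DataFrame({
        "region":  ["north", "north", "south", "south"],
        "product": ["A", "B", "A", "B"],
        "amount":  [10, 20, 5, 15],
    })

    pivot = sales.pivot_table(index="region", columns="product",
                              values="amount", aggfunc="sum")
    print(pivot)
    ```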

  9. On-line Generation of Suggestions for Web Users

    OpenAIRE

    2004-01-01

    One important class of data mining applications is so-called "Web Mining", which analyzes and extracts important and non-trivial knowledge from Web-related data. Typical applications of Web Mining are personalization and recommender systems. These systems aim to extract knowledge from the analysis of the historical information of a web server in order to improve the web site's expressiveness in terms of readability and content availability. Typically, these systems are made...

  10. Drugs on the Internet, part II: antidepressant medication web sites.

    Science.gov (United States)

    Morgan, Melissa; Montagne, Michael

    2011-01-01

    Antidepressant medications have been the fastest-growing category of pharmaceutical product use over the past decade. Selected Internet web sites providing information on antidepressant medications were identified and assessed using the code of conduct criteria for posting health information on the Internet developed by the Health On the Net Foundation. Thirteen representative web sites were evaluated. The degree of compliance with each of the eight criteria varied by site, though all 13 sites met the criterion for legality of content and conduct on their web site. WebMD and FamilyDoctor.org met most of the criteria, while pharmaceutical company sites tended to meet the fewest.

  11. VennDiagramWeb: a web application for the generation of highly customizable Venn and Euler diagrams.

    Science.gov (United States)

    Lam, Felix; Lalansingh, Christopher M; Babaran, Holly E; Wang, Zhiyuan; Prokopec, Stephenie D; Fox, Natalie S; Boutros, Paul C

    2016-10-03

    Visualization of data generated by high-throughput, high-dimensionality experiments is rapidly becoming a rate-limiting step in computational biology. There is an ongoing need to quickly develop high-quality visualizations that can be easily customized or incorporated into automated pipelines. This often requires an interface for manual plot modification, rapid cycles of tweaking visualization parameters, and the generation of graphics code. To facilitate this process for the generation of highly-customizable, high-resolution Venn and Euler diagrams, we introduce VennDiagramWeb: a web application for the widely used VennDiagram R package. VennDiagramWeb is hosted at http://venndiagram.res.oicr.on.ca/ . VennDiagramWeb allows real-time modification of Venn and Euler diagrams, with parameter setting through a web interface and immediate visualization of results. It allows customization of essentially all aspects of figures, but also supports integration into computational pipelines via download of R code. Users can upload data and download figures in a range of formats, and there is exhaustive support documentation. VennDiagramWeb allows the easy creation of Venn and Euler diagrams for computational biologists, and indeed many other fields. Its ability to support real-time graphics changes that are linked to downloadable code that can be integrated into automated pipelines will greatly facilitate the improved visualization of complex datasets. For application support please contact Paul.Boutros@oicr.on.ca.
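
    VennDiagramWeb itself produces R code for the VennDiagram package; as a rough Python analogue, the matplotlib-venn package can produce a comparable figure. This is a stand-in for illustration, not the tool's actual output.

    ```python
    # Programmatic Venn diagram generation, in the spirit of VennDiagramWeb,
    # using matplotlib-venn as a Python stand-in for the VennDiagram R package.
    import matplotlib.pyplot as plt
    from matplotlib_venn import venn2

    genes_a = {"TP53", "BRCA1", "EGFR", "MYC"}   # hypothetical gene lists
    genes_b = {"EGFR", "MYC", "KRAS"}

    venn2([genes_a, genes_b], set_labels=("Assay A", "Assay B"))
    plt.savefig("venn.png", dpi=300)  # high-resolution output for publication
    ```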

  12. 60. The World-Wide Inaccessible Web, Part 1: Browsing

    Science.gov (United States)

    Baggaley, Jon; Batpurev, Batchuluun

    2007-01-01

    Two studies are reported, comparing the browser loading times of webpages created using common Web development techniques. The loading speeds were estimated in 12 Asian countries by members of the "PANdora" network, funded by the International Development Research Centre (IDRC) to conduct collaborative research in the development of effective…

  13. Clever generation of rich SPARQL queries from annotated relational schema: application to Semantic Web Service creation for biological databases.

    Science.gov (United States)

    Wollbrett, Julien; Larmande, Pierre; de Lamotte, Frédéric; Ruiz, Manuel

    2013-04-15

    In recent years, a large amount of "-omics" data have been produced. However, these data are stored in many different species-specific databases that are managed by different institutes and laboratories. Biologists often need to find and assemble data from disparate sources to perform certain analyses. Searching for these data and assembling them is a time-consuming task. The Semantic Web helps to facilitate interoperability across databases. A common approach involves the development of wrapper systems that map a relational database schema onto existing domain ontologies. However, few attempts have been made to automate the creation of such wrappers. We developed a framework, named BioSemantic, for the creation of Semantic Web Services that are applicable to relational biological databases. This framework makes use of both Semantic Web and Web Services technologies and can be divided into two main parts: (i) the generation and semi-automatic annotation of an RDF view; and (ii) the automatic generation of SPARQL queries and their integration into Semantic Web Services backbones. We have used our framework to integrate genomic data from different plant databases. BioSemantic is a framework that was designed to speed integration of relational databases. We present how it can be used to speed the development of Semantic Web Services for existing relational biological databases. Currently, it creates and annotates RDF views that enable the automatic generation of SPARQL queries. Web Services are also created and deployed automatically, and the semantic annotations of our Web Services are added automatically using SAWSDL attributes. BioSemantic is downloadable at http://southgreen.cirad.fr/?q=content/Biosemantic.
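
    The SPARQL queries BioSemantic generates are not shown in the abstract; the sketch below illustrates the general shape of such a query issued with the SPARQLWrapper package, with the endpoint URL, prefix, and predicate names all being placeholders.

    ```python
    # Issue a SPARQL query of the kind a generated Semantic Web Service might
    # run against an RDF view of a relational biological database.
    from SPARQLWrapper import SPARQLWrapper, JSON

    sparql = SPARQLWrapper("http://example.org/sparql")  # placeholder endpoint
    sparql.setQuery("""
        PREFIX ex: <http://example.org/plant#>
        SELECT ?gene ?trait WHERE {
            ?gene a ex:Gene ;
                  ex:associatedWith ?trait .
        } LIMIT 10
    """)
    sparql.setReturnFormat(JSON)

    # Print each (gene, trait) binding returned by the endpoint.
    for row in sparql.query().convert()["results"]["bindings"]:
        print(row["gene"]["value"], row["trait"]["value"])
    ```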

  14. Validated CMS: Towards New Generation of Web Content Management Systems on Web 2.0

    Directory of Open Access Journals (Sweden)

    Noura Aknin

    2012-11-01

    Web 2.0 makes users the main actors in publishing content and creating applications on the web. Increasing information overload, and the consequent decrease in content quality, are the main problems of this domain. Content Management Systems (CMS) provide the ability to publish on the web by offering simple publishing tools for ordinary users with no technical skills. Content created on the web using a CMS is not well controlled and requires an efficient process that evaluates its quality. Therefore, Content Management Systems contribute to the problem of information and content quality, as do other Web 2.0 tools. The mechanism of validating content has proved to offer a high level of quality control by involving users in the process, in keeping with the Web 2.0 philosophy. From this perspective, we developed the Validated Content Management System (VCMS) as a new Web 2.0 tool that supports content validation mechanisms. This article presents the VCMS and its ability to provide effective quality control for web content. We introduce a new manner of collaborative publishing and give an overview of the features of our system and its core architecture.

  15. Online Test Automation for new Generation of Silverlight Web Applications

    OpenAIRE

    Appasami Govindasamy; Suresh Joseph. K; Annadurai P.

    2011-01-01

    New interactive, attractive, and device-independent graphical user interfaces (GUIs) for web applications are developed with new technologies like Silverlight and Moonlight. Silverlight is Microsoft's cross-platform runtime and development technology for running Web-based multimedia applications on the Windows platform. Moonlight is an open-source implementation of the Silverlight development platform for Linux and other operating systems. A Manufacturing Execution System (MES) is a framework which tries...

  16. Extracting Topic Words from the Web for Dialogue Sentence Generation

    OpenAIRE

    下川, 尚亮; Rafal, Rzepka; 荒木, 健治

    2009-01-01

    In this paper we extract topic words from Internet Relay Chat utterances. In such dialogues there are many more spoken-language expressions than in blogs or usual Web pages, and we presume that the constantly changing topic is difficult to determine only by nouns, which are usually used for topic recognition. We propose a method for determining a conversation topic that also considers adjective and verb associations retrieved from the Web. Our first experiments show that extracting asso...

  17. Web 2.0-based parts library management system

    Institute of Scientific and Technical Information of China (English)

    黄沈权; 顾新建; 祁国宁; 张勇为

    2009-01-01

    To promote the ordering of parts classification and correlation in Web-based parts libraries, methods and technologies of Web 2.0 were introduced into the Web-based parts library. Firstly, the difference between a Web 2.0-based parts library and a Web 1.0-based parts library was analyzed, and the key technologies involved in a Web 2.0-based parts library were summarized. Secondly, the framework of the Web 2.0-based parts library was presented. Around this framework, the application of Web 2.0 technology in parts libraries was studied; furthermore, combined with the Web 2.0-based self-organizing model, the parts library's self-organizing technology and the self-organizing technology for tag ontology construction and maintenance were studied. Finally, a prototype of the Web 2.0-based parts library management system was developed, providing a novel way to promote the popularization of parts library construction, maintenance, and use, and the ordering of resources in the parts library.

  18. Web-based ERP systems: the new generation : case study: mySAP ERP

    OpenAIRE

    Gomis, Marie-Joseph

    2007-01-01

    With the proliferation of the Internet, ERP systems, like all domains of information technology, have undergone an important evolution. This final thesis project is a study of the evolution of ERP systems, more precisely of their migration to the Web, giving birth to a new generation of systems: Web-based or Web-enabled ERP systems. This migration to the Web is justified by the difficulty of making communication possible between partners' legacy systems and the organizations' ERP syste...

  19. The generation of large networks from web of science data

    NARCIS (Netherlands)

    Leydesdorff, L.; Khan, G.F.; Bornmann, L.

    2014-01-01

    During the 1990s, one of us developed a series of freeware routines (http://www.leydesdorff.net/indicators) that enable the user to organize downloads from the Web of Science (Thomson Reuters) into a relational database, and then to export matrices for further analysis in various formats (for exampl

  1. Development of Web-Based Learning Application for Generation Z

    Science.gov (United States)

    Hariadi, Bambang; Dewiyani Sunarto, M. J.; Sudarmaningtyas, Pantjawati

    2016-01-01

    This study aimed to develop a web-based learning application as a form of learning revolution. The form of learning revolution includes the provision of unlimited teaching materials, real time class organization, and is not limited by time or place. The implementation of this application is in the form of hybrid learning by using Google Apps for…

  3. MDA-BASED ATL TRANSFORMATION TO GENERATE MVC 2 WEB MODELS

    Directory of Open Access Journals (Sweden)

    Samir Mbarki

    2011-09-01

    Development and maintenance of Web applications is still a complex and error-prone process. We need integrated techniques and tool support for the automated generation of Web systems and a ready prescription for easy maintenance. The MDA approach proposes an architecture taking into account the development and maintenance of large and complex software. In this paper, we apply the MDA approach to generate a PSM from a UML design down to an MVC 2 Web implementation. To this end, we developed two meta-models handling UML class diagrams and MVC 2 Web applications, and then set up the transformation rules. These rules are expressed in the ATL language. To specify the transformation rules (especially for CRUD methods), we used UML profiles. To clearly illustrate the result generated by this transformation, we converted the generated XMI file into an EMF (Eclipse Modeling Framework) model.

  4. Dispersion engineered cob-web photonic crystal fibers for efficient supercontinuum generation

    OpenAIRE

    Sørensen, Niels Thorkild; Nikolov, N. I.; Bang, Ole; Bjarklev, Anders Overgaard; Hougaard, Kristian G.; Hansen, Kim Per

    2004-01-01

    Highly nonlinear cob-web photonic crystal fibers are engineered to have dispersion profiles for efficient direct degenerate four-wave mixing and optimized supercontinuum generation with low-power picosecond pulses. This process is robust to fiber irregularities.

  5. MDA-based ATL transformation to generate MVC 2 web models

    CERN Document Server

    Rahmouni, M'hamed

    2011-01-01

    Development and maintenance of Web applications is still a complex and error-prone process. We need integrated techniques and tool support for the automated generation of Web systems and a ready prescription for easy maintenance. The MDA approach proposes an architecture taking into account the development and maintenance of large and complex software. In this paper, we apply the MDA approach to generate a PSM from a UML design down to an MVC 2 Web implementation. To this end, we developed two meta-models handling UML class diagrams and MVC 2 Web applications, and then set up the transformation rules. These rules are expressed in the ATL language. To specify the transformation rules (especially for CRUD methods), we used UML profiles. To clearly illustrate the result generated by this transformation, we converted the generated XMI file into an EMF (Eclipse Modeling Framework) model.

  6. Flexible Generation of Pervasive Web Services using OSGi Declarative Services and OWL Ontologies

    DEFF Research Database (Denmark)

    Hansen, Klaus Marius; Zhang, Weishan; Fernandes, Joao

    2008-01-01

    There is a growing trend to deploy web services in pervasive computing environments. Implementing web services on networked, embedded devices leads to a set of challenges, including productivity of development, efficiency of web services, and handling of variability and dependencies of hardware and software platforms. To address these challenges, we developed a web service compiler called Limbo, in which Web Ontology Language (OWL) ontologies are used to make the Limbo compiler aware of its compilation context such as device hardware and software details, platform dependencies, and resource/power consumption. The ontologies are used to configure Limbo for generating resource-efficient web service code. The architecture of Limbo follows the Blackboard architectural style and Limbo is implemented using the OSGi Declarative Services component model. The component model provides high flexibility...

  7. An interactive web-based design system for rubber injection mold: Automotive rubber parts

    OpenAIRE

    Chamnarn Thongmark; Jariyaporn Onwong

    2016-01-01

    This research aims at integrating a knowledge-based system and web-based technology to facilitate rubber and rubber composite injection mold design. The system integrates computer-aided design and web-based management by using an application programming interface. The research process started with gathering data and knowledge concerning rubber injection mold design and process, with the designed framework of the system included. An example part was demonstrated i...

  9. Dispersion engineered cob-web photonic crystal fibers for efficient supercontinuum generation

    DEFF Research Database (Denmark)

    Sørensen, Niels Thorkild; Nikolov, N.I.; Bang, Ole;

    2004-01-01

    Highly nonlinear cob-web photonic crystal fibers are engineered to have dispersion profiles for efficient direct degenerate four-wave mixing and optimized supercontinuum generation with low-power picosecond pulses. This process is robust to fiber irregularities.

  10. Design & Deploy Web 2.0 enable services over Next Generation Network Platform

    CERN Document Server

    Lakhtaria, Kamaljit I. (doi:10.5121/ijdms.2010.2305)

    2010-01-01

    The Next Generation Network (NGN) aims to integrate IP-based telecom infrastructures and provide the most advanced, high-speed, emerging value-added services. NGN is capable of providing highly innovative services able to integrate communication and Web services into a single platform. The IP Multimedia Subsystem (IMS), a leading NGN technology, enables a variety of NGN-compliant communications services to interoperate while being accessed through different kinds of access networks, preferably broadband. IMS–NGN services are essential for both consumer and corporate users, who are by now used to accessing services, even communications services, through the web and web-based communities and social networks. It is key for the success of IMS-based services that they be provided with efficient web access, so users can benefit from those new services by using web-based applications and user interfaces, not only NGN-IMS user equipment and the SIP protocol. Many services are under planning which are provided only under convergence of ...

  11. Analyzing Web 2.0 Integration with Next Generation Networks for Services Rendering

    CERN Document Server

    Lakhtaria, Kamaljit I

    2010-01-01

    The Next Generation Network (NGN) aims to integrate IP-based telecom infrastructures and provide the most advanced, high-speed, emerging value-added services. NGN is capable of providing highly innovative services able to integrate communication and Web services into a single platform. The IP Multimedia Subsystem (IMS), a leading NGN technology, enables a variety of NGN-compliant communications services to interoperate while being accessed through different kinds of access networks, preferably broadband. IMS–NGN services are essential for both consumer and corporate users, who are by now used to accessing services, even communications services, through the web and web-based communities and social networks. It is key for the success of IMS-based services that they be provided with efficient web access, so users can benefit from those new services by using web-based applications and user interfaces, not only NGN-IMS user equipment and the SIP protocol. Many services are under planning which are provided only under convergence of ...

  12. Web services in third-generation service platforms

    NARCIS (Netherlands)

    Lagerberg, Ko; Plas, Dirk-Jaap; Wegdam, Maarten

    2002-01-01

    In third-generation (3G) networks, third-party service developers will have access to the mobile network resources using open network interfaces, such as the 3rd Generation Partnership Project's (3GPP's) Open Service Access (OSA). The service platforms that offer these interfaces provide interoperab

  13. Automatic WSDL-guided Test Case Generation for PropEr Testing of Web Services

    Directory of Open Access Journals (Sweden)

    Konstantinos Sagonas

    2012-10-01

    With web services already being key ingredients of modern web systems, automatic and easy-to-use but at the same time powerful and expressive testing frameworks for web services are increasingly important. Our work aims at fully automatic testing of web services: ideally the user only specifies properties that the web service is expected to satisfy, in the form of input-output relations, and the system handles all the rest. In this paper we present in detail the component which lies at the heart of this system: how the WSDL specification of a web service is used to automatically create test case generators that can be fed to PropEr, a property-based testing tool, to create structurally valid random test cases for its operations and check its responses. Although the process is fully automatic, our tool optionally allows the user to easily modify its output to either add semantic information to the generators or write properties that test for more involved functionality of the web services.

  14. BEAT: A Web-Based Boolean Expression Fault-Based Test Case Generation Tool

    Science.gov (United States)

    Chen, T. Y.; Grant, D. D.; Lau, M. F.; Ng, S. P.; Vasa, V. R.

    2006-01-01

    BEAT is a Web-based system that generates fault-based test cases from Boolean expressions. It is based on the integration of our several fault-based test case selection strategies. The generated test cases are considered to be fault-based, because they are aiming at the detection of particular faults. For example, when the Boolean expression is in…
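
    Fault-based selection can be illustrated concretely: for each hypothesized fault (here, a single-literal negation), pick an input on which the faulty expression disagrees with the original. BEAT's fault classes and selection strategies are broader; this sketch shows only the principle.

    ```python
    # Fault-based test generation for a Boolean expression: keep truth
    # assignments that distinguish the original expression from its mutants.
    from itertools import product

    VARS = ("a", "b", "c")
    orig = lambda a, b, c: (a and b) or c

    # Single-literal negation mutants of the expression above (hand-written).
    mutants = [lambda a, b, c: (not a and b) or c,
               lambda a, b, c: (a and not b) or c,
               lambda a, b, c: (a and b) or not c]

    tests = set()
    for mutant in mutants:
        for point in product([False, True], repeat=len(VARS)):
            if orig(*point) != mutant(*point):   # this input detects the fault
                tests.add(point)
                break                            # one detecting test per mutant

    for t in sorted(tests):
        print(dict(zip(VARS, t)))
    ```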

  15. Statmaster and HEROS - web-based courses first and second generation

    DEFF Research Database (Denmark)

    Larsen, Pia Veldt; Rootzen, Helle

    2008-01-01

    With the increasing focus on life-long learning, and with the convenience and accessibility of the Internet, the market for web-based courses has expanded vastly in recent times, in particular in connection with continuing education. However, teaching web-based courses presents various technical as well as pedagogical challenges. Some of these challenges are addressed, and means of dealing with them are suggested. A second generation of web-based courses is comprised of learning objects, which allows for tailoring courses for specialized groups of students and accommodating individualized learning. The concept of learning objects and how they are used to form new courses are discussed.

  16. Generation Gap and the Impact of the Web on Goods Quality Perceptions

    Science.gov (United States)

    Wan, Yun; Nakayama, Makoto; Sutcliffe, Norma

    This study explores how age and general online shopping experience affect consumer perceptions on product quality uncertainty. Using the survey data collected from 549 consumers, we investigated how they perceive the uncertainty of product quality on six search, experience and credence goods. The ANOVA results show that age and the Web shopping experience of consumers are significant factors. A generation gap is indeed seen for all but one experience good. Web shopping experience is not a significant factor for search goods but is for experience and credence goods. There is an interaction effect between age and Web shopping experience for one credence good. Implications of these results are discussed.

  18. Implementing the WebSocket Protocol Based on Formal Modelling and Automated Code Generation

    DEFF Research Database (Denmark)

    Simonsen, Kent Inge; Kristensen, Lars Michael

    2014-01-01

    ...with pragmatic annotations for automated code generation of protocol software. The contribution of this paper is an application of the approach as implemented in the PetriCode tool to obtain protocol software implementing the IETF WebSocket protocol. This demonstrates the scalability of our approach to real protocols. Furthermore, we perform formal verification of the CPN model prior to code generation, and test the implementation for interoperability against the Autobahn WebSocket test-suite, resulting in 97% and 99% success rates for the client and server implementation, respectively. The tests show...
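
    Independent of the generated code, a minimal interoperability smoke test against a running WebSocket server might look as follows, using the Python websockets package; the URI is a placeholder, and the Autobahn suite used in the paper is far more thorough.

    ```python
    # Smoke-test a WebSocket server implementation: connect, send a frame,
    # and check that it is echoed back. Requires a server listening at the URI.
    import asyncio
    import websockets

    async def echo_check(uri="ws://localhost:9001"):  # placeholder URI
        async with websockets.connect(uri) as ws:
            await ws.send("ping")
            assert await ws.recv() == "ping", "server failed to echo"

    asyncio.run(echo_check())
    ```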

  19. Web 2.0 and the Net Generation - A Critical Perspective

    DEFF Research Database (Denmark)

    Ryberg, Thomas

    2012-01-01

    In the recent years, social media and web 2.0 have been hot topics within educational debates and within the research area of networked learning. The latter is evident from symposia and papers from the last years' networked learning conferences, but also European research projects, special issues, and books have revolved around social media, web 2.0, personal learning environments, student-centred learning, and student-generated content. Alongside these internet developments we have witnessed debates on what schools and universities can do to cater to the 'net-generation' or the 'digital natives'...

  1. Analyzing Web 2.0 Integration with Next Generation Networks for Services Rendering

    Directory of Open Access Journals (Sweden)

    Kamaljit I. Lakhtaria

    2010-08-01

    Full Text Available Next Generation Networks (NGN) aim to integrate IP-based telecom infrastructures and provide advanced, high-speed, emerging value-added services. NGN can deliver innovative services that integrate communication and Web services on a single platform. The IP Multimedia Subsystem (IMS), a leading NGN technology, enables a variety of NGN-compliant communications services to interoperate while being accessed through different kinds of access networks, preferably broadband. IMS-NGN services needed by both consumer and corporate users are by now accessed through the web and through web-based communities and social networks, so it is key to the success of IMS-based services that they be provided with efficient web access: users should benefit from these new services through web-based applications and user interfaces, not only through NGN-IMS user equipment and the SIP protocol. Many planned services can be provided only through the convergence of IMS and Web 2.0. Convergence between Web 2.0 and NGN-IMS creates and serves newly invented, innovative, entertaining, informative, and user-centric services and applications, which merge features from the WWW and communication worlds. On the one hand, interactivity, ubiquity, social orientation, user participation and content generation are characteristics coming from Web 2.0 services; in parallel, IMS enables services including multimedia telephony, media sharing (video and audio), instant messaging with presence and context, and online directories, all of them applicable to mobile, fixed or convergent telecom networks. This paper brings out the benefits of adopting Web 2.0 technologies for telecom services and, since services today are mainly driven by users' needs, proposes the concept of a unique customizable service interface.

  2. DISCRETIZATION APPROACH USING RAY-TESTING MODEL IN PARTING LINE AND PARTING SURFACE GENERATION

    Institute of Scientific and Technical Information of China (English)

    HAN Jianwen; JIAN Bin; YAN Guangrong; LEI Yi

    2007-01-01

    Surface classification, 3D parting line and parting surface generation, and demoldability analysis, which help select the optimal parting direction and optimal parting line, are involved in automatic cavity design based on the ray-testing model. A new ray-testing approach is presented to classify the part surfaces into core/cavity surfaces and undercut surfaces by automatically identifying the visibility of surfaces. A simple, direct and efficient algorithm to identify surface visibility is developed. The algorithm is robust and adapts to rather complicated geometry, so it is valuable in computer-aided mold design systems. To validate the efficiency of the approach, an experimental program is implemented. Case studies show that the approach is practical and valuable in automatic parting line and parting surface generation.

  3. A Two-Tiered Model for Analyzing Library Web Site Usage Statistics, Part 1: Web Server Logs.

    Science.gov (United States)

    Cohen, Laura B.

    2003-01-01

    Proposes a two-tiered model for analyzing web site usage statistics for academic libraries: one tier for library administrators that analyzes measures indicating library use, and a second tier for web site managers that analyzes measures aiding in server maintenance and site design. Discusses the technology of web site usage statistics, and…

  4. Automatic Generation of Data Types for Classification of Deep Web Sources

    Energy Technology Data Exchange (ETDEWEB)

    Ngu, A H; Buttler, D J; Critchlow, T J

    2005-02-14

    A Service Class Description (SCD) is an effective meta-data based approach for discovering Deep Web sources whose data exhibit some regular patterns. However, it is tedious and error prone to create an SCD description manually. Moreover, a manually created SCD is not adaptive to the frequent changes of Web sources. It requires its creator to identify all the possible input and output types of a service a priori. In many domains, it is impossible to exhaustively list all the possible input and output data types of a source in advance. In this paper, we describe machine learning approaches for automatic generation of the data types of an SCD. We propose two different approaches for learning data types of a class of Web sources. The Brute-Force Learner is able to generate data types that can achieve high recall, but with low precision. The Clustering-based Learner generates data types that have a high precision rate, but with a lower recall rate. We demonstrate the feasibility of these two learning-based solutions for automatic generation of data types for citation Web sources and present a quantitative evaluation of these two solutions.

  5. Brainstorming Design for Health: Helping Patients Utilize Patient-Generated Information on the Web.

    Science.gov (United States)

    Huh, Jina; Hartzler, Andrea; Munson, Sean; Anderson, Nick; Edwards, Kelly; Gore, John L; McDonald, David; O'Leary, Jim; Parker, Andrea; Streat, Derek; Yetisgen-Yildiz, Meliha; Pratt, Wanda; Ackerman, Mark S

    2012-01-01

    Researchers and practitioners show increasing interest in utilizing patient-generated information on the Web. Although the HCI and CSCW communities have provided many exciting opportunities for exploring new ideas and building a broad agenda in health, few venues offer a platform for interdisciplinary and collaborative brainstorming about design challenges and opportunities in this space. The goal of this workshop is to provide participants with opportunities to interact with stakeholders from diverse backgrounds and practices (researchers, practitioners, designers, programmers, and ethnographers) and together generate tangible design outcomes that utilize patient-generated information on the Web. Through small multidisciplinary group work, we will provide participants with new collaboration opportunities, understanding of the state of the art, inspiration for future work, and ideally avenues for continuing to develop research and design ideas generated at the workshop.

  6. Report covering examination of parts from downhole steam generators. [Combustor head and sleeve parts

    Energy Technology Data Exchange (ETDEWEB)

    Pettit, F. S.; Meier, G. H.

    1983-08-01

    Combustor head and sleeve parts were examined by using optical and scanning electron metallography after use in oxygen/diesel and air/diesel downhole steam generators. The degradation of the different alloy components is described in terms of reactions with oxygen, sulfur and carbon in the presence of cyclic stresses, all generated by the combustion process. Recommendations are presented for component materials (alloys and coatings) to extend component lives in the downhole steam generators. 9 references, 22 figures, 3 tables.

  7. Technical Evaluation Report 61: The World-Wide Inaccessible Web, Part 2: Internet routes

    Directory of Open Access Journals (Sweden)

    Jim Klaas

    2007-06-01

    Full Text Available In the previous report in this series, Web browser loading times were measured in 12 Asian countries, and were found to be up to four times slower than commonly prescribed as acceptable. Failure of webpages to load at all was frequent. The current follow-up study compares these loading times with the complexity of the Internet routes linking the Web users and the Web servers hosting them. The study was conducted in the same 12 Asian countries, with the assistance of members of the International Development Research Centre’s PANdora distance education research network. The data were generated by network members in Bhutan, Cambodia, India, Indonesia, Laos, Mongolia, the Philippines, Sri Lanka, Pakistan, Singapore, Thailand, and Vietnam. Additional data for the follow-up study were collected in China. Using a ‘traceroute’ routine, the study indicates that webpage loading time is linked to the complexity of the Internet routes between Web users and the host server. It is indicated that distance educators can apply such information in the design of improved online delivery and mirror sites, notably in areas of the developing world which currently lack an effective infrastructure for online education.
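    The route-complexity measurement is simple to reproduce in outline. A minimal sketch, assuming a Unix-style traceroute binary on the PATH and using a placeholder host rather than any site from the study:

```python
# Sketch of the study's measurement pairing: hop count (route complexity)
# versus page-load time. Host and URL are placeholders; a real measurement
# would repeat this many times across many sites and times of day.
import subprocess, time, urllib.request

def hop_count(host: str) -> int:
    out = subprocess.run(["traceroute", "-m", "30", host],
                         capture_output=True, text=True, timeout=120).stdout
    # Hop lines begin with the hop number; the header line does not.
    return sum(1 for line in out.splitlines() if line.strip()[:1].isdigit())

def load_time(url: str) -> float:
    start = time.perf_counter()
    urllib.request.urlopen(url, timeout=60).read()
    return time.perf_counter() - start

host = "example.org"  # placeholder host
print(f"{host}: {hop_count(host)} hops, {load_time('https://' + host):.2f} s page load")
```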

  8. Unified framework for generation of 3D web visualization for mechatronic systems

    Science.gov (United States)

    Severa, O.; Goubej, M.; Konigsmarkova, J.

    2015-11-01

    The paper deals with development of a unified framework for generation of 3D visualizations of complex mechatronic systems. It provides a high-fidelity representation of executed motion by allowing direct employment of a machine geometry model acquired from a CAD system. Open-architecture multi-platform solution based on latest web standards is achieved by utilizing a web browser as a final 3D renderer. The results are applicable both for simulations and development of real-time human machine interfaces. Case study of autonomous underwater vehicle control is provided to demonstrate the applicability of the proposed approach.

  9. An interactive web-based design system for rubber injection mold: Automotive rubber parts

    Directory of Open Access Journals (Sweden)

    Chamnarn Thongmark

    2016-10-01

    Full Text Available This research aims at integrating a knowledge-based system and web-based technology to facilitate rubber and rubber-composite injection mold design. The system integrates both computer-aided design and web-based management by using an application programming interface. The research process started with gathering data and knowledge concerning rubber injection mold design and process, including the design of the system framework. An example part was used to demonstrate and validate the developed system. Based on standardized procedures, the system provides guidance able to resolve relevant issues at the early stage of mold design. The system can be used both for design and for training in rubber mold fabrication.

  10. THE GENERATIVE REPRODUCTIVE CHARACTERISTICS OF RED DRAGON FRUIT (Hylocereus polyrhizus (Web.) Britton & Rose, CACTACEAE)

    OpenAIRE

    Eniek Kriswiyanti

    2013-01-01

    This study aimed to investigate the generative reproductive characteristics of red dragon fruit (Hylocereus polyrhizus (Web.) Britton & Rose), that is the characteristics and the development of male and female reproductive system and the embryo. The flowering morphology was observed through the gametogenesis process and the process of embryogenesis was observed using the methods of squash, cytolysis, and embedding of flower before anthesis, during and after anthesis. The results showed th...

  11. Design of Universities Website Group Construction Scheme Based on WebPart

    Institute of Scientific and Technical Information of China (English)

    高洁羽

    2013-01-01

    This paper analyzes the current state of university website construction. Most universities run numerous websites that one-sidedly emphasize individual character, lack unified planning, and leave information isolated and loosely coupled. To address this, a WebPart-based construction scheme for a university website group is proposed: the website-group concept is used to build a system framework that achieves unified planning and information sharing across the university's site cluster, while WebPart technology meets the personalization needs of second-level subsites. The scheme provides a reference for website group construction at colleges and universities.

  12. A century of influence: Part 2. The greatest generation.

    Science.gov (United States)

    Burke, Chris

    2015-08-01

    The story of orthodontics during the first 100 years of Journal publication can be told through the people who lived it. As part of the American Journal of Orthodontics and Dentofacial Orthopedics' Centennial celebration, we present 100 people who most influenced the specialty during the last 100 years. Part 2 picks up with "the greatest generation" and describes those born in the first 2 decades of the 20th century. Whether born in Europe or the United States, their lives and educations were disrupted by world war. Many served during the years of conflict, and a few paid an even heavier price. After World War II, they returned home or immigrated to the United States and resumed their life's work in orthodontics.

  13. The Creative task Creator: a tool for the generation of customized, Web-based creativity tasks.

    Science.gov (United States)

    Pretz, Jean E; Link, John A

    2008-11-01

    This article presents a Web-based tool for the creation of divergent-thinking and open-ended creativity tasks. A Java program generates HTML forms with PHP scripting that run an Alternate Uses Task and/or open-ended response items. Researchers may specify their own instructions, objects, and time limits, or use default settings. Participants can also be prompted to select their best responses to the Alternate Uses Task (Silvia et al., 2008). Minimal programming knowledge is required. The program runs on any server, and responses are recorded in a standard MySQL database. Responses can be scored using the consensual assessment technique (Amabile, 1996) or Torrance's (1998) traditional scoring method. Adoption of this Web-based tool should facilitate creativity research across cultures and access to eminent creators. The Creative Task Creator may be downloaded from the Psychonomic Society's Archive of Norms, Stimuli, and Data, www.psychonomic.org/archive.
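    The generator itself emits HTML forms with PHP scripting from Java; purely to illustrate the form-generation idea in one language, a hedged Python sketch in which the action URL, field names and default time limit are all made up rather than taken from the tool:

```python
# Illustrative sketch of generating a timed Alternate Uses Task form.
# The action URL, field names and defaults are hypothetical, not the tool's.
from string import Template

FORM = Template("""<form method="post" action="submit.php">
  <p>List as many unusual uses as you can for: <b>$object</b>
     (time limit: $seconds seconds)</p>
  <textarea name="uses" rows="10" cols="60"></textarea>
  <input type="hidden" name="time_limit" value="$seconds">
  <input type="submit" value="Done">
</form>""")

def make_task(obj: str, seconds: int = 120) -> str:
    """Return the HTML for one Alternate Uses Task item."""
    return FORM.substitute(object=obj, seconds=seconds)

print(make_task("brick"))
```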

  14. A Web Tool for Generating High Quality Machine-readable Biological Pathways.

    Science.gov (United States)

    Ramirez-Gaona, Miguel; Marcu, Ana; Pon, Allison; Grant, Jason; Wu, Anthony; Wishart, David S

    2017-02-08

    PathWhiz is a web server built to facilitate the creation of colorful, interactive, visually pleasing pathway diagrams that are rich in biological information. The pathways generated by this online application are machine-readable and fully compatible with essentially all web-browsers and computer operating systems. It uses a specially developed, web-enabled pathway drawing interface that permits the selection and placement of different combinations of pre-drawn biological or biochemical entities to depict reactions, interactions, transport processes and binding events. This palette of entities consists of chemical compounds, proteins, nucleic acids, cellular membranes, subcellular structures, tissues, and organs. All of the visual elements in it can be interactively adjusted and customized. Furthermore, because this tool is a web server, all pathways and pathway elements are publicly accessible. This kind of pathway "crowd sourcing" means that PathWhiz already contains a large and rapidly growing collection of previously drawn pathways and pathway elements. Here we describe a protocol for the quick and easy creation of new pathways and the alteration of existing pathways. To further facilitate pathway editing and creation, the tool contains replication and propagation functions. The replication function allows existing pathways to be used as templates to create or edit new pathways. The propagation function allows one to take an existing pathway and automatically propagate it across different species. Pathways created with this tool can be "re-styled" into different formats (KEGG-like or text-book like), colored with different backgrounds, exported to BioPAX, SBGN-ML, SBML, or PWML data exchange formats, and downloaded as PNG or SVG images. The pathways can easily be incorporated into online databases, integrated into presentations, posters or publications, or used exclusively for online visualization and exploration. This protocol has been successfully applied to

  15. WaveNet: A Web-Based Metocean Data Access, Processing and Analysis Tool; Part 5 - WW3 Database

    Science.gov (United States)

    2015-02-01

    ... modeling and planning missions require metocean data (e.g., winds, waves, tides, water levels). WaveNet is a web-based graphical-user-interface (GUI) that provides data for project planning, design, and evaluation studies, including how to generate input files for numerical wave models. WaveNet employs a Google ...

  16. Human Trafficking in the United States. Part II. Survey of U.S. Government Web Resources for Publications and Data

    Science.gov (United States)

    Panigabutra-Roberts, Anchalee

    2012-01-01

    This second part of a two-part series is a survey of U.S. government web resources on human trafficking in the United States, particularly of the online publications and data included on agencies' websites. Overall, the goal is to provide an introduction, an overview, and a guide on this topic for library staff to use in their research and…

  17. Differences of perceived image generated through the Web site: Empirical Evidence Obtained in Spanish Destinations

    Directory of Open Access Journals (Sweden)

    Juan Jose Blazquez-Resino

    2016-11-01

    Full Text Available In this paper, a study of the perceived destination image created by promotional web pages is expounded, in an attempt to identify their differences as generators of destination image in the consumers' minds. Specifically, it seeks to analyse whether the web sites of different Spanish regions improve the image that consumers have of the destination, identifying their main dimensions and analysing their effect on satisfaction and on the future behavioural intentions of potential visitors. To achieve these objectives and verify the hypotheses, a laboratory experiment was performed, in which we determined what changes are produced in the tourist's prior image after browsing the tourist web sites of three different regions. Moreover, it analyses the differences in the effect of the perceived image on satisfaction and potential visitors' future behavioural intentions. The results enable us to identify differences in the composition of the perceived image according to the destination, while confirming the significant effect of the different perceived image dimensions on satisfaction. They also give managers a better understanding of the effectiveness of their sites from a consumer perspective, as well as suggestions for achieving greater efficiency in their communication actions so as to improve visitors' motivation to travel to the destination.

  18. A web based tool for storing and visualising data generated within a smart home.

    Science.gov (United States)

    McDonald, H A; Nugent, C D; Moore, G; Finlay, D D; Hallberg, J

    2011-01-01

    There is a growing need to re-assess the current approaches available to researchers for storing and managing heterogeneous data generated within a smart home environment. In our current work we have developed the homeML Application, a web-based tool to support researchers engaged in the area of smart home research as they perform experiments. Within this paper the homeML Application is presented, including its fundamental components: the homeML Repository and the homeML Toolkit. Results from a usability study conducted with 10 computer science researchers are presented, the initial results of which have been positive.

  19. Reducing Excessive Amounts of Data: Multiple Web Queries for Generation of Pun Candidates

    Directory of Open Access Journals (Sweden)

    Pawel Dybala

    2011-01-01

    Full Text Available Humor processing is still a less studied issue, both in NLP and AI. In this paper we contribute to this field. In our previous research we showed that adding a simple pun generator to a chatterbot can significantly improve its performance. The pun generator we used generated only puns based on words (not phrases. In this paper we introduce the next stage of the system's development—an algorithm allowing generation of phrasal pun candidates. We show that by using only the Internet (without any hand-made humor-oriented lexicons, it is possible to generate puns based on complex phrases. As the output list is often excessively long, we also propose a method for reducing the number of candidates by comparing two web-query-based rankings. The evaluation experiment showed that the system achieved an accuracy of 72.5% for finding proper candidates in general, and the reduction method allowed us to significantly shorten the candidates list. The parameters of the reduction algorithm are variable, so that the balance between the number of candidates and the quality of output can be manipulated according to needs.
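    The reduction step can be shown with toy data: rank the candidates under two separate web-query scores and keep only those near the top of both. The phrases and hit counts below are invented stand-ins for the paper's web-based measures:

```python
# Sketch of reducing a candidate list by intersecting two rankings.
# top_k trades list length against quality, mirroring the variable
# parameters described above; the scores are fabricated for illustration.
def reduce_candidates(scores_a: dict, scores_b: dict, top_k: int = 5) -> list:
    rank_a = {c: r for r, c in enumerate(sorted(scores_a, key=scores_a.get, reverse=True))}
    rank_b = {c: r for r, c in enumerate(sorted(scores_b, key=scores_b.get, reverse=True))}
    keep = [c for c in rank_a if c in rank_b and rank_a[c] < top_k and rank_b[c] < top_k]
    return sorted(keep)

hits_query_1 = {"phrase a": 120, "phrase b": 90, "phrase c": 3}
hits_query_2 = {"phrase a": 80, "phrase b": 5, "phrase c": 60}
print(reduce_candidates(hits_query_1, hits_query_2, top_k=2))  # -> ['phrase a']
```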

  20. Collection Selection for Distributed Web Search

    NARCIS (Netherlands)

    Bockting, S.

    2009-01-01

    Current popular web search engines, such as Google, Live Search and Yahoo!, rely on crawling to build an index of the World Wide Web. Crawling is a continuous process to keep the index fresh and generates an enormous amount of data traffic. By far the largest part of the web remains unindexed, becau

  1. The Role of Web Interviews as Part of a National Travel Survey

    DEFF Research Database (Denmark)

    Christensen, Linda

    2013-01-01

    Purpose — The paper is analysing the effect of adding a web survey to a traditional telephone-based national travel survey by asking the respondents to check in on the web and answer the questions there (Computer Assisted Web Interview, CAWI). If they are not participating by web they are as usual...... increase the quality of the survey in general. Originality/value of paper — In many countries authorities are considering how to reduce the cost of their national travel surveys. The value of the paper is to show that a combination of a CAWI and a CATI could be a good solution. Furthermore, it shows...

  2. PhpHMM Tool for Generating Speech Recogniser Source Codes Using Web Technologies

    Directory of Open Access Journals (Sweden)

    R. Krejčí

    2011-01-01

    Full Text Available This paper deals with the "phpHMM" software tool, which facilitates the development and optimisation of speech recognition algorithms. This tool is being developed in the Speech Processing Group at the Department of Circuit Theory, CTU in Prague, and it is used to generate the source code of a speech recogniser by means of the PHP scripting language and the MySQL database. The input of the system is a model of speech in the standard HTK format and a list of words to be recognised. The output consists of the source code and data structures in the C programming language, which are then compiled into an executable program. This tool is operated via a web interface.

  3. Web-based hazard and near-miss reporting as part of a patient safety curriculum.

    Science.gov (United States)

    Currie, Leanne M; Desjardins, Karen S; Levine, Ellen Sunni; Stone, Patricia W; Schnall, Rebecca; Li, Jianhua; Bakken, Suzanne

    2009-12-01

    As part of a patient safety curriculum, we developed a Web-based hazard and near-miss reporting system for postbaccalaureate nursing students to use during their clinical experiences in the first year of their combined BS-MS advanced practice nurse program. The 25-week clinical rotations included 2 days per week for 5 weeks each in community, medical-surgical, obstetrics, pediatrics, and psychiatric settings. During a 3-year period, 453 students made 21,276 reports. Of the 10,206 positive (yes) responses to a hazard or near miss, 6,005 hazards (59%) and 4,200 near misses (41%) were reported. The most common reports were related to infection, medication, environmental, fall, and equipment issues. Of the near misses, 1,996 (48%) had planned interceptions and 2,240 (52%) had unplanned interceptions. Types of hazards and near misses varied by rotation. Incorporating hazard and near-miss reporting into the patient safety curriculum was an innovative strategy to promote mindfulness among nursing students.

  4. Performance of a Web-based Method for Generating Synoptic Reports

    Science.gov (United States)

    Renshaw, Megan A.; Renshaw, Scott A.; Mena-Allauca, Mercy; Carrion, Patricia P.; Mei, Xiaorong; Narciandi, Arniris; Gould, Edwin W.; Renshaw, Andrew A.

    2017-01-01

    Context: The College of American Pathologists (CAP) requires synoptic reporting of all tumor excisions. Objective: To compare the performance of different methods of generating synoptic reports. Methods: Completeness, amendment rates, rate of timely ordering of ancillary studies (KRAS in T4/N1 colon carcinoma), and structured data file extraction were compared for four different synoptic report generating methods. Results: Use of the printed tumor protocols directly from the CAP website had the lowest completeness (84%) and highest amendment (1.8%) rates. Reformatting these protocols was associated with higher completeness (94%, P < 0.001) and reduced amendment (1%, P = 0.20) rates. Extraction into a structured data file was successful 93% of the time. Word-based macros improved completeness (98% vs. 94%, P < 0.001) but not amendment rates (1.5%). KRAS was ordered before sign out 89% of the time. In contrast, a web-based product with a reminder flag when items were missing, an embedded flag for data extraction, and a reminder to order KRAS when appropriate resulted in improved completeness (100%, P = 0.005), amendment rates (0.3%, P = 0.03), KRAS ordering before sign out (100%, P = 0.23), and structured data extraction (100%, P < 0.001) without reducing the speed (P = 0.34) or accuracy (P = 1.00) of data extraction by the reader. Conclusion: Completeness, amendment rates, ancillary test ordering rates, and data extraction rates vary significantly with the method used to construct the synoptic report. A web-based method compares favorably with all other methods examined and does not reduce reader usability.

  5. Uncovering the Hidden Web, Part II: Resources for Your Classroom. ERIC Digest.

    Science.gov (United States)

    Mardis, Marcia

    Too often, search engines don't see, and directories can overlook, clearinghouses, digital libraries, full-text databases, and learning objects. In contrast, the hidden Web is rich with these high-quality and cutting-edge learning materials. By integrating resources from the hidden Web into the classroom, educators extend their instruction in new…

  6. Uncovering the Hidden Web, Part I: Finding What the Search Engines Don't. ERIC Digest.

    Science.gov (United States)

    Mardis, Marcia

    Currently, the World Wide Web contains an estimated 7.4 million sites (OCLC, 2001). Yet even the most experienced searcher, using the most robust search engines, can access only about 16% of these pages (Dahn, 2001). The other 84% of the publicly available information on the Web is referred to as the "hidden," "invisible," or…

  7. Large-Scale Multiobjective Static Test Generation for Web-Based Testing with Integer Programming

    Science.gov (United States)

    Nguyen, M. L.; Hui, Siu Cheung; Fong, A. C. M.

    2013-01-01

    Web-based testing has become a ubiquitous self-assessment method for online learning. One useful feature that is missing from today's web-based testing systems is the reliable capability to fulfill different assessment requirements of students based on a large-scale question data set. A promising approach for supporting large-scale web-based…

  8. THE GENERATIVE REPRODUCTIVE CHARACTERISTICS OF RED DRAGON FRUIT (Hylocereus polyrhizus (Web.) Britton & Rose, CACTACEAE)

    Directory of Open Access Journals (Sweden)

    Eniek Kriswiyanti

    2013-04-01

    Full Text Available This study aimed to investigate the generative reproductive characteristics of red dragon fruit (Hylocereus polyrhizus (Web.) Britton & Rose), that is, the characteristics and development of the male and female reproductive systems and the embryo. The flowering morphology was observed through the gametogenesis process, and the process of embryogenesis was observed using the methods of squash, cytolysis, and embedding of flowers before, during and after anthesis. The results showed that the flower is funnel-shaped with many calyxes, stamens and crowns. The pollen was circular, sulcate, trilete, reticulate, and spheroidal. The development of the microgametophyte was at the second stage, with three nuclei, and the pistil developed after anthesis. Anthesis took place at night; pollen developed before anthesis and had not germinated. There is a single pistil, the head of the pistil has many branches, the style is longer than the stamens, and the flower is of the open type. The seeds are anatropous; the endosperm and globular-shaped embryo developed 5 days after anthesis, the embryo became torpedo-like and the differentiation of primary tissues was seen in the seeds 7 days after anthesis, and the arillus was generated from development of the megasporangium. Keywords: embedding, self-incompatibility, spheroidal, anatropous, arillus.

  9. MACHINE LEARNING IMPLEMENTATION FOR THE CLASSIFICATION OF ATTACKS ON WEB SYSTEMS. PART 1

    Directory of Open Access Journals (Sweden)

    K. Smirnova

    2017-08-01

    Full Text Available The possibility of applying machine learning to the classification of malicious requests to a Web application is considered. This approach excludes the use of deterministic analysis systems (for example, expert systems) and is based on a cascade of neural networks or perceptrons, an approximate model of the real human brain. The main idea of the work is to make it possible to describe complex attack vectors consisting of feature sets, with abstract terms for compiling a training sample, while controlling the quality of recognition and classification in each of the participating layers (networks), and with the ability to adjust not the entire network but only the small part of it whose training admitted an error or inaccuracy. The design of the developed network can be described as a cascaded, scalable neural network. The developed intrusion detection system uses a three-layer neural network, whose layers can be built independently of each other in cascades. In the first layer there is a corresponding network for each recognized attack class, and correctness is checked on that network. To train this layer, we chose classes that can be classified unambiguously as yes or no, that is, classes that are linearly separable. A layer is thus obtained not simply of neurons but of micro-sets of them, which can best determine whether a given data class is present in the query or not. The following layers are not trained to recognize the attacks themselves; they are trained to recognize that a set of attacks creates certain threats. This makes it possible to recognize more accurately an attacker's attempts to bypass the defense system, to classify the target of an attack, and not merely to detect its occurrence. Simple layering keeps the percentage of false positives to a minimum.
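    The cascade can be sketched with off-the-shelf parts. A hedged illustration assuming scikit-learn, with toy features and labels standing in for real HTTP request data and attack classes:

```python
# Toy cascade: one binary perceptron per attack class (layer 1), whose
# outputs feed a second-layer classifier that judges the overall threat.
# Features, labels and class names are fabricated for illustration.
import numpy as np
from sklearn.linear_model import Perceptron

rng = np.random.default_rng(0)
X = rng.random((200, 10))               # stand-in request feature vectors
y_sqli = (X[:, 0] > 0.7).astype(int)    # stand-in "SQL injection" labels
y_xss = (X[:, 1] > 0.6).astype(int)     # stand-in "XSS" labels
threat = (y_sqli | y_xss)               # layer-2 target: any threat present

layer1 = [Perceptron().fit(X, y) for y in (y_sqli, y_xss)]
Z = np.column_stack([clf.predict(X) for clf in layer1])  # per-class flags
layer2 = Perceptron().fit(Z, threat)                     # combines the flags

print("layer-2 training accuracy:", layer2.score(Z, threat))
```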

  10. Start Your Search Engines. Part One: Taming Google--and Other Tips to Master Web Searches

    Science.gov (United States)

    Adam, Anna; Mowers, Helen

    2008-01-01

    There are a lot of useful tools on the Web, all those social applications, and the like. Still most people go online for one thing--to perform a basic search. For most fact-finding missions, the Web is there. But--as media specialists well know--the sheer wealth of online information can hamper efforts to focus on a few reliable references.…

  11. Differences in Learning Preferences by Generational Cohort: Implications for Instructional Design in Corporate Web-Based Learning

    Science.gov (United States)

    Kriegel, Jessica

    2013-01-01

    In today's global and high-tech economy, the primary contributing factor to sustainable competitive advantage is the strategic development of employees, an organization's only unique asset. However, with four generations actively present in the workforce and the proliferation of web-based learning as a key method for developing…

  13. Using Semantic Web technologies for the generation of domain-specific templates to support clinical study metadata standards.

    Science.gov (United States)

    Jiang, Guoqian; Evans, Julie; Endle, Cory M; Solbrig, Harold R; Chute, Christopher G

    2016-01-01

    The Biomedical Research Integrated Domain Group (BRIDG) model is a formal domain analysis model for protocol-driven biomedical research, and serves as a semantic foundation for application and message development in the standards developing organizations (SDOs). The increasing sophistication and complexity of the BRIDG model requires new approaches to the management and utilization of the underlying semantics to harmonize domain-specific standards. The objective of this study is to develop and evaluate a Semantic Web-based approach that integrates the BRIDG model with ISO 21090 data types to generate domain-specific templates to support clinical study metadata standards development. We developed a template generation and visualization system based on an open source Resource Description Framework (RDF) store backend, a SmartGWT-based web user interface, and a "mind map" based tool for the visualization of generated domain-specific templates. We also developed a RESTful Web Service informed by the Clinical Information Modeling Initiative (CIMI) reference model for access to the generated domain-specific templates. A preliminary usability study was performed, and all reviewers (n = 3) gave very positive responses to the evaluation questions in terms of usability and the capability of meeting the system requirements (average score 4.6). Semantic Web technologies provide a scalable infrastructure and have great potential to enable computable semantic interoperability of models in the intersection of health care and clinical research.
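    As a loose illustration of the RDF-backed template generation idea (not the authors' system; the namespace URI and attribute names below are placeholders, not BRIDG), a minimal sketch assuming the rdflib package:

```python
# Store a few illustrative model triples and emit a flat template for one
# class. Namespace and attribute names are hypothetical, not the BRIDG model.
from rdflib import Graph, Literal, Namespace

EX = Namespace("http://example.org/bridg#")  # placeholder namespace
g = Graph()
g.add((EX.StudySubject, EX.hasAttribute, Literal("birthDate")))
g.add((EX.StudySubject, EX.hasAttribute, Literal("administrativeGender")))
g.add((EX.Study, EX.hasAttribute, Literal("protocolId")))

def template_for(cls):
    """Collect a class's attributes into a simple template dictionary."""
    attrs = [str(o) for _, _, o in g.triples((cls, EX.hasAttribute, None))]
    return {"class": str(cls), "fields": sorted(attrs)}

print(template_for(EX.StudySubject))
```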

  14. Next-Generation Real-Time Geodetic Station Sensor Web for Natural Hazards Research and Applications

    Science.gov (United States)

    Bock, Y.; Clayton, R. W.; Fang, P.; Geng, J.; Gutman, S. I.; Kedar, S.; Laber, J. L.; Moore, A. W.; Owen, S. E.; Small, I.; Squibb, M. B.; Webb, F.; Yu, E.

    2012-12-01

    We report on a NASA AIST project focused on better forecasting, assessing, and mitigating natural hazards, including earthquakes, tsunamis, and extreme storms and flooding through development and implementation of a modular technology for the next-generation in-situ geodetic station, and a Geodetic Sensor Web to support the flow of information from multiple stations to scientists, mission planners, decision makers, and first responders. Meaningful warnings save lives when issued within 1-2 minutes for destructive earthquakes, several tens of minutes for tsunamis, and up to several hours for extreme storms and flooding, and can be provided by on-site fusion of multiple data types and generation of higher-order data products: GPS and accelerometer measurements to estimate point displacements, and GPS and meteorological measurements to estimate moisture variability in the free atmosphere. By operating semi-autonomously, each station can provide low-latency, high-fidelity and compact data products within the constraints of narrow communications bandwidth that often accompanies natural disasters. The project encompasses the following tasks, including hardware and software components: (1) Development of a power-efficient, low-cost, plug-in Geodetic Module for fusion of data from in situ sensors including GPS, a MEMS accelerometer package, and a MEMS meteorological sensor package, for deployment at 26 existing continuous GPS stations in southern California. The low-cost modular design is scalable to the many existing continuous GPS stations worldwide. (2) Estimation of new on-the-fly data products with 1 mm precision and accuracy, including three-dimensional broadband displacements and precipitable water, by new software embedded in the Geodetic Module's processor, rather than at a central processing facility. (3) Development of a Geodetic Sensor Web to allow the semi-autonomous sensors to transmit and receive information in real time by means of redundant sensor proxy

  15. Web Services and Next Generation Workforce

    Institute of Scientific and Technical Information of China (English)

    GLEESON Michael; REYNOLDS Gordon; DUGGAN Bryan

    2007-01-01

    Over the course of the past 15 years there has been an increasing trend towards the provision of service based functionality by Information Systems (IS). This has been due to a number of driving forces, however the main impetus of this tendency has been the development and implementation of Web Services technology. Web Services offer a solution to deliver a dynamic, task-driven computing environment and shared business processes, while also reducing costs. Web Services have overturned many traditional assumptions about Information Systems. They have enabled innovation and afforded a degree of flexibility not available previously from an Information System. This paper will give a brief overview of Web Services and corresponding technologies. It will then examine the required skills of graduates and identify the roles where these proficiencies will be required. Finally, it will propose that the flexibility afforded by the use of Web Services must be emulated in the teaching of Web Services.

  16. Recent advances in the Lesser Antilles observatories Part 2 : WebObs - an integrated web-based system for monitoring and networks management

    Science.gov (United States)

    Beauducel, François; Bosson, Alexis; Randriamora, Frédéric; Anténor-Habazac, Christian; Lemarchand, Arnaud; Saurel, Jean-Marie; Nercessian, Alexandre; Bouin, Marie-Paule; de Chabalier, Jean-Bernard; Clouard, Valérie

    2010-05-01

    Seismological and volcanological observatories have common needs and often common practical problems in multidisciplinary data monitoring applications. In fact, access to integrated data in real time and estimation of measurement uncertainties are key to efficient interpretation, but instrument variety and the heterogeneity of data sampling and acquisition systems lead to difficulties that may hinder crisis management. At the Guadeloupe observatory we have developed in recent years an operational system that attempts to answer these questions in the context of a multi-instrumental observatory. Based on a single computer server, open source scripts (Matlab, Perl, Bash, Nagios) and a Web interface, the system proposes: an extended database for the management of networks, stations and sensors (maps, station files with log history, technical characteristics, meta-data, photos and associated documents); web-form interfaces for manual data input/editing and export (like geochemical analysis, some of the deformation measurements, ...); routine data processing with dedicated automatic scripts for each technique, production of validated data outputs, static graphs on preset moving time intervals, and possible e-mail alarms; and automatic status checks of computers, acquisition processes, stations and individual sensors against simple criteria (file updates and signal quality), displayed as synthetic pages for technical control. In the special case of seismology, WebObs includes a digital stripchart multichannel continuous seismogram associated with the EarthWorm acquisition chain (see companion paper Part 1), an event classification database, location scripts, automatic shakemaps and a regional catalog with associated hypocenter maps accessed through a user request form. This system provides real-time Internet access for integrated monitoring, has become a strong support for exchange between scientists and technicians, and is widely open to interdisciplinary real-time modeling. It has been set up at

  17. Influence Of Process Conditions On Melt Blown Web Structure. Part IV - Fiber Diameter

    Directory of Open Access Journals (Sweden)

    Randall R. Bresee

    2006-08-01

    Full Text Available We are continuing an effort to quantitatively measure the influence of processing variables on the structure of polypropylene melt blown webs. In this paper, we report experimental measurements of the influence of die-to-collector distance, primary airflow rate, die temperature, collector speed and resin throughput rate on the diameter of fibers in fully-formed webs. This enabled us to quantitatively compare the influence of these processing variables on fiber diameter as well as achieve greater understanding of the melt blowing process.

  18. Spare parts management for nuclear power generation facilities

    Science.gov (United States)

    Scala, Natalie Michele

    With deregulation, utilities in the power sector face a much more urgent imperative to emphasize cost efficiencies as compared to the days of regulation. One major opportunity for cost savings is through reductions in spare parts inventories. Most utilities are accustomed to carrying large volumes of expensive, relatively slow-moving parts because of a high degree of risk-averseness. This attitude towards risk is rooted in the days of regulation. Under regulation, companies recovered capital inventory costs by incorporating them into the base rate charged to their customers. In a deregulated environment, cost recovery is no longer guaranteed. Companies must therefore reexamine their risk profile and develop policies for spare parts inventory that are appropriate for a competitive business environment. This research studies the spare parts inventory management problem in the context of electric utilities, with a focus on nuclear power. It addresses three issues related to this problem: criticality, risk, and policy. With respect to criticality and risk, a methodology is presented that incorporates the use of influence diagrams and the Analytic Hierarchy Process (AHP). A new method is developed for group aggregation in the AHP when Saaty and Vargas' (2007) dispersion test fails and decision makers are unwilling or unable to revise their judgments. With respect to policy, a quantitative model that ranks the importance of keeping a part in inventory and recommends a corresponding stocking policy through the use of numerical simulation is developed. This methodology and its corresponding models will enable utilities that have transitioned from a regulated to a deregulated environment become more competitive in their operations while maintaining safety and reliability standards. Furthermore, the methodology developed is general enough so that other utility plants, especially those in the nuclear sector, will be able to use this approach. In addition to regulated

  19. A thermoelectric power generating heat exchanger: Part I - Experimental realization

    CERN Document Server

    Bjørk, R; Pryds, N; Lindeburg, N; Viereck, P

    2016-01-01

    An experimental realization of a heat exchanger with commercial thermoelectric generators (TEGs) is presented. The power producing capabilities as a function of flow rate and temperature span are characterized for two different commercial heat transfer fluids and for three different thermal interface materials. The device is shown to produce 2 W per TEG or 0.22 W cm$^{-2}$ at a fluid temperature difference of 175 $^\\circ$C and a flow rate per fluid channel of 5 L min$^{-1}$. One experimentally realized design produced 200 W in total from 100 TEGs. For the design considered here, the power production is shown to depend more critically on the fluid temperature span than on the fluid flow rate. Finally, the temperature span across the TEG is shown to be 55% to 75% of the temperature span between the hot and cold fluids.
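    The quoted figures can be cross-checked with one line of arithmetic; the sanity check below is ours, not the paper's:

```python
# Quick consistency check of the figures quoted above (our arithmetic, not
# the paper's): power per TEG over power density gives the active area, and
# 100 TEGs at 2 W each should reproduce the reported 200 W total.
p_per_teg_w = 2.0        # W per TEG
p_density_w_cm2 = 0.22   # W/cm^2
n_tegs = 100

area_cm2 = p_per_teg_w / p_density_w_cm2   # ~9.1 cm^2, roughly 3 cm x 3 cm
total_w = n_tegs * p_per_teg_w             # 200 W, matching the reported design
print(f"active area per TEG ~ {area_cm2:.1f} cm^2, total ~ {total_w:.0f} W")
```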

  20. Allergy Risk Finder: Hypothesis Generation System for Allergy Risks via Web Service.

    Science.gov (United States)

    Aramaki, Eiji; Shikata, Shuko; Watabe, Eriko; Miyabe, Mai; Usuda, Yasuyuki; Ayaya, Satsuki; Kumagaya, Shinichiro

    2015-01-01

    This study's aim was to build a web service that automatically collects and tests hypotheses about possible allergy risks. We crowdsourced unknown allergy risks and obtained odds ratios. Using the collected hypotheses, we built a web service that estimates allergy risks from a questionnaire (consisting of 10 questions gathered from the crowdsourcing task) and, at the end, asks users for their own new hypotheses on possible allergy risks, so that they contribute to finding the causes of allergy. In the near future, clinical trials to validate the hypotheses found in this study are desired.

  1. ZOMG - I. How the cosmic web inhibits halo growth and generates assembly bias

    Science.gov (United States)

    Borzyszkowski, Mikolaj; Porciani, Cristiano; Romano-Díaz, Emilio; Garaldi, Enrico

    2017-07-01

    The clustering of dark matter haloes with fixed mass depends on their formation history, an effect known as assembly bias. We use zoom N-body simulations to investigate the origin of this phenomenon. For each halo at redshift z = 0, we determine the time in which the physical volume containing its final mass becomes stable. We consider five examples for which this happens at z ˜ 1.5 and two that do not stabilize by z = 0. The zoom simulations show that early-collapsing haloes do not grow in mass at z = 0 while late-forming ones show a net inflow. The reason is that 'accreting' haloes are located at the nodes of a network of thin filaments feeding them. Conversely, each 'stalled' halo lies within a prominent filament that is thicker than the halo size. Infalling material from the surroundings becomes part of the filament while matter within it recedes from the halo. We conclude that assembly bias originates from quenching halo growth due to tidal forces following the formation of non-linear structures in the cosmic web, as previously conjectured in the literature. Also the internal dynamics of the haloes change: the velocity anisotropy profile is biased towards radial (tangential) orbits in accreting (stalled) haloes. Our findings reveal the cause of the yet unexplained dependence of halo clustering on the anisotropy. Finally, we extend the excursion-set theory to account for these effects. A simple criterion based on the ellipticity of the linear tidal field combined with the spherical-collapse model provides excellent predictions for both classes of haloes.

  2. Tools for Modeling and Generating Safe Interface Interactions in Web Applications

    OpenAIRE

    Brambilla, Marco; Cabot, Jordi; Grossniklaus, Michael

    2010-01-01

    Modern Web applications that embed sophisticated user interfaces and business logic have rendered the original interaction paradigm of the Web obsolete. In previous work, we have advocated a paradigm shift from static content pages that are browsed by hyperlinks to a state-based model where back and forward navigation is replaced by a full-fledged interactive application paradigm, featuring undo and redo capabilities, with support for exception management policies and transactional properties...

  3. Wikis, blogs and podcasts: a new generation of Web-based tools for virtual collaborative clinical practice and education.

    Science.gov (United States)

    Boulos, Maged N Kamel; Maramba, Inocencio; Wheeler, Steve

    2006-08-15

    We have witnessed a rapid increase in the use of Web-based 'collaborationware' in recent years. These Web 2.0 applications, particularly wikis, blogs and podcasts, have been increasingly adopted by many online health-related professional and educational services. Because of their ease of use and rapidity of deployment, they offer the opportunity for powerful information sharing and ease of collaboration. Wikis are Web sites that can be edited by anyone who has access to them. The word 'blog' is a contraction of 'Web Log' - an online Web journal that can offer a resource rich multimedia environment. Podcasts are repositories of audio and video materials that can be "pushed" to subscribers, even without user intervention. These audio and video files can be downloaded to portable media players that can be taken anywhere, providing the potential for "anytime, anywhere" learning experiences (mobile learning). Wikis, blogs and podcasts are all relatively easy to use, which partly accounts for their proliferation. The fact that there are many free and Open Source versions of these tools may also be responsible for their explosive growth. Thus it would be relatively easy to implement any or all within a Health Professions' Educational Environment. Paradoxically, some of their disadvantages also relate to their openness and ease of use. With virtually anybody able to alter, edit or otherwise contribute to the collaborative Web pages, it can be problematic to gauge the reliability and accuracy of such resources. While arguably, the very process of collaboration leads to a Darwinian type 'survival of the fittest' content within a Web page, the veracity of these resources can be assured through careful monitoring, moderation, and operation of the collaborationware in a closed and secure digital environment. Empirical research is still needed to build our pedagogic evidence base about the different aspects of these tools in the context of medical/health education. If

  4. Wikis, blogs and podcasts: a new generation of Web-based tools for virtual collaborative clinical practice and education

    Directory of Open Access Journals (Sweden)

    Maramba Inocencio

    2006-08-01

    Full Text Available Abstract Background We have witnessed a rapid increase in the use of Web-based 'collaborationware' in recent years. These Web 2.0 applications, particularly wikis, blogs and podcasts, have been increasingly adopted by many online health-related professional and educational services. Because of their ease of use and rapidity of deployment, they offer the opportunity for powerful information sharing and ease of collaboration. Wikis are Web sites that can be edited by anyone who has access to them. The word 'blog' is a contraction of 'Web Log' – an online Web journal that can offer a resource rich multimedia environment. Podcasts are repositories of audio and video materials that can be "pushed" to subscribers, even without user intervention. These audio and video files can be downloaded to portable media players that can be taken anywhere, providing the potential for "anytime, anywhere" learning experiences (mobile learning). Discussion Wikis, blogs and podcasts are all relatively easy to use, which partly accounts for their proliferation. The fact that there are many free and Open Source versions of these tools may also be responsible for their explosive growth. Thus it would be relatively easy to implement any or all within a Health Professions' Educational Environment. Paradoxically, some of their disadvantages also relate to their openness and ease of use. With virtually anybody able to alter, edit or otherwise contribute to the collaborative Web pages, it can be problematic to gauge the reliability and accuracy of such resources. While arguably, the very process of collaboration leads to a Darwinian type 'survival of the fittest' content within a Web page, the veracity of these resources can be assured through careful monitoring, moderation, and operation of the collaborationware in a closed and secure digital environment. Empirical research is still needed to build our pedagogic evidence base about the different aspects of these tools in

  5. Dark Web

    CERN Document Server

    Chen, Hsinchun

    2012-01-01

    The University of Arizona Artificial Intelligence Lab (AI Lab) Dark Web project is a long-term scientific research program that aims to study and understand the international terrorism (Jihadist) phenomena via a computational, data-centric approach. We aim to collect "ALL" web content generated by international terrorist groups, including web sites, forums, chat rooms, blogs, social networking sites, videos, virtual world, etc. We have developed various multilingual data mining, text mining, and web mining techniques to perform link analysis, content analysis, web metrics (technical

  6. Getting to the Source: a Survey of Quantitative Data Sources Available to the Everyday Librarian: Part 1: Web Server Log Analysis

    Directory of Open Access Journals (Sweden)

    Lisa Goddard

    2007-03-01

    Full Text Available This is the first part of a two-part article that provides a survey of data sources which are likely to be immediately available to the typical practitioner who wishes to engage in statistical analysis of collections and services within his or her own library. Part I outlines the data elements which can be extracted from web server logs, and discusses web log analysis tools. Part II looks at logs, reports, and data sources from proxy servers, resource vendors, link resolvers, federated search engines, institutional repositories, electronic reference services, and the integrated library system.
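    Tier-one analysis of the kind Part I surveys begins with parsing the raw log. A minimal sketch for the common NCSA combined format, with a made-up sample line:

```python
# Parse Apache/NCSA combined-format lines and tally requested paths and
# status codes: the raw ingredients of the usage measures discussed above.
import re
from collections import Counter

COMBINED = re.compile(
    r'(?P<host>\S+) \S+ \S+ \[(?P<ts>[^\]]+)\] '
    r'"(?P<method>\S+) (?P<path>\S+) [^"]*" (?P<status>\d{3}) \S+')

def summarize(lines):
    paths, statuses = Counter(), Counter()
    for line in lines:
        m = COMBINED.match(line)
        if m:
            paths[m["path"]] += 1
            statuses[m["status"]] += 1
    return paths, statuses

sample = ['127.0.0.1 - - [10/Mar/2007:12:00:00 -0500] "GET /index.html HTTP/1.0" 200 5120']
print(summarize(sample))
```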

  7. Technical Evaluation Report 60: The World-Wide Inaccessible Web, Part 1: Browsing

    Directory of Open Access Journals (Sweden)

    Batchuluun Batpurev

    2007-06-01

    Full Text Available Two studies are reported, comparing the browser loading times of webpages created using common Web development techniques. The loading speeds were estimated in 12 Asian countries by members of the PANdora network, funded by the International Development Research Centre (IDRC to conduct collaborative research in the development of effective distance education (DE practices. An online survey tool with stopwatch-type counter was used. Responses were obtained from Bhutan, Cambodia, India, Indonesia, Laos, Mongolia, the Philippines, Sri Lanka, Pakistan, Singapore, Thailand, and Vietnam. In most of the survey conditions, browser loading times were noted up to four times slower than commonly prescribed as acceptable. Failure of pages to load at all was frequent. The speediest loading times were observed when the online material was hosted locally, and was created either in the Docebo learning management system (LMS, or in the HTML option provided by the Moodle LMS. It is recommended that formative evaluation of this type should become standard practice in the selection and use of online programming techniques, in order to preserve the accessibility of the World-Wide-Web across large geographical distances, as for DE in the developing world.

  8. Visualization of seismic tomography on Google Earth: Improvement of KML generator and its web application to accept the data file in European standard format

    Science.gov (United States)

    Yamagishi, Y.; Yanaka, H.; Tsuboi, S.

    2009-12-01

    We have developed a tool for converting seismic tomography data into KML, called the KML generator, and made it available on the web site (http://www.jamstec.go.jp/pacific21/google_earth). The KML generator enables us to display vertical and horizontal cross sections of a model on Google Earth in a three-dimensional manner, which is useful for understanding the Earth's interior. The previous generator accepts text files of grid-point data giving longitude, latitude, and seismic velocity anomaly, with one data file per depth. Metadata, such as the bibliographic reference, grid-point interval and depth, are described in a separate information file. We did not allow users to upload their own tomographic models to the web application, because there was no standard format for representing a tomographic model. Recently the European seismology research project NERIES (Network of Research Infrastructures for European Seismology) has advocated that seismic tomography data should be standardized. It proposes a new standard format for tomography based on JSON (JavaScript Object Notation), one of the data-interchange formats. This format consists of two parts: metadata and grid-point data values. The JSON format is powerful for handling and analyzing tomographic models, because the structure of the format is fully defined by JavaScript objects, so the elements are directly accessible from a script; in addition, JSON libraries exist for several programming languages. The International Federation of Digital Seismograph Networks (FDSN) adopted this format as the FDSN standard format for seismic tomographic models. This format might be accepted not only by European seismologists but also as the world standard. Therefore we improve our KML generator for seismic tomography to accept data files in the JSON format as well. We also improve the web application of the generator so that the
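    The conversion idea is easy to sketch: read grid-point values from a metadata-plus-data JSON layout and emit KML placemarks. The JSON keys below are illustrative, not the actual FDSN/NERIES schema:

```python
# Convert a toy JSON tomography slice into KML placemarks. Keys and values
# are invented for illustration; a real converter would also style icons
# and colour-code the velocity anomaly.
import json

model = json.loads("""{
  "metadata": {"depth_km": 100, "reference": "example model"},
  "data": [{"lon": 135.0, "lat": 35.0, "dvp": -1.2},
           {"lon": 136.0, "lat": 35.0, "dvp": 0.8}]
}""")

placemarks = "\n".join(
    f'<Placemark><name>dVp {p["dvp"]}%</name>'
    f'<Point><coordinates>{p["lon"]},{p["lat"]},0</coordinates></Point></Placemark>'
    for p in model["data"])

kml = ('<?xml version="1.0" encoding="UTF-8"?>\n'
       '<kml xmlns="http://www.opengis.net/kml/2.2"><Document>\n'
       f'{placemarks}\n</Document></kml>')
print(kml)
```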

  9. ZOMG I: How the cosmic web inhibits halo growth and generates assembly bias

    CERN Document Server

    Borzyszkowski, Mikolaj; Romano-Diaz, Emilio; Garaldi, Enrico

    2016-01-01

    The clustering of dark-matter haloes with fixed mass depends on their formation history, an effect known as assembly bias. We investigate the origin of this phenomenon using zoom N-body simulations. We follow the formation of seven galaxy-sized haloes selected using a definition of collapse time that generates strong assembly bias. Haloes at redshift zero are classified according to the time in which the physical volume containing their final mass becomes stable. For `stalled' haloes this happens at z~1.5 while for `accreting' haloes this has not happened yet. The zoom simulations confirm that stalled haloes do not grow in mass while accreting haloes show a net inflow. The reason is that accreting haloes are located at the nodes of a network of thin filaments which feed them. Conversely, each stalled halo lies within a prominent filament that is thicker than the halo size. Infalling material from the surroundings becomes part of the filament while matter within it recedes from the halo. We conclude that assem...

  10. Pse-in-One: a web server for generating various modes of pseudo components of DNA, RNA, and protein sequences.

    Science.gov (United States)

    Liu, Bin; Liu, Fule; Wang, Xiaolong; Chen, Junjie; Fang, Longyun; Chou, Kuo-Chen

    2015-07-01

    With the avalanche of biological sequences generated in the post-genomic age, one of the most challenging problems in computational biology is how to effectively formulate the sequence of a biological sample (such as DNA, RNA or protein) as a discrete model or a vector that can effectively reflect its sequence-pattern information or capture its key features. Although several web servers and stand-alone tools have been developed to address this problem, each of these tools can only handle one type of sample. Furthermore, the number of their built-in properties is limited, and hence it is often difficult for users to formulate biological sequences according to their desired features or properties. In this article, we propose a much more flexible web server with a much larger number of built-in properties, called Pse-in-One (http://bioinformatics.hitsz.edu.cn/Pse-in-One/), which can, through its 28 different modes, generate nearly all the possible feature vectors for DNA, RNA and protein sequences. In particular, it can also generate feature vectors with properties defined by users themselves. These feature vectors can be easily combined with machine-learning algorithms to develop computational predictors and analysis methods for various tasks in bioinformatics and systems biology. It is anticipated that the Pse-in-One web server will become a very useful tool in computational proteomics and genomics, as well as in biological sequence analysis. Moreover, to maximize users' convenience, its stand-alone version can also be downloaded from http://bioinformatics.hitsz.edu.cn/Pse-in-One/download/, and run directly on Windows, Linux, Unix and Mac OS. © The Author(s) 2015. Published by Oxford University Press on behalf of Nucleic Acids Research.
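
    As a rough illustration of the "pseudo component" idea (composition statistics augmented with sequence-order correlation terms), the sketch below computes a PseKNC-like vector for a DNA sequence. It is not Pse-in-One's actual algorithm; the property scale and the weight factor are invented for the example.

```python
from collections import Counter

# Toy physicochemical scale for the four bases -- illustrative values only.
PROP = {"A": 0.1, "C": 0.9, "G": 0.8, "T": 0.2}

def pseudo_composition(seq, lam=3, w=0.5):
    """Base frequencies plus lam sequence-order correlation factors."""
    n = len(seq)
    counts = Counter(seq)
    comp = [counts[b] / n for b in "ACGT"]
    # Tier-k correlation: mean squared property difference at distance k.
    theta = [sum((PROP[seq[i]] - PROP[seq[i + k]]) ** 2
                 for i in range(n - k)) / (n - k)
             for k in range(1, lam + 1)]
    denom = 1 + w * sum(theta)
    return [c / denom for c in comp] + [w * t / denom for t in theta]

print(pseudo_composition("ACGTACGGTTAC"))  # 4 + lam = 7 features
```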

  11. Generating dynamic higher-order Markov models in web usage mining

    OpenAIRE

    Borges, J; Levene, Mark

    2005-01-01

    Markov models have been widely used for modelling users’ web navigation behaviour. In previous work we have presented a dynamic clustering-based Markov model that accurately represents second-order transition probabilities given by a collection of navigation sessions. Herein, we propose a generalisation of the method that takes into account higher-order conditional probabilities. The method makes use of the state cloning concept together with a clustering technique to separate the navigation ...
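
    For reference, estimating the second-order transition probabilities that such a model must represent is straightforward; a minimal sketch, assuming sessions are given as lists of page identifiers (the state-cloning and clustering steps themselves are beyond a short example):

```python
from collections import defaultdict

def second_order_probs(sessions):
    """Estimate P(next page | previous two pages) from navigation sessions."""
    counts = defaultdict(lambda: defaultdict(int))
    for s in sessions:
        for a, b, c in zip(s, s[1:], s[2:]):
            counts[(a, b)][c] += 1
    return {pair: {nxt: n / sum(nxts.values()) for nxt, n in nxts.items()}
            for pair, nxts in counts.items()}

sessions = [["home", "news", "sport"], ["home", "news", "weather"],
            ["login", "home", "news", "sport"]]
print(second_order_probs(sessions)[("home", "news")])
# {'sport': 0.667, 'weather': 0.333} (approximately)
```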

  12. XPLANE, a Generative Computer Aided Process Planning System for Part Manufacturing

    NARCIS (Netherlands)

    Erve, van 't A.H.

    1986-01-01

    This paper reports on the development of XPLANE, a generative computer aided process planning system for part manufacturing. Described is its position and functioning as a part of a more extended computer aided manufacturing system that includes a link to CAD systems, as well as systems for computer

  13. Defrosting the digital library: bibliographic tools for the next generation web.

    Science.gov (United States)

    Hull, Duncan; Pettifer, Steve R; Kell, Douglas B

    2008-10-01

    Many scientists now manage the bulk of their bibliographic information electronically, thereby organizing their publications and citation material from digital libraries. However, a library has been described as "thought in cold storage," and unfortunately many digital libraries can be cold, impersonal, isolated, and inaccessible places. In this Review, we discuss the current chilly state of digital libraries for the computational biologist, including PubMed, IEEE Xplore, the ACM digital library, ISI Web of Knowledge, Scopus, Citeseer, arXiv, DBLP, and Google Scholar. We illustrate the current process of using these libraries with a typical workflow, and highlight problems with managing data and metadata using URIs. We then examine a range of new applications such as Zotero, Mendeley, Mekentosj Papers, MyNCBI, CiteULike, Connotea, and HubMed that exploit the Web to make these digital libraries more personal, sociable, integrated, and accessible places. We conclude with how these applications may begin to help achieve a digital defrost, and discuss some of the issues that will help or hinder this in terms of making libraries on the Web warmer places in the future, becoming resources that are considerably more useful to both humans and machines.

  14. Defrosting the digital library: bibliographic tools for the next generation web.

    Directory of Open Access Journals (Sweden)

    Duncan Hull

    2008-10-01

    Full Text Available Many scientists now manage the bulk of their bibliographic information electronically, thereby organizing their publications and citation material from digital libraries. However, a library has been described as "thought in cold storage," and unfortunately many digital libraries can be cold, impersonal, isolated, and inaccessible places. In this Review, we discuss the current chilly state of digital libraries for the computational biologist, including PubMed, IEEE Xplore, the ACM digital library, ISI Web of Knowledge, Scopus, Citeseer, arXiv, DBLP, and Google Scholar. We illustrate the current process of using these libraries with a typical workflow, and highlight problems with managing data and metadata using URIs. We then examine a range of new applications such as Zotero, Mendeley, Mekentosj Papers, MyNCBI, CiteULike, Connotea, and HubMed that exploit the Web to make these digital libraries more personal, sociable, integrated, and accessible places. We conclude with how these applications may begin to help achieve a digital defrost, and discuss some of the issues that will help or hinder this in terms of making libraries on the Web warmer places in the future, becoming resources that are considerably more useful to both humans and machines.

  15. Automated generation of a World Wide Web-based data entry and check program for medical applications.

    Science.gov (United States)

    Kiuchi, T; Kaihara, S

    1997-02-01

    The World Wide Web-based form is a promising method for the construction of on-line data collection systems for clinical and epidemiological research. It is, however, laborious to prepare a common gateway interface (CGI) program for each project, which the World Wide Web server needs to handle the submitted data. In medicine it is even more laborious, because the CGI program must check the entered data for missing values, types, ranges, and logical errors (bad combinations of data) for quality assurance, as well as data length and meta-characters to enhance the security of the server. We have extended the specification of the hypertext markup language (HTML) form to accommodate the information necessary for such data checking, and we have developed software named AUTOFORM for this purpose. The software automatically analyzes the extended HTML form and generates the corresponding ordinary HTML form, 'Makefile', and C source of the CGI programs. The resultant CGI program checks the data entered through the HTML form, records it in a computer, and returns it to the end-user. AUTOFORM drastically reduces the burden of developing World Wide Web-based data entry systems and allows the CGI programs to be prepared more securely and reliably than had they been written from scratch.
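
    The kind of server-side checking AUTOFORM generates can be pictured with the small sketch below. The rule table and field names are invented for illustration; the generated C CGI code would follow the same pattern of per-field presence, type, range, length and meta-character checks.

```python
import re

# Hypothetical per-field rules of the kind a generator could emit
# from an extended HTML form: required, type, range, maximum length.
RULES = {
    "age":  {"required": True, "type": int, "min": 0, "max": 120},
    "name": {"required": True, "type": str, "maxlen": 40},
}

META = re.compile(r"[<>;&|`$]")  # reject shell/HTML meta-characters

def check(form):
    errors = []
    for field, rule in RULES.items():
        raw = form.get(field, "").strip()
        if not raw:
            if rule["required"]:
                errors.append(f"{field}: missing")
            continue
        if META.search(raw):
            errors.append(f"{field}: illegal characters")
            continue
        try:
            val = rule["type"](raw)
        except ValueError:
            errors.append(f"{field}: bad type")
            continue
        if isinstance(val, int) and not rule["min"] <= val <= rule["max"]:
            errors.append(f"{field}: out of range")
        elif isinstance(val, str) and len(val) > rule["maxlen"]:
            errors.append(f"{field}: too long")
    return errors

print(check({"age": "250", "name": "Kiuchi"}))  # ['age: out of range']
```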

  16. Model Driven Automatic Generation of Web Application Systems

    Institute of Scientific and Technical Information of China (English)

    王海林

    2012-01-01

    In order to promote software development efficiency, an approach of model-driven automatic generation of Web applications is proposed. The approach takes MetaEdit+ as a meta-modeling tool. The first step is to build Web application meta-models and to customize a DSL. The next step is to build Web application domain models with the DSL. Then, using the generator definition language MERL provided by MetaEdit+, software developers can conveniently design the JSP generator, Servlet generator, JavaBeans generator and database generator that Web application systems need. These generators can produce the whole Web application system directly from the graphical Web application models. Finally, the approach of model-driven automatic generation of Web applications and the generation process are introduced in detail through an instance named WebShopping. The test result indicates that the generated Web application runs correctly on a Web application server in the Windows operating system environment.

  17. What is the invisible web? A crawler perspective

    OpenAIRE

    Arroyo, Natalia

    2004-01-01

    The invisible Web, also known as the deep Web or dark matter, is an important problem for Webometrics due to difficulties of conceptualization and measurement. The invisible Web has been defined to be the part of the Web that cannot be indexed by search engines, including databases and dynamically generated pages. Some authors have recognized that this is a quite subjective concept that depends on the point of view of the observer: what is visible for one observer may be invisible for others....

  18. WEB CONTENT EXTRACTION USING HYBRID APPROACH

    Directory of Open Access Journals (Sweden)

    K. Nethra

    2014-01-01

    Full Text Available The World Wide Web is a rich source of voluminous and heterogeneous information which continues to expand in size and complexity. Many Web pages are unstructured or semi-structured, and contain noisy information such as advertisements, links, headers, footers, etc. This noisy information makes the extraction of Web content tedious. Many techniques proposed for Web content extraction are based on automatic extraction or hand-crafted rule generation. Automatic extraction is done through Web page segmentation, but it increases the time complexity. Hand-crafted rule generation uses string manipulation functions for rule generation, but generating those rules is very difficult. A hybrid approach is proposed to extract the main content from Web pages. An HTML Web page is converted to a DOM tree, features are extracted, and rules are generated from the extracted features. Decision tree classification and Naïve Bayes classification are the machine learning methods used for rule generation. Using the rules, the noisy part of the Web page is discarded and the informative content is extracted. The performance of both decision tree classification and Naïve Bayes classification is measured with metrics like precision, recall, F-measure and accuracy.
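
    A minimal sketch of the feature-extraction half of such a hybrid approach, using Python's standard html.parser; a hand-written link-density threshold stands in for the learned decision tree / Naïve Bayes rules:

```python
from html.parser import HTMLParser

class BlockFeatures(HTMLParser):
    """Collect text length and link-text length for each <div> block."""
    def __init__(self):
        super().__init__()
        self.blocks = []   # one feature dict per <div>
        self.in_a = 0      # nesting depth of <a> tags

    def handle_starttag(self, tag, attrs):
        if tag == "div":
            self.blocks.append({"text": 0, "links": 0})
        elif tag == "a":
            self.in_a += 1

    def handle_endtag(self, tag):
        if tag == "a" and self.in_a:
            self.in_a -= 1

    def handle_data(self, data):
        n = len(data.strip())
        if n and self.blocks:
            self.blocks[-1]["text"] += n
            if self.in_a:
                self.blocks[-1]["links"] += n

def informative(block, max_link_density=0.3):
    # Hand-written stand-in rule: boilerplate blocks are link-dominated.
    return block["text"] > 0 and block["links"] / block["text"] <= max_link_density

html = ('<div><a href="/">Home</a> <a href="/ads">Ads</a></div>'
        '<div>Long article text with <a href="#">one link</a> inside.</div>')
parser = BlockFeatures()
parser.feed(html)
print([informative(b) for b in parser.blocks])  # [False, True]
```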

  19. Generational influences in academic emergency medicine: teaching and learning, mentoring, and technology (part I).

    Science.gov (United States)

    Mohr, Nicholas M; Moreno-Walton, Lisa; Mills, Angela M; Brunett, Patrick H; Promes, Susan B

    2011-02-01

    For the first time in history, four generations are working together: traditionalists, baby boomers, generation Xers (Gen Xers), and millennials. Members of each generation carry with them a unique perspective of the world and interact differently with those around them. Through a review of the literature and consensus by modified Delphi methodology of the Society for Academic Emergency Medicine Aging and Generational Issues Task Force, the authors have developed this two-part series to address generational issues present in academic emergency medicine (EM). Understanding generational characteristics and mitigating strategies can help address some common issues encountered in academic EM. Through recognition of the unique characteristics of each of the generations with respect to teaching and learning, mentoring, and technology, academicians have the opportunity to strategically optimize interactions with one another.

  20. Generational Influences in Academic Emergency Medicine: Teaching and Learning, Mentoring, and Technology (Part I)

    Science.gov (United States)

    Mohr, Nicholas M.; Moreno-Walton, Lisa; Mills, Angela M.; Brunett, Patrick H.; Promes, Susan B.

    2010-01-01

    For the first time in history, four generations are working together – Traditionalists, Baby Boomers, Generation Xers, and Millennials. Members of each generation carry with them a unique perspective of the world and interact differently with those around them. Through a review of the literature and consensus by modified Delphi methodology of the Society for Academic Emergency Medicine (SAEM) Aging and Generational Issues Task Force, the authors have developed this two-part series to address generational issues present in academic emergency medicine (EM). Understanding generational characteristics and mitigating strategies can help address some common issues encountered in academic EM. Through recognition of the unique characteristics of each of the generations with respect to teaching and learning, mentoring, and technology, academicians have the opportunity to strategically optimize interactions with one another. PMID:21314779

  1. LISA, the next generation: from a web-based application to a fat client.

    Science.gov (United States)

    Pierlet, Noëlla; Aerts, Werner; Vanautgaerden, Mark; Van den Bosch, Bart; De Deurwaerder, André; Schils, Erik; Noppe, Thomas

    2008-01-01

    The LISA application, developed by the University Hospitals Leuven, permits referring physicians to consult the electronic medical records of their patients over the internet in a highly secure way. We decided to completely change the way we secured the application, discard the existing web application, and build a new application based on the in-house developed hospital information system used in the University Hospitals Leuven. The result is a fat Java client, running on a Windows Terminal Server, secured by a commercial SSL-VPN solution.

  2. Improving Business Intelligence Applications by Using New Generation of Web and Mobile Technologies

    Directory of Open Access Journals (Sweden)

    Mihaela-Laura IVAN

    2015-01-01

    Full Text Available The current paper presents an overview of the combination of new technologies: SAP HANA, SAP UI5 and SAP Fiori. It presents the many advantages from which the final user can benefit, such as fast performance when querying data, and data modelled with SAP's latest database innovation, the SAP HANA database, whose biggest feature is in-memory processing. This in-memory computing of SAP HANA enables people to focus on innovation. The third section describes two examples of web and mobile applications developed with these technologies.

  3. deepTools2: a next generation web server for deep-sequencing data analysis.

    Science.gov (United States)

    Ramírez, Fidel; Ryan, Devon P; Grüning, Björn; Bhardwaj, Vivek; Kilpert, Fabian; Richter, Andreas S; Heyne, Steffen; Dündar, Friederike; Manke, Thomas

    2016-07-08

    We present an update to our Galaxy-based web server for processing and visualizing deeply sequenced data. Its core tool set, deepTools, allows users to perform complete bioinformatic workflows ranging from quality controls and normalizations of aligned reads to integrative analyses, including clustering and visualization approaches. Since we first described our deepTools Galaxy server in 2014, we have implemented new solutions for many requests from the community and our users. Here, we introduce significant enhancements and new tools to further improve data visualization and interpretation. deepTools continue to be open to all users and freely available as a web service at deeptools.ie-freiburg.mpg.de The new deepTools2 suite can be easily deployed within any Galaxy framework via the toolshed repository, and we also provide source code for command line usage under Linux and Mac OS X. A public and documented API for access to deepTools functionality is also available. © The Author(s) 2016. Published by Oxford University Press on behalf of Nucleic Acids Research.

  4. Application of Designing Economic Mechanisms to Power Market - Part 1 Generation Side Power Market Design

    Directory of Open Access Journals (Sweden)

    XIE Qingyang

    2013-04-01

    Full Text Available The paper studies the core philosophy and algorithms of the theory of designing economic mechanisms; a new algorithm for designing incentive-compatible power market mechanisms is proposed, and a generation-side power market mechanism model is constructed which has the features of incentive compatibility, informational efficiency and decentralized decision-making. A power market based on the theory of designing economic mechanisms can lead to Pareto optimality of resource allocation, while GENCOs are permitted to pursue profit maximization. The paper is in two parts. Part 1 focuses on the process of constructing a generation-side power market competitive mechanism model based on the theory of designing economic mechanisms. Part 2 presents the characteristic analysis of the generation-side power market competitive mechanism.

  5. Using Data Crawlers and Semantic Web to Build Financial XBRL Data Generators: The SONAR Extension Approach

    Directory of Open Access Journals (Sweden)

    Miguel Ángel Rodríguez-García

    2014-01-01

    Full Text Available Precise, reliable and real-time financial information is critical for added-value financial services after the economic turmoil from which markets are still struggling to recover. Since the Web has become the most significant data source, intelligent crawlers based on Semantic Technologies have become trailblazers in the search for knowledge, combining natural language processing and ontology engineering techniques. In this paper, we present the SONAR extension approach, which leverages the potential of knowledge representation by extracting, managing, and turning scarce and dispersed financial information into well-classified, structured, and widely used XBRL format-oriented knowledge, strongly supported by a proof-of-concept implementation and a thorough evaluation of the benefits of the approach.

  6. Using data crawlers and semantic Web to build financial XBRL data generators: the SONAR extension approach.

    Science.gov (United States)

    Rodríguez-García, Miguel Ángel; Rodríguez-González, Alejandro; Colomo-Palacios, Ricardo; Valencia-García, Rafael; Gómez-Berbís, Juan Miguel; García-Sánchez, Francisco

    2014-01-01

    Precise, reliable and real-time financial information is critical for added-value financial services after the economic turmoil from which markets are still struggling to recover. Since the Web has become the most significant data source, intelligent crawlers based on Semantic Technologies have become trailblazers in the search for knowledge, combining natural language processing and ontology engineering techniques. In this paper, we present the SONAR extension approach, which leverages the potential of knowledge representation by extracting, managing, and turning scarce and dispersed financial information into well-classified, structured, and widely used XBRL format-oriented knowledge, strongly supported by a proof-of-concept implementation and a thorough evaluation of the benefits of the approach.

  7. EVOLUTION OF THE WORLD WIDE WEB: FROM WEB 1.0 TO WEB 4.0

    Directory of Open Access Journals (Sweden)

    Sareh Aghaei

    2012-02-01

    Full Text Available The World Wide Web, as the largest information construct, has made much progress since its advent. This paper provides a background on the evolution of the web from web 1.0 to web 4.0. Web 1.0 as a web of information connections, Web 2.0 as a web of people connections, Web 3.0 as a web of knowledge connections, and web 4.0 as a web of intelligence connections are described as the four generations of the web in the paper.

  8. Effects of Carburized Parts on Residual Stresses of Thin-Rimmed Spur Gears with Symmetric Web Arrangements Due to Case-Carburizing

    Institute of Scientific and Technical Information of China (English)

    Kouitsu Miyachika; Wei-Dong Xue; Satoshi Oda; Hidefumi Mada; Hiroshige Fujio

    2004-01-01

    This paper presents a study of the effects of carburized parts on the residual stresses of thin-rimmed spur gears with symmetric web arrangements due to case-carburizing. The carbon content of each element of the FEM gear model due to carburizing was obtained according to Vickers hardness Hv - carbon content C% and C% - d (distance from surface) charts. A heat conduction analysis and an elastic-plastic stress analysis of the case-carburizing process of thin-rimmed spur gears with symmetric web arrangements were carried out for various case-carburizing conditions using the three-dimensional finite element method (3D-FEM) program developed by the authors, and the residual stresses were obtained. The effects of the carburized part, the web structure, and the rim thickness on the residual stress were determined.

  9. Usability and printing technology: Ergonomic design of a new generation of web offset printing machines; Usability und Drucktechnik: Ergonomische Gestaltung einer neuen Generation von Rollendruckmaschinen

    Energy Technology Data Exchange (ETDEWEB)

    Enke, G. [MAN Roland Druckmaschinen AG, Augsburg (Germany)

    2002-07-01

    This paper describes the development of the user interface design of a new generation of digital web offset printing machines, which integrates the whole printing workflow from prepress to postpress. The features of this highly complex printing machine, DICOweb, and the development of its user interface are presented. The operator was the focus of the user interface design. The DICOweb user interface allows easy operation and presents a clearly arranged view of the process and the current operating status, without requiring long familiarization periods for the operators. To guarantee high flexibility and direct operation of the machine, a touchscreen user interface was applied. The screen design of the touchscreen required a dedicated style guide. (orig.)

  10. Generational influences in academic emergency medicine: structure, function, and culture (Part II).

    Science.gov (United States)

    Mohr, Nicholas M; Smith-Coggins, Rebecca; Larrabee, Hollynn; Dyne, Pamela L; Promes, Susan B

    2011-02-01

    Strategies for approaching generational issues that affect teaching and learning, mentoring, and technology in emergency medicine (EM) have been reported. Tactics to address generational influences involving the structure and function of the academic emergency department (ED), organizational culture, and EM schedule have not been published. Through a review of the literature and consensus by modified Delphi methodology of the Society for Academic Emergency Medicine Aging and Generational Issues Task Force, the authors have developed this two-part series to address generational issues present in academic EM. Understanding generational characteristics and mitigating strategies can address some common issues encountered in academic EM. By understanding the differences and strengths of each of the cohorts in academic EM departments and considering simple mitigating strategies, faculty leaders can maximize their cooperative effectiveness and face the challenges of a new millennium.

  11. Web Log Pre-processing and Analysis for Generation of Learning Profiles in Adaptive E-learning

    Directory of Open Access Journals (Sweden)

    Radhika M. Pai

    2016-04-01

    Full Text Available Adaptive E-learning Systems (AESs) enhance the efficiency of online courses in education by providing personalized content and user interfaces that change according to learners' requirements and usage patterns. This paper presents an approach to generate a learning profile for each learner, which helps to identify learning styles and provide an adaptive user interface that includes adaptive learning components and learning material. The proposed method analyzes the captured web usage data to identify the learning profile of the learners. The learning profiles are identified by an algorithmic approach based on the frequency of access to the materials and the time spent on the various learning components on the portal. The captured log data is pre-processed and converted into a standard XML format to generate learners' sequence data corresponding to the different sessions and times spent. The learning style model adopted in this approach is the Felder-Silverman Learning Style Model (FSLSM). This paper also presents the analysis of learners' activities, the preprocessed XML files and the generated sequences.
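
    A minimal sketch of this preprocessing step, assuming a simple access-log layout (learner, timestamp, component); the XML element names are invented for illustration, not the paper's actual schema:

```python
from collections import defaultdict
from datetime import datetime
from xml.etree.ElementTree import Element, SubElement, tostring

LOG = [  # (learner, ISO timestamp, learning component) -- toy records
    ("u1", "2016-03-01T10:00:00", "video"),
    ("u1", "2016-03-01T10:07:30", "quiz"),
    ("u2", "2016-03-01T10:01:00", "text"),
]

def to_xml(log):
    sessions = defaultdict(list)
    for user, ts, comp in log:
        sessions[user].append((datetime.fromisoformat(ts), comp))
    root = Element("learners")
    for user, events in sessions.items():
        events.sort()
        lrn = SubElement(root, "learner", id=user)
        # Time spent on a component = gap until the next event (last: unknown).
        for (t0, comp), (t1, _) in zip(events, events[1:] + [(None, None)]):
            sec = (t1 - t0).total_seconds() if t1 else 0
            SubElement(lrn, "access", component=comp, seconds=str(int(sec)))
    return tostring(root, encoding="unicode")

print(to_xml(LOG))
```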

  13. Self-Organization of User Generated Content in Web 2.0 Environment

    Institute of Scientific and Technical Information of China (English)

    李鹏

    2012-01-01

    This paper analyzes the connotation and features of user generated content in the Web 2.0 environment from three aspects: the information content, the media through which it is generated, and the user, and finds that user generated content fits the conditions of self-organization. Based on a review of research on the self-organization of several types of user generated content, such as social tags, wikis, blogs, microblogs and virtual communities, this paper constructs a self-organization schema of user generated content in the Web 2.0 environment, elaborates its levels, contents and evolution mechanism, and finally points out some problems to be studied in the future.

  14. Linking user-generated video annotations to the web of data

    NARCIS (Netherlands)

    Hildebrand, M.; Ossenbruggen, J.R. van

    2012-01-01

    In the audiovisual domain tagging games are explored as a method to collect user-generated metadata. For example, the Netherlands Institute for Sound and Vision deployed the video labelling game "Waisda?" to collect user tags for videos from their collection. These tags are potentially useful to imp

  15. Linking user-generated video annotations to the web of data

    NARCIS (Netherlands)

    M. Hildebrand (Michiel); J.R. van Ossenbruggen (Jacco)

    2012-01-01

    In the audiovisual domain tagging games are explored as a method to collect user-generated metadata. For example, the Netherlands Institute for Sound and Vision deployed the video labelling game "Waisda?" to collect user tags for videos from their collection. These tags are potentially u

  16. Earth Observation oriented teaching materials development based on OGC Web services and Bashyt generated reports

    Science.gov (United States)

    Stefanut, T.; Gorgan, D.; Giuliani, G.; Cau, P.

    2012-04-01

    Creating e-Learning materials in the Earth Observation domain is a difficult task, especially for non-technical specialists who have to deal with distributed repositories, large amounts of information and intensive processing requirements. Furthermore, due to the lack of specialized applications for developing teaching resources, technical knowledge is also required for defining data presentation structures or for the development and customization of user interaction techniques for better teaching results. As a response to these issues, during the GiSHEO FP7 project [1] and later in the EnviroGRIDS FP7 project [2], we have developed the eGLE e-Learning Platform [3], a tool-based application that provides dedicated functionalities to Earth Observation specialists for developing teaching materials. The proposed architecture is built around a client-server design that provides the core functionalities (e.g. user management, tools integration, teaching material settings, etc.) and has been extended with a distributed component implemented through the tools that are integrated into the platform, as described further. Our approach to dealing with multiple transfer protocol types, heterogeneous data formats and various user interaction techniques involves the development and integration of very specialized elements (tools) that can be customized by the trainers in a visual manner through simple user interfaces. In our concept each tool is dedicated to a specific data type, implementing optimized mechanisms for searching, retrieving, visualizing and interacting with it. At the same time, any number of tools can be integrated into each learning resource through drag-and-drop interaction, allowing the teacher to retrieve pieces of data of various types (e.g. images, charts, tables, text, videos, etc.) from different sources (e.g. OGC web services, charts created through the Bashyt application, etc.) through different protocols (e.g. WMS, the BASHYT API, FTP, HTTP, etc.) and to display

  17. Het WEB leert begrijpen

    CERN Multimedia

    Stroeykens, Steven

    2004-01-01

    The Web could be much more useful if computers understood some of the information on Web pages. That explains the goal of the "semantic Web", a project in which, amongst others, Tim Berners-Lee, the inventor of the original Web, takes part.

  18. SEMANTIC WEB-BASED SOFTWARE ENGINEERING BY AUTOMATED REQUIREMENTS ONTOLOGY GENERATION IN SOA

    Directory of Open Access Journals (Sweden)

    Vahid Rastgoo

    2014-04-01

    Full Text Available This paper presents an approach for the automated generation of a requirements ontology using UML diagrams in a service-oriented architecture (SOA). The goal of this paper is to facilitate the progress of software engineering processes such as software design, software reuse, service discovery, etc. The proposed method is based on four conceptual layers. The first layer includes requirements obtained from stakeholders; the second designs service-oriented diagrams from the data in the first layer and extracts their XMI code. The third layer includes a requirements ontology and a protocol ontology to semantically describe the behaviour of services and the relationships between them. Finally, the fourth layer standardizes the concepts present in the ontologies of the previous layer. The generated ontology exceeds an absolute domain ontology because it considers the behaviour of services in addition to their hierarchical relationships. Experimental results on a set of UML4SoA diagrams in different scopes demonstrate the improvement of the proposed approach from different points of view, such as completeness of the requirements ontology, automatic generation and consideration of SOA.
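
    The XMI-to-ontology step can be pictured with a small sketch; the XMI-like element and attribute names below are invented for illustration, not the UML4SoA schema:

```python
import xml.etree.ElementTree as ET

# Toy XMI-like export of a service diagram -- element and attribute
# names are invented placeholders, not the real UML4SoA schema.
XMI = """<xmi>
  <service name="OrderService">
    <operation name="placeOrder" requires="PaymentService"/>
  </service>
  <service name="PaymentService"/>
</xmi>"""

def xmi_to_triples(xmi_text):
    """Emit (subject, predicate, object) triples for a requirements ontology."""
    triples = []
    root = ET.fromstring(xmi_text)
    for svc in root.iter("service"):
        triples.append((svc.get("name"), "rdf:type", "Service"))
        for op in svc.iter("operation"):
            triples.append((svc.get("name"), "hasOperation", op.get("name")))
            if op.get("requires"):
                triples.append((op.get("name"), "dependsOn", op.get("requires")))
    return triples

for t in xmi_to_triples(XMI):
    print(t)
```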

  19. Adjustable relevance search in Web-based parts library

    Institute of Scientific and Technical Information of China (English)

    顾复; 张树有

    2011-01-01

    To solve the data heterogeneity and information immensity problems in web-based parts libraries, Adjustable Relevance Search (ARS) oriented to the web-based parts library was put forward. Resource Description Framework Schema (RDFS) was used to construct ontology models of parts' information resources as nodes. The nodes were connected with each other by various semantic relationships to form a semantic web, so as to realize extended relevance search. Based on this semantic-web model, the working process and algorithm of ARS were put forward and explained. The feasibility and practicality of ARS in a web-based parts library based on a semantic network were demonstrated by a simple programming example designed for injection molding machine parts.
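
    The idea of spreading relevance over such a semantic network can be sketched compactly; the nodes, relation weights and decay factor below are invented for illustration, not the paper's actual algorithm:

```python
# Toy semantic network of part-library nodes; relation weights are
# invented to illustrate relevance ranking, not the paper's values.
EDGES = {
    "injection_unit": [("screw", 0.9), ("barrel", 0.8)],
    "screw":          [("screw_tip", 0.7), ("barrel", 0.6)],
    "barrel":         [("heater_band", 0.5)],
}

def relevance_search(start, depth=2, decay=0.8):
    """Spread relevance from a query node along weighted semantic links."""
    scores = {start: 1.0}
    frontier = [start]
    for _ in range(depth):
        nxt = []
        for node in frontier:
            for nbr, w in EDGES.get(node, []):
                s = scores[node] * w * decay
                if s > scores.get(nbr, 0.0):
                    scores[nbr] = s
                    nxt.append(nbr)
        frontier = nxt
    scores.pop(start)
    return sorted(scores.items(), key=lambda kv: -kv[1])

print(relevance_search("injection_unit"))
# [('screw', 0.72), ('barrel', 0.64), ('screw_tip', 0.4032), ...]
```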

  20. Diagnosis of Wind Energy System Faults Part I : Modeling of the Squirrel Cage Induction Generator

    Directory of Open Access Journals (Sweden)

    Lahcène Noureddine

    2015-08-01

    Full Text Available Generating electrical power from wind energy is becoming increasingly important throughout the world. This fast development has attracted many researchers and electrical engineers to work in this field. The authors develop a dynamic model of the squirrel cage induction generator usually found in wind energy systems, for the diagnosis of broken rotor bar defects, from an approach of magnetically coupled multiple circuits. The generalized model is established on the basis of mathematical recurrences. The winding function theory is used for determining the rotor resistances and the inductances in the case of n broken bars. Simulation results, in Part II of this paper, confirm the validity of the proposed model.

  1. Thermoelectric Generators for Automotive Waste Heat Recovery Systems Part II: Parametric Evaluation and Topological Studies

    Science.gov (United States)

    Kumar, Sumeet; Heister, Stephen D.; Xu, Xianfan; Salvador, James R.; Meisner, Gregory P.

    2013-06-01

    A comprehensive numerical model has been proposed to model thermoelectric generators (TEGs) for automotive waste heat recovery. Details of the model and results from the analysis of General Motors' prototype TEG were described in part I of the study. In part II of this study, parametric evaluations are considered to assess the influence of heat exchanger, geometry, and thermoelectric module configurations to achieve optimization of the baseline model. The computational tool is also adapted to model other topologies such as transverse and circular configurations (hexagonal and cylindrical) maintaining the same volume as the baseline TEG. Performance analysis of these different topologies and parameters is presented and compared with the baseline design.

  2. Developments and Applications of the New Generation Web Technology

    Institute of Scientific and Technical Information of China (English)

    陈凯

    2012-01-01

    Faced with the rapid development of the Internet and the popularity of multimedia applications, a new generation of Web standards, HTML5, came into being and is being developed and perfected at an accelerating pace. Through an analysis of the evolution of Web technology, this paper mainly discusses the development and applications of the new generation of Web technology based on HTML5.

  3. Post-Web 2.0 Pedagogy: From Student-Generated Content to International Co-Production Enabled by Mobile Social Media

    Science.gov (United States)

    Cochrane, Thomas; Antonczak, Laurent; Wagner, Daniel

    2013-01-01

    The advent of web 2.0 has enabled new forms of collaboration centred upon user-generated content, however, mobile social media is enabling a new wave of social collaboration. Mobile devices have disrupted and reinvented traditional media markets and distribution: iTunes, Google Play and Amazon now dominate music industry distribution channels,…

  4. Dynamic test generation approach for web applications

    Institute of Scientific and Technical Information of China (English)

    郭玉环; 高建华

    2013-01-01

    Current approaches and tools for web-page validation cannot handle the common errors of dynamically generated pages, for example, Web script conflicts and malformed dynamically-generated web pages. Therefore, a dynamic test generation approach for web applications is presented. This approach uses explicit-state model checking, generates tests automatically, runs the tests to capture the logical constraints on the input, and finally outputs a fault report. An application case of a campus BBS system is presented to verify the effectiveness of the approach in fault detection.
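
    The explicit-state idea can be pictured as enumerating concrete input states, running the application model on each, and recording states whose output violates a well-formedness oracle. Everything in the sketch below (the toy "application", the crude oracle, the seeded fault) is an invented placeholder:

```python
from itertools import product

# Placeholder "application": renders a page from two form inputs.
def render(user, action):
    if action == "delete" and user == "guest":
        return "<p>error: <b>unclosed"        # seeded malformed-HTML fault
    return f"<p>{user} did {action}</p>"

def well_formed(html):
    # Crude oracle: every opened <p>/<b> tag must be closed.
    return all(html.count(f"<{t}>") == html.count(f"</{t}>") for t in "pb")

def explore(users, actions):
    """Explicit-state search over the input space; collect failing states."""
    return [state for state in product(users, actions)
            if not well_formed(render(*state))]

print(explore(["guest", "admin"], ["view", "delete"]))
# [('guest', 'delete')]
```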

  6. THE PILOT STUDY OF CHARACTERISTICS OF HOUSEHOLD WASTE GENERATED IN SUBURBAN PARTS OF RURAL AREAS

    Directory of Open Access Journals (Sweden)

    Aleksandra Steinhoff-Wrześniewska

    2015-02-01

    Full Text Available The subject of the study was the waste generated in suburban households, collected in a 3-bag system. The sum of waste generated during the four analyzed seasons (spring, summer, autumn, winter; one year) in the households under study amounted to 170.3 kg per person (wet mass basis). The most domestic waste per person was generated in autumn (45.5 kg per capita) and the least in winter (39.0 kg per capita). The analysis of sieve (size fraction) composition showed that the fractions >100 mm, 40-100 mm and 20-40 mm together constituted 80% of the waste mass (yearly average). The finest fraction (<10 mm), a significant part of which consists of ashes, varied depending on the season from 3.5% to 12.8%. In the morphological composition of the analyzed households (average over the 4 seasons), biowaste made up over 53% of the total waste mass. A significant part of the generated waste was also glass waste (10.7% yearly average) and disposable nappies (8.3% yearly average). The analysis of the basic chemical components of the biowaste showed that, in the case of utilizing it for compost production, it would be necessary to correct the C/N and C/P ratios. Chemical composition analysis also showed that the biowaste was characterized by a very high moisture content and neutral pH.

  7. Sustainable Materials Management (SMM) Web Academy Webinar: Recycling Right: Tactics and Tools for Effective Residential Outreach (Part 1)

    Science.gov (United States)

    This is a webinar page for the Sustainable Management of Materials (SMM) Web Academy webinar titled Let’s WRAP (Wrap Recycling Action Program): Best Practices to Boost Plastic Film Recycling in Your Community

  8. Sustainable Materials Management (SMM) Web Academy Webinar: Recycling Right: Tactics and Tools for Effective Residential Outreach (Part 2)

    Science.gov (United States)

    This is a webinar page for the Sustainable Management of Materials (SMM) Web Academy webinar titled Let’s WRAP (Wrap Recycling Action Program): Best Practices to Boost Plastic Film Recycling in Your Community

  9. Effect of acid pretreatment on different parts of corn stalk for second generation ethanol production.

    Science.gov (United States)

    Li, Ping; Cai, Di; Luo, Zhangfeng; Qin, Peiyong; Chen, Changjing; Wang, Yong; Zhang, Changwei; Wang, Zheng; Tan, Tianwei

    2016-04-01

    In this study, the effects of different parts of corn stalk, including stem, leaf, flower, cob and husk, on second generation ethanol production were evaluated. FTIR, XRD and SEM were performed to investigate the effect of dilute acid pretreatment. The bagasse obtained after pretreatment was further hydrolyzed by cellulase and used as the substrate for ethanol fermentation. As a result, the hemicellulose fractions in the different parts of the corn stalk were dramatically removed, and the solid fractions showed distinct compositions and crystallinities. Compared with the other parts of the corn stalk, the cob had a higher sugar content and better enzymatic digestibility. The highest glucose yield of 94.2% and ethanol production of 24.0 g L(-1) were achieved when the cob was used as feedstock, while the glucose yield and ethanol production were only 86.0% and 17.1 g L(-1), respectively, in the case of the flower.

  10. A Method of Ontology Generation in Web Personalization

    Institute of Scientific and Technical Information of China (English)

    张彬; 蒋涛; 徐雨明

    2007-01-01

    In the semantic Web, user access behavior models can be shared as ontologies. How to transform web access activities into an ontology is a critical problem. To address this technical issue, the paper proposes an automatic method of ontology generation integrating fuzzy logic and formal concept analysis in Web personalization. The basic notions of Web personalization, the semantic Web and ontology are first introduced. Then the architecture and structure of the method and the process of ontology generation are analyzed in detail.
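
    The formal concept analysis half of such a method can be sketched briefly: given a binary context (here, which users accessed which page topics), every formal concept is a closed (extent, intent) pair. The context below is invented, and the brute-force closure enumeration (which skips the degenerate empty-extent concept) is only meant to show the mechanics:

```python
from itertools import combinations

# Toy formal context: which users (objects) accessed which page topics
# (attributes). The data is invented to illustrate formal concept analysis.
CONTEXT = {
    "u1": {"news", "sport"},
    "u2": {"news", "sport", "tech"},
    "u3": {"news"},
}

def intent(objs):   # attributes shared by all given objects
    sets = [CONTEXT[o] for o in objs]
    return set.intersection(*sets) if sets else set()

def extent(attrs):  # objects possessing all given attributes
    return {o for o, a in CONTEXT.items() if attrs <= a}

def concepts():
    """Enumerate formal concepts as closures of object subsets."""
    found = set()
    for r in range(len(CONTEXT) + 1):
        for combo in combinations(CONTEXT, r):
            b = intent(set(combo))       # close on attributes...
            a = frozenset(extent(b))     # ...then on objects
            found.add((a, frozenset(intent(a))))
    return found

for ext, itt in sorted(concepts(), key=lambda c: -len(c[0])):
    print(sorted(ext), "->", sorted(itt))
```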

  11. Identification and Opinion Extraction through User Generated Content on Web Based Social Media

    Directory of Open Access Journals (Sweden)

    Dr. Deepak Arora

    2014-06-01

    Full Text Available Nowadays the internet is becoming a platform where different users can post their ideas and opinions. Social networking sites and blogs offer a wide variety of such informative text, which can be used to establish or determine a mindset about a particular product, person or individual. These blogs can be used as a vast source of information through which one can predict opinion as well as plan different business strategies. Due to the huge amount of information, there is always a need for a specific tool or approach to mine the useful text, called opinion. The authors have proposed an approach for mining and classification of different real-time datasets gathered from various sources of information freely available on the internet. The authors have tested the approach over these datasets and found suitable results. In this paper we propose a method that classifies user-generated content on the basis of positive, negative, neutral, double negative, negative positive, and triple negative. The authors have proposed rules for analyzing ideas and tested them against the dataset using Naive Bayes (NB) and Support Vector Machine (SVM) models for accuracy, finding best results of 80.39% for NB and 81.37% for SVM.
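
    A minimal sketch of the NB-versus-SVM comparison using scikit-learn; the tiny corpus and labels are invented, and the paper's rule-based handling of double/triple negation is not reproduced here:

```python
# Invented toy corpus; real work would use the paper's datasets
# and its rule-based preprocessing of negations.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

posts = ["great product, love it", "terrible service, never again",
         "not bad at all", "absolutely awful experience",
         "works fine", "broken on arrival"]
labels = ["pos", "neg", "pos", "neg", "pos", "neg"]

for clf in (MultinomialNB(), LinearSVC()):
    model = make_pipeline(CountVectorizer(ngram_range=(1, 2)), clf)
    model.fit(posts, labels)
    print(type(clf).__name__, model.predict(["not bad service"]))
```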

  12. AMPLISAS: a web server for multilocus genotyping using next-generation amplicon sequencing data.

    Science.gov (United States)

    Sebastian, Alvaro; Herdegen, Magdalena; Migalska, Magdalena; Radwan, Jacek

    2016-03-01

    Next-generation sequencing (NGS) technologies are revolutionizing the fields of biology and medicine as powerful tools for amplicon sequencing (AS). Using combinations of primers and barcodes, it is possible to sequence targeted genomic regions with deep coverage for hundreds, even thousands, of individuals in a single experiment. This is extremely valuable for the genotyping of gene families in which locus-specific primers are often difficult to design, such as the major histocompatibility complex (MHC). The utility of AS is, however, limited by the high intrinsic sequencing error rates of NGS technologies and other sources of error such as polymerase amplification or chimera formation. Correcting these errors requires extensive bioinformatic post-processing of NGS data. Amplicon Sequence Assignment (AMPLISAS) is a tool that performs analysis of AS results in a simple and efficient way, while offering customization options for advanced users. AMPLISAS is designed as a three-step pipeline consisting of (i) read demultiplexing, (ii) unique sequence clustering and (iii) erroneous sequence filtering. Allele sequences and frequencies are retrieved in Excel spreadsheet format, making them easy to interpret. AMPLISAS's performance has been successfully benchmarked against previously published genotyped MHC data sets obtained with various NGS technologies.
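
    The three-step pipeline can be pictured with a toy sketch; the barcodes, reads and the 40% frequency threshold are invented, and real AMPLISAS clusters similar (not only identical) sequences with error-rate-aware criteria:

```python
from collections import Counter

# Toy reads: a 4-base barcode followed by the amplicon sequence.
# Barcodes, reads and threshold are invented for illustration.
BARCODES = {"ACGT": "sample1", "TTAG": "sample2"}
READS = ["ACGTGGAACCTT", "ACGTGGAACCTT", "ACGTGGAACCAT",
         "TTAGCCGGTTAA", "TTAGCCGGTTAA", "TTAGCCGGTTAT"]

def genotype(reads, min_freq=0.40):
    # Step (i): demultiplex reads by barcode.
    per_sample = {s: [] for s in BARCODES.values()}
    for r in reads:
        sample = BARCODES.get(r[:4])
        if sample:
            per_sample[sample].append(r[4:])
    # Step (ii): cluster identical variants; step (iii): drop rare ones.
    result = {}
    for sample, seqs in per_sample.items():
        counts = Counter(seqs)
        total = sum(counts.values())
        result[sample] = {v: round(c / total, 2) for v, c in counts.items()
                          if c / total >= min_freq}
    return result

print(genotype(READS))
# {'sample1': {'GGAACCTT': 0.67}, 'sample2': {'CCGGTTAA': 0.67}}
```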

  13. WEB-ISM’: A New Art Movement for E-Generation Children

    Directory of Open Access Journals (Sweden)

    Martha Ioannidou

    2014-09-01

    Full Text Available If we as educators want to make a change, it’s high time we take the bull by the horns and realistically address and debate current issues related not only to art context but to art pedagogy. In this new information age, an innovative e-collaborative project focusing on education in and through arts, designed, continually supported and revised by the associated university departments, can work as a medium for developing cognitive skills that carry over into other areas. Among the main aims of such a project, one could name a few general ones in the abstract, which will be expanded and explained in the full paper: It will offer participants the abilities to reflect, to link information from diverse subjects and sources, to generate ideas; to honour various traditions, to recognize historical achievements and enjoy new technologies via arts, to respect different cultures and accept various points of view; to testify and connect opinions, to develop higher-order thinking skills in students, to work and communicate creatively outside the boundaries of a single class, a single school, a single country; to draw reasoned conclusions and grasp the connections that lead to creative solutions. This special @rt-platform, that we intend to ‘build’, can act as a kind of unifying force, as a ‘cultural bridge’, with arts being the key to accept and respect the ‘other’, to understand the world’s cultures and civilization’s legacies. This alone can be the main aim, the reason, in a general attempt of working together in establishing world’s peace.

  14. Snow Web 2.0: The Next Generation of Antarctic Meteorological Monitoring Systems?

    Science.gov (United States)

    Coggins, J.; McDonald, A.; Plank, G.; Pannell, M.; Ward, R.; Parsons, S.

    2012-04-01

    Adequate in-situ observation of the Antarctic lower atmosphere has proved problematic, due to a combination of the inhospitable nature and extent of the continent. Traditional weather stations are expensive, subject to extreme weather for long periods and are often isolated, and as such are prone to failure and logistically difficult to repair. We have developed the first generation of an extended system of atmospheric sensors, each costing a fraction of the price of a traditional weather station. The system is capable of performing all of the monitoring tasks of a traditional station, but has built-in redundancy over the traditional approach because many units can be deployed in a relatively small area for similar expenditure as one large weather station. Furthermore, each unit is equipped with wireless networking capabilities and so is able to share information with those units in its direct vicinity. This allows for the ferrying of collected information to a manned observation station and hence the ability to monitor data in real-time. The distributed nature of the data collected can then be used as a stand-alone product to investigate small-scale weather and climate phenomena or integrated into larger studies and be used to monitor wide regions. GPS hardware installed on each unit also allows for high-resolution glacier or ice-shelf tracking. As a testing and data gathering study, eighteen such weather stations were deployed in the vicinity of Scott Base, Ross Island, Antarctica over the 2011/12 summer season. This presentation reports on findings from this field study, and discusses possibilities for the future.

  15. Designing next-generation platforms for evaluating scientific output: What scientists can learn from the social web

    Directory of Open Access Journals (Sweden)

    Tal Yarkoni

    2012-10-01

    Full Text Available Traditional pre-publication peer review of scientific output is a slow, inefficient, and unreliable process. Efforts to replace or supplement traditional evaluation models with open evaluation platforms that leverage advances in information technology are slowly gaining traction, but remain in the early stages of design and implementation. Here I discuss a number of considerations relevant to the development of such platforms. I focus particular attention on three core elements that next-generation evaluation platforms should strive to emphasize, including (a) open and transparent access to accumulated evaluation data, (b) personalized and highly customizable performance metrics, and (c) appropriate short-term incentivization of the userbase. Because all of these elements have already been successfully implemented on a large scale in hundreds of existing social web applications, I argue that development of new scientific evaluation platforms should proceed largely by adapting existing techniques rather than engineering entirely new evaluation mechanisms. Successful implementation of open evaluation platforms has the potential to substantially advance both the pace and the quality of scientific publication and evaluation, and the scientific community has a vested interest in shifting towards such models as soon as possible.

  16. Management and share of regulatory information through web; development of regulatory information management system for Korea next generation reactors

    Energy Technology Data Exchange (ETDEWEB)

    Lee, J. S.; Lee, J. H.; Jeong, Y. H.; Lee, S. H. [KINS, Taejon (Korea, Republic of); Yun, Y. C.; Park, M. I. [LG-EDS Systems, Seoul (Korea, Republic of)

    2001-05-01

    The Regulatory Information Management System developed by the Korea Institute of Nuclear Safety supports researchers who are in charge of developing SRRG for the Korea Next Generation Reactors, manages the developed SRRG and the development process, and makes it possible to share SRRG information and background knowledge through the internet with nuclear-related personnel and the public. Based on experience with the system's operation, the search engine was replaced so as to manage the native SRRG files directly. This change eliminates the inconsistency between native files and database files and improves search exactness through an automatic indexing function. The user interface of the internet homepage (http://kngr.kins.re.kr) was completely rebuilt and allows SRRG developers to manage the search system and the atomic energy regulations database on the Web without the help of client programs. General users are also able to use a more convenient search function and additional information through the improved interface. The system runs behind a backup system and a firewall for data protection and security.

  17. Research and Implementation of the Web Page Generation System Based on Responsive Web Design

    Institute of Scientific and Technical Information of China (English)

    臧进进; 鄂海红

    2015-01-01

    With the rise of the mobile Internet, more and more people are using mobile devices to access various sites. Web pages that can fit different terminals have become the key for individuals and enterprises designing and developing for the web. In this paper, the author first discusses the technology related to responsive Web design, then designs and implements a new web page generation system. The system shields users from the technical details of web page development, so that they can create web pages in a "what you see is what you get" manner. Through the use of responsive Web design, the pages generated by this system respond automatically to the accessing device, dynamically adjusting their layout structure and interaction style, and presenting the same content in different formats to users of different devices.

  18. Retrieval of complex χ((2)) parts for quantitative analysis of sum-frequency generation intensity spectra.

    Science.gov (United States)

    Hofmann, Matthias J; Koelsch, Patrick

    2015-10-07

    Vibrational sum-frequency generation (SFG) spectroscopy has become an established technique for in situ surface analysis. While spectral recording procedures and hardware have been optimized, unique data analysis routines have yet to be established. The SFG intensity is related to probing geometries and properties of the system under investigation, such as the absolute square of the second-order susceptibility, |χ((2))|(2). A conventional SFG intensity measurement does not grant access to the complex parts of χ((2)) unless further assumptions have been made. It is therefore difficult, sometimes impossible, to establish a unique fitting solution for SFG intensity spectra. Recently, interferometric phase-sensitive SFG or heterodyne detection methods have been introduced to measure the real and imaginary parts of χ((2)) experimentally. Here, we demonstrate that iterative phase-matching between complex spectra retrieved from maximum entropy method analysis and fitting of intensity SFG spectra (iMEMfit) leads to a unique solution for the complex parts of χ((2)) and enables quantitative analysis of SFG intensity spectra. A comparison between complex parts retrieved by iMEMfit applied to intensity spectra and phase-sensitive experimental data shows excellent agreement between the two methods.
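
    For orientation, the standard lineshape model behind such fits (a textbook parametrization, not necessarily the exact one used by iMEMfit) writes the intensity in terms of a nonresonant background and Lorentzian resonances:

```latex
I_{\mathrm{SFG}} \propto \left|\chi^{(2)}\right|^{2}, \qquad
\chi^{(2)} = \chi^{(2)}_{\mathrm{NR}}
           + \sum_{q} \frac{A_{q}}{\omega_{\mathrm{IR}} - \omega_{q} + i\Gamma_{q}},
```

    where A_q, ω_q and Γ_q are the amplitude, resonance frequency and damping of the q-th vibrational mode. Because only |χ((2))|(2) is measured, the phase information is lost, which is exactly why intensity-only fits are underdetermined without additional constraints.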

  19. Development of Web GIS for complex processing and visualization of climate geospatial datasets as an integral part of dedicated Virtual Research Environment

    Science.gov (United States)

    Gordov, Evgeny; Okladnikov, Igor; Titov, Alexander

    2017-04-01

    For comprehensive usage of large geospatial meteorological and climate datasets it is necessary to create a distributed software infrastructure based on the spatial data infrastructure (SDI) approach. Currently, it is generally accepted that the development of client applications as integrated elements of such an infrastructure should be based on modern web and GIS technologies. The paper describes a Web GIS for complex processing and visualization of geospatial datasets (mainly in NetCDF and PostGIS formats) as an integral part of a dedicated Virtual Research Environment for the comprehensive study of ongoing and possible future climate change and the analysis of its implications, providing full information and computing support for the study of the economic, political and social consequences of global climate change at the global and regional levels. The Web GIS consists of two basic software parts: 1. A server-side part comprising PHP applications of the SDI geoportal, realizing the interaction with the computational core backend and the WMS/WFS/WPS cartographical services, as well as implementing an open API for browser-based client software; being the secondary part, it provides a limited set of procedures accessible via a standard HTTP interface. 2. A front-end part representing the Web GIS client, developed as a "single page application" based on the JavaScript libraries OpenLayers (http://openlayers.org/), ExtJS (https://www.sencha.com/products/extjs) and GeoExt (http://geoext.org/); it implements the application business logic and provides an intuitive user interface similar to those of popular desktop GIS applications such as uDig and QuantumGIS. The Boundless/OpenGeo architecture was used as a basis for the Web GIS client development. According to general INSPIRE requirements for data visualization, the Web GIS provides such standard functionality as data overview, image navigation, scrolling, scaling and graphical overlay, displaying map
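
    Much of the standards-based map display reduces to composing WMS requests; a minimal sketch of building a GetMap URL (the endpoint and layer name are placeholders):

```python
from urllib.parse import urlencode

def wms_getmap_url(base, layer, bbox, size=(800, 600)):
    """Compose a standard WMS 1.3.0 GetMap request URL."""
    params = {
        "SERVICE": "WMS", "VERSION": "1.3.0", "REQUEST": "GetMap",
        "LAYERS": layer, "STYLES": "",
        "CRS": "EPSG:4326",
        # WMS 1.3.0 axis order for EPSG:4326 is lat,lon:
        "BBOX": ",".join(map(str, bbox)),
        "WIDTH": size[0], "HEIGHT": size[1],
        "FORMAT": "image/png", "TRANSPARENT": "TRUE",
    }
    return f"{base}?{urlencode(params)}"

# Placeholder endpoint and climate layer name.
print(wms_getmap_url("https://example.org/geoserver/wms",
                     "climate:tmean_2m", (40.0, 60.0, 60.0, 100.0)))
```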

  20. Semantic Web Technologies for the Adaptive Web

    DEFF Research Database (Denmark)

    Dolog, Peter

    2007-01-01

    Ontologies and reasoning are the key terms brought into focus by the semantic web community. Formal representation of ontologies in a common data model on the web can be taken as a foundation for adaptive web technologies as well. This chapter describes how ontologies shared on the semantic web...... means for deciding which links to show, annotate, hide, generate, and reorder. The semantic web technologies provide means to formalize the domain ontologies and metadata created from them. The formalization enables reasoning for personalization decisions. This chapter describes which components...... are crucial to be formalized by the semantic web ontologies for adaptive web. We use examples from an eLearning domain to illustrate the principles which are broadly applicable to any information domain on the web....

  1. DNA Data Visualization (DDV): Software for Generating Web-Based Interfaces Supporting Navigation and Analysis of DNA Sequence Data of Entire Genomes.

    Science.gov (United States)

    Neugebauer, Tomasz; Bordeleau, Eric; Burrus, Vincent; Brzezinski, Ryszard

    2015-01-01

    Data visualization methods are necessary during the exploration and analysis activities of an increasingly data-intensive scientific process. There are few existing visualization methods for raw nucleotide sequences of a whole genome or chromosome. Software for data visualization should allow the researchers to create accessible data visualization interfaces that can be exported and shared with others on the web. Herein, novel software developed for generating DNA data visualization interfaces is described. The software converts DNA data sets into images that are further processed as multi-scale images to be accessed through a web-based interface that supports zooming, panning and sequence fragment selection. Nucleotide composition frequencies and GC skew of a selected sequence segment can be obtained through the interface. The software was used to generate DNA data visualization of human and bacterial chromosomes. Examples of visually detectable features such as short and long direct repeats, long terminal repeats, mobile genetic elements, heterochromatic segments in microbial and human chromosomes, are presented. The software and its source code are available for download and further development. The visualization interfaces generated with the software allow for the immediate identification and observation of several types of sequence patterns in genomes of various sizes and origins. The visualization interfaces generated with the software are readily accessible through a web browser. This software is a useful research and teaching tool for genetics and structural genomics.
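
    One of the read-outs mentioned above, GC skew, has a standard definition, (G − C)/(G + C) over a sliding window; a minimal sketch of computing it along a sequence (the window size and test sequence are arbitrary choices for illustration):

```python
def gc_skew(seq, window=1000, step=1000):
    """GC skew (G - C) / (G + C) per window; None where a window contains no G or C."""
    seq = seq.upper()
    skews = []
    for start in range(0, len(seq) - window + 1, step):
        chunk = seq[start:start + window]
        g, c = chunk.count("G"), chunk.count("C")
        skews.append((g - c) / (g + c) if g + c else None)
    return skews

# Toy sequence; real input would be a chromosome-scale nucleotide string.
print(gc_skew("GGGGCC" * 500, window=600, step=600))
```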

  2. Publishing on the WWW : part 5 : a brief history of the internet and the World Wide Web

    OpenAIRE

    Grech, Victor E.

    2001-01-01

    This article focuses on the history of the Internet and the World Wide Web, the media that in recent years have created the concept of objects existing ‘on-line’ in a virtual computer environment. These objects naturally include online journals such as Images in Paediatric Cardiology.

  3. Publishing on the WWW. Part 5 - A brief history of the Internet and the World Wide Web

    OpenAIRE

    Grech, V

    2001-01-01

    This article focuses on the history of the Internet and the World Wide Web, the media that in recent years have created the concept of objects existing ‘on-line’ in a virtual computer environment. These objects naturally include on-line journals such as Images in Paediatric Cardiology.

  4. Separation of Lift-Generated Vortex Wakes Into Two Diverging Parts

    Science.gov (United States)

    Rossow, Vernon J.; Brown, Anthony P.

    2010-01-01

    As part of an ongoing study of the spreading rate of lift-generated vortex wakes, the present investigation considers possible reasons why segments of lift-generated wakes sometimes depart from the main part of the wake and move rapidly in either an upward or downward direction. It is assumed that deficiencies or enhancements of the lift carryover across the fuselage-shrouded part of the wing are the driving mechanism for departures of wake segments. The computations presented first indicate that upwardly departing wake segments that were observed and photographed could have been produced by a deficiency in lift carryover across the fuselage-shrouded part of the wing. Computations made of idealized vortex wakes indicate that upward departure of a wake segment requires a centerline reduction in the span loading of 70% or more, whether the engines are at idle or robust thrust. Similarly, it was found that downward departure of wake segments is produced when the lift over the center part of the wing is enhanced. However, it was also found that downward departures do not occur without the presence of robust engine-exhaust streams (i.e., the engines must NOT be at idle). In those cases, downward departure of a wake segment occurs when the centerline value of the loading is enhanced by any amount between about 10% and 100%. Observations of condensation trails indicate that downward departure of wake segments is rare. Upward departures of wake segments appear to be more common but are still rare. A study to determine the part of the aircraft that causes wake departures has not been carried out. However, even though departures of wake segments rarely occur, some aircraft do regularly shed these wake structures. If aircraft safety is to be assured to a high degree of reliability, and a solution for eliminating them is not implemented, existing guidelines for the avoidance of vortex wakes [1,3] may need to be broadened to include possible increases in wake sizes caused by vertical

  5. Google's Web Page Ranking Applied to Different Topological Web Graph Structures.

    Science.gov (United States)

    Meghabghab, George

    2001-01-01

    This research, part of an ongoing study to better understand Web page ranking on the Web, looks at a Web page as a graph structure, or Web graph, and classifies different Web graphs in the new coordinate space (out-degree, in-degree). Google's Web page ranking algorithm (Brin & Page, 1998) is applied in this new coordinate…
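
    For reference, the cited ranking algorithm can be stated compactly as a power iteration over the Web graph; a minimal sketch over an adjacency list (the damping factor 0.85 is the conventional choice, and the tiny example graph is invented):

```python
def pagerank(links, damping=0.85, iters=50):
    """links: dict node -> list of outgoing neighbors. Returns dict node -> rank."""
    nodes = list(links)
    n = len(nodes)
    rank = {u: 1.0 / n for u in nodes}
    for _ in range(iters):
        new = {u: (1.0 - damping) / n for u in nodes}
        for u in nodes:
            out = links[u]
            if not out:                       # dangling node: spread its rank uniformly
                for v in nodes:
                    new[v] += damping * rank[u] / n
            else:
                share = damping * rank[u] / len(out)
                for v in out:
                    new[v] += share
        rank = new
    return rank

print(pagerank({"a": ["b"], "b": ["a", "c"], "c": []}))
```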

  6. Generating Orthorectified Multi-Perspective 2.5D Maps to Facilitate Web GIS-Based Visualization and Exploitation of Massive 3D City Models

    Directory of Open Access Journals (Sweden)

    Jianming Liang

    2016-11-01

    Full Text Available A 2.5D map is a convenient and efficient approach to exploiting a massive three-dimensional (3D) city model in web GIS. With the rapid development of oblique airborne photogrammetry and photo-based 3D reconstruction, 3D city models are becoming more and more accessible. A 3D Geographic Information System (GIS) can support the interactive visualization of massive 3D city models on various platforms and devices. However, the value and accessibility of existing 3D city models can be augmented by integrating them into web-based two-dimensional (2D) GIS applications. In this paper, we present a step-by-step workflow for generating orthorectified oblique images (2.5D maps) from massive 3D city models. The proposed framework can produce 2.5D maps from an arbitrary perspective, defined by the elevation angle and azimuth angle of a virtual orthographic camera. We demonstrate how 2.5D maps can benefit web-based visualization and exploitation of massive 3D city models. We conclude that a 2.5D map is a compact data representation optimized for web data streaming of 3D city models and that geometric analysis of buildings can be effectively conducted on 2.5D maps.
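
    The perspective of a 2.5D map is fixed by two angles; a small sketch of turning the elevation and azimuth of the virtual orthographic camera into a view-direction vector (the axis conventions assumed here, z-up with azimuth measured clockwise from north, are illustrative and not taken from the paper):

```python
import math

def view_direction(elevation_deg, azimuth_deg):
    """Unit vector the virtual orthographic camera looks along.
    Assumed conventions: z-up, azimuth clockwise from north (+y),
    elevation above the horizon, so elevation 90 deg looks straight down."""
    el = math.radians(elevation_deg)
    az = math.radians(azimuth_deg)
    return (math.cos(el) * math.sin(az),
            math.cos(el) * math.cos(az),
            -math.sin(el))

print(view_direction(45.0, 225.0))   # a typical oblique bird's-eye view
```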

  7. Third-Generation Fatty Emulsions as Part of Parenteral Feeding in Operated Cancer Patients

    Directory of Open Access Journals (Sweden)

    S. V. Lomidze

    2010-01-01

    Full Text Available Objective: to study the efficacy of third- versus second-generation fatty emulsions as part of parenteral nutrition in patients operated on for gastric cancer. Subjects and methods. Envelope randomization was used to make up two groups, each comprising 10 patients who underwent gastrectomy for gastric cancer. A control group received parenteral nutrition with the following components: Lipofundin MCT/LCT 20% (500 ml daily) + Nutriflex 48/150 (B. Braun) (1000 ml daily), 1744 kcal/day. The study group patients were given Lipoplus 20% (500 ml daily) + Nutriflex 48/150 (1000 ml daily), 1745 kcal/day. Parenteral nutrition was used on postoperative days 1 to 5. Results. Nutritional status evaluation revealed a significant increase in the concentration of total protein and albumin in both the control and study group patients on postoperative day 6. The use of both second- and third-generation fatty emulsions caused a significant increase in the concentration of triglycerides on day 6 after surgery; no differences were found between the groups. On day 6 following surgery, there was a significant decrease in IL-4 in both groups (p<0.05). At the same time, the Lipofundin MCT/LCT group showed a significantly lower concentration of IL-4 than did the study group (p<0.05). After termination of the parenteral nutrition course, the study and control groups showed a significant decrease in one of the major pro-inflammatory cytokines, IL-6. Conclusion. In the study group, the serum anti-inflammatory activity of IL-4 was more evident than in the control group and the pro-inflammatory activity (IL-6 concentration) decreased, which supports that, compared with second-generation fatty emulsions, third-generation emulsions with a balanced omega-3 to omega-6 fatty acid ratio (1:2.7) had a normalizing effect on systemic inflammatory processes and cytokine balance, with increased anti-inflammatory and reduced pro-inflammatory activities. Key words: third-generation

  8. Semantic Web Technologies for the Adaptive Web

    DEFF Research Database (Denmark)

    Dolog, Peter

    2007-01-01

    Ontologies and reasoning are the key terms brought into focus by the semantic web community. Formal representation of ontologies in a common data model on the web can be taken as a foundation for adaptive web technologies as well. This chapter describes how ontologies shared on the semantic web...... provide conceptualization for the links which are a main vehicle to access information on the web. The subject domain ontologies serve as constraints for generating only those links which are relevant for the domain a user is currently interested in. Furthermore, user model ontologies provide additional...... means for deciding which links to show, annotate, hide, generate, and reorder. The semantic web technologies provide means to formalize the domain ontologies and metadata created from them. The formalization enables reasoning for personalization decisions. This chapter describes which components...

  9. MetaSurv: Web-Platform Generator for the Monitoring of Health Indicators and Interactive Geographical Information System.

    Science.gov (United States)

    Toubiana, Laurent; Moreau, Stéphane; Bonnard, Gaétan

    2005-01-01

    The control of epidemics of transmissible diseases requires fast and effective tools for data acquisition, analysis, and information feedback to health actors as well as to the general public. We present a tool for the fast creation of Internet monitoring platforms allowing the collection and real-time analysis of epidemic data of any origin, with dynamic and interactive cartographic representation. A Web-based Geographic Information System (Web-GIS) has been designed for communicable disease monitoring. The Web-GIS was coupled to a data warehouse and embedded in an n-tier architecture designed as the Multi-Source Information System. It provides access to views of communicable diseases and is thus a useful tool for supporting health care decision-making for communicable diseases. This tool builds on 20 years of experience of the Sentinelles network, with the daily participation of general practitioners.

  10. Generation Y, Learner Autonomy and the Potential of Web 2.0 Tools for Language Learning and Teaching

    Science.gov (United States)

    Morgan, Liam

    2012-01-01

    Purpose: The purpose of this paper is to examine the relationship between the development of learner autonomy and the application of Web 2.0 tools in the language classroom. Design/methodology/approach: The approach taken is that of qualitative action research within an explicit theoretical framework and the data were collected via surveys and…

  11. Performance of Generating Plant: Managing the Changes. Part 2: Thermal Generating Plant Unavailability Factors and Availability Statistics

    Energy Technology Data Exchange (ETDEWEB)

    Curley, G. Michael [North American Electric Reliability Corporation (United States)]; Mandula, Jiri [International Atomic Energy Agency (IAEA)]

    2008-05-15

    The WEC Committee on the Performance of Generating Plant (PGP) has been collecting and analysing power plant performance statistics worldwide for more than 30 years and has produced regular reports, which include examples of advanced techniques and methods for improving power plant performance through benchmarking. A series of reports from the various working groups was issued in 2008. This reference presents the results of Working Group 2 (WG2). WG2's main task is to facilitate the collection and input on an annual basis of power plant performance data (unit-by-unit and aggregated data) into the WEC PGP database. The statistics will be collected for steam, nuclear, gas turbine and combined cycle, hydro and pump storage plant. WG2 will also oversee the ongoing development of the availability statistics database, including the contents, the required software, security issues and other important information. The report is divided into two sections: Thermal generating, combined cycle/co-generation, combustion turbine, hydro and pumped storage unavailability factors and availability statistics; and nuclear power generating units.

  12. Local inactivation of funnel-web spider (Atrax robustus) venom by first-aid measures: potentially lifesaving part of treatment.

    Science.gov (United States)

    Sutherland, S K; Duncan, A W; Tibballs, J

    1980-10-18

    Venom of the male Sydney funnel-web spider was injected subcutaneously into the limbs of monkeys (Macaca fascicularis), and the central movement of venom was delayed by first-aid treatment. This treatment consisted of the application of firm pressure over the site of injection and immobilization of the limb. It was found that quantities of venom as high as 2 mg were inactivated when the first-aid procedures were maintained for 24 hours. Over a six-hour period, 0.5 mg of venom could be inactivated. Since the amount of venom injected by the spider into a human victim is unlikely to exceed 0.2 mg, these findings have immediate application both to the first aid and to actual medical management of human victims.

  13. Decontamination effects of low-temperature plasma generated by corona discharge. Part II: new insights.

    Science.gov (United States)

    Scholtz, V; Julák, J; Kríha, V; Mosinger, J; Kopecká, S

    2007-01-01

    The second part of our paper presents the results of experiments with the decontamination of surfaces by low-temperature plasma generated by corona discharge in air at atmospheric pressure. A simple device is described and the effects of the corona discharge on model microorganisms, viz. the yeast Candida albicans, Gram-negative bacteria Escherichia coli, Enterobacter aerogenes, Neisseria sicca, Stenotrophomonas maltophilia, Gram-positive bacteria Deinococcus radiodurans, Enterococcus faecium, Staphylococcus epidermidis, Streptococcus sanguinis, and vegetative and spore forms of Geobacillus stearothermophilus are discussed. A similar microbicidal effect after about one-minute exposure was observed in all vegetative forms of the microorganisms. Measurement in growth inhibition zones on a semisolid medium was used to determine the dependence of the microbicidal effect on exposure time and the distance between electrodes. Counting of colonies served to assess the microbicidal effect of the discharge on contaminated inert surfaces observable after more than 1 min exposure. Geobacillus stearothermophilus spores were found to have several times lower susceptibility to the action of the discharge and the microbicidal effect was observed only after an 8 min exposure. Reaction with the iodide reagent did not unambiguously demonstrate the difference between ozone and singlet oxygen as presumed active components of the corona. The area distribution of reactive oxygen species was determined; it was found to differ from the Wartburg law depending on exposure time. Qualitative evidence was obtained on the penetration of the reactive oxygen species into the semisolid medium.

  14. Teacher education in the generative virtual classroom: developing learning theories through a web-delivered, technology-and-science education context

    Science.gov (United States)

    Schaverien, Lynette

    2003-12-01

    This paper reports the use of a research-based, web-delivered, technology-and-science education context (the Generative Virtual Classroom) in which student-teachers can develop their ability to recognize, describe, analyse and theorize learning. Addressing well-recognized concerns about narrowly conceived, anachronistic and ineffective technology-and-science education, this e-learning environment aims to use advanced technologies for learning, to bring about larger scale improvement in classroom practice than has so far been effected by direct intervention through teacher education. Student-teachers' short, intensive engagement with the Generative Virtual Classroom during their practice teaching is examined. Findings affirm the worth of this research-based e-learning system for teacher education and the power of a biologically based, generative theory to make sense of the learning that occurred.

  15. Is there a "net generation" in veterinary medicine? A comparative study on the use of the Internet and Web 2.0 by students and the veterinary profession.

    Science.gov (United States)

    Tenhaven, Christoph; Tipold, Andrea; Fischer, Martin R; Ehlers, Jan P

    2013-01-01

    Introduction: Informal and formal lifelong learning is essential during university studies and in professional life. Besides classical continuing education, Web 2.0 tools can be used for this purpose. It is disputed in the literature, however, whether a so-called net generation exists among those under 30. Aim: To test the hypothesis that a net generation exists among students and young veterinarians. Method: An online survey of students and the veterinary profession was conducted in the German-speaking countries, advertised via online media and classical print media. Results: 1,780 people took part in the survey. Students and the veterinary profession show different usage behaviour on social networks (91.9% vs. 69%) and instant messengers (55.9% vs. 24.5%). All tools were used mainly passively and for private purposes, and to a lesser extent for work and study. Outlook: The use of Web 2.0 tools is worthwhile; however, teaching information and media literacy, establishing rules of conduct on the Internet, and checking user-generated content are essential.

  16. Web Security, Privacy & Commerce

    CERN Document Server

    Garfinkel, Simson

    2011-01-01

    Since the first edition of this classic reference was published, World Wide Web use has exploded and e-commerce has become a daily part of business and personal life. As Web use has grown, so have the threats to our security and privacy--from credit card fraud to routine invasions of privacy by marketers to web site defacements to attacks that shut down popular web sites. Web Security, Privacy & Commerce goes behind the headlines, examines the major security risks facing us today, and explains how we can minimize them. It describes risks for Windows and Unix, Microsoft Internet Exp

  17. Web Search Results Summarization Using Similarity Assessment

    Directory of Open Access Journals (Sweden)

    Sawant V.V.

    2014-06-01

    Full Text Available Nowadays the Internet has become part of our life; the WWW is its most important service because it allows presenting information such as documents, images, etc. The WWW grows rapidly and caters to users of diverse levels and categories. Web search results are extracted in response to user-specified queries. With millions of pieces of information pouring online, users have no time to surf the contents completely; moreover, the available information is often repeated or duplicated. This has created the need to restructure search results so that they can be summarized. The proposed approach comprises the extraction of different features of web pages. Web page visual similarity assessment has been employed to address problems in different fields, including phishing, web archiving, and web search engines. In this approach, the search results returned for a user query are first stored. The Earth Mover's Distance (EMD) is used to assess web page visual similarity: each web page is rendered as a low-resolution image, a signature of that image is created from color and coordinate features, and the distance between web pages is calculated by applying the EMD. A layout similarity value is computed using tag comparison and template comparison algorithms. Textual similarity is computed using cosine similarity, and hyperlink analysis is performed to compare outward links. The final similarity value is calculated by fusing the layout, text, hyperlink and EMD values. Once the similarity matrix is found, clustering is performed with the help of connected components. Finally, groups of similar web pages, i.e. summarized results, are displayed to the user. Experiments were conducted to demonstrate the effectiveness of the four methods in generating summarized results on different web pages and user queries.
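
    The paper describes the fusion of the four signals only qualitatively; a minimal sketch of one plausible fusion rule, with the standard bag-of-words cosine measure for the textual component (the weights and example scores are invented for illustration):

```python
import math
from collections import Counter

def cosine_similarity(text_a, text_b):
    """Standard bag-of-words cosine similarity between two documents."""
    va, vb = Counter(text_a.lower().split()), Counter(text_b.lower().split())
    dot = sum(va[t] * vb[t] for t in va)
    norm = (math.sqrt(sum(c * c for c in va.values()))
            * math.sqrt(sum(c * c for c in vb.values())))
    return dot / norm if norm else 0.0

def fused_similarity(layout, text, hyperlink, emd_distance, weights=(0.25, 0.25, 0.25, 0.25)):
    """Fuse the four signals; EMD is a distance, so it is inverted into a similarity first."""
    emd_sim = 1.0 / (1.0 + emd_distance)
    w1, w2, w3, w4 = weights
    return w1 * layout + w2 * text + w3 * hyperlink + w4 * emd_sim

text_sim = cosine_similarity("web page summarization", "summarization of web search results")
print(fused_similarity(layout=0.8, text=text_sim, hyperlink=0.5, emd_distance=0.3))
```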

  18. Web Mining: An Overview

    Directory of Open Access Journals (Sweden)

    P. V. G. S. Mudiraj; B. Jabber; K. David Raju

    2011-12-01

    Full Text Available Web usage mining is a main research area in Web mining, focused on learning about Web users and their interactions with Web sites. The motive of mining is to find users' access models automatically and quickly from the vast Web log data, such as frequent access paths, frequent access page groups and user clusters. Through web usage mining, the server log, registration information and other related information left by users provide a foundation for decision making in organizations. This article provides a survey and analysis of current Web usage mining systems and technologies. There are generally three tasks in Web usage mining: preprocessing, pattern analysis and knowledge discovery. Preprocessing cleans the server log file by removing log entries such as errors or failures and repeated requests for the same URL from the same host. The main task of pattern analysis is to filter out uninteresting information and to visualize and interpret the interesting patterns for users. The statistics collected from the log file help to discover knowledge, which can be used to classify users as excellent, medium or weak, and likewise web pages as excellent, medium or weak, based on the hit counts of the pages in the web site. The design of the website is then restructured based on user behavior or hit counts, which provides quick response to web users, saves memory space on servers and thus reduces HTTP requests and bandwidth utilization. This paper addresses challenges in the three phases of Web usage mining along with Web structure mining. It also discusses an application of WUM, an online recommender system that dynamically generates links to pages that have not yet been visited by a user and might be of potential interest. Differently from the recommender systems proposed so far, ONLINE MINER does not make use of any off-line component and is able to manage Web sites made up of dynamically generated pages.
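
    As a concrete illustration of the preprocessing task described above, the following sketch cleans a Common Log Format server log, dropping error/failure entries and counting hits per page (the log format and the asset filter are generic assumptions, not details from the article):

```python
import re
from collections import Counter

# Common Log Format: host ident user [time] "METHOD url PROTO" status bytes
LOG_RE = re.compile(r'^(\S+) \S+ \S+ \[[^\]]+\] "(\S+) (\S+) [^"]*" (\d{3}) \S+')

def page_hits(lines):
    """Count successful GET hits per URL, skipping errors and embedded assets."""
    hits = Counter()
    for line in lines:
        m = LOG_RE.match(line)
        if not m:
            continue
        host, method, url, status = m.groups()
        if method != "GET" or not status.startswith("2"):
            continue                      # drop error/failure entries, per the preprocessing step
        if url.endswith((".png", ".jpg", ".css", ".js")):
            continue                      # drop embedded assets, keep page requests
        hits[url] += 1
    return hits

sample = ['1.2.3.4 - - [10/Oct/2011:13:55:36 -0700] "GET /index.html HTTP/1.0" 200 2326',
          '1.2.3.4 - - [10/Oct/2011:13:55:37 -0700] "GET /missing HTTP/1.0" 404 209']
print(page_hits(sample).most_common())
```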

  19. Performance of Generating Plant: Managing the Changes. Part 1: International availability data exchange for thermal generating plant

    Energy Technology Data Exchange (ETDEWEB)

    Stallard, G.S.; Deschaine, R. [Black and Veatch (United States)]

    2008-05-15

    The WEC Committee on the Performance of Generating Plant (PGP) has been collecting and analysing power plant performance statistics worldwide for more than 30 years and has produced regular reports, which include examples of advanced techniques and methods for improving power plant performance through benchmarking. A series of reports from the various working groups was issued in 2008. This reference presents the results of Working Group 1 (WG1). WG1's primary focus is to analyse the best ways to measure, evaluate, and apply power plant performance and availability data to promote plant performance improvements worldwide. The paper explores the specific work activities of 2004-2007 to extend traditional analysis and benchmarking frameworks. It is divided into two major topics: Overview of current electric supply industry issues/trends; and, Technical Methods/Tools to evaluate performance in today's ESI.

  20. ICO amplicon NGS data analysis: a Web tool for variant detection in common high-risk hereditary cancer genes analyzed by amplicon GS Junior next-generation sequencing.

    Science.gov (United States)

    Lopez-Doriga, Adriana; Feliubadaló, Lídia; Menéndez, Mireia; Lopez-Doriga, Sergio; Morón-Duran, Francisco D; del Valle, Jesús; Tornero, Eva; Montes, Eva; Cuesta, Raquel; Campos, Olga; Gómez, Carolina; Pineda, Marta; González, Sara; Moreno, Victor; Capellá, Gabriel; Lázaro, Conxi

    2014-03-01

    Next-generation sequencing (NGS) has revolutionized genomic research and is set to have a major impact on genetic diagnostics thanks to the advent of benchtop sequencers and flexible kits for targeted libraries. Among the main hurdles in NGS are the difficulty of performing bioinformatic analysis of the huge volume of data generated and the high number of false positive calls that could be obtained, depending on the NGS technology and the analysis pipeline. Here, we present the development of a free and user-friendly Web data analysis tool that detects and filters sequence variants, provides coverage information, and allows the user to customize some basic parameters. The tool has been developed to provide accurate genetic analysis of targeted sequencing of common high-risk hereditary cancer genes using amplicon libraries run in a GS Junior System. The Web resource is linked to our own mutation database, to assist in the clinical classification of identified variants. We believe that this tool will greatly facilitate the use of the NGS approach in routine laboratories.
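
    The record characterizes the filtering only broadly; a toy sketch of the kind of coverage and allele-fraction filter such pipelines apply (the field names and thresholds are invented for illustration, not the tool's actual parameters):

```python
def filter_variants(variants, min_coverage=30, min_variant_fraction=0.2):
    """Keep calls with enough reads and a variant-allele fraction above threshold."""
    kept = []
    for v in variants:   # v: dict with 'pos', 'ref', 'alt', 'depth', 'alt_reads'
        if v["depth"] < min_coverage:
            continue                                  # low coverage: unreliable call
        if v["alt_reads"] / v["depth"] < min_variant_fraction:
            continue                                  # likely sequencing noise
        kept.append(v)
    return kept

calls = [{"pos": 101, "ref": "A", "alt": "G", "depth": 120, "alt_reads": 55},
         {"pos": 202, "ref": "C", "alt": "T", "depth": 18, "alt_reads": 9}]
print(filter_variants(calls))
```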

  1. The Web 2.0 as Marketing Tool: Opportunities for SMEs

    NARCIS (Netherlands)

    Constantinides, Efthymios

    2008-01-01

    The new generation of Internet applications widely known as Social Media or Web 2.0 offers corporations a whole range of opportunities for improving their marketing efficiency and internal operations. Web 2.0 applications have already become part of the daily life of an increasing number of consumers

  4. Rendering of surface-geometries at job-generation level for camouflaging the layered nature of Additively Manufactured parts

    DEFF Research Database (Denmark)

    Pedersen, D. B.; Hansen, H. N.; Nielsen, J. S.

    2014-01-01

    It is proposed to camouflage these layers in order to produce parts with a better visual appeal, and to add functional surface structures. In order to generate such surface structures without adding a challenging computational overhead at job-generation, inspiration is found in Computer Generated Imaging (CGI)...... An often used method for visualization of complex geometries within CGI is to produce a geometrical primitive, after which the primitive is passed through a renderer.[2] Examples can be geometries with hair, leather structure and the like. Should the entire geometry including surface definition sought......

  5. The Web 2.0 as Marketing Tool: Opportunities for SMEs

    OpenAIRE

    Constantinides, Efthymios

    2008-01-01

    The new generation of Internet applications widely known as Social Media or Web 2.0 offers corporations a whole range of opportunities for improving their marketing efficiency and internal operations. Web 2.0 applications have already become part of the daily life of an increasing number of consumers who regard them as prime channels of communication, information exchange, sharing of expertise, dissemination of individual creativity and entertainment. Web logs, podcasts, online forums and soc...

  6. An Efficient Cluster Based Web Object Filters From Web Pre-Fetching And Web Caching On Web User Navigation

    Directory of Open Access Journals (Sweden)

    A. K. Santra

    2012-05-01

    Full Text Available The World Wide Web is a distributed internet system which provides dynamic and interactive services, including online tutoring, video/audio conferencing, e-commerce, etc., which generate heavy demand on network resources and web servers. Web use has increased at a very rapid rate over the past few years, and with it the amount of traffic over the internet; as a result, network performance has become slow. Web pre-fetching and caching is one of the effective solutions to reduce web access latency and improve quality of service. An existing model presented a cluster-based pre-fetching scheme that identified clusters of correlated web pages based on users' access patterns. Web pre-fetching and caching cause significant improvements in the performance of web infrastructure. In this paper, we present an efficient cluster-based web object filtering scheme over web pre-fetching and web caching to evaluate web user navigation patterns and user preferences in product search. Clusters of web page objects are obtained from pre-fetched and cached contents. User navigation is evaluated from the web cluster objects with similarity retrieval in subsequent user sessions. Web object filters are built from the interpretation of the cluster web pages related to unique users by discarding redundant pages. Ranking is done on users' web page product preferences over multiple sessions of each individual user. The performance is measured in terms of the objective function, the number of clusters and cluster accuracy.

  7. AMPLIFICATION OF AZOSPIRILLUM SP. JG3 GLPD GENE FRAGMENT USING DEGENERATE PRIMERS GENERATED BY WEB-BASED TOOLS

    Directory of Open Access Journals (Sweden)

    Stalis Norma Ethica

    2013-12-01

    Full Text Available The Primaclade and In Silico web-based tools were used as a strategy to obtain a correct-size PCR amplicon targeting a fragment of the gene encoding glycerol-3-phosphate dehydrogenase (glpD) of Azospirillum sp. JG3. The bacterial strain is a soil, Gram-negative PGPR (Plant-Growth Promoting Rhizobacteria) isolated from agricultural land in Purwokerto, Central Java, Indonesia, which is able to produce several commercial enzymes. The aim was to obtain a pair of reliable degenerate primers from a limited number of glpD sequences of other Azospirilla retrieved from GenBank using a bioinformatics approach. We demonstrate degenerate primer design that led to successful PCR amplification of the targeted DNA fragment. Homology analysis showed that the obtained DNA fragment is 61% and 99% similar to the sn-glycerol-3-phosphate dehydrogenase genes of Azospirillum brasilense and Stenotrophomonas maltophilia, respectively.
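
    Degenerate primer design of the kind performed by these tools collapses each column of an alignment to an IUPAC ambiguity code; a minimal sketch over a toy alignment (the sequences are invented; the IUPAC table itself is standard):

```python
# Standard IUPAC nucleotide ambiguity codes, keyed by the set of bases they cover.
IUPAC = {frozenset(s): c for s, c in [
    ("A", "A"), ("C", "C"), ("G", "G"), ("T", "T"),
    ("AG", "R"), ("CT", "Y"), ("CG", "S"), ("AT", "W"),
    ("GT", "K"), ("AC", "M"), ("CGT", "B"), ("AGT", "D"),
    ("ACT", "H"), ("ACG", "V"), ("ACGT", "N")]}

def degenerate_primer(aligned_seqs):
    """Collapse each column of an equal-length alignment into one IUPAC code."""
    return "".join(IUPAC[frozenset(column)] for column in zip(*aligned_seqs))

# Toy alignment of a conserved region from three hypothetical glpD homologs.
print(degenerate_primer(["ATGGCA", "ATGACA", "ATGGCT"]))  # -> "ATGRCW"
```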

  8. Research and Development of a New Generation Bridge Management System Based on WebGIS Technology

    Institute of Scientific and Technical Information of China (English)

    胡振中; 李连友; 陈祥祥; 赵红蕊

    2015-01-01

    The bridge management system (BMS) is a management toolkit that helps managers carry out routine bridge management, simulation analysis and decision-making. To solve the problems extensively existing in current BMSs, such as incomplete management systems, out-of-date analysis and evaluation models, poor human-machine interaction experiences and inconvenient maintenance, Web Geographic Information System (WebGIS) technology is used in this research to enhance the data visualization effect, improve the human-machine interaction experience and solve resource sharing on the one hand, and, on the other hand, to improve the accuracy of life-cycle analysis and evaluation of bridges by establishing sound degradation and cost models. On this basis, a prototype of a new-generation BMS is developed; by recording the historic inspection data of bridges and mining the usable information, the future conditions and economic benefits of the bridges are predicted, thereby realizing the tracking, analysis and evaluation of the whole process of the

  9. WEB View Layer Generation Technology Based on the Template

    Institute of Scientific and Technical Information of China (English)

    张铁头; 刘磊

    2012-01-01

    The view layer of a WEB system changes frequently. Template technology makes it possible to separate the page layout from the data displayed in it, so that adjustments to the page structure do not require changes to the other parts of the system; this eases page design and also facilitates the division of labor. FreeMarker is a general-purpose template engine developed in Java and is well suited to generating the WEB view layer.

  10. Automatic Extraction of Destinations, Origins and Route Parts from Human Generated Route Directions

    Science.gov (United States)

    Zhang, Xiao; Mitra, Prasenjit; Klippel, Alexander; Maceachren, Alan

    Researchers from the cognitive and spatial sciences are studying text descriptions of movement patterns in order to examine how humans communicate and understand spatial information. In particular, route directions offer a rich source of information on how cognitive systems conceptualize movement patterns by segmenting them into meaningful parts. Route directions are composed using a plethora of cognitive spatial organization principles: changing levels of granularity, hierarchical organization, incorporation of cognitively and perceptually salient elements, and so forth. Identifying such information in text documents automatically is crucial for enabling machine-understanding of human spatial language. The benefits are: a) creating opportunities for large-scale studies of human linguistic behavior; b) extracting and georeferencing salient entities (landmarks) that are used by human route direction providers; c) developing methods to translate route directions to sketches and maps; and d) enabling queries on large corpora of crawled/analyzed movement data. In this paper, we introduce our approach and implementations that bring us closer to the goal of automatically processing linguistic route directions. We report on research directed at one part of the larger problem, that is, extracting the three most critical parts of route directions and movement patterns in general: origin, destination, and route parts. We use machine-learning based algorithms to extract these parts of routes, including, for example, destination names and types. We prove the effectiveness of our approach in several experiments using hand-tagged corpora.

  11. Building Social Web Applications

    CERN Document Server

    Bell, Gavin

    2009-01-01

    Building a web application that attracts and retains regular visitors is tricky enough, but creating a social application that encourages visitors to interact with one another requires careful planning. This book provides practical solutions to the tough questions you'll face when building an effective community site -- one that makes visitors feel like they've found a new home on the Web. If your company is ready to take part in the social web, this book will help you get started. Whether you're creating a new site from scratch or reworking an existing site, Building Social Web Applications

  12. Dynamic Web Pages: Performance Impact on Web Servers.

    Science.gov (United States)

    Kothari, Bhupesh; Claypool, Mark

    2001-01-01

    Discussion of Web servers and requests for dynamic pages focuses on experimentally measuring and analyzing the performance of the three dynamic Web page generation technologies: CGI, FastCGI, and Servlets. Develops a multivariate linear regression model and predicts Web server performance under some typical dynamic requests. (Author/LRW)
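
    The multivariate linear regression mentioned in the record can be sketched with an ordinary least-squares fit; the predictors and measurements below are invented for illustration and are not the study's data:

```python
import numpy as np

# Invented measurements: requests/sec and dynamic-page fraction vs. response time (ms).
X = np.array([[50, 0.1], [100, 0.3], [150, 0.5], [200, 0.7], [250, 0.9]], dtype=float)
y = np.array([12.0, 25.0, 41.0, 60.0, 83.0])

A = np.column_stack([np.ones(len(X)), X])        # add an intercept column
coef, *_ = np.linalg.lstsq(A, y, rcond=None)     # ordinary least-squares fit

predict = lambda load, dyn_frac: coef @ np.array([1.0, load, dyn_frac])
print(coef, predict(180, 0.6))                   # predicted response time at a new load
```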

  13. Web Accessibility and Guidelines

    Science.gov (United States)

    Harper, Simon; Yesilada, Yeliz

    Access to, and movement around, complex online environments, of which the World Wide Web (Web) is the most popular example, has long been considered an important and major issue in the Web design and usability field. The commonly used slang phrase ‘surfing the Web’ implies rapid and free access, pointing to its importance among designers and users alike. It has also been long established that this potentially complex and difficult access is further complicated, and becomes neither rapid nor free, if the user is disabled. There are millions of people who have disabilities that affect their use of the Web. Web accessibility aims to help these people to perceive, understand, navigate, and interact with, as well as contribute to, the Web, and thereby the society in general. This accessibility is, in part, facilitated by the Web Content Accessibility Guidelines (WCAG) currently moving from version one to two. These guidelines are intended to encourage designers to make sure their sites conform to specifications, and in that conformance enable the assistive technologies of disabled users to better interact with the page content. In this way, it was hoped that accessibility could be supported. While this is in part true, guidelines do not solve all problems and the new WCAG version two guidelines are surrounded by controversy and intrigue. This chapter aims to establish the published literature related to Web accessibility and Web accessibility guidelines, and discuss limitations of the current guidelines and future directions.

  14. Strategic Generation with Conjectured Transmission Price Responses in a Mixed Transmission Pricing System. Part 1. Formulation

    Energy Technology Data Exchange (ETDEWEB)

    Hobbs, B.F. [Department of Geography and Environmental Engineering, The Johns Hopkins University, Baltimore, MD (United States)]; Rijkers, F.A.M. [Office of Energy Regulation DTe, Den Haag (Netherlands)]

    2004-05-01

    The conjectured supply function (CSF) model calculates an oligopolistic equilibrium among competing generating companies (GenCos), presuming that GenCos anticipate that rival firms will react to price increases by expanding their sales at an assumed rate. The CSF model is generalized here to include each generator's conjectures concerning how the price of transmission services (point-to-point service and constrained interfaces) will be affected by the amount of those services that the generator demands. This generalization reflects the market reality that large producers will anticipate that they can favorably affect transmission prices by their actions. The model simulates oligopolistic competition among generators while simultaneously representing a mixed transmission pricing system. This mixed system includes fixed transmission tariffs, congestion-based pricing of physical transmission constraints (represented as a linearized dc load flow), and auctions of interface capacity in a path-based pricing system. Pricing inefficiencies, such as export fees and no credit for counterflows, can be simulated. The model is formulated as a linear mixed complementarity problem, which enables very large market models to be solved. In the second paper of this two-paper series, the capabilities of the model are illustrated with an application to northwest Europe, where transmission pricing is based on such a mixture of approaches.

  15. A thermoelectric power generating heat exchanger: Part I – Experimental realization

    DEFF Research Database (Denmark)

    Bjørk, Rasmus; Sarhadi, Ali; Pryds, Nini

    2016-01-01

    An experimental realization of a heat exchanger with commercial thermoelectric generators (TEGs) is presented. The power producing capabilities as a function of flow rate and temperature span are characterized for two different commercial heat transfer fluids and for three different thermal...

  16. Web applications using the Google Web Toolkit

    OpenAIRE

    von Wenckstern, Michael

    2013-01-01

    This diploma thesis describes how to create or convert traditional Java programs to desktop-like rich internet applications with the Google Web Toolkit. The Google Web Toolkit is an open source development environment, which translates Java code to browser and device independent HTML and JavaScript. Most of the GWT framework parts, including the Java to JavaScript compiler as well as important security issues of websites will be introduced. The famous Agricola board game will be ...

  18. HIDDEN WEB EXTRACTOR DYNAMIC WAY TO UNCOVER THE DEEP WEB

    Directory of Open Access Journals (Sweden)

    DR. ANURADHA

    2012-06-01

    Full Text Available In this era of a digital tsunami of information on the web, everyone is completely dependent on the WWW for information retrieval. This has posed a challenging problem in extracting relevant data. Traditional web crawlers focus only on the surface web, while the deep web keeps expanding behind the scenes. The web databases are hidden behind query interfaces. In this paper, we propose a Hidden Web Extractor (HWE) that can automatically discover and download data from the Hidden Web databases. Since the only "entry point" to a Hidden Web site is a query interface, the main challenge that a Hidden Web Extractor has to face is how to automatically generate meaningful queries for the unlimited number of website pages.

  19. Scalable hosting of web applications

    NARCIS (Netherlands)

    Sivasubramanian, S.

    2007-01-01

    Modern Web sites have evolved from simple monolithic systems to complex multitiered systems. In contrast to traditional Web sites, these sites do not simply deliver pre-written content but dynamically generate content using (one or more) multi-tiered Web applications. In this thesis, we

  1. Web Browser Trends and Technologies.

    Science.gov (United States)

    Goodwin-Jones, Bob

    2000-01-01

    Discusses Web browsers and how their capabilities have been expanded, support for Web browsing on different devices (cell phones, palmtop computers, TV sets), and browser support for the next-generation Web authoring language, XML ("extensible markup language"). (Author/VWL)

  2. A Web Service Tool (SOAR) for the Dynamic Generation of L1 Grids of Coincident AIRS, AMSU and MODIS Satellite Sounding Radiance Data for Climate Studies

    Science.gov (United States)

    Halem, M.; Yesha, Y.; Tilmes, C.; Chapman, D.; Goldberg, M.; Zhou, L.

    2007-05-01

    Three decades of Earth remote sensing from NASA, NOAA and DOD operational and research satellites carrying successive generations of improved atmospheric sounder instruments have resulted in petabytes of radiance data with varying spatial and spectral resolutions being stored at different data archives in various data formats by the respective agencies. This evolution of sounders and the diversities of these archived data sets have led to data processing obstacles limiting the science community from readily accessing and analyzing such long-term climate data records. We address this problem by the development of a web based Service Oriented Atmospheric Radiance (SOAR) system built on the SOA paradigm that makes it practical for the science community to dynamically access, manipulate and generate long term records of L1 pre-gridded sounding radiances of coincident multi-sensor data for regions specified according to user chosen criteria. SOAR employs a modification of the standard Client Server interactions that allows users to represent themselves directly to the Process Server through their own web browsers. The browser uses AJAX to request Javascript libraries and DHTML interfaces that define the possible client interactions and communicates the SOAP messages to the Process server allowing for dynamic web dialogs with the user to take place on the fly. The Process Server is also connected to an underlying high performance compute cluster and storage system which provides much of the data processing capabilities required to service the client requests. The compute cluster employs optical communications to NOAA and NASA for accessing the data and under the governance of the Process Server invokes algorithms for on-demand spatial, temporal, and spectral gridding. Scientists can choose from a variety of statistical averaging techniques for compositing satellite observed sounder radiances from the AIRS, AMSU or MODIS instruments to form spatial-temporal grids for
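
    The on-demand spatial gridding described above amounts to binning point radiances into a regular latitude/longitude grid and averaging within each cell; a compact numpy sketch of that compositing step (the grid resolution and sample values are illustrative):

```python
import numpy as np

def grid_radiances(lats, lons, radiances, res_deg=1.0):
    """Average point radiances onto a regular lat/lon grid; NaN where a cell is empty."""
    n_lat, n_lon = int(180 / res_deg), int(360 / res_deg)
    total = np.zeros((n_lat, n_lon))
    count = np.zeros((n_lat, n_lon))
    i = np.clip(((lats + 90.0) / res_deg).astype(int), 0, n_lat - 1)
    j = np.clip(((lons + 180.0) / res_deg).astype(int), 0, n_lon - 1)
    np.add.at(total, (i, j), radiances)   # accumulate observations per cell
    np.add.at(count, (i, j), 1)
    with np.errstate(invalid="ignore"):
        return np.where(count > 0, total / count, np.nan)

lats = np.array([10.2, 10.7, -33.1]); lons = np.array([100.1, 100.4, 18.9])
grid = grid_radiances(lats, lons, np.array([250.0, 252.0, 261.0]))
print(np.nanmean(grid))
```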

  3. Operation modes of a hydro-generator as a part of the inverter micro hydropower plant

    Science.gov (United States)

    Lukutin, B. V.; Shandarova, E. B.; Matukhin, D. L.; Makarova, A. F.; Fuks, I. L.

    2016-04-01

    The paper addresses the problem of selecting power equipment for a stand-alone inverter micro hydropower plant, in particular the hydro-generator, and evaluates its operation modes. Numerical experiments covered the calculation of the operating modes of hydroelectric units of the same type but of various nominal power, supplying the consumer according to an unchanged electric load curve. The studies yielded requirements for the hydro-turbine and the synchronous generator in terms of speed range and installed capacity, depending on the load curve. The possibility of using general industrial hydroelectric units with a nominal power equal to half the maximum capacity of a typical daily load curve in rural areas was shown.

  4. Application of UG Secondary Development Technology in the Development of a Web Parts Library System

    Institute of Scientific and Technical Information of China (English)

    何丽; 孙文磊; 王宏伟

    2011-01-01

    The secondary development of UG NX 6.0 with the C# programming language was researched and, combined with ASP.NET technology, several important functions of the Web parts library system were realized: online dynamic browsing of the three-dimensional graphics of parts, extraction of part expressions and attribute information from part files, and online parametric driving of parts. The methods and processes used to realize the above functions are introduced, and an applied example of a Web parts library system based on ASP.NET is given.

  5. WEB LOG EXPLORER – CONTROL OF MULTIDIMENSIONAL DYNAMICS OF WEB PAGES

    Directory of Open Access Journals (Sweden)

    Mislav Šimunić

    2012-07-01

    Full Text Available Demand markets dictate and pose increasingly more requirements to the supply market that are not easily satisfied. The supply market, presenting its web pages to the demand market, should find the best and quickest ways to respond promptly to the changes dictated by the demand market. The question is how to do that in the most efficient and quickest way. The data on the usage of the web pages of a specific web site are recorded in a log file. The data in a log file are stochastic and unordered and require systematic monitoring, categorization, analysis, and weighting. From the data processed in this way, it is necessary to single out and sort the data by their importance, which would be a basis for a continuous generation of dynamics/changes to the web site pages in line with the criterion chosen. To perform those tasks successfully, a new software solution is required. For that purpose, the authors have developed the first version of the WLE (WebLogExplorer) software solution, which is actually a realization of web page multidimensionality and of the web site as a whole. The WebLogExplorer enables statistical and semantic analysis of a log file and, on the basis thereof, multidimensional control of the web page dynamics. The experimental part of the work was done within the web site of HTZ (Croatian National Tourist Board), the main portal of the global tourist supply in the Republic of Croatia (on average, the daily log consists of c. 600,000 entries, the average size of the log file is 127 Mb, and there are c. 7000-8000 daily visitors on the web site).

  6. Exploring the academic invisible web

    OpenAIRE

    Lewandowski, Dirk; Mayr, Philipp

    2006-01-01

    Purpose: To provide a critical review of Bergman’s 2001 study on the Deep Web. In addition, we bring a new concept into the discussion, the Academic Invisible Web (AIW). We define the Academic Invisible Web as consisting of all databases and collections relevant to academia but not searchable by the general-purpose internet search engines. Indexing this part of the Invisible Web is central to scientific search engines. We provide an overview of approaches followed thus far. Design/methodol...

  7. Thermoelectric Generators for Automotive Waste Heat Recovery Systems Part I: Numerical Modeling and Baseline Model Analysis

    Science.gov (United States)

    Kumar, Sumeet; Heister, Stephen D.; Xu, Xianfan; Salvador, James R.; Meisner, Gregory P.

    2013-04-01

    A numerical model has been developed to simulate coupled thermal and electrical energy transfer processes in a thermoelectric generator (TEG) designed for automotive waste heat recovery systems. This model is capable of computing the overall heat transferred, the electrical power output, and the associated pressure drop for given inlet conditions of the exhaust gas and the available TEG volume. Multiple-filled skutterudites and conventional bismuth telluride are considered for thermoelectric modules (TEMs) for conversion of waste heat from exhaust into usable electrical power. Heat transfer between the hot exhaust gas and the hot side of the TEMs is enhanced with the use of a plate-fin heat exchanger integrated within the TEG and using liquid coolant on the cold side. The TEG is discretized along the exhaust flow direction using a finite-volume method. Each control volume is modeled as a thermal resistance network which consists of integrated submodels including a heat exchanger and a thermoelectric device. The pressure drop along the TEG is calculated using standard pressure loss correlations and viscous drag models. The model is validated to preserve global energy balances and is applied to analyze a prototype TEG with data provided by General Motors. Detailed results are provided for local and global heat transfer and electric power generation. In the companion paper, the model is then applied to consider various TEG topologies using skutterudite and bismuth telluride TEMs.
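
    The coupled model is described above at the level of its structure; a strongly simplified marching sketch of a control-volume TEG model in that spirit (the effectiveness, conversion efficiency and inlet conditions are invented numbers, not the paper's values):

```python
# Simplified control-volume march along the exhaust flow direction.
# Each CV extracts heat via an effectiveness model and converts a fraction to power.

def teg_march(t_exhaust_in, t_coolant, m_dot, cp, n_cv=20,
              effectiveness=0.05, te_efficiency=0.04):
    """Returns exhaust outlet temperature (K), total heat recovered (W), total power (W)."""
    t_gas = t_exhaust_in
    q_total = p_total = 0.0
    for _ in range(n_cv):
        q_cv = effectiveness * m_dot * cp * (t_gas - t_coolant)  # heat into the hot side
        p_cv = te_efficiency * q_cv                              # thermoelectric conversion
        t_gas -= q_cv / (m_dot * cp)                             # gas cools along the flow
        q_total += q_cv
        p_total += p_cv
    return t_gas, q_total, p_total

# Illustrative inlet conditions: 550 C exhaust, 90 C coolant, 30 g/s, cp ~ 1100 J/(kg K).
print(teg_march(t_exhaust_in=823.0, t_coolant=363.0, m_dot=0.03, cp=1100.0))
```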

  8. Cortical hyperpolarization-activated depolarizing current takes part in the generation of focal paroxysmal activities

    Science.gov (United States)

    Timofeev, Igor; Bazhenov, Maxim; Sejnowski, Terrence; Steriade, Mircea

    2002-01-01

    During paroxysmal neocortical oscillations, sudden depolarization leading to the next cycle occurs when the majority of cortical neurons are hyperpolarized. Both the Ca2+-dependent K+ currents (IK(Ca)) and disfacilitation play critical roles in the generation of hyperpolarizing potentials. In vivo experiments and computational models are used here to investigate whether the hyperpolarization-activated depolarizing current (Ih) in cortical neurons also contributes to the generation of paroxysmal onsets. Hyperpolarizing current pulses revealed a depolarizing sag in ≈20% of cortical neurons. Intracellular recordings from glial cells indirectly indicated an increase in extracellular potassium concentration ([K+]o) during paroxysmal activities, leading to a positive shift in the reversal potential of K+-mediated currents, including Ih. In the paroxysmal neocortex, ≈20% of neurons show repolarizing potentials originating from hyperpolarizations associated with depth-electroencephalogram positive waves of spike-wave complexes. The onset of these repolarizing potentials corresponds to maximal [K+]o as estimated from dual simultaneous impalements from neurons and glial cells. Computational models showed how, after the increased [K+]o, the interplay between Ih, IK(Ca), and a persistent Na+ current, INa(P), could organize paroxysmal oscillations at a frequency of 2–3 Hz. PMID:12089324

  9. Potential of Power Generation from Biogas. Part II: Municipal Solid Waste

    Directory of Open Access Journals (Sweden)

    Vera-Romero Iván

    2015-07-01

    Full Text Available The objective of this work is to estimate the amount of biogas that could be obtained from the anaerobic decomposition of the organic fraction of the municipal solid waste (MSW) disposed of in a sanitary landfill, capturing it and using it to generate electricity for consumption in the Ciénega Region of Chapala in the state of Michoacán, México. To estimate the biogas captured, the Mexican Model of Biogas version 2.0 was used, assuming waste reception for 11 years and a project life of 21 years. For the analysis of power generation, an average cost for schedule rate 5-A from the CFE for public service was used. Four possible scenarios were evaluated: optimal, intermediate optimal, intermediate pessimistic and pessimistic, varying characteristics such as adequate site handling, fire presence, coverage and leachate, among others. Each of the scenarios economically justifies the construction of an inter-municipal landfill, obtaining substantial long-term economic benefits (26.5×10⁶ USD, 22.8×10⁶, 17.9×10⁶ and 11.7×10⁶, respectively), while contributing to climate change mitigation and disease prevention.
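
    The Mexican Model of Biogas belongs to the family of first-order-decay landfill-gas models; a generic sketch of that decay formulation (the k, L0 and tonnage values below are placeholders, not the model's calibrated parameters for the region):

```python
import math

def biogas_flow(annual_waste_mg, k=0.05, L0=60.0, years=21):
    """Generic first-order-decay landfill gas model (LandGEM-style).
    annual_waste_mg: Mg of MSW disposed per year; k: decay rate (1/yr);
    L0: methane yield (m^3 CH4 per Mg of waste).
    Returns the estimated CH4 flow (m^3/yr) for each project year."""
    flows = []
    for t in range(1, years + 1):
        q = 0.0
        for age, mass in enumerate(annual_waste_mg[:t]):
            q += k * L0 * mass * math.exp(-k * (t - age - 1))
        flows.append(q)
    return flows

waste = [100_000] * 11          # 11 years of disposal, then closure
print([round(q) for q in biogas_flow(waste)])
```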

  10. Performance Based Novel Techniques for Semantic Web Mining

    Directory of Open Access Journals (Sweden)

    Mahendra Thakur

    2012-01-01

    Full Text Available The explosive growth in the size and use of the World Wide Web continuously creates new challenges and needs. The need to predict users' preferences in order to expedite and improve browsing through a site can be addressed by personalizing the website. Most research efforts in web personalization correspond to the evolution of extensive research in web usage mining, i.e., the exploitation of the navigational patterns of the web site's visitors. When a personalization system relies solely on usage-based results, however, valuable information conceptually related to what is finally recommended may be missed. Moreover, the structural properties of the web site are often disregarded. In this paper, we propose novel techniques that use the content semantics and the structural properties of a web site in order to improve the effectiveness of web personalization. In the first part of our work we present a Semantic Web Personalization system that integrates usage data with content semantics, expressed in ontology terms, in order to compute semantically enhanced navigational patterns and effectively generate useful recommendations. To the best of our knowledge, our proposed technique is the only semantic web personalization system that may be used by non-semantic web sites. In the second part of our work, we present a novel approach for enhancing the quality of recommendations based on the underlying structure of a web site. We introduce UPR (Usage-based PageRank), a PageRank-style algorithm that relies on the recorded usage data and link analysis techniques. Overall, we demonstrate that our proposed hybrid personalization framework results in more objective and representative predictions than existing techniques.
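
    The abstract does not give UPR's exact weighting; the sketch below shows the general idea of a PageRank iteration whose transition probabilities are biased by recorded usage counts (the link graph, counts, and add-one smoothing are invented for illustration):

```python
def usage_pagerank(links, usage, d=0.85, iters=50):
    """links: {page: [linked pages]}; usage: {(src, dst): click count}."""
    pages = list(links)
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iters):
        new = {p: (1 - d) / len(pages) for p in pages}
        for src, outs in links.items():
            if not outs:
                continue
            # Distribute rank proportionally to recorded usage, not uniformly.
            total = sum(usage.get((src, o), 0) + 1 for o in outs)  # +1 smoothing
            for o in outs:
                w = (usage.get((src, o), 0) + 1) / total
                new[o] += d * rank[src] * w
        rank = new
    return rank

links = {"home": ["a", "b"], "a": ["b"], "b": ["home"]}
usage = {("home", "a"): 30, ("home", "b"): 5}
print(usage_pagerank(links, usage))
```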

  11. Performance of Generating Plant: Managing the Changes. Part 4: Markets and Risk Management Strategy

    Energy Technology Data Exchange (ETDEWEB)

    Moss, Terry; Loedolff, Gerhard; Griffin, Rob; Kydd, Robert; Micali, Vince [Eskom (South Africa)

    2008-05-15

    The WEC Committee on the Performance of Generating Plant (PGP) has been collecting and analysing power plant performance statistics worldwide for more than 30 years and has produced regular reports, which include examples of advanced techniques and methods for improving power plant performance through benchmarking. A series of reports from the various working groups was issued in 2008. This reference presents the results of Working Group 4 (WG4). WG4 will monitor the development of power markets, in particular from the market risk management point of view, including operational risks. It will assess various risk management strategies used by market players around the world and develop recommendations for a wider deployment of successful strategies. The report covers the project approach and outcomes.

  12. Generation and characterization of polyclonal antibody against part of immunoglobulin constant heavy υ chain of goose.

    Science.gov (United States)

    Zhao, Panpan; Guo, Yongli; Ma, Bo; Xing, Mingwei; Wang, Junwei

    2014-08-01

    Immunoglobulin Y (abbreviated as IgY) is the major antibody in the blood of birds, reptiles, and lungfish. IgY consists of two light (λ) and two heavy (υ) chains. In the present study, a polyclonal antibody against the IgY Fc region was generated and evaluated. rIgYCυ3/Cυ4 was expressed in Escherichia coli, purified, and used to raise a polyclonal antibody in rabbit. High-affinity antisera were obtained, which detected the antigen at a dilution of 1:204,800 in ELISA. The antibody specifically recognizes both rIgYCυ3/Cυ4 and native IgY in Western blot analysis. Furthermore, with the anti-GoIgYFc polyclonal antibody it was possible to detect the serum of Grus japonensis, immunoglobulin in chicken, duck, turkey, and silkie samples, and the dynamic changes of serum GoIgY after immunization with GPV-VP3 virus-like particles (GPV-VP3-VLPs). These results suggest that the antibody is valuable for investigating the biochemical properties and biological functions of GoIgY.

  13. Legal Issues Facing User-generated Content in Web 2.0

    Institute of Scientific and Technical Information of China (English)

    龚立群; 方洁

    2012-01-01

    The rapid growth of user-generated content (UGC) provides a great opportunity for average users and commercial web sites, but it also brings a series of legal issues. This paper first reviews research on the legal issues of UGC in China and abroad; it then analyzes the legal issues UGC raises, including intellectual property infringement, privacy violations, hate speech, libel, and online pornography, as well as the legal, regulatory, and technical shortcomings exposed in efforts to govern UGC; finally, countermeasures and suggestions are proposed from the perspective of the parties involved.

  14. Burial history and kinetic modeling for hydrocarbon generation, Part II: Applying the Galo model to Saharan basins

    Energy Technology Data Exchange (ETDEWEB)

    Markhous, M.; Galushkin, Y. [Moscow State Univ. (Russian Federation); Lopatin, N. [Geosystems Institute, Moscow (Russian Federation)

    1997-10-01

    The GALO basin evolution model described in Makhous et al. is applied to evaluate hydrocarbon generation and migration histories in several Saharan basins. Three basins, the Oued el-Mya, Ghadames, and Illizi, located in the central and eastern parts of the Saharan platform, are investigated in detail. The Ahnet, Mouydir, Timimoun, Reggane, and other basins located in the southern and western parts of the platform are also studied. The modeling results, combined with geochemical data, are used in a synthesis of the regional framework. The thermal gradients in the Ghadames and Illizi basins are greater than those in the Oued el-Mya basin. This difference is attributed to differences in sedimentation and subsidence rates, to less Hercynian erosion, and to fewer occurrences of evaporites in the Akfadou region of the Ghadames basin and in the Mereksen region of the Illizi basin. In the southern and western parts of the Illizi province, the major subsidence occurred before the Hercynian uplift. The very moderate Hercynian uplift in the Ghadames and Illizi basins did not involve a significant decrease in temperatures, and organic matter maturation continued, though at slower rates. As a result, the realization of hydrocarbon potential appears to be higher than would be expected. Favorable traps are located near subsided areas where the source shales (particularly the Devonian) were not subjected to uplift and erosion. In this respect, the eastern Sahara, including the Ghadames and Illizi basins, is a favored province. Analysis of the distribution of present-day temperatures and paleotemperatures in the Paleozoic sediments of the Triassic province (Oued el-Mya, Ghadames, Trias, and north Illizi basins), combined with effective source rock occurrences, shows that favorable conditions for hydrocarbon generation during the Paleozoic occurred essentially in the southern and southwestern parts of the province.

  15. Using Semantic Web Technologies to Collaboratively Collect and Share User-Generated Content in Order to Enrich the Presentation of Bibliographic Records–Development of a Prototype Based on RDF, D2RQ, Jena, SPARQL and WorldCat’s FRBRization Web Service

    Directory of Open Access Journals (Sweden)

    Ragnhild Holgersen

    2012-06-01

    Full Text Available In this article we present a prototype of a semantic web-based framework for collecting and sharing user-generated content (reviews, ratings, tags, etc.) across different libraries in order to enrich the presentation of bibliographic records. The user-generated data is remodeled into RDF, utilizing established linked data ontologies. This is done in a semi-automatic manner using the Jena and D2RQ toolkits; for the remodeling, a SPARQL CONSTRUCT statement is tailored for each data source. In the data source used in our prototype, user-generated content is linked to the relevant books via their ISBNs. By remodeling the data according to the FRBR model, and expanding the RDF graph with data returned by WorldCat's FRBRization web service, we are able to greatly increase the number of entry points to each book. We make the social content available through a RESTful web service that takes an ISBN as a parameter and returns a graph of all user-generated data registered to any edition of the book in question, in RDF/XML format. Libraries using our framework would thus be able to present relevant social content in association with bibliographic records, even if they hold a different version of a book than the one originally accessed by users. Finally, we connect our RDF graph to the linked open data cloud through Talis' openlibrary.org SPARQL endpoint.
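
    A minimal sketch of the per-source remodeling step, assuming rdflib is installed (the ex: input vocabulary and the tiny Turtle snippet are invented; rev: and dc: merely stand in for the "established linked data ontologies"):

```python
from rdflib import Graph

# Toy source data: one review keyed to a book by ISBN (invented).
src = Graph()
src.parse(data="""
@prefix ex: <http://example.org/> .
ex:review1 ex:isbn "9780140449136" ; ex:text "A classic." ; ex:stars 5 .
""", format="turtle")

# One SPARQL CONSTRUCT is tailored per data source, as in the article.
q = """
PREFIX ex:  <http://example.org/>
PREFIX rev: <http://purl.org/stuff/rev#>
PREFIX dc:  <http://purl.org/dc/terms/>
CONSTRUCT {
  ?r a rev:Review ; rev:text ?text ; rev:rating ?stars ; dc:identifier ?isbn .
}
WHERE {
  ?r ex:isbn ?isbn ; ex:text ?text ; ex:stars ?stars .
}
"""
out = Graph()
for triple in src.query(q):   # a CONSTRUCT result iterates as triples
    out.add(triple)
print(out.serialize(format="turtle"))
```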

  16. Generations.

    Science.gov (United States)

    Chambers, David W

    2005-01-01

    Groups naturally promote their strengths and prefer values and rules that give them an identity and an advantage. This shows up as generational tensions across cohorts who share common experiences, including common elders. Dramatic cultural events in America since 1925 can help create an understanding of the differing value structures of the Silents, the Boomers, Gen Xers, and the Millennials. Differences in how these generations see motivation and values, fundamental reality, relations with others, and work are presented, as are some applications of these differences to the dental profession.

  17. SLITHER: a web server for generating contiguous conformations of substrate molecules entering into deep active sites of proteins or migrating through channels in membrane transporters.

    Science.gov (United States)

    Lee, Po-Hsien; Kuo, Kuei-Ling; Chu, Pei-Ying; Liu, Eric M; Lin, Jung-Hsin

    2009-07-01

    Many proteins use a long channel to guide substrate or ligand molecules into well-defined active sites for catalytic reactions or for switching molecular states. In addition, substrates of membrane transporters can migrate to the other side of a cellular compartment by means of certain selective mechanisms. SLITHER (http://bioinfo.mc.ntu.edu.tw/slither/ or http://slither.rcas.sinica.edu.tw/) is a web server that can generate contiguous conformations of a molecule along a curved tunnel inside a protein, together with the binding free energy profile along the predicted channel pathway. SLITHER adopts an iterative docking scheme combined with a puddle-skimming procedure, i.e., repeatedly elevating the potential energies of the identified global minima, to determine the contiguous binding modes of substrates inside the protein. In contrast to programs that are widely used to determine the geometric dimensions of ion channels, SLITHER can be applied to predict whether a substrate molecule can crawl through an inner channel or half-channel of a protein across surmountable energy barriers. Besides, SLITHER also provides a list of the pore-facing residues, which can be directly compared with those implicated in many genetic diseases. Finally, the adjacent binding poses determined by SLITHER can also be used for fragment-based drug design.
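
    The puddle-skimming idea is easy to illustrate in one dimension (the real server works on 3-D protein channels; this toy only shows the principle of repeatedly taking and then raising the global minimum so that successive minima trace contiguous positions):

```python
def puddle_skim(energies, n_poses, penalty=5.0):
    """Return positions of successive global minima along an energy profile."""
    e = list(energies)
    poses = []
    for _ in range(n_poses):
        i = min(range(len(e)), key=e.__getitem__)  # current global minimum
        poses.append(i)
        e[i] += penalty                            # "fill the puddle"
    return sorted(poses)

profile = [3.0, 1.0, 2.5, 0.5, 2.0, 1.5, 4.0]      # assumed energy profile
print(puddle_skim(profile, n_poses=4))             # -> [1, 3, 4, 5]
```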

  18. OneWeb: web content adaptation platform based on W3C Mobile Web Initiative guidelines

    Directory of Open Access Journals (Sweden)

    Francisco O. Martínez P.

    2011-01-01

    Full Text Available Restrictions regarding navigability and user-friendliness are the main challenges the Mobile Web faces to gain worldwide acceptance. The W3C has recently developed the Mobile Web Initiative (MWI), a set of directives for the suitable design and presentation of mobile Web interfaces. This article presents the main features and functional modules of OneWeb, an MWI-based Web content adaptation platform developed through the research activities of the Mobile Devices Applications Development Interest Group (W@PColombia), part of the Universidad del Cauca's Telematics Engineering Group. Some performance measurement results and a comparison with other Web content adaptation platforms are presented. Tests have shown suitable response times for Mobile Web environments; MWI guidelines were applied to over twenty Web pages selected for testing purposes.

  19. Browser-based Analysis of Web Framework Applications

    CERN Document Server

    Kersten, Benjamin; 10.4204/EPTCS.35.5

    2010-01-01

    Although web applications evolved to mature solutions providing sophisticated user experience, they also became complex for the same reason. Complexity primarily affects the server-side generation of dynamic pages, as they are aggregated from multiple sources and there are many possible processing paths depending on parameters. Browser-based tests are an adequate instrument to detect errors within generated web pages while treating the server-side process and path complexity as a black box. However, these tests do not detect the cause of an error, which has to be located manually instead. This paper proposes generating metadata on the paths and parts involved during server-side processing to facilitate backtracking the origins of detected errors at development time. While there are several possible points of interest to observe for backtracking, this paper focuses on the user interface components of web frameworks.

  20. Browser-based Analysis of Web Framework Applications

    Directory of Open Access Journals (Sweden)

    Benjamin Kersten

    2010-09-01

    Full Text Available Although web applications evolved to mature solutions providing sophisticated user experience, they also became complex for the same reason. Complexity primarily affects the server-side generation of dynamic pages, as they are aggregated from multiple sources and there are many possible processing paths depending on parameters. Browser-based tests are an adequate instrument to detect errors within generated web pages while treating the server-side process and path complexity as a black box. However, these tests do not detect the cause of an error, which has to be located manually instead. This paper proposes generating metadata on the paths and parts involved during server-side processing to facilitate backtracking the origins of detected errors at development time. While there are several possible points of interest to observe for backtracking, this paper focuses on the user interface components of web frameworks.
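
    One conceivable realization of the proposed metadata generation (purely illustrative; render_component and the data-origin attribute are hypothetical, not the paper's implementation): wrap a framework's component rendering so every fragment carries its server-side origin, which a failing browser-based test can then report.

```python
def render_component(name, template_path, body):
    """Embed the processing path as a comment plus a data attribute."""
    return (f"<!-- origin: {template_path}#{name} -->"
            f'<div data-origin="{template_path}#{name}">{body}</div>')

page = "".join([
    render_component("header", "layout/base.tpl", "<h1>Shop</h1>"),
    render_component("cart", "shop/cart.tpl", "<ul>...</ul>"),
])
# A failing browser test can now map a broken element back to shop/cart.tpl.
print(page)
```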

  1. Implementing reliable Web services

    OpenAIRE

    Koskipää, Otto

    2012-01-01

    Web services are a common, standard way to implement communication between information systems and to provide documented interfaces. Web services usually use SOAP because it is a widespread, well-documented and widely used standard. The SOAP standard defines a message structure, the envelope, that is sent over the internet using HTTP and contains XML data. An important part of the SOAP structure is the exception mechanism, which returns a Fault element in the response. The SOAP Fault is a stan...
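
    A minimal sketch of the Fault mechanism described above, built with only the Python standard library (the fault code and message are invented; the namespace and element names follow SOAP 1.1, where faultcode and faultstring are unqualified children of the Fault element):

```python
import xml.etree.ElementTree as ET

SOAP_NS = "http://schemas.xmlsoap.org/soap/envelope/"
ET.register_namespace("soap", SOAP_NS)

# Build an envelope whose Body carries a Fault, as a server would on error.
envelope = ET.Element(f"{{{SOAP_NS}}}Envelope")
body = ET.SubElement(envelope, f"{{{SOAP_NS}}}Body")
fault = ET.SubElement(body, f"{{{SOAP_NS}}}Fault")
ET.SubElement(fault, "faultcode").text = "soap:Server"
ET.SubElement(fault, "faultstring").text = "Order service unavailable"

print(ET.tostring(envelope, encoding="unicode"))
```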

  2. Automatic Virtual Entity Simulation of Conceptual Design Results - Part I: Symbolic Scheme Generation and Identification

    Institute of Scientific and Technical Information of China (English)

    WANG Yu-xin; LI Yu-tong

    2014-01-01

    The development of new products of high quality, low unit cost, and short lead time to market is a key element for any enterprise to obtain a competitive advantage. To shorten the lead time to market and improve the creativity and performance of the product, this paper presents a rule-based conceptual design approach and a methodology to simulate, automatically and in virtual-entity form, the conceptual design results generated during the conceptual design process. This part of the paper presents a rule-based conceptual design method for generating creative conceptual design schemes of mechanisms, based on Yan's kinematic chain regeneration creative design method. Design rules are adopted to describe the design requirements of the functional characteristics, the connection relationships, and the topological characteristics among mechanisms. Through the graph-based reasoning process, the conceptual design space is greatly expanded, and potential creative conceptual design results are uncovered. By refining the design rules, exhaustive exploration of the solution space is avoided and the preferred conceptual design schemes are generated. Since mechanical, electrical and hydraulic subsystems can be transformed into general mechanisms, the conceptual design method presented in this paper can also be applied to the conceptual design of complex mechatronic systems. A method to identify the resulting conceptual design schemes is then given.

  3. Benchmarking Performance of Web Service Operations

    OpenAIRE

    Zhang, Shuai

    2011-01-01

    Web services are often used for retrieving data from servers providing information of different kinds. A data providing web service operation returns collections of objects for a given set of arguments without any side effects. In this project a web service benchmark (WSBENCH) is developed to simulate the performance of web service calls. Web service operations are specified as SQL statements. The function generator of WSBENCH converts user specified SQL queries into functions and automatical...

  4. Web 2.0 Applications in China

    Science.gov (United States)

    Zhai, Dongsheng; Liu, Chen

    Since 2005, the term Web 2.0 has gradually become a hot topic on the Internet. Web 2.0 lets users create web content, as distinct from webmasters or web coders. Web 2.0 has entered our work and our lives and has become an indispensable part of life on the web. Its applications are already widespread in many fields on the Internet. So far, China has about 137 million netizens [1]; its Web 2.0 market is therefore so attractive that much venture capital is flowing into the Chinese Web 2.0 market, and there are also many new Web 2.0 companies in China. However, the development of Web 2.0 in China is accompanied by some problems and obstacles. In this paper, we discuss Web 2.0 applications in China, with their current problems and future development trends.

  5. Comprendre le Web caché

    OpenAIRE

    Senellart, Pierre

    2007-01-01

    The hidden Web (also known as deep or invisible Web), that is, the part of the Web not directly accessible through hyperlinks, but through HTML forms or Web services, is of great value, but difficult to exploit. We discuss a process for the fully automatic discovery, syntactic and semantic analysis, and querying of hidden-Web services. We propose first a general architecture that relies on a semi-structured warehouse of imprecise (probabilistic) content. We provide a detailed complexity analy...

  6. Test Cases Reduction and Selection Optimization in Testing Web Services

    Directory of Open Access Journals (Sweden)

    Izzat Alsmadi

    2012-10-01

    Full Text Available Software testing in a web services environment faces different challenges compared with testing in traditional software environments. Regression testing activities are triggered by software changes or evolution. In web services, evolution is not a choice for service clients: they always have to use the current, updated version of the software. In addition, test execution or invocation is expensive in web services, and hence providing algorithms to optimize test case generation and execution is vital. In this environment, we propose several approaches for test case selection in web services regression testing. Testing in this new environment should evolve to be included as part of the service contract. Service providers should provide data or usage sessions that can help service clients reduce testing expenses by optimizing the selected and executed test cases.
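
    The abstract does not spell out the selection algorithms; as one plausible reading of "usage sessions help clients reduce testing expenses", here is a generic greedy selection that prioritizes test cases by the usage-weighted operations they cover (all names and numbers are invented):

```python
def select_tests(tests, usage_weight, budget):
    """tests: {test_id: set(operations)}; usage_weight: {operation: calls}."""
    covered, selected = set(), []
    while len(selected) < budget:
        best, best_gain = None, 0.0
        for tid, ops in tests.items():
            if tid in selected:
                continue
            # Gain = usage weight of operations this test would newly cover.
            gain = sum(usage_weight.get(op, 0) for op in ops - covered)
            if gain > best_gain:
                best, best_gain = tid, gain
        if best is None:          # nothing new to cover
            break
        selected.append(best)
        covered |= tests[best]
    return selected

tests = {"t1": {"login", "search"}, "t2": {"search"}, "t3": {"checkout"}}
weights = {"login": 120, "search": 400, "checkout": 60}
print(select_tests(tests, weights, budget=2))   # -> ['t1', 't3']
```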

  7. Query Translation on the Fly in Deep Web Integration

    Institute of Scientific and Technical Information of China (English)

    JIANG Fangjiao; JIA Linlin; MENG Xiaofeng

    2007-01-01

    To facilitate users' access to the desired information, much research has been dedicated to Deep Web (i.e., Web database) integration. We focus on query translation, an important part of Deep Web integration. Our aim is to automatically construct a set of constraint mapping rules so that the system can translate queries from the integrated interface to the Web database interfaces based on them. We construct a concept hierarchy for the attributes of the query interfaces and, in particular, store the synonyms and the types (e.g., Number, Text, etc.) for every concept. At the same time, we construct data hierarchies for some concepts where necessary. Then we present an algorithm to generate the constraint mapping rules based on these hierarchies. The approach scales well for such applications and, being domain independent, can easily be extended from one domain to another. Experimental results show its effectiveness and efficiency.

  8. Electronic apex locator: A comprehensive literature review - Part I: Different generations, comparison with other techniques and different usages

    Directory of Open Access Journals (Sweden)

    Hamid Mosleh

    2014-01-01

    Full Text Available Introduction: To compare electronic apex locators (EALs) with other root canal length determination techniques and to evaluate other uses of these devices. Materials and Methods: "Tooth apex," "Dental instrument," "Odontometry," "Electronic medical," and "Electronic apex locator" were searched as primary identifiers via the Medline/PubMed, Cochrane Library, and Scopus databases up to 30 July 2013. Original articles that fulfilled the inclusion criteria were selected and reviewed. Results: Out of 402 relevant studies, 183 were selected based on the inclusion criteria; 108 of these are presented in this part. Under the same conditions, no significant differences could be seen between different EALs of one generation. The application of EALs can result in lower patient radiation exposure, more accurate diagnosis of fractures, fewer perforations, and better retreatment. Conclusions: EALs were more accurate than other techniques in root canal length determination.

  9. Collaborative web hosting challenges and research directions

    CERN Document Server

    Ahmed, Reaz

    2014-01-01

    This brief presents a peer-to-peer (P2P) web-hosting infrastructure (named pWeb) that can transform networked, home-entertainment devices into lightweight collaborating Web servers for persistently storing and serving multimedia and web content. The issues addressed include ensuring content availability, Plexus routing and indexing, naming schemes, web ID, collaborative web search, network architecture and content indexing. In pWeb, user-generated voluminous multimedia content is proactively uploaded to a nearby network location (preferably within the same LAN or at least, within the same ISP)

  10. Engineering Semantic Web Applications by Using Object-Oriented Paradigm

    CERN Document Server

    Farooq, Amjad; Shah, Abad

    2010-01-01

    Web information resources are growing explosively in number and volume, and retrieving relevant data from the web has become difficult and time-consuming. The Semantic Web envisions that web resources should be developed in a machine-processable way in order to handle irrelevancy and manual-processing problems. The Semantic Web is an extension of the current web in which web resources are equipped with formal semantics about their interpretation by machines. These web resources are usually contained in web applications and systems, and their formal semantics are normally represented in the form of web ontologies. In this research paper, an object-oriented design methodology (OODM) is upgraded for developing semantic web applications. OODM was developed for designing web applications for the current web. It is good enough to develop web applications and provides a systematic approach to their development, but it is not helpful in generating machine-processable...

  11. An Investigation of the Acid Rock Drainage Generation from the Road Cut Slope in the Middle Part of South Korea

    Science.gov (United States)

    Ji, S.; Cheong, Y.; Yim, G.

    2006-05-01

    To examine Acid Rock Drainage (ARD) generation from a road cut slope, a prediction study including Acid-Base Accounting (ABA) and Net Acid Generation (NAG) tests was performed on 20 road cut rock samples at a new highway construction site in the middle part of South Korea. The slope is composed of slate and phyllite; it was a pit wall of a former quarry that produced roofing materials. pH1:2 and EC1:2 measurements were performed to evaluate free hydrogen ion contents and salts in the samples. The ABA test was performed to estimate the balance between the acid-generating minerals (mainly pyrite) and the acid-neutralizing minerals (mainly carbonates) in the rock samples. Total sulfur was analyzed with a sulfur analyzer, and the maximum potential acidity (MPA, kg H2SO4/t) was then calculated. X-ray diffraction (XRD) analysis was performed to identify the mineral composition of the samples. The acid neutralizing capacity (ANC) test, following Sobek et al. (1978), was performed to estimate the capacity of the samples to neutralize acid originating from the oxidation of sulfide minerals. The Net Acid Producing Potential (NAPP) was calculated from total sulfur (MPA) and ANC. The NAG test was performed on ground samples with 15% hydrogen peroxide, and NAG was then assessed by measuring the pH (NAGpH) of the resulting solution. pH1:2 and EC1:2 ranged from 2.95 to 7.23 and from 17.1 to 3070.0 μS/cm, respectively. The MPA of the samples ranged from 0.0 to 79.9 kg H2SO4/t. XRD analysis found pyrite in most samples; in the sample from a highly weathered dike, goethite was found. Results of the ANC tests indicated values of up to 59.36 kg H2SO4/t. Rock samples could be classified as Potential Acid Forming (PAF) or Non-Acid Forming (NAF) by plotting NAPP versus NAGpH. In this study 17 samples were classified as PAF, meaning that this slope would generate ARD when the rocks react with rain. Two samples were grouped as NAF. By application this ARD prediction
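
    The arithmetic behind these classifications is conventional acid-base accounting (assuming all sulfur occurs as acid-generating sulfide; the factor 30.6 converts wt% S to kg H2SO4/t), with NAPP > 0 together with NAGpH < 4.5 the commonly used screening combination for PAF:

```latex
\mathrm{MPA} = 30.6 \times S_{\mathrm{total}}\ (\mathrm{wt\%}), \qquad
\mathrm{NAPP} = \mathrm{MPA} - \mathrm{ANC}, \qquad
\mathrm{NAPP} > 0 \ \text{and}\ \mathrm{NAG_{pH}} < 4.5 \;\Rightarrow\; \mathrm{PAF}
```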

  12. Use of a Web Forum and an Online Questionnaire in the Detection and Investigation of an Outbreak

    Science.gov (United States)

    Stuart Chester, Tammy L.; Taylor, Marsha; Sandhu, Jat; Forsting, Sara; Ellis, Andrea; Stirling, Rob; Galanis, Eleni

    2011-01-01

    A campylobacteriosis outbreak investigation provides relevant examples of how two web-based technologies were used in an outbreak setting and potential reasons for their usefulness. A web forum aided in outbreak detection and provided contextual insights for hypothesis generation and questionnaire development. An online questionnaire achieved a high response rate and enabled rapid preliminary data analysis that allowed for a targeted environmental investigation. The usefulness of these tools may in part be attributed to the existence of an internet savvy, close-knit community. Given the right population, public health officials should consider web-based technologies, including web fora and online questionnaires as valuable tools in public health investigations. PMID:23569598

  13. Sustainable Materials Management (SMM) Web Academy Webinar: Advancing Sustainable Materials Management: Facts and Figures 2013 - Assessing Trends in Materials Generation, Recycling and Disposal in the United States

    Science.gov (United States)

    This is a webinar page for the Sustainable Management of Materials (SMM) Web Academy webinar titled Let’s WRAP (Wrap Recycling Action Program): Best Practices to Boost Plastic Film Recycling in Your Community

  14. Web Engineering

    Energy Technology Data Exchange (ETDEWEB)

    White, Bebo

    2003-06-23

    Web Engineering is the application of systematic, disciplined and quantifiable approaches to development, operation, and maintenance of Web-based applications. It is both a pro-active approach and a growing collection of theoretical and empirical research in Web application development. This paper gives an overview of Web Engineering by addressing the questions: (a) why is it needed? (b) what is its domain of operation? (c) how does it help and what should it do to improve Web application development? and (d) how should it be incorporated in education and training? The paper discusses the significant differences that exist between Web applications and conventional software, the taxonomy of Web applications, the progress made so far and the research issues and experience of creating a specialization at the master's level. The paper reaches a conclusion that Web Engineering at this stage is a moving target since Web technologies are constantly evolving, making new types of applications possible, which in turn may require innovations in how they are built, deployed and maintained.

  15. Defining What Matters When Preserving Web-Based Personal Digital Collections: Listening to Bloggers

    OpenAIRE

    Ayoung Yoon

    2013-01-01

    User-generated content (UGC) has become a part of personal digital collections on the Web, as such collections often contain personal memories, activities, thoughts and even profiles. With the increase in the creation of personal materials on the Web, the needs for archiving and preserving these materials are increasing, not only for the purpose of developing personal archives but also for the purpose of capturing social memory and tracking human traces in this era. Using both survey and inte...

  16. Predicting web site audience demographics for web advertising targeting using multi-web site clickstream data

    OpenAIRE

    Bock, K.W.; Van den Poel, D.; Manigart, S.

    2009-01-01

    Several recent studies have explored the virtues of behavioral targeting and personalization for online advertising. In this paper, we add to this literature by proposing a cost-effective methodology for the prediction of demographic web site visitor profiles that can be used for web advertising targeting purposes. The methodology involves the transformation of web site visitors’ clickstream patterns to a set of features and the training of Random Forest classifiers that generate predictions ...
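
    A minimal sketch of such a pipeline, assuming scikit-learn (the clickstream-derived features, labels, and data below are invented, and the paper's feature transformation is not reproduced):

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

# Each row: visits to [news, sports, fashion, tech] sections + session length.
X = [[5, 0, 9, 1, 320], [1, 8, 0, 6, 150], [4, 1, 7, 0, 280],
     [0, 9, 1, 7, 130], [6, 0, 8, 2, 300], [1, 7, 0, 5, 170]]
y = [1, 0, 1, 0, 1, 0]   # 1 = predominantly female audience (illustrative)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.33,
                                          random_state=0, stratify=y)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print(roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))
```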

  17. Alternative methodology for assessing part-through-wall cracks in carbon steel bends removed from Point Lepreau Generating Station

    Energy Technology Data Exchange (ETDEWEB)

    Duan Xinjian, E-mail: duanx@aecl.c [Senior Engineer, Reactor Engineering Department, Atomic Energy of Canada Ltd., Mississauga, ON (Canada); Kozluk, Michael J., E-mail: kozlukm@aecl.c [Independent Consultant, Oakville, ON (Canada); Gendron, Tracy [Manager-HTS Materials Integrity, Atomic Energy of Canada Ltd., Chalk River, ON (Canada); Slade, John [Senior Technical Advisor, Point Lepreau Generating Station, Lepreau, NB (Canada)

    2011-03-15

    In April 2008, Point Lepreau Generating Station entered an extended refurbishment outage that will involve the replacement of key reactor components (fuel channels and connecting feeder pipes). Prior to the refurbishment outage, New Brunswick Power Nuclear had been successfully managing intergranular, axial cracking of carbon steel feeder piping that was also experiencing thinning, primarily by an aggressive program of inspection, repair and testing of ex-service material. For the previous three maintenance outages, a probabilistic safety evaluation (PSE) had been used to demonstrate that annual inspection of the highest risk locations maintains the nuclear safety risk from cracking at an acceptably low level. The PSE makes use of the Failure Assessment Diagram (FAD) model to predict the failure of part-through-wall cracks. Burst-pressure testing of two ex-service feeder pipe sections with part-through-wall cracks showed that this FAD model significantly underpredicts the failure pressure measured in the component tests. Use of this FAD model introduces undesirable conservatism into PSE assessments that are used to optimize feeder piping inspection and maintenance plans. This paper describes an alternative finite element approach, which could be used to provide more representative structural models for use in PSE assessments. This alternative approach employs the elasto-plastic large strain finite element formulation; uses representative material properties; considers the spatial microstructural distribution; accounts for the effect of work hardening rate; and models all deformation processes, i.e., uniform deformation, localized necking, and failure initiation and propagation. Excellent pre-test prediction was shown for the burst-pressure test performed in 2006. Although cold-worked feeder bends have reduced fracture toughness compared to the parent straight pipe, post-test metallurgical examinations showed that failure at the

  18. Collection, validation and generation of bitumen fumes for inhalation studies in rats Part 1: Workplace samples and validation criteria.

    Science.gov (United States)

    Preiss, A; Koch, W; Kock, H; Elend, M; Raabe, M; Pohlmann, G

    2006-11-01

    Undertaking a chronic inhalation study on bitumen fume presents a challenge in terms of generating large amounts of representative fume. The objective of the study described in this and the following contributions was to collect sufficient fume and develop a laboratory-generated exposure atmosphere that resembles, as closely as possible, personal exposures seen in workers during road paving operations, for use in chronic inhalation toxicity studies in rats. To achieve this goal, atmospheric workplace samples were collected at road paving work sites both by Shell Global Solutions, Int. (Shell) and by the 'Berufsgenossenschaftliches Institut für Arbeitssicherheit' (BIA, Germany) and compared with bitumen fume condensate samples collected from the head space of hot bitumen storage tanks. Part 1 describes the collection and analysis of personal and static workplace samples. Different sampling methods were also used to allow a comparison of the standard German sampling method with the most common industry method used. Samples were analyzed by Shell, BIA and by the Fraunhofer Institute of Toxicology and Experimental Medicine (Fh-ITEM, Germany) using different methods. Parameters determined were: total particulate matter (TPM), benzene soluble matter (BSM), semi-volatiles (SV), total organic matter (TOM), boiling point distribution (BPD), polycyclic aromatic hydrocarbons (PAHs) and UV fluorescence (UVF). The BPD of personal and static samples had almost identical start and end points, but static samples show a tendency towards an increase in amounts of higher boiling point compounds. Personal samples generally show higher PAH concentrations than comparable static samples. The results of the analysis of personal workplace samples were used to establish validation/acceptance criteria for the bitumen fume condensate sampled from storage tanks for the inhalation study, which is described in a further publication. The criteria involve a range of parameters that can be

  19. Writing for the web composing, coding, and constructing web sites

    CERN Document Server

    Applen, JD

    2013-01-01

    Writing for the Web unites theory, technology, and practice to explore writing and hypertext for website creation. It integrates such key topics as XHTML/CSS coding, writing (prose) for the Web, the rhetorical needs of the audience, theories of hypertext, usability and architecture, and the basics of web site design and technology. Presenting information in digestible parts, this text enables students to write and construct realistic and manageable Web sites with a strong theoretical understanding of how online texts communicate to audiences. Key features of the book

  20. A Study on XML-based Technologies and Next-Generation Web Publishing

    Directory of Open Access Journals (Sweden)

    Sinn-cheng Lin

    1999-12-01

    Full Text Available XML was completed in 1998 by the W3C. This paper focuses on the issues of XML-based Web publishing. First, it describes the fundamentals of electronic documents, analyses the state of Web publishing, and explores the difficulties and bottlenecks of HTML. Furthermore, based on the purposes of document access, integration, delivery, manipulation and display, the paper proposes a 3-tier distributed architecture for document management, which maps the related technologies of the XML family, such as DTD, XML Schema, XML Namespaces, RDF, XLink, DOM, CSS and XSL, to the corresponding tiers. This also reflects the important role XML will play in Web integration. If HTML was the first revolution of the Web, then XML can be viewed as the second generation. XML not only provides a new mechanism for data representation, but also has the potential to extend the Internet beyond information delivery to knowledge management. XML is expected to cut a generous swath across various application fields, such as electronic publishing, electronic commerce, electronic libraries, electronic data exchange, and distance learning.

  1. Web Similarity

    NARCIS (Netherlands)

    Cohen, A.R.; Vitányi, P.M.B.

    2015-01-01

    Normalized web distance (NWD) is a similarity or normalized semantic distance based on the World Wide Web or any other large electronic database, for instance Wikipedia, and a search engine that returns reliable aggregate page counts. For sets of search terms the NWD gives a similarity on a scale fr
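
    For two search terms x and y, with page counts f(x) and f(y), co-occurrence count f(x, y), and N the total (weighted) number of indexed pages, the pairwise NWD takes the same form as the normalized Google distance:

```latex
\mathrm{NWD}(x,y) \;=\;
\frac{\max\{\log f(x),\, \log f(y)\} \;-\; \log f(x,y)}
     {\log N \;-\; \min\{\log f(x),\, \log f(y)\}}
```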

  2. Construction of Community Web Directories based on Web usage Data

    CERN Document Server

    Sandhyarani, Ramancha; Gyani, Jayadev; 10.5121/acij.2012.3205

    2012-01-01

    This paper supports the concept of a community Web directory: a Web directory constructed according to the needs and interests of particular user communities. Furthermore, it presents a complete method for the construction of such directories using web usage data. User community models take the form of thematic hierarchies and are constructed by employing a clustering approach. We applied our methodology to the ODP directory and also to an artificial Web directory, which was generated by clustering Web pages that appear in the access log of an Internet Service Provider. For the discovery of the community models, we introduced a new criterion that combines the a priori thematic informativeness of the Web directory categories with the level of interest observed in the usage data. In this context, we introduced and evaluated a new clustering method. We have tested the methodology using access log files collected from the proxy servers of an Internet Service Provider and provided results that ind...

  3. Web Based VRML Modelling

    NARCIS (Netherlands)

    Kiss, S.

    2001-01-01

    Presents a method to connect VRML (Virtual Reality Modeling Language) and Java components in a Web page using EAI (External Authoring Interface), which makes it possible to interactively generate and edit VRML meshes. The meshes used are based on regular grids, to provide an interaction and modeling

  4. An evaluation on the Web page navigation tools in university library Web sites In Turkey

    OpenAIRE

    Çakmak, Tolga

    2010-01-01

    Web technologies and web pages are the primary tools for the dissemination of information all over the world today. Libraries are also using and adopting these technologies to reach their audiences. Effective use of these technologies is possible with user-centered design. Web pages that have a user-centered design help users find information without getting lost in the page. As part of web pages, navigation systems have a vital role in this context. Effective usage of navigation s...

  5. Automated Web Applications Testing

    Directory of Open Access Journals (Sweden)

    Alexandru Dan CĂPRIŢĂ

    2009-01-01

    Full Text Available Unit tests are a vital part of several software development practices and processes such as Test-First Programming, Extreme Programming and Test-Driven Development. This article briefly presents software quality and testing concepts as well as an introduction to an automated unit testing framework for PHP web-based applications.
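
    The article targets a PHP framework; to keep all examples here in a single language, the same unit-testing idea is sketched with Python's built-in unittest (the function under test is invented):

```python
import unittest

def http_status_message(code):
    """Tiny web-app helper under test (hypothetical)."""
    return {200: "OK", 404: "Not Found", 500: "Server Error"}.get(code, "Unknown")

class TestStatusMessage(unittest.TestCase):
    def test_known_codes(self):
        self.assertEqual(http_status_message(200), "OK")
        self.assertEqual(http_status_message(404), "Not Found")

    def test_unknown_code(self):
        self.assertEqual(http_status_message(418), "Unknown")

if __name__ == "__main__":
    unittest.main()
```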

  6. Introduction to Webometrics Quantitative Web Research for the Social Sciences

    CERN Document Server

    Thelwall, Michael

    2009-01-01

    Webometrics is concerned with measuring aspects of the web: web sites, web pages, parts of web pages, words in web pages, hyperlinks, web search engine results. The importance of the web itself as a communication medium and for hosting an increasingly wide array of documents, from journal articles to holiday brochures, needs no introduction. Given this huge and easily accessible source of information, there are limitless possibilities for measuring or counting on a huge scale (e.g., the number of web sites, the number of web pages, the number of blogs) or on a smaller scale (e.g., the number o

  7. Ramakrishnan: Semantics on the Web

    Data.gov (United States)

    National Aeronautics and Space Administration — It is becoming increasingly clear that the next generation of web search and advertising will rely on a deeper understanding of user intent and task modeling, and a...

  8. Web traffic and firm performance

    DEFF Research Database (Denmark)

    Farooq, Omar; Aguenaou, Samir

    2013-01-01

    Does the traffic generated by firms' websites signal anything to stock market participants? Does higher web traffic translate into the availability of more information and therefore lower agency problems? And if the answers to the above questions are affirmative, does higher web traffic translate into better firm performance? This paper aims to answer these questions by documenting a positive relationship between the extent of web traffic and firm performance in the MENA region during 2010. We argue that higher web traffic lowers agency problems in firms by disseminating more information to stock market participants. Consequently, lower agency problems translate into better performance. Furthermore, we also show that the agency-reducing role of web traffic is more pronounced in regimes where the information environment is already poor. For example, our results show a stronger impact of web-traffic

  9. Efficacy of Standard Versus Enhanced Features in a Web-Based Commercial Weight-Loss Program for Obese Adults, Part 2: Randomized Controlled Trial

    Science.gov (United States)

    Morgan, Philip J; Hutchesson, Melinda J; Callister, Robin

    2013-01-01

    Background Commercial Web-based weight-loss programs are becoming more popular and increasingly refined through the addition of enhanced features, yet few randomized controlled trials (RCTs) have independently and rigorously evaluated the efficacy of these commercial programs or additional features. Objective To determine whether overweight and obese adults randomized to an online weight-loss program with additional support features (enhanced) experienced a greater reduction in body mass index (BMI) and increased usage of program features after 12 and 24 weeks compared to those randomized to a standard online version (basic). Methods An assessor-blinded RCT comparing 301 adults (male: n=125, 41.5%; mean age: 41.9 years, SD 10.2; mean BMI: 32.2 kg/m2, SD 3.9) who were recruited and enrolled offline, and randomly allocated to basic or enhanced versions of a commercially available Web-based weight-loss program for 24 weeks. Results Retention at 24 weeks was greater in the enhanced group versus the basic group (basic 68.5%, enhanced 81.0%; P=.01). In the intention-to-treat analysis of covariance with imputation using last observation carried forward, after 24 weeks both intervention groups had reductions in key outcomes with no difference between groups: BMI (basic mean –1.1 kg/m2, SD 1.5; enhanced mean –1.3 kg/m2, SD 2.0; P=.29), weight (basic mean –3.3 kg, SD 4.7; enhanced mean –4.0 kg, SD 6.2; P=.27), waist circumference (basic mean –3.1 cm, SD 4.6; enhanced mean –4.0 cm, SD 6.2; P=.15), and waist-to-height ratio (basic mean –0.02, SD 0.03; enhanced mean –0.02, SD 0.04, P=.21). The enhanced group logged in more often at both 12 and 24 weeks, respectively (enhanced 12-week mean 34.1, SD 28.1 and 24-week mean 43.1, SD 34.0 vs basic 12-week mean 24.6, SD 25.5 and 24-week mean 31.8, SD 33.9; P=.002). Conclusions The addition of personalized e-feedback in the enhanced program provided limited additional benefits compared to a standard commercial Web

  10. Web Personalization Using Web Mining

    Directory of Open Access Journals (Sweden)

    Ms.Kavita D.Satokar,

    2010-03-01

    Full Text Available The information on the web is growing dramatically, and users must spend a lot of time finding the information they are interested in. Traditional search engines do not give users enough personalized help but instead provide lots of irrelevant information. In this paper, we present a personalized Web search system that helps users get relevant web pages based on their selection from a domain list. Users can thus obtain a set of domains of interest and the corresponding web pages from the system. The system is based on features extracted from hyperlinks, such as anchor terms and URL tokens. Our methodology uses an innovative weighted URL Rank algorithm based on the user's domains of interest and query.

  11. Semantic web services for web databases

    CERN Document Server

    Ouzzani, Mourad

    2011-01-01

    Semantic Web Services for Web Databases introduces an end-to-end framework for querying Web databases using novel Web service querying techniques. This includes a detailed framework for the query infrastructure for Web databases and services. Case studies are covered in the last section of this book. Semantic Web Services For Web Databases is designed for practitioners and researchers focused on service-oriented computing and Web databases.

  12. Smart Style on the Semantic Web

    NARCIS (Netherlands)

    J.R. van Ossenbruggen (Jacco); L. Hardman (Lynda)

    2002-01-01

    Web publishing systems have to take into account a plethora of Web-enabled devices, user preferences and abilities. Technologies generating these presentations will need to be explicitly aware of the context in which the information is being presented. Semantic Web technology can be a

  13. Smart Style on the Semantic Web

    NARCIS (Netherlands)

    Ossenbruggen, J.R. van; Hardman, L.

    2002-01-01

    Web publishing systems have to take into account a plethora of Web-enabled devices, user preferences and abilities. Technologies generating these presentations will need to be explicitly aware of the context in which the information is being presented. Semantic Web technology can be a fundamental pa

  14. The Semantic Web and Educational Technology

    Science.gov (United States)

    Maddux, Cleborne D., Ed.

    2008-01-01

    The "Semantic Web" is an idea proposed by Tim Berners-Lee, the inventor of the "World Wide Web." The topic has been generating a great deal of interest and enthusiasm, and there is a rapidly growing body of literature dealing with it. This article attempts to explain how the Semantic Web would work, and explores short-term and long-term…

  15. Web-Mediated Knowledge Synthesis for Educators

    Science.gov (United States)

    DeSchryver, Michael

    2015-01-01

    Ubiquitous and instant access to information on the Web is challenging what constitutes 21st century literacies. This article explores the notion of Web-mediated knowledge synthesis, an approach to integrating Web-based learning that may result in generative synthesis of ideas. This article describes the skills and strategies that may support…

  16. Towards Development of Web-based Assessment System Based on Semantic Web Technology

    Directory of Open Access Journals (Sweden)

    Hosam Farouk El-Sofany

    2011-01-01

    Full Text Available The assessment process in an educational system is an important and primordial part of its success, assuring the correct transmission of knowledge and ensuring that students work correctly and succeed in acquiring the needed knowledge. In this study, we aim to include Semantic Web technologies in the e-learning process as new components. We use the Semantic Web (SW) to: (1) support the evaluation of open questions in e-learning courses; (2) support the automatic creation of questions and exams; and (3) support the evaluation of exams created by the system. These components should allow for measuring academic performance, providing feedback mechanisms, and improving participative and collaborative ideas. Our goal is to use Semantic Web and wireless technologies to design and implement an assessment system that allows students to take web-based tutorials, quizzes, free exercises, and exams; to download course reviews, previous exams, and their model answers; and to access the system through mobile devices to take quick quizzes and exercises. The system facilitates the automatic generation of balanced, distinct exam sheets that contain different types of questions covering the entire curriculum and are ordered gradually from easy to difficult. The system provides teachers and administrators with several services, such as storing different types of questions, generating exams with specific criteria, and uploading course assignments, exams, and reviews.
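
    A toy sketch of the "balanced exam sheet" generation (the selection criteria, fields, and question bank are assumptions, not the system's actual logic): pick questions per topic to cover the curriculum, then order them from easy to difficult:

```python
import random

def generate_exam(bank, per_topic, seed=None):
    """bank: list of dicts with 'topic', 'difficulty' (1-5), 'text'."""
    rng = random.Random(seed)
    exam = []
    for topic in sorted({q["topic"] for q in bank}):   # cover all topics
        pool = [q for q in bank if q["topic"] == topic]
        exam.extend(rng.sample(pool, min(per_topic, len(pool))))
    exam.sort(key=lambda q: q["difficulty"])           # easy -> difficult
    return exam

bank = [
    {"topic": "RDF", "difficulty": 2, "text": "Define a triple."},
    {"topic": "RDF", "difficulty": 4, "text": "Model a review in RDF."},
    {"topic": "SPARQL", "difficulty": 1, "text": "What does SELECT do?"},
    {"topic": "SPARQL", "difficulty": 5, "text": "Write a federated query."},
]
for q in generate_exam(bank, per_topic=1, seed=42):
    print(q["difficulty"], q["text"])
```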

  17. Sensor web

    Science.gov (United States)

    Delin, Kevin A. (Inventor); Jackson, Shannon P. (Inventor)

    2011-01-01

    A Sensor Web formed of a number of different sensor pods. Each sensor pod includes a clock that is synchronized with a master clock, so that all pods in the Web share a synchronized clock. Synchronization is carried out by first using a coarse synchronization, which takes less power, and subsequently carrying out a fine synchronization of all the pods on the Web. After synchronization, the pods ping their neighbors to determine which pods are listening and responding, and then listen only during the time slots corresponding to the pods that respond.

  18. Final Report on Utilization of TRU TRISO Fuel as Applied to HTR Systems Part II: Prismatic Reactor Cross Section Generation

    Energy Technology Data Exchange (ETDEWEB)

    Vincent Descotes

    2011-03-01

    The deep-burn prismatic high temperature reactor is made up of an annular core loaded with transuranic isotopes and surrounded, in the center and at the periphery, by graphite reflector blocks. This arrangement creates challenges for the neutronics compared with the usual light water reactor calculation schemes. The longer mean free path of neutrons in graphite affects the neutron spectrum deep inside the blocks located next to the reflector. Neutron thermalisation in the graphite leads to two characteristic fission peaks at the inner and outer interfaces as a result of the increased thermal flux seen in those assemblies. Spectral changes are seen in at least half of the fuel blocks adjacent to the reflector. This spectral effect of the reflector may prevent the successful use of the two-step scheme (lattice, then core calculation) typically used for light water reactors. We have been studying the core without control mechanisms to provide input for the development of a complete calculation scheme. To correct the spectrum at the lattice level, we have tried to generate cross-sections from supercell calculations, thus taking into account part of the graphite surrounding the blocks of interest when generating the homogenised cross-sections for the full-core calculation. The full-core calculation has been done with 2 to 295 energy groups to assess whether increasing the number of groups leads to more accurate results. A comparison with a classical single-block model has been made, and both paths were compared to a reference calculation done with MCNP. It is concluded that the agreement with MCNP is better with supercells, but that the single-block model remains quite close if enough groups are kept for the core calculation. Twenty-six groups seem to be a good compromise between time and accuracy. However, some trials with depletion have shown large variations of the isotopic composition across a block next to the reflector. It may imply that at least an in-core depletion for the

  19. Finding pages on the unarchived Web

    NARCIS (Netherlands)

    Kamps, J.; Ben-David, A.; Huurdeman, H.C.; Vries, A.P. de; Samar, T.

    2014-01-01

    Web archives preserve the fast changing Web, yet are highly incomplete due to crawling restrictions, crawling depth and frequency, or restrictive selection policies-most of the Web is unarchived and therefore lost to posterity. In this paper, we propose an approach to recover significant parts of th

  20. IS 37 FORM ON EDH WEB

    CERN Multimedia

    2000-01-01

    To Staff Members in charge of the execution of works The “Issuers” are reminded to fill in - if necessary - the form attached to Safety Instruction 37 when disabling all or part of the system generating a level 3 alarm. Reminder: The request must be completed by the issuer and authorised by the TSO/GLIMOS responsible for the building or area. After completion of the works, the TSO/GLIMOS make sure that the system is recommissioned. Please note that the computerized version of this form is available on the web. The icon can be found on the EDH Web Desktop Homepage. The paper version is still in use. If you have any questions, please contact A. Chouvelon/TIS, tel. 74229.

  1. Web Application Frameworks

    Directory of Open Access Journals (Sweden)

    Maria Cristina ENACHE

    2015-12-01

    Full Text Available As modern browsers become more powerful, with rich features, building full-blown web applications in JavaScript is not only feasible but increasingly popular. Based on trends in the HTTP Archive, deployed JavaScript code size has grown 45% over the course of the year. MVC offers architectural benefits over unstructured JavaScript: it helps you write better organized, and therefore more maintainable, code. This pattern has been used and extensively tested over multiple languages and generations of programmers. It is no coincidence that many of the most popular web programming frameworks also encapsulate MVC principles: Django, Ruby on Rails, CakePHP, Struts, and Laravel.

  2. Evaluating web search engines

    CERN Document Server

    Lewandowski, Dirk

    2011-01-01

    Every month, more than 130 billion queries worldwide are entered into the search boxes of general-purpose web search engines (ComScore, 2010). This enormous number shows that web searching is not only a large business, but also that many people rely on the search engines' results when researching information. A goal of all search engine evaluation efforts is to generate better systems. This goal is of major importance to the search engine vendors who can directly apply evaluation results to develop better ranking algorithms.

  3. Sustainable web ecosystem design

    CERN Document Server

    O'Toole, Greg

    2013-01-01

    This book is about the process of creating web-based systems (i.e., websites, content, etc.) that consider each of the parts, the modules, the organisms - binary or otherwise - that make up a balanced, sustainable web ecosystem. In the current media-rich environment, a website is more than a collection of relative html documents of text and images on a static desktop computer monitor. There is now an unlimited combination of screens, devices, platforms, browsers, locations, versions, users, and exabytes of data with which to interact. Written in a highly approachable, practical style, this boo

  4. Users’ recognition in web using web mining techniques

    Directory of Open Access Journals (Sweden)

    Hamed Ghazanfaripoor

    2013-06-01

    Full Text Available The rapid growth of the web and the lack of structure or an integrated schema create various issues for users trying to access information. All user accesses to web information are saved in the related server log files. These files serve as a resource for finding patterns in users' behavior. Web mining is a subset of data mining and means the mining of related data from the WWW; it is categorized into three parts - web content mining, web structure mining and web usage mining - based on which part of the data is mined. A technique seems necessary that is capable of learning users' interests and, based on those interests, can automatically filter out unrelated content or offer related information to the user in a reasonable amount of time. Web usage mining builds a profile of users in order to recognize them, and it is directly related to web personalization. The primary objective of personalization systems is to provide what users require without asking them explicitly. Formal models, in turn, make it possible to model system behavior; Petri nets and queueing networks are examples of such models for analyzing user behavior on the web. The primary objective of this paper is to present a colored Petri net that models users' interactions in order to offer them a list of recommended pages on the web. Estimating user behavior is applied in cases such as suggesting suitable pages for continuing a browsing session, e-commerce and targeted advertising. The preliminary results indicate that the proposed method improves the accuracy criterion by 8.3% over the static method.

  5. Metaphor Modeling on the Semantic Web

    Science.gov (United States)

    Czejdo, Bogdan D.; Biguenet, Jonathan; Biguenet, John

    Metaphor is a high-level abstract concept that can be an important part of active conceptual modeling. In this paper, we use the extended Unified Modeling Language (UML) for metaphor modeling. We discuss how to create UML diagrams to capture knowledge about metaphors. The metaphor-based processing system on the Semantic Web can support new query/search operations. Such a computer system can be used for a broad spectrum of applications such as predicting surprises (e.g., terrorist attacks) or automatically generating new innovations.

  6. Architecting Secure Web Services using Model Driven Agile Modeling

    Directory of Open Access Journals (Sweden)

    Dr.B.Padmaja Rani,

    2010-09-01

    Full Text Available The importance of software security has become profound, since most attacks on software systems are based on vulnerabilities caused by poorly designed and developed software. Design flaws account for fifty percent of security problems, and risk analysis plays an essential role in solving them. Web Services are an integral part of next-generation Web applications. The development and use of these services is growing at an incredible rate, and so too are the security issues surrounding them. If the history of inter-application communication repeats itself, the ease with which web services architectures publish information about applications across the network is only going to result in more application hacking. At the very least, it is going to put an even greater burden on web architects and developers to design and write secure code. Specifications like WS-Security should be leveraged as security matures beyond firewalls. In this paper, we discuss security architecture design patterns for Service Oriented Web Services. Finally, we validate the approach by implementing a case study, a Service Oriented Web Services application, StockTrader Security, using WS-Security and WS-Secure Conversation.

  7. An Efficient Web Page Ranking for Semantic Web

    Science.gov (United States)

    Chahal, P.; Singh, M.; Kumar, S.

    2014-01-01

    With the enormous amount of information presented on the web, the retrieval of relevant information has become a serious problem and has been a topic of research for the last few years. The most common tools to retrieve information from the web are search engines like Google. Search engines are usually based on keyword searching and indexing of web pages. This approach is not very efficient, as the result-set of web pages obtained includes many irrelevant pages. Sometimes even the entire result-set may contain a lot of pages irrelevant to the user. The next generation of search engines must address this problem. Recently, many semantic web search engines have been developed, like Ontolook and Swoogle, which help in searching meaningful documents presented on the semantic web. In this process the ranking of the retrieved web pages is very crucial. Some attempts have been made at ranking semantic web pages, but the ranking of these semantic web documents is neither satisfactory nor up to users' expectations. In this paper we propose a semantic web based document ranking scheme that relies not only on the keywords but also on the conceptual instances present between the keywords. As a result only the relevant pages will be at the top of the result-set of searched web pages. We explore all relevant relations between the keywords, capturing the user's intention, and then calculate the fraction of these relations on each web page to determine its relevance. We have found that this ranking technique gives better results than the prevailing methods.

  8. Factsheets Web Application

    Energy Technology Data Exchange (ETDEWEB)

    VIGIL,FRANK; REEDER,ROXANA G.

    2000-10-30

    The Factsheets web application was conceived out of the requirement to create, update, publish, and maintain a web site with dynamic research and development (R&D) content. Before creating the site, a requirements discovery process was done in order to accurately capture the purpose and functionality of the site. One of the high priority requirements for the site would be that no specialized training in web page authoring would be necessary. All functions of uploading, creation, and editing of factsheets needed to be accomplished by entering data directly into web form screens generated by the application. Another important requirement of the site was to allow for access to the factsheet web pages and data via the internal Sandia Restricted Network and Sandia Open Network based on the status of the input data. Important to the owners of the web site would be to allow the published factsheets to be accessible to all personnel within the department whether or not the sheets had completed the formal Review and Approval (R&A) process. Once the factsheets had gone through the formal review and approval process, they could then be published both internally and externally based on their individual publication status. An extended requirement and feature of the site would be to provide a keyword search capability to search through the factsheets. Also, since the site currently resides on both the internal and external networks, it would need to be registered with the Sandia search engines in order to allow access to the content of the site by the search engines. To date, all of the above requirements and features have been created and implemented in the Factsheet web application. These have been accomplished by the use of flat text databases, which are discussed in greater detail later in this paper.

  9. Gravitational wave generation by interaction of high power lasers with matter. Part II: Ablation and Piston models

    CERN Document Server

    Kadlecová, Hedvika; Weber, Stefan; Korn, Georg

    2016-01-01

    We analyze theoretical models of gravitational wave generation in the interaction of high intensity lasers with matter, namely the ablation and piston models. We analyze the generated gravitational waves in the linear approximation of gravitational theory. We derive analytical formulas and estimates for the metric perturbations and the radiated power of the generated gravitational waves. Furthermore we investigate the polarization characteristics and the behaviour of test particles in the presence of a gravitational wave, which will be important for detection.

  10. A URL Filtering Generation Algorithm Based on Similarity Degree for Web Crawling

    Institute of Scientific and Technical Information of China (English)

    陈荟慧; 舒云星; 林丽

    2014-01-01

    Web text is an important component of a corpus; however, the time wasted visiting redundant URLs degrades the quality and efficiency of large-scale web crawling. The quality and efficiency of web crawling can be improved by using highly effective URL filtering rules. Because the distribution of files in the virtual directories of a website is uneven, a URL filtering rule generation method is introduced to discover the clustering regions of target files. First, URLs are transformed into regular expressions and divided into groups by clustering identical regular expressions. Then, the similarity degrees between URLs within a group are calculated, and a virtual path tree is constructed from the URLs with higher similarity degrees. Finally, the virtual path tree is used to generate URL filtering rules and classification rules for web crawling. The algorithms for generating the virtual path tree are introduced in detail, and the experimental results for the generated virtual path trees and the filtered URLs are compared under different similarity degree thresholds.
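
    A minimal sketch of the flavor of this approach (not the authors' algorithm): URL path segments are generalized into regular-expression templates, identical templates are grouped, and an anchored filtering rule is emitted for every sufficiently large group. The tokenization, the wildcard choice and the group-size threshold are all illustrative assumptions.

        import re
        from collections import defaultdict
        from urllib.parse import urlparse

        def generalize(url):
            """Turn a URL path into a template: digit runs become wildcards."""
            segments = urlparse(url).path.strip("/").split("/")
            return "/" + "/".join(
                re.sub(r"\d+", r"\\d+", re.escape(seg)) for seg in segments)

        def build_rules(urls, min_group=3):
            """Emit one anchored regex (matched against URL paths) per large group."""
            groups = defaultdict(list)
            for u in urls:
                groups[generalize(u)].append(u)
            return [re.compile("^" + tpl + "$")
                    for tpl, members in groups.items() if len(members) >= min_group]

        rules = build_rules(["http://example.org/corpus/2014/001.html",
                             "http://example.org/corpus/2014/002.html",
                             "http://example.org/corpus/2014/003.html"])
        print([r.pattern for r in rules])   # ['^/corpus/\\d+/\\d+\\.html$']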

  11. On Generation and Development Trend of Web Film Review

    Institute of Scientific and Technical Information of China (English)

    周海英

    2011-01-01

    Web film review is a new form of film criticism in the internet age. Like any other art form, this new cultural force has its own process of emergence and development. The prerequisite for web film review is the development of communication and network technology; the diverse, individualized demands of audiences are its social basis; and the gradual marginalization of traditional film reviews under the impact of the market economy, together with the comprehensive squeezing-out of the written word by the hegemony of new audio-visual media, forms its social and cultural background. Multiple text forms and modes of communication running in parallel, complementing one another and coexisting; the search for balance between the two poles of the technical and the artistic; the entry of commercially operated cultural capital into web film review; and integration with traditional film review - these are the main features and trends in the development of web film review.

  12. Perceptual Objective Listening Quality Assessment (POLQA), The Third Generation ITU-T Standard for End-to-End Speech Quality Measurement : Part II – Perceptual Model

    NARCIS (Netherlands)

    Beerends, J.G.; Schmidmer, C.; Berger, J.; Obermann, M.; Ullman, R.; Pomy, J.; Keyhl, M.

    2013-01-01

    In this and the companion paper Part I, the authors present the Perceptual Objective Listening Quality Assessment (POLQA), the third-generation speech quality measurement algorithm, standardized by the International Telecommunication Union in 2011 as Recommendation P.863. This paper describes the

  13. An Oral History of First-Generation Leaders in Education of Children with Emotional/Behavioral Disorders, Part 1: The Accidental Special Educator

    Science.gov (United States)

    Kaff, Marilyn S.; Teagarden, James M.; Zabel, Robert H.

    2011-01-01

    As the first part of an oral history of education of students with emotional/behavioral disorders, 15 first-generation leaders were asked to relate how they entered the field and to describe their careers, which span the past 35 to 50 years. Their videotaped responses were transcribed and are reported here together with discussion of several…

  14. Studies in hydride generation atomic fluorescence determination of selenium and tellurium. Part 1 — self interference effect in hydrogen telluride generation and the effect of KI

    Science.gov (United States)

    D'Ulivo, A.; Marcucci, K.; Bramanti, E.; Lampugnani, L.; Zamboni, R.

    2000-08-01

    The effects of tetrahydroborate (0.02-1%) and iodide (0-3 M) were investigated in the determination of tellurium and selenium by hydride generation atomic fluorescence spectrometry. The effects of tetrahydroborate and iodide concentration were tested on the shape of calibration curves in the concentration range of 1-1000 ng ml⁻¹ analyte. Reductant deficiency resulted in a moderate sensitivity depression for tellurium but dramatically reduced the useful dynamic range, down to 50 ng ml⁻¹. On the contrary, selenium calibration curves retained a linear character even under conditions generating strong sensitivity depression. The curvature and rollover of tellurium calibration curves have been attributed to a self-interference effect caused by the formation of finely dispersed elemental tellurium. Iodide ions were found to have beneficial or no negative effects on hydrogen telluride generation. On-line addition of iodide to the sample has proved effective in controlling the self-interference effect and allows working under mild reaction conditions. Moreover, it allows good control of the Cu(II) interference and eliminates the Ni(II) and Co(II) interferences. The method has been successfully applied to the determination of tellurium in copper and lead ore certified reference materials.

  15. Process-oriented semantic web search

    CERN Document Server

    Tran, DT

    2011-01-01

    The book is composed of two main parts. The first part is a general study of Semantic Web Search. The second part specifically focuses on the use of semantics throughout the search process, compiling a big picture of Process-oriented Semantic Web Search from different pieces of work that target specific aspects of the process.In particular, this book provides a rigorous account of the concepts and technologies proposed for searching resources and semantic data on the Semantic Web. To collate the various approaches and to better understand what the notion of Semantic Web Search entails, this bo

  16. Adaptive web data extraction policies

    Directory of Open Access Journals (Sweden)

    Provetti, Alessandro

    2008-12-01

    Full Text Available Web data extraction is concerned, among other things, with routine data accessing and downloading from continuously-updated dynamic Web pages. There is an important trade-off between the rate at which the external Web sites are accessed and the computational burden on the accessing client. We address the problem by proposing a predictive model, typical of the Operating Systems literature, of the rate-of-update of each Web source. The presented model has been implemented into a new version of the Dynamo project: a middleware that assists in generating informative RSS feeds out of traditional HTML Web sites. To be effective, i.e., to make RSS feeds timely and informative, and to be scalable, Dynamo needs careful tuning and customization of its polling policies, which are described in detail.
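
    As a minimal sketch of such a polling policy (not Dynamo's actual code), the client can keep an exponentially weighted estimate of each source's update interval and schedule the next poll at a fraction of that estimate; the smoothing factor and polling ratio below are illustrative assumptions.

        import time

        class SourceModel:
            def __init__(self, initial_interval=3600.0, alpha=0.3, ratio=0.5):
                self.est = initial_interval   # estimated seconds between updates
                self.alpha = alpha            # EWMA smoothing factor
                self.ratio = ratio            # poll at this fraction of the estimate
                self.last_change = time.time()

            def observe(self, changed):
                """Call after each poll; `changed` flags whether new content appeared."""
                now = time.time()
                if changed:
                    observed = now - self.last_change
                    self.est = self.alpha * observed + (1 - self.alpha) * self.est
                    self.last_change = now

            def next_poll_delay(self):
                """Seconds to wait before polling this source again."""
                return self.ratio * self.est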

  17. COMPARISON ANALYSIS OF WEB USAGE MINING USING PATTERN RECOGNITION TECHNIQUES

    Directory of Open Access Journals (Sweden)

    Nanhay Singh

    2013-07-01

    Full Text Available Web usage mining is the application of data mining techniques to better serve the needs of web-based applications on a web site. In this paper, we analyze web usage mining by applying pattern recognition techniques to web log data. Pattern recognition is defined as the act of taking in raw data and taking an action based on the 'category' of the pattern. Web usage mining is divided into three parts: preprocessing, pattern discovery and pattern analysis. Further, this paper presents experimental work using web log data. We have taken the web log data from the "NASA" web server, which is analyzed with "Web Log Explorer", a web usage mining tool that plays a vital role in carrying out this work.

  18. Application of Designing Economic Mechanisms to Power Market - Part 2 Characteristic Analysis of Generation Side Power Market

    OpenAIRE

    Zhu, Yonggang; XIE Qingyang; Ying, Liming

    2013-01-01

    The incentive power market may lead to a high information cost if it is not informationally efficient. The paper analyzes the characteristic of the generation side power market mechanism model based on the designing economic mechanisms theory by the three GENCOs (Generation Companies) case. The result of analysis is that the mechanism model has four main features: the informationally efficient which means that the mechanism meets requirements of the observational efficiency, the communication...

  19. Personalized Metaheuristic Clustering Onto Web Documents

    Institute of Scientific and Technical Information of China (English)

    Wookey Lee

    2004-01-01

    Optimal clustering of web documents is known to be a complicated combinatorial optimization problem, and it is hard to develop a generally applicable optimal algorithm. An accelerated simulated annealing algorithm is developed for automatic web document classification. The web document classification problem is addressed as the problem of best describing a match between a web query and a hypothesized web object. The normalized term frequency and inverse document frequency coefficient is used as a measure of the match. Test beds are generated on-line during the search by transforming model web sites. As a result, web sites can be clustered optimally in terms of keyword vectors of the corresponding web documents.

  20. Web Page Watermarking for Tamper-Proof

    Institute of Scientific and Technical Information of China (English)

    2005-01-01

    This paper proposes a watermarking algorithm for tamper-proofing of web pages. For a web page, it generates a watermark consisting of a sequence of Space and Tab characters. The watermark is then embedded into the web page after each word and each line. When a watermarked web page is tampered with, the extracted watermark can detect and locate the modifications to the web page. Besides, the framework of a watermarked Web Server system is given. Compared with traditional digital signature methods, this watermarking method is more transparent in that there is no need to detach the watermark before displaying web pages. The experimental results show that the proposed scheme is an effective tool for tamper-proofing of web pages.
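
    A minimal sketch of the whitespace-encoding idea (the paper's exact embedding and web-server framework differ): watermark bits derived from a digest of the page are encoded as Space (0) and Tab (1) characters and appended invisibly to line ends; tampering changes the digest, so the recomputed watermark no longer matches the extracted one. The digest length and bit layout are illustrative assumptions.

        import hashlib

        def make_watermark(text):
            """Derive 32 watermark bits from the page content, mapped to whitespace."""
            digest = hashlib.sha256(text.encode("utf-8")).digest()
            bits = "".join(f"{byte:08b}" for byte in digest[:4])
            return "".join(" " if b == "0" else "\t" for b in bits)

        def embed(lines):
            """Append watermark chunks to line ends (lines assumed free of trailing blanks)."""
            wm = make_watermark("\n".join(lines))
            step = -(-len(wm) // len(lines))            # ceiling division
            return [line + wm[i * step:(i + 1) * step] for i, line in enumerate(lines)]

        def verify(lines):
            """Recompute the watermark from stripped content and compare with the suffixes."""
            stripped = [ln.rstrip(" \t") for ln in lines]
            found = "".join(ln[len(ln.rstrip(" \t")):] for ln in lines)
            return found == make_watermark("\n".join(stripped))

        page = ["<html>", "<body><p>hello</p></body>", "</html>"]
        assert verify(embed(page))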

  1. XML databases and the semantic web

    CERN Document Server

    Thuraisingham, Bhavani

    2002-01-01

    Efficient access to data, sharing data, extracting information from data, and making use of the information have become urgent needs for today's corporations. With so much data on the Web, managing it with conventional tools is becoming almost impossible. New tools and techniques are necessary to provide interoperability as well as warehousing between multiple data sources and systems, and to extract information from the databases. XML Databases and the Semantic Web focuses on critical and new Web technologies needed for organizations to carry out transactions on the Web, to understand how to use the Web effectively, and to exchange complex documents on the Web. This reference for database administrators, database designers, and Web designers working in tandem with database technologists covers three emerging technologies of significant impact for electronic business: Extensible Markup Language (XML), semi-structured databases, and the semantic Web. The first two parts of the book explore these emerging techn...

  2. Mapping Commercial Web 2.0 Worlds: Towards a New Critical Ontogenesis

    Directory of Open Access Journals (Sweden)

    Kenneth Werbin

    2009-01-01

    Full Text Available At the 2007 International Communication Association Conference, Web 2.0 was highlighted as an emergent topic of research with a keynote panel entitled 'What's so Significant about Social Networking? Web 2.0 and its Critical Potentials'. One of the thought-provoking moments during the panel was the juxtaposition of two very different and, at first, contradictory theoretical approaches to the relationships between Web 2.0 and user-generated content. While Henry Jenkins focused on the democratic potential of online participatory culture as enabling new modes of knowledge production, Tiziana Terranova argued for a post-Marxist perspective on Web 2.0 as a site of cultural colonization and expansion of new forms of capitalization on culture, affect and knowledge. The juxtaposition of these two very different critical approaches did not simply rehash the old divide between cultural theory, particularly active audience theory, and post-Marxist critical theory; rather, this debate over Web 2.0 suggested new possibilities for the synthesis and continued development of both sets of critiques. In other words, the event reinforced our belief that corporate colonization arguments do not provide an entirely adequate model for understanding Web 2.0. After all, commercial Web 2.0 spaces such as Facebook, YouTube and MySpace are important sites of cultural exchange and political discussion, in part because they almost entirely rely on user-generated content to exist.

  3. Semantic Annotations and Querying of Web Data Sources

    Science.gov (United States)

    Hornung, Thomas; May, Wolfgang

    A large part of the Web, actually holding a significant portion of the useful information throughout the Web, consists of views on hidden databases, provided by numerous heterogeneous interfaces that are partly human-oriented via Web forms ("Deep Web"), and partly based on Web Services (only machine accessible). In this paper we present an approach for annotating these sources in a way that makes them citizens of the Semantic Web. We illustrate how queries can be stated in terms of the ontology, and how the annotations are used to select and access appropriate sources and to answer the queries.

  4. Reliable execution based on CPN and skyline optimization for Web service composition.

    Science.gov (United States)

    Chen, Liping; Ha, Weitao; Zhang, Guojun

    2013-01-01

    With the development of SOA, complex problems can be solved by combining available individual services and ordering them to best suit the user's requirements. Web services composition is widely used in business environments. With the features of inherent autonomy and heterogeneity of component web services, it is difficult to predict the behavior of the overall composite service. Therefore, transactional properties and nonfunctional quality of service (QoS) properties are crucial for selecting the web services to take part in the composition. Transactional properties ensure the reliability of the composite Web service, and QoS properties can identify the best candidate web services from a set of functionally equivalent services. In this paper we define a Colored Petri Net (CPN) model which involves transactional properties of web services in the composition process. To ensure reliable and correct execution, unfolding processes of the CPN are followed. The execution of a transactional composite Web service (TCWS) is formalized by CPN properties. To identify the services with the best QoS properties from the candidate service sets formed in the TCSW-CPN, we use skyline computation to retrieve the dominant Web services. This overcomes the significant information loss that results from reducing individual scores to an overall similarity. We evaluate our approach experimentally using both real and synthetically generated datasets.
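
    A minimal sketch of skyline selection over QoS vectors, under the usual dominance definition: service a dominates b if a is at least as good on every attribute and strictly better on at least one. Every attribute below is assumed to be "smaller is better" (e.g. latency, price, failure rate); the candidate services and their numbers are illustrative.

        def dominates(a, b):
            """Pareto dominance for minimization on all attributes."""
            return (all(x <= y for x, y in zip(a, b))
                    and any(x < y for x, y in zip(a, b)))

        def skyline(services):
            """services: name -> QoS tuple; returns the non-dominated names."""
            return [n for n, q in services.items()
                    if not any(dominates(p, q)
                               for m, p in services.items() if m != n)]

        candidates = {                 # (latency ms, price, failure rate)
            "s1": (120, 0.05, 0.02),
            "s2": (200, 0.02, 0.01),
            "s3": (150, 0.06, 0.03),   # dominated by s1
        }
        print(skyline(candidates))     # ['s1', 's2']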

  5. Reliable Execution Based on CPN and Skyline Optimization for Web Service Composition

    Directory of Open Access Journals (Sweden)

    Liping Chen

    2013-01-01

    Full Text Available With the development of SOA, complex problems can be solved by combining available individual services and ordering them to best suit the user's requirements. Web services composition is widely used in business environments. With the features of inherent autonomy and heterogeneity of component web services, it is difficult to predict the behavior of the overall composite service. Therefore, transactional properties and nonfunctional quality of service (QoS) properties are crucial for selecting the web services to take part in the composition. Transactional properties ensure the reliability of the composite Web service, and QoS properties can identify the best candidate web services from a set of functionally equivalent services. In this paper we define a Colored Petri Net (CPN) model which involves transactional properties of web services in the composition process. To ensure reliable and correct execution, unfolding processes of the CPN are followed. The execution of a transactional composite Web service (TCWS) is formalized by CPN properties. To identify the services with the best QoS properties from the candidate service sets formed in the TCSW-CPN, we use skyline computation to retrieve the dominant Web services. This overcomes the significant information loss that results from reducing individual scores to an overall similarity. We evaluate our approach experimentally using both real and synthetically generated datasets.

  6. Synthesis and Characterization of Electroresponsive Materials with Applications In: Part I. Second Harmonic Generation. Part II. Organic-Lanthanide Ion Complexes for Electroluminescence and Optical Amplifiers.

    Science.gov (United States)

    Claude, Charles

    1995-01-01

    Materials for optical waveguides were developed from two different approaches, inorganic-organic composites and soft gel polymers. Inorganic-organic composites were developed from alkoxysilanes and organically modified silanes based on nonlinear optical chromophores. Organically modified silanes based on N-((3′-trialkoxysilyl)propyl)-4-nitroaniline were synthesized and sol-gelled with trimethoxysilane. After a densification process at 190 °C with a corona discharge, the second harmonic of the film was measured with a Nd:YAG laser at a fundamental wavelength of 1064 nm, d33 = 13 pm/V. The decay of the second harmonic was expressed by a stretched bi-exponential equation. The decay time (τ2) was equal to 3374 hours and was comparable to nonlinear optical systems based on epoxy/Disperse Orange 1. The processing temperature of the organically modified silane was limited to 200 °C due to the decomposition of the organic chromophore. Soft gel polymers were synthesized and characterized for the development of optical waveguides with dc-electric-field-assisted phase-matching. Polymers based on 4-nitroaniline-terminated poly(ethylene oxide-co-propylene oxide) were shown to exhibit second harmonic generation that was optically phase-matched in an electrical field. The optical signals were stable and reproducible. Siloxane polymers modified with 1-mercapto-4-nitrobenzene and 1-mercapto-4-methylsulfonylstilbene nonlinear optical chromophores were synthesized. The physical and the linear and nonlinear optical properties of the polymers were characterized. Waveguides were developed from the polymers which were optically phase-matched and had an efficiency of 8.1%. The siloxane polymers exhibited optical phase-matching in an applied electrical field and can be used with a semiconductor laser. Organic lanthanide ion complexes for electroluminescence and optical amplifiers were synthesized and characterized. The complexes were characterized for their thermal and

  7. Emergent web intelligence advanced information retrieval

    CERN Document Server

    Badr, Youakim; Abraham, Ajith; Hassanien, Aboul-Ella

    2010-01-01

    Web Intelligence explores the impact of artificial intelligence and advanced information technologies representing the next generation of Web-based systems, services, and environments, and designing hybrid web systems that serve wired and wireless users more efficiently. Multimedia and XML-based data are produced regularly and in increasing volume in our daily digital activities, and their retrieval must be explored and studied in this emergent web-based era. 'Emergent Web Intelligence: Advanced Information Retrieval' provides reviews of the related cutting-edge technologies and insights. It is v

  8. The Partial Mapping of the Web Graph

    Directory of Open Access Journals (Sweden)

    Kristina Machova

    2009-06-01

    Full Text Available The paper presents an approach to the partial mapping of a web sub-graph. This sub-graph contains the nearest surroundings of an actual web page. Our work deals with acquiring the relevant hyperlinks of a base web site, generating the adjacency matrix, the nearest-distance matrix and the matrix of converted distances of hyperlinks, detecting the compactness of the web representation, and visualizing its graphical representation. The paper introduces the LWP algorithm, a technique for hyperlink filtration. This work attempts to help users with orientation within the web graph.
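
    A minimal sketch of the matrix-building step, assuming the sub-graph is already available as an adjacency list of hyperlinks: the shortest-distance matrix over the pages around a base page is computed by breadth-first search from every node. The toy graph is an illustrative assumption.

        from collections import deque

        def distance_matrix(adjacency):
            """adjacency: page -> iterable of linked pages; returns (pages, distances)."""
            pages = sorted(adjacency)
            index = {p: i for i, p in enumerate(pages)}
            inf = float("inf")
            dist = [[inf] * len(pages) for _ in pages]
            for src in pages:
                row, queue = dist[index[src]], deque([src])
                row[index[src]] = 0
                while queue:                        # plain BFS from src
                    u = queue.popleft()
                    for v in adjacency.get(u, ()):
                        if row[index[v]] == inf:
                            row[index[v]] = row[index[u]] + 1
                            queue.append(v)
            return pages, dist

        web = {"base": ["a", "b"], "a": ["b"], "b": ["base"]}
        pages, dist = distance_matrix(web)
        print(pages, dist)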

  9. Survey of Technologies for Web Application Development

    CERN Document Server

    Doyle, Barry

    2008-01-01

    Web-based application developers face a dizzying array of platforms, languages, frameworks and technical artifacts to choose from. We survey, classify, and compare technologies supporting Web application development. The classification is based on (1) foundational technologies; (2) integration with other information sources; and (3) dynamic content generation. We further survey and classify software engineering techniques and tools that have been adopted from traditional programming into Web programming. We conclude that, although the infrastructure problems of the Web have largely been solved, the cacophony of technologies for Web-based applications reflects the lack of a solid model tailored for this domain.

  10. GRID2D/3D: A computer program for generating grid systems in complex-shaped two- and three-dimensional spatial domains. Part 1: Theory and method

    Science.gov (United States)

    Shih, T. I.-P.; Bailey, R. T.; Nguyen, H. L.; Roelke, R. J.

    1990-01-01

    An efficient computer program, called GRID2D/3D, was developed to generate single and composite grid systems within geometrically complex two- and three-dimensional (2- and 3-D) spatial domains that can deform with time. GRID2D/3D generates single grid systems by using algebraic grid generation methods based on transfinite interpolation in which the distribution of grid points within the spatial domain is controlled by stretching functions. All single grid systems generated by GRID2D/3D can have grid lines that are continuous and differentiable everywhere up to the second-order. Also, grid lines can intersect boundaries of the spatial domain orthogonally. GRID2D/3D generates composite grid systems by patching together two or more single grid systems. The patching can be discontinuous or continuous. For continuous composite grid systems, the grid lines are continuous and differentiable everywhere up to the second-order except at interfaces where different single grid systems meet. At interfaces where different single grid systems meet, the grid lines are only differentiable up to the first-order. For 2-D spatial domains, the boundary curves are described by using either cubic or tension spline interpolation. For 3-D spatial domains, the boundary surfaces are described by using either linear Coons interpolation, bi-hyperbolic spline interpolation, or a new technique referred to as 3-D bi-directional Hermite interpolation. Since grid systems generated by algebraic methods can have grid lines that overlap one another, GRID2D/3D contains a graphics package for evaluating the grid systems generated. With the graphics package, the user can generate grid systems in an interactive manner with the grid generation part of GRID2D/3D. GRID2D/3D is written in FORTRAN 77 and can be run on any IBM PC, XT, or AT compatible computer. In order to use GRID2D/3D on workstations or mainframe computers, some minor modifications must be made in the graphics part of the program; no
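
    A minimal sketch of the core technique the abstract names, 2-D transfinite interpolation (GRID2D/3D itself adds stretching functions, spline boundary descriptions, patching and much more); the four boundary curves below are illustrative assumptions.

        import numpy as np

        def tfi(bottom, top, left, right, ni=21, nj=11):
            """Each boundary maps s in [0, 1] to an (x, y) point; returns an ni x nj grid."""
            grid = np.zeros((ni, nj, 2))
            c00, c10 = np.array(bottom(0.0)), np.array(bottom(1.0))
            c01, c11 = np.array(top(0.0)), np.array(top(1.0))
            for i, s in enumerate(np.linspace(0.0, 1.0, ni)):
                for j, t in enumerate(np.linspace(0.0, 1.0, nj)):
                    linear = ((1 - t) * np.array(bottom(s)) + t * np.array(top(s))
                              + (1 - s) * np.array(left(t)) + s * np.array(right(t)))
                    corners = ((1 - s) * (1 - t) * c00 + s * (1 - t) * c10
                               + (1 - s) * t * c01 + s * t * c11)
                    grid[i, j] = linear - corners   # Boolean sum of the two projectors
            return grid

        # Unit square with a gently bumped lower wall:
        grid = tfi(bottom=lambda s: (s, 0.1 * np.sin(np.pi * s)),
                   top=lambda s: (s, 1.0),
                   left=lambda t: (0.0, t),
                   right=lambda t: (1.0, t))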

  11. Gestor de contenidos web

    OpenAIRE

    García Populin, Iván

    2014-01-01

    Final degree project developed in .NET. It presents a web content management system for generating an advertising website.

  12. Design Principles for Responsive Web

    OpenAIRE

    Aryal, Chandra Shekhar

    2014-01-01

    The purpose of this bachelor's thesis project was to study responsive design paradigms and development approaches for creating web pages that are optimised for adaptive web design. The additional goals were to analyse the design principles and implement a prototype to find out whether it is feasible to achieve responsive design for the various screen resolutions of devices. The theoretical part of the thesis explains in more detail the primary development approaches and design consideratio...

  13. A note on an integration by parts formula for the generators of uniform translations on configuration space

    CERN Document Server

    Conrad, Florian

    2011-01-01

    An integration by parts formula is derived for the first order differential operator corresponding to the action of translations on the space of locally finite simple configurations of infinitely many points on R^d. As reference measures, tempered grand canonical Gibbs measures are considered corresponding to a non-constant non-smooth intensity (one-body potential) and translation invariant potentials fulfilling the usual conditions. It is proven that such Gibbs measures fulfill the intuitive integration by parts formula if and only if the action of the translation is not broken for this particular measure. The latter is automatically fulfilled in the high temperature and low intensity regime.
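
    For orientation, the kind of formula at stake can be written schematically as follows; this is a generic shape only (the symbols are illustrative, and the paper's precise statement concerns tempered grand canonical Gibbs measures and their logarithmic derivatives):

        \int_{\Gamma} \nabla^{\Gamma}_{v} F \, \mathrm{d}\mu
          \;=\; - \int_{\Gamma} F \, B^{\mu}_{v} \, \mathrm{d}\mu ,

    where \Gamma is the configuration space, \nabla^{\Gamma}_{v} the generator of translations in direction v, and B^{\mu}_{v} the logarithmic derivative of the measure \mu along v.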

  14. Value of flexible resources, virtual bidding, and self-scheduling in two-settlement electricity markets with wind generation - Part II

    DEFF Research Database (Denmark)

    Kazempour, Jalal; Hobbs, Benjamin F.

    2017-01-01

    In Part II of this paper, we present formulations for three two-settlement market models: baseline cost-minimization (Stoch-Opt); and two sequential market models in which an independent system operator (ISO) runs real-time (RT) balancing markets after making day-ahead (DA) generating unit...... case study based on the 24-node IEEE reliability test system. Their results confirm that flexible resources, including fast-start generators and demand response, can reduce expected costs in a sequential two-settlement market. In addition, virtual bidders can also improve the functioning of sequential...

  15. Cost reduction in the cold: heat generated by terrestrial locomotion partly substitutes for thermoregulation costs in Knot Calidris canutus

    NARCIS (Netherlands)

    Bruinzeel, L.W.; Piersma, T.

    1998-01-01

    To test whether heat generated during locomotion substitutes for the thermoregulation cost, oxygen consumption of four post-absorptive temperate-wintering Knot Calidris canutus was measured at air temperatures of 25 degrees C (thermoneutral) and 10 degrees C (c. 10 degrees below the lower critical

  16. Cost reduction in the cold : heat generated by terrestrial locomotion partly substitutes for thermoregulation costs in Knot Calidris canutus

    NARCIS (Netherlands)

    Bruinzeel, Leo W.; Piersma, T

    To test whether heat generated during locomotion substitutes for the thermoregulation cost, oxygen consumption of four post-absorptive temperate-wintering Knot Calidris canutus was measured at air temperatures of 25 degrees C (thermoneutral) and 10 degrees C (c. 10 degrees below the lower critical

  17. Transient stability and control of renewable generators based on Hamiltonian surface shaping and power flow control. Part II, analysis.

    Energy Technology Data Exchange (ETDEWEB)

    Robinett, Rush D., III; Wilson, David Gerald

    2010-11-01

    The swing equations for renewable generators connected to the grid are developed and a wind turbine is used as an example. The swing equations for the renewable generators are formulated as a natural Hamiltonian system with externally applied non-conservative forces. A two-step process referred to as Hamiltonian Surface Shaping and Power Flow Control (HSSPFC) is used to analyze and design feedback controllers for the renewable generators system. This formulation extends previous results on the analytical verification of the Potential Energy Boundary Surface (PEBS) method to nonlinear control analysis and design and justifies the decomposition of the system into conservative and non-conservative systems to enable a two-step, serial analysis and design procedure. The first step is to analyze the system as a conservative natural Hamiltonian system with no externally applied non-conservative forces. The Hamiltonian surface of the swing equations is related to the Equal-Area Criterion and the PEBS method to formulate the nonlinear transient stability problem. This formulation demonstrates the effectiveness of proportional feedback control to expand the stability region. The second step is to analyze the system as natural Hamiltonian system with externally applied non-conservative forces. The time derivative of the Hamiltonian produces the work/rate (power flow) equation which is used to ensure balanced power flows from the renewable generators to the loads. The Second Law of Thermodynamics is applied to the power flow equations to determine the stability boundaries (limit cycles) of the renewable generators system and enable design of feedback controllers that meet stability requirements while maximizing the power generation and flow to the load. Necessary and sufficient conditions for stability of renewable generators systems are determined based on the concepts of Hamiltonian systems, power flow, exergy (the maximum work that can be extracted from an energy flow) rate
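
    A minimal sketch of the classical swing equation underlying this analysis, integrated with a simple fixed-step scheme; the inertia, damping and power numbers are illustrative assumptions, and energy() is the Hamiltonian whose level sets the PEBS-style argument examines.

        import math

        M, D = 0.1, 0.05        # inertia and damping (per unit, illustrative)
        Pm, Pmax = 0.8, 1.2     # mechanical input and peak electrical power

        def step(delta, omega, dt=1e-3):
            """One Euler step of M*delta'' + D*delta' = Pm - Pmax*sin(delta)."""
            domega = (Pm - Pmax * math.sin(delta) - D * omega) / M
            return delta + dt * omega, omega + dt * domega

        def energy(delta, omega):
            """Kinetic term plus the potential well of the power angle."""
            return 0.5 * M * omega**2 - Pm * delta - Pmax * math.cos(delta)

        delta, omega = 0.1, 0.0
        for _ in range(200000):
            delta, omega = step(delta, omega)
        print(delta, math.asin(Pm / Pmax))   # settles near the stable equilibrium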

  18. Semantic Sensor Web

    Science.gov (United States)

    Sheth, A.; Henson, C.; Thirunarayan, K.

    2008-12-01

    Sensors are distributed across the globe leading to an avalanche of data about our environment. It is possible today to utilize networks of sensors to detect and identify a multitude of observations, from simple phenomena to complex events and situations. The lack of integration and communication between these networks, however, often isolates important data streams and intensifies the existing problem of too much data and not enough knowledge. With a view to addressing this problem, the Semantic Sensor Web (SSW) [1] proposes that sensor data be annotated with semantic metadata that will both increase interoperability and provide contextual information essential for situational knowledge. Kno.e.sis Center's approach to SSW is an evolutionary one. It adds semantic annotations to the existing standard sensor languages of the Sensor Web Enablement (SWE) defined by OGC. These annotations enhance primarily syntactic XML-based descriptions in OGC's SWE languages with microformats, and W3C's Semantic Web languages- RDF and OWL. In association with semantic annotation and semantic web capabilities including ontologies and rules, SSW supports interoperability, analysis and reasoning over heterogeneous multi-modal sensor data. In this presentation, we will also demonstrate a mashup with support for complex spatio-temporal-thematic queries [2] and semantic analysis that utilize semantic annotations, multiple ontologies and rules. It uses existing services (e.g., GoogleMap) and semantics enhanced SWE's Sensor Observation Service (SOS) over weather and road condition data from various sensors that are part of Ohio's transportation network. Our upcoming plans are to demonstrate end to end (heterogeneous sensor to application) semantics support and study scalability of SSW involving thousands of sensors to about a billion triples. Keywords: Semantic Sensor Web, Spatiotemporal thematic queries, Semantic Web Enablement, Sensor Observation Service [1] Amit Sheth, Cory Henson, Satya
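
    A minimal sketch of the annotation step with rdflib; the namespace URI and property names below are illustrative placeholders, not the actual SWE or Semantic Web vocabularies used by the project.

        from rdflib import Graph, Literal, Namespace
        from rdflib.namespace import RDF, XSD

        EX = Namespace("http://example.org/sensors#")
        g = Graph()

        obs = EX["obs-42"]                     # one observation resource
        g.add((obs, RDF.type, EX.Observation))
        g.add((obs, EX.observedProperty, EX.RoadSurfaceTemperature))
        g.add((obs, EX.hasValue, Literal(-2.5, datatype=XSD.double)))
        g.add((obs, EX.atTime, Literal("2008-12-01T06:00:00Z", datatype=XSD.dateTime)))
        g.add((obs, EX.producedBy, EX["station-I70-mile-23"]))

        print(g.serialize(format="turtle"))    # the annotated observation as Turtle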

  19. EVALUATION OF THE EFFECTIVENESS OF INTERNET BANKING IN CHILE FOR THE GENERATION OF WEB-BASED BANKING BUSINESSES STRATEGIES

    Directory of Open Access Journals (Sweden)

    Fabián Vergara

    2006-12-01

    Full Text Available The main objective of this study is to analyze the effectiveness of the Internet banking services provided by thirteen Chilean banking institutions. To achieve this objective, two stages were considered. The first stage corresponded to the evaluation of the functionalities offered by the banking Web sites, using an adaptation of the Hersey Web site evaluation model. The second stage corresponded to the evaluation of the services according to the clients' perceptions. In order to determine the functionalities offered by banking Web sites it was necessary to detect the presence or absence of thirty-seven elements. This approach permitted ranking the Chilean banking sector based on the Internet offer of each institution. The survey, applied to both users and non-users of online banking, revealed various factors relating to use and non-use of the services. Although there are some aspects in which the national banking sector performs only moderately, in the overall context it performs with a high degree of effectiveness. On the other hand

  20. Value of Flexible Resources, Virtual Bidding, and Self-Scheduling in Two-Settlement Electricity Markets With Wind GenerationPart I

    DEFF Research Database (Denmark)

    Kazempour, Jalal; Hobbs, Benjamin F.

    2017-01-01

    Part one of this two-part paper presents new models for evaluating flexible resources in two-settlement electricity markets (day-ahead and real-time) with uncertain net loads (demand minus wind). Physical resources include wind together with fast- and slow-start demand response and thermal...... to resolve imbalances. The value of various flexible resources is evaluated through four two-settlement models: i) an equilibrium model in which each player independently schedules its generation or purchases to maximize expected profit; ii) a benchmark (expected system cost minimization); iii) a sequential...... equilibrium model in which the independent system operator (ISO) first optimizes against a deterministic wind power forecast; and iv) an extended sequential equilibrium model with self-scheduling by profit-maximizing slow-start generators. A tight convexified unit commitment allows for demonstration...

  1. Application of Designing Economic Mechanisms to Power Market - Part 2 Characteristic Analysis of Generation Side Power Market

    Directory of Open Access Journals (Sweden)

    ZHU Yonggang

    2013-04-01

    Full Text Available The incentive power market may lead to a high information cost if it is not informationally efficient. The paper analyzes the characteristics of the generation-side power market mechanism model, based on the theory of designing economic mechanisms, using a case with three GENCOs (Generation Companies). The result of the analysis is that the mechanism model has four main features: informational efficiency, meaning that the mechanism meets the requirements of observational efficiency, communication efficiency and low computational complexity; incentive compatibility, indicating that the resource allocation of the power market is Pareto-optimal and that the social benefit is maximized when GENCOs also achieve profit maximization; decentralized decision-making, referring to the preservation of the private information of GENCOs; and encouragement of competition, suggesting that the mechanism encourages GENCOs to compete with each other healthily.

  2. Performance of Generating Plant: Managing the Changes. Part 3: Renewable energy plant: reports on wind, photovoltaics and biomass energies

    Energy Technology Data Exchange (ETDEWEB)

    Manoha, Bruno; Cohen, Martin [Electricite de France (France)

    2008-05-15

    The WEC Committee on the Performance of Generating Plant (PGP) has been collecting and analysing power plant performance statistics worldwide for more than 30 years and has produced regular reports, which include examples of advanced techniques and methods for improving power plant performance through benchmarking. A series of reports from the various working groups was issued in 2008. This reference presents the results of Working Group 3 (WG3). WG3 will promote the introduction of performance indicators for renewable energy generating plant (wind, geothermal, solar and biomass) developed by the Committee. It will also assess selected transitional technology issues and environmental factors related to non-conventional technologies. The WG3 report includes sections on Wind Energy Today, Photovoltaics Energy Today, Biomass Electricity Today and appendices.

  3. Human-Like Behavior Generation Based on Head-Arms Model for Robot Tracking External Targets and Body Parts.

    Science.gov (United States)

    Zhang, Zhijun; Beck, Aryel; Magnenat-Thalmann, Nadia

    2015-08-01

    Facing and pointing toward moving targets is a usual and natural behavior in daily life. Social robots should be able to display such coordinated behaviors in order to interact naturally with people. For instance, a robot should be able to point and look at specific objects. This is why a scheme to generate coordinated head-arm motion for a humanoid robot with two degrees-of-freedom for the head and seven for each arm is proposed in this paper. Specifically, a virtual plane approach is employed to generate the analytical solution of the head motion. A quadratic program (QP)-based method is exploited to formulate the coordinated dual-arm motion. To obtain the optimal solution, a simplified recurrent neural network is used to solve the QP problem. The effectiveness of the proposed scheme is demonstrated using both computer simulation and physical experiments.
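
    A minimal sketch of the QP step for one arm (the paper solves its QP with a simplified recurrent neural network; a generic solver stands in here). J is an illustrative 3x7 Jacobian and v a desired end-effector velocity; the program finds joint rates minimizing ||J*qdot - v||^2 within rate bounds.

        import numpy as np
        from scipy.optimize import minimize

        rng = np.random.default_rng(0)
        J = rng.standard_normal((3, 7))      # placeholder 7-DOF arm Jacobian
        v = np.array([0.10, 0.00, -0.05])    # desired Cartesian velocity

        def cost(qdot):
            r = J @ qdot - v
            return 0.5 * float(r @ r)        # tracking error as QP objective

        res = minimize(cost, x0=np.zeros(7), bounds=[(-1.0, 1.0)] * 7)
        print(res.x)                          # joint rates tracking the target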

  4. Primer on client-side web security

    CERN Document Server

    De Ryck, Philippe; Piessens, Frank; Johns, Martin

    2014-01-01

    This volume illustrates the continuous arms race between attackers and defenders of the Web ecosystem by discussing a wide variety of attacks. In the first part of the book, the foundation of the Web ecosystem is briefly recapped and discussed. Based on this model, the assets of the Web ecosystem are identified, and the set of capabilities an attacker may have are enumerated. In the second part, an overview of the web security vulnerability landscape is constructed. Included are selections of the most representative attack techniques reported in great detail. In addition to descriptions of the

  5. Social web artifacts for boosting recommenders theory and implementation

    CERN Document Server

    Ziegler, Cai-Nicolas

    2013-01-01

    Recommender systems, software programs that learn from human behavior and make predictions of what products we are expected to appreciate and purchase, have become an integral part of our everyday life. They proliferate across electronic commerce around the globe and exist for virtually all sorts of consumable goods, such as books, movies, music, or clothes. At the same time, a new evolution on the Web has started to take shape, commonly known as the “Web 2.0” or the “Social Web”: Consumer-generated media has become rife, social networks have emerged and are pulling significant shares of Web traffic. In line with these developments, novel information and knowledge artifacts have become readily available on the Web, created by the collective effort of millions of people. This textbook presents approaches to exploit the new Social Web fountain of knowledge, zeroing in first and foremost on two of those information artifacts, namely classification taxonomies and trust networks. These two are used to impr...

  6. A simple model of ultrasound propagation in a cavitating liquid. Part I: Theory, nonlinear attenuation and traveling wave generation

    CERN Document Server

    Louisnard, Olivier

    2013-01-01

    The bubbles involved in sonochemistry and other applications of cavitation oscillate inertially. A correct estimation of the wave attenuation in such bubbly media requires a realistic estimation of the power dissipated by the oscillation of each bubble, by thermal diffusion in the gas and viscous friction in the liquid. Both quantities are calculated numerically for a single inertial bubble driven at 20 kHz, and are found to be several orders of magnitude larger than the linear prediction. Viscous dissipation is found to be the predominant cause of energy loss for bubbles small enough. Then, the classical nonlinear Caflisch equations describing the propagation of acoustic waves in a bubbly liquid are recast and simplified conveniently. The main harmonic part of the sound field is found to fulfill a nonlinear Helmholtz equation, where the imaginary part of the squared wave number is directly correlated with the energy lost by a single bubble. For low acoustic driving, linear theory is recovered, but for larger ...
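
    Schematically, the claim about the main harmonic part P of the acoustic field can be written as follows (a hedged reading of the abstract, not the paper's exact expression; the sign of the imaginary part depends on the chosen time-harmonic convention):

        \nabla^{2} P + k^{2} P = 0,
        \qquad
        \operatorname{Im}\bigl(k^{2}\bigr) \;\sim\; \frac{2 \rho \omega \, n \, \Pi_{b}(|P|)}{|P|^{2}},

    where \rho is the liquid density, \omega the angular frequency, n the bubble number density and \Pi_{b}(|P|) the power dissipated by a single bubble at local pressure amplitude |P|.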

  7. Documentation of program AFTBDY to generate coordinate system for 3D after body using body fitted curvilinear coordinates, part 1

    Science.gov (United States)

    Kumar, D.

    1980-01-01

    The computer program AFTBDY generates a body fitted curvilinear coordinate system for a wedge curved after body. This wedge curved after body is being used in an experimental program. The coordinate system generated by AFTBDY is used to solve 3D compressible N.S. equations. The coordinate system in the physical plane is a cartesian x,y,z system, whereas, in the transformed plane a rectangular xi, eta, zeta system is used. The coordinate system generated is such that in the transformed plane coordinate spacing in the xi, eta, zeta direction is constant and equal to unity. The physical plane coordinate lines in the different regions are clustered heavily or sparsely depending on the regions where physical quantities to be solved for by the N.S. equations have high or low gradients. The coordinate distribution in the physical plane is such that x stays constant in eta and zeta direction, whereas, z stays constant in xi and eta direction. The desired distribution in x and z is input to the program. Consequently, only the y-coordinate is solved for by the program AFTBDY.

  8. When the Social Meets the Semantic: Social Semantic Web or Web 2.5

    Directory of Open Access Journals (Sweden)

    Salvatore F. Pileggi

    2012-09-01

    Full Text Available The social trend is progressively becoming the key feature of current Web understanding (Web 2.0). This trend appears irrepressible as millions of users, directly or indirectly connected through social networks, are able to share and exchange any kind of content, information, feeling or experience. Social interactions radically changed the user approach. Furthermore, the socialization of content around social objects provides new unexplored commercial marketplaces and business opportunities. On the other hand, the progressive evolution of the web towards the Semantic Web (or Web 3.0) provides a formal representation of knowledge based on the meaning of data. When the social meets semantics, the social intelligence can be formed in the context of a semantic environment in which user and community profiles as well as any kind of interaction is semantically represented (Semantic Social Web). This paper first provides a conceptual analysis of the second and third version of the Web model. That discussion is aimed at the definition of a middle concept (Web 2.5) resulting in the convergence and integration of key features from the current and next generation Web. The Semantic Social Web (Web 2.5) has a clear theoretical meaning, understood as the bridge between the overused Web 2.0 and the not yet mature Semantic Web (Web 3.0).

  9. WEB 238 Courses Tutorial / indigohelp

    OpenAIRE

    2015-01-01

    WEB 238 Week 2 JavaScript Events WEB 238 Week 3 Cookies WEB 238 Week 4 Dynamic HTML WEB 238 Week 5 Web Programming Languages WEB 238 Week 1 DQs WEB 238 Week 2DQs WEB 238 Week 3DQs WEB 238 Week 4DQs WEB 238 Week 5DQs  

  10. Web-ADARE: A Web-Aided Data Repairing System

    KAUST Repository

    Gu, Binbin

    2017-03-08

    Data repairing aims at discovering and correcting erroneous data in databases. In this paper, we develop Web-ADARE, an end-to-end web-aided data repairing system, to provide a feasible way to involve the vast data sources on the Web in data repairing. Our main focus in developing Web-ADARE is the interaction problem between web-aided repairing and rule-based repairing, in order to minimize the Web consultation cost while reaching predefined quality requirements. The same interaction problem also exists in crowd-based methods, but there it has not yet been formally defined and addressed. We first prove in theory that the optimal interaction scheme is not feasible to achieve, and then propose an algorithm to identify a scheme for efficient interaction by investigating the inconsistencies and the dependencies between values in the repairing process. Extensive experiments on three data collections demonstrate the high repairing precision and recall of Web-ADARE, and the efficiency of the generated interaction scheme over several baseline ones.

  11. Association and Sequence Mining in Web Usage

    Directory of Open Access Journals (Sweden)

    Claudia Elena DINUCA

    2011-06-01

    Full Text Available Web servers worldwide generate a vast amount of information on web users' browsing activities. Several researchers have studied this so-called clickstream or web access log data to better understand and characterize web users. Clickstream data can be enriched with information about the content of visited pages and the origin (e.g., geographic, organizational) of the requests. The goal of this project is to analyse user behaviour by mining enriched web access log data. With the continued growth and proliferation of e-commerce, Web services, and Web-based information systems, the volume of clickstream and user data collected by Web-based organizations in their daily operations has reached astronomical proportions. This information can be exploited in various ways, such as enhancing the effectiveness of websites or developing directed web marketing campaigns. The discovered patterns are usually represented as collections of pages, objects, or resources that are frequently accessed by groups of users with common needs or interests. The focus of this paper is to provide an overview of how to use frequent-pattern techniques for discovering different types of patterns in a web log database. In this paper we will focus on finding associations as a data mining technique to extract potentially useful knowledge from web usage data. Using the NetBeans IDE, I implemented a Java program for identifying page associations from sessions. For exemplification, we used the log files from a commercial web site.
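
    A minimal sketch of the association step (the paper's implementation is in Java; Python is used here for brevity): frequent page pairs are counted across sessions and reported as rules once they clear illustrative support and confidence thresholds.

        from collections import Counter
        from itertools import combinations

        def page_associations(sessions, min_support=0.3, min_confidence=0.6):
            """sessions: list of page-visit lists; returns (x, y, support, confidence)."""
            n = len(sessions)
            page_count, pair_count = Counter(), Counter()
            for session in sessions:
                pages = set(session)
                page_count.update(pages)
                pair_count.update(combinations(sorted(pages), 2))
            rules = []
            for (a, b), c in pair_count.items():
                if c / n < min_support:
                    continue
                for x, y in ((a, b), (b, a)):   # both rule directions
                    if c / page_count[x] >= min_confidence:
                        rules.append((x, y, c / n, c / page_count[x]))
            return rules

        sessions = [["/", "/products", "/cart"],
                    ["/", "/products"],
                    ["/", "/about"]]
        print(page_associations(sessions))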

  12. Security scanning of Web sites at CERN

    CERN Multimedia

    IT Department

    2010-01-01

    As of early 2010, the CERN Computer Security Team will start regular scanning of all Web sites and Web applications at CERN, visible on the Internet, or on the General Purpose Network (office network). The goal of this scanning is to improve the quality of CERN Web sites. All deficits found will be reported by e-mail to the relevant Web site owners, and must be fixed in a timely manner. Web site owners may also request one-off scans of their Web site or Web application, by sending an e-mail to Computer.Security@cern.ch. These Web scans are designed to limit the impact on the scanned Web sites. Nevertheless, in very rare cases scans may cause undesired side-effects, e.g. generate a large number of log entries, or cause particularly badly designed or less robust Web applications to crash. If a Web site is affected by these security scans, it will also be susceptible to any more aggressive scan that can be performed any time by a malicious attacker. Such Web applications should be fixed, and also additionally...

  13. Using WebDewey (Usare WebDewey)

    OpenAIRE

    Baldi, Paolo

    2016-01-01

    This presentation shows how to use the WebDewey tool. Features of WebDewey. Italian WebDewey compared with American WebDewey. Querying Italian WebDewey. Italian WebDewey and MARC21. Italian WebDewey and UNIMARC. Numbers, captions, "equivalente verbale": Dewey decimal classification in Italian catalogues. Italian WebDewey and Nuovo soggettario. Italian WebDewey and LCSH. Italian WebDewey compared with printed version of Italian Dewey Classification (22. edition): advantages and disadvantages o...

  14. Keynote Talk: Mining the Web 2.0 for Improved Image Search

    Science.gov (United States)

    Baeza-Yates, Ricardo

    There are several semantic sources that can be found in the Web that are either explicit, e.g. Wikipedia, or implicit, e.g. derived from Web usage data. Most of them are related to user generated content (UGC) or what is called today the Web 2.0. In this talk we show how to use these sources of evidence in Flickr, such as tags, visual annotations or clicks, which represent the wisdom of crowds behind UGC, to improve image search. These results are the work of the multimedia retrieval team at Yahoo! Research Barcelona and they are already being used in Yahoo! image search. This work is part of a larger effort to produce a virtuous data feedback circuit based on the right combination of many different technologies to leverage the Web itself.

  15. GCOOS Web Applications for Recreational Boaters and Fishermen

    Science.gov (United States)

    Kobara, S.; Howard, M. K.; Simoniello, C.; Jochens, A. E.; Gulf Of Mexico Coastal Ocean Observing System Regional Association (Gcoos-Ra)

    2010-12-01

    Spatial and temporal information on the ecology of marine species and the encompassing oceanographic environment is vital to the development of effective strategies for marine resource management and biodiversity conservation. Assembling data and generating products is a time-consuming and often laborious part of the workflow required of fisheries specialists, resource managers, marine scientists and other stakeholder groups for effective fishery management and marine spatial planning. Workflow costs for all groups can be significantly reduced through the use of interoperable networked data systems. The Gulf of Mexico Coastal Ocean Observing System Regional Association (GCOOS-RA) is one of 11 RAs comprising the non-Federal part of the U.S. Integrated Ocean Observing System (IOOS). The RAs serve the region’s needs for data and information: by working with data providers to offer their data in standardized ways following IOOS guidance, by gathering stakeholders’ needs and requirements, and by producing basic products or facilitating product generation by others to meet those needs. The GCOOS Data Portal aggregates regional near real-time data and serves these data through standardized service interfaces suitable for automated machine access or in formats suitable for human consumption. The related Products Portal generates products in graphical displays for humans and in standard formats for importing into common software packages. Web map applications are created using ArcGIS Server RESTful services, publicly available Open Geospatial Consortium (OGC) Web Map Service (WMS) layers, and Web Coverage Service (WCS) layers. Use of standardized interfaces allows us to construct seamless workflows that carry data from sensors through to products in an automated fashion. As a demonstration of the power of interoperable standards-based systems we have developed tailored product web pages for recreational boaters and fishermen. This is a part of an ongoing project to provide an
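
    As a hedged illustration of consuming such standards-based services, the following sketch fetches one WMS layer with the OWSLib library; the endpoint, layer name and bounding box are hypothetical placeholders, not the GCOOS portal's actual configuration.

        from owslib.wms import WebMapService

        # Hypothetical WMS endpoint and layer; any OGC-compliant server works.
        wms = WebMapService("http://example.org/geoserver/wms", version="1.1.1")
        img = wms.getmap(layers=["sst"],                   # e.g. sea-surface temperature
                         srs="EPSG:4326",
                         bbox=(-98.0, 18.0, -80.0, 31.0),  # Gulf of Mexico extent
                         size=(600, 400),
                         format="image/png")
        with open("gulf_sst.png", "wb") as f:
            f.write(img.read())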

  16. A Secure Web Application Providing Public Access to High-Performance Data Intensive Scientific Resources - ScalaBLAST Web Application

    Energy Technology Data Exchange (ETDEWEB)

    Curtis, Darren S.; Peterson, Elena S.; Oehmen, Chris S.

    2008-05-04

    This work presents the ScalaBLAST Web Application (SWA), a web-based application implemented using the PHP scripting language, the MySQL DBMS, and the Apache web server under a GNU/Linux platform. SWA is an application built as part of the Data Intensive Computing for Complex Biological Systems (DICCBS) project at the Pacific Northwest National Laboratory (PNNL). SWA delivers accelerated throughput of bioinformatics analysis via high-performance computing through a convenient, easy-to-use web interface. This approach greatly enhances emerging fields of study in biology such as ontology-based homology and multiple whole-genome comparisons, which, in the absence of a tool like SWA, require a heroic effort to overcome the computational bottleneck associated with genome analysis. The current version of SWA includes a user account management system, a web-based user interface, and a backend process that generates the files necessary for the Internet scientific community to submit a ScalaBLAST parallel processing job on a dedicated cluster.

  17. Maintaining Web Cache Coherency

    Directory of Open Access Journals (Sweden)

    2000-01-01

    Full Text Available Document coherency is a challenging problem for Web caching. Once documents are cached throughout the Internet, it is often difficult to keep them coherent with the origin document without generating new traffic that could load the international backbone and overload popular servers. Several solutions have been proposed to solve this problem; among them, two categories have been widely discussed: strong document coherency and weak document coherency. The cost and efficiency of the two categories are still a controversial issue: while in some studies strong coherency is far too expensive to be used in the Web context, in other studies it can be maintained at a low cost. The accuracy of these analyses depends very much on how the document updating process is approximated. In this study, we compare some of the coherence methods proposed for Web caching. Among other points, we study the side effects of these methods on Internet traffic. The ultimate goal is to study cache behavior under several conditions, covering some of the factors that play an important role in Web cache performance evaluation, and to quantify their impact on simulation accuracy. The results presented in this study show some differences in the outcome of the simulation of a Web cache depending on the workload being used and on the probability distribution used to approximate updates to the cached documents. Each experiment shows two case studies that outline the impact of the considered parameter on the performance of the cache.
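
    The two coherency categories compared above can be sketched in a few lines; this toy cache (illustrative only, not one of the studied systems) contrasts TTL-based weak coherency with validate-on-every-hit strong coherency:

        import time

        class Cache:
            def __init__(self, ttl, strong=False):
                self.ttl, self.strong, self.store = ttl, strong, {}

            def get(self, url, fetch, validate):
                entry = self.store.get(url)
                if entry is not None:
                    body, fetched_at = entry
                    if self.strong:
                        # strong coherency: check with the origin on every hit,
                        # e.g. via a conditional GET (extra traffic per request)
                        if validate(url, fetched_at):
                            return body
                    elif time.time() - fetched_at < self.ttl:
                        return body   # weak coherency: may serve a stale copy
                body = fetch(url)     # miss, expiry, or failed validation
                self.store[url] = (body, time.time())
                return body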

  18. SEMANTIC WEB SERVICES – DISCOVERY, SELECTION AND COMPOSITION TECHNIQUES

    Directory of Open Access Journals (Sweden)

    Sowmya Kamath S

    2013-02-01

    Full Text Available Web services are already one of the most important resources on the Internet. As an integrated solution for realizing the vision of the Next Generation Web, semantic web services combine semantic web technology with web service technology, envisioning automated life-cycle management of web services. This paper discusses the significance and importance of service discovery and selection to business logic, and surveys current research in the various phases of the semantic web service lifecycle, such as discovery and selection. We also present several different composition strategies, based on current research, and provide an outlook towards critical future work.

  19. State of the art on high-temperature thermal energy storage for power generation. Part 2. Case studies

    Energy Technology Data Exchange (ETDEWEB)

    Medrano, Marc; Gil, Antoni; Martorell, Ingrid; Potau, Xavi; Cabeza, Luisa F. [GREA Innovacio Concurrent, Universitat de Lleida, Pere de Cabrera s/n, 25001 Lleida (Spain)

    2010-01-15

    Power generation systems are attracting a lot of interest from researchers and companies. Storage is becoming a component of high importance to ensure system reliability and economic profitability. To date, only a few storage components have been deployed in solar power plants, most of them as research initiatives. In this paper, real experiences with active storage systems and passive storage systems are compiled, giving detailed information on the advantages and disadvantages of each one. Also, a summary of the different technologies and materials used in solar power plants with thermal storage systems existing in the world is presented. (author)

  20. State of the art on high temperature thermal energy storage for power generation. Part 1. Concepts, materials and modellization

    Energy Technology Data Exchange (ETDEWEB)

    Gil, Antoni; Medrano, Marc; Martorell, Ingrid; Cabeza, Luisa F. [GREA Innovacio Concurrent, Universitat de Lleida, Pere de Cabrera s/n, 25001-Lleida (Spain); Lazaro, Ana; Dolado, Pablo; Zalba, Belen [Instituto de Investigacion en Ingenieria de Aragon, I3A, Grupo de Ingenieria Termica y Sistemas Energeticos (GITSE), Dpto. Ingenieria Mecanica, Area de Maquinas y Motores Termicos, Universidad de Zaragoza, Campus Politecnico Rio Ebro, Edificio ' Agustin de Betancourt' , Maria de Luna s/n, 50018 Zaragoza (Spain)

    2010-01-15

    Concentrated solar thermal power generation is becoming a very attractive renewable energy production system among the different renewable options, as it has better potential for dispatchability. This dispatchability is inevitably linked to an efficient and cost-effective thermal storage system. Thus, of all components, thermal storage is a key one. However, it is also one of the least developed. Only a few plants in the world have tested high temperature thermal energy storage systems. In this paper, the different storage concepts are reviewed and classified. All materials considered in the literature or in plants are listed. Finally, the modellization of such systems is reviewed. (author)

  1. Public health and Web 2.0.

    Science.gov (United States)

    Hardey, Michael

    2008-07-01

    This article examines the nature and role of Web 2.0 resources and their impact on health information made available through the Internet. The transition of the Web from version one to Web 2.0 is described and the main features of the new Web examined. Two characteristic Web 2.0 resources are explored and the implications for the public and practitioners examined. First, what are known as 'user reviews' or 'user testimonials', which allow people to comment on the health services delivered to them, are described. Second, new mapping applications that take advantage of the interactive potential of Web 2.0 and provide tools to visualize complex data are examined. Following a discussion of the potential of Web 2.0, it is concluded that it offers considerable opportunities for disseminating health information and creating new sources of data, as well as generating new questions and dilemmas.

  2. SWI-Prolog and the Web

    CERN Document Server

    Wielemaker, Jan; van der Meij, Lourens

    2007-01-01

    Where Prolog is commonly seen as a component in a Web application that is either embedded or communicates using a proprietary protocol, we propose an architecture where Prolog communicates with other components in a Web application using the standard HTTP protocol. By avoiding embedding in external Web servers, development and deployment become much easier. To support this architecture, in addition to the transfer protocol, we must also support parsing, representing and generating the key Web document types such as HTML, XML and RDF. This paper motivates the design decisions in the libraries and extensions to Prolog for handling Web documents and protocols. The design has been guided by the requirement to handle large documents efficiently. The described libraries support a wide range of Web applications ranging from HTML and XML documents to Semantic Web RDF processing. To appear in Theory and Practice of Logic Programming (TPLP)

  3. E-Learning in Web 3.0

    Directory of Open Access Journals (Sweden)

    Maria Dominic

    2014-02-01

    Full Text Available Web 2.0 is about social networking and collaboration between the creator and the user. Web 3.0 is termed the intelligent web or semantic web, with technologies like big data, linked data, cloud computing, 3D visualization, augmented reality and more, aiming to turn the passive learner into an active learner in the learning process. This paper identifies the characteristics of the different generations of the web and their effect on the different generations of e-learning, and also identifies the various issues related to Web 3.0. Finally, a study of user preferences is made and recorded in this paper.

  4. NOMAGE4 activities 2011. Part I, Nordic Nuclear Materials Forum for Generation IV Reactors: Status and activities in 2011

    Energy Technology Data Exchange (ETDEWEB)

    Van Nieuwenhove, R. (Institutt for Energiteknikk, OECD Halden Reactor Project (Norway))

    2012-01-15

    A network for materials issues was initiated in 2009 within the Nordic countries. The original objectives of the Generation IV Nordic Nuclear Materials Forum (NOMAGE4) were to form the basis of a sustainable forum for Gen-IV issues, especially focusing on fuels, cladding, structural materials and coolant interaction. Over the last years, other issues such as reactor physics, thermal hydraulics, safety and waste have gained in importance within the network, and therefore the scope of the forum has been enlarged and a more appropriate and more general name, NORDIC-GEN4, has been chosen. Furthermore, the interaction with non-Nordic countries (such as the Netherlands (JRC, NRG) and the Czech Republic (CVR)) will be increased. Within the NOMAGE4 project, a seminar was organized by IFE-Halden during 31 October - 1 November 2011. The seminar attracted 65 participants from 12 countries. It provided a forum for exchange of information, discussion on future research reactor needs and networking of experts on Generation IV reactor concepts. The participants could also visit the Halden reactor site and the workshop. (Author)

  5. Postprocessing of simulated precipitation for impact research in West Africa. Part II: A weather generator for daily data

    Science.gov (United States)

    Paeth, Heiko; Diederich, Malte

    2011-04-01

    Data from global and regional climate models refer to grid cells and, hence, are fundamentally different from station data. This particularly holds for variables with enhanced spatio-temporal variability like precipitation. On the other hand, many applications, such as hydrological models, require atmospheric data with the statistical characteristics of station data. Here, we present a dynamical-statistical tool to construct virtual station data based on regional climate model output for tropical West Africa. This weather generator (WEGE) incorporates daily gridded rainfall from the model, an orographic term and a stochastic term, accounting for the chaotic spatial distribution of local rain events within a model grid box. In addition, the simulated probability density function of daily precipitation is adjusted to available station data in Benin. It is also ensured that the generated data remain consistent with other model parameters like cloudiness and atmospheric circulation. The resulting virtual station data are in excellent agreement with various observed characteristics that are not explicitly addressed by the WEGE algorithm. This holds for the mean daily rainfall intensity and variability, the relative number of rainless days and the scaling of precipitation in time. The data set has already been used successfully for various climate impact studies in Benin.
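
    The three ingredients named above (grid rainfall, orographic term, stochastic term) can be illustrated with a toy disaggregation function; all coefficients are hypothetical, not the calibrated WEGE values, and the quantile adjustment to station data is omitted.

        import random

        def virtual_station_rain(grid_rain_mm, station_elev_m, grid_elev_m,
                                 wet_fraction=0.6, orographic_coeff=0.0005):
            if grid_rain_mm <= 0.0:
                return 0.0
            # stochastic term: local rain cells hit only part of the grid box
            if random.random() > wet_fraction:
                return 0.0
            # orographic term: modulate by the station-grid elevation difference
            oro = 1.0 + orographic_coeff * (station_elev_m - grid_elev_m)
            # dividing by wet_fraction preserves the grid-box mean over many draws
            return grid_rain_mm * oro / wet_fraction

        random.seed(1)
        print([round(virtual_station_rain(5.0, 450, 300), 1) for _ in range(5)])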

  6. Preventing Long-Term Risk of Obesity for Two Generations: Prenatal Physical Activity Is Part of the Puzzle

    Directory of Open Access Journals (Sweden)

    Stephanie-May Ruchat

    2012-01-01

    Full Text Available Background. The period surrounding pregnancy has been identified as a risk period for overweight/obesity in both mother and child because of excessive gestational weight gain (GWG. The promotion of a healthy GWG is therefore of paramount importance in the context of the prevention of obesity in the current and next generations. Objective. To provide a comprehensive overview of the effect of prenatal physical activity interventions, alone or in combination with nutritional counselling, on GWG and to address whether preventing excessive GWG decreases the incidence of infant high birth weight and/or postpartum weight retention. Method. A search of the PubMed database was conducted to identify all relevant studies. Nineteen studies were included in this review: 13 interventions combining physical activity, nutrition, and GWG counselling and 6 interventions including physical activity alone. Results. Prenatal lifestyle interventions promoting healthy eating and physical activity habits appear to be the most effective approach to prevent excessive GWG. Achievement of appropriate GWG may also decrease the incidence of high infant birth weight and postpartum weight retention. Conclusion. Healthy eating habits during pregnancy, combined with an active lifestyle, may be important elements in the prevention of long-term risk of obesity for two generations.

  7. New web technologies for astronomy

    Science.gov (United States)

    Sprimont, P.-G.; Ricci, D.; Nicastro, L.

    2014-12-01

    Thanks to the new HTML5 capabilities and the huge improvements of the JavaScript language, it is now possible to design very complex and interactive web user interfaces. On top of that, the once monolithic and file-server-oriented web servers are evolving into easily programmable server applications capable of coping with the complex interactions made possible by the new generation of browsers. We believe that the whole community of amateur and professional astronomers can benefit from the potential of these new technologies. New web interfaces can be designed to provide the user with a great deal of more intuitive and interactive tools. Accessing astronomical data archives; scheduling, controlling and monitoring observatories, in particular robotic telescopes; and supervising data reduction pipelines are all capabilities that can now be implemented in a JavaScript web application. In this paper we describe the Sadira package we are implementing exactly to this aim.

  8. Platelet-rich fibrin (PRF): a second-generation platelet concentrate. Part II: platelet-related biologic features.

    Science.gov (United States)

    Dohan, David M; Choukroun, Joseph; Diss, Antoine; Dohan, Steve L; Dohan, Anthony J J; Mouhyi, Jaafar; Gogly, Bruno

    2006-03-01

    Platelet-rich fibrin (PRF) belongs to a new generation of platelet concentrates, with simplified processing and without biochemical blood handling. In this second article, we investigate the platelet-associated features of this biomaterial. During PRF processing by centrifugation, platelets are activated and their massive degranulation implies a very significant cytokine release. Concentrated platelet-rich plasma platelet cytokines have already been quantified in many technologic configurations. To carry out a comparative study, we therefore undertook to quantify PDGF-BB, TGFbeta-1, and IGF-I within PPP (platelet-poor plasma) supernatant and PRF clot exudate serum. These initial analyses revealed that slow fibrin polymerization during PRF processing leads to the intrinsic incorporation of platelet cytokines and glycanic chains in the fibrin meshes. This result would imply that PRF, unlike the other platelet concentrates, would be able to progressively release cytokines during fibrin matrix remodeling; such a mechanism might explain the clinically observed healing properties of PRF.

  9. Measuring Personalization of Web Search

    DEFF Research Database (Denmark)

    Hannak, Aniko; Sapiezynski, Piotr; Kakhki, Arash Molavi

    2013-01-01

    Web search is an integral part of our daily lives. Recently, there has been a trend of personalization in Web search, where different users receive different results for the same search query. The increasing personalization is leading to concerns about Filter Bubble effects, where certain users are simply unable to access information that the search engines’ algorithm decides is irrelevant. Despite these concerns, there has been little quantification of the extent of personalization in Web search today, or of the user attributes that cause it. In light of this situation, we make three contributions. First, we develop a methodology for measuring personalization in Web search results. While conceptually simple, there are numerous details that our methodology must handle in order to accurately attribute differences in search results to personalization. Second, we apply our methodology to 200 users
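
    The core of such a measurement can be sketched in a few lines: issue the same query as different users and quantify how much the result lists differ (a plain Jaccard overlap here; the study itself also controls for noise, geography, cookies and other confounds, which this toy comparison omits):

        def jaccard(results_a, results_b):
            a, b = set(results_a), set(results_b)
            return len(a & b) / len(a | b) if a | b else 1.0

        # hypothetical top results for the same query from two user profiles
        user1 = ["url1", "url2", "url3", "url4"]
        user2 = ["url1", "url5", "url3", "url6"]
        print(f"overlap = {jaccard(user1, user2):.2f}")  # low overlap suggests personalization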

  10. Semantic Web

    Directory of Open Access Journals (Sweden)

    Anna Lamandini

    2011-06-01

    Full Text Available The Semantic Web is a technology at the service of knowledge, aimed at accessibility and the sharing of content, facilitating interoperability between different systems; as such it is one of the nine key technological pillars of TIC (technologies for information and communication) within the third theme, specific cooperation programme, of the Seventh Framework Programme for research and development (7°PQRS, 2007-2013). As a system it seeks to overcome the overload or excess of irrelevant information on the Internet, in order to facilitate specific or pertinent research. It is an extension of the existing Web in which the aim is cooperation between computers and people (the dream of Sir Tim Berners-Lee), where machines can give more support to people when integrating and elaborating data in order to obtain inferences and a global sharing of data. It is a technology able to favour the development of a “data web”, in other words the creation of a space of interconnected and shared data sets (Linked Data) which allows users to link different types of data coming from different sources. It is a technology that will have a great effect on everyday life, since it will permit the planning of “intelligent applications” in various sectors such as education and training, research, the business world, public information, tourism, health, and e-government. It is an innovative technology that activates a social transformation (the socio-semantic Web) on a world level, since it redefines the cognitive universe of users and enables the sharing not only of information but of meaning (collective and connected intelligence).
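
    A minimal Linked Data flavour of the ideas above, using the rdflib library (assumed installed): typed, interlinked statements that a machine can traverse and query across "sources". The namespace and facts are illustrative only.

        from rdflib import Graph, Literal, Namespace

        EX = Namespace("http://example.org/")
        g = Graph()
        g.add((EX.Rome, EX.isCapitalOf, EX.Italy))           # fact from one source
        g.add((EX.Italy, EX.population, Literal(59000000)))  # fact from another

        # the query follows the link between the two facts to infer an answer
        q = """SELECT ?pop WHERE { ?city <http://example.org/isCapitalOf> ?c .
                                   ?c <http://example.org/population> ?pop . }"""
        for row in g.query(q):
            print(row[0])   # 59000000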

  11. Aesthetics and function in web design

    DEFF Research Database (Denmark)

    Thorlacius, Lisbeth

    2004-01-01

    Since the origin of the web site in the first part of the 90’s there have been discussions regarding the relative weighting of function and aesthetics. A renewed discussion is needed, however, to clarify what exactly is meant by aesthetics in web design. Moreover, the balance between aesthetics and function ought to be considered more with respect to the target group and the genre of the web site.

  12. Aesthetics and function in web design

    DEFF Research Database (Denmark)

    Thorlacius, Lisbeth

    2004-01-01

    Since the origin of the web site in the first part of the 90’s there have been discussions regarding the relative weighting of function and aesthetics. A renewed discussion is needed, however, to clarify what exactly is meant by aesthetics in web design. Moreover, the balance between aesthetics and function ought to be considered more with respect to the target group and the genre of the web site.

  13. User-Generated Content, YouTube and Participatory Culture on the Web: Music Learning and Teaching in Two Contrasting Online Communities

    Science.gov (United States)

    Waldron, Janice

    2013-01-01

    In this paper, I draw on seminal literature from new media researchers to frame the broader implications that user-generated content (UGC), YouTube, and participatory culture have for music learning and teaching in online communities; to illustrate, I use examples from two contrasting online music communities, the Online Academy of Irish…

  14. Are clusters important in understanding the mechanisms in atmospheric pressure ionization? Part 1: Reagent ion generation and chemical control of ion populations.

    Science.gov (United States)

    Klee, Sonja; Derpmann, Valerie; Wißdorf, Walter; Klopotowski, Sebastian; Kersten, Hendrik; Brockmann, Klaus J; Benter, Thorsten; Albrecht, Sascha; Bruins, Andries P; Dousty, Faezeh; Kauppila, Tiina J; Kostiainen, Risto; O'Brien, Rob; Robb, Damon B; Syage, Jack A

    2014-08-01

    It is well documented since the early days of the development of atmospheric pressure ionization methods, which operate in the gas phase, that cluster ions are ubiquitous. This holds true for atmospheric pressure chemical ionization, as well as for more recent techniques, such as atmospheric pressure photoionization, direct analysis in real time, and many more. In fact, it is well established that cluster ions are the primary carriers of the net charge generated. Nevertheless, cluster ion chemistry has only been sporadically included in the numerous proposed ionization mechanisms leading to charged target analytes, which are often protonated molecules. This paper series, consisting of two parts, attempts to highlight the role of cluster ion chemistry with regard to the generation of analyte ions. In addition, the impact of the changing reaction matrix and the non-thermal collisions of ions en route from the atmospheric pressure ion source to the high vacuum analyzer region are discussed. This work addresses such issues as extent of protonation versus deuteration, the extent of analyte fragmentation, as well as highly variable ionization efficiencies, among others. In Part 1, the nature of the reagent ion generation is examined, as well as the extent of thermodynamic versus kinetic control of the resulting ion population entering the analyzer region.

  15. Development of a Prototype Web GIS-Based Disaster Management System for Safe Operation of the Next Generation Bimodal Tram, South Korea—Focused Flooding and Snowfall

    Directory of Open Access Journals (Sweden)

    Won Seok Jang

    2014-04-01

    Full Text Available The Korea Railroad Research Institute (KRRI) has developed a bimodal tram and advanced bus rapid transit (BRT) system, an optimized public transit system combining the railway’s punctual operation with the bus’s easy and convenient access. The bimodal tram system provides mass-transportation service with an eco-friendly and human-centered approach. Natural disasters have been increasing worldwide in recent years, including flood, snow, and typhoon disasters. Flooding is the most frequent natural disaster in many countries and is increasingly a concern with climate change; it seriously affects people’s lives and productivity, causing considerable economic loss and significant damage. Enhanced conventional disaster management systems are needed to support comprehensive actions to secure safety and convenience. The objective of this study is to develop a prototype version of a Web GIS-based bimodal tram disaster management system (BTDMS) using the Storm Water Management Model (SWMM) 5.0 to enhance on-time operation and safety of the bimodal tram system. The BTDMS was tested at the bimodal tram test railroad by simulating probable maximum flood (PMF) and snow melting for forecasting flooding and snow-covered roads. This result could provide the basis for plans to protect against flooding disasters and snow-covered roads in operating the bimodal tram system. The BTDMS will be used to assess and predict weather impacts on roadway conditions and operations and thus has the potential to influence economic growth. The methodology presented in this paper makes it possible to manage the impacts of flooding and snowfall on urban transportation and to enhance operation of the bimodal tram system. Such a methodology based on modeling could be created for most metropolitan areas in Korea and in many other countries.
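
    At code level, the coupling to SWMM can be illustrated with the pyswmm wrapper (an assumption for illustration; the KRRI system's own integration layer is not described here). The input file and junction name are hypothetical.

        from pyswmm import Simulation, Nodes

        with Simulation("tram_corridor.inp") as sim:    # hypothetical SWMM model
            junction = Nodes(sim)["J1"]                 # a node along the tram route
            for _ in sim:                               # step through the simulation
                if junction.flooding > 0.0:             # overflow rate at this node
                    print(sim.current_time, "flood risk at J1:", junction.flooding)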

  16. Mimicked Web Page Detection over Internet

    Directory of Open Access Journals (Sweden)

    Y. Narasimha Rao

    2014-01-01

    Full Text Available Phishing is the process of stealing valuable information, such as ATM PINs and credit card details, over the Internet, in which the attacker creates mimicked web pages from legitimate web pages to fool users. In this paper, we propose an effective anti-phishing solution that combines an image-based visual-similarity approach to detect plagiarized web pages. We use the Speeded-Up Robust Features (SURF) algorithm as our detection mechanism, in order to generate signatures based on stable key points extracted from a screenshot of the web page. When a legitimate web page is registered with our system, the algorithm is applied to that web page to generate signatures, and these signatures are stored in the database of our trained system. When there is a suspected web page, the algorithm is applied to generate the signatures of the suspected page, which are then verified against the corresponding legitimate web pages in our database. Our results verify that the proposed system is very effective at detecting mimicked web pages with minimal false positives.
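
    The register-then-verify structure can be sketched with OpenCV. SURF itself ships only in the non-free opencv-contrib build, so this stand-in uses ORB, a freely available keypoint detector, within the same signature-matching flow; the file names and threshold are hypothetical.

        import cv2

        def signature(path):
            img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
            return cv2.ORB_create().detectAndCompute(img, None)  # keypoints, descriptors

        def similarity(desc_a, desc_b, max_dist=40):
            matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
            good = [m for m in matcher.match(desc_a, desc_b) if m.distance < max_dist]
            return len(good) / max(len(desc_a), 1)

        _, legit = signature("legit_login.png")      # signature of a registered page
        _, suspect = signature("suspect_login.png")  # screenshot of a suspect page
        if similarity(legit, suspect) > 0.5:
            print("visually mimics a registered page -> flag as possible phishing")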

  17. The uses of the social network Facebook by Argentine university libraries: reflections on the communicative dynamics of Web 2.0 (Los usos de la red social Facebook por parte de bibliotecas universitarias argentinas)

    Directory of Open Access Journals (Sweden)

    Claudia Nora Laudano

    2016-01-01

    Full Text Available This article analyses the adoption and the main uses of the Facebook platform by the libraries of three universities in Argentina. After a bibliographic review of the topic, the methodological procedures employed are described: first, to identify the institutions that currently use this communication tool and, second, to examine its uses according to a series of variables, including the starting moment, the links with other media administered by the institution (websites and social networks), the number of followers, the use of images, the frequency and type of posts, and, finally, the quantity and quality of the comments. The information gathered is analysed both quantitatively and qualitatively to construct a global picture. The conclusions highlight the gradual adoption of the platform by university libraries, although with uses limited relative to its potential. Other possibilities are suggested.

  18. Connecting small and medium enterprises to the new consumer: The Web 2.0 as marketing tool

    NARCIS (Netherlands)

    Constantinides, Efthymios; Bharati, P.; Lee, I.

    2010-01-01

    This chapter explains the nature, effects and current standing of the new generation of Internet applications, commonly known as Social Media or Web 2.0, reviews their role as marketing instruments and identifies opportunities for SMEs for engaging them as part of their marketing strategy. The chapt

  19. How the World Wide Web Started (世界网(万维网)是怎样开始的)

    Institute of Scientific and Technical Information of China (English)

    任继东

    2002-01-01

    Tim Berners-Lee is the man who wrote the software (软件) programme that led to the foundation of the World Wide Web. Britain played an important part in developing the first generation of computers. The parents of Tim Berners-Lee both worked on one of the earliest commercial computers and talked about their work at home.

  20. Connecting small and medium enterprises to the new consumer: The Web 2.0 as marketing tool

    NARCIS (Netherlands)

    Constantinides, Efthymios; Bharati, P.; Lee, I.

    2010-01-01

    This chapter explains the nature, effects and current standing of the new generation of Internet applications, commonly known as Social Media or Web 2.0, reviews their role as marketing instruments and identifies opportunities for SMEs for engaging them as part of their marketing strategy. The chapt

  1. Connecting small and medium enterprises to the new consumer: The Web 2.0 as marketing tool

    NARCIS (Netherlands)

    Constantinides, Efthymios; Bharati, P.; Lee, I.

    2010-01-01

    This chapter explains the nature, effects and current standing of the new generation of Internet applications, commonly known as Social Media or Web 2.0, reviews their role as marketing instruments and identifies opportunities for SMEs for engaging them as part of their marketing strategy. The

  2. Advanced Techniques in Web Intelligence-2 Web User Browsing Behaviour and Preference Analysis

    CERN Document Server

    Palade, Vasile; Jain, Lakhmi

    2013-01-01

    This research volume focuses on analyzing the web user browsing behaviour and preferences in traditional web-based environments, social  networks and web 2.0 applications,  by using advanced  techniques in data acquisition, data processing, pattern extraction and  cognitive science for modeling the human actions.  The book is directed to  graduate students, researchers/scientists and engineers  interested in updating their knowledge with the recent trends in web user analysis, for developing the next generation of web-based systems and applications.

  3. Generation of continental crust in the northern part of the Borborema Province, northeastern Brazil, from Archaean to Neoproterozoic

    Science.gov (United States)

    de Souza, Zorano Sérgio; Kalsbeek, Feiko; Deng, Xiao-Dong; Frei, Robert; Kokfelt, Thomas Find; Dantas, Elton Luiz; Li, Jian-Wei; Pimentel, Márcio Martins; Galindo, Antonio Carlos

    2016-07-01

    This work deals with the origin and evolution of the magmatic rocks in the area north of the Patos Lineament in the Borborema Province (BP). This northeastern segment of NE Brazil is composed of at least six different tectonic blocks with ages varying from late-Archaean to late-Palaeoproterozoic. Archaean rocks cover ca. 5% of the region. They were emplaced over a period of 700 Ma, with at least seven events of magma generation, at 3.41, 3.36, 3.25, 3.18, 3.12, 3.03, and 2.69 Ga. The rocks are subalkaline to slightly alkaline, with affinity to I- and M-type magmas; they follow trondhjemitic or potassium calc-alkaline differentiation trends. They have epsilon Nd(t) of +1.4 to -4.2 and negative anomalies for Ta-Nb, P and Ti, consistent with a convergent tectonic setting. Both subducted oceanic crust and upper mantle (depleted or metasomatised) served as sources of the magmas. After a time lapse of about 350 m.y., large-scale emplacement of Paleoproterozoic units took place. These rocks cover about 50% of the region. Their geochemistry indicates juvenile magmatism with a minor contribution from crustal sources. These rocks also exhibit potassic calc-alkaline differentiation trends, again akin to I- and M-type magmas, and show negative anomalies for Ta-Nb, Ti and P. Depleted and metasomatised mantle, resulting from interaction with adakitic or trondhjemitic melts in a subduction zone setting, is interpreted to be the main source of the magmas, predominating over crustal recycling. U-Pb ages indicate generation of plutonic rocks at 2.24-2.22 Ga (in some places at about 2.4-2.3 Ga) and 2.13-2.11 Ga, and andesitic volcanism at 2.15 Ga. Isotopic evidence indicates juvenile magmatism (epsilon Nd(t) of +2.9 to -2.9). After a time lapse of about 200 m.y., a period of within-plate magmatic activity followed, with acidic volcanism (1.79 Ga) in Orós, granitic plutonism (1.74 Ga) in the Seridó region, anorthosites (1.70 Ga) and A-type granites (1.6 Ga) in the Transverse Zone

  4. Transfer efficiency of angular momentum in sum-frequency generation and control of its spin and orbital parts by varying polarization and frequency of fundamental beams

    Science.gov (United States)

    Perezhogin, I. A.; Grigoriev, K. S.; Potravkin, N. N.; Cherepetskaya, E. B.; Makarov, V. A.

    2017-08-01

    Considering sum-frequency generation in an isotropic chiral nonlinear medium, we analyze the transfer of the spin angular momentum of fundamental elliptically polarized Gaussian light beams to the signal beam, which appears as the superposition of two Laguerre-Gaussian modes with both spin and orbital angular momentum. Only for the circular polarization of the fundamental radiation is its angular momentum fully transferred to the sum-frequency beam; otherwise, part of it can be transferred to the medium. Its value, as well as the ratio of spin and orbital contributions in the signal beam, depends on the fundamental frequency ratio and the polarization of the incident beams. Higher energy conversion efficiency in sum-frequency generation does not always correspond to higher angular momentum conversion efficiency.
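
    A schematic photon-by-photon bookkeeping makes the claim above concrete (a standard conservation argument, not the paper's derivation; sigma_i denotes the spin per photon and l_3 the orbital charge of the signal mode):

        % per SFG event: two fundamental photons merge into one signal photon
        \hbar\sigma_1 + \hbar\sigma_2 = \hbar(\sigma_3 + l_3) + \Delta J_{\mathrm{med}}
        % circular fundamentals (\sigma_1 = \sigma_2 = \pm 1) give \pm 2\hbar on the
        % left, realizable as \sigma_3 = \pm 1, l_3 = \pm 1 with \Delta J_{\mathrm{med}} = 0
        % (full transfer); elliptical inputs generally leave \Delta J_{\mathrm{med}} \neq 0
        % in the medium.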

  5. Web Intelligence and Artificial Intelligence in Education

    Science.gov (United States)

    Devedzic, Vladan

    2004-01-01

    This paper surveys important aspects of Web Intelligence (WI) in the context of Artificial Intelligence in Education (AIED) research. WI explores the fundamental roles as well as practical impacts of Artificial Intelligence (AI) and advanced Information Technology (IT) on the next generation of Web-related products, systems, services, and…

  6. Web 2.0 (and Beyond)

    NARCIS (Netherlands)

    P.A. Arora (Payal)

    2015-01-01

    Web 2.0 is a term coined to mark a new era of Internet usage driven by user interactivity and collaboration in generating content, moving away from the static information dissemination model associated with Web 1.0. It became common in early 2000 with the growth of social network sites,

  7. Web 2.0 (and Beyond)

    NARCIS (Netherlands)

    P.A. Arora (Payal)

    2015-01-01

    Web 2.0 is a term coined to mark a new era of Internet usage driven by user interactivity and collaboration in generating content, moving away from the static information dissemination model associated with Web 1.0. It became common in early 2000 with the growth of social network sites,

  8. Web 2.0 and beyond

    NARCIS (Netherlands)

    P.A. Arora (Payal)

    2015-01-01

    Web 2.0 is a term coined to mark a new era of Internet usage driven by user interactivity and collaboration in generating content, moving away from the static information dissemination model associated with Web 1.0. It became common in early 2000 with the growth of soci

  9. Web Intelligence and Artificial Intelligence in Education

    Science.gov (United States)

    Devedzic, Vladan

    2004-01-01

    This paper surveys important aspects of Web Intelligence (WI) in the context of Artificial Intelligence in Education (AIED) research. WI explores the fundamental roles as well as practical impacts of Artificial Intelligence (AI) and advanced Information Technology (IT) on the next generation of Web-related products, systems, services, and…

  10. Design of an Auto-Generating Tool for Part Sub-Assembly Drawings (零件小组立图自动生成程序设计)

    Institute of Scientific and Technical Information of China (English)

    周玉飞

    2014-01-01

    In many shipyards, the production of part sub-assembly drawings is still at the stage of manual drawing. By using the Python language to carry out secondary development of the Tribon system, part sub-assembly drawings can be generated automatically and in batches. The program has been verified on a series of ship designs, showing that it is feasible and highly practical.

  11. WEB MINING IN E-COMMERCE

    Directory of Open Access Journals (Sweden)

    Istrate Mihai

    2009-05-01

    Full Text Available Recently, the web has become an important part of people’s lives. The web is a very good place to run successful businesses. Selling products or services online plays an important role in the success of businesses that have a physical presence, like a re

  12. DISTANCE LEARNING ONLINE WEB 3 .0

    Directory of Open Access Journals (Sweden)

    S. M. Petryk

    2015-05-01

    Full Text Available This article analyzes the existing methods of identifying information in the Semantic Web, outlines the main problems of its implementation, and researches the use of the Semantic Web as part of distance learning. An alternative approach to the identification of information and the construction of relationships between information and acquired knowledge is proposed, based on the developed method “spectrum of knowledge”

  13. Feedback for Web-based Assignments.

    Science.gov (United States)

    Collis, Betty; De Boer, W.; Slotman, K.

    2001-01-01

    Discusses a concept used at the University of Twente, based on increased flexibility in learning options and the active student, in which assignments are submitted and monitored via a Web-based course management system. Outlines conceptual aspects of feedback as part of the assessment process, particularly feedback supported by a Web-based…

  14. EndNote Web

    OpenAIRE

    Uezu, Denis

    2015-01-01

    A brief guide, in Russian, to working with the EndNote Web online service on the Thomson Reuters Web of Knowledge platform. EndNote Web was developed to assist researchers and students in the process of writing scientific publications. It allows users to create their own databases with personal bibliographic lists for citation in scholarly works.

  15. The health of Spanish university websites (La salud de las web universitarias españolas)

    Directory of Open Access Journals (Sweden)

    Thelwall, Mike

    2003-09-01

    Full Text Available The Web has become an important tool for universities, and one that is employed in a variety of ways. Examples are: disseminating and publicising research findings and activities; publishing teaching and administrative information for students; and collaborating with other institutions nationally and internationally. But how effectively are Spanish universities using the Web, and what information can be gained about online communication patterns through the study of Web links? This paper reports on an investigation of 64 university Web sites that were indexed using a specialist information-science Web crawler and analysed using associated software. There was a wide variety of sizes for university Web sites, and universities attracted links from others broadly in proportion to their site size. The Spanish academic Web was found to lag behind those of the four countries to which it was compared. However, the most commonly targeted top-level Internet domains were from non-Spanish-speaking, high Web-using countries around the world, showing a broad international perspective and a high degree of multilingualism among Web authors. The most highly targeted pages were mainly those that attracted automatically generated links, but several government ministries were a surprise inclusion.


  16. Engineering Web Applications

    DEFF Research Database (Denmark)

    Casteleyn, Sven; Daniel, Florian; Dolog, Peter

    Nowadays, Web applications are almost omnipresent. The Web has become a platform not only for information delivery, but also for eCommerce systems, social networks, mobile services, and distributed learning environments. Engineering Web applications involves many intrinsic challenges due...

  17. The National Shipbuilding Research Program. Proceedings of the IREAPS Technical Symposium, Paper No. 2: PARTGEN: An Advanced Interactive Method for Highly Automated Parts Generation Based on the Design Model Data

    Science.gov (United States)

    1982-09-01

    generated by report-generator facilities. The PARTGEN commands include all the AUTOPART commands familiar to the yards presently using this module. This means that all the geometry possibilities in AUTOPART are available in PARTGEN, and so are the macro facilities. This fact will make the transition from AUTOPART to PARTGEN easy and quick for old users. By using the PARTGEN module for production part generation, the actual part coding, as we know it

  18. A study on the Web intelligence

    Institute of Scientific and Technical Information of China (English)

    Sang-Geun Kim

    2004-01-01

    This paper surveys important aspects of Web Intelligence (WI). WI explores the fundamental roles as well as the practical impacts of Artificial Intelligence (AI) and advanced Information Technology (IT) on the next generation of Web-related products, systems, and activities. As a direction for scientific research and development, WI can be extremely beneficial for the field of Artificial Intelligence in Education (AIED). This paper covers these issues only very briefly. It focuses more on other issues in WI, such as intelligent Web services and the Semantic Web, and proposes how to use them as a basis for tackling new and challenging research problems in AIED.

  19. Towards semantic web mining

    OpenAIRE

    Berendt, Bettina; Hotho, Andreas; Stumme, Gerd

    2002-01-01

    Semantic Web Mining aims at combining the two fast-developing research areas Semantic Web and Web Mining. The idea is to improve, on the one hand, the results of Web Mining by exploiting the new semantic structures in the Web; and to make use of Web Mining, on the other hand, for building up the Semantic Web. This paper gives an overview of where the two areas meet today, and sketches ways of how a closer integration could be profitable.

  20. Evaluating Health Advice in a Web 2.0 Environment: The Impact of Multiple User-Generated Factors on HIV Advice Perceptions.

    Science.gov (United States)

    Walther, Joseph B; Jang, Jeong-Woo; Hanna Edwards, Ashley A

    2016-12-02

    Unlike traditional media, social media systems often present information of different types from different kinds of contributors within a single message pane, a juxtaposition of potential influences that challenges traditional health communication processing. One type of social media system, question-and-answer advice systems, provides peers' answers to health-related questions, which yet other peers read and rate. Responses may appear good or bad, responders may claim expertise, and others' aggregated evaluations of an answer's usefulness may affect readers' judgments. An experiment explored how answer feasibility, expertise claims, and user-generated ratings affected readers' assessments of advice about anonymous HIV testing. Results extend the heuristic-systematic model of persuasion (Chaiken, 1980) and warranting theory (Walther & Parks, 2002). Information that is generally associated with both systematic and heuristic processes influenced readers' evaluations. Moreover, content-level cues affected judgments about message sources unexpectedly. When conflicting cues were present, cues with greater warranting value (consensus user-generated ratings) had greater influence on outcomes than less warranted cues (self-promoted expertise). Findings present a challenge to health professionals' concerns about the reliability of online health information systems.

  1. From the Bench to the Bedside: The Role of Semantic Web and Translational Medicine for Enabling the Next Generation Healthcare Enterprise

    Science.gov (United States)

    Kashyap, Vipul

    The success of new innovations and technologies is very often disruptive in nature. At the same time, they enable novel next-generation infrastructures and solutions. These solutions introduce great efficiencies in the form of efficient processes and the ability to create, organize, share and manage knowledge effectively, and at the same time they provide crucial enablers for proposing and realizing new visions. In this paper, we propose a new vision of the next generation healthcare enterprise and discuss how Translational Medicine, which aims to improve communication between the basic and clinical sciences, is a key requirement for achieving this vision. This will lead to therapeutic insights being derived from new scientific ideas - and vice versa. Translational research goes from bench to bedside, where theories emerging from preclinical experimentation are tested on disease-affected human subjects, and from bedside to bench, where information obtained from preliminary human experimentation can be used to refine our understanding of the biological principles underpinning the heterogeneity of human disease and polymorphism(s). Informatics, and semantic technologies in particular, have a big role to play in making this a reality. We identify critical requirements, viz., data integration, clinical decision support, and knowledge maintenance and provenance; and illustrate semantics-based solutions with respect to example scenarios and use cases.

  2. The Social Web and Learning

    NARCIS (Netherlands)

    Wijngaards, Guus

    2008-01-01

    On the internet we see a continuously growing generation of web applications enabling anyone to create and publish online content in a simple way, to link content and to share it with others: wellknown instances include MySpace, Facebook, YouTube, Flickr, Wikipedia and Google Earth. The internet has

  3. The Social Web and Learning

    NARCIS (Netherlands)

    dr. Guus Wijngaards

    2008-01-01

    On the internet we see a continuously growing generation of web applications enabling anyone to create and publish online content in a simple way, to link content and to share it with others: wellknown instances include MySpace, Facebook, YouTube, Flickr, Wikipedia and Google Earth. The internet has

  4. Web TA Production (WebTA)

    Data.gov (United States)

    US Agency for International Development — WebTA is a web-based time and attendance system that supports USAID payroll administration functions, and is designed to capture hours worked, leave used and...

  5. A New Hidden Web Crawling Approach

    OpenAIRE

    L.Saoudi; A.Boukerram; S.Mhamedi

    2015-01-01

    Traditional search engines deal with the Surface Web, the set of Web pages directly accessible through hyperlinks, and ignore a large part of the Web called the hidden Web: a great amount of valuable information in online databases that is “hidden” behind query forms. To access that information, the crawler has to fill the forms with valid data; for this reason, we propose a new approach that uses an SQLI technique in order to find the most promising keywords of a specific domain...
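
    The generic form-probing idea can be sketched as follows; the URL, field name and scoring heuristic are hypothetical stand-ins, and the paper's SQLI-based keyword discovery is not reproduced here:

        import requests

        FORM_URL = "http://example.org/search"    # hypothetical hidden-Web form

        def probe(keyword):
            resp = requests.get(FORM_URL, params={"q": keyword}, timeout=10)
            return resp.text.count('<li class="result">')   # crude result count

        candidates = ["protein", "genome", "enzyme"]
        ranked = sorted(candidates, key=probe, reverse=True)
        print("most promising keywords first:", ranked)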

  6. Semantic web for dummies

    CERN Document Server

    Pollock, Jeffrey T

    2009-01-01

    Semantic Web technology is already changing how we interact with data on the Web. By connecting random information on the Internet in new ways, Web 3.0, as it is sometimes called, represents an exciting online evolution. Whether you're a consumer doing research online, a business owner who wants to offer your customers the most useful Web site, or an IT manager eager to understand Semantic Web solutions, Semantic Web For Dummies is the place to start! It will help you: know how the typical Internet user will recognize the effects of the Semantic Web; explore all the benefits the data Web offers t

  7. Design and Analysis of Web Application Frameworks

    DEFF Research Database (Denmark)

    Schwarz, Mathias Romme

    Numerous web application frameworks have been developed in recent years. These frameworks enable programmers to reuse common components and to avoid typical pitfalls in web application development. Although such frameworks help the programmer to avoid many common errors, we find state manipulation vulnerabilities. The hypothesis of this dissertation is that we can design frameworks and static analyses that aid the programmer to avoid such errors. First, we present the JWIG web application framework for writing secure and maintainable web applications. We discuss how this framework solves some of the common errors through an API that is designed to be safe by default. Second, we present a novel technique for checking HTML validity for output that is generated by web applications. Through string analysis, we approximate the output of web applications as context-free grammars. We model
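
    As a toy dynamic analogue of that static check (the dissertation's technique analyzes all possible outputs via context-free grammar approximations; this stand-in only validates one concrete output string), a well-nestedness checker:

        from html.parser import HTMLParser

        VOID = {"br", "img", "hr", "meta", "input", "link"}   # no closing tag

        class NestingChecker(HTMLParser):
            def __init__(self):
                super().__init__()
                self.stack, self.ok = [], True
            def handle_starttag(self, tag, attrs):
                if tag not in VOID:
                    self.stack.append(tag)
            def handle_endtag(self, tag):
                if not self.stack or self.stack.pop() != tag:
                    self.ok = False

        checker = NestingChecker()
        checker.feed("<div><p>generated output</div></p>")       # mis-nested on purpose
        print("well nested:", checker.ok and not checker.stack)  # False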

  8. Mass spectrometry and Web 2.0.

    Science.gov (United States)

    Murray, Kermit K

    2007-10-01

    The term Web 2.0 is a convenient shorthand for a new era of the Internet in which users themselves both generate and modify existing web content. Several types of tools can be used. With social bookmarking, users assign a keyword to a web resource, and the collection of keyword 'tags' from multiple users forms the classification of these resources. Blogs are a form of diary or news report published on the web in reverse chronological order and are a popular form of information sharing. A wiki is a website that can be edited using a web browser and can be used for collaborative creation of information on the site. This article is a tutorial that describes how these new ways of creating, modifying, and sharing information on the Web are being used for on-line mass spectrometry resources.

  9. Comparing cosmic web classifiers using information theory

    CERN Document Server

    Leclercq, Florent; Jasche, Jens; Wandelt, Benjamin

    2016-01-01

    We introduce a decision scheme for optimally choosing a classifier, which segments the cosmic web into different structure types (voids, sheets, filaments, and clusters). Our framework, based on information theory, accounts for the design aims of different classes of possible applications: (i) parameter inference, (ii) model selection, and (iii) prediction of new observations. As an illustration, we use cosmographic maps of web-types in the Sloan Digital Sky Survey to assess the relative performance of the classifiers T-web, DIVA and ORIGAMI for: (i) analyzing the morphology of the cosmic web, (ii) discriminating dark energy models, and (iii) predicting galaxy colors. Our study substantiates a data-supported connection between cosmic web analysis and information theory, and paves the path towards principled design of analysis procedures for the next generation of galaxy surveys. We have made the cosmic web maps, galaxy catalog, and analysis scripts used in this work publicly available.

  10. Recent advances of the Semantic Web

    Institute of Scientific and Technical Information of China (English)

    Zhao-hui WU

    2012-01-01

    The World Wide Web (WWW) has become an indispensable medium in our daily life. It is frequently used in our day-to-day operations to procure a solution for a difficult problem, communicate and socialize with others, reserve hotels and book tickets to arrange our trips, seek business opportunities, entertain ourselves, and so forth. Without the Web, our life may not have been what it is today. However, a number of questions remain: Has the Web reached its full potential? Can it change our life more than what we have seen? What will the Web look like 20 years from now? One key question out of all is: Could the Web be smarter and more intelligent than before, considering the overwhelming Web content generated at such an exponential rate?

  11. Comparing cosmic web classifiers using information theory

    Science.gov (United States)

    Leclercq, Florent; Lavaux, Guilhem; Jasche, Jens; Wandelt, Benjamin

    2016-08-01

    We introduce a decision scheme for optimally choosing a classifier, which segments the cosmic web into different structure types (voids, sheets, filaments, and clusters). Our framework, based on information theory, accounts for the design aims of different classes of possible applications: (i) parameter inference, (ii) model selection, and (iii) prediction of new observations. As an illustration, we use cosmographic maps of web-types in the Sloan Digital Sky Survey to assess the relative performance of the classifiers T-WEB, DIVA and ORIGAMI for: (i) analyzing the morphology of the cosmic web, (ii) discriminating dark energy models, and (iii) predicting galaxy colors. Our study substantiates a data-supported connection between cosmic web analysis and information theory, and paves the path towards principled design of analysis procedures for the next generation of galaxy surveys. We have made the cosmic web maps, galaxy catalog, and analysis scripts used in this work publicly available.
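
    The decision scheme described in this record can be illustrated compactly: given each classifier's posterior probabilities over structure types and an application-specific utility function, one selects the classifier with the highest expected utility. The sketch below is illustrative only; the posterior values, the utility matrix, and the single-voxel setting are invented assumptions, not numbers from the paper.

```python
import numpy as np

# Structure types segmented by cosmic web classifiers.
TYPES = ["void", "sheet", "filament", "cluster"]

# Hypothetical posterior probabilities P(type | data) that each classifier
# assigns to one voxel of the map (each row sums to 1). Placeholder numbers.
posteriors = {
    "T-WEB":   np.array([0.55, 0.25, 0.15, 0.05]),
    "DIVA":    np.array([0.40, 0.35, 0.20, 0.05]),
    "ORIGAMI": np.array([0.70, 0.15, 0.10, 0.05]),
}

# Hypothetical utility U[i, j]: gain of deciding type j when the true type
# is i. The diagonal rewards correct decisions, mistakes carry a penalty.
utility = np.eye(len(TYPES)) - 0.25 * (1 - np.eye(len(TYPES)))

def expected_utility(p):
    """Expected utility of the best action under posterior p."""
    gains = p @ utility          # expected gain of each candidate action
    return gains.max()

best = max(posteriors, key=lambda name: expected_utility(posteriors[name]))
for name, p in posteriors.items():
    print(f"{name:8s} expected utility = {expected_utility(p):.3f}")
print("preferred classifier:", best)
```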

  12. Discovery and Classification of Bioinformatics Web Services

    Energy Technology Data Exchange (ETDEWEB)

    Rocco, D; Critchlow, T

    2002-09-02

    The transition of the World Wide Web from a paradigm of static Web pages to one of dynamic Web services provides new and exciting opportunities for bioinformatics with respect to data dissemination, transformation, and integration. However, the rapid growth of bioinformatics services, coupled with non-standardized interfaces, diminishes the potential that these Web services offer. To face this challenge, we examine the notion of a Web service class that defines the functionality provided by a collection of interfaces. These descriptions are an integral part of a larger framework that can be used to discover, classify, and wrap Web services automatically. We discuss how this framework can be used in the context of the proliferation of sites offering BLAST sequence alignment services for specialized data sets.

  13. Automating Information Discovery Within the Invisible Web

    Science.gov (United States)

    Sweeney, Edwina; Curran, Kevin; Xie, Ermai

    A Web crawler or spider crawls through the Web looking for pages to index, and when it locates a new page it passes the page on to an indexer. The indexer identifies links, keywords, and other content and stores these within its database. This database is searched by entering keywords through an interface and suitable Web pages are returned in a results page in the form of hyperlinks accompanied by short descriptions. The Web, however, is increasingly moving away from being a collection of documents to a multidimensional repository for sounds, images, audio, and other formats. This is leading to a situation where certain parts of the Web are invisible or hidden. The term known as the "Deep Web" has emerged to refer to the mass of information that can be accessed via the Web but cannot be indexed by conventional search engines. The concept of the Deep Web makes searches quite complex for search engines. Google states that the claim that conventional search engines cannot find such documents as PDFs, Word, PowerPoint, Excel, or any non-HTML page is not fully accurate and steps have been taken to address this problem by implementing procedures to search items such as academic publications, news, blogs, videos, books, and real-time information. However, Google still only provides access to a fraction of the Deep Web. This chapter explores the Deep Web and the current tools available in accessing it.
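
    The crawler-to-indexer pipeline described in this record can be illustrated in a few lines. The following is a minimal sketch using only the Python standard library, not a production crawler: the seed URL is a placeholder, robots.txt handling is omitted, and the "index" is just an in-memory keyword map.

```python
from html.parser import HTMLParser
from urllib.parse import urljoin
from urllib.request import urlopen

class LinkAndTextParser(HTMLParser):
    """Collects hyperlinks and visible text from one HTML page."""
    def __init__(self):
        super().__init__()
        self.links, self.words = [], []
    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)
    def handle_data(self, data):
        self.words.extend(data.lower().split())

def crawl(seed, max_pages=10):
    """Breadth-first crawl that passes each fetched page to an indexer."""
    index, queue, seen = {}, [seed], set()
    while queue and len(seen) < max_pages:
        url = queue.pop(0)
        if url in seen:
            continue
        seen.add(url)
        try:
            html = urlopen(url, timeout=5).read().decode("utf-8", "replace")
        except OSError:
            continue                       # unreachable page: skip it
        parser = LinkAndTextParser()
        parser.feed(html)
        for word in parser.words:          # indexer: keyword -> pages
            index.setdefault(word, set()).add(url)
        queue.extend(urljoin(url, link) for link in parser.links)
    return index

# Example with a placeholder seed URL:
# index = crawl("https://example.org/")
# print(sorted(index.get("web", ())))
```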

  14. WEB GIS: IMPLEMENTATION ISSUES

    Institute of Scientific and Technical Information of China (English)

    2000-01-01

    With the rapid expansion and development of the Internet and WWW (World Wide Web or Web), Web GIS (Web Geographical Information System) is becoming ever more popular and, as a result, numerous sites have added GIS capability to their Web sites. In this paper, the reasons behind developing a Web GIS instead of a “traditional” GIS are first outlined. Then the current status of Web GIS is reviewed, and implementation methodologies are explored as well. The underlying technologies for developing Web GIS, such as the Web server, Web browser, CGI (Common Gateway Interface), Java, and ActiveX, are discussed, and some typical implementation tools from both the commercial and public domain are given as well. Finally, the future development direction of Web GIS is predicted.

  15. Global Web Accessibility Analysis of National Government Portals and Ministry Web Sites

    DEFF Research Database (Denmark)

    Goodwin, Morten; Susar, Deniz; Nietzio, Annika

    2011-01-01

    Equal access to public information and services for all is an essential part of the United Nations (UN) Declaration of Human Rights. Today, the Web plays an important role in providing information and services to citizens. Unfortunately, many government Web sites are poorly designed and have accessibility barriers that prevent people with disabilities from using them. This article combines current Web accessibility benchmarking methodologies with a sound strategy for comparing Web accessibility among countries and continents, and presents the first global analysis of the Web accessibility of 192 United Nations Member States, made publicly available. The article also identifies common properties of Member States that have accessible and inaccessible Web sites and shows that implementing antidisability discrimination laws is highly beneficial for the accessibility of Web sites, while signing the UN Rights and Dignity of Persons with Disabilities has had no such effect yet. The article demonstrates that, despite the commonly held assumption to the contrary, mature, high-quality Web sites are more accessible than lower quality ones. Moreover, Web accessibility conformance claims by Web...

  16. Global Web Accessibility Analysis of National Government Portals and Ministry Web Sites

    DEFF Research Database (Denmark)

    Goodwin, Morten; Susar, Deniz; Nietzio, Annika

    2011-01-01

    Equal access to public information and services for all is an essential part of the United Nations (UN) Declaration of Human Rights. Today, the Web plays an important role in providing information and services to citizens. Unfortunately, many government Web sites are poorly designed and have accessibility barriers that prevent people with disabilities from using them. This article combines current Web accessibility benchmarking methodologies with a sound strategy for comparing Web accessibility among countries and continents. Furthermore, the article presents the first global analysis of the Web accessibility of 192 United Nations Member States, made publicly available. The article also identifies common properties of Member States that have accessible and inaccessible Web sites and shows that implementing antidisability discrimination laws is highly beneficial for the accessibility of Web sites, while signing the UN Rights and Dignity of Persons with Disabilities has had no such effect yet.

  17. Semantics-based Automated Web Testing

    OpenAIRE

    Hai-Feng Guo; Qing Ouyang; Harvey Siy

    2015-01-01

    We present TAO, a software testing tool performing automated test and oracle generation based on a semantic approach. TAO entangles grammar-based test generation with automated semantics evaluation using a denotational semantics framework. We show how TAO can be incorporated with the Selenium automation tool for automated web testing, and how TAO can be further extended to support automated delta debugging, where a failing web test script can be systematically reduced based on grammar-directed strategies.

  18. DATA EXTRACTION AND LABEL ASSIGNMENT FOR WEB DATABASES

    Directory of Open Access Journals (Sweden)

    T. Rajesh

    2015-10-01

    Full Text Available Deep Web contents are accessed by queries submitted to Web databases, and the returned data records are wrapped in dynamically generated Web pages (called deep Web pages in this paper). Extracting structured data from deep Web pages is a challenging problem due to the underlying intricate structures of such pages. Until now, a large number of techniques have been proposed to address this problem, but all of them have limitations because they are Web-page-programming-language dependent.

  19. Exposing the structure of an arctic food web

    NARCIS (Netherlands)

    Wirta, Helena K; Vesterinen, Eero J; Hambäck, Peter A.; Weingartner, Elisabeth; Rasmussen, Claus; Reneerkens, Jeroen; Schmidt, Niels M; Gilg, Olivier; Roslin, Tomas

    2015-01-01

    How food webs are structured has major implications for their stability and dynamics. While poorly studied to date, arctic food webs are commonly assumed to be simple in structure, with few links per species. If this is the case, then different parts of the web may be weakly connected to each other,

  20. Lost but not forgotten: finding pages on the unarchived web

    NARCIS (Netherlands)

    Huurdeman, H.C.; Kamps, J.; Samar, T.; Vries, A.P. de; Ben-David, A.; Rogers, R.A.

    2015-01-01

    Web archives attempt to preserve the fast changing web, yet they will always be incomplete. Due to restrictions in crawling depth, crawling frequency, and restrictive selection policies, large parts of the Web are unarchived and, therefore, lost to posterity. In this paper, we propose an approach to

  1. Web Accessibility Theory and Practice: An Introduction for University Faculty

    Science.gov (United States)

    Bradbard, David A.; Peters, Cara

    2010-01-01

    Web accessibility is the practice of making Web sites accessible to all, particularly those with disabilities. As the Internet becomes a central part of post-secondary instruction, it is imperative that instructional Web sites be designed for accessibility to meet the needs of disabled students. The purpose of this article is to introduce Web…

  2. A simple method for serving Web hypermaps with dynamic database drill-down

    Directory of Open Access Journals (Sweden)

    Carson Ewart R

    2002-08-01

    Full Text Available Abstract. Background: HealthCyberMap http://healthcybermap.semanticweb.org aims at mapping parts of health information cyberspace in novel ways to deliver a semantically superior user experience. This is achieved through "intelligent" categorisation and interactive hypermedia visualisation of health resources using metadata, clinical codes and GIS. HealthCyberMap is an ArcView 3.1 project. WebView, the Internet extension to ArcView, publishes HealthCyberMap ArcView Views as Web client-side imagemaps. The basic WebView set-up does not support any GIS database connection, and published Web maps become disconnected from the original project. A dedicated Internet map server would be the best way to serve HealthCyberMap database-driven interactive Web maps, but it is an expensive and complex solution to acquire, run and maintain. This paper describes HealthCyberMap's simple, low-cost method for "patching" WebView to serve hypermaps with dynamic database drill-down functionality on the Web. Results: The proposed solution is currently used for publishing HealthCyberMap GIS-generated navigational information maps on the Web while maintaining their links with the underlying resource metadata base. Conclusion: The authors believe their map serving approach as adopted in HealthCyberMap has been very successful, especially in cases when only map attribute data change without a corresponding effect on map appearance. It should also be possible to use the same solution to publish other interactive GIS-driven maps on the Web, e.g., maps of real world health problems.

  3. Traductor Writing System Web

    CERN Document Server

    Texier, Jose

    2012-01-01

    A compiler is a program, developed in a programming language, that reads a file known as the source. This file is then translated and converted into another program, known as the object, or used to generate output. The best way to learn any programming language is to analyze the compilation process, which is the same in all existing programming paradigms. We would like to produce a tool that supports learning in a university course and that can be used on any platform, such as Linux or Windows. This goal is achieved by developing a Web application coupled with a compiler: the Traductor Writing System (Sistema de Escritura de Traductores). The system is complete and allows the compiler to be extended and modified. The system is a module in Moodle, a Course Management System (CMS) that helps teachers create online learning communities. The software is released under a free software license (GPL).

  4. Xerox trails: a new web-based publishing technology

    Science.gov (United States)

    Rao, Venkatesh G.; Vandervort, David; Silverstein, Jesse

    2010-02-01

    Xerox Trails is a new digital publishing model developed at the Xerox Research Center, Webster. The primary purpose of the technology is to allow Web users and publishers to collect, organize and present information in the form of a useful annotated narrative (possibly non-sequential) with editorial content and metadata, that can be consumed both online and offline. The core concept is a trail: a digital object that improves online content production, consumption and navigation user experiences. When appropriate, trails can also be easily sequenced and transformed into printable documents, thereby bridging the gap between online and offline content experiences. The model is partly inspired by Vannevar Bush's influential idea of the "Memex" [1] which has inspired several generations of Web technology [2]. Xerox Trails is a realization of selected elements from the idea of the Memex, along with several original design ideas. It is based on a primitive data construct, the trail. In Xerox Trails, the idea of a trail is used to support the architecture of a Web 2.0 product suite called Trailmeme, that includes a destination Web site, plugins for major content management systems, and a browser toolbar.

  5. Personalized Web Services for Web Information Extraction

    CERN Document Server

    Jarir, Zahi; Erradi, Mahammed

    2011-01-01

    The field of information extraction from the Web emerged with the growth of the Web and the multiplication of online data sources. This paper is an analysis of information extraction methods. It presents a service oriented approach for web information extraction considering both web data management and extraction services. Then we propose an SOA based architecture to enhance flexibility and on-the-fly modification of web extraction services. An implementation of the proposed architecture is proposed on the middleware level of Java Enterprise Edition (JEE) servers.

  6. A rapid method of generating dynamic permission tree for Web database application

    Institute of Scientific and Technical Information of China (English)

    景民; 万其明; 韩志军; 杨艳萍

    2012-01-01

    A rapid method of generating a dynamic permission tree for Web database applications is presented in this paper. It uses a range of integrated optimization techniques, including the implementation of a Tree View component in the client Web browser, the design of database tables supporting dynamic rights allocation, and a rapid algorithm that generates the permission tree through a single visit to the database. By allocating the computational load reasonably among computing units, this method effectively reduces the waiting time for interaction and communication, and it employs efficient algorithms to fully tap the potential of each computing unit. The method has been adopted in the development of the DAG system, where it has reached a satisfying engineering effect. An overall comparison with current research findings shows that this method is fast and efficient, and has considerable value for research and development.
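
    The core idea, building the whole permission tree from a single database round-trip, can be sketched as follows. This is a minimal illustration rather than the paper's algorithm: the table layout (id, parent_id, name) and the sample rows are assumptions.

```python
def build_permission_tree(rows):
    """Build a nested permission tree from flat (id, parent_id, name) rows
    fetched with a single query such as:
        SELECT id, parent_id, name FROM permissions
    """
    # First pass: one node per row, indexed by id.
    nodes = {rid: {"id": rid, "name": name, "children": []}
             for rid, _, name in rows}
    roots = []
    # Second pass: attach each node to its parent in O(1) per row.
    for rid, parent_id, _ in rows:
        if parent_id is None:              # top-level permission
            roots.append(nodes[rid])
        else:
            nodes[parent_id]["children"].append(nodes[rid])
    return roots

# Hypothetical rows as they might come back from the single query:
rows = [
    (1, None, "System"),
    (2, 1, "User Management"),
    (3, 1, "Reports"),
    (4, 2, "Create User"),
]
print(build_permission_tree(rows))
```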

  7. Web content a writer's guide

    CERN Document Server

    Mizrahi, Janet

    2013-01-01

    The explosion of electronic sources, whether in the form of news, commentary, sales and marketing, or information, has created boundless opportunities for producing content. Whether you're an entrepreneur with a start-up business who needs a website, an executive who uses social media to connect with various stakeholders, or a content provider blogging about topical issues, you'll need to know how to write for the web and address the unique environment of the digital world. This book will help you produce web content that generates results. Writing for the screen differs from writing for a pri

  8. A SEMANTICALLY ENRICHED WEB USAGE BASED RECOMMENDATION MODEL

    Directory of Open Access Journals (Sweden)

    C.Ramesh

    2011-11-01

    Full Text Available With the rapid growth of internet technologies, the Web has become a huge repository of information and keeps growing exponentially under no editorial control. However, the human capability to read, access and understand Web content remains constant. This motivated researchers to provide personalized online services such as Web recommendations to alleviate the information overload problem and provide tailored Web experiences to Web users. Recent studies show that Web usage mining has emerged as a popular approach to providing Web personalization. However, conventional Web usage based recommender systems are limited in their ability to use the domain knowledge of the Web application. The focus is only on Web usage data. As a consequence, the quality of the discovered patterns is low. In this paper, we propose a novel framework integrating semantic information into the Web usage mining process. A Sequential Pattern Mining technique is applied over the semantic space to discover the frequent sequential patterns. The frequent navigational patterns are extracted in the form of Ontology instances instead of Web page views, and the resultant semantic patterns are used for generating Web page recommendations to the user. The experimental results shown are promising and prove that incorporating semantic information into the Web usage mining process can provide us with more interesting patterns, which consequently make the recommendation system more functional, smarter and comprehensive.

  9. Web Analytics

    OpenAIRE

    Mužík, Zbyněk

    2006-01-01

    The thesis deals with measuring indicators related to the operation of web sites and applications, and with the technological means serving this purpose: Web Analytics (WA). The main goal of the thesis is to test and compare selected representatives of these tools and to subject them to a comparison according to objective criteria, as well as to critically evaluate the capabilities of WA tools in general. In the first part, the thesis focuses on describing different ways of measuring traffic on the WWW and defines the related metrics. It also provides an overview of the availab...

  10. Semantics-based Automated Web Testing

    Directory of Open Access Journals (Sweden)

    Hai-Feng Guo

    2015-08-01

    Full Text Available We present TAO, a software testing tool performing automated test and oracle generation based on a semantic approach. TAO entangles grammar-based test generation with automated semantics evaluation using a denotational semantics framework. We show how TAO can be incorporated with the Selenium automation tool for automated web testing, and how TAO can be further extended to support automated delta debugging, where a failing web test script can be systematically reduced based on grammar-directed strategies. A real-life parking website is adopted throughout the paper to demonstrate the effectiveness of our semantics-based web testing approach.
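
    As a rough illustration of how grammar-based test generation can drive Selenium, the sketch below derives random inputs from a toy grammar and submits them to a form. It is a sketch of the general technique, not TAO itself; the URL, field names, and grammar rules are invented for the example.

```python
import random
from selenium import webdriver
from selenium.webdriver.common.by import By

# Toy grammar: each nonterminal maps to a list of alternatives, and an
# alternative is a sequence of terminals/nonterminals.
GRAMMAR = {
    "<duration>": [["<digit>", "<unit>"]],
    "<digit>": [["1"], ["2"], ["30"]],
    "<unit>": [["m"], ["h"]],
}

def generate(symbol):
    """Randomly expand a grammar symbol into a terminal string."""
    if symbol not in GRAMMAR:
        return symbol                      # terminal symbol
    alternative = random.choice(GRAMMAR[symbol])
    return "".join(generate(s) for s in alternative)

driver = webdriver.Chrome()                # assumes a chromedriver install
driver.get("https://example.org/parking")  # placeholder site
for _ in range(5):
    test_input = generate("<duration>")
    field = driver.find_element(By.NAME, "duration")  # hypothetical field
    field.clear()
    field.send_keys(test_input)
    driver.find_element(By.NAME, "submit").click()    # hypothetical button
    # A real oracle would now evaluate the resulting page against the
    # semantics; here we only record what was exercised.
    print("tested input:", test_input)
driver.quit()
```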

  11. Customizable scientific web portal for fusion research

    Energy Technology Data Exchange (ETDEWEB)

    Abla, G., E-mail: abla@fusion.gat.co [General Atomics, P.O. Box 85608, San Diego, CA (United States); Kim, E.N.; Schissel, D.P.; Flanagan, S.M. [General Atomics, P.O. Box 85608, San Diego, CA (United States)

    2010-07-15

    Web browsers have become a major application interface for participating in scientific experiments such as those in magnetic fusion. The recent advances in web technologies motivated the deployment of interactive web applications with rich features. In the scientific world, web applications have been deployed in portal environments. When used in a scientific research environment, such as fusion experiments, web portals can present diverse sources of information in a unified interface. However, the design and development of a scientific web portal has its own challenges. One such challenge is that a web portal needs to be fast and interactive despite the high volume of information and number of tools it presents. Another challenge is that the visual output of the web portal must not be overwhelming to the end users, despite the high volume of data generated by fusion experiments. Therefore, the applications and information should be customizable depending on the needs of end users. In order to meet these challenges, the design and implementation of a web portal needs to support high interactivity and user customization. A web portal has been designed to support the experimental activities of DIII-D researchers worldwide by providing multiple services, such as real-time experiment status monitoring, diagnostic data access and interactive data visualization. The web portal also supports interactive collaborations by providing a collaborative logbook, shared visualization and online instant messaging services. The portal's design utilizes the multi-tier software architecture and has been implemented utilizing web 2.0 technologies, such as AJAX, Django, and Memcached, to develop a highly interactive and customizable user interface. It offers a customizable interface with personalized page layouts and list of services, which allows users to create a unique, personalized working environment to fit their own needs and interests. This paper describes the software

  12. Theoretical Foundations of the Web: Cognition, Communication, and Co-Operation. Towards an Understanding of Web 1.0, 2.0, 3.0

    Directory of Open Access Journals (Sweden)

    Robert Bichler

    2010-02-01

    Full Text Available Currently, there is much talk of Web 2.0 and Social Software. A common understanding of these notions is not yet in existence. The question of what makes Social Software social has thus far also remained unacknowledged. In this paper we provide a theoretical understanding of these notions by outlining a model of the Web as a techno-social system that enhances human cognition towards communication and co-operation. According to this understanding, we identify three qualities of the Web, namely Web 1.0 as a Web of cognition, Web 2.0 as a Web of human communication, and Web 3.0 as a Web of co-operation. We use the terms Web 1.0, Web 2.0, Web 3.0 not in a technical sense, but for describing and characterizing the social dynamics and information processes that are part of the Internet.

  13. Instant responsive web design

    CERN Document Server

    Simmons, Cory

    2013-01-01

    A step-by-step tutorial approach which will teach the readers what responsive web design is and how it is used in designing a responsive web page. If you are a web designer looking to expand your skill set by learning the quickly growing industry standard of responsive web design, this book is ideal for you. Knowledge of CSS is assumed.

  14. Handbook of web surveys

    NARCIS (Netherlands)

    Bethlehem, J.; Biffignandi, S.

    2012-01-01

    Best practices to create and implement highly effective web surveys. Exclusively combining design and sampling issues, Handbook of Web Surveys presents a theoretical yet practical approach to creating and conducting web surveys. From the history of web surveys to various modes of data collection to tips for detecting error...

  15. Magpie: customizing users' experiences when browsing on the semantic web

    OpenAIRE

    Dzbor, Martin; Domingue, John; Motta, Enrico

    2004-01-01

    We describe several advanced functionalities of Magpie, a tool that assists users with interpreting web resources. Magpie is an extension to Internet Explorer that automatically creates a semantic layer for web pages using a user-selected ontology. Semantic layers are annotations of a web page, with a set of applicable semantic services attached to the annotated items. We argue that the ability to generate different semantic layers for a web resource is vital to support the interpre...

  16. Detection And Classification Of Web Robots With Honeypots

    Science.gov (United States)

    2016-03-01

    ...programs has been attributed to the explosion in content and user-generated social media on the Internet. Web search engines like Google require... large numbers of automated bots on the Web to build their indexes. Furthermore, the growth of the Internet has produced a market for businesses, both... played an important role in its evolution and growth. Conversely, the "bad" Web robots have been and continue to be a significant problem. Bad Web robots...

  17. Geospatial semantic web

    CERN Document Server

    Zhang, Chuanrong; Li, Weidong

    2015-01-01

    This book covers key issues related to Geospatial Semantic Web, including geospatial web services for spatial data interoperability; geospatial ontology for semantic interoperability; ontology creation, sharing, and integration; querying knowledge and information from heterogeneous data source; interfaces for Geospatial Semantic Web, VGI (Volunteered Geographic Information) and Geospatial Semantic Web; challenges of Geospatial Semantic Web; and development of Geospatial Semantic Web applications. This book also describes state-of-the-art technologies that attempt to solve these problems such as WFS, WMS, RDF, OWL, and GeoSPARQL, and demonstrates how to use the Geospatial Semantic Web technologies to solve practical real-world problems such as spatial data interoperability.

  18. Intelligent assistant for extracting semi-structured web data

    OpenAIRE

    Adžič, Nik

    2016-01-01

    In the reviewed literature we have not identified any existing approach that can convert data from semi-structured (web sites) or unstructured web sources to the RDF form and consequently integrate it into a Linked Data cloud. Therefore, our motivation and objective was to develop an intelligent assistant for extracting semi-structured web data. This intelligent assistant should automatically identify and select parts of web data; some of those web data should be selected by a business user without ...

  19. An Automatic Web Service Composition Framework Using QoS-Based Web Service Ranking Algorithm.

    Science.gov (United States)

    Mallayya, Deivamani; Ramachandran, Baskaran; Viswanathan, Suganya

    2015-01-01

    Web service has become the technology of choice for service oriented computing to meet the interoperability demands of web applications. In the Internet era, the exponential addition of web services makes the "quality of service" an essential parameter in discriminating among web services. In this paper, a user preference based web service ranking (UPWSR) algorithm is proposed to rank web services based on user preferences and the QoS aspects of the web service. When the user's request cannot be fulfilled by a single atomic service, several existing services should be composed and delivered as a composition. The proposed framework allows the user to specify local and global constraints for composite web services, which improves flexibility. The UPWSR algorithm identifies the best fit services for each task in the user request and, by choosing the number of candidate services for each task, reduces the time to generate the composition plans. To tackle the problem of web service composition, the QoS aware automatic web service composition (QAWSC) algorithm proposed in this paper is based on the QoS aspects of the web services and user preferences. The proposed framework allows the user to provide feedback about the composite service, which improves the reputation of the services.
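
    A user-preference-weighted QoS ranking of candidate services, the general idea behind algorithms like UPWSR, can be sketched as below. The attribute names, weights, and scores are hypothetical; the paper's actual scoring model may differ.

```python
# Hypothetical QoS records: higher is better for availability,
# lower is better for response time and cost.
services = {
    "ServiceA": {"response_ms": 120, "availability": 0.99,  "cost": 0.05},
    "ServiceB": {"response_ms": 80,  "availability": 0.95,  "cost": 0.09},
    "ServiceC": {"response_ms": 200, "availability": 0.999, "cost": 0.02},
}

# User preference weights over the QoS attributes (sum to 1).
weights = {"response_ms": 0.5, "availability": 0.3, "cost": 0.2}
NEGATIVE = {"response_ms", "cost"}   # attributes to minimise

def normalised(attr, value):
    """Min-max normalise one attribute to [0, 1], flipping negative ones."""
    values = [qos[attr] for qos in services.values()]
    lo, hi = min(values), max(values)
    if hi == lo:
        return 1.0
    score = (value - lo) / (hi - lo)
    return 1.0 - score if attr in NEGATIVE else score

def rank(candidates):
    """Score every service by its weighted normalised QoS, best first."""
    scored = {
        name: sum(weights[a] * normalised(a, qos[a]) for a in weights)
        for name, qos in candidates.items()
    }
    return sorted(scored.items(), key=lambda kv: kv[1], reverse=True)

for name, score in rank(services):
    print(f"{name}: {score:.3f}")
```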

  20. Web Project Management

    OpenAIRE

    Suralkar, Sunita; Joshi, Nilambari; Meshram, B B

    2013-01-01

    This paper describes the need for Web project management and the fundamentals of project management for web projects: what it is, why projects go wrong, and what's different about web projects. We also discuss cost estimation techniques based on size metrics. Though Web project development is similar to traditional software development, the special characteristics of Web application development require the adaptation of many software engineering approaches or even the development of comple...

  1. Web Project Management

    OpenAIRE

    2013-01-01

    This paper describes the need for Web project management and the fundamentals of project management for web projects: what it is, why projects go wrong, and what's different about web projects. We also discuss cost estimation techniques based on size metrics. Though Web project development is similar to traditional software development, the special characteristics of Web application development require the adaptation of many software engineering approaches or even the development of comple...

  2. Web Science 2015

    OpenAIRE

    Boucher, Andy; Cameron, David; Gaver, William; Hauenstein, Mark; Jarvis, Nadine; Kerridge, Tobie; Michael, Mike; Ovalle, Liliana; Pennington, Sarah; Wilkie, Alex

    2015-01-01

    Web Science 2015 conference exhibition. Web Science is the emergent study of the people and technologies, applications, processes and practices that shape and are shaped by the World Wide Web. Web Science aims to draw together theories, methods and findings from across academic disciplines, and to collaborate with industry, business, government and civil society, to develop knowledge and understanding of the Web: the largest socio-technical infrastructure in human history.

  3. Analysis of Web Proxy Logs

    Science.gov (United States)

    Fei, Bennie; Eloff, Jan; Olivier, Martin; Venter, Hein

    Network forensics involves capturing, recording and analysing network audit trails. A crucial part of network forensics is to gather evidence at the server level, proxy level and from other sources. A web proxy relays URL requests from clients to a server. Analysing web proxy logs can give unobtrusive insights to the browsing behavior of computer users and provide an overview of the Internet usage in an organisation. More importantly, in terms of network forensics, it can aid in detecting anomalous browsing behavior. This paper demonstrates the use of a self-organising map (SOM), a powerful data mining technique, in network forensics. In particular, it focuses on how a SOM can be used to analyse data gathered at the web proxy level.
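
    To make the SOM idea concrete, the sketch below trains a tiny self-organising map on numeric features that might be derived from proxy log entries (request rate, bytes transferred, distinct hosts per user). The features, their scaling, and the map size are invented for illustration; this is not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-user features from a proxy log, scaled to [0, 1]:
# [request_rate, bytes_out, distinct_hosts]
X = rng.random((200, 3))

# A 5x5 SOM: one weight vector per map cell.
rows, cols, dim = 5, 5, X.shape[1]
W = rng.random((rows, cols, dim))

def bmu(x):
    """Best matching unit: the cell whose weights are closest to x."""
    d = np.linalg.norm(W - x, axis=2)
    return np.unravel_index(np.argmin(d), d.shape)

for t in range(2000):
    x = X[rng.integers(len(X))]
    bi, bj = bmu(x)
    lr = 0.5 * np.exp(-t / 1000)          # decaying learning rate
    radius = 2.0 * np.exp(-t / 1000)      # decaying neighbourhood radius
    for i in range(rows):
        for j in range(cols):
            dist2 = (i - bi) ** 2 + (j - bj) ** 2
            h = np.exp(-dist2 / (2 * radius ** 2))  # neighbourhood weight
            W[i, j] += lr * h * (x - W[i, j])

# Map every user onto the grid; sparsely populated cells far from the
# dense clusters are candidates for anomalous browsing behaviour.
hits = np.zeros((rows, cols), dtype=int)
for x in X:
    hits[bmu(x)] += 1
print(hits)
```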

  4. Control of a multivariable web winding system

    Directory of Open Access Journals (Sweden)

    N. RABBAH

    2007-12-01

    Full Text Available To guarantee good productivity and better quality in industrial systems, it is necessary to know the evolution of their various parameters and also their operation mode. The main goal of this work is to carry out a study to improve flatness control in a reversing cold rolling mill, precisely in the web winding system part. We designate as a winding process any system applying the cycles of unwinding, transport, treatment and winding to various flat products. This type of system is subject to several constraints, such as the thermal effects caused by friction and the mechanical effects provoked by metal elongation, which generate dysfunctions due to the influence of the process conditions. For this type of installation, the various automation functions, often very advanced, are realized in modular systems with distributed architecture. Our main goal is to obtain a precise thickness, with the best possible regularity. To this end, we proceed to the modelling and control of the nonlinear dynamic behaviour of a web winding process.

  5. Data management on the spatial web

    DEFF Research Database (Denmark)

    Jensen, Christian S.

    2012-01-01

    Due in part to the increasing mobile use of the web and the proliferation of geo-positioning, the web is fast acquiring a significant spatial aspect. Content and users are being augmented with locations that are used increasingly by location-based services. Studies suggest that each week, several billion web queries are issued that have local intent and target spatial web objects. These are points of interest with a web presence, and they thus have locations as well as textual descriptions. This development has given prominence to spatial web data management, an area ripe with new and exciting... functionality enabled by the setting. Further, the talk offers insight into the data management techniques capable of supporting such functionality.

  6. Web 2.0 Solutions to Wicked Climate Change Problems

    Directory of Open Access Journals (Sweden)

    Alanah Kazlauskas

    2010-01-01

    Full Text Available One of the most pressing ‘wicked problems’ facing humankind is climate change together with its many interrelated environmental concerns. The complexity of this set of problems can be overwhelming, as there is such diversity among both the interpretations of the scientific evidence and the viability of possible solutions. Among the social technologies associated with the second generation of the Internet known as Web 2.0, there are tools that allow people to communicate, coordinate and collaborate in ways that reduce their carbon footprint, with the potential to become part of the climate change solution. However, the way forward is not obvious or easy, as Web 2.0, while readily accepted in the chaotic social world, is often treated with suspicion in the more ordered world of business and government. This paper applies a holistic theoretical sense-making framework to research and practice on potential Web 2.0 solutions to climate change problems. The suite of issues, activities and tools involved is viewed as an ecosystem in which all elements are dynamic and interrelated. Through such innovative thinking, the Information Systems community can make a valuable contribution to a critical global problem and hence find a new relevance as part of the solution.

  7. DIRAC: Secure web user interface

    Energy Technology Data Exchange (ETDEWEB)

    Casajus Ramo, A [University of Barcelona, Diagonal 647, ES-08028 Barcelona (Spain); Sapunov, M, E-mail: sapunov@in2p3.f [Centre de Physique des Particules de Marseille, 163 Av de Luminy Case 902 13288 Marseille (France)

    2010-04-01

    Traditionally, the interaction between users and the Grid is done with command line tools. However, these tools are difficult for non-expert users, providing minimal help and generating outputs that are not always easy to understand, especially in case of errors. Graphical User Interfaces are typically limited to providing access to monitoring or accounting information and concentrate on particular aspects, failing to cover the full spectrum of grid control tasks. To make the Grid more user friendly, more complete graphical interfaces are needed. Within the DIRAC project we have attempted to construct a Web based User Interface that provides means not only for monitoring the system behavior but also for steering the main user activities on the grid. Using DIRAC's web interface a user can easily track jobs and data. It provides access to job information and allows performing actions on jobs such as killing or deleting. Data managers can define and monitor file transfer activity as well as check requests set by jobs. Production managers can define and follow large data productions and react if necessary by stopping or starting them. The Web Portal is built following all the grid security standards and using modern Web 2.0 technologies, which allow achieving a user experience similar to that of desktop applications. Details of the DIRAC Web Portal architecture and User Interface will be presented and discussed.

  8. A Typology for Web 2.0

    DEFF Research Database (Denmark)

    Dalsgaard, Christian; Sorensen, Elsebeth Korsgaard

    2008-01-01

    Web 2.0 is a term used to describe recent developments on the World Wide Web. The term is often used to describe the increased use of the web for user-generated content, collaboration, and social networking. However, Web 2.0 is a weakly defined concept, and it is unclear exactly what kind of technologies it covers. The objective of the paper is to develop a typology that can be used to categorize Web 2.0 technologies. Further, the paper will discuss which of these technologies are unique to Web 2.0. Often, Web 2.0 is described by way of different kinds of software; for instance, blogs and wikis... The typology suggested by this paper relates to four functions or use contexts, which are believed to be central to the potentials of Web 2.0: dialoging, networking and awareness-making, creating and sharing. Based on the typology, the paper identifies unique potentials of Web 2.0 in relation to design...

  9. Geo-processing workflow driven wildfire hot pixel detection under sensor web environment

    Science.gov (United States)

    Chen, Nengcheng; Di, Liping; Yu, Genong; Gong, Jianya

    2010-03-01

    Integrating Sensor Web Enablement (SWE) services with Geo-Processing Workflows (GPW) has become a bottleneck for Sensor Web-based applications, especially remote-sensing observations. This paper presents a common GPW framework for Sensor Web data services as part of the NASA Sensor Web project. This abstract framework includes abstract GPW model construction, GPW chains built from service combination, and data retrieval components. The concrete framework consists of a data service node, a data processing node, a data presentation node, a Catalogue Service node, and a BPEL engine. An abstract model designer is used to design the top level GPW model, a model instantiation service is used to generate the concrete Business Process Execution Language (BPEL), and the BPEL execution engine is adopted. This framework is used to generate several kinds of data: raw data from live sensors, coverage or feature data, geospatial products, or sensor maps. A prototype, including a model designer, a model instantiation service, and the GPW engine BPELPower, is presented. A scenario for an EO-1 Sensor Web data service for wildfire hot pixel detection is used to test the feasibility of the proposed framework. The execution time and influences of the EO-1 live Hyperion data wildfire classification service framework are evaluated. The benefits and high performance of the proposed framework are discussed. The experiments with the EO-1 live Hyperion data wildfire classification service show that this framework can improve the quality of services for sensor data retrieval and processing.

  10. Semantically Enriched Web Usage Mining for Predicting User Future Movements

    Directory of Open Access Journals (Sweden)

    Suresh Shirgave

    2013-10-01

    Full Text Available The explosive and quick growth of the World Wide Web has resulted in intricate Web sites, demanding enhanced user skills and sophisticated tools to help the Web user find the desired information. Finding desired information on the Web has become a critical ingredient of everyday personal, educational, and business life. Thus, there is a demand for more sophisticated tools to help the user navigate a Web site and find the desired information. Users must be provided with information and services specific to their needs, rather than an undifferentiated mass of information. For discovering interesting and frequent navigation patterns from Web server logs, many Web usage mining techniques have been applied. The recommendation accuracy of solely usage based techniques can be improved by integrating Web site content and site structure into the personalization process. Herein, we propose the Semantically enriched Web Usage Mining method (SWUM), which combines the fields of Web Usage Mining and the Semantic Web. In the proposed method, the undirected graph derived from usage data is enriched with rich semantic information extracted from the Web pages and the Web site structure. The experimental results show that SWUM generates accurate recommendations by integrating usage data, semantic data and Web site structure. The results show that the proposed method is able to achieve 10-20% better accuracy than the solely usage based model, and 5-8% better than an ontology based model.
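
    The usage-mining step at the heart of such recommenders can be illustrated with a toy next-page predictor: count frequent page-to-page transitions in session logs and recommend the most frequent successors. The sessions below are invented; SWUM additionally maps page views to ontology instances, which this sketch omits.

```python
from collections import Counter, defaultdict

# Hypothetical navigation sessions reconstructed from a Web server log.
sessions = [
    ["home", "laptops", "laptop-x", "checkout"],
    ["home", "laptops", "laptop-y"],
    ["home", "phones", "phone-z", "checkout"],
    ["laptops", "laptop-x", "reviews"],
]

# Count transitions page -> next page across all sessions.
transitions = defaultdict(Counter)
for session in sessions:
    for current, nxt in zip(session, session[1:]):
        transitions[current][nxt] += 1

def recommend(page, k=2):
    """Recommend the k most frequent successors of the given page."""
    return [p for p, _ in transitions[page].most_common(k)]

print(recommend("home"))     # e.g. ['laptops', 'phones']
print(recommend("laptops"))  # e.g. ['laptop-x', 'laptop-y']
```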

  11. Millennial Generation Students Search the Web Erratically, with Minimal Evaluation of Information Quality. A Review of: Taylor, A. (2012. A study of the information search behaviour of the millennial generation. Information Research, 17(1, paper 508. Retrieved from http://informationr.net/ir/17-1/paper508.html

    Directory of Open Access Journals (Sweden)

    Dominique Daniel

    2013-03-01

    Full Text Available Objective – To identify how millennial generation students proceed through the information search process and select resources on the web; to determine whether students evaluate the quality of web resources and how they use general information websites. Design – Longitudinal study. Setting – University in the United States. Subjects – 80 undergraduate students of the millennial generation enrolled in a business course. Methods – The students were required to complete a research report with a bibliography in five weeks. They also had to turn in interim assignments during that period (including an abstract, an outline, and a rough draft). Their search behaviour was monitored using a modified Yahoo search engine that allowed subjects to search, and then to fill out surveys integrated directly below their search results. The students were asked to indicate the relevance of the resources they found on the open web, to identify the criteria they used to evaluate relevance, and to specify the stage they were at in the search process. They could choose from five stages defined by the author, based on Wilson (1999): initiation, exploration, differentiation, extracting, and verifying. Data were collected using anonymous user IDs and included URLs for selected sources along with subject answers until completion of all assignments. The students provided 758 distinct web page evaluations. Main Results – Students did not progress in an orderly fashion through the search process, but rather proceeded erratically. A substantial number reported being in fewer than four of the five search stages. Only a small percentage ever declared being in the final stage of verifying previously gathered information, and during preparation of the final report a majority still declared being in the extracting stage. In fact, participants selected documents (extracting stage) throughout the process. In addition, students were not much concerned with the quality, validity, or

  12. APFEL Web a web-based application for the graphical visualization of parton distribution functions

    CERN Document Server

    Carrazza, Stefano; Palazzo, Daniele; Rojo, Juan

    2015-01-01

    We present APFEL Web, a web-based application designed to provide a flexible user-friendly tool for the graphical visualization of parton distribution functions (PDFs). In this note we describe the technical design of the APFEL Web application, motivating the choices and the framework used for the development of this project. We document the basic usage of APFEL Web and show how it can be used to provide useful input for a variety of collider phenomenological studies. Finally we provide some examples showing the output generated by the application.

  13. Robot Control System based on Web Application and RFID Technology

    Directory of Open Access Journals (Sweden)

    Barenji Ali Vatankhah

    2015-01-01

    Full Text Available This paper discusses an integration-driven framework enabling RFID-based identification of parts for robotic distribution operations on randomly mixed parts, controlled through a web application. The RFID technology senses newly arriving parts to be distributed by the robot, which is able to recognize them and perform cooperative distribution via the web-based application. The developed web application control system is implemented on an educational robotic arm. The RFID system sends real-time information about the parts to the web application, which makes the control decisions for the robot arm; the robot controller then drives the robot based on the decisions received from the web application. The proposed control system increases the reconfigurability and scalability of the robot system.

  14. Sounds of Web Advertising

    DEFF Research Database (Denmark)

    Jessen, Iben Bredahl; Graakjær, Nicolai Jørgensgaard

    2010-01-01

    Sound seems to be a neglected issue in the study of web ads. Web advertising is predominantly regarded as visual phenomena–commercial messages, as for instance banner ads that we watch, read, and eventually click on–but only rarely as something that we listen to. The present chapter presents an overview of the auditory dimensions in web advertising: Which kinds of sounds do we hear in web ads? What are the conditions and functions of sound in web ads? Moreover, the chapter proposes a theoretical framework in order to analyse the communicative functions of sound in web advertising. The main argument is that an understanding of the auditory dimensions in web advertising must include a reflection on the hypertextual settings of the web ad as well as a perspective on how users engage with web content.

  15. FPA Depot - Web Application

    Science.gov (United States)

    Avila, Edwin M. Martinez; Muniz, Ricardo; Szafran, Jamie; Dalton, Adam

    2011-01-01

    Lines of code (LOC) analysis is one of the methods used to measure programmer productivity and estimate schedules of programming projects. The Launch Control System (LCS) had previously used this method to estimate the amount of work and to plan development efforts. The disadvantage of using LOC as a measure of effort is that coding accounts for only 30% to 35% of the total effort of software projects [8]. Because of this disadvantage, Jamie Szafran of the System Software Branch of Control And Data Systems (NE-C3) at Kennedy Space Center developed a web application called Function Point Analysis (FPA) Depot; the application uses function points instead of LOC for a better estimation of the hours needed to develop each piece of software. The objective of this web application is that the LCS software architecture team can use the data to more accurately estimate the effort required to implement customer requirements. This paper describes the evolution of the domain model used for function point analysis as project managers continually strive to generate more accurate estimates.
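
    Unadjusted function point counting, the estimation style the tool is named after, can be sketched in a few lines. The component counts below are invented, and the weight table uses the standard IFPUG average-complexity weights; FPA Depot's own model may differ.

```python
# Standard IFPUG weights by component type and complexity.
WEIGHTS = {
    "external_input":     {"low": 3, "avg": 4,  "high": 6},
    "external_output":    {"low": 4, "avg": 5,  "high": 7},
    "external_inquiry":   {"low": 3, "avg": 4,  "high": 6},
    "internal_file":      {"low": 7, "avg": 10, "high": 15},
    "external_interface": {"low": 5, "avg": 7,  "high": 10},
}

# Hypothetical counts for one software change request:
# (component type, complexity, how many).
counts = [
    ("external_input", "avg", 3),
    ("external_output", "low", 2),
    ("internal_file", "avg", 1),
]

unadjusted_fp = sum(WEIGHTS[kind][cplx] * n for kind, cplx, n in counts)
print("unadjusted function points:", unadjusted_fp)   # 3*4 + 2*4 + 10 = 30

# Effort estimate under an assumed productivity rate (hours per FP).
HOURS_PER_FP = 8.0   # assumption for illustration only
print("estimated hours:", unadjusted_fp * HOURS_PER_FP)
```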

  16. Web Information Extraction Research Based on Page Classification%基于页面分类的 Web 信息抽取方法研究

    Institute of Scientific and Technical Information of China (English)

    成卫青; 于静; 杨晶; 杨龙

    2013-01-01

    By means of an analysis of existing Web information extraction methods and the characteristics of current Web pages, current extraction techniques are found to have two problems: the types of pages they can extract are fixed, and the extraction results are not accurate. In order to make up for these deficiencies, we propose a Web information extraction method based on page classification. This method is able to extract the mainstream information on Internet pages. By classifying Web pages and extracting the main body of each page, it overcomes the two problems of the traditional methods. A complete model of Web information extraction is designed and the details of each functional module are provided. The unique features of the model are its modules for Web page principal part extraction and Web page classification, as well as its use of regular expressions to generate extraction rules automatically, which improve the generality and precision of the extraction method. Experimental results have verified the validity and accuracy of the method.
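
    A tiny illustration of regular-expression-based extraction rules: given data records that share a repeated HTML template, a pattern over that template pulls out the field values. The HTML snippet and the pattern are invented for the example; in the paper such rules are generated automatically.

```python
import re

# Hypothetical listing-page fragment with a repeated record template.
html = """
<div class="item"><span class="title">Laptop X</span><span class="price">899</span></div>
<div class="item"><span class="title">Phone Z</span><span class="price">499</span></div>
"""

# An extraction rule over the repeated template, as a regular expression.
rule = re.compile(
    r'<span class="title">(?P<title>[^<]+)</span>'
    r'<span class="price">(?P<price>\d+)</span>'
)

for match in rule.finditer(html):
    print(match.group("title"), match.group("price"))
```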

  17. Web Mining and Social Networking

    DEFF Research Database (Denmark)

    Xu, Guandong; Zhang, Yanchun; Li, Lin

    This book examines the techniques and applications involved in the Web Mining, Web Personalization and Recommendation, and Web Community Analysis domains, including a detailed presentation of the principles, developed algorithms, and systems of the research in these areas. The applications of web... sense of individuals or communities. The volume will benefit both academic and industry communities interested in the techniques and applications of web search, web data management, web mining and web knowledge discovery, as well as web community and social network analysis.

  18. Web 2.0

    CERN Document Server

    Han, Sam

    2012-01-01

    Web 2.0 is a highly accessible introductory text examining all the crucial discussions and issues which surround the changing nature of the World Wide Web. It not only contextualises the Web 2.0 within the history of the Web, but also goes on to explore its position within the broader dispositif of emerging media technologies. The book uncovers the connections between diverse media technologies including mobile smart phones, hand-held multimedia players, "netbooks" and electronic book readers such as the Amazon Kindle, all of which are made possible only by the Web 2.0. In addition, Web 2.0 m...

  19. Handbook of web surveys

    CERN Document Server

    Bethlehem, Jelke

    2011-01-01

    Best practices to create and implement highly effective web surveys. Exclusively combining design and sampling issues, Handbook of Web Surveys presents a theoretical yet practical approach to creating and conducting web surveys. From the history of web surveys to various modes of data collection to tips for detecting error, this book thoroughly introduces readers to this cutting-edge technique and offers tips for creating successful web surveys. The authors provide a history of web surveys and go on to explore the advantages and disadvantages of this mode of dat...

  20. Web-based Altimeter Service

    Science.gov (United States)

    Callahan, P. S.; Wilson, B. D.; Xing, Z.; Raskin, R. G.

    2010-12-01

    We have developed a web-based system to allow updating and subsetting of TOPEX data. The Altimeter Service will be operated by PODAAC along with their other provision of oceanographic data. The Service could be easily expanded to other mission data. An Altimeter Service is crucial to the improvement and expanded use of altimeter data. A service is necessary for altimetry because the result of most interest - sea surface height anomaly (SSHA) - is composed of several components that are updated individually and irregularly by specialized experts. This makes it difficult for projects to provide the most up-to-date products. Some components are the subject of ongoing research, so the ability for investigators to make products for comparison or sharing is important. The service will allow investigators/producers to get their component models or processing into widespread use much more quickly. For coastal altimetry, the ability to subset the data to the area of interest and insert specialized models (e.g., tides) or data processing results is crucial. A key part of the Altimeter Service is having data producers provide updated or local models and data. In order for this to succeed, producers need to register their products with the Altimeter Service and to provide the product in a form consistent with the service update methods. We will describe the capabilities of the web service and the methods for providing new components. Currently the Service is providing TOPEX GDRs with Retracking (RGDRs) in netCDF format that has been coordinated with Jason data. Users can add new orbits, tide models, gridded geophysical fields such as mean sea surface, and along-track corrections as they become available and are installed by PODAAC. The updated fields are inserted into the netCDF files while the previous values are retained for comparison. The Service will also generate SSH and SSHA. In addition, the Service showcases a feature that plots any variable from files in netCDF. The

  1. Web 2.0 and pharmacy education.

    Science.gov (United States)

    Cain, Jeff; Fox, Brent I

    2009-11-12

    New types of social Internet applications (often referred to as Web 2.0) are becoming increasingly popular within higher education environments. Although developed primarily for entertainment and social communication within the general population, applications such as blogs, social video sites, and virtual worlds are being adopted by higher education institutions. These newer applications differ from standard Web sites in that they involve the users in creating and distributing information, hence effectively changing how the Web is used for knowledge generation and dispersion. Although Web 2.0 applications offer exciting new ways to teach, they should not be the core of instructional planning, but rather selected only after learning objectives and instructional strategies have been identified. This paper provides an overview of prominent Web 2.0 applications, explains how they are being used within education environments, and elaborates on some of the potential opportunities and challenges that these applications present.

  2. Building GIS Web Services on JXTA Network

    Institute of Scientific and Technical Information of China (English)

    WANG Leichun; GUAN Jihong; ZHOU Shuigeng

    2004-01-01

    In recent years, Web services and Peer-to-Peer (or simply P2P) appear as two of the hottest research topics in network computing. On the one hand, by adopting a decentralized, network-based style, P2P technologies can make P2P systems enhance overall reliability and fault-tolerance, increase autonomy, and enable ad-hoc communication and collaboration. On the other hand, Web services provides a good approach to integrate various heterogeneous systems and applications into a cooperative environment. This paper presents the techniques of combining Web services and P2P technologies into GIS to construct a new generation of GIS, which is more flexible and cooperative. As a case study, an ongoing project JGWS is introduced, which is an experimental GIS Web services platform built on JXTA. This paper also explores the schemes of building GIS Web services in a P2P environment.

  3. Web Navigation Sequences Automation in Modern Websites

    Science.gov (United States)

    Montoto, Paula; Pan, Alberto; Raposo, Juan; Bellas, Fernando; López, Javier

    Most today’s web sources are designed to be used by humans, but they do not provide suitable interfaces for software programs. That is why a growing interest has arisen in so-called web automation applications that are widely used for different purposes such as B2B integration, automated testing of web applications or technology and business watch. Previous proposals assume models for generating and reproducing navigation sequences that are not able to correctly deal with new websites using technologies such as AJAX: on one hand existing systems only allow recording simple navigation actions and, on the other hand, they are unable to detect the end of the effects caused by an user action. In this paper, we propose a set of new techniques to record and execute web navigation sequences able to deal with all the complexity existing in AJAX-based web sites. We also present an exhaustive evaluation of the proposed techniques that shows very promising results.

  4. WSWrapper - A Universal Web Service Generator

    Directory of Open Access Journals (Sweden)

    Florian M. Boian

    2010-12-01

    Full Text Available The need for distributed software applications is increasing day by day. Having to choose from a large variety of libraries, and to learn what each is capable of and how to use it, is time-consuming and can decrease the overall productivity of an engineering team. We created WSWrapper as a unified library on top of existing language-specific libraries, to transparently solve all dependencies and to provide the developer with a solution that can be used in a distributed application without having to know what happens behind the scenes.

  5. Next-generation Nuclear Data Web Services

    Science.gov (United States)

    Sonzogni, A. A.

    2005-07-01

    The National Nuclear Data Center collects, evaluates, and disseminates nuclear physics data for basic nuclear research and applied nuclear technologies. We have recently produced a nuclear data portal featuring modern and powerful servers, relational database software, Linux operating system, and Java programming language. The portal includes nuclear structure, decay and reaction data, as well as literature information. Data can be searched for using optimized query forms; results are presented in tables and interactive plots. Additionally, a number of nuclear science tools, codes, applications, and links are provided. A brief tutorial of the different databases and products will be provided.

  6. A New Hidden Web Crawling Approach

    Directory of Open Access Journals (Sweden)

    L.Saoudi

    2015-10-01

    Full Text Available Traditional search engines deal with the Surface Web, the set of Web pages directly accessible through hyperlinks, and ignore a large part of the Web called the hidden Web: a great amount of valuable information in online databases that is "hidden" behind query forms. To access that information, a crawler has to fill the forms with valid data. For this reason, we propose a new approach that uses an SQLI technique to find the most promising keywords of a specific domain for automatic form submission. The effectiveness of the proposed framework has been evaluated through experiments using real web sites, and encouraging preliminary results were obtained.
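
    A minimal sketch of the automatic form-submission step, assuming the requests and BeautifulSoup libraries; the keyword-selection step via SQLI is not shown, and the helper below is a hypothetical illustration:

        import requests
        from bs4 import BeautifulSoup
        from urllib.parse import urljoin

        def try_keyword(form_url, keyword):
            # Fetch the page, locate the first query form, and submit the
            # candidate keyword through its first text field.
            page = requests.get(form_url, timeout=10)
            form = BeautifulSoup(page.text, "html.parser").find("form")
            field = form.find("input", {"type": "text"})
            target = urljoin(form_url, form.get("action") or form_url)
            if form.get("method", "get").lower() == "post":
                return requests.post(target, data={field["name"]: keyword}, timeout=10)
            return requests.get(target, params={field["name"]: keyword}, timeout=10)

        # A keyword counts as "promising" if the returned page contains results,
        # a simple proxy for the selection criterion described in the paper.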

  7. OBELISCO, WEB CONTENT DELIVERY

    Directory of Open Access Journals (Sweden)

    Ramón Valera

    2006-11-01

    Full Text Available The basic principle for the design and management of any website on the Internet or an intranet is centered on attractiveness, ease of management, currency, reliability and accuracy. When sites are administered by one person and the information generated at different levels of the organization surpasses that person's capacity, we find sites that are outdated, inaccurate and incoherent, which can undermine the image and credibility of a company in the world of the Internet. This problem gives rise to the need for a tool that automates the creation, approval and distribution of information, guaranteeing its coherence and presentation through a simple and concrete process. OBELISCO emerges as such a tool, based on open-source APIs, with friendly and simple interfaces that allow users with little or no experience in technology to publish their work on the Web.

  9. WebAUGUSTUS--a web service for training AUGUSTUS and predicting genes in eukaryotes.

    Science.gov (United States)

    Hoff, Katharina J; Stanke, Mario

    2013-07-01

    The prediction of protein coding genes is an important step in the annotation of newly sequenced and assembled genomes. AUGUSTUS is one of the most accurate tools for eukaryotic gene prediction. Here, we present WebAUGUSTUS, a web interface for training AUGUSTUS and predicting genes with AUGUSTUS. Depending on the needs of the user, WebAUGUSTUS generates training gene structures automatically. Besides a genome file, either a file with expressed sequence tags or a file with protein sequences is required for this step. Alternatively, it is possible to submit an externally generated training gene structure file and a genome file. The web service optimizes AUGUSTUS parameters and predicts genes with those parameters. WebAUGUSTUS is available at http://bioinf.uni-greifswald.de/webaugustus.

  10. Detection of Acoustic/Infrasonic/Seismic Waves Generated by Hypersonic Re-Entry of the HAYABUSA Capsule and Fragmented Parts of the Spacecraft

    Science.gov (United States)

    Yamamoto, Masa-Yuki; Ishihara, Yoshiaki; Hiramatsu, Yoshihiro; Kitamura, Kazuki; Ueda, Masayoshi; Shiba, Yasuo; Furumoto, Muneyoshi; Fujita, Kazuhisa

    2011-10-01

    Acoustic/infrasonic/seismic waves were observed during the re-entry of the Japanese asteroid explorer "HAYABUSA" at 6 ground sites in Woomera, Australia, on 2010 June 13. Infrasound overpressures of 1.3 Pa, 1.0 Pa, and 0.7 Pa were detected at 3 ground sites located 36.9 km, 54.9 km, and 67.8 km from the SRC trajectory, respectively. Seismic waveforms produced through air-to-ground coupling were also detected at 6 sites, showing a one-to-one correspondence to the infrasound waves at all simultaneous observation sites. Audible sound up to 1 kHz was recorded at one site at a distance of 67.8 km. The mother spacecraft fragmented from 75 km down to 38 km altitude with a few explosive enhancements of emissions. A persistent train of the HAYABUSA re-entry was confirmed at altitudes between 92 km and 82 km for about 3 minutes. Light curves of 136 fragmented parts of the spacecraft were analyzed in detail based on video observations taken at multiple ground sites and classified into three types of fragmentation: melting, explosive, and re-fragmented. Comparing the infrasonic waves with the video-image analyses, the generation of sonic-boom-type shock waves by hypersonically moving artificial meteors (both the sample return capsule and the fragmented parts of the mother spacecraft) at an altitude of 40 ± 1 km was confirmed, with a one-to-one correspondence between them.

  11. Study And Implementation Of LCS Algorithm For Web Mining

    Directory of Open Access Journals (Sweden)

    Vrishali P. Sonavane

    2012-03-01

    Full Text Available The Internet is the roads and highways of the information world, content providers are the road workers, and visitors are the drivers. As in the real world, there can be traffic jams, wrong signs, blind alleys, and so on. Content providers, like road workers, need information about their users to make Web site adjustments possible. Web logs store every motion on the provider's Web site, so providers only need a tool to analyze these logs. This tool is called Web Usage Mining, a part of Web Mining and the foundation of Web site analysis. It employs various knowledge discovery methods to obtain Web usage patterns. In this paper we use the LCS (longest common subsequence) algorithm to improve recommendation accuracy. The experimental results show that the approach can improve the accuracy of classification in the architecture. Using the LCS algorithm, users' future requests can be predicted more accurately.
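
    For concreteness, a minimal sketch of the LCS computation applied to page-visit sequences; the session data and the recommendation rule below are illustrative assumptions, not the paper's exact architecture:

        def lcs_length(a, b):
            # Length of the longest common subsequence of two page-visit
            # sequences, via the standard dynamic-programming table.
            m, n = len(a), len(b)
            dp = [[0] * (n + 1) for _ in range(m + 1)]
            for i in range(m):
                for j in range(n):
                    if a[i] == b[j]:
                        dp[i + 1][j + 1] = dp[i][j] + 1
                    else:
                        dp[i + 1][j + 1] = max(dp[i][j + 1], dp[i + 1][j])
            return dp[m][n]

        # Recommend from the historical session most similar to the active one.
        history = [["home", "news", "sports"], ["home", "cart", "checkout"]]
        active = ["home", "news"]
        best = max(history, key=lambda s: lcs_length(s, active))
        print(best)  # the session whose pages would seed the recommendation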

  12. Introduction: Life in the Web

    Directory of Open Access Journals (Sweden)

    Oana Mateescu

    2010-10-01

    Full Text Available The article serves as an introduction to the current journal issue on Online Lives. It discusses and connects the research papers here under three different rubrics: the hopes and fears raised by a world where information is inescapable, the potential and risk of online identities and, finally, the forms of knowledge and participation that define the current architecture of a Web dominated by user-generated content.

  13. Multilabel Learning for Automatic Web Services Tagging

    Directory of Open Access Journals (Sweden)

    Mustapha AZNAG

    2014-08-01

    Full Text Available Recently, some web services portals and search engines, such as Biocatalogue and Seekda!, have allowed users to manually annotate Web services using tags. User tags provide meaningful descriptions of services and allow users to index and organize their contents. The tagging technique is widely used to annotate objects in Web 2.0 applications. In this paper we propose a novel probabilistic topic model (which extends the CorrLDA model, Correspondence Latent Dirichlet Allocation) to automatically tag web services according to existing manual tags. Our probabilistic topic model is a latent variable model that exploits local label correlations. Indeed, exploiting label correlations is a challenging and crucial problem, especially in the multi-label learning context. Moreover, several existing systems can recommend tags for web services based on existing manual tags; in most cases, the manual tags have better quality. We also develop three strategies to automatically recommend the best tags for web services. We further propose WS-Portal, an enriched Web services search engine containing 7063 providers, 115 sub-classes of categories and 22236 web services crawled from the Internet. In WS-Portal, several technologies are employed to improve the effectiveness of web service discovery (i.e., web services clustering, tag recommendation, service rating and monitoring). Our experiments are performed on real-world web services. The comparison of Precision@n and Normalised Discounted Cumulative Gain (NDCG@n) values indicates that the method presented in this paper outperforms the method based on CorrLDA in terms of ranking and quality of generated tags.

  14. EPA Web Taxonomy

    Data.gov (United States)

    U.S. Environmental Protection Agency — EPA's Web Taxonomy is a faceted hierarchical vocabulary used to tag web pages with terms from a controlled vocabulary. Tagging enables search and discovery of EPA's...

  15. Chemical Search Web Utility

    Data.gov (United States)

    U.S. Environmental Protection Agency — The Chemical Search Web Utility is an intuitive web application that allows the public to easily find the chemical that they are interested in using, and which...

  16. Practical web development

    CERN Document Server

    Wellens, Paul

    2015-01-01

    This book is perfect for beginners who want to get started and learn the web development basics, but also offers experienced developers a web development roadmap that will help them to extend their capabilities.

  17. Wordpress web application development

    CERN Document Server

    Ratnayake, Rakhitha Nimesh

    2015-01-01

    This book is intended for WordPress developers and designers who want to develop quality web applications within a limited time frame and for maximum profit. Prior knowledge of basic web development and design is assumed.

  18. PHP The Good Parts

    CERN Document Server

    MacIntyre, Peter

    2010-01-01

    Get past all the hype about PHP and dig into the real power of this language. This book explores the most useful features of PHP and how they can speed up the web development process, and explains why the most commonly used PHP elements are often misused or misapplied. You'll learn which parts add strength to object-oriented programming, and how to use certain features to integrate your application with databases. Written by a longtime member of the PHP community, PHP: The Good Parts is ideal for new PHP programmers, as well as web developers switching from other languages.

  19. Impact of invasive plants on food webs and pathways

    Directory of Open Access Journals (Sweden)

    Sikai Wang

    2013-05-01

    Full Text Available In natural ecosystems, energy mainly flows along food chains in food webs. Numerous studies have shown that plant invasions influence ecosystem functions by altering food webs. In recent decades, more attention has been paid to the effects of alien plants on local food webs. In this review, we analyze the influence of exotic plants on food webs and pathways, and explore the impacts of local food web characteristics on community invasibility. Invasive plants alter food webs mainly by changing basal resources and environmental conditions in different ways. First, they are consumed by native herbivores due to their high availability, and are therefore incorporated into the native food web. Second, if they show low availability to native herbivores, a new food web is generated through the introduction of new consumers or a change in the energy pathway. Third, environmental changes induced by plant invasions may alter the population density and feeding behavior of various species at different trophic levels; thus alien plants affect communities and food web structures along non-trophic pathways. The influence of the local food web on plant invasions depends on web size and trophic connections. Issues that deserve attention in future studies are raised and discussed. Future research should extend from short-term experiments to long-term monitoring, and more quantitative research to define the responses of food web parameters is needed. In addition, the recovery of food web structure and species interactions in restored habitats is an important issue requiring future research.

  20. Ludoteca .it: the web site

    OpenAIRE

    Bassi, Giorgia; Silvatici, Gino; Fabbri, Stefania; Spinelli, Chiara

    2014-01-01

    This publication is dedicated to the web site www.ludotecaregistro.it and its social pages (https://www.facebook.com/LudotecaRegistro and https://twitter.com/LudotecaIt), created to give visibility to the activities of Ludoteca .it. The publication opens with a description of the main characteristics of the site in relation to its different target audiences. A section follows on the various phases of the work: the inventory of the material, the definition and organization of the contents, the ...

  1. A free-software-based architecture for web archives

    Directory of Open Access Journals (Sweden)

    Mercy H. Ospina Torres

    2013-04-01

    Full Text Available Web archives are information systems that have been developed since the late 1990s to carry out the historical preservation of the web as part of humanity's digital heritage. Such archives have had to face challenges specific to web resources, such as the size of the web and its changing nature, the technologies associated with the web, the surface web and the deep web, and the organization of the web into domains, among others. For this reason, it has become necessary to propose architectures, techniques, tools and standards for the different functions of a web archive that make it possible to face these challenges effectively. The objective of this work is to establish a free-software-based architecture for a web archive prototype. To this end, a detailed review of the web archiving domain, its functions, and the approaches used so far to carry them out is given. A comparative study of different web preservation initiatives worldwide is presented, and the components of a free-software-based system for web preservation are established.

  2. Survey of Web Technologies

    OpenAIRE

    Špoljar, Boris

    2011-01-01

    The World Wide Web has become an important platform for developing and running applications. A vital process in developing web applications is the choice of the web technologies on which the application will be built. Developers face a dizzying array of platforms, languages, frameworks and technical artifacts to choose from, and the decision carries consequences for most other decisions in the development process. The thesis contains an analysis, classification and comparison of web technologies...

  3. Handbook of Human Factors in Web Design

    CERN Document Server

    Vu, Kim-Phuong L

    2011-01-01

    The Handbook of Human Factors in Web Design covers basic human factors issues relating to screen design, input devices, and information organization and processing, as well as addresses newer features which will become prominent in the next generation of Web technologies. These include multimodal interfaces, wireless capabilities, and agents that can improve convenience and usability. Written by leading researchers and/or practitioners in the field, this volume reflects the varied backgrounds and interests of individuals involved in all aspects of human factors and Web design.

  4. The DIRAC Web Portal 2.0

    Science.gov (United States)

    Mathe, Z.; Casajus Ramo, A.; Lazovsky, N.; Stagni, F.

    2015-12-01

    For many years the DIRAC interware (Distributed Infrastructure with Remote Agent Control) has had a web interface allowing users to monitor DIRAC activities and interact with the system. Since then many new web technologies have emerged, so a redesign and a new implementation of the DIRAC Web portal were necessary, taking into account the lessons learnt using the old portal. These new technologies made it possible to build a more compact, robust and responsive web interface that enables users to have better control over the whole system while keeping the interface simple. The web framework provides a large set of "applications", each of which can be used for interacting with various parts of the system. Communities can also create their own sets of personalised web applications and can easily extend already existing ones with minimal effort. Each user can configure and personalise the view for each application and save it using the DIRAC User Profile service as a RESTful state provider, instead of using cookies. The owner of a view can share it with other users or within a user community. Compatibility across different browsers is assured, as well as with mobile versions. In this paper, we present the new DIRAC Web framework as well as the LHCb extension of the DIRAC Web portal.

  5. Web Mining

    Institute of Scientific and Technical Information of China (English)

    王实; 高文; 李锦涛

    2000-01-01

    Web Mining is an important branch of Data Mining, and it is attracting growing research interest as the Internet develops rapidly. Web Mining includes: (1) Web Content Mining; (2) Web Usage Mining; (3) Web Structure Mining. In this paper we define Web Mining and present an overview of the various research issues, techniques and development efforts.

  6. Semantic Web and Semantic Web Services

    OpenAIRE

    Marquez Solis, Santiago

    2007-01-01

    From this Final Degree Project we want to study the evolution of the current Web towards the Semantic Web.

  7. Bibliographic information organization in the semantic web

    CERN Document Server

    Willer, Mirna

    2013-01-01

    New technologies will underpin the future generation of library catalogues. To facilitate their role providing information, serving users, and fulfilling their mission as cultural heritage and memory institutions, libraries must take a technological leap; their standards and services must be transformed to those of the Semantic Web. Bibliographic Information Organization in the Semantic Web explores the technologies that may power future library catalogues, and argues the necessity of such a leap. The text introduces international bibliographic standards and models, and fundamental concepts...

  8. Research of Web Pages Categorization

    Institute of Scientific and Technical Information of China (English)

    Zhongda Lin; Kun Deng; Yanfen Hong

    2006-01-01

    In this paper, we discuss several issues related to the automated classification of web pages, especially the text classification of web pages. We analyze feature selection and categorization algorithms for web pages and give some suggestions for web page categorization.

  9. Semantic Web Services and Its Approaches

    Directory of Open Access Journals (Sweden)

    Tauqeer Ahmad Usmani,

    2011-07-01

    Full Text Available OWL-S, IRS and WSMF are the prominent frameworks that form the major part of Semantic Web Services. IRS-III is the first WSMO-compliant implemented framework to support Semantic Web Services. IRS-III extends the previous version, IRS-II, and supports the WSMO ontology within the IRS-III server, browser and API. IRS-III provides support for OWL-S service descriptions by importing the descriptions into IRS-III. This paper describes different approaches to Semantic Web Services.

  10. WebSelF

    DEFF Research Database (Denmark)

    Thomsen, Jakob Grauenkjær; Ernst, Erik; Brabrand, Claus

    2012-01-01

    We present WebSelF, a framework for web scraping which models the process of web scraping and decomposes it into four conceptually independent, reusable, and composable constituents. We have validated our framework through a full parameterized implementation that is flexible enough to capture previous work on web scraping. We conducted an experiment that evaluated several qualitatively different web scraping constituents (including previous work and combinations hereof) on about 11,000 HTML pages on daily versions of 17 web sites over a period of more than one year. Our framework solves three...

  11. Evaluating Web Usability

    Science.gov (United States)

    Snider, Jean; Martin, Florence

    2012-01-01

    Web usability focuses on design elements and processes that make web pages easy to use. A website for college students was evaluated for underutilization. One-on-one testing, focus groups, web analytics, a peer university review, and marketing focus group and demographic data were utilized to conduct the usability evaluation. The results indicated that…

  12. Web Browser Programming

    OpenAIRE

    Luján Mora, Sergio

    2006-01-01

    Slides from the course "Web Browser Programming" taught at the Université M'Hamed Bougara (Boumerdes, Algeria) in June 2006. Project funded by the European Union: TEMPUS JEP-32102-2004, Licence Professionnelle Technologies des Applications Web (Professional License for Web Application Technologies).

  14. Instant PHP web scraping

    CERN Document Server

    Ward, Jacob

    2013-01-01

    Filled with practical, step-by-step instructions and clear explanations for the most important and useful tasks. Short, concise recipes to learn a variety of useful web scraping techniques using PHP. This book is aimed at those new to web scraping, with little or no previous programming experience. Basic knowledge of HTML and the Web is useful, but not necessary.

  15. Web Search Engines

    OpenAIRE

    Rajashekar, TB

    1998-01-01

    The World Wide Web is emerging as an all-in-one information source. Tools for searching Web-based information include search engines, subject directories and meta search tools. We take a look at key features of these tools and suggest practical hints for effective Web searching.

  16. Classification of the web

    DEFF Research Database (Denmark)

    Mai, Jens Erik

    2004-01-01

    This paper discusses the challenges faced by investigations into the classification of the Web and outlines inquiries that are needed to use principles for bibliographic classification to construct classifications of the Web. This paper suggests that the classification of the Web meets challenges...

  17. Prevention of Cross-Site Scripting Vulnerabilities using Dynamic Hash Generation Technique on the Server Side

    Directory of Open Access Journals (Sweden)

    Shashank Gupta

    2012-09-01

    Full Text Available Cookies are a means to provide stateful communication over HTTP. In the World Wide Web (WWW), once a user working through a web browser has been successfully authenticated by the web server of a web application, the web server generates a cookie and transfers it to the web browser. Each time the user then sends a request to the web server as part of the active connection, the user has to include the corresponding cookie in the request, so that the web server can associate the cookie with the corresponding user. Cookies are thus the mechanism that maintains an authentication state between the user and the web application, and are therefore possible targets for attackers. The Cross Site Scripting (XSS) attack is one such attack against web applications, in which the user's browser resources (e.g. cookies) are compromised. In this paper, a novel technique called the Dynamic Hash Generation Technique is introduced, whose aim is to make cookies worthless for attackers. The technique is implemented on the server side, whose main task is to generate a hash of the value of the name attribute in the cookie and send this hash value to the web browser. With this technique, the hash value of the name attribute in the cookie, which is stored in the browser's database, is not valid for attackers to exploit XSS vulnerabilities.
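
    A minimal sketch of how such a server-side scheme could look (the paper targets web application servers generally; the secret handling and names here are hypothetical assumptions, not the paper's implementation):

        import hashlib
        import hmac
        import os

        SERVER_SECRET = os.urandom(32)   # per-deployment secret; hypothetical setup

        def hash_cookie_name(name):
            # Server-side hash of the cookie's name attribute; the browser only
            # ever stores and sends this hash, so a value leaked through an
            # injected script cannot be replayed against the real session state.
            return hmac.new(SERVER_SECRET, name.encode(), hashlib.sha256).hexdigest()

        # The server keeps the reverse mapping hash -> real name and resolves
        # it on every incoming request.
        print(hash_cookie_name("SESSIONID"))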

  18. Weaving Silos--A Leadership Challenge: A Cross-Functional Team Approach to Supporting Web-Based Student Services

    Science.gov (United States)

    Kleemann, Gary L.

    2005-01-01

    The author reviews the evolution of Web services--from information sharing to transactional to relationship building--and the progression from first-generation to fourth-generation Web sites. (Contains 3 figures.)

  19. Realtime Data to Enable Earth-Observing Sensor Web Capabilities

    Science.gov (United States)

    Seablom, M. S.

    2015-12-01

    Over the past decade NASA's Earth Science Technology Office (ESTO) has invested in new technologies for information systems to enhance the Earth-observing capabilities of satellites, aircraft, and ground-based in situ observations. One focus area has been to create a common infrastructure for coordinated measurements from multiple vantage points which could be commanded either manually or through autonomous means, such as from a numerical model. This paradigm became known as the sensor web, formally defined to be "a coherent set of heterogeneous, loosely-coupled, distributed observing nodes interconnected by a communications fabric that can collectively behave as a single dynamically adaptive and reconfigurable observing system". This would allow for adaptive targeting of rapidly evolving, transient, or variable meteorological features to improve our ability to monitor, understand, and predict their evolution. It would also enable measurements earmarked at critical regions of the atmosphere that are highly sensitive to data analysis errors, thus offering the potential for significant improvements in the predictive skill of numerical weather forecasts. ESTO's investment strategy was twofold. Recognizing that implementation of an operational sensor web would not only involve technical cost and risk but also would require changes to the culture of how flight missions were designed and operated, ESTO funded the development of a mission-planning simulator that would quantitatively assess the added value of coordinated observations. The simulator was designed to provide the capability to perform low-cost engineering and design trade studies using synthetic data generated by observing system simulation experiments (OSSEs). The second part of the investment strategy was to invest in prototype applications that implemented key features of a sensor web, with the dual goals of developing a sensor web reference architecture and supporting useful science activities.

  20. Scientific Workflows and the Sensor Web for Virtual Environmental Observatories

    Science.gov (United States)

    Simonis, I.; Vahed, A.

    2008-12-01

    All data sets and sensor communication follow well-defined abstract models and corresponding encodings, mostly developed by the OGC Sensor Web Enablement initiative. Scientific progress is currently accelerated by an emerging new concept called scientific workflows, which organize and manage complex distributed computations. A scientific workflow represents and records the highly complex processes that a domain scientist typically follows in exploration, discovery and, ultimately, transformation of raw data to publishable results. The challenge is now to integrate the benefits of scientific workflows with those provided by the Sensor Web in order to leverage all resources for scientific exploration, problem solving, and knowledge generation. Scientific workflows for the Sensor Web represent the next evolutionary step towards efficient, powerful, and flexible earth observation frameworks and platforms. Those platforms support the entire process from capturing data, through sharing and integrating it, to requesting additional observations. Multiple sites and organizations participate on single platforms, and scientists from different countries and organizations interact and contribute to large-scale research projects. Simultaneously, the data and information overload becomes manageable, as multiple layers of abstraction free scientists from dealing with underlying data, processing or storage peculiarities. The vision is automated investigation and discovery mechanisms that allow scientists to pose queries to the system, which in turn identifies potentially related resources, schedules processing tasks and assembles all parts into workflows that may satisfy the query.

  1. Ajax and Web Services

    CERN Document Server

    Pruett, Mark

    2006-01-01

    Ajax and web services are a perfect match for developing web applications. Ajax has built-in abilities to access and manipulate XML data, the native format for almost all REST and SOAP web services. Using numerous examples, this document explores how to fit the pieces together. Examples demonstrate how to use Ajax to access publicly available web services from Yahoo! and Google. You'll also learn how to use web proxies to access data on remote servers and how to transform XML data using XSLT.

  2. Web services foundations

    CERN Document Server

    Bouguettaya, Athman; Daniel, Florian

    2013-01-01

    Web services and Service-Oriented Computing (SOC) have become thriving areas of academic research, joint university/industry research projects, and novel IT products on the market. SOC is the computing paradigm that uses Web services as building blocks for the engineering of composite, distributed applications out of the reusable application logic encapsulated by Web services. Web services could be considered the best-known and most standardized technology in use today for distributed computing over the Internet. Web Services Foundations is the first installment of a two-book collection covering...

  3. Interfacing with the WEB

    CERN Document Server

    Dönszelmann, M

    1995-01-01

    Interfacing to the Web, or programming interfaces for the Web, is used to provide dynamic information for Web users. Using the Web as a transport system for information poses three constraints: namespace, statelessness and performance. To build interfaces on either the server or the client side of the Web, one has to meet these constraints. Several examples, currently in use in High Energy Physics experiments, are described. They range from an interface showing where buildings are located to an interface showing active values of the On-line System of the DELPHI experiment (CERN).

  4. Mining the web

    OpenAIRE

    Ricardo, Baeza-Yates

    2004-01-01

    The web is the Internet's most important phenomenon, as demonstrated by its exponential growth and diversity. Hence, due to the volume and wealth of its data, search engines have become some of the web's main tools. They are useful when we know what we are looking for; yet the web certainly holds answers to questions never imagined. The process of finding relations or interesting patterns within a data set is called "data mining" and, in the case of the web, "web mining". In this article...

  5. Exponentiation for products of Wilson lines within the generating function approach

    CERN Document Server

    Vladimirov, Alexey A

    2015-01-01

    We present the generating function approach to the perturbative exponentiation of correlators of a product of Wilson lines and loops. The exponentiated expression is presented in closed form as an algebraic function of correlators of known operators, which can be seen as a generating function for web diagrams. The expression naturally splits into two parts: the exponentiation kernel, which accumulates all non-trivial information about web diagrams, and the defect of exponentiation, which reconstructs the matrix exponent and is a function of the exponentiation kernel. A detailed comparison of the presented approach with existing approaches to exponentiation is given as well. We also give examples of calculations within the generating function exponentiation; namely, we consider different configurations of light-like Wilson lines in the multi-gluon-exchange-webs (MGEW) approximation. Within this approximation the corresponding correlators can be calculated exactly at any order of the perturbative expansion...
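
    For orientation, the classical single-Wilson-loop webs theorem that generating-function approaches of this kind generalize can be written (in generic notation, not the paper's own) as

        \langle W \rangle \;=\; \exp\!\Big( \sum_{w \,\in\, \mathrm{webs}} \widetilde{C}(w)\, \mathcal{F}(w) \Big),

    where \mathcal{F}(w) is the kinematic integral of the web diagram w and \widetilde{C}(w) its modified colour factor. In the paper's language, the exponentiation kernel plays the role of the sum over webs, while the defect of exponentiation reconstructs the matrix exponent needed for products of several lines.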

  6. A Sensor Web and Web Service-Based Approach for Active Hydrological Disaster Monitoring

    Directory of Open Access Journals (Sweden)

    Xi Zhai

    2016-09-01

    Full Text Available Rapid advancements in Earth-observing sensor systems have led to the generation of large amounts of remote sensing data that can be used for the dynamic monitoring and analysis of hydrological disasters. The management and analysis of these data could take advantage of distributed information infrastructure technologies such as Web service and Sensor Web technologies, which have shown great potential in facilitating the use of observed big data in an interoperable, flexible and on-demand way. However, it remains a challenge to achieve timely response to hydrological disaster events and to automate the geoprocessing of hydrological disaster observations. This article proposes a Sensor Web and Web service-based approach to support active hydrological disaster monitoring. This approach integrates an event-driven mechanism, Web services, and a Sensor Web and coordinates them using workflow technologies to facilitate the Web-based sharing and processing of hydrological hazard information. The design and implementation of hydrological Web services for conducting various hydrological analysis tasks on the Web using dynamically updating sensor observation data are presented. An application example is provided to demonstrate the benefits of the proposed approach over the traditional approach. The results confirm the effectiveness and practicality of the proposed approach in cases of hydrological disaster.

  7. WebMGA: a customizable web server for fast metagenomic sequence analysis

    Directory of Open Access Journals (Sweden)

    Niu Beifang

    2011-09-01

    Full Text Available Abstract Background The new field of metagenomics studies microorganism communities by culture-independent sequencing. With the advances in next-generation sequencing techniques, researchers face tremendous challenges in metagenomic data analysis due to the huge quantity and high complexity of sequence data. Analyzing large datasets is extremely time-consuming; metagenomic annotation also involves a wide range of computational tools that are difficult for common users to install and maintain. The tools provided by the few available web servers are also limited and have various constraints, such as login requirements, long waiting times, and the inability to configure pipelines. Results We developed WebMGA, a customizable web server for fast metagenomic analysis. WebMGA includes over 20 commonly used tools for ORF calling, sequence clustering, quality control of raw reads, removal of sequencing artifacts and contamination, taxonomic analysis, functional annotation, and more. WebMGA provides users with rapid metagenomic data analysis using fast and effective tools, which have been implemented to run in parallel on our local computer cluster. Users can access WebMGA through web browsers or programming scripts to perform individual analyses or to configure and run customized pipelines. WebMGA is freely available at http://weizhongli-lab.org/metagenomic-analysis. Conclusions WebMGA offers researchers many fast and unique tools and great flexibility for complex metagenomic data analysis.

  8. GoWeb: a semantic search engine for the life science web.

    Science.gov (United States)

    Dietze, Heiko; Schroeder, Michael

    2009-10-01

    Current search engines are keyword-based. Semantic technologies promise a next generation of semantic search engines which will be able to answer questions. Current approaches either apply natural language processing to unstructured text or assume the existence of structured statements over which they can reason. Here, we introduce a third approach, GoWeb, which combines classical keyword-based Web search with text-mining and ontologies to navigate large result sets and facilitate question answering. We evaluate GoWeb on three benchmarks of questions on genes and functions, on symptoms and diseases, and on proteins and diseases. The first benchmark is based on the BioCreAtivE 1 Task 2 and links 457 gene names with 1352 functions. GoWeb finds 58% of the functional GeneOntology annotations. The second benchmark is based on 26 case reports and links symptoms with diseases. GoWeb achieves a 77% success rate, improving on an existing approach by nearly 20%. The third benchmark is based on 28 questions in the TREC genomics challenge and links proteins to diseases. GoWeb achieves a success rate of 79%. GoWeb's combination of classical Web search with text-mining and ontologies is a first step towards answering questions in the biomedical domain. GoWeb is online at: http://www.gopubmed.org/goweb.

  9. Classroom Assessment in Web-Based Instructional Environment: Instructors' Experience

    Directory of Open Access Journals (Sweden)

    Xin Liang

    2004-03-01

    Full Text Available While a great deal has been written on the advantages and benefits of online teaching, little is known about how assessment is implemented in online classrooms to monitor and inform performance and progress. The purpose of this study is to investigate the dynamics of WebCT classroom assessment by analyzing the perceptions and experience of the instructors. The grounded theory method was employed to generate a "process theory". The study included 10 faculty members who taught WebCT classes and 216 students in the College of Education at an urban university in the Midwest. Interviews and classroom observations were undertaken online. The findings indicated that performance-based assessment, writing skills, interactive assessment and learner autonomy were the major assessment aspects used to inform teaching and enhance learning. If one of the major roles of online instruction is to increase self-directed learning, then, as part of the pedagogical mechanism, web-based classroom assessment should be designed and practiced to impact learner autonomy.

  10. Design of Web Service application programming based on .NET

    Institute of Scientific and Technical Information of China (English)

    范文广; 黄存东

    2015-01-01

    Web Services arose to solve the problem of sharing data across platforms and languages, and the Web Service building blocks solve the problems of discovering and communicating with a Web Service. In the Web Service creation process, either Notepad or Visual Studio .NET can be chosen according to the user's preferences.

  11. Exploiting Multimedia in Creating and Analysing Multimedia Web Archives

    Directory of Open Access Journals (Sweden)

    Jonathon S. Hare

    2014-04-01

    Full Text Available The data contained on the web and the social web are inherently multimedia, consisting of a mixture of textual, visual and audio modalities. Community memories embodied on the web and social web contain a rich mixture of data from these modalities. In many ways, the web is the greatest resource ever created by humankind. However, due to the dynamic and distributed nature of the web, its content changes, appears and disappears on a daily basis. Web archiving provides a way of capturing snapshots of (parts of) the web for preservation and future analysis. This paper provides an overview of techniques we have developed within the context of the EU-funded ARCOMEM (ARchiving COmmunity MEMories) project to allow multimedia web content to be leveraged during the archival process and for post-archival analysis. Through a set of use cases, we explore several practical applications of multimedia analytics within the realm of web archiving, web archive analysis and multimedia data on the web in general.

  12. Web Video Mining: Metadata Predictive Analysis using Classification Techniques

    Directory of Open Access Journals (Sweden)

    Siddu P. Algur

    2016-02-01

    Full Text Available Nowadays, data engineering is becoming an emerging trend for discovering knowledge from web audio-visual data such as YouTube videos, Yahoo Screen and Facebook videos. Different categories of web video are shared on such social websites and are used by billions of users all over the world. Uploaded web videos carry different kinds of metadata as attribute information of the video data, and these metadata attributes conceptually define the contents and features/characteristics of the web videos. Hence, accomplishing web video mining by extracting features of web videos in terms of metadata is a challenging task. In this work, effective attempts are made to classify and predict the metadata features of web videos, such as the length of the web videos, the number of comments, the ratings information and the view counts, using data mining algorithms such as the J48 decision tree and naive Bayes algorithms as a part of web video mining. The results of the J48 decision tree and naive Bayes classification models are analyzed and compared as a step in the process of knowledge discovery from web videos.
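
    A minimal sketch of the classification step on such metadata, using scikit-learn stand-ins (a CART decision tree and Gaussian naive Bayes) for the Weka J48 and naive Bayes classifiers named above; the rows and labels are made-up examples:

        import numpy as np
        from sklearn.naive_bayes import GaussianNB
        from sklearn.tree import DecisionTreeClassifier

        # Hypothetical metadata rows: [length_sec, n_comments, rating, views]
        X = np.array([[212, 45, 4.5, 10000],
                      [35, 2, 3.1, 300],
                      [600, 310, 4.8, 1000000]])
        y = np.array(["music", "vlog", "film"])   # hypothetical category labels

        for model in (DecisionTreeClassifier(), GaussianNB()):
            model.fit(X, y)
            print(model.predict([[120, 10, 4.0, 5000]]))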

  13. Digital libraries and World Wide Web sites and page persistence.

    Directory of Open Access Journals (Sweden)

    Wallace Koehler

    1999-01-01

    Full Text Available Web pages and Web sites, some argue, can either be collected as elements of digital or hybrid libraries, or, as others would have it, the WWW is itself a library. We begin with the assumption that Web pages and Web sites can be collected and categorized. The paper explores the proposition that the WWW constitutes a library. We conclude that the Web is not a digital library. However, its component parts can be aggregated and included as parts of digital library collections. These, in turn, can be incorporated into "hybrid libraries": libraries with both traditional and digital collections. Material on the Web can be organized and managed. Native documents can be collected in situ, disseminated, distributed, catalogued, indexed, and controlled, in traditional library fashion. The Web therefore is not a library, but material for library collections is selected from the Web. That said, the Web and its component parts are dynamic. Web documents undergo two kinds of change. The first type, the type addressed in this paper, is "persistence", the existence or disappearance of Web pages and sites; in a word, the lifecycle of Web documents. "Intermittence" is a variant of persistence, defined as the disappearance but reappearance of Web documents. At any given time, about five percent of Web pages are intermittent, which is to say they are gone but will return. Over time a Web collection erodes. Based on a 120-week longitudinal study of a sample of Web documents, it appears that the half-life of a Web page is somewhat less than two years and the half-life of a Web site is somewhat more than two years. That is to say, an unweeded Web document collection created two years ago would contain the same number of URLs, but only half of those URLs would point to content. The second type of change Web documents experience is change in Web page or Web site content. Again based on the Web document samples, very nearly all Web pages and sites undergo some change...
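
    As a back-of-the-envelope check of the half-life figures, assuming simple exponential decay (an illustrative model, not the paper's own analysis):

        def live_fraction(age_years, half_life_years=2.0):
            # Expected fraction of URLs still resolving after age_years,
            # under exponential decay with the quoted half-life.
            return 0.5 ** (age_years / half_life_years)

        print(live_fraction(2.0))  # 0.5: half the two-year-old URLs still point to content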

  14. HIDDEN WEB EXTRACTOR DYNAMIC WAY TO UNCOVER THE DEEP WEB

    OpenAIRE

    DR. ANURADHA; BABITA AHUJA

    2012-01-01

    In this era of a digital tsunami of information on the web, everyone is completely dependent on the WWW for information retrieval. This poses a challenging problem in extracting relevant data. Traditional web crawlers focus only on the surface web, while the deep web keeps expanding behind the scenes. The web databases are hidden behind query interfaces. In this paper, we propose a Hidden Web Extractor (HWE) that can automatically discover and download data from the Hidden Web databases. ...

  15. Upside-down spiders build upside-down orb webs: web asymmetry, spider orientation and running speed in Cyclosa.

    Science.gov (United States)

    Nakata, Kensuke; Zschokke, Samuel

    2010-10-07

    Almost all spiders building vertical orb webs face downwards when sitting on the hubs of their webs, and their webs exhibit an up-down size asymmetry, with the lower part of the capture area being larger than the upper. However, spiders of the genus Cyclosa, which all build vertical orb webs, exhibit inter- and intraspecific variation in orientation. In particular, Cyclosa ginnaga and C. argenteoalba always face upwards, and C. octotuberculata always face downwards, whereas some C. confusa face upwards and others face downwards or even sideways. These spiders provide a unique opportunity to examine why most spiders face downwards and have asymmetrical webs. We found that upward-facing spiders had upside-down webs with larger upper parts, downward-facing spiders had normal webs with larger lower parts and sideways-facing spiders had more symmetrical webs. Downward-facing C. confusa spiders were larger than upward- and sideways-facing individuals. We also found that during prey attacks, downward-facing spiders ran significantly faster downwards than upwards, which was not the case in upward-facing spiders. These results suggest that the spider's orientation at the hub and web asymmetry enhance its foraging efficiency by minimizing the time to reach prey trapped in the web.

  17. Importance and current status of web accessibility for accessible tourism

    Directory of Open Access Journals (Sweden)

    Gabriel Fontanet Nadal

    2011-04-01

    Full Text Available Accessible Tourism is a kind of tourism specially dedicated to disabled people. It involves the removal of the physical elements that hinder the mobility of disabled people at the destination. Accessible Tourism should take care of both physical and web accessibility. The web accessibility of a site is defined as the capability of that site to be accessed by people with any kind of disability. Some organizations produce guidelines to improve web accessibility. An analysis of the web accessibility of tourist web sites is presented in this document.

  18. Fuzzification of Web Objects: A Semantic Web Mining Approach

    Directory of Open Access Journals (Sweden)

    Tasawar Hussain

    2012-03-01

    Full Text Available Web mining is becoming essential to support web administrators and web users in multiple ways, such as information retrieval, website performance management, web personalization, web marketing and website design. Due to the uncontrolled exponential growth in web data, knowledge-base retrieval has become a very challenging task. One viable solution to the problem is the merging of conventional web mining with semantic web technologies. This merging process will be more beneficial to web users by reducing the search space and by providing information that is more relevant. Key web objects play a significant role in this process, and extracting them from a website is a challenging task. In this paper, we propose a framework which extracts the key web objects from the web log file and applies semantic web techniques to mine actionable intelligence; the framework can also be applied to the non-semantic web for the extraction of key web objects. We define an objective function, the key web object (KWO) function, to score key web objects from the user's perspective. The KWO function helps to fuzzify the extracted key web objects into three categories: Most Interested, Interested, and Least Interested. Fuzzification of web objects helps us to accommodate the uncertainty in whether a web object is attractive to users. We validate the proposed scheme with the help of a case study.
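
    A minimal sketch of what such a fuzzification could look like, assuming triangular membership functions over a normalized KWO score in [0, 1]; the shapes and breakpoints are assumptions, since the abstract does not fix them:

        def triangular(x, a, b, c):
            # Triangular membership function on [a, c], peaking at b.
            if x <= a or x >= c:
                return 0.0
            return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

        def fuzzify(kwo_score):
            # Map a normalized key-web-object score to the three interest
            # classes used in the paper.
            return {
                "Least Interested": triangular(kwo_score, -0.01, 0.0, 0.5),
                "Interested": triangular(kwo_score, 0.0, 0.5, 1.0),
                "Most Interested": triangular(kwo_score, 0.5, 1.0, 1.01),
            }

        print(fuzzify(0.7))  # partial membership in two classes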

  19. RMatlab-app2web: Web Deployment of R/MATLAB Applications

    Directory of Open Access Journals (Sweden)

    Armin Varmaz

    2013-09-01

    Full Text Available This paper presents the RMatlab-app2web tool, which enables the use of R or MATLAB scripts as CGI programs for generating dynamic web content. RMatlab-app2web is highly adjustable and can be run on both Windows and Unix-like systems. CGI scripts written in PHP take information entered on web-based forms in the client browser, pass it to R or MATLAB on the server and display the output in the client browser. Depending on the server's requirements, the data transfer procedure can use either the GET or the POST routine. The application allows R or MATLAB to be called to run previously written scripts; it does not allow completely arbitrary user code to be run. We run a multivariate OLS regression to demonstrate the use of the RMatlab-app2web tool.
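
    The same CGI request flow can be sketched in Python (the tool itself uses PHP; the form field and R script names below are hypothetical, and the stdlib cgi module used here was removed in Python 3.13):

        #!/usr/bin/env python3
        # Read the form input (GET or POST), hand it to a previously written
        # R script on the server, and return the output to the browser.
        import cgi
        import subprocess

        form = cgi.FieldStorage()                         # parses GET and POST alike
        y = form.getfirst("y", "")                        # hypothetical form field
        result = subprocess.run(["Rscript", "ols.R", y],  # hypothetical R script
                                capture_output=True, text=True)
        print("Content-Type: text/plain\n")
        print(result.stdout)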

  20. The RCSB Protein Data Bank: redesigned web site and web services.

    Science.gov (United States)

    Rose, Peter W; Beran, Bojan; Bi, Chunxiao; Bluhm, Wolfgang F; Dimitropoulos, Dimitris; Goodsell, David S; Prlic, Andreas; Quesada, Martha; Quinn, Gregory B; Westbrook, John D; Young, Jasmine; Yukich, Benjamin; Zardecki, Christine; Berman, Helen M; Bourne, Philip E

    2011-01-01

    The RCSB Protein Data Bank (RCSB PDB) web site (http://www.pdb.org) has been redesigned to increase usability and to cater to a larger and more diverse user base. This article describes key enhancements and new features that fall into the following categories: (i) query and analysis tools for chemical structure searching, query refinement, tabulation and export of query results; (ii) web site customization and new structure alerts; (iii) pair-wise and representative protein structure alignments; (iv) visualization of large assemblies; (v) integration of structural data with the open access literature and binding affinity data; and (vi) web services and web widgets to facilitate integration of PDB data and tools with other resources. These improvements enable a range of new possibilities to analyze and understand structure data. The next generation of the RCSB PDB web site, as described here, provides a rich resource for research and education.

  1. A teen's guide to creating web pages and blogs

    CERN Document Server

    Selfridge, Peter; Osburn, Jennifer

    2008-01-01

    Whether using a social networking site like MySpace or Facebook or building a Web page from scratch, millions of teens are actively creating a vibrant part of the Internet. This is the definitive teen's guide to publishing exciting web pages and blogs on the Web. This easy-to-follow guide shows teenagers how to: Create great MySpace and Facebook pages; Build their own unique, personalized Web site; Share the latest news with exciting blogging ideas; Protect themselves online with cyber-safety tips. Written by a teenager for other teens, this book leads readers step-by-step through the basics of web and blog design. In this book, teens learn to go beyond clicking through web sites to learning winning strategies for web design and great ideas for writing blogs that attract attention and readership.

  2. Web Crawler Based on Mobile Agent and Java Aglets

    Directory of Open Access Journals (Sweden)

    Md. Abu Kausar

    2013-09-01

    Full Text Available With the huge growth of the Internet, many web pages are available online. Search engines use web crawlers to collect these web pages from the World Wide Web for the purpose of storage and indexing. Basically, a web crawler is a program which finds information from the World Wide Web in a systematic and automated manner. The network load can be further reduced by using mobile agents, and the proposed approach uses mobile agents to crawl the pages. A mobile agent is not bound to the system in which it starts execution: it has the unique ability to transfer itself from one system in a network to another. The main advantage of a web crawler based on mobile agents is that the analysis part of the crawling process is done locally rather than on the remote side. This drastically reduces network load and traffic, which can improve the performance and efficiency of the whole crawling process.
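
    A minimal sketch of the systematic page-collection loop that any such crawler (stationary or agent-hosted) performs, assuming the requests and BeautifulSoup libraries; the mobile-agent migration itself is not shown:

        import requests
        from bs4 import BeautifulSoup
        from urllib.parse import urljoin

        def crawl(seed, limit=50):
            # Breadth-first page collector. In the mobile-agent design, the
            # analysis step would run on the host where the pages live; here
            # the fetched set is simply returned.
            seen, queue = set(), [seed]
            while queue and len(seen) < limit:
                url = queue.pop(0)
                if url in seen:
                    continue
                seen.add(url)
                try:
                    html = requests.get(url, timeout=5).text
                except requests.RequestException:
                    continue
                for a in BeautifulSoup(html, "html.parser").find_all("a", href=True):
                    queue.append(urljoin(url, a["href"]))
            return seen

        # e.g. crawl("https://example.org")  # hypothetical seed URL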

  3. The Web economy: goods, users, models and policies

    CERN Document Server

    Vafopoulos, Michalis

    2011-01-01

    The Web emerged as an antidote to the rapidly increasing quantity of accumulated knowledge and became successful because it facilitates massive participation and communication with minimum costs. Today, its enormous impact, scale and dynamism in time and space make it very difficult (and sometimes impossible) to measure and anticipate its effects on human society. In addition, we demand that the Web be fast, secure, reliable, all-inclusive and trustworthy in every transaction. The scope of the present article is to review the part of the Web economy literature that helps us identify its major participants and their functions. The goal is to understand how the Web economy differs from the traditional setting and what implications these differences have. Secondarily, we attempt to establish a minimal common understanding of the incentives and properties of the Web economy. In this direction, the concept of Web Goods and a new classification of Web Users are introduced and analyzed. This article is not...

  4. Web Applications of Bibliometrics and Link Analysis

    Directory of Open Access Journals (Sweden)

    Faribourz Droudi

    2010-04-01

    Full Text Available The present study aims to introduce and analyze bibliometric applications on the Web, and to expound on the status of link analysis in order to point out its application to existing web-based information sources. Findings indicate that bibliometrics has ready application in the area of digital resources available through the Net. Link analysis is a process by which one can statistically analyze the correlations between hyperlinks and thereby assess the accuracy, veracity and efficacy of citations within a digital document. Link analysis, in effect, forms part of the information-ranking algorithms of the web environment: the number, origin and quality of the links pointing to a website are of utmost importance for its rank and status on the Web. The tools applied to this topic include page-ranking strategies, link-analysis algorithms, latent semantic indexing and the classical input-output model. The present study analyzes Big Web and Small Web link analysis and explains the use of web graphs to better understand the link-analysis process.
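
    The best-known page-ranking strategy can be sketched in a few lines. The
    following PageRank-style power iteration illustrates the general technique
    and is not the article's own formulation:

      # Iteratively redistribute rank along hyperlinks until it stabilizes.
      def pagerank(graph, damping=0.85, iterations=50):
          """graph maps each page to the list of pages it links to."""
          pages = list(graph)
          rank = {p: 1.0 / len(pages) for p in pages}
          for _ in range(iterations):
              new_rank = {p: (1.0 - damping) / len(pages) for p in pages}
              for page, outlinks in graph.items():
                  if not outlinks:             # dangling page: spread rank evenly
                      for p in pages:
                          new_rank[p] += damping * rank[page] / len(pages)
                  else:
                      for target in outlinks:
                          new_rank[target] += damping * rank[page] / len(outlinks)
              rank = new_rank
          return rank

      # "a" receives links from both "b" and "c", so it earns the highest rank.
      print(pagerank({"a": ["b"], "b": ["a"], "c": ["a", "b"]}))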

  5. Ten years for the public Web

    CERN Multimedia

    2003-01-01

    Ten years ago, CERN issued a statement declaring that a little known piece of software called the World Wide Web was in the public domain. Nowadays, the Web is an indispensable part of modern communications. The idea for the Web goes back to March 1989, when CERN computer scientist Tim Berners-Lee wrote a proposal for a 'Distributed Information Management System' for the high-energy physics community. The Web was originally conceived and developed to meet the demand for information sharing between scientists working all over the world. There were many obstacles in the 1980s to the effective exchange of information. There was, for example, a great variety of computer and network systems, with hardly any common features. The main purpose of the Web was to allow scientists to access information from any source in a consistent and simple way. By Christmas 1990, Berners-Lee's idea had become the World Wide Web, with its first server and browser running at CERN. Through 1991, the Web spread to other particle physics ...

  6. A Survey on Semantic Web Search Engine

    Directory of Open Access Journals (Sweden)

    G.Sudeepthi

    2012-03-01

    Full Text Available With the tremendous growth in the volume of data and the terrific growth in the number of web pages, traditional search engines are nowadays no longer appropriate or suitable. The search engine is the most important tool for discovering information on the World Wide Web, and the semantic search engine was born of the traditional search engine to overcome this problem. The Semantic Web is an extension of the current web in which information is given well-defined meaning. Semantic web technologies play a crucial role in enhancing traditional web search, as they work to create machine-readable data, but they will not replace the traditional search engine. In this paper we present a brief survey of the promising features of some of the best semantic search engines developed so far, and we discuss the various approaches to semantic search. We summarize the techniques and advantages of some important semantic web search engines developed to date. The most prominent part is how semantic search engines differ from traditional searches; their results are shown by giving a sample query as input.
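
    As a small illustration of machine-readable data, the sketch below stores
    two RDF triples and answers a structured SPARQL query with the rdflib
    library; the vocabulary and facts are invented for the example:

      from rdflib import Graph

      TURTLE = """
      @prefix ex: <http://example.org/> .
      ex:TimBL  ex:invented  ex:WorldWideWeb .
      ex:WorldWideWeb  ex:launchedIn  "1990" .
      """

      graph = Graph()
      graph.parse(data=TURTLE, format="turtle")

      # A structured question rather than a keyword match.
      query = """
      PREFIX ex: <http://example.org/>
      SELECT ?who WHERE { ?who ex:invented ex:WorldWideWeb . }
      """
      for row in graph.query(query):
          print(row.who)                       # -> http://example.org/TimBL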

  7. Insect symbionts in food webs

    Science.gov (United States)

    Henry, Lee M.

    2016-01-01

    Recent research has shown that the bacterial endosymbionts of insects are abundant and diverse, and that they have numerous different effects on their hosts' biology. Here we explore how insect endosymbionts might affect the structure and dynamics of insect communities. Using the obligate and facultative symbionts of aphids as an example, we find that there are multiple ways that symbiont presence might affect food web structure. Many symbionts are now known to help their hosts escape or resist natural enemy attack, and others can allow their hosts to withstand abiotic stress or affect host plant use. In addition to the direct effect of symbionts on aphid phenotypes there may be indirect effects mediated through trophic and non-trophic community interactions. We believe that by using data from barcoding studies to identify bacterial symbionts, this extra, microbial dimension to insect food webs can be better elucidated. This article is part of the themed issue ‘From DNA barcodes to biomes’. PMID:27481779

  8. A Short History of Designing for Communication on the Web

    DEFF Research Database (Denmark)

    Heilesen, Simon

    2007-01-01

    Web design is important for how we communicate on the internet, and it also has an influence on computer interface design in general. Taking a very literal view of the theme of 'designing for communication', this chapter examines the development of web design as a prerequisite for understanding what it has become today, and it concludes by offering some reflections on the future of web design. In the first part of the chapter, the history of web design is outlined in terms of the complex interplay of various social, cultural, economic, technological and communicative factors. This section concludes with the presentation of a framework for web design that allows for – if not actually reconciles – the many existing approaches to the subject. In the second part of the chapter it is suggested that web design, as it has developed so far, may be facing major changes as the requirements of users...

  9. Functional webs for freeform architecture

    KAUST Repository

    Deng, Bailin

    2011-08-01

    Rationalization and construction-aware design dominate the issue of realizability of freeform architecture. The former means the decomposition of an intended shape into parts which are sufficiently simple and efficient to manufacture; the latter refers to a design procedure which already incorporates rationalization. Recent contributions to this topic have been concerned mostly with small-scale parts, for instance with planar faces of meshes. The present paper deals with another important aspect, namely long-range parts and supporting structures. It turns out that from the pure geometry viewpoint this means studying families of curves which cover surfaces in certain well-defined ways. Depending on the application one has in mind, different combinatorial arrangements of curves are required. We here restrict ourselves to so-called hexagonal webs which correspond to a triangular or tri-hex decomposition of a surface. The individual curve may have certain special properties, like being planar, being a geodesic, or being part of a circle. Each of these properties is motivated by manufacturability considerations and imposes constraints on the shape of the surface. We investigate the available degrees of freedom, show numerical methods of optimization, and demonstrate the effectiveness of our approach and the variability of construction solutions derived from webs by means of actual architectural designs.

  10. Engineering Web Applications

    DEFF Research Database (Denmark)

    Casteleyn, Sven; Daniel, Florian; Dolog, Peter

    Nowadays, Web applications are almost omnipresent. The Web has become a platform not only for information delivery, but also for eCommerce systems, social networks, mobile services, and distributed learning environments. Engineering Web applications involves many intrinsic challenges due to their distributed nature, content orientation, and the requirement to make them available to a wide spectrum of users who are unknown in advance. The authors discuss these challenges in the context of well-established engineering processes, covering the whole product lifecycle from requirements engineering through...

  11. Metadata and the Web

    Directory of Open Access Journals (Sweden)

    Mehdi Safari

    2004-12-01

    Full Text Available The rapid increase in the number and variety of resources on the World Wide Web has made the problem of resource description and discovery central to discussions about the efficiency and evolution of this medium. The inappropriateness of traditional schemas of resource description for web resources has recently encouraged significant activity on defining web-compatible schemas, named "metadata". While conceptually old for library and information professionals, metadata has taken on a more significant and paramount role than ever before, and is considered the golden key for the next evolution of the web in the form of the semantic web. This article is intended to be a brief introduction to metadata and presents an overview of its use on the web.
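
    A small sketch of one widely used web-compatible metadata schema, Dublin
    Core, expressed as RDF with the rdflib library (the described resource is
    hypothetical):

      from rdflib import Graph, Literal, URIRef
      from rdflib.namespace import DC

      doc = URIRef("http://example.org/report.html")   # hypothetical resource

      graph = Graph()
      graph.add((doc, DC.title, Literal("Annual Report")))
      graph.add((doc, DC.creator, Literal("Example Organization")))
      graph.add((doc, DC.date, Literal("2004-12-01")))

      # Serialize the description so other systems can discover the resource.
      print(graph.serialize(format="turtle"))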

  12. Information Extraction on the Web with Credibility Guarantee

    OpenAIRE

    Nguyen, Thanh Tam

    2015-01-01

    The Web has become the central medium for the valuable sources that feed information extraction applications. However, such user-generated resources are often plagued by inaccuracies and misinformation due to the inherent openness and uncertainty of the Web. In this work we study the problem of extracting structured information out of Web data with a credibility guarantee. The ultimate goal is not only to extract as much structured information as possible, but also to ensure that its credibility is high. ...
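
    One simple way to picture a credibility guarantee is weighted voting over
    conflicting extracted values. The sketch below illustrates that general
    idea only; it is not the method developed in this work:

      # Pick the value backed by the most trustworthy sources and report
      # the fraction of total trust supporting it as its credibility.
      from collections import defaultdict

      def credible_value(claims, trust):
          """claims: (source, value) pairs; trust: source -> weight in [0, 1]."""
          score = defaultdict(float)
          for source, value in claims:
              score[value] += trust.get(source, 0.5)   # unknown sources get 0.5
          best = max(score, key=score.get)
          return best, score[best] / sum(score.values())

      claims = [("siteA", "Paris"), ("siteB", "Paris"), ("siteC", "Lyon")]
      trust = {"siteA": 0.9, "siteB": 0.6, "siteC": 0.2}
      print(credible_value(claims, trust))             # -> ('Paris', 0.88...)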

  13. Web Services and Their Use in Starlink Software

    Science.gov (United States)

    Taylor, M.; Platon, R.; Chipperfield, A.; Draper, P.; McIlwrath, B.; Giaretta, D.

    Web Services are gaining great popularity in the Grid community, and with good reason. The Starlink project is adopting Web Services as the method of interapplication communication. This is being done natively in new Java-based applications while older applications are being wrapped to provide Web Service interfaces. We are in this way providing interoperability between the generations of software in a heterogeneous, distributed manner and allowing the software to be usable in a distributed environment such as the GRID.
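
    The wrapping step can be pictured in a few lines of Python, using the
    standard library's XML-RPC as a stand-in for the SOAP-style Web Services
    that Starlink adopted; the routine being wrapped is hypothetical:

      # Expose an existing, non-networked routine behind a web-service interface.
      from xmlrpc.server import SimpleXMLRPCServer

      def legacy_stats(values):
          """Stand-in for a routine from an older application."""
          return {"n": len(values), "mean": sum(values) / len(values)}

      server = SimpleXMLRPCServer(("localhost", 8000), allow_none=True)
      server.register_function(legacy_stats, "legacy_stats")
      print("Serving legacy_stats on http://localhost:8000 ...")
      server.serve_forever()

    A client on any machine that can reach the server may then call
    xmlrpc.client.ServerProxy("http://localhost:8000").legacy_stats([1.0, 2.0]).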

  14. What Web 2.0 Means to Facilities Professionals

    Science.gov (United States)

    Allen, Scott

    2008-01-01

    It's official--the Web is now social. Actually, it has always been social to a degree, but now it's "mostly" social. A lot of terms have been coined or adopted to describe various aspects of this phenomenon--social media, social networking, consumer-generated media (CGM) and Web 2.0. While it is hard to define "exactly" what Web 2.0 is, or when…

  15. Using Open Web APIs in Teaching Web Mining

    Science.gov (United States)

    Chen, Hsinchun; Li, Xin; Chau, M.; Ho, Yi-Jen; Tseng, Chunju

    2009-01-01

    With the advent of the World Wide Web, many business applications that utilize data mining and text mining techniques to extract useful business information on the Web have evolved from Web searching to Web mining. It is important for students to acquire knowledge and hands-on experience in Web mining during their education in information systems…

  16. Web Service Development

    Institute of Scientific and Technical Information of China (English)

    张彬桥; 吴成明

    2007-01-01

    Taking an actual project as an example, this article describes the complete development process of a Web Service under the Axis framework in J2EE, including how Web Services are written and deployed under Axis and how XML is manipulated using JDOM, and it provides reference code for invoking the Web Service from a JSP page acting as the client.

  17. Web Security Testing Cookbook

    CERN Document Server

    Hope, Paco

    2008-01-01

    Among the tests you perform on web applications, security testing is perhaps the most important, yet it's often the most neglected. The recipes in the Web Security Testing Cookbook demonstrate how developers and testers can check for the most common web security issues, while conducting unit tests, regression tests, or exploratory tests. Unlike ad hoc security assessments, these recipes are repeatable, concise, and systematic, perfect for integrating into your regular test suite.
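
    A minimal example of such a repeatable, systematic check (not a recipe from
    the book; the target URL and parameter are hypothetical):

      # Unit test probing for reflected cross-site scripting (XSS).
      import requests

      def test_search_box_escapes_script_tags():
          payload = "<script>alert(1)</script>"
          response = requests.get(
              "http://localhost:8080/search",  # application under test (assumed)
              params={"q": payload},
              timeout=10,
          )
          # If the raw payload comes back unescaped, the page is likely vulnerable.
          assert payload not in response.text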

  18. Creating Web Pages Simplified

    CERN Document Server

    Wooldridge, Mike

    2011-01-01

    The easiest way to learn how to create a Web page for your family or organization Do you want to share photos and family lore with relatives far away? Have you been put in charge of communication for your neighborhood group or nonprofit organization? A Web page is the way to get the word out, and Creating Web Pages Simplified offers an easy, visual way to learn how to build one. Full-color illustrations and concise instructions take you through all phases of Web publishing, from laying out and formatting text to enlivening pages with graphics and animation. This easy-to-follow visual guide sho...

  19. An introduction to webs

    Science.gov (United States)

    White, C. D.

    2016-04-01

    Webs are sets of Feynman diagrams that contribute to the exponents of scattering amplitudes, in the kinematic limit in which emitted radiation is soft. As such, they have a number of phenomenological and formal applications, and offer tantalizing glimpses into the all-order structure of perturbative quantum field theory. This article is based on a series of lectures given to graduate students, and aims to provide a pedagogical introduction to webs. Topics covered include exponentiation in (non-)abelian gauge theories, the web mixing matrix formalism for non-abelian gauge theories, and recent progress on the calculation of web diagrams. Problems are included throughout the text, to aid understanding.
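
    A hedged illustration of the exponentiation the lectures describe (the
    standard statement for amplitudes with two hard eikonal lines, not a
    formula quoted from this article): writing each eikonal diagram D with
    kinematic part F(D), the amplitude can be organized as

      \mathcal{A} = \mathcal{A}_0 \, \exp\!\left( \sum_{D} \bar{C}(D)\, \mathcal{F}(D) \right)

    where \bar{C}(D) is the modified ("web") colour factor of D. Diagrams with
    \bar{C}(D) \neq 0 are the webs; in the abelian case only connected photon
    subdiagrams survive in the exponent, recovering the classic result that
    disconnected soft-photon exchanges simply exponentiate.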