WorldWideScience

Sample records for web engineering models

  1. The Little Engines That Could: Modeling the Performance of World Wide Web Search Engines

    OpenAIRE

    Eric T. Bradlow; David C. Schmittlein

    2000-01-01

    This research examines the ability of six popular Web search engines, individually and collectively, to locate Web pages containing common marketing/management phrases. We propose and validate a model for search engine performance that is able to represent key patterns of coverage and overlap among the engines. The model enables us to estimate the typical additional benefit of using multiple search engines, depending on the particular set of engines being considered. It also provides an estim...
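A toy sketch can illustrate the coverage-and-overlap idea behind such a model (hypothetical engine index sets and page ids, not the authors' estimated model):

```python
# Toy illustration of search-engine coverage, overlap, and the marginal
# benefit of adding engines (hypothetical data, not the paper's model).
ENGINE_COVERAGE = {  # hypothetical: page ids each engine indexes
    "A": {1, 2, 3, 4, 5},
    "B": {3, 4, 5, 6},
    "C": {5, 6, 7},
}

def combined_coverage(engines):
    """Number of distinct pages found by querying all given engines."""
    found = set()
    for name in engines:
        found |= ENGINE_COVERAGE[name]
    return len(found)

def marginal_benefit(base, extra):
    """Additional pages gained by adding engine `extra` to the set `base`."""
    return combined_coverage(list(base) + [extra]) - combined_coverage(base)

print(combined_coverage(["A"]))        # 5 pages from engine A alone
print(marginal_benefit(["A"], "B"))    # only 1 new page: B overlaps A heavily
```

The overlap structure is exactly what makes the "typical additional benefit" of a second engine much smaller than its raw coverage.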

  2. Engineering Web Applications

    DEFF Research Database (Denmark)

    Casteleyn, Sven; Daniel, Florian; Dolog, Peter

    Nowadays, Web applications are almost omnipresent. The Web has become a platform not only for information delivery, but also for eCommerce systems, social networks, mobile services, and distributed learning environments. Engineering Web applications involves many intrinsic challenges due to their distributed nature, content orientation, and the requirement to make them available to a wide spectrum of users who are unknown in advance. The authors discuss these challenges in the context of well-established engineering processes, covering the whole product lifecycle from requirements engineering through design and implementation to deployment and maintenance. They stress the importance of models in Web application development, and they compare well-known Web-specific development processes like WebML, WSDM and OOHDM to traditional software development approaches like the waterfall model and the spiral...

  3. Designing a Pedagogical Model for Web Engineering Education: An Evolutionary Perspective

    Science.gov (United States)

    Hadjerrouit, Said

    2005-01-01

    In contrast to software engineering, which relies on relatively well established development approaches, there is a lack of a proven methodology that guides Web engineers in building reliable and effective Web-based systems. Currently, Web engineering lacks process models, architectures, suitable techniques and methods, quality assurance, and a…

  4. Dynamics of a macroscopic model characterizing mutualism of search engines and web sites

    Science.gov (United States)

    Wang, Yuanshi; Wu, Hong

    2006-05-01

    We present a model to describe the mutualism relationship between search engines and web sites. In the model, search engines and web sites benefit from each other while the search engines are derived products of the web sites and cannot survive independently. Our goal is to show strategies for the search engines to survive in the internet market. From mathematical analysis of the model, we show that mutualism does not always result in survival. We show various conditions under which the search engines would tend to extinction, persist or grow explosively. Then by the conditions, we deduce a series of strategies for the search engines to survive in the internet market. We present conditions under which the initial number of consumers of the search engines has little contribution to their persistence, which is in agreement with the results in previous works. Furthermore, we show novel conditions under which the initial value plays an important role in the persistence of the search engines and deduce new strategies. We also give suggestions for the web sites to cooperate with the search engines in order to form a win-win situation.
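The class of dynamics described can be sketched with a generic obligate-mutualism system (illustrative equations and parameters chosen for a stable coexistence; the authors' actual model may differ):

```python
# Generic obligate-mutualism dynamics in the spirit of the abstract
# (illustrative equations and parameters, not the authors' model):
#   dx/dt = x(-a + b*y - c*x)   engines x: decline without sites y
#   dy/dt = y( r - d*y + e*x)   sites y: grow on their own, helped by x
def simulate(x0, y0, a=1.0, b=0.5, c=0.1, r=1.0, d=0.3, e=0.05,
             dt=0.01, steps=100_000):
    """Forward-Euler integration of the two-species system."""
    x, y = x0, y0
    for _ in range(steps):
        dx = x * (-a + b * y - c * x)
        dy = y * (r - d * y + e * x)
        x = max(x + dt * dx, 0.0)
        y = max(y + dt * dy, 0.0)
    return x, y

x, y = simulate(1.0, 1.0)
print(round(x, 2), round(y, 2))   # → 40.0 10.0, the coexistence equilibrium
```

With these parameters the engines initially decline (b*y < a) but recover once the sites grow; making the mutualism term too strong (b*e > c*d) instead produces the explosive growth regime mentioned in the abstract.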

  5. Web Engineering

    Energy Technology Data Exchange (ETDEWEB)

    White, Bebo

    2003-06-23

    Web Engineering is the application of systematic, disciplined and quantifiable approaches to development, operation, and maintenance of Web-based applications. It is both a pro-active approach and a growing collection of theoretical and empirical research in Web application development. This paper gives an overview of Web Engineering by addressing the questions: (a) why is it needed? (b) what is its domain of operation? (c) how does it help and what should it do to improve Web application development? and (d) how should it be incorporated in education and training? The paper discusses the significant differences that exist between Web applications and conventional software, the taxonomy of Web applications, the progress made so far and the research issues and experience of creating a specialization at the master's level. The paper reaches a conclusion that Web Engineering at this stage is a moving target since Web technologies are constantly evolving, making new types of applications possible, which in turn may require innovations in how they are built, deployed and maintained.

  6. 25 Years of Model-Driven Web Engineering: What we achieved, What is missing

    Directory of Open Access Journals (Sweden)

    Gustavo Rossi

    2016-12-01

    Model-Driven Web Engineering (MDWE) approaches aim to improve the Web application development process by focusing on modeling instead of coding, and deriving the running application by transformations from conceptual models to code. The emergence of the Interaction Flow Modeling Language (IFML) has been an important milestone in the evolution of Web modeling languages, indicating not only the maturity of the field but also a final convergence of languages. In this paper we explain the evolution of modeling and design approaches since the early years (in the 90's), detailing the forces which drove that evolution and discussing the strengths and weaknesses of some of those approaches. A brief presentation of IFML is accompanied by a thorough analysis of the most important achievements of the MDWE community, as well as the problems and obstacles that hinder the dissemination of model-driven techniques in the Web engineering field.

  7. A development process meta-model for Web based expert systems: The Web engineering point of view

    DEFF Research Database (Denmark)

    Dokas, I.M.; Alapetite, Alexandre

    2006-01-01

    Similar to many legacy computer systems, expert systems can be accessed via the Web, forming a set of Web applications known as Web based expert systems. The tough Web competition, the way people and organizations rely on Web applications and the increasing user requirements for better services have raised their complexity. Unfortunately, there is so far no clear answer to the question: How may the methods and experience of Web engineering and expert systems be combined and applied in order to develop effective and successful Web based expert systems? In an attempt to answer this question... on Web based expert systems – will be presented. The idea behind the presentation of the accessibility evaluation and its conclusions is to show Web based expert system developers, who typically have little Web engineering background, that Web engineering issues must be considered when developing Web...

  8. Introducing Model-Based System Engineering Transforming System Engineering through Model-Based Systems Engineering

    Science.gov (United States)

    2014-03-31

    ...Web Presentation Software... Published Web Page from Data Collection... the term Model Based Engineering (MBE), Model Driven Engineering (MDE), or Model-Based Systems...

  9. Changes in users' mental models of Web search engines after ten ...

    African Journals Online (AJOL)

    Ward's cluster analyses, including the pseudo T² statistic, were used to determine the mental model clusters for the seventeen salient design features of Web search engines at each time point. The cubic clustering criterion (CCC) and the dendrogram were examined for each sample to help determine the number ...

  10. Web Search Engines

    OpenAIRE

    Rajashekar, TB

    1998-01-01

    The World Wide Web is emerging as an all-in-one information source. Tools for searching Web-based information include search engines, subject directories and meta search tools. We take a look at key features of these tools and suggest practical hints for effective Web searching.

  11. Semantic Web and Model-Driven Engineering

    CERN Document Server

    Parreiras, Fernando S

    2012-01-01

    The next enterprise computing era will rely on the synergy between both technologies: semantic web and model-driven software development (MDSD). The semantic web organizes system knowledge in conceptual domains according to its meaning. It addresses various enterprise computing needs by identifying, abstracting and rationalizing commonalities, and checking for inconsistencies across system specifications. On the other side, model-driven software development is closing the gap among business requirements, designs and executables by using domain-specific languages with custom-built syntax and se

  12. Development of Web-Based Learning Environment Model to Enhance Cognitive Skills for Undergraduate Students in the Field of Electrical Engineering

    Science.gov (United States)

    Lakonpol, Thongmee; Ruangsuwan, Chaiyot; Terdtoon, Pradit

    2015-01-01

    This research aimed to develop a web-based learning environment model for enhancing cognitive skills of undergraduate students in the field of electrical engineering. The research is divided into 4 phases: 1) investigating the current status and requirements of web-based learning environment models; 2) developing a web-based learning environment…

  13. Web Spam, Social Propaganda and the Evolution of Search Engine Rankings

    Science.gov (United States)

    Metaxas, Panagiotis Takis

    Search Engines have greatly influenced the way we experience the web. Since the early days of the web, users have been relying on them to get informed and make decisions. When the web was relatively small, web directories were built and maintained using human experts to screen and categorize pages according to their characteristics. By the mid 1990's, however, it was apparent that the human expert model of categorizing web pages does not scale. The first search engines appeared and they have been evolving ever since, taking over the role that web directories used to play.

  14. A Novel Personalized Web Search Model

    Institute of Scientific and Technical Information of China (English)

    ZHU Zhengyu; XU Jingqiu; TIAN Yunyan; REN Xiang

    2007-01-01

    A novel personalized Web search model is proposed. The new system, as a middleware between a user and a Web search engine, is set up on the client machine. It can learn a user's preference implicitly and then generate the user profile automatically. When the user inputs query keywords, the system can automatically generate a few personalized expansion words by computing the term-term associations according to the current user profile, and then these words together with the query keywords are submitted to a popular search engine such as Yahoo or Google. These expansion words help to express accurately the user's search intention. The new Web search model can make a common search engine personalized, that is, the search engine can return different search results to different users who input the same keywords. The experimental results show the feasibility and applicability of the presented work.
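The expansion step can be sketched minimally, assuming a profile of documents and simple pair co-occurrence counts (illustrative only; the paper's actual association measure is not specified here):

```python
# Sketch of query expansion from a user profile via term-term co-occurrence
# (hypothetical profile documents; a simplification of the described model).
from collections import Counter
from itertools import combinations

profile_docs = [                       # documents the user showed interest in
    "python web framework tutorial",
    "python flask web application",
    "machine learning with python",
]

# Count how often each term pair co-occurs within a profile document.
cooc = Counter()
for doc in profile_docs:
    terms = set(doc.split())
    for a, b in combinations(sorted(terms), 2):
        cooc[(a, b)] += 1

def expand(query, k=2):
    """Return up to k expansion terms most associated with the query terms."""
    scores = Counter()
    for (a, b), n in cooc.items():
        if a in query and b not in query:
            scores[b] += n
        elif b in query and a not in query:
            scores[a] += n
    return [t for t, _ in scores.most_common(k)]

print(expand({"python"}, k=1))   # ['web'] — co-occurs with 'python' twice
```

The expansion terms would then be appended to the original keywords before submitting the query to the external engine.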

  15. Web document engineering

    International Nuclear Information System (INIS)

    White, B.

    1996-05-01

    This tutorial provides an overview of several document engineering techniques which are applicable to the authoring of World Wide Web documents. It illustrates how pre-WWW hypertext research is applicable to the development of WWW information resources

  16. Engineering the presentation layer of adaptable web information systems

    NARCIS (Netherlands)

    Fiala, Z.; Frasincar, F.; Hinz, M.; Houben, G.J.P.M.; Barna, P.; Meissner, K.; Koch, N.; Fraternali, P.; Wirsing, M.

    2004-01-01

    Engineering adaptable Web Information Systems (WIS) requires systematic design models and specification frameworks. A complete model-driven methodology like Hera distinguishes between the conceptual, navigational, and presentational aspects of WIS design and identifies different adaptation hot-spots

  17. Competence Centered Specialization in Web Engineering Topics in a Software Engineering Masters Degree Programme

    DEFF Research Database (Denmark)

    Dolog, Peter; Thomsen, Lone Leth; Thomsen, Bent

    2010-01-01

    Web applications and Web-based systems are becoming increasingly complex as a result of either customer requests or technology evolution which has eased other aspects of software engineering. Therefore, there is an increasing demand for highly skilled software engineers able to build and also advance the systems on the one hand, as well as professionals who are able to evaluate their effectiveness on the other hand. With this idea in mind, the computer science department at Aalborg University is continuously working on improvements in its specialization in web engineering topics as well as on general competence based web engineering profiles offered also for those who specialize in other areas of software engineering. We describe the current state of the art and our experience with a web engineering curriculum within the software engineering masters degree programme. We also discuss an evolution...

  18. The Use of Web Search Engines in Information Science Research.

    Science.gov (United States)

    Bar-Ilan, Judit

    2004-01-01

    Reviews the literature on the use of Web search engines in information science research, including: ways users interact with Web search engines; social aspects of searching; structure and dynamic nature of the Web; link analysis; other bibliometric applications; characterizing information on the Web; search engine evaluation and improvement; and…

  19. Adding a visualization feature to web search engines: it's time.

    Science.gov (United States)

    Wong, Pak Chung

    2008-01-01

    It's widely recognized that all Web search engines today are almost identical in presentation layout and behavior. In fact, the same presentation approach has been applied to depicting search engine results pages (SERPs) since the first Web search engine launched in 1993. In this Visualization Viewpoints article, I propose to add a visualization feature to Web search engines and suggest that the new addition can improve search engines' performance and capabilities, which in turn lead to better Web search technology.

  20. Engineering Adaptive Web Applications

    DEFF Research Database (Denmark)

    Dolog, Peter

    2007-01-01

    Information and services on the web are accessible for everyone. Users of the web differ in their background, culture, political and social environment, interests and so on. Ambient intelligence was envisioned as a concept for systems which are able to adapt to user actions and needs. With the growing amount of information and services, web applications become natural candidates to adopt the concepts of ambient intelligence. Such applications can deal with diverse user intentions and actions based on the user profile and can suggest the combination of information content and services which suit the user profile the most. This paper summarizes the domain engineering framework for such adaptive web applications. The framework provides guidelines to develop adaptive web applications as members of a family. It suggests how to utilize the design artifacts as knowledge which can be used...

  1. Security and computer forensics in web engineering education

    OpenAIRE

    Glisson, W.; Welland, R.; Glisson, L.M.

    2010-01-01

    The integration of security and forensics into Web Engineering curricula is imperative! Poor security in web-based applications is continuing to cost organizations millions and the losses are still increasing annually. Security is frequently taught as a stand-alone course, assuming that security can be 'bolted on' to a web application at some point. Security issues must be integrated into Web Engineering processes right from the beginning to create secure solutions and therefore security shou...

  2. Model Driven Engineering

    Science.gov (United States)

    Gaševic, Dragan; Djuric, Dragan; Devedžic, Vladan

    A relevant initiative from the software engineering community called Model Driven Engineering (MDE) is being developed in parallel with the Semantic Web (Mellor et al. 2003a). The MDE approach to software development suggests that one should first develop a model of the system under study, which is then transformed into the real thing (i.e., an executable software entity). The most important research initiative in this area is the Model Driven Architecture (MDA), which is being developed under the umbrella of the Object Management Group (OMG). This chapter describes the basic concepts of this software engineering effort.

  3. Integrating ecosystem engineering and food webs

    NARCIS (Netherlands)

    Sanders, Dirk; Jones, Clive G.; Thebault, Elisa; Bouma, Tjeerd J.; van der Heide, Tjisse; van Belzen, Jim; Barot, Sebastien

    Ecosystem engineering, the physical modification of the environment by organisms, is a common and often influential process whose significance to food web structure and dynamics is largely unknown. In the light of recent calls to expand food web studies to include non-trophic interactions, we

  5. WebVR——Web Virtual Reality Engine Based on P2P network

    OpenAIRE

    zhihan LV; Tengfei Yin; Yong Han; Yong Chen; Ge Chen

    2011-01-01

    WebVR, a multi-user online virtual reality engine, is introduced. The main contributions are mapping the geographical space and virtual space to the P2P overlay network space, and dividing the three spaces by the quad-tree method. The geocoding is identified with a hash value, which is used to index the user list, terrain data, and the model object data. Sharing of data through an improved Kademlia network model is designed and implemented. In this model, the XOR algorithm is used to calculate the distanc...
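The XOR distance mentioned at the end of the abstract can be sketched as follows (small integer IDs stand in for real Kademlia 160-bit hashes):

```python
# Kademlia-style XOR distance, as referenced in the abstract (minimal sketch;
# node/data IDs here are small integers rather than 160-bit hashes).
def xor_distance(a: int, b: int) -> int:
    """Distance between two IDs in Kademlia's metric: bitwise XOR."""
    return a ^ b

def closest_nodes(target: int, nodes, k=2):
    """The k nodes whose IDs are XOR-closest to `target` (one lookup step)."""
    return sorted(nodes, key=lambda n: xor_distance(n, target))[:k]

nodes = [0b0001, 0b0011, 0b1000, 0b1110]
print(closest_nodes(0b0010, nodes))   # [3, 1]: XOR distances 1 and 3
```

Because the metric is just XOR, IDs sharing a long common bit prefix are close, which is what lets a quad-tree geocoding hash map nearby regions to nearby overlay nodes.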

  6. BPELPower—A BPEL execution engine for geospatial web services

    Science.gov (United States)

    Yu, Genong (Eugene); Zhao, Peisheng; Di, Liping; Chen, Aijun; Deng, Meixia; Bai, Yuqi

    2012-10-01

    The Business Process Execution Language (BPEL) has become a popular choice for orchestrating and executing workflows in the Web environment. As one special kind of scientific workflow, geospatial Web processing workflows are data-intensive, deal with complex structures in data and geographic features, and execute automatically with limited human intervention. To enable the proper execution and coordination of geospatial workflows, a specially enhanced BPEL execution engine is required. BPELPower was designed, developed, and implemented as a generic BPEL execution engine with enhancements for executing geospatial workflows. The enhancements are especially in its capabilities in handling Geography Markup Language (GML) and standard geospatial Web services, such as the Web Processing Service (WPS) and the Web Feature Service (WFS). BPELPower has been used in several demonstrations over the decade. Two scenarios were discussed in detail to demonstrate the capabilities of BPELPower. That study showed a standard-compliant, Web-based approach for properly supporting geospatial processing, with the only enhancement at the implementation level. Pattern-based evaluation and performance improvement of the engine are discussed: BPELPower directly supports 22 workflow control patterns and 17 workflow data patterns. In the future, the engine will be enhanced with high performance parallel processing and broad Web paradigms.

  7. Comparison of Physics Frameworks for WebGL-Based Game Engine

    Directory of Open Access Journals (Sweden)

    Yogya Resa

    2014-03-01

    Recently, a new technology called WebGL has shown considerable potential for developing games. However, since this technology is still new, many possibilities in the game development area remain unexplored. This paper tries to uncover the potential of integrating physics frameworks with WebGL technology in a game engine for developing 2D or 3D games. Specifically, we integrated three open source physics frameworks: Bullet, Cannon, and JigLib into a WebGL-based game engine. Through experiments, we assessed these frameworks in terms of their correctness or accuracy, performance, completeness and compatibility. The results show that it is possible to integrate open source physics frameworks into a WebGL-based game engine, and Bullet is the best physics framework to be integrated into the WebGL-based game engine.

  8. Knowledge engineering in a temporal symantic web context

    NARCIS (Netherlands)

    Milea, D.V.; Frasincar, F.; Kaymak, U.; Schwabe, D.; Curbera, F.; Dantzig, P.

    2008-01-01

    The emergence of Web 2.0 and the semantic Web as established technologies is fostering a whole new breed of Web applications and systems. These are often centered around knowledge engineering and context awareness. However, adequate temporal formalisms underlying context awareness are currently

  9. Sexual information seeking on web search engines.

    Science.gov (United States)

    Spink, Amanda; Koricich, Andrew; Jansen, B J; Cole, Charles

    2004-02-01

    Sexual information seeking is an important element within human information behavior. Seeking sexually related information on the Internet takes many forms and channels, including chat room discussions, accessing Websites or searching Web search engines for sexual materials. The study of sexual Web queries provides insight into sexually-related information-seeking behavior, of value to Web users and providers alike. We qualitatively analyzed queries from logs of 1,025,910 Alta Vista and AlltheWeb.com Web user queries from 2001. We compared the differences in sexually-related Web searching between Alta Vista and AlltheWeb.com users. Differences were found in session duration, query outcomes, and search term choices. Implications of the findings for sexual information seeking are discussed.

  10. A study of medical and health queries to web search engines.

    Science.gov (United States)

    Spink, Amanda; Yang, Yin; Jansen, Jim; Nykanen, Pirrko; Lorence, Daniel P; Ozmutlu, Seda; Ozmutlu, H Cenk

    2004-03-01

    This paper reports findings from an analysis of medical or health queries to different web search engines. We report results: (i) comparing samples of 10,000 web queries taken randomly from 1.2 million query logs from the AlltheWeb.com and Excite.com commercial web search engines in 2001 for medical or health queries; (ii) comparing the 2001 findings from Excite and AlltheWeb.com users with results from a previous analysis of medical and health related queries from the Excite Web search engine for 1997 and 1999; and (iii) medical or health advice-seeking queries beginning with the word 'should'. Findings suggest: (i) a small percentage of web queries are medical or health related; (ii) the top five categories of medical or health queries were: general health, weight issues, reproductive health and puberty, pregnancy/obstetrics, and human relationships; and (iii) over time, the medical and health queries may have declined as a proportion of all web queries, as the use of specialized medical/health websites and e-commerce-related queries has increased. Findings provide insights into medical and health-related web querying and suggest some implications for the use of general web search engines when seeking medical/health information.
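A crude sketch of flagging medical/health queries in a log by keyword matching (a deliberate simplification; the study's actual coding was qualitative and far richer):

```python
# Toy classifier: flag queries containing hypothetical health-related terms,
# then compute the health share of a (made-up) query log.
HEALTH_TERMS = {"health", "medical", "pregnancy", "diet", "symptom", "doctor"}

def is_health_query(query: str) -> bool:
    """True if any whitespace-separated token is a known health term."""
    return any(t in HEALTH_TERMS for t in query.lower().split())

log = [
    "best diet for weight loss",
    "python tutorial",
    "pregnancy symptoms week 6",
    "cheap flights",
]
share = sum(is_health_query(q) for q in log) / len(log)
print(f"{share:.0%}")   # 50% of this toy log
```

Exact-token matching like this undercounts (e.g. "symptoms" does not match "symptom"), which is one reason real studies code queries manually or with stemming.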

  11. F-OWL: An Inference Engine for Semantic Web

    Science.gov (United States)

    Zou, Youyong; Finin, Tim; Chen, Harry

    2004-01-01

    Understanding and using the data and knowledge encoded in semantic web documents requires an inference engine. F-OWL is an inference engine for the semantic web language OWL, based on F-logic, an approach to defining frame-based systems in logic. F-OWL is implemented using XSB and Flora-2 and takes full advantage of their features. We describe how F-OWL computes ontology entailment and compare it with other description logic based approaches. We also describe TAGA, a trading agent environment that we have used as a test bed for F-OWL and to explore how multiagent systems can use semantic web concepts and technology.
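The flavor of ontology entailment can be sketched as a tiny forward-chaining loop over RDF-style triples (F-OWL itself is F-logic running on XSB/Flora-2; this Python version only illustrates the idea with two RDFS-style rules):

```python
# Tiny forward-chaining sketch of two entailment rules: subclass
# transitivity and type inheritance (illustration only, not F-OWL).
triples = {
    ("Dog", "subClassOf", "Mammal"),
    ("Mammal", "subClassOf", "Animal"),
    ("rex", "type", "Dog"),
}

def entail(triples):
    """Apply the two rules repeatedly until no new facts appear (fixpoint)."""
    facts = set(triples)
    changed = True
    while changed:
        changed = False
        new = set()
        for (a, p1, b) in facts:
            for (c, p2, d) in facts:
                if p1 == p2 == "subClassOf" and b == c:
                    new.add((a, "subClassOf", d))    # transitivity
                if p1 == "type" and p2 == "subClassOf" and b == c:
                    new.add((a, "type", d))          # inheritance
        if not new <= facts:
            facts |= new
            changed = True
    return facts

print(("rex", "type", "Animal") in entail(triples))   # True
```

Real OWL entailment involves many more rules and description-logic reasoning, which is why engines like F-OWL build on a full logic-programming substrate rather than a naive fixpoint loop.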

  12. The invisible Web uncovering information sources search engines can't see

    CERN Document Server

    Sherman, Chris

    2001-01-01

    Enormous expanses of the Internet are unreachable with standard web search engines. This book provides the key to finding these hidden resources by identifying how to uncover and use invisible web resources. Mapping the invisible Web, when and how to use it, assessing the validity of the information, and the future of Web searching are topics covered in detail. Only 16 percent of Net-based information can be located using a general search engine. The other 84 percent is what is referred to as the invisible Web-made up of information stored in databases. Unlike pages on the visible Web, informa

  13. Specification framework for engineering adaptive web applications

    NARCIS (Netherlands)

    Frasincar, F.; Houben, G.J.P.M.; Vdovják, R.

    2002-01-01

    The growing demand for data-driven Web applications has led to the need for a structured and controlled approach to the engineering of such applications. Both designers and developers need a framework that in all stages of the engineering process allows them to specify the relevant aspects of the

  14. A Web-based modeling tool for the SEMAT Essence theory of software engineering

    Directory of Open Access Journals (Sweden)

    Daniel Graziotin

    2013-09-01

    As opposed to more mature subjects, software engineering lacks general theories that establish its foundations as a discipline. The Essence Theory of software engineering (Essence) has been proposed by the Software Engineering Methods and Theory (SEMAT) initiative. The goal of Essence is to develop a theoretically sound basis for software engineering practice and its wide adoption. However, Essence is far from reaching academic- and industry-wide adoption. The reasons for this include a struggle to foresee its utilization potential and a lack of tools for implementation. SEMAT Accelerator (SematAcc) is a Web-positioning tool for a software engineering endeavor, which implements the SEMAT Essence kernel. SematAcc permits the use of Essence, thus helping to understand it. The tool enables the teaching, adoption, and research of Essence in controlled experiments and case studies.

  15. Using the open Web as an information resource and scholarly Web search engines as retrieval tools for academic and research purposes

    Directory of Open Access Journals (Sweden)

    Filistea Naude

    2010-08-01

    This study provided insight into the significance of the open Web as an information resource and Web search engines as research tools amongst academics. The academic staff establishment of the University of South Africa (Unisa) was invited to participate in a questionnaire survey and included 1188 staff members from five colleges. This study culminated in a PhD dissertation in 2008. One hundred and eighty seven respondents participated in the survey, which gave a response rate of 15.7%. The results of this study show that academics have indeed accepted the open Web as a useful information resource and Web search engines as retrieval tools when seeking information for academic and research work. The majority of respondents used the open Web and Web search engines on a daily or weekly basis to source academic and research information. The main obstacles presented by using the open Web and Web search engines included lack of time to search and browse the Web, information overload, poor network speed and the slow downloading speed of webpages.

  17. Categorization of web pages - Performance enhancement to search engine

    Digital Repository Service at National Institute of Oceanography (India)

    Lakshminarayana, S.

  18. A World Wide Web Region-Based Image Search Engine

    DEFF Research Database (Denmark)

    Kompatsiaris, Ioannis; Triantafyllou, Evangelia; Strintzis, Michael G.

    2001-01-01

    In this paper the development of an intelligent image content-based search engine for the World Wide Web is presented. This system will offer a new form of media representation and access to content available on the WWW. Information Web Crawlers continuously traverse the Internet and collect images...

  19. Classifying web genres in context: a case study documenting the web genres used by a software engineer

    NARCIS (Netherlands)

    Montesi, M.; Navarrete, T.

    2008-01-01

    This case study analyzes the Internet-based resources that a software engineer uses in his daily work. Methodologically, we studied the web browser history of the participant, classifying all the web pages he had seen over a period of 12 days into web genres. We interviewed him before and after the

  20. A unified architecture for biomedical search engines based on semantic web technologies.

    Science.gov (United States)

    Jalali, Vahid; Matash Borujerdi, Mohammad Reza

    2011-04-01

    The volume of published biomedical research has grown hugely in recent years. Many medical search engines have been designed and developed to address the ever-growing information needs of biomedical experts and curators. Significant progress has been made in utilizing the knowledge embedded in medical ontologies and controlled vocabularies to assist these engines. However, the lack of a common architecture for the utilized ontologies and the overall retrieval process hampers the evaluation of different search engines, and the interoperability between them, under unified conditions. In this paper, a unified architecture for medical search engines is introduced. The proposed model contains standard schemas, declared in semantic web languages, for the ontologies and documents used by search engines. Unified models for the annotation and retrieval processes are other parts of the introduced architecture. A sample search engine is also designed and implemented based on the proposed architecture. The search engine is evaluated using two test collections and results are reported in terms of precision vs. recall and mean average precision for different approaches used by this search engine.
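The reported metrics, precision vs. recall and mean average precision, follow standard definitions and can be computed as below (the ranked list and relevance judgments are made up):

```python
# Standard retrieval-evaluation metrics: precision, recall, and average
# precision for one query (MAP is the mean of AP over all queries).
def precision_recall(retrieved, relevant):
    hits = len(set(retrieved) & set(relevant))
    return hits / len(retrieved), hits / len(relevant)

def average_precision(retrieved, relevant):
    """Mean of precision values at each rank where a relevant doc appears."""
    hits, total = 0, 0.0
    for rank, doc in enumerate(retrieved, start=1):
        if doc in relevant:
            hits += 1
            total += hits / rank
    return total / len(relevant) if relevant else 0.0

retrieved = ["d1", "d4", "d2", "d7"]   # ranked engine output (hypothetical)
relevant = {"d1", "d2", "d3"}          # judged relevant in the collection

p, r = precision_recall(retrieved, relevant)
print(p, r)                                    # precision 0.5, recall 2/3
print(average_precision(retrieved, relevant))  # 5/9 ≈ 0.556
```

Average precision rewards placing relevant documents early in the ranking, which is why it is preferred over plain precision when comparing engines on test collections.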

  1. Virtual Reference Services through Web Search Engines: Study of Academic Libraries in Pakistan

    Directory of Open Access Journals (Sweden)

    Rubia Khan

    2017-03-01

    Web search engines (WSE) are powerful and popular tools in the field of information service management. This study examines the impact and usefulness of web search engines in providing virtual reference services (VRS) within academic libraries in Pakistan. It also investigates the expertise and skills of library professionals in efficiently providing digital reference services (DRS) using web search engines. The methodology is quantitative: data were collected from fifty public and private sector universities in Pakistan using a structured questionnaire, and Microsoft Excel and SPSS were used for data analysis. The study concludes that web search engines are commonly used by librarians to help users (especially research scholars) by providing digital reference services, and finds a positive correlation between the use of web search engines and the quality of digital reference services provided to library users. Although search engines have raised users' expectations and are serious competitors to a library's reference desk, they are not an alternative to reference service. Findings reveal that search engines pose numerous challenges for librarians, and the study brings together possible remedial measures. The study is useful for library professionals in understanding the importance of search engines in providing VRS, and it also provides an intellectual comparison of different search engines: their capabilities, limitations, challenges, and opportunities to provide VRS effectively in libraries.

  2. Web Feet Guide to Search Engines: Finding It on the Net.

    Science.gov (United States)

    Web Feet, 2001

    2001-01-01

    This guide to search engines for the World Wide Web discusses selecting the right search engine; interpreting search results; major search engines; online tutorials and guides; search engines for kids; specialized search tools for various subjects; and other specialized engines and gateways. (LRW)

  3. Semantic web for the working ontologist effective modeling in RDFS and OWL

    CERN Document Server

    Allemang, Dean

    2011-01-01

    Semantic Web models and technologies provide information in machine-readable languages that enable computers to access the Web more intelligently and perform tasks automatically without the direction of users. These technologies are relatively recent and advancing rapidly, creating a set of unique challenges for those developing applications. Semantic Web for the Working Ontologist is the essential, comprehensive resource on semantic modeling, for practitioners in health care, artificial intelligence, finance, engineering, military intelligence, enterprise architecture, and more. Focused on

  4. Adding a Visualization Feature to Web Search Engines: It’s Time

    Energy Technology Data Exchange (ETDEWEB)

    Wong, Pak C.

    2008-11-11

    Since the first world wide web (WWW) search engine quietly entered our lives in 1994, the “information need” behind web searching has rapidly grown into a multi-billion dollar business that dominates the internet landscape, drives e-commerce traffic, propels the global economy, and affects the lives of the whole human race. Today’s search engines are faster, smarter, and more powerful than those released just a few years ago. With the vast investment pouring into research and development by leading web technology providers and the intense emotion behind corporate slogans such as “win the web” or “take back the web,” I can’t help but ask: why are we still using the very same “text-only” interface that was used 13 years ago to browse our search engine results pages (SERPs)? Why has the SERP interface technology lagged so far behind in the web evolution when the corresponding search technology has advanced so rapidly? In this article I explore some current SERP interface issues, suggest a simple but practical visual-based interface design approach, and argue why a visual approach can be a strong candidate for tomorrow’s SERP interface.

  5. Comparing the Scale of Web Subject Directories Precision in Technical-Engineering Information Retrieval

    Directory of Open Access Journals (Sweden)

    Mehrdokht Wazirpour Keshmiri

    2012-07-01

    The main purpose of this research was to compare the precision of web subject directories in retrieving technical-engineering information. Data gathering was documentary and webometric. Keywords from the technical-engineering sciences were chosen across twenty different subjects from IEEE (Institute of Electrical and Electronics Engineers) publications and engineering journals hosted on the ScienceDirect site. These keywords were searched in five highly used web subject directories: Yahoo, Google, Infomine, Intute, and Dmoz. Because the first results returned by search tools are usually the most closely connected to the search keywords, the first ten results of every search were evaluated. The assessments covered precision, error rate, and the ratio of items retrieved in technical-engineering categories to all retrieved items. The criteria used for determining precision, following standards widely used in the literature, included presence of the keywords in the title, appearance of the keywords in the body of the retrieved pages, keyword adjacency, the page URL, the page description, and subject categories. The data were analysed with the Kruskal-Wallis test and Fisher's LSD. Results revealed a meaningful difference in the precision of the web subject directories in retrieving technical-engineering information, confirming the hypothesis. By precision, the directories ranked as follows: Google, Yahoo, Intute, Dmoz, and Infomine. The error rate observed in the first results was another criterion used for comparison; Yahoo had the lowest error rate and Infomine the highest. The research also compared the ratio of all retrieved items to items retrieved in technical-engineering categories, and results revealed a meaningful difference between them.
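
The precision measure used above (relevant hits among the first ten results of each search) is standard precision-at-k. A minimal sketch, with hypothetical URLs and relevance judgements:

```python
def precision_at_k(results, relevant, k=10):
    """Fraction of the top-k results judged relevant."""
    top = results[:k]
    return sum(1 for r in top if r in relevant) / len(top)

# Hypothetical top-10 result list for one keyword and the subset
# judged relevant to its technical-engineering category
results = [f"url{i}" for i in range(1, 11)]
relevant = {"url1", "url2", "url4", "url7"}
print(precision_at_k(results, relevant))
```

Averaging this value over all twenty subject keywords, per directory, yields the per-directory precision scores that the Kruskal-Wallis test then compares.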

  6. AADL and Model-based Engineering

    Science.gov (United States)

    2014-10-20

    Slide excerpts: model-driven engineering (MDE and MDA with UML) and automatically generated documents motivate the need for an architecture modeling language that is strongly typed and well-defined. Contact: Software Engineering Institute, 4500 Fifth Avenue, Pittsburgh, PA 15213-2612, USA; Wiki.sei.cmu.edu/aadl; www.aadl.info

  7. Introduction to Chemical Engineering Reactor Analysis: A Web-Based Reactor Design Game

    Science.gov (United States)

    Orbey, Nese; Clay, Molly; Russell, T.W. Fraser

    2014-01-01

    An approach to explaining chemical engineering through a Web-based interactive game design was developed and used with college freshmen and junior/senior high school students. The goal of this approach was to demonstrate how to model a lab-scale experiment and use the results to design and operate a chemical reactor. The game incorporates both…

  8. Exploiting Semantic Web Technologies to Develop OWL-Based Clinical Practice Guideline Execution Engines.

    Science.gov (United States)

    Jafarpour, Borna; Abidi, Samina Raza; Abidi, Syed Sibte Raza

    2016-01-01

    Computerizing paper-based CPG and then executing them can provide evidence-informed decision support to physicians at the point of care. Semantic web technologies, especially web ontology language (OWL) ontologies, have been profusely used to represent computerized CPG. Using semantic web reasoning capabilities to execute OWL-based computerized CPG unties them from any specific custom-built CPG execution engine and increases their shareability, as any OWL reasoner and triple store can be utilized for CPG execution. However, existing semantic web reasoning-based CPG execution engines suffer from an inability to execute CPG with high levels of expressivity, and from the high cognitive load of computerizing paper-based CPG and updating their computerized versions. To address these limitations, we have developed three CPG execution engines based on OWL 1 DL, OWL 2 DL, and OWL 2 DL + semantic web rule language (SWRL). OWL 1 DL serves as the base execution engine, capable of executing a wide range of CPG constructs; for executing highly complex CPG, the OWL 2 DL and OWL 2 DL + SWRL engines offer additional executional capabilities. We evaluated the technical performance and medical correctness of our execution engines using a range of CPG. Technical evaluations show the efficiency of our CPG execution engines in terms of CPU time and the validity of the generated recommendations in comparison to existing CPG execution engines. Medical evaluations by domain experts show the validity of the CPG-mediated therapy plans in terms of relevance, safety, and ordering for a wide range of patient scenarios.

  9. A web-based, collaborative modeling, simulation, and parallel computing environment for electromechanical systems

    Directory of Open Access Journals (Sweden)

    Xiaoliang Yin

    2015-03-01

    A complex electromechanical system is usually composed of multiple components from different domains: mechanical, electronic, hydraulic, control, and so on. Modeling and simulation of electromechanical systems on a unified platform is currently a research hotspot in systems engineering, and it is the development trend in the design of complex electromechanical systems. Unified modeling techniques and tools based on the Modelica language provide a satisfactory solution. To meet the requirements of collaborative modeling, simulation, and parallel computing for complex electromechanical systems based on Modelica, a general web-based modeling and simulation prototype environment, WebMWorks, was designed and implemented. Based on rich Internet application technologies, an interactive graphical user interface for modeling and post-processing in the web browser was implemented; with the collaborative design module, the environment supports top-down, concurrent modeling and team cooperation; additionally, a service-oriented architecture was applied to supply compiling and solving services that run on cloud-like servers, so the environment can manage and dispatch large-scale simulation tasks in parallel on multiple computing servers simultaneously. An engineering application, a pure electric vehicle, was tested on WebMWorks. The results of the simulation and a parametric experiment demonstrate that the tested web-based environment can effectively shorten the design cycle of a complex electromechanical system.

  10. Web Services as Product Experience Augmenters and the Implications for Requirements Engineering: A Position Paper

    NARCIS (Netherlands)

    van Eck, Pascal; Nijholt, Antinus; Wieringa, Roelf J.

    There is currently little insight into what requirement engineering for web services is and in which context it will be carried out. In this position paper, we investigate requirements engineering for a special kind of web services, namely web services that are used to augment the perceived value of

  11. Key word placing in Web page body text to increase visibility to search engines

    Directory of Open Access Journals (Sweden)

    W. T. Kritzinger

    2007-11-01

    The growth of the World Wide Web has spawned a wide variety of new information sources, which has also left users with the daunting task of determining which sources are valid. Many users rely on the Web as an information source because of the low cost of information retrieval. It is also claimed that the Web has evolved into a powerful business tool; examples include highly popular business services such as Amazon.com and Kalahari.net. It is estimated that around 80% of users utilize search engines to locate information on the Internet. This, by implication, places emphasis on the underlying importance of Web pages being listed in search engine indices. Empirical evidence that the placement of keywords in certain areas of the body text influences a Web site's visibility to search engines could not be found in the literature. The results of two experiments indicated that keywords should be concentrated towards the top, and diluted towards the bottom, of a Web page to increase visibility. However, care should be taken with keyword density, to prevent search engine algorithms from raising the spam alarm.
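
The finding above (concentrate keywords near the top of the body text, dilute them toward the bottom, and watch overall density) can be illustrated with a toy scoring function. The linear position weighting below is an assumption made for illustration only; it is not the algorithm of any actual search engine or of the study itself.

```python
def keyword_stats(text, keyword):
    """Return (density, position_weighted_score) for a keyword.
    Earlier occurrences get linearly higher weight, mimicking the
    idea that top-of-page placement improves visibility."""
    words = text.lower().split()
    n = len(words)
    positions = [i for i, w in enumerate(words) if w == keyword.lower()]
    density = len(positions) / n          # guard against spam thresholds
    # weight 1.0 at the first word, decaying linearly toward the last
    score = sum(1 - i / n for i in positions)
    return density, score

text = "solar panels guide solar installation tips and general home advice"
density, score = keyword_stats(text, "solar")
```

Under this toy model, moving an occurrence of the keyword earlier in the page raises the score while leaving the density, and hence the spam risk, unchanged.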

  12. Web components and the semantic web

    OpenAIRE

    Casey, Maire; Pahl, Claus

    2003-01-01

    Component-based software engineering on the Web differs from traditional component and software engineering. We investigate Web component engineering activities that are crucial for the development, composition, and deployment of components on the Web. The current Web Services and Semantic Web initiatives strongly influence our work. Focussing on Web component composition, we develop description and reasoning techniques that support a component developer in the composition activities, focussing...

  13. THE EFFECTIVENESS OF WEB-BASED INTERACTIVE BLENDED LEARNING MODEL IN ELECTRICAL ENGINEERING COURSES

    Directory of Open Access Journals (Sweden)

    Hansi Effendi

    2015-12-01

    The study tested the effectiveness of the Web-Based Interactive Blended Learning Model (BLIBW) for subjects in the Department of Electrical Engineering, Padang State University. The researcher employed a quasi-experimental design with one group and a pretest-posttest, conducted on a group of 30 students, with the trial run twice. The effectiveness of the BLIBW model was tested by comparing the average pretest and posttest scores in both the first and second trials. The average pretest and posttest scores in the first trial were 14.13 and 33.80, and the increase in the average score was significant at alpha 0.05. The average pretest and posttest scores in the second trial were 18.67 and 47.03; this result was also significant at alpha 0.05. The effectiveness of the BLIBW model in the second trial was higher than in the first. These results were not entirely satisfactory, which might be due to several weaknesses in both trials: the number of sessions was limited, only one subject was covered, and the number of students involved was too small. Nevertheless, the researcher concludes that the BLIBW model might be implemented as a replacement alternative for face-to-face instruction.
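
The reported means translate directly into mean score gains per trial; a quick check of the numbers given in the abstract:

```python
# Pretest/posttest means reported for the two trials of the BLIBW model
trials = {"trial 1": (14.13, 33.80), "trial 2": (18.67, 47.03)}

# Mean gain = posttest mean - pretest mean, per trial
gains = {name: round(post - pre, 2) for name, (pre, post) in trials.items()}
print(gains)
```

The second trial shows the larger mean gain, consistent with the abstract's claim that effectiveness was higher in the second trial (significance itself would require the score variances, which the abstract does not report).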

  14. Engineering Compensations in Web Service Environment

    DEFF Research Database (Denmark)

    Schäfer, Micahel; Dolog, Peter; Nejdl, Wolfgang

    2007-01-01

    Business to business integration has recently been performed by employing Web service environments, which are being provided by major players on the technology markets. Those environments are based on open specifications for transaction coordination. When a failure in such an environment occurs, a compensation can be initiated to recover from the failure. However, current environments have only limited capabilities for compensations, and are usually based on backward recovery. In this paper, we introduce an engineering approach and an environment to deal with advanced compensations based on forward recovery principles. We extend the existing Web service transaction coordination architecture and infrastructure in order to support flexible compensation operations. A contract-based approach is used, which allows the specification of permitted compensations at runtime. We...

  15. Semantic Web technologies in software engineering

    OpenAIRE

    Gall, H C; Reif, G

    2008-01-01

    Over the years, the software engineering community has developed various tools to support the specification, development, and maintenance of software. Many of these tools use proprietary data formats to store artifacts, which hampers interoperability. However, the Semantic Web provides a common framework that allows data to be shared and reused across application, enterprise, and community boundaries. Ontologies are used to define the concepts in the domain of discourse and their relationships an...

  16. Situational Requirements Engineering for the Development of Content Management System-based Web Applications

    NARCIS (Netherlands)

    Souer, J.; van de Weerd, I.; Versendaal, J.M.; Brinkkemper, S.

    2005-01-01

    Web applications are evolving towards strongly content-centered Web applications. The development processes and implementation of these applications are unlike the development and implementation of traditional information systems. In this paper we propose the WebEngineering Method, a method for developing

  17. Personalizing Web Search based on User Profile

    OpenAIRE

    Utage, Sharyu; Ahire, Vijaya

    2016-01-01

    Web search engines are the most widely used tools for information retrieval from the World Wide Web, helping users find the most useful information. When different users search for the same information, a search engine provides the same results without understanding who submitted the query. Personalized web search is a search technique for providing more useful results. This paper models user preferences as hierarchical user profiles and proposes a framework called UPS. It generalizes profiles and m...

  18. Enhancing food engineering education with interactive web-based simulations

    Directory of Open Access Journals (Sweden)

    Alexandros Koulouris

    2015-04-01

    In the traditional deductive approach to teaching any engineering topic, teachers first expose students to the derivation of the equations that govern the behavior of a physical system and then demonstrate the use of the equations through a limited number of textbook examples. This methodology, however, is rarely adequate to unmask the cause-effect and quantitative relationships between the system variables that the equations embody. Web-based simulation, the integration of simulation and internet technologies, has the potential to enhance the learning experience by offering an interactive and easily accessible platform for quick and effortless experimentation with physical phenomena. This paper presents the design and development of a web-based platform for teaching basic food engineering phenomena to food technology students. The platform contains a variety of modules (“virtual experiments”) covering the topics of mass and energy balances, fluid mechanics, and heat transfer. In this paper, the design and development of three modules for mass balances and heat transfer is presented. Each webpage representing an educational module has the following features: visualization of the studied phenomenon through graphs, charts, or videos; computation through a mathematical model; and experimentation. The student is allowed to edit key parameters of the phenomenon and observe the effect of these changes on the outputs. Experimentation can be done in a free or guided fashion, with a set of prefabricated examples that students can run to self-test their knowledge by answering multiple-choice questions.
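
A steady-state overall mass balance, one of the topics these modules cover, reduces to input = output when there is no accumulation. A minimal worked example (the evaporator and its stream values are illustrative, not taken from the platform):

```python
def mass_balance_unknown(feed, known_outputs):
    """Steady state, no accumulation: the remaining output stream
    equals the feed minus the sum of the known output streams."""
    return feed - sum(known_outputs)

# Evaporator: 100 kg/h feed, 60 kg/h concentrate -> vapour stream (kg/h)
vapour = mass_balance_unknown(100.0, [60.0])
```

This is exactly the kind of cause-effect relationship a "virtual experiment" exposes interactively: editing the feed rate immediately changes the computed vapour stream.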

  19. Effects of Web-Based Interactive Modules on Engineering Students' Learning Motivations

    Science.gov (United States)

    Bai, Haiyan; Aman, Amjad; Xu, Yunjun; Orlovskaya, Nina; Zhou, Mingming

    2016-01-01

    The purpose of this study is to assess the impact of a newly developed module, the Interactive Web-Based Visualization Tools for Gluing Undergraduate Fuel Cell Systems Courses (IGLU) system, on the learning motivations of engineering students, using two samples (n1 = 144 and n2 = 135) from senior engineering classes. The…

  20. A Software Engineering Approach based on WebML and BPMN to the Mediation Scenario of the SWS Challenge

    Science.gov (United States)

    Brambilla, Marco; Ceri, Stefano; Valle, Emanuele Della; Facca, Federico M.; Tziviskou, Christina

    Although Semantic Web Services are expected to produce a revolution in the development of Web-based systems, very few enterprise-wide design experiences are available; one of the main reasons is the lack of sound Software Engineering methods and tools for the deployment of Semantic Web applications. In this chapter, we present an approach to software development for the Semantic Web based on classical Software Engineering methods (i.e., formal business process development, computer-aided and component-based software design, and automatic code generation) and on semantic methods and tools (i.e., ontology engineering, semantic service annotation and discovery).

  1. The Effectiveness of Web Search Engines to Index New Sites from Different Countries

    Science.gov (United States)

    Pirkola, Ari

    2009-01-01

    Introduction: Investigates how effectively Web search engines index new sites from different countries. The primary interest is whether new sites are indexed equally or whether search engines are biased towards certain countries. If major search engines show biased coverage it can be considered a significant economic and political problem because…

  2. An open source web interface for linking models to infrastructure system databases

    Science.gov (United States)

    Knox, S.; Mohamed, K.; Harou, J. J.; Rheinheimer, D. E.; Medellin-Azuara, J.; Meier, P.; Tilmant, A.; Rosenberg, D. E.

    2016-12-01

    Models of networked engineered resource systems such as water or energy systems are often built collaboratively by developers from different domains working at different locations. These models can be linked to large-scale real-world databases, and they are constantly being improved and extended. As the development and application of these models becomes more sophisticated, and the computing power required for simulations and/or optimisations increases, so too has the need for online services and tools that enable the efficient development and deployment of these models. Hydra Platform is an open-source, web-based data management system which allows modellers of network-based models to remotely store network topology and associated data in a generalised manner, allowing it to serve multiple disciplines. Hydra Platform exposes a JSON web API that allows external programs (referred to as 'Apps') to interact with its stored networks and perform actions such as importing data, running models, or exporting the networks to different formats. Hydra Platform supports multiple users accessing the same network and has a suite of functions for managing users and data. We present ongoing development in Hydra Platform: the Hydra Web User Interface, through which users can collaboratively manage network data and models in a web browser. The web interface allows multiple users to graphically access, edit, and share their networks, run apps, and view results. Through apps, which are located on the server, the web interface can give users access to external data sources and models without the need to install or configure any software. This also ensures model results can be reproduced, by removing platform or version dependence. Managing data and deploying models via the web interface provides a way for multiple modellers to collaboratively manage data, deploy and monitor model runs, and analyse results.
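
Since Hydra Platform exchanges JSON with external 'Apps', a client interaction amounts to posting a JSON document describing a network. The payload below is a purely illustrative sketch: the field names and the add-network action are assumptions for illustration, not Hydra Platform's actual schema.

```python
import json

# Hypothetical request body for registering a small water network;
# the node/link structure mirrors the generic network model described above.
payload = {
    "network": {
        "name": "demo-basin",
        "nodes": [{"id": 1, "name": "reservoir"}, {"id": 2, "name": "city"}],
        "links": [{"id": 10, "node_1_id": 1, "node_2_id": 2, "name": "canal"}],
    }
}
body = json.dumps(payload)
# An App would POST `body` to the server's add-network endpoint (name assumed)
```

Because topology and data travel as plain JSON, the same network description can be imported, exported to other formats, or handed to a model run by any App, regardless of the language the App is written in.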

  3. Reverse Engineering and Software Products Reuse to Teach Collaborative Web Portals: A Case Study with Final-Year Computer Science Students

    Science.gov (United States)

    Medina-Dominguez, Fuensanta; Sanchez-Segura, Maria-Isabel; Mora-Soto, Arturo; Amescua, Antonio

    2010-01-01

    The development of collaborative Web applications does not follow a software engineering methodology. This is because when university students study Web applications in general, and collaborative Web portals in particular, they are not being trained in the use of software engineering techniques to develop collaborative Web portals. This paper…

  4. Index Compression and Efficient Query Processing in Large Web Search Engines

    Science.gov (United States)

    Ding, Shuai

    2013-01-01

    The inverted index is the main data structure used by all the major search engines. Search engines build an inverted index on their collection to speed up query processing. As the size of the web grows, the length of the inverted list structures, which can easily grow to hundreds of MBs or even GBs for common terms (roughly linear in the size of…
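
The inverted index described above maps each term to the (often very long) posting list of documents containing it; query processing then operates on these lists. A minimal in-memory sketch, including the posting-list intersection used to answer conjunctive queries:

```python
from collections import defaultdict

def build_index(docs):
    """Map each term to a sorted posting list of document ids."""
    index = defaultdict(set)
    for doc_id, text in docs.items():
        for term in text.lower().split():
            index[term].add(doc_id)
    return {term: sorted(ids) for term, ids in index.items()}

def query_and(index, terms):
    """Conjunctive query: intersect the posting lists of all terms."""
    postings = [set(index.get(t, ())) for t in terms]
    return sorted(set.intersection(*postings)) if postings else []

docs = {1: "web search engine", 2: "inverted index search", 3: "web index"}
index = build_index(docs)
```

At web scale these posting lists grow to hundreds of MBs or GBs for common terms, which is exactly why the compression and query-processing techniques studied in the dissertation matter.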

  5. Research on the optimization strategy of web search engine based on data mining

    Science.gov (United States)

    Chen, Ronghua

    2018-04-01

    With the wide application of search engines, web sites have become an important way for people to obtain information, but their content is growing in an increasingly explosive manner. It is very difficult for users to find the information they need, and current search engines cannot fully meet this need, so there is an urgent need for websites to provide personalized information services; data mining technology offers a breakthrough for this new challenge. In order to improve the accuracy with which people find information on websites, a website search engine optimization strategy based on data mining is proposed and verified by a search engine optimization experiment. The results show that the proposed strategy improves the accuracy with which people find information and reduces the time needed to find it. It has important practical value.

  6. Using Web 2.0 Techniques in NASA's Ares Engineering Operations Network (AEON) Environment - First Impressions

    Science.gov (United States)

    Scott, David W.

    2010-01-01

    The Mission Operations Laboratory (MOL) at Marshall Space Flight Center (MSFC) is responsible for the Engineering Support capability for NASA's Ares rocket development and operations. In pursuit of this, MOL is building the Ares Engineering and Operations Network (AEON), a web-based portal to support and simplify two critical activities: a) access and analyze Ares manufacturing, test, and flight performance data, with access to Shuttle data for comparison; and b) establish and maintain collaborative communities within the Ares teams/subteams and with other projects, e.g., Space Shuttle and the International Space Station (ISS). AEON seeks to provide a seamless interface to a) locally developed engineering applications and b) a Commercial-Off-The-Shelf (COTS) collaborative environment that includes Web 2.0 capabilities, e.g., blogging, wikis, and social networking. This paper discusses how Web 2.0 might be applied to the typically conservative engineering support arena, based on feedback from Integration, Verification, and Validation (IV&V) testing and on a search for its use in similar environments.

  7. Curating the Web: Building a Google Custom Search Engine for the Arts

    Science.gov (United States)

    Hennesy, Cody; Bowman, John

    2008-01-01

    Google's first foray onto the web made search simple and results relevant. With its Co-op platform, Google has taken another step toward dramatically increasing the relevancy of search results, further adapting the World Wide Web to local needs. Google Custom Search Engine, a tool on the Co-op platform, puts one in control of his or her own search…

  8. Search Engine Ranking, Quality, and Content of Web Pages That Are Critical Versus Noncritical of Human Papillomavirus Vaccine.

    Science.gov (United States)

    Fu, Linda Y; Zook, Kathleen; Spoehr-Labutta, Zachary; Hu, Pamela; Joseph, Jill G

    2016-01-01

    Online information can influence attitudes toward vaccination. The aim of the present study was to provide a systematic evaluation of the search engine ranking, quality, and content of Web pages that are critical versus noncritical of human papillomavirus (HPV) vaccination. We identified HPV vaccine-related Web pages with the Google search engine by entering 20 terms. We then assessed each Web page for critical versus noncritical bias and for the following quality indicators: authorship disclosure, source disclosure, attribution of at least one reference, currency, exclusion of testimonial accounts, and a readability level less than ninth grade. We also determined Web page comprehensiveness in terms of mention of 14 HPV vaccine-relevant topics. Twenty searches yielded 116 unique Web pages. HPV vaccine-critical Web pages comprised roughly a third of the top-, top 5-, and top 10-ranking Web pages. The prevalence of HPV vaccine-critical Web pages was higher for queries that included term modifiers in addition to root terms. Compared with noncritical Web pages, Web pages critical of HPV vaccine overall had a lower quality score (p …), and they ranked highly in search engine queries despite being of lower quality and less comprehensive than noncritical Web pages. Copyright © 2016 Society for Adolescent Health and Medicine. Published by Elsevier Inc. All rights reserved.
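
The six quality indicators listed in the abstract lend themselves to a simple additive score. The sketch below shows how such a checklist tally might be computed; the indicator names come from the abstract, but the additive scoring and the sample page are assumptions for illustration.

```python
# The six quality indicators assessed per Web page (from the abstract)
QUALITY_INDICATORS = [
    "authorship_disclosure",
    "source_disclosure",
    "has_reference",
    "current",
    "no_testimonials",
    "readability_below_9th_grade",
]

def quality_score(page):
    """Count how many of the six indicators a Web page satisfies."""
    return sum(1 for ind in QUALITY_INDICATORS if page.get(ind, False))

# Hypothetical page satisfying three of the six indicators
page = {"authorship_disclosure": True, "has_reference": True, "current": True}
print(quality_score(page))
```

Comparing the mean of such scores between the critical and noncritical groups is the kind of analysis behind the study's "lower quality score" finding.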

  9. Integrating Ecosystem Engineering and Food Web Ecology: Testing the Effect of Biogenic Reefs on the Food Web of a Soft-Bottom Intertidal Area.

    Science.gov (United States)

    De Smet, Bart; Fournier, Jérôme; De Troch, Marleen; Vincx, Magda; Vanaverbeke, Jan

    2015-01-01

    The potential of ecosystem engineers to modify the structure and dynamics of food webs has recently been hypothesised from a conceptual point of view. Empirical data on the integration of ecosystem engineers and food webs are, however, largely lacking. This paper investigates the hypothesised link using a field sampling approach on intertidal biogenic aggregations created by the ecosystem engineer Lanice conchilega (Polychaeta, Terebellidae). The aggregations are known to have a considerable impact on the physical and biogeochemical characteristics of their environment and subsequently on the abundance and biomass of primary food sources and the macrofaunal (i.e. the macro-, hyper- and epibenthos) community. Therefore, we hypothesise that L. conchilega aggregations affect the structure, stability and isotopic niche of the consumer assemblage of a soft-bottom intertidal food web. Primary food sources and the bentho-pelagic consumer assemblage of a L. conchilega aggregation and a control area were sampled on two soft-bottom intertidal areas along the French coast and analysed for their stable isotopes. Despite the structural impacts of the ecosystem engineer on the associated macrofaunal community, the presence of L. conchilega aggregations has only a minor effect on the food web structure of soft-bottom intertidal areas. The isotopic niche widths of the consumer communities of the L. conchilega aggregations and control areas are highly similar, implying that consumer taxa do not shift their diet when feeding in a L. conchilega aggregation. Moreover, species packing, and hence trophic redundancy, was not affected, pointing to an unaltered stability of the food web in the presence of L. conchilega.

  10. Assessment and Comparison of Search capabilities of Web-based Meta-Search Engines: A Checklist Approach

    Directory of Open Access Journals (Sweden)

    Alireza Isfandiyari Moghadam

    2010-03-01

    The present investigation concerns the evaluation, comparison and analysis of search options existing within web-based meta-search engines. 64 meta-search engines were identified. 19 meta-search engines that were free, accessible and compatible with the objectives of the present study were selected. An author-constructed checklist was used for data collection. Findings indicated that all meta-search engines studied used the AND operator, phrase search, a number-of-results-displayed setting, previous search query storage and help tutorials. Nevertheless, none of them offered any search options for hypertext searching or for displaying the size of the pages searched. 94.7% supported features such as truncation, keywords in title and URL search, and text summary display. The checklist used in the study could serve as a model for investigating search options in search engines, digital libraries and other internet search tools.

  11. How to Boost Engineering Support Via Web 2.0 - Seeds for the Ares Project...and/or Yours?

    Science.gov (United States)

    Scott, David W.

    2010-01-01

    The Mission Operations Laboratory (MOL) at Marshall Space Flight Center (MSFC) is responsible for Engineering Support capability for NASA's Ares launch system development. In pursuit of this, MOL is building the Ares Engineering and Operations Network (AEON), a web-based portal intended to provide a seamless interface to support and simplify two critical activities: a) Access and analyze Ares manufacturing, test, and flight performance data, with access to Shuttle data for comparison. b) Provide archive storage for engineering instrumentation data to support engineering design, development, and test. A mix of NASA-written and COTS software provides engineering analysis tools. A by-product of using a data portal to access and display data is access to collaborative tools inherent in a Web 2.0 environment. This paper discusses how Web 2.0 techniques, particularly social media, might be applied to the traditionally conservative and formal engineering support arena. A related paper by the author [1] considers use

  12. A Webometric Analysis of ISI Medical Journals Using Yahoo, AltaVista, and All the Web Search Engines

    Directory of Open Access Journals (Sweden)

    Zohreh Zahedi

    2010-12-01

    The World Wide Web is an important information source for scholarly communications. Examining inlinks via webometrics studies has attracted particular interest among information researchers. In this study, the number of inlinks to 69 ISI medical journals retrieved by the Yahoo, AltaVista, and AllTheWeb search engines was examined via a comparative webometrics study. For data analysis, SPSS software was employed. Findings revealed that the British Medical Journal website attracted the most links of all in the three search engines. There is a significant correlation between the number of external links and the ISI impact factor. The most significant correlation across the three search engines exists between the external links of Yahoo and AltaVista (100%), and the least correlation is found between the external links of AllTheWeb and the number of pages of AltaVista (0.51). There is no significant difference between the internal links and the number of pages found by the three search engines. But in the case of impact factors, significant differences are found between these three search engines. So, the study shows that journals with a higher impact factor attract more links to their websites. It also indicates that the three search engines are significantly different in terms of total links, outlinks and web impact factors.

  13. WebCom: A Model for Understanding Web Site Communication

    DEFF Research Database (Denmark)

    Godsk, Mikkel; Petersen, Anja Bechmann

    2008-01-01

    of the approaches' strengths. Furthermore, it is discussed and shortly demonstrated how WebCom can be used for analytical and design purposes with YouTube as an example. The chapter concludes that WebCom is able to serve as a theoretically-based model for understanding complex Web site communication situations...

  14. A Web System Trace Model and Its Application to Web Design

    OpenAIRE

    Kong, Xiaoying; Liu, Li; Lowe, David

    2007-01-01

    Traceability analysis is crucial to the development of web-centric systems, particularly those with frequent system changes, fine-grained evolution and maintenance, and high level of requirements uncertainty. A trace model at the level of the web system architecture is presented in this paper to address the specific challenges of developing web-centric systems. The trace model separates the concerns of different stakeholders in the web development life cycle into viewpoints; and c...

  15. Characterizing interdisciplinarity of researchers and research topics using web search engines.

    Science.gov (United States)

    Sayama, Hiroki; Akaishi, Jin

    2012-01-01

    Researchers' networks have been subject to active modeling and analysis. Earlier literature mostly focused on citation or co-authorship networks reconstructed from annotated scientific publication databases, which have several limitations. Recently, general-purpose web search engines have also been utilized to collect information about social networks. Here we reconstructed, using web search engines, a network representing the relatedness of researchers to their peers as well as to various research topics. Relatedness between researchers and research topics was characterized by visibility boost, i.e. the increase of a researcher's visibility when focusing on a particular topic. It was observed that researchers who had high visibility boosts by the same research topic tended to be close to each other in their network. We calculated correlations between visibility boosts by research topics and researchers' interdisciplinarity at the individual level (diversity of topics related to the researcher) and at the social level (his/her centrality in the researchers' network). We found that visibility boosts by certain research topics were positively correlated with researchers' individual-level interdisciplinarity despite their negative correlations with the general popularity of researchers. It was also found that visibility boosts by network-related topics had positive correlations with researchers' social-level interdisciplinarity. Research topics' correlations with researchers' individual- and social-level interdisciplinarities were found to be nearly independent from each other. These findings suggest that the notion of "interdisciplinarity" of a researcher should be understood as a multi-dimensional concept that should be evaluated using multiple assessment means.

  16. Advanced Techniques in Web Intelligence-2 Web User Browsing Behaviour and Preference Analysis

    CERN Document Server

    Palade, Vasile; Jain, Lakhmi

    2013-01-01

    This research volume focuses on analyzing web user browsing behaviour and preferences in traditional web-based environments, social networks and web 2.0 applications, by using advanced techniques in data acquisition, data processing, pattern extraction and cognitive science for modeling human actions. The book is directed to graduate students, researchers/scientists and engineers interested in updating their knowledge with the recent trends in web user analysis, for developing the next generation of web-based systems and applications.

  17. A Web-based Visualization System for Three Dimensional Geological Model using Open GIS

    Science.gov (United States)

    Nemoto, T.; Masumoto, S.; Nonogaki, S.

    2017-12-01

    A three dimensional geological model is important information in various fields such as environmental assessment, urban planning, resource development, waste management and disaster mitigation. In this study, we have developed a web-based visualization system for 3D geological models using free and open source software. The system has been successfully implemented by integrating the web mapping engine MapServer and the geographic information system GRASS. MapServer plays the role of mapping horizontal cross sections of the 3D geological model and a topographic map. GRASS provides the core components for management, analysis and image processing of the geological model. Online access to GRASS functions has been enabled using PyWPS, an implementation of the Open Geospatial Consortium (OGC) Web Processing Service (WPS) standard. The system has two main functions. The two dimensional visualization function allows users to generate horizontal and vertical cross sections of the 3D geological model. These images are delivered via the WMS (Web Map Service) and WPS OGC standards. Horizontal cross sections are overlaid on the topographic map. A vertical cross section is generated by clicking a start point and an end point on the map. The three dimensional visualization function allows users to visualize geological boundary surfaces and a panel diagram. The user can view them from various angles by mouse operation. WebGL is utilized for 3D visualization. WebGL is a web technology that brings hardware-accelerated 3D graphics to the browser without installing additional software. The geological boundary surfaces can be downloaded to incorporate the geologic structure into CAD designs and models for various simulations. This study was supported by JSPS KAKENHI Grant Number JP16K00158.
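
The vertical cross-section step described above (click a start point and an end point, then sample the geological boundary surfaces along the line between them) can be sketched as follows. The surface functions, names and values are purely illustrative assumptions, not part of the actual system:

```python
# Hypothetical sketch: sample each geological boundary surface at evenly
# spaced points along the line from p0 to p1 to build a vertical cross section.
def cross_section(surfaces, p0, p1, n=5):
    """Sample each named surface (a depth function z(x, y)) at n points
    linearly interpolated between p0 and p1."""
    (x0, y0), (x1, y1) = p0, p1
    pts = [(x0 + (x1 - x0) * t / (n - 1), y0 + (y1 - y0) * t / (n - 1))
           for t in range(n)]
    return {name: [z(x, y) for x, y in pts] for name, z in surfaces.items()}

# Two made-up boundary surfaces: one flat, one gently dipping.
surfaces = {"top_of_clay": lambda x, y: -5.0,
            "top_of_rock": lambda x, y: -20.0 + 0.1 * x}
print(cross_section(surfaces, (0, 0), (100, 0), n=3)["top_of_rock"])
# → [-20.0, -15.0, -10.0]
```

In the real system the sampled depths would come from the GRASS-managed model rather than analytic functions, and the result would be rendered as an image by the WPS service.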

  18. Web-page Prediction for Domain Specific Web-search using Boolean Bit Mask

    OpenAIRE

    Sinha, Sukanta; Duttagupta, Rana; Mukhopadhyay, Debajyoti

    2012-01-01

    A search engine is a Web-page retrieval tool. Nowadays, Web searchers save time by using an efficient search engine. To improve the performance of the search engine, we introduce a unique mechanism which will give Web searchers more prominent search results. In this paper, we discuss a domain-specific Web search prototype which generates the predicted Web-page list for a user-given search string using a Boolean bit mask.
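
A minimal sketch of the Boolean bit-mask idea: assign each domain keyword one bit, give every Web page the OR of its keywords' bits, and predict a page when its mask covers the query's mask. All names and sample data below are hypothetical, not taken from the paper:

```python
# Hypothetical illustration of Boolean bit-mask matching for a
# domain-specific search prototype. Vocabulary and pages are made up.
KEYWORDS = ["cricket", "football", "tennis", "hockey"]  # sample domain vocabulary
BIT = {kw: 1 << i for i, kw in enumerate(KEYWORDS)}

def mask_of(words):
    """OR together the bits of every domain keyword present."""
    m = 0
    for w in words:
        m |= BIT.get(w.lower(), 0)
    return m

def predicted_pages(pages, query):
    """Return pages whose keyword mask covers the query mask."""
    q = mask_of(query.split())
    return [url for url, words in pages if mask_of(words) & q == q]

pages = [
    ("a.example/1", ["cricket", "football"]),
    ("a.example/2", ["tennis"]),
    ("a.example/3", ["cricket", "tennis", "hockey"]),
]
print(predicted_pages(pages, "cricket tennis"))  # → ['a.example/3']
```

A single AND per page makes the membership test constant-time regardless of how many keywords the page contains, which is the appeal of the bit-mask representation.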

  19. Photonics Applications and Web Engineering: WILGA 2017

    Science.gov (United States)

    Romaniuk, Ryszard S.

    2017-08-01

    The XLth WILGA Summer 2017 Symposium on Photonics Applications and Web Engineering was held on 28 May - 4 June 2017. The Symposium gathered over 350 participants, mainly young researchers active in optics, optoelectronics, photonics, modern optics, mechatronics, applied physics, electronics technologies and applications. Around 300 oral and poster papers were presented in a few main topical tracks, which are traditional for Wilga, including: bio-photonics, optical sensory networks, photonics-electronics-mechatronics co-design and integration, large functional system design and maintenance, Internet of Things, measurement systems for astronomy, high energy physics experiments, and others. The paper is a traditional introduction to the 2017 WILGA Summer Symposium Proceedings and digests some of the Symposium's key presentations. This year's Symposium was divided into the following topical sessions/conferences: Optics, Optoelectronics and Photonics; Computational and Artificial Intelligence; Biomedical Applications; Astronomical and High Energy Physics Experiments Applications; Material Research and Engineering; and Advanced Photonics and Electronics Applications in Research and Industry.

  20. Assessing ecosystem effects of reservoir operations using food web-energy transfer and water quality models

    Science.gov (United States)

    Saito, L.; Johnson, B.M.; Bartholow, J.; Hanna, R.B.

    2001-01-01

    We investigated the effects on the reservoir food web of a new temperature control device (TCD) on the dam at Shasta Lake, California. We followed a linked modeling approach that used a specialized reservoir water quality model to forecast operation-induced changes in phytoplankton production. A food web–energy transfer model was also applied to propagate predicted changes in phytoplankton up through the food web to the predators and sport fishes of interest. The food web–energy transfer model employed a 10% trophic transfer efficiency through a food web that was mapped using carbon and nitrogen stable isotope analysis. Stable isotope analysis provided an efficient and comprehensive means of estimating the structure of the reservoir's food web with minimal sampling and background data. We used an optimization procedure to estimate the diet proportions of all food web components simultaneously from their isotopic signatures. Some consumers were estimated to be much more sensitive than others to perturbations to phytoplankton supply. The linked modeling approach demonstrated that interdisciplinary efforts enhance the value of information obtained from studies of managed ecosystems. The approach exploited the strengths of engineering and ecological modeling methods to address concerns that neither of the models could have addressed alone: (a) the water quality model could not have addressed quantitatively the possible impacts to fish, and (b) the food web model could not have examined how phytoplankton availability might change due to reservoir operations.
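
The 10% trophic-transfer propagation described above can be illustrated with a toy calculation. The food web and diet proportions below are invented for illustration; the paper estimated actual diet proportions from carbon and nitrogen stable isotopes:

```python
# Illustrative sketch of propagating phytoplankton supply up a food web
# with a 10% trophic transfer efficiency. Web structure is hypothetical.
EFFICIENCY = 0.10

# consumer -> {prey: diet proportion}; proportions sum to 1 per consumer
diet = {
    "zooplankton": {"phytoplankton": 1.0},
    "planktivore": {"zooplankton": 0.8, "phytoplankton": 0.2},
    "predator":    {"planktivore": 1.0},
}

def available(node, supply):
    """Energy reaching `node` for a given basal phytoplankton supply,
    losing 90% at each trophic transfer."""
    if node == "phytoplankton":
        return supply
    return EFFICIENCY * sum(p * available(prey, supply)
                            for prey, p in diet[node].items())

print(round(available("predator", 100.0), 4))  # → 0.28
```

Halving the phytoplankton supply halves every consumer's energy input in this linear sketch; the interesting result in the paper is which consumers are most sensitive once realistic, isotope-derived diet proportions are used.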

  1. A model-driven approach for representing clinical archetypes for Semantic Web environments.

    Science.gov (United States)

    Martínez-Costa, Catalina; Menárguez-Tortosa, Marcos; Fernández-Breis, Jesualdo Tomás; Maldonado, José Alberto

    2009-02-01

    The life-long clinical information of any person supported by electronic means constitutes his or her Electronic Health Record (EHR). This information is usually distributed among several independent and heterogeneous systems that may be syntactically or semantically incompatible. There are currently different standards for representing and exchanging EHR information among different systems. In advanced EHR approaches, clinical information is represented by means of archetypes. Most of these approaches use the Archetype Definition Language (ADL) to specify archetypes. However, ADL has some drawbacks when attempting to perform semantic activities in Semantic Web environments. In this work, Semantic Web technologies are used to specify clinical archetypes for advanced EHR architectures. The advantages of using the Web Ontology Language (OWL) instead of ADL are described and discussed in this work. Moreover, a solution combining Semantic Web and Model-driven Engineering technologies is proposed to transform ADL into OWL for the CEN EN13606 EHR architecture.

  2. An assessment of the visibility of MeSH-indexed medical web catalogs through search engines.

    Science.gov (United States)

    Zweigenbaum, P; Darmoni, S J; Grabar, N; Douyère, M; Benichou, J

    2002-01-01

    Manually indexed Internet health catalogs such as CliniWeb or CISMeF provide resources for retrieving high-quality health information. Users of these quality-controlled subject gateways are most often referred to them by general search engines such as Google, AltaVista, etc. This raises several questions, among which the following: what is the relative visibility of medical Internet catalogs through search engines? This study addresses this issue by measuring and comparing the visibility of six major, MeSH-indexed health catalogs through four different search engines (AltaVista, Google, Lycos, Northern Light) in two languages (English and French). Over half a million queries were sent to the search engines; for most of these search engines, according to our measures at the time the queries were sent, the most visible catalog for English MeSH terms was CliniWeb and the most visible one for French MeSH terms was CISMeF.

  3. ICSE 2009 Tutorial - Semantic Web Technologies in Software Engineering

    OpenAIRE

    Gall, H C; Reif, G

    2009-01-01

    Over the years, the software engineering community has developed various tools to support the specification, development, and maintainance of software. Many of these tools use proprietary data formats to store artifacts which hamper interoperability. On the other hand, the Semantic Web provides a common framework that allows data to be shared and reused across application, enterprise, and community boundaries. Ontologies are used to define the concepts in the domain of discourse and their rel...

  4. Enhancing food engineering education with interactive web-based simulations

    OpenAIRE

    Alexandros Koulouris; Georgios Aroutidis; Dimitrios Vardalis; Petros Giannoulis; Paraskevi Karakosta

    2015-01-01

    In the traditional deductive approach in teaching any engineering topic, teachers would first expose students to the derivation of the equations that govern the behavior of a physical system and then demonstrate the use of equations through a limited number of textbook examples. This methodology, however, is rarely adequate to unmask the cause-effect and quantitative relationships between the system variables that the equations embody. Web-based simulation, which is the integration of simulat...

  5. Automatic Conversion of a Conceptual Model to a Standard Multi-view Web Services Definition

    Directory of Open Access Journals (Sweden)

    Anass Misbah

    2018-03-01

    Information systems are becoming more and more heterogeneous, hence the need for more generic transformation algorithms and more automatic generation meta-rules. In fact, the large number of terminals, devices, operating systems, platforms and environments requires a high level of adaptation. Therefore, it is becoming more and more difficult to validate, generate and implement models, designs and codes manually. Web services are one of the technologies used massively nowadays; hence, they are among the technologies that most require automatic rules for validation and automation. Many previous works have dealt with Web services by proposing new concepts such as Multi-view Web services, standard WSDL implementations of Multi-view Web services and, even further, generic meta-rules for the automatic generation of Multi-view Web services. In this work we propose a new way of generating Multi-view Web services, based on an engine algorithm that takes as input both an initial Conceptual Model and a user's matrix and then unrolls a generic algorithm to dynamically generate a validated set of points of view. This set of points of view is transformed into a standard WSDL implementation of Multi-view Web services by means of the automatic transformation meta-rules.

  6. Penilaian Risiko Aplikasi Web Menggunakan Model DREAD

    Directory of Open Access Journals (Sweden)

    Didit Suprihanto

    2016-01-01

    Web-based applications, besides the advantages of WWW technology, also have vulnerabilities that can become threats. Vulnerabilities generate risk and can cause serious trouble, even major losses. The goal of this research is to design and build a risk assessment system that documents the threat level of a web application and offers prevention advice. It uses the DREAD model as a method for solving this problem by providing qualified information, which is used to produce a risk level for the web application. The result of this research is a web application risk assessment system that uses the DREAD model to determine the threat risk level, align developers' perception of web threat risks, minimize threat risk and maximize web application performance.   Keywords: DREAD model, web threat risk, web risk assessment system
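
For context, the DREAD model rates each threat on five factors (Damage, Reproducibility, Exploitability, Affected users, Discoverability) and combines the ratings into a risk level. A minimal sketch, assuming a common 1-3 rating scale and illustrative thresholds; the paper's exact scale and cut-offs may differ:

```python
# Sketch of DREAD scoring; the 1-3 scale and thresholds are one common
# convention, assumed here for illustration.
def dread_score(damage, reproducibility, exploitability, affected, discoverability):
    """Total DREAD score; each factor rated 1 (low) to 3 (high)."""
    return damage + reproducibility + exploitability + affected + discoverability

def risk_level(score):
    """Map a 5-15 total to a qualitative threat level."""
    if score >= 12:
        return "high"
    if score >= 8:
        return "medium"
    return "low"

s = dread_score(3, 3, 2, 3, 2)   # e.g. a hypothetical SQL-injection finding
print(s, risk_level(s))          # → 13 high
```

A risk assessment system like the one described would attach such a score to each identified threat and pair the resulting level with prevention advice.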

  7. Knowledge-based personalized search engine for the Web-based Human Musculoskeletal System Resources (HMSR) in biomechanics.

    Science.gov (United States)

    Dao, Tien Tuan; Hoang, Tuan Nha; Ta, Xuan Hien; Tho, Marie Christine Ho Ba

    2013-02-01

    Human musculoskeletal system resources of the human body are valuable for learning and medical purposes. Internet-based information from conventional search engines such as Google or Yahoo cannot respond to the need for useful, accurate, reliable and good-quality human musculoskeletal resources related to medical processes, pathological knowledge and practical expertise. In the present work, an advanced knowledge-based personalized search engine was developed. Our search engine is based on a client-server multi-layer multi-agent architecture and the principle of semantic web services to dynamically acquire accurate and reliable HMSR information through a semantic processing and visualization approach. A security-enhanced mechanism was applied to protect the medical information. A multi-agent crawler was implemented to develop a content-based database of HMSR information. A new semantic-based PageRank score, with related mathematical formulas, was also defined and implemented. As a result, semantic web service descriptions were presented in OWL, WSDL and OWL-S formats. Operational scenarios with related web-based interfaces for personal computers and mobile devices were presented and analyzed. A functional comparison between our knowledge-based search engine, a conventional search engine and a semantic search engine showed the originality and the robustness of our knowledge-based personalized search engine. In fact, our knowledge-based personalized search engine allows different users, such as orthopedic patients and experts, healthcare system managers or medical students, to access remotely useful, accurate, reliable and good-quality HMSR information for their learning and medical purposes. Copyright © 2012 Elsevier Inc. All rights reserved.

  8. Photonics applications and web engineering: WILGA Summer 2016

    Science.gov (United States)

    Romaniuk, Ryszard S.

    2016-09-01

    The WILGA Summer 2016 Symposium on Photonics Applications and Web Engineering was held on 29 May - 6 June 2016. The Symposium gathered over 350 participants, mainly young researchers active in optics, optoelectronics, photonics, electronics technologies and applications. Around 300 presentations were given in a few main topical tracks including: bio-photonics, optical sensory networks, photonics-electronics-mechatronics co-design and integration, large functional system design and maintenance, Internet of Things, and others. The paper is an introduction to the 2016 WILGA Summer Symposium Proceedings and digests some of the Symposium's key presentations.

  9. Photonics applications and web engineering: WILGA Summer 2015

    Science.gov (United States)

    Romaniuk, Ryszard S.

    2015-09-01

    The WILGA Summer 2015 Symposium on Photonics Applications and Web Engineering was held on 23-31 May 2015. The Symposium gathered over 350 participants, mainly young researchers active in optics, optoelectronics, photonics, electronics technologies and applications. Around 300 presentations were given in a few main topical tracks including: bio-photonics, optical sensory networks, photonics-electronics-mechatronics co-design and integration, large functional system design and maintenance, Internet of Things, and others. The paper is an introduction to the 2015 WILGA Summer Symposium Proceedings and digests some of the Symposium's key presentations.

  10. Feature-based engineering of compensations in web service environment

    DEFF Research Database (Denmark)

    Schaefer, Michael; Dolog, Peter

    2009-01-01

    In this paper, we introduce a product line approach for developing Web services with extended compensation capabilities. We adopt a feature modelling approach in order to describe variable and common compensation properties of Web service variants, as well as service consumer application...

  11. Can Interactive Web-Based CAD Tools Improve the Learning of Engineering Drawing? A Case Study

    Science.gov (United States)

    Pando Cerra, Pablo; Suárez González, Jesús M.; Busto Parra, Bernardo; Rodríguez Ortiz, Diana; Álvarez Peñín, Pedro I.

    2014-01-01

    Many current Web-based learning environments facilitate the theoretical teaching of a subject but this may not be sufficient for those disciplines that require a significant use of graphic mechanisms to resolve problems. This research study looks at the use of an environment that can help students learn engineering drawing with Web-based CAD…

  12. The Semantic Web: opportunities and challenges for next-generation Web applications

    Directory of Open Access Journals (Sweden)

    2002-01-01

    Recently there has been a growing interest in the investigation and development of the next generation web - the Semantic Web. While most current web content is designed to be presented to humans and is barely understandable by computers, the content of the Semantic Web is structured semantically so that it is meaningful to computers as well as to humans. In this paper, we report a survey of recent research on the Semantic Web. In particular, we present the opportunities that this revolution will bring to us: web services, agent-based distributed computing, semantics-based web search engines, and semantics-based digital libraries. We also discuss the technical and cultural challenges of realizing the Semantic Web: the development of ontologies, formal semantics of Semantic Web languages, and trust and proof models. We hope that this will shed some light on the direction of future work in this field.

  13. Collaborative Science Using Web Services and the SciFlo Grid Dataflow Engine

    Science.gov (United States)

    Wilson, B. D.; Manipon, G.; Xing, Z.; Yunck, T.

    2006-12-01

    The General Earth Science Investigation Suite (GENESIS) project is a NASA-sponsored partnership between the Jet Propulsion Laboratory, academia, and NASA data centers to develop a new suite of Web Services tools to facilitate multi-sensor investigations in Earth System Science. The goal of GENESIS is to enable large-scale, multi-instrument atmospheric science using combined datasets from the AIRS, MODIS, MISR, and GPS sensors. Investigations include cross-comparison of spaceborne climate sensors, cloud spectral analysis, study of upper troposphere-stratosphere water transport, study of the aerosol indirect cloud effect, and global climate model validation. The challenges are to bring together very large datasets, reformat and understand the individual instrument retrievals, co-register or re-grid the retrieved physical parameters, perform computationally-intensive data fusion and data mining operations, and accumulate complex statistics over months to years of data. To meet these challenges, we have developed a Grid computing and dataflow framework, named SciFlo, in which we are deploying a set of versatile and reusable operators for data access, subsetting, registration, mining, fusion, compression, and advanced statistical analysis. SciFlo leverages remote Web Services, called via Simple Object Access Protocol (SOAP) or REST (one-line) URLs, and the Grid Computing standards (WS-* & Globus Alliance toolkits), and enables scientists to do multi-instrument Earth Science by assembling reusable Web Services and native executables into a distributed computing flow (tree of operators). The SciFlo client & server engines optimize the execution of such distributed data flows and allow the user to transparently find and use datasets and operators without worrying about the actual location of the Grid resources. In particular, SciFlo exploits the wealth of datasets accessible by OpenGIS Consortium (OGC) Web Mapping Servers & Web Coverage Servers (WMS/WCS), and by Open Data

  14. Design and implementation of Web-based SDUV-FEL engineering database system

    International Nuclear Information System (INIS)

    Sun Xiaoying; Shen Liren; Dai Zhimin; Xie Dong

    2006-01-01

    A design of a Web-based SDUV-FEL engineering database and its implementation are introduced. This system stores and serves static and archived SDUV-FEL data, and builds a proper and effective platform for sharing SDUV-FEL data. It offers usable and reliable SDUV-FEL data for operators and scientists. (authors)

  15. Web-based reactive transport modeling using PFLOTRAN

    Science.gov (United States)

    Zhou, H.; Karra, S.; Lichtner, P. C.; Versteeg, R.; Zhang, Y.

    2017-12-01

    Actionable understanding of system behavior in the subsurface is required for a wide spectrum of societal and engineering needs by commercial firms, government entities, and academia. These needs include, for example, water resource management, precision agriculture, contaminant remediation, unconventional energy production, CO2 sequestration monitoring, and climate studies. Such understanding requires the ability to numerically model various coupled processes that occur across different temporal and spatial scales as well as multiple physical domains (reservoirs - overburden, surface-subsurface, groundwater-surface water, saturated-unsaturated zone). Currently, this ability is typically met through an in-house approach where computational resources, model expertise, and data for model parameterization are brought together to meet modeling needs. However, such an approach has multiple drawbacks which limit the application of high-end reactive transport codes such as the Department of Energy-funded PFLOTRAN code. In addition, while many end users have a need for the capabilities provided by high-end reactive transport codes, they do not have the expertise - nor the time required to obtain the expertise - to effectively use these codes. We have developed and are actively enhancing a cloud-based software platform through which diverse users are able to easily configure, execute, visualize, share, and interpret PFLOTRAN models. This platform consists of a web application and on-demand HPC computational infrastructure. The web application consists of (1) a browser-based graphical user interface which allows users to configure models and visualize results interactively, (2) a central server with back-end relational databases which hold configuration, data, modeling results, and Python scripts for model configuration, and (3) an HPC environment for on-demand model execution. We will discuss lessons learned in the development of this platform, the

  16. Using the open Web as an information resource and scholarly Web search engines as retrieval tools for academic and research purposes

    OpenAIRE

    Filistea Naude; Chris Rensleigh; Adeline S.A. du Toit

    2010-01-01

    This study provided insight into the significance of the open Web as an information resource and Web search engines as research tools amongst academics. The academic staff establishment of the University of South Africa (Unisa) was invited to participate in a questionnaire survey and included 1188 staff members from five colleges. This study culminated in a PhD dissertation in 2008. One hundred and eighty seven respondents participated in the survey which gave a response rate of 15.7%. The re...

  17. A Web portal for the Engineering and Equipment Data Management System at CERN

    International Nuclear Information System (INIS)

    Tsyganov, A; Petit, S; Martel, P; Milenkovic, S; Suwalska, A; Delamare, C; Widegren, D; Amerigo, S Mallon; Pettersson, T

    2010-01-01

    CERN, the European Laboratory for Particle Physics, located in Geneva - Switzerland, has recently started the Large Hadron Collider (LHC), a 27 km particle accelerator. The CERN Engineering and Equipment Data Management Service (EDMS) provides support for managing engineering and equipment information throughout the entire lifecycle of a project. Based on several both in-house developed and commercial data management systems, this service supports management and follow-up of different kinds of information throughout the lifecycle of the LHC project: design, manufacturing, installation, commissioning data, maintenance and more. The data collection phase, carried out by specialists, is now being replaced by a phase during which data will be consulted on an extensive basis by non-expert users. In order to address this change, a Web portal for the EDMS has been developed. It brings together in one space all the aspects covered by the EDMS: project and document management, asset tracking and safety follow-up. This paper presents the EDMS Web portal, its dynamic content management and its 'one click' information search engine.

  18. 07051 Working Group Outcomes -- Programming Paradigms for the Web: Web Programming and Web Services

    OpenAIRE

    Hull, Richard; Thiemann, Peter; Wadler, Philip

    2007-01-01

    Participants in the seminar broke into groups on ``Patterns and Paradigms'' for web programming, ``Web Services,'' ``Data on the Web,'' ``Software Engineering'' and ``Security.'' Here we give the raw notes recorded during these sessions.

  19. Interactive Web-based e-learning for Studying Flexible Manipulator Systems

    Directory of Open Access Journals (Sweden)

    Abul K. M. Azad

    2008-03-01

    Full Text Available Abstract— This paper presents a web-based e-learning facility for simulation, modeling, and control of flexible manipulator systems. The simulation and modeling part includes finite difference and finite element simulations along with neural network and genetic algorithm based modeling strategies for flexible manipulator systems. The controller part comprises a number of open-loop and closed-loop designs. Closed-loop control designs include classical, adaptive, and neuro-model based strategies. The Matlab software package and its associated toolboxes are used to implement these. The Matlab web server is used as the gateway between the facility and web access. ASP.NET technology and an SQL database are utilized to develop web applications for access control, user account and password maintenance, administrative management, and facility utilization monitoring. The reported facility provides a flexible yet effective approach to web-based interactive e-learning for an engineering system, and can be extended to incorporate additional engineering systems within the e-learning framework.

  20. TouchTerrain: A simple web-tool for creating 3D-printable topographic models

    Science.gov (United States)

    Hasiuk, Franciszek J.; Harding, Chris; Renner, Alex Raymond; Winer, Eliot

    2017-12-01

    An open-source web-application, TouchTerrain, was developed to simplify the production of 3D-printable terrain models. Direct Digital Manufacturing (DDM) using 3D printers can change how geoscientists, students, and stakeholders interact with 3D data, with the potential to improve geoscience communication and environmental literacy. No other manufacturing technology can convert digital data into tangible objects quickly at relatively low cost; however, the expertise necessary to produce a 3D-printed terrain model can be a substantial burden: knowledge of geographical information systems, computer aided design (CAD) software, and 3D printers may all be required. Furthermore, printing models larger than the build volume of a 3D printer can pose further technical hurdles. The TouchTerrain web-application simplifies DDM for elevation data by generating digital 3D models customized for a specific 3D printer's capabilities. The only required user input is the selection of a region-of-interest using the provided web-application with a Google Maps-style interface. Publicly available digital elevation data is processed via the Google Earth Engine API. To allow the manufacture of 3D terrain models larger than a 3D printer's build volume, the selected area can be split into multiple tiles without third-party software. This application significantly reduces the time and effort required for a non-expert, such as an educator, to obtain 3D terrain models for use in class. The web application is deployed at http://touchterrain.geol.iastate.edu/
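The tile-splitting step the abstract describes, dividing a selected region so each piece fits the printer's build volume, can be sketched as follows (an illustrative sketch; the function and dimensions are ours, not TouchTerrain's actual code):

```python
import math

def split_into_tiles(width_mm, height_mm, build_x_mm, build_y_mm):
    """Split a terrain model footprint into printable tiles.

    Returns a list of (col, row, tile_w, tile_h) tuples covering the
    full footprint, where every tile fits the printer's build area.
    """
    cols = math.ceil(width_mm / build_x_mm)
    rows = math.ceil(height_mm / build_y_mm)
    tiles = []
    for r in range(rows):
        for c in range(cols):
            # Edge tiles may be smaller than the full build area.
            tile_w = min(build_x_mm, width_mm - c * build_x_mm)
            tile_h = min(build_y_mm, height_mm - r * build_y_mm)
            tiles.append((c, r, tile_w, tile_h))
    return tiles

# A 250 mm x 180 mm model on a printer with a 200 mm x 200 mm bed
tiles = split_into_tiles(250, 180, 200, 200)
```

The real application additionally resamples the elevation grid per tile and writes one STL mesh per tile; the geometry of the split, however, reduces to this kind of ceiling division.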

  1. Realising the Uncertainty Enabled Model Web

    Science.gov (United States)

    Cornford, D.; Bastin, L.; Pebesma, E. J.; Williams, M.; Stasch, C.; Jones, R.; Gerharz, L.

    2012-12-01

    The FP7 funded UncertWeb project aims to create the "uncertainty enabled model web". The central concept here is that geospatial models and data resources are exposed via standard web service interfaces, such as the Open Geospatial Consortium (OGC) suite of encodings and interface standards, allowing the creation of complex workflows combining both data and models. The focus of UncertWeb is on the issue of managing uncertainty in such workflows, and providing the standards, architecture, tools and software support necessary to realise the "uncertainty enabled model web". In this paper we summarise the developments in the first two years of UncertWeb, illustrating several key points with examples taken from the use case requirements that motivate the project. Firstly we address the issue of encoding specifications. We explain the usage of UncertML 2.0, a flexible encoding for representing uncertainty based on a probabilistic approach. This is designed to be used within existing standards such as Observations and Measurements (O&M) and data quality elements of ISO19115 / 19139 (geographic information metadata and encoding specifications) as well as more broadly outside the OGC domain. We show profiles of O&M that have been developed within UncertWeb and how UncertML 2.0 is used within these. We also show encodings based on NetCDF and discuss possible future directions for encodings in JSON. We then discuss the issues of workflow construction, considering discovery of resources (both data and models). We discuss why a brokering approach to service composition is necessary in a world where the web service interfaces remain relatively heterogeneous, including many non-OGC approaches, in particular the more mainstream SOAP and WSDL approaches. We discuss the trade-offs between delegating uncertainty management functions to the service interfaces themselves and integrating the functions in the workflow management system. We describe two utility services to address

  2. A design method for an intuitive web site

    Energy Technology Data Exchange (ETDEWEB)

    Quinniey, M.L.; Diegert, K.V.; Baca, B.G.; Forsythe, J.C.; Grose, E.

    1999-11-03

    The paper describes a methodology for designing a web site for human factors engineers that is applicable to designing a web site for any group of people. Many web pages on the World Wide Web are not organized in a format that allows a user to find information efficiently. Often the information and hypertext links on web pages are not organized into intuitive groups. Intuition implies that a person is able to use their knowledge of a paradigm to solve a problem; intuitive groups are categories that allow web page users to find information by using their intuition or mental models of categories. To improve human factors engineers' efficiency in finding information on the World Wide Web, research was performed to develop a web site that serves as a tool for finding information effectively. The paper describes a methodology for designing a web site for a group of people who perform similar tasks in an organization.

  3. WMT: The CSDMS Web Modeling Tool

    Science.gov (United States)

    Piper, M.; Hutton, E. W. H.; Overeem, I.; Syvitski, J. P.

    2015-12-01

    The Community Surface Dynamics Modeling System (CSDMS) has a mission to enable model use and development for research in earth surface processes. CSDMS strives to expand the use of quantitative modeling techniques, promotes best practices in coding, and advocates for the use of open-source software. To streamline and standardize access to models, CSDMS has developed the Web Modeling Tool (WMT), a RESTful web application with a client-side graphical interface and a server-side database and API that allows users to build coupled surface dynamics models in a web browser on a personal computer or a mobile device, and run them in a high-performance computing (HPC) environment. With WMT, users can: design a model from a set of components; edit component parameters; save models to a web-accessible server; share saved models with the community; submit runs to an HPC system; and download simulation results. The WMT client is an Ajax application written in Java with GWT, which allows developers to employ object-oriented design principles and development tools such as Ant, Eclipse and JUnit. For deployment on the web, the GWT compiler translates Java code to optimized and obfuscated JavaScript. The WMT client is supported on Firefox, Chrome, Safari, and Internet Explorer. The WMT server, written in Python and SQLite, is a layered system, with each layer exposing a web service API: wmt-db, a database of component, model, and simulation metadata and output; wmt-api, which configures and connects components; and wmt-exe, which launches simulations on remote execution servers. The database server provides, as JSON-encoded messages, the metadata for users to couple model components, including descriptions of component exchange items, uses and provides ports, and input parameters. Execution servers are network-accessible computational resources, ranging from HPC systems to desktop computers, containing the CSDMS software stack for running a simulation. Once a simulation completes, its output, in NetCDF, is packaged
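The coupling metadata described in the abstract, JSON records of each component's "uses" and "provides" ports, suggests a simple compatibility check before two components are connected. The sketch below is illustrative only: the component names, port identifiers, and JSON shape are our assumptions, not WMT's actual API.

```python
import json

# Hypothetical component metadata in the style the abstract describes:
# JSON records listing each component's "provides" and "uses" ports.
hydrotrend = json.loads("""{
  "name": "HydroTrend",
  "provides": ["river.discharge", "river.sediment_load"],
  "uses": []
}""")
cem = json.loads("""{
  "name": "CEM",
  "provides": ["coastline.position"],
  "uses": ["river.discharge", "river.sediment_load"]
}""")

def can_couple(upstream, downstream):
    """True if the upstream component provides every port
    the downstream component uses."""
    return set(downstream["uses"]) <= set(upstream["provides"])

ok = can_couple(hydrotrend, cem)
```

A graphical model builder can run a check like this on every proposed connection and refuse, or flag, couplings with unmet ports.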

  4. Web-Based Simulation Games for the Integration of Engineering and Business Fundamentals

    Science.gov (United States)

    Calfa, Bruno; Banholzer, William; Alger, Monty; Doherty, Michael

    2017-01-01

    This paper describes a web-based suite of simulation games that aim to enhance the chemical engineering curriculum with business-oriented decisions. Two simulation cases are discussed whose teaching topics include closing material and energy balances, the importance of recycle streams, the price-volume relationship in a dynamic market, impact…

  5. GEO-ENGINEERING MODELING THROUGH INTERNET INFORMATICS (GEMINI)

    Energy Technology Data Exchange (ETDEWEB)

    W. Lynn Watney; John H. Doveton

    2004-05-13

    GEMINI (Geo-Engineering Modeling through Internet Informatics) is a public-domain web application focused on analysis and modeling of petroleum reservoirs and plays (http://www.kgs.ukans.edu/Gemini/index.html). GEMINI creates a virtual project by "on-the-fly" assembly and analysis of on-line data either from the Kansas Geological Survey or uploaded from the user. GEMINI's suite of geological and engineering web applications for reservoir analysis include: (1) petrofacies-based core and log modeling using an interactive relational rock catalog and log analysis modules; (2) a well profile module; (3) interactive cross sections to display "marked" wireline logs; (4) deterministic gridding and mapping of petrophysical data; (5) calculation and mapping of layer volumetrics; (6) material balance calculations; (7) PVT calculator; (8) DST analyst; (9) automated hydrocarbon association navigator (KHAN) for database mining; and (10) tutorial and help functions. The Kansas Hydrocarbon Association Navigator (KHAN) utilizes petrophysical databases to estimate hydrocarbon pay or other constituent at a play- or field-scale. Databases analyzed and displayed include digital logs, core analysis and photos, DST, and production data. GEMINI accommodates distant collaborations using secure password protection and authorized access. Assembled data, analyses, charts, and maps can readily be moved to other applications. GEMINI's target audience includes small independents and consultants seeking to find, quantitatively characterize, and develop subtle and bypassed pays by leveraging the growing base of digital data resources. Participating companies involved in the testing and evaluation of GEMINI included Anadarko, BP, Conoco-Phillips, Lario, Mull, Murfin, and Pioneer Resources.

  6. Spatial Visualization Learning in Engineering: Traditional Methods vs. a Web-Based Tool

    Science.gov (United States)

    Pedrosa, Carlos Melgosa; Barbero, Basilio Ramos; Miguel, Arturo Román

    2014-01-01

    This study compares an interactive learning manager for graphic engineering to develop spatial vision (ILMAGE_SV) to traditional methods. ILMAGE_SV is an asynchronous web-based learning tool that allows the manipulation of objects with a 3D viewer, self-evaluation, and continuous assessment. In addition, student learning may be monitored, which…

  7. Soil food web changes during spontaneous succession at post mining sites: a possible ecosystem engineering effect on food web organization?

    Science.gov (United States)

    Frouz, Jan; Thébault, Elisa; Pižl, Václav; Adl, Sina; Cajthaml, Tomáš; Baldrián, Petr; Háněl, Ladislav; Starý, Josef; Tajovský, Karel; Materna, Jan; Nováková, Alena; de Ruiter, Peter C

    2013-01-01

    Parameters characterizing the structure of the decomposer food web, the biomass of the soil microflora (bacteria and fungi) and the soil micro-, meso- and macrofauna were studied at 14 non-reclaimed, 1- to 41-year-old post-mining sites near the town of Sokolov (Czech Republic). These observations on the decomposer food webs were compared with knowledge of vegetation and soil microstructure development from previous studies. The amount of carbon entering the food web increased with succession age, in a similar way as the total amount of C in food web biomass and the number of functional groups in the food web. Connectance did not show any significant changes with succession age, however. In early stages of the succession, the bacterial channel dominated the food web. Later on, in shrub-dominated stands, the fungal channel took over. Even later, in the forest stage, the bacterial channel prevailed again. The best predictor of the fungal:bacterial ratio is the thickness of the fermentation layer. We argue that these changes correspond with changes in topsoil microstructure driven by a combination of plant organic matter input and the engineering effects of earthworms. In early stages, soil is alkaline, and a discontinuous litter layer on the soil surface promotes bacterial biomass growth, so the bacterial food web channel can dominate. Litter accumulation on the soil surface supports the development of the fungal channel. In older stages, earthworms arrive, mix litter into the mineral soil and form an organo-mineral topsoil, which is beneficial for bacteria and enhances the bacterial food web channel.

  8. REPTREE CLASSIFIER FOR IDENTIFYING LINK SPAM IN WEB SEARCH ENGINES

    Directory of Open Access Journals (Sweden)

    S.K. Jayanthi

    2013-01-01

    Full Text Available Search engines are used for retrieving information from the web. Most of the time, importance is placed on the top 10 results, sometimes shrinking to the top 5, because of time constraints and reliance on the search engines: users believe that the top 10 or 5 of the total results are more relevant. Here arises the problem of spamdexing, a method of degrading search result quality. Falsified metrics, such as inserting enormous numbers of keywords or links into a website, may take that website to the top 10 or 5 positions. This paper proposes a classifier based on REPTree (a regression tree representative). As an initial step, link-based features such as neighbors, PageRank, truncated PageRank, TrustRank and assortativity-related attributes are inferred. Based on these features, a tree is constructed; the tree uses the feature inference to differentiate spam sites from legitimate sites. The WEBSPAM-UK-2007 dataset is taken as a base; it is preprocessed and converted into five datasets: FEATA, FEATB, FEATC, FEATD and FEATE. Only link-based features are taken for the experiments; this paper focuses on link spam alone. Finally, a representative tree is created which classifies web spam entries more precisely. Results are given; regression tree classification seems to perform well, as shown through the experiments.
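At the core of a regression-tree learner like REPTree is choosing the feature split that most reduces variance in the target label. A minimal pure-Python sketch of that split selection over link-based features follows; the feature names echo the abstract, but the values are toy data, not the WEBSPAM-UK-2007 features.

```python
def variance(ys):
    m = sum(ys) / len(ys)
    return sum((y - m) ** 2 for y in ys) / len(ys)

def best_split(rows, target):
    """Pick the (feature, threshold) pair whose split yields the
    largest variance reduction in the target (0 = ham, 1 = spam)."""
    base = variance([r[target] for r in rows])
    best = None
    for f in (k for k in rows[0] if k != target):
        for t in sorted({r[f] for r in rows}):
            left = [r[target] for r in rows if r[f] <= t]
            right = [r[target] for r in rows if r[f] > t]
            if not left or not right:
                continue  # degenerate split
            weighted = (len(left) * variance(left)
                        + len(right) * variance(right)) / len(rows)
            gain = base - weighted
            if best is None or gain > best[0]:
                best = (gain, f, t)
    return best

# Toy link-feature rows: TrustRank score and link assortativity
rows = [
    {"trustrank": 0.9, "assortativity": 0.4, "spam": 0},
    {"trustrank": 0.8, "assortativity": 0.5, "spam": 0},
    {"trustrank": 0.1, "assortativity": -0.2, "spam": 1},
    {"trustrank": 0.2, "assortativity": -0.1, "spam": 1},
]
gain, feature, threshold = best_split(rows, "spam")
```

A full REPTree implementation recurses on each side of the chosen split and then prunes the tree with a held-out set (reduced-error pruning); the split criterion above is the building block.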

  9. Photonics and Web Engineering: WILGA 2009

    CERN Document Server

    Romaniuk, Ryszard

    2009-01-01

    The paper is a digest of work presented during a cyclic Ph.D. student symposium on Photonics and Web Engineering, WILGA 2009. The subjects of WILGA are photonics applications in astronomy, communications, industry and high-energy physics experiments. WILGA is sponsored by the EuCARD Project. The symposium is organized by ISE PW in cooperation with the professional organizations IEEE, SPIE, PSP and KEiT PAN. Mainly Ph.D. and M.Sc. theses are presented, as well as the achievements of young researchers. These papers, presented in such large numbers, more than 250 in some years, are in a certain sense a good digest of the condition of academic research capabilities in this branch of science and technology. The research subjects undertaken for Ph.D. theses in electronics are determined by the interest and research capacity (financial, laboratory and intellectual) of the young researchers and their tutors. Basically, the condition of academic electronics research depends on financing coming from application areas. During Wilga 200...

  10. EuroGOV: Engineering a Multilingual Web Corpus

    NARCIS (Netherlands)

    Sigurbjörnsson, B.; Kamps, J.; de Rijke, M.

    2005-01-01

    EuroGOV is a multilingual web corpus that was created to serve as the document collection for WebCLEF, the CLEF 2005 web retrieval task. EuroGOV is a collection of web pages crawled from the European Union portal, European Union member state governmental web sites, and Russian government web sites.

  11. The fraying web of life and our future engineers

    Science.gov (United States)

    Splitt, Frank G.

    2004-07-01

    Evidence abounds that we are reaching the carrying capacity of the earth, engaging in deficit spending. The amount of crops, animals, and other biomatter we extract from the earth each year exceeds what the earth can replace by an estimated 20%. Additionally, signs of climate change are precursors of things to come. Global industrialization and the new technologies of the 20th century have helped stretch the capacities of our finite natural system to precarious levels. Taken together, this evidence reflects a fraying web of life. Sustainable development and natural capitalism work to reverse these trends; however, we are often still wedded to the notion that environmental conservation and economic development are the 'players' in a zero-sum game. Engineering and its technological derivatives can also help remedy the problem. The well-being of future generations will depend to a large extent on how we educate our future engineers. These engineers will be a new breed, developing and using sustainable technology, benign manufacturing processes and an expanded array of environmental assessment tools that will simultaneously support and maintain healthy economies and a healthy environment. The importance of environmental and sustainable development considerations, the need for their widespread inclusion in engineering education, the impediments to change, and the important role played by ABET will be presented.

  12. The Evolution of Web Searching.

    Science.gov (United States)

    Green, David

    2000-01-01

    Explores the interrelation between Web publishing and information retrieval technologies and lists new approaches to Web indexing and searching. Highlights include Web directories; search engines; portalisation; Internet service providers; browser providers; meta search engines; popularity based analysis; natural language searching; links-based…

  13. State-of-the-art WEB -technologies and ecological safety of nuclear power engineering facilities

    International Nuclear Information System (INIS)

    Batij, V.G.; Batij, E.V.; Rud'ko, V.M.; Kotlyarov, V.T.

    2004-01-01

    Prospects for using web technologies to improve the radiation safety level of nuclear power engineering facilities are examined. It is shown that applying such technologies will enable full use of the data from all radiation-control information systems.

  14. Web Based VRML Modelling

    NARCIS (Netherlands)

    Kiss, S.; Sarfraz, M.

    2004-01-01

    Presents a method to connect VRML (Virtual Reality Modeling Language) and Java components in a Web page using EAI (External Authoring Interface), which makes it possible to interactively generate and edit VRML meshes. The meshes used are based on regular grids, to provide an interaction and modeling

  15. Web tools for predictive toxicology model building.

    Science.gov (United States)

    Jeliazkova, Nina

    2012-07-01

    The development and use of web tools in chemistry already has more than 15 years of history. Powered by advances in Internet technologies, the current generation of web systems is starting to expand into areas traditionally reserved for desktop applications. The web platforms integrate data storage, cheminformatics and data analysis tools. The ease of use and the collaborative potential of the web are compelling, despite the challenges. The topic of this review is a set of recently published web tools that facilitate predictive toxicology model building. The focus is on software platforms offering web access to chemical structure-based methods, although some of the frameworks could also provide bioinformatics or hybrid data analysis functionalities. A number of historical and current developments are cited. In order to provide a comparable assessment, the following characteristics are considered: support for workflows, descriptor calculations, visualization, modeling algorithms, data management and data sharing capabilities, availability of GUI or programmatic access, and implementation details. The success of the Web is largely due to its highly decentralized, yet sufficiently interoperable, model for information access. The expected future convergence between cheminformatics and bioinformatics databases provides new challenges for the management and analysis of large data sets. The web tools in predictive toxicology will likely continue to evolve toward the right mix of flexibility, performance, scalability, interoperability, sets of unique features offered, friendly user interfaces, programmatic access for advanced users, platform independence, results reproducibility, curation and crowdsourcing utilities, collaborative sharing and secure access.
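The "descriptor calculations plus modeling algorithms" pipeline such platforms offer can be illustrated in miniature: a crude formula-based descriptor vector fed to a nearest-neighbour prediction. This is purely a sketch of the pipeline shape; real platforms compute structure-based descriptors and use validated, cross-checked algorithms, and the toxicity labels below are invented.

```python
import re
from collections import Counter

def descriptors(formula):
    """Crude descriptor vector: element counts parsed from a
    molecular formula string such as 'C6H6'. Illustrative only."""
    counts = Counter()
    for elem, num in re.findall(r"([A-Z][a-z]?)(\d*)", formula):
        counts[elem] += int(num) if num else 1
    return counts

def predict(train, formula):
    """1-nearest-neighbour prediction in descriptor space."""
    def dist(a, b):
        keys = set(a) | set(b)
        return sum((a[k] - b[k]) ** 2 for k in keys)
    q = descriptors(formula)
    return min(train, key=lambda item: dist(descriptors(item[0]), q))[1]

# Invented training labels, for pipeline illustration only
train = [("C6H6", "toxic"), ("H2O", "non-toxic"), ("C2H5OH", "non-toxic")]
label = predict(train, "C6H5CH3")
```

The value of the web platforms reviewed lies in wrapping each stage of such a pipeline (descriptor choice, algorithm, validation, data sharing) behind a GUI or programmatic API.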

  16. A formal model for classifying trusted Semantic Web Services

    OpenAIRE

    Galizia, Stefania; Gugliotta, Alessio; Pedrinaci, Carlos

    2008-01-01

    Semantic Web Services (SWS) aim to alleviate Web service limitations, by combining Web service technologies with the potential of Semantic Web. Several open issues have to be tackled yet, in order to enable a safe and efficient Web services selection. One of them is represented by trust. In this paper, we introduce a trust definition and formalize a model for managing trust in SWS. The model approaches the selection of trusted Web services as a classification problem, and it is realized by an...

  17. WebGIS based on semantic grid model and web services

    Science.gov (United States)

    Zhang, WangFei; Yue, CaiRong; Gao, JianGuo

    2009-10-01

    As the meeting point of network technology and GIS technology, WebGIS has developed rapidly in recent years. Constrained by the Web and by the characteristics of GIS, traditional WebGIS has some prominent problems: for example, it cannot achieve interoperability among heterogeneous spatial databases, nor cross-platform data access. With the appearance of Web Services and Grid technology, the field of WebGIS changed greatly. A Web Service provides an interface that gives information at different sites the ability to share data and intercommunicate. The goal of Grid technology is to turn the Internet into one large supercomputer with which the overall sharing of computing resources, storage resources, data resources, information resources, knowledge resources and expert resources can be efficiently implemented. For WebGIS, however, a merely physical connection of data and information is far from enough. Because of different understandings of the world, different professional regulations, different policies and different habits, experts in different fields reach different conclusions when observing the same geographic phenomenon, and semantic heterogeneity arises: the same concept can differ greatly across fields. If we use WebGIS without considering this semantic heterogeneity, we will answer the questions users pose wrongly, or not at all. To solve this problem, this paper puts forward and tests an effective method of combining the semantic grid and Web Services technology to develop WebGIS. In this paper, we studied the method to construct an ontology and the method to combine Grid technology and Web Services, and with a detailed analysis of the computing characteristics and the application model of data distribution, we designed a WebGIS query system driven by

  18. A Web Centric Architecture for Deploying Multi-Disciplinary Engineering Design Processes

    Science.gov (United States)

    Woyak, Scott; Kim, Hongman; Mullins, James; Sobieszczanski-Sobieski, Jaroslaw

    2004-01-01

    There are continuous needs for engineering organizations to improve their design processes. Current state-of-the-art techniques use computational simulations to predict design performance and optimize it through advanced design methods. These tools have been used mostly by individual engineers. This paper presents an architecture for achieving results at the organization level, beyond the individual level. The next set of gains in process improvement will come from improving the effective use of computers and software within a whole organization, not just by an individual. The architecture takes advantage of state-of-the-art capabilities to produce a Web-based system to carry engineering design into the future. To illustrate deployment of the architecture, a case study implementing advanced multidisciplinary design optimization processes such as Bi-Level Integrated System Synthesis is discussed. Another example, rolling out a design process for Design for Six Sigma, is also described. Each example explains how an organization can effectively infuse engineering practice with new design methods and retain the knowledge over time.

  19. Correct software in web applications and web services

    CERN Document Server

    Thalheim, Bernhard; Prinz, Andreas; Buchberger, Bruno

    2015-01-01

    The papers in this volume aim at obtaining a common understanding of the challenging research questions in web applications comprising web information systems, web services, and web interoperability; obtaining a common understanding of verification needs in web applications; achieving a common understanding of the available rigorous approaches to system development, and the cases in which they have succeeded; identifying how rigorous software engineering methods can be exploited to develop suitable web applications; and at developing a European-scale research agenda combining theory, methods a

  20. Engineering semantic web information systems in Hera

    NARCIS (Netherlands)

    Vdovják, R.; Frasincar, F.; Houben, G.J.P.M.; Barna, P.

    2003-01-01

    The success of the World Wide Web has caused the concept of an information system to change. Web Information Systems (WIS) borrow from the Web its paradigm and technologies in order to retrieve information from sources on the Web, and to present the information in terms of a Web or hypermedia

  1. Engineers and the Web: An analysis of real life gaps in information usage

    NARCIS (Netherlands)

    Kraaijenbrink, Jeroen

    2007-01-01

    Engineers face a wide range of gaps when trying to identify, acquire, and utilize information from the Web. To be able to avoid creating such gaps, it is essential to understand them in detail. This paper reports the results of a study of the real life gaps in information usage processes of 17

  2. A web-based online collaboration platform for formulating engineering design projects

    Science.gov (United States)

    Varikuti, Sainath

    Effective communication and collaboration among students, faculty and industrial sponsors play a vital role in formulating and solving engineering design projects. With the advent of web technology, online platforms and systems have been proposed to facilitate interactions and collaboration among different stakeholders in the context of senior design projects. However, there are noticeable gaps in the literature with respect to understanding the effects of online collaboration platforms on formulating engineering design projects. Most of the existing literature is focused on exploring the utility of online platforms for activities after the problem is defined and teams are formed. Also, there is a lack of mechanisms and tools to guide the project formation phase in senior design projects, which makes it challenging for students and faculty to collaboratively develop and refine project ideas and to establish appropriate teams. In this thesis a web-based online collaboration platform is designed and implemented to share, discuss and obtain feedback on project ideas and to facilitate collaboration among students and faculty prior to the start of the semester. The goal of this thesis is to understand the impact of an online collaboration platform on formulating engineering design projects, and how a web-based online collaboration platform affects the amount of interaction among stakeholders during the early phases of the design process. A survey measuring the amount of interaction among students and faculty is administered. Initial findings show a marked improvement in the students' ability to share project ideas and form teams with other students and faculty. Students found the online platform simple to use. The suggestions for improving the tool generally included features that were not necessarily design specific, indicating that the underlying concept of this collaborative platform provides a strong basis and can be extended for future online platforms

  3. Web-video-mining-supported workflow modeling for laparoscopic surgeries.

    Science.gov (United States)

    Liu, Rui; Zhang, Xiaoli; Zhang, Hao

    2016-11-01

    As quality assurance is of strong concern in advanced surgeries, intelligent surgical systems are expected to have knowledge, such as knowledge of the surgical workflow model (SWM), to support their intuitive cooperation with surgeons. Generating a robust and reliable SWM requires a large amount of training data. However, training data collected by physically recording surgical operations are often limited, and data collection is time-consuming and labor-intensive, severely limiting the knowledge scalability of surgical systems. The objective of this research is to solve the knowledge scalability problem in surgical workflow modeling in a low-cost and labor-efficient way. A novel web-video-mining-supported surgical workflow modeling (webSWM) method is developed. A novel video quality analysis method based on topic analysis and sentiment analysis techniques is developed to select high-quality videos from abundant and noisy web videos. A statistical learning method is then used to build the workflow model based on the selected videos. To test the effectiveness of the webSWM method, 250 web videos were mined to generate a surgical workflow for robotic cholecystectomy. The generated workflow was evaluated with 4 web-retrieved videos and 4 operating-room-recorded videos, respectively. The evaluation results (video selection consistency n-index ≥0.60; surgical workflow matching degree ≥0.84) proved the effectiveness of the webSWM method in generating robust and reliable SWM knowledge by mining web videos. With the webSWM method, abundant web videos were selected and a reliable SWM was modeled in a short time at low labor cost. Satisfactory performance in mining web videos and learning surgery-related knowledge shows that the webSWM method is promising for scaling knowledge for intelligent surgical systems. Copyright © 2016 Elsevier B.V. All rights reserved.

  4. A resource-oriented web service for environmental modeling

    Science.gov (United States)

    Ferencik, Ioan

    2013-04-01

Environmental modeling is a widely adopted practice in the study of natural phenomena. Environmental models can be difficult to build and use, so sharing them within the community is an important concern. The most common approach to sharing a model is to expose it as a web service. In practice, interaction with such a web service is cumbersome due to the lack of a standardized contract and the complexity of the model being exposed. In this work we investigate the use of a resource-oriented approach to exposing environmental models as web services. We view a model as a layered resource built atop the object concept from object-oriented programming, augmented with persistence capabilities provided by an embedded object database to keep track of its state, and implementing the four basic principles of resource-oriented architectures: addressability, statelessness, representation and uniform interface. For implementation we use exclusively open source software: the Django framework, the dyBase object-oriented database and the Python programming language. We developed a generic framework of resources structured into a hierarchy of types and extended this typology with resources specific to the domain of environmental modeling. To test our web service we used cURL, a robust command-line web client.
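The four resource-oriented principles listed in the abstract can be sketched without any framework. The toy in-memory store below (all names hypothetical, not the paper's Django code) illustrates addressability via URIs, a uniform GET/PUT/DELETE interface, and statelessness (no per-client session):

```python
class ResourceStore:
    """Minimal resource-oriented interface: every model is addressable
    by a URI and manipulated only through a uniform set of verbs,
    with no per-client state held between requests."""
    def __init__(self):
        self._resources = {}

    def put(self, uri, representation):
        """Create or replace the representation stored at `uri`."""
        self._resources[uri] = dict(representation)

    def get(self, uri):
        """Return (status, representation) for `uri`."""
        if uri not in self._resources:
            return 404, None
        return 200, dict(self._resources[uri])

    def delete(self, uri):
        existed = uri in self._resources
        self._resources.pop(uri, None)
        return 204 if existed else 404

store = ResourceStore()
store.put("/models/runoff/1", {"state": "calibrated", "k": 0.3})
status, rep = store.get("/models/runoff/1")
print(status, rep["state"])   # -> prints: 200 calibrated
```

In the paper's architecture the representations would be persisted in the embedded object database rather than in a Python dict, but the uniform interface is the same idea.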

  5. Web information retrieval based on ontology

    Science.gov (United States)

    Zhang, Jian

    2013-03-01

The purpose of Information Retrieval (IR) is to find a set of documents that are relevant to a specific information need of a user. The traditional Information Retrieval model commonly used in commercial search engines is based on keyword indexing and Boolean logic queries. One major drawback of traditional information retrieval is that it typically retrieves information without an explicitly defined domain of interest to the users, so that much irrelevant information is returned, burdening the user with picking useful answers out of these irrelevant results. To tackle this issue, many Semantic Web information retrieval models have been proposed recently. The main advantage of the Semantic Web is to enhance search mechanisms through the use of ontologies. In this paper, we present our approach to personalizing a web search engine based on ontology. In addition, key techniques are also discussed. Compared to previous research, our work concentrates on semantic similarity and the whole process, including query submission and information annotation.

  6. Graph Structure in Three National Academic Webs: Power Laws with Anomalies.

    Science.gov (United States)

    Thelwall, Mike; Wilkinson, David

    2003-01-01

    Explains how the Web can be modeled as a mathematical graph and analyzes the graph structures of three national university publicly indexable Web sites from Australia, New Zealand, and the United Kingdom. Topics include commercial search engines and academic Web link research; method-analysis environment and data sets; and power laws. (LRW)

  7. Discovery and Selection of Semantic Web Services

    CERN Document Server

    Wang, Xia

    2013-01-01

For advanced web search engines to be able not only to search for semantically related information dispersed over different web pages, but also for semantic services providing certain functionalities, discovering semantic services is the key issue. Addressing four problems of current solutions, this book presents the following contributions. A novel service model independent of semantic service description models is proposed, which clearly defines all elements necessary for service discovery and selection. It centers on service selection and improves efficiency. Corresponding selection algorithms and their implementation as components of the extended Semantically Enabled Service-oriented Architecture in the Web Service Modeling Environment are detailed. Many applications of semantic web services, e.g. discovery, composition and mediation, can benefit from a general approach for building application ontologies. With application ontologies thus built, services are discovered in the same way as with single...

  8. Climate Model Diagnostic Analyzer Web Service System

    Science.gov (United States)

    Lee, S.; Pan, L.; Zhai, C.; Tang, B.; Jiang, J. H.

    2014-12-01

    We have developed a cloud-enabled web-service system that empowers physics-based, multi-variable model performance evaluations and diagnoses through the comprehensive and synergistic use of multiple observational data, reanalysis data, and model outputs. We have developed a methodology to transform an existing science application code into a web service using a Python wrapper interface and Python web service frameworks. The web-service system, called Climate Model Diagnostic Analyzer (CMDA), currently supports (1) all the observational datasets from Obs4MIPs and a few ocean datasets from NOAA and Argo, which can serve as observation-based reference data for model evaluation, (2) many of CMIP5 model outputs covering a broad range of atmosphere, ocean, and land variables from the CMIP5 specific historical runs and AMIP runs, and (3) ECMWF reanalysis outputs for several environmental variables in order to supplement observational datasets. Analysis capabilities currently supported by CMDA are (1) the calculation of annual and seasonal means of physical variables, (2) the calculation of time evolution of the means in any specified geographical region, (3) the calculation of correlation between two variables, (4) the calculation of difference between two variables, and (5) the conditional sampling of one physical variable with respect to another variable. A web user interface is chosen for CMDA because it not only lowers the learning curve and removes the adoption barrier of the tool but also enables instantaneous use, avoiding the hassle of local software installation and environment incompatibility. CMDA will be used as an educational tool for the summer school organized by JPL's Center for Climate Science in 2014. In order to support 30+ simultaneous users during the school, we have deployed CMDA to the Amazon cloud environment. The cloud-enabled CMDA will provide each student with a virtual machine while the user interaction with the system will remain the same
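Two of the analysis capabilities listed for CMDA, seasonal means and correlation between variables, reduce to short computations. The sketch below is plain Python, not CMDA's service code, and assumes a monthly series starting in January:

```python
from math import sqrt

def seasonal_mean(monthly, months):
    """Mean of a monthly series over the given month indices (0 = Jan)."""
    vals = [v for i, v in enumerate(monthly) if i % 12 in months]
    return sum(vals) / len(vals)

def pearson(x, y):
    """Pearson correlation between two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    var_x = sum((a - mx) ** 2 for a in x)
    var_y = sum((b - my) ** 2 for b in y)
    return cov / sqrt(var_x * var_y)

# Two years of made-up monthly anomalies
t = [0.1, 0.2, 0.4, 0.6, 0.9, 1.1, 1.2, 1.0, 0.7, 0.5, 0.3, 0.1] * 2
jja_mean = seasonal_mean(t, {5, 6, 7})   # June-July-August
print(round(jja_mean, 2))   # -> prints 1.1
```

A service like CMDA wraps such computations behind a web interface so that users never install analysis software locally; the arithmetic underneath is this simple.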

  9. A User-centered Model for Web Site Design

    Science.gov (United States)

    Kinzie, Mable B.; Cohn, Wendy F.; Julian, Marti F.; Knaus, William A.

    2002-01-01

As the Internet continues to grow as a delivery medium for health information, the design of effective Web sites becomes increasingly important. In this paper, the authors provide an overview of one effective model for Web site design, a user-centered process that includes techniques for needs assessment, goal/task analysis, user interface design, and rapid prototyping. They detail how this approach was employed to design a family health history Web site, Health Heritage. This Web site helps patients record and maintain their family health histories in a secure, confidential manner. It also supports primary care physicians through analysis of health histories, identification of potential risks, and provision of health care recommendations. Visual examples of the design process are provided to show how the use of this model resulted in an easy-to-use Web site that is likely to meet user needs. The model is effective across diverse content arenas and is appropriate for applications in varied media. PMID:12087113

  10. BaBar - A Community Web Site in an Organizational Setting

    Energy Technology Data Exchange (ETDEWEB)

    White, Bebo

    2003-07-10

The BABAR Web site was established in 1993 at the Stanford Linear Accelerator Center (SLAC) to support the BABAR experiment, to report its results, and to facilitate communication among its scientific and engineering collaborators, currently numbering about 600 individuals from 75 collaborating institutions in 10 countries. The BABAR Web site is, therefore, a community Web site. At the same time it is hosted at SLAC and funded by agencies that demand adherence to policies decided under different priorities. Additionally, the BABAR Web administrators deal with the problems that arise during the course of managing users, content, policies, standards, and changing technologies. Desired solutions to some of these problems may be incompatible with the overall administration of the SLAC Web sites and/or the SLAC policies and concerns. There are thus different perspectives of the same Web site and differing expectations in segments of the SLAC population which act as constraints and challenges in any review or re-engineering activities. Web Engineering, which post-dates the BABAR Web, has aimed to provide a comprehensive understanding of all aspects of Web development. This paper reports on the first part of a recent review of the application of Web Engineering methods to the BABAR Web site, which has led to explicit user and information models of the BABAR community and how SLAC and the BABAR community relate and react to each other. The paper identifies the issues of a community Web site in a hierarchical, semi-governmental sector and formulates a strategy for periodic reviews of BABAR and similar sites. A separate paper reports on the findings of a user survey and selected interviews with users, along with their implications and recommendations for the future.

  11. BaBar - A Community Web Site in an Organizational Setting

    International Nuclear Information System (INIS)

    White, Bebo

    2003-01-01

    The BABAR Web site was established in 1993 at the Stanford Linear Accelerator Center (SLAC) to support the BABAR experiment, to report its results, and to facilitate communication among its scientific and engineering collaborators, currently numbering about 600 individuals from 75 collaborating institutions in 10 countries. The BABAR Web site is, therefore, a community Web site. At the same time it is hosted at SLAC and funded by agencies that demand adherence to policies decided under different priorities. Additionally, the BABAR Web administrators deal with the problems that arise during the course of managing users, content, policies, standards, and changing technologies. Desired solutions to some of these problems may be incompatible with the overall administration of the SLAC Web sites and/or the SLAC policies and concerns. There are thus different perspectives of the same Web site and differing expectations in segments of the SLAC population which act as constraints and challenges in any review or re-engineering activities. Web Engineering, which post-dates the BABAR Web, has aimed to provide a comprehensive understanding of all aspects of Web development. This paper reports on the first part of a recent review of application of Web Engineering methods to the BABAR Web site, which has led to explicit user and information models of the BABAR community and how SLAC and the BABAR community relate and react to each other. The paper identifies the issues of a community Web site in a hierarchical, semi-governmental sector and formulates a strategy for periodic reviews of BABAR and similar sites. A separate paper reports on the findings of a user survey and selected interviews with users, along with their implications and recommendations for future

  12. Engineering High Assurance Distributed Cyber Physical Systems

    Science.gov (United States)

    2015-01-15

engineering (MDE), model-centric software engineering (MCSE), and others have attempted to leverage and integrate techniques for requirements... Part I: Principles of Software Engineering.” IBM Syst. J. 38, 2-3, pp. 289-295, June 1999. [2] Xie, T., “Software Engineering Conferences”, web page

  13. Bioprocess-Engineering Education with Web Technology

    NARCIS (Netherlands)

    Sessink, O.

    2006-01-01

    Development of learning material that is distributed through and accessible via the World Wide Web. Various options from web technology are exploited to improve the quality and efficiency of learning material.

  14. Semantic similarity measure in biomedical domain leverage web search engine.

    Science.gov (United States)

    Chen, Chi-Huang; Hsieh, Sheau-Ling; Weng, Yung-Ching; Chang, Wen-Yung; Lai, Feipei

    2010-01-01

Semantic similarity measures play an essential role in Information Retrieval and Natural Language Processing. In this paper we propose a page-count-based semantic similarity measure and apply it in biomedical domains. Previous research on Semantic Web related applications has deployed various semantic similarity measures. Despite the usefulness of these measurements in those applications, measuring semantic similarity between two terms remains a challenging task. The proposed method exploits page counts returned by the Web search engine. We define various similarity scores for two given terms P and Q, using the page counts for the queries P, Q, and P AND Q. Moreover, we propose a novel approach to compute semantic similarity using lexico-syntactic patterns with page counts. These different similarity scores are integrated by adapting support vector machines, to leverage the robustness of semantic similarity measures. Experimental results on two datasets achieve correlation coefficients of 0.798 on the dataset provided by A. Hliaoutakis, 0.705 on the dataset provided by T. Pedersen with physician scores and 0.496 on the dataset provided by T. Pedersen et al. with expert scores.
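One family of page-count scores of the kind the abstract describes is the "Web" variant of classical co-occurrence measures. The sketch below shows only a Jaccard-style score and is a simplification: the paper combines several such scores and lexical patterns in an SVM, and the page counts here are made up:

```python
def web_jaccard(count_p, count_q, count_pq, threshold=5):
    """Similarity of terms P and Q from search-engine page counts:
    hits for the query 'P AND Q' relative to hits for either term
    alone. Tiny co-occurrence counts are treated as noise."""
    if count_pq < threshold:
        return 0.0
    return count_pq / (count_p + count_q - count_pq)

# Hypothetical page counts for two related biomedical terms
print(web_jaccard(12000, 9000, 6000))   # -> prints 0.4
```

Analogous "WebOverlap" or "WebPMI" scores swap the denominator; feeding several of them to a classifier is what lends the combined measure its robustness.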

  15. The inverse niche model for food webs with parasites

    Science.gov (United States)

    Warren, Christopher P.; Pascual, Mercedes; Lafferty, Kevin D.; Kuris, Armand M.

    2010-01-01

    Although parasites represent an important component of ecosystems, few field and theoretical studies have addressed the structure of parasites in food webs. We evaluate the structure of parasitic links in an extensive salt marsh food web, with a new model distinguishing parasitic links from non-parasitic links among free-living species. The proposed model is an extension of the niche model for food web structure, motivated by the potential role of size (and related metabolic rates) in structuring food webs. The proposed extension captures several properties observed in the data, including patterns of clustering and nestedness, better than does a random model. By relaxing specific assumptions, we demonstrate that two essential elements of the proposed model are the similarity of a parasite's hosts and the increasing degree of parasite specialization, along a one-dimensional niche axis. Thus, inverting one of the basic rules of the original model, the one determining consumers' generality appears critical. Our results support the role of size as one of the organizing principles underlying niche space and food web topology. They also strengthen the evidence for the non-random structure of parasitic links in food webs and open the door to addressing questions concerning the consequences and origins of this structure.

  16. Integration of Web mining and web crawler: Relevance and State of Art

    OpenAIRE

    Subhendu kumar pani; Deepak Mohapatra,; Bikram Keshari Ratha

    2010-01-01

This study presents the role of the web crawler in the web mining environment. As the growth of the World Wide Web has exceeded all expectations, research on Web mining is growing more and more. Web mining is a research topic which combines two active research areas: data mining and the World Wide Web. The World Wide Web is thus a very fertile area for data mining research. Search engines that are based on a web crawling framework are also used in web mining to find the interconnected web pages. This paper discu...

  17. Web Project Management

    OpenAIRE

    Suralkar, Sunita; Joshi, Nilambari; Meshram, B B

    2013-01-01

This paper describes the need for Web project management and the fundamentals of project management for web projects: what it is, why projects go wrong, and what's different about web projects. We also discuss cost estimation techniques based on size metrics. Though Web project development is similar to traditional software development, the special characteristics of Web application development require the adaptation of many software engineering approaches or even the development of comple...

  18. Developing a new search engine and browser for libraries to search and organize the World Wide Web library resources

    OpenAIRE

    Sreenivasulu, V.

    2000-01-01

Internet Granthalaya urges worldwide advocates and targets the task of creating a new search engine and a dedicated browser. Internet Granthalaya may be the ultimate search engine exclusively dedicated to every library's use for searching and organizing the World Wide Web library resources.

  19. Exploring the academic invisible web

    OpenAIRE

    Lewandowski, Dirk; Mayr, Philipp

    2006-01-01

    Purpose: To provide a critical review of Bergman’s 2001 study on the Deep Web. In addition, we bring a new concept into the discussion, the Academic Invisible Web (AIW). We define the Academic Invisible Web as consisting of all databases and collections relevant to academia but not searchable by the general-purpose internet search engines. Indexing this part of the Invisible Web is central to scientific search engines. We provide an overview of approaches followed thus far. Design/methodol...

  20. A web services choreography scenario for interoperating bioinformatics applications

    Directory of Open Access Journals (Sweden)

    Cheung David W

    2004-03-01

Full Text Available Abstract Background Very often genome-wide data analysis requires the interoperation of multiple databases and analytic tools. A large number of genome databases and bioinformatics applications are available through the web, but it is difficult to automate interoperation because: (1) the platforms on which the applications run are heterogeneous, (2) their web interface is not machine-friendly, (3) they use a non-standard format for data input and output, (4) they do not exploit standards to define application interface and message exchange, and (5) existing protocols for remote messaging are often not firewall-friendly. To overcome these issues, web services have emerged as a standard XML-based model for message exchange between heterogeneous applications. Web services engines have been developed to manage the configuration and execution of a web services workflow. Results To demonstrate the benefit of using web services over traditional web interfaces, we compare the two implementations of HAPI, a gene expression analysis utility developed by the University of California San Diego (UCSD) that allows visual characterization of groups or clusters of genes based on the biomedical literature. This utility takes a set of microarray spot IDs as input and outputs a hierarchy of MeSH Keywords that correlates to the input and is grouped by Medical Subject Heading (MeSH) category. While the HTML output is easy for humans to visualize, it is difficult for computer applications to interpret semantically. To facilitate the capability of machine processing, we have created a workflow of three web services that replicates the HAPI functionality. These web services use document-style messages, which means that messages are encoded in an XML-based format. We compared three approaches to the implementation of an XML-based workflow: a hard-coded Java application, Collaxa BPEL Server and Taverna Workbench.
The Java program functions as a web services engine and interoperates

  1. Web malware spread modelling and optimal control strategies

    Science.gov (United States)

    Liu, Wanping; Zhong, Shouming

    2017-02-01

    The popularity of the Web improves the growth of web threats. Formulating mathematical models for accurate prediction of malicious propagation over networks is of great importance. The aim of this paper is to understand the propagation mechanisms of web malware and the impact of human intervention on the spread of malicious hyperlinks. Considering the characteristics of web malware, a new differential epidemic model which extends the traditional SIR model by adding another delitescent compartment is proposed to address the spreading behavior of malicious links over networks. The spreading threshold of the model system is calculated, and the dynamics of the model is theoretically analyzed. Moreover, the optimal control theory is employed to study malware immunization strategies, aiming to keep the total economic loss of security investment and infection loss as low as possible. The existence and uniqueness of the results concerning the optimality system are confirmed. Finally, numerical simulations show that the spread of malware links can be controlled effectively with proper control strategy of specific parameter choice.
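The structure of such a model, an SIR system extended with a delitescent (infected-but-not-yet-spreading) compartment, is analogous in form to a classical SEIR system. The forward-Euler sketch below uses made-up rate parameters, not those of the paper:

```python
def simulate(beta, sigma, gamma, s, e, i, r, dt=0.01, steps=20000):
    """Integrate dS/dt = -beta*S*I/N, dE/dt = beta*S*I/N - sigma*E,
    dI/dt = sigma*E - gamma*I, dR/dt = gamma*I with forward Euler.
    E is the delitescent compartment: exposed but not yet spreading."""
    for _ in range(steps):
        n = s + e + i + r
        new_inf = beta * s * i / n          # newly clicked malicious links
        ds, de = -new_inf, new_inf - sigma * e
        di, dr = sigma * e - gamma * i, gamma * i
        s, e, i, r = s + ds * dt, e + de * dt, i + di * dt, r + dr * dt
    return s, e, i, r

s, e, i, r = simulate(beta=0.5, sigma=0.2, gamma=0.1,
                      s=0.99, e=0.0, i=0.01, r=0.0)
print(round(s + e + i + r, 6))   # total population is conserved -> prints 1.0
```

The spreading threshold the abstract mentions corresponds to the basic reproduction number of such a system; the paper's optimal-control analysis then treats the rate parameters as time-varying controls, which this fixed-parameter sketch does not attempt.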

  2. Using UML to Model Web Services for Automatic Composition

    OpenAIRE

    Amal Elgammal; Mohamed El-Sharkawi

    2010-01-01

There is great interest in the web services paradigm nowadays. One of the most important problems related to the web service paradigm is the automatic composition of web services. Several frameworks have been proposed to achieve this novel goal. The most recent and richest framework (model) is the Colombo model. However, even for experienced developers, working with Colombo formalisms is low-level, very complex and time-consuming. We propose to use UML (Unified Modeling Language) to mod...

  3. Electronic Grey Literature in Accelerator Science and Its Allied Subjects : Selected Web Resources for Scientists and Engineers

    CERN Document Server

    Rajendiran, P

    2006-01-01

    Grey literature Web resources in the field of accelerator science and its allied subjects are collected for the scientists and engineers of RRCAT (Raja Ramanna Centre for Advanced Technology). For definition purposes the different types of grey literature are described. The Web resources collected and compiled in this article (with an overview and link for each) specifically focus on technical reports, preprints or e-prints, which meet the main information needs of RRCAT users.

  4. A Connection Model between the Positioning Mechanism and Ultrasonic Measurement System via a Web Browser to Assess Acoustic Target Strength

    Science.gov (United States)

    Ishii, Ken; Imaizumi, Tomohito; Abe, Koki; Takao, Yoshimi; Tamura, Shuko

This paper details a network-controlled measurement system for use in fisheries engineering. The target strength (TS) of fish is needed to convert acoustic integration values obtained during acoustic surveys into estimates of fish abundance. The target strength pattern is measured by combining a rotation system, which sets the aspect of the sample, with an echo data acquisition system using underwater ultrasonic waves. The user interface of the network architecture is designed for collaborative use with researchers in other organizations. The flexible network architecture is based on the web direct-access model for the rotation mechanism. The user interface is available for monitoring and control via a web browser installed on any terminal PC (personal computer). Previously, the two applications were combined not through a web browser but through a dedicated interface program. A connection model between the two applications is therefore proposed, based on indirect communication via a DCOM (Distributed Component Object Model) server, and added to the web direct-access model. A prompt report system within the TS measurement system and a positioning and measurement system using an electric flatcar, both operated via a web browser, are developed. Under a secure network architecture, DCOM communications via both the Intranet and the LAN are successfully authenticated.

  5. On N = 1 gauge models from geometric engineering in M-theory

    International Nuclear Information System (INIS)

    Belhaj, A; Drissi, L B; Rasmussen, J

    2003-01-01

We study geometric engineering of four-dimensional N = 1 gauge models from M-theory on a seven-dimensional manifold with G2 holonomy. The manifold is constructed as a K3 fibration over a three-dimensional base space with ADE geometry. The resulting gauge theory is discussed in the realm of (p, q) webs. We discuss how the anomaly cancellation condition translates into a condition on the associated affine ADE Lie algebras

  6. Sagace: A web-based search engine for biomedical databases in Japan

    Directory of Open Access Journals (Sweden)

    Morita Mizuki

    2012-10-01

Full Text Available Abstract Background In the big data era, biomedical research continues to generate a large amount of data, and the generated information is often stored in a database and made publicly available. Although combining data from multiple databases should accelerate further studies, the current number of life sciences databases is too large to grasp features and contents of each database. Findings We have developed Sagace, a web-based search engine that enables users to retrieve information from a range of biological databases (such as gene expression profiles and proteomics data) and biological resource banks (such as mouse models of disease and cell lines). With Sagace, users can search more than 300 databases in Japan. Sagace offers features tailored to biomedical research, including manually tuned ranking, a faceted navigation to refine search results, and rich snippets constructed with retrieved metadata for each database entry. Conclusions Sagace will be valuable for experts who are involved in biomedical research and drug development in both academia and industry. Sagace is freely available at http://sagace.nibio.go.jp/en/.

  7. Flow Webs: Mechanism and Architecture for the Implementation of Sensor Webs

    Science.gov (United States)

    Gorlick, M. M.; Peng, G. S.; Gasster, S. D.; McAtee, M. D.

    2006-12-01

    -time demands. Flows are the connective tissue of flow webs—massive computational engines organized as directed graphs whose nodes are semi-autonomous components and whose edges are flows. The individual components of a flow web may themselves be encapsulated flow webs. In other words, a flow web subgraph may be presented to a yet larger flow web as a single, seamless component. Flow webs, at all levels, may be edited and modified while still executing. Within a flow web individual components may be added, removed, started, paused, halted, reparameterized, or inspected. The topology of a flow web may be changed at will. Thus, flow webs exhibit an extraordinary degree of adaptivity and robustness as they are explicitly designed to be modified on the fly, an attribute well suited for dynamic model interactions in sensor webs. We describe our concept for a sensor web, implemented as a flow web, in the context of a wildfire disaster management system for the southern California region. Comprehensive wildfire management requires cooperation among multiple agencies. Flow webs allow agencies to share resources in exactly the manner they choose. We will explain how to employ flow webs and agents to integrate satellite remote sensing data, models, in-situ sensors, UAVs and other resources into a sensor web that interconnects organizations and their disaster management tools in a manner that simultaneously preserves their independence and builds upon the individual strengths of agency-specific models and data sources.
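The abstract's picture of a flow web, a directed graph of semi-autonomous components that can be rewired while in use, can be caricatured in a few lines. The component names and the pull-based evaluation below are illustrative assumptions, not the authors' architecture:

```python
class FlowWeb:
    """Directed graph of named components; for each node, `deps` names
    the upstream components whose outputs feed it. Nodes may be added
    or re-wired between evaluations (on-the-fly editing)."""
    def __init__(self):
        self.fns, self.deps = {}, {}

    def add(self, name, fn, deps=()):
        """Add or replace a component; re-adding re-wires the graph."""
        self.fns[name] = fn
        self.deps[name] = list(deps)

    def run(self, name):
        """Pull-evaluate a node by recursively evaluating its inputs."""
        args = [self.run(d) for d in self.deps[name]]
        return self.fns[name](*args)

web = FlowWeb()
web.add("sensor", lambda: 42.0)                      # in-situ reading
web.add("calibrate", lambda x: x * 0.5, ["sensor"])  # model component
print(web.run("calibrate"))   # -> prints 21.0

# Re-wire while "running": swap the calibration component
web.add("calibrate", lambda x: x + 1.0, ["sensor"])
print(web.run("calibrate"))   # -> prints 43.0
```

A real flow web would add streaming edges, encapsulated subgraphs and pause/halt/inspect controls on each node; the point here is only that the topology itself is mutable data.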

  8. Search Engine Optimization for Flash Best Practices for Using Flash on the Web

    CERN Document Server

    Perkins, Todd

    2009-01-01

    Search Engine Optimization for Flash dispels the myth that Flash-based websites won't show up in a web search by demonstrating exactly what you can do to make your site fully searchable -- no matter how much Flash it contains. You'll learn best practices for using HTML, CSS and JavaScript, as well as SWFObject, for building sites with Flash that will stand tall in search rankings.

  9. Applying Web Usage Mining for Personalizing Hyperlinks in Web-Based Adaptive Educational Systems

    Science.gov (United States)

    Romero, Cristobal; Ventura, Sebastian; Zafra, Amelia; de Bra, Paul

    2009-01-01

    Nowadays, the application of Web mining techniques in e-learning and Web-based adaptive educational systems is increasing exponentially. In this paper, we propose an advanced architecture for a personalization system to facilitate Web mining. A specific Web mining tool is developed and a recommender engine is integrated into the AHA! system in…

  10. SBMLmod: a Python-based web application and web service for efficient data integration and model simulation.

    Science.gov (United States)

    Schäuble, Sascha; Stavrum, Anne-Kristin; Bockwoldt, Mathias; Puntervoll, Pål; Heiland, Ines

    2017-06-24

    Systems Biology Markup Language (SBML) is the standard model representation and description language in systems biology. Enriching and analysing systems biology models by integrating the multitude of available data, increases the predictive power of these models. This may be a daunting task, which commonly requires bioinformatic competence and scripting. We present SBMLmod, a Python-based web application and service, that automates integration of high throughput data into SBML models. Subsequent steady state analysis is readily accessible via the web service COPASIWS. We illustrate the utility of SBMLmod by integrating gene expression data from different healthy tissues as well as from a cancer dataset into a previously published model of mammalian tryptophan metabolism. SBMLmod is a user-friendly platform for model modification and simulation. The web application is available at http://sbmlmod.uit.no , whereas the WSDL definition file for the web service is accessible via http://sbmlmod.uit.no/SBMLmod.wsdl . Furthermore, the entire package can be downloaded from https://github.com/MolecularBioinformatics/sbml-mod-ws . We envision that SBMLmod will make automated model modification and simulation available to a broader research community.

  11. Reconsidering the Rhizome: A Textual Analysis of Web Search Engines as Gatekeepers of the Internet

    Science.gov (United States)

    Hess, A.

    Critical theorists have often drawn from Deleuze and Guattari's notion of the rhizome when discussing the potential of the Internet. While the Internet may structurally appear as a rhizome, its day-to-day usage by millions via search engines precludes experiencing the random interconnectedness and potential democratizing function. Through a textual analysis of four search engines, I argue that Web searching has grown hierarchies, or "trees," that organize data in tracts of knowledge and place users in marketing niches rather than assist in the development of new knowledge.

  12. Developing Creativity and Problem-Solving Skills of Engineering Students: A Comparison of Web- and Pen-and-Paper-Based Approaches

    Science.gov (United States)

    Valentine, Andrew; Belski, Iouri; Hamilton, Margaret

    2017-01-01

    Problem-solving is a key engineering skill, yet is an area in which engineering graduates underperform. This paper investigates the potential of using web-based tools to teach students problem-solving techniques without the need to make use of class time. An idea generation experiment involving 90 students was designed. Students were surveyed…

  13. A Taxonomic Search Engine: federating taxonomic databases using web services.

    Science.gov (United States)

    Page, Roderic D M

    2005-03-09

    The taxonomic name of an organism is a key link between different databases that store information on that organism. However, in the absence of a single, comprehensive database of organism names, individual databases lack an easy means of checking the correctness of a name. Furthermore, the same organism may have more than one name, and the same name may apply to more than one organism. The Taxonomic Search Engine (TSE) is a web application written in PHP that queries multiple taxonomic databases (ITIS, Index Fungorum, IPNI, NCBI, and uBIO) and summarises the results in a consistent format. It supports "drill-down" queries to retrieve a specific record. The TSE can optionally suggest alternative spellings the user can try. It also acts as a Life Science Identifier (LSID) authority for the source taxonomic databases, providing globally unique identifiers (and associated metadata) for each name. The Taxonomic Search Engine is available at http://darwin.zoology.gla.ac.uk/~rpage/portal/ and provides a simple demonstration of the potential of the federated approach to providing access to taxonomic names.
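The federated pattern TSE implements, fanning one name query out to several backends and normalising the heterogeneous answers into one record format, can be sketched with stand-in backends. The two lambdas below are mock ITIS/NCBI-style sources with hypothetical identifiers, not calls to the real services:

```python
def federated_search(name, sources):
    """Query each backend and merge hits into one consistent format,
    tagging every record with the database it came from."""
    merged = []
    for db, query in sources.items():
        for hit in query(name):
            merged.append({"source": db, "name": name, "id": hit})
    return merged

# Mock backends standing in for real taxonomic databases
sources = {
    "itis": lambda q: ["tsn:180092"] if q == "Ursus arctos" else [],
    "ncbi": lambda q: ["taxid:9644"] if q == "Ursus arctos" else [],
}
hits = federated_search("Ursus arctos", sources)
print(len(hits), hits[0]["source"])   # -> prints: 2 itis
```

The real application adds per-source wrappers for each database's query protocol, spelling suggestions, and LSID minting on top of exactly this merge step.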

  14. Managing uncertainty in integrated environmental modelling: The UncertWeb framework.

    NARCIS (Netherlands)

    Bastin, L.; Cornford, D.; Jones, R.; Heuvelink, G.B.M.; Pebesma, E.; Stasch, C.; Nativi, S.; Mazzetti, P.

    2013-01-01

    Web-based distributed modelling architectures are gaining increasing recognition as potentially useful tools to build holistic environmental models, combining individual components in complex workflows. However, existing web-based modelling frameworks currently offer no support for managing

  15. A Two-Tiered Model for Analyzing Library Web Site Usage Statistics, Part 1: Web Server Logs.

    Science.gov (United States)

    Cohen, Laura B.

    2003-01-01

    Proposes a two-tiered model for analyzing web site usage statistics for academic libraries: one tier for library administrators that analyzes measures indicating library use, and a second tier for web site managers that analyzes measures aiding in server maintenance and site design. Discusses the technology of web site usage statistics, and…
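
The two-tier split can be illustrated with a few lines of Python deriving both tiers from the same server log. The log lines, asset-extension filter, and choice of metrics below are invented for illustration, not taken from the article.

```python
import re
from collections import Counter

# Tier 1 (library administrators): measures of content use (page views).
# Tier 2 (web site managers): measures aiding maintenance (status codes).
# Both tiers are computed from the same Apache-style access log.

LOG_RE = re.compile(
    r'\S+ \S+ \S+ \[[^\]]+\] "(?:GET|POST) (\S+) [^"]*" (\d{3}) \S+')

lines = [
    '1.2.3.4 - - [10/Oct/2003:13:55:36 -0700] "GET /research/guide.html HTTP/1.0" 200 2326',
    '1.2.3.5 - - [10/Oct/2003:13:55:40 -0700] "GET /css/site.css HTTP/1.0" 200 150',
    '1.2.3.6 - - [10/Oct/2003:13:56:01 -0700] "GET /catalog/search HTTP/1.0" 404 512',
]

page_views = Counter()    # tier 1: content pages only
status_codes = Counter()  # tier 2: errors and server health

for line in lines:
    m = LOG_RE.match(line)
    if not m:
        continue
    path, status = m.group(1), m.group(2)
    status_codes[status] += 1
    # Exclude embedded assets so tier-1 counts reflect library use, not layout.
    if not path.endswith((".css", ".js", ".png", ".gif")):
        page_views[path] += 1
```

In this sketch the administrator report would summarise `page_views`, while the site manager's report would start from `status_codes` (e.g. the 404 pointing at a broken link).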

  16. Error Checking for Chinese Query by Mining Web Log

    Directory of Open Access Journals (Sweden)

    Jianyong Duan

    2015-01-01

    Full Text Available For search engines, erroneously input queries are a common phenomenon. This paper uses web logs as the training set for query error checking. Through an n-gram language model trained on web logs, queries are analyzed and checked. Some features, including query words and their number, are introduced into the model. At the same time, a data smoothing algorithm is used to solve the data sparseness problem, improving the overall accuracy of the n-gram model. The experimental results show that the approach is effective.
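
The core idea can be sketched in a few lines: train an n-gram model (a bigram model here) on logged queries, smooth it to handle sparseness (add-one smoothing below; the paper's smoothing algorithm may differ), and flag queries that score improbably low. The training queries are invented for illustration.

```python
from collections import Counter

# Train a bigram model over logged queries, with add-one (Laplace)
# smoothing so unseen bigrams get a small nonzero probability.

log_queries = [["web", "search"], ["web", "search", "engine"],
               ["search", "engine"], ["web", "log"]]

unigrams, bigrams = Counter(), Counter()
for q in log_queries:
    toks = ["<s>"] + q          # <s> marks the query start
    unigrams.update(toks)
    bigrams.update(zip(toks, toks[1:]))

V = len(unigrams)               # vocabulary size for smoothing

def prob(w1, w2):
    # Add-one smoothing: unseen pairs still score > 0.
    return (bigrams[(w1, w2)] + 1) / (unigrams[w1] + V)

def score(query):
    toks = ["<s>"] + query
    p = 1.0
    for w1, w2 in zip(toks, toks[1:]):
        p *= prob(w1, w2)
    return p

good = score(["web", "search"])
odd = score(["engine", "web"])  # unattested word order scores lower
```

A query-checking front end would then compare `score(query)` against a threshold (or against scores of candidate corrections) to decide whether the input is likely erroneous.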

  17. Modelling food-web mediated effects of hydrological variability and environmental flows.

    Science.gov (United States)

    Robson, Barbara J; Lester, Rebecca E; Baldwin, Darren S; Bond, Nicholas R; Drouart, Romain; Rolls, Robert J; Ryder, Darren S; Thompson, Ross M

    2017-11-01

    Environmental flows are designed to enhance aquatic ecosystems through a variety of mechanisms; however, to date most attention has been paid to the effects on habitat quality and life-history triggers, especially for fish and vegetation. The effects of environmental flows on food webs have so far received little attention, despite food-web thinking being fundamental to understanding of river ecosystems. Understanding environmental flows in a food-web context can help scientists and policy-makers better understand and manage outcomes of flow alteration and restoration. In this paper, we consider mechanisms by which flow variability can influence and alter food webs, and place these within a conceptual and numerical modelling framework. We also review the strengths and weaknesses of various approaches to modelling the effects of hydrological management on food webs. Although classic bioenergetic models such as Ecopath with Ecosim capture many of the key features required, other approaches, such as biogeochemical ecosystem modelling, end-to-end modelling, population dynamic models, individual-based models, graph theory models, and stock assessment models are also relevant. In many cases, a combination of approaches will be useful. We identify current challenges and new directions in modelling food-web responses to hydrological variability and environmental flow management. These include better integration of food-web and hydraulic models, taking physiologically-based approaches to food quality effects, and better representation of variations in space and time that may create ecosystem control points. Crown Copyright © 2017. Published by Elsevier Ltd. All rights reserved.

  18. Modelling PCB bioaccumulation in a Baltic food web

    International Nuclear Information System (INIS)

    Nfon, Erick; Cousins, Ian T.

    2007-01-01

    A steady state model is developed to describe the bioaccumulation of organic contaminants by 14 species in a Baltic food web including pelagic and benthic aquatic organisms. The model is used to study the bioaccumulation of five PCB congeners of different chlorination levels. The model predictions are evaluated against monitoring data for five of the species in the food web. Predicted concentrations are on average within a factor of two of measured concentrations. The model shows that all PCB congeners were biomagnified in the food web, which is consistent with observations. Sensitivity analysis reveals that the single most sensitive parameter is log KOW. The most sensitive environmental parameter is the annual average temperature. Although not identified amongst the most sensitive input parameters, the dissolved concentration in water is believed to be important because of the uncertainty in its determination. The most sensitive organism-specific input parameters are the fractional respiration of species from the water column and sediment pore water, which are also difficult to determine. Parameters such as feeding rate, growth rate and lipid content of organisms are only important at higher trophic levels. - The bioaccumulation behaviour of PCB congeners in a Baltic food web is studied using a novel mechanistic model
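
The steady-state logic can be caricatured in a few lines: a water-respiring baseline concentration driven by KOW (bioconcentration), amplified by a biomagnification factor at each trophic step. This is a deliberately simplified sketch of the general approach, not the paper's 14-species model; the species, trophic levels, and all parameter values are illustrative assumptions.

```python
# Minimal steady-state bioaccumulation sketch:
#   lipid-normalized baseline = C_water * K_OW      (bioconcentration)
#   each trophic step multiplies by a BMF           (biomagnification)

log_kow = 6.0      # e.g. a higher-chlorinated PCB congener (assumed)
c_water = 1e-6     # mg/L freely dissolved concentration (assumed)
bmf = 3.0          # biomagnification factor per trophic step (assumed)

# Lipid-normalized baseline for an organism at trophic level 1.
c_base = c_water * 10 ** log_kow   # mg/kg lipid

food_web = {"zooplankton": 1, "herring": 2, "cod": 3}
conc = {sp: c_base * bmf ** (tl - 1) for sp, tl in food_web.items()}

# Sensitivity to log KOW, the model's single most sensitive parameter:
# raising log KOW by 0.3 roughly doubles every predicted concentration.
conc_hi = {sp: c * 10 ** 0.3 for sp, c in conc.items()}
```

The sketch makes the abstract's sensitivity finding concrete: because every concentration scales with 10^logKOW, a small error in log KOW propagates multiplicatively through the entire food web.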

  19. Web services foundations

    CERN Document Server

    Bouguettaya, Athman; Daniel, Florian

    2013-01-01

    Web services and Service-Oriented Computing (SOC) have become thriving areas of academic research, joint university/industry research projects, and novel IT products on the market. SOC is the computing paradigm that uses Web services as building blocks for the engineering of composite, distributed applications out of the reusable application logic encapsulated by Web services. Web services could be considered the best-known and most standardized technology in use today for distributed computing over the Internet.Web Services Foundations is the first installment of a two-book collection coverin

  20. Understanding the Web from an Economic Perspective: The Evolution of Business Models and the Web

    Directory of Open Access Journals (Sweden)

    Louis Rinfret

    2014-08-01

    Full Text Available The advent of the World Wide Web is arguably amongst the most important changes that have occurred since the 1990s in the business landscape. It has fueled the rise of new industries, supported the convergence and reshaping of existing ones and enabled the development of new business models. During this time the web has evolved tremendously from a relatively static page-display tool to a massive network of user-generated content, collective intelligence, applications and hypermedia. As technical standards continue to evolve, business models catch up to the new capabilities. New ways of creating value, distributing it and profiting from it emerge more rapidly than ever. In this paper we explore how the World Wide Web and business models evolve and we identify avenues for future research in light of the web's ever-evolving nature and its influence on business models.

  1. Model My Watershed and BiG CZ Data Portal: Interactive geospatial analysis and hydrological modeling web applications that leverage the Amazon cloud for scientists, resource managers and students

    Science.gov (United States)

    Aufdenkampe, A. K.; Mayorga, E.; Tarboton, D. G.; Sazib, N. S.; Horsburgh, J. S.; Cheetham, R.

    2016-12-01

    The Model My Watershed Web app (http://wikiwatershed.org/model/) was designed to enable citizens, conservation practitioners, municipal decision-makers, educators, and students to interactively select any area of interest anywhere in the continental USA to: (1) analyze real land use and soil data for that area; (2) model stormwater runoff and water-quality outcomes; and (3) compare how different conservation or development scenarios could modify runoff and water quality. The BiG CZ Data Portal is a web application that offers scientists intuitive, high-performance map-based discovery, visualization, access and publication of diverse earth and environmental science data via a map-based interface that simultaneously performs geospatial analysis of selected GIS and satellite raster data for a selected area of interest. The two web applications share a common codebase (https://github.com/WikiWatershed and https://github.com/big-cz), high performance geospatial analysis engine (http://geotrellis.io/ and https://github.com/geotrellis) and deployment on the Amazon Web Services (AWS) cloud cyberinfrastructure. Users can use "on-the-fly" rapid watershed delineation over the national elevation model to select their watershed or catchment of interest. The two web applications also share the goal of enabling scientists, resource managers and students alike to share data, analyses and model results. We will present these functioning web applications and their potential to substantially lower the bar for studying and understanding our water resources. We will also present work in progress, including a prototype system for enabling citizen-scientists to register open-source sensor stations (http://envirodiy.org/mayfly/) to stream data into these systems, so that they can be reshared using WaterOneFlow web services.

  2. Habitat-mediated variation in the importance of ecosystem engineers for secondary cavity nesters in a nest web.

    Science.gov (United States)

    Robles, Hugo; Martin, Kathy

    2014-01-01

    Through physical state changes in biotic or abiotic materials, ecosystem engineers modulate resource availability to other organisms and are major drivers of evolutionary and ecological dynamics. Understanding whether and how ecosystem engineers are interchangeable for resource users in different habitats is a largely neglected topic in ecosystem engineering research that can improve our understanding of the structure of communities. We addressed this issue in a cavity-nest web (1999-2011). In aspen groves, the presence of mountain bluebird (Sialia currucoides) and tree swallow (Tachycineta bicolour) nests was positively related to the density of cavities supplied by northern flickers (Colaptes auratus), which provided the most abundant cavities (1.61 cavities/ha). Flickers in aspen groves provided numerous nesting cavities to bluebirds (66%) and swallows (46%), despite previous research showing that flicker cavities are avoided by swallows. In continuous mixed forests, however, the presence of nesting swallows was mainly related to cavity density of red-naped sapsuckers (Sphyrapicus nuchalis), which provided the most abundant cavities (0.52 cavities/ha), and to cavity density of hairy woodpeckers (Picoides villosus), which provided few (0.14 cavities/ha) but high-quality cavities. Overall, sapsuckers and hairy woodpeckers provided 86% of nesting cavities to swallows in continuous forests. In contrast, the presence of nesting bluebirds in continuous forests was associated with the density of cavities supplied by all the ecosystem engineers. These results suggest that (i) habitat type may mediate the associations between ecosystem engineers and resource users, and (ii) different ecosystem engineers may be interchangeable for resource users depending on the quantity and quality of resources that each engineer supplies in each habitat type. We, therefore, urge the incorporation of the variation in the quantity and quality of resources provided by ecosystem engineers…

  3. Critical Reading of the Web

    Science.gov (United States)

    Griffin, Teresa; Cohen, Deb

    2012-01-01

    The ubiquity and familiarity of the world wide web means that students regularly turn to it as a source of information. In doing so, they "are said to rely heavily on simple search engines, such as Google to find what they want." Researchers have also investigated how students use search engines, concluding that "the young web users tended to…

  4. Digging Deeper: The Deep Web.

    Science.gov (United States)

    Turner, Laura

    2001-01-01

    Focuses on the Deep Web, defined as Web content in searchable databases of the type that can be found only by direct query. Discusses the problems of indexing; inability to find information not indexed in the search engine's database; and metasearch engines. Describes 10 sites created to access online databases or directly search them. Lists ways…

  5. Developing BP-driven web application through the use of MDE techniques

    OpenAIRE

    Torres Bosch, Maria Victoria; Giner Blasco, Pau; Pelechano Ferragud, Vicente

    2012-01-01

    Model driven engineering (MDE) is a suitable approach for performing the construction of software systems (in particular in the Web application domain). There are different types of Web applications depending on their purpose (i.e., document-centric, interactive, transactional, workflow/business process-based, collaborative, etc.). This work focuses on business process-based Web applications in order to be able to understand business processes in a broad sense, from the lightweight business p...

  6. Web service availability-impact of error recovery and traffic model

    International Nuclear Information System (INIS)

    Martinello, Magnos; Kaâniche, Mohamed; Kanoun, Karama

    2005-01-01

    The Internet is often used for transaction-based applications such as online banking, stock trading and shopping, where service interruptions or outages are unacceptable. Therefore, it is important for designers of such applications to analyze how hardware, software and performance-related failures affect the quality of service delivered to the users. This paper presents analytical models for evaluating the service availability of web cluster architectures. A composite performance and availability modeling approach is defined considering various causes of service unavailability. In particular, web cluster systems are modeled taking into account two error recovery strategies (client-transparent and non-client-transparent) as well as two traffic models (Poisson and modulated Poisson). Sensitivity analysis results are presented to show their impact on web service availability. The obtained results provide useful guidelines to web designers.
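
The composite performance-availability idea can be sketched by combining a classic two-state availability figure (from failure and repair rates) with a performance-related loss probability under Poisson traffic. The M/M/1/K queue and all numbers below are illustrative choices for the sketch, not the paper's (more detailed) cluster models.

```python
# Composite sketch: a request is served iff the server is up AND the
# request is not dropped by a full buffer under Poisson arrivals.

def steady_state_availability(mttf_h, mttr_h):
    """Classic two-state availability A = MTTF / (MTTF + MTTR)."""
    return mttf_h / (mttf_h + mttr_h)

def mm1k_loss(lam, mu, K):
    """M/M/1/K blocking probability: an arrival finds the buffer full."""
    rho = lam / mu
    if rho == 1.0:
        return 1.0 / (K + 1)
    return (1 - rho) * rho ** K / (1 - rho ** (K + 1))

A_hw = steady_state_availability(mttf_h=2000.0, mttr_h=2.0)
p_loss = mm1k_loss(lam=80.0, mu=100.0, K=20)  # requests/s, buffer of 20

# Service availability: up-time fraction times the acceptance probability.
A_service = A_hw * (1 - p_loss)
```

Even this toy version shows the paper's point: the user-perceived figure `A_service` is strictly below the hardware availability `A_hw`, and sensitivity studies can vary the traffic model or recovery strategy to see how far.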

  7. Integrating hydrologic modeling web services with online data sharing to prepare, store, and execute models in hydrology

    Science.gov (United States)

    Gan, T.; Tarboton, D. G.; Dash, P. K.; Gichamo, T.; Horsburgh, J. S.

    2017-12-01

    Web based apps, web services and online data and model sharing technology are becoming increasingly available to support research. This promises benefits in terms of collaboration, platform independence, transparency and reproducibility of modeling workflows and results. However, challenges still exist in real application of these capabilities and the programming skills researchers need to use them. In this research we combined hydrologic modeling web services with an online data and model sharing system to develop functionality to support reproducible hydrologic modeling work. We used HydroDS, a system that provides web services for input data preparation and execution of a snowmelt model, and HydroShare, a hydrologic information system that supports the sharing of hydrologic data, model and analysis tools. To make the web services easy to use, we developed a HydroShare app (based on the Tethys platform) to serve as a browser based user interface for HydroDS. In this integration, HydroDS receives web requests from the HydroShare app to process the data and execute the model. HydroShare supports storage and sharing of the results generated by HydroDS web services. The snowmelt modeling example served as a use case to test and evaluate this approach. We show that, after the integration, users can prepare model inputs or execute the model through the web user interface of the HydroShare app without writing program code. The model input/output files and metadata describing the model instance are stored and shared in HydroShare. These files include a Python script that is automatically generated by the HydroShare app to document and reproduce the model input preparation workflow. Once stored in HydroShare, inputs and results can be shared with other users, or published so that other users can directly discover, repeat or modify the modeling work. This approach provides a collaborative environment that integrates hydrologic web services with a data and model sharing

  8. Advancing the Implementation of Hydrologic Models as Web-based Applications

    Science.gov (United States)

    Dahal, P.; Tarboton, D. G.; Castronova, A. M.

    2017-12-01

    Advanced computer simulations are required to understand hydrologic phenomena such as rainfall-runoff response, groundwater hydrology, snow hydrology, etc. Building a hydrologic model instance to simulate a watershed requires investment in data (diverse geospatial datasets such as terrain, soil) and computer resources, typically demands a wide skill set from the analyst, and the workflow involved is often difficult to reproduce. This work introduces a web-based prototype infrastructure in the form of a web application that provides researchers with easy to use access to complete hydrological modeling functionality. This includes creating the necessary geospatial and forcing data, preparing input files for a model by applying complex data preprocessing, running the model for a user defined watershed, and saving the results to a web repository. The open source Tethys Platform was used to develop the web app front-end Graphical User Interface (GUI). We used HydroDS, a webservice that provides data preparation processing capability to support backend computations used by the app. Results are saved in HydroShare, a hydrologic information system that supports the sharing of hydrologic data, model and analysis tools. The TOPographic Kinematic APproximation and Integration (TOPKAPI) model served as the example for which we developed a complete hydrologic modeling service to demonstrate the approach. The final product is a complete modeling system accessible through the web to create input files, and run the TOPKAPI hydrologic model for a watershed of interest. We are investigating similar functionality for the preparation of input to the Regional Hydro-Ecological Simulation System (RHESSys). Key Words: hydrologic modeling, web services, hydrologic information system, HydroShare, HydroDS, Tethys Platform

  9. Myanmar Language Search Engine

    OpenAIRE

    Pann Yu Mon; Yoshiki Mikami

    2011-01-01

    With the enormous growth of the World Wide Web, search engines play a critical role in retrieving information from the borderless Web. Although many search engines are available for the major languages, they are not as proficient for less computerized languages, including Myanmar. The main reason is that those search engines do not consider the specific features of those languages. A search engine which is capable of searching Web documents written in those languages is highly n...

  10. Using Web-Based Knowledge Extraction Techniques to Support Cultural Modeling

    Science.gov (United States)

    Smart, Paul R.; Sieck, Winston R.; Shadbolt, Nigel R.

    The World Wide Web is a potentially valuable source of information about the cognitive characteristics of cultural groups. However, attempts to use the Web in the context of cultural modeling activities are hampered by the large-scale nature of the Web and the current dominance of natural language formats. In this paper, we outline an approach to support the exploitation of the Web for cultural modeling activities. The approach begins with the development of qualitative cultural models (which describe the beliefs, concepts and values of cultural groups), and these models are subsequently used to develop an ontology-based information extraction capability. Our approach represents an attempt to combine conventional approaches to information extraction with epidemiological perspectives of culture and network-based approaches to cultural analysis. The approach can be used, we suggest, to support the development of models providing a better understanding of the cognitive characteristics of particular cultural groups.

  11. Engine modeling and control modeling and electronic management of internal combustion engines

    CERN Document Server

    Isermann, Rolf

    2014-01-01

    The increasing demands for internal combustion engines with regard to fuel consumption, emissions and driveability lead to more actuators, sensors and complex control functions. A systematic implementation of the electronic control systems requires mathematical models from basic design through simulation to calibration. The book treats physically-based models as well as models based experimentally on test benches for gasoline (spark ignition) and diesel (compression ignition) engines, and uses them for the design of the different control functions. The main topics are: - Development steps for engine control - Stationary and dynamic experimental modeling - Physical models of intake, combustion, mechanical system, turbocharger, exhaust, cooling, lubrication, drive train - Engine control structures, hardware, software, actuators, sensors, fuel supply, injection system, camshaft - Engine control methods, static and dynamic feedforward and feedback control, calibration and optimization, HiL, RCP, control software developm...

  12. Intelligent Agent Based Semantic Web in Cloud Computing Environment

    OpenAIRE

    Mukhopadhyay, Debajyoti; Sharma, Manoj; Joshi, Gajanan; Pagare, Trupti; Palwe, Adarsha

    2013-01-01

    Considering today's web scenario, there is a need for effective and meaningful search over the web, which is provided by the Semantic Web. Existing search engines are keyword based. They are vulnerable in answering intelligent queries from the user due to the dependence of their results on the information available in web pages. Semantic search engines, by contrast, provide efficient and relevant results, as the Semantic Web is an extension of the current web in which information is given well-defined meaning....

  13. Web の探索行動と情報評価過程の分析

    OpenAIRE

    種市, 淳子; 逸村, 裕; TANEICHI, Junko; ITSUMURA, Hiroshi

    2005-01-01

    In this study, we discussed information seeking behavior on the Web. First, current Web-searching studies are reviewed from the perspective of: (1) Web-searching characteristics; (2) the process model for how users evaluate Web resources. Secondly, we investigated the information seeking processes of undergraduate students using a Web search engine and an online public access catalogue (OPAC) system, through an experiment and its protocol analysis. The results indicate that: (1) Web-searching p...

  14. MODEST: a web-based design tool for oligonucleotide-mediated genome engineering and recombineering

    DEFF Research Database (Denmark)

    Bonde, Mads; Klausen, Michael Schantz; Anderson, Mads Valdemar

    2014-01-01

    Recombineering and multiplex automated genome engineering (MAGE) offer the possibility to rapidly modify multiple genomic or plasmid sites at high efficiencies. This enables efficient creation of genetic variants including both single mutants with specifically targeted modifications as well......, which confers the corresponding genetic change, is performed manually. To address these challenges, we have developed the MAGE Oligo Design Tool (MODEST). This web-based tool allows designing of MAGE oligos for (i) tuning translation rates by modifying the ribosomal binding site, (ii) generating...

  15. A Taxonomic Search Engine: Federating taxonomic databases using web services

    Directory of Open Access Journals (Sweden)

    Page Roderic DM

    2005-03-01

    Full Text Available Abstract Background The taxonomic name of an organism is a key link between different databases that store information on that organism. However, in the absence of a single, comprehensive database of organism names, individual databases lack an easy means of checking the correctness of a name. Furthermore, the same organism may have more than one name, and the same name may apply to more than one organism. Results The Taxonomic Search Engine (TSE) is a web application written in PHP that queries multiple taxonomic databases (ITIS, Index Fungorum, IPNI, NCBI, and uBIO) and summarises the results in a consistent format. It supports "drill-down" queries to retrieve a specific record. The TSE can optionally suggest alternative spellings the user can try. It also acts as a Life Science Identifier (LSID) authority for the source taxonomic databases, providing globally unique identifiers (and associated metadata) for each name. Conclusion The Taxonomic Search Engine is available at http://darwin.zoology.gla.ac.uk/~rpage/portal/ and provides a simple demonstration of the potential of the federated approach to providing access to taxonomic names.

  16. Space Physics Data Facility Web Services

    Science.gov (United States)

    Candey, Robert M.; Harris, Bernard T.; Chimiak, Reine A.

    2005-01-01

    The Space Physics Data Facility (SPDF) Web services provides a distributed programming interface to a portion of the SPDF software. (A general description of Web services is available at http://www.w3.org/ and in many current software-engineering texts and articles focused on distributed programming.) The SPDF Web services distributed programming interface enables additional collaboration and integration of the SPDF software system with other software systems, in furtherance of the SPDF mission to lead collaborative efforts in the collection and utilization of space physics data and mathematical models. This programming interface conforms to all applicable Web services specifications of the World Wide Web Consortium. The interface is specified by a Web Services Description Language (WSDL) file. The SPDF Web services software consists of the following components: 1) A server program for implementation of the Web services; and 2) A software developer's kit that consists of a WSDL file, a less formal description of the interface, a Java class library (which further eases development of Java-based client software), and Java source code for an example client program that illustrates the use of the interface.

  17. An application of TOPSIS for ranking internet web browsers

    Directory of Open Access Journals (Sweden)

    Shahram Rostampour

    2012-07-01

    Full Text Available A web browser is one of the most important facilities for surfing the internet. A good web browser must incorporate literally tens of features such as an integrated search engine, automatic updates, etc. Each year, ten web browsers are formally ranked as the best by some organizations. In this paper, we propose the implementation of the TOPSIS technique to rank ten web browsers. The proposed model of this paper uses five criteria including speed, features, security, technical support and supported configurations. In terms of speed, Safari is the best web browser, followed by Google Chrome and Internet Explorer, while Opera is the best web browser when we look into 20 different features. We have also ranked these web browsers using all five categories together and the results indicate that Opera, Internet Explorer, Firefox and Google Chrome are the best web browsers to be chosen.
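
The TOPSIS procedure itself is short enough to sketch: vector-normalize the decision matrix, weight it, find the ideal best and worst alternatives, and rank by relative closeness to the ideal. The browser scores and weights below are invented for illustration and treat every criterion as a benefit criterion, which is a simplification of the paper's five-criteria setup.

```python
import math

def topsis(matrix, weights):
    """Rank alternatives (rows) on benefit criteria (columns) by TOPSIS."""
    n_alt, n_crit = len(matrix), len(matrix[0])
    # 1. Vector normalization per criterion, then apply weights.
    norms = [math.sqrt(sum(matrix[i][j] ** 2 for i in range(n_alt)))
             for j in range(n_crit)]
    v = [[weights[j] * matrix[i][j] / norms[j] for j in range(n_crit)]
         for i in range(n_alt)]
    # 2. Ideal best and worst solutions (all criteria are benefits here).
    best = [max(col) for col in zip(*v)]
    worst = [min(col) for col in zip(*v)]
    # 3. Relative closeness to the ideal solution.
    scores = []
    for row in v:
        d_best = math.dist(row, best)
        d_worst = math.dist(row, worst)
        scores.append(d_worst / (d_best + d_worst))
    return scores

# Hypothetical scores on three criteria: speed, features, security.
browsers = ["Opera", "Firefox", "Chrome"]
m = [[7.0, 9.0, 8.0],
     [6.0, 8.0, 9.0],
     [9.0, 7.0, 8.0]]
scores = topsis(m, weights=[0.4, 0.3, 0.3])
ranking = [b for _, b in sorted(zip(scores, browsers), reverse=True)]
```

Cost criteria (e.g. memory use) would need the ideal best/worst swapped per column; the sketch omits that to stay minimal.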

  18. Philosophical engineering toward a philosophy of the web

    CERN Document Server

    Halpin, Harry

    2013-01-01

    This is the first interdisciplinary exploration of the philosophical foundations of the Web, a new area of inquiry that has important implications across a range of domains. Contains twelve essays that bridge the fields of philosophy, cognitive science, and phenomenology. Tackles questions such as the impact of Google on intelligence and epistemology, the philosophical status of digital objects, ethics on the Web, semantic and ontological changes caused by the Web, and the potential of the Web to serve as a genuine cognitive extension. Brings together insightful new scholarship from well-known an

  19. Modelling Web-Based Instructional Systems

    NARCIS (Netherlands)

    Retalis, Symeon; Avgeriou, Paris

    2002-01-01

    The size and complexity of modern instructional systems, which are based on the World Wide Web, bring about great intricacy in their crafting, as there is not enough knowledge or experience in this field. This imposes the use of new instructional design models in order to achieve risk-mitigation,

  20. BioModels.net Web Services, a free and integrated toolkit for computational modelling software.

    Science.gov (United States)

    Li, Chen; Courtot, Mélanie; Le Novère, Nicolas; Laibe, Camille

    2010-05-01

    Exchanging and sharing scientific results are essential for researchers in the field of computational modelling. BioModels.net defines agreed-upon standards for model curation. A fundamental one, MIRIAM (Minimum Information Requested in the Annotation of Models), standardises the annotation and curation process of quantitative models in biology. To support this standard, MIRIAM Resources maintains a set of standard data types for annotating models, and provides services for manipulating these annotations. Furthermore, BioModels.net creates controlled vocabularies, such as SBO (Systems Biology Ontology) which strictly indexes, defines and links terms used in Systems Biology. Finally, BioModels Database provides a free, centralised, publicly accessible database for storing, searching and retrieving curated and annotated computational models. Each resource provides a web interface to submit, search, retrieve and display its data. In addition, the BioModels.net team provides a set of Web Services which allows the community to programmatically access the resources. A user is then able to perform remote queries, such as retrieving a model and resolving all its MIRIAM Annotations, as well as getting the details about the associated SBO terms. These web services use established standards. Communications rely on SOAP (Simple Object Access Protocol) messages and the available queries are described in a WSDL (Web Services Description Language) file. Several libraries are provided in order to simplify the development of client software. BioModels.net Web Services take researchers one step further toward simulating and understanding the entirety of a biological system, by allowing them to retrieve biological models in their own tools, combine queries in workflows and efficiently analyse models.

  1. Principles of models based engineering

    Energy Technology Data Exchange (ETDEWEB)

    Dolin, R.M.; Hefele, J.

    1996-11-01

    This report describes a Models Based Engineering (MBE) philosophy and implementation strategy that has been developed at Los Alamos National Laboratory's Center for Advanced Engineering Technology. A major theme in this discussion is that models based engineering is an information management technology enabling the development of information driven engineering. Unlike other information management technologies, models based engineering encompasses the breadth of engineering information, from design intent through product definition to consumer application.

  2. Promoting Your Web Site.

    Science.gov (United States)

    Raeder, Aggi

    1997-01-01

    Discussion of ways to promote sites on the World Wide Web focuses on how search engines work and how they retrieve and identify sites. Appropriate Web links for submitting new sites and for Internet marketing are included. (LRW)

  3. Advanced web services

    CERN Document Server

    Bouguettaya, Athman; Daniel, Florian

    2013-01-01

    Web services and Service-Oriented Computing (SOC) have become thriving areas of academic research, joint university/industry research projects, and novel IT products on the market. SOC is the computing paradigm that uses Web services as building blocks for the engineering of composite, distributed applications out of the reusable application logic encapsulated by Web services. Web services could be considered the best-known and most standardized technology in use today for distributed computing over the Internet. This book is the second installment of a two-book collection covering the state-o

  4. Engine Modelling for Control Applications

    DEFF Research Database (Denmark)

    Hendricks, Elbert

    1997-01-01

    In earlier work published by the author and co-authors, a dynamic engine model called a Mean Value Engine Model (MVEM) was developed. This model is physically based and is intended mainly for control applications. In its newer form, it is easy to fit to many different engines and requires little...... engine data for this purpose. It is especially well suited to embedded model applications in engine controllers, such as nonlinear observer based air/fuel ratio and advanced idle speed control. After a brief review of this model, it will be compared with other similar models which can be found...

  5. Development of Content Management System-based Web Applications

    OpenAIRE

    Souer, J.

    2012-01-01

    Web engineering is the application of systematic and quantifiable approaches (concepts, methods, techniques, tools) to cost-effective requirements analysis, design, implementation, testing, operation, and maintenance of high quality web applications. Over the past years, Content Management Systems (CMS) have emerged as an important foundation for the web engineering process. CMS can be defined as a tool for the creation, editing and management of web information in an integral way. A CMS appe...

  6. Contrasting Web Robot and Human Behaviors with Network Models

    OpenAIRE

    Brown, Kyle; Doran, Derek

    2018-01-01

    The web graph is a commonly-used network representation of the hyperlink structure of a website. A network of similar structure to the web graph, which we call the session graph, has properties that reflect the browsing habits of the agents in the web server logs. In this paper, we apply session graphs to compare the activity of humans against web robots or crawlers. Understanding these properties will enable us to improve models of HTTP traffic, which can be used to predict and generate reali...
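A session graph of the kind described above can be built directly from per-agent page sequences extracted from server logs. The sketch below uses invented sample data and plain dictionaries; the structural comparison (distinct successors per page) is just one simple property that tends to separate focused human browsing from sweeping crawler behaviour.

```python
from collections import defaultdict

# Sketch (with invented sample sessions): build a "session graph" from
# per-agent page sequences, then compare a simple structural property of
# human vs. robot sessions.
def session_graph(sessions):
    """Directed graph as (src, dst) edge -> traversal count."""
    edges = defaultdict(int)
    for pages in sessions:
        for src, dst in zip(pages, pages[1:]):
            edges[(src, dst)] += 1
    return edges

def mean_out_degree(edges):
    """Average number of distinct successor pages per visited page."""
    out = defaultdict(set)
    for (src, dst) in edges:
        out[src].add(dst)
    return sum(len(v) for v in out.values()) / len(out)

human = [["/", "/about", "/news"], ["/", "/news", "/about"]]
robot = [["/", "/a", "/b", "/c", "/d"]]  # crawler sweeping pages in order
```

A crawler's linear sweep yields a chain-like graph (out-degree 1), while revisiting humans create branching structure.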

  7. Incorporating the surfing behavior of web users into PageRank

    OpenAIRE

    Ashyralyyev, Shatlyk

    2013-01-01

    Thesis (Master's), Bilkent University, Department of Computer Engineering and Graduate School of Engineering and Science, Ankara, 2013. Includes bibliographical references (leaves 68-73). One of the most crucial factors that determines the effectiveness of a large-scale commercial web search engine is the ranking (i.e., order) in which web search results are presented to the end user. In modern web search engines, the skeleton for the rank...
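The idea of folding surfing behaviour into PageRank can be illustrated with the standard power-iteration formulation, where the uniform teleportation vector is replaced by a distribution estimated from browsing logs. The graph and the behaviour vector below are invented for the example; this is a generic personalized-PageRank sketch, not the thesis's actual method.

```python
# Generic personalized PageRank sketch: the teleport vector stands in for
# user surfing behaviour estimated from browsing logs (values invented).
def pagerank(links, teleport, damping=0.85, iters=100):
    """Power iteration with a non-uniform teleport distribution."""
    nodes = list(teleport)
    rank = dict(teleport)  # start from the teleport distribution
    for _ in range(iters):
        new = {n: (1 - damping) * teleport[n] for n in nodes}
        for src, outs in links.items():
            if outs:
                share = damping * rank[src] / len(outs)
                for dst in outs:
                    new[dst] += share
            else:  # dangling node: redistribute mass via the teleport vector
                for n in nodes:
                    new[n] += damping * rank[src] * teleport[n]
        rank = new
    return rank

links = {"a": ["b"], "b": ["a", "c"], "c": []}      # toy web graph
teleport = {"a": 0.6, "b": 0.3, "c": 0.1}           # invented behaviour estimate
ranks = pagerank(links, teleport)
```

Because total probability mass is conserved at every step, the result remains a distribution, and pages favoured by the behaviour vector accumulate more rank than the uniform-teleport baseline would give them.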

  8. Model-Based Systems Engineering in Concurrent Engineering Centers

    Science.gov (United States)

    Iwata, Curtis; Infeld, Samantha; Bracken, Jennifer Medlin; McGuire, Melissa; McQuirk, Christina; Kisdi, Aron; Murphy, Jonathan; Cole, Bjorn; Zarifian, Pezhman

    2015-01-01

    Concurrent Engineering Centers (CECs) are specialized facilities with a goal of generating and maturing engineering designs by enabling rapid design iterations. This is accomplished by co-locating a team of experts (either physically or virtually) in a room with a narrow design goal and a limited timeline of a week or less. The systems engineer uses a model of the system to capture the relevant interfaces and manage the overall architecture. A single model that integrates other design information and modeling allows the entire team to visualize the concurrent activity and identify conflicts more efficiently, potentially resulting in a systems model that will continue to be used throughout the project lifecycle. Performing systems engineering using such a system model is the definition of model-based systems engineering (MBSE); therefore, CECs evolving their approach to incorporate advances in MBSE are more successful in reducing time and cost needed to meet study goals. This paper surveys space mission CECs that are in the middle of this evolution, and the authors share their experiences in order to promote discussion within the community.

  9. Characteristics of scientific web publications

    DEFF Research Database (Denmark)

    Thorlund Jepsen, Erik; Seiden, Piet; Ingwersen, Peter Emil Rerup

    2004-01-01

    were generated based on specifically selected domain topics that are searched for in three publicly accessible search engines (Google, AllTheWeb, and AltaVista). A sample of the retrieved hits was analyzed with regard to how various publication attributes correlated with the scientific quality...... of the content and whether this information could be employed to harvest, filter, and rank Web publications. The attributes analyzed were inlinks, outlinks, bibliographic references, file format, language, search engine overlap, structural position (according to site structure), and the occurrence of various...... types of metadata. As could be expected, the ranked output differs between the three search engines. Apparently, this is caused by differences in ranking algorithms rather than the databases themselves. In fact, because scientific Web content in this subject domain receives few inlinks, both Alta...

  10. Interactive WebGL-based 3D visualizations for EAST experiment

    International Nuclear Information System (INIS)

    Xia, J.Y.; Xiao, B.J.; Li, Dan; Wang, K.R.

    2016-01-01

    Highlights: • Developing a user-friendly interface to visualize the EAST experimental data and the device is important to scientists and engineers. • The Web3D visualization system is based on HTML5 and WebGL, which runs without the need for plug-ins or third party components. • The interactive WebGL-based 3D visualization system is a web-portal integrating EAST 3D models, experimental data and plasma videos. • The original CAD model was discretized into different layers at different levels of simplification to enable realistic rendering and improve performance. - Abstract: In recent years EAST (Experimental Advanced Superconducting Tokamak) experimental data are being shared and analyzed by an increasing number of international collaborators. Developing a user-friendly interface to visualize the data, metadata and the relevant parts of the device is becoming more and more important to aid scientists and engineers. Compared with the previous virtual EAST system based on VRML/Java3D [1] (Li et al., 2014), a new technology is being adopted to create a 3D visualization system based on HTML5 and WebGL, which runs without the need for plug-ins or third party components. The interactive WebGL-based 3D visualization system is a web-portal integrating EAST 3D models, experimental data and plasma videos. It offers a highly interactive interface allowing scientists to roam inside the EAST device and view the complex 3-D structure of the machine. It includes technical details of the device and various diagnostic components, and provides visualization of diagnostic metadata with a direct link to each signal name and its stored data. In order to enable quick access to the device 3D model, the original CAD model was discretized into different layers at different levels of simplification. The system also allows users to search for plasma videos in any experiment and analyze the video frame by frame. In this paper, we present the implementation details to enable realistic rendering and improve performance.

  11. Interactive WebGL-based 3D visualizations for EAST experiment

    Energy Technology Data Exchange (ETDEWEB)

    Xia, J.Y., E-mail: jyxia@ipp.ac.cn [Institute of Plasma Physics, Chinese Academy of Sciences, Hefei, Anhui (China); University of Science and Technology of China, Hefei, Anhui (China); Xiao, B.J. [Institute of Plasma Physics, Chinese Academy of Sciences, Hefei, Anhui (China); University of Science and Technology of China, Hefei, Anhui (China); Li, Dan [Institute of Plasma Physics, Chinese Academy of Sciences, Hefei, Anhui (China); Wang, K.R. [Institute of Plasma Physics, Chinese Academy of Sciences, Hefei, Anhui (China); University of Science and Technology of China, Hefei, Anhui (China)

    2016-11-15

    Highlights: • Developing a user-friendly interface to visualize the EAST experimental data and the device is important to scientists and engineers. • The Web3D visualization system is based on HTML5 and WebGL, which runs without the need for plug-ins or third party components. • The interactive WebGL-based 3D visualization system is a web-portal integrating EAST 3D models, experimental data and plasma videos. • The original CAD model was discretized into different layers at different levels of simplification to enable realistic rendering and improve performance. - Abstract: In recent years EAST (Experimental Advanced Superconducting Tokamak) experimental data are being shared and analyzed by an increasing number of international collaborators. Developing a user-friendly interface to visualize the data, metadata and the relevant parts of the device is becoming more and more important to aid scientists and engineers. Compared with the previous virtual EAST system based on VRML/Java3D [1] (Li et al., 2014), a new technology is being adopted to create a 3D visualization system based on HTML5 and WebGL, which runs without the need for plug-ins or third party components. The interactive WebGL-based 3D visualization system is a web-portal integrating EAST 3D models, experimental data and plasma videos. It offers a highly interactive interface allowing scientists to roam inside the EAST device and view the complex 3-D structure of the machine. It includes technical details of the device and various diagnostic components, and provides visualization of diagnostic metadata with a direct link to each signal name and its stored data. In order to enable quick access to the device 3D model, the original CAD model was discretized into different layers at different levels of simplification. The system also allows users to search for plasma videos in any experiment and analyze the video frame by frame. In this paper, we present the implementation details to enable realistic rendering and improve performance.

  12. Semantic Web status model

    CSIR Research Space (South Africa)

    Gerber, AJ

    2006-06-01

    Full Text Available Semantic Web application areas are experiencing intensified interest due to the rapid growth in the use of the Web, together with the innovation and renovation of information content technologies. The Semantic Web is regarded as an integrator across...

  13. The poor quality of information about laparoscopy on the World Wide Web as indexed by popular search engines.

    Science.gov (United States)

    Allen, J W; Finch, R J; Coleman, M G; Nathanson, L K; O'Rourke, N A; Fielding, G A

    2002-01-01

    This study was undertaken to determine the quality of information on the Internet regarding laparoscopy. Four popular World Wide Web search engines were used with the key word "laparoscopy." Advertisements, patient- or physician-directed information, and controversial material were noted. A total of 14,030 Web pages were found, but only 104 were unique Web sites. The majority of the sites were duplicate pages, subpages within a main Web page, or dead links. Twenty-eight of the 104 pages had a medical product for sale, 26 were patient-directed, 23 were written by a physician or group of physicians, and six represented corporations. The remaining 21 were "miscellaneous." The 46 pages containing educational material were critically reviewed. At least one of the senior authors found that 32 of the pages contained controversial or misleading statements. All of the three senior authors (LKN, NAO, GAF) independently agreed that 17 of the 46 pages contained controversial information. The World Wide Web is not a reliable source for patient or physician information about laparoscopy. Authenticating medical information on the World Wide Web is a difficult task, and no government or surgical society has taken the lead in regulating what is presented as fact on the World Wide Web.

  14. Extracting Macroscopic Information from Web Links.

    Science.gov (United States)

    Thelwall, Mike

    2001-01-01

    Discussion of Web-based link analysis focuses on an evaluation of Ingwersen's proposed external Web Impact Factor for the original use of the Web, namely the interlinking of academic research. Studies relationships between academic hyperlinks and research activities for British universities and discusses the use of search engines for Web link…

  15. A Semantic Web management model for integrative biomedical informatics.

    Directory of Open Access Journals (Sweden)

    Helena F Deus

    2008-08-01

    Full Text Available Data, data everywhere. The diversity and magnitude of the data generated in the Life Sciences defies automated articulation among complementary efforts. The additional need in this field for managing property and access permissions compounds the difficulty very significantly. This is particularly the case when the integration involves multiple domains and disciplines, even more so when it includes clinical and high-throughput molecular data. The emergence of Semantic Web technologies brings the promise of meaningful interoperation between data and analysis resources. In this report we identify a core model for biomedical Knowledge Engineering applications and demonstrate how this new technology can be used to weave a management model where multiple intertwined data structures can be hosted and managed by multiple authorities in a distributed management infrastructure. Specifically, the demonstration is performed by linking data sources associated with the Lung Cancer SPORE awarded to The University of Texas MD Anderson Cancer Center at Houston and the Southwestern Medical Center at Dallas. A software prototype, available with open source at www.s3db.org, was developed and its proposed design has been made publicly available as an open source instrument for shared, distributed data management. Semantic Web technologies have the potential to address the need for distributed and evolvable representations that are critical for systems biology and translational biomedical research. As this technology is incorporated into application development we can expect that both general purpose productivity software and domain specific software installed on our personal computers will become increasingly integrated with the relevant remote resources. In this scenario, the acquisition of a new dataset should automatically trigger the delegation of its analysis.

  16. Web Viz 2.0: A versatile suite of tools for collaboration and visualization

    Science.gov (United States)

    Spencer, C.; Yuen, D. A.

    2012-12-01

    Most scientific applications on the web fail to realize the full collaborative potential of the internet by not utilizing web 2.0 technology. To relieve users from the struggle with software tools and allow them to focus on their research, new software developed for scientists and researchers must harness the full suite of web technology. For several years WebViz 1.0 enabled researchers with any web accessible device to interact with the peta-scale data generated by the Hierarchical Volume Renderer (HVR) system. We have developed a new iteration of WebViz that can be easily interfaced with many problem domains in addition to HVR by employing the best practices of software engineering and object-oriented programming. This is done by separating the core WebViz system from domain specific code at an interface, leveraging inheritance and polymorphism to allow newly developed modules access to the core services. We employed several design patterns (model-view-controller, singleton, observer, and application controller) to engineer this highly modular system implemented in Java.
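The core/domain-module separation the abstract describes, with inheritance and polymorphism giving new modules access to core services, can be sketched as follows. All class and method names here are invented for illustration; the real WebViz system is written in Java.

```python
from abc import ABC, abstractmethod

# Sketch of the core vs. domain-specific split described above (names are
# invented; the real WebViz system is a Java implementation).
class DomainModule(ABC):
    """Interface through which the core serves any problem domain."""
    @abstractmethod
    def render(self, request: dict) -> str: ...

class WebVizCore:
    """Core system: registers domain modules and dispatches requests to
    them polymorphically (an application-controller style of design)."""
    def __init__(self):
        self._modules = {}

    def register(self, name: str, module: DomainModule) -> None:
        self._modules[name] = module

    def handle(self, name: str, request: dict) -> str:
        return self._modules[name].render(request)

class VolumeRenderer(DomainModule):  # e.g. an HVR-like domain module
    def render(self, request):
        return f"volume view of {request['dataset']}"

core = WebVizCore()
core.register("hvr", VolumeRenderer())
out = core.handle("hvr", {"dataset": "mantle-convection"})
```

New problem domains plug in by subclassing `DomainModule`, so the core never needs to change when a domain is added.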

  17. A Distributed Web-based Solution for Ionospheric Model Real-time Management, Monitoring, and Short-term Prediction

    Science.gov (United States)

    Kulchitsky, A.; Maurits, S.; Watkins, B.

    2006-12-01

    With the widespread availability of the Internet today, many people can monitor various scientific research activities. It is important to accommodate this interest by providing on-line access to dynamic and illustrative Web resources which can demonstrate different aspects of ongoing research. It is especially important to explain these research activities to high school and undergraduate students, thereby providing more information for making decisions concerning their future studies. Such Web resources are also important for clarifying scientific research to the general public, in order to achieve better awareness of research progress in various fields. Particularly rewarding is dissemination of information about ongoing projects within universities and research centers to their local communities. The benefits of this type of scientific outreach are mutual, since the development of Web-based automatic systems is a prerequisite for many research projects targeting real-time monitoring and/or modeling of natural conditions. Continuous operation of such systems provides ongoing research opportunities for statistically massive validation of the models as well. We have developed a Web-based system to run the University of Alaska Fairbanks Polar Ionospheric Model in real-time. This model makes use of networking and computational resources at the Arctic Region Supercomputing Center. The system was designed to be portable among various operating systems and computational resources; its components can be installed across different computers, separating Web servers and computational engines. The core of the system is a Real-Time Management module (RMM) written in Python, which coordinates remote input data transfers, the ionospheric model runs, MySQL database filling, and PHP scripts for the Web-page preparations. The RMM downloads current geophysical inputs as soon as they become available at different on-line repositories.
This information is processed to

  18. Sharing and reusing cardiovascular anatomical models over the Web: a step towards the implementation of the virtual physiological human project.

    Science.gov (United States)

    Gianni, Daniele; McKeever, Steve; Yu, Tommy; Britten, Randall; Delingette, Hervé; Frangi, Alejandro; Hunter, Peter; Smith, Nicolas

    2010-06-28

    Sharing and reusing anatomical models over the Web offers a significant opportunity to progress the investigation of cardiovascular diseases. However, the current sharing methodology suffers from the limitations of static model delivery (i.e. embedding static links to the models within Web pages) and of a disaggregated view of the model metadata produced by publications and cardiac simulations in isolation. In the context of euHeart--a research project targeting the description and representation of cardiovascular models for disease diagnosis and treatment purposes--we aim to overcome the above limitations with the introduction of euHeartDB, a Web-enabled database for anatomical models of the heart. The database implements a dynamic sharing methodology by managing data access and by tracing all applications. In addition to this, euHeartDB establishes a knowledge link with the physiome model repository by linking geometries to CellML models embedded in the simulation of cardiac behaviour. Furthermore, euHeartDB uses the exFormat--a preliminary version of the interoperable FieldML data format--to effectively promote reuse of anatomical models, and currently incorporates Continuum Mechanics, Image Analysis, Signal Processing and System Identification Graphical User Interface (CMGUI), a rendering engine, to provide three-dimensional graphical views of the models populating the database. Currently, euHeartDB stores 11 cardiac geometries developed within the euHeart project consortium.

  19. Modelling Safe Interface Interactions in Web Applications

    Science.gov (United States)

    Brambilla, Marco; Cabot, Jordi; Grossniklaus, Michael

    Current Web applications embed sophisticated user interfaces and business logic. The original interaction paradigm of the Web based on static content pages that are browsed by hyperlinks is, therefore, not valid anymore. In this paper, we advocate a paradigm shift for browsers and Web applications, that improves the management of user interaction and browsing history. Pages are replaced by States as basic navigation nodes, and Back/Forward navigation along the browsing history is replaced by a full-fledged interactive application paradigm, supporting transactions at the interface level and featuring Undo/Redo capabilities. This new paradigm offers a safer and more precise interaction model, protecting the user from unexpected behaviours of the applications and the browser.

  20. Using Google App Engine

    CERN Document Server

    Severance, Charles

    2009-01-01

    Build exciting, scalable web applications quickly and confidently using Google App Engine and this book, even if you have little or no experience in programming or web development. App Engine is perhaps the most appealing web technology to appear in the last year, providing an easy-to-use application framework with basic web tools. While Google's own tutorial assumes significant experience, Using Google App Engine will help anyone get started with this platform. By the end of this book, you'll know how to build complete, interactive applications and deploy them to the cloud using the same s

  1. Survey of Techniques for Deep Web Source Selection and Surfacing the Hidden Web Content

    OpenAIRE

    Khushboo Khurana; M.B. Chandak

    2016-01-01

    Large and continuously growing dynamic web content has created new opportunities for large-scale data analysis in recent years. There is a huge amount of information that traditional web crawlers cannot access, since they use link analysis techniques by which only the surface web can be accessed. Traditional search engine crawlers require web pages to be linked to other pages via hyperlinks, causing a large amount of web data to be hidden from the crawlers. Enormous data is available in...

  2. A continuum membrane model for small deformations of a spider orb-web

    Science.gov (United States)

    Morassi, Antonino; Soler, Alejandro; Zaera, Ramón

    2017-09-01

    In this paper we propose a continuum membrane model for the infinitesimal deformation of a spider web. The model is derived in the simple context of axially-symmetric webs formed by radial threads connected with circumferential threads belonging to concentric circles. Under suitable assumptions on the tensile pre-stress acting in the referential configuration, the out-of-plane static equilibrium and the free transverse and in-plane vibrations of a supported circular orb-web are studied in detail. The accuracy of the model in describing a discrete spider web is numerically investigated.
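In broad terms, a continuum limit of this kind leads to the classical equation for transverse motion of a pre-stressed membrane. The axially symmetric form below is a generic sketch written from that standard theory, not the paper's exact system; $T$ denotes an assumed isotropic tensile pre-stress per unit length and $\rho_s$ the surface mass density.

```latex
% Transverse motion w(r, \theta, t) of a pre-stressed supported membrane
% (generic form; T and \rho_s are assumed constants for the sketch):
\rho_s \, \frac{\partial^2 w}{\partial t^2}
  = T \left( \frac{\partial^2 w}{\partial r^2}
      + \frac{1}{r}\frac{\partial w}{\partial r}
      + \frac{1}{r^2}\frac{\partial^2 w}{\partial \theta^2} \right),
\qquad w = 0 \ \text{on the supported boundary}.
```

Separable solutions of this equation give the familiar Bessel-function mode shapes of a circular membrane, which is the natural benchmark for the discrete-web comparison mentioned in the abstract.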

  3. Utilization of two web-based continuing education courses evaluated by Markov chain model.

    Science.gov (United States)

    Tian, Hao; Lin, Jin-Mann S; Reeves, William C

    2012-01-01

    To evaluate the web structure of two web-based continuing education courses, identify problems and assess the effects of web site modifications. Markov chain models were built from 2008 web usage data to evaluate the courses' web structure and navigation patterns. The web site was then modified to resolve identified design issues and the improvement in user activity over the subsequent 12 months was quantitatively evaluated. Web navigation paths were collected between 2008 and 2010. The probability of navigating from one web page to another was analyzed. The continuing education courses' sequential structure design was clearly reflected in the resulting actual web usage models, and none of the skip transitions provided was heavily used. The web navigation patterns of the two different continuing education courses were similar. Two possible design flaws were identified and fixed in only one of the two courses. Over the following 12 months, the drop-out rate in the modified course significantly decreased from 41% to 35%, but remained unchanged in the unmodified course. The web improvement effects were further verified via a second-order Markov chain model. The results imply that differences in web content have less impact than web structure design on how learners navigate through continuing education courses. Evaluation of user navigation can help identify web design flaws and guide modifications. This study showed that Markov chain models provide a valuable tool to evaluate web-based education courses. Both the results and techniques in this study would be very useful for public health education and research specialists.
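The first-order Markov chain model described above amounts to estimating transition probabilities between course pages from recorded click paths. The sketch below uses invented navigation paths and page names; a drop-out shows up as a transition into an `exit` state.

```python
from collections import Counter, defaultdict

# Sketch (invented paths): estimate a first-order Markov chain over course
# pages from recorded navigation paths, as in the evaluation described above.
def transition_probs(paths):
    """Return {page: {next_page: probability}} estimated from click paths."""
    counts = defaultdict(Counter)
    for path in paths:
        for src, dst in zip(path, path[1:]):
            counts[src][dst] += 1
    return {src: {dst: n / sum(c.values()) for dst, n in c.items()}
            for src, c in counts.items()}

paths = [
    ["intro", "lesson1", "lesson2", "quiz"],
    ["intro", "lesson1", "exit"],           # a drop-out path
    ["intro", "lesson1", "lesson2", "exit"],
]
P = transition_probs(paths)
drop_out_after_lesson1 = P["lesson1"]["exit"]  # 1 of 3 observed transitions
```

Comparing such estimates before and after a site modification is exactly the kind of quantitative check the study applied; a second-order chain would simply condition on the previous two pages instead of one.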

  4. WEB APPLICATION TO MANAGE DOCUMENTS USING THE GOOGLE WEB TOOLKIT AND APP ENGINE TECHNOLOGIES

    Directory of Open Access Journals (Sweden)

    Velázquez Santana Eugenio César

    2017-12-01

    Full Text Available The application of new information technologies such as Google Web Toolkit and App Engine is making a difference in the academic management of Higher Education Institutions (HEIs), which seek to streamline their processes as well as reduce infrastructure costs. However, they encounter problems with regard to acquisition costs, the infrastructure necessary for their use, and the maintenance of the software. It is for this reason that the present research aims to describe the application of these new technologies in HEIs, to identify their advantages and disadvantages, and to establish the key success factors in their implementation. As the software development methodology, SCRUM was used, along with PMBOK as a project management tool. The main results relate to the application of these technologies in the development of customized software for teachers, students and administrators, as well as the weaknesses and strengths of using them in the cloud. It was also possible to describe the paradigm shift that data warehouses are generating with respect to today's relational databases.

  5. Mean Value Engine Modelling of an SI Engine with EGR

    DEFF Research Database (Denmark)

    Føns, Michael; Müller, Martin; Chevalier, Alain

    1999-01-01

    Mean Value Engine Models (MVEMs) are simplified, dynamic engine models that are physically based. Such models are useful for control studies, for engine control system analysis and for model based engine control systems. Very few published MVEMs have included the effects of Exhaust Gas...... Recirculation (EGR). The purpose of this paper is to present a modified MVEM which includes EGR in a physical way. It has been tested using newly developed, very fast manifold pressure, manifold temperature, port and EGR mass flow sensors. Reasonable agreement has been obtained on an experimental engine...
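The physical way EGR enters a mean value model can be illustrated with the standard isothermal manifold filling dynamics, where recirculated exhaust simply adds a mass flow term to the manifold pressure state equation. This is a generic textbook-style sketch with assumed numerical values, not the paper's actual model or data.

```python
# Generic mean-value sketch (not the paper's exact model): isothermal intake
# manifold filling dynamics, with EGR entering as an extra mass flow term.
#   dp/dt = (R * T / V) * (mdot_throttle + mdot_egr - mdot_port)
R, T, V = 287.0, 330.0, 0.004  # J/(kg K), K, m^3 -- assumed values

def manifold_pressure_step(p, mdot_throttle, mdot_egr, mdot_port, dt):
    """One explicit-Euler step of the manifold pressure state [Pa];
    mass flows in kg/s, dt in seconds."""
    dpdt = (R * T) / V * (mdot_throttle + mdot_egr - mdot_port)
    return p + dt * dpdt

# A net inflow of 1 g/s (here, the EGR flow) raises manifold pressure:
p1 = manifold_pressure_step(p=50_000.0, mdot_throttle=0.010,
                            mdot_egr=0.001, mdot_port=0.010, dt=0.001)
```

In a full MVEM this ODE is coupled to throttle, port flow and temperature sub-models; the point here is only that EGR appears as one more physically meaningful mass flow into the manifold balance.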

  6. WAsP engineering 2000

    DEFF Research Database (Denmark)

    Mann, J.; Ott, Søren; Jørgensen, B.H.

    2002-01-01

    This report summarizes the findings of the EFP project WAsP Engineering Version 2000. The main product of this project is the computer program WAsP Engineering, which is used for the estimation of extreme wind speeds, wind shears, profiles, and turbulence in complex terrain. At the web page http://www.waspengineering.dk more information about the program can be obtained and a copy of the manual can be downloaded. The report contains a complete description of the turbulence modelling in moderately complex terrain implemented in WAsP Engineering. Experimental validation of the model, together with comparison...... with spectra from engineering codes, is also done. Some shortcomings of the linear flow model LINCOM, which is at the core of WAsP Engineering, are pointed out and modifications to eliminate the problem are presented. The global database of meteorological "reanalysis" data from NCEP/NCAR is used to estimate...

  7. Engineering graphic modelling a workbook for design engineers

    CERN Document Server

    Tjalve, E; Frackmann Schmidt, F

    2013-01-01

    Engineering Graphic Modelling: A Practical Guide to Drawing and Design covers how engineering drawing relates to the design activity. The book describes modeled properties, such as the function, structure, form, material, dimension, and surface, as well as the coordinates, symbols, and types of projection of the drawing code. The text provides drawing techniques, such as freehand sketching, bold freehand drawing, drawing with a straightedge, a draughting machine or a plotter, and use of templates, and then describes the types of drawing. Graphic designers, design engineers, mechanical engine

  8. RESTful web services with Dropwizard

    CERN Document Server

    Dallas, Alexandros

    2014-01-01

    A hands-on focused step-by-step tutorial to help you create Web Service applications using Dropwizard. If you are a software engineer or a web developer and want to learn more about building your own Web Service application, then this is the book for you. Basic knowledge of Java and RESTful Web Service concepts is assumed and familiarity with SQL/MySQL and command-line scripting would be helpful.

  9. Changes in users' Web search performance after ten years ...

    African Journals Online (AJOL)

    The changes in users' Web search performance using search engines over ten years were investigated in this study. Matched data obtained from samples in 2000 and 2010 were used for the comparative analysis. The patterns of Web search engine use suggested the dominance of a particular search engine. Statistical ...

  10. Customer Decision Making in Web Services with an Integrated P6 Model

    Science.gov (United States)

    Sun, Zhaohao; Sun, Junqing; Meredith, Grant

    Customer decision making (CDM) is an indispensable factor for web services. This article examines CDM in web services with a novel P6 model, which consists of the 6 Ps: privacy, perception, propensity, preference, personalization and promised experience. This model integrates the existing 6 P elements of the marketing mix as the system environment of CDM in web services. The new integrated P6 model deals with the inner world of the customer and incorporates what the customer thinks during the decision-making process. The proposed approach will facilitate the research and development of web services and decision support systems.

  11. Modelling of web-based virtual university administration for Nigerian ...

    African Journals Online (AJOL)

    This research work focused on the development of a model of web-based virtual university administration for Nigerian universities. This is necessary, as there is still a noticeable administrative constraint in our universities, the establishment of many university Web portals notwithstanding. More efforts are therefore needed to ...

  12. Climate Model Diagnostic Analyzer Web Service System

    Science.gov (United States)

    Lee, S.; Pan, L.; Zhai, C.; Tang, B.; Kubar, T. L.; Li, J.; Zhang, J.; Wang, W.

    2015-12-01

    Both the National Research Council Decadal Survey and the latest Intergovernmental Panel on Climate Change Assessment Report stressed the need for comprehensive and innovative evaluation of climate models, with the synergistic use of global satellite observations, in order to improve our weather and climate simulation and prediction capabilities. The abundance of satellite observations for fundamental climate parameters and the availability of coordinated model outputs from CMIP5 for the same parameters offer a great opportunity to understand and diagnose model biases in climate models. In addition, the Obs4MIPs efforts have created several key global observational datasets that are readily usable for model evaluations. However, a model diagnostic evaluation process requires physics-based multi-variable comparisons that typically involve large-volume and heterogeneous datasets, making them both computationally- and data-intensive. In response, we have developed a novel methodology to diagnose model biases in contemporary climate models and have implemented it as a web-service based, cloud-enabled, provenance-supported climate-model evaluation system. The evaluation system is named Climate Model Diagnostic Analyzer (CMDA), which is the product of the research and technology development investments of several current and past NASA ROSES programs. The current technologies and infrastructure of CMDA are designed and selected to address several technical challenges that the Earth science modeling and model analysis community faces in evaluating and diagnosing climate models. In particular, we have three key technology components: (1) diagnostic analysis methodology; (2) web-service based, cloud-enabled technology; (3) provenance-supported technology. The diagnostic analysis methodology includes random forest feature importance ranking, conditional probability distribution function, conditional sampling, and time-lagged correlation map.
We have implemented the
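As an illustrative aside, the time-lagged correlation diagnostic listed above can be sketched for plain one-dimensional series. This is only a hedged sketch over invented data, not CMDA's implementation, which operates on gridded satellite and model fields:

```python
def lagged_correlation(x, y, lag):
    """Pearson correlation between x[t] and y[t + lag]."""
    if lag > 0:
        x, y = x[:-lag], y[lag:]
    elif lag < 0:
        x, y = x[-lag:], y[:lag]
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x) ** 0.5
    vy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (vx * vy)

# Invented example: y lags x by two steps, so the correlation peaks at lag = 2
x = [0, 1, 2, 3, 2, 1, 0, 1, 2, 3]
y = x[-2:] + x[:-2]          # x shifted right by two positions
best = max(range(-3, 4), key=lambda k: lagged_correlation(x, y, k))
```

Scanning a range of lags and keeping the maximizing lag would correspond to one cell of a time-lagged correlation map.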

  13. Modelo de web semántica para universidades

    Directory of Open Access Journals (Sweden)

    Karla Abad

    2015-12-01

    Full Text Available Following a study of the current state of the microsites and repositories of the Universidad Estatal Península de Santa Elena, it was found that their information lacked optimal and appropriate semantics. Under these circumstances, the need arises to create a semantic web structure model for universities, which was subsequently applied to UPSE's microsites and digital repository as a test case. Part of this project includes the installation of software modules with their respective configurations and the use of metadata standards such as DUBLIN CORE to improve SEO (search engine optimization); this has made it possible to generate standardized metadata and to create policies for uploading information. The use of metadata transforms simple data into well-organized structures that provide information and knowledge to generate results in web search engines. With the implementation of the semantic web model complete, it can be said that the university has improved its presence and visibility on the web through the indexing of its information in different search engines and its position in the Webometrics categorization of universities and repositories (a ranking that classifies universities around the world).

  14. Modeling user navigation behavior in web by colored Petri nets to determine the user's interest in recommending web pages

    Directory of Open Access Journals (Sweden)

    Mehdi Sadeghzadeh

    2013-01-01

    Full Text Available One of the existing challenges in web personalization is increasing the efficiency with which a web site meets users' requirements for the content they need. All the information associated with the current user's browsing behavior, together with data obtained from previous users' interactions with the web, can provide the keys needed to recommend services, products, and the information users require. This study presents a formal model based on colored Petri nets to identify the current user's interest, which is then used to recommend the most appropriate pages ahead. In the proposed design, page recommendation takes into account information obtained from previous users' profiles as well as the current session of the present user. The model updates the proposed pages as the user clicks through the web pages. Moreover, an example web site is modeled using CPN Tools. The simulation results show that this design improves precision: the evaluation indicates that the results of this method are more objective and that the dynamic recommendations improve the precision criterion by 15% over the static method.
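The underlying idea, recommending next pages from previous users' sessions, can be illustrated without Petri nets by a minimal first-order transition-count model (the page names and sessions below are invented, and this is a sketch of the general technique, not the paper's model):

```python
from collections import Counter, defaultdict

def build_model(sessions):
    """Count page-to-page transitions observed in past user sessions."""
    model = defaultdict(Counter)
    for session in sessions:
        for current, nxt in zip(session, session[1:]):
            model[current][nxt] += 1
    return model

def recommend(model, current_page, k=2):
    """Return the k pages most often visited right after current_page."""
    return [page for page, _ in model[current_page].most_common(k)]

sessions = [
    ["home", "products", "cart"],
    ["home", "products", "reviews"],
    ["home", "about"],
    ["products", "cart", "checkout"],
]
model = build_model(sessions)
```

A Petri-net formulation additionally models the current session's state; this sketch only captures the frequency component.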

  15. Publicizing Your Web Resources for Maximum Exposure.

    Science.gov (United States)

    Smith, Kerry J.

    2001-01-01

    Offers advice to librarians for marketing their Web sites on Internet search engines. Advises against relying solely on spiders and recommends adding metadata to the source code and delivering that information directly to the search engines. Gives an overview of metadata and typical coding for meta tags. Includes Web addresses for a number of…

  16. A Topological Framework for Interactive Queries on 3D Models in the Web

    Science.gov (United States)

    Figueiredo, Mauro; Rodrigues, José I.; Silvestre, Ivo; Veiga-Pires, Cristina

    2014-01-01

    Several technologies exist to create 3D content for the web. With X3D, WebGL, and X3DOM, it is possible to visualize and interact with 3D models in a web browser. Frequently, three-dimensional objects are stored using the X3D file format for the web. However, there is no explicit topological information, which makes it difficult to design fast algorithms for applications that require adjacency and incidence data. This paper presents a new open source toolkit TopTri (Topological model for Triangle meshes) for Web3D servers that builds the topological model for triangular meshes of manifold or nonmanifold models. Web3D client applications using this toolkit make queries to the web server to get adjacency and incidence information of vertices, edges, and faces. This paper shows the application of the topological information to get minimal local points and iso-lines in a 3D mesh in a web browser. As an application, we also present the interactive identification of stalactites in a cave chamber in a 3D web browser. Several tests show that even for large triangular meshes with millions of triangles, the adjacency and incidence information is returned in real time, making the presented toolkit appropriate for interactive Web3D applications. PMID:24977236

  17. A Topological Framework for Interactive Queries on 3D Models in the Web

    Directory of Open Access Journals (Sweden)

    Mauro Figueiredo

    2014-01-01

    Full Text Available Several technologies exist to create 3D content for the web. With X3D, WebGL, and X3DOM, it is possible to visualize and interact with 3D models in a web browser. Frequently, three-dimensional objects are stored using the X3D file format for the web. However, there is no explicit topological information, which makes it difficult to design fast algorithms for applications that require adjacency and incidence data. This paper presents a new open source toolkit TopTri (Topological model for Triangle meshes) for Web3D servers that builds the topological model for triangular meshes of manifold or nonmanifold models. Web3D client applications using this toolkit make queries to the web server to get adjacency and incidence information of vertices, edges, and faces. This paper shows the application of the topological information to get minimal local points and iso-lines in a 3D mesh in a web browser. As an application, we also present the interactive identification of stalactites in a cave chamber in a 3D web browser. Several tests show that even for large triangular meshes with millions of triangles, the adjacency and incidence information is returned in real time, making the presented toolkit appropriate for interactive Web3D applications.
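The kind of adjacency information such a toolkit serves can be illustrated by precomputing an edge-to-face table from a triangle list. This is a hedged sketch of the underlying idea, not the TopTri data structure itself:

```python
from collections import defaultdict

def edge_to_faces(triangles):
    """Map each undirected edge to the indices of its incident triangles."""
    table = defaultdict(list)
    for f, (a, b, c) in enumerate(triangles):
        for edge in ((a, b), (b, c), (c, a)):
            table[tuple(sorted(edge))].append(f)
    return table

# Two triangles sharing the edge (1, 2)
triangles = [(0, 1, 2), (1, 3, 2)]
table = edge_to_faces(triangles)
adjacent = table[(1, 2)]          # faces incident to edge (1, 2)
```

With this table, face-to-face adjacency queries become dictionary lookups rather than scans over the whole mesh.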

  18. Research on Artificial Spider Web Model for Farmland Wireless Sensor Network

    OpenAIRE

    Jun Wang; Song Gao; Shimin Zhao; Guang Hu; Xiaoli Zhang; Guowang Xie

    2018-01-01

    Through systematic analysis of the structural characteristics and invulnerability of spider web, this paper explores the possibility of combining the advantages of spider web such as network robustness and invulnerability with farmland wireless sensor network. A universally applicable definition and mathematical model of artificial spider web structure are established. The comparison between artificial spider web and traditional networks is discussed in detail. The simulation result shows tha...
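Network invulnerability of the kind studied here is commonly measured by whether the graph stays connected as nodes fail. A minimal sketch, using an invented hub-plus-ring topology that loosely mimics a spider web rather than the paper's actual model, is:

```python
from collections import deque

def is_connected(nodes, edges):
    """BFS check that every surviving node is reachable from the first one."""
    if not nodes:
        return True
    adj = {n: set() for n in nodes}
    for a, b in edges:
        if a in adj and b in adj:
            adj[a].add(b)
            adj[b].add(a)
    start = next(iter(nodes))
    seen, queue = {start}, deque([start])
    while queue:
        for m in adj[queue.popleft()]:
            if m not in seen:
                seen.add(m)
                queue.append(m)
    return seen == set(nodes)

# Hub 0 with spokes to 1..4 plus a ring 1-2-3-4-1, like a tiny spider web
nodes = [0, 1, 2, 3, 4]
edges = [(0, 1), (0, 2), (0, 3), (0, 4), (1, 2), (2, 3), (3, 4), (4, 1)]
survives_hub_loss = is_connected([n for n in nodes if n != 0],
                                 [e for e in edges if 0 not in e])
```

Here the ring keeps the network connected even when the hub fails, which is the robustness property a star topology alone would lack.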

  19. Penetration Testing Model for Web sites Hosted in Nuclear Malaysia

    International Nuclear Information System (INIS)

    Mohd Dzul Aiman Aslan; Mohamad Safuan Sulaiman; Siti Nurbahyah Hamdan; Saaidi Ismail; Mohd Fauzi Haris; Norzalina Nasiruddin; Raja Murzaferi Mokhtar

    2012-01-01

    Nuclear Malaysia's web sites have been crucial in providing important and useful information and services to clients as well as users worldwide. Furthermore, a web site is important because it reflects the organisation's image. To ensure the integrity of web site content, a study was made and a penetration testing model was implemented to test the security of several web sites hosted at Nuclear Malaysia against malicious attempts. This study explains in detail how the security was tested and measured. The result determined the security level and the vulnerability of several web sites. This result is important for improving and hardening the security of web sites in Nuclear Malaysia. (author)

  20. A grammar checker based on web searching

    Directory of Open Access Journals (Sweden)

    Joaquim Moré

    2006-05-01

    Full Text Available This paper presents an English grammar and style checker for non-native English speakers. The main characteristic of this checker is the use of an Internet search engine. As the number of web pages written in English is immense, the system hypothesises that a piece of text not found on the Web is probably badly written. The system also hypothesises that the Web will provide examples of how the content of the text segment can be expressed in a grammatically correct and idiomatic way. Thus, when the checker warns the user about the odd nature of a text segment, the Internet engine searches for contexts that can help the user decide whether or not to correct the segment. By means of the search engine, the checker also suggests other expressions that appear on the Web more often than the one the user actually wrote.
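The checker's core heuristic, "rare on the Web implies probably wrong", can be sketched with a stubbed hit-count function standing in for the live search engine (the phrases, counts, and threshold below are all invented for illustration):

```python
def looks_suspect(phrase, hit_count, threshold=100):
    """Flag a phrase whose Web frequency falls below the threshold."""
    return hit_count(phrase) < threshold

def suggest(alternatives, hit_count):
    """Among candidate rewrites, prefer the one most frequent on the Web."""
    return max(alternatives, key=hit_count)

# Stub standing in for live search-engine page counts (made-up numbers)
counts = {"depend of": 40, "depend on": 50000, "depend in": 15}
hits = lambda p: counts.get(p, 0)

flagged = looks_suspect("depend of", hits)
better = suggest(["depend on", "depend in"], hits)
```

In the real checker the counts would come from search-engine queries, and the contexts of the frequent alternatives would be shown to the user rather than applied automatically.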

  1. A Web portal for the Engineering and Equipment Data Management System at CERN

    CERN Document Server

    Tsyganov, A; Martel, P; Milenkovic, S; Suwalska, A; Delamare, Christophe; Widegren, David; Mallon Amerigo, S; Pettersson, Thomas Sven

    2010-01-01

    CERN, the European Laboratory for Particle Physics, located in Geneva, Switzerland, has recently started the Large Hadron Collider (LHC), a 27 km particle accelerator. The CERN Engineering and Equipment Data Management Service (EDMS) provides support for managing engineering and equipment information throughout the entire lifecycle of a project. Based on several in-house developed and commercial data management systems, this service supports management and follow-up of different kinds of information throughout the lifecycle of the LHC project: design, manufacturing, installation, commissioning data, maintenance and more. The data collection phase, carried out by specialists, is now being replaced by a phase during which data will be consulted on an extensive basis by non-expert users. In order to address this change, a Web portal for the EDMS has been developed. It brings together in one space all the aspects covered by the EDMS: project and document management, asset tracking and safety follow-up. T...

  2. An Application for Data Preprocessing and Models Extractions in Web Usage Mining

    Directory of Open Access Journals (Sweden)

    Claudia Elena DINUCA

    2011-11-01

    Full Text Available Web servers worldwide generate a vast amount of information on web users' browsing activities. Several researchers have studied these so-called clickstream or web access log data to better understand and characterize web users. The goal of this application is to analyze user behaviour by mining enriched web access log data. With the continued growth and proliferation of e-commerce, Web services, and Web-based information systems, the volumes of clickstream and user data collected by Web-based organizations in their daily operations have reached astronomical proportions. This information can be exploited in various ways, such as enhancing the effectiveness of websites or developing directed web marketing campaigns. The discovered patterns are usually represented as collections of pages, objects, or resources that are frequently accessed by groups of users with common needs or interests. In this paper we focus on how the application for preprocessing data and extracting different data models from web log data was implemented, using association rules as a data mining technique to extract potentially useful knowledge from web usage data. We find different navigation-pattern data models by analysing the log files of the web site. I implemented the application in Java using the NetBeans IDE. For exemplification, I used the log file data from a commercial web site, www.nice-layouts.com.
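A typical preprocessing step described here, grouping raw access-log hits into per-visitor sessions with an inactivity timeout, can be sketched as follows. The field layout is a simplified stand-in for real log lines, and the 30-minute timeout is a conventional assumption, not a value from the paper:

```python
from datetime import datetime, timedelta

def sessionize(hits, timeout=timedelta(minutes=30)):
    """Split (visitor, timestamp, url) hits into sessions per visitor."""
    sessions = {}
    for visitor, ts, url in sorted(hits, key=lambda h: (h[0], h[1])):
        runs = sessions.setdefault(visitor, [])
        # Start a new session when the gap since the last hit exceeds timeout
        if runs and ts - runs[-1][-1][0] <= timeout:
            runs[-1].append((ts, url))
        else:
            runs.append([(ts, url)])
    return sessions

t = datetime(2011, 11, 1, 10, 0)
hits = [
    ("10.0.0.1", t, "/index"),
    ("10.0.0.1", t + timedelta(minutes=5), "/products"),
    ("10.0.0.1", t + timedelta(hours=2), "/index"),   # new session
    ("10.0.0.2", t, "/index"),
]
sessions = sessionize(hits)
```

Association-rule mining and navigation-pattern extraction would then run over these sessions rather than over raw log lines.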

  3. Using declarative workflow languages to develop process-centric web applications

    NARCIS (Netherlands)

    Bernardi, M.L.; Cimitile, M.; Di Lucca, G.A.; Maggi, F.M.

    2012-01-01

    Nowadays, process-centric Web Applications (WAs) are extensively used in contexts where multi-user, coordinated work is required. Recently, Model Driven Engineering (MDE) techniques have been investigated for the development of this kind of applications. However, there are still some open issues.

  4. Semantically-Rigorous Systems Engineering Modeling Using Sysml and OWL

    Science.gov (United States)

    Jenkins, J. Steven; Rouquette, Nicolas F.

    2012-01-01

    The Systems Modeling Language (SysML) has found wide acceptance as a standard graphical notation for the domain of systems engineering. SysML subsets and extends the Unified Modeling Language (UML) to define conventions for expressing structural, behavioral, and analytical elements, and relationships among them. SysML-enabled modeling tools are available from multiple providers, and have been used for diverse projects in military aerospace, scientific exploration, and civil engineering. The Web Ontology Language (OWL) has found wide acceptance as a standard notation for knowledge representation. OWL-enabled modeling tools are available from multiple providers, as well as auxiliary assets such as reasoners and application programming interface libraries, etc. OWL has been applied to diverse projects in a wide array of fields. While the emphasis in SysML is on notation, SysML inherits (from UML) a semantic foundation that provides for limited reasoning and analysis. UML's partial formalization (FUML), however, does not cover the full semantics of SysML, which is a substantial impediment to developing high confidence in the soundness of any conclusions drawn therefrom. OWL, by contrast, was developed from the beginning on formal logical principles, and consequently provides strong support for verification of consistency and satisfiability, extraction of entailments, conjunctive query answering, etc. This emphasis on formal logic is counterbalanced by the absence of any graphical notation conventions in the OWL standards. Consequently, OWL has had only limited adoption in systems engineering. The complementary strengths and weaknesses of SysML and OWL motivate an interest in combining them in such a way that we can benefit from the attractive graphical notation of SysML and the formal reasoning of OWL. This paper describes an approach to achieving that combination.

  5. From BPMN process models to BPEL web services

    NARCIS (Netherlands)

    Ouyang, C.; Aalst, van der W.M.P.; Dumas, M.; Hofstede, ter A.H.M.; Feig, E.; Kumar, A.

    2006-01-01

    The Business Process Modelling Notation (BPMN) is a graph-oriented language in which control and action nodes can be connected almost arbitrarily. It is supported by various modelling tools but so far no systems can directly execute BPMN models. The Business Process Execution Language for Web

  6. IMPROVING PERSONALIZED WEB SEARCH USING BOOKSHELF DATA STRUCTURE

    Directory of Open Access Journals (Sweden)

    S.K. Jayanthi

    2012-10-01

    Full Text Available Search engines play a vital role in retrieving relevant information for the web user. In this research work, a user-profile-based web search is proposed, so web users from different domains may receive different sets of results. The main challenge is to provide relevant results at the right level of reading difficulty. Estimating user expertise and re-ranking the results are the main aspects of this paper. The retrieved results are arranged in a Bookshelf Data Structure for easy access. Better presentation of search results hence significantly increases the usability of web search engines in visual mode.
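Re-ranking by estimated user expertise, as proposed here, can be sketched by penalizing the mismatch between a result's reading difficulty and the user's level. The scoring formula, weights, difficulty scale, and URLs below are invented for illustration and are not the paper's method:

```python
def rerank(results, user_level, weight=0.5):
    """Order results by relevance minus a difficulty-mismatch penalty."""
    def score(r):
        return r["relevance"] - weight * abs(r["difficulty"] - user_level)
    return sorted(results, key=score, reverse=True)

# Invented results with relevance scores and reading-difficulty levels 1..5
results = [
    {"url": "/intro", "relevance": 0.7, "difficulty": 1},
    {"url": "/survey", "relevance": 0.8, "difficulty": 3},
    {"url": "/proofs", "relevance": 0.9, "difficulty": 5},
]
novice_first = rerank(results, user_level=1)[0]["url"]
expert_first = rerank(results, user_level=5)[0]["url"]
```

The same ranked list could then be laid out shelf by shelf in the proposed bookshelf presentation.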

  7. Engineering semantic-based interactive multi-device web applications

    NARCIS (Netherlands)

    Bellekens, P.A.E.; Sluijs, van der K.A.M.; Aroyo, L.M.; Houben, G.J.P.M.; Baresi, L.; Fraternali, P.; Houben, G.J.

    2007-01-01

    To build high-quality personalized Web applications developers have to deal with a number of complex problems. We look at the growing class of personalized Web Applications that share three characteristic challenges. Firstly, the semantic problem of how to enable content reuse and integration.

  8. A model of visual, aesthetic communication focusing on web sites

    DEFF Research Database (Denmark)

    Thorlacius, Lisbeth

    2002-01-01

    Theory books and method books within the field of web design mainly focus on the technical and functional aspects of the construction of web design. There is a lack of a model which weighs the analysis of the visual and aesthetic aspects against the functional and technical aspects of web design. With a point of departure in Roman Jakobson's linguistic communication model, the reader is introduced to a model which covers the communication aspects, the visual aspects, the aesthetic aspects and the net-specific aspects of the analysis of media products. The aesthetic aspects rank low in the eyes of the media producers even though the most outstanding media products often obtained their success due to aesthetic phenomena. The formal aesthetic function and the inexpressible aesthetic function have therefore been prioritised in the model in regard to the construction and analysis of media...

  9. Transforming Systems Engineering through Model Centric Engineering

    Science.gov (United States)

    2017-08-08

    Contract No. HQ0034-13-D-0004. Report No. SERC-2017-TR-110, August 8, 2017. Transforming Systems Engineering through Model-Centric Engineering, Technical Report SERC-2017-TR-110, updated August 8, 2017. Principal Investigator: Mark Blackburn, Stevens Institute of Technology. Co...Evangelista. Sponsor: U.S. Army Armament Research, Development and Engineering Center (ARDEC), Office of the Deputy Assistant Secretary of Defense for

  10. Semantic similarity measures in the biomedical domain by leveraging a web search engine.

    Science.gov (United States)

    Hsieh, Sheau-Ling; Chang, Wen-Yung; Chen, Chi-Huang; Weng, Yung-Ching

    2013-07-01

    Various studies of web-based semantic similarity measures have been carried out. However, measuring semantic similarity between two terms remains a challenging task. Traditional ontology-based methodologies have the limitation that both concepts must reside in the same ontology tree(s). Unfortunately, in practice, this assumption is not always applicable. On the other hand, if the corpus is sufficiently adequate, corpus-based methodologies can overcome the limitation, and the web is a continuously and enormously growing corpus. Therefore, a method of estimating semantic similarity is proposed that exploits the page counts of two biomedical concepts returned by the Google AJAX web search engine. The features are extracted as the co-occurrence patterns of two given terms P and Q, by querying P, Q, as well as P AND Q, and the web search hit counts of the defined lexico-syntactic patterns. These similarity scores of different patterns are evaluated, by adapting support vector machines for classification, to leverage the robustness of semantic similarity measures. Experimental results validated against two datasets (dataset 1 provided by A. Hliaoutakis; dataset 2 provided by T. Pedersen) are presented and discussed. On dataset 1, the proposed approach achieves the best correlation coefficient (0.802) under SNOMED-CT. On dataset 2, the proposed method obtains the best correlation coefficients (SNOMED-CT: 0.705; MeSH: 0.723) with physician scores compared with measures of other methods. However, the correlation coefficients (SNOMED-CT: 0.496; MeSH: 0.539) with coder scores showed the opposite outcome. In conclusion, the semantic similarity findings of the proposed method are close to those of physicians' ratings. Furthermore, the study provides a cornerstone investigation for extracting fully relevant information from digitized, free-text medical records in the National Taiwan University Hospital database.
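One widely used page-count similarity of this family is the WebJaccard co-occurrence score, shown below with invented counts standing in for what the search engine would return. This is a sketch of the general technique, not the paper's SVM-combined measure:

```python
def web_jaccard(hits_p, hits_q, hits_pq):
    """Co-occurrence similarity from page counts for P, Q, and 'P AND Q'."""
    union = hits_p + hits_q - hits_pq
    return hits_pq / union if union else 0.0

# Made-up page counts for two term pairs
sim_related = web_jaccard(120_000, 90_000, 60_000)   # terms share many pages
sim_unrelated = web_jaccard(120_000, 90_000, 300)    # terms rarely co-occur
```

In the cited approach, several such pattern scores are combined by a support vector machine rather than used individually.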

  11. Real-time GIS data model and sensor web service platform for environmental data management.

    Science.gov (United States)

    Gong, Jianya; Geng, Jing; Chen, Zeqiang

    2015-01-09

    Effective environmental data management is meaningful for human health. In the past, environmental data management involved developing a specific environmental data management system, but this method often lacks real-time data retrieval and sharing/interoperating capability. With the development of information technology, a Geospatial Service Web method has been proposed that can be employed for environmental data management. The purpose of this study is to determine a method to realize environmental data management under the Geospatial Service Web framework. To this end, a real-time GIS (Geographic Information System) data model and a Sensor Web service platform are proposed. The real-time GIS data model manages real-time data, while the Sensor Web service platform, implemented on Sensor Web technologies, supports the realization of the data model. Real-time environmental data, such as meteorological data, air quality data, soil moisture data, soil temperature data, and landslide data, are managed in the Sensor Web service platform. In addition, two use cases, real-time air quality monitoring and real-time soil moisture monitoring, based on the real-time GIS data model in the Sensor Web service platform are realized and demonstrated. The total processing times for the two experiments are 3.7 s and 9.2 s. The experimental results show that integrating a real-time GIS data model with a Sensor Web service platform is an effective way to manage environmental data under the Geospatial Service Web framework.
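As a hedged illustration of the real-time retrieval such a data model must support, a minimal per-sensor, time-ordered observation store with "latest value" and time-window queries might look like this (the sensor ID, timestamps, and readings are invented; this is not the paper's platform):

```python
import bisect

class ObservationStore:
    """Time-ordered observations per sensor, with latest/window queries."""
    def __init__(self):
        self.series = {}          # sensor id -> ([timestamps], [values])

    def insert(self, sensor, ts, value):
        times, values = self.series.setdefault(sensor, ([], []))
        i = bisect.bisect(times, ts)     # keep timestamps sorted on arrival
        times.insert(i, ts)
        values.insert(i, value)

    def latest(self, sensor):
        times, values = self.series[sensor]
        return values[-1]

    def window(self, sensor, t0, t1):
        times, values = self.series[sensor]
        return values[bisect.bisect_left(times, t0):bisect.bisect_right(times, t1)]

store = ObservationStore()
for ts, pm25 in [(10, 35.0), (20, 40.0), (15, 37.5)]:   # out-of-order arrival
    store.insert("air-1", ts, pm25)
```

Keeping timestamps sorted at insert time makes both query types logarithmic-plus-output, which matters when observations stream in continuously.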

  12. MODELS OF PROJECT REVERSE ENGINEERING

    Directory of Open Access Journals (Sweden)

    Віктор Володимирович ІВАНОВ

    2017-03-01

    Full Text Available Reverse engineering addresses the important scientific and technical problem of increasing the value of an existing technical product by transforming it into a product with different features or a different design. Ideas for new applications of existing products were generated on the basis of heuristic analysis. The concept of reverse engineering was expanded and divided into three types: conceptual, aggregate, and complete. The use of heuristic methods for the reverse engineering concept was shown. A modified model of reverse engineering based on the PMBOK model was developed. Our model includes two new phases: identification and transformation. At the identification phase, technical inspection is performed. At the transformation phase, a heuristic search for new applications of the existing technical product is carried out. A model of the execution phase that includes heuristic methods, metrological equipment, and a CAD/CAM/CAE program complex was created. A model connecting the economic indicators of a reverse engineering project was also developed.

  13. Transforming Systems Engineering through Model-Centric Engineering

    Science.gov (United States)

    2018-02-28

    Contract No. HQ0034-13-D-0004. Research Tasks: 48, 118, 141, 157, 170. Report No. SERC-2018-TR-103. Transforming Systems Engineering through Model-Centric Engineering, Technical Report SERC-2018-TR-103, February 28, 2018. Principal Investigator: Dr. Mark Blackburn, Stevens Institute of... Systems Engineering Research Center. This material is based upon work supported, in whole or in part, by the U.S. Department of Defense through the

  14. A verification strategy for web services composition using enhanced stacked automata model.

    Science.gov (United States)

    Nagamouttou, Danapaquiame; Egambaram, Ilavarasan; Krishnan, Muthumanickam; Narasingam, Poonkuzhali

    2015-01-01

    Currently, Service-Oriented Architecture (SOA) is becoming the most popular software architecture for contemporary enterprise applications, and one crucial technique for its implementation is web services. An individual service offered by a service provider may represent limited business functionality; however, by composing individual services from different service providers, a composite service describing the complete business process of an enterprise can be built. Many new standards have been defined to address the web service composition problem, notably the Business Process Execution Language (BPEL). BPEL provides initial groundwork for an Extensible Markup Language (XML) specification language for defining and implementing business-practice workflows for web services. The problem with most realistic approaches to service composition is the verification of the composed web services; it must rely on formal verification methods to ensure the correctness of composed services. A few research works on the verification of web services for deterministic systems have been reported in the literature. Moreover, the existing models did not address verification properties such as dead transitions, deadlock, reachability, and safety. In this paper, a new model to verify composed web services using an Enhanced Stacked Automata Model (ESAM) is proposed. The correctness properties of the non-deterministic system are evaluated based on properties such as dead transitions, deadlock, safety, liveness, and reachability. Initially, web services are composed using the Business Process Execution Language for Web Services (BPEL4WS), converted into an ESAM (a combination of a Muller Automaton (MA) and a Push Down Automaton (PDA)), and then transformed into Promela, the input language for the Simple ProMeLa Interpreter (SPIN) tool. The model is verified using the SPIN tool, and the results revealed better performance in finding dead transitions and deadlocks in contrast to the
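Two of the checked properties, reachability and deadlock, reduce to graph search over the composed service's state space. A minimal sketch over an explicit transition relation (the state names are hypothetical, and real checkers like SPIN explore the space symbolically or on the fly) is:

```python
def reachable(transitions, start):
    """All states reachable from start in the labelled transition system."""
    seen, stack = {start}, [start]
    while stack:
        for nxt in transitions.get(stack.pop(), []):
            if nxt not in seen:
                seen.add(nxt)
                stack.append(nxt)
    return seen

def deadlocks(transitions, start, final_states):
    """Reachable non-final states with no outgoing transition."""
    return {s for s in reachable(transitions, start)
            if not transitions.get(s) and s not in final_states}

# Hypothetical composed-service state graph: a fault state with no handler
transitions = {"init": ["invoke"], "invoke": ["reply", "fault"], "reply": ["done"]}
dead = deadlocks(transitions, "init", final_states={"done"})
```

A dead transition is the dual check: a transition whose source state is never reachable from the initial state.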

  15. Faculty Recommendations for Web Tools: Implications for Course Management Systems

    Science.gov (United States)

    Oliver, Kevin; Moore, John

    2008-01-01

    A gap analysis of web tools in Engineering was undertaken as one part of the Digital Library Network for Engineering and Technology (DLNET) grant funded by NSF (DUE-0085849). DLNET represents a Web portal and an online review process to archive quality knowledge objects in Engineering and Technology disciplines. The gap analysis coincided with the…

  16. Analysis and Testing of Ajax-based Single-page Web Applications

    NARCIS (Netherlands)

    Mesbah, A.

    2009-01-01

    This dissertation has focused on better understanding the shifting web paradigm and the consequences of moving from the classical multi-page model to an Ajax-based single-page style. Specifically to that end, this work has examined this new class of software from three main software engineering

  17. Remote Experiments in Control Engineering Education Laboratory

    Directory of Open Access Journals (Sweden)

    Milica B Naumović

    2008-05-01

    Full Text Available This paper presents the Automatic Control Engineering Laboratory (ACEL-WebLab), an internet-based remote laboratory under development for control engineering education at the Faculty of Electronic Engineering in Niš. Up to now, the remote laboratory integrates two physical systems (a velocity servo system and a magnetic levitation system) and enables some levels of measurement and control. To perform experiments in ACEL-WebLab, the "LabVIEW Run Time Engine" and a standard web browser are needed.

  18. An assessment system for the system safety engineering capability maturity model in the case of spent fuel reprocessing

    International Nuclear Information System (INIS)

    Yang Xiaohua; Liu Zhenghai; Liu Zhiming; Wan Yaping; Bai Xiaofeng

    2012-01-01

    By using the system security engineering capability maturity model (SSE-CMM), we can improve processing and capability evaluation and promote the user's trust. SSE-CMM is the common method for organizing and implementing safety engineering, and it is a mature method for system safety engineering. Combining the capability maturity model (CMM) with total quality management and statistical theory, SSE-CMM turns systems security engineering into a well-defined, mature, measurable, advanced engineering discipline. Lack of domain knowledge, the size of the data, the diversity of evidence, the cumbersomeness of processes, and the complexity of matching evidence with problems are the main issues that SSE-CMM assessment has to face. To effectively improve the efficiency of assessment under the spent fuel reprocessing system security engineering capability maturity model (SFR-SSE-CMM), in this paper we designed an intelligent assessment software based on domain ontology that uses methods such as ontology, evidence theory, the semantic web, intelligent information retrieval, and intelligent auto-matching techniques. This software includes four subsystems, which are a domain ontology creation and management system, an evidence auto-collection system, and a problem and evidence matching system. The architecture of the software is divided into five layers: a data layer, an ontology layer, a knowledge layer, a service layer and a presentation layer. (authors)

  19. From people to entities new semantic search paradigms for the web

    CERN Document Server

    Demartini, G

    2014-01-01

    The exponential growth of digital information available in companies and on the Web creates the need for search tools that can respond to the most sophisticated information needs. Many user tasks would be simplified if search engines supported typed search and returned entities instead of just Web documents. For example, an executive who tries to solve a problem needs to find people in the company who are knowledgeable about a certain topic. In the first part of the book, we propose a model for expert finding based on the well-consolidated vector space model for Information Retrieval and inv
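The expert-finding idea, a vector space model over people rather than documents, can be sketched as cosine similarity between a topic query and per-person term profiles. The names and term weights below are invented, and this minimal sketch omits the book's retrieval refinements:

```python
from math import sqrt

def cosine(u, v):
    """Cosine similarity of two sparse term-weight vectors (dicts)."""
    dot = sum(w * v.get(t, 0.0) for t, w in u.items())
    norm = (sqrt(sum(w * w for w in u.values()))
            * sqrt(sum(w * w for w in v.values())))
    return dot / norm if norm else 0.0

# Invented expertise profiles built, e.g., from documents each person authored
profiles = {
    "alice": {"compiler": 0.9, "llvm": 0.7},
    "bob": {"database": 0.8, "sql": 0.9},
}
query = {"compiler": 1.0}
expert = max(profiles, key=lambda p: cosine(query, profiles[p]))
```

In practice the profile vectors are aggregated from the documents associated with each candidate, which is what turns a document ranking into a people ranking.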

  20. Exposing the Hidden-Web Induced by Ajax

    NARCIS (Netherlands)

    Mesbah, A.; Van Deursen, A.

    2008-01-01

    AJAX is a very promising approach for improving rich interactivity and responsiveness of web applications. At the same time, AJAX techniques increase the totality of the hidden web by shattering the metaphor of a web ‘page’ upon which general search engines are based. This paper describes a

  1. Semantically-Enabled Sensor Plug & Play for the Sensor Web

    Science.gov (United States)

    Bröring, Arne; Maúe, Patrick; Janowicz, Krzysztof; Nüst, Daniel; Malewski, Christian

    2011-01-01

    Environmental sensors have continuously improved by becoming smaller, cheaper, and more intelligent over the past years. As a consequence of these technological advancements, sensors are increasingly deployed to monitor our environment. The large variety of available sensor types with often incompatible protocols complicates the integration of sensors into observing systems. The standardized Web service interfaces and data encodings defined within OGC’s Sensor Web Enablement (SWE) framework make sensors available over the Web and hide the heterogeneous sensor protocols from applications. So far, the SWE framework does not describe how to integrate sensors on-the-fly with minimal human intervention. The driver software which enables access to sensors has to be implemented and the measured sensor data has to be manually mapped to the SWE models. In this article we introduce a Sensor Plug & Play infrastructure for the Sensor Web by combining (1) semantic matchmaking functionality, (2) a publish/subscribe mechanism underlying the Sensor Web, as well as (3) a model for the declarative description of sensor interfaces which serves as a generic driver mechanism. We implement and evaluate our approach by applying it to an oil spill scenario. The matchmaking is realized using existing ontologies and reasoning engines and provides a strong case for the semantic integration capabilities provided by Semantic Web research. PMID:22164033

  2. Engineering model for body armor

    NARCIS (Netherlands)

    Roebroeks, G.H.J.J.; Carton, E.P.

    2014-01-01

TNO has developed an engineering model for flexible body armor, as one of its energy-based engineering models that describe the physics of projectile-target interactions (weaves, metals, ceramics). These models form the basis for exploring the possibilities for protection improvement. This

  3. Asynchronous programming model for transactional Web in a distributed environment

    Directory of Open Access Journals (Sweden)

    Luis Marco Cáceres Alvarez

    2011-06-01

    Full Text Available This paper defines and details an asynchronous programming model for transactional, service-oriented Web information systems in a distributed environment, combining the advantages of asynchronous Web programming techniques (AJAX), object-oriented design patterns, and Web services to obtain fault-tolerant, distributed, efficient, and usable applications. The problems found in the classical Web services model are pointed out, and a programming model that solves and improves on classical Web services is defined, documented, and developed. The solution is validated through a prototype built on the defined programming model.

  4. Deep Web and Dark Web: Deep World of the Internet

    OpenAIRE

    Çelik, Emine

    2018-01-01

    The Internet is undoubtedly still a revolutionary breakthrough in the history of humanity. Many people use the internet for communication, social media, shopping, political and social agendas, and more. The concepts of the Deep Web and the Dark Web are handled not only by computer and software engineers but also by social scientists, because of the role the internet plays for states in international arenas, for public institutions, and in human life. Taking as a starting point the very important role of the internet for social s...

  5. Learning Hierarchical User Interest Models from Web Pages

    Institute of Scientific and Technical Information of China (English)

    2006-01-01

    We propose an algorithm for learning hierarchical user interest models from the Web pages users have browsed. In this algorithm, a user's interests are represented as a tree, called a user interest tree, whose content and structure can change simultaneously to adapt to changes in the user's interests. This representation expresses a user's specific and general interests as a continuum: in some sense, specific interests correspond to short-term interests, while general interests correspond to long-term interests, so the representation reflects users' interests more accurately. The algorithm can automatically model a user's multiple interest domains, dynamically generate the interest models, and prune a user interest tree when the number of its nodes exceeds a given value. Finally, we report experimental results from a Chinese Web site.
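A minimal sketch of such a tree, with general interests near the root, specific interests at the leaves, and pruning of the weakest leaf once a node budget is exceeded. The node structure, weights, and pruning rule below are our own illustration, not the paper's algorithm:

```python
# Hypothetical user interest tree: browsing a page reinforces a topic path;
# pruning drops the weakest leaf when the tree grows past a node budget.

class InterestNode:
    def __init__(self, topic, weight=0.0):
        self.topic = topic          # e.g. "sports" or, deeper, "tennis"
        self.weight = weight        # interest strength
        self.children = []

    def add(self, path, boost=1.0):
        """Reinforce the interest path of a browsed page, e.g. ["sports", "tennis"]."""
        self.weight += boost
        if not path:
            return
        head, rest = path[0], path[1:]
        child = next((c for c in self.children if c.topic == head), None)
        if child is None:
            child = InterestNode(head)
            self.children.append(child)
        child.add(rest, boost)

    def count(self):
        return 1 + sum(c.count() for c in self.children)

    def prune(self, max_nodes):
        """Drop the weakest leaves until the tree fits the node budget."""
        while self.count() > max_nodes:
            parent, leaf = self._weakest_leaf()
            parent.children.remove(leaf)

    def _weakest_leaf(self):
        best = None
        stack = [self]
        while stack:
            node = stack.pop()
            for c in node.children:
                if c.children:
                    stack.append(c)
                elif best is None or c.weight < best[1].weight:
                    best = (node, c)
        return best

root = InterestNode("interests")
root.add(["sports", "tennis"])
root.add(["sports", "tennis"])
root.add(["music", "jazz"])
root.prune(max_nodes=4)   # removes the weaker "jazz" leaf
```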

  6. Applying Semantic Web technologies to improve the retrieval, credibility and use of health-related web resources.

    Science.gov (United States)

    Mayer, Miguel A; Karampiperis, Pythagoras; Kukurikos, Antonis; Karkaletsis, Vangelis; Stamatakis, Kostas; Villarroel, Dagmar; Leis, Angela

    2011-06-01

    The number of health-related websites is increasing day-by-day; however, their quality is variable and difficult to assess. Various "trust marks" and filtering portals have been created in order to assist consumers in retrieving quality medical information. Consumers are using search engines as the main tool to get health information; however, the major problem is that the meaning of the web content is not machine-readable in the sense that computers cannot understand words and sentences as humans can. In addition, trust marks are invisible to search engines, thus limiting their usefulness in practice. During the last five years there have been different attempts to use Semantic Web tools to label health-related web resources to help internet users identify trustworthy resources. This paper discusses how Semantic Web technologies can be applied in practice to generate machine-readable labels and display their content, as well as to empower end-users by providing them with the infrastructure for expressing and sharing their opinions on the quality of health-related web resources.

  7. Start Your Engines: Surfing with Search Engines for Kids.

    Science.gov (United States)

    Byerly, Greg; Brodie, Carolyn S.

    1999-01-01

    Suggests that to be an effective educator and user of the Web it is essential to know the basics about search engines. Presents tips for using search engines. Describes several search engines for children and young adults, as well as some general filtered search engines for children. (AEF)

  8. Research on Artificial Spider Web Model for Farmland Wireless Sensor Network

    Directory of Open Access Journals (Sweden)

    Jun Wang

    2018-01-01

    Full Text Available Through systematic analysis of the structural characteristics and invulnerability of spider web, this paper explores the possibility of combining the advantages of spider web such as network robustness and invulnerability with farmland wireless sensor network. A universally applicable definition and mathematical model of artificial spider web structure are established. The comparison between artificial spider web and traditional networks is discussed in detail. The simulation result shows that the networking structure of artificial spider web is better than that of traditional networks in terms of improving the overall reliability and invulnerability of communication system. A comprehensive study on the advantage characteristics of spider web has important theoretical and practical significance for promoting the invulnerability research of farmland wireless sensor network.

  9. Study of Search Engine Transaction Logs Shows Little Change in How Users use Search Engines. A review of: Jansen, Bernard J., and Amanda Spink. “How Are We Searching the World Wide Web? A Comparison of Nine Search Engine Transaction Logs.” Information Processing & Management 42.1 (2006): 248-263.

    Directory of Open Access Journals (Sweden)

    David Hook

    2006-09-01

    Full Text Available Objective – To examine the interactions between users and search engines, and how they have changed over time. Design – Comparative analysis of search engine transaction logs. Setting – Nine major analyses of search engine transaction logs. Subjects – Nine web search engine studies (4 European, 5 American) over a seven-year period, covering the search engines Excite, Fireball, AltaVista, BWIE and AllTheWeb. Methods – The results from individual studies are compared by year of study for percentages of single-query sessions, one-term queries, operator (and, or, not, etc.) usage, and single result page viewing. As well, the authors group the search queries into eleven different topical categories and compare how the breakdown has changed over time. Main Results – Based on the percentage of single-query sessions, it does not appear that the complexity of interactions has changed significantly for either the U.S.-based or the European-based search engines. As well, there was little change observed in the percentage of one-term queries over the years of study for either the U.S.-based or the European-based search engines. Few users (generally less than 20%) use Boolean or other operators in their queries, and these percentages have remained relatively stable. One area of noticeable change is in the percentage of users viewing only one results page, which has increased over the years of study. Based on the studies of the U.S.-based search engines, the topical categories of ‘People, Place or Things’ and ‘Commerce, Travel, Employment or Economy’ are becoming more popular, while the categories of ‘Sex and Pornography’ and ‘Entertainment or Recreation’ are declining. Conclusions – The percentage of users viewing only one results page increased during the years of the study, while the percentages of single-query sessions, one-term sessions and operator usage remained stable. The increase in single result page viewing
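Metrics like those compared in this study (single-query sessions, one-term queries, operator usage) can be computed from a raw transaction log roughly as follows; the log format and operator list here are our assumptions, not those of the reviewed studies:

```python
# Sketch: compute per-study style metrics from (session_id, query) records.
from collections import defaultdict

OPERATORS = {"and", "or", "not", "+", "-", '"'}

def log_metrics(records):
    """records: list of (session_id, query) pairs from a transaction log."""
    sessions = defaultdict(list)
    for sid, query in records:
        sessions[sid].append(query)
    queries = [q for qs in sessions.values() for q in qs]
    single_session = sum(1 for qs in sessions.values() if len(qs) == 1)
    one_term = sum(1 for q in queries if len(q.split()) == 1)
    with_ops = sum(1 for q in queries
                   if any(tok.lower() in OPERATORS for tok in q.split()))
    n = len(queries)
    return {"pct_single_query_sessions": 100 * single_session / len(sessions),
            "pct_one_term_queries": 100 * one_term / n,
            "pct_operator_queries": 100 * with_ops / n}

log = [(1, "weather"), (2, "cats AND dogs"), (2, "dog breeds"), (3, "news")]
m = log_metrics(log)
```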

  10. Experience of Developing a Meta-Semantic Search Engine

    OpenAIRE

    Mukhopadhyay, Debajyoti; Sharma, Manoj; Joshi, Gajanan; Pagare, Trupti; Palwe, Adarsha

    2013-01-01

    Today's web search scenario, which is mainly keyword based, leads to the need for the effective and meaningful search provided by the Semantic Web. Existing search engines often fail to provide relevant answers to users' queries due to their dependency on the simple data available in web pages. On the other hand, semantic search engines provide efficient and relevant results, as the semantic web manages information with well-defined meaning using ontologies. A meta-search engine is a search tool that ...

  11. Semantic Web Technologies for the Adaptive Web

    DEFF Research Database (Denmark)

    Dolog, Peter; Nejdl, Wolfgang

    2007-01-01

    Ontologies and reasoning are the key terms brought into focus by the semantic web community. Formal representation of ontologies in a common data model on the web can be taken as a foundation for adaptive web technologies as well. This chapter describes how ontologies shared on the semantic web provide conceptualization for the links which are a main vehicle to access information on the web. The subject domain ontologies serve as constraints for generating only those links which are relevant for the domain a user is currently interested in. Furthermore, user model ontologies provide additional means for deciding which links to show, annotate, hide, generate, and reorder. The semantic web technologies provide means to formalize the domain ontologies and metadata created from them. The formalization enables reasoning for personalization decisions. This chapter describes which components...

  12. A Model for Freshman Engineering Retention

    Science.gov (United States)

    Veenstra, Cindy P.; Dey, Eric L.; Herrin, Gary D.

    2009-01-01

    With the current concern over the growing need for more engineers, there is an immediate need to improve freshman engineering retention. A working model for freshman engineering retention is needed. This paper proposes such a model based on Tinto's Interactionalist Theory. Emphasis in this model is placed on pre-college characteristics as…

  13. Development of Web-Based Formative Assessment Model to Enhance Physics Concepts of Students

    Directory of Open Access Journals (Sweden)

    Ediyanto Ediyanto

    2015-03-01

    Full Text Available Development of a Web-Based Formative Assessment Model to Enhance Students' Understanding of Physics Concepts   Abstract: There are two approaches to learning assessment, called formative and summative. Formative assessment is well suited here because it involves students directly during the learning process and can improve their conceptual understanding. Limited time in class makes this process difficult, so the development of both online and offline formative assessment that provides rapid feedback to teachers and students is needed. The goal of this research is to produce a web-based formative assessment model for physics. The study used a research-and-development design for the formative assessment model. Questionnaires were used for product validation, covering the textbook, the pre- and post-learning quiz instruments, and the web product. Quantitative analysis shows that the developed product is valid without revision. Based on qualitative data, the product was revised following comments and suggestions from expert validation, teachers, and students. Product testing shows that the formative assessment model can improve students' conceptual comprehension. Key Words: formative assessment model, students' conceptual comprehension of physics, web-based

  14. Workshop on Engineering Turbulence Modeling

    Science.gov (United States)

    Povinelli, Louis A. (Editor); Liou, W. W. (Editor); Shabbir, A. (Editor); Shih, T.-H. (Editor)

    1992-01-01

    Discussed here is the future direction of various levels of engineering turbulence modeling related to computational fluid dynamics (CFD) computations for propulsion. For each level of computation, there are a few turbulence models which represent the state-of-the-art for that level. However, it is important to know their capabilities as well as their deficiencies in order to help engineers select and implement the appropriate models in their real world engineering calculations. This will also help turbulence modelers perceive the future directions for improving turbulence models. The focus is on one-point closure models (i.e., from algebraic models to higher order moment closure schemes and partial differential equation methods) which can be applied to CFD computations. However, other schemes helpful in developing one-point closure models are also discussed.

  15. FindZebra: A search engine for rare diseases

    DEFF Research Database (Denmark)

    Dragusin, Radu; Petcu, Paula; Lioma, Christina Amalia

    2013-01-01

    Background: The web has become a primary information resource about illnesses and treatments for both medical and non-medical users. Standard web search is by far the most common interface for such information. It is therefore of interest to find out how well web search engines work for diagnostic...... approach for web search engines for rare disease diagnosis which includes 56 real life diagnostic cases, state-of-the-art evaluation measures, and curated information resources. In addition, we introduce FindZebra, a specialized (vertical) rare disease search engine. FindZebra is powered by open source...... medical concepts to demonstrate different ways of displaying the retrieved results to medical experts. Conclusions: Our results indicate that a specialized search engine can improve the diagnostic quality without compromising the ease of use of the currently widely popular web search engines. The proposed...

  16. Epidemic model for information diffusion in web forums: experiments in marketing exchange and political dialog.

    Science.gov (United States)

    Woo, Jiyoung; Chen, Hsinchun

    2016-01-01

    As social media has become more prevalent, its influence on business, politics, and society has become significant. Due to easy access and interaction between large numbers of users, information diffuses in an epidemic style on the web. Understanding the mechanisms of information diffusion through these new publication methods is important for political and marketing purposes. Among social media, web forums, where people in online communities disseminate and receive information, provide a good environment for examining information diffusion. In this paper, we model topic diffusion in web forums using the epidemiology model, the susceptible-infected-recovered (SIR) model, frequently used in previous research to analyze both disease outbreaks and knowledge diffusion. The model was evaluated on a large longitudinal dataset from the web forum of a major retail company and from a general political discussion forum. The fitting results showed that the SIR model is a plausible model to describe the diffusion process of a topic. This research shows that epidemic models can expand their application areas to topic discussion on the web, particularly social media such as web forums.
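The SIR dynamics the paper fits to forum data can be illustrated with a minimal discrete-time simulation; here S is users who may pick up a topic, I is users actively posting on it, and R is users who have stopped. The parameter values below are arbitrary illustrations, not the paper's fitted values:

```python
# Hedged sketch of SIR topic diffusion (discrete time steps; the paper uses
# the continuous model fitted to real forum data).

def sir(s0, i0, r0, beta, gamma, steps):
    s, i, r = float(s0), float(i0), float(r0)
    n = s + i + r
    history = [(s, i, r)]
    for _ in range(steps):
        new_inf = beta * s * i / n      # contacts that spread the topic
        new_rec = gamma * i             # posters losing interest
        s, i, r = s - new_inf, i + new_inf - new_rec, r + new_rec
        history.append((s, i, r))
    return history

hist = sir(s0=990, i0=10, r0=0, beta=0.4, gamma=0.1, steps=100)
peak_i = max(i for _, i, _ in hist)     # height of the discussion burst
```

Fitting would then amount to choosing beta and gamma so that the simulated I-curve tracks the observed number of active posters over time.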

  17. A reverse engineering approach for automatic annotation of Web pages

    NARCIS (Netherlands)

    R. de Virgilio (Roberto); F. Frasincar (Flavius); W. Hop (Walter); S. Lachner (Stephan)

    2013-01-01

    textabstractThe Semantic Web is gaining increasing interest to fulfill the need of sharing, retrieving, and reusing information. Since Web pages are designed to be read by people, not machines, searching and reusing information on the Web is a difficult task without human participation. To this aim

  18. SPADOCK: Adaptive Pipeline Technology for Web System using WebSocket

    Directory of Open Access Journals (Sweden)

    Aries RICHI

    2013-01-01

    Full Text Available As information technology grows into the era of IoT (Internet of Things) and cloud computing, the performance of web applications and web services, which act as the information gateway, becomes an issue. Horizontal quality-of-service improvement through system performance escalation is pursued by engineers and scientists, and gave birth to the BigPipe pipeline technology developed by Facebook. We built SPADOCK, an adaptive pipeline system under a distributed system architecture that utilizes the HTML5 WebSocket, and measured its performance. Parameters used for the measurement include latency, workload, and bandwidth. The results show that SPADOCK reduces serving latency by 68.28% compared with the conventional web, and that it is 20.63% faster than BigPipe.

  19. Resource quantity and quality determine the inter-specific associations between ecosystem engineers and resource users in a cavity-nest web.

    Science.gov (United States)

    Robles, Hugo; Martin, Kathy

    2013-01-01

    While ecosystem engineering is a widespread structural force of ecological communities, the mechanisms underlying the inter-specific associations between ecosystem engineers and resource users are poorly understood. A proper knowledge of these mechanisms is, however, essential to understand how communities are structured. Previous studies suggest that increasing the quantity of resources provided by ecosystem engineers enhances populations of resource users. In a long-term study (1995-2011), we show that the quality of the resources (i.e. tree cavities) provided by ecosystem engineers is also a key feature that explains the inter-specific associations in a tree cavity-nest web. Red-naped sapsuckers (Sphyrapicus nuchalis) provided the most abundant cavities (52% of cavities, 0.49 cavities/ha). These cavities were less likely to be used than other cavity types by mountain bluebirds (Sialia currucoides), but provided numerous nest-sites (41% of nesting cavities) to tree swallows (Tachycineta bicolor). Swallows experienced low reproductive outputs in northern flicker (Colaptes auratus) cavities compared to those in sapsucker cavities (1.1 vs. 2.1 fledglings/nest), but the highly abundant flickers (33% of cavities, 0.25 cavities/ha) provided numerous suitable nest-sites for bluebirds (58%). The relative shortage of cavities supplied by hairy woodpeckers (Picoides villosus) and fungal/insect decay (high-quality nest-sites for both bluebirds and swallows). Because both the quantity and quality of resources supplied by different ecosystem engineers may explain the amount of resources used by each resource user, conservation strategies may require different management actions to be implemented for the key ecosystem engineer of each resource user. We, therefore, urge the incorporation of both resource quantity and quality into models that assess community dynamics to improve conservation actions and our understanding of ecological communities based on ecosystem engineering.

  20. How much data resides in a web collection: how to estimate size of a web collection

    NARCIS (Netherlands)

    Khelghati, Mohammadreza; Hiemstra, Djoerd; van Keulen, Maurice

    2013-01-01

    With the increasing amount of data in deep web sources (hidden from general search engines behind web forms), accessing this data has gained more attention. In the algorithms applied for this purpose, it is knowledge of a data source's size that enables the algorithms to make accurate decisions in
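One common way to estimate a hidden collection's size is capture-recapture: sample documents twice via independent random queries and use the overlap (the Lincoln-Petersen estimator). This is a sketch under our own assumptions, not necessarily the paper's algorithm:

```python
# Hedged sketch: Lincoln-Petersen size estimate from two document samples.
# size ≈ |A| * |B| / |A ∩ B|, assuming both samples are drawn uniformly.

def lincoln_petersen(sample_a, sample_b):
    a, b = set(sample_a), set(sample_b)
    overlap = len(a & b)
    if overlap == 0:
        raise ValueError("no overlap; draw larger samples")
    return len(a) * len(b) / overlap

# Toy data: document ids returned by two query batches.
est = lincoln_petersen(range(0, 100), range(50, 150))
```

With these toy samples the estimate is 100 * 100 / 50 = 200; in practice the quality of the estimate hinges on how close the query-based sampling is to uniform over the source.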

  1. FloorspaceJS - A New, Open Source, Web-Based Geometry Editor for Building Energy Modeling (BEM): Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Macumber, Daniel L [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Horowitz, Scott G [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Schott, Marjorie [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Nolan, Katie [Devetry; Schiller, Brian [Devetry

    2018-03-19

    Across most industries, desktop applications are being rapidly migrated to web applications for a variety of reasons. Web applications are inherently cross-platform, mobile, and easier to distribute than desktop applications. Fueling this trend is a wide range of free, open source libraries and frameworks that make it incredibly easy to develop powerful web applications. The building energy modeling community is just beginning to pick up on these larger trends, with a small but growing number of building energy modeling applications starting on or moving to the web. This paper presents a new, open source, web-based geometry editor for Building Energy Modeling (BEM). The editor is written completely in JavaScript and runs in a modern web browser. The editor works on a custom JSON file format and is designed to be integrated into a variety of web and desktop applications. The web-based editor is available to use as a standalone web application at: https://nrel.github.io/openstudio-geometry-editor/. An example integration is demonstrated with the OpenStudio desktop application. Finally, the editor can be easily integrated with a wide range of possible building energy modeling web applications.

  2. Exploring the academic invisible web

    OpenAIRE

    Lewandowski, Dirk

    2006-01-01

    The Invisible Web is often discussed in the academic context, where its contents (mainly in the form of databases) are of great importance. But this discussion is mainly based on seminal research done by Sherman and Price (2001) and by Bergman (2001). We focus on the types of Invisible Web content relevant for academics and the improvements made by search engines to deal with these content types. In addition, we question the volume of the Invisible Web as stated by Bergman. Ou...

  3. Finding Specification Pages from the Web

    Science.gov (United States)

    Yoshinaga, Naoki; Torisawa, Kentaro

    This paper presents a method of finding a specification page on the Web for a given object (e.g., ``Ch. d'Yquem'') and its class label (e.g., ``wine''). A specification page for an object is a Web page which gives concise attribute-value information about the object (e.g., ``county''-``Sauternes'') in well formatted structures. A simple unsupervised method using layout and symbolic decoration cues was applied to a large number of the Web pages to acquire candidate attributes for each class (e.g., ``county'' for a class ``wine''). We then filter out irrelevant words from the putative attributes through an author-aware scoring function that we called site frequency. We used the acquired attributes to select a representative specification page for a given object from the Web pages retrieved by a normal search engine. Experimental results revealed that our system greatly outperformed the normal search engine in terms of this specification retrieval.
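The "site frequency" idea, as we read it, can be sketched as follows: a candidate attribute is kept only if it appears on pages from many distinct sites (authors), so that one verbose site cannot dominate the attribute list for a class. The scoring function and threshold below are our illustration, not the paper's exact formula:

```python
# Hedged sketch of author-aware attribute filtering by site frequency.
from collections import defaultdict
from urllib.parse import urlparse

def site_frequency(pages):
    """pages: list of (url, candidate_attributes) pairs for one class, e.g. 'wine'."""
    sites = defaultdict(set)
    for url, attrs in pages:
        host = urlparse(url).netloc        # the "author" signal: distinct sites
        for attr in attrs:
            sites[attr].add(host)
    return {attr: len(hosts) for attr, hosts in sites.items()}

pages = [
    ("http://a.example/wine1", ["county", "vintage", "font-size"]),
    ("http://a.example/wine2", ["county", "font-size"]),
    ("http://b.example/wines", ["county", "vintage"]),
    ("http://c.example/list",  ["county"]),
]
scores = site_frequency(pages)
attrs = [a for a, s in scores.items() if s >= 2]   # drops single-site noise
```

Here "county" scores 3 (three distinct sites) while the layout artifact "font-size" scores 1 despite appearing on two pages, so the filter removes it.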

  4. Modeling the Interaction Between Semantic Agents and Semantic Web Services Using MDA Approach

    NARCIS (Netherlands)

    Kardas, Geylani; Göknil, Arda; Dikenelli, Oguz; Topaloglu, N. Yasemin

    2007-01-01

    In this paper, we present our metamodeling approach for integrating semantic web services and semantic web enabled agents under Model Driven Architecture (MDA) view which defines a conceptual framework to realize model driven development. We believe that agents must have well designed environment

  5. Surfing the World Wide Web to Education Hot-Spots.

    Science.gov (United States)

    Dyrli, Odvard Egil

    1995-01-01

    Provides a brief explanation of Web browsers and their use, as well as technical information for those considering access to the WWW (World Wide Web). Curriculum resources and addresses to useful Web sites are included. Sidebars show sample searches using Yahoo and Lycos search engines, and a list of recommended Web resources. (JKP)

  6. WebPIE : A web-scale parallel inference engine using MapReduce

    NARCIS (Netherlands)

    Urbani, Jacopo; Kotoulas, Spyros; Maassen, Jason; Van Harmelen, Frank; Bal, Henri

    2012-01-01

    The large amount of Semantic Web data and its fast growth pose a significant computational challenge in performing efficient and scalable reasoning. On a large scale, the resources of single machines are no longer sufficient and we are required to distribute the process to improve performance. The

  7. Algorithms and Models for the Web Graph

    NARCIS (Netherlands)

    Gleich, David F.; Komjathy, Julia; Litvak, Nelli

    2015-01-01

    This volume contains the papers presented at WAW2015, the 12th Workshop on Algorithms and Models for the Web-Graph held during December 10–11, 2015, in Eindhoven. There were 24 submissions. Each submission was reviewed by at least one, and on average two, Program Committee members. The committee

  8. Gas Turbine Engine Behavioral Modeling

    OpenAIRE

    Meyer, Richard T; DeCarlo, Raymond A.; Pekarek, Steve; Doktorcik, Chris

    2014-01-01

    This paper develops and validates a power flow behavioral model of a gas turbine engine with a gas generator and free power turbine. “Simple” mathematical expressions to describe the engine’s power flow are derived from an understanding of basic thermodynamic and mechanical interactions taking place within the engine. The engine behavioral model presented is suitable for developing a supervisory level controller of an electrical power system that contains the engine connected to a gener...

  9. Modeling student success in engineering education

    Science.gov (United States)

    Jin, Qu

    In order for the United States to maintain its global competitiveness, the long-term success of our engineering students in specific courses, programs, and colleges is now, more than ever, an extremely high priority. Numerous studies have focused on factors that impact student success, namely academic performance, retention, and/or graduation. However, only a limited number of works have systematically developed models to investigate important factors and to predict student success in engineering. Therefore, this research presents three separate but highly connected investigations to address this gap. The first investigation involves explaining and predicting engineering students' success in Calculus I courses using statistical models. The participants were more than 4000 first-year engineering students (cohort years 2004-2008) who enrolled in Calculus I courses during the first semester at a large Midwestern university. Predictions from the statistical models were proposed for placing engineering students into calculus courses. Success rates in Calculus IA improved by 12% when predictions from the developed models were used instead of the traditional placement method. The results showed that these statistical models provided a more accurate calculus placement method than traditional placement methods and helped improve success rates in those courses. In the second investigation, multi-outcome and single-outcome neural network models were designed to understand and to predict first-year retention and first-year GPA of engineering students. The participants were more than 3000 first-year engineering students (cohort years 2004-2005) enrolled in a large Midwestern university. The independent variables include both high school academic performance factors and affective factors measured prior to entry. The prediction performances of the multi-outcome and single-outcome models were comparable. The ability to predict cumulative GPA at the end of an engineering

  10. Construction of a bibliographic information database and a web directory for the nuclear science and engineering

    Energy Technology Data Exchange (ETDEWEB)

    Oh, Jeong Hoon; Kim, Tae Whan; Lee, Ji Ho; Chun, Young Chun; Yu, An Na

    2005-11-15

    The objective of this project is to construct a bibliographic information database and a web directory for the nuclear field. Its construction is timely and important, because nuclear science and technology, as a giant and complex engineering discipline, has a considerable effect on many other sciences and technologies. We aimed to firmly build up a basis for efficient management of the bibliographic information database and the web directory in the nuclear field. The results achieved in this year's project are as follows: first, construction of the bibliographic information database in the nuclear field (target: 1,500 titles; research reports: 1,000 titles, full-text reports: 250 titles, full-text articles: 250 titles); second, completion of the web directory in the nuclear field using SWING (total achieved: 2,613 titles). We plan to actively offer this information to the general public interested in the nuclear field and to experts through the bibliographic information database on KAERI's home page, KAERI's electronic library, and other related sites, as well as through participation at various seminars and meetings related to the nuclear field.

  11. Modeling Views for Semantic Web Using eXtensible Semantic (XSemantic) Nets

    NARCIS (Netherlands)

    Rajugan, R.; Chang, E.; Feng, L.; Dillon, T.; meersman, R; Tari, Z; herrero, p; Méndez, G.; Cavedon, L.; Martin, D.; Hinze, A.; Buchanan, G.

    2005-01-01

    The emergence of the Semantic Web (SW) and related technologies promises to make the web a meaningful experience. Yet, high-level modeling, design, and querying techniques prove to be a challenging task for organizations that are hoping to utilize the SW paradigm for their industrial applications, which

  12. Clinical Predictive Modeling Development and Deployment through FHIR Web Services.

    Science.gov (United States)

    Khalilia, Mohammed; Choi, Myung; Henderson, Amelia; Iyengar, Sneha; Braunstein, Mark; Sun, Jimeng

    2015-01-01

    Clinical predictive modeling involves two challenging tasks: model development and model deployment. In this paper we demonstrate a software architecture for developing and deploying clinical predictive models using web services via the Health Level 7 (HL7) Fast Healthcare Interoperability Resources (FHIR) standard. The services enable model development using electronic health records (EHRs) stored in OMOP CDM databases and model deployment for scoring individual patients through FHIR resources. The MIMIC2 ICU dataset and a synthetic outpatient dataset were transformed into OMOP CDM databases for predictive model development. The resulting predictive models are deployed as FHIR resources, which receive requests of patient information, perform prediction against the deployed predictive model and respond with prediction scores. To assess the practicality of this approach we evaluated the response and prediction time of the FHIR modeling web services. We found the system to be reasonably fast with one second total response time per patient prediction.
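The deploy-side scoring step can be sketched as follows: a FHIR-Parameters-style request carrying patient features comes in, the deployed model scores it, and a Parameters-style response goes back. The feature names, weights, and resource shape below are invented for illustration; a real deployment would follow the FHIR specification and the trained model exactly:

```python
# Hedged sketch of scoring one patient behind a FHIR-style web service.
import math

MODEL = {"age": 0.04, "heart_rate": 0.02, "creatinine": 0.9}  # trained offline
INTERCEPT = -6.0

def score(parameters):
    """parameters: FHIR-Parameters-like dict {"parameter": [{"name":..., "value":...}]}."""
    features = {p["name"]: p["value"] for p in parameters["parameter"]}
    z = INTERCEPT + sum(w * features.get(name, 0.0) for name, w in MODEL.items())
    risk = 1.0 / (1.0 + math.exp(-z))          # logistic model prediction
    return {"resourceType": "Parameters",
            "parameter": [{"name": "probability", "value": round(risk, 3)}]}

request = {"parameter": [{"name": "age", "value": 70},
                         {"name": "heart_rate", "value": 95},
                         {"name": "creatinine", "value": 1.4}]}
response = score(request)
```

In the architecture described above, the function body would sit behind an HTTP endpoint and the features would be extracted from EHR data in an OMOP CDM database rather than passed in directly.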

  13. A novel architecture for information retrieval system based on semantic web

    Science.gov (United States)

    Zhang, Hui

    2011-12-01

    Nowadays, the web has enabled an explosive growth of information sharing (there are currently over 4 billion pages covering most areas of human endeavor), so the web faces a new challenge of information overload. The challenge now before us is not only to help people locate relevant information precisely, but also to access and aggregate a variety of information from different resources automatically. Current web documents are in human-oriented formats suitable for presentation, but machines cannot understand their meaning. To address this issue, Berners-Lee proposed the concept of the semantic web. With semantic web technology, web information can be understood and processed by machines, which opens new possibilities for automatic web information processing. A main problem of semantic web information retrieval is that when an information retrieval system lacks sufficient knowledge, it returns a large number of meaningless results to users. In this paper, we present the architecture of an information retrieval system based on the semantic web. In addition, our system employs an inference engine to check whether a query should be posed to the keyword-based search engine or to the semantic search engine.
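    The final routing step can be illustrated with a minimal sketch. The ontology contents and the coverage rule below are invented assumptions for illustration, not the paper's inference engine:

    ```python
    # Toy "inference engine": route a query to the semantic search engine
    # only when the system's ontology covers the query terms; otherwise
    # fall back to keyword-based search. Ontology data is invented.
    ONTOLOGY = {
        "protein": {"gene", "enzyme"},
        "reactor": {"coolant", "moderator"},
    }

    def route_query(query: str) -> str:
        terms = set(query.lower().split())
        # Concepts whose name or related terms appear in the query.
        known = {concept for concept, related in ONTOLOGY.items()
                 if concept in terms or related & terms}
        return "semantic" if known else "keyword"

    print(route_query("enzyme binding sites"))    # covered -> semantic
    print(route_query("cheap flights to paris"))  # uncovered -> keyword
    ```

    A production system would consult a reasoner over an OWL/RDF knowledge base rather than a dictionary, but the decision structure is the same.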

  14. A security modeling approach for web-service-based business processes

    DEFF Research Database (Denmark)

    Jensen, Meiko; Feja, Sven

    2009-01-01

    The rising need for security in SOA applications requires better support for management of non-functional properties in web-based business processes. Here, the model-driven approach may provide valuable benefits in terms of maintainability and deployment. Apart from modeling the pure functionality of a process, the consideration of security properties at the level of a process model is a promising approach. In this work-in-progress paper we present an extension to the ARIS SOA Architect that is capable of modeling security requirements as a separate security model view. Further we provide a transformation that automatically derives WS-SecurityPolicy-conformant security policies from the process model, which in conjunction with the generated WS-BPEL processes and WSDL documents provides the ability to deploy and run the complete security-enhanced process based on Web Service technology.

  15. Cost estimation in software engineering projects with web components development

    Directory of Open Access Journals (Sweden)

    Javier de Andrés

    2015-01-01

    Full Text Available Many models have been proposed for cost prediction in software projects, some of them specifically oriented toward Web projects. This paper analyzes whether Web-specific models are justified, by examining the differential behavior of costs between Web and non-Web software development projects. Two aspects of cost estimation are analyzed: diseconomies of scale, and the impact of certain characteristics of these projects that are used as cost drivers. Two hypotheses are stated: (a) in these projects the diseconomies of scale are greater, and (b) the cost increase caused by the cost drivers is smaller for Web projects. These hypotheses were tested by analyzing a set of real projects. The results suggest that both hypotheses hold. Therefore, the main contribution of this research to the literature is that the development of specific models for Web projects is justified.

  16. Web sites that work secrets from winning web sites

    CERN Document Server

    Smith, Jon

    2012-01-01

    Leading web site entrepreneur Jon Smith has condensed the secrets of his success into 52 inspiring ideas that even the most hopeless technophobe can implement. The brilliant tips and practical advice in Web sites that work will uplift and transform any website, from the simplest to the most complicated. It deals with everything from fundamentals such as how to assess the effectiveness of a website and how to get a site listed on the most popular search engines to more sophisticated challenges like creating a community and dealing with legal requirements. Straight-talking, practical and humorous

  17. Using ant-behavior-based simulation model AntWeb to improve website organization

    Science.gov (United States)

    Li, Weigang; Pinheiro Dib, Marcos V.; Teles, Wesley M.; Morais de Andrade, Vlaudemir; Alves de Melo, Alba C. M.; Cariolano, Judas T.

    2002-03-01

    Some web usage mining algorithms have shown the potential to find differences between a website's organization and the organization expected by its visitors. However, there is still no efficient method or criterion for a web administrator to measure the performance of a modification. In this paper, we developed AntWeb, a model inspired by ants' behavior, to simulate sequences of visits to a website and measure the efficiency of the web structure. We implemented a web usage mining algorithm using backtracking on the intranet website of Politec Informatic Ltd., Brazil. We defined throughput (the number of visitors who reach their target pages per time unit, relative to the total number of visitors) as an index of the website's performance. We also used the links in a web page to represent the effect of visitors' pheromone trails. For every modification to the website organization, for example placing a link from the expected location to the target object, the simulation reports the throughput as a quick answer about the modification. The experiment showed the stability of our simulation model, and a positive effect of the modification on the Politec intranet website.
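    The throughput index described above can be sketched as a small simulation. The site graph, step budget, and page names here are invented for illustration, not the Politec data or the authors' implementation:

    ```python
    import random

    # Visitors random-walk the link graph toward a target page; throughput
    # is the fraction that reach it within a step budget.
    SITE = {  # page -> outgoing links (toy data)
        "home": ["products", "about"],
        "about": ["home"],
        "products": ["item", "home"],
        "item": [],
    }

    def throughput(site, start, target, visitors=1000, max_steps=4, seed=0):
        rng = random.Random(seed)
        reached = 0
        for _ in range(visitors):
            page = start
            for _ in range(max_steps):
                if page == target:
                    reached += 1
                    break
                links = site.get(page, [])
                if not links:
                    break
                page = rng.choice(links)
        return reached / visitors

    before = throughput(SITE, "home", "item")
    # Simulated modification: add a direct link from the expected location.
    SITE["home"] = SITE["home"] + ["item"]
    after = throughput(SITE, "home", "item")
    print(before, after)
    ```

    Comparing the index before and after a candidate change mirrors the paper's "quick answer" use of the simulation; a pheromone term weighting the link choice would be the next refinement.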

  18. Hydrological Modeling and WEB-GIS for the Water Resource Management

    Science.gov (United States)

    Pierleoni, A.; Bellezza, M.; Casadei, S.; Manciola, P.

    2006-12-01

    Water resources are a strategic natural resource, yet they can be extremely susceptible to degradation. Increasing demand from multipurpose uses, which are often in competition with one another, strains the very concept of sustainability and gives rise to quality and quantity degradation of water resources. In this context, water resource management takes on a more important role, especially when environmental demands are considered alongside the traditional civil, industrial and agronomic uses. By environmental demands we mean: preserving minimum flows, conserving ecosystems and biodiversity, protecting and improving the environment, and maintaining recreational facilities. In the present work, two software tools are presented; they combine the scientific aspects of these issues with a feasible and widely accessible application of mathematical modeling to techno-operative practice within a sustainable management policy for the water resource at the basin scale. The first model, for evaluating the available surface water resource, bases its algorithms on regionalization procedures for flow parameters deduced from geomorphologic features of the basin (BFI, Area), and outputs a set of duration curves (DC) of the natural, measurable (natural after withdrawal), and residual (discharge usable for dissipative use) flow. The hydrological modeling, combined with a GIS engine, processes the dataset and regionalizes the information for each section of the hydrographic network, yielding information about the effect of upriver withdrawals, in terms of evaluation parameters (measurable DC), in order to maintain an optimal water supply along the entire downstream network. 
This model, provided with a WEB interface developed in PERL and connected to a MySQL database, has also been tested at the basin and sub-basin scale as an

  19. Focused Crawling of the Deep Web Using Service Class Descriptions

    Energy Technology Data Exchange (ETDEWEB)

    Rocco, D; Liu, L; Critchlow, T

    2004-06-21

    Dynamic Web data sources--sometimes known collectively as the Deep Web--increase the utility of the Web by providing intuitive access to data repositories anywhere that Web access is available. Deep Web services provide access to real-time information, like entertainment event listings, or present a Web interface to large databases or other data repositories. Recent studies suggest that the size and growth rate of the dynamic Web greatly exceed those of the static Web, yet dynamic content is often ignored by existing search engine indexers owing to the technical challenges that arise when attempting to search the Deep Web. To address these challenges, we present DynaBot, a service-centric crawler for discovering and clustering Deep Web sources offering dynamic content. DynaBot has three unique characteristics. First, DynaBot utilizes a service class model of the Web implemented through the construction of service class descriptions (SCDs). Second, DynaBot employs a modular, self-tuning system architecture for focused crawling of the Deep Web using service class descriptions. Third, DynaBot incorporates methods and algorithms for efficient probing of the Deep Web and for discovering and clustering Deep Web sources and services through SCD-based service matching analysis. Our experimental results demonstrate the effectiveness of the service class discovery, probing, and matching algorithms and suggest techniques for efficiently managing service discovery in the face of the immense scale of the Deep Web.
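    The SCD-based matching idea can be illustrated as a parameter-set comparison between a discovered form and a service class description. The field names and the Jaccard criterion below are assumptions for illustration, not DynaBot's actual SCD format or algorithm:

    ```python
    # Match candidate Deep Web forms against a service class by comparing
    # the overlap of their input parameters with the expected inputs.
    def jaccard(a, b):
        a, b = set(a), set(b)
        return len(a & b) / len(a | b)

    # Hypothetical service class: gene-lookup services.
    SERVICE_CLASS = {"gene", "organism", "sequence"}

    forms = {  # form fields scraped from two imagined sites
        "siteA": {"gene", "organism", "format"},
        "siteB": {"departure", "arrival", "date"},
    }

    matches = {name: round(jaccard(fields, SERVICE_CLASS), 2)
               for name, fields in forms.items()}
    print(matches)  # siteA resembles the class; siteB does not
    ```

    A crawler would use such a score to decide which sources to cluster under a service class and which to discard.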

  20. A Dynamic Defense Modeling and Simulation Methodology using Semantic Web Services

    Directory of Open Access Journals (Sweden)

    Kangsun Lee

    2010-04-01

    Full Text Available Defense modeling and simulation requires interoperable and autonomous federates in order to fully simulate the complex behavior of war-fighters and to dynamically adapt to various war-game events, commands and controls. In this paper, we propose a semantic-web-service-based methodology for developing war-game simulations. Our methodology encapsulates war-game logic in a set of web services with additional semantic information in WSDL (Web Service Description Language) and OWL (Web Ontology Language). By utilizing the dynamic discovery and binding power of semantic web services, we are able to dynamically reconfigure federates according to various simulation events. An ASuW (Anti-Surface Warfare) simulator was constructed to demonstrate the methodology, and it successfully shows that the level of interoperability and autonomy can be greatly improved.

  1. Pro JavaScript for web apps

    CERN Document Server

    Freeman, Adam

    2012-01-01

    JavaScript is the engine behind every web app, and a solid knowledge of it is essential for all modern web developers. Pro JavaScript for Web Apps gives you all of the information that you need to create professional, optimized, and efficient JavaScript applications that will run across all devices. It takes you through all aspects of modern JavaScript application creation, showing you how to combine JavaScript with the new features of HTML5 and CSS3 to make the most of the new web technologies. The focus of the book is on creating professional web applications, ensuring that your app provides

  2. Search engines that learn from their users

    NARCIS (Netherlands)

    Schuth, A.G.

    2016-01-01

    More than half the world’s population uses web search engines, resulting in over half a billion search queries every single day. For many people web search engines are among the first resources they go to when a question arises. Moreover, search engines have for many become the most trusted route to

  3. Web X-Ray: Developing and Adopting Web Best Practices in Enterprises

    Directory of Open Access Journals (Sweden)

    Reinaldo Ferreira

    2016-12-01

    Full Text Available The adoption of Semantic Web technologies constitutes a promising approach to data structuring and integration, both for public and private usage. While these technologies have been around for some time, their adoption is behind overall expectations, particularly in the case of enterprises. With that in mind, we developed a Semantic Web Implementation Model that measures and facilitates the implementation of the technology. The advantages of using the proposed model are two-fold: the model serves as a guide for driving the implementation of the Semantic Web, and it helps to evaluate the impact of introducing the technology. The model was adopted by 19 enterprises in an Action Research intervention of one year with promising results: according to the model's scale, on average, all enterprises evolved from a 6% evaluation to 46% during that period. Furthermore, practical implementation recommendations, a typical consulting tool, were developed and adopted during the project by all enterprises, providing important guidelines for the identification of a development path that may be adopted on a larger scale. Meanwhile, the project also showed that most enterprises were interested in an even broader scope for the Implementation Model, and the ambition of an "All Web Technologies" approach arose: one model that could embrace the observable overlapping of different Web generations, namely the Web of Documents, the Social Web, the Web of Data and, ultimately, the Web of Context; one model that could combine evaluation and guidance for all enterprises to follow. That is the goal of the ongoing "Project Web X-ray", which aims to involve 200 enterprises in the adoption of best practices that may lead to their business development based on Web technologies. 
This paper presents a case of how Action Research promoted the simultaneous advancement of academic research and enterprise development and introduces the framework and opportunities

  4. Computational modeling for eco engineering: Making the connections between engineering and ecology (Invited)

    Science.gov (United States)

    Bowles, C.

    2013-12-01

    Ecological engineering, or eco engineering, is an emerging field in the study of integrating ecology and engineering, concerned with the design, monitoring, and construction of ecosystems. According to Mitsch (1996), 'the design of sustainable ecosystems intends to integrate human society with its natural environment for the benefit of both'. Eco engineering emerged as a new idea in the early 1960s, and the concept has seen refinement since then. As a commonly practiced field of engineering it is relatively novel. Howard Odum (1963) and others first introduced it as 'utilizing natural energy sources as the predominant input to manipulate and control environmental systems'. Mitsch and Jorgensen (1989) were the first to define eco engineering, to provide eco engineering principles, and to offer conceptual eco engineering models. Later they refined the definition and increased the number of principles. They suggested that the goals of eco engineering are: a) the restoration of ecosystems that have been substantially disturbed by human activities such as environmental pollution or land disturbance, and b) the development of new sustainable ecosystems that have both human and ecological values. Here a more detailed overview of eco engineering is provided, particularly with regard to how engineers and ecologists are utilizing multi-dimensional computational models to link ecology and engineering, resulting in increasingly successful project implementation. Descriptions are provided of 1-, 2- and 3-dimensional hydrodynamic models and their use in small- and large-scale applications. A range of conceptual models that have been developed to aid in the creation of linkages between ecology and engineering are discussed. Finally, several case studies that link ecology and engineering via computational modeling are provided. 
These studies include localized stream rehabilitation, spawning gravel enhancement on a large river system, and watershed-wide floodplain modeling of

  5. Optimization in engineering models and algorithms

    CERN Document Server

    Sioshansi, Ramteen

    2017-01-01

    This textbook covers the fundamentals of optimization, including linear, mixed-integer linear, nonlinear, and dynamic optimization techniques, with a clear engineering focus. It carefully describes classical optimization models and algorithms using an engineering problem-solving perspective, and emphasizes modeling issues using many real-world examples related to a variety of application areas. Providing an appropriate blend of practical applications and optimization theory makes the text useful to both practitioners and students, and gives the reader a good sense of the power of optimization and the potential difficulties in applying optimization to modeling real-world systems. The book is intended for undergraduate and graduate-level teaching in industrial engineering and other engineering specialties. It is also of use to industry practitioners, due to the inclusion of real-world applications, opening the door to advanced courses on both modeling and algorithm development within the industrial engineering ...

  6. Design and implementation of space physics multi-model application integration based on web

    Science.gov (United States)

    Jiang, Wenping; Zou, Ziming

    independent modules according to different business needs is applied to solve the problem of the independence of the physical space between multiple models. The classic MVC (Model View Controller) software design pattern is used to build the architecture of the space physics multi-model application integrated system. JSP + servlet + JavaBean technology is used to integrate the web application programs of the space physics multi-models; it solves the problem of multiple users requesting the same model-computing job and effectively balances computing tasks across servers. In addition, we also completed the following tasks: establishing a standard graphical user interface based on a Java Applet application program; designing the interface between model computing and visualization of model-computing results; realizing three-dimensional network visualization without plug-ins; using Java3D technology to achieve interaction with a three-dimensional network scene; and improving the ability to interact with web pages and dynamic execution capabilities, including rendering of three-dimensional graphics and control of fonts and color. Through the design and implementation of the SPMAIS based on the Web, we provide an online computing and application runtime environment for space physics multi-models. Practical application shows that researchers can benefit from our system in space physics research and engineering applications.

  7. GeNemo: a search engine for web-based functional genomic data.

    Science.gov (United States)

    Zhang, Yongqing; Cao, Xiaoyi; Zhong, Sheng

    2016-07-08

    A set of new data types emerged from functional genomic assays, including ChIP-seq, DNase-seq, FAIRE-seq and others. The results are typically stored as genome-wide intensities (WIG/bigWig files) or functional genomic regions (peak/BED files). These data types present new challenges to big data science. Here, we present GeNemo, a web-based search engine for functional genomic data. GeNemo searches user-input data against online functional genomic datasets, including the entire collection of ENCODE and mouse ENCODE datasets. Unlike text-based search engines, GeNemo's searches are based on pattern matching of functional genomic regions. This distinguishes GeNemo from text or DNA sequence searches. The user can input any complete or partial functional genomic dataset, for example, a binding intensity file (bigWig) or a peak file. GeNemo reports any genomic regions, ranging from hundred bases to hundred thousand bases, from any of the online ENCODE datasets that share similar functional (binding, modification, accessibility) patterns. This is enabled by a Markov Chain Monte Carlo-based maximization process, executed on up to 24 parallel computing threads. By clicking on a search result, the user can visually compare her/his data with the found datasets and navigate the identified genomic regions. GeNemo is available at www.genemo.org. © The Author(s) 2016. Published by Oxford University Press on behalf of Nucleic Acids Research.
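    The notion of matching by functional genomic regions rather than text can be illustrated with a toy overlap score over BED-like intervals. This is an invented simplification for illustration, not GeNemo's Markov Chain Monte Carlo procedure:

    ```python
    # Score a query peak set against a dataset peak set by the fraction of
    # query coverage that the dataset's peaks overlap. Intervals are
    # (start, end) pairs on one chromosome; coordinates are invented.
    def overlap(a, b):
        return max(0, min(a[1], b[1]) - max(a[0], b[0]))

    def similarity(query_peaks, dataset_peaks):
        """Fraction of total query bases overlapped by the best-matching
        dataset peak (a simplification: one partner per query peak)."""
        covered = sum(max(overlap(q, d) for d in dataset_peaks)
                      for q in query_peaks)
        total = sum(end - start for start, end in query_peaks)
        return covered / total

    query = [(100, 200), (500, 600)]
    dataset = [(150, 250), (580, 700)]
    print(similarity(query, dataset))  # 70 of 200 query bases overlap -> 0.35
    ```

    Ranking online datasets by such a region-based score, rather than by keywords, is what distinguishes this style of search from a text engine.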

  8. Web server for priority ordered multimedia services

    Science.gov (United States)

    Celenk, Mehmet; Godavari, Rakesh K.; Vetnes, Vermund

    2001-10-01

    In this work, our aim is to provide finer priority levels in the design of a general-purpose Web multimedia server with provisions for CM services. The types of services provided include reading/writing a web page, downloading/uploading an audio/video stream, navigating the Web through browsing, and interactive video teleconferencing. The selected priority encoding levels for such operations follow the order of admin read/write, hot page CM and Web multicasting, CM read, Web read, CM write and Web write. Hot pages are the most requested CM streams (e.g., the newest movies, video clips, and HDTV channels) and Web pages (e.g., portal pages of the commercial Internet search engines). Maintaining a list of these hot Web pages and CM streams in a content-addressable buffer enables a server to multicast hot streams with lower latency and higher system throughput. Cold Web pages and CM streams are treated as regular Web and CM requests. Interactive CM operations such as pause (P), resume (R), fast-forward (FF), and rewind (RW) have to be executed without allocation of extra resources. The proposed multimedia server model is part of a distributed network with load-balancing schedulers. The SM is connected to an integrated disk scheduler (IDS), which supervises an allocated disk manager. The IDS follows the same priority handling as the SM, and implements a SCAN disk-scheduling method for improved disk access and higher throughput. Different disks are used for the Web and CM services in order to meet the QoS requirements of CM services. The IDS output is forwarded to an Integrated Transmission Scheduler (ITS). The ITS creates a priority-ordered buffering of the retrieved Web pages and CM data streams that are fed into an auto-regressive moving average (ARMA) based traffic-shaping circuit before being transmitted through the network.
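    The stated priority order can be sketched as a heap-based scheduler. The numeric priority values and request-class names below are assumptions for illustration; the paper does not specify this encoding:

    ```python
    import heapq

    # Lower value = served first, following the order: admin read/write,
    # hot page CM and Web multicasting, CM read, Web read, CM write, Web write.
    PRIORITY = {
        "admin_rw": 0,
        "hot_page_multicast": 1,
        "cm_read": 2,
        "web_read": 3,
        "cm_write": 4,
        "web_write": 5,
    }

    class RequestScheduler:
        def __init__(self):
            self._heap = []
            self._seq = 0  # tiebreaker: FIFO within a priority level

        def submit(self, kind, payload):
            heapq.heappush(self._heap, (PRIORITY[kind], self._seq, kind, payload))
            self._seq += 1

        def next_request(self):
            _, _, kind, payload = heapq.heappop(self._heap)
            return kind, payload

    sched = RequestScheduler()
    sched.submit("web_write", "upload.html")
    sched.submit("cm_read", "movie.mp4")
    sched.submit("admin_rw", "config")
    print(sched.next_request())  # ('admin_rw', 'config')
    ```

    The same discipline applied again at the disk layer is essentially what the IDS does, with SCAN ordering substituted within each service class.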

  9. Acorn: A grid computing system for constraint based modeling and visualization of the genome scale metabolic reaction networks via a web interface

    Directory of Open Access Journals (Sweden)

    Bushell Michael E

    2011-05-01

    Full Text Available Abstract Background Constraint-based approaches facilitate the prediction of cellular metabolic capabilities, based, in turn, on predictions of the repertoire of enzymes encoded in the genome. Recently, genome annotations have been used to reconstruct genome scale metabolic reaction networks for numerous species, including Homo sapiens, which allow simulations that provide valuable insights into topics including predictions of gene essentiality of pathogens, interpretation of genetic polymorphism in metabolic disease syndromes and suggestions for novel approaches to microbial metabolic engineering. These constraint-based simulations are being integrated with functional genomics portals, an activity that requires efficient implementation of the constraint-based simulations in a web-based environment. Results Here, we present Acorn, an open source (GNU GPL grid computing system for constraint-based simulations of genome scale metabolic reaction networks within an interactive web environment. The grid-based architecture allows efficient execution of computationally intensive, iterative protocols such as Flux Variability Analysis, which can be readily scaled up as the numbers of models (and users increase. The web interface uses AJAX, which facilitates efficient model browsing and other search functions, and intuitive implementation of appropriate simulation conditions. Research groups can install Acorn locally and create user accounts. Users can also import models in the familiar SBML format and link reaction formulas to major functional genomics portals of choice. Selected models and simulation results can be shared between different users and made publicly available. Users can construct pathway map layouts and import them into the server using a desktop editor integrated within the system. Pathway maps are then used to visualise numerical results within the web environment. 
To illustrate these features we have deployed Acorn and created a

  10. Detection And Classification Of Web Robots With Honeypots

    Science.gov (United States)

    2016-03-01

    While Web robots are valuable tools for indexing content on the Web, they can also be malicious through phishing, spamming, or performing targeted attacks. In this thesis, we study an approach... The growth of these programs has been attributed to the explosion in content and user-generated social media on the Internet. Web search engines like Google require

  11. Social Networking on the Semantic Web

    Science.gov (United States)

    Finin, Tim; Ding, Li; Zhou, Lina; Joshi, Anupam

    2005-01-01

    Purpose: Aims to investigate the way that the semantic web is being used to represent and process social network information. Design/methodology/approach: The Swoogle semantic web search engine was used to construct several large data sets of Resource Description Framework (RDF) documents with social network information that were encoded using the…

  12. IntegromeDB: an integrated system and biological search engine.

    Science.gov (United States)

    Baitaluk, Michael; Kozhenkov, Sergey; Dubinina, Yulia; Ponomarenko, Julia

    2012-01-19

    With the growth of biological data in volume and heterogeneity, web search engines have become key tools for researchers. However, general-purpose search engines are not specialized for the search of biological data. Here, we present an approach to developing a biological web search engine based on Semantic Web technologies and demonstrate its implementation for retrieving gene- and protein-centered knowledge. The engine is available at http://www.integromedb.org. The IntegromeDB search engine allows scanning of data on gene regulation, gene expression, protein-protein interactions, pathways, metagenomics, mutations, diseases, and other gene- and protein-related data that are automatically retrieved from publicly available databases and web pages using biological ontologies. To perfect the resource design and usability, we welcome and encourage community feedback.

  13. Marketing plan for a web shop business

    OpenAIRE

    Koskivaara, Leonilla

    2014-01-01

    The Internet has changed the buying behavior of consumers during the past years, and companies need to adapt to the changes. A web shop is an important sales channel for today's companies. Advantages of a web shop business include cost effectiveness and the potential to do business globally. Challenges of a web shop business include search engine optimization and running both a retail store and a web shop at the same time. Social media has become an important marketing channel and has bec...

  14. CHIME : service-oriented framework for adaptive web-based systems

    NARCIS (Netherlands)

    Chepegin, V.; Aroyo, L.M.; De Bra, P.M.E.; Houben, G.J.P.M.; De Bra, P.M.E.

    2003-01-01

    In this paper we present our view on how the current development of knowledge engineering in the context of Semantic Web can contribute to the better applicability, reusability and sharability of adaptive web-based systems. We propose a service-oriented framework for adaptive web-based systems,

  15. Overview of the TREC 2014 Federated Web Search Track

    NARCIS (Netherlands)

    Demeester, Thomas; Trieschnigg, Rudolf Berend; Nguyen, Dong-Phuong; Zhou, Ke; Hiemstra, Djoerd

    2014-01-01

    The TREC Federated Web Search track facilitates research in topics related to federated web search by providing a large, realistic data collection sampled from a multitude of online search engines. The FedWeb 2013 challenges of Resource Selection and Results Merging are again included in

  16. Earth Science Mining Web Services

    Science.gov (United States)

    Pham, Long; Lynnes, Christopher; Hegde, Mahabaleshwa; Graves, Sara; Ramachandran, Rahul; Maskey, Manil; Keiser, Ken

    2008-01-01

    To allow scientists further capabilities in the area of data mining and web services, the Goddard Earth Sciences Data and Information Services Center (GES DISC) and researchers at the University of Alabama in Huntsville (UAH) have developed a system to mine data at the source without the need for network transfers. The system has been constructed by linking together several pre-existing technologies: the Simple Scalable Script-based Science Processor for Measurements (S4PM), a processing engine at the GES DISC; the Algorithm Development and Mining (ADaM) system, a data mining toolkit from UAH that can be configured in a variety of ways to create customized mining processes; ActiveBPEL, a workflow execution engine based on BPEL (Business Process Execution Language); XBaya, a graphical workflow composer; and the EOS Clearinghouse (ECHO). XBaya is used to construct an analysis workflow at UAH using ADaM components, which are also installed remotely at the GES DISC, wrapped as Web Services. The S4PM processing engine searches ECHO for data using space-time criteria, staging the data to cache and allowing the ActiveBPEL engine to remotely orchestrate the processing workflow within S4PM. As mining completes, the output is placed in an FTP holding area for the end user. The goals are to give users control over the data they want to process, while mining data at the data source using the server's resources rather than transferring the full volume over the internet. These diverse technologies have been infused into a functioning, distributed system with only minor changes to the underlying technologies. The key to the infusion is the loosely coupled, Web-Services-based architecture: all of the participating components are accessible (one way or another) through SOAP (Simple Object Access Protocol)-based Web Services.

  17. A Framework for Sharing and Integrating Remote Sensing and GIS Models Based on Web Service

    Science.gov (United States)

    Chen, Zeqiang; Lin, Hui; Chen, Min; Liu, Deer; Bao, Ying; Ding, Yulin

    2014-01-01

    Sharing and integrating Remote Sensing (RS) and Geographic Information System/Science (GIS) models are critical for developing practical application systems. Facilitating model sharing and model integration is a problem for model publishers and model users, respectively. To address this problem, a framework based on a Web service for sharing and integrating RS and GIS models is proposed in this paper. The fundamental idea of the framework is to publish heterogeneous RS and GIS models into standard Web services for sharing and interoperation and then to integrate the RS and GIS models using Web services. For the former, a “black box” and a visual method are employed to facilitate the publishing of the models as Web services. For the latter, model integration based on the geospatial workflow and semantic supported marching method is introduced. Under this framework, model sharing and integration is applied for developing the Pearl River Delta water environment monitoring system. The results show that the framework can facilitate model sharing and model integration for model publishers and model users. PMID:24901016
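    The "black box" publishing idea — heterogeneous models exposed behind one uniform invocation interface — can be sketched in miniature. The registry, decorator, and example models below are invented stand-ins for the framework's Web-service wrapping:

    ```python
    # Register heterogeneous RS/GIS model callables under service names,
    # then invoke them through one uniform entry point (standing in for
    # the Web-service call). All names and models here are illustrative.
    MODEL_REGISTRY = {}

    def publish(name):
        """Register a callable model under a service name."""
        def decorator(fn):
            MODEL_REGISTRY[name] = fn
            return fn
        return decorator

    @publish("ndvi")
    def ndvi(nir, red):
        """A simple RS model: normalized difference vegetation index."""
        return (nir - red) / (nir + red)

    @publish("runoff")
    def runoff(rainfall_mm, runoff_coeff):
        """A toy GIS-side hydrology model."""
        return rainfall_mm * runoff_coeff

    def invoke(service, **params):
        """Uniform entry point: the caller needs only the service name
        and parameters, not the model's internals (the 'black box')."""
        return MODEL_REGISTRY[service](**params)

    print(invoke("ndvi", nir=0.8, red=0.2))
    print(invoke("runoff", rainfall_mm=100, runoff_coeff=0.5))
    ```

    Chaining `invoke` calls, with one model's output feeding the next model's input, is the in-process analogue of the geospatial workflow used for model integration.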

  19. SSWAP: A Simple Semantic Web Architecture and Protocol for semantic web services.

    Science.gov (United States)

    Gessler, Damian D G; Schiltz, Gary S; May, Greg D; Avraham, Shulamit; Town, Christopher D; Grant, David; Nelson, Rex T

    2009-09-23

    SSWAP (Simple Semantic Web Architecture and Protocol; pronounced "swap") is an architecture, protocol, and platform for using reasoning to semantically integrate heterogeneous disparate data and services on the web. SSWAP was developed as a hybrid semantic web services technology to overcome limitations found in both pure web service technologies and pure semantic web technologies. There are currently over 2400 resources published in SSWAP. Approximately two dozen are custom-written services for QTL (Quantitative Trait Loci) and mapping data for legumes and grasses (grains). The remaining are wrappers to Nucleic Acids Research Database and Web Server entries. As an architecture, SSWAP establishes how clients (users of data, services, and ontologies), providers (suppliers of data, services, and ontologies), and discovery servers (semantic search engines) interact to allow for the description, querying, discovery, invocation, and response of semantic web services. As a protocol, SSWAP provides the vocabulary and semantics to allow clients, providers, and discovery servers to engage in semantic web services. The protocol is based on the W3C-sanctioned first-order description logic language OWL DL. As an open source platform, a discovery server running at http://sswap.info (as in to "swap info") uses the description logic reasoner Pellet to integrate semantic resources. The platform hosts an interactive guide to the protocol at http://sswap.info/protocol.jsp, developer tools at http://sswap.info/developer.jsp, and a portal to third-party ontologies at http://sswapmeet.sswap.info (a "swap meet"). SSWAP addresses the three basic requirements of a semantic web services architecture (i.e., a common syntax, shared semantic, and semantic discovery) while addressing three technology limitations common in distributed service systems: i.e., i) the fatal mutability of traditional interfaces, ii) the rigidity and fragility of static subsumption hierarchies, and iii) the

  20. Web-based Analysis Services Report

    CERN Document Server

    AUTHOR|(CDS)2108758; Canali, Luca; Grancher, Eric; Lamanna, Massimo; McCance, Gavin; Mato Vila, Pere; Piparo, Danilo; Moscicki, Jakub; Pace, Alberto; Brito Da Rocha, Ricardo; Simko, Tibor; Smith, Tim; Tejedor Saavedra, Enric; CERN. Geneva. IT Department

    2017-01-01

    Web-based services (cloud services) are an important trend for innovating end-user services while optimising operational costs. CERN users are constantly proposing new approaches (inspired by services existing on the web, by tools used in education or other sciences, or based on their experience with existing computing services). In addition, industry and open source communities have recently made available a large number of powerful and attractive tools and platforms that enable large scale data processing. “Big Data” software stacks notably provide solutions for scalable storage, distributed compute and data analysis engines, data streaming, and web-based interfaces (notebooks). Some of those platforms and tools, typically available as open source products, are experiencing very fast adoption in industry and science, such that they are becoming “de facto” references in several areas of data engineering, data science and machine learning. In parallel to users' requests, WLCG is considering to c...

  1. Maximum Spanning Tree Model on Personalized Web Based Collaborative Learning in Web 3.0

    OpenAIRE

    Padma, S.; Seshasaayee, Ananthi

    2012-01-01

    Web 3.0 is an evolving extension of the current web environment. Information in web 3.0 can be collaborated and communicated when queried. Web 3.0 architecture provides an excellent learning experience to the students. Web 3.0 is 3D, media centric and semantic. Web-based learning has been on the rise in recent years. Web 3.0 has intelligent agents as tutors to collect and disseminate the answers to the queries by the students. Completely Interactive learner's query determine the customization of...

  2. A survey on web modeling approaches for ubiquitous web applications

    NARCIS (Netherlands)

    Schwinger, W.; Retschitzegger, W.; Schauerhuber, A.; Kappel, G.; Wimmer, M.; Pröll, B.; Cachero Castro, C.; Casteleyn, S.; De Troyer, O.; Fraternali, P.; Garrigos, I.; Garzotto, F.; Ginige, A.; Houben, G.J.P.M.; Koch, N.; Moreno, N.; Pastor, O.; Paolini, P.; Pelechano Ferragud, V.; Rossi, G.; Schwabe, D.; Tisi, M.; Vallecillo, A.; Sluijs, van der K.A.M.; Zhang, G.

    2008-01-01

    Purpose – Ubiquitous web applications (UWA) are a new type of web applications which are accessed in various contexts, i.e. through different devices, by users with various interests, at anytime from anyplace around the globe. For such full-fledged, complex software systems, a methodologically sound

  3. An Improved Abstract State Machine Based Choreography Specification and Execution Algorithm for Semantic Web Services

    Directory of Open Access Journals (Sweden)

    Shahin Mehdipour Ataee

    2018-01-01

    Full Text Available We identify significant weaknesses in the original Abstract State Machine (ASM) based choreography algorithm of Web Service Modeling Ontology (WSMO), which make it impractical for use in semantic web service choreography engines. We present an improved algorithm which rectifies the weaknesses of the original algorithm, as well as a practical, fully functional choreography engine implementation in Flora-2 based on the improved algorithm. Our improvements to the choreography algorithm include (i) the linking of the initial state of the ASM to the precondition of the goal, (ii) the introduction of the concept of a final state in the execution of the ASM and its linking to the postcondition of the goal, and (iii) modification to the execution of the ASM so that it stops when the final state condition is satisfied by the current configuration of the machine. Our choreography engine takes as input semantic web service specifications written in the Flora-2 dialect of F-logic. Furthermore, we prove the equivalence of ASMs (evolving algebras) and evolving ontologies in the sense that one can simulate the other, a first in the literature. Finally, we present a visual editor which facilitates the design and deployment of our F-logic based web service and goal specifications.
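    The improved execution loop, items (i)-(iii) above, can be illustrated with a toy ASM interpreter: the machine starts from the goal's precondition and halts once the final-state condition (the goal's postcondition) holds. The rule format and the booking example are invented for illustration; the actual engine operates on WSMO/F-logic specifications.

```python
# Toy ASM-style choreography loop: apply the first enabled rule repeatedly
# until the final-state condition is satisfied by the current configuration.

def run_asm(initial_state, rules, is_final, max_steps=100):
    """Execute guarded update rules until the final state is reached."""
    state = dict(initial_state)          # (i) initial state <- goal precondition
    for _ in range(max_steps):
        if is_final(state):              # (ii)/(iii) stop at the final state
            return state
        enabled = [action for guard, action in rules if guard(state)]
        if not enabled:
            raise RuntimeError("no enabled rule and final state not reached")
        state = enabled[0](state)
    raise RuntimeError("step limit exceeded")

# Example: a two-step booking choreography (request, then confirm).
rules = [
    (lambda s: not s["requested"], lambda s: {**s, "requested": True}),
    (lambda s: s["requested"] and not s["confirmed"],
     lambda s: {**s, "confirmed": True}),
]
final = run_asm({"requested": False, "confirmed": False}, rules,
                is_final=lambda s: s["confirmed"])
print(final)  # → {'requested': True, 'confirmed': True}
```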

  4. Development of Content Management System-based Web Applications

    NARCIS (Netherlands)

    Souer, J.

    2012-01-01

    Web engineering is the application of systematic and quantifiable approaches (concepts, methods, techniques, tools) to cost-effective requirements analysis, design, implementation, testing, operation, and maintenance of high quality web applications. Over the past years, Content Management Systems

  5. A Survey On Various Web Template Detection And Extraction Methods

    Directory of Open Access Journals (Sweden)

    Neethu Mary Varghese

    2015-03-01

    Full Text Available In today's digital world, reliance on the World Wide Web as a source of information is extensive. Users increasingly rely on web-based search engines to provide accurate search results on a wide range of topics that interest them. The search engines in turn parse the vast repository of web pages searching for relevant information. However, the majority of web portals are designed using web templates, which are intended to provide a consistent look and feel to end users. The presence of these templates, however, can influence search results, leading to inaccurate results being delivered to the users. Therefore, to improve the accuracy and reliability of search results, identification and removal of web templates from the actual content is essential. A wide range of approaches are commonly employed to achieve this, and this paper focuses on the study of the various approaches of template detection and extraction that can be applied across homogeneous as well as heterogeneous web pages.
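    A crude illustration of the template-detection idea: fragments repeated verbatim on every page of a site are classified as template, the rest as content. This line-based sketch is hypothetical; the surveyed methods typically compare DOM trees rather than raw text lines.

```python
# Minimal template detection for homogeneous pages: lines shared by ALL pages
# are treated as template boilerplate, remaining lines as page content.

def detect_template(pages):
    """Return (template_lines, per-page content lines) for a list of page texts."""
    line_sets = [set(p.splitlines()) for p in pages]
    template = set.intersection(*line_sets)
    content = [[ln for ln in p.splitlines() if ln not in template]
               for p in pages]
    return template, content

pages = [
    "SiteName\nHome | About\nArticle on web engineering\nCopyright 2015",
    "SiteName\nHome | About\nArticle on search engines\nCopyright 2015",
]
template, content = detect_template(pages)
print(sorted(template))  # → ['Copyright 2015', 'Home | About', 'SiteName']
print(content[0])        # → ['Article on web engineering']
```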

  6. Animal models for bone tissue engineering and modelling disease

    Science.gov (United States)

    Griffin, Michelle

    2018-01-01

    ABSTRACT Tissue engineering and its clinical application, regenerative medicine, are instructing multiple approaches to aid in replacing bone loss after defects caused by trauma or cancer. In such cases, bone formation can be guided by engineered biodegradable and nonbiodegradable scaffolds with clearly defined architectural and mechanical properties informed by evidence-based research. With the ever-increasing expansion of bone tissue engineering and the pioneering research conducted to date, preclinical models are becoming a necessity to allow the engineered products to be translated to the clinic. In addition to creating smart bone scaffolds to mitigate bone loss, the field of tissue engineering and regenerative medicine is exploring methods to treat primary and secondary bone malignancies by creating models that mimic the clinical disease manifestation. This Review gives an overview of the preclinical testing in animal models used to evaluate bone regeneration concepts. Immunosuppressed rodent models have been shown to be successful in mimicking bone malignancy via the implantation of human-derived cancer cells, whereas large animal models, including pigs, sheep and goats, are being used to provide an insight into bone formation and the effectiveness of scaffolds in induced tibial or femoral defects, providing clinically relevant similarity to human cases. Despite the recent progress, the successful translation of bone regeneration concepts from the bench to the bedside is rooted in the efforts of different research groups to standardise and validate the preclinical models for bone tissue engineering approaches. PMID:29685995

  7. Discovering Land Cover Web Map Services from the Deep Web with JavaScript Invocation Rules

    Directory of Open Access Journals (Sweden)

    Dongyang Hou

    2016-06-01

    Full Text Available Automatic discovery of isolated land cover web map services (LCWMSs) can potentially help in sharing land cover data. Currently, various search engine-based and crawler-based approaches have been developed for finding services dispersed throughout the surface web. In fact, with the prevalence of geospatial web applications, a considerable number of LCWMSs are hidden in JavaScript code, which belongs to the deep web. However, discovering LCWMSs from JavaScript code remains an open challenge. This paper aims to solve this challenge by proposing a focused deep web crawler for finding more LCWMSs from deep web JavaScript code and the surface web. First, the names of a group of JavaScript links are abstracted as initial judgements. Through name matching, these judgements are utilized to judge whether or not the fetched webpages contain predefined JavaScript links that may prompt JavaScript code to invoke WMSs. Second, some JavaScript invocation functions and URL formats for WMS are summarized as JavaScript invocation rules from prior knowledge of how WMSs are employed and coded in JavaScript. These invocation rules are used to identify the JavaScript code for extracting candidate WMSs through rule matching. The above two operations are incorporated into a traditional focused crawling strategy situated between the tasks of fetching webpages and parsing webpages. Third, LCWMSs are selected by matching services with a set of land cover keywords. Moreover, a search engine for LCWMSs is implemented that uses the focused deep web crawler to retrieve and integrate the LCWMSs it discovers. In the first experiment, eight online geospatial web applications serve as seed URLs (Uniform Resource Locators) and crawling scopes; the proposed crawler addresses only the JavaScript code in these eight applications. All 32 available WMSs hidden in JavaScript code were found using the proposed crawler, while not one WMS was discovered through the focused crawler
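    The "JavaScript invocation rule" step can be approximated with a regular expression that recognizes WMS request URLs embedded in JavaScript string literals. The pattern below is an assumption for illustration, not the paper's actual rule set.

```python
import re

# Illustrative invocation rule: any quoted URL whose query string mentions
# service=wms (case-insensitive) is a candidate WMS endpoint.
WMS_RULE = re.compile(
    r"""["'](https?://[^"']+\?[^"']*service=wms[^"']*)["']""", re.IGNORECASE)

def extract_wms_candidates(js_code):
    """Return candidate WMS endpoint URLs found in JavaScript source."""
    return [m.group(1) for m in WMS_RULE.finditer(js_code)]

js = """
var layer = new OpenLayers.Layer.WMS("landcover",
    "http://example.org/geoserver/wms?service=WMS&request=GetCapabilities");
var other = "http://example.org/page.html";
"""
print(extract_wms_candidates(js))
# → ['http://example.org/geoserver/wms?service=WMS&request=GetCapabilities']
```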

  8. A resource-oriented architecture for a Geospatial Web

    Science.gov (United States)

    Mazzetti, Paolo; Nativi, Stefano

    2010-05-01

    point b) (manipulation of resources through representations), the Geospatial Web poses several issues. In fact, while the Web mainly handles semi-structured information, in the Geospatial Web the information is typically structured with several possible data models (e.g. point series, gridded coverages, trajectories, etc.) and encodings. A possibility would be to simplify the interchange formats, choosing to support a subset of data models and format(s). This is what the Web designers actually did, choosing to define a common format for hypermedia (HTML), although the underlying protocol is generic. Concerning point c), self-descriptive messages, the exchanged messages should describe themselves and their content. This would not actually be a major issue, considering the effort put into geospatial metadata models and specifications in recent years. Point d), hypermedia as the engine of application state, is actually where the Geospatial Web would differ most from existing geospatial information sharing systems. In fact, existing systems typically adopt a service-oriented architecture, where applications are built as a single service or as a workflow of services. On the other hand, in the Geospatial Web, applications should be built following the path between interconnected resources. The link between resources should be made explicit as hyperlinks. The adoption of Semantic Web solutions would make it possible to define not only the existence of a link between two resources, but also the nature of the link. The implementation of a Geospatial Web would make it possible to build an information system with the same characteristics as the Web, sharing its strengths and weaknesses. The main advantages would be the following: • The user would interact with the Geospatial Web according to the well-known Web navigation paradigm. This would lower the barrier to accessing geospatial applications for non-specialists (e.g. the success of Google Maps and other Web mapping

  9. WebWorkFlow : An Object-Oriented Workflow Modeling Language for Web Applications

    NARCIS (Netherlands)

    Hemel, Z.; Verhaaf, R.; Visser, E.

    2008-01-01

    Preprint of paper published in: MODELS 2008 - International Conference on Model Driven Engineering Languages and Systems, Lecture Notes in Computer Science 5301, 2008; doi:10.1007/978-3-540-87875-9_8 Workflow languages are designed for the high-level description of processes and are typically not

  10. ARCAS (ACACIA Regional Climate-data Access System) -- a Web Access System for Climate Model Data Access, Visualization and Comparison

    Science.gov (United States)

    Hakkarinen, C.; Brown, D.; Callahan, J.; hankin, S.; de Koningh, M.; Middleton-Link, D.; Wigley, T.

    2001-05-01

    A Web-based access system to climate model output data sets for intercomparison and analysis has been produced, using the NOAA-PMEL developed Live Access Server software as host server and Ferret as the data serving and visualization engine. Called ARCAS ("ACACIA Regional Climate-data Access System"), and publicly accessible at http://dataserver.ucar.edu/arcas, the site currently serves climate model outputs from runs of the NCAR Climate System Model for the 21st century, for Business as Usual and Stabilization of Greenhouse Gas Emission scenarios. Users can select, download, and graphically display single variables or comparisons of two variables from either or both of the CSM model runs, averaged for monthly, seasonal, or annual time resolutions. The time length of the averaging period, and the geographical domain for download and display, are fully selectable by the user. A variety of arithmetic operations on the data variables can be computed "on-the-fly", as defined by the user. Expansions of the user-selectable options for defining analysis options, and for accessing other DOD-compatible ("Distributed Ocean Data System-compatible") data sets, residing at locations other than the NCAR hardware server on which ARCAS operates, are planned for this year. These expansions are designed to allow users quick and easy-to-operate web-based access to the largest possible selection of climate model output data sets available throughout the world.

  11. Needle Custom Search: Recall-oriented search on the Web using semantic annotations

    NARCIS (Netherlands)

    Kaptein, Rianne; Koot, Gijs; Huis in 't Veld, Mirjam A.A.; van den Broek, Egon; de Rijke, Maarten; Kenter, Tom; de Vries, A.P.; Zhai, Chen Xiang; de Jong, Franciska M.G.; Radinsky, Kira; Hofmann, Katja

    Web search engines are optimized for early precision, which makes it difficult to perform recall-oriented tasks using these search engines. In this article, we present our tool Needle Custom Search. This tool exploits semantic annotations of Web search results and, thereby, increases the efficiency

  12. Needle Custom Search : Recall-oriented search on the web using semantic annotations

    NARCIS (Netherlands)

    Kaptein, Rianne; Koot, Gijs; Huis in 't Veld, Mirjam A.A.; van den Broek, Egon L.

    2014-01-01

    Web search engines are optimized for early precision, which makes it difficult to perform recall-oriented tasks using these search engines. In this article, we present our tool Needle Custom Search. This tool exploits semantic annotations of Web search results and, thereby, increases the efficiency

  13. Overview of the TREC 2013 Federated Web Search Track

    NARCIS (Netherlands)

    Demeester, Thomas; Trieschnigg, Rudolf Berend; Nguyen, Dong-Phuong; Hiemstra, Djoerd

    2014-01-01

    The TREC Federated Web Search track is intended to promote research related to federated search in a realistic web setting, and hereto provides a large data collection gathered from a series of online search engines. This overview paper discusses the results of the first edition of the track, FedWeb

  14. Recommendations for Benchmarking Web Site Usage among Academic Libraries.

    Science.gov (United States)

    Hightower, Christy; Sih, Julie; Tilghman, Adam

    1998-01-01

    To help library directors and Web developers create a benchmarking program to compare statistics of academic Web sites, the authors analyzed the Web server log files of 14 university science and engineering libraries. Recommends a centralized voluntary reporting structure coordinated by the Association of Research Libraries (ARL) and a method for…

  15. An interactive web-based extranet system model for managing ...

    African Journals Online (AJOL)

    ... objectives for students, lecturers and parents to access and compute results ... The database will serve as repository of students' academic records over a ... Keywords: Extranet-Model, Interactive, Web-Based, Students, Academic, Records ...

  16. A probabilistic maintenance model for diesel engines

    Science.gov (United States)

    Pathirana, Shan; Abeygunawardane, Saranga Kumudu

    2018-02-01

    In this paper, a probabilistic maintenance model is developed for inspection-based preventive maintenance of diesel engines, based on practical model concepts discussed in the literature. The developed model is solved using real data obtained from inspection and maintenance histories of diesel engines and experts' views. Reliability indices and costs were calculated for the present maintenance policy of diesel engines. A sensitivity analysis is conducted to observe the effect of inspection-based preventive maintenance on the life cycle cost of diesel engines.

  17. Web corpus construction

    CERN Document Server

    Schafer, Roland

    2013-01-01

    The World Wide Web constitutes the largest existing source of texts written in a great variety of languages. A feasible and sound way of exploiting this data for linguistic research is to compile a static corpus for a given language. There are several advantages to this approach: (i) Working with such corpora obviates the problems encountered when using Internet search engines in quantitative linguistic research (such as non-transparent ranking algorithms). (ii) Creating a corpus from web data is virtually free. (iii) The size of corpora compiled from the WWW may exceed by several orders of magnitude the size of language resources offered elsewhere. (iv) The data is locally available to the user, and it can be linguistically post-processed and queried with the tools the user prefers. This book addresses the main practical tasks in the creation of web corpora up to giga-token size. Among these tasks are the sampling process (i.e., web crawling) and the usual cleanups including boilerplate removal and rem...

  18. A web GIS based integrated flood assessment modeling tool for coastal urban watersheds

    Science.gov (United States)

    Kulkarni, A. T.; Mohanty, J.; Eldho, T. I.; Rao, E. P.; Mohan, B. K.

    2014-03-01

    Urban flooding has become an increasingly important issue in many parts of the world. In this study, an integrated flood assessment model (IFAM) is presented for coastal urban flood simulation. A web-based GIS framework has been adopted to organize the spatial datasets for the study area considered and to run the model within this framework. The integrated flood model consists of a mass balance based 1-D overland flow model, a 1-D finite element based channel flow model based on the diffusion wave approximation, and a quasi 2-D raster flood inundation model based on the continuity equation. The model code is written in MATLAB and the application is integrated within a web GIS server product, viz. Web Gram Server™ (WGS), developed at IIT Bombay using Java, JSP and JQuery technologies. Its user interface is developed using OpenLayers, and the attribute data are stored in the MySQL open source DBMS. The model is integrated within WGS and is called via JavaScript. The application has been demonstrated for two coastal urban watersheds of Navi Mumbai, India. Simulated flood extents for the extreme rainfall event of 26 July 2005 in the two urban watersheds of Navi Mumbai city are presented and discussed. The study demonstrates the effectiveness of the flood simulation tool in a web GIS environment to facilitate data access and visualization of GIS datasets and simulation results.

  19. [Development of domain specific search engines].

    Science.gov (United States)

    Takai, T; Tokunaga, M; Maeda, K; Kaminuma, T

    2000-01-01

    As cyberspace explodes at a pace that nobody ever imagined, it becomes very important to search it efficiently and effectively. One solution to this problem is search engines. A lot of commercial search engines have already been put on the market; however, these search engines return results so cumbersome that domain-specific experts cannot tolerate them. Using dedicated hardware and commercial software called OpenText, we have tried to develop several domain-specific search engines. These engines are for our institute's Web contents, drugs, chemical safety, endocrine disruptors, and emergency response to chemical hazards. These engines have been on our Web site for testing.

  20. Students' Attitude in a Web-enhanced Hybrid Course: A Structural Equation Modeling Inquiry

    Directory of Open Access Journals (Sweden)

    Cheng-Chang Sam Pan

    2003-12-01

    Full Text Available The present study focuses on five latent factors affecting students' use of WebCT in a Web-enhanced hybrid undergraduate course at a southeastern university in the United States. An online questionnaire is used to measure a hypothetical model composed of two exogenous variables (i.e., subjective norm and computer self-efficacy), three endogenous variables (i.e., perceived ease of use, perceived usefulness, and attitude toward WebCT use), one dependent variable (i.e., actual system use), and eleven demographic items. PROC CALIS is used to analyze the data collected. Results suggest the technology acceptance model may not be applicable to the higher education setting. However, student attitude toward WebCT instruction remains a significant determinant of WebCT use on a non-voluntary basis. Educational achievement (i.e., student final grades) is regressed on the attitude factor as an outcome variable. Suggestions for practitioners and researchers in the field are mentioned.

  1. Mathematical model of the Amazon Stirling engine

    Energy Technology Data Exchange (ETDEWEB)

    Vidal Medina, Juan Ricardo [Universidad Autonoma de Occidente (Colombia)], e-mail: jrvidal@uao.edu.co; Cobasa, Vladimir Melian; Silva, Electo [Universidade Federal de Itajuba, MG (Brazil)], e-mail: vlad@unifei.edu.br

    2010-07-01

    The Excellency Group in Thermoelectric and Distributed Generation (NEST, for its acronym in Portuguese) at the Federal University of Itajuba has designed a Stirling engine prototype to provide electricity to isolated regions of Brazil. The engine was designed to operate with residual biomass from timber processing. This paper presents mathematical models of the heat exchangers (hot, cold and regenerator) integrated into a second order adiabatic model. The general model takes into account pressure drop losses, hysteresis and internal losses. The results for power output, engine efficiency, optimal velocity of the exhaust gases and the influence of dead volume on engine efficiency are presented in this paper. The objective of this modeling is to propose improvements to the manufactured engine design. (author)

  2. Engineering Education Tool for Distance Telephone Traffic Learning Through Web

    Directory of Open Access Journals (Sweden)

    Leonimer Flávio de Melo

    2012-11-01

    Full Text Available This work focuses on the distance learning (DL) modality via the Internet. The use of calculator and simulator software, such as the Matlab software employed in this work, introduces a high level of interactivity into DL systems. The use of efficient mathematical packages and hypermedia technologies opens the door to a new paradigm of teaching and learning at the dawn of this new millennium. The use of hypertext, graphics, animation, audio, video, and efficient calculators and simulators incorporating artificial intelligence techniques, together with advances in broadband networks, will pave the way to this new horizon. The contribution of this work, besides the Matlab Web integration, is the development of an introductory course in traffic engineering in hypertext format. Calculators for the most commonly employed expressions of traffic analysis were also developed for the Matlab server environment. With the telephone traffic calculator, the user inputs data in his or her Internet browser and the system returns numerical data, graphics and tables in HTML pages. The system is also very useful for professional traffic calculations, replacing with advantages the traditional methods based on static tables and graphics in paper format.
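    The abstract does not list which traffic expressions were implemented; the Erlang B blocking-probability formula is the canonical one in telephone traffic engineering, and such a server-side calculator might compute it with the standard numerically stable recurrence:

```python
# Erlang B blocking probability via the recurrence
# B(0) = 1,  B(n) = A*B(n-1) / (n + A*B(n-1)),
# which avoids the factorials of the closed-form expression.

def erlang_b(traffic_erlangs, channels):
    """Blocking probability for offered traffic A (Erlangs) on N channels."""
    b = 1.0
    for n in range(1, channels + 1):
        b = traffic_erlangs * b / (n + traffic_erlangs * b)
    return b

# 10 Erlangs offered to 15 channels:
print(round(erlang_b(10.0, 15), 4))  # → 0.0365
```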

  3. Model-based Software Engineering

    DEFF Research Database (Denmark)

    Kindler, Ekkart

    2010-01-01

    The vision of model-based software engineering is to make models the main focus of software development and to automatically generate software from these models. Part of that idea works already today. But, there are still difficulties when it comes to behaviour. Actually, there is no lack in models...

  4. Model engineering : balancing between virtuality and reality

    NARCIS (Netherlands)

    Hee, van K.M.

    2011-01-01

    Model engineering concerns the development of models of complex systems. This modeling is performed for a variety of reasons, such as system behavior prediction, system optimization or system construction. Model engineering requires a modeling framework that includes a language to represent the

  5. WebSelF: A Web Scraping Framework

    DEFF Research Database (Denmark)

    Thomsen, Jakob; Ernst, Erik; Brabrand, Claus

    2012-01-01

    We present WebSelF, a framework for web scraping which models the process of web scraping and decomposes it into four conceptually independent, reusable, and composable constituents. We have validated our framework through a full parameterized implementation that is flexible enough to capture previous work on web scraping. We have experimentally evaluated our framework and implementation in an experiment that evaluated several qualitatively different web scraping constituents (including previous work and combinations hereof) on about 11,000 HTML pages on daily versions of 17 web sites over a period of more than one year. Our framework solves three concrete problems with current web scraping, and our experimental results indicate that composition of previous and our new techniques achieves a higher degree of accuracy, precision and specificity than existing techniques alone.
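    One way to picture such composable scraping constituents is an extractor paired with a validator and a fallback chain: if the primary extractor's result fails validation, the next one is tried. The function names below are illustrative and not WebSelF's actual API.

```python
import re

def h1_title(html):            # primary: naive <h1> extractor
    m = re.search(r"<h1>(.*?)</h1>", html, re.DOTALL)
    return m.group(1).strip() if m else None

def tag_title(html):           # fallback: <title> tag
    m = re.search(r"<title>(.*?)</title>", html, re.DOTALL)
    return m.group(1).strip() if m else None

def compose(extractors, validate):
    """Combine extractors: return the first result that passes validation."""
    def scrape(html):
        for extract in extractors:
            result = extract(html)
            if validate(result):
                return result
        return None
    return scrape

scrape_title = compose([h1_title, tag_title], validate=lambda r: bool(r))
print(scrape_title("<title>Fallback works</title><p>no h1 here</p>"))
# → Fallback works
```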

  6. Dynamic Web Pages: Performance Impact on Web Servers.

    Science.gov (United States)

    Kothari, Bhupesh; Claypool, Mark

    2001-01-01

    Discussion of Web servers and requests for dynamic pages focuses on experimentally measuring and analyzing the performance of the three dynamic Web page generation technologies: CGI, FastCGI, and Servlets. Develops a multivariate linear regression model and predicts Web server performance under some typical dynamic requests. (Author/LRW)
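    As an illustration of the regression approach (with invented numbers, and a single predictor rather than the article's multivariate model), ordinary least squares can be computed in closed form:

```python
# Ordinary least squares for one predictor: fit response time as a linear
# function of generated page size. Data points are synthetic, for illustration.

def fit_line(xs, ys):
    """Return (slope, intercept) minimising squared error."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    sxx = sum((x - mean_x) ** 2 for x in xs)
    sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    slope = sxy / sxx
    return slope, mean_y - slope * mean_x

sizes = [10, 20, 30, 40]        # dynamic page size (KB), hypothetical
times = [5.0, 7.0, 9.0, 11.0]   # measured response time (ms), hypothetical
slope, intercept = fit_line(sizes, times)
print(slope, intercept)  # → 0.2 3.0
```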

  7. Expert system for web based collaborative CAE

    Science.gov (United States)

    Hou, Liang; Lin, Zusheng

    2006-11-01

    An expert system for web based collaborative CAE was developed based on knowledge engineering, relational database and commercial FEA (Finite element analysis) software. The architecture of the system was illustrated. In this system, the experts' experiences, theories and typical examples and other related knowledge, which will be used in the stage of pre-process in FEA, were categorized into analysis process and object knowledge. Then, the integrated knowledge model based on object-oriented method and rule based method was described. The integrated reasoning process based on CBR (case based reasoning) and rule based reasoning was presented. Finally, the analysis process of this expert system in web based CAE application was illustrated, and an analysis example of a machine tool's column was illustrated to prove the validity of the system.

  8. Development of a multichemical food web model: application to PBDEs in Lake Ellasjoen, Bear Island, Norway.

    Science.gov (United States)

    Gandhi, Nilima; Bhavsar, Satyendra P; Gewurtz, Sarah B; Diamond, Miriam L; Evenset, Anita; Christensen, Guttorm N; Gregor, Dennis

    2006-08-01

    A multichemical food web model has been developed to estimate the biomagnification of interconverting chemicals in aquatic food webs. We extended a fugacity-based food web model for single chemicals to account for reversible and irreversible biotransformation among a parent chemical and transformation products, by simultaneously solving mass balance equations of the chemicals using a matrix solution. The model can be applied to any number of chemicals and organisms or taxonomic groups in a food web. The model was illustratively applied to four PBDE congeners, BDE-47, -99, -100, and -153, in the food web of Lake Ellasjøen, Bear Island, Norway. In Ellasjøen arctic char (Salvelinus alpinus), the multichemical model estimated PBDE biotransformation from higher to lower brominated congeners and improved the correspondence between estimated and measured concentrations in comparison to estimates from the single-chemical food web model. The underestimation of BDE-47, even after considering bioformation due to biotransformation of the other three congeners, suggests its formation from additional biotransformation pathways not considered in this application. The model estimates approximate values for congener-specific biotransformation half-lives of 5.7, 0.8, 1.14, and 0.45 years for BDE-47, -99, -100, and -153, respectively, in large arctic char (S. alpinus) of Lake Ellasjøen.
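    The matrix solution of simultaneous mass-balance equations can be illustrated with a small, hypothetical three-congener system; every rate constant and input flux below is invented for the sketch and is not from the Lake Ellasjøen application:

    ```python
    import numpy as np

    # For each congener i at steady state:
    #   loss_i * C_i - sum_j k[j, i] * C_j = input_i
    # where k[j, i] is the biotransformation rate from congener j to congener i.
    loss = np.array([1.2, 2.0, 1.5])        # elimination rate constants (1/yr)
    k = np.array([[0.0, 0.0, 0.0],          # k[j, i]: formation of i from j
                  [0.5, 0.0, 0.0],          # congener 2 transforms to congener 1
                  [0.3, 0.2, 0.0]])         # congener 3 transforms to 1 and 2
    inputs = np.array([10.0, 8.0, 6.0])     # uptake fluxes

    # Assemble the coupled system A @ C = inputs and solve it simultaneously,
    # rather than one chemical at a time.
    A = np.diag(loss) - k.T
    C = np.linalg.solve(A, inputs)
    print(C)
    ```

    Solving the congeners jointly is what lets formation from a parent congener raise the estimated concentration of its transformation products.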

  9. Mean Value Modelling of a Turbocharged SI Engine

    DEFF Research Database (Denmark)

    Müller, Martin; Hendricks, Elbert; Sorenson, Spencer C.

    1998-01-01

    An important paradigm for modelling naturally aspirated (NA) spark ignition (SI) engines for control purposes is the Mean Value Engine Model (MVEM). Such models have a time resolution just sufficient to capture the main details of the dynamic performance of NA SI engines but not the cycle-by-cycle behavior. In principle such models are also physically based and very compact in a mathematical sense, but nevertheless can have reasonable prediction accuracy. Presently no MVEMs have been constructed for intercooled turbocharged SI engines because their complexity confounds the simple physical understanding and description of such engines. This paper presents a newly constructed MVEM for a turbocharged SI engine which captures the details of the compressor and turbine characteristics in a compact way. The model has been tested against the responses of an experimental engine and has...

  10. Update on Small Modular Reactors Dynamic System Modeling Tool: Web Application

    International Nuclear Information System (INIS)

    Hale, Richard Edward; Cetiner, Sacit M.; Fugate, David L.; Batteh, John J; Tiller, Michael M.

    2015-01-01

    Previous reports focused on the development of component and system models as well as end-to-end system models using Modelica and Dymola for two advanced reactor architectures: (1) Advanced Liquid Metal Reactor and (2) fluoride high-temperature reactor (FHR). The focus of this report is the release of the first beta version of the web-based application for model use and collaboration, as well as an update on the FHR model. The web-based application allows novice users to configure end-to-end system models from preconfigured choices to investigate the instrumentation and controls implications of these designs and allows for the collaborative development of individual component models that can be benchmarked against test systems for potential inclusion in the model library. A description of this application is provided along with examples of its use and a listing and discussion of all the models that currently exist in the library.

  11. A web service for service composition to aid geospatial modelers

    Science.gov (United States)

    Bigagli, L.; Santoro, M.; Roncella, R.; Mazzetti, P.

    2012-04-01

    The identification of appropriate mechanisms for process reuse, chaining, and composition is considered a key enabler for the effective uptake of a global Earth Observation infrastructure, currently pursued by the international geospatial research community. In the Earth and Space Sciences, such a facility could primarily enable integrated and interoperable modeling, for which several approaches have been proposed and developed over the last years. In fact, GEOSS is specifically tasked with the development of the so-called "Model Web". At increasing levels of abstraction and generalization, the initial stove-pipe software tools have evolved to community-wide modeling frameworks, to Component-Based Architecture solutions, and, more recently, started to embrace Service-Oriented Architecture technologies, such as the OGC WPS specification and the WS-* stack of W3C standards for service composition. However, so far, the level of abstraction seems too low for implementing the Model Web vision, and far too complex technological aspects must still be addressed by both providers and users, resulting in limited usability and, eventually, difficult uptake. In line with the recent ICT trend of resource virtualization, it has been suggested that users in need of a particular processing capability, required by a given modeling workflow, may benefit from outsourcing the composition activities to an external first-class service, according to the Composition as a Service (CaaS) approach. A CaaS system provides the necessary interoperability service framework for adaptation, reuse, and complementation of existing processing resources (including models and geospatial services in general) in the form of executable workflows. This work introduces the architecture of a CaaS system, a distributed information system for creating, validating, editing, storing, publishing, and executing geospatial workflows. This way, users can be freed from the need of a composition infrastructure and

  12. Loss terms in free-piston Stirling engine models

    Science.gov (United States)

    Gordon, Lloyd B.

    1992-01-01

    Various models for free piston Stirling engines are reviewed. Initial models were developed primarily for design purposes and to predict operating parameters, especially efficiency. More recently, however, such models have been used to predict engine stability. Free piston Stirling engines have no kinematic constraints and stability may not only be sensitive to the load, but also to various nonlinear loss and spring constraints. The present understanding is reviewed of various loss mechanisms for free piston Stirling engines and how they have been incorporated into engine models is discussed.

  13. Engineering Adaptive Applications

    DEFF Research Database (Denmark)

    Dolog, Peter

    for a domain. In this book, we propose a new domain engineering framework which extends the development process of Web applications with techniques required when designing such adaptive customizable Web applications. The framework is provided with design abstractions which deal separately with information served...

  14. An Efficient Approach for Web Indexing of Big Data through Hyperlinks in Web Crawling

    Science.gov (United States)

    Devi, R. Suganya; Manjula, D.; Siddharth, R. K.

    2015-01-01

    Web Crawling has acquired tremendous significance in recent times and it is aptly associated with the substantial development of the World Wide Web. Web Search Engines face new challenges due to the availability of vast amounts of web documents, thus making the retrieved results less applicable to the analysers. However, recently, Web Crawling solely focuses on obtaining the links of the corresponding documents. Today, there exist various algorithms and software which are used to crawl links from the web which has to be further processed for future use, thereby increasing the overload of the analyser. This paper concentrates on crawling the links and retrieving all information associated with them to facilitate easy processing for other uses. In this paper, firstly the links are crawled from the specified uniform resource locator (URL) using a modified version of Depth First Search Algorithm which allows for complete hierarchical scanning of corresponding web links. The links are then accessed via the source code and its metadata such as title, keywords, and description are extracted. This content is very essential for any type of analyser work to be carried on the Big Data obtained as a result of Web Crawling. PMID:26137592
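    The metadata-extraction step described above (title, keywords, description, and outgoing links) can be sketched with the standard library alone; the HTML page below is a stand-in for a crawled document, and the modified depth-first traversal over live URLs is omitted:

    ```python
    from html.parser import HTMLParser

    # Minimal sketch of per-page metadata extraction for a crawler's index.
    class MetaExtractor(HTMLParser):
        def __init__(self):
            super().__init__()
            self.meta = {}       # title, keywords, description
            self.links = []      # hrefs that would feed the DFS frontier
            self._in_title = False

        def handle_starttag(self, tag, attrs):
            attrs = dict(attrs)
            if tag == "title":
                self._in_title = True
            elif tag == "meta" and "name" in attrs and "content" in attrs:
                self.meta[attrs["name"]] = attrs["content"]
            elif tag == "a" and "href" in attrs:
                self.links.append(attrs["href"])

        def handle_endtag(self, tag):
            if tag == "title":
                self._in_title = False

        def handle_data(self, data):
            if self._in_title:
                self.meta["title"] = data.strip()

    # Illustrative page source standing in for a fetched document.
    page = """<html><head><title>Example</title>
    <meta name="keywords" content="crawling, indexing">
    <meta name="description" content="A sample page">
    </head><body><a href="/next">next</a></body></html>"""

    parser = MetaExtractor()
    parser.feed(page)
    print(parser.meta, parser.links)
    ```

    In a full crawler, each href in `parser.links` would be resolved against the page URL and pushed onto the depth-first stack, while `parser.meta` is stored for later analysis.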

  15. Search Engine Optimization

    CERN Document Server

    Davis, Harold

    2006-01-01

    SEO--short for Search Engine Optimization--is the art, craft, and science of driving web traffic to web sites. Web traffic is food, drink, and oxygen--in short, life itself--to any web-based business. Whether your web site depends on broad, general traffic, or high-quality, targeted traffic, this PDF has the tools and information you need to draw more traffic to your site. You'll learn how to effectively use PageRank (and Google itself); how to get listed, get links, and get syndicated; and much more. The field of SEO is expanding into all the possible ways of promoting web traffic. This

  16. Discovering How Students Search a Library Web Site: A Usability Case Study.

    Science.gov (United States)

    Augustine, Susan; Greene, Courtney

    2002-01-01

    Discusses results of a usability study at the University of Illinois Chicago that investigated whether Internet search engines have influenced the way students search library Web sites. Results show students use the Web site's internal search engine rather than navigating through the pages; have difficulty interpreting library terminology; and…

  17. Models Archive and ModelWeb at NSSDC

    Science.gov (United States)

    Bilitza, D.; Papitashvili, N.; King, J. H.

    2002-05-01

    In addition to its large data holdings, NASA's National Space Science Data Center (NSSDC) also maintains an archive of space physics models for public use (ftp://nssdcftp.gsfc.nasa.gov/models/). The more than 60 model entries cover a wide range of parameters, from the atmosphere, to the ionosphere, to the magnetosphere, to the heliosphere. The models are primarily empirical models developed by the respective model authors based on long data records from ground and space experiments. An online model catalog (http://nssdc.gsfc.nasa.gov/space/model/) provides information about these and other models, with links to the model software if available. We will briefly review the existing model holdings and highlight some of their usage and users. In response to a growing need in the user community, NSSDC began to develop web interfaces for the most frequently requested models. These interfaces enable users to compute and plot model parameters online for the specific conditions they are interested in. Currently included in the ModelWeb system (http://nssdc.gsfc.nasa.gov/space/model/) are the following models: the International Reference Ionosphere (IRI) model, the Mass Spectrometer Incoherent Scatter (MSIS) E90 model, the International Geomagnetic Reference Field (IGRF), and the AP/AE-8 models for radiation belt electrons and protons. User accesses to both systems have been steadily increasing over the last years, with occasional spikes prior to large scientific meetings. The current monthly rate is between 5,000 and 10,000 accesses for either system; in February 2002, 13,872 accesses were recorded to the ModelWeb and 7,092 to the models archive.

  18. Engineering-Geological Data Model - The First Step to Build National Polish Standard for Multilevel Information Management

    Science.gov (United States)

    Ryżyński, Grzegorz; Nałęcz, Tomasz

    2016-10-01

    The efficient management of geological data in Poland is necessary to support multilevel decision processes for government and local authorities in spatial planning, mineral resources and groundwater supply, and the rational use of the subsurface. The vast amount of geological information gathered in the digital archives and databases of the Polish Geological Survey (PGS) is a basic resource for multi-scale national subsurface management. Data integration is the key factor enabling the development of GIS and web tools for decision makers; however, the main barrier to efficient geological information management is the heterogeneity of data in the resources of the Polish Geological Survey. The engineering-geological database is the first PGS thematic domain applied in the overall data integration plan. The solutions developed within this area will facilitate the creation of procedures and standards for multilevel data management in PGS. Twenty years of experience in delivering digital engineering-geological mapping at 1:10 000 scale, and in acquiring and digitising archival geotechnical reports, allowed the gathering of a database of more than 300,000 engineering-geological boreholes as well as a set of 10 thematic spatial layers (including a foundation conditions map, depth to the first groundwater level, bedrock level, and geohazards). Historically, the desktop approach was the source form of engineering-geological data storage, resulting in multiple non-correlated interbase datasets. The need for a domain data model emerged, and an object-oriented modelling (UML) scheme has been developed. The aim of this development was to merge all datasets on one centralised Oracle server and prepare a unified spatial data structure for efficient web presentation and application development. The presented approach will be a milestone toward the creation of the Polish national standard for engineering-geological information management. The paper presents the approach and methodology

  19. Models and methods for building web recommendation systems

    OpenAIRE

    Stekh, Yu.; Artsibasov, V.

    2012-01-01

    The modern World Wide Web contains a large number of web sites and many pages within each web site. Web recommendation systems (recommendation systems for web pages) are typically implemented on web servers and use data obtained from collections of viewed web templates (implicit data) or user registration data (explicit data). The article considers methods and algorithms of web recommendation systems based on data mining technology (web mining).

  20. The S-Web Model for the Sources of the Slow Solar Wind

    Science.gov (United States)

    Antiochos, Spiro K.; Karpen, Judith T.; DeVore, C. Richard

    2012-01-01

    Models for the origin of the slow solar wind must account for two seemingly contradictory observations: The slow wind has the composition of the closed-field corona, implying that it originates from the continuous opening and closing of flux at the boundary between open and closed field. On the other hand, the slow wind has large angular width, up to 60 degrees, suggesting that its source extends far from the open-closed boundary. We describe a model that can explain both observations. The key idea is that the source of the slow wind at the Sun is a network of narrow (possibly singular) open-field corridors that map to a web of separatrices (the S-Web) and quasi-separatrix layers in the heliosphere. We discuss the dynamics of the S-Web model and its implications for present observations and for the upcoming observations from Solar Orbiter and Solar Probe Plus.

  1. Introduction to Webometrics Quantitative Web Research for the Social Sciences

    CERN Document Server

    Thelwall, Michael

    2009-01-01

    Webometrics is concerned with measuring aspects of the web: web sites, web pages, parts of web pages, words in web pages, hyperlinks, web search engine results. The importance of the web itself as a communication medium and for hosting an increasingly wide array of documents, from journal articles to holiday brochures, needs no introduction. Given this huge and easily accessible source of information, there are limitless possibilities for measuring or counting on a huge scale (e.g., the number of web sites, the number of web pages, the number of blogs) or on a smaller scale (e.g., the number o

  2. Adaptable Web Modules to Stimulate Active Learning in Engineering Hydrology using Data and Model Simulations of Three Regional Hydrologic Systems

    Science.gov (United States)

    Habib, E. H.; Tarboton, D. G.; Lall, U.; Bodin, M.; Rahill-Marier, B.; Chimmula, S.; Meselhe, E. A.; Ali, A.; Williams, D.; Ma, Y.

    2013-12-01

    The hydrologic community has long recognized the need for broad reform in hydrologic education. A paradigm shift is critically sought in undergraduate hydrology and water resource education by adopting context-rich, student-centered, and active learning strategies. Hydrologists currently deal with intricate issues rooted in complex natural ecosystems containing a multitude of interconnected processes. Advances in this multi-disciplinary field include observational settings such as Critical Zone and Water, Sustainability and Climate Observatories, Hydrologic Information Systems, and instrumentation and modeling methods. These advances in research theory and practice call for similar efforts and improvements in hydrologic education. The typical, textbook-based approach to hydrologic education has focused on specific applications and/or unit processes of the hydrologic cycle as idealizations, rather than on the contextual relations among the physical processes and the spatial and temporal dynamics connecting climate and ecosystems. An appreciation of the natural variability of these processes will lead to graduates with the ability to develop independent learning skills and understanding. This appreciation cannot be gained in curricula where field components such as observational and experimental data are deficient. These types of data are also critical when using simulation models to create environments that support this type of learning. Additional sources of observations, in conjunction with models and field data, are key to students' understanding of the challenges associated with using models to represent such complex systems. Recent advances in scientific visualization and web-based technologies provide new opportunities for the development of active learning techniques utilizing ongoing research. The overall goal of the current study is to develop visual, case-based, data- and simulation-driven learning experiences for instructors and students through a web

  3. Global polar geospatial information service retrieval based on search engine and ontology reasoning

    Science.gov (United States)

    Chen, Nengcheng; E, Dongcheng; Di, Liping; Gong, Jianya; Chen, Zeqiang

    2007-01-01

    In order to improve the access precision of polar geospatial information service on web, a new methodology for retrieving global spatial information services based on geospatial service search and ontology reasoning is proposed, the geospatial service search is implemented to find the coarse service from web, the ontology reasoning is designed to find the refined service from the coarse service. The proposed framework includes standardized distributed geospatial web services, a geospatial service search engine, an extended UDDI registry, and a multi-protocol geospatial information service client. Some key technologies addressed include service discovery based on search engine and service ontology modeling and reasoning in the Antarctic geospatial context. Finally, an Antarctica multi protocol OWS portal prototype based on the proposed methodology is introduced.

  4. The rendering context for stereoscopic 3D web

    Science.gov (United States)

    Chen, Qinshui; Wang, Wenmin; Wang, Ronggang

    2014-03-01

    3D technologies on the Web have been studied for many years, but they are basically monoscopic 3D. With stereoscopic technology gradually maturing, we are researching how to integrate binocular 3D technology into the Web, creating a stereoscopic 3D browser that will provide users with a brand new experience of human-computer interaction. In this paper, we propose a novel approach to applying stereoscopy technologies to CSS3 3D Transforms. Under our model, each element can create or participate in a stereoscopic 3D rendering context, in which 3D transforms such as scaling, translation, and rotation can be applied and perceived in a truly 3D space. We first discuss the underlying principles of stereoscopy. We then discuss how these principles can be applied to the Web. A stereoscopic 3D browser with backward compatibility is also created for demonstration purposes. We take advantage of the open-source WebKit project, integrating 3D display ability into the rendering engine of the web browser. For each 3D web page, our 3D browser creates two slightly different images, representing the left-eye and right-eye views, which are combined on the 3D display to generate the illusion of depth. As the results show, elements can be manipulated in a truly 3D space.

  5. Stirling Engine Dynamic System Modeling

    Science.gov (United States)

    Nakis, Christopher G.

    2004-01-01

    The Thermo-Mechanical Systems branch at the Glenn Research Center focuses a large amount of time on Stirling engines. These engines will be used on missions where solar power is inefficient, especially in deep space. I work with Tim Regan and Ed Lewandowski, who are currently developing and validating a mathematical model for Stirling engines. This model incorporates all aspects of the system, including mechanical, electrical, and thermodynamic components. Modeling is done through Simplorer, a program capable of running simulations of the model. Once created and proven to be accurate, a model is used for developing new ideas for engine design. My largest specific project involves varying key parameters in the model and quantifying the results. This can all be done relatively trouble-free with the help of Simplorer. Once the model is complete, Simplorer will do all the necessary calculations. The more complicated part of this project is determining which parameters to vary. Finding key parameters depends on the potential for a value to be independently altered in the design. For example, a change in one dimension may lead to a proportional change in the rest of the model, and no real progress is made. Also important is the ability of a changed value to have a substantial impact on the outputs of the system. Results will be condensed into graphs and tables for better communication and understanding of the data. By changing these parameters, a more optimal design can be created without having to purchase or build any models. Also, hours and hours of results can be simulated in minutes. In the long run, using mathematical models can save time and money. Along with this project, I have many other smaller assignments throughout the summer. My main goal is to assist in the processes of model development, validation, and testing.

  6. Multitasking Web Searching and Implications for Design.

    Science.gov (United States)

    Ozmutlu, Seda; Ozmutlu, H. C.; Spink, Amanda

    2003-01-01

    Findings from a study of users' multitasking searches on Web search engines include: multitasking searches are a noticeable user behavior; multitasking search sessions are longer than regular search sessions in terms of queries per session and duration; both Excite and AlltheWeb.com users search for about three topics per multitasking session and…

  7. Uncertainty visualisation in the Model Web

    Science.gov (United States)

    Gerharz, L. E.; Autermann, C.; Hopmann, H.; Stasch, C.; Pebesma, E.

    2012-04-01

    Visualisation of geospatial data as maps is a common way to communicate spatially distributed information. If temporal and, furthermore, uncertainty information are included in the data, efficient visualisation methods are required. For uncertain spatial and spatio-temporal data, numerous visualisation methods have been developed and proposed, but only few tools exist for visualising such data in a standardised way. Furthermore, they are usually realised as thick clients and lack functionality for handling data coming from web services, as envisaged in the Model Web. We present an interactive web tool for visualisation of uncertain spatio-temporal data developed in the UncertWeb project. The client is based on the OpenLayers JavaScript library. OpenLayers provides standard map windows and navigation tools, i.e. pan and zoom in/out, to allow interactive control for the user. Further interactive methods are implemented using jStat, a JavaScript library for statistics plots developed in UncertWeb, and flot. To integrate the uncertainty information into existing standards for geospatial data, the Uncertainty Markup Language (UncertML) was applied in combination with OGC Observations & Measurements 2.0 and JavaScript Object Notation (JSON) encodings for vector data and NetCDF for raster data. The client offers methods to visualise uncertain vector and raster data with temporal information. The uncertainty information considered for the tool comprises probabilistic and quantified attribute uncertainties, which can be provided as realisations or samples, full probability distribution functions, and statistics. Visualisation is supported for uncertain continuous and categorical data. In the client, the visualisation is realised using a combination of different methods. Based on previously conducted usability studies, a differentiation between expert (in statistics or mapping) and non-expert users has been indicated as useful. Therefore, two different modes are realised together in the tool

  8. WebQuests for Reflection and Conceptual Change: Variations on a Popular Model for Guided Inquiry.

    Science.gov (United States)

    Young, David L.; Wilson, Brent G.

    WebQuests have become a popular form of guided inquiry using Web resources. The goal of WebQuests is to help students think and reason at higher levels, and use information to solve problems. This paper presents modifications to the WebQuest model drawing primarily on schema theory. It is believed that these changes will further enhance student…

  9. An algebraic approach to modeling in software engineering

    International Nuclear Information System (INIS)

    Loegel, C.J.; Ravishankar, C.V.

    1993-09-01

    Our work couples the formalism of universal algebras with the engineering techniques of mathematical modeling to develop a new approach to the software engineering process. Our purpose in using this combination is twofold. First, abstract data types and their specification using universal algebras can be considered a common point between the practical requirements of software engineering and the formal specification of software systems. Second, mathematical modeling principles provide us with a means for effectively analyzing real-world systems. We first use modeling techniques to analyze a system and then represent the analysis using universal algebras. The rest of the software engineering process exploits properties of universal algebras that preserve the structure of our original model. This paper describes our software engineering process and our experience using it on both research and commercial systems. We need a new approach because current software engineering practices often deliver software that is difficult to develop and maintain. Formal software engineering approaches use universal algebras to describe "computer science" objects like abstract data types, but in practice software errors are often caused because "real-world" objects are improperly modeled. There is a large semantic gap between the customer's objects and abstract data types. In contrast, mathematical modeling uses engineering techniques to construct valid models for real-world systems, but these models are often implemented in an ad hoc manner. A combination of the best features of both approaches would enable software engineering to formally specify and develop software systems that better model real systems. Software engineering, like mathematical modeling, should concern itself first and foremost with understanding a real system and its behavior under given circumstances, and then with expressing this knowledge in an executable form.

  10. Numerical methods and modelling for engineering

    CERN Document Server

    Khoury, Richard

    2016-01-01

    This textbook provides a step-by-step approach to numerical methods in engineering modelling. The authors provide a consistent treatment of the topic, from the ground up, to reinforce for students that numerical methods are a set of mathematical modelling tools which allow engineers to represent real-world systems and compute features of these systems with a predictable error rate. Each method presented addresses a specific type of problem, namely root-finding, optimization, integral, derivative, initial value problem, or boundary value problem, and each one encompasses a set of algorithms to solve the problem given some information and to a known error bound. The authors demonstrate that after developing a proper model and understanding of the engineering situation they are working on, engineers can break down a model into a set of specific mathematical problems, and then implement the appropriate numerical methods to solve these problems. Uses a “building-block” approach, starting with simpler mathemati...
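    In the building-block style described above, each numerical method solves a specific problem type to a known error bound. As one example (mine, not drawn from the book), bisection root-finding halves a bracketing interval [a, b] each step, so after n steps the error is at most (b - a) / 2^n:

    ```python
    # Bisection: find a root of f in [a, b], assuming f(a) and f(b) differ
    # in sign. The loop halves the interval until it is shorter than tol,
    # which bounds the error of the returned midpoint by tol.
    def bisect(f, a, b, tol=1e-10):
        fa, fb = f(a), f(b)
        assert fa * fb < 0, "root must be bracketed"
        while b - a > tol:
            m = (a + b) / 2.0
            fm = f(m)
            if fa * fm <= 0:     # sign change in [a, m]: keep the left half
                b, fb = m, fm
            else:                # otherwise the root lies in [m, b]
                a, fa = m, fm
        return (a + b) / 2.0

    root = bisect(lambda x: x * x - 2.0, 0.0, 2.0)
    print(root)  # approximates sqrt(2)
    ```

    The predictable error rate is exactly what makes such methods composable building blocks: a caller can choose tol to match the accuracy the larger model requires.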

  11. Improving the Aircraft Design Process Using Web-based Modeling and Simulation

    Science.gov (United States)

    Reed, John A.; Follen, Gregory J.; Afjeh, Abdollah A.

    2003-01-01

    Designing and developing new aircraft systems is time-consuming and expensive. Computational simulation is a promising means for reducing design cycle times, but requires a flexible software environment capable of integrating advanced multidisciplinary and multifidelity analysis methods, dynamically managing data across heterogeneous computing platforms, and distributing computationally complex tasks. Web-based simulation, with its emphasis on collaborative composition of simulation models, distributed heterogeneous execution, and dynamic multimedia documentation, has the potential to meet these requirements. This paper outlines the current aircraft design process, highlighting its problems and complexities, and presents our vision of an aircraft design process using Web-based modeling and simulation.

  12. Modelling Potential Consequences of Different Geo-Engineering Treatments for the Baltic Sea Ecosystem

    Science.gov (United States)

    Schrum, C.; Daewel, U.

    2017-12-01

    From 1950 onwards, the Baltic Sea ecosystem suffered increasingly from eutrophication. The most obvious reason for the eutrophication is the huge amount of nutrients (nitrogen and phosphorus) reaching the Baltic Sea from human activities. However, although nutrient loads have been decreasing since 1980, the hypoxic areas have not decreased accordingly. Thus, geo-engineering projects were discussed and evaluated to artificially ventilate the Baltic Sea deep water and suppress nutrient release from the sediments. Here, we aim at understanding the consequences of proposed geo-engineering projects in the Baltic Sea using long-term scenario modelling. For that purpose, we utilize a 3D coupled ecosystem model, ECOSMO E2E, a novel NPZD-Fish model approach that resolves hydrodynamics, biogeochemical cycling, and lower and higher trophic level dynamics. We performed scenario modelling that considers proposed geo-engineering projects such as artificial ventilation of Baltic Sea deep waters and phosphorus binding in sediments with polyaluminium chlorides. The model indicates that deep-water ventilation indeed suppresses phosphorus release in the first 1-4 years of treatment. Thereafter, macrobenthos repopulates the formerly anoxic bottom regions and nutrients are increasingly recycled in the food web. Consequently, overall system productivity and fish biomass increase and toxic algae blooms decrease. However, deep-water ventilation has no long-lasting effect on the ecosystem: soon after completion of the ventilation process, the system returns to its original state. Artificial phosphorus binding in sediments, in contrast, decreases overall ecosystem productivity through permanent removal of phosphorus. As expected, it decreases bacterial production and toxic algae blooms, but it also decreases fish production substantially. In contrast to deep-water ventilation, artificial phosphorus binding shows a long-lasting effect over decades after termination of the treatment.

  13. Comparing the diversity of information by word-of-mouth vs. web spread

    Science.gov (United States)

    Sela, Alon; Shekhtman, Louis; Havlin, Shlomo; Ben-Gal, Irad

    2016-06-01

    Many studies have explored spreading and diffusion through complex networks. The following study examines a specific case of the spreading of opinions in modern society through two spreading schemes, defined as either “word of mouth” (WOM) or online search engines (WEB). We combine modelling with real experimental results and compare the opinions people adopt through exposure to their friends' opinions with the opinions they adopt when using a search engine based on the PageRank algorithm. A simulation study shows that when members of a population adopt decisions through the WEB scheme, the population ends up with a few dominant views, while other views are barely expressed. In contrast, when members adopt decisions based on the WOM scheme, there is a far more diverse distribution of opinions in the population. The simulation results are further supported by an online experiment which finds that people searching for information through a search engine end up with far more homogeneous opinions than those asking their friends.
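
    The diversity contrast described above can be reproduced in a toy model: in a "WEB"-like scheme adopters copy opinions with a popularity-amplifying (ranking-like) bias, while in a "WOM"-like scheme they copy one randomly met individual. The sketch below is an illustrative assumption, not the authors' actual simulation; diversity is measured as Shannon entropy:

```python
import math
import random

def entropy(counts):
    """Shannon entropy (bits) of an opinion-count distribution."""
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values() if c)

def simulate(scheme, n_agents=2000, n_opinions=50, seed=0):
    """Toy opinion-adoption process (illustrative only, not the paper's model)."""
    rng = random.Random(seed)
    opinions = [rng.randrange(n_opinions) for _ in range(10)]  # seed population
    for _ in range(n_agents - len(opinions)):
        counts = {}
        for o in opinions:
            counts[o] = counts.get(o, 0) + 1
        if scheme == "WEB":
            # popularity-squared weighting mimics a ranked results page
            ops = list(counts)
            opinions.append(rng.choices(ops, [counts[o] ** 2 for o in ops])[0])
        else:  # WOM: copy one randomly met individual (linear in popularity)
            opinions.append(rng.choice(opinions))
    counts = {}
    for o in opinions:
        counts[o] = counts.get(o, 0) + 1
    return entropy(counts)

# average over a few seeds to smooth out run-to-run noise
web_H = sum(simulate("WEB", seed=s) for s in range(5)) / 5
wom_H = sum(simulate("WOM", seed=s) for s in range(5)) / 5
print(f"WEB entropy: {web_H:.2f} bits, WOM entropy: {wom_H:.2f} bits")
```

    The superlinear (ranking-like) weighting drives the population toward a few dominant views, so its entropy ends up well below that of the linear word-of-mouth scheme.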

  14. Ontology Translation: The Semiotic Engineering of Content Management Systems

    Directory of Open Access Journals (Sweden)

    Alejandro Villamarin M.

    2015-12-01

    The present paper proposes the application of Semiotic Engineering theory to Content Management Systems (CMS), focusing on the analysis of how the use of different ontologies can affect the user's efficiency when performing tasks in a CMS. The analysis is performed using the theoretical semiotic Web-Semiotic Interface Design Evaluation (W-SIDE) model.

  15. Quality of Web-Based Information on Cannabis Addiction

    Science.gov (United States)

    Khazaal, Yasser; Chatton, Anne; Cochand, Sophie; Zullino, Daniele

    2008-01-01

    This study evaluated the quality of Web-based information on cannabis use and addiction and investigated particular content quality indicators. Three keywords ("cannabis addiction," "cannabis dependence," and "cannabis abuse") were entered into two popular World Wide Web search engines. Websites were assessed with a standardized proforma designed…

  16. Semantic Web Services Challenge, Results from the First Year. Series: Semantic Web And Beyond, Volume 8.

    Science.gov (United States)

    Petrie, C.; Margaria, T.; Lausen, H.; Zaremba, M.

    Explores trade-offs among existing approaches. Reveals strengths and weaknesses of proposed approaches, as well as which aspects of the problem are not yet covered. Introduces a software engineering approach to evaluating semantic web services. Service-Oriented Computing is one of the most promising software engineering trends because of its potential to reduce the programming effort for future distributed industrial systems. However, only a small part of this potential rests on the standardization of tools offered by the web services stack. The larger part rests upon the development of sufficient semantics to automate service orchestration. Currently there are many different approaches to semantic web service descriptions and many frameworks built around them. A common understanding, evaluation scheme, and test bed to compare and classify these frameworks in terms of their capabilities and shortcomings is necessary to make progress in developing the full potential of Service-Oriented Computing. The Semantic Web Services Challenge is an open-source initiative that provides a public evaluation and certification of multiple frameworks on common, industrially relevant problem sets. This edited volume reports on the first results in developing a common understanding of the various technologies intended to facilitate the automation of mediation, choreography and discovery for Web Services using semantic annotations. Semantic Web Services Challenge: Results from the First Year is designed for a professional audience composed of practitioners and researchers in industry. Professionals can use this book to evaluate SWS technology for its potential practical use. The book is also suitable for advanced-level students in computer science.

  17. Reflect: a practical approach to web semantics

    DEFF Research Database (Denmark)

    O'Donoghue, S.I.; Horn, Heiko; Pafilisa, E.

    2010-01-01

    To date, adding semantic capabilities to web content usually requires considerable server-side re-engineering, thus only a tiny fraction of all web content currently has semantic annotations. Recently, we announced Reflect (http://reflect.ws), a free service that takes a more practical approach......: Reflect uses augmented browsing to allow end-users to add systematic semantic annotations to any web-page in real-time, typically within seconds. In this paper we describe the tagging process in detail and show how further entity types can be added to Reflect; we also describe how publishers and content...... web technologies....

  18. Ecological-network models link diversity, structure and function in the plankton food-web

    Science.gov (United States)

    D'Alelio, Domenico; Libralato, Simone; Wyatt, Timothy; Ribera D'Alcalà, Maurizio

    2016-02-01

    A planktonic food-web model including sixty-three functional nodes (representing auto-, mixo- and heterotrophs) was developed to integrate most of the trophic diversity present in the plankton. The model was implemented in two variants, which we named ‘green’ and ‘blue’, characterized by opposite amounts of phytoplankton biomass and representing, respectively, bloom and non-bloom states of the system. The taxonomically disaggregated food-webs described herein allowed us to shed light on how components of the plankton community changed their trophic behavior in the two different conditions and modified the overall functioning of the plankton food web. The green and blue food-webs showed distinct organizations in terms of the trophic roles of the nodes and the carbon fluxes between them. This re-organization stemmed from switches in selective grazing by both metazoan and protozoan consumers. Switches in food-web structure resulted in relatively small differences in the efficiency of material transfer towards higher trophic levels. For instance, from the green to the blue state, a seven-fold decrease in phytoplankton biomass translated into only a two-fold decrease in potential planktivorous fish biomass. By linking diversity, structure and function in the plankton food-web, we discuss the role of internal mechanisms, relying on species-specific functionalities, in driving the ‘adaptive’ responses of plankton communities to perturbations.

  19. ConsExpo Web. Consumer exposure models - model documentation : Update for ConsExpo Web 1.0.2

    NARCIS (Netherlands)

    Delmaar JE; Schuur AG; CPV; VSP

    2018-01-01

    RIVM has developed a manual for ConsExpo Web. This web application has been developed for use by exposure experts and risk assessors to estimate exposure to chemical substances from various products under various exposure conditions. Exposure assessments provide necessary information for the

  20. Federated Search and the Library Web Site: A Study of Association of Research Libraries Member Web Sites

    Science.gov (United States)

    Williams, Sarah C.

    2010-01-01

    The purpose of this study was to investigate how federated search engines are incorporated into the Web sites of libraries in the Association of Research Libraries. In 2009, information was gathered for each library in the Association of Research Libraries with a federated search engine. This included the name of the federated search service and…

  1. Web analytics tools and web metrics tools: An overview and comparative analysis

    Directory of Open Access Journals (Sweden)

    Ivan Bekavac

    2015-10-01

    The aim of the paper is to compare and analyze the impact of web analytics tools for measuring the performance of a business model. Accordingly, an overview of web analytics and web metrics tools is given, including their characteristics, main functionalities and available types. The data acquisition approaches and the proper choice of web tools for particular business models are also reviewed. The research is divided into two sections. The first takes a qualitative focus, reviewing web analytics tools to explore their functionalities and their ability to be integrated into the respective business model. Web analytics tools support the business analyst's efforts to obtain useful and relevant insights into market dynamics. Thus, generally speaking, selecting a web analytics and web metrics tool should be based on an investigative approach, not a random decision. The second section takes a quantitative focus, shifting from theory to an empirical approach, and presents output data resulting from a study based on perceived user satisfaction with web analytics tools. The empirical study was carried out on employees from 200 Croatian firms in either the IT or the marketing branch. The paper contributes by highlighting the support that the web analytics and web metrics tools available on the market can offer management, based on the growing need to understand and predict global market trends.

  2. Web service composition: a semantic web and automated planning technique application

    Directory of Open Access Journals (Sweden)

    Jaime Alberto Guzmán Luna

    2008-09-01

    This article proposes applying semantic web and artificial intelligence planning techniques to a web services composition model, addressing problems of ambiguity in web service descriptions and the handling of incomplete web information. The model uses OWL-S services and implements a planning technique that handles open-world semantics in its reasoning process to resolve these problems. The result is a web services composition system incorporating a module for interpreting OWL-S services and converting them into a planning problem in PDDL, a planning module handling incomplete information, and an execution service module that interacts concurrently with the planner to execute each service of the composition plan.

  3. Classification and moral evaluation of uncertainties in engineering modeling.

    Science.gov (United States)

    Murphy, Colleen; Gardoni, Paolo; Harris, Charles E

    2011-09-01

    Engineers must deal with risks and uncertainties as a part of their professional work and, in particular, uncertainties are inherent to engineering models. Models play a central role in engineering. Models often represent an abstract and idealized version of the mathematical properties of a target. Using models, engineers can investigate and acquire understanding of how an object or phenomenon will perform under specified conditions. This paper defines the different stages of the modeling process in engineering, classifies the various sources of uncertainty that arise in each stage, and discusses the categories into which these uncertainties fall. The paper then considers the way uncertainty and modeling are approached in science and the criteria for evaluating scientific hypotheses, in order to highlight the very different criteria appropriate for the development of models and the treatment of the inherent uncertainties in engineering. Finally, the paper puts forward nine guidelines for the treatment of uncertainty in engineering modeling.

  4. Marketing for a Web-Based Master's Degree Program in Light of Marketing Mix Model

    Science.gov (United States)

    Pan, Cheng-Chang

    2012-01-01

    The marketing mix model was applied with a focus on Web media to re-strategize a Web-based Master's program in a southern state university in U.S. The program's existing marketing strategy was examined using the four components of the model: product, price, place, and promotion, in hopes to repackage the program (product) to prospective students…

  5. Overview of the TREC 2014 Federated Web Search Track

    OpenAIRE

    Demeester, Thomas; Trieschnigg, Rudolf Berend; Nguyen, Dong-Phuong; Zhou, Ke; Hiemstra, Djoerd

    2014-01-01

    The TREC Federated Web Search track facilitates research in topics related to federated web search by providing a large, realistic data collection sampled from a multitude of online search engines. The FedWeb 2013 challenges of Resource Selection and Results Merging are again included in FedWeb 2014, and we additionally introduced the task of vertical selection. Other new aspects are the required link between Resource Selection and Results Merging, and the importance of diversi...

  6. Automated Security Testing of Web Widget Interactions

    NARCIS (Netherlands)

    Bezemer, C.P.; Mesbah, A.; Van Deursen, A.

    2009-01-01

    This paper is a pre-print of: Cor-Paul Bezemer, Ali Mesbah, and Arie van Deursen. Automated Security Testing of Web Widget Interactions. In Proceedings of the 7th joint meeting of the European Software Engineering Conference and the ACM SIGSOFT Symposium on the Foundations of Software Engineering

  7. Propulsion Controls Modeling for a Small Turbofan Engine

    Science.gov (United States)

    Connolly, Joseph W.; Csank, Jeffrey T.; Chicatelli, Amy; Franco, Kevin

    2017-01-01

    A nonlinear dynamic model and propulsion controller are developed for a small-scale turbofan engine. The small-scale turbofan engine is based on the Price Induction company's DGEN 380, one of the few turbofan engines targeted for the personal light jet category. Comparisons of the nonlinear dynamic turbofan engine model to actual DGEN 380 engine test data and a Price Induction simulation are provided. During engine transients, the nonlinear model typically agrees within 10 percent error, even though it was developed from limited available engine data. A gain-scheduled proportional-integral low-speed shaft controller with limiter safety logic is created to replicate the baseline DGEN 380 controller. The new controller provides the desired gain and phase margins and is verified to meet Federal Aviation Administration transient propulsion system requirements. To understand the benefits, there is a need to move beyond simulation and demonstrate advanced control architectures and technologies on real-time systems and hardware. The small-scale DGEN 380 provides a cost-effective means to accomplish advanced controls testing on a relevant turbofan engine platform.
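
    A gain-scheduled proportional-integral controller with limiter logic of the kind described can be sketched as follows; the gains, fuel limit and first-order engine surrogate are invented for illustration and are not DGEN 380 values:

```python
# Hedged sketch of a gain-scheduled PI shaft-speed controller with a fuel
# limiter and simple anti-windup (all numbers are illustrative assumptions).

def scheduled_gains(n):
    """Linearly interpolate (kp, ki) between a low- and a high-speed gain set."""
    lo, hi = (0.002, 0.004), (0.001, 0.002)    # assumed gain pairs
    w = min(max((n - 20000.0) / 20000.0, 0.0), 1.0)
    return (lo[0] + w * (hi[0] - lo[0]), lo[1] + w * (hi[1] - lo[1]))

def run(setpoint=35000.0, t_end=10.0, dt=0.01):
    n, integ, u_max = 15000.0, 0.0, 60.0       # rpm, integrator state, fuel limit
    history = []
    for _ in range(int(t_end / dt)):
        kp, ki = scheduled_gains(n)
        err = setpoint - n
        u = kp * err + ki * integ
        if 0.0 <= u <= u_max:                  # anti-windup: only integrate
            integ += err * dt                  # while the limiter is inactive
        u = min(max(u, 0.0), u_max)            # limiter safety logic
        # first-order engine surrogate: 1000 rpm per fuel unit, tau = 0.8 s
        n += dt * (1000.0 * u - n) / 0.8
        history.append(n)
    return history

hist = run()
print(f"final speed: {hist[-1]:.0f} rpm")
```

    Freezing the integrator while the limiter is active is one common anti-windup choice; without it, the integral term keeps growing during saturation and causes large overshoot on recovery.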

  8. AN OVERVIEW OF SEARCHING AND DISCOVERING WEB BASED INFORMATION RESOURCES

    Directory of Open Access Journals (Sweden)

    Cezar VASILESCU

    2010-01-01

    The Internet has become a daily instrument for most of us, for professional or personal reasons. We barely remember the times when a computer and a broadband connection were luxury items. More and more people rely on the complicated web network to find the information they need. This paper presents an overview of Internet search issues and search engines, and describes the parties and the basic mechanism embedded in a search for web-based information resources. It also presents ways to increase the efficiency of web searches through a better understanding of what search engines ignore in website content.

  9. Building web information systems using web services

    NARCIS (Netherlands)

    Frasincar, F.; Houben, G.J.P.M.; Barna, P.; Vasilecas, O.; Eder, J.; Caplinskas, A.

    2006-01-01

    Hera is a model-driven methodology for designing Web information systems. In the past a CASE tool for the Hera methodology was implemented. This software had different components that together form one centralized application. In this paper, we present a distributed Web service-oriented architecture

  10. Quality analysis of patient information about knee arthroscopy on the World Wide Web.

    Science.gov (United States)

    Sambandam, Senthil Nathan; Ramasamy, Vijayaraj; Priyanka, Priyanka; Ilango, Balakrishnan

    2007-05-01

    This study was designed to ascertain the quality of patient information available on the World Wide Web on the topic of knee arthroscopy. For the purpose of quality analysis, we used a pool of 232 search results obtained from 7 different search engines. We used a modified assessment questionnaire to assess the quality of these Web sites. This questionnaire was developed based on similar studies evaluating Web site quality and includes items on illustrations, accessibility, availability, accountability, and content of the Web site. We also compared the results obtained with different search engines and tried to establish the best possible search strategy to attain the most relevant, authentic, and adequate information with minimum time consumption. For this purpose, we first compared 100 search results from the single most commonly used search engine (AltaVista) with the pooled sample containing 20 search results from each of the 7 different search engines. The search engines used were metasearch (Copernic and Mamma), general search (Google, AltaVista, and Yahoo), and health topic-related search engines (MedHunt and Healthfinder). The phrase "knee arthroscopy" was used as the search terminology. Excluding repetitions, 117 Web sites were available for quality analysis. These sites were analyzed for accessibility, relevance, authenticity, adequacy, and accountability using a specially designed questionnaire. Our analysis showed that most of the sites providing patient information on knee arthroscopy contained outdated information, were inadequate, and were not accountable. Only 16 sites were found to provide reasonably good patient information and hence can be recommended to patients. Understandably, most of these sites were from nonprofit organizations and educational institutions. Furthermore, our study revealed that using multiple search engines increases patients' chances of obtaining relevant information compared with using a single search engine.

  11. GLIDERS - A web-based search engine for genome-wide linkage disequilibrium between HapMap SNPs

    Directory of Open Access Journals (Sweden)

    Broxholme John

    2009-10-01

    Background: A number of tools exist for examining linkage disequilibrium (LD) patterns between nearby alleles, but none are available for quickly and easily investigating LD at longer ranges (>500 kb). We have developed a web-based query tool (GLIDERS: Genome-wide LInkage DisEquilibrium Repository and Search engine) that enables the retrieval of pairwise associations with r2 ≥ 0.3 across the human genome for any SNP genotyped within HapMap phase 2 and 3, regardless of the distance between the markers. Description: GLIDERS is an easy-to-use web tool that only requires the user to enter the rs numbers of the SNPs for which they want to retrieve genome-wide LD (both nearby and long-range). The intuitive web interface handles both manual entry of SNP IDs and upload of files of SNP IDs. The user can limit the resulting inter-SNP associations with easy-to-use menu options, including a MAF limit (5-45%), distance limits between SNPs (minimum and maximum), r2 (0.3 to 1), HapMap population sample (CEU, YRI and JPT+CHB combined) and HapMap build/release. All resulting genome-wide inter-SNP associations are displayed on a single output page, which has a link to a downloadable tab-delimited text file. Conclusion: GLIDERS is a quick and easy way to retrieve genome-wide inter-SNP associations and to explore LD patterns for any number of SNPs of interest. GLIDERS can be useful in identifying SNPs with long-range LD, which can highlight mis-mapping or other potential association signal localisation problems.
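
    The r2 statistic that GLIDERS thresholds at 0.3 is the standard squared allelic correlation between two loci. A minimal computation from phased two-locus haplotypes (an illustrative sketch, not GLIDERS code) looks like this:

```python
def ld_r2(haplotypes):
    """r^2 between two biallelic loci from a list of two-locus haplotypes.

    `haplotypes` is a list of (allele1, allele2) pairs coded 0/1, one pair
    per chromosome (i.e. phased data, as in HapMap).
    """
    n = len(haplotypes)
    p_a = sum(h[0] for h in haplotypes) / n        # freq of allele 1 at locus A
    p_b = sum(h[1] for h in haplotypes) / n        # freq of allele 1 at locus B
    p_ab = sum(1 for h in haplotypes if h == (1, 1)) / n
    d = p_ab - p_a * p_b                           # disequilibrium coefficient D
    return d * d / (p_a * (1 - p_a) * p_b * (1 - p_b))

# Perfect LD: allele states always coincide -> r^2 = 1
perfect = [(0, 0)] * 50 + [(1, 1)] * 50
# Linkage equilibrium: all four haplotypes equally frequent -> r^2 = 0
equil = [(0, 0), (0, 1), (1, 0), (1, 1)] * 25
print(ld_r2(perfect), ld_r2(equil))
```

    Note that r2 requires phased haplotype counts (or an estimate of them); unphased genotype data needs an extra phasing or EM step before D can be computed.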

  12. Coupled dynamic-multidimensional modelling of free-piston engine combustion

    International Nuclear Information System (INIS)

    Mikalsen, R.; Roskilly, A.P.

    2009-01-01

    Free-piston engines are under investigation by a number of research groups worldwide, as an alternative to conventional technology in applications such as electric and hydraulic power generation. The piston dynamics of the free-piston engine differ significantly from those of conventional engines, and this may influence in-cylinder gas motion, combustion and emissions formation. Due to the complex interaction between mechanics and thermodynamics, the modelling of free-piston engines is not straight-forward. This paper presents a novel approach to the modelling of free-piston engines through the introduction of solution-dependent mesh motion in an engine CFD code. The particular features of free-piston engines are discussed, and the model for engine dynamics implemented in the CFD code is described. Finally, the coupled solver is demonstrated through the modelling of a spark ignited free-piston engine generator.

  13. Coupled dynamic-multidimensional modelling of free-piston engine combustion

    Energy Technology Data Exchange (ETDEWEB)

    Mikalsen, R. [Sir Joseph Swan Institute for Energy Research, Newcastle University, Newcastle upon Tyne NE1 7RU (United Kingdom); Roskilly, A.P. [Sir Joseph Swan Institute for Energy Research, Newcastle University, Newcastle upon Tyne NE1 7RU (United Kingdom)], E-mail: tony.roskilly@ncl.ac.uk

    2009-01-15

    Free-piston engines are under investigation by a number of research groups worldwide, as an alternative to conventional technology in applications such as electric and hydraulic power generation. The piston dynamics of the free-piston engine differ significantly from those of conventional engines, and this may influence in-cylinder gas motion, combustion and emissions formation. Due to the complex interaction between mechanics and thermodynamics, the modelling of free-piston engines is not straight-forward. This paper presents a novel approach to the modelling of free-piston engines through the introduction of solution-dependent mesh motion in an engine CFD code. The particular features of free-piston engines are discussed, and the model for engine dynamics implemented in the CFD code is described. Finally, the coupled solver is demonstrated through the modelling of a spark ignited free-piston engine generator.
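
    The coupling between piston dynamics and cylinder thermodynamics that makes free-piston modelling non-trivial can be illustrated with a far simpler lumped model than the CFD approach above: a piston bouncing between two adiabatic gas springs, where the piston position sets the gas pressures and the pressures in turn drive the piston. All parameters below are invented for illustration:

```python
# Minimal free-piston dynamics sketch (illustrative parameters, not the
# paper's engine): a piston of mass m between two gas volumes acting as
# adiabatic springs, m * x'' = A * (p_left - p_right).
m, A, gamma = 5.0, 0.005, 1.4          # kg, m^2, heat-capacity ratio
L, p0 = 0.1, 1.0e5                     # chamber half-length (m), rest pressure (Pa)
dt, steps = 1e-5, 20000

def pressure(vol):
    """Adiabatic gas-spring pressure: p * V^gamma = p0 * V0^gamma."""
    v0 = A * L
    return p0 * (v0 / vol) ** gamma

x, v = 0.02, 0.0                       # initial offset from mid-stroke (m)
xs = []
for _ in range(steps):
    p_left = pressure(A * (L + x))     # piston position sets both volumes...
    p_right = pressure(A * (L - x))
    a = A * (p_left - p_right) / m     # ...and the pressure difference drives it
    v += a * dt                        # semi-implicit (symplectic) Euler
    x += v * dt
    xs.append(x)

print(f"stroke range: [{min(xs):.4f}, {max(xs):.4f}] m")
```

    With no crankshaft to prescribe the trajectory, the stroke emerges from this force balance, which is why the CFD mesh motion must be solution-dependent rather than imposed.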

  14. Students' Attitude in a Web-enhanced Hybrid Course: A Structural Equation Modeling Inquiry

    OpenAIRE

    Cheng-Chang Sam Pan; Stephen Sivo; James Brophy

    2003-01-01

    The present study focuses on five latent factors affecting students' use of WebCT in a Web-enhanced hybrid undergraduate course at a southeastern university in the United States. An online questionnaire is used to measure a hypothesized model composed of two exogenous variables (i.e., subjective norm and computer self-efficacy), three endogenous variables (i.e., perceived ease of use, perceived usefulness, and attitude toward WebCT use), one dependent variable (i.e., actual system use), and elev...

  15. Perancangan Sistem Informasi Data Alumni Fakultas Teknik Unsrat Berbasis Web (Design of a Web-Based Alumni Data Information System for the Faculty of Engineering, Unsrat)

    OpenAIRE

    Watung, Ivan Arifard; Sinsuw, Alicia A. E

    2014-01-01

    Information technology has become the primary choice for creating information systems that can provide accurate and precise information. The background of this work is an alumni data information system that still relies on manual data processing. The purpose of this system is to design a web-based information system. The system design uses the waterfall method, comprising the steps of information system engineering, requirements analysis, design, coding, testing, and maintenance. Modeling uses Flowmap or Flowchart, Context Diagr...

  16. Problem-Based Learning in Web Environments: The Case of ``Virtual eBMS'' for Business Engineering Education

    Science.gov (United States)

    Elia, Gianluca; Secundo, Giustina; Taurino, Cesare

    This chapter presents a case study where the Problem-Based Learning (PBL) approach is applied to a Web-based environment. It first describes the main features behind PBL for creating business engineers able to face the grand technological challenges of 2020. It then introduces a Web-based system supporting the PBL strategy, called the “Virtual eBMS”. This system has been designed and implemented at the e-Business Management Section of the Scuola Superiore ISUFI - University of Salento (Italy), in the framework of a research project carried out in collaboration with IBM. Besides the logical and technological description of Virtual eBMS, the chapter presents two applications of the platform in two different contexts: an academic context (an international master's programme) and an entrepreneurial context (an awareness workshop with companies and entrepreneurs). The system is illustrated starting from the description of an operational framework for designing PBL-based curricula from the author's perspective, and then by illustrating a typical scenario of a learner accessing the curricula. The description highlights both the “structured” and the “unstructured” ways to create and follow an entire learning path.

  17. Development of Web GIS-Based VFSMOD System with Three Modules for Effective Vegetative Filter Strip Design

    Directory of Open Access Journals (Sweden)

    Dong Soo Kong

    2013-08-01

    In recent years, non-point source pollution has been rising as a significant environmental issue. The sediment-laden water problem is causing serious impacts on river ecosystems, not only in South Korea but also in most countries. The vegetative filter strip (VFS) is thought to be one of the most effective methods to reduce the transport of sediment to down-gradient areas. However, the effective width of the VFS first needs to be determined before VFS installation in the field. To provide an easy-to-use interface to a scientific VFS modeling engine, the Web GIS-based VFSMOD system was developed in this study. The Web GIS-based VFSMOD uses the UH and VFSM executable programs from the VFSMOD-w model as core engines to simulate rainfall-runoff and sediment trapping. To provide soil information for a point of interest, the Google Map interface to the MapServer soil database system was developed using the Google Map API, Javascript, Perl/CGI, and Oracle DB programming. Three modules of the Web GIS-based VFSMOD system were developed for various VFS designs under single-storm, multiple-storm, and long-term period scenarios. These modules were applied to the study watershed in South Korea and proved to be efficient tools for VFS design for various purposes.

  18. Measuring Personalization of Web Search

    DEFF Research Database (Denmark)

    Hannak, Aniko; Sapiezynski, Piotr; Kakhki, Arash Molavi

    2013-01-01

    are simply unable to access information that the search engines’ algorithm decides is irrelevant. Despite these concerns, there has been little quantification of the extent of personalization in Web search today, or the user attributes that cause it. In light of this situation, we make three contributions...... as a result of searching with a logged-in account and the IP address of the searching user. Our results are a first step towards understanding the extent and effects of personalization on Web search engines today....
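
    Personalization experiments of this kind typically compare the result list returned to a treated account against a simultaneous control query. Two simple comparison metrics, sketched here as an illustration (not necessarily the paper's exact methodology), are set overlap and mean rank displacement:

```python
def jaccard(a, b):
    """Jaccard overlap of the URL sets in two result pages."""
    sa, sb = set(a), set(b)
    return len(sa & sb) / len(sa | sb)

def rank_shift(a, b):
    """Mean absolute rank change for URLs present in both ranked lists."""
    pos_b = {url: i for i, url in enumerate(b)}
    common = [url for url in a if url in pos_b]
    if not common:
        return None
    return sum(abs(i - pos_b[url])
               for i, url in enumerate(a) if url in pos_b) / len(common)

# Hypothetical result pages for one query (placeholder URLs):
control = ["u1", "u2", "u3", "u4", "u5"]   # logged-out baseline results
treated = ["u1", "u3", "u2", "u6", "u5"]   # logged-in account results
print(jaccard(control, treated), rank_shift(control, treated))
```

    Jaccard captures wholesale substitution of results, while rank displacement captures subtler reordering; a careful study measures both, since personalization can reorder without replacing.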

  19. IMPORTANCE OF TEMPERATURE IN MODELLING PCB BIOACCUMULATION IN THE LAKE MICHIGAN FOOD WEB

    Science.gov (United States)

    In most food web models, the exposure temperature of a food web is typically defined using a single spatial compartment. This essentially assumes that the predator and prey are exposed to the same temperature. However, in a large water body such as Lake Michigan, due to the spati...

  20. Vibration modelling and verifications for whole aero-engine

    Science.gov (United States)

    Chen, G.

    2015-08-01

    In this study, a new rotor-ball-bearing-casing coupling dynamic model for a practical aero-engine is established. In the coupling system, the rotor and casing systems are modelled using the finite element method, support systems are modelled as lumped parameter models, nonlinear factors of ball bearings and faults are included, and four types of supports and connection models are defined to model the complex rotor-support-casing coupling system of the aero-engine. A new numerical integral method that combines the Newmark-β method and the improved Newmark-β method (Zhai method) is used to obtain the system responses. Finally, the new model is verified in three ways: (1) modal experiment based on rotor-ball bearing rig, (2) modal experiment based on rotor-ball-bearing-casing rig, and (3) fault simulations for a certain type of missile turbofan aero-engine vibration. The results show that the proposed model can not only simulate the natural vibration characteristics of the whole aero-engine but also effectively perform nonlinear dynamic simulations of a whole aero-engine with faults.
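
    The Newmark-β scheme referenced above advances displacement, velocity and acceleration together; the widely used average-acceleration variant (β = 1/4, γ = 1/2) is sketched below for a single-degree-of-freedom system, an illustrative reduction of the paper's coupled rotor-bearing-casing model:

```python
import math

def newmark_beta(m, c, k, x0, v0, dt, steps, beta=0.25, gamma=0.5):
    """Newmark-beta integration of m*x'' + c*x' + k*x = 0.

    beta=1/4, gamma=1/2 is the average-acceleration variant, which is
    unconditionally stable for linear systems."""
    x, v = x0, v0
    a = (-c * v - k * x) / m                       # consistent initial acceleration
    out = [x]
    k_eff = m / (beta * dt**2) + gamma * c / (beta * dt) + k
    for _ in range(steps):
        rhs = (m * (x / (beta * dt**2) + v / (beta * dt)
                    + (1 / (2 * beta) - 1) * a)
               + c * (gamma * x / (beta * dt) + (gamma / beta - 1) * v
                      + dt * (gamma / (2 * beta) - 1) * a))
        x_new = rhs / k_eff
        a_new = ((x_new - x) / (beta * dt**2)
                 - v / (beta * dt) - (1 / (2 * beta) - 1) * a)
        v = v + dt * ((1 - gamma) * a + gamma * a_new)
        x, a = x_new, a_new
        out.append(x)
    return out

# Undamped oscillator: m = 1, k = 4*pi^2 -> natural period T = 1 s
xs = newmark_beta(m=1.0, c=0.0, k=4 * math.pi**2, x0=1.0, v0=0.0,
                  dt=0.001, steps=1000)
print(f"x(0) = {xs[0]:.4f}, x(T) = {xs[-1]:.4f}")  # one full period later
```

    After exactly one natural period the displacement returns to its initial value to within the scheme's small period-elongation error, a standard accuracy check before applying the integrator to nonlinear bearing forces.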

  1. Systems Security Engineering Capability Maturity Model SSE-CMM Model Description Document

    National Research Council Canada - National Science Library

    1999-01-01

    The Systems Security Engineering Capability Maturity Model (SSE-CMM) describes the essential characteristics of an organization's security engineering process that must exist to ensure good security engineering...

  2. Research and Teaching: WikiED--Using Web 2.0 Tools to Teach Content and Critical Thinking

    Science.gov (United States)

    Frisch, Jennifer K.; Jackson, Paula C.; Murray, Meg C.

    2013-01-01

    WIKIed Biology is a National Science Foundation Transforming Undergraduate Education in Science, Technology, Engineering, and Mathematics interdisciplinary project in which the authors developed and implemented a model for student centered, inquiry-driven instruction using Web 2.0 technologies to increase inquiry and conceptual understanding in…

  3. Qualitative models for space system engineering

    Science.gov (United States)

    Forbus, Kenneth D.

    1990-01-01

    The objectives of this project were: (1) to investigate the implications of qualitative modeling techniques for problems arising in the monitoring, diagnosis, and design of Space Station subsystems and procedures; (2) to identify the issues involved in using qualitative models to enhance and automate engineering functions. These issues include representing operational criteria, fault models, alternate ontologies, and modeling continuous signals at a functional level of description; and (3) to develop a prototype collection of qualitative models for fluid and thermal systems commonly found in Space Station subsystems. Potential applications of qualitative modeling to space-systems engineering, including the notion of intelligent computer-aided engineering are summarized. Emphasis is given to determining which systems of the proposed Space Station provide the most leverage for study, given the current state of the art. Progress on using qualitative models, including development of the molecular collection ontology for reasoning about fluids, the interaction of qualitative and quantitative knowledge in analyzing thermodynamic cycles, and an experiment on building a natural language interface to qualitative reasoning is reported. Finally, some recommendations are made for future research.

  4. Development of Web-Based RECESS Model for Estimating Baseflow Using SWAT

    Directory of Open Access Journals (Sweden)

    Gwanjae Lee

    2014-04-01

    Groundwater has received increasing attention as an important strategic water resource for adaptation to climate change. In this regard, the separation of baseflow from streamflow and the analysis of recession curves make a significant contribution to integrated river basin management. The United States Geological Survey (USGS) RECESS model, adopting the master-recession curve (MRC) method, can enhance the accuracy with which baseflow may be separated from streamflow, compared to other baseflow-separation schemes that are more limited in their ability to reflect various watershed/aquifer characteristics. The RECESS model has been widely used for the analysis of hydrographs, but its applications were only available through the Microsoft Disk Operating System (MS-DOS). Thus, this study aims to develop a web-based RECESS model for easy separation of baseflow from streamflow, with straightforward application to ungauged regions. RECESS on the web derives the alpha factor, the baseflow recession constant in the Soil and Water Assessment Tool (SWAT), and this variable is provided to SWAT as input. The results showed that the alpha factor estimated from the web-based RECESS model improved the predictions of streamflow and recession. Furthermore, these findings showed that the baseflow characteristics of the ungauged watersheds were influenced by the land use and slope angle of the watersheds, as well as by precipitation and streamflow.
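
    The alpha factor passed to SWAT is the recession constant of the exponential baseflow model Q(t) = Q0 * exp(-alpha * t). A minimal way to estimate it from a single recession limb, a simplification of the master-recession-curve fitting RECESS performs over many segments, is a log-linear least-squares fit:

```python
import math

def fit_alpha(flows, dt=1.0):
    """Estimate alpha in Q(t) = Q0 * exp(-alpha * t) by ordinary
    least squares on ln(Q) over one recession limb (illustrative method;
    RECESS fits a master-recession curve over many such segments)."""
    ts = [i * dt for i in range(len(flows))]
    ys = [math.log(q) for q in flows]
    n = len(ts)
    mt, my = sum(ts) / n, sum(ys) / n
    slope = (sum((t - mt) * (y - my) for t, y in zip(ts, ys))
             / sum((t - mt) ** 2 for t in ts))
    return -slope                              # alpha = -d(ln Q)/dt

# Synthetic recession limb: Q0 = 12 m3/s, alpha = 0.05 per day, 30 daily values
q = [12.0 * math.exp(-0.05 * t) for t in range(30)]
alpha = fit_alpha(q)
print(f"fitted alpha = {alpha:.4f} per day")
```

    On real hydrographs the limb must first be isolated (flow strictly receding, no rainfall), which is where most of the method's practical difficulty lies.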

  5. Cycle Engine Modelling Of Spark Ignition Engine Processes during Wide-Open Throttle (WOT) Engine Operation Running By Gasoline Fuel

    International Nuclear Information System (INIS)

    Rahim, M F Abdul; Rahman, M M; Bakar, R A

    2012-01-01

    A one-dimensional engine model is developed to simulate spark-ignition engine processes in a four-stroke, four-cylinder gasoline engine. The baseline engine is an inline-cylinder engine with three valves per cylinder. Currently, the engine's mixture is formed externally using a piston-type carburettor. The model is based on the one-dimensional equations of the gas exchange process, isentropic compression and expansion, and a progressive combustion process, and it accounts for heat transfer and frictional losses as well as the effect of valve overlap. The model is tested at engine speeds of 2000, 3000 and 4000 rpm and validated against experimental engine data. Results showed that the model is able to simulate the engine's combustion process and produce reasonable predictions. However, compared with the experimental data, major discrepancies are noticeable, especially in the 2000 and 4000 rpm predictions. At low and high engine speeds, the simulated cylinder pressures tend to under-predict the measured data, whereas the cylinder temperatures tend to over-predict the measured data at all engine speeds. The most accurate prediction is obtained at the medium engine speed of 3000 rpm. An appropriate wall heat transfer setup is vital for more precise calculation of cylinder pressure and temperature: more heat loss to the wall lowers the cylinder temperature, while more heat converted to useful work means an increase in cylinder pressure. Thus, besides the wall heat transfer setup, the Wiebe combustion parameters need to be carefully evaluated for better results.
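The closing point about the Wiebe combustion parameters can be illustrated with the standard Wiebe mass-fraction-burned function; the efficiency and shape parameters a = 5 and m = 2 below are common textbook defaults, not the paper's calibrated values:

```python
import math

def wiebe_mfb(theta, theta0, dtheta, a=5.0, m=2.0):
    """Wiebe mass fraction burned:
    x_b = 1 - exp(-a * ((theta - theta0) / dtheta)**(m + 1)),
    where theta0 is the start of combustion and dtheta the combustion
    duration, both in crank-angle degrees."""
    if theta < theta0:
        return 0.0
    x = (theta - theta0) / dtheta
    return 1.0 - math.exp(-a * min(x, 1.0) ** (m + 1))

# Burn profile for a 60-degree combustion duration starting at -10 deg
profile = [wiebe_mfb(th, -10.0, 60.0) for th in range(-20, 61, 10)]
print([round(x, 3) for x in profile])
```

Tuning a, m, theta0 and dtheta reshapes the heat-release curve, which is why the abstract singles these parameters out as needing careful evaluation.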

  6. Around power law for PageRank components in Buckley-Osthus model of web graph

    OpenAIRE

    Gasnikov, Alexander; Zhukovskii, Maxim; Kim, Sergey; Noskov, Fedor; Plaunov, Stepan; Smirnov, Daniil

    2017-01-01

    In this paper we investigate the power law for PageRank components in the Buckley-Osthus model of the web graph. We compare different numerical methods for PageRank calculation and, with the best method, carry out extensive numerical experiments. These experiments confirm the power-law hypothesis. Finally, we discuss a realistic model of web ranking based on the classical PageRank approach.
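For readers unfamiliar with the quantity being studied, a minimal power-iteration PageRank on a toy graph can be sketched as follows. The damping factor 0.85 and the four-node graph are illustrative; the paper's experiments run on Buckley-Osthus random graphs, not a fixed graph like this:

```python
def pagerank(links, d=0.85, iters=100):
    """Power-iteration PageRank on an adjacency dict {node: [out-neighbours]}."""
    nodes = list(links)
    n = len(nodes)
    rank = {u: 1.0 / n for u in nodes}
    for _ in range(iters):
        new = {u: (1.0 - d) / n for u in nodes}
        for u in nodes:
            out = links[u]
            if out:
                share = d * rank[u] / len(out)
                for v in out:
                    new[v] += share
            else:  # dangling node: spread its rank uniformly
                for v in nodes:
                    new[v] += d * rank[u] / n
        rank = new
    return rank

web = {"a": ["b", "c"], "b": ["c"], "c": ["a"], "d": ["c"]}
pr = pagerank(web)
print(sorted(pr, key=pr.get, reverse=True))  # "c" collects the most rank
```

The power law in question concerns how these rank values are distributed over the nodes of a large random web graph.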

  7. Computational Modeling in Tissue Engineering

    CERN Document Server

    2013-01-01

    One of the major challenges in tissue engineering is the translation of biological knowledge on complex cell and tissue behavior into a predictive and robust engineering process. Mastering this complexity is an essential step towards clinical applications of tissue engineering. This volume discusses computational modeling tools that allow studying the biological complexity in a more quantitative way. More specifically, computational tools can help in:  (i) quantifying and optimizing the tissue engineering product, e.g. by adapting scaffold design to optimize micro-environmental signals or by adapting selection criteria to improve homogeneity of the selected cell population; (ii) quantifying and optimizing the tissue engineering process, e.g. by adapting bioreactor design to improve quality and quantity of the final product; and (iii) assessing the influence of the in vivo environment on the behavior of the tissue engineering product, e.g. by investigating vascular ingrowth. The book presents examples of each...

  8. Web application to access U.S. Army Corps of Engineers Civil Works and Restoration Projects information for the Rio Grande Basin, southern Colorado, New Mexico, and Texas

    Science.gov (United States)

    Archuleta, Christy-Ann M.; Eames, Deanna R.

    2009-01-01

    The Rio Grande Civil Works and Restoration Projects Web Application, developed by the U.S. Geological Survey in cooperation with the U.S. Army Corps of Engineers (USACE) Albuquerque District, is designed to provide publicly available information through the Internet about civil works and restoration projects in the Rio Grande Basin. Since 1942, USACE Albuquerque District responsibilities have included building facilities for the U.S. Army and U.S. Air Force, providing flood protection, supplying water for power and public recreation, participating in fire remediation, protecting and restoring wetlands and other natural resources, and supporting other government agencies with engineering, contracting, and project management services. In the process of conducting this vast array of engineering work, the need arose for easily tracking the locations of and providing information about projects to stakeholders and the public. This fact sheet introduces a Web application developed to enable users to visualize locations and search for information about USACE (and some other Federal, State, and local) projects in the Rio Grande Basin in southern Colorado, New Mexico, and Texas.

  9. Sharing environmental models: An Approach using GitHub repositories and Web Processing Services

    Science.gov (United States)

    Stasch, Christoph; Nuest, Daniel; Pross, Benjamin

    2016-04-01

    The GLUES (Global Assessment of Land Use Dynamics, Greenhouse Gas Emissions and Ecosystem Services) project established a spatial data infrastructure for scientific geospatial data and metadata (http://geoportal-glues.ufz.de), where different regional collaborative projects researching the impacts of climate and socio-economic changes on sustainable land management can share their underlying base scenarios and datasets. One goal of the project is to ease the sharing of computational models between institutions and to make them easily executable in Web-based infrastructures. In this work, we present such an approach for sharing computational models relying on GitHub repositories (http://github.com) and Web Processing Services. First, model providers upload their model implementations to GitHub repositories in order to share them with others. The GitHub platform allows users to submit changes to the model code, and the changes can be discussed and reviewed before merging them. However, while GitHub allows sharing of and collaboration on model source code, it does not actually allow running these models, which requires efforts to transfer the implementation to a model execution framework. We have therefore extended an existing implementation of the OGC Web Processing Service standard (http://www.opengeospatial.org/standards/wps), the 52°North Web Processing Service (http://52north.org/wps) platform, to retrieve all model implementations from a git (http://git-scm.com) repository and add them to the collection of published geoprocesses. The current implementation is restricted to models implemented as R scripts using WPS4R annotations (Hinz et al.) and to Java algorithms using the 52°North WPS Java API. The models hence become executable through a standardized Web API by multiple clients such as desktop or browser GIS and modelling frameworks. If the model code is changed on the GitHub platform, the changes are retrieved by the service and the processes will be updated

  10. QUEST: An Assessment Tool for Web-Based Learning.

    Science.gov (United States)

    Choren, Ricardo; Blois, Marcelo; Fuks, Hugo

    In 1997, the Software Engineering Laboratory at Pontifical Catholic University of Rio de Janeiro (Brazil) implemented the first version of AulaNet (TM) a World Wide Web-based educational environment. Some of the teaching staff will use this environment in 1998 to offer regular term disciplines through the Web. This paper introduces Quest, a tool…

  11. MDEForge: an Extensible Web-Based Modeling Platform

    OpenAIRE

    Basciani, Francesco; Di Rocco, Juri; Di Ruscio, Davide; Di Salle, Amleto; Iovino, Ludovico; Pierantonio, Alfonso

    2014-01-01

    Model-Driven Engineering (MDE) refers to the systematic use of models as first-class entities throughout the software development life cycle. Over the last few years, many MDE technologies have been conceived for developing domain-specific modeling languages and for supporting a wide range of model management activities. However, existing modeling platforms neglect a number of important features whose absence reduces the acceptance and relevance of MDE in industrial contexts, e.g., the p...

  12. Developing engineering processes through integrated modelling of product and process

    DEFF Research Database (Denmark)

    Nielsen, Jeppe Bjerrum; Hvam, Lars

    2012-01-01

    This article aims at developing an operational tool for integrated modelling of product assortments and engineering processes in companies making customer-specific products. Integrating a product model in the design of engineering processes will provide a deeper understanding of the engineering activities as well as insight into how product features affect the engineering processes. The article suggests possible ways of integrating models of products with models of engineering processes. The models have been tested and further developed in an action research study carried out in collaboration with a major international engineering company.

  13. Drexel at TREC 2014 Federated Web Search Track

    Science.gov (United States)

    2014-11-01

    …of its input RS results. 1. INTRODUCTION Federated Web Search is the task of searching multiple search engines simultaneously and combining their results … or distributed properly [5]. The goal of resource selection (RS) is then, for a given query, to select only the most promising search engines from all those available. Most … result pages of 149 search engines. 4000 queries are used in building the sample set. As a part of the Vertical Selection task, search engines are …

  14. Pattern-based translation of BPMN process models to BPEL web services

    NARCIS (Netherlands)

    Ouyang, C.; Dumas, M.; Hofstede, ter A.H.M.; Aalst, van der W.M.P.

    2008-01-01

    The business process modeling notation (BPMN) is a graph-oriented language primarily targeted at domain analysts and supported by many modeling tools. The business process execution language for Web services (BPEL) on the other hand is a mainly block-structured language targeted at software

  15. Graph-based modelling in engineering

    CERN Document Server

    Rysiński, Jacek

    2017-01-01

    This book presents versatile, modern and creative applications of graph theory in mechanical engineering, robotics and computer networks. Topics related to mechanical engineering include e.g. machine and mechanism science, mechatronics, robotics, gearing and transmissions, design theory and production processes. The graphs treated are simple graphs, weighted and mixed graphs, bond graphs, Petri nets, logical trees etc. The authors represent several countries in Europe and America, and their contributions show how different, elegant, useful and fruitful the utilization of graphs in modelling of engineering systems can be.

  16. Software engineering

    CERN Document Server

    Sommerville, Ian

    2010-01-01

    The ninth edition of Software Engineering presents a broad perspective of software engineering, focusing on the processes and techniques fundamental to the creation of reliable software systems. Increased coverage of agile methods and software reuse, along with coverage of 'traditional' plan-driven software engineering, gives readers the most up-to-date view of the field currently available. Practical case studies, a full set of easy-to-access supplements, and extensive web resources make teaching the course easier than ever.

  17. A concise wall temperature model for DI Diesel engines

    Energy Technology Data Exchange (ETDEWEB)

    Torregrosa, A.; Olmeda, P.; Degraeuwe, B. [CMT-Motores Termicos, Universidad Politecnica de Valencia (Spain); Reyes, M. [Centro de Mecanica de Fluidos y Aplicaciones, Universidad Simon Bolivar (Venezuela)

    2006-08-15

    A concise resistor model for wall temperature prediction in diesel engines with piston cooling is presented here. The model uses the instantaneous in-cylinder pressure and some commonly measured operational parameters to predict the temperature of the structural elements of the engine. The resistor model was adjusted by means of temperature measurements in the cylinder head, the liner and the piston. For each model parameter, an expression as a function of the engine geometry, operational parameters and material properties was derived to make the model applicable to other similar engines. The model predicts the cylinder head, liner and piston temperatures well and is sensitive to variations in operational parameters such as the start of injection, coolant and oil temperature, and engine speed and load. (author)
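A minimal sketch of the kind of series resistor chain such a model is built from, heat driven from gas to coolant through wall resistances, might look like this. The resistance and temperature values are invented for illustration and are not the paper's fitted parameters:

```python
def node_temperatures(t_gas, t_cool, r_gas_wall, r_wall, r_wall_cool):
    """Steady heat flow through a series thermal-resistance chain
    gas -> wall -> coolant. Returns (heat flux, gas-side wall surface
    temperature, coolant-side wall surface temperature)."""
    r_total = r_gas_wall + r_wall + r_wall_cool
    q = (t_gas - t_cool) / r_total          # flux through the chain
    t_wall_in = t_gas - q * r_gas_wall      # gas-side wall surface
    t_wall_out = t_cool + q * r_wall_cool   # coolant-side wall surface
    return q, t_wall_in, t_wall_out

# Illustrative values: hot gas at 900 K, coolant at 90 degC equivalent node
q, ti, to = node_temperatures(900.0, 90.0, 0.004, 0.001, 0.0005)
print(round(q), round(ti, 1), round(to, 1))  # -> 147273 310.9 163.6
```

The paper's contribution is precisely the calibration of such resistances as functions of engine geometry, operating point and material properties.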

  18. Modeling Techniques for a Computational Efficient Dynamic Turbofan Engine Model

    Directory of Open Access Journals (Sweden)

    Rory A. Roberts

    2014-01-01

    Full Text Available A transient two-stream engine model has been developed. Individual component models developed exclusively in MATLAB/Simulink, including the fan, high pressure compressor, combustor, high pressure turbine, low pressure turbine, plenum volumes, and exit nozzle, have been combined to investigate the behavior of a turbofan two-stream engine. Special attention has been paid to developing transient capabilities throughout the model, increasing the physical fidelity of the model, eliminating algebraic constraints, and reducing simulation time by enabling the use of advanced numerical solvers. Reducing computation time is paramount for conducting future aircraft system-level design trade studies and optimization. The new engine model is simulated for a fuel perturbation and a specified mission while tracking critical parameters. These results, as well as the simulation times, are presented. The new approach significantly reduces the simulation time.

  19. Web Services and Model-Driven Enterprise Information Services. Proceedings of the Joint Workshop on Web Services and Model-Driven Enterprise Information Services, WSMDEIS 2005.

    NARCIS (Netherlands)

    Bevinakoppa, S.; Ferreira Pires, Luis; Hammoudi, S.

    2005-01-01

    Web services and model-driven development are two emerging research fields that have been receiving a lot of attention in recent years. New approaches in these two areas can bring many benefits to the development of information systems: distribution flexibility, interoperability, maintainability

  20. Recovery Act: Web-based CO{sub 2} Subsurface Modeling

    Energy Technology Data Exchange (ETDEWEB)

    Paolini, Christopher; Castillo, Jose

    2012-11-30

    The Web-based CO{sub 2} Subsurface Modeling project focused primarily on extending an existing text-only, command-line driven, isothermal and isobaric, geochemical reaction-transport simulation code, developed and donated by Sienna Geodynamics, into an easier-to-use Web-based application for simulating long-term storage of CO{sub 2} in geologic reservoirs. The Web-based interface developed through this project, publicly accessible via URL http://symc.sdsu.edu/, enables rapid prototyping of CO{sub 2} injection scenarios and allows students without advanced knowledge of geochemistry to set up a typical sequestration scenario, invoke a simulation, analyze results, and then vary one or more problem parameters and quickly re-run a simulation to answer what-if questions. symc.sdsu.edu has 2x12 core AMD Opteron™ 6174 2.20GHz processors and 16GB RAM. The Web-based application was used to develop a new computational science course at San Diego State University, COMP 670: Numerical Simulation of CO{sub 2} Sequestration, which was taught during the fall semester of 2012. The purpose of the class was to introduce graduate students to Carbon Capture, Use and Storage (CCUS) through numerical modeling and simulation, and to teach students how to interpret simulation results to make predictions about long-term CO{sub 2} storage capacity in deep brine reservoirs. In addition to the training and education component of the project, significant software development efforts took place. Two computational science doctoral and one geological science masters student, under the direction of the PIs, extended the original code developed by Sienna Geodynamics, named Sym.8. New capabilities were added to Sym.8 to simulate non-isothermal and non-isobaric flows of charged aqueous solutes in porous media, in addition to incorporating HPC support into the code for execution on many-core XSEDE clusters. A successful outcome of this project was the funding and training of three new computational

  1. COEUS: "semantic web in a box" for biomedical applications.

    Science.gov (United States)

    Lopes, Pedro; Oliveira, José Luís

    2012-12-17

    As the "omics" revolution unfolds, the growth in data quantity and diversity is bringing about the need for pioneering bioinformatics software, capable of significantly improving the research workflow. To cope with these computer science demands, biomedical software engineers are adopting emerging semantic web technologies that better suit the life sciences domain. The latter's complex relationships are easily mapped into semantic web graphs, enabling a superior understanding of collected knowledge. Despite increased awareness of semantic web technologies in bioinformatics, their use is still limited. COEUS is a new semantic web framework, aiming at a streamlined application development cycle and following a "semantic web in a box" approach. The framework provides a single package including advanced data integration and triplification tools, base ontologies, a web-oriented engine and a flexible exploration API. Resources can be integrated from heterogeneous sources, including CSV and XML files or SQL and SPARQL query results, and mapped directly to one or more ontologies. Advanced interoperability features include REST services, a SPARQL endpoint and LinkedData publication. These enable the creation of multiple applications for web, desktop or mobile environments, and empower a new knowledge federation layer. The platform, targeted at biomedical application developers, provides a complete skeleton ready for rapid application deployment, enhancing the creation of new semantic information systems. COEUS is available as open source at http://bioinformatics.ua.pt/coeus/.
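The data triplification step that COEUS performs, mapping rows from sources such as CSV files onto subject-predicate-object triples, can be sketched as follows. The base URI, column names and records are invented for the example and are not COEUS's actual vocabulary or API:

```python
import csv
import io

def triplify(csv_text, subject_col, base="http://example.org/"):
    """Map each CSV row to RDF-style (subject, predicate, object) triples,
    using the value in subject_col as the subject identifier."""
    rows = csv.DictReader(io.StringIO(csv_text))
    triples = []
    for row in rows:
        s = base + row[subject_col]
        for col, val in row.items():
            if col != subject_col and val:
                triples.append((s, base + col, val))
    return triples

# Invented toy dataset: two gene records
data = "gene,symbol,chromosome\nENSG01,BRCA2,13\nENSG02,TP53,17\n"
triples = triplify(data, "gene")
for t in triples:
    print(t)
```

In a framework like COEUS the resulting triples would be loaded into a triple store and exposed through a SPARQL endpoint rather than printed.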

  2. Python for Google app engine

    CERN Document Server

    Pippi, Massimiliano

    2015-01-01

    If you are a Python developer, whether you have experience in web application development or not, and want to rapidly deploy a scalable backend service or a modern web application on Google App Engine, then this book is for you.

  3. Chemical Kinetic Models for Advanced Engine Combustion

    Energy Technology Data Exchange (ETDEWEB)

    Pitz, William J. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Mehl, Marco [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Westbrook, Charles K. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2014-10-22

    The objectives for this project are as follows: Develop detailed chemical kinetic models for fuel components used in surrogate fuels for compression ignition (CI), homogeneous charge compression ignition (HCCI) and reactivity-controlled compression-ignition (RCCI) engines; and Combine component models into surrogate fuel models to represent real transportation fuels. Use them to model low-temperature combustion strategies in HCCI, RCCI, and CI engines that lead to low emissions and high efficiency.

  4. Software architecture and design of the web services facilitating climate model diagnostic analysis

    Science.gov (United States)

    Pan, L.; Lee, S.; Zhang, J.; Tang, B.; Zhai, C.; Jiang, J. H.; Wang, W.; Bao, Q.; Qi, M.; Kubar, T. L.; Teixeira, J.

    2015-12-01

    Climate model diagnostic analysis is a computationally- and data-intensive task because it involves multiple numerical model outputs and satellite observation data that can both be high resolution. We have built an online tool that facilitates this process. The tool is called Climate Model Diagnostic Analyzer (CMDA). It employs web service technology and provides a web-based user interface. The benefits of these choices include: (1) no installation of any software other than a browser, hence it is platform-compatible; (2) co-location of computation and big data on the server side, with only small results and plots downloaded on the client side, hence high data efficiency; (3) a multi-threaded implementation to achieve parallel performance on multi-core servers; and (4) cloud deployment, so each user has a dedicated virtual machine. In this presentation, we will focus on the computer science aspects of this tool, namely the architectural design, the infrastructure of the web services, the implementation of the web-based user interface, the mechanism of provenance collection, the approach to virtualization, and the Amazon Cloud deployment. As an example, we will describe our methodology to transform an existing science application code into a web service using a Python wrapper interface and Python web service frameworks (i.e., Flask, Gunicorn, and Tornado). Another example is the use of Docker, a light-weight virtualization container, to distribute and deploy CMDA onto an Amazon EC2 instance. Our tool CMDA has been successfully used in the 2014 Summer School hosted by the JPL Center for Climate Science. Students gave positive feedback in general and we will report their comments. An enhanced version of CMDA with several new features, some requested by the 2014 students, will be used in the 2015 Summer School soon.
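The step of wrapping an existing science code behind a web service can be sketched with the standard library alone. Note that CMDA itself uses Flask, Gunicorn and Tornado; the `anomaly` function and its query parameters here are invented stand-ins for a real science routine:

```python
import json
from urllib.parse import parse_qs

def anomaly(values, baseline):
    """Stand-in 'science code': anomalies relative to a baseline value."""
    return [v - baseline for v in values]

def app(environ, start_response):
    """WSGI wrapper: parse query parameters, call the science code,
    and return the result as JSON."""
    qs = parse_qs(environ.get("QUERY_STRING", ""))
    values = [float(v) for v in qs.get("v", [])]
    baseline = float(qs.get("baseline", ["0"])[0])
    body = json.dumps({"anomaly": anomaly(values, baseline)}).encode()
    start_response("200 OK", [("Content-Type", "application/json")])
    return [body]

# Invoke the WSGI app directly (no network needed) to check the wiring
status = {}
environ = {"QUERY_STRING": "v=1.5&v=2.5&baseline=2.0"}
resp = app(environ, lambda s, h: status.update(code=s))
print(status["code"], json.loads(b"".join(resp)))
```

A real deployment would hand `app` to a WSGI server, e.g. `wsgiref.simple_server.make_server('', 8000, app).serve_forever()`, or to Gunicorn as in the CMDA stack.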

  5. Landfill Gas Energy Cost Model Version 3.0 (LFGcost-Web V3 ...

    Science.gov (United States)

    To help stakeholders estimate the costs of a landfill gas (LFG) energy project, in 2002, LMOP developed a cost tool (LFGcost). Since then, LMOP has routinely updated the tool to reflect changes in the LFG energy industry. Initially the model was designed for EPA to assist landfills in evaluating the economic and financial feasibility of LFG energy project development. In 2014, LMOP developed a public version of the model, LFGcost-Web (Version 3.0), to allow landfill and industry stakeholders to evaluate project feasibility on their own. LFGcost-Web can analyze costs for 12 energy recovery project types. These project costs can be estimated with or without the costs of a gas collection and control system (GCCS). The EPA used select equations from LFGcost-Web to estimate costs of the regulatory options in the 2015 proposed revisions to the MSW Landfills Standards of Performance (also known as New Source Performance Standards) and the Emission Guidelines (hereinafter referred to collectively as the Landfill Rules). More specifically, equations derived from LFGcost-Web were applied to each landfill expected to be impacted by the Landfill Rules to estimate annualized installed capital costs and annual O&M costs of a gas collection and control system. In addition, after applying the LFGcost-Web equations to the list of landfills expected to require a GCCS in year 2025 as a result of the proposed Landfill Rules, the regulatory analysis evaluated whether electr

  6. Distributed Deep Web Search

    NARCIS (Netherlands)

    Tjin-Kam-Jet, Kien

    2013-01-01

    The World Wide Web contains billions of documents (and counting); hence, it is likely that some document will contain the answer or content you are searching for. While major search engines like Bing and Google often manage to return relevant results to your query, there are plenty of situations in

  7. Underwater Stirling engine design with modified one-dimensional model

    Directory of Open Access Journals (Sweden)

    Daijin Li

    2015-05-01

    Full Text Available Stirling engines are regarded as an efficient and promising power system for underwater devices. Many current studies use one-dimensional models to evaluate the thermodynamic performance of Stirling engines, but some aspects, such as mechanical loss and auxiliary power, still cannot be captured with proper mathematical models. In this paper, a four-cylinder double-acting Stirling engine for Unmanned Underwater Vehicles (UUVs) is discussed, and a one-dimensional model incorporating empirical equations for mechanical loss and auxiliary power obtained from experiments is derived with reference to the Stirling engine computer model of the National Aeronautics and Space Administration (NASA). The P-40 Stirling engine, for which sufficient test results from NASA are available, is used to validate the accuracy of this one-dimensional model. The maximum error in predicted output power is less than 18% relative to the test results, and the maximum error in input power is no more than 9%. Finally, a Stirling engine for UUVs is designed with the Schmidt analysis method and the modified one-dimensional model, and the results indicate that the designed engine is capable of delivering the desired output power.
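The Schmidt analysis mentioned at the end rests on an isothermal pressure balance, p = mR / (Vc/Tc + Vh/Th), evaluated over sinusoidally varying compression and expansion volumes. The sketch below uses invented volumes, temperatures and gas mass, not the designed engine's values:

```python
import math

R = 287.0  # J/(kg K), air assumed as working gas (illustrative)

def schmidt_pressure(theta, m_gas, v_dead, v_swept_c, v_swept_h,
                     t_cold=330.0, t_hot=900.0, phase=math.pi / 2):
    """Isothermal (Schmidt-type) instantaneous pressure for sinusoidal
    compression/expansion volumes: p = m*R / (Vc/Tc + Vh/Th)."""
    v_c = v_dead + 0.5 * v_swept_c * (1 + math.cos(theta))
    v_h = v_dead + 0.5 * v_swept_h * (1 + math.cos(theta + phase))
    return m_gas * R / (v_c / t_cold + v_h / t_hot)

# Pressure over one crank revolution for an invented engine
pressures = [schmidt_pressure(th * math.pi / 180, 2e-4, 5e-5, 2e-4, 2e-4)
             for th in range(0, 360, 10)]
print(round(min(pressures) / 1e5, 2), round(max(pressures) / 1e5, 2))  # bar
```

Integrating p dV over the cycle for each space gives the ideal Schmidt work, which the paper's modified one-dimensional model then corrects for mechanical loss and auxiliary power.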

  8. Web-Based Virtual Laboratory for Food Analysis Course

    Science.gov (United States)

    Handayani, M. N.; Khoerunnisa, I.; Sugiarti, Y.

    2018-02-01

    Implementation of learning on food analysis course in Program Study of Agro-industrial Technology Education faced problems. These problems include the availability of space and tools in the laboratory that is not comparable with the number of students also lack of interactive learning tools. On the other hand, the information technology literacy of students is quite high as well the internet network is quite easily accessible on campus. This is a challenge as well as opportunities in the development of learning media that can help optimize learning in the laboratory. This study aims to develop web-based virtual laboratory as one of the alternative learning media in food analysis course. This research is R & D (research and development) which refers to Borg & Gall model. The results showed that assessment’s expert of web-based virtual labs developed, in terms of software engineering aspects; visual communication; material relevance; usefulness and language used, is feasible as learning media. The results of the scaled test and wide-scale test show that students strongly agree with the development of web based virtual laboratory. The response of student to this virtual laboratory was positive. Suggestions from students provided further opportunities for improvement web based virtual laboratory and should be considered for further research.

  9. BAIK– PROGRAMMING LANGUAGE BASED ON INDONESIAN LEXICAL PARSING FOR MULTITIER WEB DEVELOPMENT

    Directory of Open Access Journals (Sweden)

    Haris Hasanudin

    2012-05-01

    Full Text Available Business software development with global teams is increasing rapidly, and the programming language, as a development tool, plays an important role in global web development. A truly user-friendly programming language should be written in the local language of programmers whose native language is not English. This paper presents our design of the BAIK (Bahasa Anak Indonesia untuk Komputer) scripting language, whose syntax is modeled on Bahasa Indonesia, for multitier web development. We propose the implementation of an Indonesian parsing engine and a binary search tree structure for the memory allocation of variables, and compose language features that support basic object-oriented programming, the Common Gateway Interface, HTML style manipulation, and database connections. Our goal is to build a real programming language with a simple structural design for web development using Indonesian lexical words.
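The proposed binary search tree for variable storage can be sketched as a small interpreter symbol table keyed on variable name. The class and method names below are illustrative and not BAIK's actual implementation:

```python
class Node:
    """One BST node holding a variable name and its current value."""
    __slots__ = ("name", "value", "left", "right")

    def __init__(self, name, value):
        self.name, self.value = name, value
        self.left = self.right = None

class VarTable:
    """Interpreter symbol table backed by a binary search tree,
    as the abstract proposes for variable memory allocation."""
    def __init__(self):
        self.root = None

    def set(self, name, value):
        if self.root is None:
            self.root = Node(name, value)
            return
        cur = self.root
        while True:
            if name == cur.name:        # existing variable: update in place
                cur.value = value
                return
            side = "left" if name < cur.name else "right"
            nxt = getattr(cur, side)
            if nxt is None:             # new variable: attach a leaf
                setattr(cur, side, Node(name, value))
                return
            cur = nxt

    def get(self, name):
        cur = self.root
        while cur:
            if name == cur.name:
                return cur.value
            cur = cur.left if name < cur.name else cur.right
        raise NameError(name)

symbols = VarTable()
symbols.set("jumlah", 10)
symbols.set("nama", "BAIK")
symbols.set("jumlah", 11)  # reassignment reuses the existing node
print(symbols.get("jumlah"), symbols.get("nama"))  # -> 11 BAIK
```

Lookup and assignment are O(log n) on average in the number of live variables, which is presumably the motivation for the BST choice.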

  10. Teknik Perangkingan Meta-search Engine

    OpenAIRE

    Puspitaningrum, Diyah

    2014-01-01

    A meta-search engine organizes the merged results from multiple search engines, with the goal of improving the precision of web document retrieval. This survey of meta-search engine ranking techniques discusses preprocessing issues, ranking, and various techniques for merging search results from different search engines (multi-combination). Implementation issues in merging two and three search engines are also highlighted. The paper also discusses directions for further resea...
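One common family of merging techniques surveyed in such work is positional rank fusion. The Borda-count sketch below is a generic example of combining ranked lists from several engines, not necessarily the specific technique the paper recommends:

```python
def borda_fuse(rankings):
    """Merge ranked result lists by Borda count: a document at position i
    in a list of length n earns n - i points; ties broken alphabetically."""
    scores = {}
    for ranking in rankings:
        n = len(ranking)
        for i, doc in enumerate(ranking):
            scores[doc] = scores.get(doc, 0) + (n - i)
    return sorted(scores, key=lambda d: (-scores[d], d))

# Top-3 results from three hypothetical search engines
engine_a = ["d1", "d2", "d3"]
engine_b = ["d2", "d1", "d4"]
engine_c = ["d2", "d3", "d1"]
print(borda_fuse([engine_a, engine_b, engine_c]))  # d2 ranked first
```

Variants weight each engine by estimated quality or use score-based fusion (e.g. CombSUM) when the engines expose relevance scores.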

  11. Engineering Geology | Alaska Division of Geological & Geophysical Surveys

    Science.gov (United States)


  12. WAsP engineering 2000

    Energy Technology Data Exchange (ETDEWEB)

    Mann, J.; Ott, S.; Hoffmann Joergensen, B.; Frank, H.P.

    2002-08-01

    This report summarizes the findings of the EFP project WAsP Engineering Version 2000. The main product of this project is the computer program WAsP Engineering, which is used for the estimation of extreme wind speeds, wind shears, profiles, and turbulence in complex terrain. At the web page http://www.waspengineering.dk more information on the program can be obtained and a copy of the manual can be downloaded. The report contains a complete description of the turbulence modelling in moderately complex terrain implemented in WAsP Engineering. The model is also validated experimentally and compared with spectra from engineering codes. Some shortcomings of the linear flow model LINCOM, which is at the core of WAsP Engineering, are pointed out, and modifications to eliminate the problem are presented. The global database of meteorological 'reanalysis' data from NCEP/NCAR is used to estimate the extreme wind climate around Denmark. Among the various physical parameters in the database, such as surface winds, winds at various pressure levels, or geostrophic winds at various heights, the surface geostrophic wind seems to give the most realistic results. Because of spatial filtering and intermittent temporal sampling, the 50-year winds are underestimated by approximately 12%. Whether the method applies to larger areas of the world remains to be seen. The 50-year winds in Denmark are estimated from data using the flow model in WAsP Engineering, and the values are approximately 1 m/s larger than in a previous analysis (Kristensen et al. 2000). A tool is developed to crudely estimate an extreme wind climate from a WAsP lib file. (au)

  13. Web document clustering using hyperlink structures

    Energy Technology Data Exchange (ETDEWEB)

    He, Xiaofeng; Zha, Hongyuan; Ding, Chris H.Q; Simon, Horst D.

    2001-05-07

    With the exponential growth of information on the World Wide Web, there is great demand for developing efficient and effective methods of organizing and retrieving the information available. Document clustering plays an important role in information retrieval and taxonomy management for the World Wide Web and remains an interesting and challenging problem in the field of web computing. In this paper we consider document clustering methods that explore textual information, hyperlink structure and co-citation relations. In particular, we apply the normalized-cut clustering method developed in computer vision to the task of hyperdocument clustering. We also explore some theoretical connections of the normalized-cut method to the K-means method. We then experiment with the normalized-cut method in the context of clustering query result sets for web search engines.
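The normalized-cut relaxation the abstract refers to can be sketched in a few lines: build the normalized Laplacian of a similarity graph and split on the sign of its second-smallest eigenvector. This is a generic Shi-Malik-style sketch; the matrix below is a toy example, not data from the paper.

```python
import numpy as np

def normalized_cut_bipartition(W):
    """Split a graph given by a symmetric similarity matrix W into two
    clusters using the second-smallest eigenvector of the normalized
    Laplacian (the spectral relaxation of the normalized cut)."""
    d = W.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    # Symmetric normalized Laplacian: L = I - D^{-1/2} W D^{-1/2}
    L = np.eye(len(W)) - D_inv_sqrt @ W @ D_inv_sqrt
    vals, vecs = np.linalg.eigh(L)           # eigenvalues in ascending order
    fiedler = D_inv_sqrt @ vecs[:, 1]        # second-smallest eigenvector
    return fiedler >= 0                      # boolean cluster labels

# Toy "hyperlink" similarity: two 3-document groups with weak coupling
W = np.array([[0, 5, 4, 0, 0, 1],
              [5, 0, 6, 0, 0, 0],
              [4, 6, 0, 1, 0, 0],
              [0, 0, 1, 0, 7, 5],
              [0, 0, 0, 7, 0, 6],
              [1, 0, 0, 5, 6, 0]], dtype=float)
labels = normalized_cut_bipartition(W)
print(labels)
```

For hyperdocument clustering, W would be built from link, co-citation, and text similarity rather than hand-set weights.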

  14. A Web-Based Model to Estimate the Impact of Best Management Practices

    Directory of Open Access Journals (Sweden)

    Youn Shik Park

    2014-03-01

    Full Text Available The Spreadsheet Tool for the Estimation of Pollutant Load (STEPL) can be used for Total Maximum Daily Load (TMDL) processes, since the model is capable of simulating the impacts of various best management practices (BMPs) and low impact development (LID) practices. The model computes average annual direct runoff using the Soil Conservation Service Curve Number (SCS-CN) method with average rainfall per event, which is not a typical use of the SCS-CN method. Five SCS-CN-based approaches to computing average annual direct runoff were investigated to explore differences in the estimates, using daily precipitation data collected from the National Climate Data Center and generated by the CLIGEN model for twelve stations in Indiana. Compared to the average annual direct runoff computed for the typical use of the SCS-CN method, the approaches to estimating average annual direct runoff within EPA STEPL showed large differences. A web-based model (STEPL WEB) was developed with a corrected approach to estimating average annual direct runoff. Moreover, the model was integrated with the Web-based Load Duration Curve Tool, which identifies the least-cost BMPs for each land use and optimizes BMP selection to identify the most cost-effective BMP implementations. The integrated tools provide an easy-to-use approach for performing TMDL analysis and identifying cost-effective approaches for controlling nonpoint source pollution.
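The SCS-CN computation at the heart of the model follows the standard curve number equations; a minimal sketch (with a hypothetical storm depth and curve number, not values from the paper):

```python
def scs_cn_runoff(P_mm, CN):
    """Direct runoff depth (mm) for a single rainfall event using the
    SCS Curve Number method with the standard 0.2*S initial abstraction:
        S  = 25400/CN - 254          (potential maximum retention, mm)
        Ia = 0.2*S                   (initial abstraction, mm)
        Q  = (P - Ia)^2 / (P - Ia + S)   for P > Ia, else 0
    """
    S = 25400.0 / CN - 254.0
    Ia = 0.2 * S
    if P_mm <= Ia:
        return 0.0
    return (P_mm - Ia) ** 2 / (P_mm - Ia + S)

# Hypothetical event: 60 mm of rain on a soil/land-use with CN = 80
print(round(scs_cn_runoff(60.0, 80), 1))
```

The paper's point is precisely that applying this single-event formula to an *average* rainfall per event, as STEPL did, is not how the method was derived to be used.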

  15. A semantics-based aspect-oriented approach to adaptation in web engineering

    NARCIS (Netherlands)

    Casteleyn, S.; Van Woensel, W.; Houben, G.J.P.M.

    2007-01-01

    In the modern Web, users are accessing their favourite Web applications from any place, at any time and with any device. In this setting, they expect the application to user-tailor and personalize content access upon their particular needs. Exhibiting some kind of user- and context-dependency is

  16. Statistical models of petrol engines vehicles dynamics

    Science.gov (United States)

    Ilie, C. O.; Marinescu, M.; Alexa, O.; Vilău, R.; Grosu, D.

    2017-10-01

    This paper focuses on statistical models of vehicle dynamics. A one-year testing program was designed and performed, using several cars of the same type with gasoline engines and different mileages. Experimental data were collected from onboard sensors and from sensors on the engine test stand. A database containing data from 64 tests was created. Several mathematical models were developed from the database using the system identification method. Each model is a SISO or MISO linear predictive ARMAX (AutoRegressive Moving-Average with eXogenous inputs) model, representing a difference equation with constant coefficients. For each dependency, such as engine torque as output with engine load and intake manifold pressure as inputs, 64 equations were obtained, giving strings of 64 values for each coefficient of each type of model. The final models were obtained using the average values of the coefficients, and their accuracy was assessed.
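The ARMAX-style identification described above can be illustrated with a simpler ARX fit by ordinary least squares. This is a generic sketch on synthetic data, not the paper's engine data or its exact model structure.

```python
import numpy as np

def fit_arx(y, u, na=2, nb=2):
    """Fit a SISO ARX model  y[k] = sum_i a_i*y[k-i] + sum_j b_j*u[k-j]
    by ordinary least squares (a simplified stand-in for the ARMAX
    identification described in the abstract)."""
    n = max(na, nb)
    rows, targets = [], []
    for k in range(n, len(y)):
        rows.append([y[k - i] for i in range(1, na + 1)] +
                    [u[k - j] for j in range(1, nb + 1)])
        targets.append(y[k])
    theta, *_ = np.linalg.lstsq(np.array(rows), np.array(targets), rcond=None)
    return theta  # [a1..a_na, b1..b_nb]

# Synthetic data from a known system: y[k] = 0.5*y[k-1] + 1.2*u[k-1]
rng = np.random.default_rng(0)
u = rng.standard_normal(200)
y = np.zeros(200)
for k in range(1, 200):
    y[k] = 0.5 * y[k - 1] + 1.2 * u[k - 1]
theta = fit_arx(y, u, na=1, nb=1)
print(np.round(theta, 3))
```

On noiseless synthetic data the estimated coefficients recover the true system exactly; with real sensor data, the moving-average noise term of a full ARMAX model becomes important.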

  17. Modeling the impact of oyster culture on a mudflat food web in Marennes-Oleron Bay (France)

    OpenAIRE

    Leguerrier, D; Niquil, Nathalie; Petiau, A; Bodoy, Alain

    2004-01-01

    We used a carbon-based food web model to investigate the effects of oyster cultivation on the ecosystem of an intertidal mudflat. A previously published food web model of a mudflat in Marennes-Oleron Bay, France, was updated with revised parameters, and a realistic surface area and density of existing oyster cultures on the mudflat. We developed 2 hypothetical scenarios to estimate the impact of oyster cultivation on the food web structure of the ecosystem: one with no oysters, the other with...

  18. Journal of Modeling, Design and Management of Engineering ...

    African Journals Online (AJOL)

    The Journal of Modeling, Design & Management of Engineering Systems publishes original ... systems Electronic/Electrical systems Engineering management systems Fuel and Energy systems Information Technology ... systems Public Health systems Software Engineering systems Systems and Industrial Engineering ...

  19. How Google Web Search copes with very similar documents

    NARCIS (Netherlands)

    W. Mettrop (Wouter); P. Nieuwenhuysen; H. Smulders

    2006-01-01

    A significant portion of the computer files that carry documents, multimedia, programs etc. on the Web are identical or very similar to other files on the Web. How do search engines cope with this? Do they perform some kind of “deduplication”? How should users take into account that

  20. Tissue Engineering in Animal Models for Urinary Diversion: A Systematic Review

    Science.gov (United States)

    Sloff, Marije; de Vries, Rob; Geutjes, Paul; IntHout, Joanna; Ritskes-Hoitinga, Merel

    2014-01-01

    Tissue engineering and regenerative medicine (TERM) approaches may provide alternatives for gastrointestinal tissue in urinary diversion. To proceed to clinically translatable studies, TERM alternatives need to be evaluated in (large) controlled and standardized animal studies. Here, we investigated all evidence for the efficacy of tissue-engineered constructs in animal models for urinary diversion. Studies investigating this subject were identified through a systematic search of three different databases (PubMed, Embase and Web of Science). From each study, animal characteristics, study characteristics and experimental outcomes for meta-analyses were tabulated. Furthermore, the reporting of items vital for study replication was assessed. The retrieved studies (8 in total) showed extreme heterogeneity in study design, including animal models, biomaterials and type of urinary diversion. All studies were feasibility studies, indicating the novelty of this field. None of the studies included appropriate control groups, i.e. a comparison with the classical treatment using gastrointestinal tissue. The meta-analysis showed a trend towards successful experimentation in larger animals, although no specific animal species could be identified as the most suitable model. Larger animals appear to allow a better translation to the human situation, with respect to anatomy and surgical approaches. It was unclear whether the use of cells benefits the formation of a neo-urinary conduit. The reporting of the methodology and data according to standardized guidelines was insufficient and should be improved to increase the value of such publications. In conclusion, animal models in the field of TERM for urinary diversion have probably been chosen for reasons other than their predictive value. Controlled and comparative long-term animal studies, with adequate methodological reporting, are needed to proceed to clinically translatable studies. This will aid in good quality research with the reduction in

  1. Web-based information search and retrieval: effects of strategy use and age on search success.

    Science.gov (United States)

    Stronge, Aideen J; Rogers, Wendy A; Fisk, Arthur D

    2006-01-01

    The purpose of this study was to investigate the relationship between strategy use and search success on the World Wide Web (i.e., the Web) for experienced Web users. An additional goal was to extend understanding of how the age of the searcher may influence strategy use. Current investigations of information search and retrieval on the Web have provided an incomplete picture of Web strategy use because participants have not been given the opportunity to demonstrate their knowledge of Web strategies while also searching for information on the Web. Using both behavioral and knowledge-engineering methods, we investigated searching behavior and system knowledge for 16 younger adults (M = 20.88 years of age) and 16 older adults (M = 67.88 years). Older adults were less successful than younger adults in finding correct answers to the search tasks. Knowledge engineering revealed that the age-related effect resulted from ineffective search strategies and amount of Web experience rather than age per se. Our analysis led to the development of a decision-action diagram representing search behavior for both age groups. Older adults had more difficulty than younger adults when searching for information on the Web. However, this difficulty was related to the selection of inefficient search strategies, which may have been attributable to a lack of knowledge about available Web search strategies. Actual or potential applications of this research include training Web users to search more effectively and suggestions to improve the design of search engines.

  2. Model engineering in a modular PSA

    International Nuclear Information System (INIS)

    Friedlhuber, Thomas

    2014-01-01

    For the purpose of PSA (Probabilistic Safety Analysis) of complex industrial systems, PSA models in the form of fault and event trees are developed to model the risk of unwanted situations (hazards). Over recent decades, PSA models have gained high acceptance and have been developed massively. This has led to an increase in model sizes and complexity: today, PSA models are often difficult to understand and maintain. This manuscript presents the concept of a modular PSA, which tries to cope with the increased complexity through the techniques of modularization and instantiation. Modularization aims to treat a model as smaller pieces (the 'modules') to regain control over models; instantiation aims to configure a generic model for different contexts. Both try to reduce model complexity. A modular PSA proposes new functionality to manage PSA models, since current model management is rather limited and not efficient. This manuscript shows new methods to manage the evolutions (versions) and deviations (variants) of PSA models in a modular PSA. The concepts of version and variant management are presented in this thesis. In this context, the comparison and fusion of PSA models is specified: model comparison provides important feedback to model engineers, and model fusion combines the work of different model engineers (concurrent model engineering). Apart from model management, methods to understand the content of PSA models are presented. The methods focus on highlighting the dependencies between modules rather than their contents. Dependencies are automatically derived from the model structure; they express relations between model objects (for example, a fault tree may have dependencies on basic events). Visualizing those dependencies (for example in the form of a model cartography) can constitute a crucial aid for model engineers in understanding complex interrelations in PSA models. Within the scope of this thesis, a software named 'Andromeda' has been
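The dependency derivation the abstract describes (relations between fault-tree modules and basic events, as input to a model cartography) can be sketched generically. The model below and all its names are invented for illustration and are not from 'Andromeda'.

```python
def dependencies(fault_trees):
    """Map each fault-tree module to the external objects it references:
    every gate input that is not defined inside the module itself is a
    dependency (a basic event or a gate of another module)."""
    deps = {}
    for name, gates in fault_trees.items():
        refs = set()
        for inputs in gates.values():
            for ref in inputs:
                if ref not in gates:          # defined elsewhere
                    refs.add(ref)
        deps[name] = refs
    return deps

# Two small modules; 'FT-Cooling' references the top gate of 'FT-Power'
model = {
    "FT-Power":   {"TOP-power": ["BE-diesel-fails", "BE-grid-lost"]},
    "FT-Cooling": {"TOP-cooling": ["G-pumps", "TOP-power"],
                   "G-pumps": ["BE-pump-A", "BE-pump-B"]},
}
for tree, refs in sorted(dependencies(model).items()):
    print(tree, "->", sorted(refs))
```

Drawing the resulting module-to-module edges (here FT-Cooling depends on FT-Power) is exactly the kind of cartography the thesis argues helps engineers grasp large models.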

  3. A Method for Transforming Existing Web Service Descriptions into an Enhanced Semantic Web Service Framework

    Science.gov (United States)

    Du, Xiaofeng; Song, William; Munro, Malcolm

    Web Services, as a new distributed system technology, have been widely adopted by industry in areas such as enterprise application integration (EAI), business process management (BPM), and virtual organisations (VO). However, the lack of semantics in the current Web Service standards has been a major barrier to service discovery and composition. In this chapter, we propose an enhanced context-based semantic service description framework (CbSSDF+) that tackles the problem and improves the flexibility of service discovery and the correctness of generated composite services. We also provide an agile transformation method to demonstrate how the various formats of Web Service descriptions on the Web can be managed and renovated step by step into CbSSDF+-based service descriptions without a large amount of engineering work. At the end of the chapter, we evaluate the applicability of the transformation method and the effectiveness of CbSSDF+ through a series of experiments.

  4. Mathematical modeling a chemical engineer's perspective

    CERN Document Server

    Rutherford, Aris

    1999-01-01

    Mathematical modeling is the art and craft of building a system of equations that is both sufficiently complex to do justice to physical reality and sufficiently simple to give real insight into the situation. Mathematical Modeling: A Chemical Engineer's Perspective provides an elementary introduction to the craft by one of the century's most distinguished practitioners. Though the book is written from a chemical engineering viewpoint, the principles and pitfalls are common to all mathematical modeling of physical systems. Seventeen of the author's frequently cited papers are reprinted to illus

  5. Using a web-based, iterative education model to enhance clinical clerkships.

    Science.gov (United States)

    Alexander, Erik K; Bloom, Nurit; Falchuk, Kenneth H; Parker, Michael

    2006-10-01

    Although most clinical clerkship curricula are designed to provide all students consistent exposure to defined course objectives, it is clear that individual students are diverse in their backgrounds and baseline knowledge. Ideally, the learning process should be individualized towards the strengths and weaknesses of each student, but, until recently, this has proved prohibitively time-consuming. The authors describe a program to develop and evaluate an iterative, Web-based educational model assessing medical students' knowledge deficits and allowing targeted teaching shortly after their identification. Beginning in 2002, a new educational model was created, validated, and applied in a prospective fashion to medical students during an internal medicine clerkship at Harvard Medical School. Using a Web-based platform, five validated questions were delivered weekly and a specific knowledge deficiency was identified. Teaching targeted to the deficiency was provided to an intervention cohort of five to seven students in each clerkship, though not to controls (the remaining seven to ten students). Effectiveness of this model was assessed by performance on the following week's posttest question. Specific deficiencies were readily identified weekly using this model. Throughout the year, however, deficiencies varied unpredictably. Teaching targeted to deficiencies resulted in significantly better performance on follow-up questioning compared to the performance of those who did not receive this intervention. This model was easily applied in an additive fashion to the current curriculum, and student acceptance was high. The authors conclude that a Web-based, iterative assessment model can effectively target specific curricular needs unique to each group; focus teaching in a rapid, formative, and highly efficient manner; and may improve the efficiency of traditional clerkship teaching.

  6. Kansei Engineering and Website Design

    DEFF Research Database (Denmark)

    Song, Zheng; Howard, Thomas J.; Achiche, Sofiane

    2012-01-01

    Capturing users' needs is critical in web site design. However, a lot of attention has been paid to enhancing functionality and usability, whereas much less consideration has been given to satisfying the emotional needs of users, which is also important to a successful design. This paper explores a methodology based on Kansei Engineering, which has done significant work in product and industrial design but has not quite been adopted in the IT field, in order to discover implicit emotional needs of users toward a web site and transform them into design details. Survey and interview techniques and statistical methods were applied. A prototype web site was produced based on the Kansei results integrated with technical expertise and practical considerations. The results showed that the Kansei Engineering methodology played a significant role in web site design in terms of satisfying users' emotional needs.

  7. The sources and popularity of online drug information: an analysis of top search engine results and web page views.

    Science.gov (United States)

    Law, Michael R; Mintzes, Barbara; Morgan, Steven G

    2011-03-01

    The Internet has become a popular source of health information. However, there is little information on what drug information and which Web sites are being searched. To investigate the sources of online information about prescription drugs by assessing the most common Web sites returned in online drug searches and to assess the comparative popularity of Web pages for particular drugs. This was a cross-sectional study of search results for the most commonly dispensed drugs in the US (n=278 active ingredients) on 4 popular search engines: Bing, Google (both US and Canada), and Yahoo. We determined the number of times a Web site appeared as the first result. A linked retrospective analysis counted Wikipedia page hits for each of these drugs in 2008 and 2009. About three quarters of the first result on Google USA for both brand and generic names linked to the National Library of Medicine. In contrast, Wikipedia was the first result for approximately 80% of generic name searches on the other 3 sites. On these other sites, over two thirds of brand name searches led to industry-sponsored sites. The Wikipedia pages with the highest number of hits were mainly for opiates, benzodiazepines, antibiotics, and antidepressants. Wikipedia and the National Library of Medicine rank highly in online drug searches. Further, our results suggest that patients most often seek information on drugs with the potential for dependence, for stigmatized conditions, that have received media attention, and for episodic treatments. Quality improvement efforts should focus on these drugs.

  8. SaaS in Web Design

    OpenAIRE

    Míka, Filip

    2011-01-01

    This thesis aims to evaluate whether the current SaaS market is able to meet the functional requirements of web design in order to appropriately support web design activities. The theoretical part introduces a web design model which describes web design's functional requirements. The next section presents a research concept that describes the model assessment (i.e., solutions delivered as SaaS that support web design) and the evaluation process. The results show that the current SaaS market is able to...

  9. Free web-based modelling platform for managed aquifer recharge (MAR) applications

    Science.gov (United States)

    Stefan, Catalin; Junghanns, Ralf; Glaß, Jana; Sallwey, Jana; Fatkhutdinov, Aybulat; Fichtner, Thomas; Barquero, Felix; Moreno, Miguel; Bonilla, José; Kwoyiga, Lydia

    2017-04-01

    Managed aquifer recharge (MAR) represents a valuable instrument for sustainable water resources management. The concept implies the purposeful infiltration of surface water into the underground for later recovery or for environmental benefits. Over the decades, MAR schemes have been successfully installed worldwide for a variety of reasons: to maximize the natural storage capacity of aquifers, for physical aquifer management, for water quality management, and for ecological benefits. The INOWAS-DSS platform provides a collection of free web-based tools for the planning, management and optimization of the main components of MAR schemes. The tools are grouped into 13 specific applications that cover the most relevant challenges encountered at MAR sites, from both quantitative and qualitative perspectives. The applications include, among others, the optimization of MAR site location, the assessment of saltwater intrusion, the restoration of groundwater levels in overexploited aquifers, the maximization of the natural storage capacity of aquifers, the improvement of water quality, the design and operational optimization of MAR schemes, and clogging development and risk assessment. The platform contains a collection of about 35 web-based tools of various degrees of complexity, which are either included in application-specific workflows or used as standalone modelling instruments. Among them are simple tools derived from data mining and empirical equations, analytical groundwater-related equations, as well as complex numerical flow and transport models (MODFLOW, MT3DMS and SEAWAT). Up to now, the simulation core of the INOWAS-DSS, which is based on the finite-difference groundwater flow model MODFLOW, is implemented and runs on the web. A scenario analyser helps to easily set up and evaluate new management options as well as future developments such as land use and climate change, and to compare them to previous scenarios. Additionally, simple tools such as analytical equations to assess saltwater intrusion are already running online.
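One classic analytical screening tool for saltwater intrusion of the kind mentioned is the Ghyben-Herzberg relation; a minimal sketch (the generic textbook formula, not INOWAS code):

```python
def ghyben_herzberg_depth(h, rho_f=1000.0, rho_s=1025.0):
    """Depth of the fresh/saltwater interface below sea level for a
    freshwater head h (m) above sea level, assuming hydrostatic
    equilibrium between fresh and salt water:
        z = rho_f / (rho_s - rho_f) * h   (roughly 40*h for seawater)
    """
    return rho_f / (rho_s - rho_f) * h

# A freshwater table 1.5 m above sea level implies an interface ~60 m deep
print(ghyben_herzberg_depth(1.5))
```

Such closed-form relations are the "simple tools" end of the platform's spectrum; site-scale questions still require the numerical MODFLOW/SEAWAT models.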

  10. Implementation of the SEO Web Design Methodology on the Official Homepage of Pondok Pesantren Qodratullah

    OpenAIRE

    Ependi, Usman

    2013-01-01

    A homepage or website is a way for an organization to deliver information to the public, and the number of homepages and websites, whether personal or owned by organizations, keeps increasing. To communicate and disseminate information, the homepage/website of the Qodratullah Islamic Boarding School needs a reliable approach, namely the Search Engine Optimization Web Design Methodology. The implementation of the Search Engine Optimization Web Design Methodology on the homepage/website...

  11. A study on the personalization methods of the web | Hajighorbani ...

    African Journals Online (AJOL)

    ... methods of correct patterns and analyze them. Here we discuss the basic concepts of web personalization, consider three approaches to web personalization, and evaluate the methods belonging to each of them. Keywords: personalization, search engine, user preferences, data mining methods ...

  12. Mean Value Modelling of an SI Engine with EGR

    DEFF Research Database (Denmark)

    Føns, Michael; Muller, Martin; Chevalier, Alain

    1999-01-01

    Mean Value Engine Models (MVEMs) are simplified, dynamic engine models which are physically based. Such models are useful for control studies, for engine control system analysis and for model-based control systems. Very few published MVEMs have included the effects of Exhaust Gas Recirculation (EGR). The purpose of this paper is to present a modified MVEM which includes EGR in a physical way. It has been tested using newly developed, very fast manifold pressure, manifold temperature, port and EGR mass flow sensors. Reasonable agreement has been obtained on an experimental engine mounted on a dynamometer.
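The core state equation of a mean value engine model is the intake-manifold filling dynamics; with EGR, the manifold receives mass from both the throttle and the EGR valve. A minimal sketch of the generic isothermal form, with hypothetical numbers (not the paper's model or data):

```python
def manifold_pressure_step(p, mdot_thr, mdot_egr, mdot_port, V, T, dt):
    """One explicit Euler step of the isothermal intake-manifold filling
    dynamics, the central state equation of a mean value engine model.
    With EGR the manifold is fed by the throttle and the EGR valve:
        dp/dt = (R*T/V) * (mdot_thr + mdot_egr - mdot_port)
    """
    R = 287.0  # specific gas constant for air, J/(kg K)
    return p + dt * (R * T / V) * (mdot_thr + mdot_egr - mdot_port)

# Hypothetical values: 3 L manifold at 320 K with a small EGR flow,
# integrated for 0.1 s with a 1 ms step
p = 50e3  # Pa
for _ in range(100):
    p = manifold_pressure_step(p, mdot_thr=0.02, mdot_egr=0.004,
                               mdot_port=0.022, V=3e-3, T=320.0, dt=0.001)
print(round(p))
```

A full MVEM couples this state to crankshaft-speed dynamics and to regression-based throttle, port, and EGR flow submodels; the sketch only shows the filling equation itself.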

  13. Journal of Modeling, Design and Management of Engineering ...

    African Journals Online (AJOL)

    Journal of Modeling, Design and Management of Engineering Systems. ... Journal Home > Vol 5, No 1 (2007) ... or mathematical modeling, computing, simulation, design and/or operations research tools for solving engineering problems.

  14. Next-Gen Search Engines

    Science.gov (United States)

    Gupta, Amardeep

    2005-01-01

    Current search engines--even the constantly surprising Google--seem unable to leap the next big barrier in search: the trillions of bytes of dynamically generated data created by individual web sites around the world, or what some researchers call the "deep web." The challenge now is not information overload, but information overlook.

  15. Project Photofly: New 3D Modeling Online Web Service (Case Studies and Assessments)

    Science.gov (United States)

    Abate, D.; Furini, G.; Migliori, S.; Pierattini, S.

    2011-09-01

    During summer 2010, Autodesk released Project Photofly, a still-ongoing project freely downloadable from the Autodesk Labs web site until August 1, 2011. Project Photofly, based on computer-vision and photogrammetric principles and exploiting the power of cloud computing, is a web service able to convert collections of photographs into 3D models. The aim of our research was to evaluate Project Photofly, through different case studies, for the 3D modeling of cultural heritage monuments and objects, mostly to identify the goals and objects for which it is suitable. The automatic approach, in particular, is analyzed.

  16. Final Technical Report; NUCLEAR ENGINEERING RECRUITMENT EFFORT

    Energy Technology Data Exchange (ETDEWEB)

    Kerrick, Sharon S.; Vincent, Charles D.

    2007-07-02

    This report provides the summary of a project whose purpose was to support the costs of developing a nuclear engineering awareness program, an instruction program for teachers to integrate lessons on nuclear science and technology into their existing curricula, and web sites for the exchange of nuclear engineering career information and classroom materials. The specific objectives of the program were as follows: OBJECTIVE 1: INCREASE AWARENESS AND INTEREST OF NUCLEAR ENGINEERING; OBJECTIVE 2: INSTRUCT TEACHERS ON NUCLEAR TOPICS; OBJECTIVE 3: NUCLEAR EDUCATION PROGRAMS WEB-SITE; OBJECTIVE 4: SUPPORT TO UNIVERSITY/INDUSTRY MATCHING GRANTS AND REACTOR SHARING; OBJECTIVE 5: PILOT PROJECT; OBJECTIVE 6: NUCLEAR ENGINEERING ENROLLMENT SURVEY AT UNIVERSITIES

  17. Estimating Search Engine Index Size Variability

    DEFF Research Database (Denmark)

    Van den Bosch, Antal; Bogers, Toine; De Kunder, Maurice

    2016-01-01

    One of the determining factors of the quality of Web search engines is the size of their index. In addition to its influence on search result quality, the size of the indexed Web can also tell us something about which parts of the WWW are directly accessible to the everyday user. We propose a novel method of estimating the size of a Web search engine's index by extrapolating from document frequencies of words observed in a large static corpus of Web pages. In addition, we provide a unique longitudinal perspective on the size of Google and Bing's indices over a nine-year period, from March 2006 until January 2015. We find that index size estimates of these two search engines tend to vary dramatically over time, with Google generally possessing a larger index than Bing. This result raises doubts about the reliability of previous one-off estimates of the size of the indexed Web. We find...
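The extrapolation idea can be sketched directly: if a word occurs in a fraction f of documents in a reference corpus and the engine reports h hits for it, the index holds roughly h/f documents; taking the median over many words damps outliers. All numbers below are hypothetical, not the paper's data.

```python
def estimate_index_size(corpus_doc_freqs, corpus_size, engine_hit_counts):
    """Extrapolate a search engine's index size from word document
    frequencies in a static reference corpus: estimate hits/f per word,
    then return the median estimate."""
    estimates = []
    for word, hits in engine_hit_counts.items():
        f = corpus_doc_freqs[word] / corpus_size   # fraction of docs with word
        estimates.append(hits / f)
    estimates.sort()
    mid = len(estimates) // 2
    if len(estimates) % 2:
        return estimates[mid]
    return 0.5 * (estimates[mid - 1] + estimates[mid])

# Hypothetical document frequencies from a 1M-page corpus vs. reported hits
corpus_df = {"the": 950_000, "model": 120_000, "aquifer": 1_500}
hits = {"the": 47_000_000_000, "model": 6_300_000_000, "aquifer": 80_000_000}
print(f"{estimate_index_size(corpus_df, 1_000_000, hits):.3g}")
```

The real method must also contend with engines rounding hit counts and with the reference corpus not being a uniform sample of the Web, which is part of why the estimates vary so much over time.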

  18. Applying the Model View Controller Concept in the Design and Development of a Web-Based Health Clinic Information System

    Directory of Open Access Journals (Sweden)

    Devy Ferdiansyah

    2018-05-01

    Full Text Available A clinic is a public service institution operating in the field of health services. Currently, many clinics still record their daily operational activities manually in paper archives, such as patient registration, medical record management, and so on. Archives of the various clinic data pile up, requiring more storage space and extra maintenance so that the paper records are not lost or damaged, and searching for patient data and medical records takes a long time as the pile of archives keeps growing. The aim of this research is to build a web-based health clinic information system to help staff manage clinic data. The web-based system is built with the PHP programming language and a MySQL database, applying the Model View Controller (MVC) concept in its programming technique. The MVC method is applied to make it easier for web programmers to build the website, because it separates the data (Model) from its presentation (View) and from the way it is processed (Controller). This research produces a web-based health clinic information system application that changes the staff's way of working from a slow manual process to a computerized one, with faster working times and a more practical and efficient workflow. It also gives clear direction on how to properly build a web-based information system, so that web programmers will have no difficulty fixing or extending the system later.
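The Model-View-Controller separation described in the abstract can be sketched language-independently; a minimal illustration in Python of a hypothetical patient-registration feature (not the thesis' actual PHP code):

```python
class PatientModel:
    """Model: owns the data and knows nothing about how it is displayed."""
    def __init__(self):
        self._patients = []

    def add(self, name):
        self._patients.append(name)

    def all(self):
        return list(self._patients)


class PatientView:
    """View: renders whatever data it is handed; holds no state of its own."""
    @staticmethod
    def render(patients):
        return "Registered patients: " + ", ".join(patients)


class PatientController:
    """Controller: mediates between user actions, the model and the view."""
    def __init__(self, model, view):
        self.model, self.view = model, view

    def register(self, name):
        self.model.add(name)
        return self.view.render(self.model.all())


controller = PatientController(PatientModel(), PatientView())
controller.register("Siti")
print(controller.register("Budi"))
```

Because the view only formats and the model only stores, either can be replaced (a new page template, a MySQL-backed model) without touching the other, which is exactly the maintainability argument made above.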

  19. Constructing a web recommender system using web usage mining and user’s profiles

    Directory of Open Access Journals (Sweden)

    T. Mombeini

    2014-12-01

    Full Text Available The World Wide Web is a great source of information and is nowadays widely used due to the availability of useful, dynamically changing information. However, the large number of webpages often confuses many users, and it is hard for them to find information matching their interests. Therefore, it is necessary to provide a system capable of guiding users towards their desired choices and services. Recommender systems search among a large collection of user interests and recommend those which are likely to be favored the most by the user. Web usage mining was designed to operate on web server records, which log users' search results; recommender servers therefore use web usage mining techniques to predict users' browsing patterns and recommend those patterns in the form of a suggestion list. In this article, a recommender system based on the web usage mining phases (online and offline) is proposed. In the offline phase, the first step is to analyze user access records to identify user sessions. Next, user profiles are built from server records based on the frequency of access to pages, the time spent by the user on each page, and the date of page views. Date is important since users are more likely to request new pages than old ones, and old pages are less likely to be viewed, as users mostly look for new information. Following the creation of user profiles, users are grouped into clusters using the Fuzzy C-means clustering algorithm and the S(c) criterion, based on their similarities. In the online phase, a neural network is used to identify the suggested model, while online suggestions are generated by the suggestion module for the active user. Search engines analyze suggestion lists based on the rate of user interest in pages and page rank, and finally suggest appropriate pages to the active user. Experiments show that the proposed method of predicting users' recently requested pages has more accuracy and
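The Fuzzy C-means step used to cluster the user profiles can be sketched as follows. This is a generic implementation on toy two-feature profiles; the S(c) cluster-validity criterion and the paper's real profile features are omitted.

```python
import numpy as np

def fuzzy_c_means(X, c=2, m=2.0, iters=100, seed=0):
    """Minimal Fuzzy C-means: returns the membership matrix U (n x c),
    where U[i, j] is the degree to which profile i belongs to cluster j."""
    rng = np.random.default_rng(seed)
    n = len(X)
    U = rng.random((n, c))
    U /= U.sum(axis=1, keepdims=True)
    for _ in range(iters):
        # Cluster centers as membership-weighted means
        w = U ** m
        centers = (w.T @ X) / w.sum(axis=0)[:, None]
        # Update memberships from inverse distances to the centers
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        inv = d ** (-2.0 / (m - 1.0))
        U = inv / inv.sum(axis=1, keepdims=True)
    return U

# Toy user profiles: (pages per session, avg. minutes per page) for six users
X = np.array([[2.0, 1.0], [3.0, 1.5], [2.5, 1.2],
              [10.0, 6.0], [11.0, 5.5], [9.5, 6.2]])
U = fuzzy_c_means(X)
print(np.round(U, 2))
```

Unlike hard clustering, each user retains a graded membership in every cluster, which suits the overlapping-interest profiles the article builds from server logs.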

  20. Analysis of the benefits of designing and implementing a virtual didactic model of multiple choice exam and problem-solving heuristic report, for first year engineering students

    OpenAIRE

    Bennun, Leonardo; Santibanez, Mauricio

    2015-01-01

    Improvements in the performance and pass rates of first-year engineering students at the University of Concepcion, Chile, were studied after a virtual didactic model of a multiple-choice exam was implemented. This virtual learning resource was implemented on the Web ARCO platform and allows training by facing test models comparable in both time and difficulty to those the students will have to solve during the course. It also provides a feedback mechanism for both: 1) The students, since they c...

  1. Web of Science, Scopus, and Google Scholar citation rates: a case study of medical physics and biomedical engineering: what gets cited and what doesn't?

    Science.gov (United States)

    Trapp, Jamie

    2016-12-01

A publication's citation count often differs depending on the database accessed. Here, aspects of citation counts for medical physics and biomedical engineering papers are studied using papers published in the journal Australasian Physical & Engineering Sciences in Medicine. Comparison is made between Web of Science, Scopus, and Google Scholar. Papers are categorized by subject matter, and citation trends are examined. It is shown that review papers as a group tend to receive more citations on average; however, the highest-cited individual papers are more likely to be research papers.

  2. Raising Reliability of Web Search Tool Research through Replication and Chaos Theory

    OpenAIRE

    Nicholson, Scott

    1999-01-01

Because the World Wide Web is a dynamic collection of information, the Web search tools (or "search engines") that index the Web are dynamic as well. Traditional information retrieval evaluation techniques may not provide reliable results when applied to Web search tools. This study is the result of ten replications of the classic 1996 Ding and Marchionini Web search tool research. It explores the effects that replication can have on transforming unreliable results from one iteration into replica...

  3. Research on Turbofan Engine Model above Idle State Based on NARX Modeling Approach

    Science.gov (United States)

    Yu, Bing; Shu, Wenjun

    2017-03-01

A nonlinear model for a turbofan engine above idle state, based on NARX, is studied. First, data sets for the JT9D engine are obtained via simulation from an existing model. Then, a nonlinear modeling scheme based on NARX is proposed, and several models with different parameters are built from those data sets. Finally, simulations are carried out to verify the accuracy and dynamic performance of the models; the results show that the NARX model reflects the dynamic characteristics of the turbofan engine with high accuracy.
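
The NARX structure referenced above can be sketched by stacking lagged outputs and inputs as regressors. The sketch below fits a linear one-step predictor by least squares (the linear ARX special case); the paper's actual model replaces this fit with a nonlinear neural network, and all names here are hypothetical.

```python
import numpy as np

def build_narx_regressors(u, y, na=2, nb=2):
    """Stack lagged outputs y(k-1..k-na) and inputs u(k-1..k-nb) as regressors."""
    u, y = np.asarray(u, float), np.asarray(y, float)
    n = max(na, nb)
    rows, targets = [], []
    for k in range(n, len(y)):
        # most recent lag first: [y(k-1)..y(k-na), u(k-1)..u(k-nb)]
        rows.append(np.concatenate([y[k - na:k][::-1], u[k - nb:k][::-1]]))
        targets.append(y[k])
    return np.array(rows), np.array(targets)

def fit_arx(u, y, na=2, nb=2):
    """Least-squares fit of a linear one-step predictor y(k) = theta . phi(k).
    A true NARX model would replace this linear fit with a neural network
    mapping phi(k) to y(k)."""
    phi, target = build_narx_regressors(u, y, na, nb)
    theta, *_ = np.linalg.lstsq(phi, target, rcond=None)
    return theta
```

The same regressor-building step feeds either estimator; only the function approximator changes between ARX and NARX.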

  4. Consistent Evolution of Software Artifacts and Non-Functional Models

    Science.gov (United States)

    2014-11-14

Subject terms: EOARD, Nano particles, Photo-Acoustic Sensors, Model-Driven Engineering (MDE), Software Performance Engineering (SPE), Change Propagation, Performance Antipatterns. Contact: Vittorio Cortellessa, Università degli Studi dell'Aquila, Via Vetoio, 67100 L'Aquila, Italy. Email: vittorio.cortellessa@univaq.it. Web: http://www.di.univaq.it/cortelle/

  5. Mean Value Modelling of Turbocharged SI Engines

    DEFF Research Database (Denmark)

    Müller, Martin; Hendricks, Elbert; Sorenson, Spencer C.

    1998-01-01

This paper describes the development of a computer simulation to predict the performance of a turbocharged spark ignition engine during transient operation. New models have been developed for the turbocharger and the intercooling system. An adiabatic model for the intake manifold is presented.

  6. Modelling and Inverse-Modelling: Experiences with O.D.E. Linear Systems in Engineering Courses

    Science.gov (United States)

    Martinez-Luaces, Victor

    2009-01-01

In engineering degree courses, differential equations are widely used to solve problems concerned with modelling. In particular, ordinary differential equation (O.D.E.) linear systems appear regularly in Chemical Engineering, Food Technology Engineering and Environmental Engineering courses, due to their usefulness in modelling chemical kinetics,…

  7. A Web-Based Tool to Estimate Pollutant Loading Using LOADEST

    Directory of Open Access Journals (Sweden)

    Youn Shik Park

    2015-09-01

Collecting and analyzing water quality samples is costly and typically requires significantly more effort than collecting streamflow data, so water quality data are typically collected at a low frequency. Regression models, which identify a relationship between streamflow and water quality data, are often used to estimate pollutant loads. A web-based tool using LOAD ESTimator (LOADEST) as a core engine, with four modules, was developed to provide user-friendly interfaces and input data collection via web access. The first module requests and receives streamflow and water quality data from the U.S. Geological Survey. The second module retrieves the watershed area for computation of pollutant loads per unit area. The third module examines potential errors in the input datasets for LOADEST runs, and the last module computes estimated and allowable annual average pollutant loads and provides tabular and graphical LOADEST outputs. The web-based tool was applied to two watersheds in this study, one agriculturally dominated and one urban dominated. The annual sediment load at the urban-dominated watershed was found to exceed the target load; the web-based tool therefore correctly identified the watershed requiring best management practices to reduce pollutant loads.
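
The regression approach at the core of LOADEST can be illustrated with the simplest rating-curve form, ln(load) = a + b·ln(Q). This is only a sketch of the idea, not the LOADEST model itself (LOADEST uses additional seasonal and time terms plus bias correction); all names are hypothetical.

```python
import numpy as np

def fit_rating_curve(q, load):
    """Least-squares fit of ln(load) = a + b * ln(q); returns (a, b).

    q is streamflow, load is the observed pollutant load at the same times.
    """
    q, load = np.asarray(q, float), np.asarray(load, float)
    X = np.column_stack([np.ones_like(q), np.log(q)])
    coef, *_ = np.linalg.lstsq(X, np.log(load), rcond=None)
    return coef

def estimate_load(coef, q):
    """Back-transform the fitted line to load units (no bias correction here,
    which LOADEST would apply when retransforming from log space)."""
    a, b = coef
    return np.exp(a + b * np.log(np.asarray(q, float)))
```

Because streamflow is sampled far more often than water quality, the fitted curve lets continuous flow records be converted into continuous load estimates.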

  8. Systems Engineering Model for ART Energy Conversion

    Energy Technology Data Exchange (ETDEWEB)

    Mendez Cruz, Carmen Margarita [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Rochau, Gary E. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Wilson, Mollye C. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2017-02-01

The near-term objective of the EC team is to establish an operating, commercially scalable Recompression Closed Brayton Cycle (RCBC) to be constructed for the NE-STEP demonstration system (demo) with the lowest risk possible. A systems engineering approach is recommended to ensure adequate requirements gathering, documentation, and modeling that supports technology development relevant to advanced reactors while supporting crosscut interests in potential applications. A holistic systems engineering model was designed for the ART Energy Conversion program by leveraging Concurrent Engineering, Balance Model, Simplified V Model, and Project Management principles. The resulting model supports the identification and validation of lifecycle Brayton systems requirements and allows designers to detail system-specific components relevant to the current stage in the lifecycle, while maintaining a holistic view of all system elements.

  9. The Development of Web-Based Collaborative Training Model for Enhancing Human Performances on ICT for Students in Banditpattanasilpa Institute

    Science.gov (United States)

    Pumipuntu, Natawut; Kidrakarn, Pachoen; Chetakarn, Somchock

    2015-01-01

    This research aimed to develop the model of Web-based Collaborative (WBC) Training model for enhancing human performances on ICT for students in Banditpattanasilpa Institute. The research is divided into three phases: 1) investigating students and teachers' training needs on ICT web-based contents and performance, 2) developing a web-based…

  10. Development of a CMS- and SEO-Based CYBER CLUSTER E-COMMERCE Model for UMKM (SME) Products

    Directory of Open Access Journals (Sweden)

    Dwi Agus Diartono

    2015-07-01

Abstract: A problem often faced by UMKM (Indonesian small and medium enterprises, SMEs) is the limited reach of the marketing and sales of their products, so that competition from similar products, whether local or from outside the region, can arise. This is because marketing and sales are still done conventionally and individually. This study implements a model E-Commerce system for SME products in an area or district, with participatory "cyber cluster" empowerment groups, using a web-linking system developed with Search Engine Optimization (SEO) and a Content Management System (CMS). The goal is a web presence that can easily rank on search engine results pages and whose content and ranking are always kept up to date. The benefit of this research is to improve the marketing and sale of SME products to the global market and to make the web address easy to find, because it frequently appears in top positions in search engines such as Google. The outcome of this research is a CMS-based website of SME products, optimized with a model of internal and external links, that consistently appears in top positions within the search range. This study uses an action research method and the structured waterfall model of systems development; the web application itself was developed with a prototype model, according to consumer needs.   Keywords: Cyber-Cluster, SEO, CMS, SME

  11. Web Auctions in Europe

    NARCIS (Netherlands)

    A. Pouloudi; J. Paarlberg; H.W.G.M. van Heck (Eric)

    2001-01-01

This paper argues that a better understanding of the business model of web auctions can be reached if we adopt a broader view and provide empirical research from different sites. In this paper the business model of web auctions is refined into four dimensions. These are auction model,

  12. Estimating search engine index size variability: a 9-year longitudinal study.

    Science.gov (United States)

    van den Bosch, Antal; Bogers, Toine; de Kunder, Maurice

    One of the determining factors of the quality of Web search engines is the size of their index. In addition to its influence on search result quality, the size of the indexed Web can also tell us something about which parts of the WWW are directly accessible to the everyday user. We propose a novel method of estimating the size of a Web search engine's index by extrapolating from document frequencies of words observed in a large static corpus of Web pages. In addition, we provide a unique longitudinal perspective on the size of Google and Bing's indices over a nine-year period, from March 2006 until January 2015. We find that index size estimates of these two search engines tend to vary dramatically over time, with Google generally possessing a larger index than Bing. This result raises doubts about the reliability of previous one-off estimates of the size of the indexed Web. We find that much, if not all of this variability can be explained by changes in the indexing and ranking infrastructure of Google and Bing. This casts further doubt on whether Web search engines can be used reliably for cross-sectional webometric studies.
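
The extrapolation idea described above can be sketched as follows: if a word appears in a known fraction of documents in a static reference corpus, an engine's reported hit count for that word can be scaled up to an index-size estimate. This is a simplified illustration of the general approach, not the authors' method; all names and numbers are hypothetical.

```python
def estimate_index_size(corpus_df, corpus_size, engine_hits):
    """Median of per-word extrapolations: hits(w) / (df(w) / corpus_size).

    corpus_df:   {word: documents containing word} in a static reference corpus
    corpus_size: number of documents in that corpus
    engine_hits: {word: hit count reported by the search engine}
    """
    estimates = sorted(
        hits * corpus_size / corpus_df[w]
        for w, hits in engine_hits.items()
        if corpus_df.get(w)  # skip words absent from the reference corpus
    )
    # the median is robust to words whose corpus frequency is unrepresentative
    return estimates[len(estimates) // 2]
```

Averaging over many probe words and repeating the measurement over time is what turns this single-shot estimate into the longitudinal series the paper analyses.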

  13. BP-Broker use-cases in the UncertWeb framework

    Science.gov (United States)

    Roncella, Roberto; Bigagli, Lorenzo; Schulz, Michael; Stasch, Christoph; Proß, Benjamin; Jones, Richard; Santoro, Mattia

    2013-04-01

    The UncertWeb framework is a distributed, Web-based Information and Communication Technology (ICT) system to support scientific data modeling in presence of uncertainty. We designed and prototyped a core component of the UncertWeb framework: the Business Process Broker. The BP-Broker implements several functionalities, such as: discovery of available processes/BPs, preprocessing of a BP into its executable form (EBP), publication of EBPs and their execution through a workflow-engine. According to the Composition-as-a-Service (CaaS) approach, the BP-Broker supports discovery and chaining of modeling resources (and processing resources in general), providing the necessary interoperability services for creating, validating, editing, storing, publishing, and executing scientific workflows. The UncertWeb project targeted several scenarios, which were used to evaluate and test the BP-Broker. The scenarios cover the following environmental application domains: biodiversity and habitat change, land use and policy modeling, local air quality forecasting, and individual activity in the environment. This work reports on the study of a number of use-cases, by means of the BP-Broker, namely: - eHabitat use-case: implements a Monte Carlo simulation performed on a deterministic ecological model; an extended use-case supports inter-comparison of model outputs; - FERA use-case: is composed of a set of models for predicting land-use and crop yield response to climatic and economic change; - NILU use-case: is composed of a Probabilistic Air Quality Forecasting model for predicting concentrations of air pollutants; - Albatross use-case: includes two model services for simulating activity-travel patterns of individuals in time and space; - Overlay use-case: integrates the NILU scenario with the Albatross scenario to calculate the exposure to air pollutants of individuals. Our aim was to prove the feasibility of describing composite modeling processes with a high-level, abstract

  14. Optimum web environment model for e-marketing of religious organizations in the Republic of Croatia

    Directory of Open Access Journals (Sweden)

    Stojanka Dukić

    2013-12-01

Although religious organizations are essentially conservative, they are not immune to the changes brought on by information and communication technology. All religious organizations, whether more liberal or more conservative in their attitude towards change, use information and communication technology, i.e. the communication channel it creates, with varying degrees of success. In fact, a religious organization, like any other organization, can choose among a range of communication channels created by the global network system, i.e. the Internet. The web is probably the most widely used and most popular communication channel available to Internet users. However, the web is not only a communication channel; it has developed into a virtual space, evolving from a means of presentation into a global social network. Building a web environment is often left to professionals such as web designers and web site developers, who focus their attention on the appearance and functionality of web sites but do not address the mission and goals of the religious organization for which the web system is being developed. In particular, the importance of a marketing approach is disregarded, i.e. the necessity to meet the needs of the faithful, who are the users of a religious organization's 'services'. To create a web environment with optimal form and content for religious organizations, especially in the Republic of Croatia, the task must be addressed systematically, using a model approach. For this reason, a study was conducted and a model of an optimal web environment for e-marketing of religious organizations in the Republic of Croatia was developed.

  15. A Model for Web-based Information Systems in E-Retailing.

    Science.gov (United States)

    Wang, Fang; Head, Milena M.

    2001-01-01

    Discusses the use of Web-based information systems (WIS) by electronic retailers to attract and retain consumers and deliver business functions and strategy. Presents an abstract model for WIS design in electronic retailing; discusses customers, business determinants, and business interface; and suggests future research. (Author/LRW)

  16. The Invisible Web: Uncovering Information Sources Search Engines Can't See.

    Science.gov (United States)

    Sherman, Chris; Price, Gary

    This book takes a detailed look at the nature and extent of the Invisible Web, and offers pathfinders for accessing the valuable information it contains. It is designed to fit the needs of both novice and advanced Web searchers. Chapter One traces the development of the Internet and many of the early tools used to locate and share information via…

  17. Compression-based aggregation model for medical web services.

    Science.gov (United States)

    Al-Shammary, Dhiah; Khalil, Ibrahim

    2010-01-01

Many organizations, such as hospitals, have adopted Cloud Web services in their network services to avoid investing heavily in computing infrastructure. SOAP (Simple Object Access Protocol), an XML-based protocol, is the basic communication protocol of Cloud Web services. Web services often suffer congestion and bottlenecks as a result of the high network traffic caused by the large XML overhead. At the same time, the massive load on Cloud Web services, in terms of the large volume of client requests, has resulted in the same problem. In this paper, two XML-aware aggregation techniques based on compression concepts are proposed to aggregate medical Web messages and achieve greater message size reduction.
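
The intuition behind compression-based aggregation can be sketched simply: batching near-identical SOAP envelopes before compressing lets the compressor exploit their shared structure. This is a hedged illustration using plain zlib, not the paper's XML-aware techniques; the batch wrapper and message format are hypothetical.

```python
import zlib

def aggregate_and_compress(messages):
    """Aggregate similar XML messages and compress them as one payload.

    Compressing the batch together lets the deflate dictionary exploit the
    redundancy among near-identical SOAP envelopes, so the batch is far
    smaller than the sum of individually compressed messages.
    """
    payload = b"<batch>" + b"".join(m.encode() for m in messages) + b"</batch>"
    return zlib.compress(payload, level=9)
```

An XML-aware aggregator would go further, merging shared element structure explicitly rather than relying on the compressor to rediscover it.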

  18. Ada & the Analytical Engine.

    Science.gov (United States)

    Freeman, Elisabeth

    1996-01-01

    Presents a brief history of Ada Byron King, Countess of Lovelace, focusing on her primary role in the development of the Analytical Engine--the world's first computer. Describes the Ada Project (TAP), a centralized World Wide Web site that serves as a clearinghouse for information related to women in computing, and provides a Web address for…

  19. MPEG-7 low level image descriptors for modeling users' web pages visual appeal opinion

    OpenAIRE

    Uribe Mayoral, Silvia; Alvarez Garcia, Federico; Menendez Garcia, Jose Manuel

    2015-01-01

The study of users' first impression of web pages is an important factor for interface designers, due to its influence on the final opinion about a site. In this regard, the analysis of web aesthetics can be considered an interesting tool for evaluating this early impression, and the use of low-level image descriptors for modeling it in an objective way represents an innovative research field. Accordingly, in this paper we present a new model for website aesthetics evaluation and ...

  20. Model-driven software engineering

    NARCIS (Netherlands)

    Amstel, van M.F.; Brand, van den M.G.J.; Protic, Z.; Verhoeff, T.; Hamberg, R.; Verriet, J.

    2014-01-01

    Software plays an important role in designing and operating warehouses. However, traditional software engineering methods for designing warehouse software are not able to cope with the complexity, size, and increase of automation in modern warehouses. This chapter describes Model-Driven Software

  1. Uncovering Web search strategies in South African higher education

    Directory of Open Access Journals (Sweden)

    Surika Civilcharran

    2016-11-01

Background: In spite of the enormous amount of information available on the Web, and the fact that search engines are continuously evolving to enhance the search experience, students are nevertheless faced with the difficulty of effectively retrieving information. It is, therefore, imperative for the interaction between students and search tools to be understood and for search strategies to be identified, in order to promote successful information retrieval. Objectives: This study identifies the Web search strategies used by postgraduate students and forms part of a wider study into information retrieval strategies used by postgraduate students at the University of KwaZulu-Natal (UKZN), Pietermaritzburg campus, South Africa. Method: Largely underpinned by Thatcher's cognitive search strategies, a mixed-methods approach was utilised, in which questionnaires were employed in Phase 1 and structured interviews in Phase 2. This article reports and reflects on the findings of Phase 2, which focus on identifying the Web search strategies employed by postgraduate students. The Phase 1 results were reported in Civilcharran, Hughes and Maharaj (2015). Results: Findings reveal the Web search strategies used for academic information retrieval. In spite of easy access to the invisible Web and the advent of meta-search engines, Web search engines remain the preferred search tool. The UKZN online library databases, and especially the UKZN online library Online Public Access Catalogue system, are being underutilised. Conclusion: Being ranked in the top three percent of the world's universities, UKZN is investing in search tools that are not being used to their full potential. This evidence suggests an urgent need for students to be trained in Web searching and to have greater exposure to a variety of search tools. This article is intended to further contribute to the design of undergraduate training programmes in order to deal

  2. THE PRE-TEACHING CYCLE OF A WEB-BASED FORMATIVE ASSESSMENT MODEL IN PHYSICS TEACHING ON TEMPERATURE AND HEAT FOR GRADE X VOCATIONAL HIGH SCHOOL STUDENTS

    Directory of Open Access Journals (Sweden)

    E. Ediyanto

    2016-10-01

The Web-based Formative Assessment Model is divided into three cycles: pre-teaching, whilst-teaching, and post-teaching. This research develops the pre-teaching cycle of the web-based formative assessment model for the physics topic of temperature and heat for grade X vocational high school students. The method used in this research is Research and Development (R & D). The steps used for the development of the pre-teaching cycle of the web-based formative assessment model are: 1) collecting information, 2) planning, 3) developing the initial product, 4) conducting a preliminary test, 5) revision, and 6) trial testing. Based on the trial test, the findings show that the pre-teaching cycle of the web-based formative assessment model can help teachers and students obtain fast feedback. Fast feedback helps students gain conceptual understanding quickly and helps teachers find out students' problems so they can be solved faster.

  3. Engineering design of systems models and methods

    CERN Document Server

    Buede, Dennis M

    2009-01-01

The ideal introduction to the engineering design of systems, now in a new edition. The Engineering Design of Systems, Second Edition compiles a wealth of information from diverse sources to provide a unique, one-stop reference to current methods for systems engineering. It takes a model-based approach to key systems engineering design activities and introduces methods and models used in the real world. Features new to this edition include: * The addition of the Systems Modeling Language (SysML) to several of the chapters, as well as the introduction of new terminology * Additional material on partitioning functions and components * More descriptive material on usage scenarios, based on literature from use case development * Updated homework assignments * The software product CORE (from Vitech Corporation) is used to generate the traditional SE figures, and the software product MagicDraw UML with SysML plugins (from No Magic, Inc.) is used for the SysML figures. This book is designed to be an introductory reference ...

  4. Overview of the TREC 2013 federated web search track

    OpenAIRE

    Demeester, Thomas; Trieschnigg, D; Nguyen, D; Hiemstra, D

    2013-01-01

The TREC Federated Web Search track is intended to promote research related to federated search in a realistic web setting, and to this end provides a large data collection gathered from a series of online search engines. This overview paper discusses the results of the first edition of the track, FedWeb 2013. The focus was on basic challenges in federated search: (1) resource selection, and (2) results merging. After an overview of the provided data collection and the relevance judgments for the ...

  5. Adaptive Engine Torque Compensation with Driveline Model

    Directory of Open Access Journals (Sweden)

    Park Jinrak

    2018-01-01

Engine net torque is the total torque generated on the engine side, and includes the fuel combustion torque, the friction torque and, in hybrid vehicles, the starter motor torque. The engine net torque is used to control powertrain items such as the engine itself, the transmission clutch and the engine clutch, and it must be accurate for precise powertrain control. However, this net torque can vary with engine operating conditions such as engine wear and changes in atmospheric pressure and friction torque. This paper therefore proposes an adaptive engine net torque compensator using a driveline model, which can cope with net torque changes according to engine operating conditions. The adaptive compensator was applied to a parallel hybrid vehicle and investigated via MATLAB Simscape Driveline simulation.
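
One simple form an adaptive net-torque compensator might take is a slowly adapted offset driven by the mismatch between the modeled net torque and the torque implied by measured crankshaft acceleration. This is a hypothetical sketch of the idea, not the paper's driveline-model compensator; all names and the gain value are assumptions.

```python
def adapt_torque_offset(offset, t_net_model, j_eff, omega_dot_meas, gain=0.05):
    """One adaptation step for a net-torque offset estimate.

    offset         : current estimate of the model's torque error [Nm]
    t_net_model    : net torque predicted by the engine model [Nm]
    j_eff          : effective driveline inertia [kg m^2]
    omega_dot_meas : measured crankshaft angular acceleration [rad/s^2]
    gain           : small adaptation gain for slow, stable convergence
    """
    t_implied = j_eff * omega_dot_meas          # torque implied by the driveline
    error = t_implied - (t_net_model + offset)  # residual model mismatch
    return offset + gain * error
```

Run at each control step, the offset converges toward the model's steady torque error, absorbing slow drifts from wear or atmospheric changes.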

  6. A Metadata Model for E-Learning Coordination through Semantic Web Languages

    Science.gov (United States)

    Elci, Atilla

    2005-01-01

    This paper reports on a study aiming to develop a metadata model for e-learning coordination based on semantic web languages. A survey of e-learning modes are done initially in order to identify content such as phases, activities, data schema, rules and relations, etc. relevant for a coordination model. In this respect, the study looks into the…

  7. The Model of Web 2.0 Technologies Implementation in Student’s Self-Development Work

    Directory of Open Access Journals (Sweden)

    G. D. Bukharova

    2012-01-01

The paper is devoted to the substantiation and development of a model for implementing Web 2.0 technologies in organizing students' independent work in courses based on using information and communication technologies (ICT) in professional activities. The methods applied in developing the above model include investigation and analysis of psycho-pedagogical and methodological materials concerning the research subject; systematization and synthesis of the related data; and development of a model for organizing students' independent work using Web 2.0 technologies. The theoretical and methodological bases combine the technologies and modeling methods of the educational process (P. I. Pidkasistyi, V. A. Slastenin); the theory and methods of organizing students' independent work (P. I. Pidkasistyi, S. I. Archangelskiy); and aspects of using Web 2.0 technologies in education (E. D. Patarakin, Tim O'Reilly). The paper provides a description of the designed model along with the complex of pedagogical conditions for its implementation. The recommendations given by the authors can facilitate the organization of students' independent work in training for using ICT in professional activities.

  8. A web-based rapid assessment tool for production publishing solutions

    Science.gov (United States)

    Sun, Tong

    2010-02-01

Solution assessment is a critical first step in understanding and measuring the business process efficiency enabled by an integrated solution package. However, assessing the effectiveness of any solution is usually a very expensive and time-consuming task that involves considerable domain knowledge, collecting and understanding the specific customer operational context, defining validation scenarios, and estimating the expected performance and operational cost. This paper presents an intelligent web-based tool that can rapidly assess any given solution package for production publishing workflows via a simulation engine, and create a report of various estimated performance metrics (e.g. throughput, turnaround time, resource utilization) and operational cost. By integrating a digital publishing workflow ontology and an activity-based costing model with a Petri-net-based workflow simulation engine, this web-based tool allows users to quickly evaluate potential digital publishing solutions side by side within their desired operational contexts, and provides a low-cost and rapid assessment for organizations before committing to any purchase. The tool also benefits solution providers by shortening sales cycles, establishing trustworthy customer relationships, and supplementing professional assessment services with a proven quantitative simulation and estimation technology.
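
A Petri-net workflow simulation of the kind used by such a tool can be sketched with a token-marking fire/run loop. The example below is a minimal illustration, not the paper's engine; the place and transition names model a hypothetical three-stage print job.

```python
def fire(marking, transitions):
    """Fire the first enabled transition; returns (name, new marking) or None.

    A transition is enabled when every input place holds enough tokens.
    """
    for name, (inputs, outputs) in transitions.items():
        if all(marking.get(p, 0) >= n for p, n in inputs.items()):
            m = dict(marking)
            for p, n in inputs.items():
                m[p] -= n          # consume tokens from input places
            for p, n in outputs.items():
                m[p] = m.get(p, 0) + n  # produce tokens in output places
            return name, m
    return None  # net is dead: nothing enabled

def run(marking, transitions, max_steps=100):
    """Run the net until no transition is enabled; returns (trace, marking)."""
    trace = []
    for _ in range(max_steps):
        step = fire(marking, transitions)
        if step is None:
            break
        name, marking = step
        trace.append(name)
    return trace, marking
```

A production tool would attach timing and cost annotations to each transition, so that the firing trace yields throughput, turnaround time, and activity-based cost estimates.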

  9. Using a food web model to inform the design of river restoration—An example at the Barkley Bear Segment, Methow River, north-central Washington

    Science.gov (United States)

    Benjamin, Joseph R.; Bellmore, J. Ryan; Dombroski, Daniel

    2018-01-29

    With the decline of Chinook salmon (Oncorhynchus tshawytscha) and steelhead (O. mykiss), habitat restoration actions in freshwater tributaries have been implemented to improve conditions for juveniles. Typically, physical (for example, hydrologic and engineering) based models are used to design restoration alternatives with the assumption that biological responses will be improved with changes to the physical habitat. Biological models rarely are used. Here, we describe simulations of a food web model, the Aquatic Trophic Productivity (ATP) model, to aid in the design of a restoration project in the Methow River, north-central Washington. The ATP model mechanistically links environmental conditions of the stream to the dynamics of river food webs, and can be used to simulate how alternative river restoration designs influence the potential for river reaches to sustain fish production. Four restoration design alternatives were identified that encompassed varying levels of side channel and floodplain reconnection and large wood addition. Our model simulations suggest that design alternatives focused on reconnecting side channels and the adjacent floodplain may provide the greatest increase in fish capacity. These results were robust to a range of discharge and thermal regimes that naturally occur in the Methow River. Our results suggest that biological models, such as the ATP model, can be used during the restoration planning phase to increase the effectiveness of restoration actions. Moreover, the use of multiple modeling efforts, both physical and biological, when evaluating restoration design alternatives provides a better understanding of the potential outcome of restoration actions.

  10. Enhancing English Language Planning Strategy Using a WebQuest Model

    Science.gov (United States)

    Al-Sayed, Rania Kamal Muhammad; Abdel-Haq, Eman Muhammad; El-Deeb, Mervat Abou-Bakr; Ali, Mahsoub Abdel-Sadeq

    2016-01-01

The present study aimed at developing the English language planning strategy of second-year distinguished governmental language preparatory school pupils using a WebQuest model. Fifty participants in the second year at Hassan Abu-Bakr Distinguished Governmental Language School at Al-Qanater Al-Khairia (Qalubia Governorate) were randomly assigned…

  11. A Web Service and Interface for Remote Electronic Device Characterization

    Science.gov (United States)

    Dutta, S.; Prakash, S.; Estrada, D.; Pop, E.

    2011-01-01

    A lightweight Web Service and a Web site interface have been developed, which enable remote measurements of electronic devices as a "virtual laboratory" for undergraduate engineering classes. Using standard browsers without additional plugins (such as Internet Explorer, Firefox, or even Safari on an iPhone), remote users can control a Keithley…

  12. MODEL-BASED SECURITY ENGINEERING OF SOA SYSTEM USING SECURITY INTENT DSL

    OpenAIRE

    Muhammad Qaiser Saleem; Jafreezal Jaafar; Mohd Fadzil Hassan

    2011-01-01

    Currently, most enterprises use SOA and web services technologies to build their web information systems. They use MDA principles for the design and development of WIS, and UML as the modelling language for business process modelling. Along with the increased connectivity in the SOA environment, security risks rise exponentially. Security is not defined during the early phases of development and is left to the developer. Properly configuring security requirements in SOA applications is...

  13. Technical Note: Harmonizing met-ocean model data via standard web services within small research groups

    Science.gov (United States)

    Signell, Richard; Camossi, E.

    2016-01-01

    Work over the last decade has resulted in standardised web services and tools that can significantly improve the efficiency and effectiveness of working with meteorological and ocean model data. While many operational modelling centres have enabled query and access to data via common web services, most small research groups have not. The penetration of this approach into the research community, where IT resources are limited, can be dramatically improved by (1) making it simple for providers to enable web service access to existing output files; (2) using free technologies that are easy to deploy and configure; and (3) providing standardised, service-based tools that work in existing research environments. We present a simple, local brokering approach that lets modellers continue to use their existing files and tools, while serving virtual data sets that can be used with standardised tools. The goal of this paper is to convince modellers that a standardised framework is not only useful but can be implemented with modest effort using free software components. We use NetCDF Markup language for data aggregation and standardisation, the THREDDS Data Server for data delivery, pycsw for data search, NCTOOLBOX (MATLAB®) and Iris (Python) for data access, and Open Geospatial Consortium Web Map Service for data preview. We illustrate the effectiveness of this approach with two use cases involving small research modelling groups at NATO and USGS.
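    The NcML aggregation step this abstract relies on can be sketched with the standard library alone. The snippet below is a hypothetical illustration: the element names follow the NcML 2.2 namespace, but the file-name pattern and dimension name are invented examples.

```python
# Hypothetical sketch: generating an NcML wrapper that aggregates a model's
# daily output files along the time dimension. The NcML element names follow
# the NcML 2.2 schema; the file-name pattern is an invented example.
import xml.etree.ElementTree as ET

NCML_NS = "http://www.unidata.ucar.edu/namespaces/netcdf/ncml-2.2"

def build_time_aggregation_ncml(file_regexp: str, dim_name: str = "time") -> str:
    """Return an NcML document that joins existing files along one dimension."""
    ET.register_namespace("", NCML_NS)
    root = ET.Element(f"{{{NCML_NS}}}netcdf")
    agg = ET.SubElement(root, f"{{{NCML_NS}}}aggregation",
                        {"type": "joinExisting", "dimName": dim_name})
    # A <scan> element tells the server which files belong to the aggregation.
    ET.SubElement(agg, f"{{{NCML_NS}}}scan",
                  {"location": ".", "suffix": ".nc", "regExp": file_regexp})
    return ET.tostring(root, encoding="unicode")

ncml = build_time_aggregation_ncml(r"ocean_his_\d{4}\.nc")
```

A document like this can be served as a single virtual dataset, which is the point of the brokering approach: the modeller's existing files stay untouched.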

  14. FindZebra: a search engine for rare diseases.

    Science.gov (United States)

    Dragusin, Radu; Petcu, Paula; Lioma, Christina; Larsen, Birger; Jørgensen, Henrik L; Cox, Ingemar J; Hansen, Lars Kai; Ingwersen, Peter; Winther, Ole

    2013-06-01

    The web has become a primary information resource about illnesses and treatments for both medical and non-medical users. Standard web search is by far the most common interface to this information. It is therefore of interest to find out how well web search engines work for diagnostic queries and what factors contribute to successes and failures. Among diseases, rare (or orphan) diseases represent an especially challenging and thus interesting class to diagnose, as each is rare, diverse in symptoms, and usually has scattered resources associated with it. We design an evaluation approach for web search engines for rare disease diagnosis which includes 56 real-life diagnostic cases, performance measures, information resources and guidelines for customising Google Search to this task. In addition, we introduce FindZebra, a specialized (vertical) rare disease search engine. FindZebra is powered by open source search technology and uses curated, freely available online medical information. FindZebra outperforms Google Search both in its default set-up and when customised to the resources used by FindZebra. We extend FindZebra with specialized functionalities exploiting medical ontological information and UMLS medical concepts to demonstrate different ways of displaying the retrieved results to medical experts. Our results indicate that a specialized search engine can improve the diagnostic quality without compromising the ease of use of the currently popular standard web search. The proposed evaluation approach can be valuable for future development and benchmarking. The FindZebra search engine is available at http://www.findzebra.com/. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
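    As an illustration of the kind of open source retrieval machinery a vertical engine like FindZebra builds on (not FindZebra's actual code), the sketch below ranks a tiny invented corpus of rare-disease descriptions by TF-IDF cosine similarity:

```python
# Toy vertical search: TF-IDF weighting with cosine similarity over a small
# curated corpus. The disease descriptions are invented illustrations.
import math
from collections import Counter

def build_index(docs):
    """Build unit-length TF-IDF vectors and an IDF table for a doc dict."""
    tokenized = {d: text.lower().split() for d, text in docs.items()}
    n = len(tokenized)
    df = Counter()
    for toks in tokenized.values():
        df.update(set(toks))
    idf = {t: math.log((1 + n) / (1 + c)) + 1 for t, c in df.items()}
    vecs = {}
    for d, toks in tokenized.items():
        tf = Counter(toks)
        v = {t: c * idf[t] for t, c in tf.items()}
        norm = math.sqrt(sum(w * w for w in v.values()))
        vecs[d] = {t: w / norm for t, w in v.items()}
    return vecs, idf

def search(query, vecs, idf):
    """Return doc ids sorted by similarity to the query, best first."""
    q = Counter(query.lower().split())
    qv = {t: c * idf.get(t, 0.0) for t, c in q.items()}
    scores = {d: sum(qv.get(t, 0.0) * w for t, w in v.items())
              for d, v in vecs.items()}
    return sorted(scores, key=scores.get, reverse=True)

docs = {
    "marfan": "tall stature long limbs lens dislocation aortic aneurysm",
    "fabry": "angiokeratoma burning pain hands kidney failure",
    "gaucher": "enlarged spleen bone pain anemia glucocerebrosidase deficiency",
}
vecs, idf = build_index(docs)
```

A symptom-style query such as `search("aortic aneurysm tall", vecs, idf)` then surfaces the matching record first, which is the core behaviour a diagnostic vertical engine refines with curated sources and ontologies.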

  15. A Collaborative System Software Solution for Modeling Business Flows Based on Automated Semantic Web Service Composition

    Directory of Open Access Journals (Sweden)

    Ion SMEUREANU

    2009-01-01

    Nowadays, business interoperability is one of the key factors for assuring competitive advantage for the participating business partners. In order to implement business cooperation, scalable, distributed and portable collaborative systems have to be implemented. This article presents some of the most widely used technologies in this field. Furthermore, it presents a software application architecture based on the Business Process Modeling Notation standard and automated semantic web service coupling for modeling business flows in a collaborative manner. The main business processes will be represented in a single, hierarchic flow diagram. Each element of the diagram will represent calls to semantic web services. The business logic (the business rules and constraints) will be structured with the help of OWL (Web Ontology Language). Moreover, OWL will also be used to create the semantic web service specifications.

  16. Basic science through engineering? Synthetic modeling and the idea of biology-inspired engineering.

    Science.gov (United States)

    Knuuttila, Tarja; Loettgers, Andrea

    2013-06-01

    Synthetic biology is often understood in terms of the pursuit for well-characterized biological parts to create synthetic wholes. Accordingly, it has typically been conceived of as an engineering dominated and application oriented field. We argue that the relationship of synthetic biology to engineering is far more nuanced than that and involves a sophisticated epistemic dimension, as shown by the recent practice of synthetic modeling. Synthetic models are engineered genetic networks that are implanted in a natural cell environment. Their construction is typically combined with experiments on model organisms as well as mathematical modeling and simulation. What is especially interesting about this combinational modeling practice is that, apart from greater integration between these different epistemic activities, it has also led to the questioning of some central assumptions and notions on which synthetic biology is based. As a result synthetic biology is in the process of becoming more "biology inspired." Copyright © 2013 Elsevier Ltd. All rights reserved.

  17. A Model of Socially Connected Web Objects for IoT Applications

    Directory of Open Access Journals (Sweden)

    Sajjad Ali

    2018-01-01

    The Internet of Things (IoT) is evolving with connected objects at an unprecedented rate, bringing enormous opportunities as well as challenges for future IoT applications. One of the major challenges is to handle the complexity generated by the interconnection of billions of objects. However, the Social Internet of Things (SIoT), emerging from the conglomeration of IoT and social networks, has provided an efficient way to facilitate the development of complex future IoT applications. Nevertheless, to fully utilize the benefits of SIoT, a platform that can provide efficient services using social relations among heterogeneous objects is highly required. The web-objects-enabled IoT environment promotes SIoT features by enabling virtualization using virtual objects and supporting modularity with microservices. To realize SIoT services, this article proposes an architecture that provides a foundation for the development of lightweight microservices based on socially connected web objects. To efficiently discover web objects and reduce the complexity of service provisioning processes, a social relationship model is presented. To realize interoperable service operations, a semantic ontology model has been developed. Finally, to evaluate the proposed design, a prototype has been implemented based on a use case scenario.

  18. Development of a Web-Based Online Training Model for Competitive Advantage at PT Intellisys Tripratama

    Directory of Open Access Journals (Sweden)

    Thomas Ivantoro Prasetyo

    2010-10-01

    In order to face competition, PT Intellisys, as a provider of web-based online training services, is trying to serve every customer, wherever and whenever, with good quality service. The work started with an analysis of competitors' conditions using Porter's five competitive forces, of internal conditions using SWOT analysis, and of internal company processes using Value Chain analysis, in order to conclude a suitable IT strategy for the company. It then continued with Work-Centered Analysis to improve the business process, and with designing a web-based online training system model whose evaluation shows that good accommodation and training service from Intellisys to customers could bring the company competitive advantages. The research result is a model design for a web-based online training system that is cheap and flexible, easy to access wherever and whenever, with innovative and easy-to-learn material. Keywords: e-learning, competitive advantages, information technology, training service provider, SCORM

  19. Semantic modeling and interoperability in product and process engineering a technology for engineering informatics

    CERN Document Server

    2013-01-01

    In the past decade, feature-based design and manufacturing has gained momentum in various engineering domains as a way to represent and reuse semantic patterns with effective applicability. However, the actual scope of feature application is still very limited. Semantic Modeling and Interoperability in Product and Process Engineering provides a systematic solution for the challenging engineering informatics field, aiming at the enhancement of sustainable knowledge representation, implementation and reuse on an open and yet practically manageable scale. This semantic modeling technology supports uniform, multi-facet and multi-level collaborative system engineering with heterogeneous computer-aided tools, such as CAD/CAM, CAE, and ERP. The presented unified feature model can be applied to product and process representation, development, implementation and management. Practical case studies and test samples are provided to illustrate applications which can be implemented by readers in real-world scenarios. ...

  20. TRENT2D WG: a smart web infrastructure for debris-flow modelling and hazard assessment

    Science.gov (United States)

    Zorzi, Nadia; Rosatti, Giorgio; Zugliani, Daniel; Rizzi, Alessandro; Piffer, Stefano

    2016-04-01

    Mountain regions are naturally exposed to geomorphic flows, which involve large amounts of sediments and induce significant morphological modifications. The physical complexity of this class of phenomena represents a challenging issue for modelling, leading to elaborate theoretical frameworks and sophisticated numerical techniques. In general, geomorphic-flow models have proved to be valid tools in hazard assessment and management. However, model complexity seems to represent one of the main obstacles to the diffusion of advanced modelling tools among practitioners and stakeholders, although the EU Flood Directive (2007/60/EC) requires risk management and assessment to be based on "best practices and best available technologies". Furthermore, several cutting-edge models are not particularly user-friendly, and multiple stand-alone software packages are needed to pre- and post-process modelling data. For all these reasons, users often resort to quicker and rougher approaches, leading possibly to unreliable results. Therefore, some effort seems necessary to overcome these drawbacks, with the purpose of supporting and encouraging a widespread diffusion of the most reliable, although sophisticated, modelling tools. With this aim, this work presents TRENT2D WG, a new smart modelling solution for the state-of-the-art model TRENT2D (Armanini et al., 2009; Rosatti and Begnudelli, 2013), which simulates debris flows and hyperconcentrated flows adopting a two-phase description over a mobile bed. TRENT2D WG is a web infrastructure that joins the advantages offered by the SaaS (Software as a Service) delivery model and by WebGIS technology, hosting a complete and user-friendly working environment for modelling. In order to develop TRENT2D WG, the model TRENT2D was converted into a service and exposed on a cloud server, transferring computational burdens from the user hardware to a high-performing server and reducing computational time. Then, the system was equipped with an

  1. Study on online community user motif using web usage mining

    Science.gov (United States)

    Alphy, Meera; Sharma, Ajay

    2016-04-01

    Web usage mining is an application of data mining used to extract useful information from online communities. The World Wide Web contains at least 4.73 billion pages according to the Indexed Web, and at least 228.52 million pages according to the Dutch Indexed Web on Thursday, 6 August 2015. It is difficult to get the needed data from these billions of web pages on the World Wide Web; herein lies the importance of web usage mining. Personalizing the search engine helps the web user identify the most used data in an easy way. It reduces time consumption and enables automatic site search and automatic restoration of useful sites. This study surveys techniques used in pattern discovery and analysis in web usage mining, from the earliest in 1996 to the latest in 2015. Analyzing user motifs helps in the improvement of business, e-commerce, personalisation and websites.
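    A minimal sketch of the pattern-discovery step described above, assuming a Common Log Format-style access log (the log lines and the one-session-per-IP rule are invented simplifications):

```python
# Toy web usage mining: group requests into naive per-IP sessions and count
# the most frequently visited pages. Log lines are invented examples.
from collections import Counter, defaultdict

log_lines = [
    '10.0.0.1 - - [06/Aug/2015:10:00:01] "GET /index.html HTTP/1.1" 200',
    '10.0.0.1 - - [06/Aug/2015:10:00:09] "GET /products.html HTTP/1.1" 200',
    '10.0.0.2 - - [06/Aug/2015:10:01:00] "GET /index.html HTTP/1.1" 200',
    '10.0.0.2 - - [06/Aug/2015:10:01:30] "GET /contact.html HTTP/1.1" 200',
]

def sessions_and_top_pages(lines):
    """Return per-IP page sequences and pages ranked by visit count."""
    sessions = defaultdict(list)   # naive sessionisation: one session per IP
    pages = Counter()
    for line in lines:
        ip = line.split()[0]
        page = line.split('"')[1].split()[1]   # path inside the quoted request
        sessions[ip].append(page)
        pages[page] += 1
    return sessions, pages.most_common()

sessions, top_pages = sessions_and_top_pages(log_lines)
```

Real systems add time-window sessionisation, robot filtering and sequential-pattern mining on top of this counting step, but the input and output shapes are the same.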

  2. Noise and Vibration Risk Prevention Virtual Web for Ubiquitous Training

    Science.gov (United States)

    Redel-Macías, María Dolores; Cubero-Atienza, Antonio J.; Martínez-Valle, José Miguel; Pedrós-Pérez, Gerardo; del Pilar Martínez-Jiménez, María

    2015-01-01

    This paper describes a new Web portal offering experimental labs for ubiquitous training of university engineering students in work-related risk prevention. The Web-accessible computer program simulates the noise and machine vibrations met in the work environment, in a series of virtual laboratories that mimic an actual laboratory and provide the…

  3. Web-based topology queries on a BIM model

    DEFF Research Database (Denmark)

    Rasmussen, Mads Holten; Hviid, Christian Anker; Karlshøj, Jan

    Building Information Modeling (BIM) is in the industry often confused with 3D modeling, even though the potential of modeling information goes far beyond performing clash detections on geometrical objects occupying the same physical space. Lately, several research projects have tried to change...... that by extending BIM with information using linked data technologies. However, when showing information alone, the strong communication benefits of 3D are neglected, and a practical way of connecting the two worlds is currently missing. In this paper, we present a prototype of a visual query interface running...... is to establish a baseline for discussion of the general design choices that have been considered, and the developed application further serves as a proof of concept for combining BIM model data with a knowledge graph and potentially other sources of Linked Open Data, in a simple web interface....

  4. Engine Performance Test of the 1975 Chrysler - Nissan Model CN633 Diesel Engine

    Science.gov (United States)

    1975-09-01

    An engine test of the Chrysler-Nissan Model CN633 diesel engine was performed to determine its steady-state fuel consumption and emissions (HC, CO, NOx) maps. The data acquired are summarized in this report.

  5. Strategies for the Curation of CAD Engineering Models

    Directory of Open Access Journals (Sweden)

    Manjula Patel

    2009-06-01

    Product Lifecycle Management (PLM) has become increasingly important in the engineering community over the last decade or so, due to the globalisation of markets and the rising popularity of products provided as services. It demands the efficient capture, representation, organisation, retrieval and reuse of product data over its entire life. Simultaneously, there is now a much greater reliance on CAD models for communicating designs to manufacturers, builders, maintenance crews and regulators, and for definitively expressing designs. Creating the engineering record digitally, however, presents problems not only for its long-term maintenance and accessibility - due in part to the rapid obsolescence of the hardware, software and file formats involved - but also for recording the evolution of designs, artefacts and products. We examine the curation and preservation requirements in PLM and suggest ways of alleviating the problems of sustaining CAD engineering models through the use of lightweight formats, layered annotation and the collection of Representation Information as defined in the Open Archival Information System (OAIS) Reference Model. We describe two tools which have been specifically developed to aid in the curation of CAD engineering models in the context of PLM: Lightweight Models with Multilayered Annotation (LiMMA) and a Registry/Repository of Representation Information for Engineering (RRoRIfE).

  6. Thermodynamic modeling of direct injection methanol fueled engines

    International Nuclear Information System (INIS)

    Shen Yuan; Bedford, Joshua; Wichman, Indrek S.

    2009-01-01

    In-cylinder pressure is an important parameter that is used to investigate the combustion process in internal combustion (IC) engines. In this paper, a thermodynamic model of IC engine combustion is presented and examined. A heat release function and an empirical conversion efficiency factor are introduced to solve the model. The pressure traces obtained by solving the thermodynamic model are compared with measured pressure data for a fully instrumented laboratory IC spark ignition (SI) engine. Derived scaling parameters for time to peak pressure, peak pressure, and maximum rate of pressure rise (among others) are developed and compared with the numerical simulations. The models examined here may serve as pedagogic tools and, when suitably refined, as preliminary design tools.
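    The heat-release approach the abstract describes can be illustrated with the widely used Wiebe function and a single-zone first-law pressure ODE. This is a generic sketch, not the paper's model; every numeric parameter below is an illustrative assumption:

```python
# Single-zone thermodynamic sketch: Wiebe mass-fraction-burned heat release
# plus the first-law pressure-rise ODE, integrated with explicit Euler over
# crank angle. All constants are illustrative assumptions.
import math

def wiebe(theta, theta0=-10.0, duration=50.0, a=5.0, m=2.0):
    """Mass fraction burned at crank angle theta (degrees)."""
    if theta < theta0:
        return 0.0
    x = min((theta - theta0) / duration, 1.0)
    return 1.0 - math.exp(-a * x ** (m + 1))

def cylinder_volume(theta, vc=5e-5, vd=4.5e-4):
    """Idealised slider-crank volume in m^3 (no rod-length term)."""
    return vc + 0.5 * vd * (1.0 - math.cos(math.radians(theta)))

def pressure_trace(q_total=900.0, gamma=1.3, p0=1.0e5, dtheta=0.1):
    """Integrate dp = -(gamma*p/V) dV + ((gamma-1)/V) dQ from -30 to +60 deg."""
    p, theta = p0, -30.0
    trace = []
    while theta < 60.0:
        v = cylinder_volume(theta)
        dv = cylinder_volume(theta + dtheta) - v
        dq = q_total * (wiebe(theta + dtheta) - wiebe(theta))
        p += -gamma * p / v * dv + (gamma - 1.0) / v * dq
        theta += dtheta
        trace.append((theta, p))
    return trace

trace = pressure_trace()
peak_theta, peak_p = max(trace, key=lambda tp: tp[1])
```

Scaling parameters like time to peak pressure and peak pressure, which the abstract mentions, fall directly out of such a trace.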

  7. Analysis of Web Spam for Non-English Content: Toward More Effective Language-Based Classifiers.

    Directory of Open Access Journals (Sweden)

    Mansour Alsaleh

    Web spammers aim to obtain higher ranks for their web pages by including spam contents that deceive search engines into including their pages in search results even when they are not related to the search terms. Search engines continue to develop new web spam detection mechanisms, but spammers also aim to improve their tools to evade detection. In this study, we first explore the effect of the page language on spam detection features, and we demonstrate how the best set of detection features varies according to the page language. We also study the performance of Google Penguin, a newly developed anti-web-spam technique for Google's search engine. Using spam pages in Arabic as a case study, we show that unlike similar English pages, Google anti-spamming techniques are ineffective against a high proportion of Arabic spam pages. We then explore multiple detection features for spam pages to identify an appropriate set of features that yields a high detection accuracy compared with the integrated Google Penguin technique. In order to build and evaluate our classifier, as well as to help researchers conduct consistent measurement studies, we collected and manually labeled a corpus of Arabic web pages, including both benign and spam pages. Furthermore, we developed a browser plug-in that utilizes our classifier to warn users about spam pages after clicking on a URL and by filtering out search engine results. Using Google Penguin as a benchmark, we provide an illustrative example to show that language-based web spam classifiers are more effective for capturing spam contents.
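    As a toy illustration of a language-aware content classifier of the kind this study builds (not the authors' feature set or corpus), a multinomial naive Bayes over page words can be trained per language, so the learned features differ between, say, Arabic and English pages. The training pages below are invented:

```python
# Minimal multinomial naive Bayes spam/ham page classifier with Laplace
# smoothing. Training pages are invented illustrations.
import math
from collections import Counter

class NaiveBayesSpam:
    def fit(self, pages, labels):
        self.word_counts = {"spam": Counter(), "ham": Counter()}
        self.doc_counts = Counter(labels)
        for text, label in zip(pages, labels):
            self.word_counts[label].update(text.lower().split())
        self.vocab = set(self.word_counts["spam"]) | set(self.word_counts["ham"])
        return self

    def predict(self, text):
        scores = {}
        for label, counts in self.word_counts.items():
            total = sum(counts.values())
            score = math.log(self.doc_counts[label] / sum(self.doc_counts.values()))
            for w in text.lower().split():
                # Laplace smoothing keeps unseen words from zeroing the score.
                score += math.log((counts[w] + 1) / (total + len(self.vocab)))
            scores[label] = score
        return max(scores, key=scores.get)

clf = NaiveBayesSpam().fit(
    ["cheap pills buy now click here", "free money casino win big",
     "conference schedule for web research", "notes on search engine ranking"],
    ["spam", "spam", "ham", "ham"])
```

Training one such model per language is the simplest way to let the "best set of detection features" vary with the page language, as the study observes it must.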

  8. Applied data analysis and modeling for energy engineers and scientists

    CERN Document Server

    Reddy, T Agami

    2011-01-01

    "Applied Data Analysis and Modeling for Energy Engineers and Scientists" discusses mathematical models, data analysis, and decision analysis in modeling. The approach taken in this volume focuses on the modeling and analysis of thermal systems in an engineering environment, while also covering a number of other critical areas. Other material covered includes the tools that researchers and engineering professionals will need in order to explore different analysis methods, use critical assessment skills and reach sound engineering conclusions. The book also covers process and system design and

  9. A systematic framework to discover pattern for web spam classification

    OpenAIRE

    Jelodar, Hamed; Wang, Yongli; Yuan, Chi; Jiang, Xiaohui

    2017-01-01

    Web spam is a big problem for search engine users on the World Wide Web. Spammers use deceptive techniques to achieve high rankings. Although many researchers have presented different approaches for classification and web spam detection, it is still an open issue in computer science. Analyzing and evaluating these websites can be an effective step for discovering and categorizing their features. There are several methods and algorithms for detecting those websites, such as decision t...

  10. Internal combustion engines - Modelling, estimation and control issues

    Energy Technology Data Exchange (ETDEWEB)

    Vigild, C.W.

    2001-12-01

    Alternative power-trains have become buzzwords in the automotive industry in the recent past. New technologies like lithium-ion batteries or fuel cells combined with highly efficient electrical motors show promising results. However, both technologies are extremely expensive, and important questions like 'How are we going to supply fuel cells with hydrogen in an environmentally friendly way?', 'How are we going to improve the range - and recharging speed - of electrical vehicles?' and 'How will our existing infrastructure cope with such changes?' are still left unanswered. Hence, the internal combustion engine, with all its shortcomings, is here to stay for many years to come. What the future will really bring in this area is uncertain, but one thing can be said for sure: the time of the pipe-in, pipe-out engine concept is over. Modern engines, Diesel or gasoline, have in the recent past been provided with many new technologies to improve both performance and handling and to cope with tightening emission legislation. However, as new devices are included, the number of control inputs is also gradually increased. Hence, the control matrix dimension has grown to a considerable size, and the typical table- and regression-based engine calibration procedures currently in use involve both challenging and time-consuming tasks. One way to improve understanding of engines and provide a more comprehensive picture of the control problem is by use of simplified physical modelling - one of the main thrusts of this dissertation. The application of simplified physical modelling as a foundation for engine estimation and control design is first motivated by two control applications. The control problem concerns air/fuel ratio control of spark ignition engines. Two different approaches to control are presented: one based on a model-based Extended Kalman Filter updated predictor, and one based on robust H-infinity techniques. Both controllers are

  11. Engineered Barrier System: Physical and Chemical Environment Model

    International Nuclear Information System (INIS)

    Jolley, D. M.; Jarek, R.; Mariner, P.

    2004-01-01

    The conceptual and predictive models documented in this Engineered Barrier System: Physical and Chemical Environment Model report describe the evolution of the physical and chemical conditions within the waste emplacement drifts of the repository. The modeling approaches and model output data will be used in the total system performance assessment (TSPA-LA) to assess the performance of the engineered barrier system and the waste form. These models evaluate the range of potential water compositions within the emplacement drifts, resulting from the interaction of introduced materials and minerals in dust with water seeping into the drifts and with aqueous solutions forming by deliquescence of dust (as influenced by atmospheric conditions), and from thermal-hydrological-chemical (THC) processes in the drift. These models also consider the uncertainty and variability in water chemistry inside the drift and the compositions of introduced materials within the drift. This report develops and documents a set of process- and abstraction-level models that constitute the engineered barrier system: physical and chemical environment model. Where possible, these models use information directly from other process model reports as input, which promotes integration among process models used for total system performance assessment. Specific tasks and activities of modeling the physical and chemical environment are included in the technical work plan ''Technical Work Plan for: In-Drift Geochemistry Modeling'' (BSC 2004 [DIRS 166519]). As described in the technical work plan, the development of this report is coordinated with the development of other engineered barrier system analysis model reports

  12. QoS prediction for web services based on user-trust propagation model

    Science.gov (United States)

    Thinh, Le-Van; Tu, Truong-Dinh

    2017-10-01

    Web service providers and users play an important online role; however, the rapidly growing number of service providers and users can create many web services with similar functions. This is an exciting area for research, and researchers seek to propose solutions that offer the best service to users. Collaborative filtering (CF) algorithms are widely used in recommendation systems, although these are less effective for cold-start users. Recently, some recommender systems have been developed based on social network models, and the results show that social network models have better performance in terms of CF, especially for cold-start users. However, most social network-based recommendations do not consider the user's mood. This is a hidden source of information, and it is very useful in improving prediction efficiency. In this paper, we introduce a new model called User-Trust Propagation (UTP). The model uses a combination of trust and the mood of users to predict the QoS value, and matrix factorisation (MF) is used to train the model. The experimental results show that the proposed model gives better accuracy than other models, especially for the cold-start problem.
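    The matrix-factorisation training step mentioned above can be sketched as plain stochastic gradient descent on a sparse user x service QoS matrix. The toy data and hyperparameters are assumptions for illustration, not the UTP model itself:

```python
# Matrix factorisation by SGD on a sparse user x service QoS matrix, then
# prediction of an unobserved entry. Toy QoS values are invented.
import random

def train_mf(ratings, n_users, n_items, k=2, lr=0.01, reg=0.02, epochs=5000):
    """Learn latent user/item vectors from (user, item, value) triples."""
    rng = random.Random(0)
    p = [[rng.uniform(0.1, 0.5) for _ in range(k)] for _ in range(n_users)]
    q = [[rng.uniform(0.1, 0.5) for _ in range(k)] for _ in range(n_items)]
    for _ in range(epochs):
        for u, i, r in ratings:
            err = r - sum(p[u][f] * q[i][f] for f in range(k))
            for f in range(k):
                pu, qi = p[u][f], q[i][f]
                p[u][f] += lr * (err * qi - reg * pu)
                q[i][f] += lr * (err * pu - reg * qi)
    return p, q

def predict(p, q, u, i):
    return sum(pf * qf for pf, qf in zip(p[u], q[i]))

# (user, service, observed QoS); user 2 has never called service 1.
observed = [(0, 0, 0.9), (0, 1, 0.8), (1, 0, 0.5), (1, 1, 0.4), (2, 0, 0.9)]
p, q = train_mf(observed, n_users=3, n_items=2)
cold_start_pred = predict(p, q, 2, 1)  # QoS guess for the unobserved pair
```

Because user 2's single observation matches user 0's behaviour, the latent factors place the two users close together, which is exactly the mechanism trust propagation strengthens for cold-start users.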

  13. Identify Web-page Content meaning using Knowledge based System for Dual Meaning Words

    OpenAIRE

    Sinha, Sukanta; Dattagupta, Rana; Mukhopadhyay, Debajyoti

    2012-01-01

    The meaning of Web-page content plays a big role when producing search results from a search engine. In most cases, the Web-page meaning is stored in the title or meta-tag area, but those meanings do not always match the Web-page content. To overcome this situation, we need to go through the Web-page content to identify the Web-page meaning. In cases where the Web-page content holds dual-meaning words, it is really difficult to identify the meaning of the Web-page. In this paper, we are introdu...

  14. Mean Value SI Engine Model for Control Studies

    DEFF Research Database (Denmark)

    Hendricks, Elbert; Sorenson, Spencer C

    1990-01-01

    This paper presents a mathematically simple nonlinear three-state (three differential equation) dynamic model of an SI engine which has the same steady-state accuracy as a typical dynamometer measurement of the engine over its entire speed/load operating range (± 2.0%). The model's accuracy...... for large, fast transients is of the same order in the same operating region. Because the model is mathematically compact, it has few adjustable parameters and is thus simple to fit to a given engine, either on the basis of measurements or given the steady-state results of a larger cycle simulation package.... The model can easily be run on a Personal Computer (PC) using an ordinary differential equation (ODE) integrating routine or package. This makes the model useful for control system design and evaluation....
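    In the spirit of such a three-state mean value model (manifold pressure, crankshaft speed and fuel-film mass are typical state choices), the toy sketch below integrates three coupled ODEs with explicit Euler. The right-hand sides and constants are drastic simplifications invented for illustration, not the paper's identified model:

```python
# Toy three-state mean value engine model integrated with explicit Euler.
# States: manifold pressure, crankshaft speed, fuel-film mass (toy units).
def mvem_step(state, throttle, dt=0.001):
    """One Euler step of the simplified three-state model."""
    p_man, omega, m_film = state
    mdot_in = 0.05 * throttle                 # throttle air flow into manifold
    mdot_out = 1e-4 * p_man * omega           # air pumped out by the engine
    dp = 800.0 * (mdot_in - mdot_out)         # manifold filling/emptying
    domega = 50.0 * mdot_out - 0.05 * omega   # torque from air flow minus friction
    dm_film = 0.2 * mdot_in - m_film / 0.5    # fuel-film deposit and evaporation
    return (p_man + dp * dt, omega + domega * dt, m_film + dm_film * dt)

state = (50.0, 100.0, 0.0)   # initial pressure, speed, film mass
for _ in range(20000):       # 20 simulated seconds at constant 30 % throttle
    state = mvem_step(state, throttle=0.3)
p_man, omega, m_film = state
```

As the abstract notes for the real model, a plain ODE routine on a PC is all that is needed; here the states relax from an arbitrary start toward the throttle-determined operating point.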

  15. Web-EEDF: open source software for modeling the electron dynamics

    International Nuclear Information System (INIS)

    Janda, M.; Machala, Z.; Morvova, M.; Francek, V.; Lukac, P.

    2005-01-01

    We present a free software for modeling the electron dynamics in the uniform electric field named Web-EEDF. It uses a Monte Carlo algorithm to calculate electron energy distribution functions (EEDFs) and other plasma parameters in various mixtures. Obtained results are in good agreement with literature. This software represents the first stage in a more complex modeling of plasma chemical processes leading to the decomposition of various air pollutants in electrical discharges at atmospheric pressure (Authors)
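    A heavily simplified sketch of the Monte Carlo idea behind an EEDF solver: electrons gain energy from the field between collisions and lose a random fraction at each collision, and the sample of final energies approximates a distribution function. All parameters are illustrative assumptions, not Web-EEDF's physics:

```python
# Crude Monte Carlo EEDF sketch: per-electron random heating and collisional
# losses; the final energy sample approximates a distribution function.
import random

def sample_eedf(n_electrons=5000, n_collisions=200, gain_ev=1.0, seed=1):
    """Return a sample of electron energies (eV) after repeated collisions."""
    rng = random.Random(seed)
    energies = []
    for _ in range(n_electrons):
        e = 0.0
        for _ in range(n_collisions):
            e += gain_ev * rng.random()    # energy gained from the field
            e *= 1.0 - 0.3 * rng.random()  # fractional loss in the collision
        energies.append(e)
    return energies

energies = sample_eedf()
mean_energy_ev = sum(energies) / len(energies)
```

A histogram of `energies` would be the EEDF estimate; real solvers replace the random loss fraction with cross-section-weighted elastic and inelastic collision channels for the gas mixture.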

  16. Shear Behavior of Corrugated Steel Webs in H Shape Bridge Girders

    Directory of Open Access Journals (Sweden)

    Qi Cao

    2015-01-01

    In bridge engineering, girders with corrugated steel webs have shown good mechanical properties. With the promotion of composite bridges with corrugated steel webs, in particular steel-concrete composite girder bridges with corrugated steel webs, it is necessary to study the shear performance and buckling of the corrugated webs. In this research, by conducting experiments combined with finite element analysis, the stability of H-shape beams welded with corrugated webs was tested and three failure modes were observed. Structural data including load-deflection, load-strain, and shear capacity of the tested beam specimens were collected and compared with FEM analytical results from ANSYS software. The effects of web thickness, corrugation, and stiffening on the shear capacity of corrugated webs were further discussed.

  17. A review of the reporting of web searching to identify studies for Cochrane systematic reviews.

    Science.gov (United States)

    Briscoe, Simon

    2018-03-01

    The literature searches that are used to identify studies for inclusion in a systematic review should be comprehensively reported. This ensures that the literature searches are transparent and reproducible, which is important for assessing the strengths and weaknesses of a systematic review and for re-running the literature searches when conducting an update review. Web searching using search engines and the websites of topically relevant organisations is sometimes used as a supplementary literature search method. Previous research has shown that the reporting of web searching in systematic reviews often lacks important details and is thus not transparent or reproducible. Useful details to report about web searching include the name of the search engine or website, the URL, the date searched, the search strategy, and the number of results. This study reviews the reporting of web searching to identify studies for Cochrane systematic reviews published in the 6-month period August 2016 to January 2017 (n = 423). Of these reviews, 61 reported using web searching with a search engine or website as a literature search method. In the majority of these reviews, the reporting of web searching was found to lack essential detail for ensuring transparency and reproducibility, such as the search terms. Recommendations are made on how to improve the reporting of web searching in Cochrane systematic reviews. Copyright © 2017 John Wiley & Sons, Ltd.

  18. Blending vertical and web results: A case study using video intent

    NARCIS (Netherlands)

    Lefortier, D.; Serdyukov, P.; Romanenko, F.; de Rijke, M.; de Rijke, M.; Kenter, T.; de Vries, A.P.; Zhai, C.X.; de Jong, F.; Radinsky, K.; Hofmann, K.

    2014-01-01

    Modern search engines aggregate results from specialized verticals into the Web search results. We study a setting where vertical and Web results are blended into a single result list, a setting that has not been studied before. We focus on video intent and present a detailed observational study of
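    The abstract (truncated here) describes blending vertical results into a single web result list. The paper's actual model is not given in this excerpt; a minimal illustration under the assumption that the vertical is inserted as one contiguous block at a chosen slot:

    ```python
    def blend(web_results, vertical_results, slot):
        """Block-level blending: insert the vertical results as a contiguous
        block at position `slot` in the web result list. A sketch of the
        general setting only, not the model studied in the paper."""
        return web_results[:slot] + vertical_results + web_results[slot:]

    # e.g. a video block placed after the first web result
    blended = blend(["w1", "w2", "w3"], ["v1", "v2"], slot=1)
    ```

    In practice the slot would be chosen by a ranking model conditioned on the query's vertical (here, video) intent.
    
    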

  19. Modeling of hybrid vehicle fuel economy and fuel engine efficiency

    Science.gov (United States)

    Wu, Wei

    "Near-CV" (i.e., near-conventional vehicle) hybrid vehicles, with an internal combustion engine, and a supplementary storage with low-weight, low-energy but high-power capacity, are analyzed. This design avoids the shortcoming of the "near-EV" and the "dual-mode" hybrid vehicles that need a large energy storage system (in terms of energy capacity and weight). The small storage is used to optimize engine energy management and can provide power when needed. The energy advantage of the "near-CV" design is to reduce reliance on the engine at low power, to enable regenerative braking, and to provide good performance with a small engine. The fuel consumption of internal combustion engines, which might be applied to hybrid vehicles, is analyzed by building simple analytical models that reflect the engines' energy loss characteristics. Both diesel and gasoline engines are modeled. The simple analytical models describe engine fuel consumption at any speed and load point by describing the engine's indicated efficiency and friction. The engine's indicated efficiency and heat loss are described in terms of several easy-to-obtain engine parameters, e.g., compression ratio, displacement, bore and stroke. Engine friction is described in terms of parameters obtained by fitting available fuel measurements on several diesel and spark-ignition engines. The engine models developed are shown to conform closely to experimental fuel consumption and motored friction data. A model of the energy use of "near-CV" hybrid vehicles with different storage mechanism is created, based on simple algebraic description of the components. With powertrain downsizing and hybridization, a "near-CV" hybrid vehicle can obtain a factor of approximately two in overall fuel efficiency (mpg) improvement, without considering reductions in the vehicle load.

  20. Web Platform for Sharing Modeling Software in the Field of Nonlinear Optics

    Directory of Open Access Journals (Sweden)

    Dubenskaya Julia

    2018-01-01

    Full Text Available We describe the prototype of a Web platform intended for sharing software programs for computer modeling in the rapidly developing field of nonlinear optics. The suggested platform is built on top of the HUBZero open-source middleware. In addition to the basic HUBZero installation, we added the capability to run Docker containers via an external application server and to send calculation programs to those containers for execution. The presented web platform provides a wide range of features and might be of benefit to nonlinear optics researchers.
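    The abstract describes dispatching user programs to Docker containers for execution. The platform's actual dispatch path through its application server is not shown, but a minimal sketch of building an isolated `docker run` invocation for such a job illustrates the idea; the image name, paths, and resource limits below are hypothetical:

    ```python
    import shlex

    def docker_run_command(image, program_path, args=(), cpus=1.0, memory="512m"):
        """Build a `docker run` command line that executes a submitted
        modeling program in an isolated, resource-limited container.
        --rm removes the container after it exits; --network none keeps
        untrusted user code off the network."""
        cmd = ["docker", "run", "--rm",
               "--network", "none",
               f"--cpus={cpus}", f"--memory={memory}",
               image, "python", program_path, *args]
        return shlex.join(cmd)

    # Hypothetical job: run a user's simulation script in an image
    # prepared for nonlinear-optics modeling
    cmd = docker_run_command("nlo-sim:latest", "/work/model.py")
    ```

    An application server would execute this command (or the equivalent Docker API call) and stream the output back to the hub.
    
    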