WorldWideScience

Sample records for web engineering wilga

  1. Photonics and Web Engineering: WILGA 2009

    CERN Document Server

    Romaniuk, Ryszard

    2009-01-01

    The paper is a digest of work presented during the cyclic Ph.D. student symposium on Photonics and Web Engineering, WILGA 2009. The subject of WILGA is Photonics Applications in Astronomy, Communications, Industry and High-Energy Physics Experiments. WILGA is sponsored by the EuCARD Project. The Symposium is organized by ISE PW in cooperation with the professional organizations IEEE, SPIE, PSP and KEiT PAN. Mainly Ph.D. and M.Sc. theses are presented, as well as achievements of young researchers. These papers, presented in such large numbers (more than 250 in some years), are in a certain sense a good digest of the condition of academic research capabilities in this branch of science and technology. The research subjects undertaken for Ph.D. theses in electronics are determined by the interest and research capacity (financial, laboratory and intellectual) of the young researchers and their tutors. Basically, the condition of academic electronics research depends on financing coming from application areas. During Wilga 200...

  2. Photonics Applications and Web Engineering: WILGA 2017

    Science.gov (United States)

    Romaniuk, Ryszard S.

    2017-08-01

    The XLth Wilga Summer 2017 Symposium on Photonics Applications and Web Engineering was held on 28 May-4 June 2017. The Symposium gathered over 350 participants, mainly young researchers active in optics, optoelectronics, photonics, modern optics, mechatronics, applied physics, and electronics technologies and applications. Around 300 oral and poster papers were presented in a few main topical tracks traditional for Wilga, including: bio-photonics, optical sensory networks, photonics-electronics-mechatronics co-design and integration, large functional system design and maintenance, the Internet of Things, measurement systems for astronomy, high energy physics experiments, and others. The paper is a traditional introduction to the 2017 WILGA Summer Symposium Proceedings, and digests some of the Symposium's chosen key presentations. This year the Symposium was divided into the following topical sessions/conferences: Optics, Optoelectronics and Photonics; Computational and Artificial Intelligence; Biomedical Applications; Astronomical and High Energy Physics Experiments Applications; Material Research and Engineering; and Advanced Photonics and Electronics Applications in Research and Industry.

  3. Photonics applications and web engineering: WILGA Summer 2016

    Science.gov (United States)

    Romaniuk, Ryszard S.

    2016-09-01

    The Wilga Summer 2016 Symposium on Photonics Applications and Web Engineering was held on 29 May-6 June 2016. The Symposium gathered over 350 participants, mainly young researchers active in optics, optoelectronics, photonics, and electronics technologies and applications. Around 300 presentations were given in a few main topical tracks, including: bio-photonics, optical sensory networks, photonics-electronics-mechatronics co-design and integration, large functional system design and maintenance, the Internet of Things, and others. The paper is an introduction to the 2016 WILGA Summer Symposium Proceedings, and digests some of the Symposium's chosen key presentations.

  4. Photonics applications and web engineering: WILGA Summer 2015

    Science.gov (United States)

    Romaniuk, Ryszard S.

    2015-09-01

    The Wilga Summer 2015 Symposium on Photonics Applications and Web Engineering was held on 23-31 May 2015. The Symposium gathered over 350 participants, mainly young researchers active in optics, optoelectronics, photonics, and electronics technologies and applications. Around 300 presentations were given in a few main topical tracks, including: bio-photonics, optical sensory networks, photonics-electronics-mechatronics co-design and integration, large functional system design and maintenance, the Internet of Things, and others. The paper is an introduction to the 2015 WILGA Summer Symposium Proceedings, and digests some of the Symposium's chosen key presentations.

  5. Photonics applications and web engineering: WILGA Winter 2016

    Science.gov (United States)

    Romaniuk, Ryszard S.

    2016-09-01

    For twenty years, young researchers from the Institute of Electronic Systems, Warsaw University of Technology, have organized twice a year, under only marginal supervision by senior faculty members and under the patronage of WEiTI PW, KEiT PAN, SPIE, IEEE, PKOpto SEP and PSF, the WILGA Symposium on advanced, integrated functional electronic, photonic and mechatronic systems [1-5]. All aspects are considered: research and development, theory and design, technology - material and construction, software and hardware, commissioning and tests, as well as pilot and practical applications. The applications concern mostly Internet engineering, high energy physics experiments, new power industry including fusion, the nuclear industry, space and satellite technologies, telecommunications, smart municipal environments, as well as biology and medicine - fields which after several years became a proud specialization of the WILGA Symposium [6-8]. The XXXVIIth WILGA Symposium was held on 29-31 January 2016 and gathered a few tens of young researchers active in the mentioned research areas. A few tens of technical papers were presented, which will be published in Proc. SPIE together with the accepted articles from the Summer Edition of the WILGA Symposium scheduled for 29 May-6 June 2016. This article is a digest of chosen presentations from the WILGA Symposium 2016 Winter Edition. The survey is narrowed to a few main topical tracks, such as electronics and photonics design using industrial standards like ATCA/MTCA, and particular designs of functional systems using this series of industrial standards. The paper, which traditionally summarizes the accomplished WILGA Symposium organized by young researchers from Warsaw University of Technology, is also the next part of a cycle of papers concerning their participation in the design of new generations of electronic systems used in discovery experiments in Poland and in leading research laboratories of the world.

  6. Advanced Photonic and Electronic Systems WILGA 2010

    CERN Document Server

    Romaniuk, R S

    2010-01-01

    The SPIE-PSP WILGA Symposium gathers, twice a year in January and in May, new adepts of advanced photonic and electronic systems. The event is oriented towards components and applications. The WILGA Symposium on Photonics and Web Engineering is well known on the web for its devotion to the promotion of young research, under the eminent sponsorship of international engineering associations like SPIE and IEEE and their Poland Sections or counterparts. WILGA is supported by the most important national professional organizations like KEiT PAN and PSP - Photonics Society of Poland. The Symposium has been organized twice a year since 1998. It has gathered over 4000 young researchers and published over 2000 papers, mainly internationally, including more than 900 in the 10 volumes of Proc. SPIE published so far. This paper is a digest of the WILGA Symposium Series and a WILGA 2010 summary. The introductory part treats the characteristics of WILGA Photonics Applications over the period 1998-2010. The following part presents a short report on the XXVth and XXVI...

  7. WILGA Photonics and Web Engineering, January 2012; EuCARD Sessions on HEP and Accelerator Technology

    CERN Document Server

    Romaniuk, R S

    2012-01-01

    Wilga Sessions on HEP experiments and accelerator technology were organized under the umbrella of the EU FP7 Project EuCARD – European Coordination for Accelerator Research and Development. The paper presents a digest of chosen technical work results shown by young researchers from technical universities during the SPIE-IEEE Wilga January 2012 Symposium on Photonics and Web Engineering. Topical tracks of the symposium embraced, among others, new technologies for photonics, sensory and nonlinear optical fibers, object oriented design of hardware, photonic metrology, optoelectronics and photonics applications, photonics-electronics co-design, optoelectronic and electronic systems for astronomy and high energy physics experiments, and JET and pi-of-the-sky experiments development. The symposium, held twice a year, is a summary of the development of numerous Ph.D. theses carried out in this country in the area of advanced electronic and photonic systems. It is also a great occasion for SPIE, IEEE, OSA and PSP st...

  8. Astronomy and Space Technologies, WILGA 2012; EuCARD Sessions

    CERN Document Server

    Romaniuk, R S

    2012-01-01

    Wilga Sessions on HEP experiments, astroparticle physics and accelerator technology were organized under the umbrella of the EU FP7 Project EuCARD – European Coordination for Accelerator Research and Development. This paper is the first part (out of five) of the research survey of the WILGA Symposium work, May 2012 Edition, concerned with photonics and electronics applications in astronomy and space technologies. It presents a digest of chosen technical work results shown by young researchers from different technical universities in this country during the Jubilee XXXth SPIE-IEEE Wilga 2012, May Edition, symposium on Photonics and Web Engineering. Topical tracks of the symposium embraced, among others, nanomaterials and nanotechnologies for photonics, sensory and nonlinear optical fibers, object oriented design of hardware, photonic metrology, optoelectronics and photonics applications, photonics-electronics co-design, optoelectronic and electronic systems for astronomy and high energy physics experiments, JE...

  9. Photon Physics and Plasma Research, WILGA 2012; EuCARD Sessions

    CERN Document Server

    Romaniuk, R S

    2012-01-01

    Wilga Sessions on HEP experiments, astroparticle physics and accelerator technology were organized under the umbrella of the EU FP7 Project EuCARD – European Coordination for Accelerator Research and Development. This paper is the third part (out of five) of the research survey of the WILGA Symposium work, May 2012 Edition, concerned with Photon Physics and Plasma Research. It presents a digest of chosen technical work results shown by young researchers from different technical universities in this country during the Jubilee XXXth SPIE-IEEE Wilga 2012, May Edition, symposium on Photonics and Web Engineering. Topical tracks of the symposium embraced, among others, nanomaterials and nanotechnologies for photonics, sensory and nonlinear optical fibers, object oriented design of hardware, photonic metrology, optoelectronics and photonics applications, photonics-electronics co-design, optoelectronic and electronic systems for astronomy and high energy physics experiments, JET tokamak and pi-of-the-sky experiments ...

  10. Accelerator Technology and High Energy Physics Experiments, WILGA 2012; EuCARD Sessions

    CERN Document Server

    Romaniuk, R S

    2012-01-01

    Wilga Sessions on HEP experiments, astroparticle physics and accelerator technology were organized under the umbrella of the EU FP7 Project EuCARD – European Coordination for Accelerator Research and Development. The paper is the second part (out of five) of the research survey of the WILGA Symposium work, May 2012 Edition, concerned with accelerator technology and high energy physics experiments. It presents a digest of chosen technical work results shown by young researchers from different technical universities in this country during the XXXth Jubilee SPIE-IEEE Wilga 2012, May Edition, symposium on Photonics and Web Engineering. Topical tracks of the symposium embraced, among others, nanomaterials and nanotechnologies for photonics, sensory and nonlinear optical fibers, object oriented design of hardware, photonic metrology, optoelectronics and photonics applications, photonics-electronics co-design, optoelectronic and electronic systems for astronomy and high energy physics experiments, JET and pi-of-the ...

  11. Photonics and Web Engineering 2011, International Journal of Electronics and Telecommunication, vol. 57, no. 3, pp. 421-428, September 2011

    CERN Document Server

    Romaniuk, R S

    2011-01-01

    The paper presents a digest of chosen technical work results shown by young researchers from different technical universities in this country during the SPIE-IEEE Wilga 2011 symposium on Photonics and Web Engineering. Topical tracks of the symposium embraced, among others, nanomaterials and nanotechnologies for photonics and telecom, sensory and nonlinear optical fibers, object oriented design of hardware, photonic metrology, optoelectronics and photonics applications, photonics-electronics co-design, optoelectronic and electronic systems for telecom, astronomy and high energy physics experiments, and JET and pi-of-the-sky experiments development. The symposium is an annual summary of the development of numerous Ph.D. theses carried out in this country in the area of advanced electronic and photonic systems. It is also an occasion for young researchers to meet together in a large group (under the patronage of IEEE) spanning the whole country, with guests from this part of Europe. A digest of Wilga references is pr...

  12. Web Engineering

    Energy Technology Data Exchange (ETDEWEB)

    White, Bebo

    2003-06-23

    Web Engineering is the application of systematic, disciplined and quantifiable approaches to development, operation, and maintenance of Web-based applications. It is both a pro-active approach and a growing collection of theoretical and empirical research in Web application development. This paper gives an overview of Web Engineering by addressing the questions: (a) why is it needed? (b) what is its domain of operation? (c) how does it help and what should it do to improve Web application development? and (d) how should it be incorporated in education and training? The paper discusses the significant differences that exist between Web applications and conventional software, the taxonomy of Web applications, the progress made so far and the research issues and experience of creating a specialization at the master's level. The paper reaches a conclusion that Web Engineering at this stage is a moving target since Web technologies are constantly evolving, making new types of applications possible, which in turn may require innovations in how they are built, deployed and maintained.

  13. Engineering Web Applications

    DEFF Research Database (Denmark)

    Casteleyn, Sven; Daniel, Florian; Dolog, Peter

    Nowadays, Web applications are almost omnipresent. The Web has become a platform not only for information delivery, but also for eCommerce systems, social networks, mobile services, and distributed learning environments. Engineering Web applications involves many intrinsic challenges due to their distributed nature, content orientation, and the requirement to make them available to a wide spectrum of users who are unknown in advance. The authors discuss these challenges in the context of well-established engineering processes, covering the whole product lifecycle from requirements engineering through design and implementation to deployment and maintenance. They stress the importance of models in Web application development, and they compare well-known Web-specific development processes like WebML, WSDM and OOHDM to traditional software development approaches like the waterfall model and the spiral...

  14. Web Search Engines

    OpenAIRE

    Rajashekar, TB

    1998-01-01

    The World Wide Web is emerging as an all-in-one information source. Tools for searching Web-based information include search engines, subject directories and meta search tools. We take a look at key features of these tools and suggest practical hints for effective Web searching.

  15. Web document engineering

    International Nuclear Information System (INIS)

    White, B.

    1996-05-01

    This tutorial provides an overview of several document engineering techniques which are applicable to the authoring of World Wide Web documents. It illustrates how pre-WWW hypertext research is applicable to the development of WWW information resources.

  16. Competence Centered Specialization in Web Engineering Topics in a Software Engineering Masters Degree Programme

    DEFF Research Database (Denmark)

    Dolog, Peter; Thomsen, Lone Leth; Thomsen, Bent

    2010-01-01

    Web applications and Web-based systems are becoming increasingly complex as a result of either customer requests or technology evolution which has eased other aspects of software engineering. Therefore, there is an increasing demand for highly skilled software engineers able to build and also advance the systems on the one hand, as well as professionals who are able to evaluate their effectiveness on the other hand. With this idea in mind, the computer science department at Aalborg University is continuously working on improvements in its specialization in web engineering topics as well as on general competence based web engineering profiles offered also for those who specialize in other areas of software engineering. We describe the current state of the art and our experience with a web engineering curriculum within the software engineering masters degree programme. We also discuss an evolution...

  17. The Use of Web Search Engines in Information Science Research.

    Science.gov (United States)

    Bar-Ilan, Judit

    2004-01-01

    Reviews the literature on the use of Web search engines in information science research, including: ways users interact with Web search engines; social aspects of searching; structure and dynamic nature of the Web; link analysis; other bibliometric applications; characterizing information on the Web; search engine evaluation and improvement; and…

  18. Adding a visualization feature to web search engines: it's time.

    Science.gov (United States)

    Wong, Pak Chung

    2008-01-01

    It's widely recognized that all Web search engines today are almost identical in presentation layout and behavior. In fact, the same presentation approach has been applied to depicting search engine results pages (SERPs) since the first Web search engine launched in 1993. In this Visualization Viewpoints article, I propose to add a visualization feature to Web search engines and suggest that the new addition can improve search engines' performance and capabilities, which in turn lead to better Web search technology.

  19. Engineering Adaptive Web Applications

    DEFF Research Database (Denmark)

    Dolog, Peter

    2007-01-01

    Information and services on the web are accessible for everyone. Users of the web differ in their background, culture, political and social environment, interests and so on. Ambient intelligence was envisioned as a concept for systems which are able to adapt to user actions and needs. With the growing amount of information and services, web applications become natural candidates to adopt the concepts of ambient intelligence. Such applications can deal with diverse user intentions and actions based on the user profile and can suggest the combination of information content and services which suit the user profile the most. This paper summarizes the domain engineering framework for such adaptive web applications. The framework provides guidelines to develop adaptive web applications as members of a family. It suggests how to utilize the design artifacts as knowledge which can be used...

  20. A development process meta-model for Web based expert systems: The Web engineering point of view

    DEFF Research Database (Denmark)

    Dokas, I.M.; Alapetite, Alexandre

    2006-01-01

    Similar to many legacy computer systems, expert systems can be accessed via the Web, forming a set of Web applications known as Web based expert systems. The tough Web competition, the way people and organizations rely on Web applications and the increasing user requirements for better services have raised their complexity. Unfortunately, there is so far no clear answer to the question: how may the methods and experience of Web engineering and expert systems be combined and applied in order to develop effective and successful Web based expert systems? In an attempt to answer this question ... on Web based expert systems – will be presented. The idea behind the presentation of the accessibility evaluation and its conclusions is to show to Web based expert system developers, who typically have little Web engineering background, that Web engineering issues must be considered when developing Web...

  1. Security and computer forensics in web engineering education

    OpenAIRE

    Glisson, W.; Welland, R.; Glisson, L.M.

    2010-01-01

    The integration of security and forensics into Web Engineering curricula is imperative! Poor security in web-based applications is continuing to cost organizations millions and the losses are still increasing annually. Security is frequently taught as a stand-alone course, assuming that security can be 'bolted on' to a web application at some point. Security issues must be integrated into Web Engineering processes right from the beginning to create secure solutions and therefore security shou...

  2. Integrating ecosystem engineering and food webs

    NARCIS (Netherlands)

    Sanders, Dirk; Jones, Clive G.; Thébault, Elisa; Bouma, Tjeerd J.; van der Heide, Tjisse; van Belzen, Jim; Barot, Sebastien

    2014-01-01

    Ecosystem engineering, the physical modification of the environment by organisms, is a common and often influential process whose significance to food web structure and dynamics is largely unknown. In the light of recent calls to expand food web studies to include non-trophic interactions, we...

  3. The Little Engines That Could: Modeling the Performance of World Wide Web Search Engines

    OpenAIRE

    Eric T. Bradlow; David C. Schmittlein

    2000-01-01

    This research examines the ability of six popular Web search engines, individually and collectively, to locate Web pages containing common marketing/management phrases. We propose and validate a model for search engine performance that is able to represent key patterns of coverage and overlap among the engines. The model enables us to estimate the typical additional benefit of using multiple search engines, depending on the particular set of engines being considered. It also provides an estim...

  4. BPELPower—A BPEL execution engine for geospatial web services

    Science.gov (United States)

    Yu, Genong (Eugene); Zhao, Peisheng; Di, Liping; Chen, Aijun; Deng, Meixia; Bai, Yuqi

    2012-10-01

    The Business Process Execution Language (BPEL) has become a popular choice for orchestrating and executing workflows in the Web environment. As one special kind of scientific workflow, geospatial Web processing workflows are data-intensive, deal with complex structures in data and geographic features, and execute automatically with limited human intervention. To enable the proper execution and coordination of geospatial workflows, a specially enhanced BPEL execution engine is required. BPELPower was designed, developed, and implemented as a generic BPEL execution engine with enhancements for executing geospatial workflows. The enhancements lie especially in its capabilities for handling Geography Markup Language (GML) and standard geospatial Web services, such as the Web Processing Service (WPS) and the Web Feature Service (WFS). BPELPower has been used in several demonstrations over the past decade. Two scenarios were discussed in detail to demonstrate the capabilities of BPELPower. That study showed a standard-compliant, Web-based approach for properly supporting geospatial processing, with the only enhancement at the implementation level. Pattern-based evaluation and performance improvement of the engine are discussed: BPELPower directly supports 22 workflow control patterns and 17 workflow data patterns. In the future, the engine will be enhanced with high performance parallel processing and broad Web paradigms.
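
    As a concrete flavor of the standard services such an engine orchestrates, the sketch below issues a WPS 1.0.0 GetCapabilities request over plain HTTP and lists the advertised process identifiers. It is illustrative only, not part of BPELPower, and the endpoint URL is hypothetical.

```python
# Query a hypothetical OGC Web Processing Service (WPS) endpoint and list
# the processes it offers, the kind of service a geospatial BPEL engine
# coordinates. Illustrative sketch, not BPELPower code.
import requests
import xml.etree.ElementTree as ET

WPS_URL = "https://example.org/wps"  # hypothetical endpoint

# WPS 1.0.0 key-value-pair GetCapabilities request
params = {"service": "WPS", "request": "GetCapabilities", "version": "1.0.0"}
response = requests.get(WPS_URL, params=params, timeout=30)
response.raise_for_status()

# Walk the capabilities document and print each advertised process
root = ET.fromstring(response.content)
ns = {"wps": "http://www.opengis.net/wps/1.0.0",
      "ows": "http://www.opengis.net/ows/1.1"}
for process in root.findall(".//wps:Process", ns):
    identifier = process.find("ows:Identifier", ns)
    if identifier is not None:
        print(identifier.text)
```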

  5. Designing a Pedagogical Model for Web Engineering Education: An Evolutionary Perspective

    Science.gov (United States)

    Hadjerrouit, Said

    2005-01-01

    In contrast to software engineering, which relies on relatively well established development approaches, there is a lack of a proven methodology that guides Web engineers in building reliable and effective Web-based systems. Currently, Web engineering lacks process models, architectures, suitable techniques and methods, quality assurance, and a…

  6. Comparison of Physics Frameworks for WebGL-Based Game Engine

    Directory of Open Access Journals (Sweden)

    Yogya Resa

    2014-03-01

    Recently, a new technology called WebGL has shown a lot of potential for developing games. However, since this technology is still new, there are many potentials in the game development area that are not yet explored. This paper tries to uncover the potential of integrating physics frameworks with WebGL technology in a game engine for developing 2D or 3D games. Specifically, we integrated three open source physics frameworks, Bullet, Cannon, and JigLib, into a WebGL-based game engine. Using experiments, we assessed these frameworks in terms of their correctness or accuracy, performance, completeness and compatibility. The results show that it is possible to integrate open source physics frameworks into a WebGL-based game engine, and that Bullet is the best physics framework to be integrated into the WebGL-based game engine.
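
    The "correctness or accuracy" assessment the abstract mentions can be illustrated with a tiny experiment in the same spirit (an assumed example, not the authors' actual test harness): integrate free fall with the semi-implicit Euler scheme common in game physics engines and compare against the closed-form solution.

```python
# Measure the integration error of a simple physics step against the
# analytic free-fall solution y(t) = g*t^2/2. Illustrative sketch only.
G = 9.81          # gravitational acceleration, m/s^2
DT = 1.0 / 60.0   # typical 60 Hz physics time step
STEPS = 120       # simulate two seconds

def euler_free_fall(steps: int, dt: float) -> float:
    """Semi-implicit Euler, the scheme many game physics engines use."""
    y, v = 0.0, 0.0
    for _ in range(steps):
        v += G * dt   # update velocity first
        y += v * dt   # then position, using the new velocity
    return y

t = STEPS * DT
analytic = 0.5 * G * t * t
simulated = euler_free_fall(STEPS, DT)
print(f"analytic={analytic:.4f} m  simulated={simulated:.4f} m  "
      f"error={abs(simulated - analytic):.4f} m")
```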

  7. Knowledge engineering in a temporal semantic web context

    NARCIS (Netherlands)

    Milea, D.V.; Frasincar, F.; Kaymak, U.; Schwabe, D.; Curbera, F.; Dantzig, P.

    2008-01-01

    The emergence of Web 2.0 and the semantic Web as established technologies is fostering a whole new breed of Web applications and systems. These are often centered around knowledge engineering and context awareness. However, adequate temporal formalisms underlying context awareness are currently

  8. Sexual information seeking on web search engines.

    Science.gov (United States)

    Spink, Amanda; Koricich, Andrew; Jansen, B J; Cole, Charles

    2004-02-01

    Sexual information seeking is an important element within human information behavior. Seeking sexually related information on the Internet takes many forms and channels, including chat room discussions, accessing Websites, and searching Web search engines for sexual materials. The study of sexual Web queries provides insight into sexually related information-seeking behavior, of value to Web users and providers alike. We qualitatively analyzed queries from logs of 1,025,910 Alta Vista and AlltheWeb.com Web user queries from 2001. We compared the differences in sexually related Web searching between Alta Vista and AlltheWeb.com users. Differences were found in session duration, query outcomes, and search term choices. Implications of the findings for sexual information seeking are discussed.

  9. A study of medical and health queries to web search engines.

    Science.gov (United States)

    Spink, Amanda; Yang, Yin; Jansen, Jim; Nykanen, Pirrko; Lorence, Daniel P; Ozmutlu, Seda; Ozmutlu, H Cenk

    2004-03-01

    This paper reports findings from an analysis of medical or health queries to different web search engines. We report results: (i) comparing samples of 10000 web queries taken randomly from 1.2 million query logs from the AlltheWeb.com and Excite.com commercial web search engines in 2001 for medical or health queries, (ii) comparing the 2001 findings from Excite and AlltheWeb.com users with results from a previous analysis of medical and health related queries from the Excite Web search engine for 1997 and 1999, and (iii) medical or health advice-seeking queries beginning with the word 'should'. Findings suggest: (i) a small percentage of web queries are medical or health related, (ii) the top five categories of medical or health queries were: general health, weight issues, reproductive health and puberty, pregnancy/obstetrics, and human relationships, and (iii) over time, the medical and health queries may have declined as a proportion of all web queries, as the use of specialized medical/health websites and e-commerce-related queries has increased. Findings provide insights into medical and health-related web querying and suggest some implications for the use of general web search engines when seeking medical/health information.
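
    A minimal sketch of the kind of query categorization such log studies perform; the keyword list and sample queries below are invented for illustration.

```python
# Classify a sample of queries as health-related by keyword matching and
# report the proportion, mirroring the categorization step of log studies.
HEALTH_TERMS = {"health", "medical", "pregnancy", "weight", "doctor",
                "symptom", "symptoms", "vaccine", "should"}

def is_health_related(query: str) -> bool:
    # A query counts as health-related if it shares any term with the list
    return bool(set(query.lower().split()) & HEALTH_TERMS)

queries = [
    "should i take vitamin d",
    "cheap flights to paris",
    "pregnancy symptoms first week",
    "python tutorial",
]
health = [q for q in queries if is_health_related(q)]
print(f"{len(health)}/{len(queries)} queries classified as health-related")
```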

  10. F-OWL: An Inference Engine for Semantic Web

    Science.gov (United States)

    Zou, Youyong; Finin, Tim; Chen, Harry

    2004-01-01

    Understanding and using the data and knowledge encoded in semantic web documents requires an inference engine. F-OWL is an inference engine for the semantic web language OWL, based on F-logic, an approach to defining frame-based systems in logic. F-OWL is implemented using XSB and Flora-2 and takes full advantage of their features. We describe how F-OWL computes ontology entailment and compare it with other description logic based approaches. We also describe TAGA, a trading agent environment that we have used as a test bed for F-OWL and to explore how multiagent systems can use semantic web concepts and technology.

  11. The invisible Web uncovering information sources search engines can't see

    CERN Document Server

    Sherman, Chris

    2001-01-01

    Enormous expanses of the Internet are unreachable with standard web search engines. This book provides the key to finding these hidden resources by identifying how to uncover and use invisible web resources. Mapping the invisible Web, when and how to use it, assessing the validity of the information, and the future of Web searching are topics covered in detail. Only 16 percent of Net-based information can be located using a general search engine. The other 84 percent is what is referred to as the invisible Web, made up of information stored in databases. Unlike pages on the visible Web, informa

  12. Web Spam, Social Propaganda and the Evolution of Search Engine Rankings

    Science.gov (United States)

    Metaxas, Panagiotis Takis

    Search Engines have greatly influenced the way we experience the web. Since the early days of the web, users have been relying on them to get informed and make decisions. When the web was relatively small, web directories were built and maintained using human experts to screen and categorize pages according to their characteristics. By the mid 1990's, however, it was apparent that the human expert model of categorizing web pages does not scale. The first search engines appeared and they have been evolving ever since, taking over the role that web directories used to play.

  13. Specification framework for engineering adaptive web applications

    NARCIS (Netherlands)

    Frasincar, F.; Houben, G.J.P.M.; Vdovják, R.

    2002-01-01

    The growing demand for data-driven Web applications has led to the need for a structured and controlled approach to the engineering of such applications. Both designers and developers need a framework that in all stages of the engineering process allows them to specify the relevant aspects of the

  14. Using the open Web as an information resource and scholarly Web search engines as retrieval tools for academic and research purposes

    Directory of Open Access Journals (Sweden)

    Filistea Naude

    2010-08-01

    This study provided insight into the significance of the open Web as an information resource and Web search engines as research tools amongst academics. The academic staff establishment of the University of South Africa (Unisa) was invited to participate in a questionnaire survey; it included 1188 staff members from five colleges. This study culminated in a PhD dissertation in 2008. One hundred and eighty-seven respondents participated in the survey, which gave a response rate of 15.7%. The results of this study show that academics have indeed accepted the open Web as a useful information resource and Web search engines as retrieval tools when seeking information for academic and research work. The majority of respondents used the open Web and Web search engines on a daily or weekly basis to source academic and research information. The main obstacles to using the open Web and Web search engines included lack of time to search and browse the Web, information overload, poor network speed and the slow downloading speed of webpages.

  15. Categorization of web pages - Performance enhancement to search engine

    Digital Repository Service at National Institute of Oceanography (India)

    Lakshminarayana, S.

  16. Dynamics of a macroscopic model characterizing mutualism of search engines and web sites

    Science.gov (United States)

    Wang, Yuanshi; Wu, Hong

    2006-05-01

    We present a model to describe the mutualism relationship between search engines and web sites. In the model, search engines and web sites benefit from each other while the search engines are derived products of the web sites and cannot survive independently. Our goal is to show strategies for the search engines to survive in the internet market. From mathematical analysis of the model, we show that mutualism does not always result in survival. We show various conditions under which the search engines would tend to extinction, persist or grow explosively. Then by the conditions, we deduce a series of strategies for the search engines to survive in the internet market. We present conditions under which the initial number of consumers of the search engines has little contribution to their persistence, which is in agreement with the results in previous works. Furthermore, we show novel conditions under which the initial value plays an important role in the persistence of the search engines and deduce new strategies. We also give suggestions for the web sites to cooperate with the search engines in order to form a win-win situation.
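
    The abstract does not reproduce the model's equations; a generic obligate-facultative mutualism of the family it describes (an assumption, not necessarily the authors' exact form) can be written with x for the search engines and y for the web sites:

```latex
% Generic mutualism sketch: engines x are obligate (die out alone),
% sites y are facultative (persist alone). Not the paper's exact model.
\begin{align}
  \frac{dx}{dt} &= x\,(-a + b\,y - c\,x), \qquad a, b, c > 0,\\
  \frac{dy}{dt} &= y\,(\,r + e\,x - f\,y), \qquad r, e, f > 0.
\end{align}
% With y = 0 the first equation gives dx/dt = -x(a + cx) < 0, so x -> 0:
% the engines cannot survive independently, matching the model's premise.
```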

  17. A World Wide Web Region-Based Image Search Engine

    DEFF Research Database (Denmark)

    Kompatsiaris, Ioannis; Triantafyllou, Evangelia; Strintzis, Michael G.

    2001-01-01

    In this paper the development of an intelligent image content-based search engine for the World Wide Web is presented. This system will offer a new form of media representation and access to content available on the WWW. Information Web Crawlers continuously traverse the Internet and collect images...

  18. Classifying web genres in context: a case study documenting the web genres used by a software engineer

    NARCIS (Netherlands)

    Montesi, M.; Navarrete, T.

    2008-01-01

    This case study analyzes the Internet-based resources that a software engineer uses in his daily work. Methodologically, we studied the web browser history of the participant, classifying all the web pages he had seen over a period of 12 days into web genres. We interviewed him before and after the

  1. Virtual Reference Services through Web Search Engines: Study of Academic Libraries in Pakistan

    Directory of Open Access Journals (Sweden)

    Rubia Khan

    2017-03-01

    Web search engines (WSE) are powerful and popular tools in the field of information service management. This study is an attempt to examine the impact and usefulness of web search engines in providing virtual reference services (VRS) within academic libraries in Pakistan. The study also attempts to investigate the relevant expertise and skills of library professionals in providing digital reference services (DRS) efficiently using web search engines. The methodology used in this study is quantitative in nature. The data was collected from fifty public and private sector universities in Pakistan using a structured questionnaire. Microsoft Excel and SPSS were used for data analysis. The study concludes that web search engines are commonly used by librarians to help users (especially research scholars) by providing digital reference services. The study also finds a positive correlation between use of web search engines and the quality of digital reference services provided to library users. It is concluded that although search engines have increased the expectations of users and are really big competitors to a library's reference desk, they are however not an alternative to reference service. Findings reveal that search engines pose numerous challenges for librarians, and the study also attempts to bring together possible remedial measures. This study is useful for library professionals to understand the importance of search engines in providing VRS. The study also provides an intellectual comparison among different search engines, their capabilities, limitations, challenges and opportunities to provide VRS effectively in libraries.

  2. Web Feet Guide to Search Engines: Finding It on the Net.

    Science.gov (United States)

    Web Feet, 2001

    2001-01-01

    This guide to search engines for the World Wide Web discusses selecting the right search engine; interpreting search results; major search engines; online tutorials and guides; search engines for kids; specialized search tools for various subjects; and other specialized engines and gateways. (LRW)

  3. Adding a Visualization Feature to Web Search Engines: It’s Time

    Energy Technology Data Exchange (ETDEWEB)

    Wong, Pak C.

    2008-11-11

    Since the first world wide web (WWW) search engine quietly entered our lives in 1994, the “information need” behind web searching has rapidly grown into a multi-billion dollar business that dominates the internet landscape, drives e-commerce traffic, propels global economy, and affects the lives of the whole human race. Today’s search engines are faster, smarter, and more powerful than those released just a few years ago. With the vast investment pouring into research and development by leading web technology providers and the intense emotion behind corporate slogans such as “win the web” or “take back the web,” I can’t help but ask why are we still using the very same “text-only” interface that was used 13 years ago to browse our search engine results pages (SERPs)? Why has the SERP interface technology lagged so far behind in the web evolution when the corresponding search technology has advanced so rapidly? In this article I explore some current SERP interface issues, suggest a simple but practical visual-based interface design approach, and argue why a visual approach can be a strong candidate for tomorrow’s SERP interface.

  4. Comparing the Scale of Web Subject Directories Precision in Technical-Engineering Information Retrieval

    Directory of Open Access Journals (Sweden)

    Mehrdokht Wazirpour Keshmiri

    2012-07-01

    The main purpose of this research was to compare the precision of web subject directories in the retrieval of technical-engineering information. Data gathering was documentary and webometric. Keywords in the technical-engineering sciences were chosen from twenty different subjects covered by the IEEE (Institute of Electrical and Electronics Engineers) and by engineering magazines hosted on the sciencedirect site. These keywords were searched in five heavily used web subject directories: Yahoo, Google, Infomine, Intute and Dmoz. Because the first results returned by search tools are usually the most closely connected to the search keywords, the first ten results were evaluated for every search. The assessments comprised the scale of precision, the scale of error, and the ratio of items retrieved in technical-engineering categories to all retrieved items. The criteria used for determining precision, following widely used standards, comprised the presence of the keywords in the title, the appearance of keywords in parts of the retrieved web pages, keyword adjacency, the URL of the page, the page description, and subject categories. The data were analysed with the Kruskal-Wallis test and Fisher's L.S.D. The results revealed a meaningful difference in the precision of web subject directories in technical-engineering information retrieval, so this hypothesis was confirmed. The web subject directories ranked by precision as follows: Google, Yahoo, Intute, Dmoz and Infomine. The scale of error observed in the first results was another criterion used for comparing the directories; Yahoo had the lowest scale of error and Infomine the highest. This research also compared the overall ratio of retrieved items across all web subject directory categories to retrieved items in technical-engineering categories, and the results revealed a meaningful difference between them. And
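
    The core measure in such comparisons is the precision of the first ten results; a minimal sketch with invented relevance judgements:

```python
# Precision@10: the share of the first ten results judged relevant.
def precision_at_10(relevance_judgements: list[bool]) -> float:
    top10 = relevance_judgements[:10]
    return sum(top10) / len(top10)

# Hypothetical judgements for one query against one subject directory
judged = [True, True, False, True, True, False, True, False, True, True]
print(f"precision@10 = {precision_at_10(judged):.1f}")  # 0.7
```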

  5. Exploiting Semantic Web Technologies to Develop OWL-Based Clinical Practice Guideline Execution Engines.

    Science.gov (United States)

    Jafarpour, Borna; Abidi, Samina Raza; Abidi, Syed Sibte Raza

    2016-01-01

    Computerizing paper-based CPG and then executing them can provide evidence-informed decision support to physicians at the point of care. Semantic web technologies, especially web ontology language (OWL) ontologies, have been profusely used to represent computerized CPG. Using semantic web reasoning capabilities to execute OWL-based computerized CPG unties them from a specific custom-built CPG execution engine and increases their shareability, as any OWL reasoner and triple store can be utilized for CPG execution. However, existing semantic web reasoning-based CPG execution engines suffer from an inability to execute CPG with high levels of expressivity, and from the high cognitive load of computerizing paper-based CPG and updating their computerized versions. In order to address these limitations, we have developed three CPG execution engines based on OWL 1 DL, OWL 2 DL and OWL 2 DL + semantic web rule language (SWRL). OWL 1 DL serves as the base execution engine capable of executing a wide range of CPG constructs; however, for executing highly complex CPG, the OWL 2 DL and OWL 2 DL + SWRL engines offer additional execution capabilities. We evaluated the technical performance and medical correctness of our execution engines using a range of CPG. Technical evaluations show the efficiency of our CPG execution engines in terms of CPU time and the validity of the generated recommendations, in comparison to existing CPG execution engines. Medical evaluations by domain experts show the validity of the CPG-mediated therapy plans in terms of relevance, safety, and ordering for a wide range of patient scenarios.
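
    The paper's premise that any OWL reasoner and triple store can execute such ontologies can be illustrated with off-the-shelf tooling. The sketch below (an illustration, not one of the authors' engines) materializes OWL 2 RL entailments over a tiny invented guideline ontology using rdflib and owlrl.

```python
# Materialize OWL 2 RL entailments over a toy guideline-style ontology.
from rdflib import Graph, Namespace, RDF, RDFS
from owlrl import DeductiveClosure, OWLRL_Semantics

EX = Namespace("http://example.org/cpg#")  # invented namespace
g = Graph()
g.add((EX.HighRiskPatient, RDFS.subClassOf, EX.Patient))
g.add((EX.alice, RDF.type, EX.HighRiskPatient))

# Expand the graph with OWL 2 RL inferences (subclass entailment here)
DeductiveClosure(OWLRL_Semantics).expand(g)

print((EX.alice, RDF.type, EX.Patient) in g)  # True after reasoning
```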

  6. Web Services as Product Experience Augmenters and the Implications for Requirements Engineering: A Position Paper

    NARCIS (Netherlands)

    van Eck, Pascal; Nijholt, Antinus; Wieringa, Roelf J.

    There is currently little insight into what requirements engineering for web services is and in which context it will be carried out. In this position paper, we investigate requirements engineering for a special kind of web services, namely web services that are used to augment the perceived value of

  7. Key word placing in Web page body text to increase visibility to search engines

    Directory of Open Access Journals (Sweden)

    W. T. Kritzinger

    2007-11-01

    The growth of the World Wide Web has spawned a wide variety of new information sources, which has also left users with the daunting task of determining which sources are valid. Many users rely on the Web as an information source because of the low cost of information retrieval. It is also claimed that the Web has evolved into a powerful business tool. Examples include highly popular business services such as Amazon.com and Kalahari.net. It is estimated that around 80% of users utilize search engines to locate information on the Internet. This, by implication, places emphasis on the underlying importance of Web pages being listed in search engine indices. Empirical evidence that the placement of key words in certain areas of the body text influences a Web site's visibility to search engines could not be found in the literature. The results of two experiments indicated that key words should be concentrated towards the top, and diluted towards the bottom, of a Web page to increase visibility. However, care should be taken with key word density, to prevent search engine algorithms from raising the spam alarm.
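
    A minimal sketch of the kind of measurement behind such experiments (an assumed instrument, not the authors'): where a key word first appears in the body text and how dense it is overall.

```python
# Profile a key word's density and first position within body text.
def keyword_profile(body_text: str, keyword: str) -> dict:
    words = body_text.lower().split()
    hits = [i for i, w in enumerate(words) if w == keyword.lower()]
    if not hits:
        return {"density": 0.0, "first_position": None}
    return {
        "density": len(hits) / len(words),       # share of all words
        "first_position": hits[0] / len(words),  # 0.0 means top of page
    }

page = "photonics symposium report photonics papers and many other body words"
print(keyword_profile(page, "photonics"))
# {'density': 0.2, 'first_position': 0.0}
```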

  8. Web components and the semantic web

    OpenAIRE

    Casey, Maire; Pahl, Claus

    2003-01-01

    Component-based software engineering on the Web differs from traditional component and software engineering. We investigate Web component engineering activities that are crucial for the development, composition, and deployment of components on the Web. The current Web Services and Semantic Web initiatives strongly influence our work. Focussing on Web component composition, we develop description and reasoning techniques that support a component developer in the composition activities, focussing...

  9. Engineering Compensations in Web Service Environment

    DEFF Research Database (Denmark)

    Schäfer, Micahel; Dolog, Peter; Nejdl, Wolfgang

    2007-01-01

    Business to business integration has recently been performed by employing Web service environments. Moreover, such environments are being provided by major players on the technology markets. Those environments are based on open specifications for transaction coordination. When a failure in such an environment occurs, a compensation can be initiated to recover from the failure. However, current environments have only limited capabilities for compensations, and are usually based on backward recovery. In this paper, we introduce an engineering approach and an environment to deal with advanced compensations based on forward recovery principles. We extend the existing Web service transaction coordination architecture and infrastructure in order to support flexible compensation operations. A contract-based approach is being used, which allows the specification of permitted compensations at runtime. We...

  10. Semantic Web technologies in software engineering

    OpenAIRE

    Gall, H C; Reif, G

    2008-01-01

    Over the years, the software engineering community has developed various tools to support the specification, development, and maintenance of software. Many of these tools use proprietary data formats to store artifacts, which hampers interoperability. However, the Semantic Web provides a common framework that allows data to be shared and reused across application, enterprise, and community boundaries. Ontologies are used to define the concepts in the domain of discourse and their relationships an...

  11. Situational Requirements Engineering for the Development of Content Management System-based Web Applications

    NARCIS (Netherlands)

    Souer, J.; van de Weerd, I.; Versendaal, J.M.; Brinkkemper, S.

    2005-01-01

    Web applications are evolving towards strongly content-centered Web applications. The development processes and implementation of these applications are unlike the development and implementation of traditional information systems. In this paper we propose the WebEngineering Method, a method for developing

  12. Engineering the presentation layer of adaptable web information systems

    NARCIS (Netherlands)

    Fiala, Z.; Frasincar, F.; Hinz, M.; Houben, G.J.P.M.; Barna, P.; Meissner, K.; Koch, N.; Fraternali, P.; Wirsing, M.

    2004-01-01

    Engineering adaptable Web Information Systems (WIS) requires systematic design models and specification frameworks. A complete model-driven methodology like Hera distinguishes between the conceptual, navigational, and presentational aspects of WIS design and identifies different adaptation hot-spots

  13. Effects of Web-Based Interactive Modules on Engineering Students' Learning Motivations

    Science.gov (United States)

    Bai, Haiyan; Aman, Amjad; Xu, Yunjun; Orlovskaya, Nina; Zhou, Mingming

    2016-01-01

    The purpose of this study is to assess the impact of a set of newly developed modules, the Interactive Web-Based Visualization Tools for Gluing Undergraduate Fuel Cell Systems Courses (IGLU) system, on the learning motivations of engineering students, using two samples (n[subscript 1] = 144 and n[subscript 2] = 135) from senior engineering classes. The…

  14. A Software Engineering Approach based on WebML and BPMN to the Mediation Scenario of the SWS Challenge

    Science.gov (United States)

    Brambilla, Marco; Ceri, Stefano; Valle, Emanuele Della; Facca, Federico M.; Tziviskou, Christina

    Although Semantic Web Services are expected to produce a revolution in the development of Web-based systems, very few enterprise-wide design experiences are available; one of the main reasons is the lack of sound Software Engineering methods and tools for the deployment of Semantic Web applications. In this chapter, we present an approach to software development for the Semantic Web based on classical Software Engineering methods (i.e., formal business process development, computer-aided and component-based software design, and automatic code generation) and on semantic methods and tools (i.e., ontology engineering, semantic service annotation and discovery).

  15. The Effectiveness of Web Search Engines to Index New Sites from Different Countries

    Science.gov (United States)

    Pirkola, Ari

    2009-01-01

    Introduction: Investigates how effectively Web search engines index new sites from different countries. The primary interest is whether new sites are indexed equally or whether search engines are biased towards certain countries. If major search engines show biased coverage it can be considered a significant economic and political problem because…

  16. Reverse Engineering and Software Products Reuse to Teach Collaborative Web Portals: A Case Study with Final-Year Computer Science Students

    Science.gov (United States)

    Medina-Dominguez, Fuensanta; Sanchez-Segura, Maria-Isabel; Mora-Soto, Arturo; Amescua, Antonio

    2010-01-01

    The development of collaborative Web applications does not follow a software engineering methodology. This is because when university students study Web applications in general, and collaborative Web portals in particular, they are not being trained in the use of software engineering techniques to develop collaborative Web portals. This paper…

  17. 25 Years of Model-Driven Web Engineering: What we achieved, What is missing

    Directory of Open Access Journals (Sweden)

    Gustavo Rossi

    2016-12-01

    Model-Driven Web Engineering (MDWE) approaches aim to improve the Web application development process by focusing on modeling instead of coding, and by deriving the running application through transformations from conceptual models to code. The emergence of the Interaction Flow Modeling Language (IFML) has been an important milestone in the evolution of Web modeling languages, indicating not only the maturity of the field but also a final convergence of languages. In this paper we explain the evolution of modeling and design approaches since the early years (in the 90's), detailing the forces which drove that evolution and discussing the strengths and weaknesses of some of those approaches. A brief presentation of the IFML is accompanied by a thorough analysis of the most important achievements of the MDWE community as well as the problems and obstacles that hinder the dissemination of model-driven techniques in the Web engineering field.

  18. Index Compression and Efficient Query Processing in Large Web Search Engines

    Science.gov (United States)

    Ding, Shuai

    2013-01-01

    The inverted index is the main data structure used by all the major search engines. Search engines build an inverted index on their collection to speed up query processing. As the size of the web grows, the length of the inverted list structures, which can easily grow to hundreds of MBs or even GBs for common terms (roughly linear in the size of…
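
    The data structure at stake can be shown in a few lines: a toy inverted index whose postings lists are delta-encoded, the standard first step before applying codes such as variable-byte compression. The documents are invented.

```python
# Build a toy inverted index and delta-encode its postings lists: storing
# gaps between ascending document IDs keeps the numbers small, which is
# what makes codes like variable-byte compression effective.
from collections import defaultdict

docs = {1: "web search engines", 3: "inverted index compression",
        7: "web index", 12: "query processing in web search"}

index = defaultdict(list)  # term -> ascending list of document IDs
for doc_id in sorted(docs):
    for term in set(docs[doc_id].split()):
        index[term].append(doc_id)

def delta_encode(postings: list) -> list:
    """Replace each doc ID after the first with its gap to the previous."""
    return [postings[0]] + [b - a for a, b in zip(postings, postings[1:])]

for term in ("web", "index"):
    print(term, index[term], "->", delta_encode(index[term]))
# web [1, 7, 12] -> [1, 6, 5]
# index [3, 7] -> [3, 4]
```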

  19. Research on the optimization strategy of web search engine based on data mining

    Science.gov (United States)

    Chen, Ronghua

    2018-04-01

    With the wide application of search engines, web sites have become an important way for people to obtain information, and the amount of web site information is growing in an increasingly explosive manner. It is very difficult for people to find the information they need among web sites, and current search engines cannot meet this need, so there is an urgent need for the network to provide personalized website information services; data mining technology offers a breakthrough for this new challenge. In order to improve the accuracy with which people find information from websites, a website search engine optimization strategy based on data mining is proposed, and it is verified by a website search engine optimization experiment. The results show that the proposed strategy improves the accuracy with which people find information and reduces the time needed to find it. It has important practical value.

  20. Using Web 2.0 Techniques in NASA's Ares Engineering Operations Network (AEON) Environment - First Impressions

    Science.gov (United States)

    Scott, David W.

    2010-01-01

    The Mission Operations Laboratory (MOL) at Marshall Space Flight Center (MSFC) is responsible for the Engineering Support capability for NASA's Ares rocket development and operations. In pursuit of this, MOL is building the Ares Engineering and Operations Network (AEON), a web-based portal to support and simplify two critical activities: (a) accessing and analyzing Ares manufacturing, test, and flight performance data, with access to Shuttle data for comparison; and (b) establishing and maintaining collaborative communities within the Ares teams/subteams and with other projects, e.g., Space Shuttle, International Space Station (ISS). AEON seeks to provide a seamless interface to (a) locally developed engineering applications and (b) a Commercial-Off-The-Shelf (COTS) collaborative environment that includes Web 2.0 capabilities, e.g., blogging, wikis, and social networking. This paper discusses how Web 2.0 might be applied to the typically conservative engineering support arena, based on feedback from Integration, Verification, and Validation (IV&V) testing and on searching for its use in similar environments.

  1. Curating the Web: Building a Google Custom Search Engine for the Arts

    Science.gov (United States)

    Hennesy, Cody; Bowman, John

    2008-01-01

    Google's first foray onto the web made search simple and results relevant. With its Co-op platform, Google has taken another step toward dramatically increasing the relevancy of search results, further adapting the World Wide Web to local needs. Google Custom Search Engine, a tool on the Co-op platform, puts one in control of his or her own search…

  2. Search Engine Ranking, Quality, and Content of Web Pages That Are Critical Versus Noncritical of Human Papillomavirus Vaccine.

    Science.gov (United States)

    Fu, Linda Y; Zook, Kathleen; Spoehr-Labutta, Zachary; Hu, Pamela; Joseph, Jill G

    2016-01-01

    Online information can influence attitudes toward vaccination. The aim of the present study was to provide a systematic evaluation of the search engine ranking, quality, and content of Web pages that are critical versus noncritical of human papillomavirus (HPV) vaccination. We identified HPV vaccine-related Web pages with the Google search engine by entering 20 terms. We then assessed each Web page for critical versus noncritical bias and for the following quality indicators: authorship disclosure, source disclosure, attribution of at least one reference, currency, exclusion of testimonial accounts, and readability level less than ninth grade. We also determined Web page comprehensiveness in terms of mention of 14 HPV vaccine-relevant topics. Twenty searches yielded 116 unique Web pages. HPV vaccine-critical Web pages comprised roughly a third of the top-, top 5- and top 10-ranking Web pages. The prevalence of HPV vaccine-critical Web pages was higher for queries that included term modifiers in addition to root terms. Web pages critical of HPV vaccine overall had a lower quality score than those with a noncritical bias, yet they achieved high rankings in search engine queries despite being of lower quality and less comprehensive than noncritical Web pages.
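
    As a rough illustration of the checklist-style scoring the study describes, the sketch below assigns one point per quality indicator met. The field names are invented for the example and are not taken from the paper.

```python
# Hypothetical indicator flags mirroring the six quality criteria in the abstract.
QUALITY_INDICATORS = [
    "authorship_disclosed",
    "source_disclosed",
    "has_reference",
    "is_current",
    "excludes_testimonials",
    "readability_below_grade_9",
]

def quality_score(page):
    """One point per indicator met, checklist-style."""
    return sum(1 for ind in QUALITY_INDICATORS if page.get(ind, False))

page = {"authorship_disclosed": True, "has_reference": True, "is_current": False}
print(quality_score(page))  # -> 2
```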

  3. Integrating Ecosystem Engineering and Food Web Ecology: Testing the Effect of Biogenic Reefs on the Food Web of a Soft-Bottom Intertidal Area.

    Science.gov (United States)

    De Smet, Bart; Fournier, Jérôme; De Troch, Marleen; Vincx, Magda; Vanaverbeke, Jan

    2015-01-01

    The potential of ecosystem engineers to modify the structure and dynamics of food webs has recently been hypothesised from a conceptual point of view. Empirical data on the integration of ecosystem engineers and food webs are, however, largely lacking. This paper investigates the hypothesised link based on field sampling of intertidal biogenic aggregations created by the ecosystem engineer Lanice conchilega (Polychaeta, Terebellidae). The aggregations are known to have a considerable impact on the physical and biogeochemical characteristics of their environment and subsequently on the abundance and biomass of primary food sources and the macrofaunal (i.e. the macro-, hyper- and epibenthos) community. Therefore, we hypothesise that L. conchilega aggregations affect the structure, stability and isotopic niche of the consumer assemblage of a soft-bottom intertidal food web. Primary food sources and the bentho-pelagic consumer assemblage of a L. conchilega aggregation and a control area were sampled on two soft-bottom intertidal areas along the French coast and analysed for their stable isotopes. Despite the structural impacts of the ecosystem engineer on the associated macrofaunal community, the presence of L. conchilega aggregations only has a minor effect on the food web structure of soft-bottom intertidal areas. The isotopic niche width of the consumer communities of the L. conchilega aggregations and control areas are highly similar, implying that consumer taxa do not shift their diet when feeding in a L. conchilega aggregation. Moreover, species packing and hence trophic redundancy were not affected, pointing to an unaltered stability of the food web in the presence of L. conchilega.

  4. How to Boost Engineering Support Via Web 2.0 - Seeds for the Ares Project...and/or Yours?

    Science.gov (United States)

    Scott, David W.

    2010-01-01

    The Mission Operations Laboratory (MOL) at Marshall Space Flight Center (MSFC) is responsible for the Engineering Support capability for NASA's Ares launch system development. In pursuit of this, MOL is building the Ares Engineering and Operations Network (AEON), a web-based portal intended to provide a seamless interface to support and simplify two critical activities: a) access and analyze Ares manufacturing, test, and flight performance data, with access to Shuttle data for comparison; b) provide archive storage for engineering instrumentation data to support engineering design, development, and test. A mix of NASA-written and COTS software provides engineering analysis tools. A by-product of using a data portal to access and display data is access to collaborative tools inherent in a Web 2.0 environment. This paper discusses how Web 2.0 techniques, particularly social media, might be applied to the traditionally conservative and formal engineering support arena. A related paper by the author [1] considers use

  5. A Webometric Analysis of ISI Medical Journals Using Yahoo, AltaVista, and All the Web Search Engines

    Directory of Open Access Journals (Sweden)

    Zohreh Zahedi

    2010-12-01

    The World Wide Web is an important information source for scholarly communications. Examining inlinks via webometric studies has attracted particular interest among information researchers. In this study, the number of inlinks to 69 ISI medical journals retrieved by the Yahoo, AltaVista, and AllTheWeb search engines was examined in a comparative webometric study. For data analysis, SPSS software was employed. Findings revealed that the British Medical Journal website attracted the most links of all in the three search engines. There is a significant correlation between the number of external links and the ISI impact factor. The strongest correlation across the three search engines exists between the external links of Yahoo and AltaVista (100%), and the weakest is found between the external links of AllTheWeb and the number of pages of AltaVista (0.51). There is no significant difference between the internal links and the number of pages found by the three search engines, but in the case of impact factors, significant differences are found between these three search engines. The study thus shows that journals with a higher impact factor attract more links to their websites. It also indicates that the three search engines differ significantly in terms of total links, outlinks and web impact factors.

  6. WebVR——Web Virtual Reality Engine Based on P2P network

    OpenAIRE

    zhihan LV; Tengfei Yin; Yong Han; Yong Chen; Ge Chen

    2011-01-01

    WebVR, a multi-user online virtual reality engine, is introduced. The main contributions are mapping the geographical space and the virtual space onto the P2P overlay network space, and dividing the three spaces by a quad-tree method. Each geocode is identified with a hash value, which is used to index the user list, terrain data, and model object data. Sharing of data through an improved Kademlia network model is designed and implemented. In this model, the XOR algorithm is used to calculate the distanc...
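
    The XOR distance the abstract mentions is standard Kademlia machinery and can be sketched in a few lines. The geocode hashing below is an assumption about how a quad-tree cell might be mapped into the overlay's ID space, not the paper's exact scheme.

```python
import hashlib

def geocode_to_id(geocode: str, bits: int = 16) -> int:
    """Hash a quad-tree geocode (e.g. '0312') into the overlay's ID space (assumed scheme)."""
    return int(hashlib.sha1(geocode.encode()).hexdigest(), 16) % (1 << bits)

def xor_distance(a: int, b: int) -> int:
    """Kademlia-style distance: the XOR of two IDs, read as an integer."""
    return a ^ b

def closest_nodes(target: int, nodes, k: int = 3):
    """Return the k known node IDs nearest to target under the XOR metric."""
    return sorted(nodes, key=lambda n: xor_distance(n, target))[:k]

nodes = [geocode_to_id(g) for g in ("0123", "0312", "3210", "2013")]
print(closest_nodes(geocode_to_id("0313"), nodes))
```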

  7. Web-page Prediction for Domain Specific Web-search using Boolean Bit Mask

    OpenAIRE

    Sinha, Sukanta; Duttagupta, Rana; Mukhopadhyay, Debajyoti

    2012-01-01

    A search engine is a Web-page retrieval tool, and Web searchers save time by using an efficient one. To improve search engine performance, we introduce a unique mechanism that gives Web searchers more prominent search results. In this paper, we discuss a domain-specific Web search prototype that generates the predicted Web-page list for a user-given search string using a Boolean bit mask.
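
    A plausible reading of the Boolean bit mask idea is sketched below: each domain topic gets one bit, each page is summarized by the OR of its topic bits, and a page is predicted relevant when its mask covers the query mask. The topic vocabulary and matching rule here are illustrative assumptions, not the authors' exact design.

```python
# Hypothetical topic flags for a domain-specific crawler; names are assumptions.
TOPICS = {"sports": 1 << 0, "cricket": 1 << 1, "football": 1 << 2, "scores": 1 << 3}

def topic_mask(words):
    """Fold the topic bits found in a text into a single Boolean bit mask."""
    mask = 0
    for w in words:
        mask |= TOPICS.get(w.lower(), 0)
    return mask

def predict_pages(pages, query):
    """Return pages whose mask covers every topic bit set in the query mask."""
    q = topic_mask(query.split())
    return [url for url, mask in pages.items() if mask & q == q]

pages = {
    "example.org/cricket-scores": topic_mask(["cricket", "scores"]),
    "example.org/football": topic_mask(["football"]),
}
print(predict_pages(pages, "cricket scores"))  # -> ['example.org/cricket-scores']
```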

  8. An assessment of the visibility of MeSH-indexed medical web catalogs through search engines.

    Science.gov (United States)

    Zweigenbaum, P; Darmoni, S J; Grabar, N; Douyère, M; Benichou, J

    2002-01-01

    Manually indexed Internet health catalogs such as CliniWeb or CISMeF provide resources for retrieving high-quality health information. Users of these quality-controlled subject gateways are most often referred to them by general search engines such as Google, AltaVista, etc. This raises several questions, among which the following: what is the relative visibility of medical Internet catalogs through search engines? This study addresses this issue by measuring and comparing the visibility of six major, MeSH-indexed health catalogs through four different search engines (AltaVista, Google, Lycos, Northern Light) in two languages (English and French). Over half a million queries were sent to the search engines; for most of these search engines, according to our measures at the time the queries were sent, the most visible catalog for English MeSH terms was CliniWeb and the most visible one for French MeSH terms was CISMeF.

  9. ICSE 2009 Tutorial - Semantic Web Technologies in Software Engineering

    OpenAIRE

    Gall, H C; Reif, G

    2009-01-01

    Over the years, the software engineering community has developed various tools to support the specification, development, and maintenance of software. Many of these tools use proprietary data formats to store artifacts, which hampers interoperability. The Semantic Web, on the other hand, provides a common framework that allows data to be shared and reused across application, enterprise, and community boundaries. Ontologies are used to define the concepts in the domain of discourse and their rel...

  10. Enhancing food engineering education with interactive web-based simulations

    OpenAIRE

    Alexandros Koulouris; Georgios Aroutidis; Dimitrios Vardalis; Petros Giannoulis; Paraskevi Karakosta

    2015-01-01

    In the traditional deductive approach in teaching any engineering topic, teachers would first expose students to the derivation of the equations that govern the behavior of a physical system and then demonstrate the use of equations through a limited number of textbook examples. This methodology, however, is rarely adequate to unmask the cause-effect and quantitative relationships between the system variables that the equations embody. Web-based simulation, which is the integration of simulat...

  11. Knowledge-based personalized search engine for the Web-based Human Musculoskeletal System Resources (HMSR) in biomechanics.

    Science.gov (United States)

    Dao, Tien Tuan; Hoang, Tuan Nha; Ta, Xuan Hien; Tho, Marie Christine Ho Ba

    2013-02-01

    Human musculoskeletal system resources of the human body are valuable for learning and medical purposes. Internet-based information from conventional search engines such as Google or Yahoo cannot respond to the need for useful, accurate, reliable and good-quality human musculoskeletal resources related to medical processes, pathological knowledge and practical expertise. In the present work, an advanced knowledge-based personalized search engine was developed. Our search engine was based on a client-server, multi-layer, multi-agent architecture and the principle of semantic web services to acquire dynamically accurate and reliable HMSR information through a semantic processing and visualization approach. A security-enhanced mechanism was applied to protect the medical information, and a multi-agent crawler was implemented to develop a content-based database of HMSR information. A new semantic-based PageRank score with related mathematical formulas was also defined and implemented. As a result, semantic web service descriptions were presented in OWL, WSDL and OWL-S formats, and operational scenarios with related web-based interfaces for personal computers and mobile devices were presented and analyzed. A functional comparison between our knowledge-based search engine, a conventional search engine and a semantic search engine showed the originality and robustness of our knowledge-based personalized search engine. In fact, our engine allows different users, such as orthopedic patients and experts, healthcare system managers or medical students, to access remotely useful, accurate, reliable and good-quality HMSR information for their learning and medical purposes.

  12. Changes in users' mental models of Web search engines after ten ...

    African Journals Online (AJOL)

    Ward's cluster analyses, including pseudo-T² statistical analyses, were used to determine the mental model clusters for the seventeen salient design features of Web search engines at each time point. The cubic clustering criterion (CCC) and the dendrogram were computed for each sample to help determine the number ...

  13. Can Interactive Web-Based CAD Tools Improve the Learning of Engineering Drawing? A Case Study

    Science.gov (United States)

    Pando Cerra, Pablo; Suárez González, Jesús M.; Busto Parra, Bernardo; Rodríguez Ortiz, Diana; Álvarez Peñín, Pedro I.

    2014-01-01

    Many current Web-based learning environments facilitate the theoretical teaching of a subject but this may not be sufficient for those disciplines that require a significant use of graphic mechanisms to resolve problems. This research study looks at the use of an environment that can help students learn engineering drawing with Web-based CAD…

  14. Design and implementation of Web-based SDUV-FEL engineering database system

    International Nuclear Information System (INIS)

    Sun Xiaoying; Shen Liren; Dai Zhimin; Xie Dong

    2006-01-01

    The design and implementation of a Web-based SDUV-FEL engineering database are introduced. This system will store and serve static and archived SDUV-FEL data, building an effective platform for sharing SDUV-FEL data, and offers usable and reliable SDUV-FEL data to operators and scientists. (authors)

  15. Using the open Web as an information resource and scholarly Web search engines as retrieval tools for academic and research purposes

    OpenAIRE

    Filistea Naude; Chris Rensleigh; Adeline S.A. du Toit

    2010-01-01

    This study provided insight into the significance of the open Web as an information resource and Web search engines as research tools amongst academics. The academic staff establishment of the University of South Africa (Unisa) was invited to participate in a questionnaire survey and included 1188 staff members from five colleges. This study culminated in a PhD dissertation in 2008. One hundred and eighty seven respondents participated in the survey which gave a response rate of 15.7%. The re...

  16. A Web portal for the Engineering and Equipment Data Management System at CERN

    International Nuclear Information System (INIS)

    Tsyganov, A; Petit, S; Martel, P; Milenkovic, S; Suwalska, A; Delamare, C; Widegren, D; Amerigo, S Mallon; Pettersson, T

    2010-01-01

    CERN, the European Laboratory for Particle Physics, located in Geneva, Switzerland, has recently started the Large Hadron Collider (LHC), a 27 km particle accelerator. The CERN Engineering and Equipment Data Management Service (EDMS) provides support for managing engineering and equipment information throughout the entire lifecycle of a project. Based on several data management systems, both developed in-house and commercial, this service supports management and follow-up of different kinds of information throughout the lifecycle of the LHC project: design, manufacturing, installation, commissioning data, maintenance and more. The data collection phase, carried out by specialists, is now being replaced by a phase during which data will be consulted on an extensive basis by non-expert users. In order to address this change, a Web portal for the EDMS has been developed. It brings together in one space all the aspects covered by the EDMS: project and document management, asset tracking and safety follow-up. This paper presents the EDMS Web portal, its dynamic content management and its 'one click' information search engine.

  17. 07051 Working Group Outcomes -- Programming Paradigms for the Web: Web Programming and Web Services

    OpenAIRE

    Hull, Richard; Thiemann, Peter; Wadler, Philip

    2007-01-01

    Participants in the seminar broke into groups on "Patterns and Paradigms" for web programming, "Web Services," "Data on the Web," "Software Engineering" and "Security." Here we give the raw notes recorded during these sessions.

  18. A unified architecture for biomedical search engines based on semantic web technologies.

    Science.gov (United States)

    Jalali, Vahid; Matash Borujerdi, Mohammad Reza

    2011-04-01

    There has been huge growth in the volume of published biomedical research in recent years. Many medical search engines are designed and developed to address the ever-growing information needs of biomedical experts and curators. Significant progress has been made in utilizing the knowledge embedded in medical ontologies and controlled vocabularies to assist these engines. However, the lack of a common architecture for the ontologies utilized and for the overall retrieval process hampers the evaluation of different search engines, and interoperability between them, under unified conditions. In this paper, a unified architecture for medical search engines is introduced. The proposed model contains standard schemas, declared in semantic web languages, for the ontologies and documents used by search engines. Unified models for the annotation and retrieval processes are other parts of the introduced architecture. A sample search engine is also designed and implemented based on the proposed architecture. The search engine is evaluated using two test collections, and results are reported in terms of precision vs. recall and mean average precision for the different approaches used by this search engine.

  19. Web-Based Simulation Games for the Integration of Engineering and Business Fundamentals

    Science.gov (United States)

    Calfa, Bruno; Banholzer, William; Alger, Monty; Doherty, Michael

    2017-01-01

    This paper describes a web-based suite of simulation games that have the purpose to enhance the chemical engineering curriculum with business-oriented decisions. Two simulation cases are discussed whose teaching topics include closing material and energy balances, importance of recycle streams, price-volume relationship in a dynamic market, impact…

  20. Enhancing food engineering education with interactive web-based simulations

    Directory of Open Access Journals (Sweden)

    Alexandros Koulouris

    2015-04-01

    In the traditional deductive approach to teaching any engineering topic, teachers first expose students to the derivation of the equations that govern the behavior of a physical system and then demonstrate the use of the equations through a limited number of textbook examples. This methodology, however, is rarely adequate to unmask the cause-effect and quantitative relationships between the system variables that the equations embody. Web-based simulation, the integration of simulation and internet technologies, has the potential to enhance the learning experience by offering an interactive and easily accessible platform for quick and effortless experimentation with physical phenomena. This paper presents the design and development of a web-based platform for teaching basic food engineering phenomena to food technology students. The platform contains a variety of modules (“virtual experiments”) covering the topics of mass and energy balances, fluid mechanics and heat transfer; here, the design and development of three modules for mass balances and heat transfer is presented. Each webpage representing an educational module has the following features: visualization of the studied phenomenon through graphs, charts or videos; computation through a mathematical model; and experimentation. The student is allowed to edit key parameters of the phenomenon and observe the effect of these changes on the outputs. Experimentation can be done in a free or guided fashion, with a set of prefabricated examples that students can run while self-testing their knowledge by answering multiple-choice questions.
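
    A virtual experiment of the mass-balance kind described above reduces to a few lines of computation. The sketch below solves a steady-state mixer (a hypothetical example, not one of the paper's three modules) for outlet flow and composition.

```python
def steady_state_mixer(streams):
    """Total and component mass balances for a mixer: outlet = sum of inlets.
    Each stream is (mass_flow, mass_fraction_of_component_A)."""
    total = sum(flow for flow, _ in streams)
    a_total = sum(flow * xa for flow, xa in streams)
    return total, a_total / total  # outlet flow and outlet composition

# Two inlet streams: 100 kg/h at 10% A and 50 kg/h at 40% A.
flow_out, xa_out = steady_state_mixer([(100.0, 0.10), (50.0, 0.40)])
print(flow_out, round(xa_out, 3))  # -> 150.0 kg/h, x_A = 0.2
```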

  1. Spatial Visualization Learning in Engineering: Traditional Methods vs. a Web-Based Tool

    Science.gov (United States)

    Pedrosa, Carlos Melgosa; Barbero, Basilio Ramos; Miguel, Arturo Román

    2014-01-01

    This study compares an interactive learning manager for graphic engineering to develop spatial vision (ILMAGE_SV) to traditional methods. ILMAGE_SV is an asynchronous web-based learning tool that allows the manipulation of objects with a 3D viewer, self-evaluation, and continuous assessment. In addition, student learning may be monitored, which…

  2. Soil food web changes during spontaneous succession at post mining sites: a possible ecosystem engineering effect on food web organization?

    Science.gov (United States)

    Frouz, Jan; Thébault, Elisa; Pižl, Václav; Adl, Sina; Cajthaml, Tomáš; Baldrián, Petr; Háněl, Ladislav; Starý, Josef; Tajovský, Karel; Materna, Jan; Nováková, Alena; de Ruiter, Peter C

    2013-01-01

    Parameters characterizing the structure of the decomposer food web, biomass of the soil microflora (bacteria and fungi) and soil micro-, meso- and macrofauna were studied at 14 non-reclaimed, 1- to 41-year-old post-mining sites near the town of Sokolov (Czech Republic). These observations on the decomposer food webs were compared with knowledge of vegetation and soil microstructure development from previous studies. The amount of carbon entering the food web increased with succession age in a similar way as the total amount of C in food web biomass and the number of functional groups in the food web. Connectance did not show any significant changes with succession age, however. In early stages of the succession, the bacterial channel dominated the food web. Later on, in shrub-dominated stands, the fungal channel took over. Even later, in the forest stage, the bacterial channel prevailed again. The best predictor of the fungal:bacterial ratio is the thickness of the fermentation layer. We argue that these changes correspond with changes in topsoil microstructure driven by a combination of plant organic matter input and the engineering effects of earthworms. In early stages, soil is alkaline, and a discontinuous litter layer on the soil surface promotes bacterial biomass growth, so the bacterial food web channel can dominate. Litter accumulation on the soil surface supports the development of the fungal channel. In older stages, earthworms arrive, mix litter into the mineral soil and form an organo-mineral topsoil, which is beneficial for bacteria and enhances the bacterial food web channel.

  3. REPTREE CLASSIFIER FOR IDENTIFYING LINK SPAM IN WEB SEARCH ENGINES

    Directory of Open Access Journals (Sweden)

    S.K. Jayanthi

    2013-01-01

    Search engines are used for retrieving information from the web. Most of the time importance is placed on the top 10 results, which may shrink to the top 5, because of time constraints and users' reliance on search engines; users believe that the top 10 or 5 of the total results are the most relevant. Here arises the problem of spamdexing, a method of deceiving search result quality: falsified metrics, such as inserting enormous numbers of keywords or links in a website, may take that website into the top 10 or 5 positions. This paper proposes a classifier based on REPTree (a regression tree representative). As an initial step, link-based features such as neighbors, PageRank, truncated PageRank, TrustRank and assortativity-related attributes are inferred, and the tree is constructed from these features. The tree uses the feature inference to differentiate spam sites from legitimate sites. The WEBSPAM-UK2007 dataset is taken as a base; it is preprocessed and converted into five datasets: FEATA, FEATB, FEATC, FEATD and FEATE. Only link-based features are taken for the experiments, as this paper focuses on link spam alone. Finally, a representative tree is created which more precisely classifies the web spam entries. Results are given; regression tree classification is shown through experiments to perform well.
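
    REPTree is a Weka algorithm; as a hedged stand-in, the sketch below trains scikit-learn's DecisionTreeClassifier on the same kind of link-based features the paper lists. The feature values and labels are fabricated for illustration only.

```python
# Decision tree over link-based features, loosely in the spirit of the paper;
# scikit-learn's DecisionTreeClassifier stands in for Weka's REPTree.
from sklearn.tree import DecisionTreeClassifier

# columns: [n_neighbors, pagerank, truncated_pagerank, trustrank, assortativity]
X = [
    [120,  0.8, 0.7, 0.90,  0.1],   # legitimate site
    [3000, 0.9, 0.2, 0.10, -0.6],   # link farm: high PR, low trust
    [80,   0.3, 0.3, 0.70,  0.2],   # legitimate site
    [2500, 0.7, 0.1, 0.05, -0.5],   # link farm
]
y = [0, 1, 0, 1]  # 1 = link spam

clf = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
print(clf.predict([[2700, 0.85, 0.15, 0.08, -0.55]]))  # -> [1]
```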

  4. Introduction to Chemical Engineering Reactor Analysis: A Web-Based Reactor Design Game

    Science.gov (United States)

    Orbey, Nese; Clay, Molly; Russell, T.W. Fraser

    2014-01-01

    An approach to explain chemical engineering through a Web-based interactive game design was developed and used with college freshman and junior/senior high school students. The goal of this approach was to demonstrate how to model a lab-scale experiment, and use the results to design and operate a chemical reactor. The game incorporates both…

  5. A Web-based modeling tool for the SEMAT Essence theory of software engineering

    Directory of Open Access Journals (Sweden)

    Daniel Graziotin

    2013-09-01

    As opposed to more mature subjects, software engineering lacks general theories that establish its foundations as a discipline. The Essence Theory of Software Engineering (Essence) has been proposed by the Software Engineering Method and Theory (SEMAT) initiative. The goal of Essence is to develop a theoretically sound basis for software engineering practice and its wide adoption. However, Essence is far from reaching academia- and industry-wide adoption. The reasons for this include a struggle to foresee its utilization potential and a lack of tools for its implementation. SEMAT Accelerator (SematAcc) is a Web-based positioning tool for a software engineering endeavor, which implements the SEMAT Essence kernel. SematAcc permits the use of Essence, thus helping to understand it. The tool enables the teaching, adoption, and research of Essence in controlled experiments and case studies.

  6. EuroGOV: Engineering a Multilingual Web Corpus

    NARCIS (Netherlands)

    Sigurbjörnsson, B.; Kamps, J.; de Rijke, M.

    2005-01-01

    EuroGOV is a multilingual web corpus that was created to serve as the document collection for WebCLEF, the CLEF 2005 web retrieval task. EuroGOV is a collection of web pages crawled from the European Union portal, European Union member state governmental web sites, and Russian government web sites.

  7. The fraying web of life and our future engineers

    Science.gov (United States)

    Splitt, Frank G.

    2004-07-01

    Evidence abounds that we are reaching the carrying capacity of the earth, engaging in deficit spending. The amount of crops, animals, and other biomatter we extract from the earth each year exceeds what the earth can replace by an estimated 20%. Additionally, signs of climate change are precursors of things to come. Global industrialization and the new technologies of the 20th century have helped to stretch the capacities of our finite natural system to precarious levels. Taken together, this evidence reflects a fraying web of life. Sustainable development and natural capitalism work to reverse these trends; however, we are often still wedded to the notion that environmental conservation and economic development are the 'players' in a zero-sum game. Engineering and its technological derivatives can also help remedy the problem. The well-being of future generations will depend to a large extent on how we educate our future engineers. These engineers will be a new breed, developing and using sustainable technology, benign manufacturing processes and an expanded array of environmental assessment tools that will simultaneously support and maintain healthy economies and a healthy environment. The importance of environment and sustainable development considerations, the need for their widespread inclusion in engineering education, the impediments to change, and the important role played by ABET will be presented.

  8. The Evolution of Web Searching.

    Science.gov (United States)

    Green, David

    2000-01-01

    Explores the interrelation between Web publishing and information retrieval technologies and lists new approaches to Web indexing and searching. Highlights include Web directories; search engines; portalisation; Internet service providers; browser providers; meta search engines; popularity based analysis; natural language searching; links-based…

  9. State-of-the-art WEB -technologies and ecological safety of nuclear power engineering facilities

    International Nuclear Information System (INIS)

    Batij, V.G.; Batij, E.V.; Rud'ko, V.M.; Kotlyarov, V.T.

    2004-01-01

    Prospects for using web technologies to improve the radiation safety level of nuclear power engineering facilities are considered. It is shown that the application of such technologies will enable full use of the data from all radiation-control information systems.

  10. Development of Web-Based Learning Environment Model to Enhance Cognitive Skills for Undergraduate Students in the Field of Electrical Engineering

    Science.gov (United States)

    Lakonpol, Thongmee; Ruangsuwan, Chaiyot; Terdtoon, Pradit

    2015-01-01

    This research aimed to develop a web-based learning environment model for enhancing cognitive skills of undergraduate students in the field of electrical engineering. The research is divided into 4 phases: 1) investigating the current status and requirements of web-based learning environment models; 2) developing a web-based learning environment…

  11. A Web Centric Architecture for Deploying Multi-Disciplinary Engineering Design Processes

    Science.gov (United States)

    Woyak, Scott; Kim, Hongman; Mullins, James; Sobieszczanski-Sobieski, Jaroslaw

    2004-01-01

    There is a continuous need for engineering organizations to improve their design processes. Current state-of-the-art techniques use computational simulations to predict design performance and optimize it through advanced design methods. These tools have been used mostly by individual engineers. This paper presents an architecture for achieving results at an organizational level, beyond the individual level. The next set of gains in process improvement will come from improving the effective use of computers and software within a whole organization, not just by an individual. The architecture takes advantage of state-of-the-art capabilities to produce a Web-based system that carries engineering design into the future. To illustrate deployment of the architecture, a case study of implementing advanced multidisciplinary design optimization processes such as Bi-Level Integrated System Synthesis is discussed. Another example, rolling out a design process for Design for Six Sigma, is also described. Each example explains how an organization can effectively infuse engineering practice with new design methods and retain the knowledge over time.

  12. Correct software in web applications and web services

    CERN Document Server

    Thalheim, Bernhard; Prinz, Andreas; Buchberger, Bruno

    2015-01-01

    The papers in this volume aim at obtaining a common understanding of the challenging research questions in web applications comprising web information systems, web services, and web interoperability; obtaining a common understanding of verification needs in web applications; achieving a common understanding of the available rigorous approaches to system development, and the cases in which they have succeeded; identifying how rigorous software engineering methods can be exploited to develop suitable web applications; and at developing a European-scale research agenda combining theory, methods a

  13. Engineering semantic web information systems in Hera

    NARCIS (Netherlands)

    Vdovják, R.; Frasincar, F.; Houben, G.J.P.M.; Barna, P.

    2003-01-01

    The success of the World Wide Web has caused the concept of information system to change. Web Information Systems (WIS) take from the Web its paradigm and technologies in order to retrieve information from sources on the Web, and to present the information in terms of a Web or hypermedia

  14. Engineers and the Web: An analysis of real life gaps in information usage

    NARCIS (Netherlands)

    Kraaijenbrink, Jeroen

    2007-01-01

    Engineers face a wide range of gaps when trying to identify, acquire, and utilize information from the Web. To be able to avoid creating such gaps, it is essential to understand them in detail. This paper reports the results of a study of the real-life gaps in the information usage processes of 17

  15. A web-based online collaboration platform for formulating engineering design projects

    Science.gov (United States)

    Varikuti, Sainath

    Effective communication and collaboration among students, faculty and industrial sponsors play a vital role while formulating and solving engineering design projects. With advances in web technology, online platforms and systems have been proposed to facilitate interactions and collaboration among different stakeholders in the context of senior design projects. However, there are noticeable gaps in the literature with respect to understanding the effects of online collaboration platforms for formulating engineering design projects. Most of the existing literature is focused on exploring the utility of online platforms on activities after the problem is defined and teams are formed. Also, there is a lack of mechanisms and tools to guide the project formation phase in senior design projects, which makes it challenging for students and faculty to collaboratively develop and refine project ideas and to establish appropriate teams. In this thesis, a web-based online collaboration platform is designed and implemented to share, discuss and obtain feedback on project ideas and to facilitate collaboration among students and faculty prior to the start of the semester. The goal of this thesis is to understand the impact of an online collaboration platform for formulating engineering design projects, and how a web-based online collaboration platform affects the amount of interaction among stakeholders during the early phases of the design process. A survey measuring the amount of interaction among students and faculty was administered. Initial findings show a marked improvement in the students' ability to share project ideas and form teams with other students and faculty. Students found the online platform simple to use. The suggestions for improving the tool generally included features that were not necessarily design-specific, indicating that the underlying concept of this collaborative platform provides a strong basis and can be extended for future online platforms.

  16. Bioprocess-Engineering Education with Web Technology

    NARCIS (Netherlands)

    Sessink, O.

    2006-01-01

    Development of learning material that is distributed through and accessible via the World Wide Web. Various options from web technology are exploited to improve the quality and efficiency of learning material.

  17. Semantic similarity measure in biomedical domain leverage web search engine.

    Science.gov (United States)

    Chen, Chi-Huang; Hsieh, Sheau-Ling; Weng, Yung-Ching; Chang, Wen-Yung; Lai, Feipei

    2010-01-01

    Semantic similarity measures play an essential role in Information Retrieval and Natural Language Processing. In this paper we propose a page-count-based semantic similarity measure and apply it in biomedical domains. Previous research in semantic-web-related applications has deployed various semantic similarity measures; despite the usefulness of these measurements in those applications, measuring the semantic similarity between two terms remains a challenging task. The proposed method exploits page counts returned by a Web search engine. We define various similarity scores for two given terms P and Q, using the page counts for querying P, Q and P AND Q. Moreover, we propose a novel approach to compute semantic similarity using lexico-syntactic patterns with page counts. These different similarity scores are integrated by adapting support vector machines, to leverage the robustness of semantic similarity measures. Experimental results on two datasets achieve correlation coefficients of 0.798 on the dataset provided by A. Hliaoutakis, 0.705 on the dataset provided by T. Pedersen with physician scores and 0.496 on the dataset provided by T. Pedersen et al. with expert scores.
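
    The paper defines several page-count scores; a representative one, assuming a WebJaccard-style formula over search-engine hit counts (the exact score definitions and the SVM integration are in the paper), looks like this:

```python
def web_jaccard(count_p, count_q, count_pq, threshold=0):
    """Page-count similarity for terms P and Q:
    J(P, Q) = H(P AND Q) / (H(P) + H(Q) - H(P AND Q)),
    where H(x) is the number of hits a search engine reports for query x."""
    if count_pq <= threshold:  # guard against noise from near-zero co-occurrence
        return 0.0
    return count_pq / (count_p + count_q - count_pq)

# Hypothetical page counts for P, Q and the conjunctive query "P AND Q".
print(round(web_jaccard(12_000_000, 9_000_000, 4_500_000), 3))  # -> 0.273
```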

  18. Personalizing Web Search based on User Profile

    OpenAIRE

    Utage, Sharyu; Ahire, Vijaya

    2016-01-01

    Web search engines are the most widely used tools for information retrieval from the World Wide Web, and they help users find the most useful information. When different users search for the same information, a search engine provides the same result without understanding who submitted the query. Personalized web search is a search technique for providing more useful results. This paper models the preferences of users as hierarchical user profiles. A framework called UPS is proposed; it generalizes profiles and m...

  19. Integration of Web mining and web crawler: Relevance and State of Art

    OpenAIRE

    Subhendu kumar pani; Deepak Mohapatra,; Bikram Keshari Ratha

    2010-01-01

    This study presents the role of the web crawler in the web mining environment. As the growth of the World Wide Web has exceeded all expectations, research on Web mining is growing more and more. Web mining is a research topic that combines two active research areas, Data Mining and the World Wide Web, so the World Wide Web is a very fruitful area for data mining research. Search engines based on a web crawling framework are also used in web mining to find the interacted web pages. This paper discu...

  20. Web Project Management

    OpenAIRE

    Suralkar, Sunita; Joshi, Nilambari; Meshram, B B

    2013-01-01

    This paper describes the need for Web project management and the fundamentals of project management for web projects: what it is, why projects go wrong, and what's different about web projects. We also discuss cost estimation techniques based on size metrics. Though Web project development is similar to traditional software development, the special characteristics of Web application development require the adaptation of many software engineering approaches or even the development of comple...

  1. Developing as new search engine and browser for libraries to search and organize the World Wide Web library resources

    OpenAIRE

    Sreenivasulu, V.

    2000-01-01

    Internet Granthalaya urges worldwide advocates and aims at the task of creating a new search engine and dedicated browser. Internet Granthalaya may be the ultimate search engine exclusively dedicated for every library use, to search and organize the World Wide Web library resources

  2. Exploring the academic invisible web

    OpenAIRE

    Lewandowski, Dirk; Mayr, Philipp

    2006-01-01

    Purpose: To provide a critical review of Bergman’s 2001 study on the Deep Web. In addition, we bring a new concept into the discussion, the Academic Invisible Web (AIW). We define the Academic Invisible Web as consisting of all databases and collections relevant to academia but not searchable by the general-purpose internet search engines. Indexing this part of the Invisible Web is central to scientific search engines. We provide an overview of approaches followed thus far. Design/methodol...

  3. Electronic Grey Literature in Accelerator Science and Its Allied Subjects : Selected Web Resources for Scientists and Engineers

    CERN Document Server

    Rajendiran, P

    2006-01-01

    Grey literature Web resources in the field of accelerator science and its allied subjects are collected for the scientists and engineers of RRCAT (Raja Ramanna Centre for Advanced Technology). For definition purposes the different types of grey literature are described. The Web resources collected and compiled in this article (with an overview and link for each) specifically focus on technical reports, preprints or e-prints, which meet the main information needs of RRCAT users.

  4. Characterizing interdisciplinarity of researchers and research topics using web search engines.

    Science.gov (United States)

    Sayama, Hiroki; Akaishi, Jin

    2012-01-01

    Researchers' networks have been subject to active modeling and analysis. Earlier literature mostly focused on citation or co-authorship networks reconstructed from annotated scientific publication databases, which have several limitations. Recently, general-purpose web search engines have also been utilized to collect information about social networks. Here we reconstructed, using web search engines, a network representing the relatedness of researchers to their peers as well as to various research topics. Relatedness between researchers and research topics was characterized by visibility boost, the increase of a researcher's visibility gained by focusing on a particular topic. It was observed that researchers who had high visibility boosts by the same research topic tended to be close to each other in their network. We calculated correlations between visibility boosts by research topics and researchers' interdisciplinarity at the individual level (diversity of topics related to the researcher) and at the social level (his/her centrality in the researchers' network). We found that visibility boosts by certain research topics were positively correlated with researchers' individual-level interdisciplinarity despite their negative correlations with the general popularity of researchers. It was also found that visibility boosts by network-related topics had positive correlations with researchers' social-level interdisciplinarity. Research topics' correlations with researchers' individual- and social-level interdisciplinarities were found to be nearly independent from each other. These findings suggest that the notion of "interdisciplinarity" of a researcher should be understood as a multi-dimensional concept that should be evaluated using multiple assessment means.

  5. Collaborative Science Using Web Services and the SciFlo Grid Dataflow Engine

    Science.gov (United States)

    Wilson, B. D.; Manipon, G.; Xing, Z.; Yunck, T.

    2006-12-01

    The General Earth Science Investigation Suite (GENESIS) project is a NASA-sponsored partnership between the Jet Propulsion Laboratory, academia, and NASA data centers to develop a new suite of Web Services tools to facilitate multi-sensor investigations in Earth System Science. The goal of GENESIS is to enable large-scale, multi-instrument atmospheric science using combined datasets from the AIRS, MODIS, MISR, and GPS sensors. Investigations include cross-comparison of spaceborne climate sensors, cloud spectral analysis, study of upper troposphere-stratosphere water transport, study of the aerosol indirect cloud effect, and global climate model validation. The challenges are to bring together very large datasets, reformat and understand the individual instrument retrievals, co-register or re-grid the retrieved physical parameters, perform computationally-intensive data fusion and data mining operations, and accumulate complex statistics over months to years of data. To meet these challenges, we have developed a Grid computing and dataflow framework, named SciFlo, in which we are deploying a set of versatile and reusable operators for data access, subsetting, registration, mining, fusion, compression, and advanced statistical analysis. SciFlo leverages remote Web Services, called via Simple Object Access Protocol (SOAP) or REST (one-line) URLs, and the Grid Computing standards (WS-* and Globus Alliance toolkits), and enables scientists to do multi-instrument Earth Science by assembling reusable Web Services and native executables into a distributed computing flow (tree of operators). The SciFlo client and server engines optimize the execution of such distributed data flows and allow the user to transparently find and use datasets and operators without worrying about the actual location of the Grid resources. In particular, SciFlo exploits the wealth of datasets accessible by OpenGIS Consortium (OGC) Web Mapping Servers and Web Coverage Servers (WMS/WCS), and by Open Data
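
    The "tree of operators" idea can be illustrated with a toy evaluator: each node is an operator applied to the results of its children. The operators below (subset, regrid, fuse) are placeholders for SciFlo's real data-access and fusion services, not actual SciFlo calls.

```python
# A toy dataflow: nodes are (operator, children) pairs, evaluated bottom-up.
def evaluate(node):
    op, children = node
    return op(*[evaluate(c) for c in children])

leaf = lambda v: (lambda: v, [])                      # constant source node
subset = lambda data: [x for x in data if x > 0]      # stand-in data-access op
regrid = lambda data: [round(x, 1) for x in data]     # stand-in co-registration op
fuse = lambda a, b: list(zip(a, b))                   # stand-in fusion op

flow = (fuse, [(regrid, [(subset, [leaf([1.23, -4, 2.71])])]),
               (regrid, [(subset, [leaf([0.5, 3.14, -1])])])])
print(evaluate(flow))  # -> [(1.2, 0.5), (2.7, 3.1)]
```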

  6. Search Engine Optimization for Flash Best Practices for Using Flash on the Web

    CERN Document Server

    Perkins, Todd

    2009-01-01

    Search Engine Optimization for Flash dispels the myth that Flash-based websites won't show up in a web search by demonstrating exactly what you can do to make your site fully searchable -- no matter how much Flash it contains. You'll learn best practices for using HTML, CSS and JavaScript, as well as SWFObject, for building sites with Flash that will stand tall in search rankings.

  7. Applying Web Usage Mining for Personalizing Hyperlinks in Web-Based Adaptive Educational Systems

    Science.gov (United States)

    Romero, Cristobal; Ventura, Sebastian; Zafra, Amelia; de Bra, Paul

    2009-01-01

    Nowadays, the application of Web mining techniques in e-learning and Web-based adaptive educational systems is increasing exponentially. In this paper, we propose an advanced architecture for a personalization system to facilitate Web mining. A specific Web mining tool is developed and a recommender engine is integrated into the AHA! system in…

  8. Assessment and Comparison of Search capabilities of Web-based Meta-Search Engines: A Checklist Approach

    Directory of Open Access Journals (Sweden)

    Alireza Isfandiyari Moghadam

    2010-03-01

    The present investigation concerns the evaluation, comparison and analysis of the search options existing within web-based meta-search engines. 64 meta-search engines were identified; the 19 that were free, accessible and compatible with the objectives of the present study were selected. An author-constructed checklist was used for data collection. Findings indicated that all meta-search engines studied used the AND operator, phrase search, a setting for the number of results displayed, previous search query storage and help tutorials. Nevertheless, none of them offered any search options for hypertext searching or for displaying the size of the pages searched. 94.7% support features such as truncation, keyword-in-title and URL search, and text summary display. The checklist used in the study could serve as a model for investigating search options in search engines, digital libraries and other internet search tools.

  9. Reconsidering the Rhizome: A Textual Analysis of Web Search Engines as Gatekeepers of the Internet

    Science.gov (United States)

    Hess, A.

    Critical theorists have often drawn from Deleuze and Guattari's notion of the rhizome when discussing the potential of the Internet. While the Internet may structurally appear as a rhizome, its day-to-day usage by millions via search engines precludes experiencing the random interconnectedness and potential democratizing function. Through a textual analysis of four search engines, I argue that Web searching has grown hierarchies, or "trees," that organize data in tracts of knowledge and place users in marketing niches rather than assist in the development of new knowledge.

  10. Developing Creativity and Problem-Solving Skills of Engineering Students: A Comparison of Web- and Pen-and-Paper-Based Approaches

    Science.gov (United States)

    Valentine, Andrew; Belski, Iouri; Hamilton, Margaret

    2017-01-01

    Problem-solving is a key engineering skill, yet is an area in which engineering graduates underperform. This paper investigates the potential of using web-based tools to teach students problem-solving techniques without the need to make use of class time. An idea generation experiment involving 90 students was designed. Students were surveyed…

  11. A Taxonomic Search Engine: federating taxonomic databases using web services.

    Science.gov (United States)

    Page, Roderic D M

    2005-03-09

    The taxonomic name of an organism is a key link between different databases that store information on that organism. However, in the absence of a single, comprehensive database of organism names, individual databases lack an easy means of checking the correctness of a name. Furthermore, the same organism may have more than one name, and the same name may apply to more than one organism. The Taxonomic Search Engine (TSE) is a web application written in PHP that queries multiple taxonomic databases (ITIS, Index Fungorum, IPNI, NCBI, and uBIO) and summarises the results in a consistent format. It supports "drill-down" queries to retrieve a specific record. The TSE can optionally suggest alternative spellings the user can try. It also acts as a Life Science Identifier (LSID) authority for the source taxonomic databases, providing globally unique identifiers (and associated metadata) for each name. The Taxonomic Search Engine is available at http://darwin.zoology.gla.ac.uk/~rpage/portal/ and provides a simple demonstration of the potential of the federated approach to providing access to taxonomic names.
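
    The federated pattern TSE demonstrates, fanning a name query out to several sources and merging the answers into one format, can be sketched as follows. The in-memory "databases" and identifiers below stand in for real HTTP calls to ITIS, NCBI, etc.; none of the endpoints or response shapes here are TSE's actual API.

```python
import concurrent.futures

def query_source(source, name):
    """Stand-in for one taxonomic database lookup; replace with a real HTTP call."""
    fake_db = {"itis": {"Homo sapiens": "180092"}, "ncbi": {"Homo sapiens": "9606"}}
    ident = fake_db.get(source, {}).get(name)
    return {"source": source, "name": name, "id": ident} if ident else None

def federated_search(name, sources=("itis", "ncbi")):
    """Fan the query out to every source and merge the hits into one format."""
    with concurrent.futures.ThreadPoolExecutor() as pool:
        results = list(pool.map(lambda s: query_source(s, name), sources))
    return [r for r in results if r]

print(federated_search("Homo sapiens"))
```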

  12. Web services foundations

    CERN Document Server

    Bouguettaya, Athman; Daniel, Florian

    2013-01-01

    Web services and Service-Oriented Computing (SOC) have become thriving areas of academic research, joint university/industry research projects, and novel IT products on the market. SOC is the computing paradigm that uses Web services as building blocks for the engineering of composite, distributed applications out of the reusable application logic encapsulated by Web services. Web services could be considered the best-known and most standardized technology in use today for distributed computing over the Internet.Web Services Foundations is the first installment of a two-book collection coverin

  13. Critical Reading of the Web

    Science.gov (United States)

    Griffin, Teresa; Cohen, Deb

    2012-01-01

    The ubiquity and familiarity of the world wide web means that students regularly turn to it as a source of information. In doing so, they "are said to rely heavily on simple search engines, such as Google to find what they want." Researchers have also investigated how students use search engines, concluding that "the young web users tended to…

  14. Digging Deeper: The Deep Web.

    Science.gov (United States)

    Turner, Laura

    2001-01-01

    Focuses on the Deep Web, defined as Web content in searchable databases of the type that can be found only by direct query. Discusses the problems of indexing; inability to find information not indexed in the search engine's database; and metasearch engines. Describes 10 sites created to access online databases or directly search them. Lists ways…

  15. Introducing Model-Based System Engineering Transforming System Engineering through Model-Based Systems Engineering

    Science.gov (United States)

    2014-03-31

    [Only fragments of this record's abstract survive in the source: a table-of-contents entry ("Web Presentation Software"), a figure caption ("Figure 6. Published Web Page from Data Collection"), and a sentence fragment on the terms Model-Based Engineering (MBE), Model-Driven Engineering (MDE), and Model-Based Systems Engineering.]

  16. Myanmar Language Search Engine

    OpenAIRE

    Pann Yu Mon; Yoshiki Mikami

    2011-01-01

    With the enormous growth of the World Wide Web, search engines play a critical role in retrieving information from the borderless Web. Although many search engines are available for the major languages, they are not proficient for less-computerized languages, including Myanmar. The main reason is that those search engines do not consider the specific features of those languages. A search engine capable of searching Web documents written in those languages is highly n...

  17. Intelligent Agent Based Semantic Web in Cloud Computing Environment

    OpenAIRE

    Mukhopadhyay, Debajyoti; Sharma, Manoj; Joshi, Gajanan; Pagare, Trupti; Palwe, Adarsha

    2013-01-01

    Considering today's web scenario, there is a need for effective and meaningful search over the web, which is provided by the Semantic Web. Existing search engines are keyword-based and are weak at answering intelligent queries from the user, due to the dependence of their results on the information available in web pages, while semantic search engines provide efficient and relevant results, as the semantic web is an extension of the current web in which information is given well-defined meaning....

  18. MODEST: a web-based design tool for oligonucleotide-mediated genome engineering and recombineering

    DEFF Research Database (Denmark)

    Bonde, Mads; Klausen, Michael Schantz; Anderson, Mads Valdemar

    2014-01-01

    Recombineering and multiplex automated genome engineering (MAGE) offer the possibility to rapidly modify multiple genomic or plasmid sites at high efficiencies. This enables efficient creation of genetic variants, including both single mutants with specifically targeted modifications as well as combinatorial libraries; the design of the oligonucleotides, which confer the corresponding genetic change, is however performed manually. To address these challenges, we have developed the MAGE Oligo Design Tool (MODEST). This web-based tool allows the design of MAGE oligos for (i) tuning translation rates by modifying the ribosomal binding site, (ii) generating

  19. A Taxonomic Search Engine: Federating taxonomic databases using web services

    Directory of Open Access Journals (Sweden)

    Page Roderic DM

    2005-03-01

    Background: The taxonomic name of an organism is a key link between different databases that store information on that organism. However, in the absence of a single, comprehensive database of organism names, individual databases lack an easy means of checking the correctness of a name. Furthermore, the same organism may have more than one name, and the same name may apply to more than one organism. Results: The Taxonomic Search Engine (TSE) is a web application written in PHP that queries multiple taxonomic databases (ITIS, Index Fungorum, IPNI, NCBI, and uBIO) and summarises the results in a consistent format. It supports "drill-down" queries to retrieve a specific record. The TSE can optionally suggest alternative spellings the user can try. It also acts as a Life Science Identifier (LSID) authority for the source taxonomic databases, providing globally unique identifiers (and associated metadata) for each name. Conclusion: The Taxonomic Search Engine is available at http://darwin.zoology.gla.ac.uk/~rpage/portal/ and provides a simple demonstration of the potential of the federated approach to providing access to taxonomic names.

  20. Philosophical engineering toward a philosophy of the web

    CERN Document Server

    Halpin, Harry

    2013-01-01

    This is the first interdisciplinary exploration of the philosophical foundations of the Web, a new area of inquiry that has important implications across a range of domains. It contains twelve essays that bridge the fields of philosophy, cognitive science, and phenomenology; tackles questions such as the impact of Google on intelligence and epistemology, the philosophical status of digital objects, ethics on the Web, semantic and ontological changes caused by the Web, and the potential of the Web to serve as a genuine cognitive extension; and brings together insightful new scholarship from well-known an

  1. Promoting Your Web Site.

    Science.gov (United States)

    Raeder, Aggi

    1997-01-01

    Discussion of ways to promote sites on the World Wide Web focuses on how search engines work and how they retrieve and identify sites. Appropriate Web links for submitting new sites and for Internet marketing are included. (LRW)

  2. Advanced web services

    CERN Document Server

    Bouguettaya, Athman; Daniel, Florian

    2013-01-01

    Web services and Service-Oriented Computing (SOC) have become thriving areas of academic research, joint university/industry research projects, and novel IT products on the market. SOC is the computing paradigm that uses Web services as building blocks for the engineering of composite, distributed applications out of the reusable application logic encapsulated by Web services. Web services could be considered the best-known and most standardized technology in use today for distributed computing over the Internet. This book is the second installment of a two-book collection covering the state-o

  3. Semantic Web and Model-Driven Engineering

    CERN Document Server

    Parreiras, Fernando S

    2012-01-01

    The next enterprise computing era will rely on the synergy between both technologies: semantic web and model-driven software development (MDSD). The semantic web organizes system knowledge in conceptual domains according to its meaning. It addresses various enterprise computing needs by identifying, abstracting and rationalizing commonalities, and checking for inconsistencies across system specifications. On the other side, model-driven software development is closing the gap among business requirements, designs and executables by using domain-specific languages with custom-built syntax and se

  4. Development of Content Management System-based Web Applications

    OpenAIRE

    Souer, J.

    2012-01-01

    Web engineering is the application of systematic and quantifiable approaches (concepts, methods, techniques, tools) to cost-effective requirements analysis, design, implementation, testing, operation, and maintenance of high quality web applications. Over the past years, Content Management Systems (CMS) have emerged as an important foundation for the web engineering process. CMS can be defined as a tool for the creation, editing and management of web information in an integral way. A CMS appe...

  5. Incorporating the surfing behavior of web users into PageRank

    OpenAIRE

    Ashyralyyev, Shatlyk

    2013-01-01

    Ankara : The Department of Computer Engineering and the Graduate School of Engineering and Science of Bilkent University, 2013.
    Thesis (Master's) -- Bilkent University, 2013.
    Includes bibliographical references (leaves 68-73).
    One of the most crucial factors that determines the effectiveness of a large-scale commercial web search engine is the ranking (i.e., order) in which web search results are presented to the end user. In modern web search engines, the skeleton for the rank...

  6. Characteristics of scientific web publications

    DEFF Research Database (Denmark)

    Thorlund Jepsen, Erik; Seiden, Piet; Ingwersen, Peter Emil Rerup

    2004-01-01

    were generated based on specifically selected domain topics that are searched for in three publicly accessible search engines (Google, AllTheWeb, and AltaVista). A sample of the retrieved hits was analyzed with regard to how various publication attributes correlated with the scientific quality...... of the content and whether this information could be employed to harvest, filter, and rank Web publications. The attributes analyzed were inlinks, outlinks, bibliographic references, file format, language, search engine overlap, structural position (according to site structure), and the occurrence of various...... types of metadata. As could be expected, the ranked output differs between the three search engines. Apparently, this is caused by differences in ranking algorithms rather than the databases themselves. In fact, because scientific Web content in this subject domain receives few inlinks, both Alta...

  7. The poor quality of information about laparoscopy on the World Wide Web as indexed by popular search engines.

    Science.gov (United States)

    Allen, J W; Finch, R J; Coleman, M G; Nathanson, L K; O'Rourke, N A; Fielding, G A

    2002-01-01

    This study was undertaken to determine the quality of information on the Internet regarding laparoscopy. Four popular World Wide Web search engines were used with the key word "laparoscopy." Advertisements, patient- or physician-directed information, and controversial material were noted. A total of 14,030 Web pages were found, but only 104 were unique Web sites. The majority of the sites were duplicate pages, subpages within a main Web page, or dead links. Twenty-eight of the 104 pages had a medical product for sale, 26 were patient-directed, 23 were written by a physician or group of physicians, and six represented corporations. The remaining 21 were "miscellaneous." The 46 pages containing educational material were critically reviewed. At least one of the senior authors found that 32 of the pages contained controversial or misleading statements. All of the three senior authors (LKN, NAO, GAF) independently agreed that 17 of the 46 pages contained controversial information. The World Wide Web is not a reliable source for patient or physician information about laparoscopy. Authenticating medical information on the World Wide Web is a difficult task, and no government or surgical society has taken the lead in regulating what is presented as fact on the World Wide Web.

  8. Extracting Macroscopic Information from Web Links.

    Science.gov (United States)

    Thelwall, Mike

    2001-01-01

    Discussion of Web-based link analysis focuses on an evaluation of Ingwersen's proposed external Web Impact Factor for the original use of the Web, namely the interlinking of academic research. Studies relationships between academic hyperlinks and research activities for British universities and discusses the use of search engines for Web link…

  9. Using Google App Engine

    CERN Document Server

    Severance, Charles

    2009-01-01

    Build exciting, scalable web applications quickly and confidently using Google App Engine and this book, even if you have little or no experience in programming or web development. App Engine is perhaps the most appealing web technology to appear in the last year, providing an easy-to-use application framework with basic web tools. While Google's own tutorial assumes significant experience, Using Google App Engine will help anyone get started with this platform. By the end of this book, you'll know how to build complete, interactive applications and deploy them to the cloud using the same s

  10. Survey of Techniques for Deep Web Source Selection and Surfacing the Hidden Web Content

    OpenAIRE

    Khushboo Khurana; M.B. Chandak

    2016-01-01

    Large and continuously growing dynamic web content has created new opportunities for large-scale data analysis in recent years. There is a huge amount of information that traditional web crawlers cannot access, since they use a link-analysis technique by which only the surface web can be reached. Traditional search engine crawlers require the web pages to be linked to other pages via hyperlinks, causing a large amount of web data to be hidden from the crawlers. Enormous data is available in...

  11. Advanced Techniques in Web Intelligence-2 Web User Browsing Behaviour and Preference Analysis

    CERN Document Server

    Palade, Vasile; Jain, Lakhmi

    2013-01-01

    This research volume focuses on analyzing the web user browsing behaviour and preferences in traditional web-based environments, social networks and web 2.0 applications, by using advanced techniques in data acquisition, data processing, pattern extraction and cognitive science for modeling the human actions. The book is directed to graduate students, researchers/scientists and engineers interested in updating their knowledge with the recent trends in web user analysis, for developing the next generation of web-based systems and applications.

  12. WEB APPLICATION TO MANAGE DOCUMENTS USING THE GOOGLE WEB TOOLKIT AND APP ENGINE TECHNOLOGIES

    Directory of Open Access Journals (Sweden)

    Velázquez Santana Eugenio César

    2017-12-01

    Full Text Available The application of new information technologies such as Google Web Toolkit and App Engine is making a difference in the academic management of Higher Education Institutions (HEIs), which seek to streamline their processes as well as reduce infrastructure costs. However, HEIs encounter problems with regard to acquisition costs, the infrastructure necessary for their use, and the maintenance of the software; it is for this reason that the present research aims to describe the application of these new technologies in HEIs, as well as to identify their advantages and disadvantages and the key success factors in their implementation. As a software development methodology, SCRUM was used, as well as PMBOK as a project management tool. The main results were related to the application of these technologies in the development of customized software for teachers, students and administrators, as well as the weaknesses and strengths of using them in the cloud. On the other hand, it was also possible to describe the paradigm shift that data warehouses are generating with respect to today's relational databases.

  13. A Novel Personalized Web Search Model

    Institute of Scientific and Technical Information of China (English)

    ZHU Zhengyu; XU Jingqiu; TIAN Yunyan; REN Xiang

    2007-01-01

    A novel personalized Web search model is proposed. The new system, as a middleware between a user and a Web search engine, is set up on the client machine. It can learn a user's preference implicitly and then generate the user profile automatically. When the user inputs query keywords, the system can automatically generate a few personalized expansion words by computing the term-term associations according to the current user profile, and then these words together with the query keywords are submitted to a popular search engine such as Yahoo or Google. These expansion words help to express accurately the user's search intention. The new Web search model can make a common search engine personalized, that is, the search engine can return different search results to different users who input the same keywords. The experimental results show the feasibility and applicability of the presented work.
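
    The expansion step lends itself to a small sketch. This is an interpretation of the idea, not the authors' exact algorithm: build term-term co-occurrence counts from documents in the user profile, then append the terms most strongly associated with the query.

```python
# Illustrative profile-based query expansion via term co-occurrence counts.
from collections import Counter
from itertools import combinations

def build_associations(profile_docs: list[list[str]]) -> Counter:
    # Count how often two terms co-occur in documents the user has read.
    assoc = Counter()
    for doc in profile_docs:
        for a, b in combinations(set(doc), 2):
            assoc[frozenset((a, b))] += 1
    return assoc

def expand_query(query_terms: list[str], assoc: Counter,
                 vocabulary: set[str], k: int = 3) -> list[str]:
    # Score each candidate term by its total association with the query.
    scores = {t: sum(assoc[frozenset((t, q))] for q in query_terms)
              for t in vocabulary if t not in query_terms}
    expansions = sorted(scores, key=scores.get, reverse=True)[:k]
    return query_terms + [t for t in expansions if scores[t] > 0]

profile = [["python", "web", "crawler"], ["python", "search", "ranking"]]
assoc = build_associations(profile)
vocab = {t for doc in profile for t in doc}
print(expand_query(["python"], assoc, vocab))
```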

  14. RESTful web services with Dropwizard

    CERN Document Server

    Dallas, Alexandros

    2014-01-01

    A hands-on focused step-by-step tutorial to help you create Web Service applications using Dropwizard. If you are a software engineer or a web developer and want to learn more about building your own Web Service application, then this is the book for you. Basic knowledge of Java and RESTful Web Service concepts is assumed and familiarity with SQL/MySQL and command-line scripting would be helpful.

  15. Changes in users' Web search performance after ten years ...

    African Journals Online (AJOL)

    The changes in users' Web search performance using search engines over ten years was investigated in this study. Matched data obtained from samples in 2000 and 2010 were used for the comparative analysis. The patterns of Web search engine use suggested a dominance in using a particular search engine. Statistical ...

  16. The Semantic Web: opportunities and challenges for next-generation Web applications

    Directory of Open Access Journals (Sweden)

    2002-01-01

    Full Text Available Recently there has been a growing interest in the investigation and development of the next generation web - the Semantic Web. While most current forms of web content are designed to be presented to humans and are barely understandable by computers, the content of the Semantic Web is structured in a semantic way so that it is meaningful to computers as well as to humans. In this paper, we report a survey of recent research on the Semantic Web. In particular, we present the opportunities that this revolution will bring to us: web services, agent-based distributed computing, semantics-based web search engines, and semantics-based digital libraries. We also discuss the technical and cultural challenges of realizing the Semantic Web: the development of ontologies, formal semantics of Semantic Web languages, and trust and proof models. We hope that this will shed some light on the direction of future work in this field.

  17. Publicizing Your Web Resources for Maximum Exposure.

    Science.gov (United States)

    Smith, Kerry J.

    2001-01-01

    Offers advice to librarians for marketing their Web sites on Internet search engines. Advises against relying solely on spiders and recommends adding metadata to the source code and delivering that information directly to the search engines. Gives an overview of metadata and typical coding for meta tags. Includes Web addresses for a number of…

  18. A grammar checker based on web searching

    Directory of Open Access Journals (Sweden)

    Joaquim Moré

    2006-05-01

    Full Text Available This paper presents an English grammar and style checker for non-native English speakers. The main characteristic of this checker is the use of an Internet search engine. As the number of web pages written in English is immense, the system hypothesises that a piece of text not found on the Web is probably badly written. The system also hypothesises that the Web will provide examples of how the content of the text segment can be expressed in a grammatically correct and idiomatic way. Thus, when the checker warns the user about the odd nature of a text segment, the search engine looks for contexts that can help the user decide whether he/she should correct the segment or not. By means of a search engine, the checker also suggests the use of other expressions that appear on the Web more often than the expression the user actually wrote.
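
    The checker's core hypothesis can be captured in a compact sketch. The `hit_count` function below is a hypothetical stand-in for a search engine's page-count API (the paper does not expose its implementation), and the fixed n-gram windowing is an illustrative simplification.

```python
# Toy version of the hypothesis: a segment that (almost) never occurs on
# the Web is suspect. `hit_count` is a stub for a real hit-count API.
def hit_count(phrase: str) -> int:
    raise NotImplementedError("plug in a search-engine hit-count API here")

def check_segment(segment: str, threshold: int = 10) -> bool:
    """Return True if the segment looks idiomatic (common on the Web)."""
    return hit_count(f'"{segment}"') >= threshold  # exact-phrase query

def check_text(text: str, window: int = 4) -> list[str]:
    # Slide a short n-gram window over the text and collect odd segments
    # so the user can inspect each one in context.
    words = text.split()
    segments = [" ".join(words[i:i + window])
                for i in range(len(words) - window + 1)]
    return [s for s in segments if not check_segment(s)]
```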

  19. A Web portal for the Engineering and Equipment Data Management System at CERN

    CERN Document Server

    Tsyganov, A; Martel, P; Milenkovic, S; Suwalska, A; Delamare, Christophe; Widegren, David; Mallon Amerigo, S; Pettersson, Thomas Sven

    2010-01-01

    CERN, the European Laboratory for Particle Physics, located in Geneva – Switzerland, has recently started the Large Hadron Collider (LHC), a 27 km particle accelerator. The CERN Engineering and Equipment Data Management Service (EDMS) provides support for managing engineering and equipment information throughout the entire lifecycle of a project. Based on several both in-house developed and commercial data management systems, this service supports management and follow-up of different kinds of information throughout the lifecycle of the LHC project: design, manufacturing, installation, commissioning data, maintenance and more. The data collection phase, carried out by specialists, is now being replaced by a phase during which data will be consulted on an extensive basis by non-expert users. In order to address this change, a Web portal for the EDMS has been developed. It brings together in one space all the aspects covered by the EDMS: project and document management, asset tracking and safety follow-up. T...

  20. A design method for an intuitive web site

    Energy Technology Data Exchange (ETDEWEB)

    Quinniey, M.L.; Diegert, K.V.; Baca, B.G.; Forsythe, J.C.; Grose, E.

    1999-11-03

    The paper describes a methodology for designing a web site for human factors engineers that is applicable to designing a web site for any group of people. Many web pages on the World Wide Web are not organized in a format that allows a user to efficiently find information. Often the information and hypertext links on web pages are not organized into intuitive groups. Intuition implies that a person is able to use their knowledge of a paradigm to solve a problem. Intuitive groups are categories that allow web page users to find information by using their intuition or mental models of categories. In order to improve the human factors engineers' efficiency in finding information on the World Wide Web, research was performed to develop a web site that serves as a tool for finding information effectively. The paper describes a methodology for designing a web site for a group of people who perform similar tasks in an organization.

  1. IMPROVING PERSONALIZED WEB SEARCH USING BOOKSHELF DATA STRUCTURE

    Directory of Open Access Journals (Sweden)

    S.K. Jayanthi

    2012-10-01

    Full Text Available Search engines are playing a vital role in retrieving relevant information for the web user. In this research work a user-profile-based web search is proposed, so web users from different domains may receive different sets of results. The main challenge is to provide relevant results at the right level of reading difficulty. Estimating user expertise and re-ranking the results are the main aspects of this paper. The retrieved results are arranged in a Bookshelf Data Structure for easy access. Better presentation of search results hence increases the usability of web search engines significantly in visual mode.

  2. Engineering semantic-based interactive multi-device web applications

    NARCIS (Netherlands)

    Bellekens, P.A.E.; Sluijs, van der K.A.M.; Aroyo, L.M.; Houben, G.J.P.M.; Baresi, L.; Fraternali, P.; Houben, G.J.

    2007-01-01

    To build high-quality personalized Web applications developers have to deal with a number of complex problems. We look at the growing class of personalized Web Applications that share three characteristic challenges. Firstly, the semantic problem of how to enable content reuse and integration.

  3. Semantic similarity measures in the biomedical domain by leveraging a web search engine.

    Science.gov (United States)

    Hsieh, Sheau-Ling; Chang, Wen-Yung; Chen, Chi-Huang; Weng, Yung-Ching

    2013-07-01

    Various studies of web-related semantic similarity measures have been carried out. However, measuring semantic similarity between two terms remains a challenging task. Traditional ontology-based methodologies have the limitation that both concepts must reside in the same ontology tree(s). Unfortunately, in practice, this assumption is not always applicable. On the other hand, if the corpus is sufficiently adequate, corpus-based methodologies can overcome the limitation, and the web is a continuously and enormously growing corpus. Therefore, a method of estimating semantic similarity is proposed that exploits the page counts of two biomedical concepts returned by the Google AJAX web search engine. The features are extracted as the co-occurrence patterns of two given terms P and Q, by querying P, Q, as well as P AND Q, and the web search hit counts of the defined lexico-syntactic patterns. These similarity scores of different patterns are evaluated, by adapting support vector machines for classification, to leverage the robustness of semantic similarity measures. Experimental results validated against two datasets (dataset 1 provided by A. Hliaoutakis; dataset 2 provided by T. Pedersen) are presented and discussed. In dataset 1, the proposed approach achieves the best correlation coefficient (0.802) under SNOMED-CT. In dataset 2, the proposed method obtains the best correlation coefficient (SNOMED-CT: 0.705; MeSH: 0.723) with physician scores compared with measures of other methods. However, the correlation coefficients (SNOMED-CT: 0.496; MeSH: 0.539) with coder scores showed the opposite outcome. In conclusion, the semantic similarity findings of the proposed method are close to those of physicians' ratings. Furthermore, the study provides a cornerstone investigation for extracting fully relevant information from digitized, free-text medical records in the National Taiwan University Hospital database.
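
    As a rough illustration of similarity scores computed from page counts, here is a sketch of two standard co-occurrence measures (a web-based Jaccard score and a PMI variant). `hits()` is a hypothetical page-count function and `N` an assumed index size; the paper's actual features (lexico-syntactic patterns combined by an SVM) are richer than this.

```python
# Co-occurrence similarity from search-engine page counts (illustrative).
import math

N = 1e10  # assumed number of indexed pages

def hits(query: str) -> float:
    raise NotImplementedError("return the engine's page count for `query`")

def web_jaccard(p: str, q: str) -> float:
    # Pages mentioning both terms, normalised by pages mentioning either.
    both = hits(f"{p} AND {q}")
    return both / (hits(p) + hits(q) - both) if both else 0.0

def web_pmi(p: str, q: str) -> float:
    # Pointwise mutual information estimated from page-count frequencies.
    both = hits(f"{p} AND {q}")
    if not both:
        return 0.0
    return math.log2((both / N) / ((hits(p) / N) * (hits(q) / N)))
```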

  4. A web services choreography scenario for interoperating bioinformatics applications

    Directory of Open Access Journals (Sweden)

    Cheung David W

    2004-03-01

    Full Text Available Abstract Background Very often genome-wide data analysis requires the interoperation of multiple databases and analytic tools. A large number of genome databases and bioinformatics applications are available through the web, but it is difficult to automate interoperation because: (1) the platforms on which the applications run are heterogeneous, (2) their web interface is not machine-friendly, (3) they use a non-standard format for data input and output, (4) they do not exploit standards to define application interface and message exchange, and (5) existing protocols for remote messaging are often not firewall-friendly. To overcome these issues, web services have emerged as a standard XML-based model for message exchange between heterogeneous applications. Web services engines have been developed to manage the configuration and execution of a web services workflow. Results To demonstrate the benefit of using web services over traditional web interfaces, we compare the two implementations of HAPI, a gene expression analysis utility developed by the University of California San Diego (UCSD) that allows visual characterization of groups or clusters of genes based on the biomedical literature. This utility takes a set of microarray spot IDs as input and outputs a hierarchy of MeSH Keywords that correlates to the input and is grouped by Medical Subject Heading (MeSH) category. While the HTML output is easy for humans to visualize, it is difficult for computer applications to interpret semantically. To facilitate the capability of machine processing, we have created a workflow of three web services that replicates the HAPI functionality. These web services use document-style messages, which means that messages are encoded in an XML-based format. We compared three approaches to the implementation of an XML-based workflow: a hard-coded Java application, Collaxa BPEL Server and Taverna Workbench. The Java program functions as a web services engine and interoperates

  5. Faculty Recommendations for Web Tools: Implications for Course Management Systems

    Science.gov (United States)

    Oliver, Kevin; Moore, John

    2008-01-01

    A gap analysis of web tools in Engineering was undertaken as one part of the Digital Library Network for Engineering and Technology (DLNET) grant funded by NSF (DUE-0085849). DLNET represents a Web portal and an online review process to archive quality knowledge objects in Engineering and Technology disciplines. The gap analysis coincided with the…

  6. Remote Experiments in Control Engineering Education Laboratory

    Directory of Open Access Journals (Sweden)

    Milica B Naumović

    2008-05-01

    Full Text Available This paper presents the Automatic Control Engineering Laboratory (ACEL - WebLab), an internet-based remote laboratory under development for control engineering education at the Faculty of Electronic Engineering in Niš. Up to now, the remote laboratory integrates two physical systems (a velocity servo system and a magnetic levitation system) and enables some levels of measurement and control. To perform experiments in ACEL-WebLab, the "LabVIEW Run Time Engine" and a standard web browser are needed.

  7. Exposing the Hidden-Web Induced by Ajax

    NARCIS (Netherlands)

    Mesbah, A.; Van Deursen, A.

    2008-01-01

    AJAX is a very promising approach for improving rich interactivity and responsiveness of web applications. At the same time, AJAX techniques increase the totality of the hidden web by shattering the metaphor of a web ‘page’ upon which general search engines are based. This paper describes a

  8. Deep Web and Dark Web: Deep World of the Internet

    OpenAIRE

    Çelik, Emine

    2018-01-01

    The Internet is undoubtedly still a revolutionary breakthrough in the history of humanity. Many people use the internet for communication, social media, shopping, political and social agendas, and more. The concepts of the Deep Web and Dark Web are handled not only by computer and software engineers but also by social scientists, because of the role of the internet for states in international arenas, public institutions and human life. Starting from the point that the internet plays a very important role for social s...

  9. Applying Semantic Web technologies to improve the retrieval, credibility and use of health-related web resources.

    Science.gov (United States)

    Mayer, Miguel A; Karampiperis, Pythagoras; Kukurikos, Antonis; Karkaletsis, Vangelis; Stamatakis, Kostas; Villarroel, Dagmar; Leis, Angela

    2011-06-01

    The number of health-related websites is increasing day-by-day; however, their quality is variable and difficult to assess. Various "trust marks" and filtering portals have been created in order to assist consumers in retrieving quality medical information. Consumers are using search engines as the main tool to get health information; however, the major problem is that the meaning of the web content is not machine-readable in the sense that computers cannot understand words and sentences as humans can. In addition, trust marks are invisible to search engines, thus limiting their usefulness in practice. During the last five years there have been different attempts to use Semantic Web tools to label health-related web resources to help internet users identify trustworthy resources. This paper discusses how Semantic Web technologies can be applied in practice to generate machine-readable labels and display their content, as well as to empower end-users by providing them with the infrastructure for expressing and sharing their opinions on the quality of health-related web resources.

  10. Start Your Engines: Surfing with Search Engines for Kids.

    Science.gov (United States)

    Byerly, Greg; Brodie, Carolyn S.

    1999-01-01

    Suggests that to be an effective educator and user of the Web it is essential to know the basics about search engines. Presents tips for using search engines. Describes several search engines for children and young adults, as well as some general filtered search engines for children. (AEF)

  11. BaBar - A Community Web Site in an Organizational Setting

    Energy Technology Data Exchange (ETDEWEB)

    White, Bebo

    2003-07-10

    The BABAR Web site was established in 1993 at the Stanford Linear Accelerator Center (SLAC) to support the BABAR experiment, to report its results, and to facilitate communication among its scientific and engineering collaborators, currently numbering about 600 individuals from 75 collaborating institutions in 10 countries. The BABAR Web site is, therefore, a community Web site. At the same time it is hosted at SLAC and funded by agencies that demand adherence to policies decided under different priorities. Additionally, the BABAR Web administrators deal with the problems that arise during the course of managing users, content, policies, standards, and changing technologies. Desired solutions to some of these problems may be incompatible with the overall administration of the SLAC Web sites and/or the SLAC policies and concerns. There are thus different perspectives of the same Web site and differing expectations in segments of the SLAC population which act as constraints and challenges in any review or re-engineering activities. Web Engineering, which post-dates the BABAR Web, has aimed to provide a comprehensive understanding of all aspects of Web development. This paper reports on the first part of a recent review of the application of Web Engineering methods to the BABAR Web site, which has led to explicit user and information models of the BABAR community and how SLAC and the BABAR community relate and react to each other. The paper identifies the issues of a community Web site in a hierarchical, semi-governmental sector and formulates a strategy for periodic reviews of BABAR and similar sites. A separate paper reports on the findings of a user survey and selected interviews with users, along with their implications and recommendations for the future.

  12. BaBar - A Community Web Site in an Organizational Setting

    International Nuclear Information System (INIS)

    White, Bebo

    2003-01-01

    The BABAR Web site was established in 1993 at the Stanford Linear Accelerator Center (SLAC) to support the BABAR experiment, to report its results, and to facilitate communication among its scientific and engineering collaborators, currently numbering about 600 individuals from 75 collaborating institutions in 10 countries. The BABAR Web site is, therefore, a community Web site. At the same time it is hosted at SLAC and funded by agencies that demand adherence to policies decided under different priorities. Additionally, the BABAR Web administrators deal with the problems that arise during the course of managing users, content, policies, standards, and changing technologies. Desired solutions to some of these problems may be incompatible with the overall administration of the SLAC Web sites and/or the SLAC policies and concerns. There are thus different perspectives of the same Web site and differing expectations in segments of the SLAC population which act as constraints and challenges in any review or re-engineering activities. Web Engineering, which post-dates the BABAR Web, has aimed to provide a comprehensive understanding of all aspects of Web development. This paper reports on the first part of a recent review of the application of Web Engineering methods to the BABAR Web site, which has led to explicit user and information models of the BABAR community and how SLAC and the BABAR community relate and react to each other. The paper identifies the issues of a community Web site in a hierarchical, semi-governmental sector and formulates a strategy for periodic reviews of BABAR and similar sites. A separate paper reports on the findings of a user survey and selected interviews with users, along with their implications and recommendations for the future.

  13. Study of Search Engine Transaction Logs Shows Little Change in How Users use Search Engines. A review of: Jansen, Bernard J., and Amanda Spink. "How Are We Searching the World Wide Web? A Comparison of Nine Search Engine Transaction Logs." Information Processing & Management 42.1 (2006): 248-263.

    Directory of Open Access Journals (Sweden)

    David Hook

    2006-09-01

    Full Text Available Objective – To examine the interactions between users and search engines, and how they have changed over time. Design – Comparative analysis of search engine transaction logs. Setting – Nine major analyses of search engine transaction logs. Subjects – Nine web search engine studies (4 European, 5 American) over a seven-year period, covering the search engines Excite, Fireball, AltaVista, BWIE and AllTheWeb. Methods – The results from individual studies are compared by year of study for percentages of single query sessions, one-term queries, operator (and, or, not, etc.) usage and single result page viewing. As well, the authors group the search queries into eleven different topical categories and compare how the breakdown has changed over time. Main Results – Based on the percentage of single query sessions, it does not appear that the complexity of interactions has changed significantly for either the U.S.-based or the European-based search engines. As well, there was little change observed in the percentage of one-term queries over the years of study for either the U.S.-based or the European-based search engines. Few users (generally less than 20%) use Boolean or other operators in their queries, and these percentages have remained relatively stable. One area of noticeable change is in the percentage of users viewing only one results page, which has increased over the years of study. Based on the studies of the U.S.-based search engines, the topical categories of ‘People, Place or Things’ and ‘Commerce, Travel, Employment or Economy’ are becoming more popular, while the categories of ‘Sex and Pornography’ and ‘Entertainment or Recreation’ are declining. Conclusions – The percentage of users viewing only one results page increased during the years of the study, while the percentages of single query sessions, one-term sessions and operator usage remained stable. The increase in single result page viewing
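
    The measures compared in the review are simple ratios over a query log. As a sketch (ours, not from the reviewed studies), here is how they can be computed from a log of (session_id, query) rows:

```python
# Compute single-query-session, one-term-query and operator-usage rates.
from collections import defaultdict

OPERATORS = {"AND", "OR", "NOT", "+", "-"}

def log_statistics(rows: list[tuple[str, str]]) -> dict:
    sessions = defaultdict(list)
    for session_id, query in rows:
        sessions[session_id].append(query)
    queries = [q for qs in sessions.values() for q in qs]
    return {
        "single_query_sessions":
            sum(len(qs) == 1 for qs in sessions.values()) / len(sessions),
        "one_term_queries":
            sum(len(q.split()) == 1 for q in queries) / len(queries),
        "operator_usage":
            sum(any(tok in OPERATORS for tok in q.split())
                for q in queries) / len(queries),
    }

rows = [("s1", "wilga"), ("s2", "photonics AND web"), ("s2", "spie wilga")]
print(log_statistics(rows))
```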

  14. Experience of Developing a Meta-Semantic Search Engine

    OpenAIRE

    Mukhopadhyay, Debajyoti; Sharma, Manoj; Joshi, Gajanan; Pagare, Trupti; Palwe, Adarsha

    2013-01-01

    Thinking of today's web search scenario, which is mainly keyword based, leads to the need for the effective and meaningful search provided by the Semantic Web. Existing search engines struggle to provide relevant answers to users' queries due to their dependency on the simple data available in web pages. On the other hand, semantic search engines provide efficient and relevant results, as the semantic web manages information with well-defined meaning using ontology. A Meta-Search engine is a search tool that ...

  15. FindZebra: A search engine for rare diseases

    DEFF Research Database (Denmark)

    Dragusin, Radu; Petcu, Paula; Lioma, Christina Amalia

    2013-01-01

    Background: The web has become a primary information resource about illnesses and treatments for both medical and non-medical users. Standard web search is by far the most common interface for such information. It is therefore of interest to find out how well web search engines work for diagnostic...... approach for web search engines for rare disease diagnosis which includes 56 real life diagnostic cases, state-of-the-art evaluation measures, and curated information resources. In addition, we introduce FindZebra, a specialized (vertical) rare disease search engine. FindZebra is powered by open source...... medical concepts to demonstrate different ways of displaying the retrieved results to medical experts. Conclusions: Our results indicate that a specialized search engine can improve the diagnostic quality without compromising the ease of use of the currently widely popular web search engines. The proposed...

  16. A reverse engineering approach for automatic annotation of Web pages

    NARCIS (Netherlands)

    R. de Virgilio (Roberto); F. Frasincar (Flavius); W. Hop (Walter); S. Lachner (Stephan)

    2013-01-01

    textabstractThe Semantic Web is gaining increasing interest to fulfill the need of sharing, retrieving, and reusing information. Since Web pages are designed to be read by people, not machines, searching and reusing information on the Web is a difficult task without human participation. To this aim

  17. SPADOCK: Adaptive Pipeline Technology for Web System using WebSocket

    Directory of Open Access Journals (Sweden)

    Aries RICHI

    2013-01-01

    Full Text Available As information technology moves into the era of IoT (Internet of Things) and cloud computing, the performance of web applications and web services, which act as the information gateway, becomes an issue. Horizontal quality-of-service improvement through system performance escalation has become a goal pursued by engineers and scientists, giving birth to the BigPipe pipeline technology developed by Facebook. We built SPADOCK, an adaptive pipeline system with a distributed system architecture that utilizes the HTML5 WebSocket, and then measured its performance. Parameters used for the measurement include latency, workload, and bandwidth. The results show that SPADOCK could reduce serving latency by 68.28% compared with the conventional web, and that it is 20.63% faster than BigPipe.

  18. How much data resides in a web collection: how to estimate size of a web collection

    NARCIS (Netherlands)

    Khelghati, Mohammadreza; Hiemstra, Djoerd; van Keulen, Maurice

    2013-01-01

    With the increasing amount of data in deep web sources (hidden from general search engines behind web forms), accessing this data has gained more attention. In the algorithms applied for this purpose, it is the knowledge of a data source's size that enables the algorithms to make accurate decisions in
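
    The abstract is cut off before it names a method, so the following is only a generic illustration of one classic estimator for this problem: Lincoln-Petersen capture-recapture, applied to two independent query-based samples of document identifiers from the same source.

```python
# Capture-recapture size estimate: N ~= |s1| * |s2| / |s1 & s2|.
def lincoln_petersen(sample1: set, sample2: set) -> float:
    overlap = len(sample1 & sample2)
    if overlap == 0:
        raise ValueError("no overlap between samples; estimate undefined")
    return len(sample1) * len(sample2) / overlap

# Two random samples of document IDs drawn through the query interface:
s1 = {1, 2, 3, 4, 5, 6, 7, 8}
s2 = {5, 6, 7, 8, 9, 10, 11, 12}
print(lincoln_petersen(s1, s2))  # 8 * 8 / 4 = 16.0
```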

  19. Feature-based engineering of compensations in web service environment

    DEFF Research Database (Denmark)

    Schaefer, Michael; Dolog, Peter

    2009-01-01

    In this paper, we introduce a product line approach for developing Web services with extended compensation capabilities. We adopt a feature modelling approach in order to describe variable and common compensation properties of Web service variants, as well as service consumer application...

  20. Exploring the academic invisible web

    OpenAIRE

    Lewandowski, Dirk

    2006-01-01

    The Invisible Web is often discussed in the academic context, where its contents (mainly in the form of databases) are of great importance. But this discussion is mainly based on some seminal research done by Sherman and Price (2001) and Bergman (2001), respectively. We focus on the types of Invisible Web content relevant for academics and the improvements made by search engines to deal with these content types. In addition, we question the volume of the Invisible Web as stated by Bergman. Ou...

  1. Finding Specification Pages from the Web

    Science.gov (United States)

    Yoshinaga, Naoki; Torisawa, Kentaro

    This paper presents a method of finding a specification page on the Web for a given object (e.g., ``Ch. d'Yquem'') and its class label (e.g., ``wine''). A specification page for an object is a Web page which gives concise attribute-value information about the object (e.g., ``county''-``Sauternes'') in well formatted structures. A simple unsupervised method using layout and symbolic decoration cues was applied to a large number of Web pages to acquire candidate attributes for each class (e.g., ``county'' for the class ``wine''). We then filter out irrelevant words from the putative attributes through an author-aware scoring function that we call site frequency. We used the acquired attributes to select a representative specification page for a given object from the Web pages retrieved by a normal search engine. Experimental results revealed that our system greatly outperformed the normal search engine in terms of this specification retrieval.
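
    The site-frequency filter can be sketched as follows, under our illustrative assumption that each candidate page has already been reduced to a URL plus a set of candidate attribute strings: score an attribute by the number of distinct sites (authors) that use it, so terms repeated by a single site do not dominate.

```python
# Site frequency: count distinct sites per candidate attribute.
from collections import defaultdict
from urllib.parse import urlparse

def site_frequency(pages: list[dict]) -> dict[str, int]:
    # pages: [{"url": ..., "attributes": {"county", "vintage", ...}}, ...]
    sites_per_attr = defaultdict(set)
    for page in pages:
        site = urlparse(page["url"]).netloc
        for attr in page["attributes"]:
            sites_per_attr[attr].add(site)
    return {attr: len(sites) for attr, sites in sites_per_attr.items()}

def filter_attributes(pages: list[dict], min_sites: int = 3) -> set[str]:
    # Keep only attributes that many independent sites agree on.
    return {a for a, f in site_frequency(pages).items() if f >= min_sites}
```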

  2. Surfing the World Wide Web to Education Hot-Spots.

    Science.gov (United States)

    Dyrli, Odvard Egil

    1995-01-01

    Provides a brief explanation of Web browsers and their use, as well as technical information for those considering access to the WWW (World Wide Web). Curriculum resources and addresses to useful Web sites are included. Sidebars show sample searches using Yahoo and Lycos search engines, and a list of recommended Web resources. (JKP)

  3. WebPIE : A web-scale parallel inference engine using MapReduce

    NARCIS (Netherlands)

    Urbani, Jacopo; Kotoulas, Spyros; Maassen, Jason; Van Harmelen, Frank; Bal, Henri

    2012-01-01

    The large amount of Semantic Web data and its fast growth pose a significant computational challenge in performing efficient and scalable reasoning. On a large scale, the resources of single machines are no longer sufficient and we are required to distribute the process to improve performance. The
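
    The abstract is truncated before it describes the encoding. As a toy illustration of the general map/reduce style that WebPIE applies at scale on Hadoop (this sketch is an assumption, not WebPIE's actual implementation), here is one round of the RDFS subClassOf transitivity rule over an in-memory triple set:

```python
# One map/reduce round of: A subClassOf B, B subClassOf C => A subClassOf C.
from collections import defaultdict

triples = {("A", "subClassOf", "B"), ("B", "subClassOf", "C")}

def map_phase(triples):
    # Key each subClassOf triple by both subject and object so that
    # joinable triples meet in the same reduce group.
    for s, p, o in triples:
        if p == "subClassOf":
            yield o, ("left", s)   # ... subClassOf o
            yield s, ("right", o)  # s subClassOf ...

def reduce_phase(groups):
    for _, values in groups.items():
        lefts = [v for tag, v in values if tag == "left"]
        rights = [v for tag, v in values if tag == "right"]
        for a in lefts:
            for c in rights:
                yield (a, "subClassOf", c)

groups = defaultdict(list)
for k, v in map_phase(triples):
    groups[k].append(v)

derived = set(reduce_phase(groups)) - triples
print(derived)  # {('A', 'subClassOf', 'C')}
```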

  4. Construction of a bibliographic information database and a web directory for the nuclear science and engineering

    Energy Technology Data Exchange (ETDEWEB)

    Oh, Jeong Hoon; Kim, Tae Whan; Lee, Ji Ho; Chun, Young Chun; Yu, An Na

    2005-11-15

    The objective of this project is to construct the bibliographic information database and the web directory in the nuclear field. Its construction is very timely and important. Because nuclear science and technology has an considerable effect all over the other sciences and technologies due to its property of giant and complex engineering. We aimed to firmly build up a basis of efficient management of the bibliographic information database and the web directory in the nuclear field. The results of this project that we achieved in this year are as follows : first, construction of the bibliographic information database in the nuclear field(the target title: 1,500 titles ; research report: 1,000 titles, full-text report: 250 titles, full-text article: 250 titles). Second, completion of construction of the web directory in the nuclear field by using SWING (the total figure achieved : 2,613 titles). We plan that we will positively give more information to the general public interested in the nuclear field and to the experts of the field through this bibliographic information database on KAERI's home page, KAERI's electronic library and other related sites as well as participation at various seminars and meetings related to the nuclear field.

  5. A novel architecture for information retrieval system based on semantic web

    Science.gov (United States)

    Zhang, Hui

    2011-12-01

    Nowadays, the web has enabled an explosive growth of information sharing (there are currently over 4 billion pages covering most areas of human endeavor), so the web faces a new challenge of information overload. The challenge now before us is not only to help people locate relevant information precisely but also to access and aggregate a variety of information from different resources automatically. Current web documents are in human-oriented formats suitable for presentation, but machines cannot understand their meaning. To address this issue, Berners-Lee proposed the concept of the semantic web. With semantic web technology, web information can be understood and processed by machines, which provides new possibilities for automatic web information processing. A main problem of semantic web information retrieval is that when there is not enough knowledge in the retrieval system, it returns a large number of meaningless results to users. In this paper, we present the architecture of an information retrieval system based on the semantic web. In addition, our system employs an inference engine to check whether a query should be posed to the keyword-based search engine or to the semantic search engine.
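
    A minimal sketch of the routing idea: the "inference engine" is reduced here to a coverage check against the knowledge base, and both backend engines are hypothetical stubs, not components of the paper's actual system.

```python
# Route a query to the semantic engine only when the KB covers it.
def ontology_covers(terms: list[str], ontology: set[str]) -> bool:
    # If the knowledge base does not know the query concepts, the semantic
    # engine would flood the user with meaningless results.
    return all(t in ontology for t in terms)

def semantic_search(terms: list[str]) -> list[str]:
    return []  # stub: query the semantic (ontology-backed) engine

def keyword_search(query: str) -> list[str]:
    return []  # stub: fall back to a classic keyword engine

def search(query: str, ontology: set[str]) -> list[str]:
    terms = query.lower().split()
    if ontology_covers(terms, ontology):
        return semantic_search(terms)
    return keyword_search(query)
```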

  6. Cost estimation in software engineering projects with web components development

    Directory of Open Access Journals (Sweden)

    Javier de Andrés

    2015-01-01

    Full Text Available Many models have been proposed for cost prediction in software projects, some of them specifically oriented to Web projects. This paper analyzes whether specific models for Web projects are justified, by examining the differential behavior of costs between Web and non-Web software development projects. Two aspects of cost estimation are analyzed: diseconomies of scale, and the impact of some characteristics of these projects that are used as cost drivers. Two hypotheses are stated: (a) in these projects the diseconomies of scale are greater, and (b) the cost increase caused by the cost drivers is smaller for Web projects. These hypotheses were tested by analyzing a set of real projects. The results suggest that both hypotheses hold. Therefore, the main contribution of this research to the literature is that the development of specific models for Web projects is justified.

  7. Web sites that work secrets from winning web sites

    CERN Document Server

    Smith, Jon

    2012-01-01

    Leading web site entrepreneur Jon Smith has condensed the secrets of his success into 52 inspiring ideas that even the most hopeless technophobe can implement. The brilliant tips and practical advice in Web sites that work will uplift and transform any website, from the simplest to the most complicated. It deals with everything from fundamentals such as how to assess the effectiveness of a website and how to get a site listed on the most popular search engines to more sophisticated challenges like creating a community and dealing with legal requirements. Straight-talking, practical and humorou

  8. Pro JavaScript for web apps

    CERN Document Server

    Freeman, Adam

    2012-01-01

    JavaScript is the engine behind every web app, and a solid knowledge of it is essential for all modern web developers. Pro JavaScript for Web Apps gives you all of the information that you need to create professional, optimized, and efficient JavaScript applications that will run across all devices. It takes you through all aspects of modern JavaScript application creation, showing you how to combine JavaScript with the new features of HTML5 and CSS3 to make the most of the new web technologies. The focus of the book is on creating professional web applications, ensuring that your app provides

  9. Search engines that learn from their users

    NARCIS (Netherlands)

    Schuth, A.G.

    2016-01-01

    More than half the world’s population uses web search engines, resulting in over half a billion search queries every single day. For many people web search engines are among the first resources they go to when a question arises. Moreover, search engines have for many become the most trusted route to

  10. GeNemo: a search engine for web-based functional genomic data.

    Science.gov (United States)

    Zhang, Yongqing; Cao, Xiaoyi; Zhong, Sheng

    2016-07-08

    A set of new data types emerged from functional genomic assays, including ChIP-seq, DNase-seq, FAIRE-seq and others. The results are typically stored as genome-wide intensities (WIG/bigWig files) or functional genomic regions (peak/BED files). These data types present new challenges to big data science. Here, we present GeNemo, a web-based search engine for functional genomic data. GeNemo searches user-input data against online functional genomic datasets, including the entire collection of ENCODE and mouse ENCODE datasets. Unlike text-based search engines, GeNemo's searches are based on pattern matching of functional genomic regions. This distinguishes GeNemo from text or DNA sequence searches. The user can input any complete or partial functional genomic dataset, for example, a binding intensity file (bigWig) or a peak file. GeNemo reports any genomic regions, ranging from hundred bases to hundred thousand bases, from any of the online ENCODE datasets that share similar functional (binding, modification, accessibility) patterns. This is enabled by a Markov Chain Monte Carlo-based maximization process, executed on up to 24 parallel computing threads. By clicking on a search result, the user can visually compare her/his data with the found datasets and navigate the identified genomic regions. GeNemo is available at www.genemo.org. © The Author(s) 2016. Published by Oxford University Press on behalf of Nucleic Acids Research.

  11. Detection And Classification Of Web Robots With Honeypots

    Science.gov (United States)

    2016-03-01

    Web robots are valuable tools for indexing content on the Web, but they can also be malicious through phishing, spamming, or performing targeted attacks. In this thesis, we study an approach... programs has been attributed to the explosion in content and user-generated social media on the Internet. Web search engines like Google require

  12. Social Networking on the Semantic Web

    Science.gov (United States)

    Finin, Tim; Ding, Li; Zhou, Lina; Joshi, Anupam

    2005-01-01

    Purpose: Aims to investigate the way that the semantic web is being used to represent and process social network information. Design/methodology/approach: The Swoogle semantic web search engine was used to construct several large data sets of Resource Description Framework (RDF) documents with social network information that were encoded using the…

  13. IntegromeDB: an integrated system and biological search engine.

    Science.gov (United States)

    Baitaluk, Michael; Kozhenkov, Sergey; Dubinina, Yulia; Ponomarenko, Julia

    2012-01-19

    With the growth of biological data in volume and heterogeneity, web search engines become key tools for researchers. However, general-purpose search engines are not specialized for the search of biological data. Here, we present an approach to developing a biological web search engine based on Semantic Web technologies and demonstrate its implementation for retrieving gene- and protein-centered knowledge. The engine is available at http://www.integromedb.org. The IntegromeDB search engine allows scanning data on gene regulation, gene expression, protein-protein interactions, pathways, metagenomics, mutations, diseases, and other gene- and protein-related data that are automatically retrieved from publicly available databases and web pages using biological ontologies. To perfect the resource design and usability, we welcome and encourage community feedback.

  14. Marketing plan for a web shop business

    OpenAIRE

    Koskivaara, Leonilla

    2014-01-01

    Internet has changed the buying behavior of consumers during the past years and companies need to adapt to the changes. Web shop business is an important sales channel of today’s companies. Advantages of a web shop business include cost effectiveness and potential to do business globally. Challenges of a web shop business include search engine optimization and running both, a retail store and a web shop at the same time. Social media has become an important marketing channel and has bec...

  15. CHIME : service-oriented framework for adaptive web-based systems

    NARCIS (Netherlands)

    Chepegin, V.; Aroyo, L.M.; De Bra, P.M.E.; Houben, G.J.P.M.; De Bra, P.M.E.

    2003-01-01

    In this paper we present our view on how the current development of knowledge engineering in the context of Semantic Web can contribute to the better applicability, reusability and sharability of adaptive web-based systems. We propose a service-oriented framework for adaptive web-based systems,

  16. Overview of the TREC 2014 Federated Web Search Track

    NARCIS (Netherlands)

    Demeester, Thomas; Trieschnigg, Rudolf Berend; Nguyen, Dong-Phuong; Zhou, Ke; Hiemstra, Djoerd

    2014-01-01

    The TREC Federated Web Search track facilitates research in topics related to federated web search, by providing a large realistic data collection sampled from a multitude of online search engines. The FedWeb 2013 challenges of Resource Selection and Results Merging are again included in

  17. Earth Science Mining Web Services

    Science.gov (United States)

    Pham, Long; Lynnes, Christopher; Hegde, Mahabaleshwa; Graves, Sara; Ramachandran, Rahul; Maskey, Manil; Keiser, Ken

    2008-01-01

    To allow scientists further capabilities in the area of data mining and web services, the Goddard Earth Sciences Data and Information Services Center (GES DISC) and researchers at the University of Alabama in Huntsville (UAH) have developed a system to mine data at the source without the need for network transfers. The system has been constructed by linking together several pre-existing technologies: the Simple Scalable Script-based Science Processor for Measurements (S4PM), a processing engine at the GES DISC; the Algorithm Development and Mining (ADaM) system, a data mining toolkit from UAH that can be configured in a variety of ways to create customized mining processes; ActiveBPEL, a workflow execution engine based on BPEL (Business Process Execution Language); XBaya, a graphical workflow composer; and the EOS Clearinghouse (ECHO). XBaya is used to construct an analysis workflow at UAH using ADaM components, which are also installed remotely at the GES DISC, wrapped as Web Services. The S4PM processing engine searches ECHO for data using space-time criteria, staging them to cache, allowing the ActiveBPEL engine to remotely orchestrate the processing workflow within S4PM. As mining is completed, the output is placed in an FTP holding area for the end user. The goals are to give users control over the data they want to process, while mining data at the data source using the server's resources rather than transferring the full volume over the internet. These diverse technologies have been infused into a functioning, distributed system with only minor changes to the underlying technologies. The key to the infusion is the loosely coupled, Web-Services-based architecture: all of the participating components are accessible (one way or another) through SOAP (Simple Object Access Protocol)-based Web Services.

  18. SSWAP: A Simple Semantic Web Architecture and Protocol for semantic web services.

    Science.gov (United States)

    Gessler, Damian D G; Schiltz, Gary S; May, Greg D; Avraham, Shulamit; Town, Christopher D; Grant, David; Nelson, Rex T

    2009-09-23

    SSWAP (Simple Semantic Web Architecture and Protocol; pronounced "swap") is an architecture, protocol, and platform for using reasoning to semantically integrate heterogeneous disparate data and services on the web. SSWAP was developed as a hybrid semantic web services technology to overcome limitations found in both pure web service technologies and pure semantic web technologies. There are currently over 2400 resources published in SSWAP. Approximately two dozen are custom-written services for QTL (Quantitative Trait Loci) and mapping data for legumes and grasses (grains). The remaining are wrappers to Nucleic Acids Research Database and Web Server entries. As an architecture, SSWAP establishes how clients (users of data, services, and ontologies), providers (suppliers of data, services, and ontologies), and discovery servers (semantic search engines) interact to allow for the description, querying, discovery, invocation, and response of semantic web services. As a protocol, SSWAP provides the vocabulary and semantics to allow clients, providers, and discovery servers to engage in semantic web services. The protocol is based on the W3C-sanctioned first-order description logic language OWL DL. As an open source platform, a discovery server running at http://sswap.info (as in to "swap info") uses the description logic reasoner Pellet to integrate semantic resources. The platform hosts an interactive guide to the protocol at http://sswap.info/protocol.jsp, developer tools at http://sswap.info/developer.jsp, and a portal to third-party ontologies at http://sswapmeet.sswap.info (a "swap meet"). SSWAP addresses the three basic requirements of a semantic web services architecture (i.e., a common syntax, shared semantic, and semantic discovery) while addressing three technology limitations common in distributed service systems: i.e., i) the fatal mutability of traditional interfaces, ii) the rigidity and fragility of static subsumption hierarchies, and iii) the

  19. Web-based Analysis Services Report

    CERN Document Server

    AUTHOR|(CDS)2108758; Canali, Luca; Grancher, Eric; Lamanna, Massimo; McCance, Gavin; Mato Vila, Pere; Piparo, Danilo; Moscicki, Jakub; Pace, Alberto; Brito Da Rocha, Ricardo; Simko, Tibor; Smith, Tim; Tejedor Saavedra, Enric; CERN. Geneva. IT Department

    2017-01-01

    Web-based services (cloud services) is an important trend to innovate end-user services while optimising the service operational costs. CERN users are constantly proposing new approaches (inspired from services existing on the web, tools used in education or other science or based on their experience in using existing computing services). In addition, industry and open source communities have recently made available a large number of powerful and attractive tools and platforms that enable large scale data processing. “Big Data” software stacks notably provide solutions for scalable storage, distributed compute and data analysis engines, data streaming, web-based interfaces (notebooks). Some of those platforms and tools, typically available as open source products, are experiencing a very fast adoption in industry and science such that they are becoming “de facto” references in several areas of data engineering, data science and machine learning. In parallel to users' requests, WLCG is considering to c...

  20. Development of Content Management System-based Web Applications

    NARCIS (Netherlands)

    Souer, J.

    2012-01-01

    Web engineering is the application of systematic and quantifiable approaches (concepts, methods, techniques, tools) to cost-effective requirements analysis, design, implementation, testing, operation, and maintenance of high quality web applications. Over the past years, Content Management Systems

  1. A Survey On Various Web Template Detection And Extraction Methods

    Directory of Open Access Journals (Sweden)

    Neethu Mary Varghese

    2015-03-01

    Full Text Available Abstract In today's digital world, reliance on the World Wide Web as a source of information is extensive. Users increasingly rely on web-based search engines to provide accurate search results on a wide range of topics that interest them. The search engines in turn parse the vast repository of web pages searching for relevant information. However, the majority of web portals are designed using web templates, which are intended to provide a consistent look and feel to end users. The presence of these templates, however, can influence search results, leading to inaccurate results being delivered to the users. Therefore, to improve the accuracy and reliability of search results, identification and removal of web templates from the actual content is essential. A wide range of approaches are commonly employed to achieve this, and this paper focuses on the study of the various approaches of template detection and extraction that can be applied across homogeneous as well as heterogeneous web pages.
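
    One simple representative of the family of approaches such surveys cover can be sketched as follows (an illustrative heuristic, not a specific method from the paper): treat a text block as template boilerplate when the identical block recurs on most pages of a site, and strip it before indexing.

```python
# Template detection by cross-page block frequency.
from collections import Counter

def detect_template_blocks(pages: list[list[str]],
                           min_fraction: float = 0.8) -> set[str]:
    # pages: each page is a list of text blocks (e.g., DOM text nodes).
    block_counts = Counter()
    for blocks in pages:
        block_counts.update(set(blocks))  # count pages, not repetitions
    cutoff = min_fraction * len(pages)
    return {b for b, c in block_counts.items() if c >= cutoff}

def strip_template(blocks: list[str], template: set[str]) -> list[str]:
    return [b for b in blocks if b not in template]

pages = [["HOME | ABOUT", "article one", "(c) example.org"],
         ["HOME | ABOUT", "article two", "(c) example.org"]]
tpl = detect_template_blocks(pages)
print(strip_template(pages[0], tpl))  # ['article one']
```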

  2. Discovering Land Cover Web Map Services from the Deep Web with JavaScript Invocation Rules

    Directory of Open Access Journals (Sweden)

    Dongyang Hou

    2016-06-01

    Full Text Available Automatic discovery of isolated land cover web map services (LCWMSs) can potentially help in sharing land cover data. Currently, various search engine-based and crawler-based approaches have been developed for finding services dispersed throughout the surface web. In fact, with the prevalence of geospatial web applications, a considerable number of LCWMSs are hidden in JavaScript code, which belongs to the deep web. However, discovering LCWMSs from JavaScript code remains an open challenge. This paper aims to solve this challenge by proposing a focused deep web crawler for finding more LCWMSs from deep web JavaScript code and the surface web. First, the names of a group of JavaScript links are abstracted as initial judgements. Through name matching, these judgements are utilized to judge whether or not the fetched webpages contain predefined JavaScript links that may prompt JavaScript code to invoke WMSs. Secondly, some JavaScript invocation functions and URL formats for WMS are summarized as JavaScript invocation rules from prior knowledge of how WMSs are employed and coded in JavaScript. These invocation rules are used to identify the JavaScript code for extracting candidate WMSs through rule matching. The above two operations are incorporated into a traditional focused crawling strategy situated between the tasks of fetching webpages and parsing webpages. Thirdly, LCWMSs are selected by matching services with a set of land cover keywords. Moreover, a search engine for LCWMSs is implemented that uses the focused deep web crawler to retrieve and integrate the LCWMSs it discovers. In the first experiment, eight online geospatial web applications serve as seed URLs (Uniform Resource Locators) and crawling scopes; the proposed crawler addresses only the JavaScript code in these eight applications. All 32 available WMSs hidden in JavaScript code were found using the proposed crawler, while not one WMS was discovered through the focused crawler
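
    The invocation-rule idea lends itself to a short sketch: scan fetched JavaScript for constructor calls and URL patterns that betray a WMS. The specific regular expressions below are illustrative assumptions about common WMS usage, not the paper's exact rule set.

```python
# Sketch of "JavaScript invocation rules" for spotting WMS endpoints inside
# JavaScript code. The function names and URL patterns are illustrative
# assumptions about how WMSs are commonly invoked in web mapping libraries.
import re

INVOCATION_PATTERNS = [
    r"""new\s+OpenLayers\.Layer\.WMS\s*\(\s*['"][^'"]*['"]\s*,\s*['"](?P<url>https?://[^'"]+)['"]""",
    r"""['"](?P<url>https?://[^'"]+?service=wms[^'"]*)['"]""",
    r"""['"](?P<url>https?://[^'"]+?request=getcapabilities[^'"]*)['"]""",
]

def extract_candidate_wms(js_code):
    """Return candidate WMS base URLs found in a blob of JavaScript."""
    found = set()
    for pattern in INVOCATION_PATTERNS:
        for m in re.finditer(pattern, js_code, flags=re.IGNORECASE):
            found.add(m.group("url").split("?")[0])  # keep the service base URL
    return sorted(found)

js = 'var layer = new OpenLayers.Layer.WMS("lc", "http://example.org/geoserver/wms?", {layers: "landcover"});'
print(extract_candidate_wms(js))  # ['http://example.org/geoserver/wms']
```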

  3. Needle Custom Search: Recall-oriented search on the Web using semantic annotations

    NARCIS (Netherlands)

    Kaptein, Rianne; Koot, Gijs; Huis in 't Veld, Mirjam A.A.; van den Broek, Egon; de Rijke, Maarten; Kenter, Tom; de Vries, A.P.; Zhai, Chen Xiang; de Jong, Franciska M.G.; Radinsky, Kira; Hofmann, Katja

    Web search engines are optimized for early precision, which makes it difficult to perform recall-oriented tasks using these search engines. In this article, we present our tool Needle Custom Search. This tool exploits semantic annotations of Web search results and, thereby, increases the efficiency

  4. Needle Custom Search : Recall-oriented search on the web using semantic annotations

    NARCIS (Netherlands)

    Kaptein, Rianne; Koot, Gijs; Huis in 't Veld, Mirjam A.A.; van den Broek, Egon L.

    2014-01-01

    Web search engines are optimized for early precision, which makes it difficult to perform recall-oriented tasks using these search engines. In this article, we present our tool Needle Custom Search. This tool exploits semantic annotations of Web search results and, thereby, increases the efficiency

  5. Overview of the TREC 2013 Federated Web Search Track

    NARCIS (Netherlands)

    Demeester, Thomas; Trieschnigg, Rudolf Berend; Nguyen, Dong-Phuong; Hiemstra, Djoerd

    2014-01-01

    The TREC Federated Web Search track is intended to promote research related to federated search in a realistic web setting, and hereto provides a large data collection gathered from a series of online search engines. This overview paper discusses the results of the first edition of the track, FedWeb

  6. Recommendations for Benchmarking Web Site Usage among Academic Libraries.

    Science.gov (United States)

    Hightower, Christy; Sih, Julie; Tilghman, Adam

    1998-01-01

    To help library directors and Web developers create a benchmarking program to compare statistics of academic Web sites, the authors analyzed the Web server log files of 14 university science and engineering libraries. Recommends a centralized voluntary reporting structure coordinated by the Association of Research Libraries (ARL) and a method for…

  7. Web corpus construction

    CERN Document Server

    Schafer, Roland

    2013-01-01

    The World Wide Web constitutes the largest existing source of texts written in a great variety of languages. A feasible and sound way of exploiting this data for linguistic research is to compile a static corpus for a given language. There are several advantages of this approach: (i) Working with such corpora obviates the problems encountered when using Internet search engines in quantitative linguistic research (such as non-transparent ranking algorithms). (ii) Creating a corpus from web data is virtually free. (iii) The size of corpora compiled from the WWW may exceed by several orders of magnitude the size of language resources offered elsewhere. (iv) The data is locally available to the user, and it can be linguistically post-processed and queried with the user's preferred tools. This book addresses the main practical tasks in the creation of web corpora up to giga-token size. Among these tasks are the sampling process (i.e., web crawling) and the usual cleanups including boilerplate removal and rem...

  8. [Development of domain specific search engines].

    Science.gov (United States)

    Takai, T; Tokunaga, M; Maeda, K; Kaminuma, T

    2000-01-01

    As cyberspace explodes at a pace that nobody ever imagined, it becomes very important to search it efficiently and effectively. One solution to this problem is search engines. A lot of commercial search engines have already been put on the market. However, these search engines return results so cumbersome that domain-specific experts cannot tolerate them. Using dedicated hardware and commercial software called OpenText, we have developed several domain-specific search engines. These engines cover our institute's Web contents, drugs, chemical safety, endocrine disruptors, and emergency response to chemical hazards. These engines have been on our Web site for testing.

  9. Engineering Education Tool for Distance Telephone Traffic Learning Through Web

    Directory of Open Access Journals (Sweden)

    Leonimer Flávio de Melo

    2012-11-01

    Full Text Available This work focuses on the distance learning (DL) modality over the Internet. The use of calculator and simulator software, such as the Matlab software proposed in this work, introduces a high level of interactivity into DL systems. The use of efficient mathematical packages and hypermedia technologies opens the door to a new paradigm of teaching and learning at the dawn of this new millennium. Hypertext, graphics, animation, audio, video, and efficient calculators and simulators incorporating artificial intelligence techniques, together with advances in broadband networks, will pave the way to this new horizon. The contribution of this work, besides the Matlab Web integration, is the development of an introductory course in traffic engineering in hypertext format. In addition, calculators for the most commonly used expressions in traffic analysis were developed for the Matlab server environment. Using the telephone traffic calculator, the user inputs data in his or her Internet browser and the system returns numerical data, graphics, and tables in HTML pages. The system is also very useful for professional traffic calculations, replacing with advantages the traditional methods based on static tables and graphics in paper format.
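
    The abstract does not name the traffic expressions implemented; the Erlang B blocking probability is the canonical example of such a calculator, so a minimal sketch is offered under that assumption (in Python rather than the Matlab server environment the authors used).

```python
# Minimal sketch of a telephone-traffic calculator of the kind described:
# the Erlang B blocking probability for offered traffic A (in erlangs) on
# N trunks, via the standard numerically stable recurrence
#   B(0) = 1,  B(k) = A*B(k-1) / (k + A*B(k-1)).
def erlang_b(traffic_erlangs: float, trunks: int) -> float:
    b = 1.0
    for k in range(1, trunks + 1):
        b = traffic_erlangs * b / (k + traffic_erlangs * b)
    return b

# Example: 10 erlangs offered to 15 trunks.
print(f"Blocking probability: {erlang_b(10.0, 15):.4f}")
```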

  10. Habitat-mediated variation in the importance of ecosystem engineers for secondary cavity nesters in a nest web.

    Science.gov (United States)

    Robles, Hugo; Martin, Kathy

    2014-01-01

    Through physical state changes in biotic or abiotic materials, ecosystem engineers modulate resource availability to other organisms and are major drivers of evolutionary and ecological dynamics. Understanding whether and how ecosystem engineers are interchangeable for resource users in different habitats is a largely neglected topic in ecosystem engineering research that can improve our understanding of the structure of communities. We addressed this issue in a cavity-nest web (1999-2011). In aspen groves, the presence of mountain bluebird (Sialia currucoides) and tree swallow (Tachycineta bicolor) nests was positively related to the density of cavities supplied by northern flickers (Colaptes auratus), which provided the most abundant cavities (1.61 cavities/ha). Flickers in aspen groves provided numerous nesting cavities to bluebirds (66%) and swallows (46%), despite previous research showing that flicker cavities are avoided by swallows. In continuous mixed forests, however, the presence of nesting swallows was mainly related to cavity density of red-naped sapsuckers (Sphyrapicus nuchalis), which provided the most abundant cavities (0.52 cavities/ha), and to cavity density of hairy woodpeckers (Picoides villosus), which provided few (0.14 cavities/ha) but high-quality cavities. Overall, sapsuckers and hairy woodpeckers provided 86% of nesting cavities to swallows in continuous forests. In contrast, the presence of nesting bluebirds in continuous forests was associated with the density of cavities supplied by all the ecosystem engineers. These results suggest that (i) habitat type may mediate the associations between ecosystem engineers and resource users, and (ii) different ecosystem engineers may be interchangeable for resource users depending on the quantity and quality of resources that each engineer supplies in each habitat type. We, therefore, urge the incorporation of the variation in the quantity and quality of resources provided by ecosystem engineers

  11. Engineering Adaptive Applications

    DEFF Research Database (Denmark)

    Dolog, Peter

    for a domain. In this book, we propose a new domain engineering framework which extends a development process of Web applications with techniques required when designing such adaptive customizable Web applications. The framework is provided with design abstractions which deal separately with information served...

  12. An Efficient Approach for Web Indexing of Big Data through Hyperlinks in Web Crawling

    Science.gov (United States)

    Devi, R. Suganya; Manjula, D.; Siddharth, R. K.

    2015-01-01

    Web Crawling has acquired tremendous significance in recent times and it is aptly associated with the substantial development of the World Wide Web. Web Search Engines face new challenges due to the availability of vast amounts of web documents, thus making the retrieved results less applicable to the analysers. However, recently, Web Crawling solely focuses on obtaining the links of the corresponding documents. Today, there exist various algorithms and software which are used to crawl links from the web, which then have to be further processed for future use, thereby increasing the overload of the analyser. This paper concentrates on crawling the links and retrieving all information associated with them to facilitate easy processing for other uses. In this paper, first the links are crawled from the specified uniform resource locator (URL) using a modified version of the Depth First Search Algorithm, which allows for complete hierarchical scanning of corresponding web links. The links are then accessed via the source code, and metadata such as title, keywords, and description are extracted. This content is very essential for any type of analyser work to be carried out on the Big Data obtained as a result of Web Crawling. PMID:26137592
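
    A compact sketch of the described pipeline, depth-first crawling from a seed URL and pulling title/keywords/description metadata per page, may be useful; the depth limit, timeout, and libraries used are illustrative choices, not the paper's implementation.

```python
# Sketch of the pipeline described above: depth-first crawl of links from a
# seed URL, extracting title/keywords/description metadata for each page.
import requests
from bs4 import BeautifulSoup
from urllib.parse import urljoin

def crawl_dfs(url, depth=2, seen=None):
    seen = seen if seen is not None else set()
    if depth < 0 or url in seen:
        return []
    seen.add(url)
    try:
        html = requests.get(url, timeout=5).text
    except requests.RequestException:
        return []
    soup = BeautifulSoup(html, "html.parser")
    record = {
        "url": url,
        "title": soup.title.get_text(strip=True) if soup.title else "",
        "keywords": (soup.find("meta", attrs={"name": "keywords"}) or {}).get("content", ""),
        "description": (soup.find("meta", attrs={"name": "description"}) or {}).get("content", ""),
    }
    records = [record]
    for a in soup.find_all("a", href=True):
        records += crawl_dfs(urljoin(url, a["href"]), depth - 1, seen)  # recurse depth-first
    return records
```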

  13. Search Engine Optimization

    CERN Document Server

    Davis, Harold

    2006-01-01

    SEO--short for Search Engine Optimization--is the art, craft, and science of driving web traffic to web sites. Web traffic is food, drink, and oxygen--in short, life itself--to any web-based business. Whether your web site depends on broad, general traffic, or high-quality, targeted traffic, this PDF has the tools and information you need to draw more traffic to your site. You'll learn how to effectively use PageRank (and Google itself); how to get listed, get links, and get syndicated; and much more. The field of SEO is expanding into all the possible ways of promoting web traffic. This

  14. Discovering How Students Search a Library Web Site: A Usability Case Study.

    Science.gov (United States)

    Augustine, Susan; Greene, Courtney

    2002-01-01

    Discusses results of a usability study at the University of Illinois Chicago that investigated whether Internet search engines have influenced the way students search library Web sites. Results show students use the Web site's internal search engine rather than navigating through the pages; have difficulty interpreting library terminology; and…

  15. Sagace: A web-based search engine for biomedical databases in Japan

    Directory of Open Access Journals (Sweden)

    Morita Mizuki

    2012-10-01

    Full Text Available Abstract Background In the big data era, biomedical research continues to generate a large amount of data, and the generated information is often stored in a database and made publicly available. Although combining data from multiple databases should accelerate further studies, the current number of life sciences databases is too large to grasp features and contents of each database. Findings We have developed Sagace, a web-based search engine that enables users to retrieve information from a range of biological databases (such as gene expression profiles and proteomics data) and biological resource banks (such as mouse models of disease and cell lines). With Sagace, users can search more than 300 databases in Japan. Sagace offers features tailored to biomedical research, including manually tuned ranking, a faceted navigation to refine search results, and rich snippets constructed with retrieved metadata for each database entry. Conclusions Sagace will be valuable for experts who are involved in biomedical research and drug development in both academia and industry. Sagace is freely available at http://sagace.nibio.go.jp/en/.

  16. Introduction to Webometrics Quantitative Web Research for the Social Sciences

    CERN Document Server

    Thelwall, Michael

    2009-01-01

    Webometrics is concerned with measuring aspects of the web: web sites, web pages, parts of web pages, words in web pages, hyperlinks, web search engine results. The importance of the web itself as a communication medium and for hosting an increasingly wide array of documents, from journal articles to holiday brochures, needs no introduction. Given this huge and easily accessible source of information, there are limitless possibilities for measuring or counting on a huge scale (e.g., the number of web sites, the number of web pages, the number of blogs) or on a smaller scale (e.g., the number o...

  17. Multitasking Web Searching and Implications for Design.

    Science.gov (United States)

    Ozmutlu, Seda; Ozmutlu, H. C.; Spink, Amanda

    2003-01-01

    Findings from a study of users' multitasking searches on Web search engines include: multitasking searches are a noticeable user behavior; multitasking search sessions are longer than regular search sessions in terms of queries per session and duration; both Excite and AlltheWeb.com users search for about three topics per multitasking session and…

  18. Quality of Web-Based Information on Cannabis Addiction

    Science.gov (United States)

    Khazaal, Yasser; Chatton, Anne; Cochand, Sophie; Zullino, Daniele

    2008-01-01

    This study evaluated the quality of Web-based information on cannabis use and addiction and investigated particular content quality indicators. Three keywords ("cannabis addiction," "cannabis dependence," and "cannabis abuse") were entered into two popular World Wide Web search engines. Websites were assessed with a standardized proforma designed…

  19. Semantic Web Services Challenge, Results from the First Year. Series: Semantic Web And Beyond, Volume 8.

    Science.gov (United States)

    Petrie, C.; Margaria, T.; Lausen, H.; Zaremba, M.

    Explores trade-offs among existing approaches. Reveals strengths and weaknesses of proposed approaches, as well as which aspects of the problem are not yet covered. Introduces a software engineering approach to evaluating semantic web services. Service-Oriented Computing is one of the most promising software engineering trends because of the potential to reduce the programming effort for future distributed industrial systems. However, only a small part of this potential rests on the standardization of tools offered by the web services stack. The larger part of this potential rests upon the development of sufficient semantics to automate service orchestration. Currently there are many different approaches to semantic web service descriptions and many frameworks built around them. A common understanding, evaluation scheme, and test bed to compare and classify these frameworks in terms of their capabilities and shortcomings is necessary to make progress in developing the full potential of Service-Oriented Computing. The Semantic Web Services Challenge is an open source initiative that provides a public evaluation and certification of multiple frameworks on common industrially relevant problem sets. This edited volume reports on the first results in developing a common understanding of the various technologies intended to facilitate the automation of mediation, choreography and discovery for Web Services using semantic annotations. Semantic Web Services Challenge: Results from the First Year is designed for a professional audience composed of practitioners and researchers in industry. Professionals can use this book to evaluate SWS technology for their potential practical use. The book is also suitable for advanced-level students in computer science.

  20. Reflect: a practical approach to web semantics

    DEFF Research Database (Denmark)

    O'Donoghue, S.I.; Horn, Heiko; Pafilisa, E.

    2010-01-01

    To date, adding semantic capabilities to web content usually requires considerable server-side re-engineering, thus only a tiny fraction of all web content currently has semantic annotations. Recently, we announced Reflect (http://reflect.ws), a free service that takes a more practical approach: Reflect uses augmented browsing to allow end-users to add systematic semantic annotations to any web-page in real-time, typically within seconds. In this paper we describe the tagging process in detail and show how further entity types can be added to Reflect; we also describe how publishers and content ... web technologies.

  1. Federated Search and the Library Web Site: A Study of Association of Research Libraries Member Web Sites

    Science.gov (United States)

    Williams, Sarah C.

    2010-01-01

    The purpose of this study was to investigate how federated search engines are incorporated into the Web sites of libraries in the Association of Research Libraries. In 2009, information was gathered for each library in the Association of Research Libraries with a federated search engine. This included the name of the federated search service and…

  2. Overview of the TREC 2014 Federated Web Search Track

    OpenAIRE

    Demeester, Thomas; Trieschnigg, Rudolf Berend; Nguyen, Dong-Phuong; Zhou, Ke; Hiemstra, Djoerd

    2014-01-01

    The TREC Federated Web Search track facilitates research in topics related to federated web search, by providing a large realistic data collection sampled from a multitude of online search engines. The FedWeb 2013 challenges of Resource Selection and Results Merging challenges are again included in FedWeb 2014, and we additionally introduced the task of vertical selection. Other new aspects are the required link between the Resource Selection and Results Merging, and the importance of diversi...

  3. Automated Security Testing of Web Widget Interactions

    NARCIS (Netherlands)

    Bezemer, C.P.; Mesbah, A.; Van Deursen, A.

    2009-01-01

    This paper is a pre-print of: Cor-Paul Bezemer, Ali Mesbah, and Arie van Deursen. Automated Security Testing of Web Widget Interactions. In Proceedings of the 7th joint meeting of the European Software Engineering Conference and the ACM SIGSOFT Symposium on the Foundations of Software Engineering

  4. AN OVERVIEW OF SEARCHING AND DISCOVERING WEB BASED INFORMATION RESOURCES

    Directory of Open Access Journals (Sweden)

    Cezar VASILESCU

    2010-01-01

    Full Text Available The Internet has become a daily instrument for most of us, for professional or personal reasons. We do not even remember the times when a computer and a broadband connection were luxury items. More and more people rely on the complicated web network to find the information they need. This paper presents an overview of Internet search related issues and search engines, and describes the parties and the basic mechanism embedded in a search for web based information resources. It also presents ways to increase the efficiency of web searches, through a better understanding of what search engines ignore in website content.

  5. Quality analysis of patient information about knee arthroscopy on the World Wide Web.

    Science.gov (United States)

    Sambandam, Senthil Nathan; Ramasamy, Vijayaraj; Priyanka, Priyanka; Ilango, Balakrishnan

    2007-05-01

    This study was designed to ascertain the quality of patient information available on the World Wide Web on the topic of knee arthroscopy. For the purpose of quality analysis, we used a pool of 232 search results obtained from 7 different search engines. We used a modified assessment questionnaire to assess the quality of these Web sites. This questionnaire was developed based on similar studies evaluating Web site quality and includes items on illustrations, accessibility, availability, accountability, and content of the Web site. We also compared results obtained with different search engines and tried to establish the best possible search strategy to attain the most relevant, authentic, and adequate information with minimum time consumption. For this purpose, we first compared 100 search results from the single most commonly used search engine (AltaVista) with the pooled sample containing 20 search results from each of the 7 different search engines. The search engines used were metasearch (Copernic and Mamma), general search (Google, AltaVista, and Yahoo), and health topic-related search engines (MedHunt and Healthfinder). The phrase "knee arthroscopy" was used as the search terminology. Excluding the repetitions, there were 117 Web sites available for quality analysis. These sites were analyzed for accessibility, relevance, authenticity, adequacy, and accountability by use of a specially designed questionnaire. Our analysis showed that most of the sites providing patient information on knee arthroscopy contained outdated information, were inadequate, and were not accountable. Only 16 sites were found to be providing reasonably good patient information and hence can be recommended to patients. Understandably, most of these sites were from nonprofit organizations and educational institutions. Furthermore, our study revealed that using multiple search engines increases patients' chances of obtaining more relevant information rather than using a single search

  6. GLIDERS - A web-based search engine for genome-wide linkage disequilibrium between HapMap SNPs

    Directory of Open Access Journals (Sweden)

    Broxholme John

    2009-10-01

    Full Text Available Abstract Background A number of tools for the examination of linkage disequilibrium (LD) patterns between nearby alleles exist, but none are available for quickly and easily investigating LD at longer ranges (>500 kb). We have developed a web-based query tool (GLIDERS: Genome-wide LInkage DisEquilibrium Repository and Search engine) that enables the retrieval of pairwise associations with r2 ≥ 0.3 across the human genome for any SNP genotyped within HapMap phase 2 and 3, regardless of distance between the markers. Description GLIDERS is an easy to use web tool that only requires the user to enter rs numbers of SNPs they want to retrieve genome-wide LD for (both nearby and long-range). The intuitive web interface handles both manual entry of SNP IDs as well as allowing users to upload files of SNP IDs. The user can limit the resulting inter-SNP associations with easy to use menu options. These include MAF limit (5-45%), distance limits between SNPs (minimum and maximum), r2 (0.3 to 1), HapMap population sample (CEU, YRI and JPT+CHB combined), and HapMap build/release. All resulting genome-wide inter-SNP associations are displayed on a single output page, which has a link to a downloadable tab delimited text file. Conclusion GLIDERS is a quick and easy way to retrieve genome-wide inter-SNP associations and to explore LD patterns for any number of SNPs of interest. GLIDERS can be useful in identifying SNPs with long-range LD. This can highlight mis-mapping or other potential association signal localisation problems.
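
    For reference, the r2 statistic that GLIDERS filters on is the standard pairwise LD measure, computable from haplotype and allele frequencies; a minimal sketch of the textbook formula (not GLIDERS code) follows.

```python
# Standard pairwise LD statistic r^2 as reported by tools like GLIDERS:
#   D = p_AB - p_A * p_B,   r^2 = D^2 / (p_A(1-p_A) * p_B(1-p_B)).
def ld_r2(p_ab: float, p_a: float, p_b: float) -> float:
    """p_ab: frequency of the A-B haplotype; p_a, p_b: allele frequencies."""
    d = p_ab - p_a * p_b
    return d * d / (p_a * (1 - p_a) * p_b * (1 - p_b))

# Example: perfectly correlated alleles at 50% frequency give r^2 = 1.
print(ld_r2(0.5, 0.5, 0.5))  # 1.0
```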

  7. Interactive Web-based e-learning for Studying Flexible Manipulator Systems

    Directory of Open Access Journals (Sweden)

    Abul K. M. Azad

    2008-03-01

    Full Text Available Abstract— This paper presents a web-based e-learning facility for simulation, modeling, and control of flexible manipulator systems. The simulation and modeling part includes finite difference and finite element simulations along with neural network and genetic algorithm based modeling strategies for flexible manipulator systems. The controller part comprises a number of open-loop and closed-loop designs. Closed-loop control designs include classical, adaptive, and neuro-model based strategies. The Matlab software package and its associated toolboxes are used to implement these. The Matlab web server is used as the gateway between the facility and web access. ASP.NET technology and an SQL database are utilized to develop web applications for access control, user account and password maintenance, administrative management, and facility utilization monitoring. The reported facility provides a flexible yet effective approach to web-based interactive e-learning for an engineering system. This can be extended to incorporate additional engineering systems within the e-learning framework.

  8. Problem-Based Learning in Web Environments: The Case of ``Virtual eBMS'' for Business Engineering Education

    Science.gov (United States)

    Elia, Gianluca; Secundo, Giustina; Taurino, Cesare

    This chapter presents a case study where the Problem Based Learning (PBL) approach is applied to a Web-based environment. It first describes the main features behind PBL for creating Business Engineers able to face the grand technological challenges of 2020. Then it introduces a Web-based system supporting the PBL strategy, called the “Virtual eBMS”. This system has been designed and implemented at the e-Business Management Section of the Scuola Superiore ISUFI - University of Salento (Italy), in the framework of a research project carried out in collaboration with IBM. Besides the logical and technological description of Virtual eBMS, the chapter presents two applications of the platform in two different contexts: an academic context (international master) and an entrepreneurial context (awareness workshop with companies and entrepreneurs). The system is illustrated starting from the description of an operational framework for designing PBL-based curricula from the author's perspective, and then by illustrating a typical scenario of a learner accessing the curricula. The description highlights both the “structured” and the “unstructured” ways to create and follow an entire learning path.

  9. Measuring Personalization of Web Search

    DEFF Research Database (Denmark)

    Hannak, Aniko; Sapiezynski, Piotr; Kakhki, Arash Molavi

    2013-01-01

    are simply unable to access information that the search engines' algorithm decides is irrelevant. Despite these concerns, there has been little quantification of the extent of personalization in Web search today, or the user attributes that cause it. In light of this situation, we make three contributions... as a result of searching with a logged-in account and the IP address of the searching user. Our results are a first step towards understanding the extent and effects of personalization on Web search engines today.

  10. Web application to access U.S. Army Corps of Engineers Civil Works and Restoration Projects information for the Rio Grande Basin, southern Colorado, New Mexico, and Texas

    Science.gov (United States)

    Archuleta, Christy-Ann M.; Eames, Deanna R.

    2009-01-01

    The Rio Grande Civil Works and Restoration Projects Web Application, developed by the U.S. Geological Survey in cooperation with the U.S. Army Corps of Engineers (USACE) Albuquerque District, is designed to provide publicly available information through the Internet about civil works and restoration projects in the Rio Grande Basin. Since 1942, USACE Albuquerque District responsibilities have included building facilities for the U.S. Army and U.S. Air Force, providing flood protection, supplying water for power and public recreation, participating in fire remediation, protecting and restoring wetlands and other natural resources, and supporting other government agencies with engineering, contracting, and project management services. In the process of conducting this vast array of engineering work, the need arose for easily tracking the locations of and providing information about projects to stakeholders and the public. This fact sheet introduces a Web application developed to enable users to visualize locations and search for information about USACE (and some other Federal, State, and local) projects in the Rio Grande Basin in southern Colorado, New Mexico, and Texas.

  11. Web information retrieval based on ontology

    Science.gov (United States)

    Zhang, Jian

    2013-03-01

    The purpose of Information Retrieval (IR) is to find a set of documents that are relevant to a specific information need of a user. The traditional Information Retrieval model commonly used in commercial search engines is based on keyword indexing and Boolean logic queries. One big drawback of traditional information retrieval is that it typically retrieves information without an explicitly defined domain of interest from the user, so a lot of irrelevant information is returned, burdening the user with picking useful answers out of these irrelevant results. In order to tackle this issue, many semantic web information retrieval models have been proposed recently. The main advantage of the Semantic Web is to enhance search mechanisms with the use of ontology mechanisms. In this paper, we present our approach to personalizing a web search engine based on ontology. In addition, key techniques are also discussed in our paper. Compared to previous research, our work concentrates on semantic similarity and the whole process, including query submission and information annotation.
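
    As one concrete ingredient of such a system, ontology-driven query expansion can be sketched briefly. WordNet stands in here for the paper's (unspecified) ontology; the function name and the cap on expansion terms are illustrative assumptions.

```python
# Sketch of ontology-driven query expansion, a common ingredient of semantic
# IR systems like the one described. WordNet is used as a stand-in ontology.
from nltk.corpus import wordnet as wn  # requires: nltk.download("wordnet")

def expand_query(term, max_terms=5):
    """Collect synonyms and hypernyms (broader terms) of a query term."""
    expansions = set()
    for synset in wn.synsets(term):
        expansions.update(l.name().replace("_", " ") for l in synset.lemmas())
        for hyper in synset.hypernyms():
            expansions.update(l.name().replace("_", " ") for l in hyper.lemmas())
    expansions.discard(term)
    return sorted(expansions)[:max_terms]

print(expand_query("engine"))  # adds related and broader terms to the query
```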

  12. QUEST: An Assessment Tool for Web-Based Learning.

    Science.gov (United States)

    Choren, Ricardo; Blois, Marcelo; Fuks, Hugo

    In 1997, the Software Engineering Laboratory at Pontifical Catholic University of Rio de Janeiro (Brazil) implemented the first version of AulaNet (TM) a World Wide Web-based educational environment. Some of the teaching staff will use this environment in 1998 to offer regular term disciplines through the Web. This paper introduces Quest, a tool…

  13. Drexel at TREC 2014 Federated Web Search Track

    Science.gov (United States)

    2014-11-01

    of its input RS results. 1. INTRODUCTION Federated Web Search is the task of searching multiple search engines simultaneously and combining their... or distributed properly [5]. The goal of RS is then, for a given query, to select only the most promising search engines from all those available. Most... result pages of 149 search engines. 4000 queries are used in building the sample set. As a part of the Vertical Selection task, search engines are

  14. Software engineering

    CERN Document Server

    Sommerville, Ian

    2010-01-01

    The ninth edition of Software Engineering presents a broad perspective of software engineering, focusing on the processes and techniques fundamental to the creation of reliable, software systems. Increased coverage of agile methods and software reuse, along with coverage of 'traditional' plan-driven software engineering, gives readers the most up-to-date view of the field currently available. Practical case studies, a full set of easy-to-access supplements, and extensive web resources make teaching the course easier than ever.

  15. Graph Structure in Three National Academic Webs: Power Laws with Anomalies.

    Science.gov (United States)

    Thelwall, Mike; Wilkinson, David

    2003-01-01

    Explains how the Web can be modeled as a mathematical graph and analyzes the graph structures of three national university publicly indexable Web sites from Australia, New Zealand, and the United Kingdom. Topics include commercial search engines and academic Web link research; method-analysis environment and data sets; and power laws. (LRW)

  16. Space Physics Data Facility Web Services

    Science.gov (United States)

    Candey, Robert M.; Harris, Bernard T.; Chimiak, Reine A.

    2005-01-01

    The Space Physics Data Facility (SPDF) Web services provides a distributed programming interface to a portion of the SPDF software. (A general description of Web services is available at http://www.w3.org/ and in many current software-engineering texts and articles focused on distributed programming.) The SPDF Web services distributed programming interface enables additional collaboration and integration of the SPDF software system with other software systems, in furtherance of the SPDF mission to lead collaborative efforts in the collection and utilization of space physics data and mathematical models. This programming interface conforms to all applicable Web services specifications of the World Wide Web Consortium. The interface is specified by a Web Services Description Language (WSDL) file. The SPDF Web services software consists of the following components: 1) A server program for implementation of the Web services; and 2) A software developer's kit that consists of a WSDL file, a less formal description of the interface, a Java class library (which further eases development of Java-based client software), and Java source code for an example client program that illustrates the use of the interface.

  17. COEUS: "semantic web in a box" for biomedical applications.

    Science.gov (United States)

    Lopes, Pedro; Oliveira, José Luís

    2012-12-17

    As the "omics" revolution unfolds, the growth in data quantity and diversity is bringing about the need for pioneering bioinformatics software, capable of significantly improving the research workflow. To cope with these computer science demands, biomedical software engineers are adopting emerging semantic web technologies that better suit the life sciences domain. The latter's complex relationships are easily mapped into semantic web graphs, enabling a superior understanding of collected knowledge. Despite increased awareness of semantic web technologies in bioinformatics, their use is still limited. COEUS is a new semantic web framework, aiming at a streamlined application development cycle and following a "semantic web in a box" approach. The framework provides a single package including advanced data integration and triplification tools, base ontologies, a web-oriented engine and a flexible exploration API. Resources can be integrated from heterogeneous sources, including CSV and XML files or SQL and SPARQL query results, and mapped directly to one or more ontologies. Advanced interoperability features include REST services, a SPARQL endpoint and LinkedData publication. These enable the creation of multiple applications for web, desktop or mobile environments, and empower a new knowledge federation layer. The platform, targeted at biomedical application developers, provides a complete skeleton ready for rapid application deployment, enhancing the creation of new semantic information systems. COEUS is available as open source at http://bioinformatics.ua.pt/coeus/.

  18. Python for Google app engine

    CERN Document Server

    Pippi, Massimiliano

    2015-01-01

    If you are a Python developer, whether you have experience in web applications development or not, and want to rapidly deploy a scalable backend service or a modern web application on Google App Engine, then this book is for you.

  19. Distributed Deep Web Search

    NARCIS (Netherlands)

    Tjin-Kam-Jet, Kien

    2013-01-01

    The World Wide Web contains billions of documents (and counting); hence, it is likely that some document will contain the answer or content you are searching for. While major search engines like Bing and Google often manage to return relevant results to your query, there are plenty of situations in

  20. Teknik Perangkingan Meta-search Engine

    OpenAIRE

    Puspitaningrum, Diyah

    2014-01-01

    A meta-search engine organizes the merging of results from several search engines with the aim of improving the precision of web document searches. This survey of meta-search engine ranking techniques discusses preprocessing and ranking issues, as well as various techniques for combining the search results of different search engines (multi-combination). Implementation issues in combining two and three search engines are also highlighted. The paper also discusses research...

  1. Engineering Geology | Alaska Division of Geological & Geophysical Surveys

    Science.gov (United States)

    [Site navigation residue; recoverable section titles: Alaska's Mineral Industry Reports, AKGeology.info, Rare Earth Elements, WebGeochem, Engineering Geology content, Additional Information, Posters and Presentations, MAPTEACH, Tsunami Inundation Mapping, Engineering Geology Staff Projects.]

  2. Web document clustering using hyperlink structures

    Energy Technology Data Exchange (ETDEWEB)

    He, Xiaofeng; Zha, Hongyuan; Ding, Chris H.Q; Simon, Horst D.

    2001-05-07

    With the exponential growth of information on the World Wide Web, there is great demand for developing efficient and effective methods of organizing and retrieving the information available. Document clustering plays an important role in information retrieval and taxonomy management for the World Wide Web and remains an interesting and challenging problem in the field of web computing. In this paper we consider document clustering methods exploring textual information, hyperlink structure, and co-citation relations. In particular, we apply the normalized cut clustering method developed in computer vision to the task of hyperdocument clustering. We also explore some theoretical connections of the normalized-cut method to the K-means method. We then experiment with the normalized-cut method in the context of clustering query result sets for web search engines.
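
    The normalized-cut method admits a compact spectral sketch: build an affinity matrix from links or co-citations, form the normalized graph Laplacian, and split documents on the sign of the Fiedler vector. The toy affinity matrix below is an assumption for illustration, not data from the paper.

```python
# Minimal normalized-cut bipartition: W is a symmetric affinity matrix
# (e.g., hyperlink/co-citation counts); split on the sign of the second-
# smallest eigenvector (Fiedler vector) of the normalized Laplacian.
import numpy as np

def normalized_cut_bipartition(W):
    d = W.sum(axis=1)
    d_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    L_sym = np.eye(len(W)) - d_inv_sqrt @ W @ d_inv_sqrt  # normalized Laplacian
    eigvals, eigvecs = np.linalg.eigh(L_sym)              # ascending eigenvalues
    fiedler = eigvecs[:, 1]                               # 2nd-smallest eigenvector
    return fiedler >= 0                                   # boolean cluster labels

# Two link-dense groups of documents, weakly connected to each other.
W = np.array([[0, 5, 4, 0, 0],
              [5, 0, 6, 1, 0],
              [4, 6, 0, 0, 0],
              [0, 1, 0, 0, 7],
              [0, 0, 0, 7, 0]], dtype=float)
print(normalized_cut_bipartition(W))  # e.g. [ True  True  True False False]
```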

  3. A semantics-based aspect-oriented approach to adaptation in web engineering

    NARCIS (Netherlands)

    Casteleyn, S.; Van Woensel, W.; Houben, G.J.P.M.

    2007-01-01

    In the modern Web, users are accessing their favourite Web applications from any place, at any time and with any device. In this setting, they expect the application to be user-tailored and to personalize content access to their particular needs. Exhibiting some kind of user- and context-dependency is

  4. How Google Web Search copes with very similar documents

    NARCIS (Netherlands)

    W. Mettrop (Wouter); P. Nieuwenhuysen; H. Smulders

    2006-01-01

    A significant portion of the computer files that carry documents, multimedia, programs etc. on the Web are identical or very similar to other files on the Web. How do search engines cope with this? Do they perform some kind of “deduplication”? How should users take into account that

  5. Web-based information search and retrieval: effects of strategy use and age on search success.

    Science.gov (United States)

    Stronge, Aideen J; Rogers, Wendy A; Fisk, Arthur D

    2006-01-01

    The purpose of this study was to investigate the relationship between strategy use and search success on the World Wide Web (i.e., the Web) for experienced Web users. An additional goal was to extend understanding of how the age of the searcher may influence strategy use. Current investigations of information search and retrieval on the Web have provided an incomplete picture of Web strategy use because participants have not been given the opportunity to demonstrate their knowledge of Web strategies while also searching for information on the Web. Using both behavioral and knowledge-engineering methods, we investigated searching behavior and system knowledge for 16 younger adults (M = 20.88 years of age) and 16 older adults (M = 67.88 years). Older adults were less successful than younger adults in finding correct answers to the search tasks. Knowledge engineering revealed that the age-related effect resulted from ineffective search strategies and amount of Web experience rather than age per se. Our analysis led to the development of a decision-action diagram representing search behavior for both age groups. Older adults had more difficulty than younger adults when searching for information on the Web. However, this difficulty was related to the selection of inefficient search strategies, which may have been attributable to a lack of knowledge about available Web search strategies. Actual or potential applications of this research include training Web users to search more effectively and suggestions to improve the design of search engines.

  6. A Method for Transforming Existing Web Service Descriptions into an Enhanced Semantic Web Service Framework

    Science.gov (United States)

    Du, Xiaofeng; Song, William; Munro, Malcolm

    Web Services as a new distributed system technology has been widely adopted by industries in areas such as enterprise application integration (EAI), business process management (BPM), and virtual organisation (VO). However, the lack of semantics in the current Web Service standards has been a major barrier in service discovery and composition. In this chapter, we propose an enhanced context-based semantic service description framework (CbSSDF+) that tackles the problem and improves the flexibility of service discovery and the correctness of generated composite services. We also provide an agile transformation method to demonstrate how the various formats of Web Service descriptions on the Web can be managed and renovated step by step into CbSSDF+ based service descriptions without a large amount of engineering work. At the end of the chapter, we evaluate the applicability of the transformation method and the effectiveness of CbSSDF+ through a series of experiments.

  7. Kansei Engineering and Website Design

    DEFF Research Database (Denmark)

    Song, Zheng; Howard, Thomas J.; Achiche, Sofiane

    2012-01-01

    Capturing users' needs is critical in web site design. However, a lot of attention has been paid to enhance the functionality and usability, whereas much less consideration has been given to satisfy the emotional needs of users, which is also important to a successful design. This paper explores a methodology based on Kansei Engineering, which has done significant work in product and industrial design but not quite been adopted in the IT field, in order to discover implicit emotional needs of users toward a web site and transform them into design details. Survey and interview techniques and statistical methods were performed in this paper. A prototype web site was produced based on the Kansei results integrated with technical expertise and practical considerations. The results showed that the Kansei Engineering methodology in this paper played a significant role in web site design in terms of satisfying...

  8. The sources and popularity of online drug information: an analysis of top search engine results and web page views.

    Science.gov (United States)

    Law, Michael R; Mintzes, Barbara; Morgan, Steven G

    2011-03-01

    The Internet has become a popular source of health information. However, there is little information on what drug information and which Web sites are being searched. To investigate the sources of online information about prescription drugs by assessing the most common Web sites returned in online drug searches and to assess the comparative popularity of Web pages for particular drugs. This was a cross-sectional study of search results for the most commonly dispensed drugs in the US (n=278 active ingredients) on 4 popular search engines: Bing, Google (both US and Canada), and Yahoo. We determined the number of times a Web site appeared as the first result. A linked retrospective analysis counted Wikipedia page hits for each of these drugs in 2008 and 2009. About three quarters of the first result on Google USA for both brand and generic names linked to the National Library of Medicine. In contrast, Wikipedia was the first result for approximately 80% of generic name searches on the other 3 sites. On these other sites, over two thirds of brand name searches led to industry-sponsored sites. The Wikipedia pages with the highest number of hits were mainly for opiates, benzodiazepines, antibiotics, and antidepressants. Wikipedia and the National Library of Medicine rank highly in online drug searches. Further, our results suggest that patients most often seek information on drugs with the potential for dependence, for stigmatized conditions, that have received media attention, and for episodic treatments. Quality improvement efforts should focus on these drugs.

  9. Implementasi Seo Web Design Methodology Pada Official Homepage Pondok Pesantren Qodratullah

    OpenAIRE

    Ependi, Usman

    2013-01-01

    A homepage or website is a way for an organization to deliver information to the public. The number of homepages and websites, whether personal or organizational, increases every day. To communicate and disseminate information, the homepage/website of the Islamic Boarding School of Qodratullah needs a reliable approach: the Search Engine Optimization Web Design Methodology. The implementation of the Search Engine Optimization Web Design Methodology on the homepage/website...

  10. A study on the personalization methods of the web | Hajighorbani ...

    African Journals Online (AJOL)

    ... methods of correct patterns and analyze them. Here we will discuss the basic concepts of web personalization and consider the three approaches of web personalization and we evaluated the methods belonging to each of them. Keywords: personalization, search engine, user preferences, data mining methods ...

  11. Next-Gen Search Engines

    Science.gov (United States)

    Gupta, Amardeep

    2005-01-01

    Current search engines--even the constantly surprising Google--seem unable to leap the next big barrier in search: the trillions of bytes of dynamically generated data created by individual web sites around the world, or what some researchers call the "deep web." The challenge now is not information overload, but information overlook.…

  12. Final Technical Report; NUCLEAR ENGINEERING RECRUITMENT EFFORT

    Energy Technology Data Exchange (ETDEWEB)

    Kerrick, Sharon S.; Vincent, Charles D.

    2007-07-02

    This report provides the summary of a project whose purpose was to support the costs of developing a nuclear engineering awareness program, an instruction program for teachers to integrate lessons on nuclear science and technology into their existing curricula, and web sites for the exchange of nuclear engineering career information and classroom materials. The specific objectives of the program were as follows: OBJECTIVE 1: INCREASE AWARENESS AND INTEREST OF NUCLEAR ENGINEERING; OBJECTIVE 2: INSTRUCT TEACHERS ON NUCLEAR TOPICS; OBJECTIVE 3: NUCLEAR EDUCATION PROGRAMS WEB-SITE; OBJECTIVE 4: SUPPORT TO UNIVERSITY/INDUSTRY MATCHING GRANTS AND REACTOR SHARING; OBJECTIVE 5: PILOT PROJECT; OBJECTIVE 6: NUCLEAR ENGINEERING ENROLLMENT SURVEY AT UNIVERSITIES

  13. Estimating Search Engine Index Size Variability

    DEFF Research Database (Denmark)

    Van den Bosch, Antal; Bogers, Toine; De Kunder, Maurice

    2016-01-01

    One of the determining factors of the quality of Web search engines is the size of their index. In addition to its influence on search result quality, the size of the indexed Web can also tell us something about which parts of the WWW are directly accessible to the everyday user. We propose a novel method of estimating the size of a Web search engine's index by extrapolating from document frequencies of words observed in a large static corpus of Web pages. In addition, we provide a unique longitudinal perspective on the size of Google and Bing's indices over a nine-year period, from March 2006 until January 2015. We find that index size estimates of these two search engines tend to vary dramatically over time, with Google generally possessing a larger index than Bing. This result raises doubts about the reliability of previous one-off estimates of the size of the indexed Web. We find...

  14. Web of Science, Scopus, and Google Scholar citation rates: a case study of medical physics and biomedical engineering: what gets cited and what doesn't?

    Science.gov (United States)

    Trapp, Jamie

    2016-12-01

    There are often differences in a publication's citation count, depending on the database accessed. Here, aspects of citation counts for medical physics and biomedical engineering papers are studied using papers published in the journal Australasian Physical & Engineering Sciences in Medicine. Comparison is made between the Web of Science, Scopus, and Google Scholar. Papers are categorised by subject matter, and citation trends are examined. It is shown that review papers as a group tend to receive more citations on average; however, the highest-cited individual papers are more likely to be research papers.

  15. Raising Reliability of Web Search Tool Research through Replication and Chaos Theory

    OpenAIRE

    Nicholson, Scott

    1999-01-01

    Because the World Wide Web is a dynamic collection of information, the Web search tools (or "search engines") that index the Web are dynamic. Traditional information retrieval evaluation techniques may not provide reliable results when applied to the Web search tools. This study is the result of ten replications of the classic 1996 Ding and Marchionini Web search tool research. It explores the effects that replication can have on transforming unreliable results from one iteration into replica...

  16. Engineering High Assurance Distributed Cyber Physical Systems

    Science.gov (United States)

    2015-01-15

    engineering (MDE), model-centric software engineering (MCSE), and others have attempted to leverage and integrate techniques for requirements... "Part I: Principles of Software Engineering." IBM Syst. J. 38, 2-3, pp. 289-295, June 1999. [2] Xie, T., "Software Engineering Conferences", web page

  17. Estimating search engine index size variability: a 9-year longitudinal study.

    Science.gov (United States)

    van den Bosch, Antal; Bogers, Toine; de Kunder, Maurice

    One of the determining factors of the quality of Web search engines is the size of their index. In addition to its influence on search result quality, the size of the indexed Web can also tell us something about which parts of the WWW are directly accessible to the everyday user. We propose a novel method of estimating the size of a Web search engine's index by extrapolating from document frequencies of words observed in a large static corpus of Web pages. In addition, we provide a unique longitudinal perspective on the size of Google and Bing's indices over a nine-year period, from March 2006 until January 2015. We find that index size estimates of these two search engines tend to vary dramatically over time, with Google generally possessing a larger index than Bing. This result raises doubts about the reliability of previous one-off estimates of the size of the indexed Web. We find that much, if not all of this variability can be explained by changes in the indexing and ranking infrastructure of Google and Bing. This casts further doubt on whether Web search engines can be used reliably for cross-sectional webometric studies.
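
    The extrapolation step is simple to sketch: if a word occurs in a known fraction of a representative background corpus, the engine's reported document frequency for that word implies an index size, and averaging over several words stabilizes the estimate. All counts below are invented for illustration, not measured values.

```python
# Sketch of the document-frequency extrapolation method described above.
# If word w occurs in fraction p_w of a representative background corpus,
# and a search engine reports df_w matching documents, each word yields the
# estimate df_w / p_w; averaging over words stabilizes the figure.
corpus_size = 1_000_000                 # pages in the static background corpus
corpus_df = {"the": 920_000, "engine": 41_000, "laplacian": 310}
engine_df = {"the": 46e9, "engine": 2.1e9, "laplacian": 1.4e7}  # reported hit counts

estimates = [engine_df[w] / (corpus_df[w] / corpus_size) for w in corpus_df]
index_size = sum(estimates) / len(estimates)
print(f"Estimated index size: {index_size:.3e} documents")
```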

  18. Web Viz 2.0: A versatile suite of tools for collaboration and visualization

    Science.gov (United States)

    Spencer, C.; Yuen, D. A.

    2012-12-01

    Most scientific applications on the web fail to realize the full collaborative potential of the internet by not utilizing web 2.0 technology. To relieve users from the struggle with software tools and allow them to focus on their research, new software developed for scientists and researchers must harness the full suite of web technology. For several years WebViz 1.0 enabled researchers with any web accessible device to interact with the peta-scale data generated by the Hierarchical Volume Renderer (HVR) system. We have developed a new iteration of WebViz that can be easily interfaced with many problem domains in addition to HVR by employing the best practices of software engineering and object-oriented programming. This is done by separating the core WebViz system from domain specific code at an interface, leveraging inheritance and polymorphism to allow newly developed modules access to the core services. We employed several design patterns (model-view-controller, singleton, observer, and application controller) to engineer this highly modular system implemented in Java.

  19. The Invisible Web: Uncovering Information Sources Search Engines Can't See.

    Science.gov (United States)

    Sherman, Chris; Price, Gary

    This book takes a detailed look at the nature and extent of the Invisible Web, and offers pathfinders for accessing the valuable information it contains. It is designed to fit the needs of both novice and advanced Web searchers. Chapter One traces the development of the Internet and many of the early tools used to locate and share information via…

  20. An application of TOPSIS for ranking internet web browsers

    Directory of Open Access Journals (Sweden)

    Shahram Rostampour

    2012-07-01

    Full Text Available A web browser is one of the most important facilities for surfing the internet. A good web browser must incorporate literally tens of features, such as an integrated search engine, automatic updates, etc. Each year, ten web browsers are formally introduced as the best by some review organizations. In this paper, we propose the implementation of the TOPSIS technique to rank ten web browsers. The proposed model uses five criteria: speed, features, security, technical support, and supported configurations. In terms of speed, Safari is the best web browser, followed by Google Chrome and Internet Explorer, while Opera is the best when we look into 20 different features. We have also ranked these web browsers using all five categories together, and the results indicate that Opera, Internet Explorer, Firefox, and Google Chrome are the best web browsers to be chosen.
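
    TOPSIS itself is mechanical enough to sketch: normalize the decision matrix, weight it, and rank alternatives by relative closeness to the ideal solution. The matrix values and equal weights below are toy numbers, not the paper's browser data.

```python
# Minimal TOPSIS sketch: vector-normalize the decision matrix, weight it,
# measure each alternative's distance to the ideal and anti-ideal solutions,
# and rank by relative closeness. Three toy alternatives, three criteria.
import numpy as np

scores = np.array([[8.0, 6.0, 7.0],   # e.g. browser A: speed, features, security
                   [7.0, 9.0, 6.0],   # browser B
                   [6.0, 7.0, 9.0]])  # browser C (all treated as benefit criteria)
weights = np.array([1/3, 1/3, 1/3])

norm = scores / np.linalg.norm(scores, axis=0)  # vector normalization per criterion
v = norm * weights
ideal, anti = v.max(axis=0), v.min(axis=0)      # best/worst per (benefit) criterion
d_pos = np.linalg.norm(v - ideal, axis=1)
d_neg = np.linalg.norm(v - anti, axis=1)
closeness = d_neg / (d_pos + d_neg)             # 1 = ideal, 0 = anti-ideal
print(np.argsort(-closeness))                   # alternatives, best first
```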

  1. Ada & the Analytical Engine.

    Science.gov (United States)

    Freeman, Elisabeth

    1996-01-01

    Presents a brief history of Ada Byron King, Countess of Lovelace, focusing on her primary role in the development of the Analytical Engine--the world's first computer. Describes the Ada Project (TAP), a centralized World Wide Web site that serves as a clearinghouse for information related to women in computing, and provides a Web address for…

  2. Uncovering Web search strategies in South African higher education

    Directory of Open Access Journals (Sweden)

    Surika Civilcharran

    2016-11-01

    Full Text Available Background: In spite of the enormous amount of information available on the Web and the fact that search engines are continuously evolving to enhance the search experience, students are nevertheless faced with the difficulty of effectively retrieving information. It is, therefore, imperative for the interaction between students and search tools to be understood and search strategies to be identified, in order to promote successful information retrieval. Objectives: This study identifies the Web search strategies used by postgraduate students and forms part of a wider study into information retrieval strategies used by postgraduate students at the University of KwaZulu-Natal (UKZN), Pietermaritzburg campus, South Africa. Method: Largely underpinned by Thatcher's cognitive search strategies, the mixed-methods approach was utilised for this study, in which questionnaires were employed in Phase 1 and structured interviews in Phase 2. This article reports and reflects on the findings of Phase 2, which focus on identifying the Web search strategies employed by postgraduate students. The Phase 1 results were reported in Civilcharran, Hughes and Maharaj (2015). Results: Findings reveal the Web search strategies used for academic information retrieval. In spite of easy access to the invisible Web and the advent of meta-search engines, the use of Web search engines still remains the preferred search tool. The UKZN online library databases, and especially the UKZN online library Online Public Access Catalogue system, are being underutilised. Conclusion: Being ranked in the top three percent of the world's universities, UKZN is investing in search tools that are not being used to their full potential. This evidence suggests an urgent need for students to be trained in Web searching and to have greater exposure to a variety of search tools. This article is intended to further contribute to the design of undergraduate training programmes in order to deal

  3. Overview of the TREC 2013 federated web search track

    OpenAIRE

    Demeester, Thomas; Trieschnigg, D; Nguyen, D; Hiemstra, D

    2013-01-01

    The TREC Federated Web Search track is intended to promote research related to federated search in a realistic web setting, and hereto provides a large data collection gathered from a series of online search engines. This overview paper discusses the results of the first edition of the track, FedWeb 2013. The focus was on basic challenges in federated search: (1) resource selection, and (2) results merging. After an overview of the provided data collection and the relevance judgments for the ...

  4. A Web Service and Interface for Remote Electronic Device Characterization

    Science.gov (United States)

    Dutta, S.; Prakash, S.; Estrada, D.; Pop, E.

    2011-01-01

    A lightweight Web Service and a Web site interface have been developed, which enable remote measurements of electronic devices as a "virtual laboratory" for undergraduate engineering classes. Using standard browsers without additional plugins (such as Internet Explorer, Firefox, or even Safari on an iPhone), remote users can control a Keithley…

  5. Flow Webs: Mechanism and Architecture for the Implementation of Sensor Webs

    Science.gov (United States)

    Gorlick, M. M.; Peng, G. S.; Gasster, S. D.; McAtee, M. D.

    2006-12-01

    -time demands. Flows are the connective tissue of flow webs—massive computational engines organized as directed graphs whose nodes are semi-autonomous components and whose edges are flows. The individual components of a flow web may themselves be encapsulated flow webs. In other words, a flow web subgraph may be presented to a yet larger flow web as a single, seamless component. Flow webs, at all levels, may be edited and modified while still executing. Within a flow web individual components may be added, removed, started, paused, halted, reparameterized, or inspected. The topology of a flow web may be changed at will. Thus, flow webs exhibit an extraordinary degree of adaptivity and robustness as they are explicitly designed to be modified on the fly, an attribute well suited for dynamic model interactions in sensor webs. We describe our concept for a sensor web, implemented as a flow web, in the context of a wildfire disaster management system for the southern California region. Comprehensive wildfire management requires cooperation among multiple agencies. Flow webs allow agencies to share resources in exactly the manner they choose. We will explain how to employ flow webs and agents to integrate satellite remote sensing data, models, in-situ sensors, UAVs and other resources into a sensor web that interconnects organizations and their disaster management tools in a manner that simultaneously preserves their independence and builds upon the individual strengths of agency-specific models and data sources.

  6. FindZebra: a search engine for rare diseases.

    Science.gov (United States)

    Dragusin, Radu; Petcu, Paula; Lioma, Christina; Larsen, Birger; Jørgensen, Henrik L; Cox, Ingemar J; Hansen, Lars Kai; Ingwersen, Peter; Winther, Ole

    2013-06-01

    The web has become a primary information resource about illnesses and treatments for both medical and non-medical users. Standard web search is by far the most common interface to this information. It is therefore of interest to find out how well web search engines work for diagnostic queries and what factors contribute to successes and failures. Among diseases, rare (or orphan) diseases represent an especially challenging and thus interesting class to diagnose as each is rare, diverse in symptoms and usually has scattered resources associated with it. We design an evaluation approach for web search engines for rare disease diagnosis which includes 56 real life diagnostic cases, performance measures, information resources and guidelines for customising Google Search to this task. In addition, we introduce FindZebra, a specialized (vertical) rare disease search engine. FindZebra is powered by open source search technology and uses curated freely available online medical information. FindZebra outperforms Google Search in both default set-up and customised to the resources used by FindZebra. We extend FindZebra with specialized functionalities exploiting medical ontological information and UMLS medical concepts to demonstrate different ways of displaying the retrieved results to medical experts. Our results indicate that a specialized search engine can improve the diagnostic quality without compromising the ease of use of the currently widely popular standard web search. The proposed evaluation approach can be valuable for future development and benchmarking. The FindZebra search engine is available at http://www.findzebra.com/. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.

  7. Study on online community user motif using web usage mining

    Science.gov (United States)

    Alphy, Meera; Sharma, Ajay

    2016-04-01

    Web usage mining is an application of data mining used to extract useful information from online communities. The World Wide Web contained at least 4.73 billion pages according to the Indexed Web, and at least 228.52 million pages according to the Dutch Indexed Web, on Thursday, 6 August 2015. It is difficult to find the data one needs among these billions of web pages, and herein lies the importance of web usage mining. Personalizing the search engine helps web users identify the most used data in an easy way; it reduces time consumption through automatic site search and automatic retrieval of useful sites. This study surveys the techniques used in pattern discovery and analysis in web usage mining, from the earliest in 1996 to the latest in 2015. Analyzing user motives helps improve business, e-commerce, personalisation and websites.

  8. Noise and Vibration Risk Prevention Virtual Web for Ubiquitous Training

    Science.gov (United States)

    Redel-Macías, María Dolores; Cubero-Atienza, Antonio J.; Martínez-Valle, José Miguel; Pedrós-Pérez, Gerardo; del Pilar Martínez-Jiménez, María

    2015-01-01

    This paper describes a new Web portal offering experimental labs for ubiquitous training of university engineering students in work-related risk prevention. The Web-accessible computer program simulates the noise and machine vibrations met in the work environment, in a series of virtual laboratories that mimic an actual laboratory and provide the…

  9. Analysis of Web Spam for Non-English Content: Toward More Effective Language-Based Classifiers.

    Directory of Open Access Journals (Sweden)

    Mansour Alsaleh

    Web spammers aim to obtain higher ranks for their web pages by including spam contents that deceive search engines into including their pages in search results even when they are not related to the search terms. Search engines continue to develop new web spam detection mechanisms, but spammers also aim to improve their tools to evade detection. In this study, we first explore the effect of the page language on spam detection features and demonstrate how the best set of detection features varies according to the page language. We also study the performance of Google Penguin, a newly developed anti-web-spamming technique for its search engine. Using spam pages in Arabic as a case study, we show that, unlike for similar English pages, Google anti-spamming techniques are ineffective against a high proportion of Arabic spam pages. We then explore multiple detection features for spam pages to identify an appropriate set of features that yields a high detection accuracy compared with the integrated Google Penguin technique. In order to build and evaluate our classifier, as well as to help researchers conduct consistent measurement studies, we collected and manually labeled a corpus of Arabic web pages, including both benign and spam pages. Furthermore, we developed a browser plug-in that utilizes our classifier to warn users about spam pages after clicking on a URL and by filtering out search engine results. Using Google Penguin as a benchmark, we provide an illustrative example to show that language-based web spam classifiers are more effective for capturing spam contents.
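
    As a hedged illustration of the kind of language-specific classifier the study motivates (the authors' actual feature set and model are not given in the record), a minimal scikit-learn pipeline over character n-grams, which work for Arabic as well as English, might look as follows; the toy corpus and labels are invented.

        # Toy per-language spam/benign page classifier; the paper's actual
        # feature set is richer than plain TF-IDF, and this corpus is invented.
        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.linear_model import LogisticRegression
        from sklearn.pipeline import make_pipeline

        pages  = ["cheap pills buy now win prize", "history of Arabic calligraphy"]
        labels = [1, 0]                    # 1 = spam, 0 = benign (toy labels)

        # Train one classifier per page language, since the best features vary.
        clf = make_pipeline(TfidfVectorizer(analyzer="char", ngram_range=(2, 4)),
                            LogisticRegression(max_iter=1000))
        clf.fit(pages, labels)
        print(clf.predict(["free offer click here now"]))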

  10. A systematic framework to discover pattern for web spam classification

    OpenAIRE

    Jelodar, Hamed; Wang, Yongli; Yuan, Chi; Jiang, Xiaohui

    2017-01-01

    Web spam is a big problem for search engine users on the World Wide Web. Spammers use deceptive techniques to achieve high rankings. Although many researchers have presented different approaches for classification and web spam detection, it remains an open issue in computer science. Analyzing and evaluating these websites can be an effective step in discovering and categorizing their features. There are several methods and algorithms for detecting those websites, such as decision t...

  11. Modelo de web semántica para universidades

    Directory of Open Access Journals (Sweden)

    Karla Abad

    2015-12-01

    A study of the current state of microsites and repositories at the Universidad Estatal Península de Santa Elena found that their information lacked optimal and appropriate semantics. Under these circumstances, the need arises to create a semantic web structure model for universities, which was subsequently applied to the UPSE's microsites and digital repository as a test case. Part of this project included the installation of software modules with their respective configurations and the use of metadata standards such as DUBLIN CORE to improve SEO (search engine optimization); this has enabled the generation of standardized metadata and the creation of policies for uploading information. The use of metadata transforms simple data into well-organized structures that provide information and knowledge to generate results in web search engines. With the implementation of the semantic web model complete, it can be said that the university has improved its presence and visibility on the web through the indexing of its information in different search engines and its position in the Webometrics categorization of universities and repositories (a ranking that classifies universities around the world).

  12. Identify Web-page Content meaning using Knowledge based System for Dual Meaning Words

    OpenAIRE

    Sinha, Sukanta; Dattagupta, Rana; Mukhopadhyay, Debajyoti

    2012-01-01

    The meaning of Web-page content plays a big role when a search engine produces a search result. In most cases, the Web-page meaning is stored in the title or meta-tag area, but those meanings do not always match the Web-page content. To overcome this situation, we need to go through the Web-page content to identify the Web-page meaning. In cases where the Web-page content holds dual-meaning words, it is really difficult to identify the meaning of the Web-page. In this paper, we are introdu...

  13. Shear Behavior of Corrugated Steel Webs in H Shape Bridge Girders

    Directory of Open Access Journals (Sweden)

    Qi Cao

    2015-01-01

    In bridge engineering, girders with corrugated steel webs have shown good mechanical properties. With the promotion of composite bridges with corrugated steel webs, in particular steel-concrete composite girder bridges with corrugated steel webs, it is necessary to study the shear performance and buckling of the corrugated webs. In this research, by conducting experiments combined with finite element analysis, the stability of H-shaped beams welded with corrugated webs was tested and three failure modes were observed. Structural data including load-deflection, load-strain, and shear capacity of the tested beam specimens were collected and compared with FEM results computed in ANSYS. The effects of web thickness, corrugation, and stiffening on the shear capacity of corrugated webs were further discussed.

  14. A review of the reporting of web searching to identify studies for Cochrane systematic reviews.

    Science.gov (United States)

    Briscoe, Simon

    2018-03-01

    The literature searches that are used to identify studies for inclusion in a systematic review should be comprehensively reported. This ensures that the literature searches are transparent and reproducible, which is important for assessing the strengths and weaknesses of a systematic review and re-running the literature searches when conducting an update review. Web searching using search engines and the websites of topically relevant organisations is sometimes used as a supplementary literature search method. Previous research has shown that the reporting of web searching in systematic reviews often lacks important details and is thus not transparent or reproducible. Useful details to report about web searching include the name of the search engine or website, the URL, the date searched, the search strategy, and the number of results. This study reviews the reporting of web searching to identify studies for Cochrane systematic reviews published in the 6-month period August 2016 to January 2017 (n = 423). Of these reviews, 61 reviews reported using web searching using a search engine or website as a literature search method. In the majority of reviews, the reporting of web searching was found to lack essential detail for ensuring transparency and reproducibility, such as the search terms. Recommendations are made on how to improve the reporting of web searching in Cochrane systematic reviews. Copyright © 2017 John Wiley & Sons, Ltd.

  15. Blending vertical and web results: A case study using video intent

    NARCIS (Netherlands)

    Lefortier, D.; Serdyukov, P.; Romanenko, F.; de Rijke, M.; de Rijke, M.; Kenter, T.; de Vries, A.P.; Zhai, C.X.; de Jong, F.; Radinsky, K.; Hofmann, K.

    2014-01-01

    Modern search engines aggregate results from specialized verticals into the Web search results. We study a setting where vertical and Web results are blended into a single result list, a setting that has not been studied before. We focus on video intent and present a detailed observational study of

  16. The Number of Scholarly Documents on the Public Web

    Science.gov (United States)

    Khabsa, Madian; Giles, C. Lee

    2014-01-01

    The number of scholarly documents available on the web is estimated using capture/recapture methods by studying the coverage of two major academic search engines: Google Scholar and Microsoft Academic Search. Our estimates show that at least 114 million English-language scholarly documents are accessible on the web, of which Google Scholar has nearly 100 million. Of these, we estimate that at least 27 million (24%) are freely available since they do not require a subscription or payment of any kind. In addition, at a finer scale, we also estimate the number of scholarly documents on the web for fifteen fields: Agricultural Science, Arts and Humanities, Biology, Chemistry, Computer Science, Economics and Business, Engineering, Environmental Sciences, Geosciences, Material Science, Mathematics, Medicine, Physics, Social Sciences, and Multidisciplinary, as defined by Microsoft Academic Search. In addition, we show that among these fields the percentage of documents defined as freely available varies significantly, i.e., from 12 to 50%. PMID:24817403
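
    The capture/recapture estimate used in studies of this kind is typically the Lincoln-Petersen estimator; in LaTeX,

        \hat{N} = \frac{n_1\, n_2}{m}

    where n_1 and n_2 are the numbers of documents covered by the two search engines and m is the size of their overlap. As an illustration only (these counts are not taken from the paper), n_1 = 100M, n_2 = 40M and m = 35M would give \hat{N} \approx 114M.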

  17. The number of scholarly documents on the public web.

    Directory of Open Access Journals (Sweden)

    Madian Khabsa

    The number of scholarly documents available on the web is estimated using capture/recapture methods by studying the coverage of two major academic search engines: Google Scholar and Microsoft Academic Search. Our estimates show that at least 114 million English-language scholarly documents are accessible on the web, of which Google Scholar has nearly 100 million. Of these, we estimate that at least 27 million (24%) are freely available since they do not require a subscription or payment of any kind. In addition, at a finer scale, we also estimate the number of scholarly documents on the web for fifteen fields: Agricultural Science, Arts and Humanities, Biology, Chemistry, Computer Science, Economics and Business, Engineering, Environmental Sciences, Geosciences, Material Science, Mathematics, Medicine, Physics, Social Sciences, and Multidisciplinary, as defined by Microsoft Academic Search. In addition, we show that among these fields the percentage of documents defined as freely available varies significantly, i.e., from 12 to 50%.

  18. Comparative analysis of some search engines

    Directory of Open Access Journals (Sweden)

    Taiwo O. Edosomwan

    2010-10-01

    We compared the information retrieval performance of several popular search engines (Google, Yahoo, AlltheWeb, Gigablast, Zworks, AltaVista and Bing/MSN) in response to a list of ten queries of varying complexity. These queries were run on each search engine, and the precision and response time of the retrieved results were recorded. The first ten documents of each retrieval output were evaluated as 'relevant' or 'non-relevant' to measure each search engine's precision. To evaluate response time, normalised recall ratios were calculated at various cut-off points for each query and search engine. This study shows that Google appears to be the best search engine in terms of both average precision (70%) and average response time (2 s). Gigablast and AlltheWeb performed the worst overall in this study.
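
    For concreteness, precision at 10 and one common definition of normalised recall (Rocchio's) can be computed as in the Python sketch below; the relevance judgements and ranks are invented.

        # Precision@10 and normalised recall for one query (toy judgements).
        def precision_at_k(relevant_flags, k=10):
            top = relevant_flags[:k]
            return sum(top) / len(top)

        def normalised_recall(ranks_of_relevant, n_retrieved):
            n = len(ranks_of_relevant)
            ideal = sum(range(1, n + 1))             # best case: ranks 1..n
            return 1 - (sum(ranks_of_relevant) - ideal) / (n * (n_retrieved - n))

        flags = [1, 1, 0, 1, 0, 0, 1, 0, 0, 0]       # top-10 relevance judgements
        print(precision_at_k(flags))                 # 0.4
        print(normalised_recall([1, 2, 4, 7], 50))   # ranks of the 4 relevant docs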

  19. An open-source, mobile-friendly search engine for public medical knowledge.

    Science.gov (United States)

    Samwald, Matthias; Hanbury, Allan

    2014-01-01

    The World Wide Web has become an important source of information for medical practitioners. To complement the capabilities of currently available web search engines we developed FindMeEvidence, an open-source, mobile-friendly medical search engine. In a preliminary evaluation, the quality of results from FindMeEvidence proved to be competitive with those from TRIP Database, an established, closed-source search engine for evidence-based medicine.

  1. Discovery and Selection of Semantic Web Services

    CERN Document Server

    Wang, Xia

    2013-01-01

    For advanced web search engines to be able not only to search for semantically related information dispersed over different web pages, but also for semantic services providing certain functionalities, discovering semantic services is the key issue. Addressing four problems of current solutions, this book presents the following contributions. A novel service model independent of semantic service description models is proposed, which clearly defines all elements necessary for service discovery and selection. It centres on service selection and improves its efficiency. Corresponding selection algorithms and their implementation as components of the extended Semantically Enabled Service-oriented Architecture in the Web Service Modeling Environment are detailed. Many applications of semantic web services, e.g. discovery, composition and mediation, can benefit from a general approach for building application ontologies. With application ontologies thus built, services are discovered in the same way as with single...

  2. Developing BP-driven web application through the use of MDE techniques

    OpenAIRE

    Torres Bosch, Maria Victoria; Giner Blasco, Pau; Pelechano Ferragud, Vicente

    2012-01-01

    Model driven engineering (MDE) is a suitable approach for constructing software systems, in particular in the Web application domain. There are different types of Web applications depending on their purpose (i.e., document-centric, interactive, transactional, workflow/business-process-based, collaborative, etc.). This work focuses on business-process-based Web applications in order to understand business processes in a broad sense, from the lightweight business p...

  3. Clinical software development for the Web: lessons learned from the BOADICEA project.

    Science.gov (United States)

    Cunningham, Alex P; Antoniou, Antonis C; Easton, Douglas F

    2012-04-10

    In the past 20 years, society has witnessed the following landmark scientific advances: (i) the sequencing of the human genome, (ii) the distribution of software by the open source movement, and (iii) the invention of the World Wide Web. Together, these advances have provided a new impetus for clinical software development: developers now translate the products of human genomic research into clinical software tools; they use open-source programs to build them; and they use the Web to deliver them. Whilst this open-source component-based approach has undoubtedly made clinical software development easier, clinical software projects are still hampered by problems that traditionally accompany the software process. This study describes the development of the BOADICEA Web Application, a computer program used by clinical geneticists to assess risks to patients with a family history of breast and ovarian cancer. The key challenge of the BOADICEA Web Application project was to deliver a program that was safe, secure and easy for healthcare professionals to use. We focus on the software process, problems faced, and lessons learned. Our key objectives are: (i) to highlight key clinical software development issues; (ii) to demonstrate how software engineering tools and techniques can facilitate clinical software development for the benefit of individuals who lack software engineering expertise; and (iii) to provide a clinical software development case report that can be used as a basis for discussion at the start of future projects. We developed the BOADICEA Web Application using an evolutionary software process. Our approach to Web implementation was conservative and we used conventional software engineering tools and techniques. The principal software development activities were: requirements, design, implementation, testing, documentation and maintenance. The BOADICEA Web Application has now been widely adopted by clinical geneticists and researchers. BOADICEA Web

  4. Millennial Undergraduate Research Strategies in Web and Library Information Retrieval Systems

    Science.gov (United States)

    Porter, Brandi

    2011-01-01

    This article summarizes the author's dissertation regarding search strategies of millennial undergraduate students in Web and library online information retrieval systems. Millennials bring a unique set of search characteristics and strategies to their research since they have never known a world without the Web. Through the use of search engines,…

  5. Teen smoking cessation help via the Internet: a survey of search engines.

    Science.gov (United States)

    Edwards, Christine C; Elliott, Sean P; Conway, Terry L; Woodruff, Susan I

    2003-07-01

    The objective of this study was to assess Web sites related to teen smoking cessation on the Internet. Seven Internet search engines were searched using the keywords teen quit smoking. The top 20 hits from each search engine were reviewed and categorized. The keywords teen quit smoking produced between 35 and 400,000 hits depending on the search engine. Of 140 potential hits, 62% were active, unique sites; 85% were listed by only one search engine; and 40% focused on cessation. Findings suggest that legitimate online smoking cessation help for teens is constrained by the choice of search engine and the amount of time teens spend looking through potential sites. Resource listings should be updated regularly, smoking cessation Web sites need to surface in searches on multiple search engines, and further evaluation of smoking cessation Web sites needs to be conducted to identify the most effective help for teens.

  6. La ingeniería de requisitos una base fundamental para el desarrollo de proyectos de TI en la web.

    Directory of Open Access Journals (Sweden)

    Edwin Mejía

    2015-12-01

    Web-based computer systems have grown at a very fast pace. Over time, the web has become a place where institutions locate their important documents, handling large volumes of information at the organizational, educational and banking levels. Since their inception, web-based systems have carried all this systematization to solve the business problems at hand, but often without using any methodology for building them. Therefore, this article offers a brief introduction to Web applications, the methodologies for building such applications, requirements engineering for web applications, the techniques used for data collection, and the serious problems that several web applications have gone through, focusing on requirements engineering as the linchpin in the development of software projects.

  7. Final Technical Report and management: NUCLEAR ENGINEERING RECRUITMENT EFFORT

    International Nuclear Information System (INIS)

    Kerrick, Sharon S.; Vincent, Charles D.

    2007-01-01

    This report provides the summary of a project whose purpose was to support the costs of developing a nuclear engineering awareness program, an instruction program for teachers to integrate lessons on nuclear science and technology into their existing curricula, and web sites for the exchange of nuclear engineering career information and classroom materials. The specific objectives of the program were as follows: Objective 1--Increase awareness and interest of nuclear engineering; Objective 2--Instruct Teachers on nuclear topics; Objective 3--Nuclear education programs web-site; Objective 4--Support to university/industry matching grants and reactor sharing; Objective 5--Pilot project; and Objective 6--Nuclear engineering enrollment survey at universities

  8. Electrochromic properties of polyaniline-coated fiber webs for tissue engineering applications.

    Science.gov (United States)

    Beregoi, Mihaela; Busuioc, Cristina; Evanghelidis, Alexandru; Matei, Elena; Iordache, Florin; Radu, Mihaela; Dinischiotu, Anca; Enculescu, Ionut

    2016-08-30

    By combining the advantages of the electrospinning method (high surface-to-volume ratio, controlled morphology, varied composition and flexibility of the resulting structures) with the electrical activity of polyaniline, a new core-shell-type material with potential applications in the field of artificial muscles was synthesized. Thus, a poly(methylmethacrylate) solution was electrospun under optimized conditions to obtain randomly oriented polymer fiber webs. Further, a gold layer was sputtered on their surface in order to make them conductive and improve the mechanical properties. The metalized fiber webs were then covered with a PANI layer by in situ electrochemical polymerization, starting from aniline and using sulphuric acid as the oxidizing agent. By applying a small voltage to PANI-coated fiber webs in the presence of an electrolyte, the oxidation state of PANI changes, which is followed by a colour change of the device. The morphological, electrical and biological properties of the resulting multilayered material were also investigated. Copyright © 2015 Elsevier B.V. All rights reserved.

  9. Engineering web maps with gradual content zoom based on streaming vector data

    Science.gov (United States)

    Huang, Lina; Meijers, Martijn; Šuba, Radan; van Oosterom, Peter

    2016-04-01

    Vario-scale data structures have been designed to support gradual content zoom and the progressive transfer of vector data, for use with arbitrary map scales. The focus to date has been on the server side, especially on how to convert geographic data into the proposed vario-scale structures by means of automated generalisation. This paper contributes to the ongoing vario-scale research by focusing on the client side and communication, particularly on how this works in a web-services setting. It is claimed that these functionalities are urgently needed, as many web-based applications, both desktop and mobile, require gradual content zoom, progressive transfer and a high performance level. The web-client prototypes developed in this paper make it possible to assess the behaviour of vario-scale data and to determine how users will actually see the interactions. Several different options of web-services communication architectures are possible in a vario-scale setting. These options are analysed and tested with various web-client prototypes, with respect to functionality, ease of implementation and performance (amount of transmitted data and response times). We show that the vario-scale data structure can fit in with current web-based architectures and efforts to standardise map distribution on the internet. However, to maximise the benefits of vario-scale data, a client needs to be aware of this structure. When a client needs a map to be refined (by means of a gradual content zoom operation), only the 'missing' data will be requested. This data will be sent incrementally to the client from a server. In this way, the amount of data transferred at one time is reduced, shortening the transmission time. In addition to these conceptual architecture aspects, there are many implementation and tooling design decisions at play. These will also be elaborated on in this paper. Based on the experiments conducted, we conclude that the vario-scale approach indeed supports gradual
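
    The record does not specify the wire protocol, so the following is a hypothetical Python sketch of the client-side bookkeeping described above: the client remembers how much detail it already holds and requests only the missing increment. The /features endpoint and its parameters are assumptions, not the authors' API.

        # Hypothetical vario-scale client: fetch only the 'missing' refinement.
        import requests  # assumes a plain HTTP endpoint; the real service may differ

        class VarioScaleClient:
            def __init__(self, base_url):
                self.base_url = base_url
                self.have_upto = 0                   # importance rank already received

            def zoom_to(self, target_rank):
                if target_rank <= self.have_upto:    # zooming out: nothing to fetch
                    return []
                resp = requests.get(f"{self.base_url}/features",
                                    params={"from": self.have_upto, "to": target_rank})
                self.have_upto = target_rank
                return resp.json()                   # only the incremental features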

  10. Spiders and Worms and Crawlers, Oh My: Searching on the World Wide Web.

    Science.gov (United States)

    Eagan, Ann; Bender, Laura

    Searching on the world wide web can be confusing. A myriad of search engines exist, often with little or no documentation, and many of these search engines work differently from the standard search engines people are accustomed to using. Intended for librarians, this paper defines search engines, directories, spiders, and robots, and covers basics…

  11. Resource quantity and quality determine the inter-specific associations between ecosystem engineers and resource users in a cavity-nest web.

    Science.gov (United States)

    Robles, Hugo; Martin, Kathy

    2013-01-01

    While ecosystem engineering is a widespread structural force in ecological communities, the mechanisms underlying the inter-specific associations between ecosystem engineers and resource users are poorly understood. A proper knowledge of these mechanisms is, however, essential to understand how communities are structured. Previous studies suggest that increasing the quantity of resources provided by ecosystem engineers enhances populations of resource users. In a long-term study (1995-2011), we show that the quality of the resources (i.e. tree cavities) provided by ecosystem engineers is also a key feature that explains the inter-specific associations in a tree cavity-nest web. Red-naped sapsuckers (Sphyrapicus nuchalis) provided the most abundant cavities (52% of cavities, 0.49 cavities/ha). These cavities were less likely to be used than other cavity types by mountain bluebirds (Sialia currucoides), but provided numerous nest-sites (41% of nesting cavities) to tree swallows (Tachycineta bicolor). Swallows experienced low reproductive outputs in northern flicker (Colaptes auratus) cavities compared to those in sapsucker cavities (1.1 vs. 2.1 fledglings/nest), but the highly abundant flickers (33% of cavities, 0.25 cavities/ha) provided numerous suitable nest-sites for bluebirds (58%). The relatively scarce cavities supplied by hairy woodpeckers (Picoides villosus) and fungal/insect decay provided high quality nest-sites for both bluebirds and swallows. Because both the quantity and quality of resources supplied by different ecosystem engineers may explain the amount of resources used by each resource user, conservation strategies may require different management actions to be implemented for the key ecosystem engineer of each resource user. We, therefore, urge the incorporation of both resource quantity and quality into models that assess community dynamics to improve conservation actions and our understanding of ecological communities based on ecosystem engineering.

  12. Improving Web Page Retrieval using Search Context from Clicked Domain Names

    NARCIS (Netherlands)

    Li, R.

    Search context is a crucial factor that helps to understand a user’s information need in ad-hoc Web page retrieval. A query log of a search engine contains rich information on issued queries and their corresponding clicked Web pages. The clicked data implies its relevance to the query and can be

  13. A Longitudinal Analysis of Search Engine Index Size

    DEFF Research Database (Denmark)

    Van den Bosch, Antal; Bogers, Toine; De Kunder, Maurice

    2015-01-01

    One of the determining factors of the quality of Web search engines is the size of their index. In addition to its influence on search result quality, the size of the indexed Web can also tell us something about which parts of the WWW are directly accessible to the everyday user. We propose a novel method of estimating the size of a Web search engine's index by extrapolating from document frequencies of words observed in a large static corpus of Web pages. In addition, we provide a unique longitudinal perspective on the size of Google and Bing's indexes over a nine-year period, from March 2006 until January 2015. We find that index size estimates of these two search engines tend to vary dramatically over time, with Google generally possessing a larger index than Bing. This result raises doubts about the reliability of previous one-off estimates of the size of the indexed Web. We find...
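
    A minimal sketch of the extrapolation idea, assuming one has document frequencies from a reference corpus and hit counts reported by an engine; all numbers below are invented, not the paper's.

        # If a word occurs in a known fraction of documents in a large reference
        # corpus, an engine's hit count for that word scales up to an index size.
        corpus_df   = {"the": 0.62, "internet": 0.081}     # assumed corpus fractions
        engine_hits = {"the": 2.5e10, "internet": 3.1e9}   # hypothetical hit counts

        estimates = [hits / corpus_df[w] for w, hits in engine_hits.items()]
        index_size = sum(estimates) / len(estimates)       # average over probe words
        print(f"estimated index size: {index_size:.3e} documents")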

  14. Children's Search Engines from an Information Search Process Perspective.

    Science.gov (United States)

    Broch, Elana

    2000-01-01

    Describes cognitive and affective characteristics of children and teenagers that may affect their Web searching behavior. Reviews literature on children's searching in online public access catalogs (OPACs) and using digital libraries. Profiles two Web search engines. Discusses some of the difficulties children have searching the Web, in the…

  15. Analyzing Web Server Logs to Improve a Site's Usage. The Systems Librarian

    Science.gov (United States)

    Breeding, Marshall

    2005-01-01

    This column describes ways to streamline and optimize how a Web site works in order to improve both its usability and its visibility. The author explains how to analyze logs and other system data to measure the effectiveness of the Web site design and search engine.

  16. Analysis of Web Search Behavior and Information Evaluation Processes [Web の探索行動と情報評価過程の分析]

    OpenAIRE

    種市, 淳子; 逸村, 裕; TANEICHI, Junko; ITSUMURA, Hiroshi

    2005-01-01

    In this study, we discussed information seeking behavior on the Web. First, the current Web-searching studies are reviewed from the perspective of: (1) Web-searching characteristics; (2) the process model of how users evaluate Web resources. Secondly, we investigated the information seeking processes of undergraduate students using a Web search engine and an online public access catalogue (OPAC) system, through an experiment and its protocol analysis. The results indicate that: (1) Web-searching p...

  17. Improving web site performance using commercially available analytical tools.

    Science.gov (United States)

    Ogle, James A

    2010-10-01

    It is easy to accurately measure web site usage and to quantify key parameters such as page views, site visits, and more complex variables using commercially available tools that analyze web site log files and search engine use. This information can be used strategically to guide the design or redesign of a web site (templates, look-and-feel, and navigation infrastructure) to improve overall usability. The data can also be used tactically to assess the popularity and use of new pages and modules that are added and to rectify problems that surface. This paper describes software tools used to: (1) inventory search terms that lead to available content; (2) propose synonyms for commonly used search terms; (3) evaluate the effectiveness of calls to action; (4) conduct path analyses to targeted content. The American Academy of Orthopaedic Surgeons (AAOS) uses SurfRay's Behavior Tracking software (Santa Clara CA, USA, and Copenhagen, Denmark) to capture and archive the search terms that have been entered into the site's Google Mini search engine. The AAOS also uses Unica's NetInsight program to analyze its web site log files. These tools provide the AAOS with information that quantifies how well its web sites are operating and insights for making improvements to them. Although it is easy to quantify many aspects of an association's web presence, it also takes human involvement to analyze the results and then recommend changes. Without a dedicated resource to do this, the work often is accomplished only sporadically and on an ad hoc basis.
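
    Step (1) of this workflow, inventorying search terms, reduces to counting queries in the search log; a toy Python version follows, assuming a one-query-per-line log format (the file name and format are assumptions, not the AAOS setup).

        # Count the most frequent site-search queries in a plain-text log.
        from collections import Counter

        with open("site_search.log", encoding="utf-8") as f:
            terms = Counter(line.strip().lower() for line in f if line.strip())

        for term, count in terms.most_common(20):
            print(f"{count:6d}  {term}")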

  18. WEB-IS2: Next Generation Web Services Using Amira Visualization Package

    Science.gov (United States)

    Yang, X.; Wang, Y.; Bollig, E. F.; Kadlec, B. J.; Garbow, Z. A.; Yuen, D. A.; Erlebacher, G.

    2003-12-01

    Amira (www.amiravis.com) is a powerful 3-D visualization package and has been employed recently by the science and engineering communities to gain insight into their data. We present a new web-based interface to Amira, packaged in a Java applet. We have developed a module called WEB-IS/Amira (WEB-IS2), which provides web-based access to Amira. This tool allows earth scientists to manipulate Amira controls remotely and to analyze, render and view large datasets over the internet, without regard for time or location. This could have important ramifications for GRID computing. The design of our implementation will soon allow multiple users to visually collaborate by manipulating a single dataset through a variety of client devices. These clients will only require a browser capable of displaying Java applets. As the deluge of data continues, innovative solutions that maximize ease of use without sacrificing efficiency or flexibility will continue to gain in importance, particularly in the Earth sciences. Major initiatives, such as Earthscope (http://www.earthscope.org), which will generate at least a terabyte of data daily, stand to profit enormously by a system such as WEB-IS/Amira (WEB-IS2). We discuss our use of SOAP (Livingston, D., Advanced SOAP for Web development, Prentice Hall, 2002), a novel 2-way communication protocol, as a means of providing remote commands, and efficient point-to-point transfer of binary image data. We will present our initial experiences with the use of Naradabrokering (www.naradabrokering.org) as a means to decouple clients and servers. Information is submitted to the system as a published item, while it is retrieved through a subscription mechanisms, via what is known as "topics". These topic headers, their contents, and the list of subscribers are automatically tracked by Naradabrokering. This novel approach promises a high degree of fault tolerance, flexibility with respect to client diversity, and language independence for the

  19. WebDMS: A Web-Based Data Management System for Environmental Data

    Science.gov (United States)

    Ekstrand, A. L.; Haderman, M.; Chan, A.; Dye, T.; White, J. E.; Parajon, G.

    2015-12-01

    DMS is an environmental Data Management System to manage, quality-control (QC), summarize, document chain-of-custody, and disseminate data from networks ranging in size from a few sites to thousands of sites, instruments, and sensors. The server-client desktop version of DMS is used by local and regional air quality agencies (including the Bay Area Air Quality Management District, the South Coast Air Quality Management District, and the California Air Resources Board), the EPA's AirNow Program, and the EPA's AirNow-International (AirNow-I) program, which offers countries the ability to run an AirNow-like system. As AirNow's core data processing engine, DMS ingests, QCs, and stores real-time data from over 30,000 active sensors at over 5,280 air quality and meteorological sites from over 130 air quality agencies across the United States. As part of the AirNow-I program, several instances of DMS are deployed in China, Mexico, and Taiwan. The U.S. Department of State's StateAir Program also uses DMS for five regions in China and plans to expand to other countries in the future. Recent development has begun to migrate DMS from an onsite desktop application to WebDMS, a web-based application designed to take advantage of cloud hosting and computing services to increase scalability and lower costs. WebDMS will continue to provide easy-to-use data analysis tools, such as time-series graphs, scatterplots, and wind- or pollution-rose diagrams, as well as allowing data to be exported to external systems such as the EPA's Air Quality System (AQS). WebDMS will also provide new GIS analysis features and a suite of web services through a RESTful web API. These changes will better meet air agency needs and allow for broader national and international use (for example, by the AirNow-I partners). We will talk about the challenges and advantages of migrating DMS to the web, modernizing the DMS user interface, and making it more cost-effective to enhance and maintain over time.

  20. Asset Identification for Security Risk Assessment in Web Applications

    OpenAIRE

    Hisham M. Haddad; Brunil D. Romero

    2009-01-01

    As software applications become more complex, they require more security, allowing them to reach an appropriate level of quality to manage information and therefore achieve business objectives. Web applications represent one segment of the software industry where security risk assessment is essential. Web engineering must address new challenges to provide new techniques and tools that guarantee high-quality application development. This work focuses on asset identification, the initial step in sec...

  1. Where does it break? or : Why the semantic web is not just "research as usual"

    NARCIS (Netherlands)

    Van Harmelen, Frank

    2006-01-01

    Work on the Semantic Web is all too often phrased as a technological challenge: how to improve the precision of search engines, how to personalise web-sites, how to integrate weakly-structured data-sources, etc. This suggests that we will be able to realise the Semantic Web by merely applying (and

  2. Sharing casting technological data on web site

    Directory of Open Access Journals (Sweden)

    Li Hailan

    2008-11-01

    Based on database and asp.net technologies, a web platform for scientific data in the casting technology field has been developed. This paper presents the relevant data system structure, the approaches to data collection, the methods and policy for data sharing, and describes the recently collected and shared data. Statistics showed that about 20,000 visitors in China visit the related data through the web every day, proving that many engineers and other interested persons make use of the data.

  3. Usability Evaluation of Public Web Mapping Sites

    Science.gov (United States)

    Wang, C.

    2014-04-01

    Web mapping sites are interactive maps that are accessed via Web pages. With the rapid development of the Internet and the Geographic Information System (GIS) field, public web mapping sites are no longer foreign to people. Nowadays, people use these web mapping sites for various reasons, as a growing number of maps and related map services are freely available to end users. The increased use of web mapping sites has led to more usability studies. Usability Engineering (UE), for instance, is an approach for analyzing and improving the usability of websites by examining and evaluating an interface. In this research, the UE method was employed to explore the usability problems of four public web mapping sites, analyze the problems quantitatively and provide guidelines for future design based on the test results. Firstly, the development of usability studies is described, and several usability evaluation approaches such as Usability Engineering (UE), User-Centered Design (UCD) and Human-Computer Interaction (HCI) are briefly introduced. Then the method and procedure of the usability test are presented in detail. In this usability evaluation experiment, four public web mapping sites (Google Maps, Bing Maps, MapQuest, Yahoo Maps) were chosen as the test websites, and 42 people with different GIS skills (test users or experts), gender (male or female), age and nationality participated, completing several test tasks in different teams. The test comprised three parts: a pretest background information questionnaire, several test tasks for quantitative statistics and process analysis, and a posttest questionnaire. The pretest and posttest questionnaires focused on gaining qualitative, verbal explanations of the participants' actions, while the test tasks targeted gathering quantitative data on the errors and problems of the websites. Then, the results, mainly from the test part, were analyzed. The

  4. Survey of formal and informal citation in Google search engine

    Directory of Open Access Journals (Sweden)

    Afsaneh Teymourikhani

    2016-03-01

    Aim: Informal citations are bibliographic information (title or Internet address) citing sources of information resources in informal scholarly communication, and they are always neglected in traditional citation databases. This study was done in order to answer the question of whether informal citations in the web environment are traceable. The present research aims to determine what proportion of web citations in the Google search engine relates to formal and informal citation. Research method: Webometrics is the method used. The study covers 1,344 research articles from 98 open access journals, and the method used to extract the web citations from the Google search engine is 'Web/URL citation extraction'. Findings: The findings showed that ten percent of the web citations in the Google search engine are formal and informal citations. The highest formal citation rate in the Google search engine, 19.27%, is in the field of library and information science, and the lowest, 1.54%, is in the field of civil engineering. The highest percentage of informal citations, 3.57%, is in sociology, and the lowest, 0.39%, is in civil engineering. Journal citation is highest, 94.12%, in the surgical field and lowest, 5.26%, in the philosophy field. Result: Given that formal and informal citations in the Google search engine amount to about 10 percent, which is lower than in previous research, it seems that tracking citations with this engine should be treated with more caution. We see that the amount of formal citation varies across disciplines. Cited journals are most common in the field of surgery and least common in philosophy; this indicates that in philosophy, a subset of the social sciences, journals do not play a significant role in scholarly communication. On the other hand, books have a key role in this field
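
    As a rough illustration of 'Web/URL citation extraction' (the article's exact procedure is not given in the record), one might pull URLs from a saved result page and split them by host; everything below, including the journal-host list, is an assumption.

        # Toy Web/URL citation extraction: URLs from a saved result page,
        # split into journal-hosted (formal) vs. other (informal) citations.
        import re

        text = open("google_results.html", encoding="utf-8").read()
        urls = re.findall(r"https?://[^\s\"'<>]+", text)

        journal_hosts = ("springer.com", "elsevier.com", "wiley.com")  # assumed list
        formal   = [u for u in urls if any(h in u for h in journal_hosts)]
        informal = [u for u in urls if not any(h in u for h in journal_hosts)]
        print(len(formal), "formal,", len(informal), "informal")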

  5. A fuzzy method for improving the functionality of search engines based on user's web interactions

    Directory of Open Access Journals (Sweden)

    Farzaneh Kabirbeyk

    2015-04-01

    Web mining has been widely used to discover knowledge from various sources on the web. One of the important tools in web mining is the mining of web users' behaviour, which is considered a way to discover the potential knowledge of web users' interactions. Nowadays, website personalization is a popular phenomenon among web users, and it plays an important role in facilitating user access and providing the information users require based on their own interests. Extracting important features of web user behaviour plays a significant role in web usage mining. Such features are page visit frequency in each session, visit duration, and the dates of visiting certain pages. This paper presents a method to identify users' behaviour and predict their interests, proposing a list of pages based on those interests, using a fuzzy clustering technique. Because users have different interests and may pursue more than one at a time, a user's interest may belong to several clusters, and fuzzy clustering provides the possible overlap. The resulting clusters help extract fuzzy rules, which in turn help detect users' movement patterns; using a neural network, a list of suggested pages is then provided to the users.
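
    Since the record names fuzzy clustering but not its parameters, here is a minimal fuzzy c-means sketch in plain numpy over toy session features (visit frequency, visit duration); the cluster count, fuzzifier m and the data are assumptions.

        # Minimal fuzzy c-means: soft memberships allow overlapping interests.
        import numpy as np

        def fuzzy_cmeans(X, c=2, m=2.0, iters=100, seed=0):
            rng = np.random.default_rng(seed)
            U = rng.random((len(X), c))
            U /= U.sum(axis=1, keepdims=True)        # memberships sum to 1 per session
            for _ in range(iters):
                W = U ** m
                centers = (W.T @ X) / W.sum(axis=0)[:, None]
                d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-9
                U = 1.0 / d ** (2 / (m - 1))         # standard membership update
                U /= U.sum(axis=1, keepdims=True)
            return U, centers

        # Toy sessions: [page visit frequency, visit duration in seconds].
        X = np.array([[12, 300], [11, 280], [2, 40], [3, 55]], dtype=float)
        U, centers = fuzzy_cmeans(X)
        print(U.round(2))                            # overlapping cluster memberships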

  6. How Will Online Affiliate Marketing Networks Impact Search Engine Rankings?

    OpenAIRE

    Janssen, David; Heck, Eric

    2007-01-01

    In online affiliate marketing networks, advertising web sites offer their affiliates revenues based on provided web site traffic and the associated leads and sales. Advertising web sites can have a network of thousands of affiliates providing them with web site traffic through hyperlinks on their web sites. Search engines such as Google, MSN, and Yahoo consider hyperlinks a proof of quality and/or reliability of the linked web sites, and therefore use them to determine the relevanc...

  7. Web-based Interactive Simulator for Rotating Machinery.

    Science.gov (United States)

    Sirohi, Vijayalaxmi

    1999-01-01

    Baroma (Balance of Rotating Machinery), a Web-based interactive educational software package for teaching and learning engineering, combines didactic and software-ergonomic approaches. The software, in tutorial form, simulates a problem using Visual Interactive Simulation in a graphic display, and animation is brought about through a graphical user interface…

  8. Development and Evaluation of Mechatronics Learning System in a Web-Based Environment

    Science.gov (United States)

    Shyr, Wen-Jye

    2011-01-01

    The development of remote laboratories suitable for reinforcing undergraduate-level teaching of mechatronics is important. For this reason, a Web-based mechatronics learning system, called RECOLAB (REmote COntrol LABoratory), for remote learning in engineering education has been developed in this study. The web-based environment is an…

  9. The effect of query complexity on Web searching results

    Directory of Open Access Journals (Sweden)

    B.J. Jansen

    2000-01-01

    This paper presents findings from a study of the effects of query structure on retrieval by Web search services. Fifteen queries were selected from the transaction log of a major Web search service in simple query form with no advanced operators (e.g., Boolean operators, phrase operators, etc.) and submitted to 5 major search engines: Alta Vista, Excite, FAST Search, Infoseek, and Northern Light. The results from these queries became the baseline data. The original 15 queries were then modified using the various search operators supported by each of the 5 search engines, for a total of 210 queries. Each of these 210 queries was also submitted to the applicable search service. The results obtained were then compared to the baseline results. A total of 2,768 search results were returned by the set of all queries. In general, increasing the complexity of the queries had little effect on the results, with a greater than 70% overlap in results, on average. Implications for the design of Web search services and directions for future research are discussed.
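
    The overlap figure reported here can be computed per query as the fraction of baseline results that reappear in the modified query's results; a toy Python version with invented URL lists follows.

        # Fraction of baseline results that also appear after adding operators.
        def overlap(baseline, modified):
            base = set(baseline)
            return len(base & set(modified)) / len(base)

        baseline = ["a.com", "b.com", "c.com", "d.com", "e.com"]
        modified = ["a.com", "b.com", "c.com", "x.com", "d.com"]
        print(f"{overlap(baseline, modified):.0%}")  # 80%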

  10. INTERFACING GOOGLE SEARCH ENGINE TO CAPTURE USER WEB SEARCH BEHAVIOR

    OpenAIRE

    Fadhilah Mat Yamin; T. Ramayah

    2013-01-01

    The behaviour of the searcher when using a search engine, especially during query formulation, is crucial. Search engines capture users' activities in the search log, which is stored at the search engine server. Due to the difficulty of obtaining this search log, this paper proposes and develops an interface framework to interface with the Google search engine. This interface captures users' queries before redirecting them to Google. The analysis of the search log will show that users are utili...

  11. The Web and Information Literacy: Scaffolding the use of Web Sources in a Project-Based Curriculum

    Science.gov (United States)

    Walton, Marion; Archer, Arlene

    2004-01-01

    In this article we describe and discuss a three-year case study of a course in web literacy, part of the academic literacy curriculum for first-year engineering students at the University of Cape Town (UCT). Because they are seen as practical knowledge, not theoretical, information skills tend to be devalued at university and rendered invisible to…

  12. Web-enabled Data Warehouse and Data Webhouse

    Directory of Open Access Journals (Sweden)

    Cerasela PIRVU

    2008-01-01

    In this paper, our objectives are to understand what a data warehouse means, examine the reasons for building one, appreciate the implications of the convergence of Web technologies and those of the data warehouse, and examine the steps for building a Web-enabled data warehouse. The web revolution has propelled the data warehouse out onto the main stage, because in many situations the data warehouse must be the engine that controls or analyzes the web experience. In order to step up to this new responsibility, the data warehouse must adjust. The nature of the data warehouse needs to be somewhat different. As a result, our data warehouses are becoming data webhouses. The data warehouse is becoming the infrastructure that supports customer relationship management (CRM), and it is being asked to make the customer clickstream available for analysis. This rebirth of data warehousing architecture is called the data webhouse.

  13. Use of Web Search Engines and Personalisation in Information Searching for Educational Purposes

    Science.gov (United States)

    Salehi, Sara; Du, Jia Tina; Ashman, Helen

    2018-01-01

    Introduction: Students increasingly depend on Web search for educational purposes. This causes concerns among education providers as some evidence indicates that in higher education, the disadvantages of Web search and personalised information are not justified by the benefits. Method: One hundred and twenty university students were surveyed about…

  14. Stability evaluation of modernized bank protections in a culvert construction

    Science.gov (United States)

    Cholewa, Mariusz; Plesiński, Karol; Kamińska, Katarzyna; Wójcik, Izabela

    2018-02-01

    The paper presents a stability evaluation of the banks of the Wilga River on a chosen stretch in Koźmice Wielkie, Małopolska Province. The examined stretch included the river bed upstream from the culvert on a district road. The culvert construction, built over four decades ago, was disassembled in 2014. The former construction, two pipes that were 1.4 m in diameter, was entirely removed. The investor decided to build a new construction in the form of in-situ poured reinforced concrete with a 4 x 2 m cross section. The change of geometry and the different location in relation to the river current caused an increase in the flow velocity and, as a consequence, erosion of both protected and natural banks. Groundwater conditions were determined based on the geotechnical tests that were carried out on soil samples taken from the banks and the river bed. Stability calculations of the natural slopes of the Wilga River and the ones protected with riprap indicate mistakes in the design project concerning the construction of the river banks. The purpose of the study was to determine the stability of the Wilga River banks on a selected section adjacent to the rebuilt culvert. The stability of a chosen cross section was analysed in the paper. The presented conclusions are based on the results of geotechnical tests and numerical calculations.

  15. How Can We Educate Students on the Web Engineering Discipline via the Web? The NTUA's Approach.

    NARCIS (Netherlands)

    Retalis, Symeon; Avgeriou, Paris; Skordalakis, Manolis

    2000-01-01

    Over the last years the Web has been increasingly used as a platform for supporting the delivery of flexible and interactive hypermedia applications. However, it is admitted that the dominant approach is ad hoc development. Developers should be educated in the use of effective processes, process

  16. Model Driven Engineering

    Science.gov (United States)

    Gaševic, Dragan; Djuric, Dragan; Devedžic, Vladan

    A relevant initiative from the software engineering community called Model Driven Engineering (MDE) is being developed in parallel with the Semantic Web (Mellor et al. 2003a). The MDE approach to software development suggests that one should first develop a model of the system under study, which is then transformed into the real thing (i.e., an executable software entity). The most important research initiative in this area is the Model Driven Architecture (MDA), which is being developed under the umbrella of the Object Management Group (OMG). This chapter describes the basic concepts of this software engineering effort.
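
    A toy rendering of the model-to-code step may help: a platform-independent model (here just a dictionary) is transformed into an executable artifact. Real MDA tooling works from UML/MOF models, so this is only a sketch of the idea.

```python
# Toy MDE model-to-text transformation: a dict-based model is turned into
# Python source and then executed. Purely illustrative.
model = {
    "class": "Sensor",
    "attributes": [("name", "str"), ("reading", "float")],
}

def to_python(m: dict) -> str:
    lines = [f"class {m['class']}:"]
    params = ", ".join(f"{a}: {t}" for a, t in m["attributes"])
    lines.append(f"    def __init__(self, {params}):")
    lines += [f"        self.{a} = {a}" for a, _ in m["attributes"]]
    return "\n".join(lines)

source = to_python(model)
print(source)
exec(source)                        # the model becomes a real class
print(Sensor("t1", 21.5).reading)   # 21.5
```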

  17. Semantic web for the working ontologist effective modeling in RDFS and OWL

    CERN Document Server

    Allemang, Dean

    2011-01-01

    Semantic Web models and technologies provide information in machine-readable languages that enable computers to access the Web more intelligently and perform tasks automatically without the direction of users. These technologies are relatively recent and advancing rapidly, creating a set of unique challenges for those developing applications. Semantic Web for the Working Ontologist is the essential, comprehensive resource on semantic modeling, for practitioners in health care, artificial intelligence, finance, engineering, military intelligence, enterprise architecture, and more. Focused on
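
    For a flavor of the RDFS modeling the book covers, the snippet below builds a tiny schema and instance with the rdflib library; the ex: vocabulary is invented for the example.

```python
# A small taste of RDFS modeling with rdflib; the ex: terms are invented.
from rdflib import Graph, Literal, Namespace, RDF, RDFS

EX = Namespace("http://example.org/")
g = Graph()
g.bind("ex", EX)

# Schema: Engineer is a subclass of Person.
g.add((EX.Person, RDF.type, RDFS.Class))
g.add((EX.Engineer, RDF.type, RDFS.Class))
g.add((EX.Engineer, RDFS.subClassOf, EX.Person))

# Data: one individual with a label.
g.add((EX.ada, RDF.type, EX.Engineer))
g.add((EX.ada, RDFS.label, Literal("Ada")))

print(g.serialize(format="turtle"))
```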

  18. Information Retrieval for Education: Making Search Engines Language Aware

    Science.gov (United States)

    Ott, Niels; Meurers, Detmar

    2010-01-01

    Search engines have been a major factor in making the web the successful and widely used information source it is today. Generally speaking, they make it possible to retrieve web pages on a topic specified by the keywords entered by the user. Yet web searching currently does not take into account which of the search results are comprehensible for…

  19. TECHNIQUES USED IN SEARCH ENGINE MARKETING

    OpenAIRE

    Assoc. Prof. Liviu Ion Ciora Ph. D; Lect. Ion Buligiu Ph. D

    2010-01-01

    Search engine marketing (SEM) is a generic term covering a variety of marketing techniques intended for attracting web traffic in search engines and directories. SEM is a popular tool since it has the potential of substantial gains with minimum investment. On the one side, most search engines and directories offer free or extremely cheap listing. On the other side, the traffic coming from search engines and directories tends to be motivated for acquisitions, making these visitors some of the ...

  20. Transactions Concurrency Control in Web Service Environment

    DEFF Research Database (Denmark)

    Alrifai, Mohammad; Dolog, Peter; Nejdl, Wolfgang

    2006-01-01

    Business transactions in web service environments run with relaxed isolation and atomicity properties. In such environments, transactions can commit and roll back independently of each other. Transaction management has to reflect this issue and address the problems which result, for example, from concurrent access to web service resources and data. In this paper we propose an extension to the WS-Transaction Protocol which ensures the consistency of the data when independent business transactions access the data concurrently under the relaxed transaction properties. Our extension is based... The approach is appealing from an engineering point of view as it does not change the way consumers or clients of web services have to be programmed. Furthermore, it avoids direct communication between transaction coordinators, which preserves security by keeping the information about business transactions restricted to the coordinators...

  1. Web-Based Distributed Simulation of Aeronautical Propulsion System

    Science.gov (United States)

    Zheng, Desheng; Follen, Gregory J.; Pavlik, William R.; Kim, Chan M.; Liu, Xianyou; Blaser, Tammy M.; Lopez, Isaac

    2001-01-01

    An application was developed to allow users to run and view the Numerical Propulsion System Simulation (NPSS) engine simulations from web browsers. Simulations were performed on multiple Information Power Grid (IPG) test beds. The Common Object Request Broker Architecture (CORBA) was used for brokering data exchange among machines, and IPG/Globus for job scheduling and remote process invocation. Web server scripting was performed by JavaServer Pages (JSP). This application has proven to be an effective and efficient way to couple heterogeneous distributed components.

  2. From the Director: Surfing the Web for Health Information

    Science.gov (United States)

    ... medical library, to give you easy access to authoritative health information from across the World Wide Web. ... engine, the top-ten results will likely include authoritative nonbiased sites alongside commercial sites and those with ...

  3. Web server for priority ordered multimedia services

    Science.gov (United States)

    Celenk, Mehmet; Godavari, Rakesh K.; Vetnes, Vermund

    2001-10-01

    In this work, our aim is to provide finer priority levels in the design of a general-purpose Web multimedia server with provisions of the CM services. The type of services provided include reading/writing a web page, downloading/uploading an audio/video stream, navigating the Web through browsing, and interactive video teleconferencing. The selected priority encoding levels for such operations follow the order of admin read/write, hot page CM and Web multicasting, CM read, Web read, CM write and Web write. Hot pages are the most requested CM streams (e.g., the newest movies, video clips, and HDTV channels) and Web pages (e.g., portal pages of the commercial Internet search engines). Maintaining a list of these hot Web pages and CM streams in a content addressable buffer enables a server to multicast hot streams with lower latency and higher system throughput. Cold Web pages and CM streams are treated as regular Web and CM requests. Interactive CM operations such as pause (P), resume (R), fast-forward (FF), and rewind (RW) have to be executed without allocation of extra resources. The proposed multimedia server model is a part of the distributed network with load balancing schedulers. The SM is connected to an integrated disk scheduler (IDS), which supervises an allocated disk manager. The IDS follows the same priority handling as the SM, and implements a SCAN disk-scheduling method for an improved disk access and a higher throughput. Different disks are used for the Web and CM services in order to meet the QoS requirements of CM services. The IDS ouput is forwarded to an Integrated Transmission Scheduler (ITS). The ITS creates a priority ordered buffering of the retrieved Web pages and CM data streams that are fed into an auto regressive moving average (ARMA) based traffic shaping circuitry before being transmitted through the network.
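
    The priority ordering described above maps naturally onto a heap-based scheduler. The sketch below is a minimal illustration, assuming the numeric levels mirror the order given in the abstract; it is not the paper's implementation.

```python
# Heap-based dispatcher for the priority levels listed in the abstract;
# the server always dequeues the highest-priority pending request first.
import heapq
import itertools

PRIORITY = {
    "admin_rw": 0,
    "hot_multicast": 1,   # hot page CM and Web multicasting
    "cm_read": 2,
    "web_read": 3,
    "cm_write": 4,
    "web_write": 5,
}

counter = itertools.count()   # tie-breaker keeps FIFO order within a level
queue: list[tuple[int, int, str]] = []

def submit(kind: str, request_id: str) -> None:
    heapq.heappush(queue, (PRIORITY[kind], next(counter), request_id))

submit("web_read", "r1")
submit("admin_rw", "r2")
submit("cm_read", "r3")

while queue:
    _, _, request_id = heapq.heappop(queue)
    print("serving", request_id)   # r2, r3, r1
```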

  4. WebViz: A Web-based Collaborative Interactive Visualization System for Large-Scale Data Sets

    Science.gov (United States)

    Yuen, D. A.; McArthur, E.; Weiss, R. M.; Zhou, J.; Yao, B.

    2010-12-01

    WebViz is a web-based application designed to conduct collaborative, interactive visualizations of large data sets for multiple users, allowing researchers situated all over the world to utilize the visualization services offered by the University of Minnesota's Laboratory for Computational Sciences and Engineering (LCSE). This ongoing project has been built upon over the last 3 1/2 years. The motivation behind WebViz lies primarily with the need to parse through an increasing amount of data produced by the scientific community as a result of larger and faster multicore and massively parallel computers coming to the market, including the use of general-purpose GPU computing. WebViz allows these large data sets to be visualized online by anyone with an account. The application allows users to save time and resources by visualizing data 'on the fly', wherever he or she may be located. By leveraging AJAX via the Google Web Toolkit (http://code.google.com/webtoolkit/), we are able to provide users with a remote, web portal to LCSE's (http://www.lcse.umn.edu) large-scale interactive visualization system already in place at the University of Minnesota. LCSE's custom hierarchical volume rendering software provides high resolution visualizations on the order of 15 million pixels and has been employed for visualizing data primarily from simulations in astrophysics to geophysical fluid dynamics. In the current version of WebViz, we have implemented a highly extensible back-end framework built around HTTP "server push" technology. The web application is accessible via a variety of devices including netbooks, iPhones, and other web- and JavaScript-enabled cell phones. Features in the current version include the ability for users to (1) securely login, (2) launch multiple visualizations, (3) conduct collaborative visualization sessions, (4) delegate control aspects of a visualization to others, and (5) engage in collaborative chats with other users within the user interface.

  5. A critical evaluation of Web sites offering patient information on tinnitus.

    LENUS (Irish Health Repository)

    Kieran, Stephen M

    2012-02-01

    The Internet is a vast information resource for both patients and healthcare professionals. However, the quality and content often lack formal scrutiny, so we examined the quality of patient information regarding tinnitus on the Internet. Using the three most popular search engines (google.com, yahoo.com, and msn.com), we found pertinent Web sites using the search term tinnitus. Web sites' accountability and authorship were evaluated using previously published criteria. The quality of patient information about tinnitus was assessed using a new 10-point scale, the Tinnitus Information Value (TIV). Statistical analysis was performed using the independent sample t-test (p < 0.05 considered significant). A list of Web sites was constructed using the first 30 English-language Web sites identified by each search engine. After duplicates and sites only containing links to other Web sites were eliminated, 39 remained. The mean score for accountability was 2.13 on a scale of 0 to 7. The mean TIV was 5.0 on a scale of 0 to 10. Only 12 sites (30.8%) had their authors clearly identified. Twenty-two (56.4%) sites were sponsored by commercial interests or represented private practices. The mean TIV was significantly higher (p = 0.037) for noncommercial (personal, academic institution, or charity) sites (5.88 ± 2.39 SD) than those representing commercial interests (4.32 ± 2.10 SD). Tinnitus information available on the Internet is indeed variable, and care should be taken in recommending tinnitus Web sites to patients.

  6. WebLab-Deusto-CPLD: A Practical Experience

    Directory of Open Access Journals (Sweden)

    Veronica Canivell

    2012-01-01

    Full Text Available This paper shows the experience at the University of Deusto with the WebLab-Deusto-CPLD in the subject “Programmable Logic” of the Faculty of Engineering in the field of Digital Electronics. Presented herein is a technical overview of the laboratory, and its characteristics.

  7. Deep web query interface understanding and integration

    CERN Document Server

    Dragut, Eduard C; Yu, Clement T

    2012-01-01

    There are millions of searchable data sources on the Web and to a large extent their contents can only be reached through their own query interfaces. There is an enormous interest in making the data in these sources easily accessible. There are primarily two general approaches to achieve this objective. The first is to surface the contents of these sources from the deep Web and add the contents to the index of regular search engines. The second is to integrate the searching capabilities of these sources and support integrated access to them. In this book, we introduce the state-of-the-art tech

  8. Letting go of the words writing web content that works

    CERN Document Server

    Redish, Janice (Ginny)

    2012-01-01

    Web site design and development continues to become more sophisticated; an important part of this maturity originates with well-laid-out and well-written content. Ginny Redish is a world-renowned expert on information design and how to produce clear writing in plain language for the web. All of the invaluable information that she shared in the first edition is included, with numerous new examples. New information on content strategy for web sites, search engine optimization (SEO), and social media will enhance the book's content, making it once again the only book you need to own to o

  9. Guide to cleaner coal technology-related web sites

    Energy Technology Data Exchange (ETDEWEB)

    Davidson, R; Jenkins, N; Zhang, X [IEA Coal Research - The Clean Coal Centre, London (United Kingdom)

    2001-07-01

    The 'Guide to Cleaner Coal Technology-Related Web Sites' is a guide to web sites that contain important information on cleaner coal technologies (CCT). It contains a short introduction to the World Wide Web and gives advice on how to search for information using directories and search engines. The core section of the Guide is a collection of factsheets summarising the information available on over 65 major web sites selected from organizations worldwide (except those promoting companies). These sites contain a wealth of information on CCT research and development, technology transfer, financing and markets. The factsheets are organised in the following categories: Associations, research centres and programmes; Climate change and sustainable development; Cooperative ventures; Electronic journals; Financial institutions; International organizations; National government information; and Statistical information. A full subject index is provided. The Guide concludes with some general comments on the quality of the sites reviewed.

  10. Finding Web-Based Anxiety Interventions on the World Wide Web: A Scoping Review.

    Science.gov (United States)

    Ashford, Miriam Thiel; Olander, Ellinor K; Ayers, Susan

    2016-06-01

    One relatively new and increasingly popular approach of increasing access to treatment is Web-based intervention programs. The advantage of Web-based approaches is the accessibility, affordability, and anonymity of potentially evidence-based treatment. Despite much research evidence on the effectiveness of Web-based interventions for anxiety found in the literature, little is known about what is publically available for potential consumers on the Web. Our aim was to explore what a consumer searching the Web for Web-based intervention options for anxiety-related issues might find. The objectives were to identify currently publically available Web-based intervention programs for anxiety and to synthesize and review these in terms of (1) website characteristics such as credibility and accessibility; (2) intervention program characteristics such as intervention focus, design, and presentation modes; (3) therapeutic elements employed; and (4) published evidence of efficacy. Web keyword searches were carried out on three major search engines (Google, Bing, and Yahoo-UK platforms). For each search, the first 25 hyperlinks were screened for eligible programs. Included were programs that were designed for anxiety symptoms, currently publically accessible on the Web, had an online component, a structured treatment plan, and were available in English. Data were extracted for website characteristics, program characteristics, therapeutic characteristics, as well as empirical evidence. Programs were also evaluated using a 16-point rating tool. The search resulted in 34 programs that were eligible for review. A wide variety of programs for anxiety, including specific anxiety disorders, and anxiety in combination with stress, depression, or anger were identified and based predominantly on cognitive behavioral therapy techniques. The majority of websites were rated as credible, secure, and free of advertisement. The majority required users to register and/or to pay a program access

  11. Síntesis y crítica de las evaluaciones de la efectividad de los motores de búsqueda en la Web. (Synthesis and critical review of evaluations of the effectiveness of Web search engines)

    Directory of Open Access Journals (Sweden)

    Francisco Javier Martínez Méndez

    2003-01-01

    Full Text Available A considerable number of proposals for measuring the effectiveness of information retrieval systems have been made since the early days of such systems. The consolidation of the World Wide Web as the paradigmatic method for developing the Information Society, and the continuous multiplication of the number of documents published in this environment, have led to the implementation of the most advanced and extensive information retrieval systems, in the shape of web search engines. Nevertheless, there is an underlying concern about the effectiveness of these systems, especially when they usually present, in response to a question, many documents with little relevance to the users' information needs. The evaluation of these systems has been, up to now, dispersed and varied. The scattering is due to the lack of uniformity in the criteria used in evaluation, and this disparity derives from their aperiodicity and variable coverage. In this review, we identify three groups of studies: explicit evaluations, experimental evaluations and, more recently, several proposals for the establishment of a global framework to evaluate these systems.

  12. 29 CFR 1926.758 - Systems-engineered metal buildings.

    Science.gov (United States)

    2010-07-01

    ... 29 Labor 8 2010-07-01 2010-07-01 false Systems-engineered metal buildings. 1926.758 Section 1926... Systems-engineered metal buildings. (a) All of the requirements of this subpart apply to the erection of systems-engineered metal buildings except §§ 1926.755 (column anchorage) and 1926.757 (open web steel...

  13. Digital dissemination platform of transportation engineering education materials.

    Science.gov (United States)

    2014-09-01

    National agencies have called for more widespread adoption of best practices in engineering education. To facilitate this sharing of practices we will develop a web-based system that will be used by transportation engineering educators to share curri...

  14. Comparative Study on Three Major Internet Search Engines ...

    African Journals Online (AJOL)

    ... Yahoo, Google and ask.com search engines. An experimental method was used with ten reference questions, which were used to query each of the search engines. Yahoo obtained the highest results (521,801,043) among the three Web search ...

  15. Web Page Recommendation Using Web Mining

    OpenAIRE

    Modraj Bhavsar; Mrs. P. M. Chavan

    2014-01-01

    On the World Wide Web, various kinds of content are generated in huge amounts, so web recommendation has become an important part of web applications, helping to give relevant results to users. Different kinds of web recommendations are made available to users every day, including images, video, audio, query suggestions and web pages. In this paper we aim at providing a framework for web page recommendation. 1) First we describe the basics of web mining and the types of web mining. 2) Details of each...

  16. Searching for information on the World Wide Web with a search engine: a pilot study on cognitive flexibility in younger and older users.

    Science.gov (United States)

    Dommes, Aurelie; Chevalier, Aline; Rossetti, Marilyne

    2010-04-01

    This pilot study investigated the age-related differences in searching for information on the World Wide Web with a search engine. 11 older adults (6 men, 5 women; M age=59 yr., SD=2.76, range=55-65 yr.) and 12 younger adults (2 men, 10 women; M=23.7 yr., SD=1.07, range=22-25 yr.) had to conduct six searches differing in complexity, and for which a search method was or was not induced. The results showed that the younger and older participants provided with an induced search method were less flexible than the others and produced fewer new keywords. Moreover, older participants took longer than the younger adults, especially in the complex searches. The younger participants were flexible in the first request and spontaneously produced new keywords (spontaneous flexibility), whereas the older participants only produced new keywords when confronted by impasses (reactive flexibility). Aging may influence web searches, especially the nature of keywords used.

  17. Focused Crawling of the Deep Web Using Service Class Descriptions

    Energy Technology Data Exchange (ETDEWEB)

    Rocco, D; Liu, L; Critchlow, T

    2004-06-21

    Dynamic Web data sources--sometimes known collectively as the Deep Web--increase the utility of the Web by providing intuitive access to data repositories anywhere that Web access is available. Deep Web services provide access to real-time information, like entertainment event listings, or present a Web interface to large databases or other data repositories. Recent studies suggest that the size and growth rate of the dynamic Web greatly exceed that of the static Web, yet dynamic content is often ignored by existing search engine indexers owing to the technical challenges that arise when attempting to search the Deep Web. To address these challenges, we present DynaBot, a service-centric crawler for discovering and clustering Deep Web sources offering dynamic content. DynaBot has three unique characteristics. First, DynaBot utilizes a service class model of the Web implemented through the construction of service class descriptions (SCDs). Second, DynaBot employs a modular, self-tuning system architecture for focused crawling of the Deep Web using service class descriptions. Third, DynaBot incorporates methods and algorithms for efficient probing of the Deep Web and for discovering and clustering Deep Web sources and services through SCD-based service matching analysis. Our experimental results demonstrate the effectiveness of the service class discovery, probing, and matching algorithms and suggest techniques for efficiently managing service discovery in the face of the immense scale of the Deep Web.
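
    The SCD-based service matching can be illustrated with a simple set-similarity sketch: a discovered query interface is scored against the parameters a service class expects. The field names and scoring rule are assumptions, not DynaBot's actual algorithm.

```python
# Hedged sketch of service-class matching: score a deep-web form against
# a service class description (SCD) by comparing input fields.
def match_score(form_fields: set[str], scd_params: set[str]) -> float:
    """Jaccard similarity between a form's fields and an SCD's parameters."""
    if not form_fields and not scd_params:
        return 0.0
    return len(form_fields & scd_params) / len(form_fields | scd_params)

scd_event_search = {"city", "date", "category"}   # service class description
form_a = {"city", "date", "category", "radius"}   # candidate deep-web form
form_b = {"isbn", "author"}                       # unrelated form

print(match_score(form_a, scd_event_search))  # 0.75 -> likely a class member
print(match_score(form_b, scd_event_search))  # 0.0  -> rejected
```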

  18. Information about liver transplantation on the World Wide Web.

    Science.gov (United States)

    Hanif, F; Sivaprakasam, R; Butler, A; Huguet, E; Pettigrew, G J; Michael, E D A; Praseedom, R K; Jamieson, N V; Bradley, J A; Gibbs, P

    2006-09-01

    Orthotopic liver transplant (OLTx) has evolved into a successful surgical treatment for end-stage liver diseases. Awareness and information about OLTx is an important tool in assisting OLTx recipients and people supporting them, including non-transplant clinicians. The study aimed to investigate the nature and quality of liver transplant-related patient information on the World Wide Web. Four common search engines were used to explore the Internet using the key words 'Liver transplant'. The URLs (uniform resource locators) of the top 50 returns were chosen, as it was judged unlikely that the average user would search beyond the first 50 sites returned by a given search. Each Web site was assessed on the following categories: origin, language, accessibility and extent of the information. A weighted Information Score (IS) was created to assess the quality of the clinical and educational value of each Web site and was scored independently by three transplant clinicians. The Internet search performed with the aid of the four search engines yielded a total of 2,255,244 Web sites. Of the 200 possible sites, only 58 Web sites were assessed because of repetition of the same Web sites and non-accessible links. The overall median weighted IS was 22 (IQR 1 - 42). Of the 58 Web sites analysed, 45 (77%) belonged to the USA, six (10%) were European, and seven (12%) were from the rest of the world. The median weighted IS of publications originating from Europe and the USA was 40 (IQR = 22 - 60) and 23 (IQR = 6 - 38), respectively. Although European Web sites produced a higher weighted IS [40 (IQR = 22 - 60)] as compared with the USA publications [23 (IQR = 6 - 38)], this was not statistically significant (p = 0.07). Web sites belonging to academic institutions and professional organizations scored significantly higher, with a median weighted IS of 28 (IQR = 16 - 44) and 24 (12 - 35), respectively, as compared with commercial Web sites (median = 6 with IQR of 0 - 14, p = .001). There

  19. BIOMedical Search Engine Framework: Lightweight and customized implementation of domain-specific biomedical search engines.

    Science.gov (United States)

    Jácome, Alberto G; Fdez-Riverola, Florentino; Lourenço, Anália

    2016-07-01

    Text mining and semantic analysis approaches can be applied to the construction of biomedical domain-specific search engines and provide an attractive alternative to create personalized and enhanced search experiences. Therefore, this work introduces the new open-source BIOMedical Search Engine Framework for the fast and lightweight development of domain-specific search engines. The rationale behind this framework is to incorporate core features typically available in search engine frameworks with flexible and extensible technologies to retrieve biomedical documents, annotate meaningful domain concepts, and develop highly customized Web search interfaces. The BIOMedical Search Engine Framework integrates taggers for major biomedical concepts, such as diseases, drugs, genes, proteins, compounds and organisms, and enables the use of domain-specific controlled vocabulary. Technologies from the Typesafe Reactive Platform, the AngularJS JavaScript framework and the Bootstrap HTML/CSS framework support the customization of the domain-oriented search application. Moreover, the RESTful API of the BIOMedical Search Engine Framework allows the integration of the search engine into existing systems or a complete web interface personalization. The construction of the Smart Drug Search is described as proof-of-concept of the BIOMedical Search Engine Framework. This public search engine catalogs scientific literature about antimicrobial resistance, microbial virulence and topics alike. The keyword-based queries of the users are transformed into concepts and search results are presented and ranked accordingly. The semantic graph view portrays all the concepts found in the results, and the researcher may look into the relevance of different concepts, the strength of direct relations, and non-trivial, indirect relations. The number of occurrences of the concept shows its importance to the query, and the frequency of concept co-occurrence is indicative of biological relations
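
    The keyword-to-concept step can be sketched with a toy dictionary tagger; the framework itself integrates full-fledged taggers and controlled vocabularies, so everything below is illustrative.

```python
# Toy concept tagger: map free-text query terms onto controlled-vocabulary
# concepts before retrieval. The tiny vocabulary is invented.
VOCABULARY = {
    "e. coli": ("ORGANISM", "Escherichia coli"),
    "ampicillin": ("DRUG", "ampicillin"),
    "blatem": ("GENE", "blaTEM"),
}

def tag_concepts(query: str) -> list[tuple[str, str]]:
    text = query.lower()
    return [concept for term, concept in VOCABULARY.items() if term in text]

print(tag_concepts("ampicillin resistance in E. coli"))
# [('ORGANISM', 'Escherichia coli'), ('DRUG', 'ampicillin')]
```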

  20. Evaluation of a metal shear web selectively reinforced with filamentary composites for space shuttle application. Phase 1 summary report: Shear web design development

    Science.gov (United States)

    Laakso, J. H.; Zimmerman, D. K.

    1972-01-01

    An advanced composite shear web design concept was developed for the Space Shuttle orbiter main engine thrust beam structure. Various web concepts were synthesized by a computer-aided adaptive random search procedure. A practical concept is identified having a titanium-clad + or - 45 deg boron/epoxy web plate with vertical boron/epoxy reinforced aluminum stiffeners. The boron-epoxy laminate contributes to the strength and stiffness efficiency of the basic web section. The titanium-cladding functions to protect the polymeric laminate parts from damaging environments and is chem-milled to provide reinforcement in selected areas. Detailed design drawings are presented for both boron/epoxy reinforced and all-metal shear webs. The weight saving offered is 24% relative to all-metal construction at an attractive cost per pound of weight saved, based on the detailed designs. Small scale element tests substantiate the boron/epoxy reinforced design details in critical areas. The results show that the titanium-cladding reliably reinforces the web laminate in critical edge load transfer and stiffener fastener hole areas.

  1. Collaborative Web Search Who, What, Where, When, and Why

    CERN Document Server

    Morris, Meredith Ringel

    2009-01-01

    Today, Web search is treated as a solitary experience. Web browsers and search engines are typically designed to support a single user, working alone. However, collaboration on information-seeking tasks is actually commonplace. Students work together to complete homework assignments, friends seek information about joint entertainment opportunities, family members jointly plan vacation travel, and colleagues jointly conduct research for their projects. As improved networking technologies and the rise of social media simplify the process of remote collaboration, and large, novel display form-fac

  2. Subject Gateway Sites and Search Engine Ranking.

    Science.gov (United States)

    Thelwall, Mike

    2002-01-01

    Discusses subject gateway sites and commercial search engines for the Web and presents an explanation of Google's PageRank algorithm. The principal question addressed is the conditions under which a gateway site will increase the likelihood that a target page is found in search engines. (LRW)
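
    Since the record mentions PageRank, a compact version of the iteration is worth showing: a page's rank is the damped sum of the ranks of its in-linking pages divided by their out-degree. The link graph below is invented.

```python
# Minimal PageRank power iteration over a toy link graph.
def pagerank(links: dict[str, list[str]], d: float = 0.85, iters: int = 50) -> dict[str, float]:
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iters):
        new = {p: (1.0 - d) / n for p in pages}
        for p, outs in links.items():
            share = rank[p] / len(outs) if outs else rank[p] / n
            targets = outs if outs else pages   # dangling pages spread evenly
            for q in targets:
                new[q] += d * share
        rank = new
    return rank

links = {"gateway": ["a", "b"], "a": ["b"], "b": ["gateway"]}
for page, r in sorted(pagerank(links).items(), key=lambda x: -x[1]):
    print(f"{page}: {r:.3f}")
```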

  3. PENGEMBANGAN TERMINAL AGRIBISNIS VIRTUAL BERBASIS WEB (Development of a Web-Based Virtual Agribusiness Terminal)

    Directory of Open Access Journals (Sweden)

    Arif Imam Suroso

    2011-08-01

    Full Text Available This study was conducted to develop a prototype of a web-based virtual agribusiness center as an instrument to widen the marketing channels for agribusiness products in Indonesia. Using a web engineering approach, this virtual agribusiness center, which conceptually has the same role as a wholesaler marketing center, was developed and tested using one month of data on fruit and vegetable prices in traditional markets. There are three main components of the system: an online catalog, a cart, and order tracking. To ensure the quality of the system, it was tested using Pressman's approach and evaluated based on its functionality, usability, and reliability.

  4. Quality of Web-based information on obsessive compulsive disorder.

    Science.gov (United States)

    Klila, Hedi; Chatton, Anne; Zermatten, Ariane; Khan, Riaz; Preisig, Martin; Khazaal, Yasser

    2013-01-01

    The Internet is increasingly used as a source of information for mental health issues. The burden of obsessive compulsive disorder (OCD) may lead persons with diagnosed or undiagnosed OCD, and their relatives, to search for good quality information on the Web. This study aimed to evaluate the quality of Web-based information on English-language sites dealing with OCD and to compare the quality of websites found through a general and a medically specialized search engine. Keywords related to OCD were entered into Google and OmniMedicalSearch. Websites were assessed on the basis of accountability, interactivity, readability, and content quality. The "Health on the Net" (HON) quality label and the Brief DISCERN scale score were used as possible content quality indicators. Of the 235 links identified, 53 websites were analyzed. The content quality of the OCD websites examined was relatively good. The use of a specialized search engine did not offer an advantage in finding websites with better content quality. A score ≥16 on the Brief DISCERN scale is associated with better content quality. This study shows the acceptability of the content quality of OCD websites. There is no advantage in searching for information with a specialized search engine rather than a general one. The Internet offers a number of high quality OCD websites. It remains critical, however, to have a provider-patient talk about the information found on the Web.

  5. SISTEM INFORMASI RUMAH SAKIT BERBASIS WEB MENGGUNAKAN JAVA SERVER PAGES (Web-Based Hospital Information System Using JavaServer Pages)

    Directory of Open Access Journals (Sweden)

    Heru Cahya Rustamaji

    2010-01-01

    The technology used to build this web-based information system is JSP with Apache Tomcat. Tomcat is an open-source servlet engine that is part of the Jakarta project developed by the Apache Software Foundation.

  6. What Can Pictures Tell Us About Web Pages? Improving Document Search Using Images.

    Science.gov (United States)

    Rodriguez-Vaamonde, Sergio; Torresani, Lorenzo; Fitzgibbon, Andrew W

    2015-06-01

    Traditional Web search engines do not use the images in the HTML pages to find relevant documents for a given query. Instead, they typically operate by computing a measure of agreement between the keywords provided by the user and only the text portion of each page. In this paper we study whether the content of the pictures appearing in a Web page can be used to enrich the semantic description of an HTML document and consequently boost the performance of a keyword-based search engine. We present a Web-scalable system that exploits a pure text-based search engine to find an initial set of candidate documents for a given query. Then, the candidate set is reranked using visual information extracted from the images contained in the pages. The resulting system retains the computational efficiency of traditional text-based search engines with only a small additional storage cost needed to encode the visual information. We test our approach on one of the TREC Million Query Track benchmarks where we show that the exploitation of visual content yields improvement in accuracies for two distinct text-based search engines, including the system with the best reported performance on this benchmark. We further validate our approach by collecting document relevance judgements on our search results using Amazon Mechanical Turk. The results of this experiment confirm the improvement in accuracy produced by our image-based reranker over a pure text-based system.
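
    The reranking idea reduces, in outline, to mixing a text score with a visual score for each candidate page. The fixed mixing weight below is an assumption for illustration; the paper learns the combination rather than hand-tuning it.

```python
# Hedged sketch: rescore text-engine candidates with a visual score
# computed from the page's images, then re-sort. Scores are invented.
def rerank(candidates: list[dict], alpha: float = 0.7) -> list[dict]:
    for page in candidates:
        page["score"] = alpha * page["text_score"] + (1 - alpha) * page["visual_score"]
    return sorted(candidates, key=lambda p: p["score"], reverse=True)

candidates = [
    {"url": "pageA", "text_score": 0.82, "visual_score": 0.10},
    {"url": "pageB", "text_score": 0.78, "visual_score": 0.95},
]
for page in rerank(candidates):
    print(page["url"], round(page["score"], 3))   # pageB overtakes pageA
```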

  7. C'è lavoro sul web? (Is there work on the web?)

    Directory of Open Access Journals (Sweden)

    Patrizia Tullini

    2015-03-01

    Full Text Available Is there work on the web? Can we identify "workers" among web users? What innovative forms does digital activity take? Is it possible to identify typical professional profiles and trades on the web? Can the extreme accessibility and impersonality of the web create genuine employment relationships and support production processes? Is there a way to apply labour rules and protections to experiences that intentionally take advantage of the extraterritoriality, autarky and polycentric dimension of the Internet? The questions raised by the author underline that the web, rather than simply providing means and support for the movement of information to the advantage of the labour market's traditional players (companies, workers, private agencies, public employment services), has instead developed its own potential autonomously. On the one hand, the web has become a professional intermediary, using the capabilities of computing devices and search engines to offer global employment services 2.0; on the other hand, it has caused "dis-intermediation" with respect to institutional operators (public and private) through the dissemination of informal circuits, social recruiting sites, and also civic networks aimed at intercepting interstitial job opportunities. Moreover, the web tends increasingly to exchange or combine the roles of intermediary and employer in the labour market, making available - at least potentially - a virtual, but global, space for outsourcing. The author reflects, in particular, on the legal characterization of digital work through crowdsourcing and on the application of the rules of contracts and agency work. Although work on the web is difficult to recognize, measure and regulate in legal terms, it has been observed that the use of technological devices and digital services marks the rebirth of the "trade" - in its original, ancient meaning of the practice of an art or the expression of a talent - in contrast to the "profession" established in the high twentieth

  8. Automatic Planning of External Search Engine Optimization

    Directory of Open Access Journals (Sweden)

    Vita Jasevičiūtė

    2015-07-01

    Full Text Available This paper describes an investigation of an external search engine optimization (SEO) action planning tool, dedicated to automatically extracting a small set of the most important keywords for each month of a whole-year period. The keywords in the set are extracted according to externally measured parameters, such as the average number of searches during the year and for every month individually. Additionally, the position of the optimized web site for each keyword is taken into account. The generated optimization plan is similar to the optimization plans prepared manually by SEO professionals and can be successfully used as a support tool for web site search engine optimization.
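
    In outline, such a planner scores each keyword per month from externally measured parameters and keeps the best few. The scoring rule and all figures below are assumptions, not the paper's actual formula.

```python
# Hedged sketch of monthly SEO keyword planning: weight monthly search
# volume against how far the site currently ranks. All data invented.
def plan(keywords: dict[str, dict], month: str, top_n: int = 2) -> list[str]:
    def payoff(kw: str) -> float:
        info = keywords[kw]
        # High searches and a weak current position mean more to gain.
        return info["monthly_searches"][month] * info["position"]
    return sorted(keywords, key=payoff, reverse=True)[:top_n]

keywords = {
    "summer tyres": {"position": 18, "monthly_searches": {"Apr": 900, "Nov": 50}},
    "winter tyres": {"position": 12, "monthly_searches": {"Apr": 60, "Nov": 1400}},
    "tyre repair":  {"position": 4,  "monthly_searches": {"Apr": 300, "Nov": 320}},
}

print(plan(keywords, "Apr"))  # ['summer tyres', 'tyre repair']
print(plan(keywords, "Nov"))  # ['winter tyres', 'tyre repair']
```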

  9. The Semantics of Web Services: An Examination in GIScience Applications

    Directory of Open Access Journals (Sweden)

    Xuan Shi

    2013-09-01

    Full Text Available Web service is a technological solution for software interoperability that supports the seamless integration of diverse applications. In the vision of web service architecture, web services are described by the Web Service Description Language (WSDL), discovered through Universal Description, Discovery and Integration (UDDI), and communicate by the Simple Object Access Protocol (SOAP). Such a vision has never been fully accomplished. Although WSDL has been criticized for defining web services only syntactically, not semantically, prior initiatives in semantic web services did not establish a correct methodology to resolve the problem. This paper examines the distinction and relationship between the syntactic and semantic definitions for web services that characterize different purposes in service computation. Further, this paper proposes that the semantics of web services are neutral and independent from the service interface definition, data types and platform. Such a conclusion can be a universal law in software engineering and service computing. Several use cases in the GIScience application are examined in this paper, while the formalization of geospatial services needs to be constructed by the GIScience community towards a comprehensive ontology of the conceptual definitions and relationships for geospatial computation. Advancements in semantic web services research will happen in domain science applications.

  10. Quality of web-based information on bipolar disorder.

    Science.gov (United States)

    Morel, Vincent; Chatton, Anne; Cochand, Sophie; Zullino, Daniele; Khazaal, Yasser

    2008-10-01

    To evaluate web-based information on bipolar disorder and to assess particular content quality indicators. Two keywords, "bipolar disorder" and "manic depressive illness", were entered into popular World Wide Web search engines. Websites were assessed with a standardized proforma designed to rate sites on the basis of accountability, presentation, interactivity, readability and content quality. The "Health on the Net" (HON) quality label and DISCERN scale scores were used to verify their efficiency as quality indicators. Of the 80 websites identified, 34 were included. Based on outcome measures, the content quality of the sites turned out to be good. Content quality of web sites dealing with bipolar disorder is significantly explained by readability, accountability and interactivity as well as a global score. The overall content quality of the studied bipolar disorder websites is good.

  11. A Web-based Multi-user Interactive Visualization System For Large-Scale Computing Using Google Web Toolkit Technology

    Science.gov (United States)

    Weiss, R. M.; McLane, J. C.; Yuen, D. A.; Wang, S.

    2009-12-01

    We have created a web-based, interactive system for multi-user collaborative visualization of large data sets (on the order of terabytes) that allows users in geographically disparate locations to simultaneously and collectively visualize large data sets over the Internet. By leveraging asynchronous JavaScript and XML (AJAX) web development paradigms via the Google Web Toolkit (http://code.google.com/webtoolkit/), we are able to provide remote, web-based users a web portal to LCSE's (http://www.lcse.umn.edu) large-scale interactive visualization system already in place at the University of Minnesota, which provides high resolution visualizations on the order of 15 million pixels. In the current version of our software, we have implemented a new, highly extensible back-end framework built around HTTP "server push" technology to provide a rich collaborative environment and a smooth end-user experience. Furthermore, the web application is accessible via a variety of devices including netbooks, iPhones, and other web- and JavaScript-enabled cell phones. New features in the current version include the ability for (1) users to launch multiple visualizations, (2) a user to invite one or more other users to view their visualization in real-time (multiple observers), (3) users to delegate control aspects of the visualization to others (multiple controllers), and (4) users to engage in collaborative chat and instant messaging with other users within the user interface of the web application. We will explain choices made regarding implementation, overall system architecture and method of operation, and the benefits of an extensible, modular design. We will also discuss future goals, features, and our plans for increasing the scalability of the system, which include a discussion of the benefits potentially afforded us by a migration of server-side components to the Google Application Engine (http://code.google.com/appengine/).
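
    HTTP "server push" can be illustrated with a few lines of standard-library Python using Server-Sent Events; the real back end is a richer GWT-based framework, so this only conveys the mechanism.

```python
# Minimal server-push sketch via Server-Sent Events: the server keeps the
# response open and streams updates to the browser. Illustrative only.
import time
from http.server import BaseHTTPRequestHandler, HTTPServer

class PushHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-Type", "text/event-stream")
        self.send_header("Cache-Control", "no-cache")
        self.end_headers()
        # Push a status update to the client once per second.
        for frame in range(5):
            self.wfile.write(f"data: frame {frame} rendered\n\n".encode())
            self.wfile.flush()
            time.sleep(1)

if __name__ == "__main__":
    HTTPServer(("localhost", 8000), PushHandler).serve_forever()
```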

  12. Quality of Web-based information on obsessive compulsive disorder

    Directory of Open Access Journals (Sweden)

    Klila H

    2013-11-01

    Full Text Available Hedi Klila,1 Anne Chatton,2 Ariane Zermatten,2 Riaz Khan,2 Martin Preisig,1,3 Yasser Khazaal2,4 1Department of Psychiatry, Lausanne University Hospital, Lausanne, Switzerland; 2Department of Mental Health and Psychiatry, Geneva University Hospitals, Geneva, Switzerland; 3Lausanne University, Lausanne, Switzerland; 4Geneva University, Geneva, Switzerland Background: The Internet is increasingly used as a source of information for mental health issues. The burden of obsessive compulsive disorder (OCD) may lead persons with diagnosed or undiagnosed OCD, and their relatives, to search for good quality information on the Web. This study aimed to evaluate the quality of Web-based information on English-language sites dealing with OCD and to compare the quality of websites found through a general and a medically specialized search engine. Methods: Keywords related to OCD were entered into Google and OmniMedicalSearch. Websites were assessed on the basis of accountability, interactivity, readability, and content quality. The "Health on the Net" (HON) quality label and the Brief DISCERN scale score were used as possible content quality indicators. Of the 235 links identified, 53 websites were analyzed. Results: The content quality of the OCD websites examined was relatively good. The use of a specialized search engine did not offer an advantage in finding websites with better content quality. A score ≥16 on the Brief DISCERN scale is associated with better content quality. Conclusion: This study shows the acceptability of the content quality of OCD websites. There is no advantage in searching for information with a specialized search engine rather than a general one. Practical implications: The Internet offers a number of high quality OCD websites. It remains critical, however, to have a provider–patient talk about the information found on the Web. Keywords: Internet, quality indicators, anxiety disorders, OCD, search engine

  13. Rendimiento de los sistemas de recuperación de información en la web: evaluación de servicios de búsqueda (search engines). (Performance of Web information retrieval systems: an evaluation of search engines)

    Directory of Open Access Journals (Sweden)

    Olvera Lobo, María Dolores

    2000-09-01

    Full Text Available Ten search engines (AltaVista, Excite, Hotbot, Infoseek, Lycos, Magellan, OpenText, WebCrawler, WWWWorm, Yahoo) were evaluated by means of a questionnaire with 20 items, adding up to a total of 200 queries. The first 20 results for each question were analysed in terms of relevance, and values of precision and recall were computed for the resulting 4,000 references. The results are also analyzed in terms of the type of question (Boolean or natural language) and topic (specialized vs. general interest). The results showed that Excite, Infoseek and AltaVista performed generally better. The conclusion of this methodological trial was that the method used allows the evaluation of the performance of Information Retrieval Systems on the Web. As for the results, web search engines are not very precise but extremely exhaustive.
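
    The per-query measurements described here are easy to restate in code. Below is a minimal sketch of precision over the first 20 results, assuming binary relevance judgements; the data are invented (the study judged roughly 4,000 references in total).

```python
# Precision over the first k results for one query on one engine,
# given binary relevance judgements. Judgement data are invented.
def precision_at_k(judgements: list[bool], k: int = 20) -> float:
    top = judgements[:k]
    return sum(top) / len(top) if top else 0.0

# True = judged relevant, False = not relevant (20 judged results).
judgements = [True, True, False, True] + [False] * 16
print(f"P@20 = {precision_at_k(judgements):.2f}")   # P@20 = 0.15
```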


  14. Reviews Equipment: Data logger Book: Imagined Worlds Equipment: Mini data loggers Equipment: PICAXE-18M2 data logger Books: Engineering: A Very Short Introduction and To Engineer Is Human Book: Soap, Science, & Flat-Screen TVs Equipment: uLog and SensorLab Web Watch

    Science.gov (United States)

    2012-07-01

    WE RECOMMEND
    Data logger: Fourier NOVA LINK - data logging and analysis
    To Engineer Is Human - engineering essays and insights
    Soap, Science, & Flat-Screen TVs - people, politics, business and science overlap
    uLog sensors and sensor adapter - a new addition to the LogIT range offers simplicity and ease of use
    WORTH A LOOK
    Imagined Worlds - socio-scientific predictions for the future
    Mini light data logger and mini temperature data logger - small-scale equipment for schools
    SensorLab Plus - LogIT's supporting software, with extra features
    HANDLE WITH CARE
    CAXE110P PICAXE-18M2 data logger - data logger 'on view' but disappoints
    Engineering: A Very Short Introduction - a broad-brush treatment fails to satisfy
    WEB WATCH
    Two very different websites for students: advanced physics questions answered and a more general BBC science resource

  15. A Network of Automatic Control Web-Based Laboratories

    Science.gov (United States)

    Vargas, Hector; Sanchez Moreno, J.; Jara, Carlos A.; Candelas, F. A.; Torres, Fernando; Dormido, Sebastian

    2011-01-01

    This article presents an innovative project in the context of remote experimentation applied to control engineering education. Specifically, the authors describe their experience regarding the analysis, design, development, and exploitation of web-based technologies within the scope of automatic control. This work is part of an inter-university…

  16. A web-based nuclear simulator using RELAP5 and LabVIEW

    International Nuclear Information System (INIS)

    Kim, K.D.; Rizwan-uddin

    2007-01-01

    A web-based nuclear reactor simulator has been developed using the best-estimate nuclear system analysis code RELAP5 as its engine, and LabVIEW for graphical user interface and web-casting. Simulator retains the accuracy of the best-estimate code. Results are displayed in user friendly graphical format. Color-coded nominal values are displayed along with the current status of different variables in tab activated windows. Some variables of interest are also shown as a function of time. All graphical outputs are displayed in web browsers making the simulator's front end independent of the operating system. The interactive simulation feature allows the users to simulate specific reactor transients - such as LOCA, scram, etc. - using a single click. Simulator's graphical output can be web-casted and is thus available to anybody with access to the web. Moreover, if permitted, the simulator can be operated remotely from another site connected to the server via the World Wide Web

  17. Presencia y visibilidad web de las universidades públicas españolas

    Directory of Open Access Journals (Sweden)

    Orduña-Malea, Enrique

    2010-06-01

    Full Text Available The evolution of the size and visibility of Spanish public university websites according to various search engines (Google, Yahoo!, Live/Bing and Exalead) was studied from January to June 2009. Additionally, the article proposes two indicators for understanding the importance of a web domain: the relative representativeness size factor (Rs) and the relative representativeness visibility factor (Rv). These indicators, which consider the number of documents and links, respectively, during a specific interval of time, are intended to be applied in the design and construction of university rankings based on cybermetric techniques. The results confirm that the size differences among academic web domains vary significantly depending on the search engine used; therefore the use of a single search engine cannot supply reliable information about the actual size of a web domain. Moreover, the use of combined values from the mean obtained from each search engine does not offer reliable results, given the variance of data obtained from the different search engines, as well as the index differences of Rs. The differences concerning visibility were smaller, but significant nonetheless. The Rs and Rv indicators were found to provide useful and consistent information about the level of development of universities on the Web during a given time interval. There was also a positive correlation between these two indicators on both Yahoo! and Exalead, confirming the relationship between the number of documents of an academic web domain and the number of links it receives over time.


  18. THE EFFECTIVENESS OF WEB-BASED INTERACTIVE BLENDED LEARNING MODEL IN ELECTRICAL ENGINEERING COURSES

    Directory of Open Access Journals (Sweden)

    Hansi Effendi

    2015-12-01

    Full Text Available The study tested the effectiveness of the Web-Based Interactive Blended Learning Model (BLIBW) for subjects in the Department of Electrical Engineering, Padang State University. The researcher employed a quasi-experimental one-group pretest-posttest design, conducted on a group of 30 students, with the trial run twice. The effectiveness of the BLIBW Model was tested by comparing the average pretest and posttest scores in the first and second trials. The average pretest and posttest scores in the first trial were 14.13 and 33.80, and the increase in the average score was significant at alpha 0.05. The average pretest and posttest scores in the second trial were 18.67 and 47.03; this result was also significant at alpha 0.05. The effectiveness of the BLIBW Model in the second trial was higher than in the first. Those results were not entirely satisfactory, which might be explained by several weaknesses in both trials: the number of sessions was limited, there was only one subject, and the number of participating students was too limited. Nevertheless, the researcher concludes that the BLIBW Model might be implemented as a replacement alternative for face-to-face instruction.

  19. CYCLOSA: Decentralizing Private Web Search Through SGX-Based Browser Extensions

    OpenAIRE

    Pires, Rafael; Goltzsche, David; Mokhtar, Sonia Ben; Bouchenak, Sara; Boutet, Antoine; Felber, Pascal; Kapitza, Rüdiger; Pasin, Marcelo; Schiavoni, Valerio

    2018-01-01

    By regularly querying Web search engines, users (unconsciously) disclose large amounts of their personal data as part of their search queries, among which some might reveal sensitive information (e.g. health issues, sexual, political or religious preferences). Several solutions exist to allow users querying search engines while improving privacy protection. However, these solutions suffer from a number of limitations: some are subject to user re-identification attacks, while others lack scala...

  20. TDCCREC: AN EFFICIENT AND SCALABLE WEB-BASED RECOMMENDATION SYSTEM

    Directory of Open Access Journals (Sweden)

    K.Latha

    2010-10-01

    Full Text Available Web users are faced with a complex information space in which the volume of information available to them is huge. Recommender systems address this by suggesting web pages related to the current page, providing the user with further customized reading material. To enhance the performance of recommender systems, we propose an elegant web-based recommendation system, the Truth Discovery based Content and Collaborative RECommender (TDCCREC), which is capable of addressing scalability. Existing approaches such as learning automata deal with the usage and navigational patterns of users. On the other hand, weighted association rules are applied for recommending web pages by assigning weights to each page in all the transactions. Both of them have their own disadvantages. The websites recommended by search engines carry no guarantee of information correctness and often deliver conflicting information. To solve this, content-based filtering and collaborative filtering techniques are introduced for recommending web pages to the active user, along with the trustworthiness of the website and the confidence of facts, which outperforms the existing methods. Our results show how the proposed recommender system performs better in predicting the next request of web users.
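
    The blend TDCCREC performs can be sketched as a weighted combination of a content-based score, a collaborative score, and the truth-discovery trust in the hosting site. The weights and scores below are invented for illustration.

```python
# Illustrative score blend: content similarity + collaborative score +
# site trust, with hand-picked weights (the system's are learned/tuned).
def recommend(pages: list[dict], w_content=0.4, w_collab=0.4, w_trust=0.2) -> list[dict]:
    for p in pages:
        p["score"] = (w_content * p["content_sim"]
                      + w_collab * p["collab_score"]
                      + w_trust * p["site_trust"])
    return sorted(pages, key=lambda p: p["score"], reverse=True)

pages = [
    {"url": "p1", "content_sim": 0.9, "collab_score": 0.6, "site_trust": 0.2},
    {"url": "p2", "content_sim": 0.7, "collab_score": 0.7, "site_trust": 0.9},
]
for p in recommend(pages):
    print(p["url"], round(p["score"], 2))   # p2 ranks first on trust
```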

  1. An Improved Abstract State Machine Based Choreography Specification and Execution Algorithm for Semantic Web Services

    Directory of Open Access Journals (Sweden)

    Shahin Mehdipour Ataee

    2018-01-01

    Full Text Available We identify significant weaknesses in the original Abstract State Machine (ASM) based choreography algorithm of the Web Service Modeling Ontology (WSMO), which make it impractical for use in semantic web service choreography engines. We present an improved algorithm which rectifies the weaknesses of the original algorithm, as well as a practical, fully functional choreography engine implementation in Flora-2 based on the improved algorithm. Our improvements to the choreography algorithm include (i) the linking of the initial state of the ASM to the precondition of the goal, (ii) the introduction of the concept of a final state in the execution of the ASM and its linking to the postcondition of the goal, and (iii) modification of the execution of the ASM so that it stops when the final state condition is satisfied by the current configuration of the machine. Our choreography engine takes as input semantic web service specifications written in the Flora-2 dialect of F-logic. Furthermore, we prove the equivalence of ASMs (evolving algebras) and evolving ontologies in the sense that one can simulate the other, a first in the literature. Finally, we present a visual editor which facilitates the design and deployment of our F-logic based web service and goal specifications.
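
    The improved execution scheme — start from a state linked to the goal's precondition, fire applicable rules, and halt once the final-state condition (the postcondition) holds — can be sketched as a toy interpreter (state and rule encodings here are invented; the actual engine operates on F-logic specifications):

        # Run guarded update rules until the final-state condition is met.
        def run_asm(state, rules, is_final):
            while not is_final(state):
                for guard, update in rules:
                    if guard(state):
                        state = update(state)  # apply the rule's update set
                        break
                else:  # no rule applicable: execution is stuck
                    raise RuntimeError("halted before reaching final state")
            return state

        # Hypothetical machine that counts up to 3.
        rules = [(lambda s: s["n"] < 3, lambda s: {"n": s["n"] + 1})]
        print(run_asm({"n": 0}, rules, is_final=lambda s: s["n"] == 3))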

  2. Engineering students' sustainability approaches

    Science.gov (United States)

    Haase, S.

    2014-05-01

    Sustainability issues are increasingly important in engineering work all over the world. This article explores systematic differences in self-assessed competencies, interests, importance, engagement and practices of newly enrolled engineering students in Denmark in relation to environmental and non-environmental sustainability issues. The empirical base of the article is a nation-wide, web-based survey sent to all newly enrolled engineering students in Denmark commencing their education in the fall term 2010. The response rate was 46%. The survey focused on a variety of different aspects of what can be conceived as sustainability. By means of cluster analysis, three engineering student approaches to sustainability are identified and described. The article provides knowledge on the different prerequisites of engineering students in relation to the role of sustainability in engineering. This information is important input to educators trying to target new engineering students and contribute to the provision of engineers equipped to meet sustainability challenges.

  3. A web search on environmental topics: what is the role of ranking?

    Science.gov (United States)

    Covolo, Loredana; Filisetti, Barbara; Mascaretti, Silvia; Limina, Rosa Maria; Gelatti, Umberto

    2013-12-01

    Although the Internet is easy to use, the mechanisms and logic behind a Web search are often unknown. Reliable information can be obtained, but it may not be visible if the Web site is not located in the first positions of the search results. The possible risks of adverse health effects arising from environmental hazards are issues of increasing public interest, and therefore the information about these risks, particularly on topics for which there is no scientific evidence, is crucial. The aim of this study was to investigate whether the presentation of information on some environmental health topics differed among various search engines, assuming that the most reliable information should come from institutional Web sites. Five search engines were used: Google, Yahoo!, Bing, Ask, and AOL. The following topics were searched in combination with the word "health": "nuclear energy," "electromagnetic waves," "air pollution," "waste," and "radon." For each topic three key words were used. The first 30 search results for each query were considered. The ranking variability among the search engines and the type of search results were analyzed for each topic and for each key word. The ranking of institutional Web sites was given particular consideration. Variable results were obtained when surfing the Internet on different environmental health topics. Multivariate logistic regression analysis showed that institutional Web sites were more likely to appear in the first 10 positions when searching for radon and air pollution than for nuclear power (odds ratio=3.4, 95% confidence interval 2.1-5.4 and odds ratio=2.9, 95% confidence interval 1.8-4.7, respectively), and also when using Google compared with Bing (odds ratio=3.1, 95% confidence interval 1.9-5.1). The increasing use of online information could play an important role in forming opinions. Web users should become more aware of the importance of finding reliable information, and health institutions should be
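
    As a worked illustration of the reported effect sizes (the 2x2 counts below are invented; the study's raw data are not given in the record), an odds ratio and its 95% confidence interval can be computed as follows:

        import math

        # Hypothetical counts: institutional site in top 10 (yes/no).
        a, b = 40, 20  # topic with good institutional visibility
        c, d = 25, 45  # topic with poor institutional visibility

        odds_ratio = (a * d) / (b * c)
        se = math.sqrt(1/a + 1/b + 1/c + 1/d)  # std. error of log(OR)
        lo = math.exp(math.log(odds_ratio) - 1.96 * se)
        hi = math.exp(math.log(odds_ratio) + 1.96 * se)
        print(f"OR = {odds_ratio:.1f}, 95% CI {lo:.1f}-{hi:.1f}")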

  4. A longitudinal analysis of search engine index size

    NARCIS (Netherlands)

    Bosch, A.P.J. van den; Bogers, T.; Kunder, M. de; Salah, A. A.; Tonta, Y.; Salah, A. A. A.; Sugimoto, C.; Al, U.

    2015-01-01

    One of the determining factors of the quality of Web search engines is the size and quality of their index. In addition to its influence on search result quality, the size of the indexed Web can also tell us something about which parts of the WWW are directly accessible to the everyday user. We

  5. Error Checking for Chinese Query by Mining Web Log

    Directory of Open Access Journals (Sweden)

    Jianyong Duan

    2015-01-01

    Full Text Available For search engines, mistyped queries are a common phenomenon. This paper uses a web log as the training set for query error checking. Queries are analyzed and checked with an n-gram language model trained on the web log. Some features, including the query words and their number, are introduced into the model. At the same time, a data smoothing algorithm is used to solve the data sparseness problem, which improves the overall accuracy of the n-gram model. The experimental results show that the method is effective.
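
    The core idea — score a query with a smoothed n-gram model trained on logged queries, so that unlikely word sequences flag probable input errors — can be sketched as follows (training data, smoothing choice and threshold are invented; the paper's exact setup is not given in the record):

        from collections import Counter

        # Toy "web log" of past queries used as training data.
        log_queries = [["web", "search", "engine"], ["web", "page", "rank"]]
        unigrams, bigrams = Counter(), Counter()
        for q in log_queries:
            unigrams.update(q)
            bigrams.update(zip(q, q[1:]))
        V = len(unigrams)  # vocabulary size

        def bigram_prob(w1, w2):
            # Add-one (Laplace) smoothing against data sparseness.
            return (bigrams[(w1, w2)] + 1) / (unigrams[w1] + V)

        def query_score(words):
            p = 1.0
            for w1, w2 in zip(words, words[1:]):
                p *= bigram_prob(w1, w2)
            return p

        # The misspelled query scores lower than the well-formed one.
        print(query_score(["web", "search"]) > query_score(["web", "serach"]))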

  6. Radiation protection and environmental radioactivity. A voyage to the World Wide Web for beginners

    International Nuclear Information System (INIS)

    Weimer, S.

    1998-01-01

    Following the enormous growth of the Internet service 'World Wide Web', there has also been strong growth in the number of web sites connected with radiation protection. An introduction is given to some practical basics of the WWW. The structure of WWW addresses and navigating through the web with hyperlinks is explained. Furthermore, some search engines are presented. The paper lists a number of WWW addresses of interesting sites with radiological protection information. (orig.) [de]

  7. CentroidFold: a web server for RNA secondary structure prediction

    OpenAIRE

    Sato, Kengo; Hamada, Michiaki; Asai, Kiyoshi; Mituyama, Toutai

    2009-01-01

    The CentroidFold web server (http://www.ncrna.org/centroidfold/) is a web application for RNA secondary structure prediction powered by one of the most accurate prediction engines. The server accepts two kinds of sequence data: a single RNA sequence and a multiple alignment of RNA sequences. It responds with a prediction result shown in the popular base-pair notation and as a graph representation. A PDF version of the graph representation is also available. For a multiple alignment sequence, the ser...

  8. Increasing Scalability of Researcher Network Extraction from the Web

    Science.gov (United States)

    Asada, Yohei; Matsuo, Yutaka; Ishizuka, Mitsuru

    Social networks, which describe relations among people or organizations as a network, have recently attracted attention. With the help of a social network, we can analyze the structure of a community and thereby promote efficient communication within it. We investigate the problem of extracting a network of researchers from the Web, to assist efficient cooperation among researchers. Our method uses a search engine to get the co-occurrences of the names of two researchers and calculates the strength of the relation between them. Then we label the relation by analyzing the Web pages in which these two names co-occur. Research on social network extraction using search engines, such as ours, is attracting attention in Japan as well as abroad. However, former approaches issue too many queries to search engines to extract a large-scale network. In this paper, we propose a method that filters superfluous queries and facilitates the extraction of large-scale networks. With this method we are able to extract a network of around 3000 nodes. Our experimental results show that the proposed method reduces the number of queries significantly while preserving the quality of the network compared to former methods.
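
    A common way to turn search-engine hit counts into a relation strength is an overlap coefficient such as Jaccard; the sketch below illustrates the idea (hit counts are invented, and the paper's actual measure may differ):

        def jaccard_strength(hits_a, hits_b, hits_both):
            """hits_a/hits_b: pages mentioning each name alone;
            hits_both: pages where both names co-occur."""
            union = hits_a + hits_b - hits_both
            return hits_both / union if union else 0.0

        # Hypothetical counts returned by a search engine API.
        print(jaccard_strength(1200, 800, 150))  # ~0.08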

  9. Analysing Parallel and Passive Web Browsing Behavior and its Effects on Website Metrics

    OpenAIRE

    von der Weth, Christian; Hauswirth, Manfred

    2014-01-01

    Getting deeper insights into the online browsing behavior of Web users has been a major research topic since the advent of the WWW. It provides useful information to optimize website design, Web browser design, search engines offerings, and online advertisement. We argue that new technologies and new services continue to have significant effects on the way how people browse the Web. For example, listening to music clips on YouTube or to a radio station on Last.fm does not require users to sit...

  10. Use of a web site to enhance criticality safety training

    International Nuclear Information System (INIS)

    Huang, Song T.; Morman, James A.

    2003-01-01

    Establishment of the NCSP (Nuclear Criticality Safety Program) website represents one attempt by the NCS (Nuclear Criticality Safety) community to meet the need to enhance communication and disseminate NCS information to a wider audience. With the aging work force in this important technical field, there is a common recognition of the need to capture the corporate knowledge of these people and provide an easily accessible, web-based training opportunity to those people just entering the field of criticality safety. A multimedia-based site can provide a wide range of possibilities for criticality safety training. Training modules could range from simple text-based material, similar to the NCSET (Nuclear Criticality Safety Engineer Training) modules, to interactive web-based training classes, to video lecture series. For example, the Los Alamos National Laboratory video series of interviews with pioneers of criticality safety could easily be incorporated into training modules. Obviously, the development of such a program depends largely upon the need and participation of experts who share the same vision and enthusiasm of training the next generation of criticality safety engineers. The NCSP website is just one example of the potential benefits that web-based training can offer. You are encouraged to browse the NCSP website at http://ncsp.llnl.gov. We solicit your ideas in the training of future NCS engineers and welcome your participation with us in developing future multimedia training modules. (author)

  11. Encouraging the learning of hydraulic engineering subjects in agricultural engineering schools

    Science.gov (United States)

    Rodríguez Sinobas, Leonor; Sánchez Calvo, Raúl

    2014-09-01

    Several methodological approaches to improve the understanding and motivation of students in Hydraulic Engineering courses have been adopted in the Agricultural Engineering School at the Technical University of Madrid. Over three years, students' progress and satisfaction have been assessed by continuous monitoring and the use of online and web tools in two undergraduate courses. Results from their application to encourage learning and communication skills in Hydraulic Engineering subjects are analysed and compared to the initial situation. Students' academic performance has improved since their introduction, but surveys among students showed that not all the methodological proposals were perceived as beneficial. Student participation in the online, classroom and reading activities was low, although the activities themselves were assessed positively.

  12. Penerapan Teknik Seo (Search Engine Optimization pada Website dalam Strategi Pemasaran melalui Internet

    Directory of Open Access Journals (Sweden)

    Rony Baskoro Lukito

    2014-12-01

    Full Text Available The purpose of this research is to determine how to optimize a web design so that it can increase the number of visitors. The number of Internet users in the world continues to grow in line with advances in information technology. Marketing of products and services no longer relies only on print and electronic media; moreover, the cost of using the Internet as a marketing medium is relatively inexpensive compared to television. The Internet as a marketing medium reaches different parts of the world 24 hours a day. But for an internet site to be visited by many users, it is not enough for it to look good from the outside. Web sites that serve as a marketing medium must be built according to the correct rules so that they become optimal marketing media. One of the important rules in building an internet site as a marketing medium is ensuring that the content of the web site is indexed well in search engines such as Google. This study focuses its index optimization on Google, since 83% of internet users across the world use Google as their search engine. Search engine optimization, commonly known as SEO (Search Engine Optimization), is an important set of rules that makes an internet site easier for users to find with the desired keywords.

  13. WebVis: a hierarchical web homepage visualizer

    Science.gov (United States)

    Renteria, Jose C.; Lodha, Suresh K.

    2000-02-01

    WebVis, the Hierarchical Web Home Page Visualizer, is a tool for managing home web pages. The user can access this tool via the WWW and obtain a hierarchical visualization of their home web pages. WebVis is a real-time interactive tool that supports many different queries on the statistics of internal files, such as size, age, and type. In addition, statistics on embedded information such as VRML files, Java applets, images and sound files can be extracted and queried. Results of these queries are visualized using the color, shape and size of different nodes of the hierarchy. The visualization assists the user in a variety of tasks, such as quickly finding outdated information or locating large files. WebVis is one solution to the growing web space maintenance problem. The implementation of WebVis is realized with Perl and Java. Perl pattern matching and file handling routines are used to collect and process web space linkage information and web document information. Java utilizes the collected information to produce visualizations of the web space. Java also provides WebVis with real-time interactivity while running off the WWW. Some WebVis examples of home web page visualization are presented.
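
    The statistics-gathering step (done in Perl in the original) amounts to walking the web space and recording size, age and type per file; a rough Python equivalent is sketched below (the directory layout is hypothetical):

        import os, time

        def collect_stats(root):
            stats = []
            for dirpath, _, filenames in os.walk(root):
                for name in filenames:
                    path = os.path.join(dirpath, name)
                    info = os.stat(path)
                    stats.append({
                        "path": path,
                        "size": info.st_size,  # bytes
                        "age_days": (time.time() - info.st_mtime) / 86400,
                        "type": os.path.splitext(name)[1] or "none",
                    })
            return stats

        # e.g. quickly locate the largest files in the web space.
        for f in sorted(collect_stats("."), key=lambda f: -f["size"])[:5]:
            print(f)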

  14. Interactive WebGL-based 3D visualizations for EAST experiment

    International Nuclear Information System (INIS)

    Xia, J.Y.; Xiao, B.J.; Li, Dan; Wang, K.R.

    2016-01-01

    Highlights: • Developing a user-friendly interface to visualize the EAST experimental data and the device is important to scientists and engineers. • The Web3D visualization system is based on HTML5 and WebGL, which runs without the need for plug-ins or third party components. • The interactive WebGL-based 3D visualization system is a web-portal integrating EAST 3D models, experimental data and plasma videos. • The original CAD model was discretized into different layers with different simplification to enable realistic rendering and improve performance. - Abstract: In recent years EAST (Experimental Advanced Superconducting Tokamak) experimental data are being shared and analyzed by an increasing number of international collaborators. Developing a user-friendly interface to visualize the data, meta data and the relevant parts of the device is becoming more and more important to aid scientists and engineers. Compared with the previous virtual EAST system based on VRML/Java3D [1] (Li et al., 2014), a new technology is being adopted to create a 3D visualization system based on HTML5 and WebGL, which runs without the need for plug-ins or third party components. The interactive WebGL-based 3D visualization system is a web-portal integrating EAST 3D models, experimental data and plasma videos. It offers a highly interactive interface allowing scientists to roam inside EAST device and view the complex 3-D structure of the machine. It includes technical details of the device and various diagnostic components, and provides visualization of diagnostic metadata with a direct link to each signal name and its stored data. In order for the quick access to the device 3D model, the original CAD model was discretized into different layers with different simplification. It allows users to search for plasma videos in any experiment and analyze the video frame by frame. In this paper, we present the implementation details to enable realistic rendering and improve performance.

  15. Interactive WebGL-based 3D visualizations for EAST experiment

    Energy Technology Data Exchange (ETDEWEB)

    Xia, J.Y., E-mail: jyxia@ipp.ac.cn [Institute of Plasma Physics, Chinese Academy of Sciences, Hefei, Anhui (China); University of Science and Technology of China, Hefei, Anhui (China); Xiao, B.J. [Institute of Plasma Physics, Chinese Academy of Sciences, Hefei, Anhui (China); University of Science and Technology of China, Hefei, Anhui (China); Li, Dan [Institute of Plasma Physics, Chinese Academy of Sciences, Hefei, Anhui (China); Wang, K.R. [Institute of Plasma Physics, Chinese Academy of Sciences, Hefei, Anhui (China); University of Science and Technology of China, Hefei, Anhui (China)

    2016-11-15

    Highlights: • Developing a user-friendly interface to visualize the EAST experimental data and the device is important to scientists and engineers. • The Web3D visualization system is based on HTML5 and WebGL, which runs without the need for plug-ins or third party components. • The interactive WebGL-based 3D visualization system is a web-portal integrating EAST 3D models, experimental data and plasma videos. • The original CAD model was discretized into different layers with different simplification to enable realistic rendering and improve performance. - Abstract: In recent years EAST (Experimental Advanced Superconducting Tokamak) experimental data are being shared and analyzed by an increasing number of international collaborators. Developing a user-friendly interface to visualize the data, meta data and the relevant parts of the device is becoming more and more important to aid scientists and engineers. Compared with the previous virtual EAST system based on VRML/Java3D [1] (Li et al., 2014), a new technology is being adopted to create a 3D visualization system based on HTML5 and WebGL, which runs without the need for plug-ins or third party components. The interactive WebGL-based 3D visualization system is a web-portal integrating EAST 3D models, experimental data and plasma videos. It offers a highly interactive interface allowing scientists to roam inside EAST device and view the complex 3-D structure of the machine. It includes technical details of the device and various diagnostic components, and provides visualization of diagnostic metadata with a direct link to each signal name and its stored data. In order for the quick access to the device 3D model, the original CAD model was discretized into different layers with different simplification. It allows users to search for plasma videos in any experiment and analyze the video frame by frame. In this paper, we present the implementation details to enable realistic rendering and improve performance.

  16. Near-Duplicate Web Page Detection: An Efficient Approach Using Clustering, Sentence Feature and Fingerprinting

    Directory of Open Access Journals (Sweden)

    J. Prasanna Kumar

    2013-02-01

    Full Text Available Duplicate and near-duplicate web pages are a chief concern for web search engines. In practice, they consume enormous space to store the indexes, ultimately slowing down and increasing the cost of serving results. A variety of techniques have been developed to identify pairs of web pages that are "similar" to each other. The problem of finding near-duplicate web pages has been a subject of research in the database and web-search communities for some years. In order to identify near-duplicate web pages, we make use of sentence-level features along with a fingerprinting method. When a large number of web documents are under consideration, we first use K-mode clustering, and subsequently sentence feature and fingerprint comparison are used. Using these steps, we identify near-duplicate web pages exactly and efficiently. The experimentation was carried out on web page collections, and the results confirmed the efficiency of the proposed approach in detecting near-duplicate web pages.
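
    The sentence-feature-plus-fingerprint comparison can be illustrated with a minimal shingling sketch (shingle size, hash and threshold are invented; the paper's parameters are not given in the record):

        import hashlib

        def fingerprints(sentences, k=2):
            # Hash every k-sentence shingle of a page into a fingerprint set.
            shingles = [" ".join(sentences[i:i + k])
                        for i in range(len(sentences) - k + 1)]
            return {hashlib.md5(s.encode()).hexdigest()[:8] for s in shingles}

        def near_duplicate(fp1, fp2, threshold=0.8):
            overlap = len(fp1 & fp2) / max(len(fp1 | fp2), 1)
            return overlap >= threshold

        page1 = ["intro sentence", "shared sentence one", "shared sentence two"]
        page2 = ["intro sentence", "shared sentence one", "shared sentence two"]
        print(near_duplicate(fingerprints(page1), fingerprints(page2)))  # True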

  17. Noesis: Ontology based Scoped Search Engine and Resource Aggregator for Atmospheric Science

    Science.gov (United States)

    Ramachandran, R.; Movva, S.; Li, X.; Cherukuri, P.; Graves, S.

    2006-12-01

    The goal for search engines is to return results that are both accurate and complete. A search engine should find only what you really want and find everything you really want. Search engines (even meta-search engines) lack semantics: search is simply based on string matching between the user's query term and the resource database, and the semantics associated with the search string are not captured. For example, if an atmospheric scientist searches for "pressure"-related web resources, most search engines return inaccurate results such as web resources related to blood pressure. This presentation describes Noesis, a meta-search engine and resource aggregator that uses domain ontologies to provide scoped search capabilities. Noesis uses domain ontologies to help the user scope the search query to ensure that the search results are both accurate and complete. The domain ontologies guide the user in refining the search query and thereby reduce the user's burden of experimenting with different search strings. Semantics are captured by refining the query terms to cover synonyms, specializations, generalizations and related concepts. Noesis also serves as a resource aggregator: it categorizes the search results from different online resources, such as educational materials, publications, datasets, and web search engines, that might be of interest to the user.
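
    Scoped search by ontology-driven query expansion can be sketched as follows (the mini-ontology and query syntax are invented for illustration, not Noesis internals):

        # A tiny stand-in for a domain ontology: each concept lists its
        # synonyms and related terms in atmospheric science.
        ontology = {
            "pressure": {
                "synonyms": ["atmospheric pressure", "barometric pressure"],
                "related": ["sea level pressure", "geopotential height"],
            }
        }

        def scoped_query(term):
            entry = ontology.get(term, {})
            terms = [term] + entry.get("synonyms", []) + entry.get("related", [])
            return " OR ".join(f'"{t}"' for t in terms)

        # Query string to be forwarded to each underlying search engine.
        print(scoped_query("pressure"))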

  18. Meta-Search Utilizing Evolutionary Recommendation: A Web Search Architecture Proposal

    Czech Academy of Sciences Publication Activity Database

    Húsek, Dušan; Keyhanipour, A.; Krömer, P.; Moshiri, B.; Owais, S.; Snášel, V.

    2008-01-01

    Roč. 33, - (2008), s. 189-200 ISSN 1870-4069 Institutional research plan: CEZ:AV0Z10300504 Keywords : web search * meta-search engine * intelligent re-ranking * ordered weighted averaging * Boolean search queries optimizing Subject RIV: IN - Informatics, Computer Science

  19. Variability of patient spine education by Internet search engine.

    Science.gov (United States)

    Ghobrial, George M; Mehdi, Angud; Maltenfort, Mitchell; Sharan, Ashwini D; Harrop, James S

    2014-03-01

    Patients are increasingly reliant upon the Internet as a primary source of medical information. The educational experience varies by search engine and search term, and changes daily. There are no tools for the critical evaluation of spinal surgery websites. The objectives were to highlight the variability between common search engines for the same search terms, to detect bias by the prevalence of specific kinds of websites for certain spinal disorders, and to demonstrate a simple scoring system of spinal disorder websites for patient use, to maximize the quality of information exposed to the patient. Ten common search terms were used to query three of the most common search engines. The top fifty results of each query were tabulated. A negative binomial regression was performed to highlight the variation across each search engine. Google was more likely than the Bing and Yahoo search engines to return hospital ads (P=0.002) and more likely to return scholarly sites of peer-reviewed literature (P=0.003). Educational web sites, surgical group sites, and online web communities had a significantly higher likelihood of returning on any search, regardless of search engine or search string (P=0.007). Likewise, professional websites, including hospital-run, industry-sponsored, legal, and peer-reviewed web pages, were less likely to be found on a search overall, regardless of engine and search string (P=0.078). The Internet is a rapidly growing body of medical information which can serve as a useful tool for patient education. High-quality information is readily available, provided that the patient uses a consistent, focused metric for evaluating online spine surgery information, as there is clear variability in the way search engines present information to the patient. Published by Elsevier B.V.

  20. GeoSearcher: Location-Based Ranking of Search Engine Results.

    Science.gov (United States)

    Watters, Carolyn; Amoudi, Ghada

    2003-01-01

    Discussion of Web queries with geospatial dimensions focuses on an algorithm that assigns location coordinates dynamically to Web sites based on the URL. Describes a prototype search system that uses the algorithm to re-rank search engine results for queries with a geospatial dimension, thus providing an alternative ranking order for search engine…

  1. Easy web interfaces to IDL code for NSTX Data Analysis

    International Nuclear Information System (INIS)

    Davis, W.M.

    2012-01-01

    Highlights: ► Web interfaces to IDL code can be developed quickly. ► Dozens of Web Tools are used effectively on NSTX for Data Analysis. ► Web interfaces are easier to use than X-window applications. - Abstract: Reusing code is a well-known Software Engineering practice to substantially increase the efficiency of code production, as well as to reduce errors and debugging time. A variety of “Web Tools” for the analysis and display of raw and analyzed physics data are in use on NSTX [1], and new ones can be produced quickly from existing IDL [2] code. A Web Tool with only a few inputs, and which calls an IDL routine written in the proper style, can be created in less than an hour; more typical Web Tools with dozens of inputs, and the need for some adaptation of existing IDL code, can be working in a day or so. Efficiency is also increased for users of Web Tools because of the familiar interface of the web browser, and not needing X-windows, or accounts and passwords, when used within our firewall. Web Tools were adapted for use by PPPL physicists accessing EAST data stored in MDSplus with only a few man-weeks of effort; adapting to additional sites should now be even easier. An overview of Web Tools in use on NSTX, and a list of the most useful features, is also presented.

  2. An Iterative and Incremental Approach for E-Learning Ontology Engineering

    Directory of Open Access Journals (Sweden)

    Sudath Rohitha Heiyanthuduwage

    2009-03-01

    Full Text Available Abstract - Interest in ontologies has grown with the developments in Semantic Web technologies, and ontologies play a vital role in the semantic web. Even though a lot of work has been done on ontologies, a standard framework for ontology engineering has not yet been defined, and the ontology engineering methodologies currently available need improvement. The effort of our work is to integrate various methods, techniques, tools, etc. into the different stages of the proposed ontology engineering life cycle, to create a comprehensive framework for ontology engineering. Current methodologies discuss ontology engineering stages and collaborative environments with user collaboration; however, increasing effectiveness and correct inference have been given less attention, and these methodologies provide little discussion of the usability of domain ontologies. We consider these aspects more important in our work. Ontology engineering has been done for various domains and for various purposes. Our effort is to propose an iterative and incremental approach to ontology engineering, especially for the e-learning domain, with the intention of achieving higher usability and effectiveness of e-learning systems. This paper introduces the different aspects of the proposed ontology engineering framework and its evaluation.

  3. The rendering context for stereoscopic 3D web

    Science.gov (United States)

    Chen, Qinshui; Wang, Wenmin; Wang, Ronggang

    2014-03-01

    3D technologies on the Web have been studied for many years, but they are basically monoscopic 3D. With stereoscopic technology gradually maturing, we are researching how to integrate binocular 3D technology into the Web, creating a stereoscopic 3D browser that will provide users with a brand new experience of human-computer interaction. In this paper, we propose a novel approach to applying stereoscopy technologies to CSS3 3D Transforms. Under our model, each element can create or participate in a stereoscopic 3D rendering context, in which 3D Transforms such as scaling, translation and rotation can be applied and perceived in a truly 3D space. We first discuss the underlying principles of stereoscopy, and then discuss how these principles can be applied to the Web. A stereoscopic 3D browser with backward compatibility was also created for demonstration purposes. We take advantage of the open-source WebKit project, integrating the 3D display ability into the rendering engine of the web browser. For each 3D web page, our 3D browser creates two slightly different images, representing the left-eye and right-eye views, which are combined on the 3D display to generate the illusion of depth. As the results show, elements can be manipulated in a truly 3D space.

  4. Web-based control application using WebSocket

    International Nuclear Information System (INIS)

    Furukawa, Y.

    2012-01-01

    The WebSocket protocol allows asynchronous full-duplex communication between a Web-based (i.e. JavaScript-based) application and a Web server. WebSocket started as a part of the HTML5 standardization but has since been separated from HTML5 and developed independently. Using WebSocket, it becomes easy to develop platform-independent presentation-layer applications for accelerator and beamline control software. In addition, a Web browser is the only application program that needs to be installed on the client computer. WebSocket-based applications communicate with the WebSocket server using simple text-based messages, so WebSocket is applicable to message-based control systems like MADOCA, which was developed for the SPring-8 control system. A simple WebSocket server for the MADOCA control system and a simple motor control application were successfully made as a first trial of a WebSocket control application. Using Google Chrome (version 13.0) on Debian/Linux and Windows 7, Opera (version 11.0) on Debian/Linux, and Safari (version 5.0.3) on Mac OS X as clients, the motors could be controlled using a WebSocket-based Web application. A diffractometer control application used in synchrotron radiation diffraction experiments was also developed. (author)
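
    The text-message pattern described above can be sketched with the third-party Python "websockets" package (a generic illustration, not the SPring-8/MADOCA code; the command format is invented):

        # Requires: pip install websockets (version >= 10)
        import asyncio
        import websockets

        async def handle(ws):
            async for message in ws:            # e.g. "move motor1 100"
                print("received:", message)
                await ws.send(f"ok {message}")  # simple text-based reply

        async def main():
            async with websockets.serve(handle, "localhost", 8765):
                await asyncio.Future()          # serve forever

        asyncio.run(main())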

  5. Web based electronic logbook and experiment run database viewer for Alcator C-Mod

    International Nuclear Information System (INIS)

    Fredian, T.W.; Stillerman, J.A.

    2006-01-01

    Since 1991, the scientists and engineers at the Alcator C-Mod experiment at MIT have been recording text entries about the experiments being performed in an electronic logbook. In addition, separate documents such as run plans, run summaries and experimental proposals have been created and stored in a variety of formats in computer files. This information has now been organized and made available via any modern web browser. The new web based interface permits the user to browse through all the logbook entries, run information and even view some key data traces of the experiment. Since this information is being catalogued by Internet search engines, these tools can also be used to quickly locate information. The web based logbook and run information interface provides some additional capabilities. Once logged into the web site, users can add, delete or modify logbook entries directly from their browser. The logbook window on their browser also provides dynamic updating when any new logbook entries are made. There is also live C-Mod operation status information with optional audio announcements available. The user can receive the same state change announcements such as 'entering init' or 'entering pulse' as they would if they were sitting in the C-Mod control room. This paper will describe the functionality of the web based logbook and how it was implemented

  6. A web-based, collaborative modeling, simulation, and parallel computing environment for electromechanical systems

    Directory of Open Access Journals (Sweden)

    Xiaoliang Yin

    2015-03-01

    Full Text Available A complex electromechanical system is usually composed of multiple components from different domains, including mechanical, electronic, hydraulic, control, and so on. Modeling and simulation of electromechanical systems on a unified platform is one of the current research hotspots in systems engineering, and it is also the development trend for the design of complex electromechanical systems. Unified modeling techniques and tools based on the Modelica language provide a satisfactory solution. To meet the requirements of collaborative modeling, simulation, and parallel computing for complex electromechanical systems based on Modelica, a general web-based modeling and simulation prototype environment, namely WebMWorks, is designed and implemented. Based on rich Internet application technologies, an interactive graphical user interface for modeling and post-processing in the web browser was implemented; with the collaborative design module, the environment supports top-down, concurrent modeling and team cooperation; additionally, a service-oriented architecture was applied to supply compiling and solving services which run on cloud-like servers, so the environment can manage and dispatch large-scale simulation tasks in parallel on multiple computing servers simultaneously. An engineering application involving a pure electric vehicle was tested on WebMWorks. The results of the simulation and a parametric experiment demonstrate that the tested web-based environment can effectively shorten the design cycle of complex electromechanical systems.

  7. Reflections on New Search Engine 新型搜索引擎畅想

    OpenAIRE

    Huang, Jiannian

    2007-01-01

    [English abstract] The rapidly increasing need for internet information resources has led to a rush of search engines. This article introduces some new types of search engines which are appearing or will appear. These include: grey document search engines, invisible web search engines, knowledge discovery search engines, clustering meta-search engines, academic clustering search engines, concept comparison and concept analogy search engines, consultation search engines, teachi...

  8. 77 FR 9868 - Airworthiness Directives; Honeywell International Inc. Turbofan Engines

    Science.gov (United States)

    2012-02-21

    ... Airworthiness Directives; Honeywell International Inc. Turbofan Engines AGENCY: Federal Aviation Administration... -5BR series turbofan engines. This proposed AD was prompted by a report of a rim/web separation of a..., -4R, -5AR, -5BR, and -5R series turbofan engines, with an LPT1 rotor assembly, P/N 3074748-4, 3074748...

  9. An evaluation of the quality of Turkish community pharmacy web sites concerning HON principles.

    Science.gov (United States)

    Yegenoglu, Selen; Sozen, Bilge; Aslan, Dilek; Calgan, Zeynep; Cagirci, Simge

    2008-05-01

    The objective of this study was to find all the existing Web sites of Turkish community pharmacies and evaluate their quality in terms of the Health on the Net (HON) Code of Conduct principles. Multiple Internet search engines were used (google.com, yahoo.com, altavista.com, msn.com). While searching on the Internet, the key words "eczane (pharmacy)" and "eczanesi (pharmacy of)" were used. The Internet search lasted for 2 months, from March 1, 2007 until May 1, 2007. The SPSS ver. 11.5 statistical program (SPSS, Inc., Chicago, IL) was used for data entry and analysis. At the end of the Internet search via all the indicated search engines, a total of 203 distinct community pharmacy Web sites were found; of these, 14 were under construction and 6 were not accessible. As a result, 183 community pharmacy Web sites were included in the study. All of the Web sites could be accessed (100%). However, the availability of some characteristics of the pharmacies was quite poor, and none of the pharmacies met all of the HON principles. Only 11 Web sites were appropriate in terms of complementarity (6.0%). The confidentiality criterion was met by only 14 pharmacies (7.7%). Nine pharmacies (4.9%) fulfilled the "attribution" criterion. Among the 183 pharmacy Web sites, the most frequently met HON principle was "transparency of authorship" (69 pharmacy Web sites; 37.7%). Based on the results of our study, the Turkish Pharmacists Association can take a pioneering role in applying principles such as the HON Code of Conduct in order to increase the quality of Turkish community pharmacists' Web sites.

  10. Semantically-Enabled Sensor Plug & Play for the Sensor Web

    Science.gov (United States)

    Bröring, Arne; Maúe, Patrick; Janowicz, Krzysztof; Nüst, Daniel; Malewski, Christian

    2011-01-01

    Environmental sensors have continuously improved by becoming smaller, cheaper, and more intelligent over the past years. As a consequence of these technological advancements, sensors are increasingly deployed to monitor our environment. The large variety of available sensor types with often incompatible protocols complicates the integration of sensors into observing systems. The standardized Web service interfaces and data encodings defined within OGC's Sensor Web Enablement (SWE) framework make sensors available over the Web and hide the heterogeneous sensor protocols from applications. So far, the SWE framework does not describe how to integrate sensors on-the-fly with minimal human intervention. The driver software which enables access to sensors has to be implemented, and the measured sensor data have to be manually mapped to the SWE models. In this article we introduce a Sensor Plug & Play infrastructure for the Sensor Web by combining (1) semantic matchmaking functionality, (2) a publish/subscribe mechanism underlying the Sensor Web, and (3) a model for the declarative description of sensor interfaces which serves as a generic driver mechanism. We implement and evaluate our approach by applying it to an oil spill scenario. The matchmaking is realized using existing ontologies and reasoning engines and provides a strong case for the semantic integration capabilities provided by Semantic Web research. PMID:22164033

  11. A resource-oriented architecture for a Geospatial Web

    Science.gov (United States)

    Mazzetti, Paolo; Nativi, Stefano

    2010-05-01

    ... systems using the same Web technologies and specifications but according to a different architectural style, despite their usefulness, should not be considered part of the Web. If the REST style captures the significant Web characteristics, then, in order to build a Geospatial Web, its architecture must satisfy all the REST constraints. One of them is of particular importance: the adoption of a Uniform Interface. It prescribes that all geospatial resources must be accessed through the same interface; moreover, according to the REST style this interface must satisfy four further constraints: a) identification of resources; b) manipulation of resources through representations; c) self-descriptive messages; and d) hypermedia as the engine of application state. In the Web, the uniform interface provides basic operations which are meaningful for generic resources. They typically implement the CRUD pattern (Create-Retrieve-Update-Delete), which has proved flexible and powerful in several general-purpose contexts (e.g. filesystem management, SQL for database management systems, etc.). Restricting the scope to a subset of resources, it would be possible to identify other generic actions which are meaningful for all of them. For example, for geospatial resources, subsetting, resampling, interpolation and coordinate reference system transformations are candidate functionalities for a uniform interface. However, an investigation is needed to clarify the semantics of those actions for different resources, and consequently whether they can really take on the role of generic interface operations. Concerning point a) (identification of resources), it is required that every resource addressable in the Geospatial Web has its own identifier (e.g. a URI). This allows citation and re-use of resources simply by providing the URI. OPeNDAP and KVP encodings of OGC data access service specifications might provide a basis for it. Concerning
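
    The idea of a uniform interface extended with generic geospatial operations can be sketched as follows (class, operation names and data are illustrative only, not part of any standard discussed in the record):

        class GeospatialResource:
            def __init__(self, uri, data):
                self.uri, self.data = uri, data  # identification via URI

            def retrieve(self):                  # generic CRUD-style verb
                return self.data

            def subset(self, bbox):
                # Candidate generic geospatial verb: clip to a bounding box.
                (x0, y0), (x1, y1) = bbox
                return {p: v for p, v in self.data.items()
                        if x0 <= p[0] <= x1 and y0 <= p[1] <= y1}

        r = GeospatialResource("http://example.org/temperature",
                               {(0, 0): 15.2, (5, 5): 17.8})
        print(r.subset(((0, 0), (2, 2))))        # {(0, 0): 15.2}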

  12. Regulating Search Engines: Taking Stock And Looking Ahead

    OpenAIRE

    Gasser, Urs

    2006-01-01

    Since the creation of the first pre-Web Internet search engines in the early 1990s, search engines have become almost as important as email as a primary online activity. Arguably, search engines are among the most important gatekeepers in today's digitally networked environment. Thus, it does not come as a surprise that the evolution of search technology and the diffusion of search engines have been accompanied by a series of conflicts among stakeholders such as search operators, content crea...

  13. Using declarative workflow languages to develop process-centric web applications

    NARCIS (Netherlands)

    Bernardi, M.L.; Cimitile, M.; Di Lucca, G.A.; Maggi, F.M.

    2012-01-01

    Nowadays, process-centric Web Applications (WAs) are extensively used in contexts where multi-user, coordinated work is required. Recently, Model Driven Engineering (MDE) techniques have been investigated for the development of this kind of applications. However, there are still some open issues.

  14. An Innovative Approach for online Meta Search Engine Optimization

    OpenAIRE

    Manral, Jai; Hossain, Mohammed Alamgir

    2015-01-01

    This paper presents an approach to identify efficient techniques used in Web Search Engine Optimization (SEO). Understanding SEO factors which can influence page ranking in search engine is significant for webmasters who wish to attract large number of users to their website. Different from previous relevant research, in this study we developed an intelligent Meta search engine which aggregates results from various search engines and ranks them based on several important SEO parameters. The r...

  15. What Snippets Say About Pages in Federated Web Search

    NARCIS (Netherlands)

    Demeester, Thomas; Nguyen, Dong-Phuong; Trieschnigg, Rudolf Berend; Develder, Chris; Hiemstra, Djoerd; Hou, Yuexian; Nie, Jian-Yun; Sun, Le; Wang, Bo; Zhang, Peng

    2012-01-01

    What is the likelihood that a Web page is considered relevant to a query, given the relevance assessment of the corresponding snippet? Using a new federated IR test collection that contains search results from over a hundred search engines on the internet, we are able to investigate such research

  16. Radiation protection and environmental radioactivity. A voyage to the World Wide Web for beginners; Strahlenschutz und Umweltradioaktivitaet im Internet. Eine Reise in das World Wide Web fuer Anfaenger

    Energy Technology Data Exchange (ETDEWEB)

    Weimer, S [Landesanstalt fuer Umweltschutz Baden-Wuerttemberg, Referat 'Umweltradioaktivitaet, Strahlenschutz' (Germany)]

    1998-07-01

    Following the enormous growth of the Internet service 'World Wide Web', there has also been strong growth in the number of web sites connected with radiation protection. An introduction is given to some practical basics of the WWW. The structure of WWW addresses and navigating through the web with hyperlinks is explained. Furthermore, some search engines are presented. The paper lists a number of WWW addresses of interesting sites with radiological protection information. (orig.)

  17. A NEW APPROACH FOR IMPROVING QUALITY OF WEB APPLICATIONS USING DESIGN PATTERNS

    OpenAIRE

    J. Srikanth R. Savithri

    2012-01-01

    Design patterns are descriptions of communicating objects and classes that are customized to solve a general design problem in a particular context; they describe the problem and its corresponding solution. Professional software engineers always use design patterns to introduce abstractions in software, and with them they can build complex web applications. The right adoption of design patterns while designing web applications can promote factors like reusability and consistency of th...

  18. Tracing agents and other automatic sampling procedures for the World Wide Web

    OpenAIRE

    Aguillo, Isidro F.

    1999-01-01

    Many search engines and recovery tools are not suitable for making samples of web resources for quantitative analysis. The increasing size of the web and its hypertextual nature offer opportunities for a novel approach. A new generation of recovery tools involving tracing hypertext links from selected sites is very promising:
    - offering capabilities to automate tasks
    - extracting large samples of high pertinence
    - ready to use in standard database formats
    - selecting additional resour...

  19. An ant colony optimization based feature selection for web page classification.

    Science.gov (United States)

    Saraç, Esra; Özel, Selma Ayşe

    2014-01-01

    The increased popularity of the web has caused the inclusion of huge amount of information to the web, and as a result of this explosive information growth, automated web page classification systems are needed to improve search engines' performance. Web pages have a large number of features such as HTML/XML tags, URLs, hyperlinks, and text contents that should be considered during an automated classification process. The aim of this study is to reduce the number of features to be used to improve runtime and accuracy of the classification of web pages. In this study, we used an ant colony optimization (ACO) algorithm to select the best features, and then we applied the well-known C4.5, naive Bayes, and k nearest neighbor classifiers to assign class labels to web pages. We used the WebKB and Conference datasets in our experiments, and we showed that using the ACO for feature selection improves both accuracy and runtime performance of classification. We also showed that the proposed ACO based algorithm can select better features with respect to the well-known information gain and chi square feature selection methods.

  20. Impact of Commercial Search Engines and International Databases on Engineering Teaching and Research

    Science.gov (United States)

    Chanson, Hubert

    2007-01-01

    For the last three decades, the engineering higher education and professional environments have been completely transformed by the "electronic/digital information revolution" that has included the introduction of personal computer, the development of email and world wide web, and broadband Internet connections at home. Herein the writer compares…

  1. Internet-based dimensional verification system for reverse engineering processes

    International Nuclear Information System (INIS)

    Song, In Ho; Kim, Kyung Don; Chung, Sung Chong

    2008-01-01

    This paper proposes a design methodology for a Web-based collaborative system applicable to reverse engineering processes in a distributed environment. By using the developed system, design reviewers of new products are able to confirm geometric shapes, inspect dimensional information of products through measured point data, and exchange views with other design reviewers on the Web. In addition, it is applicable to verifying accuracy of production processes by manufacturing engineers. Functional requirements for designing this Web-based dimensional verification system are described in this paper. ActiveX-server architecture and OpenGL plug-in methods using ActiveX controls realize the proposed system. In the developed system, visualization and dimensional inspection of the measured point data are done directly on the Web: conversion of the point data into a CAD file or a VRML form is unnecessary. Dimensional verification results and design modification ideas are uploaded to markups and/or XML files during collaboration processes. Collaborators review the markup results created by others to produce a good design result on the Web. The use of XML files allows information sharing on the Web to be independent of the platform of the developed system. It is possible to diversify the information sharing capability among design collaborators. Validity and effectiveness of the developed system has been confirmed by case studies

  2. Internet-based dimensional verification system for reverse engineering processes

    Energy Technology Data Exchange (ETDEWEB)

    Song, In Ho [Ajou University, Suwon (Korea, Republic of); Kim, Kyung Don [Small Business Corporation, Suwon (Korea, Republic of); Chung, Sung Chong [Hanyang University, Seoul (Korea, Republic of)

    2008-07-15

    This paper proposes a design methodology for a Web-based collaborative system applicable to reverse engineering processes in a distributed environment. By using the developed system, design reviewers of new products are able to confirm geometric shapes, inspect dimensional information of products through measured point data, and exchange views with other design reviewers on the Web. In addition, it is applicable to verifying accuracy of production processes by manufacturing engineers. Functional requirements for designing this Web-based dimensional verification system are described in this paper. ActiveX-server architecture and OpenGL plug-in methods using ActiveX controls realize the proposed system. In the developed system, visualization and dimensional inspection of the measured point data are done directly on the Web: conversion of the point data into a CAD file or a VRML form is unnecessary. Dimensional verification results and design modification ideas are uploaded to markups and/or XML files during collaboration processes. Collaborators review the markup results created by others to produce a good design result on the Web. The use of XML files allows information sharing on the Web to be independent of the platform of the developed system. It is possible to diversify the information sharing capability among design collaborators. Validity and effectiveness of the developed system has been confirmed by case studies

  3. [Improving vaccination social marketing by monitoring the web].

    Science.gov (United States)

    Ferro, A; Bonanni, P; Castiglia, P; Montante, A; Colucci, M; Miotto, S; Siddu, A; Murrone, L; Baldo, V

    2014-01-01

    Immunisation is one of the most important and cost-effective interventions in Public Health because of its significant positive impact on population health. However, since Jenner's discovery there has always been a lively debate between supporters and opponents of vaccination; today the anti-vaccination movement spreads its message mostly on the web, disseminating inaccurate data through blogs and forums and increasing vaccine rejection. In this context, the Società Italiana di Igiene (SItI) created a web project to fight misinformation about vaccinations on the web through a series of information tools, including scientific articles, educational information, videos and multimedia presentations. The web portal (http://www.vaccinarsi.org) was published in May 2013, and over one hundred web pages related to vaccinations are now available. Recently a forum, a periodic newsletter and a Twitter page have been created. There has been an average of 10,000 hits per month, and currently our users are mostly healthcare professionals. The visibility of the site is very good: it currently ranks first in the Google search engine when typing the word "vaccinarsi". The results of the first four months of activity are extremely encouraging and show the importance of this project; furthermore, an application for quality certification by independent international organizations has been submitted.

  4. [Biomedical information on the internet using search engines. A one-year trial].

    Science.gov (United States)

    Corrao, Salvatore; Leone, Francesco; Arnone, Sabrina

    2004-01-01

    The internet is a communication medium and content distributor that provides information in the general sense, but it can be of great utility for the search and retrieval of biomedical information. Search engines are a great help in rapidly finding information on the net. However, we do not know whether general search engines and meta-search engines are reliable for finding useful and validated biomedical information. The aim of our study was to verify the reproducibility of a search by key words ("pediatric" or "evidence") using 9 international search engines and 1 meta-search engine at baseline and after a one-year period. We analysed the first 20 citations output by each search. We evaluated the formal quality of the Web sites and their domain extensions. Moreover, we compared the output of each search at the start of the study and after one year, considering the number of Web sites cited again as a criterion of reliability. We found some interesting results, which are reported throughout the text. Our findings point out the extreme dynamicity of information on the Web, and for this reason we advise great caution when using search and meta-search engines as tools for searching and retrieving reliable biomedical information. On the other hand, some search and meta-search engines could be very useful as a first step, for better defining a search and for finding institutional Web sites. This paper encourages a more conscious approach to the universe of biomedical information on the internet.

  5. Eysenbach, Tuische and Diepgen’s Evaluation of Web Searching for Identifying Unpublished Studies for Systematic Reviews: An Innovative Study Which is Still Relevant Today.

    Directory of Open Access Journals (Sweden)

    Simon Briscoe

    2016-09-01

    Full Text Available A Review of: Eysenbach, G., Tuische, J. & Diepgen, T.L. (2001. Evaluation of the usefulness of Internet searches to identify unpublished clinical trials for systematic reviews. Medical Informatics and the Internet in Medicine, 26(3, 203-218. http://dx.doi.org/10.1080/14639230110075459 Objective – To consider whether web searching is a useful method for identifying unpublished studies for inclusion in systematic reviews. Design – Retrospective web searches using the AltaVista search engine were conducted to identify unpublished studies – specifically, clinical trials – for systematic reviews which did not use a web search engine. Setting – The Department of Clinical Social Medicine, University of Heidelberg, Germany. Subjects – n/a Methods – Pilot testing of 11 web search engines was carried out to determine which could handle complex search queries. Pre-specified search requirements included the ability to handle Boolean and proximity operators, and truncation searching. A total of seven Cochrane systematic reviews were randomly selected from the Cochrane Library Issue 2, 1998, and their bibliographic database search strategies were adapted for the web search engine, AltaVista. Each adaptation combined search terms for the intervention, problem, and study type in the systematic review. Hints to planned, ongoing, or unpublished studies retrieved by the search engine, which were not cited in the systematic reviews, were followed up by visiting websites and contacting authors for further details when required. The authors of the systematic reviews were then contacted and asked to comment on the potential relevance of the identified studies. Main Results – Hints to 14 unpublished and potentially relevant studies, corresponding to 4 of the 7 randomly selected Cochrane systematic reviews, were identified. Out of the 14 studies, 2 were considered irrelevant to the corresponding systematic review by the systematic review authors. The

  6. Facilitation by ecosystem engineers enhances nutrient effects in an intertidal system

    NARCIS (Netherlands)

    Eriksson, B.K.; Westra, J.; van Gerwen, I.; Weerman, E.; van der Heide, T.; van der Zee, E.; van de Koppel, J.; Olff, H.; Piersma, T.; Donadi, S.

    2017-01-01

    Ecosystem engineering research has recently demonstrated the fundamental importance of non-trophic interactions for food-web structure. Particularly, by creating benign conditions in stressful environments, ecosystem engineers create hot beds of elevated levels of recruitment, growth, and survival.

  7. Beginning ASP.NET Web Pages with WebMatrix

    CERN Document Server

    Brind, Mike

    2011-01-01

    Learn to build dynamic web sites with Microsoft WebMatrix. Microsoft WebMatrix is designed to make developing dynamic ASP.NET web sites much easier. This complete Wrox guide shows you what it is, how it works, and how to get the best from it right away. It covers all the basic foundations and also introduces HTML, CSS, and Ajax using jQuery, giving beginning programmers a firm foundation for building dynamic web sites. Examines how WebMatrix is expected to become the new recommended entry-level tool for developing web sites using ASP.NET. Arms beginning programmers, students, and educators with all they need to get started.

  8. Non-visual Web Browsing: Beyond Web Accessibility.

    Science.gov (United States)

    Ramakrishnan, I V; Ashok, Vikas; Billah, Syed Masum

    2017-07-01

    People with vision impairments typically use screen readers to browse the Web. To facilitate non-visual browsing, web sites must be made accessible to screen readers, i.e., all the visible elements in the web site must be readable by the screen reader. But even if web sites are accessible, screen-reader users may not find them easy to use or easy to navigate. For example, they may not be able to locate the desired information without having to listen to a great deal of irrelevant content. These issues go beyond web accessibility and directly impact web usability. Several techniques have been reported in the accessibility literature for making the Web usable for screen reading. This paper is a review of these techniques. Interestingly, the review reveals that understanding the semantics of the web content is the overarching theme that drives these techniques for improving web usability.

  9. [Multi-course web-learning system for supporting students of medical technology].

    Science.gov (United States)

    Honma, Satoru; Wakamatsu, Hidetoshi; Kurihara, Yuriko; Yoshida, Shoko; Sakai, Nobue

    2013-05-01

    A Web-Learning system was developed to support students' self-directed study for the national qualification examination and for medical engineering practice. Results from small tests in various situations suggest that unit-learning systems are more effective, especially in the early stage of self-study. In addition, questionnaire responses suggest that students' motivation is related to the number of questions in the system: the fewer the questions, the more readily students work through them and the higher their motivation. The system was therefore extended to enable students to study various subjects and/or units by themselves, and it lets them obtain learning effects more easily through exercises during lectures. The effectiveness of the system was investigated on the medical subjects installed in it. The questions on medical engineering and pathological histology were divided into several groups, from which sixteen Web-Learning subsystems were composed for practical application. These unit-learning systems proved considerably more useful for most students than the overall Web-Learning system.

  10. Minimalist instruction for learning to search the World Wide Web

    NARCIS (Netherlands)

    Lazonder, Adrianus W.

    2001-01-01

    This study examined the efficacy of minimalist instruction to develop self-regulatory skills involved in Web searching. Two versions of minimalist self-regulatory skill instruction were compared to a control group that was merely taught procedural skills to operate the search engine. Acquired skills…

  11. Omicseq: a web-based search engine for exploring omics datasets

    Science.gov (United States)

    Sun, Xiaobo; Pittard, William S.; Xu, Tianlei; Chen, Li; Zwick, Michael E.; Jiang, Xiaoqian; Wang, Fusheng

    2017-01-01

    Abstract The development and application of high-throughput genomics technologies has resulted in massive quantities of diverse omics data that continue to accumulate rapidly. These rich datasets offer unprecedented and exciting opportunities to address long standing questions in biomedical research. However, our ability to explore and query the content of diverse omics data is very limited. Existing dataset search tools rely almost exclusively on the metadata. A text-based query for gene name(s) does not work well on datasets wherein the vast majority of their content is numeric. To overcome this barrier, we have developed Omicseq, a novel web-based platform that facilitates the easy interrogation of omics datasets holistically to improve ‘findability’ of relevant data. The core component of Omicseq is trackRank, a novel algorithm for ranking omics datasets that fully uses the numerical content of the dataset to determine relevance to the query entity. The Omicseq system is supported by a scalable and elastic, NoSQL database that hosts a large collection of processed omics datasets. In the front end, a simple, web-based interface allows users to enter queries and instantly receive search results as a list of ranked datasets deemed to be the most relevant. Omicseq is freely available at http://www.omicseq.org. PMID:28402462
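
    trackRank itself is not described in detail in this record; a generic stand-in that ranks datasets by how prominently the query gene scores within each dataset's numeric content might look like this (names and data are invented):

```python
import numpy as np

def rank_datasets(datasets, gene):
    """Score each dataset by the percentile of the query gene within the
    dataset's own numeric signal, then sort descending. Illustrative only."""
    scores = {}
    for name, values in datasets.items():          # values: {gene: signal}
        signal = np.array(list(values.values()))
        g = values.get(gene)
        scores[name] = float((signal < g).mean()) if g is not None else 0.0
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

demo = {"ds1": {"TP53": 9.1, "BRCA1": 2.0, "EGFR": 1.2},
        "ds2": {"TP53": 0.3, "BRCA1": 5.5, "EGFR": 4.8}}
print(rank_datasets(demo, "TP53"))   # ds1 ranks first
```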

  12. A Web Search on Environmental Topics: What Is the Role of Ranking?

    Science.gov (United States)

    Filisetti, Barbara; Mascaretti, Silvia; Limina, Rosa Maria; Gelatti, Umberto

    2013-01-01

    Abstract Background: Although the Internet is easy to use, the mechanisms and logic behind a Web search are often unknown. Reliable information can be obtained, but it may not be visible if the Web site is not located in the first positions of the search results. The possible risks of adverse health effects arising from environmental hazards are issues of increasing public interest, so information about these risks, particularly on topics for which there is no scientific evidence, is crucial. The aim of this study was to investigate whether the presentation of information on some environmental health topics differed among various search engines, assuming that the most reliable information should come from institutional Web sites. Materials and Methods: Five search engines were used: Google, Yahoo!, Bing, Ask, and AOL. The following topics were searched in combination with the word “health”: “nuclear energy,” “electromagnetic waves,” “air pollution,” “waste,” and “radon.” For each topic three key words were used. The first 30 search results for each query were considered. The ranking variability among the search engines and the type of search results were analyzed for each topic and for each key word. The ranking of institutional Web sites was given particular consideration. Results: Variable results were obtained when surfing the Internet on different environmental health topics. Multivariate logistic regression analysis showed that institutional Web sites were more likely to appear in the first 10 positions when searching for radon and air pollution than for nuclear power (odds ratio=3.4, 95% confidence interval 2.1–5.4 and odds ratio=2.9, 95% confidence interval 1.8–4.7, respectively), and also when using Google compared with Bing (odds ratio=3.1, 95% confidence interval 1.9–5.1). Conclusions: The increasing use of online information could play an important role in forming opinions, and Web users should be aware of how ranking shapes the information they see.
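
    The reported odds ratios follow the usual 2x2-table arithmetic. A quick check with made-up counts (not the study's data):

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """OR and Wald 95% CI for a 2x2 table: a/b = institutional sites in the
    top 10 or not for one topic, c/d the same for the comparison topic."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, (lo, hi)

print(odds_ratio_ci(60, 40, 30, 70))  # OR = 3.5, CI roughly (1.9, 6.3)
```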

  13. WAsP engineering 2000

    DEFF Research Database (Denmark)

    Mann, J.; Ott, Søren; Jørgensen, B.H.

    2002-01-01

    This report summarizes the findings of the EFP project WAsP Engineering Version 2000. The main product of this project is the computer program WAsP Engineering, which is used for the estimation of extreme wind speeds, wind shears, profiles, and turbulence in complex terrain. At the web page http://www.waspengineering.dk more information on the program can be obtained and a copy of the manual can be downloaded. The report contains a complete description of the turbulence modelling in moderately complex terrain implemented in WAsP Engineering, together with experimental validation of the model and comparison with spectra from engineering codes. Some shortcomings of the linear flow model LINCOM, which is at the core of WAsP Engineering, are pointed out and modifications to eliminate the problem are presented. The global database of meteorological "reanalysis" data from NCAP/NCEP is used to estimate…

  14. Grokker, KartOO, Addict-o-Matic and More: Really Different Search Engines

    Science.gov (United States)

    Descy, Don E.

    2009-01-01

    There are hundreds of unique search engines in the United States and thousands of unique search engines around the world. If one also counts search engines designed just to search particular web sites, the number is in the hundreds of thousands. This article looks at: (1) clustering search engines, such as KartOO (www.kartoo.com) and Grokker…

  15. Facilitation by ecosystem engineers enhances nutrient effects in an intertidal system

    NARCIS (Netherlands)

    Eriksson, Britas Klemens; Westra, Jocelle; van Gerwen, Imke; Weerman, Ellen; van der Zee, Els; van der Heide, Tjisse; van de Koppel, Johan; Olff, Han; Piersma, Theunis; Donadi, Serena

    2017-01-01

    Ecosystem engineering research has recently demonstrated the fundamental importance of non-trophic interactions for food-web structure. Particularly, by creating benign conditions in stressful environments, ecosystem engineers create hot beds of elevated levels of recruitment, growth, and survival

  16. AdaFF: Adaptive Failure-Handling Framework for Composite Web Services

    Science.gov (United States)

    Kim, Yuna; Lee, Wan Yeon; Kim, Kyong Hoon; Kim, Jong

    In this paper, we propose a novel Web service composition framework which dynamically accommodates various failure recovery requirements. In the proposed framework, called Adaptive Failure-handling Framework (AdaFF), failure-handling submodules are prepared during the design of a composite service, and some of them are systematically selected and automatically combined with the composite Web service at service instantiation, in accordance with the requirements of individual users. In contrast, existing frameworks cannot adapt failure-handling behaviors to users' requirements. AdaFF rapidly delivers a composite service supporting requirement-matched failure handling without manual development, and contributes to flexible composite Web service design in that service architects need not concern themselves with failure handling or the variable requirements of users. As proof of concept, we implemented a prototype of AdaFF, which automatically generates a composite service instance in the Web Services Business Process Execution Language (WS-BPEL) according to a user requirement specified in XML format and executes the generated instance on the ActiveBPEL engine.
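
    The instantiation step described above (picking a prepared failure-handling submodule and binding it to the composite service) can be sketched in a few lines. The handler names and requirement format are hypothetical; AdaFF's actual WS-BPEL generation is not modelled here:

```python
def retry(invoke, *args, attempts=3):
    """Re-invoke the service a few times before giving up."""
    for i in range(attempts):
        try:
            return invoke(*args)
        except RuntimeError:
            if i == attempts - 1:
                raise

def failover(invoke, *args, backup=None):
    """Fall back to a backup service on failure."""
    try:
        return invoke(*args)
    except RuntimeError:
        return backup(*args)

HANDLERS = {"retry": retry, "failover": failover}

def instantiate(composite_service, requirement):
    """Bind the requirement-matched failure-handling submodule to the service."""
    handler = HANDLERS[requirement]
    return lambda *args, **kw: handler(composite_service, *args, **kw)

def flaky():
    raise RuntimeError("service down")

service = instantiate(flaky, "failover")
print(service(backup=lambda: "backup result"))  # backup result
```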

  17. Analysis and Testing of Ajax-based Single-page Web Applications

    NARCIS (Netherlands)

    Mesbah, A.

    2009-01-01

    This dissertation has focused on better understanding the shifting web paradigm and the consequences of moving from the classical multi-page model to an Ajax-based single-page style. Specifically to that end, this work has examined this new class of software from three main software engineering perspectives.

  18. Web Caching

    Indian Academy of Sciences (India)

    leveraged through Web caching technology. Specifically, Web caching becomes an ... Web routing can improve the overall performance of the Internet. Web caching is similar to memory system caching - a Web cache stores Web resources in ...

  19. Quality of web-based information on cannabis addiction.

    Science.gov (United States)

    Khazaal, Yasser; Chatton, Anne; Cochand, Sophie; Zullino, Daniele

    2008-01-01

    This study evaluated the quality of Web-based information on cannabis use and addiction and investigated particular content quality indicators. Three keywords ("cannabis addiction," "cannabis dependence," and "cannabis abuse") were entered into two popular World Wide Web search engines. Websites were assessed with a standardized proforma designed to rate sites on the basis of accountability, presentation, interactivity, readability, and content quality. The "Health on the Net" (HON) quality label and DISCERN scale scores were used to verify their efficiency as quality indicators. Of the 94 Websites identified, 57 were included. Most were commercial sites. Based on the outcome measures used, the overall quality of the sites turned out to be poor. A global score (the sum of the accountability, interactivity, content quality and esthetic criteria) proved to be a good content quality indicator. While cannabis education Websites for patients are widespread, their global quality is poor, and there is a need for better evidence-based information about cannabis use and addiction on the Web.
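
    The global score mentioned above is a plain sum of subscores. A trivial sketch of that arithmetic (the value ranges are assumptions; the paper's exact scales are not reproduced in this record):

```python
def global_score(site):
    """Sum of the four subscores reported to work as a quality indicator.
    The score ranges are assumptions, not the paper's exact scales."""
    return (site["accountability"] + site["interactivity"]
            + site["content_quality"] + site["esthetics"])

example = {"accountability": 4, "interactivity": 2,
           "content_quality": 7, "esthetics": 3}
print(global_score(example))  # 16
```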

  20. The internet and intelligent machines: search engines, agents and robots

    International Nuclear Information System (INIS)

    Achenbach, S.; Alfke, H.

    2000-01-01

    The internet plays an important role in a growing number of medical applications. Finding relevant information is not always easy as the amount of available information on the Web is rising quickly. Even the best search engines can only collect links to a fraction of all existing Web pages, and many of these indexed documents have been changed or deleted. The vast majority of information on the Web is not searchable with conventional methods. New search strategies, technologies and standards are combined in Intelligent Search Agents (ISAs) and Robots, which can retrieve the desired information in a targeted way. Conclusion: The article describes differences between ISAs and conventional search engines and how communication between agents improves their ability to find information. Examples of existing ISAs are given and the possible influence on current and future work in radiology is discussed. (orig.)

  1. AADL and Model-based Engineering

    Science.gov (United States)

    2014-10-20

    Slide excerpts: MDE and MDA with UML; automatically generated documents. The case is made for a language for architecture modeling that is strongly typed and well-defined. Contact: Software Engineering Institute, Pittsburgh, PA; web: wiki.sei.cmu.edu/aadl, www.aadl.info

  2. Keeping Dublin Core Simple: Cross-Domain Discovery or Resource Description?; First Steps in an Information Commerce Economy: Digital Rights Management in the Emerging E-Book Environment; Interoperability: Digital Rights Management and the Emerging EBook Environment; Searching the Deep Web: Direct Query Engine Applications at the Department of Energy.

    Science.gov (United States)

    Lagoze, Carl; Neylon, Eamonn; Mooney, Stephen; Warnick, Walter L.; Scott, R. L.; Spence, Karen J.; Johnson, Lorrie A.; Allen, Valerie S.; Lederman, Abe

    2001-01-01

    Includes four articles that discuss Dublin Core metadata, digital rights management and electronic books, including interoperability; and directed query engines, a type of search engine designed to access resources on the deep Web that is being used at the Department of Energy. (LRW)

  3. SWHi system description : A case study in information retrieval, inference, and visualization in the Semantic Web

    NARCIS (Netherlands)

    Fahmi, Ismail; Zhang, Junte; Ellermann, Henk; Bouma, Gosse; Franconi, E; Kifer, M; May, W

    2007-01-01

    Search engines have become the most popular tools for finding information on the Internet. A real-world Semantic Web application can benefit from this by combining its features with some features from search engines. In this paper, we describe methods for indexing and searching a populated ontology

  4. Comparing the diversity of information by word-of-mouth vs. web spread

    Science.gov (United States)

    Sela, Alon; Shekhtman, Louis; Havlin, Shlomo; Ben-Gal, Irad

    2016-06-01

    Many studies have explored spreading and diffusion through complex networks. The following study examines a specific case of the spreading of opinions in modern society through two spreading schemes, defined as being either through “word of mouth” (WOM) or through online search engines (WEB). We combine modelling with real experimental results and compare the opinions people adopt through exposure to their friends' opinions with the opinions they adopt when using a search engine based on the PageRank algorithm. A simulation study shows that when members of a population adopt decisions through the WEB scheme, the population ends up with a few dominant views, while other views are barely expressed. In contrast, when members adopt decisions based on the WOM scheme, there is a far more diverse distribution of opinions in the population. The simulation results are further supported by an online experiment which finds that people searching for information through a search engine end up with far more homogeneous opinions than those asking their friends.
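
    A toy version of the simulation contrast described above: WOM adopters copy a random peer, while WEB adopters copy from a popularity-weighted global list standing in for a PageRank-ordered results page. The model and parameters are invented, intended only to reproduce the qualitative effect:

```python
import random
from collections import Counter

def simulate(scheme, n=500, opinions=20, steps=5000, seed=1):
    """Toy spread model: 'WOM' copies a random peer; 'WEB' copies from a
    popularity-weighted list standing in for PageRank-ordered results."""
    rng = random.Random(seed)
    pop = [rng.randrange(opinions) for _ in range(n)]
    for _ in range(steps):
        i = rng.randrange(n)
        if scheme == "WOM":
            pop[i] = pop[rng.randrange(n)]
        else:  # WEB: already-popular opinions are disproportionately visible
            counts = Counter(pop)
            pop[i] = rng.choices(list(counts), [c ** 2 for c in counts.values()])[0]
    return len(set(pop))

print("WOM distinct opinions:", simulate("WOM"))   # stays relatively diverse
print("WEB distinct opinions:", simulate("WEB"))   # collapses toward a few views
```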

  5. Constructing a web recommender system using web usage mining and user’s profiles

    Directory of Open Access Journals (Sweden)

    T. Mombeini

    2014-12-01

    The World Wide Web is a great source of information and is nowadays widely used, with its useful content changing dynamically. However, the large number of webpages often confuses users, and it is hard for them to find information matching their interests. It is therefore necessary to provide a system capable of guiding users towards their desired choices and services. Recommender systems search among a large collection of user interests and recommend those which are likely to be favoured most by the user. Web usage mining operates on web server records, which include user search results, so recommender servers use web usage mining to predict users' browsing patterns and recommend them in the form of a suggestion list. In this article, a recommender system based on the web usage mining phases (online and offline) is proposed. In the offline phase, the first step is to analyse user access records to identify user sessions. Next, user profiles are built from server records based on the frequency of access to pages, the time spent by the user on each page, and the date of the page view. Date is of importance because users are more likely to request new pages than old ones: users mostly look for new information, and old pages are less likely to be viewed. Following the creation of user profiles, users are grouped into clusters using the Fuzzy C-means clustering algorithm and the S(c) criterion, based on their similarities. In the online phase, a neural network identifies the suggestion model, and online suggestions for the active user are generated by the suggestion module. Search engines analyse the suggestion lists based on the rate of user interest in pages and page rank, and finally suggest appropriate pages to the active user. Experiments show that the proposed method predicts users' recently requested pages with higher accuracy.
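
    The offline profile-building step (frequency, dwell time, and recency weighting) can be sketched as below; the field names, decay model, and half-life are assumptions, and the Fuzzy C-means clustering that follows it is omitted:

```python
import math, time
from collections import defaultdict

def build_profile(records, now=None, half_life_days=30.0):
    """Offline-phase sketch: weight each page by visit frequency, dwell time,
    and recency (newer views count more). Field names and the exponential
    decay model are assumptions, not the paper's exact formulas."""
    now = now or time.time()
    profile = defaultdict(float)
    for page, dwell_seconds, ts in records:
        age_days = (now - ts) / 86400.0
        recency = 0.5 ** (age_days / half_life_days)
        profile[page] += (1.0 + math.log1p(dwell_seconds)) * recency
    return dict(profile)

logs = [("/news", 120, time.time() - 1 * 86400),
        ("/docs", 30, time.time() - 40 * 86400)]
print(build_profile(logs))   # the recent /news visit dominates the profile
```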

  6. 'Sciencenet'--towards a global search and share engine for all scientific knowledge.

    Science.gov (United States)

    Lütjohann, Dominic S; Shah, Asmi H; Christen, Michael P; Richter, Florian; Knese, Karsten; Liebel, Urban

    2011-06-15

    Modern biological experiments create vast amounts of data which are geographically distributed. These datasets consist of petabytes of raw data and billions of documents. Yet to the best of our knowledge, a search engine technology that searches and cross-links all different data types in life sciences does not exist. We have developed a prototype distributed scientific search engine technology, 'Sciencenet', which facilitates rapid searching over this large data space. By 'bringing the search engine to the data', we do not require server farms. This platform also allows users to contribute to the search index and publish their large-scale data to support e-Science. Furthermore, a community-driven method guarantees that only scientific content is crawled and presented. Our peer-to-peer approach is sufficiently scalable for the science web without performance or capacity tradeoff. The free to use search portal web page and the downloadable client are accessible at: http://sciencenet.kit.edu. The web portal for index administration is implemented in ASP.NET, the 'AskMe' experiment publisher is written in Python 2.7, and the backend 'YaCy' search engine is based on Java 1.6.

  7. Increasing public understanding of transgenic crops through the World Wide Web.

    Science.gov (United States)

    Byrne, Patrick F; Namuth, Deana M; Harrington, Judy; Ward, Sarah M; Lee, Donald J; Hain, Patricia

    2002-07-01

    Transgenic crops are among the most controversial "science and society" issues of recent years. Because of the complex techniques involved in creating these crops and the polarized debate over their risks and benefits, a critical need has arisen for accessible and balanced information on this technology. World Wide Web sites offer several advantages for disseminating information on a fast-changing technical topic, including their global accessibility and their ability to update information frequently, incorporate multimedia formats, and link to networks of other sites. An alliance between two complementary web sites at Colorado State University and the University of Nebraska-Lincoln takes advantage of the web environment to help fill the need for public information on crop genetic engineering. This article describes the objectives and features of each site. Viewership data and other feedback have shown these web sites to be effective means of reaching public audiences on a complex scientific topic.

  8. PIA: An Intuitive Protein Inference Engine with a Web-Based User Interface.

    Science.gov (United States)

    Uszkoreit, Julian; Maerkens, Alexandra; Perez-Riverol, Yasset; Meyer, Helmut E; Marcus, Katrin; Stephan, Christian; Kohlbacher, Oliver; Eisenacher, Martin

    2015-07-02

    Protein inference connects the peptide spectrum matches (PSMs) obtained from database search engines back to proteins, which are typically at the heart of most proteomics studies. Different search engines yield different PSMs and thus different protein lists. Analysis of results from one or multiple search engines is often hampered by different data exchange formats and lack of convenient and intuitive user interfaces. We present PIA, a flexible software suite for combining PSMs from different search engine runs and turning these into consistent results. PIA can be integrated into proteomics data analysis workflows in several ways. A user-friendly graphical user interface can be run either locally or (e.g., for larger core facilities) from a central server. For automated data processing, stand-alone tools are available. PIA implements several established protein inference algorithms and can combine results from different search engines seamlessly. On several benchmark data sets, we show that PIA can identify a larger number of proteins at the same protein FDR when compared to that using inference based on a single search engine. PIA supports the majority of established search engines and data in the mzIdentML standard format. It is implemented in Java and freely available at https://github.com/mpc-bioinformatics/pia.
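
    PIA's algorithms are not detailed in this record; the sketch below shows the general shape of parsimony-style protein inference over PSMs merged from several search engines, using a greedy set-cover heuristic as a stand-in:

```python
def greedy_parsimony(psms):
    """Minimal protein-inference sketch: PSMs map peptides to candidate
    proteins; greedily pick proteins that explain the most uncovered
    peptides (an Occam's-razor heuristic, not PIA's actual algorithms)."""
    peptide_to_proteins = {}
    for peptide, proteins in psms:            # merged from several engines
        peptide_to_proteins.setdefault(peptide, set()).update(proteins)
    uncovered = set(peptide_to_proteins)
    inferred = []
    while uncovered:
        counts = {}
        for pep in uncovered:
            for prot in peptide_to_proteins[pep]:
                counts[prot] = counts.get(prot, 0) + 1
        best = max(counts, key=counts.get)
        inferred.append(best)
        uncovered -= {p for p in uncovered if best in peptide_to_proteins[p]}
    return inferred

psms = [("PEPTIDEA", {"P1"}), ("PEPTIDEB", {"P1", "P2"}), ("PEPTIDEC", {"P2"})]
print(greedy_parsimony(psms))  # ['P1', 'P2']
```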

  9. A Javascript GIS Platform Based on Invocable Geospatial Web Services

    Directory of Open Access Journals (Sweden)

    Konstantinos Evangelidis

    2018-04-01

    Semantic Web technologies have been increasingly adopted by the geospatial community during the last decade through the utilization of open standards for expressing and serving geospatial data. This has also been dramatically assisted by the ever-increasing access and usage of geographic mapping and location-based services via smart devices in people's daily activities. In this paper, we explore the developmental framework of a pure JavaScript client-side GIS platform exclusively based on invocable geospatial Web services. We also extend JavaScript utilization on the server side by deploying a node server acting as a bridge between open source WPS libraries and popular geoprocessing engines. The vehicle for such an exploration is a cross-platform Web browser capable of interpreting JavaScript commands to achieve interaction with geospatial providers. The tool is a generic Web interface providing capabilities for acquiring spatial datasets, composing layouts and applying geospatial processes. Ideally, the end user identifies the services which satisfy a geo-related need and puts them in the appropriate order. The final output may act as a potential collector of freely available geospatial web services, and its server-side components may exploit geospatial processing suppliers, composing in that way a lightweight, fully transparent, open Web GIS platform.

  10. The breathing of webs under repeated partial edge loading

    Czech Academy of Sciences Publication Activity Database

    Škaloud, Miroslav; Zörnerová, Marie; Urushadze, Shota

    2012-01-01

    Roč. 40, č. 1 (2012), s. 463-468 E-ISSN 1877-7058. [Steel structures and bridges. Podbanske, 26.09.2012-28.09.2012] R&D Projects: GA ČR GA103/08/1340 Institutional support: RVO:68378297 Keywords : slender webs * breathing * fatigue limit state * design * repeated partial edge loading Subject RIV: JM - Building Engineering

  11. Evaluating company growth potential using AI and web media data

    DEFF Research Database (Denmark)

    Droll, Andrew; Khan, Shahzad; Tanev, Stoyan

    2017-01-01

    The article focuses on adapting and validating the use of an existing web search and analytics engine to evaluate the growth and competitive potential of new technology start-ups and existing firms in the newly emerging precision medicine sector. The results are based on two different search samples … includes new technology firms in the same sector. The firms in the second sample were used as test cases examining whether their growth-related web search scores relate to the degree of their innovativeness. The second part of the study applied the same methodology to the real-time monitoring of firms…

  12. WebSelF: A Web Scraping Framework

    DEFF Research Database (Denmark)

    Thomsen, Jakob; Ernst, Erik; Brabrand, Claus

    2012-01-01

    We present WebSelF, a framework for web scraping which models the process of web scraping and decomposes it into four conceptually independent, reusable, and composable constituents. We have validated our framework through a full parameterized implementation that is flexible enough to capture previous work on web scraping. We have experimentally evaluated our framework and implementation in an experiment that evaluated several qualitatively different web scraping constituents (including previous work and combinations hereof) on about 11,000 HTML pages on daily versions of 17 web sites over a period of more than one year. Our framework solves three concrete problems with current web scraping, and our experimental results indicate that composition of previous and our new techniques achieves a higher degree of accuracy, precision and specificity than existing techniques alone.
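
    The record does not name the four constituents, so the sketch below only illustrates the general idea of composing small, reusable scraping steps (selection, validation) into a pipeline; the constituent functions are invented:

```python
from html.parser import HTMLParser

class TitleParser(HTMLParser):
    """Tiny 'selection' constituent: extract the <title> text from a page."""
    def __init__(self):
        super().__init__()
        self.in_title, self.title = False, ""
    def handle_starttag(self, tag, attrs):
        self.in_title = tag == "title"
    def handle_endtag(self, tag):
        if tag == "title":
            self.in_title = False
    def handle_data(self, data):
        if self.in_title:
            self.title += data

def select_title(html):
    parser = TitleParser()
    parser.feed(html)
    return parser.title.strip()

def validate_nonempty(value):        # a 'validation' constituent
    return value if value else None

def pipeline(page, *steps):          # composition of constituents
    value = page
    for step in steps:
        value = step(value)
        if value is None:            # a failed step short-circuits the pipeline
            break
    return value

print(pipeline("<html><title>Example</title></html>", select_title, validate_nonempty))
```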

  13. Development of a metal-clad advanced composite shear web design concept

    Science.gov (United States)

    Laakso, J. H.

    1974-01-01

    An advanced composite web concept was developed for potential application to the Space Shuttle Orbiter main engine thrust structure. The program consisted of design synthesis, analysis, detail design, element testing, and large scale component testing. A concept was sought that offered significant weight saving by the use of Boron/Epoxy (B/E) reinforced titanium plate structure. The desired concept was one that was practical and that utilized metal to efficiently improve structural reliability. The resulting development of a unique titanium-clad B/E shear web design concept is described. Three large scale components were fabricated and tested to demonstrate the performance of the concept: a titanium-clad plus or minus 45 deg B/E web laminate stiffened with vertical B/E reinforced aluminum stiffeners.

  14. Omicseq: a web-based search engine for exploring omics datasets.

    Science.gov (United States)

    Sun, Xiaobo; Pittard, William S; Xu, Tianlei; Chen, Li; Zwick, Michael E; Jiang, Xiaoqian; Wang, Fusheng; Qin, Zhaohui S

    2017-07-03

    The development and application of high-throughput genomics technologies has resulted in massive quantities of diverse omics data that continue to accumulate rapidly. These rich datasets offer unprecedented and exciting opportunities to address long standing questions in biomedical research. However, our ability to explore and query the content of diverse omics data is very limited. Existing dataset search tools rely almost exclusively on the metadata. A text-based query for gene name(s) does not work well on datasets wherein the vast majority of their content is numeric. To overcome this barrier, we have developed Omicseq, a novel web-based platform that facilitates the easy interrogation of omics datasets holistically to improve 'findability' of relevant data. The core component of Omicseq is trackRank, a novel algorithm for ranking omics datasets that fully uses the numerical content of the dataset to determine relevance to the query entity. The Omicseq system is supported by a scalable and elastic, NoSQL database that hosts a large collection of processed omics datasets. In the front end, a simple, web-based interface allows users to enter queries and instantly receive search results as a list of ranked datasets deemed to be the most relevant. Omicseq is freely available at http://www.omicseq.org. © The Author(s) 2017. Published by Oxford University Press on behalf of Nucleic Acids Research.

  15. BAIK– PROGRAMMING LANGUAGE BASED ON INDONESIAN LEXICAL PARSING FOR MULTITIER WEB DEVELOPMENT

    Directory of Open Access Journals (Sweden)

    Haris Hasanudin

    2012-05-01

    Business software development with global teams is increasing rapidly, and the programming language as a development tool plays an important role in global web development. A truly user-friendly programming language should be written in the local language of programmers whose native language is not English. This paper presents our design of the BAIK (Bahasa Anak Indonesia untuk Komputer) scripting language, whose syntax is modelled on Bahasa Indonesia, for multitier web development. The researchers propose the implementation of an Indonesian parsing engine and a binary search tree structure for the memory allocation of variables, and compose language features that support basic object-oriented programming, the Common Gateway Interface, HTML style manipulation and database connections. Our goal is to build a real programming language of simple structure for web development using Indonesian lexical words.
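
    The abstract's binary search tree for variable storage is a textbook structure; a minimal sketch of such a symbol table (illustrative, not BAIK's actual engine):

```python
class Node:
    def __init__(self, name, value):
        self.name, self.value = name, value
        self.left = self.right = None

class SymbolTable:
    """Binary-search-tree variable store, as the abstract describes
    for BAIK's memory allocation (illustrative, not the real engine)."""
    def __init__(self):
        self.root = None

    def set(self, name, value):
        def insert(node):
            if node is None:
                return Node(name, value)
            if name == node.name:
                node.value = value
            elif name < node.name:
                node.left = insert(node.left)
            else:
                node.right = insert(node.right)
            return node
        self.root = insert(self.root)

    def get(self, name):
        node = self.root
        while node:
            if name == node.name:
                return node.value
            node = node.left if name < node.name else node.right
        raise KeyError(name)

table = SymbolTable()
table.set("jumlah", 42)       # an Indonesian-named variable, as in BAIK programs
print(table.get("jumlah"))    # 42
```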

  16. Expert system for web based collaborative CAE

    Science.gov (United States)

    Hou, Liang; Lin, Zusheng

    2006-11-01

    An expert system for web-based collaborative CAE was developed based on knowledge engineering, a relational database and commercial FEA (finite element analysis) software. The architecture of the system is illustrated. In this system, experts' experience, theories, typical examples and other related knowledge used in the FEA pre-processing stage are categorized into analysis-process knowledge and object knowledge. An integrated knowledge model based on object-oriented and rule-based methods is then described, along with an integrated reasoning process based on CBR (case-based reasoning) and rule-based reasoning. Finally, the analysis process of this expert system in a web-based CAE application is illustrated, and the analysis of a machine tool column is presented to demonstrate the validity of the system.
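
    The integrated reasoning step (case-based retrieval plus rule-based reasoning) can be caricatured as follows; the features, rules, and stored case are invented for illustration:

```python
def retrieve_cases(cases, query, k=1):
    """Case-based step: nearest stored analysis cases by feature distance."""
    def dist(case):
        return sum((case["features"][f] - query[f]) ** 2 for f in query)
    return sorted(cases, key=dist)[:k]

RULES = [  # rule-based step: IF condition THEN advice (invented rules)
    (lambda q: q["thickness_mm"] < 5, "use shell elements"),
    (lambda q: q["thickness_mm"] >= 5, "use solid elements"),
]

def advise(cases, query):
    similar = retrieve_cases(cases, query)
    advice = [text for condition, text in RULES if condition(query)]
    return similar, advice

stored = [{"features": {"thickness_mm": 4, "load_kN": 10}, "plan": "shell mesh, 2 mm"}]
print(advise(stored, {"thickness_mm": 4, "load_kN": 12}))
```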

  17. 07051 Executive Summary -- Programming Paradigms for the Web: Web Programming and Web Services

    OpenAIRE

    Hull, Richard; Thiemann, Peter; Wadler, Philip

    2007-01-01

    The world-wide web raises a variety of new programming challenges. To name a few: programming at the level of the web browser, data-centric approaches, and attempts to automatically discover and compose web services. This seminar brought together researchers from the web programming and web services communities and strove to engage them in communication with each other. The seminar was held in an unusual style, in a mixture of short presentations and in-depth discussions.

  18. Semantic Web Requirements through Web Mining Techniques

    OpenAIRE

    Hassanzadeh, Hamed; Keyvanpour, Mohammad Reza

    2012-01-01

    In recent years, the Semantic Web has become a topic of active research in several fields of computer science and has been applied in a wide range of domains such as bioinformatics, the life sciences, and knowledge management. The two fast-developing research areas, the Semantic Web and web mining, can complement each other, and their different techniques can be used jointly or separately to solve issues in both areas. In addition, since shifting from the current web to the Semantic Web mainly depends on the enhancement…

  19. Radio-anatomy Atlas for delineation SIRIADE web site: features and 1 year results

    International Nuclear Information System (INIS)

    Denisa, F.; Pointreau, Y.

    2010-01-01

    3-D conformal radiotherapy is based on accurate target volume delineation. Knowledge of radio-anatomy is useful but sometimes difficult to obtain, and the sources of recommendations for volume definition are disparate. We therefore developed a free radio-anatomy web site dedicated to volume delineation for radiation oncologists (www.siriade.org). This web site is a search engine giving access to the delineation characteristics of the main tumours, illustrated with clinical cases. It does not aim to provide guidelines; its main purpose is to provide an iconographic training support with frequent updates. We present the features of this web site and one year of connection statistics. (authors)

  20. Climate Engine - Monitoring Drought with Google Earth Engine

    Science.gov (United States)

    Hegewisch, K.; Daudert, B.; Morton, C.; McEvoy, D.; Huntington, J. L.; Abatzoglou, J. T.

    2016-12-01

    Drought has adverse effects on society through reduced water availability and agricultural production and increased wildfire risk. An abundance of remotely sensed imagery and climate data are being collected in near-real time that can provide place-based monitoring and early warning of drought and related hazards. However, in an era of increasing wealth of earth observations, tools that quickly access, compute, and visualize archives, and provide answers at relevant scales to better inform decision-making are lacking. We have developed ClimateEngine.org, a web application that uses Google's Earth Engine platform to enable users to quickly compute and visualize real-time observations. A suite of drought indices allow us to monitor and track drought from local (30-meters) to regional scales and contextualize current droughts within the historical record. Climate Engine is currently being used by U.S. federal agencies and researchers to develop baseline conditions and impact assessments related to agricultural, ecological, and hydrological drought. Climate Engine is also working with the Famine Early Warning Systems Network (FEWS NET) to expedite monitoring agricultural drought over broad areas at risk of food insecurity globally.
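
    As a flavour of the kind of computation Climate Engine performs, a hedged sketch with the public Earth Engine Python client; the dataset ID, band, dates, and region are assumptions, and running it requires an authenticated Earth Engine account:

```python
# Requires `pip install earthengine-api` and an authenticated account;
# the dataset ID, band, dates, and region below are assumptions.
import ee

ee.Initialize()

ndvi = ee.ImageCollection("MODIS/006/MOD13A2").select("NDVI")
recent = ndvi.filterDate("2016-06-01", "2016-09-01").mean()
baseline = ndvi.filterDate("2000-03-01", "2015-12-31").mean()
anomaly = recent.subtract(baseline)   # negative values hint at drought stress

region = ee.Geometry.Rectangle([-120.0, 35.0, -118.0, 37.0])
print(anomaly.reduceRegion(ee.Reducer.mean(), region, 1000).getInfo())
```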

  1. Prey interception drives web invasion and spider size determines successful web takeover in nocturnal orb-web spiders.

    Science.gov (United States)

    Gan, Wenjin; Liu, Shengjie; Yang, Xiaodong; Li, Daiqin; Lei, Chaoliang

    2015-09-24

    A striking feature of web-building spiders is the use of silk to make webs, mainly for prey capture. However, building a web is energetically expensive and increases the risk of predation. To reduce such costs and still have access to abundant prey, some web-building spiders have evolved web invasion behaviour. In general, no consistent patterns of web invasion have emerged and the factors determining web invasion remain largely unexplored. Here we report web invasion among conspecifics in seven nocturnal species of orb-web spiders, and examined the factors determining the probability of webs that could be invaded and taken over by conspecifics. About 36% of webs were invaded by conspecifics, and 25% of invaded webs were taken over by the invaders. A web that was built higher and intercepted more prey was more likely to be invaded. Once a web was invaded, the smaller the size of the resident spider, the more likely its web would be taken over by the invader. This study suggests that web invasion, as a possible way of reducing costs, may be widespread in nocturnal orb-web spiders. © 2015. Published by The Company of Biologists Ltd.

  2. Prey interception drives web invasion and spider size determines successful web takeover in nocturnal orb-web spiders

    Directory of Open Access Journals (Sweden)

    Wenjin Gan

    2015-10-01

    A striking feature of web-building spiders is the use of silk to make webs, mainly for prey capture. However, building a web is energetically expensive and increases the risk of predation. To reduce such costs and still have access to abundant prey, some web-building spiders have evolved web invasion behaviour. In general, no consistent patterns of web invasion have emerged and the factors determining web invasion remain largely unexplored. Here we report web invasion among conspecifics in seven nocturnal species of orb-web spiders, and examined the factors determining the probability of webs that could be invaded and taken over by conspecifics. About 36% of webs were invaded by conspecifics, and 25% of invaded webs were taken over by the invaders. A web that was built higher and intercepted more prey was more likely to be invaded. Once a web was invaded, the smaller the size of the resident spider, the more likely its web would be taken over by the invader. This study suggests that web invasion, as a possible way of reducing costs, may be widespread in nocturnal orb-web spiders.

  3. Using Open Web APIs in Teaching Web Mining

    Science.gov (United States)

    Chen, Hsinchun; Li, Xin; Chau, M.; Ho, Yi-Jen; Tseng, Chunju

    2009-01-01

    With the advent of the World Wide Web, many business applications that utilize data mining and text mining techniques to extract useful business information on the Web have evolved from Web searching to Web mining. It is important for students to acquire knowledge and hands-on experience in Web mining during their education in information systems…

  4. An evaluation of web-based information.

    Science.gov (United States)

    Murphy, Rebecca; Frost, Susie; Webster, Peter; Schmidt, Ulrike

    2004-03-01

    To evaluate the quality of web-based information on the treatment of eating disorders and to investigate potential indicators of content quality. Two search engines were queried to obtain 15 commonly accessed websites about eating disorders. Two reviewers evaluated the characteristics, quality of content, and accountability of the sites. Intercorrelations between variables were calculated. The overall quality of the sites was poor based on the outcome measures used. All quality of content measures correlated with a measure of accountability (Silberg, W.M., Lundberg, G.D., & Mussachio, R.A., 1993). There is a lack of quality information on the treatment of eating disorders on the web. Although accountability criteria may be useful indicators of content quality, there is a need to investigate whether these can be usefully applied to other mental health areas. Copyright 2004 by Wiley Periodicals, Inc. Int J Eat Disord 35: 145-154, 2004.

  5. An architecture for diversity-aware search for medical web content.

    Science.gov (United States)

    Denecke, K

    2012-01-01

    The Web provides a huge source of information, including on medical and health-related issues. The content of medical social media data in particular can be diverse, owing to the background of an author, the source, or the topic. Diversity in this context means that a document covers different aspects of a topic or that a topic is described in different ways. In this paper, we introduce an approach that considers the diverse aspects of a search query when presenting retrieval results to a user. We introduce a system architecture for a diversity-aware search engine for retrieving medical information from the web. The diversity of retrieval results is assessed by calculating diversity measures that rely upon semantic information derived from a mapping to concepts of a medical terminology. Considering these measures, the result set is diversified by ranking more diverse texts higher. The methods and system architecture are implemented in a retrieval engine for medical web content, where the diversity measures reflect the diversity of aspects considered in a text and its type of information content, and are used for result presentation, filtering, and ranking. In a user evaluation we assess user satisfaction with an ordering of retrieval results that considers these diversity measures. The evaluation shows that diversity-aware retrieval that incorporates diversity measures in ranking can increase user satisfaction with retrieval results.
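
    The record does not spell out the diversity measures, so the sketch below uses a generic maximal-marginal-relevance style re-ranking as a stand-in: each next result is chosen to balance query relevance against similarity to results already selected. All names and scores are made up:

```python
def mmr_rerank(docs, sim_to_query, sim, lam=0.7):
    """Greedy diversity-aware ranking: trade off query relevance against
    similarity to already-selected results (an MMR-style stand-in, not the
    paper's concept-based measures)."""
    selected, rest = [], list(docs)
    while rest:
        best = max(rest, key=lambda d: lam * sim_to_query(d)
                   - (1 - lam) * max((sim(d, s) for s in selected), default=0.0))
        selected.append(best)
        rest.remove(best)
    return selected

docs = [{"id": 1, "aspects": {"symptoms"}}, {"id": 2, "aspects": {"symptoms"}},
        {"id": 3, "aspects": {"treatment"}}]
relevance = {1: 0.9, 2: 0.85, 3: 0.8}
jaccard = lambda a, b: len(a["aspects"] & b["aspects"]) / len(a["aspects"] | b["aspects"])
print([d["id"] for d in mmr_rerank(docs, lambda d: relevance[d["id"]], jaccard)])
# [1, 3, 2]: the treatment-aspect document is promoted above a near-duplicate
```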

  6. Surfing the Web for Science: Early Data on the Users and Uses of The Why Files.

    Science.gov (United States)

    Eveland, William P., Jr.; Dunwoody, Sharon

    1998-01-01

    This brief offers an initial look at one science site on the World Wide Web (The Why Files: http://whyfiles.news.wisc.edu) in order to consider the educational potential of this technology. The long-term goal of the studies of this site is to understand how the World Wide Web can be used to enhance science, mathematics, engineering, and technology…

  7. Advanced Visualization and Interactive Display Rapid Innovation and Discovery Evaluation Research Program task 8: Survey of WEBGL Graphics Engines

    Science.gov (United States)

    2015-01-01

    From the report's "Methods, Assumptions, and Procedures" section: a search of the internet covering web sites specializing in graphics, graphics engines, web browser applications, and games was conducted to survey available WebGL graphics engines.

  8. How to Improve Artificial Intelligence through Web

    Directory of Open Access Journals (Sweden)

    Adrian LUPASC

    2005-10-01

    Intelligent agents, intelligent software applications and artificial intelligence applications from AI service providers may make their way onto the Web in greater numbers as adaptive software, dynamic programming languages and learning algorithms are introduced into Web services. The evolution of Web architecture may allow intelligent applications to run directly on the Web through the introduction of XML, RDF and a logic layer. The Intelligent Wireless Web's significant potential for rapidly completing information transactions may make an important contribution to global worker productivity. Artificial intelligence can be defined as the study of the ways in which computers can be made to perform cognitive tasks. Examples of such tasks include understanding natural language statements, recognizing visual patterns or scenes, diagnosing diseases or illnesses, solving mathematical problems, performing financial analyses, and learning new procedures for solving problems. The term expert system can be considered to denote a particular type of knowledge-based system: one in which the knowledge is deliberately represented "as it is". Expert systems are applications that make decisions in real-life situations that would otherwise be made by a human expert. They are programs designed to mimic human performance at specialized, constrained problem-solving tasks. They are constructed as a collection of IF-THEN production rules combined with a reasoning engine that applies those rules, either in a forward or backward direction, to specific problems.
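
    The closing description (IF-THEN production rules plus a reasoning engine applied in a forward direction) corresponds to forward chaining, which fits in a few lines; the rules below are invented for illustration:

```python
def forward_chain(facts, rules):
    """Apply IF-THEN production rules until no new facts can be derived."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

rules = [({"fever", "rash"}, "suspect_measles"),
         ({"suspect_measles"}, "order_serology")]
print(forward_chain({"fever", "rash"}, rules))
# {'fever', 'rash', 'suspect_measles', 'order_serology'}
```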

  9. A Novel Method for Live Debugging of Production Web Applications by Dynamic Resource Replacement

    OpenAIRE

    Khalid Al-Tahat; Khaled Zuhair Mahmoud; Ahmad Al-Mughrabi

    2014-01-01

    This paper proposes a novel methodology for enabling debugging and tracing of production web applications without affecting their normal flow and functionality. This method of debugging enables developers and maintenance engineers to replace a set of existing resources, such as images, server-side scripts, and cascading style sheets, with another set of resources per web session. The new resources are only active in the debug session; other sessions are not affected.
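
    A minimal sketch of the idea as WSGI-style middleware, assuming a hypothetical debug_session cookie and an in-memory override map (the paper's actual mechanism is not specified in this record):

```python
# Per-session resource replacement as WSGI middleware; the cookie name,
# override map, and naive cookie check are assumptions for illustration.
DEBUG_OVERRIDES = {"/static/app.js": b"console.log('debug build');"}

def replacement_middleware(app):
    def wrapped(environ, start_response):
        cookies = environ.get("HTTP_COOKIE", "")
        path = environ.get("PATH_INFO", "")
        if "debug_session=1" in cookies and path in DEBUG_OVERRIDES:
            body = DEBUG_OVERRIDES[path]
            start_response("200 OK", [("Content-Type", "application/javascript"),
                                      ("Content-Length", str(len(body)))])
            return [body]               # only the flagged session sees this
        return app(environ, start_response)  # all other sessions unaffected
    return wrapped
```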

  10. Automatic identification of web-based risk markers for health events

    DEFF Research Database (Denmark)

    Yom-Tov, Elad; Borsa, Diana; Hayward, Andrew C.

    2015-01-01

    but these are often limited in size and cost and can fail to take full account of diseases where there are social stigmas or to identify transient acute risk factors. Objective: Here we report that Web search engine queries coupled with information on Wikipedia access patterns can be used to infer health events...

  11. Global polar geospatial information service retrieval based on search engine and ontology reasoning

    Science.gov (United States)

    Chen, Nengcheng; E, Dongcheng; Di, Liping; Gong, Jianya; Chen, Zeqiang

    2007-01-01

    In order to improve the access precision of polar geospatial information services on the web, a new methodology for retrieving global spatial information services based on geospatial service search and ontology reasoning is proposed: geospatial service search finds coarse candidate services on the web, and ontology reasoning refines them. The proposed framework includes standardized distributed geospatial web services, a geospatial service search engine, an extended UDDI registry, and a multi-protocol geospatial information service client. Key technologies addressed include service discovery based on a search engine, and service ontology modeling and reasoning in the Antarctic geospatial context. Finally, an Antarctic multi-protocol OWS portal prototype based on the proposed methodology is introduced.

  12. Grid-optimized Web 3D applications on wide area network

    Science.gov (United States)

    Wang, Frank; Helian, Na; Meng, Lingkui; Wu, Sining; Zhang, Wen; Guo, Yike; Parker, Michael Andrew

    2008-08-01

    Geographical information systems have now entered the era of Web services. In this paper, Web3D applications have been developed on our GridJet platform, which provides a more effective solution for sharing massive 3D geo-datasets in distributed environments. Web3D services enable web users to access services as 3D scenes, virtual geographical environments, and so on. However, Web3D services must be shared by thousands of users who are inherently distributed across different geographic locations, and large 3D geo-datasets transferred to distributed clients via conventional HTTP, NFS and FTP protocols often incur long waits and frustration in distributed wide-area network environments. GridJet is used as the underlying engine between the Web3D application node and the geo-data server; it utilizes a wide range of technologies, including parallelized remote file access, and is a WAN/Grid-optimized protocol that provides "local-like" access to remote 3D geo-datasets. No change in the way software is used is required, since the multi-streamed GridJet protocol remains fully compatible with existing IP infrastructures. Our recent progress includes a real-world test in which Web3D applications such as Google Earth running over the GridJet protocol beat those over classic protocols by a factor of 2-7 where the transfer distance exceeds 10,000 km.

  13. Propuesta de factores a considerar en el posicionamiento de los sitios web de salud (Proposal of Factors to be considered for positioning of Health Websites)

    Directory of Open Access Journals (Sweden)

    Mercedes Moráguez Bergues

    2014-04-01

    Web positioning becomes an essential factor to keep in mind when promoting a website on the Internet. This research addresses the SEO (Search Engine Optimization) factors that influence the position of a website in search engines and therefore its visibility. These factors were identified and related to the usability and accessibility attributes of a website. The results of two questionnaires, one for the editor profile and one for the user profile, are presented, which made it possible to relate the problems that contribute to the low positioning of some health websites of the Infomed network.

  14. Search Engines: Gateway to a New ``Panopticon''?

    Science.gov (United States)

    Kosta, Eleni; Kalloniatis, Christos; Mitrou, Lilian; Kavakli, Evangelia

    Nowadays, Internet users are depending on various search engines in order to be able to find requested information on the Web. Although most users feel that they are and remain anonymous when they place their search queries, reality proves otherwise. The increasing importance of search engines for the location of the desired information on the Internet usually leads to considerable inroads into the privacy of users. The scope of this paper is to study the main privacy issues with regard to search engines, such as the anonymisation of search logs and their retention period, and to examine the applicability of the European data protection legislation to non-EU search engine providers. Ixquick, a privacy-friendly meta search engine will be presented as an alternative to privacy intrusive existing practices of search engines.

  15. An Evidence-Based Review of Academic Web Search Engines, 2014-2016: Implications for Librarians’ Practice and Research Agenda

    Directory of Open Access Journals (Sweden)

    Jody Condit Fagan

    2017-06-01

    Academic web search engines have become central to scholarly research. While the fitness of Google Scholar for research purposes has been examined repeatedly, Microsoft Academic and Google Books have not received much attention. Recent studies have much to tell us about the coverage and utility of Google Scholar, its coverage of the sciences, and its utility for evaluating researcher impact. But other aspects have been woefully understudied, such as coverage of the arts and humanities, books, and non-Western, non-English publications. User research has also tapered off. A small number of articles hint at the opportunity for librarians to become expert advisors concerning opportunities of scholarly communication made possible or enhanced by these platforms. This article seeks to summarize research concerning Google Scholar, Google Books, and Microsoft Academic from the past three years with a mind to informing practice and setting a research agenda. Selected literature from earlier time periods is included to illuminate key findings and to help shape the proposed research agenda, especially in understudied areas.

  16. Semantic Web Technologies for the Adaptive Web

    DEFF Research Database (Denmark)

    Dolog, Peter; Nejdl, Wolfgang

    2007-01-01

    Ontologies and reasoning are the key terms brought into focus by the semantic web community. Formal representation of ontologies in a common data model on the web can be taken as a foundation for adaptive web technologies as well. This chapter describes how ontologies shared on the semantic web provide conceptualization for the links which are a main vehicle to access information on the web. The subject domain ontologies serve as constraints for generating only those links which are relevant for the domain a user is currently interested in. Furthermore, user model ontologies provide additional means for deciding which links to show, annotate, hide, generate, and reorder. The semantic web technologies provide means to formalize the domain ontologies and metadata created from them. The formalization enables reasoning for personalization decisions. This chapter describes which components…

  17. Semantic interpretation of search engine resultant

    Science.gov (United States)

    Nasution, M. K. M.

    2018-01-01

    In semantics, a logical language can be interpreted in various ways, but the certainty of meaning is always bounded by uncertainty, which in turn influences the role of the technology built upon it. One consequence of this uncertainty concerns search engines as user interfaces to information spaces such as the Web. The behaviour of search engine results should therefore be interpreted with care, through a semantic formulation. Such a formulation shows that several interpretations can be made, whether temporary, inclusive, or repeated.

  18. A Portrait of the Audience for Instruction in Web Searching: Results of a Survey Conducted at Two Canadian Universities.

    Science.gov (United States)

    Tillotson, Joy

    2003-01-01

    Describes a survey of participants in the library instruction programs at two Canadian universities, conducted in order to characterize the students receiving instruction in Web searching. Examines criteria for evaluating Web sites, search strategies, use of search engines, and frequency of use. The questionnaire is…

  19. A model-driven approach for representing clinical archetypes for Semantic Web environments.

    Science.gov (United States)

    Martínez-Costa, Catalina; Menárguez-Tortosa, Marcos; Fernández-Breis, Jesualdo Tomás; Maldonado, José Alberto

    2009-02-01

    The life-long clinical information of any person supported by electronic means constitutes his or her Electronic Health Record (EHR). This information is usually distributed among several independent and heterogeneous systems that may be syntactically or semantically incompatible. Different standards currently exist for representing and exchanging EHR information among systems. In advanced EHR approaches, clinical information is represented by means of archetypes, and most of these approaches use the Archetype Definition Language (ADL) to specify them. However, ADL has some drawbacks when attempting to perform semantic activities in Semantic Web environments. In this work, Semantic Web technologies are used to specify clinical archetypes for advanced EHR architectures. The advantages of using the Web Ontology Language (OWL) instead of ADL are described and discussed, and a solution combining Semantic Web and Model-driven Engineering technologies is proposed to transform ADL into OWL for the CEN EN13606 EHR architecture.
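
    As a flavour of the OWL side of such a transformation, a toy archetype concept can be emitted as RDF/OWL triples with the rdflib library; the namespace and class names below are hypothetical, not actual EN13606 or ADL terms:

```python
# Toy translation of an archetype concept into OWL with rdflib
# (pip install rdflib); the names below are hypothetical.
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import OWL, RDF, RDFS

EHR = Namespace("http://example.org/ehr#")
g = Graph()
g.bind("ehr", EHR)

g.add((EHR.BloodPressureObservation, RDF.type, OWL.Class))
g.add((EHR.BloodPressureObservation, RDFS.subClassOf, EHR.Observation))
g.add((EHR.BloodPressureObservation, RDFS.label,
       Literal("Blood pressure observation", lang="en")))

print(g.serialize(format="turtle"))
```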

  20. Web-Based Virtual Laboratory for Food Analysis Course

    Science.gov (United States)

    Handayani, M. N.; Khoerunnisa, I.; Sugiarti, Y.

    2018-02-01

    Implementation of learning in the food analysis course of the Agro-industrial Technology Education study program faced several problems, including laboratory space and equipment that are not proportional to the number of students, as well as a lack of interactive learning tools. On the other hand, students' information technology literacy is quite high, and the internet is easily accessible on campus. This is both a challenge and an opportunity for developing learning media that can help optimize learning in the laboratory. This study aims to develop a web-based virtual laboratory as an alternative learning medium for the food analysis course. The research follows the R & D (research and development) approach of the Borg & Gall model. Expert assessment of the developed web-based virtual laboratory, in terms of software engineering, visual communication, material relevance, usefulness and language, found it feasible as a learning medium. The results of the small-scale and wide-scale tests show that students strongly agree with the development of the web-based virtual laboratory, and student response was positive. Suggestions from students provide opportunities for further improvement of the web-based virtual laboratory and should be considered in further research.

  1. Regional Geology Web Map Application Development: Javascript v2.0

    International Nuclear Information System (INIS)

    Russell, Glenn

    2017-01-01

    This is a milestone report for the FY2017 continuation of the Spent Fuel, Storage, and Waste Technology (SFSWT) program (formerly the Used Fuel Disposal (UFD) program) development of the Regional Geology Web Mapping Application by the Idaho National Laboratory Geospatial Science and Engineering group. The application was developed for general public use and is an interactive web-based application, built in JavaScript, to visualize, reference, and analyze US geological features pertinent to the SFSWT program. The tool is a version upgrade from Adobe Flex technology. It is designed to facilitate informed decision making about the geology of the continental US relevant to the SFSWT program.

  2. Regional Geology Web Map Application Development: Javascript v2.0

    Energy Technology Data Exchange (ETDEWEB)

    Russell, Glenn [Idaho National Lab. (INL), Idaho Falls, ID (United States)

    2017-06-19

    This is a milestone report for the FY2017 continuation of the Spent Fuel, Storage, and Waste Technology (SFSWT) program (formerly the Used Fuel Disposal (UFD) program) development of the Regional Geology Web Mapping Application by the Idaho National Laboratory Geospatial Science and Engineering group. The application was developed for general public use and is an interactive web-based application, built in JavaScript, to visualize, reference, and analyze US geological features pertinent to the SFSWT program. The tool is a version upgrade from Adobe Flex technology. It is designed to facilitate informed decision making about the geology of the continental US relevant to the SFSWT program.

  3. Usare WebDewey

    OpenAIRE

    Baldi, Paolo

    2016-01-01

    This presentation shows how to use the WebDewey tool. Features of WebDewey. Italian WebDewey compared with American WebDewey. Querying Italian WebDewey. Italian WebDewey and MARC21. Italian WebDewey and UNIMARC. Numbers, captions, "equivalente verbale": Dewey decimal classification in Italian catalogues. Italian WebDewey and Nuovo soggettario. Italian WebDewey and LCSH. Italian WebDewey compared with printed version of Italian Dewey Classification (22. edition): advantages and disadvantages o...

  4. Music Search Engines: Specifications and Challenges

    DEFF Research Database (Denmark)

    Nanopoulos, Alexandros; Rafilidis, Dimitrios; Manolopoulos, Yannis

    2009-01-01

    Nowadays we have a proliferation of music data available over the Web. One of the imperative challenges is how to search these vast, global-scale musical resources to find preferred music. Recent research has envisaged the notion of music search engines (MSEs) that allow for searching preferred...

  5. Quality Dimensions of Internet Search Engines.

    Science.gov (United States)

    Xie, M.; Wang, H.; Goh, T. N.

    1998-01-01

    Reviews commonly used search engines (AltaVista, Excite, infoseek, Lycos, HotBot, WebCrawler), focusing on existing comparative studies; considers quality dimensions from the customer's point of view based on a SERVQUAL framework; and groups these quality expectations in five dimensions: tangibles, reliability, responsiveness, assurance, and…

  6. Software Engineering Improvement Plan

    Science.gov (United States)

    2006-01-01

    In performance of this task order, bd Systems personnel provided support to the Flight Software Branch and the Software Working Group through multiple tasks related to software engineering improvement and to activities of the independent Technical Authority (iTA) Discipline Technical Warrant Holder (DTWH) for software engineering. To ensure that the products, comments, and recommendations complied with customer requirements and the statement of work, bd Systems personnel maintained close coordination with the customer. These personnel performed work in areas such as update of agency requirements and directives database, software effort estimation, software problem reports, a web-based process asset library, miscellaneous documentation review, software system requirements, issue tracking software survey, systems engineering NPR, and project-related reviews. This report contains a summary of the work performed and the accomplishments in each of these areas.

  7. Search Engine Customization and Data Set Builder

    OpenAIRE

    Arias Moreno, Fco Javier

    2009-01-01

    There are two core objectives in this work: firstly, to build a data set, and secondly, to customize a search engine. The first objective is to design and implement a data set builder. There are two steps required for this. The first step is to build a crawler. The second step is to include a cleaner. The crawler collects Web links. The cleaner extracts the main content and removes noise from the files crawled. The goal of this application is crawling Web news sites to find the...

  8. A new web-based system for unsupervised classification of satellite images from the Google Maps engine

    Science.gov (United States)

    Ferrán, Ángel; Bernabé, Sergio; García-Rodríguez, Pablo; Plaza, Antonio

    2012-10-01

    In this paper, we develop a new web-based system for unsupervised classification of satellite images available from the Google Maps engine. The system has been developed using the Google Maps API and incorporates functionalities such as unsupervised classification of image portions selected by the user (at the desired zoom level). For this purpose, we use a processing chain made up of the well-known ISODATA and k-means algorithms, followed by spatial post-processing based on majority voting. The system is currently hosted on a high-performance server which executes the classification algorithms and returns the classification results very efficiently. These functionalities are a prerequisite for efficient image-classification techniques and for the incorporation of content-based image retrieval (CBIR). Several experiments validate the classification results of the proposed chain by comparing its accuracy against techniques available in the well-known Environment for Visualizing Images (ENVI) software package. The server has access to a cluster of commodity graphics processing units (GPUs), so in future work we plan to perform the processing in parallel by taking advantage of the cluster.
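
    The processing chain described above pairs a pixel-level clusterer with a spatial smoothing step. A minimal sketch of that idea, assuming scikit-learn and NumPy are available and substituting plain k-means for the full ISODATA/k-means pair, with a 3x3 majority-vote filter:

        # Sketch of the k-means + majority-voting chain described above.
        # Assumes scikit-learn and NumPy; ISODATA (also used in the paper) is omitted.
        import numpy as np
        from sklearn.cluster import KMeans

        def classify_image(pixels, n_classes=5, seed=0):
            """Cluster an (H, W, bands) image array into n_classes labels."""
            h, w, bands = pixels.shape
            flat = pixels.reshape(-1, bands)
            labels = KMeans(n_clusters=n_classes, random_state=seed, n_init=10).fit_predict(flat)
            return labels.reshape(h, w)

        def majority_vote(labels, radius=1):
            """Spatial post-processing: replace each label with the most
            frequent label in its (2*radius+1)^2 neighbourhood."""
            h, w = labels.shape
            out = labels.copy()
            for i in range(h):
                for j in range(w):
                    i0, i1 = max(0, i - radius), min(h, i + radius + 1)
                    j0, j1 = max(0, j - radius), min(w, j + radius + 1)
                    window = labels[i0:i1, j0:j1].ravel()
                    out[i, j] = np.bincount(window).argmax()
            return out

        if __name__ == "__main__":
            rgb = np.random.rand(64, 64, 3)          # stand-in for a map tile
            smoothed = majority_vote(classify_image(rgb))
            print(smoothed.shape, smoothed.max())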

  9. Web TA Production (WebTA)

    Data.gov (United States)

    US Agency for International Development — WebTA is a web-based time and attendance system that supports USAID payroll administration functions, and is designed to capture hours worked, leave used and...

  10. Proceedings 11th International Workshop on Automated Specification and Verification of Web Systems

    DEFF Research Database (Denmark)

    2015-01-01

    is a yearly interdisciplinary forum for researchers originating from the following areas: declarative, rule-based programming, formal methods, software engineering and web-based systems. The workshop fosters the cross-fertilisation and advancement of hybrid methods from such areas....

  11. From people to entities new semantic search paradigms for the web

    CERN Document Server

    Demartini, G

    2014-01-01

    The exponential growth of digital information available in companies and on the Web creates the need for search tools that can respond to the most sophisticated information needs. Many user tasks would be simplified if search engines supported typed search and returned entities instead of just Web documents. For example, an executive who tries to solve a problem needs to find people in the company who are knowledgeable about a certain topic. In the first part of the book, we propose a model for expert finding based on the well-consolidated vector space model for Information Retrieval and inv

  12. Security in a Web 2.0+ World A Standards Based Approach

    CERN Document Server

    Solari , Carlos Curtis

    2010-01-01

    Discover how technology is affecting your business, and why typical security mechanisms are failing to address the issue of risk and trust. Security for a Web 2.0+ World looks at the perplexing issues of cyber security, and will be of interest to those who need to make effective security policy decisions as well as to engineers who design ICT systems - a guide to information security and standards in the Web 2.0+ era. It provides an understanding of IT security in the converged world of communications technology based on the Internet Protocol. Many companies are currently applying security mo

  13. Principles and software realization of a multimedia course on theoretical electrical engineering based on enterprise technology

    Directory of Open Access Journals (Sweden)

    Penev Krasimir

    2003-01-01

    Full Text Available The Department of Theoretical Electrical Engineering (TEE) of the Technical University of Sofia has been developing an interactive, enterprise-technology-based course on Theoretical Electrical Engineering. One side of the project is the development of multimedia teaching modules for the core undergraduate electrical engineering courses (Circuit Theory and Electromagnetic Fields), and the other is the development of the software architecture of the web site on which the modules are deployed. Initial efforts have been directed at the development of multimedia modules for the subject Electrical Circuits and at developing the web site structure. The objective is to develop teaching materials that will enhance lectures and laboratory exercises and will allow computerized examinations on the subject. This article outlines the framework used to develop the web site structure, the Circuit Theory teaching modules, and the strategy for their use as a teaching tool.

  14. Semantic Similarity between Web Documents Using Ontology

    Science.gov (United States)

    Chahal, Poonam; Singh Tomer, Manjeet; Kumar, Suresh

    2018-06-01

    The World Wide Web is a source of information available in the structure of interlinked web pages. However, extracting significant information with the assistance of a search engine is extremely difficult, because web information is written mainly in natural language and is addressed to individual humans. Several efforts have been made to compute semantic similarity between documents using words, concepts and concept relationships, but the available outcomes still fall short of user requirements. This paper proposes a novel technique for computing semantic similarity between documents that takes into account not only the concepts available in the documents but also the relationships between those concepts. In our approach, documents are processed into ontologies built from a base ontology and a dictionary of concept records, where each record consists of the probable words representing a given concept. Finally, the document ontologies are compared to find their semantic similarity by considering the relationships among concepts. Relevant concepts and relations between concepts are explored by capturing author and user intention. The proposed semantic analysis technique provides improved results compared to existing techniques.
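
    The abstract does not give the paper's exact similarity formula, so the following is only an illustrative sketch: each document ontology is reduced to a set of concepts and a set of (subject, relation, object) triples, and similarity is scored as a weighted Jaccard overlap of the two kinds of sets.

        # Illustrative sketch only: the paper's exact measure is not reproduced
        # in the abstract, so similarity is modelled here as a weighted Jaccard
        # overlap of concept sets and (subject, relation, object) triple sets.
        def jaccard(a, b):
            return len(a & b) / len(a | b) if (a or b) else 0.0

        def doc_similarity(doc1, doc2, w_concepts=0.5, w_relations=0.5):
            c = jaccard(doc1["concepts"], doc2["concepts"])
            r = jaccard(doc1["triples"], doc2["triples"])
            return w_concepts * c + w_relations * r

        d1 = {"concepts": {"web", "search", "ontology"},
              "triples": {("search", "uses", "ontology")}}
        d2 = {"concepts": {"web", "ontology", "document"},
              "triples": {("search", "uses", "ontology"), ("document", "has", "concept")}}
        print(round(doc_similarity(d1, d2), 3))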

  15. Semantic Similarity between Web Documents Using Ontology

    Science.gov (United States)

    Chahal, Poonam; Singh Tomer, Manjeet; Kumar, Suresh

    2018-03-01

    The World Wide Web is a source of information available in the structure of interlinked web pages. However, extracting significant information with the assistance of a search engine is extremely difficult, because web information is written mainly in natural language and is addressed to individual humans. Several efforts have been made to compute semantic similarity between documents using words, concepts and concept relationships, but the available outcomes still fall short of user requirements. This paper proposes a novel technique for computing semantic similarity between documents that takes into account not only the concepts available in the documents but also the relationships between those concepts. In our approach, documents are processed into ontologies built from a base ontology and a dictionary of concept records, where each record consists of the probable words representing a given concept. Finally, the document ontologies are compared to find their semantic similarity by considering the relationships among concepts. Relevant concepts and relations between concepts are explored by capturing author and user intention. The proposed semantic analysis technique provides improved results compared to existing techniques.

  16. Applying Web-Based Tools for Research, Engineering, and Operations

    Science.gov (United States)

    Ivancic, William D.

    2011-01-01

    Personnel in the NASA Glenn Research Center Network and Architectures branch have performed a variety of research related to space-based sensor webs, network centric operations, security and delay tolerant networking (DTN). Quality documentation and communications, real-time monitoring and information dissemination are critical in order to perform quality research while maintaining low cost and utilizing multiple remote systems. This has been accomplished using a variety of Internet technologies often operating simultaneously. This paper describes important features of various technologies and provides a number of real-world examples of how combining Internet technologies can enable a virtual team to act efficiently as one unit to perform advanced research in operational systems. Finally, real and potential abuses of power and manipulation of information and information access is addressed.

  17. WEB-BASED GEOGRAPHIC INFORMATION SYSTEM FOR QUARRY MATERIAL LOCATIONS IN PONOROGO REGENCY

    Directory of Open Access Journals (Sweden)

    Budi Santosa

    2010-01-01

    Full Text Available The web-based geographic information system for quarry material locations in Ponorogo Regency provides information on the locations of quarry materials in the Ponorogo area. The system displays the administrative map of Ponorogo Regency together with the quarry material location points, where each location point gives detailed information about the corresponding material. The software engineering methodology used to build the system is the Waterfall model, covering system engineering, analysis, design, coding, testing and maintenance. The tools and languages used are ArcView, Macromedia Flash, PHP 4, and MySQL.

  18. State-of-the-Art Review on Relevance of Genetic Algorithm to Internet Web Search

    Directory of Open Access Journals (Sweden)

    Kehinde Agbele

    2012-01-01

    Full Text Available People use search engines to find the information they desire, with the aim that their information needs will be met. Information retrieval (IR) is a field concerned primarily with searching and retrieving information in documents, as well as searching the search engine, online databases, and the Internet. Genetic algorithms (GAs) are robust, efficient optimization methods for a wide range of search problems, motivated by Darwin's principles of natural selection and survival of the fittest. This paper describes the components of information retrieval systems (IRS), looks at how GAs can be applied in the field of IR, and specifically at the relevance of genetic algorithms to internet web search. Finally, the proposals surveyed show that GAs have been applied to diverse problem fields of internet web search.
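
    As a hypothetical illustration of the GA cycle the article surveys (selection, crossover, mutation) applied to retrieval, the sketch below evolves a binary vector that selects query terms against a toy relevance set; all names and the fitness function are invented for the example.

        # Hypothetical GA sketch: evolves a binary vector selecting query terms,
        # scored by a toy F1 fitness standing in for retrieval relevance.
        import random

        random.seed(42)
        TERMS = ["web", "search", "genetic", "noise", "spam", "retrieval"]
        RELEVANT = {"web", "search", "retrieval"}          # toy ground truth

        def fitness(mask):
            """F1 of the selected terms against the toy relevance set."""
            chosen = {t for t, bit in zip(TERMS, mask) if bit}
            if not chosen:
                return 0.0
            p = len(chosen & RELEVANT) / len(chosen)
            r = len(chosen & RELEVANT) / len(RELEVANT)
            return 0.0 if p + r == 0 else 2 * p * r / (p + r)

        def evolve(pop_size=20, generations=30, p_mut=0.1):
            pop = [[random.randint(0, 1) for _ in TERMS] for _ in range(pop_size)]
            for _ in range(generations):
                pop.sort(key=fitness, reverse=True)
                survivors = pop[: pop_size // 2]           # truncation selection
                children = []
                while len(survivors) + len(children) < pop_size:
                    a, b = random.sample(survivors, 2)
                    cut = random.randrange(1, len(TERMS))  # one-point crossover
                    child = [bit ^ (random.random() < p_mut)   # bit-flip mutation
                             for bit in a[:cut] + b[cut:]]
                    children.append(child)
                pop = survivors + children
            return max(pop, key=fitness)

        best = evolve()
        print([t for t, bit in zip(TERMS, best) if bit])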

  19. Politiken, Alt om Ikast Brande (web), Lemvig Folkeblad (Web), Politiken (web), Dagbladet Ringkjøbing Skjern (web)

    DEFF Research Database (Denmark)

    Lauritsen, Jens

    2014-01-01

    Politiken 01.01.2014 14:16 The Danes rang in the new year with a bang, but for a few of them things went wrong when the New Year's fireworks were lit. Emergency rooms treated 73 people for fireworks injuries between 6 p.m. last night and 6 a.m. this morning. This is shown by a count made by Politiken based on figures from the Accident Analysis Group at Odense University Hospital. The article also appeared in: Alt om Ikast Brande (web), Lemvig Folkeblad (web), Politiken (web), Dagbladet Ringkjøbing Skjern (web).

  20. Reactor Engineering Division Material for World Wide Web Pages

    International Nuclear Information System (INIS)

    1996-01-01

    This document presents the home page of the Reactor Engineering Division of Argonne National Laboratory. The WWW site describes the activities of the Division and provides an introduction to its wide variety of programs, together with samples of the results of research by people in the Division.

  1. Applying Web Analytics to Online Finding Aids: Page Views, Pathways, and Learning about Users

    Directory of Open Access Journals (Sweden)

    Mark R. O'English

    2011-05-01

    Full Text Available Online finding aids, Internet search tools, and increased access to the World Wide Web have greatly changed how patrons find archival collections. Through analyzing eighteen months of access data collected via Web analytics tools, this article examines how patrons discover archival materials. Contrasts are drawn between access from library catalogs and from online search engines, with the latter outweighing the former by an overwhelming margin, and the article asks whether archival description practices should change accordingly.

  2. MuZeeker - Adapting a music search engine for mobile phones

    DEFF Research Database (Denmark)

    Larsen, Jakob Eg; Halling, Søren Christian; Sigurdsson, Magnus Kristinn

    2010-01-01

    We describe MuZeeker, a search engine with domain knowledge based on Wikipedia. MuZeeker enables the user to refine a search in multiple steps by means of category selection. In the present version we focus on multimedia search related to music and we present two prototype search applications (web-based and mobile) and discuss the issues involved in adapting the search engine for mobile phones. A category based filtering approach enables the user to refine a search through relevance feedback by category selection instead of typing additional text, which is hypothesized to be an advantage in the mobile MuZeeker application. We report from two usability experiments using the think aloud protocol, in which N=20 participants performed tasks using MuZeeker and a customized Google search engine. In both experiments web-based and mobile user interfaces were used. The experiment shows that participants are capable

  3. Prototyping Tool for Web-Based Multiuser Online Role-Playing Game

    Science.gov (United States)

    Okamoto, Shusuke; Kamada, Masaru; Yonekura, Tatsuhiro

    This letter proposes a prototyping tool for Web-based Multiuser Online Role-Playing Games (MORPG). The design goal is to make the tool simple and powerful. It comprises a GUI editor, a translator and a runtime environment. The GUI editor is used to edit state-transition diagrams, each of which defines the behavior of a fictional character. The state-transition diagrams are translated into C program code, which plays the role of a game engine in the RPG system. The runtime environment includes PHP, JavaScript with Ajax and HTML, so the prototype system can be played in an ordinary Web browser such as Firefox, Safari or IE. On a click or key press by a player, the Web browser sends the event to the Web server so that its consequence is reflected on the screens other players are looking at. Prospective users of this tool include programming novices and schoolchildren. No knowledge or skill in any specific programming language is required to create the state-transition diagrams. Their structure is not only suitable for defining character behavior but also intuitive enough to help novices understand. Therefore, users can easily create a Web-based MORPG system with the tool.
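
    The tool itself emits C code from the diagrams; the letter does not show that output, so the snippet below is only a language-neutral sketch (in Python, for consistency with the other examples here) of how a diagram's transition table can drive a non-player character. All state and event names are invented.

        # Sketch of a diagram-driven character: a lookup table of
        # (state, event) -> next state mirrors a state-transition diagram.
        TRANSITIONS = {
            ("idle",   "player_near"):   "greet",
            ("greet",  "player_talks"):  "quest",
            ("greet",  "player_leaves"): "idle",
            ("quest",  "quest_done"):    "reward",
            ("reward", "player_leaves"): "idle",
        }

        class Character:
            def __init__(self, state="idle"):
                self.state = state

            def handle(self, event):
                # unknown (state, event) pairs leave the state unchanged
                self.state = TRANSITIONS.get((self.state, event), self.state)
                return self.state

        npc = Character()
        for ev in ["player_near", "player_talks", "quest_done", "player_leaves"]:
            print(ev, "->", npc.handle(ev))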

  4. Using Web Server Logs in Evaluating Instructional Web Sites.

    Science.gov (United States)

    Ingram, Albert L.

    2000-01-01

    Web server logs contain a great deal of information about who uses a Web site and how they use it. This article discusses the analysis of Web logs for instructional Web sites; reviews the data stored in most Web server logs; demonstrates what further information can be gleaned from the logs; and discusses analyzing that information for the…
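
    As a concrete illustration of the data such logs hold, here is a hedged sketch that parses Common Log Format lines (a widespread server default, though the article does not mandate any particular format) and tallies successful page views per path; the sample lines are invented.

        # Minimal Common Log Format parser; the field layout is the classic
        # Apache default, which individual servers may extend or alter.
        import re
        from collections import Counter

        LOG_LINE = re.compile(
            r'(?P<host>\S+) \S+ \S+ \[(?P<time>[^\]]+)\] '
            r'"(?P<method>\S+) (?P<path>\S+) [^"]*" (?P<status>\d{3}) \S+'
        )

        def page_views(lines):
            views = Counter()
            for line in lines:
                m = LOG_LINE.match(line)
                if m and m.group("status").startswith("2"):   # successful hits only
                    views[m.group("path")] += 1
            return views

        sample = [
            '10.0.0.1 - - [05/Oct/2000:13:55:36 -0700] "GET /syllabus.html HTTP/1.0" 200 2326',
            '10.0.0.2 - - [05/Oct/2000:13:56:01 -0700] "GET /quiz1.html HTTP/1.0" 404 512',
        ]
        print(page_views(sample).most_common())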

  5. Social engineering awareness in Nuclear Malaysia

    International Nuclear Information System (INIS)

    Mohd Dzul Aiman bin Aslan; Mohamad Safuan bin Sulaiman; Abdul Muin bin Abdul Rahman

    2010-01-01

    Social engineering is among the best tools for exploiting an organization's weaknesses. It can effectively bypass the best firewall or Intrusion Detection System (IDS) the organization has. Nuclear Malaysia staff should be aware of this technique, since information protection does not depend only on paper and computers. This paper presents a few test cases, including e-mail, dumpster diving, phishing, malicious web content, and impersonation, to make all Nuclear Malaysia staff aware of the methods, effects and prevention of social engineering. (author)

  6. Web search queries can predict stock market volumes.

    Science.gov (United States)

    Bordino, Ilaria; Battiston, Stefano; Caldarelli, Guido; Cristelli, Matthieu; Ukkonen, Antti; Weber, Ingmar

    2012-01-01

    We live in a computerized and networked society where many of our actions leave a digital trace and affect other people's actions. This has led to the emergence of a new data-driven research field: mathematical methods of computer science, statistical physics and sociometry provide insights on a wide range of disciplines ranging from social science to human mobility. A recent important discovery is that search engine traffic (i.e., the number of requests submitted by users to search engines on the www) can be used to track and, in some cases, to anticipate the dynamics of social phenomena. Successful examples include unemployment levels, car and home sales, and epidemics spreading. Few recent works applied this approach to stock prices and market sentiment. However, it remains unclear if trends in financial markets can be anticipated by the collective wisdom of on-line users on the web. Here we show that daily trading volumes of stocks traded in NASDAQ-100 are correlated with daily volumes of queries related to the same stocks. In particular, query volumes anticipate in many cases peaks of trading by one day or more. Our analysis is carried out on a unique dataset of queries, submitted to an important web search engine, which enables us to investigate also the user behavior. We show that the query volume dynamics emerges from the collective but seemingly uncoordinated activity of many users. These findings contribute to the debate on the identification of early warnings of financial systemic risk, based on the activity of users of the www.
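
    The core claim, that query volumes lead trading volumes by a day or more, reduces to a lagged cross-correlation. A minimal sketch with pandas, on synthetic series (the authors' query dataset is not public, so all data here are invented):

        # Lagged cross-correlation of query volume vs. trading volume,
        # on synthetic data built so that volume echoes queries one day late.
        import numpy as np
        import pandas as pd

        rng = np.random.default_rng(0)
        n = 250                                     # roughly one trading year
        queries = pd.Series(rng.poisson(100, n).astype(float))
        volume = queries.shift(1).fillna(100) * 50 + rng.normal(0, 200, n)

        for lag in range(0, 4):
            # correlate today's volume with queries from `lag` days earlier
            r = volume.corr(queries.shift(lag))
            print(f"lag {lag} day(s): r = {r:.2f}")   # peaks at lag 1 here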

  7. Web search queries can predict stock market volumes.

    Directory of Open Access Journals (Sweden)

    Ilaria Bordino

    Full Text Available We live in a computerized and networked society where many of our actions leave a digital trace and affect other people's actions. This has led to the emergence of a new data-driven research field: mathematical methods of computer science, statistical physics and sociometry provide insights on a wide range of disciplines ranging from social science to human mobility. A recent important discovery is that search engine traffic (i.e., the number of requests submitted by users to search engines on the www) can be used to track and, in some cases, to anticipate the dynamics of social phenomena. Successful examples include unemployment levels, car and home sales, and epidemics spreading. Few recent works applied this approach to stock prices and market sentiment. However, it remains unclear if trends in financial markets can be anticipated by the collective wisdom of on-line users on the web. Here we show that daily trading volumes of stocks traded in NASDAQ-100 are correlated with daily volumes of queries related to the same stocks. In particular, query volumes anticipate in many cases peaks of trading by one day or more. Our analysis is carried out on a unique dataset of queries, submitted to an important web search engine, which enables us to investigate also the user behavior. We show that the query volume dynamics emerges from the collective but seemingly uncoordinated activity of many users. These findings contribute to the debate on the identification of early warnings of financial systemic risk, based on the activity of users of the www.

  8. WEB STRUCTURE MINING

    Directory of Open Access Journals (Sweden)

    CLAUDIA ELENA DINUCĂ

    2011-01-01

    Full Text Available The World Wide Web has become one of the most valuable resources for information retrieval and knowledge discovery due to the permanent increase in the amount of data available online. Given the dimensions of the web, users easily get lost in its rich hyper structure. Applying data mining methods is the right solution for knowledge discovery on the Web. The knowledge extracted from the Web can be used to improve the performance of Web information retrieval, question answering and Web-based data warehousing. In this paper, I provide an introduction to the categories of Web mining and focus on one of them: Web structure mining. Web structure mining, one of the three categories of Web mining, is a tool used to identify the relationships between Web pages linked by information or direct link connections. It offers information about how different pages are linked together to form this huge web. Web structure mining finds hidden basic structures and exploits hyperlinks for further web applications such as web search.
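
    Link-analysis scores such as PageRank are the canonical example of exploiting hyperlink structure (the article discusses structure mining in general rather than this specific score). A sketch over a toy link graph, assuming the networkx package is available:

        # PageRank over a toy hyperlink graph, the best-known instance of
        # mining link structure. Assumes the networkx package.
        import networkx as nx

        g = nx.DiGraph()
        g.add_edges_from([
            ("home", "products"), ("home", "blog"),
            ("blog", "products"), ("blog", "home"),
            ("products", "home"),
        ])
        for page, score in sorted(nx.pagerank(g, alpha=0.85).items(),
                                  key=lambda kv: -kv[1]):
            print(f"{page:9s} {score:.3f}")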

  9. Brief Report: Consistency of Search Engine Rankings for Autism Websites

    Science.gov (United States)

    Reichow, Brian; Naples, Adam; Steinhoff, Timothy; Halpern, Jason; Volkmar, Fred R.

    2012-01-01

    The World Wide Web is one of the most common methods used by parents to find information on autism spectrum disorders and most consumers find information through search engines such as Google or Bing. However, little is known about how the search engines operate or the consistency of the results that are returned over time. This study presents the…

  10. Online Data Resources in Chemical Engineering Education: Impact of the Uncertainty Concept for Thermophysical Properties

    Science.gov (United States)

    Kim, Sun Hyung; Kang, Jeong Won; Kroenlein, Kenneth; Magee, Joseph W.; Diky, Vladimir; Muzny, Chris D.; Kazakov, Andrei F.; Chirico, Robert D.; Frenkel, Michael

    2013-01-01

    We review the concept of uncertainty for thermophysical properties and its critical impact for engineering applications in the core courses of chemical engineering education. To facilitate the translation of developments to engineering education, we employ NIST Web Thermo Tables to furnish properties data with their associated expanded…

  11. Health literacy and usability of clinical trial search engines.

    Science.gov (United States)

    Utami, Dina; Bickmore, Timothy W; Barry, Barbara; Paasche-Orlow, Michael K

    2014-01-01

    Several web-based search engines have been developed to assist individuals to find clinical trials for which they may be interested in volunteering. However, these search engines may be difficult for individuals with low health and computer literacy to navigate. The authors present findings from a usability evaluation of clinical trial search tools with 41 participants across the health and computer literacy spectrum. The study consisted of 3 parts: (a) a usability study of an existing web-based clinical trial search tool; (b) a usability study of a keyword-based clinical trial search tool; and (c) an exploratory study investigating users' information needs when deciding among 2 or more candidate clinical trials. From the first 2 studies, the authors found that users with low health literacy have difficulty forming queries using keywords and have significantly more difficulty using a standard web-based clinical trial search tool compared with users with adequate health literacy. From the third study, the authors identified the search factors most important to individuals searching for clinical trials and how these varied by health literacy level.

  12. Web Accessibility in Romania: The Conformance of Municipal Web Sites to Web Content Accessibility Guidelines

    OpenAIRE

    Costin PRIBEANU; Ruxandra-Dora MARINESCU; Paul FOGARASSY-NESZLY; Maria GHEORGHE-MOISII

    2012-01-01

    The accessibility of public administration web sites is a key quality attribute for the successful implementation of the Information Society. The purpose of this paper is to present a second review of municipal web sites in Romania that is based on automated accessibility checking. A number of 60 web sites were evaluated against WCAG 2.0 recommendations. The analysis of results reveals a relatively low web accessibility of municipal web sites and highlights several aspects. Firstly, a slight ...

  13. Bat-Inspired Algorithm Based Query Expansion for Medical Web Information Retrieval.

    Science.gov (United States)

    Khennak, Ilyes; Drias, Habiba

    2017-02-01

    With the increasing amount of medical data available on the Web, looking for health information has become one of the most widely searched topics on the Internet. Patients and people of several backgrounds now use Web search engines to acquire medical information, including information about a specific disease, medical treatment or professional advice. Nonetheless, due to a lack of medical knowledge, many laypeople have difficulties in forming appropriate queries to articulate their inquiries, which renders their search queries imprecise owing to the use of unclear keywords. The use of these ambiguous and vague queries to describe patients' needs has resulted in a failure of Web search engines to retrieve accurate and relevant information. One of the most natural and promising methods to overcome this drawback is query expansion. In this paper, an original approach based on the Bat Algorithm is proposed to improve the retrieval effectiveness of query expansion in the medical field. In contrast to the existing literature, the proposed approach uses the Bat Algorithm to find the best expanded query among a set of expanded query candidates, while maintaining low computational complexity. Moreover, this new approach allows the length of the expanded query to be determined empirically. Numerical results on MEDLINE, the on-line medical information database, show that the proposed approach is more effective and efficient compared to the baseline.
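
    The paper's encoding and objective function are not reproduced in the abstract, so the sketch below only illustrates the generic bat-algorithm loop (frequency-tuned velocities, decaying loudness, growing pulse rate) applied to picking a scored set of expansion terms; every term, score and parameter value is hypothetical.

        # Generic bat-algorithm loop applied to choosing expansion terms:
        # positions are term weights, and a candidate query keeps the terms
        # whose weight is positive. All scores are toy values, not MEDLINE's.
        import numpy as np

        rng = np.random.default_rng(1)
        TERMS = ["cocaine", "dependence", "therapy", "weather", "football"]
        SCORE = np.array([0.9, 0.8, 0.6, 0.1, 0.05])   # toy relevance scores

        def fitness(x):
            chosen = x > 0
            if not chosen.any():
                return 0.0
            # reward relevant terms, penalise query length
            return SCORE[chosen].sum() - 0.15 * chosen.sum()

        def bat_search(n_bats=15, n_iter=60, fmin=0.0, fmax=1.0):
            x = rng.uniform(-1, 1, (n_bats, len(TERMS)))   # bat positions
            v = np.zeros_like(x)                           # bat velocities
            loudness, pulse = 0.9, 0.1
            best = max(x, key=fitness).copy()
            for _ in range(n_iter):
                for i in range(n_bats):
                    f = fmin + (fmax - fmin) * rng.random()   # frequency
                    v[i] += (x[i] - best) * f
                    cand = x[i] + v[i]
                    if rng.random() > pulse:        # local walk near the best bat
                        cand = best + 0.05 * rng.normal(size=len(TERMS))
                    if rng.random() < loudness and fitness(cand) > fitness(x[i]):
                        x[i] = cand
                    if fitness(x[i]) > fitness(best):
                        best = x[i].copy()
                loudness *= 0.97                    # loudness decays over time
                pulse = min(0.5, pulse + 0.01)      # pulse rate grows over time
            return [t for t, keep in zip(TERMS, best > 0) if keep]

        print(bat_search())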

  14. A Generic Framework for Extraction of Knowledge from Social Web Sources (Social Networking Websites) for an Online Recommendation System

    Directory of Open Access Journals (Sweden)

    Javubar Sathick

    2015-04-01

    Full Text Available Mining social web data is a challenging task, and finding user interests for personalized and non-personalized recommendation systems is another important one. Knowledge sharing among web users has become crucial in determining the usage of web data and in personalizing content in various social websites as per the user's wishes. This paper aims to design a framework for extracting knowledge from web sources that helps end users make the right decision at a crucial juncture. The web data are collected from various web sources, structured appropriately and stored as an ontology-based data repository. The proposed framework implements an online recommender application for learners who pursue their graduation in an open and distance learning environment. The framework comprises three phases: a data repository, a knowledge engine, and an online recommendation system. The data repository holds common data acquired from the various web sources. The knowledge engine collects semantic data from the ontology-based data repository and maps it to the user through the query processor component. The online recommendation system makes recommendations to the user to support decision making. This research work is implemented through an experimental case study dealing with an online recommendation system for the career guidance of a learner. The online recommendation application is implemented with the help of the R tool, an NLP parser and a clustering algorithm. This research study will help users attain semantic knowledge from heterogeneous web sources and make decisions.

  15. Web-ADARE: A Web-Aided Data Repairing System

    KAUST Repository

    Gu, Binbin

    2017-03-08

    Data repairing aims at discovering and correcting erroneous data in databases. In this paper, we develop Web-ADARE, an end-to-end web-aided data repairing system, to provide a feasible way to involve the vast data sources on the Web in data repairing. Our main attention in developing Web-ADARE is paid to the interaction problem between web-aided repairing and rule-based repairing, in order to minimize the Web consultation cost while reaching predefined quality requirements. The same interaction problem also exists in crowd-based methods but has not yet been formally defined and addressed there. We first prove in theory that the optimal interaction scheme is not feasible to achieve, and then propose an algorithm to identify a scheme for efficient interaction by investigating the inconsistencies and the dependencies between values in the repairing process. Extensive experiments on three data collections demonstrate the high repairing precision and recall of Web-ADARE, and the efficiency of the generated interaction scheme over several baselines.

  16. Web-ADARE: A Web-Aided Data Repairing System

    KAUST Repository

    Gu, Binbin; Li, Zhixu; Yang, Qiang; Xie, Qing; Liu, An; Liu, Guanfeng; Zheng, Kai; Zhang, Xiangliang

    2017-01-01

    Data repairing aims at discovering and correcting erroneous data in databases. In this paper, we develop Web-ADARE, an end-to-end web-aided data repairing system, to provide a feasible way to involve the vast data sources on the Web in data repairing. Our main attention in developing Web-ADARE is paid to the interaction problem between web-aided repairing and rule-based repairing, in order to minimize the Web consultation cost while reaching predefined quality requirements. The same interaction problem also exists in crowd-based methods but has not yet been formally defined and addressed there. We first prove in theory that the optimal interaction scheme is not feasible to achieve, and then propose an algorithm to identify a scheme for efficient interaction by investigating the inconsistencies and the dependencies between values in the repairing process. Extensive experiments on three data collections demonstrate the high repairing precision and recall of Web-ADARE, and the efficiency of the generated interaction scheme over several baselines.

  17. Web Mining

    Science.gov (United States)

    Fürnkranz, Johannes

    The World-Wide Web provides every internet citizen with access to an abundance of information, but it becomes increasingly difficult to identify the relevant pieces of information. Research in web mining tries to address this problem by applying techniques from data mining and machine learning to Web data and documents. This chapter provides a brief overview of web mining techniques and research areas, most notably hypertext classification, wrapper induction, recommender systems and web usage mining.

  18. Challenges for Rule Systems on the Web

    Science.gov (United States)

    Hu, Yuh-Jong; Yeh, Ching-Long; Laun, Wolfgang

    The RuleML Challenge started in 2007 with the objective of inspiring work on the implementation, management, integration, interoperation and interchange of rules in an open distributed environment, such as the Web. Rules are usually classified into three types: deductive rules, normative rules, and reactive rules. Reactive rules are further classified into ECA rules and production rules. The study of combining rules and ontologies traces back to earlier active rule systems for relational and object-oriented (OO) databases. Recently, this issue has become one of the most important research problems in the Semantic Web. Once we consider a computer-executable policy as a declarative set of rules and ontologies that guides the behavior of entities within a system, we have a flexible way to implement real-world policies without rewriting the computer code, as we did before. Fortunately, we have de facto rule markup languages, such as RuleML or RIF, to achieve the portability and interchange of rules between different rule systems; otherwise, executing real-life rule-based applications on the Web would be almost impossible. Several commercial and open source rule engines are available for rule-based applications. However, we still need a standard rule language and benchmarks, not only to compare rule systems but also to measure progress in the field. Finally, a number of real-life rule-based use cases are investigated to demonstrate the applicability of current rule systems on the Web.
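
    To make one of the rule types concrete, here is a minimal, generic forward-chaining production-rule engine (condition/action pairs firing over a working memory of facts); it is a sketch of the concept only, not the RuleML/RIF machinery the chapter discusses, and all rule content is invented.

        # Minimal forward-chaining production-rule engine: each rule is a
        # (condition, action) pair over a working memory of facts.
        def run(rules, facts, max_cycles=10):
            for _ in range(max_cycles):
                fired = False
                for condition, action in rules:
                    new = action(facts) if condition(facts) else None
                    if new and not new <= facts:   # fire only if it adds facts
                        facts |= new
                        fired = True
                if not fired:                      # fixpoint reached
                    break
            return facts

        rules = [
            (lambda f: "adult" in f and "resident" in f,
             lambda f: {"may_vote"}),
            (lambda f: "age_over_18" in f,
             lambda f: {"adult"}),
        ]
        print(run(rules, {"age_over_18", "resident"}))
        # -> {'age_over_18', 'resident', 'adult', 'may_vote'}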

  19. A Web-Remote/Robotic/Scheduled Astronomical Data Acquisition System

    Science.gov (United States)

    Denny, Robert

    2011-03-01

    Traditionally, remote/robotic observatory operating systems have been custom made for each observatory. While data reduction pipelines need to be tailored for each investigation, the data acquisition process (especially for stare-mode optical images) is often quite similar across investigations. Since 1999, DC-3 Dreams has focused on providing and supporting a remote/robotic observatory operating system which can be adapted to a wide variety of physical hardware and optics while achieving the highest practical observing efficiency and safe/secure web browser user controls. ACP Expert consists of three main subsystems: (1) a robotic list-driven data acquisition engine which controls all aspects of the observatory, (2) a constraint-driven dispatch scheduler with a long-term database of requests, and (3) a built-in "zero admin" web server and dynamic web pages which provide a remote capability for immediate execution and monitoring as well as entry and monitoring of dispatch-scheduled observing requests. No remote desktop login is necessary for observing, thus keeping the system safe and consistent. All routine operation is via the web browser. A wide variety of telescope mounts, CCD imagers, guiding sensors, filter selectors, focusers, instrument-package rotators, weather sensors, and dome control systems are supported via the ASCOM standardized device driver architecture. The system is most commonly employed on commercial 1-meter and smaller observatories used by universities and advanced amateurs for both science and art. One current project, the AAVSO Photometric All-Sky Survey (APASS), uses ACP Expert to acquire large volumes of data in dispatch-scheduled mode. In its first 18 months of operation (North then South), 40,307 sky images were acquired in 117 photometric nights, resulting in 12,107,135 stars detected two or more times. These stars had measures in 5 filters. The northern station covered 754 fields (6446 square degrees) at least twice, the southern

  20. Taking It to the Top: A Lesson in Search Engine Optimization

    Science.gov (United States)

    Frydenberg, Mark; Miko, John S.

    2011-01-01

    Search engine optimization (SEO), the promoting of a Web site so it achieves optimal position with a search engine's rankings, is an important strategy for organizations and individuals in order to promote their brands online. Techniques for achieving SEO are relevant to students of marketing, computing, media arts, and other disciplines, and many…

  1. The Web-Lecture - a viable alternative to the traditional lecture format?

    Science.gov (United States)

    Meibom, S.

    2004-12-01

    Educational research shows that students learn best in an environment with emphasis on teamwork, problem-solving, and hands-on experience. Still, professors spend the majority of their time with students in the traditional lecture-hall setting, where the combination of large classes and limited time prevents sufficient student-teacher interaction to foster an active learning environment. Can modern computer technology be used to provide "lecture-type" information to students via the World Wide Web? If so, will that help professors make better and/or different use of their scheduled time with the students? Answering these questions was the main motivation for the Extra-Solar Planet Project. The Extra-Solar Planet Project was designed to test the effectiveness of a lecture available to the student on the World Wide Web (Web-Lecture) and to engage the students in an active learning environment where they use the information presented in the Web-Lecture. The topic of the Web-Lecture was the detection of extra-solar planets, and the project was implemented in an introductory astronomy course at the University of Wisconsin Madison in the spring of 2004. The Web-Lecture was designed to give an interactive presentation of synchronized video, audio and lecture notes. It was created using the eTEACH software developed at the University of Wisconsin Madison School of Engineering. In my talk, I will describe the project, show excerpts of the Web-Lecture, and present assessments of student learning and results of student evaluations of the web-lecture format.

  2. Understanding User-Web Interactions via Web Analytics

    CERN Document Server

    Jansen, Bernard J

    2009-01-01

    This lecture presents an overview of the Web analytics process, with a focus on providing insight and actionable outcomes from collecting and analyzing Internet data. The lecture first provides an overview of Web analytics, providing in essence, a condensed version of the entire lecture. The lecture then outlines the theoretical and methodological foundations of Web analytics in order to make obvious the strengths and shortcomings of Web analytics as an approach. These foundational elements include the psychological basis in behaviorism and methodological underpinning of trace data as an empir

  3. Performance of retrieval systems on the world wide web: a methodological review.

    Directory of Open Access Journals (Sweden)

    Olvera Lobo, María Dolores

    2000-03-01

    Full Text Available This study is an attempt to establish a methodology for the evaluation of information retrieval with search engines in the World Wide Web. The method, which is explained in detail, adapts traditional evaluation techniques to the peculiarities of the web and makes use of precision and recall scores based on the relevance of the first 20 results retrieved. The method has been successfully applied to the evaluation of ten different search engines.

  4. EntrezAJAX: direct web browser access to the Entrez Programming Utilities

    Directory of Open Access Journals (Sweden)

    Pallen Mark J

    2010-06-01

    Full Text Available Web applications for biology and medicine often need to integrate data from Entrez services provided by the National Center for Biotechnology Information. However, direct access to Entrez from a web browser is not possible due to 'same-origin' security restrictions. The use of "Asynchronous JavaScript and XML" (AJAX) to create rich, interactive web applications is now commonplace, and the ability to access Entrez via AJAX would be advantageous in the creation of integrated biomedical web resources. We describe EntrezAJAX, which provides access to Entrez eUtils and is able to circumvent same-origin browser restrictions. EntrezAJAX is easily implemented by JavaScript developers and provides functionality identical to Entrez eUtils as well as enhanced functionality to ease development. We provide easy-to-understand developer examples written in JavaScript to illustrate potential uses of this service. For the purposes of speed, reliability and scalability, EntrezAJAX has been deployed on Google App Engine, a freely available cloud service. The EntrezAJAX webpage is located at http://entrezajax.appspot.com/
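
    For comparison, a direct server-side call to Entrez eSearch (the kind of request a browser page cannot make itself because of the same-origin policy, hence the EntrezAJAX proxy) looks roughly like the sketch below; it assumes the requests package, and the parameters follow NCBI's documented eUtils interface.

        # Server-side Entrez eSearch call, the kind of request EntrezAJAX
        # proxies for browsers. Assumes the `requests` package; see NCBI's
        # eUtils documentation for the full parameter set.
        import requests

        EUTILS = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

        def esearch(db, term, retmax=5):
            params = {"db": db, "term": term, "retmax": retmax, "retmode": "json"}
            reply = requests.get(EUTILS, params=params, timeout=10)
            reply.raise_for_status()
            return reply.json()["esearchresult"]["idlist"]

        print(esearch("pubmed", "EntrezAJAX"))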

  5. The design and implementation of web mining in web sites security

    Science.gov (United States)

    Li, Jian; Zhang, Guo-Yin; Gu, Guo-Chang; Li, Jian-Li

    2003-06-01

    Backdoors or information leaks in Web servers can be detected by applying Web mining techniques to abnormal Web log and Web application log data. The security of Web servers can thus be enhanced and the damage of illegal access avoided. First, a system for discovering patterns of information leakage in CGI scripts from Web log data is proposed. Second, those patterns are provided to system administrators so that they can modify their code and enhance Web site security. The following aspects are described: one is to combine the web application log with the web log to extract more information, so that web data mining can be used to mine the web log and discover information that firewalls and intrusion detection systems cannot find. Another is to propose an operation module for the web site to enhance its security. For clustering server sessions, a density-based clustering technique is used to reduce resource cost and obtain better efficiency.
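
    The paper's session features are not specified in the abstract, so the following is only a hedged sketch of its final step, density-based clustering of server sessions, using scikit-learn's DBSCAN on two invented features:

        # Density-based clustering of web sessions with DBSCAN; the two toy
        # features (requests per session, error ratio) are illustrative only.
        import numpy as np
        from sklearn.cluster import DBSCAN
        from sklearn.preprocessing import StandardScaler

        sessions = np.array([
            [12, 0.00], [15, 0.05], [11, 0.02],   # ordinary browsing
            [14, 0.03], [13, 0.01],
            [220, 0.85], [240, 0.90],             # scanner-like behaviour
        ])
        X = StandardScaler().fit_transform(sessions)
        labels = DBSCAN(eps=0.6, min_samples=3).fit_predict(X)
        print(labels)   # -1 marks outliers worth an administrator's attention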

  6. An insight into the deep web; why it matters for addiction psychiatry?

    Science.gov (United States)

    Orsolini, Laura; Papanti, Duccio; Corkery, John; Schifano, Fabrizio

    2017-05-01

    Nowadays, the web is spreading rapidly, playing a significant role in the marketing, sale and distribution of "quasi" legal drugs, hence facilitating continuous changes in drug scenarios. The easily renewable and anarchic online drug market is gradually transforming the drug market itself, from a "street" market to a "virtual" one, with customers able to shop with relative anonymity in a 24-hour marketplace. The hidden "deep web" is facilitating this phenomenon. The paper aims to provide mental health and addiction professionals with an overview of current knowledge about pro-drug activities on the deep web. A non-participant netnographic qualitative study of a list of pro-drug websites (blogs, fora, and drug marketplaces) located on the surface web was carried out. A systematic Internet search was conducted on Duckduckgo® and Google® using the following keywords: "drugs" or "legal highs" or "Novel Psychoactive Substances" or "NPS" combined with the word deep web. Four themes (e.g., "How to access the deep web"; "Darknet and the online drug trading sites"; "Grams - search engine for the deep web"; and "Cryptocurrencies") and 14 categories were generated and discussed. This paper represents a complete and systematic guideline about the deep web, specifically focusing on practical information on online drug marketplaces, useful for addiction professionals. Copyright © 2017 John Wiley & Sons, Ltd.

  7. SWS: accessing SRS sites contents through Web Services.

    Science.gov (United States)

    Romano, Paolo; Marra, Domenico

    2008-03-26

    Web Services and Workflow Management Systems can support the creation and deployment of network systems able to automate data analysis and retrieval processes in biomedical research. Web Services have been implemented at bioinformatics centres, and workflow systems have been proposed for biological data analysis. New databanks are often developed taking these technologies into account, but many existing databases do not allow programmatic access; only a fraction of available databanks can thus be queried through programmatic interfaces. SRS is a well-known indexing and search engine for biomedical databanks offering public access to many databanks and analysis tools. Unfortunately, these data are not easily and efficiently accessible through Web Services. We have developed 'SRS by WS' (SWS), a tool that makes information available in SRS sites accessible through Web Services. Information on known sites is maintained in a database, srsdb. SWS consists of a suite of Web Services that can query both srsdb, for information on sites and databases, and SRS sites. SWS returns results in a text-only format and can be accessed through a WSDL-compliant client. SWS enables interoperability between workflow systems and SRS implementations, by also managing access to alternative sites, in order to cope with network and maintenance problems, and by selecting the most up-to-date among available systems. The development and implementation of Web Services allowing programmatic access to an exhaustive set of biomedical databases can significantly improve the automation of in-silico analysis. SWS supports this activity by making the biological databanks managed in public SRS sites available through a programmatic interface.

  8. Quality of Web-based information on cocaine addiction.

    Science.gov (United States)

    Khazaal, Yasser; Chatton, Anne; Cochand, Sophie; Zullino, Daniele

    2008-08-01

    To evaluate the quality of web-based information on cocaine use and addiction and to investigate potential content quality indicators, three keywords (cocaine, cocaine addiction and cocaine dependence) were entered into two popular World Wide Web search engines. Websites were assessed with a standardized proforma designed to rate sites on the basis of accountability, presentation, interactivity, readability and content quality. The "Health on the Net" (HON) quality label and DISCERN scale scores, which help people without content expertise to assess the quality of written health publications, were used to verify their efficiency as quality indicators. Of the 120 websites identified, 61 were included. Most were commercial sites. The results of the study indicate low scores on each of the measures, including content quality. A global score (the sum of accountability, interactivity, content quality and aesthetic criteria) appeared to be a good content quality indicator. While cocaine education websites for patients are widespread, their overall quality is poor. There is a need for better evidence-based information about cocaine use and addiction on the web. The poor and variable quality of web-based information and its possible impact on the physician-patient relationship argue for serious provider-patient discussion of the health information found on the Internet.

  9. BioCarian: search engine for exploratory searches in heterogeneous biological databases.

    Science.gov (United States)

    Zaki, Nazar; Tennakoon, Chandana

    2017-10-02

    There are a large number of biological databases publicly available to scientists on the web, and many private databases are generated in the course of research projects. These databases come in a wide variety of formats. Web standards have evolved in recent times, and semantic web technologies are now available to interconnect diverse and heterogeneous sources of data. Integration and querying of biological databases can therefore be facilitated by techniques used in the semantic web. Heterogeneous databases can be converted into Resource Description Framework (RDF) and queried using the SPARQL language. Searching for exact queries in these databases is trivial; however, exploratory searches need customized solutions, especially when multiple databases are involved. This process is cumbersome and time consuming for those without a sufficient background in computer science. In this context, a search engine facilitating exploratory searches of databases would be of great help to the scientific community. We present BioCarian, an efficient and user-friendly search engine for performing exploratory searches on biological databases. The search engine is an interface for SPARQL queries over RDF databases. We note that many of the databases can be converted to tabular form. We first convert the tabular databases to RDF. The search engine provides a graphical interface based on facets to explore the converted databases. The facet interface is more advanced than conventional facets: it allows complex queries to be constructed, and has additional features such as ranking facet values by several criteria, visually indicating the relevance of a facet value, and presenting the most important facet values when a large number of choices are available. For advanced users, SPARQL queries can be run directly on the databases; using this feature, users can incorporate federated searches of SPARQL endpoints. We used the search engine to do an exploratory search
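
    The tabular-to-RDF-to-SPARQL pipeline can be tried locally at toy scale with the rdflib package (assumed available); BioCarian itself fronts full SPARQL endpoints, but the query mechanics are the same. All names and data below are invented.

        # Tiny tabular-to-RDF conversion plus a SPARQL query, mirroring the
        # pipeline described above at toy scale. Assumes the rdflib package.
        from rdflib import Graph, Literal, Namespace, RDF

        EX = Namespace("http://example.org/")
        g = Graph()

        rows = [("gene1", "BRCA1", 17), ("gene2", "TP53", 17), ("gene3", "CFTR", 7)]
        for rid, symbol, chrom in rows:          # one table row -> one resource
            node = EX[rid]
            g.add((node, RDF.type, EX.Gene))
            g.add((node, EX.symbol, Literal(symbol)))
            g.add((node, EX.chromosome, Literal(chrom)))

        q = """
        PREFIX ex: <http://example.org/>
        SELECT ?symbol WHERE {
            ?gene a ex:Gene ;
                  ex:symbol ?symbol ;
                  ex:chromosome 17 .
        }
        """
        for row in g.query(q):
            print(row.symbol)        # -> BRCA1, TP53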

  10. The Geogenomic Mutational Atlas of Pathogens (GoMAP) web system.

    Directory of Open Access Journals (Sweden)

    David P Sargeant

    Full Text Available We present a new approach to pathogen surveillance that we call geogenomics. Geogenomics examines the geographic distribution of the genomes of pathogens, with a particular emphasis on those mutations that give rise to drug resistance. We engineered a new web system called the Geogenomic Mutational Atlas of Pathogens (GoMAP) that enables investigation of the global distribution of individual drug resistance mutations. As a test case we examined mutations associated with HIV resistance to FDA-approved antiretroviral drugs. GoMAP-HIV makes use of existing public drug resistance and HIV protein sequence data to examine the distribution of 872 drug resistance mutations in ∼502,000 sequences for many countries of the world. We also implemented a broadened classification scheme for HIV drug resistance mutations. Several patterns in the geographic distributions of resistance mutations were identified by visual mining using this web tool. GoMAP-HIV is an open access web application available at http://www.bio-toolkit.com/GoMap/project/

  11. Web archives

    DEFF Research Database (Denmark)

    Finnemann, Niels Ole

    2018-01-01

    This article deals with general web archives and the principles for selecting materials to be preserved. It opens with a brief overview of the reasons why general web archives are needed. Sections two and three present major long-term web archive initiatives, discuss the purposes and possible values of web archives, and ask how to meet unknown future needs, demands and concerns. Section four analyses three main principles in contemporary web archiving strategies (topic-centric, domain-centric and time-centric archiving), and section five discusses how to combine these to provide a broad and rich archive. Section six is concerned with inherent limitations and why web archives are always flawed. The last sections deal with the question of how web archives may fit into the rapidly expanding but fragmented landscape of digital repositories taking care of various parts ...

  12. Concurrent engineering: effective deployment strategies

    Directory of Open Access Journals (Sweden)

    Unny Menon

    1996-12-01

    This paper provides a comprehensive insight into current trends and developments in concurrent engineering for the integrated development of products and processes, with the goal of completing the entire cycle in a shorter time, at lower overall cost and with fewer engineering design changes after product release. The evolution and definition of concurrent engineering are addressed first, followed by a concise review of the following elements of the concurrent engineering approach to product development: concept development (the front-end process), identifying customer needs and quality function deployment, establishing product specifications, concept selection, product architecture, design for manufacturing, effective rapid prototyping, and the economics of product development. An outline of a computer-based tutorial developed by the authors and other graduate students funded by NASA (accessible via the World Wide Web) is provided in this paper. A brief discussion of teamwork for successful concurrent engineering is included. Case histories of concurrent engineering implementation at North American and European companies are outlined, with references to textbooks authored by Professor Menon and other writers. A comprehensive bibliography on concurrent engineering is included in the paper.

  13. A Web-Based Tool to Estimate Pollutant Loading Using LOADEST

    Directory of Open Access Journals (Sweden)

    Youn Shik Park

    2015-09-01

    Collecting and analyzing water quality samples is costly and typically requires significant effort compared to collecting streamflow data, so water quality data are typically collected at low frequency. Regression models that identify a relationship between streamflow and water quality data are therefore often used to estimate pollutant loads. A web-based tool using LOAD ESTimator (LOADEST) as its core engine, with four modules, was developed to provide user-friendly interfaces and input data collection via web access. The first module requests and receives streamflow and water quality data from the U.S. Geological Survey. The second module retrieves the watershed area for computing pollutant loads per unit area. The third module examines potential errors in the input datasets before LOADEST runs, and the last module computes estimated and allowable annual average pollutant loads and provides tabular and graphical LOADEST outputs. The web-based tool was applied to two watersheds in this study, one agriculturally dominated and one urban dominated. The annual sediment load at the urban-dominated watershed was found to exceed the target load; the web-based tool thus correctly identified the watershed requiring best management practices to reduce pollutant loads.
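
    The regression idea at the core of LOADEST can be illustrated with a minimal sketch: fit a rating curve of log load against log streamflow from sparse samples, then apply it to the continuous streamflow record. Plain least squares is used here for brevity; LOADEST itself fits more elaborate models with seasonal terms and bias corrections, and all numbers below are hypothetical.

        import numpy as np

        # Sparse paired samples: streamflow (m^3/s) and measured load (kg/day).
        flow_sampled = np.array([1.2, 3.4, 0.9, 5.6, 2.1])
        load_sampled = np.array([14.0, 61.0, 9.5, 130.0, 30.0])

        # Fit the rating curve ln(load) = a + b * ln(flow).
        b, a = np.polyfit(np.log(flow_sampled), np.log(load_sampled), 1)

        # Apply it to the continuous daily streamflow record to estimate loads.
        flow_daily = np.array([1.0, 1.5, 4.0, 0.8, 3.0])
        load_daily = np.exp(a + b * np.log(flow_daily))
        print("estimated mean daily load:", load_daily.mean())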

  14. Search engine optimization

    OpenAIRE

    Marolt, Klemen

    2013-01-01

    Search engine optimization techniques, often shortened to “SEO,” aim to achieve top positions in organic search results. Some optimization techniques do not change over time and still form the basis of SEO. However, as the Internet and web design evolve dynamically, new optimization techniques flourish and flop. We therefore looked at the most important factors that can help improve positioning in search results. It is important to emphasize that none of the techniques can guarantee high ...

  15. Dynamic Web Expression for Near-real-time Sensor Networks

    Science.gov (United States)

    Lindquist, K. G.; Newman, R. L.; Nayak, A.; Vernon, F. L.; Nelson, C.; Hansen, T. S.; Yuen-Wong, R.

    2003-12-01

    As near-real-time sensor grids become more widespread, and processing systems based on them become more powerful, summarizing the raw and derived information products and delivering them to the end user become increasingly important, both for ongoing monitoring and as a platform for cross-disciplinary research. We have re-engineered the dbrecenteqs program, which was designed to render real-time earthquake databases as dynamic web pages, using several powerful new technologies. While the application is still most fully developed for seismic data, the infrastructure is extensible (and being extended) to create a real-time information architecture for numerous signal domains. This work provides a practical, lightweight approach suitable for individual seismic and sensor networks, which does not require a full 'web-services' implementation. Nevertheless, the technologies here are extensible to larger applications such as the Storage Resource Broker-based VORB project. The technologies included in the new system blend real-time relational databases as a focus for processing and data handling; an XML->XSLT architecture as the core of the web mirroring; PHP extensions to Antelope (the environmental monitoring-system context adopted for RoadNET) to support complex, user-driven interactivity; and VRML output for expressing information as web-browsable three-dimensional worlds.
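
    A minimal sketch of the XML->XSLT mirroring pattern the abstract mentions: a database export serialized as XML is transformed into an HTML page by a stylesheet. The element names and stylesheet below are hypothetical, not from dbrecenteqs; Python's lxml stands in for the original toolchain.

        from lxml import etree

        # A (hypothetical) XML export of a real-time earthquake database.
        xml_doc = etree.fromstring("""
        <recent_events>
          <event><time>2003-11-17T06:43:06Z</time><mag>4.2</mag></event>
          <event><time>2003-11-17T09:12:51Z</time><mag>3.1</mag></event>
        </recent_events>""")

        # An XSLT stylesheet that renders the export as an HTML list.
        xslt_doc = etree.fromstring("""
        <xsl:stylesheet version="1.0"
            xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
          <xsl:template match="/recent_events">
            <html><body><ul>
              <xsl:for-each select="event">
                <li>M<xsl:value-of select="mag"/> at <xsl:value-of select="time"/></li>
              </xsl:for-each>
            </ul></body></html>
          </xsl:template>
        </xsl:stylesheet>""")

        transform = etree.XSLT(xslt_doc)
        print(str(transform(xml_doc)))  # mirrored HTML page ready to publish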

  16. Even Faster Web Sites Performance Best Practices for Web Developers

    CERN Document Server

    Souders, Steve

    2009-01-01

    Performance is critical to the success of any web site, and yet today's web applications push browsers to their limits with increasing amounts of rich content and heavy use of Ajax. In this book, Steve Souders, web performance evangelist at Google and former Chief Performance Yahoo!, provides valuable techniques to help you optimize your site's performance. Souders' previous book, the bestselling High Performance Web Sites, shocked the web development world by revealing that 80% of the time it takes for a web page to load is spent on the client side. In Even Faster Web Sites, Souders and eight experts ...

  17. Analysis, Design and Development of KINPOE Web Portal

    International Nuclear Information System (INIS)

    Rehman, M. Z.

    2012-01-01

    As the web has grown, so has the number of ways people use it. Today, it is not uncommon for web users to shop, chat with friends or strangers, manage their bank accounts and exercise routines, share photos or videos, and more. Online, web forms bridge the gap between people, their information, and a web product or service: they can streamline sales or key customer actions, build communities or conversations, and more. These crucial interactions not only keep businesses running, they also let people accomplish what they want. Every year, thousands of students queue up to collect admission/application forms and then again to submit them, which leads to problems in managing the applications and results in annoyed parents and students alike. Chapter 1 discusses the existing admission system and some problems with the current system. KINPOE needed to automate the admission process for its PGTP and PDTP programs, so it was a good time to start with at least a prototype web application for online admission. Chapter 2 covers the process of the online admission system, which is divided into four phases: application form filling, automatic roll number allotment with fee slip generation, fee verification, and admit card printing. Chapter 3 gives details of the application development, based on the Advanced Development Strategy, with ASP.NET 4.0, C# and the SQL Server 2008 database engine; the online admission system is also presented with snapshots in this chapter. Chapter 4 covers the deployment and testing of the web application. This document is not intended for software developers, as it does not contain a requirements specification or other developer-related documents; it is designed to support the users and system administrators who will use and maintain the system. (author)

  18. 75 FR 28820 - Notice of Public Meeting by Teleconference Concerning Heavy Duty Diesel Engine Consent Decrees

    Science.gov (United States)

    2010-05-24

    ... implementation of the provisions of the seven consent decrees signed by the United States and diesel engine..., or anticipates receiving, requests from the diesel engine manufacturers for termination of their respective decrees. This meeting notice is also available on EPA's Diesel Engine Settlement Web site at http...

  19. Enhanced Web Interfaces for Administering Invenio Digital Library

    CERN Document Server

    Batista, João

    2012-01-01

    Invenio is an open source web-based application that implements a digital library or document server; it is used at CERN as the base of the CERN Document Server institutional repository and the Inspire high energy physics subject repository. The purpose of this work was to reimplement the administrative interface of the Invenio search engine using new and proven open source technologies, to simplify the code base and lay the foundations for the work of porting the rest of the administrative interfaces to these newer technologies. In my time as a CERN openlab summer student I was able to implement some of the features of the WebSearch admin interfaces, enhance some of the existing code with new features, and find solutions to technical challenges that will be common when porting the other administrative interface modules.

  20. Graphic Data Display from Manufacturing on Web Pages

    Directory of Open Access Journals (Sweden)

    Martin VALAS

    2009-06-01

    Industrial data can be displayed in graphical form, which is usually used by three types of users. The first are continuous users, most often operational engineers, who check the currently displayed values and then intervene in the operation. The second are occasional users who are interested in historical data, e.g. for servicing reasons. The last type are businesspeople and managers, for whom a comparison with the state a few days or months ago serves as decision-making support. A graph component, together with a web application that provides data as an XML document, was designed for the second user group; the graph component displays historical data. Students can fully understand all the problems that go along with creating a web application in ASP.NET that provides data as an XML document, as well as creating a graph component in the Flash integrated development environment, thanks to the solution described in detail using ActionScript.
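
    The data side of this pattern, a web application serializing historical measurements as an XML document for a client-side graph component to plot, can be sketched as follows; element names and sample values are hypothetical, and Python's standard library stands in for the ASP.NET original.

        import xml.etree.ElementTree as ET

        # Hypothetical historical measurements: (timestamp, temperature in degC).
        history = [("2009-06-01T10:00", 71.5), ("2009-06-01T11:00", 72.8),
                   ("2009-06-01T12:00", 70.9)]

        # Serialize the history as an XML document, one element per sample.
        root = ET.Element("measurements", unit="degC")
        for timestamp, value in history:
            ET.SubElement(root, "sample", time=timestamp, value=str(value))

        # The web application would return this document to the graph component.
        print(ET.tostring(root, encoding="unicode"))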