WorldWideScience

Sample records for wide web links

  1. Materializing the web of linked data

    CERN Document Server

    Konstantinou, Nikolaos

    2015-01-01

    This book explains the Linked Data domain by adopting a bottom-up approach: it introduces the fundamental Semantic Web technologies and building blocks, which are then combined into methodologies and end-to-end examples for publishing datasets as Linked Data, and use cases that harness scholarly information and sensor data. It presents how Linked Data is used for web-scale data integration, information management and search. Special emphasis is given to the publication of Linked Data from relational databases as well as from real-time sensor data streams. The authors also trace the transformation from the document-based World Wide Web into a Web of Data. Materializing the Web of Linked Data is addressed to researchers and professionals studying software technologies, tools and approaches that drive the Linked Data ecosystem, and the Web in general.

  2. The World Wide Web and the Television Generation.

    Science.gov (United States)

    Maddux, Cleborne D.

    1996-01-01

    The hypermedia nature of the World Wide Web may represent a true paradigm shift in telecommunications, but barriers exist to the Web having similar impact on education. Some of today's college students compare the Web with "bad TV"--lengthy pauses, links that result in error messages, and animation and sound clips that are too brief.…

  3. WorldWide Web: Hypertext from CERN.

    Science.gov (United States)

    Nickerson, Gord

    1992-01-01

    Discussion of software tools for accessing information on the Internet focuses on the WorldWideWeb (WWW) system, which was developed at the European Particle Physics Laboratory (CERN) in Switzerland to build a worldwide network of hypertext links using available networking technology. Its potential for use with multimedia documents is also…

  4. Playing with the internet through world wide web

    International Nuclear Information System (INIS)

    Kim, Seon Tae; Jang, Jin Seok

    1995-07-01

This book describes how to use the internet through the world wide web. It is divided into six chapters: let's go to the internet ocean; the internet in the information superhighway era; connecting to the world over a telephone wire, covering links via internet cable and telephone modem, internet service providers, text-mode connection, and domains and IP addresses; the principles and uses of the world wide web, including business, music, fashion, movies and photos, internet news and e-mail; making an internet map with a web language; and the installation and use of base programs such as TCP/IP, SLIP/PPP, a 3270 emulator, Finger and NCSA Mosaic.

  5. Internet and The World Wide Web

    Indian Academy of Sciences (India)

Internet and The World Wide Web. Neelima Shrikhande. General Article, Resonance – Journal of Science Education, Volume 2, Issue 2, February 1997, pp. 64-74. Permanent link: https://www.ias.ac.in/article/fulltext/reso/002/02/0064-0074 ...

  6. Integrating Temporal Media and Open Hypermedia on the World Wide Web

    DEFF Research Database (Denmark)

    Bouvin, Niels Olof; Schade, René

    1999-01-01

    The World Wide Web has since its beginning provided linking to and from text documents encoded in HTML. The Web has evolved and most Web browsers now support a rich set of media types either by default or by the use of specialised content handlers, known as plug-ins. The limitations of the Web...

  7. Linked Data Evolving the Web into a Global Data Space

    CERN Document Server

    Heath, Tom

    2011-01-01

    The World Wide Web has enabled the creation of a global information space comprising linked documents. As the Web becomes ever more enmeshed with our daily lives, there is a growing desire for direct access to raw data not currently available on the Web or bound up in hypertext documents. Linked Data provides a publishing paradigm in which not only documents, but also data, can be a first class citizen of the Web, thereby enabling the extension of the Web with a global data space based on open standards - the Web of Data. In this Synthesis lecture we provide readers with a detailed technical i

  8. Collaborative Design of World Wide Web Pages: A Case Study.

    Science.gov (United States)

    Andrew, Paige G; Musser, Linda R.

    1997-01-01

    This case study of the collaborative design of an earth science World Wide Web page at Pennsylvania State University highlights the role of librarians. Discusses the original Web site and links, planning, the intended audience, and redesign and recommended changes; and considers the potential contributions of librarians. (LRW)

  9. Unit 148 - World Wide Web Basics

    OpenAIRE

    148, CC in GIScience; Yeung, Albert K.

    2000-01-01

    This unit explains the characteristics and the working principles of the World Wide Web as the most important protocol of the Internet. Topics covered in this unit include characteristics of the World Wide Web; using the World Wide Web for the dissemination of information on the Internet; and using the World Wide Web for the retrieval of information from the Internet.

  10. Growth and structure of the World Wide Web: Towards realistic modeling

    Science.gov (United States)

    Tadić, Bosiljka

    2002-08-01

    We simulate evolution of the World Wide Web from the dynamic rules incorporating growth, bias attachment, and rewiring. We show that the emergent double-hierarchical structure with distinct distributions of out- and in-links is comparable with the observed empirical data when the control parameter (average graph flexibility β) is kept in the range β=3-4. We then explore the Web graph by simulating (a) Web crawling to determine size and depth of connected components, and (b) a random walker that discovers the structure of connected subgraphs with dominant attractor and promoter nodes. A random walker that adapts its move strategy to mimic local node linking preferences is shown to have a short access time to "important" nodes on the Web graph.
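
    The growth-with-bias-attachment rule described above can be sketched in a few lines. This is a minimal illustration of preferential attachment only, not the paper's full model (which also includes rewiring and the graph-flexibility parameter β); the function and parameter names are my own.

```python
import random

def grow_graph(n, m=2, seed=42):
    """Grow a directed graph by biased (preferential) attachment:
    each new node links to m existing nodes, chosen in proportion
    to their current in-degree (+1, so new nodes can also be hit)."""
    rng = random.Random(seed)
    in_links = {0: []}                      # node -> list of nodes linking to it
    for new in range(1, n):
        nodes = list(in_links)
        weights = [len(in_links[v]) + 1 for v in nodes]
        targets = set()
        while len(targets) < min(m, len(nodes)):
            targets.add(rng.choices(nodes, weights=weights)[0])
        in_links[new] = []
        for t in targets:
            in_links[t].append(new)
    return in_links

g = grow_graph(200)
degrees = sorted((len(v) for v in g.values()), reverse=True)
# Older, already-popular nodes accumulate most in-links (a heavy tail).
print(degrees[:5])
```

    Even this stripped-down rule reproduces the qualitative point of the abstract: a few early nodes end up with far more in-links than the average node.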

  11. Introduction to the world wide web.

    Science.gov (United States)

    Downes, P K

    2007-05-12

    The World Wide Web used to be nicknamed the 'World Wide Wait'. Now, thanks to high speed broadband connections, browsing the web has become a much more enjoyable and productive activity. Computers need to know where web pages are stored on the Internet, in just the same way as we need to know where someone lives in order to post them a letter. This section explains how the World Wide Web works and how web pages can be viewed using a web browser.
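
    The postal-address analogy above can be made concrete: a URL tells the browser which server to contact and which page to request, much as an address names a building and a recipient. A small sketch using Python's standard URL parser (the example URL is hypothetical):

```python
from urllib.parse import urlparse

# A URL is the web's "postal address": the host names the server
# (resolved to an IP address via DNS) and the path names the page on it.
url = "http://www.example.com/dentistry/intro.html"
parts = urlparse(url)

print(parts.scheme)   # protocol the browser speaks: 'http'
print(parts.netloc)   # which server to contact: 'www.example.com'
print(parts.path)     # which page on that server: '/dentistry/intro.html'
```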

  12. World-Wide Web the information universe

    CERN Document Server

    Berners-Lee, Tim; Groff, Jean-Francois; Pollermann, Bernd

    1992-01-01

    Purpose - The World-Wide Web (W-3) initiative is a practical project designed to bring a global information universe into existence using available technology. This paper seeks to describe the aims, data model, and protocols needed to implement the "web" and to compare them with various contemporary systems. Design/methodology/approach - Since Vannevar Bush's article, men have dreamed of extending their intellect by making their collective knowledge available to each individual by using machines. Computers provide us two practical techniques for human-knowledge interface. One is hypertext, in which links between pieces of text (or other media) mimic human association of ideas. The other is text retrieval, which allows associations to be deduced from the content of text. The W-3 ideal world allows both operations and provides access from any browsing platform. Findings - Various server gateways to other information systems have been produced, and the total amount of information available on the web is...

  13. Consistency in the World Wide Web

    DEFF Research Database (Denmark)

    Thomsen, Jakob Grauenkjær

Tim Berners-Lee envisioned that computers will behave as agents of humans on the World Wide Web, where they will retrieve, extract, and interact with information from the World Wide Web. A step towards this vision is to make computers capable of extracting this information in a reliable and consistent way. In this dissertation we study steps towards this vision by showing techniques for the specification, the verification and the evaluation of the consistency of information in the World Wide Web. We show how to detect certain classes of errors in a specification of information, and we show how ... the World Wide Web, in order to help perform consistent evaluations of web extraction techniques. These contributions are steps towards having computers reliably and consistently extract information from the World Wide Web, which in turn are steps towards achieving Tim Berners-Lee's vision.

  14. World Wide Web voted most wonderful wonder by web-wide world

    CERN Multimedia

    2007-01-01

The results are in, and the winner is...the World Wide Web! An online survey conducted by the CNN news group ranks the World Wide Web, invented at CERN, as the most wonderful of the seven modern wonders of the world. (See Bulletin No. 49/2006.) There is currently no speculation about whether they would have had the same results had they distributed the survey by post. The World Wide Web won with a whopping 50 per cent of the votes (3,665 votes). The runner-up was CERN again, with 16 per cent of voters (1,130 votes) casting the ballot in favour of the CERN particle accelerator. Stepping into place behind CERN and CERN is 'None of the Above' with 8 per cent of the votes (611 votes), followed by the development of Dubai (7%), the bionic arm (7%), China's Three Gorges Dam (5%), The Channel Tunnel (4%), and France's Millau viaduct (3%). Thanks to everyone from CERN who voted. You can view the results on http://edition.cnn.com/SPECIALS/2006/modern.wonders/

  15. Network Formation and the Structure of the Commercial World Wide Web

    OpenAIRE

    Zsolt Katona; Miklos Sarvary

    2008-01-01

    We model the commercial World Wide Web as a directed graph that emerges as the equilibrium of a game in which utility maximizing websites purchase (advertising) in-links from each other while also setting the price of these links. In equilibrium, higher content sites tend to purchase more advertising links (mirroring the Dorfman-Steiner rule) while selling less advertising links themselves. As such, there seems to be specialization across sites in revenue models: high content sites tend to ea...

  16. Alaskan Auroral All-Sky Images on the World Wide Web

    Science.gov (United States)

    Stenbaek-Nielsen, H. C.

    1997-01-01

    In response to a 1995 NASA SPDS announcement of support for preservation and distribution of important data sets online, the Geophysical Institute, University of Alaska Fairbanks, Alaska, proposed to provide World Wide Web access to the Poker Flat Auroral All-sky Camera images in real time. The Poker auroral all-sky camera is located in the Davis Science Operation Center at Poker Flat Rocket Range about 30 miles north-east of Fairbanks, Alaska, and is connected, through a microwave link, with the Geophysical Institute where we maintain the data base linked to the Web. To protect the low light-level all-sky TV camera from damage due to excessive light, we only operate during the winter season when the moon is down. The camera and data acquisition is now fully computer controlled. Digital images are transmitted each minute to the Web linked data base where the data are available in a number of different presentations: (1) Individual JPEG compressed images (1 minute resolution); (2) Time lapse MPEG movie of the stored images; and (3) A meridional plot of the entire night activity.

  17. Educational use of World Wide Web pages on CD-ROM.

    Science.gov (United States)

    Engel, Thomas P; Smith, Michael

    2002-01-01

    The World Wide Web is increasingly important for medical education. Internet served pages may also be used on a local hard disk or CD-ROM without a network or server. This allows authors to reuse existing content and provide access to users without a network connection. CD-ROM offers several advantages over network delivery of Web pages for several applications. However, creating Web pages for CD-ROM requires careful planning. Issues include file names, relative links, directory names, default pages, server created content, image maps, other file types and embedded programming. With care, it is possible to create server based pages that can be copied directly to CD-ROM. In addition, Web pages on CD-ROM may reference Internet served pages to provide the best features of both methods.
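
    Relative links are the crux of the planning issues listed above: a server-absolute link such as /course/images/fig1.gif breaks once the page tree is copied to CD-ROM, but a relative one keeps working. A small sketch of the rewriting step (the paths and the helper name are illustrative, not the authors' tooling):

```python
import posixpath

def relativize(link, page_path):
    """Rewrite a server-absolute link (e.g. '/course/images/fig1.gif')
    as a relative link that still works when the same tree is copied
    to CD-ROM, given the path of the page that contains it."""
    if not link.startswith("/"):
        return link                       # already relative: leave it alone
    page_dir = posixpath.dirname(page_path)
    return posixpath.relpath(link, start=page_dir)

print(relativize("/course/images/fig1.gif", "/course/unit1/page.html"))
# -> '../images/fig1.gif'
```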

  18. Linked data: un nuovo alfabeto del web semantico

    Directory of Open Access Journals (Sweden)

    Mauro Guerrini

    2013-01-01

Full Text Available The paper defines linked data as a set of best practices used to publish data on the web in machine-readable form; the technology (or mode of realization) of linked data is associated with the concept of the semantic web. Its area is the semantic web, or web of data, as defined by Tim Berners-Lee: "A web of things in the world, described by data on the web". The paper highlights the continuities and differences between the semantic web and the traditional web, or web of documents. The analysis of linked data takes place within the world of libraries, archives and museums, traditionally committed to high standards for structuring and sharing data. The data, in fact, assume the role of generating quality information for the network. The production of linked data requires compliance with rules and the use of specific technologies and languages, especially in the case of publication of linked data in open mode. The production cycle of linked data may serve as a track, or a guideline, for institutions that wish to join projects to publish their data. Data quality is assessed through a rating system designed by Tim Berners-Lee.

  19. Link invariant and $G_2$ web space

    OpenAIRE

    Sakamoto, Takuro; Yonezawa, Yasuyoshi

    2017-01-01

    In this paper, we reconstruct Kuperberg’s $G_2$ web space [5, 6]. We introduce a new web diagram (a trivalent graph with only double edges) and new relations between Kuperberg’s web diagrams and the new web diagram. Using the web diagrams, we give crossing formulas for the $R$-matrices associated to some irreducible representations of $U_q(G_2)$ and calculate $G_2$ quantum link invariants for generalized twist links.

  20. Network dynamics: The World Wide Web

    Science.gov (United States)

    Adamic, Lada Ariana

    Despite its rapidly growing and dynamic nature, the Web displays a number of strong regularities which can be understood by drawing on methods of statistical physics. This thesis finds power-law distributions in website sizes, traffic, and links, and more importantly, develops a stochastic theory which explains them. Power-law link distributions are shown to lead to network characteristics which are especially suitable for scalable localized search. It is also demonstrated that the Web is a "small world": to reach one site from any other takes an average of only 4 hops, while most related sites cluster together. Additional dynamical properties of the Web graph are extracted from diffusion processes.
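
    The "small world" claim above (any site reachable in about 4 hops on average) is just shortest-path length over the link graph, which breadth-first search computes directly. A toy sketch on a hypothetical miniature web; the site names are invented:

```python
from collections import deque

def hops(links, start, goal):
    """Breadth-first search over a dict of out-links; returns the
    minimum number of link-clicks from start to goal (or None if
    goal is unreachable)."""
    seen, queue = {start}, deque([(start, 0)])
    while queue:
        site, d = queue.popleft()
        if site == goal:
            return d
        for nxt in links.get(site, ()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, d + 1))
    return None

# A hypothetical miniature web: a few hub sites shorten every route.
web = {
    "news":  ["hub", "blog"],
    "blog":  ["hub"],
    "hub":   ["shop", "wiki", "news"],
    "wiki":  ["hub", "paper"],
    "paper": [],
    "shop":  ["hub"],
}
print(hops(web, "blog", "paper"))   # blog -> hub -> wiki -> paper: 3 hops
```

    Note that because links are directed, reachability is asymmetric: "paper" has no out-links, so nothing is reachable from it.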

  1. Management van World-Wide Web Servers

    NARCIS (Netherlands)

    van Hengstum, F.P.H.; Pras, Aiko

    1996-01-01

The World Wide Web is a popular Internet application that makes it possible to offer documents to arbitrary Internet users. Because no provisions had yet been made for this, until recently it was not really possible to manage the World Wide Web remotely. The University

  2. Embedded Web Technology: Applying World Wide Web Standards to Embedded Systems

    Science.gov (United States)

    Ponyik, Joseph G.; York, David W.

    2002-01-01

Embedded Systems have traditionally been developed in a highly customized manner. The user interface hardware and software, along with the interface to the embedded system, are typically unique to the system for which they are built, resulting in extra cost to the system in terms of development time and maintenance effort. World Wide Web standards have been developed in the past ten years with the goal of allowing servers and clients to interoperate seamlessly. The client and server systems can consist of differing hardware and software platforms, but the World Wide Web standards allow them to interface without knowing the details of the system at the other end of the interface. Embedded Web Technology is the merging of Embedded Systems with the World Wide Web. Embedded Web Technology decreases the cost of developing and maintaining the user interface by allowing the user to interface to the embedded system through a web browser running on a standard personal computer. Embedded Web Technology can also be used to simplify an Embedded System's internal network.
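
    The idea in the abstract can be sketched with a few lines of standard-library code: the device serves its own state as HTML, so any stock browser becomes the user interface. This is a generic illustration, not NASA's implementation; the readings, port, and function names are invented.

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

def render_status(readings):
    """Render device state as plain HTML so a stock browser on any PC
    becomes the UI (the readings dict stands in for real sensor values)."""
    rows = "".join(f"<li>{k}: {v}</li>" for k, v in readings.items())
    return f"<html><body><h1>Device status</h1><ul>{rows}</ul></body></html>"

class StatusHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = render_status({"temperature": "21.5 C", "uptime": "42 h"}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

# On the device, something like the following would run the UI
# (port and address are hypothetical):
#   HTTPServer(("", 8080), StatusHandler).serve_forever()
```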

  3. Extracting Macroscopic Information from Web Links.

    Science.gov (United States)

    Thelwall, Mike

    2001-01-01

Discussion of Web-based link analysis focuses on an evaluation of Ingwersen's proposed external Web Impact Factor for the original use of the Web, namely the interlinking of academic research. Studies relationships between academic hyperlinks and research activities for British universities and discusses the use of search engines for Web link…

  4. The effects of link format and screen location on visual search of web pages.

    Science.gov (United States)

    Ling, Jonathan; Van Schaik, Paul

    2004-06-22

    Navigation of web pages is of critical importance to the usability of web-based systems such as the World Wide Web and intranets. The primary means of navigation is through the use of hyperlinks. However, few studies have examined the impact of the presentation format of these links on visual search. The present study used a two-factor mixed measures design to investigate whether there was an effect of link format (plain text, underlined, bold, or bold and underlined) upon speed and accuracy of visual search and subjective measures in both the navigation and content areas of web pages. An effect of link format on speed of visual search for both hits and correct rejections was found. This effect was observed in the navigation and the content areas. Link format did not influence accuracy in either screen location. Participants showed highest preference for links that were in bold and underlined, regardless of screen area. These results are discussed in the context of visual search processes and design recommendations are given.

  5. Exploiting link structure for web page genre identification

    KAUST Repository

    Zhu, Jia

    2015-07-07

    As the World Wide Web develops at an unprecedented pace, identifying web page genre has recently attracted increasing attention because of its importance in web search. A common approach for identifying genre is to use textual features that can be extracted directly from a web page, that is, On-Page features. The extracted features are subsequently inputted into a machine learning algorithm that will perform classification. However, these approaches may be ineffective when the web page contains limited textual information (e.g., the page is full of images). In this study, we address genre identification of web pages under the aforementioned situation. We propose a framework that uses On-Page features while simultaneously considering information in neighboring pages, that is, the pages that are connected to the original page by backward and forward links. We first introduce a graph-based model called GenreSim, which selects an appropriate set of neighboring pages. We then construct a multiple classifier combination module that utilizes information from the selected neighboring pages and On-Page features to improve performance in genre identification. Experiments are conducted on well-known corpora, and favorable results indicate that our proposed framework is effective, particularly in identifying web pages with limited textual information. © 2015 The Author(s)
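
    The "multiple classifier combination module" above merges per-source predictions (on-page text, in-link neighbours, out-link neighbours) into one genre label. A generic combination rule is a simple majority vote; the following sketch illustrates that idea only and is not the paper's actual GenreSim module.

```python
from collections import Counter

def combine(predictions):
    """Majority vote over per-classifier genre predictions; ties break
    in favour of the label seen first. A generic combination rule,
    not the GenreSim paper's exact method."""
    counts = Counter(predictions)
    return max(counts, key=counts.get)

# Hypothetical votes, one per information source:
votes = ["shop", "blog", "shop"]   # on-page text, in-link pages, out-link pages
print(combine(votes))              # -> 'shop'
```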

  6. Exploiting link structure for web page genre identification

    KAUST Repository

    Zhu, Jia; Xie, Qing; Yu, Shoou I.; Wong, Wai Hung

    2015-01-01

    As the World Wide Web develops at an unprecedented pace, identifying web page genre has recently attracted increasing attention because of its importance in web search. A common approach for identifying genre is to use textual features that can be extracted directly from a web page, that is, On-Page features. The extracted features are subsequently inputted into a machine learning algorithm that will perform classification. However, these approaches may be ineffective when the web page contains limited textual information (e.g., the page is full of images). In this study, we address genre identification of web pages under the aforementioned situation. We propose a framework that uses On-Page features while simultaneously considering information in neighboring pages, that is, the pages that are connected to the original page by backward and forward links. We first introduce a graph-based model called GenreSim, which selects an appropriate set of neighboring pages. We then construct a multiple classifier combination module that utilizes information from the selected neighboring pages and On-Page features to improve performance in genre identification. Experiments are conducted on well-known corpora, and favorable results indicate that our proposed framework is effective, particularly in identifying web pages with limited textual information. © 2015 The Author(s)

  7. Information about liver transplantation on the World Wide Web.

    Science.gov (United States)

    Hanif, F; Sivaprakasam, R; Butler, A; Huguet, E; Pettigrew, G J; Michael, E D A; Praseedom, R K; Jamieson, N V; Bradley, J A; Gibbs, P

    2006-09-01

Orthotopic liver transplant (OLTx) has evolved into a successful surgical management for end-stage liver diseases. Awareness and information about OLTx is an important tool in assisting OLTx recipients and the people supporting them, including non-transplant clinicians. The study aimed to investigate the nature and quality of liver transplant-related patient information on the World Wide Web. Four common search engines were used to explore the Internet using the key words 'Liver transplant'. The URL (uniform resource locator) of the top 50 returns was chosen, as it was judged unlikely that the average user would search beyond the first 50 sites returned by a given search. Each Web site was assessed on the following categories: origin, language, accessibility and extent of the information. A weighted Information Score (IS) was created to assess the quality of clinical and educational value of each Web site and was scored independently by three transplant clinicians. The Internet search performed with the aid of the four search engines yielded a total of 2,255,244 Web sites. Of the 200 possible sites, only 58 Web sites were assessed because of repetition of the same Web sites and non-accessible links. The overall median weighted IS was 22 (IQR 1 - 42). Of the 58 Web sites analysed, 45 (77%) belonged to the USA, six (10%) were European, and seven (12%) were from the rest of the world. The median weighted IS of publications originating from Europe and the USA was 40 (IQR = 22 - 60) and 23 (IQR = 6 - 38), respectively. Although European Web sites produced a higher weighted IS [40 (IQR = 22 - 60)] than the USA publications [23 (IQR = 6 - 38)], this was not statistically significant (p = 0.07). Web sites belonging to academic institutions and professional organizations scored significantly higher, with median weighted IS of 28 (IQR = 16 - 44) and 24 (12 - 35), respectively, as compared with the commercial Web sites (median = 6 with IQR of 0 - 14, p = .001).

  8. Le world wide web: l'hypermedià sur internet | Houmel | Revue d ...

    African Journals Online (AJOL)

The technology of telecommunication networks linked to the electronic document has profoundly changed the working methods of information specialists. The Internet did much to drive these big changes, especially after the integration of the World Wide Web, which is a highly distributed hypermedia information system. In Algeria lots ...

  9. INTERNET and information about nuclear sciences. The world wide web virtual library: nuclear sciences

    International Nuclear Information System (INIS)

    Kuruc, J.

    1999-01-01

In this work the author proposes to establish a new virtual library that centralizes information on the nuclear disciplines available on the Internet, in order to offer, first and foremost, connections to the most important links across the nuclear sciences. The author has entitled this new virtual library The World Wide Web Virtual Library: Nuclear Sciences. In constituting this virtual library, the following basic principles were chosen: home pages of international organizations important from the point of view of the nuclear disciplines; home pages of national nuclear commissions and governments; home pages of nuclear scientific societies; web pages specialized in nuclear topics in general; periodic tables of elements and isotopes; web pages devoted to the Chernobyl accident and its consequences; and web pages with an antinuclear aim. These are followed by links grouped into web pages according to individual nuclear areas: nuclear arsenals; nuclear astrophysics; nuclear aspects of biology (radiobiology); nuclear chemistry; nuclear companies; nuclear data centres; nuclear energy; environmental aspects of nuclear energy (radioecology); nuclear energy info centres; nuclear engineering; nuclear industries; nuclear magnetic resonance; nuclear material monitoring; nuclear medicine and radiology; nuclear physics; nuclear power (plants); nuclear reactors; nuclear risk; nuclear technologies and defence; nuclear testing; nuclear tourism; and nuclear wastes. Within these groups, web links are concentrated into the following categories: virtual libraries and specialized servers; science; nuclear societies; nuclear departments of academic institutes; nuclear research institutes and laboratories; centres; info links

  10. World Wide Web of Your Wide Web? Juridische aspecten van zoekmachine-personalisatie

    NARCIS (Netherlands)

    Oostveen, M.

    2012-01-01

The world wide web is an enormous source of information. Every Internet user relies on search engines to find that information. Many users, however, do not know that the search results for a given search term are not the same for everyone. This personalization of

  11. Increasing public understanding of transgenic crops through the World Wide Web.

    Science.gov (United States)

    Byrne, Patrick F; Namuth, Deana M; Harrington, Judy; Ward, Sarah M; Lee, Donald J; Hain, Patricia

    2002-07-01

Transgenic crops are among the most controversial "science and society" issues of recent years. Because of the complex techniques involved in creating these crops and the polarized debate over their risks and benefits, a critical need has arisen for accessible and balanced information on this technology. World Wide Web sites offer several advantages for disseminating information on a fast-changing technical topic, including their global accessibility and their ability to update information frequently, incorporate multimedia formats, and link to networks of other sites. An alliance between two complementary web sites at Colorado State University and the University of Nebraska-Lincoln takes advantage of the web environment to help fill the need for public information on crop genetic engineering. This article describes the objectives and features of each site. Viewership data and other feedback have shown these web sites to be an effective means of reaching public audiences on a complex scientific topic.

  12. The World Wide Web Revisited

    Science.gov (United States)

    Owston, Ron

    2007-01-01

Nearly a decade ago the author wrote, in Educational Researcher, one of the first widely-cited academic articles about the educational role of the web. He argued that educators must be able to demonstrate that the web (1) can increase access to learning, (2) must not result in higher costs for learning, and (3) can lead to improved learning. These…

  13. Tim Berners-Lee, World Wide Web inventor

    CERN Multimedia

    1998-01-01

The "Internet, Web, What's next?" conference on 26 June 1998 at CERN: Tim Berners-Lee, inventor of the World Wide Web and Director of the W3C, explained how the Web came to be and gave his views on the future.

  14. Happy 20th Birthday, World Wide Web!

    CERN Multimedia

    2009-01-01

On 13 March CERN celebrated the 20th anniversary of the World Wide Web. Check out the video interview with Web creator Tim Berners-Lee and find out more about both the history and future of the Web. To celebrate, CERN also launched a brand new website, CERNland, for kids.

  15. Use of World Wide Web and NCSA Mosaic at Langley

    Science.gov (United States)

    Nelson, Michael

    1994-01-01

    A brief history of the use of the World Wide Web at Langley Research Center is presented along with architecture of the Langley Web. Benefits derived from the Web and some Langley projects that have employed the World Wide Web are discussed.

  16. Utilization of the world wide web

    International Nuclear Information System (INIS)

    Mohr, P.; Mallard, G.; Ralchenko, U.; Schultz, D.

    1998-01-01

Two aspects of utilization of the World Wide Web are examined: (i) the communication of technical data through web sites that provide repositories of atomic and molecular data accessible through searchable databases; and (ii) the communication about issues of mutual concern among data producers, data compilers and evaluators, and data users. copyright 1998 American Institute of Physics

  17. Semantic Web: Metadata, Linked Data, Open Data

    Directory of Open Access Journals (Sweden)

    Vanessa Russo

    2015-12-01

Full Text Available What is the Semantic Web? What is it for? The inventor of the Web, Tim Berners-Lee, describes it as a research methodology able to exploit the network to its maximum capacity. This metadata system represents the innovative element in the transition from web 2.0 to web 3.0. In this context, the article tries to clarify the theoretical and informatic requirements of the Semantic Web. Finally, it explains Linked Data applications for developing new tools for active citizenship.

  18. World Wide Web Metaphors for Search Mission Data

    Science.gov (United States)

    Norris, Jeffrey S.; Wallick, Michael N.; Joswig, Joseph C.; Powell, Mark W.; Torres, Recaredo J.; Mittman, David S.; Abramyan, Lucy; Crockett, Thomas M.; Shams, Khawaja S.; Fox, Jason M.; hide

    2010-01-01

A software program that searches and browses mission data emulates a Web browser, containing standard metaphors for Web browsing. By taking advantage of back-end URLs, users may save and share search states. Also, since a Web interface is familiar to users, training time is reduced. Familiar back and forward buttons move through a local search history. A refresh/reload button regenerates a query, and loads in any new data. URLs can be constructed to save search results. Adding context to the current search is also handled through a familiar Web metaphor. The query is constructed by clicking on hyperlinks that represent new components to the search query. The selection of a link appears to the user as a page change; the choice of links changes to represent the updated search and the results are filtered by the new criteria. Selecting a navigation link changes the current query and also the URL that is associated with it. The back button can be used to return to the previous search state. This software is part of the MSLICE release, which was written in Java. It will run on any current Windows, Macintosh, or Linux system.
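
    The core trick above is that the entire search state lives in the URL, so saving or sharing the URL saves or shares the query. A minimal sketch of that round-trip using Python's standard URL utilities; the scheme and parameter names ('sol', 'camera') are illustrative, not MSLICE's actual ones.

```python
from urllib.parse import urlencode, urlsplit, parse_qs

def search_url(base, **criteria):
    """Encode the current search state into a shareable URL
    (sorted so equal states always produce identical URLs)."""
    return base + "?" + urlencode(sorted(criteria.items()))

url = search_url("app://search", camera="navcam", sol="431")
print(url)                         # app://search?camera=navcam&sol=431

# Anyone given the URL can reconstruct the same query:
state = parse_qs(urlsplit(url).query)
print(state)                       # {'camera': ['navcam'], 'sol': ['431']}
```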

  19. In Which "Linked Data" Means "a Better Web"

    Science.gov (United States)

    Chudnov, Daniel

    2009-01-01

    In this article, the author talks about linked data and focuses on the main point of linked data: building a better web. Even though how people build the web has changed steadily over the years (and keeps changing, as programmers switch toolkits and frameworks every few years, disposing of older languages and tools when newer, better ones come…

  20. The poor quality of information about laparoscopy on the World Wide Web as indexed by popular search engines.

    Science.gov (United States)

    Allen, J W; Finch, R J; Coleman, M G; Nathanson, L K; O'Rourke, N A; Fielding, G A

    2002-01-01

    This study was undertaken to determine the quality of information on the Internet regarding laparoscopy. Four popular World Wide Web search engines were used with the key word "laparoscopy." Advertisements, patient- or physician-directed information, and controversial material were noted. A total of 14,030 Web pages were found, but only 104 were unique Web sites. The majority of the sites were duplicate pages, subpages within a main Web page, or dead links. Twenty-eight of the 104 pages had a medical product for sale, 26 were patient-directed, 23 were written by a physician or group of physicians, and six represented corporations. The remaining 21 were "miscellaneous." The 46 pages containing educational material were critically reviewed. At least one of the senior authors found that 32 of the pages contained controversial or misleading statements. All of the three senior authors (LKN, NAO, GAF) independently agreed that 17 of the 46 pages contained controversial information. The World Wide Web is not a reliable source for patient or physician information about laparoscopy. Authenticating medical information on the World Wide Web is a difficult task, and no government or surgical society has taken the lead in regulating what is presented as fact on the World Wide Web.

  1. Quality of information available on the World Wide Web for patients undergoing thyroidectomy: review.

    Science.gov (United States)

    Muthukumarasamy, S; Osmani, Z; Sharpe, A; England, R J A

    2012-02-01

    This study aimed to assess the quality of information available on the World Wide Web for patients undergoing thyroidectomy. The first 50 web-links generated by internet searches using the five most popular search engines and the key word 'thyroidectomy' were evaluated using the Lida website validation instrument (assessing accessibility, usability and reliability) and the Flesch Reading Ease Score. We evaluated 103 of a possible 250 websites. Mean scores (ranges) were: Lida accessibility, 48/63 (27-59); Lida usability, 36/54 (21-50); Lida reliability, 21/51 (4-38); and Flesch Reading Ease, 43.9 (2.6-77.6). The quality of internet health information regarding thyroidectomy is variable. High ranking and popularity are not good indicators of website quality. Overall, none of the websites assessed achieved high Lida scores. In order to prevent the dissemination of inaccurate or commercially motivated information, we recommend independent labelling of medical information available on the World Wide Web.
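The Flesch Reading Ease score used in the record above is a standard formula: 206.835 - 1.015*(words/sentences) - 84.6*(syllables/words). A minimal sketch follows; the vowel-run syllable counter is a crude approximation of true syllable counting, and the sample sentences are invented:

```python
import re

def count_syllables(word):
    """Rough syllable estimate: count runs of vowels (approximation only)."""
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_reading_ease(text):
    """Flesch Reading Ease for non-empty English text; higher means easier."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z]+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (206.835
            - 1.015 * (len(words) / len(sentences))
            - 84.6 * (syllables / len(words)))

easy = flesch_reading_ease("The cat sat on the mat.")
hard = flesch_reading_ease(
    "Authenticating medical information electronically "
    "necessitates considerable institutional coordination.")
```

On this scale, the mean of 43.9 reported above falls in the "difficult, college-level" band.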

  2. Uses and Gratifications of the World Wide Web: From Couch Potato to Web Potato.

    Science.gov (United States)

    Kaye, Barbara K.

    1998-01-01

    Investigates uses and gratifications of the World Wide Web and its impact on traditional mass media, especially television. Identifies six Web use motivations: entertainment, social interaction, passing of time, escape, information, and Web site preference. Examines relationships between each use motivation and Web affinity, perceived realism, and…

  3. The PEP-II/BaBar Project-Wide Database using World Wide Web and Oracle*Case

    International Nuclear Information System (INIS)

    Chan, A.; Crane, G.; MacGregor, I.; Meyer, S.

    1995-12-01

    The PEP-II/BaBar Project Database is a tool for monitoring the technical and documentation aspects of the accelerator and detector construction. It holds the PEP-II/BaBar design specifications, fabrication and installation data in one integrated system. Key pieces of the database include the machine parameter list, components fabrication and calibration data, survey and alignment data, property control, CAD drawings, publications and documentation. This central Oracle database on a UNIX server is built using Oracle*Case tools. Users at the collaborating laboratories mainly access the data using World Wide Web (WWW). The Project Database is being extended to link to legacy databases required for the operations phase

  4. GeoCENS: a geospatial cyberinfrastructure for the world-wide sensor web.

    Science.gov (United States)

    Liang, Steve H L; Huang, Chih-Yuan

    2013-10-02

    The world-wide sensor web has become a very useful technique for monitoring the physical world at spatial and temporal scales that were previously impossible. Yet we believe that the full potential of the sensor web has thus far not been revealed. In order to harvest the world-wide sensor web's full potential, a geospatial cyberinfrastructure is needed to store, process, and deliver large amounts of sensor data collected worldwide. In this paper, we first define the issue of the sensor web long tail, followed by our view of the world-wide sensor web architecture. Then, we introduce the Geospatial Cyberinfrastructure for Environmental Sensing (GeoCENS) architecture and explain each of its components. Finally, with a demonstration of three real-world powered-by-GeoCENS sensor web applications, we believe that the GeoCENS architecture can successfully address the sensor web long tail issue and consequently realize the world-wide sensor web vision.

  5. World-Wide Web: The Information Universe.

    Science.gov (United States)

    Berners-Lee, Tim; And Others

    1992-01-01

    Describes the World-Wide Web (W3) project, which is designed to create a global information universe using techniques of hypertext, information retrieval, and wide area networking. Discussion covers the W3 data model, W3 architecture, the document naming scheme, protocols, document formats, comparison with other systems, experience with the W3…

  6. U.S. Geological Survey World Wide Web Information

    Science.gov (United States)


    2003-01-01

    The U.S. Geological Survey (USGS) invites you to explore an earth science virtual library of digital information, publications, and data. The USGS World Wide Web sites offer an array of information that reflects scientific research and monitoring programs conducted in the areas of natural hazards, environmental resources, and cartography. This list provides gateways to access a cross section of the digital information on the USGS World Wide Web sites.

  7. Tim Berners-Lee, World Wide Web inventor

    CERN Multimedia

    1994-01-01

    A former physicist, Tim Berners-Lee invented the World Wide Web as an essential tool for high energy physics at CERN from 1989 to 1994. Together with a small team he conceived HTML, HTTP and URLs, and put up the first server and the first 'what you see is what you get' browser and HTML editor. Tim is now Director of the Web Consortium W3C, the international Web standards body based at INRIA, MIT and Keio University.

  8. Age differences in search of web pages: the effects of link size, link number, and clutter.

    Science.gov (United States)

    Grahame, Michael; Laberge, Jason; Scialfa, Charles T

    2004-01-01

    Reaction time, eye movements, and errors were measured during visual search of Web pages to determine age-related differences in performance as a function of link size, link number, link location, and clutter. Participants (15 young adults, M = 23 years; 14 older adults, M = 57 years) searched Web pages for target links that varied from trial to trial. During one half of the trials, links were enlarged from 10-point to 12-point font. Target location was distributed among the left, center, and bottom portions of the screen. Clutter was manipulated according to the percentage of used space, including graphics and text, and the number of potentially distracting nontarget links was varied. Increased link size improved performance, whereas increased clutter and links hampered search, especially for older adults. Results also showed that links located in the left region of the page were found most easily. Actual or potential applications of this research include Web site design to increase usability, particularly for older adults.

  9. GeoCENS: A Geospatial Cyberinfrastructure for the World-Wide Sensor Web

    Directory of Open Access Journals (Sweden)

    Steve H.L. Liang

    2013-10-01

    Full Text Available The world-wide sensor web has become a very useful technique for monitoring the physical world at spatial and temporal scales that were previously impossible. Yet we believe that the full potential of the sensor web has thus far not been revealed. In order to harvest the world-wide sensor web's full potential, a geospatial cyberinfrastructure is needed to store, process, and deliver large amounts of sensor data collected worldwide. In this paper, we first define the issue of the sensor web long tail, followed by our view of the world-wide sensor web architecture. Then, we introduce the Geospatial Cyberinfrastructure for Environmental Sensing (GeoCENS) architecture and explain each of its components. Finally, with a demonstration of three real-world powered-by-GeoCENS sensor web applications, we believe that the GeoCENS architecture can successfully address the sensor web long tail issue and consequently realize the world-wide sensor web vision.

  10. Surfing the World Wide Web to Education Hot-Spots.

    Science.gov (United States)

    Dyrli, Odvard Egil

    1995-01-01

    Provides a brief explanation of Web browsers and their use, as well as technical information for those considering access to the WWW (World Wide Web). Curriculum resources and addresses to useful Web sites are included. Sidebars show sample searches using Yahoo and Lycos search engines, and a list of recommended Web resources. (JKP)

  11. The World Wide Web of War

    National Research Council Canada - National Science Library

    Smith, Craig A

    2006-01-01

    Modern communications, combined with the near instantaneous publication of information on the World Wide Web, are providing the means to dramatically affect the pursuit, conduct, and public opinion of war on both sides...

  12. Implementation of a World Wide Web server for the oil and gas industry

    International Nuclear Information System (INIS)

    Blaylock, R.E.; Martin, F.D.; Emery, R.

    1996-01-01

    The Gas and Oil Technology Exchange and Communication Highway (GO-TECH) provides an electronic information system for the petroleum community for exchanging ideas, data, and technology. The PC-based system fosters communication and discussion by linking the oil and gas producers with resource centers, government agencies, consulting firms, service companies, national laboratories, academic research groups, and universities throughout the world. The oil and gas producers can access the GO-TECH World Wide Web (WWW) home page through modem links, as well as through the Internet. Future GO-TECH applications will include the establishment of virtual corporations consisting of consortia of small companies, consultants, and service companies linked by electronic information systems. These virtual corporations will have the resources and expertise previously found only in major corporations

  13. An information filtering system prototype for world wide web; Prototipo di sistema di information filtering per world wide web

    Energy Technology Data Exchange (ETDEWEB)

    Bordoni, L [ENEA Centro Ricerche Casaccia, S. Maria di Galeria, RM (Italy). Funzione Centrale Studi

    1999-07-01

    In this report, the architecture of an adaptive information filtering system for the World Wide Web, developed by the Rome Third University (Italy) for ENEA (National Agency for New Technology, Energy and the Environment), is described. This prototype allows for selecting documents in text/HTML format from the web according to the interests of users. A user modeling shell builds a model of the user's interests, obtained during the interaction. The experimental results support the choice of user modeling methods for this kind of application.

  14. World Wide Web Homepage Design.

    Science.gov (United States)

    Tillman, Michael L.

    This paper examines hypermedia design and draws conclusions about how educational research and theory applies to various aspects of World Wide Web (WWW) homepage design. "Hypermedia" is defined as any collection of information which may be textual, graphical, visual, or auditory in nature and which may be accessed via a nonlinear route.…

  15. Using the World Wide Web To Teach Francophone Culture.

    Science.gov (United States)

    Beyer, Deborah Berg; Van Ells, Paula Hartwig

    2002-01-01

    Examined use of the World Wide Web to teach Francophone culture. Suggests that bolstering reading comprehension in the foreign language and increased proficiency in navigating the Web are potential secondary benefits gained from the cultural Web-based activities proposed in the study.(Author/VWL)

  16. Visualisierung von typisierten Links in Linked Data

    Directory of Open Access Journals (Sweden)

    Georg Neubauer

    2017-09-01

    Full Text Available This work deals with the visualization of typed links in Linked Data. The scientific fields that broadly delimit the content of the contribution are the Semantic Web, the Web of Data, and information visualization. The Semantic Web, conceived by Tim Berners-Lee in 2001, represents an extension of the World Wide Web (Web 2.0). Current research concerns the linkability of information on the World Wide Web. To make such connections perceivable and processable, visualizations are the most important requirement as a central part of data processing. In the context of the Semantic Web, representations of interconnected information are handled by means of graphs. The primary purpose of this work is to describe the design of Linked Data visualization concepts, whose principles are introduced in a theoretical approach. From this context, a step-by-step extension of the information, with the aim of offering practical guidelines, leads to the interconnection of these elaborated design guidelines. By describing the designs of two alternative visualizations for a standardized web application that visualizes Linked Data as a network, a test of their compatibility could be carried out. The practical part therefore covers the design phase, the results, and the future requirements of the project that were worked out through the testing.

  18. Collecting behavioural data using the world wide web: considerations for researchers.

    Science.gov (United States)

    Rhodes, S D; Bowie, D A; Hergenrather, K C

    2003-01-01

    To identify and describe advantages, challenges, and ethical considerations of web-based behavioural data collection. This discussion is based on the authors' experiences in survey development and study design, respondent recruitment, and internet research, and on the experiences of others as found in the literature. The advantages of using the world wide web to collect behavioural data include rapid access to numerous potential respondents and previously hidden populations, respondent openness and full participation, opportunities for student research, and reduced research costs. Challenges identified include issues related to sampling and sample representativeness, competition for the attention of respondents, and potential limitations resulting from the much-cited "digital divide", literacy, and disability. Ethical considerations include anonymity and privacy, providing and substantiating informed consent, and potential risks of malfeasance. Computer-mediated communications, including electronic mail, the world wide web, and interactive programs, will play an ever-increasing part in the future of behavioural science research. Justifiable concerns regarding the use of the world wide web in research exist, but as access to, and use of, the internet becomes more widely and representatively distributed globally, the world wide web will become more applicable. In fact, the world wide web may be the only research tool able to reach some previously hidden population subgroups. Furthermore, many of the criticisms of online data collection are common to other survey research methodologies.

  19. Business use of the World-Wide Web

    Directory of Open Access Journals (Sweden)

    C. Cockburn

    1995-01-01

    Full Text Available Two methods were employed in this study of the use of the World Wide Web by business: first, a sample of 300 businesses with Web sites, across a wide range of industry types, was examined, by selecting (rather than sampling) companies from the Yahoo! directory. The sites were investigated in relation to several areas - the purpose of the Web site, the use being made of electronic mail and the extent to which multi-media was being utilised. In addition, any other aspects of the site which were designed to make it more interesting to potential customers were also noted. Secondly, an electronic-mail questionnaire was sent to 222 of the 300 companies surveyed: that is, those that provided an e-mail address for contact. 14 were returned immediately due to unknown addresses or technical problems. Of the remaining 208, 102 replies were received, five of which were of no relevance, leaving 97 completed questionnaires to examine; a response rate of 47%, which is surprisingly good for a survey of this kind.

  20. Re-Framing the World Wide Web

    Science.gov (United States)

    Black, August

    2011-01-01

    The research presented in this dissertation studies and describes how technical standards, protocols, and application programming interfaces (APIs) shape the aesthetic, functional, and affective nature of our most dominant mode of online communication, the World Wide Web (WWW). I examine the politically charged and contentious battle over browser…

  1. Process Support for Cooperative Work on the World Wide Web

    NARCIS (Netherlands)

    Sikkel, Nicolaas; Neumann, Olaf; Sachweh, Sabine

    The World Wide Web is becoming a dominating factor in information technology. Consequently, computer-supported cooperative work on the Web has recently drawn a lot of attention. Process Support for Cooperative Work (PSCW) is a Web-based system supporting both structured and unstructured forms of cooperation.

  2. WebPresent: a World Wide Web-based telepresentation tool for physicians

    Science.gov (United States)

    Sampath-Kumar, Srihari; Banerjea, Anindo; Moshfeghi, Mehran

    1997-05-01

    In this paper, we present the design architecture and the implementation status of WebPresent - a World Wide Web-based telepresentation tool. This tool allows a physician to use a conference server workstation and make a presentation of patient cases to a geographically distributed audience. The audience consists of other physicians collaborating on patients' health care management and physicians participating in continuing medical education. These physicians are at several locations, with networks of different bandwidth and capabilities connecting them. Audiences also receive the patient case information on different computers, ranging from high-end display workstations to laptops with low-resolution displays. WebPresent is a scalable networked multimedia tool which supports the presentation of hypertext, images, audio, video, and a whiteboard to remote physicians with hospital Intranet access. WebPresent allows the audience to receive customized information. The data received can differ in resolution and bandwidth, depending on the availability of resources such as display resolution and network bandwidth.

  3. WEB STRUCTURE MINING

    Directory of Open Access Journals (Sweden)

    CLAUDIA ELENA DINUCĂ

    2011-01-01

    Full Text Available The World Wide Web became one of the most valuable resources for information retrieval and knowledge discovery due to the permanent increase in the amount of data available online. Given the web's dimension, users easily get lost in its rich hyperstructure. Application of data mining methods is the right solution for knowledge discovery on the Web. The knowledge extracted from the Web can be used to improve the performance of Web information retrieval, question answering and Web-based data warehousing. In this paper, I provide an introduction to the categories of Web mining and focus on one of them: Web structure mining. Web structure mining, one of the three categories of Web mining, is a tool used to identify the relationship between Web pages linked by information or direct link connection. It offers information about how different pages are linked together to form this huge web. Web structure mining finds hidden basic structures and uses hyperlinks for more web applications such as web search.
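Hyperlink-based structure mining, as described in this record, is the idea behind link-analysis algorithms such as PageRank. A minimal, self-contained sketch follows; the three-page graph is invented for illustration, and dangling pages (no outgoing links) are handled by spreading their rank evenly:

```python
def pagerank(links, damping=0.85, iterations=50):
    """Minimal PageRank over a graph given as {page: [linked pages]}."""
    pages = set(links) | {t for targets in links.values() for t in targets}
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iterations):
        new = {p: (1.0 - damping) / len(pages) for p in pages}
        for page in pages:
            targets = links.get(page, [])
            if targets:
                for t in targets:  # each outgoing link passes on an equal share
                    new[t] += damping * rank[page] / len(targets)
            else:
                for p in pages:    # dangling page: redistribute rank evenly
                    new[p] += damping * rank[page] / len(pages)
        rank = new
    return rank

# Toy web: a links to b and c, b links to c, c links back to a.
ranks = pagerank({"a": ["b", "c"], "b": ["c"], "c": ["a"]})
```

Pages that attract many (or highly ranked) in-links end up with higher scores, which is exactly the hidden structure that web structure mining exploits.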

  4. Googling DNA sequences on the World Wide Web.

    Science.gov (United States)

    Hajibabaei, Mehrdad; Singer, Gregory A C

    2009-11-10

    New web-based technologies provide an excellent opportunity for sharing and accessing information and for using the web as a platform for interaction and collaboration. Although several specialized tools are available for analyzing DNA sequence information, conventional web-based tools have not been utilized for bioinformatics applications. We have developed a novel algorithm and implemented it for searching species-specific genomic sequences, DNA barcodes, by using popular web-based methods such as Google. We developed an alignment-independent, character-based algorithm based on dividing a sequence library (DNA barcodes) and a query sequence into words. The actual search is conducted by conventional search tools such as the freely available Google Desktop Search. We implemented our algorithm in two exemplar packages. We developed pre- and post-processing software to provide customized input and output services, respectively. Our analysis of all publicly available DNA barcode sequences shows high accuracy as well as rapid results. Our method makes use of conventional web-based technologies for specialized genetic data. It provides a robust and efficient solution for sequence search on the web. The integration of our search method for large-scale sequence libraries such as DNA barcodes provides an excellent web-based tool for accessing this information and linking it to other available categories of information on the web.
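The word-based idea this record describes (breaking the barcode library and the query into short words so that an ordinary text search engine can do exact word matching) can be sketched independently of Google Desktop Search. The word length and the toy sequences below are invented for illustration:

```python
def to_words(seq, size=5):
    """Split a DNA sequence into its set of overlapping fixed-length words."""
    return {seq[i:i + size] for i in range(len(seq) - size + 1)}

def barcode_search(barcodes, query, size=5):
    """Rank reference barcodes by how many words they share with the query."""
    query_words = to_words(query, size)
    scores = {name: len(query_words & to_words(seq, size))
              for name, seq in barcodes.items()}
    best = max(scores, key=scores.get)
    return best, scores

barcodes = {
    "species_A": "ATGGCATTACGGATCCA",  # toy references, not real barcodes
    "species_B": "TTGACCGGTACCATTGA",
}
best, scores = barcode_search(barcodes, "GCATTACGG")
```

Because matching is on exact words rather than alignments, any engine that indexes text can serve as the search back end, which is the point of the paper's approach.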

  5. The Top 100 Linked-To Pages on UK University Web Sites: High Inlink Counts Are Not Usually Associated with Quality Scholarly Content.

    Science.gov (United States)

    Thelwall, Mike

    2002-01-01

    Reports on an investigation into the most highly linked pages on United Kingdom university Web sites. Concludes that simple link counts are highly unreliable indicators of the average behavior of scholars, and that the most highly linked-to pages are those that facilitate access to a wide range of information rather than providing specific…

  6. A World Wide Web Region-Based Image Search Engine

    DEFF Research Database (Denmark)

    Kompatsiaris, Ioannis; Triantafyllou, Evangelia; Strintzis, Michael G.

    2001-01-01

    In this paper the development of an intelligent image content-based search engine for the World Wide Web is presented. This system will offer a new form of media representation and access of content available in WWW. Information Web Crawlers continuously traverse the Internet and collect images...

  7. Teaching Critical Evaluation Skills for World Wide Web Resources.

    Science.gov (United States)

    Tate, Marsha; Alexander, Jan

    1996-01-01

    Outlines a lesson plan used by an academic library to evaluate the quality of World Wide Web information. Discusses the traditional evaluation criteria of accuracy, authority, objectivity, currency, and coverage as it applies to the unique characteristics of Web pages: their marketing orientation, variety of information, and instability. The…

  8. Linked data-as-a-service: The semantic web redeployed

    NARCIS (Netherlands)

    Rietveld, Laurens; Verborgh, Ruben; Beek, Wouter; Vander Sande, Miel; Schlobach, Stefan

    2015-01-01

    Ad-hoc querying is crucial to access information from Linked Data, yet publishing queryable RDF datasets on the Web is not a trivial exercise. The most compelling argument to support this claim is that the Web contains hundreds of thousands of data documents, while only 260 queryable SPARQL endpoints.

  9. Sources of Militaria on the World Wide Web | Walker | Scientia ...

    African Journals Online (AJOL)

    Having an interest in military-type topics is one thing, finding information on the web to quench your thirst for knowledge is another. The World Wide Web (WWW) is a universal electronic library that contains millions of web pages. As well as being fun, it is an addictive tool on which to search for information. To prevent hours ...

  10. Connecting geoscience systems and data using Linked Open Data in the Web of Data

    Science.gov (United States)

    Ritschel, Bernd; Neher, Günther; Iyemori, Toshihiko; Koyama, Yukinobu; Yatagai, Akiyo; Murayama, Yasuhiro; Galkin, Ivan; King, Todd; Fung, Shing F.; Hughes, Steve; Habermann, Ted; Hapgood, Mike; Belehaki, Anna

    2014-05-01

    Linked Data, or Linked Open Data (LOD) in the realm of free and publicly accessible data, is one of the most promising and most used semantic Web frameworks connecting various types of data and vocabularies, including geoscience and related domains. The semantic Web extension to the commonly existing and used World Wide Web is based on the meaning of entities and relationships, or in other words classes and properties, used for data in a global data and information space, the Web of Data. LOD data is referenced and mashed up by URIs and is retrievable using simple parameter-controlled HTTP requests, leading to a result which is human-understandable or machine-readable. Furthermore, the publishing and mash-up of data in the semantic Web realm is realized by specific Web standards, such as RDF, RDFS, OWL and SPARQL, defined for the Web of Data. Semantic-Web-based mash-up is the Web method to aggregate and reuse various contents from different sources, such as using FOAF as a model and vocabulary for the description of persons and organizations (in our case) related to geoscience projects, instruments, observations, data and so on. Taking the example of three different geoscience data and information management systems, ESPAS, IUGONET and GFZ ISDC, and the associated science data and related metadata, or better called context data, the concept of the mash-up of systems and data using the semantic Web approach and the Linked Open Data framework is described in this publication. Because the three systems are based on different data models, data storage structures and technical implementations, an extra semantic Web layer on top of the existing interfaces is used for mash-up solutions. In order to satisfy the semantic Web standards, data transition processes are necessary, such as the transfer of content stored in relational databases or mapped in XML documents into SPARQL-capable databases or endpoints using D2R or XSLT.
    In addition, the use of mapped and/or merged domain
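The SPARQL endpoints this record relies on are queried over plain HTTP, which is easy to sketch. A minimal example follows; the endpoint URL and the sample response are invented, and note that the `format=json` parameter is an endpoint convenience offered by some servers (e.g. Virtuoso), while the standard mechanism for choosing a result format is the HTTP Accept header:

```python
import json
from urllib.parse import urlencode

def sparql_get_url(endpoint, query):
    """Build a SPARQL Protocol GET request URL, asking for JSON results."""
    return endpoint + "?" + urlencode({"query": query, "format": "json"})

def bindings(result_json):
    """Flatten the SPARQL 1.1 JSON results format into plain dicts."""
    data = json.loads(result_json)
    return [{var: cell["value"] for var, cell in row.items()}
            for row in data["results"]["bindings"]]

url = sparql_get_url(
    "http://example.org/sparql",
    "SELECT ?s WHERE { ?s a <http://xmlns.com/foaf/0.1/Person> } LIMIT 5")

# Invented sample response in the standard SPARQL JSON results layout.
sample = ('{"head": {"vars": ["s"]}, "results": {"bindings": '
          '[{"s": {"type": "uri", "value": "http://example.org/alice"}}]}}')
rows = bindings(sample)
```

The same parameter-controlled HTTP request pattern underlies the D2R-published endpoints mentioned above: a URL in, an RDF or JSON result set out.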

  11. So Wide a Web, So Little Time.

    Science.gov (United States)

    McConville, David; And Others

    1996-01-01

    Discusses new trends in the World Wide Web. Highlights include multimedia; digitized audio-visual files; compression technology; telephony; virtual reality modeling language (VRML); open architecture; and advantages of Java, an object-oriented programming language, including platform independence, distributed development, and pay-per-use software.…

  12. Business use of the World Wide Web: a report on further investigations

    Directory of Open Access Journals (Sweden)

    Hooi-Im Ng

    1998-01-01

    Full Text Available As a continuation of a previous study, this paper reports on a series of studies into business use of the World Wide Web and, more generally, the Internet. The use of the World Wide Web as a business tool has increased rapidly over the past three years, and the benefits of the World Wide Web to business and customers are discussed, together with the barriers that hold back future development of electronic commerce. As with the previous study, we report on a desk survey of 300 randomly selected business Web sites and on the results of an electronic mail questionnaire sent to the sample companies. An extended version of this paper has been submitted to the International Journal of Information Management

  13. News Resources on the World Wide Web.

    Science.gov (United States)

    Notess, Greg R.

    1996-01-01

    Describes up-to-date news sources that are presently available on the Internet and World Wide Web. Highlights include electronic newspapers; AP (Associated Press) sources and Reuters; sports news; stock market information; New York Times; multimedia capabilities, including CNN Interactive; and local and regional news. (LRW)

  14. Golden Jubilee Photos: World Wide Web

    CERN Multimedia

    2004-01-01

    At the end of the 1980s, the Internet was already a valuable tool to scientists, allowing them to exchange e-mails and to access powerful computers remotely. A more simple means of sharing information was needed, however, and CERN, with its long tradition of informatics and networking, was the ideal place to find it. Moreover, hundreds of scientists from all over the world were starting to work together on preparations for the experiments at the Large Electron-Positron (LEP) collider. In 1989, Tim Berners-Lee (see photo), a young scientist working at CERN, drafted a proposal for an information-management system combining the internet, personal computers and computer-aided document consultation, known as hypertext. In 1990 he was joined by Robert Cailliau and the weaving of the World Wide Web began in earnest, even though only two CERN computers were allocated to the task at the time. The Web subsequently underwent a steady expansion to include the world's main particle physics institutes. The Web was not the...

  15. Judging nursing information on the world wide web.

    Science.gov (United States)

    Cader, Raffik

    2013-02-01

    The World Wide Web is increasingly becoming an important source of information for healthcare professionals. However, finding reliable information from unauthoritative Web sites to inform healthcare can pose a challenge to nurses. A study, using grounded theory, was undertaken in two phases to understand how qualified nurses judge the quality of Web nursing information. Data were collected using semistructured interviews and focus groups. An explanatory framework that emerged from the data showed that the judgment process involved the application of forms of knowing and modes of cognition to a range of evaluative tasks and depended on the nurses' critical skills, the time available, and the level of Web information cues. This article mainly focuses on the six evaluative tasks relating to assessing user-friendliness, outlook and authority of Web pages, and relationship to nursing practice; appraising the nature of evidence; and applying cross-checking strategies. The implications of these findings to nurse practitioners and publishers of nursing information are significant.

  16. International Markedsføring på World Wide Web (International Marketing on the World Wide Web)

    DEFF Research Database (Denmark)

    Rask, Morten; Buch, Niels Jakob

    1999-01-01

    This article takes as its point of departure a group of Danish companies' use of the World Wide Web for international marketing in the period from 1996 to 1998. Three interaction types are identified for the companies' Web presence, namely the Brochure, the Handbook and the Marketplace. The demands that each interaction type might make with respect to automation, formalization, integration and evaluation are reflected upon. The conclusion is that the three interaction types reflect the challenges and opportunities in using the Web for marketing, primarily from an international perspective, but they can also serve as input to national Web marketing activities.

  17. Student participation in World Wide Web-based curriculum development of general chemistry

    Science.gov (United States)

    Hunter, William John Forbes

    1998-12-01

    This thesis describes an action research investigation of improvements to instruction in General Chemistry at Purdue University. Specifically, the study was conducted to guide continuous reform of curriculum materials delivered via the World Wide Web by involving students, instructors, and curriculum designers. The theoretical framework for this study was based upon constructivist learning theory, and knowledge claims were developed using an inductive analysis procedure. The results of this study are assertions made in three domains: learning chemistry content via the World Wide Web, learning about learning via the World Wide Web, and learning about participation in an action research project. In the chemistry content domain, students were able to learn chemical concepts that utilized 3-dimensional visualizations, but not textual and graphical information delivered via the Web. In the learning via the Web domain, the use of feedback, the placement of supplementary aids, navigation, and the perception of conceptual novelty were all important to students' use of the Web. In the participation in action research domain, students learned about the complexity of curriculum development, and valued their empowerment as part of the process.

  18. An Efficient Approach for Web Indexing of Big Data through Hyperlinks in Web Crawling

    Science.gov (United States)

    Devi, R. Suganya; Manjula, D.; Siddharth, R. K.

    2015-01-01

    Web Crawling has acquired tremendous significance in recent times, and it is aptly associated with the substantial development of the World Wide Web. Web Search Engines face new challenges due to the availability of vast amounts of web documents, which makes the retrieved results less applicable to the analysers. Recently, however, Web Crawling has focused solely on obtaining the links of the corresponding documents. Today, there exist various algorithms and software which are used to crawl links from the web; these links then have to be further processed for future use, thereby increasing the overload on the analyser. This paper concentrates on crawling the links and retrieving all information associated with them to facilitate easy processing for other uses. First, the links are crawled from the specified uniform resource locator (URL) using a modified version of the Depth First Search algorithm, which allows for complete hierarchical scanning of the corresponding web links. The links are then accessed via the source code, and metadata such as the title, keywords, and description are extracted. This content is essential for any type of analyser work to be carried out on the Big Data obtained as a result of Web Crawling. PMID:26137592
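    The approach the abstract describes (depth-first traversal of links, then extraction of title, keyword and description metadata from each page's source) can be sketched as follows. This is a minimal illustration, not the paper's implementation: a small in-memory set of pages stands in for live HTTP fetches, and all URLs and page contents are invented for the example.

```python
from html.parser import HTMLParser

# Hypothetical in-memory "web" (URL -> HTML source), standing in for live fetches.
PAGES = {
    "http://example.org/": (
        '<html><head><title>Home</title>'
        '<meta name="keywords" content="crawling,web">'
        '<meta name="description" content="Start page"></head>'
        '<body><a href="http://example.org/a">A</a></body></html>'
    ),
    "http://example.org/a": (
        '<html><head><title>Page A</title></head>'
        '<body><a href="http://example.org/">back</a></body></html>'
    ),
}

class PageParser(HTMLParser):
    """Collects out-links plus title/keywords/description metadata."""
    def __init__(self):
        super().__init__()
        self.links = []
        self.meta = {"title": ""}
        self._in_title = False

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "a" and "href" in attrs:
            self.links.append(attrs["href"])
        elif tag == "meta" and attrs.get("name") in ("keywords", "description"):
            self.meta[attrs["name"]] = attrs.get("content", "")
        elif tag == "title":
            self._in_title = True

    def handle_endtag(self, tag):
        if tag == "title":
            self._in_title = False

    def handle_data(self, data):
        if self._in_title:
            self.meta["title"] += data

def crawl(start_url):
    """Depth-first traversal from start_url; returns {url: metadata}."""
    visited, stack, results = set(), [start_url], {}
    while stack:
        url = stack.pop()
        if url in visited or url not in PAGES:
            continue  # skip already-seen or unknown ("external") links
        visited.add(url)
        parser = PageParser()
        parser.feed(PAGES[url])
        results[url] = parser.meta
        stack.extend(parser.links)  # deepest unvisited branch explored first
    return results

results = crawl("http://example.org/")
```

    A production crawler would add URL normalization, politeness delays and robots.txt handling; the stack-based loop above only demonstrates the depth-first link traversal and metadata extraction.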

  19. Linking shallow, Linking deep : how scientific intermediaries use the Web for their network of collaborators

    NARCIS (Netherlands)

    Vasileiadou, E.; Besselaar, van den P.

    2006-01-01

    In this paper we explore the possibility of using Web links to study collaborations between organisations, combining the results of qualitative analysis of interviews and quantitative analysis of linking patterns. We use case studies of scientific intermediaries, that is, organisations that mediate

  20. QBCov: A Linked Data interface for Discrete Global Grid Systems, a new approach to delivering coverage data on the web

    Science.gov (United States)

    Zhang, Z.; Toyer, S.; Brizhinev, D.; Ledger, M.; Taylor, K.; Purss, M. B. J.

    2016-12-01

    We are witnessing a rapid proliferation of geoscientific and geospatial data from an increasing variety of sensors and sensor networks. This data presents great opportunities to resolve cross-disciplinary problems. However, working with it often requires an understanding of file formats and protocols seldom used outside of scientific computing, potentially limiting the data's value to other disciplines. In this paper, we present a new approach to serving satellite coverage data on the web, which improves ease-of-access using the principles of linked data. Linked data adapts the concepts and protocols of the human-readable web to machine-readable data; the number of developers familiar with web technologies makes linked data a natural choice for bringing coverages to a wider audience. Our approach to using linked data also makes it possible to efficiently service high-level SPARQL queries: for example, "Retrieve all Landsat ETM+ observations of San Francisco between July and August 2016" can easily be encoded in a single query. We validate the new approach, which we call QBCov, with a reference implementation of the entire stack, including a simple web-based client for interacting with Landsat observations. In addition to demonstrating the utility of linked data for publishing coverages, we investigate the heretofore unexplored relationship between Discrete Global Grid Systems (DGGS) and linked data. Our conclusions are informed by the aforementioned reference implementation of QBCov, which is backed by a hierarchical file format designed around the rHEALPix DGGS. Not only does the choice of a DGGS-based representation provide an efficient mechanism for accessing large coverages at multiple scales, but the ability of DGGS to produce persistent, unique identifiers for spatial regions is especially valuable in a linked data context. This suggests that DGGS has an important role to play in creating sustainable and scalable linked data infrastructures. QBCov is being
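    The quoted request ("Retrieve all Landsat ETM+ observations of San Francisco between July and August 2016") could, under the linked-data approach described above, be phrased as a single SPARQL query. The sketch below builds such a query as a string; the `eo:` prefix, predicate names, and region literal are hypothetical stand-ins, since QBCov's actual vocabulary is not given in the abstract.

```python
# Hypothetical vocabulary: eo: prefix and predicates are invented stand-ins.
def landsat_query(region, start, end):
    """Build a single SPARQL query for Landsat ETM+ observations of a region."""
    return f"""
PREFIX eo:  <http://example.org/eo#>
PREFIX xsd: <http://www.w3.org/2001/XMLSchema#>
SELECT ?obs WHERE {{
  ?obs eo:sensor "Landsat ETM+" ;
       eo:coversRegion "{region}" ;
       eo:observationTime ?t .
  FILTER (?t >= "{start}"^^xsd:dateTime && ?t <= "{end}"^^xsd:dateTime)
}}"""

query = landsat_query("San Francisco",
                      "2016-07-01T00:00:00Z", "2016-08-31T23:59:59Z")
```

    In a real deployment the region would more likely be a set of DGGS cell identifiers rather than a place-name literal, which is exactly where the persistent spatial identifiers discussed in the abstract become useful.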

  1. The use of the World Wide Web by medical journals in 2003 and 2005: an observational study.

    Science.gov (United States)

    Schriger, David L; Ouk, Sripha; Altman, Douglas G

    2007-01-01

    The 2- to 6-page print journal article has been the standard for 200 years, yet this format severely limits the amount of detailed information that can be conveyed. The World Wide Web provides a low-cost option for posting extended text and supplementary information. It can also enhance the experience of journal editors, reviewers, readers, and authors through added functionality (e.g., online submission and peer review, post-publication critique, and e-mail notification of tables of contents). Our aim was to characterize ways that journals were using the World Wide Web in 2005 and to note changes since 2003. We analyzed the Web sites of 138 high-impact print journals in 3 ways. First, we compared the print and Web versions of March 2003 and 2005 issues of 28 journals (20 of which were randomly selected from the 138) to determine how often articles were published Web only and how often print articles were augmented by Web-only supplements. Second, we examined what functions were offered by each journal Web site. Third, for journals that offered Web pages for reader commentary about each article, we analyzed the number of comments and characterized these comments. Fifty-six articles (7%) in 5 journals were Web only. Thirteen of the 28 journals had no supplementary online content. By 2005, several journals were including Web-only supplements in >20% of their papers. Supplementary methods, tables, and figures predominated. The use of supplementary material increased from 2% to 7% in the 20-journal random sample from 2003 to 2005. Web sites had similar functionality with an emphasis on linking each article to related material and e-mailing readers about activity related to each article. There was little evidence of journals using the Web to provide readers an interactive experience with the data or with each other. Seventeen of the 138 journals offered rapid-response pages. Only 18% of eligible articles had any comments after 5 months. Journal Web sites offer similar

  2. Promoting Your Web Site.

    Science.gov (United States)

    Raeder, Aggi

    1997-01-01

    Discussion of ways to promote sites on the World Wide Web focuses on how search engines work and how they retrieve and identify sites. Appropriate Web links for submitting new sites and for Internet marketing are included. (LRW)

  3. GLIDERS - A web-based search engine for genome-wide linkage disequilibrium between HapMap SNPs

    Directory of Open Access Journals (Sweden)

    Broxholme John

    2009-10-01

    Full Text Available Abstract Background A number of tools for the examination of linkage disequilibrium (LD) patterns between nearby alleles exist, but none are available for quickly and easily investigating LD at longer ranges (>500 kb). We have developed a web-based query tool (GLIDERS: Genome-wide LInkage DisEquilibrium Repository and Search engine) that enables the retrieval of pairwise associations with r2 ≥ 0.3 across the human genome for any SNP genotyped within HapMap phase 2 and 3, regardless of distance between the markers. Description GLIDERS is an easy-to-use web tool that only requires the user to enter rs numbers of SNPs they want to retrieve genome-wide LD for (both nearby and long-range). The intuitive web interface handles both manual entry of SNP IDs as well as allowing users to upload files of SNP IDs. The user can limit the resulting inter-SNP associations with easy-to-use menu options. These include MAF limit (5-45%), distance limits between SNPs (minimum and maximum), r2 (0.3 to 1), HapMap population sample (CEU, YRI and JPT+CHB combined), and HapMap build/release. All resulting genome-wide inter-SNP associations are displayed on a single output page, which has a link to a downloadable tab-delimited text file. Conclusion GLIDERS is a quick and easy way to retrieve genome-wide inter-SNP associations and to explore LD patterns for any number of SNPs of interest. GLIDERS can be useful in identifying SNPs with long-range LD. This can highlight mis-mapping or other potential association signal localisation problems.
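    For context, the r2 statistic that GLIDERS thresholds at 0.3 is the standard squared-correlation measure of linkage disequilibrium between two biallelic loci. A minimal sketch of its computation from phased two-SNP haplotype counts follows; the counts are invented for illustration, and GLIDERS itself serves precomputed HapMap values rather than raw counts.

```python
# r2 linkage disequilibrium from phased haplotype counts n_AB, n_Ab, n_aB, n_ab.
def r_squared(n_AB, n_Ab, n_aB, n_ab):
    n = n_AB + n_Ab + n_aB + n_ab
    p_A = (n_AB + n_Ab) / n          # frequency of allele A at the first SNP
    p_B = (n_AB + n_aB) / n          # frequency of allele B at the second SNP
    D = n_AB / n - p_A * p_B         # disequilibrium coefficient
    return D * D / (p_A * (1 - p_A) * p_B * (1 - p_B))

perfect = r_squared(50, 0, 0, 50)    # alleles perfectly correlated -> 1.0
none = r_squared(25, 25, 25, 25)     # alleles independent -> 0.0
```

    A pair like (40, 10, 10, 40) gives r2 = 0.36 and would therefore pass the GLIDERS r2 ≥ 0.3 retrieval cutoff.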

  4. A decade of Web Server updates at the Bioinformatics Links Directory: 2003-2012.

    Science.gov (United States)

    Brazas, Michelle D; Yim, David; Yeung, Winston; Ouellette, B F Francis

    2012-07-01

    The 2012 Bioinformatics Links Directory update marks the 10th special Web Server issue from Nucleic Acids Research. Beginning with content from their 2003 publication, the Bioinformatics Links Directory in collaboration with Nucleic Acids Research has compiled and published a comprehensive list of freely accessible, online tools, databases and resource materials for the bioinformatics and life science research communities. The past decade has exhibited significant growth and change in the types of tools, databases and resources being put forth, reflecting both technology changes and the nature of research over that time. With the addition of 90 web server tools and 12 updates from the July 2012 Web Server issue of Nucleic Acids Research, the Bioinformatics Links Directory at http://bioinformatics.ca/links_directory/ now contains an impressive 134 resources, 455 databases and 1205 web server tools, mirroring the continued activity and efforts of our field.

  5. Exploring Geology on the World-Wide Web--Volcanoes and Volcanism.

    Science.gov (United States)

    Schimmrich, Steven Henry; Gore, Pamela J. W.

    1996-01-01

    Focuses on sites on the World Wide Web that offer information about volcanoes. Web sites are classified into areas of Global Volcano Information, Volcanoes in Hawaii, Volcanoes in Alaska, Volcanoes in the Cascades, European and Icelandic Volcanoes, Extraterrestrial Volcanism, Volcanic Ash and Weather, and Volcano Resource Directories. Suggestions…

  6. Role of Librarian in Internet and World Wide Web Environment

    OpenAIRE

    K. Nageswara Rao; KH Babu

    2001-01-01

    The transition of traditional library collections to digital or virtual collections presented the librarian with new opportunities. The Internet, Web environment and associated sophisticated tools have given the librarian a new dynamic role to play and serve the new information-based society in better ways than hitherto. Because of the powerful features of the Web, i.e. its distributed, heterogeneous, collaborative, multimedia, multi-protocol, hypermedia-oriented architecture, the World Wide Web has re...

  7. Design of an Interface for Page Rank Calculation using Web Link Attributes Information

    Directory of Open Access Journals (Sweden)

    Jeyalatha SIVARAMAKRISHNAN

    2010-01-01

    Full Text Available This paper deals with the Web Structure Mining and the different Structure Mining Algorithms like Page Rank, HITS, Trust Rank and Sel-HITS. The functioning of these algorithms are discussed. An incremental algorithm for calculation of PageRank using an interface has been formulated. This algorithm makes use of Web Link Attributes Information as key parameters and has been implemented using Visibility and Position of a Link. The application of Web Structure Mining Algorithm in an Academic Search Application has been discussed. The present work can be a useful input to Web Users, Faculty, Students and Web Administrators in a University Environment.
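    The PageRank component of such an interface can be illustrated with a short power-iteration sketch. The paper's incremental algorithm and its exact use of visibility and position are not reproduced here; instead, per-link weights stand in for link-attribute information (a prominently placed link might carry a larger weight), and the example graph is invented.

```python
# Weighted PageRank by power iteration; weights are illustrative stand-ins
# for link attributes such as visibility and position.
def pagerank(weighted_links, damping=0.85, iterations=50):
    """weighted_links: {page: {target: weight}} -> {page: rank}."""
    pages = set(weighted_links)
    for out in weighted_links.values():
        pages |= set(out)
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iterations):
        new = {p: (1.0 - damping) / len(pages) for p in pages}
        for page, out in weighted_links.items():
            total = sum(out.values())
            for target, weight in out.items():
                # Each page splits its rank over out-links, proportionally to weight.
                new[target] += damping * rank[page] * weight / total
        rank = new
    return rank

links = {
    "A": {"B": 2.0, "C": 1.0},  # the link to B is more visible than the one to C
    "B": {"C": 1.0},
    "C": {"A": 1.0},
}
ranks = pagerank(links)
```

    In this toy graph every page has out-links; a fuller treatment would also redistribute the rank of dangling pages, as the standard formulation does.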

  8. Interactivity, Information Processing, and Learning on the World Wide Web.

    Science.gov (United States)

    Tremayne, Mark; Dunwoody, Sharon

    2001-01-01

    Examines the role of interactivity in the presentation of science news on the World Wide Web. Proposes and tests a model of interactive information processing that suggests that characteristics of users and Web sites influence interactivity, which influences knowledge acquisition. Describes use of a think-aloud method to study participants' mental…

  9. Increasing efficiency of information dissemination and collection through the World Wide Web

    Science.gov (United States)

    Daniel P. Huebner; Malchus B. Baker; Peter F. Ffolliott

    2000-01-01

    Researchers, managers, and educators have access to revolutionary technology for information transfer through the World Wide Web (Web). Using the Web to effectively gather and distribute information is addressed in this paper. Tools, tips, and strategies are discussed. Companion Web sites are provided to guide users in selecting the most appropriate tool for searching...

  10. Introduction to the World Wide Web and Mosaic

    Science.gov (United States)

    Youngblood, Jim

    1994-01-01

    This tutorial provides an introduction to some of the terminology related to the use of the World Wide Web and Mosaic. It is assumed that the user has some prior computer experience. References are included to other sources of additional information.

  11. Grid-optimized Web 3D applications on wide area network

    Science.gov (United States)

    Wang, Frank; Helian, Na; Meng, Lingkui; Wu, Sining; Zhang, Wen; Guo, Yike; Parker, Michael Andrew

    2008-08-01

    Geographical information systems have now entered the Web Service era. In this paper, Web3D applications have been developed based on our GridJet platform, which provides a more effective solution for sharing massive 3D geo-datasets in distributed environments. Web3D services enable web users to access services such as 3D scenes, virtual geographical environments and so on. However, Web3D services must be shared by thousands of users who are inherently distributed across different geographic locations. Large 3D geo-datasets need to be transferred to distributed clients via conventional HTTP, NFS and FTP protocols, which often entails long waits and frustration in distributed wide-area network environments. GridJet was used as the underlying engine between the Web 3D application node and the geo-data server; it utilizes a wide range of technologies, including parallelized remote file access, is a WAN/Grid-optimized protocol, and provides "local-like" access to remote 3D geo-datasets. No change in the way the software is used is required, since the multi-streamed GridJet protocol remains fully compatible with existing IP infrastructures. Our recent progress includes a real-world test in which Web3D applications such as Google Earth over the GridJet protocol beat those over the classic protocols by a factor of 2-7 where the transfer distance is over 10,000 km.

  12. Service Learning and Building Community with the World Wide Web

    Science.gov (United States)

    Longan, Michael W.

    2007-01-01

    The geography education literature touts the World Wide Web (Web) as a revolutionary educational tool, yet most accounts ignore its uses for public communication and creative expression. This article argues that students can be producers of content that is of service to local audiences. Drawing inspiration from the community networking movement,…

  13. Perspectives for Electronic Books in the World Wide Web Age.

    Science.gov (United States)

    Bry, Francois; Kraus, Michael

    2002-01-01

    Discusses the rapid growth of the World Wide Web and the lack of use of electronic books and suggests that specialized contents and device independence can make Web-based books compete with print. Topics include enhancing the hypertext model of XML; client-side adaptation, including browsers and navigation; and semantic modeling. (Author/LRW)

  14. Remote sensing education and Internet/World Wide Web technology

    Science.gov (United States)

    Griffith, J.A.; Egbert, S.L.

    2001-01-01

    Remote sensing education is increasingly in demand across academic and professional disciplines. Meanwhile, Internet technology and the World Wide Web (WWW) are being more frequently employed as teaching tools in remote sensing and other disciplines. The current wealth of information on the Internet and World Wide Web must be distilled, nonetheless, to be useful in remote sensing education. An extensive literature base is developing on the WWW as a tool in education and in teaching remote sensing. This literature reveals benefits and limitations of the WWW, and can guide its implementation. Among the most beneficial aspects of the Web are increased access to remote sensing expertise regardless of geographic location, increased access to current material, and access to extensive archives of satellite imagery and aerial photography. As with other teaching innovations, using the WWW/Internet may well mean more work, not less, for teachers, at least at the stage of early adoption. Also, information posted on Web sites is not always accurate. Development stages of this technology range from on-line posting of syllabi and lecture notes to on-line laboratory exercises and animated landscape flyovers and on-line image processing. The advantages of WWW/Internet technology may likely outweigh the costs of implementing it as a teaching tool.

  15. Radiation protection and environmental radioactivity. A voyage to the World Wide Web for beginners; Strahlenschutz und Umweltradioaktivitaet im Internet. Eine Reise in das World Wide Web fuer Anfaenger

    Energy Technology Data Exchange (ETDEWEB)

    Weimer, S [Landesanstalt fuer Umweltschutz Baden-Wuerttemberg, Referat 'Umweltradioaktivitaet, Strahlenschutz' (Germany)

    1998-07-01

    In step with the enormous growth of the Internet service 'World Wide Web', the number of web sites related to radiation protection has also grown strongly. An introduction is given to some practical basics of the WWW. The structure of WWW addresses and navigating through the web with hyperlinks are explained. Further, some search engines are presented. The paper lists a number of WWW addresses of interesting sites with radiological protection information. (orig.)

  16. World wide developments in shortwall and wide web mining techniques

    Energy Technology Data Exchange (ETDEWEB)

    Pollard, T

    1975-11-01

    The paper describes the progress to date with continuous pillar extraction, and how the typical longwall powered support has been modified to be both strong enough and stable enough to provide roof support for very wide webs. It also describes the operating systems which have been specially designed. The next stages of development are discussed, particularly the provision of continuous conveyor haulage in place of the present-day shuttle car. The author suggests that marrying American coal-getting technology and British roof support technology might increase productivity.

  17. The world wide web: exploring a new advertising environment.

    Science.gov (United States)

    Johnson, C R; Neath, I

    1999-01-01

    The World Wide Web currently boasts millions of users in the United States alone and is likely to continue to expand both as a marketplace and as an advertising environment. Three experiments explored advertising in the Web environment, in particular memory for ads as they appear in everyday use across the Web. Experiments 1 and 2 examined the effect of advertising repetition on the retention of familiar and less familiar brand names, respectively. Experiment 1 demonstrated that repetition of a banner ad within multiple web pages can improve recall of familiar brand names, and Experiment 2 demonstrated that repetition can improve recognition of less familiar brand names. Experiment 3 directly compared the retention of familiar and less familiar brand names that were promoted by static and dynamic ads and demonstrated that the use of dynamic advertising can increase brand name recall, though only for familiar brand names. This study also demonstrated that, in the Web environment, much as in other advertising environments, familiar brand names possess a mnemonic advantage not possessed by less familiar brand names. Finally, data regarding Web usage gathered from all experiments confirm reports that Web usage among males tends to exceed that among females.

  18. Information on infantile colic on the World Wide Web.

    Science.gov (United States)

    Bailey, Shana D; D'Auria, Jennifer P; Haushalter, Jamie P

    2013-01-01

    The purpose of this study was to explore and describe the type and quality of information on infantile colic that a parent might access on the World Wide Web. Two checklists were used to evaluate the quality indicators of 24 Web sites and the colic-specific content. Fifteen health information Web sites met more of the quality parameters than the nine commercial sites. Eight Web sites included information about colic and infant abuse, with six being health information sites. The colic-specific content on 24 Web sites reflected current issues and controversies; however, the completeness of the information in light of current evidence varied among the Web sites. Strategies to avoid complications of parental stress or infant abuse were not commonly found on the Web sites. Pediatric professionals must guide parents to reliable colic resources that also include emotional support and understanding of infant crying. A best evidence guideline for the United States would eliminate confusion and uncertainty about which colic therapies are safe and effective for parents and professionals. Copyright © 2013 National Association of Pediatric Nurse Practitioners. Published by Mosby, Inc. All rights reserved.

  19. Medical mentoring via the evolving world wide web.

    Science.gov (United States)

    Jaffer, Usman; Vaughan-Huxley, Eyston; Standfield, Nigel; John, Nigel W

    2013-01-01

    Mentoring, for physicians and surgeons in training, is advocated as an essential adjunct in work-based learning, providing support in career and non-career related issues. The World Wide Web (WWW) has evolved, as a technology, to become more interactive and person centric, tailoring itself to the individual needs of the user. This changing technology may open new avenues to foster mentoring in medicine. DESIGN, SYSTEMATIC REVIEW, MAIN OUTCOME MEASURES: A search of the MEDLINE database from 1950 to 2012 using the PubMed interface, combined with manual cross-referencing was performed using the following strategy: ("mentors"[MeSH Terms] OR "mentors"[All Fields] OR "mentor"[All Fields]) AND ("internet"[MeSH Terms] OR "internet"[All Fields]) AND ("medicine"[MeSH Terms] OR "medicine"[All Fields]) AND ("humans"[MeSH Terms] AND English[lang]). Abstracts were screened for relevance (UJ) to the topic; eligibility for inclusion was simply on screening for relevance to online mentoring and web-based technologies. Forty-five papers were found, of which 16 were relevant. All studies were observational in nature. To date, all medical mentoring applications utilizing the World Wide Web have enjoyed some success limited by Web 1.0 and 2.0 technologies. With the evolution of the WWW through 1.0, 2.0 and 3.0 generations, the potential for meaningful tele- and distance mentoring has greatly improved. Some engagement has been made with these technological advancements, however further work is required to fully realize the potential of these technologies. Copyright © 2012 Association of Program Directors in Surgery. Published by Elsevier Inc. All rights reserved.

  20. Integrating Mathematics, Science, and Language Arts Instruction Using the World Wide Web.

    Science.gov (United States)

    Clark, Kenneth; Hosticka, Alice; Kent, Judi; Browne, Ron

    1998-01-01

    Addresses issues of access to World Wide Web sites, mathematics and science content-resources available on the Web, and methods for integrating mathematics, science, and language arts instruction. (Author/ASK)

  1. Basic support for cooperative work on the World Wide Web

    NARCIS (Netherlands)

    Bentley, R.; Appelt, W.; Busbach, U.; Hinrichs, E.; Kerr, D.; Sikkel, Nicolaas; Trevor, J.; Woetzel, G.

    The emergence and widespread adoption of the World Wide Web offer a great deal of potential in supporting cross-platform cooperative work within widely dispersed working groups. The Basic Support for Cooperative Work (BSCW) project at GMD is attempting to realize this potential through development

  2. Advanced use of World-Wide Web in the online system of DELPHI

    International Nuclear Information System (INIS)

    Doenszelmann, M.; Carvalho, D.; Du, S.; Tennebo, F.

    1996-01-01

    The World-Wide Web technologies used by the DELPHI experiment at CERN provide easy access to information from the On-line System. WWW technology on both the client and server side is used in five different projects. The World-Wide Web has advantages concerning network technology, a practical user interface and scalability. However, it also demands a stateless protocol and format negotiation. (author)

  3. Meeting the challenge of finding resources for ophthalmic nurses on the World Wide Web.

    Science.gov (United States)

    Duffel, P G

    1998-12-01

    The World Wide Web ("the Web") is a macrocosm of resources that can be overwhelming. Often the sheer volume of material available causes one to give up in despair before finding information of any use. The Web is such a popular resource that it cannot be ignored. Two of the biggest challenges to finding good information on the Web are knowing where to start and judging whether the information gathered is pertinent and credible. This article addresses these two challenges and introduces the reader to a variety of ophthalmology and vision science resources on the World Wide Web.

  4. How Commercial Banks Use the World Wide Web: A Content Analysis.

    Science.gov (United States)

    Leovic, Lydia K.

    New telecommunications vehicles expand the possible ways that business is conducted. The hypermedia portion of the Internet, the World Wide Web, is such a telecommunications device. The Web is presently one of the most flexible and dynamic methods for electronic information dissemination. The level of technological sophistication necessary to…

  5. 40 CFR 63.825 - Standards: Product and packaging rotogravure and wide-web flexographic printing.

    Science.gov (United States)

    2010-07-01

    ... POLLUTANTS FOR SOURCE CATEGORIES (CONTINUED) National Emission Standards for the Printing and Publishing Industry § 63.825 Standards: Product and packaging rotogravure and wide-web flexographic printing. (a) Each... rotogravure and wide-web flexographic printing. 63.825 Section 63.825 Protection of Environment ENVIRONMENTAL...

  6. Lithuanian on-line periodicals on the World Wide Web

    Directory of Open Access Journals (Sweden)

    Lina Sarlauskiene

    2001-01-01

    Full Text Available Deals with Lithuanian full-text electronic periodicals distributed through the World Wide Web. An electronic periodical is usually defined as a regular publication on some particular topic distributed in digital form, chiefly through the Web, but also by electronic mail or digital disk. The author has surveyed 106 publications. Thirty-four are distributed only on the Web, and 72 have printed versions. The number of analysed publications is not very big, but four years of electronic publishing and the variety of periodicals enables us to establish the causes of this phenomenon, the main features of development, and some perspectives. Electronic periodicals were analysed according to their type, purpose, contents, publisher, regularity, language, starting date and place of publication, and other features.

  7. Tim Berners-Lee: inventor de la World Wide Web

    OpenAIRE

    Universidad de Granada. Biblioteca

    2015-01-01

    This catalogue contains the exhibition organized by the Library of the ETSIIT of the University of Granada during the months of November-December 2015, entitled "Tim Berners-Lee: inventor de la World Wide Web".

  8. Consécration pour les Inventeurs du World-Wide Web

    CERN Multimedia

    CERN Press Office. Geneva

    1996-01-01

    Nearly seven years after it was invented at CERN, the World-Wide Web has woven its way into every corner of the Internet. On Saturday, 17 February, the inventors of the Web, Tim Berners-Lee, now at the Massachusetts Institute of Technology (MIT), and Robert Cailliau of CERN's Electronics and Computing for Physics (ECP) Division, will be honoured with one of computing's highest distinctions: the Association for Computing Machinery (ACM) Software System Award 1995.

  9. WEB-DL endovascular treatment of wide-neck bifurcation aneurysms

    DEFF Research Database (Denmark)

    Lubicz, B; Klisch, J; Gauvrit, J-Y

    2014-01-01

    BACKGROUND AND PURPOSE: Flow disruption with the WEB-DL device has been used safely for the treatment of wide-neck bifurcation aneurysms, but the stability of aneurysm occlusion after this treatment is unknown. This retrospective multicenter European study analyzed short- and midterm data...... in patients treated with WEB-DL. MATERIALS AND METHODS: Twelve European neurointerventional centers participated in the study. Clinical data and pre- and postoperative short- and midterm images were collected. An experienced interventional neuroradiologist independently analyzed the images. Aneurysm occlusion...... was classified into 4 grades: complete occlusion, opacification of the proximal recess of the device, neck remnant, and aneurysm remnant. RESULTS: Forty-five patients (34 women and 11 men) 35-74 years of age (mean, 56.3 ± 9.6 years) with 45 aneurysms treated with the WEB device were included. Aneurysm locations...

  10. Role of Librarian in Internet and World Wide Web Environment

    Directory of Open Access Journals (Sweden)

    K. Nageswara Rao

    2001-01-01

    Full Text Available The transition of traditional library collections to digital or virtual collections presented the librarian with new opportunities. The Internet, Web environment and associated sophisticated tools have given the librarian a new dynamic role to play and serve the new information based society in better ways than hitherto. Because of the powerful features of Web i.e. distributed, heterogeneous, collaborative, multimedia, multi-protocol, hypermedia-oriented architecture, World Wide Web has revolutionized the way people access information, and has opened up new possibilities in areas such as digital libraries, virtual libraries, scientific information retrieval and dissemination. Not only the world is becoming interconnected, but also the use of Internet and Web has changed the fundamental roles, paradigms, and organizational culture of libraries and librarians as well. The article describes the limitless scope of Internet and Web, the existence of the librarian in the changing environment, parallelism between information science and information technology, librarians and intelligent agents, working of intelligent agents, strengths, weaknesses, threats and opportunities involved in the relationship between librarians and the Web. The role of librarian in Internet and Web environment especially as intermediary, facilitator, end-user trainer, Web site builder, researcher, interface designer, knowledge manager and sifter of information resources is also described.

  11. Executing SPARQL Queries over the Web of Linked Data

    Science.gov (United States)

    Hartig, Olaf; Bizer, Christian; Freytag, Johann-Christoph

    The Web of Linked Data forms a single, globally distributed dataspace. Due to the openness of this dataspace, it is not possible to know in advance all data sources that might be relevant for query answering. This openness poses a new challenge that is not addressed by traditional research on federated query processing. In this paper we present an approach to execute SPARQL queries over the Web of Linked Data. The main idea of our approach is to discover data that might be relevant for answering a query during the query execution itself. This discovery is driven by following RDF links between data sources based on URIs in the query and in partial results. The URIs are resolved over the HTTP protocol into RDF data which is continuously added to the queried dataset. This paper describes concepts and algorithms to implement our approach using an iterator-based pipeline. We introduce a formalization of the pipelining approach and show that classical iterators may cause blocking due to the latency of HTTP requests. To avoid blocking, we propose an extension of the iterator paradigm. The evaluation of our approach shows its strengths as well as the still existing challenges.
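    The link-traversal idea in this abstract, discovering relevant data during query execution by dereferencing URIs found in the query and in partial results, can be illustrated with a small stdlib-only Python sketch. This is not the authors' iterator pipeline: the "Web" is simulated by a dict that stands in for HTTP dereferencing, and all URIs and triples are hypothetical.

```python
# Simulated Web of Linked Data: dereferencing a URI yields RDF triples.
# All URIs and data here are made up for illustration.
WEB = {
    "http://ex.org/alice": [
        ("http://ex.org/alice", "knows", "http://ex.org/bob"),
    ],
    "http://ex.org/bob": [
        ("http://ex.org/bob", "name", '"Bob"'),
        ("http://ex.org/bob", "knows", "http://ex.org/carol"),
    ],
    "http://ex.org/carol": [
        ("http://ex.org/carol", "name", '"Carol"'),
    ],
}

def traverse_and_match(seed_uri, predicate):
    """Follow RDF links from seed_uri, continuously growing the queried
    dataset, and return (subject, object) pairs matching the predicate."""
    dataset, seen, frontier = [], set(), [seed_uri]
    while frontier:
        uri = frontier.pop()
        if uri in seen:
            continue
        seen.add(uri)
        for triple in WEB.get(uri, []):          # stands in for an HTTP GET
            dataset.append(triple)
            for term in (triple[0], triple[2]):  # discover new URIs to follow
                if term.startswith("http://") and term not in seen:
                    frontier.append(term)
    return [(s, o) for s, p, o in dataset if p == predicate]

names = traverse_and_match("http://ex.org/alice", "name")
```

A real implementation would resolve URIs over HTTP, parse the returned RDF, and interleave traversal with pattern matching in an iterator pipeline rather than collecting the whole dataset first.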

  12. Multi-dimensional effects of color on the world wide web

    Science.gov (United States)

    Morton, Jill

    2002-06-01

    Color is the most powerful building material of visual imagery on the World Wide Web. It must function successfully as it has done historically in traditional two-dimensional media, as well as address new challenges presented by this electronic medium. The psychological, physiological, technical and aesthetic effects of color have been redefined by the unique requirements of the electronic transmission of text and images on the Web. Color simultaneously addresses each of these dimensions in this electronic medium.

  13. "Linked data" – dados interligados – e interoperabilidade entre arquivos, bibliotecas e museus na web

    Directory of Open Access Journals (Sweden)

    Carlos Henrique Marcondes

    2012-01-01

    Full Text Available Web catalogues in archive, library and museum systems are today closed informational resources that use their own technologies, standards and interfaces, and do not allow navigation across different resources within the catalogues and vice versa. Linked Data technologies, part of the Semantic Web proposal, offer the possibility of interlinking informational resources on the Web through semantic links, allowing users natural and intuitive navigation by following those links across the resources, independently of specific query interfaces. The objective of this article is to identify and discuss the potential offered by Semantic Web technologies, in particular Linked Open Data, for archives, libraries and museums to make their collections available and interoperable on the Web. A qualitative, state-of-the-art survey methodology is employed, based on bibliographic research, visits to sites of interest, and analysis of the material collected.

  14. LinkED: A Novel Methodology for Publishing Linked Enterprise Data

    Directory of Open Access Journals (Sweden)

    Shreyas Suresh Rao

    2017-01-01

    Full Text Available Semantic Web technologies have redefined and strengthened the Enterprise-Web interoperability over the last decade. Linked Open Data (LOD) refers to a set of best practices that empower enterprises to publish and interlink their data using existing ontologies on the World Wide Web. Current research in LOD focuses on expert search, the creation of unified information space and augmentation of core data from an enterprise context. However, existing approaches for publication of enterprise data as LOD are domain-specific, ad-hoc and suffer from lack of uniform representation across domains. The paper proposes a novel methodology called LinkED that contributes towards LOD literature in two ways: (a) it streamlines the publishing process through five stages of cleaning, triplification, interlinking, storage and visualization; (b) it addresses the latest challenges in LOD publication, namely inadequate links, inconsistencies in the quality of the dataset and replicability of the LOD publication process. Further, the methodology is demonstrated via the publication of digital repository data as LOD in a university setting, which is evaluated based on two semantic standards: the Five-Star model and data quality metrics. Overall, the paper provides a generic LOD publication process that is applicable across various domains such as healthcare, e-governance, banking, and tourism, to name a few.

  15. Analisis Perbandingan Unjuk Kerja Sistem Penyeimbang Beban Web Server dengan HAProxy dan Pound Links

    Directory of Open Access Journals (Sweden)

    Dite Ardian

    2013-04-01

    Full Text Available The development of Internet technology has led many organizations to expand their services with a website. Initially a single web server, accessible to everyone over the Internet, is sufficient; but when very many users access it, the traffic overloads both the link to the web server and the web server itself. Optimization is therefore necessary to cope with the overload the web server receives when traffic is high. The methodology of this final-project research comprises a literature study, system design, and testing of the system, with reference material drawn from related books and several Internet sources. The design uses HAProxy and Pound Links for web server load balancing. The final stage of this research is testing the network system, so as to create a web server system that is reliable and safe. The result is a web server system that can be accessed rapidly by many users simultaneously, as the HAProxy and Pound Links load-balancing system, set up in front of the web servers, yields high performance and high availability.
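    The arrangement the abstract describes, a load balancer set up in front of several back-end web servers, can be sketched as a minimal HAProxy configuration. Server names and addresses below are hypothetical, and a real deployment would also tune timeouts and health-check parameters:

```haproxy
# Minimal HAProxy load-balancing sketch (hypothetical addresses).
frontend http_in
    bind *:80
    default_backend web_servers

backend web_servers
    balance roundrobin
    server web1 192.168.1.11:80 check
    server web2 192.168.1.12:80 check
```

With `balance roundrobin`, successive client requests alternate between `web1` and `web2`; the `check` keyword enables periodic health checks so a failed server is taken out of rotation.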

  16. Accessing NASA Technology with the World Wide Web

    Science.gov (United States)

    Nelson, Michael L.; Bianco, David J.

    1995-01-01

    NASA Langley Research Center (LaRC) began using the World Wide Web (WWW) in the summer of 1993, becoming the first NASA installation to provide a Center-wide home page. This coincided with a reorganization of LaRC to provide a more concentrated focus on technology transfer to both aerospace and non-aerospace industry. Use of WWW and NCSA Mosaic not only provides automated information dissemination, but also allows for the implementation, evolution and integration of many technology transfer and technology awareness applications. This paper describes several of these innovative applications, including the on-line presentation of the entire Technology OPportunities Showcase (TOPS), an industrial partnering showcase that exists on the Web long after the actual 3-day event ended. The NASA Technical Report Server (NTRS) provides uniform access to many logically similar, yet physically distributed NASA report servers. WWW is also the foundation of the Langley Software Server (LSS), an experimental software distribution system which will distribute LaRC-developed software. In addition to the more formal technology distribution projects, WWW has been successful in connecting people with technologies and people with other people.

  17. PENYEBARAN INFORMASI MENGGUNAKAN WWW (WORLD WIDE WEB

    Directory of Open Access Journals (Sweden)

    Ika Atman Satya

    2011-12-01

    Full Text Available Traditionally, we have known information media in the form of newspapers, television, radio and reference books. Disseminating information through these traditional media requires supporting infrastructure so that the information can be distributed widely. Alongside these traditional media, information dissemination using the Internet computer network is also growing. One approach is the WWW (World Wide Web) application, which has the ability to combine images, text and sound interactively. This article discusses the capabilities, use and development of WWW servers.

  18. Glue ear: how good is the information on the World Wide Web?

    Science.gov (United States)

    Ritchie, L; Tornari, C; Patel, P M; Lakhani, R

    2016-02-01

    This paper objectively evaluates current information available to the general public related to glue ear on the World Wide Web. The term 'glue ear' was typed into the 3 most frequently used internet search engines - Google, Bing and Yahoo - and the first 20 links were analysed. The first 400 words of each page were used to calculate the Flesch-Kincaid readability score. Each website was subsequently graded using the Discern instrument, which gauges quality and content of literature. The websites Webmd.boots.com, Bupa.co.uk and Patient.co.uk received the highest overall scores. These reflected top scores in either readability or Discern instrument assessment, but not both. Readability and Discern scores increased with the presence of a marketing or advertising incentive. The Patient.co.uk website had the highest Discern score and third highest readability score. There is huge variation in the quality of information available to patients on the internet. Some websites may be accessible to a wide range of reading ages but have poor quality content, and vice versa. Clinicians should be aware of indicators of quality, and use validated instruments to assess and recommend literature.
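    The Flesch reading-ease score used in this study is a fixed formula over word, sentence and syllable counts. The stdlib-only Python sketch below shows the calculation; its syllable counter is a crude vowel-group heuristic, so scores only approximate those of polished readability tools.

```python
import re

def count_syllables(word):
    # Crude heuristic: count runs of vowels as syllables,
    # never returning fewer than one.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_kincaid_grade(text):
    """Flesch-Kincaid grade level:
    0.39 * (words/sentences) + 11.8 * (syllables/words) - 15.59"""
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    n_words = max(1, len(words))
    syllables = sum(count_syllables(w) for w in words)
    return 0.39 * (n_words / sentences) + 11.8 * (syllables / n_words) - 15.59

grade = flesch_kincaid_grade("The cat sat on the mat. It was warm.")
```

Very simple text can yield a grade below zero, which is expected behaviour of the formula; studies like the one above typically apply it to the first few hundred words of each page.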

  19. Finding Web-Based Anxiety Interventions on the World Wide Web: A Scoping Review.

    Science.gov (United States)

    Ashford, Miriam Thiel; Olander, Ellinor K; Ayers, Susan

    2016-06-01

    One relatively new and increasingly popular approach of increasing access to treatment is Web-based intervention programs. The advantage of Web-based approaches is the accessibility, affordability, and anonymity of potentially evidence-based treatment. Despite much research evidence on the effectiveness of Web-based interventions for anxiety found in the literature, little is known about what is publicly available for potential consumers on the Web. Our aim was to explore what a consumer searching the Web for Web-based intervention options for anxiety-related issues might find. The objectives were to identify currently publicly available Web-based intervention programs for anxiety and to synthesize and review these in terms of (1) website characteristics such as credibility and accessibility; (2) intervention program characteristics such as intervention focus, design, and presentation modes; (3) therapeutic elements employed; and (4) published evidence of efficacy. Web keyword searches were carried out on three major search engines (Google, Bing, and Yahoo-UK platforms). For each search, the first 25 hyperlinks were screened for eligible programs. Included were programs that were designed for anxiety symptoms, currently publicly accessible on the Web, had an online component, a structured treatment plan, and were available in English. Data were extracted for website characteristics, program characteristics, therapeutic characteristics, as well as empirical evidence. Programs were also evaluated using a 16-point rating tool. The search resulted in 34 programs that were eligible for review. A wide variety of programs for anxiety, including specific anxiety disorders, and anxiety in combination with stress, depression, or anger were identified and based predominantly on cognitive behavioral therapy techniques. The majority of websites were rated as credible, secure, and free of advertisement. The majority required users to register and/or to pay a program access fee.

  20. Lexical Link Analysis Application: Improving Web Service to Acquisition Visibility Portal Phase III

    Science.gov (United States)

    2015-04-30

    Lexical Link Analysis Application: Improving Web Service to Acquisition Visibility Portal. Acquisition processes generate large volumes of textual data. Lexical Link Analysis (LLA) can help, by applying automation to reveal and depict, to decision-makers, the correlations, associations, and ...

  1. White Supremacists, Oppositional Culture and the World Wide Web

    Science.gov (United States)

    Adams, Josh; Roscigno, Vincent J.

    2005-01-01

    Over the previous decade, white supremacist organizations have tapped into the ever emerging possibilities offered by the World Wide Web. Drawing from prior sociological work that has examined this medium and its uses by white supremacist organizations, this article advances the understanding of recruitment, identity and action by providing a…

  2. Tracing agents and other automatic sampling procedures for the World Wide Web

    OpenAIRE

    Aguillo, Isidro F.

    1999-01-01

    Many of the search engines and recovery tools are not suitable for making samples of web resources for quantitative analysis. The increasing size of the web and its hypertextual nature offer opportunities for a novel approach. A new generation of recovery tools that trace hypertext links from selected sites is very promising, offering capabilities to automate tasks, extract large samples of high pertinence, deliver results ready to use in standard database formats, and select additional resour...

  3. Library OPACs on the Web: Finding and Describing Directories.

    Science.gov (United States)

    Henry, Marcia

    1997-01-01

    Provides current descriptions of some of the major directories that link to library catalogs on the World Wide Web. Highlights include LibWeb; Hytelnet; WebCats; WWW Library Directory; and techniques for finding new library OPAC (online public access catalog) directories. (LRW)

  4. Touring the Campus Library from the World Wide Web.

    Science.gov (United States)

    Mosley, Pixey Anne; Xiao, Daniel

    1996-01-01

    The philosophy, design, implementation and evaluation of a World Wide Web-accessible Virtual Library Tour of Texas A & M University's Evans Library is presented. Its design combined technical computer issues and library instruction expertise. The tour can be used to simulate a typical walking tour through the library or heading directly to a…

  5. Distributing Congestion Management System Information Using the World Wide Web

    Science.gov (United States)

    1997-01-01

    The Internet is a unique medium for the distribution of information, and it provides a tremendous opportunity to take advantage of peoples innate interest in transportation issues as they relate to their own lives. In particular, the World Wide Web (...

  6. Technical Evaluation Report 61: The World-Wide Inaccessible Web, Part 2: Internet routes

    Directory of Open Access Journals (Sweden)

    Jim Klaas

    2007-06-01

    Full Text Available In the previous report in this series, Web browser loading times were measured in 12 Asian countries, and were found to be up to four times slower than commonly prescribed as acceptable. Failure of webpages to load at all was frequent. The current follow-up study compares these loading times with the complexity of the Internet routes linking the Web users and the Web servers hosting them. The study was conducted in the same 12 Asian countries, with the assistance of members of the International Development Research Centre’s PANdora distance education research network. The data were generated by network members in Bhutan, Cambodia, India, Indonesia, Laos, Mongolia, the Philippines, Sri Lanka, Pakistan, Singapore, Thailand, and Vietnam. Additional data for the follow-up study were collected in China. Using a ‘traceroute’ routine, the study indicates that webpage loading time is linked to the complexity of the Internet routes between Web users and the host server. It is indicated that distance educators can apply such information in the design of improved online delivery and mirror sites, notably in areas of the developing world which currently lack an effective infrastructure for online education.

  7. The World-Wide Web past present and future, and its application to medicine

    CERN Document Server

    Sendall, D M

    1997-01-01

    The World-Wide Web was first developed as a tool for collaboration in the high energy physics community. From there it spread rapidly to other fields, and grew to its present impressive size. As an easy way to access information, it has been a great success, and a huge number of medical applications have taken advantage of it. But there is another side to the Web, its potential as a tool for collaboration between people. Medical examples include telemedicine and teaching. New technical developments offer still greater potential in medical and other fields. This paper gives some background to the early development of the World-Wide Web, a brief overview of its present state with some examples relevant to medicine, and a look at the future.

  8. Wikinews interviews World Wide Web co-inventor Robert Cailliau

    CERN Multimedia

    2007-01-01

    "The name Robert Cailliau may not ring a bell to the general public, but his invention is the reason why you are reading this: Dr. Cailliau together with his colleague Sir Tim Berners-Lee invented the World Wide Web, making the internet accessible so it could grow from an academic tool to a mass communication medium." (9 pages)

  9. Collaborative Information Agents on the World Wide Web

    Science.gov (United States)

    Chen, James R.; Mathe, Nathalie; Wolfe, Shawn; Koga, Dennis J. (Technical Monitor)

    1998-01-01

    In this paper, we present DIAMS, a system of distributed, collaborative information agents which help users access, collect, organize, and exchange information on the World Wide Web. Personal agents provide their owners dynamic displays of well organized information collections, as well as friendly information management utilities. Personal agents exchange information with one another. They also work with other types of information agents such as matchmakers and knowledge experts to facilitate collaboration and communication.

  10. The Land of Confusion? High School Students and Their Use of the World Wide Web for Research.

    Science.gov (United States)

    Lorenzen, Michael

    2002-01-01

    Examines high school students' use of the World Wide Web to complete assignments. Findings showed the students used a good variety of resources, including libraries and the World Wide Web, to find information for assignments. However, students were weak at determining the quality of the information found on web sites. Students did poorly at…

  11. Wood Utilization Research Dissemination on the World Wide Web: A Case Study

    Science.gov (United States)

    Daniel L. Schmoldt; Matthew F. Winn; Philip A. Araman

    1997-01-01

    Because many research products are informational rather than tangible, emerging information technologies, such as the multi-media format of the World Wide Web, provide an open and easily accessible mechanism for transferring research to user groups. We have found steady, increasing use of our Web site over the first 6-1/2 months of operation; almost one-third of the...

  12. Statistical Analysis with Webstat, a Java applet for the World Wide Web

    Directory of Open Access Journals (Sweden)

    Webster West

    1997-09-01

    Full Text Available The Java programming language has added a new tool for delivering computing applications over the World Wide Web (WWW. WebStat is a new computing environment for basic statistical analysis which is delivered in the form of a Java applet. Anyone with WWW access and a Java capable browser can access this new analysis environment. Along with an overall introduction of the environment, the main features of this package are illustrated, and the prospect of using basic WebStat components for more advanced applications is discussed.

  13. REPTREE CLASSIFIER FOR IDENTIFYING LINK SPAM IN WEB SEARCH ENGINES

    Directory of Open Access Journals (Sweden)

    S.K. Jayanthi

    2013-01-01

    Full Text Available Search engines are used for retrieving information from the web. Most of the time, importance is laid on the top 10 results (sometimes shrinking to the top 5) because of time constraints and reliance on the search engines: users believe that the top 10 or 5 of the total results are more relevant. Here arises the problem of spamdexing, a method of deceiving search result quality: falsified metrics, such as inserting enormous numbers of keywords or links into a website, may take that website into the top 10 or 5 positions. This paper proposes a classifier based on REPTree (a regression-tree representative). As an initial step, link-based features such as neighbors, PageRank, truncated PageRank, TrustRank and assortativity-related attributes are inferred. Based on these features, a tree is constructed; the tree uses the feature inference to differentiate spam sites from legitimate sites. The WEBSPAM-UK-2007 dataset is taken as a base; it is preprocessed and converted into five datasets, FEATA, FEATB, FEATC, FEATD and FEATE. Only link-based features are taken for the experiments, as this paper focuses on link spam alone. Finally, a representative tree is created which more precisely classifies the web spam entries, and results are given. Regression-tree classification seems to perform well, as shown through the experiments.
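    As a rough, stdlib-only illustration of the tree idea applied to link-based features (this is a single-split decision "stump", not the REPTree algorithm, and the feature values below are fabricated), an exhaustive search over one feature and one threshold can already separate toy spam hosts from legitimate ones:

```python
# (pagerank, trustrank) per host, with label 1 = spam, 0 = legitimate.
# Values are invented: spam hosts often inflate pagerank while keeping
# a low trustrank, which is what the stump should discover.
TRAIN = [
    ((0.9, 0.8), 0), ((0.7, 0.9), 0), ((0.8, 0.7), 0),
    ((0.6, 0.1), 1), ((0.9, 0.2), 1), ((0.5, 0.15), 1),
]

def best_stump(data):
    """Exhaustively pick the (feature_index, threshold) split that
    minimises training errors when predicting 'spam' below the threshold."""
    best = None
    for f in range(len(data[0][0])):
        for (x, _) in data:              # candidate thresholds = seen values
            t = x[f]
            errors = sum((x[f] < t) != bool(y) for x, y in data)
            if best is None or errors < best[0]:
                best = (errors, f, t)
    return best[1], best[2]

feature, threshold = best_stump(TRAIN)

def predict(x):
    return int(x[feature] < threshold)
```

On this toy data the stump selects the trustrank feature, matching the intuition that low-trust hosts are the spam candidates; a full regression tree recursively repeats such splits on each partition.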

  14. Histology on the World Wide Web: A Digest of Resources for Students and Teachers.

    Science.gov (United States)

    Cotter, John R.

    1997-01-01

    Provides a list of 37 World Wide Web sites that are devoted to instruction in histology and include electronic manuals, syllabi, atlases, image galleries, and quizzes. Reviews the topics, content, and highlights of these Web sites. (DDR)

  15. The NIF LinkOut broker: a web resource to facilitate federated data integration using NCBI identifiers.

    Science.gov (United States)

    Marenco, Luis; Ascoli, Giorgio A; Martone, Maryann E; Shepherd, Gordon M; Miller, Perry L

    2008-09-01

    This paper describes the NIF LinkOut Broker (NLB) that has been built as part of the Neuroscience Information Framework (NIF) project. The NLB is designed to coordinate the assembly of links to neuroscience information items (e.g., experimental data, knowledge bases, and software tools) that are (1) accessible via the Web, and (2) related to entries in the National Center for Biotechnology Information's (NCBI's) Entrez system. The NLB collects these links from each resource and passes them to the NCBI which incorporates them into its Entrez LinkOut service. In this way, an Entrez user looking at a specific Entrez entry can LinkOut directly to related neuroscience information. The information stored in the NLB can also be utilized in other ways. A second approach, which is operational on a pilot basis, is for the NLB Web server to create dynamically its own Web page of LinkOut links for each NCBI identifier in the NLB database. This approach can allow other resources (in addition to the NCBI Entrez) to LinkOut to related neuroscience information. The paper describes the current NLB system and discusses certain design issues that arose during its implementation.

  16. Gender Equity in Advertising on the World-Wide Web: Can it be Found?

    Science.gov (United States)

    Kramer, Kevin M.; Knupfer, Nancy Nelson

    Recent attention to gender equity in computer environments, as well as in print-based and televised advertising for technological products, suggests that gender bias in the computer environment continues. This study examined gender messages within World Wide Web advertisements, specifically the type and number of visual images used in Web banner…

  17. The online discourse on the Demjanjuk trial. New memory practices on the World Wide Web?

    Directory of Open Access Journals (Sweden)

    Vivien SOMMER

    2012-01-01

    Full Text Available In this article I want to discuss the question if and how the World Wide Web changes social memory practices. Therefore I examine the relationship between the World Wide Web, social memory practices and public discourses. Towards discussing mediated memory processes I focus on the online discourse about the trial against the former concentration camp guard John Demjanjuk.

  18. Infant Gastroesophageal Reflux Information on the World Wide Web.

    Science.gov (United States)

    Balgowan, Regina; Greer, Leah C; D'Auria, Jennifer P

    2016-01-01

    The purpose of this study was to describe the type and quality of health information about infant gastroesophageal reflux (GER) that a parent may find on the World Wide Web. The data collection tool included evaluation of Web site quality and infant GER-specific content on the 30 sites that met the inclusion criteria. The most commonly found content categories in order of frequency were management strategies, when to call a primary care provider, definition, and clinical features. The most frequently mentioned strategies included feeding changes, infant positioning, and medications. Thirteen of the 30 Web sites included information on both GER and gastroesophageal reflux disease. Mention of the use of medication to lessen infant symptoms was found on 15 of the 30 sites. Only 10 of the 30 sites included information about parent support and coping strategies. Pediatric nurse practitioners (PNPs) should utilize well-child visits to address the normalcy of physiologic infant GER and clarify any misperceptions parents may have about diagnosis and the role of medication from information they may have found on the Internet. It is critical for PNPs to assist in the development of Web sites with accurate content, advise parents on how to identify safe and reliable information, and provide examples of high-quality Web sites about child health topics such as infant GER. Copyright © 2016 National Association of Pediatric Nurse Practitioners. Published by Elsevier Inc. All rights reserved.

  19. Detection of spam web page using content and link-based techniques

    Indian Academy of Sciences (India)

    Spam pages are generally insufficient and inappropriate results for user. ... kinds of Web spamming techniques: Content spam and Link spam. 1. Content spam: The .... of the spam pages are machine generated and hence technique of ...

  20. The Relationship of the World Wide Web to Thinking Skills.

    Science.gov (United States)

    Bradshaw, Amy C.; Bishop, Jeanne L.; Gens, Linda S.; Miller, Sharla L.; Rogers, Martha A.

    2002-01-01

    Discusses use of the World Wide Web in education and its possibilities for developing higher order critical thinking skills to successfully deal with the demands of the future information society. Suggests that teachers need to provide learning environments that are learner-centered, authentic, problem-based, and collaborative. (Contains 61…

  1. Exploratory Analysis of the Effect of Consultants on the Use of World Wide Web Sites in SMEs

    Directory of Open Access Journals (Sweden)

    Sigi Goode

    2002-11-01

    Full Text Available There is little published research on the role of consultants in technology adoption. Given the increasing popularity of the World Wide Web in commercial environments and the number of consultants now offering web development services, some analysis into the effects of their engagement would be of benefit. In an extension of an ongoing study, an existing sample of 113 World Wide Web adopters was used to examine the nature of World Wide Web site use with respect to consultant and Internet Service Provider (ISP engagement. Analysis was also conducted into the use of consultants and ISPs as developers and maintainers of these sites. This preliminary research finds a number of interesting outcomes. No significant relationship is found between consultant or ISP engagement and World Wide Web site use, regardless of whether the consultant was engaged as site developer or site maintainer. The study raises a number of additional findings that are of interest but are not directly related to this study. These findings merit further research.

  2. El creador de World Wide Web gana premio Millennium de tecnologia

    CERN Multimedia

    Galan, J

    2004-01-01

    "El creador de la World Wide Web (WWW), el fisico britanico Tim Berners-Lee, gano hoy la primera edicion del Millennium Technology Prize, un galardon internacional creado por una fundacion finlandesa y dotado con un millon de euros" (1/2 page)

  3. Remote monitoring using technologies from the Internet and World Wide Web

    International Nuclear Information System (INIS)

    Puckett, J.M.; Burczyk, L.

    1997-01-01

    Recent developments in Internet technologies are changing and enhancing how one processes and exchanges information. These developments include software and hardware in support of multimedia applications on the World Wide Web. In this paper the authors describe these technologies as they have applied them to remote monitoring and show how they will allow the International Atomic Energy Agency to efficiently review and analyze remote monitoring data for verification of material movements. The authors have developed demonstration software that illustrates several safeguards data systems using the resources of the Internet and Web to access and review data. This Web demo allows the user to directly observe sensor data, to analyze simulated safeguards data, and to view simulated on-line inventory data. Future activities include addressing the technical and security issues associated with using the Web to interface with existing and planned monitoring systems at nuclear facilities. Some of these issues are authentication, encryption, transmission of large quantities of data, and data compression

  4. The Linking Probability of Deep Spider-Web Networks

    OpenAIRE

    Pippenger, Nicholas

    2005-01-01

    We consider crossbar switching networks with base $b$ (that is, constructed from $b\times b$ crossbar switches), scale $k$ (that is, with $b^k$ inputs, $b^k$ outputs and $b^k$ links between each consecutive pair of stages) and depth $l$ (that is, with $l$ stages). We assume that the crossbars are interconnected according to the spider-web pattern, whereby two diverging paths reconverge only after at least $k$ stages. We assume that each vertex is independently idle with probability $q$, the v...
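The abstract's parameters imply some simple component counts, which the sketch below computes. This is only arithmetic on the stated definitions (with $b^k$ links between stages and $b \times b$ crossbars, each stage holds $b^{k-1}$ crossbars); it is not the paper's linking-probability result, and the function name is invented here.

```python
def spider_web_dimensions(b: int, k: int, l: int) -> dict:
    """Component counts for a crossbar network of base b, scale k, depth l,
    as defined in the abstract above (illustrative sketch, not from the paper)."""
    inputs = b ** k                      # b^k inputs (and b^k outputs)
    links_per_gap = b ** k               # b^k links between consecutive stages
    crossbars_per_stage = b ** (k - 1)   # each b-by-b crossbar carries b links
    return {
        "inputs": inputs,
        "outputs": inputs,
        "links_per_gap": links_per_gap,
        "crossbars_per_stage": crossbars_per_stage,
        "total_crossbars": l * crossbars_per_stage,
    }

# Example: base 2, scale 3, depth 4 gives 8 inputs and 4 switches per stage.
print(spider_web_dimensions(2, 3, 4))
```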

  5. Health information seeking and the World Wide Web: an uncertainty management perspective.

    Science.gov (United States)

    Rains, Stephen A

    2014-01-01

    Uncertainty management theory was applied in the present study to offer one theoretical explanation for how individuals use the World Wide Web to acquire health information and to help better understand the implications of the Web for information seeking. The diversity of information sources available on the Web and potential to exert some control over the depth and breadth of one's information-acquisition effort is argued to facilitate uncertainty management. A total of 538 respondents completed a questionnaire about their uncertainty related to cancer prevention and information-seeking behavior. Consistent with study predictions, use of the Web for information seeking interacted with respondents' desired level of uncertainty to predict their actual level of uncertainty about cancer prevention. The results offer evidence that respondents who used the Web to search for cancer information were better able than were respondents who did not seek information to achieve a level of uncertainty commensurate with the level of uncertainty they desired.

  6. World Wide Web Homepages: An Examination of Content and Audience.

    Science.gov (United States)

    Reynolds, Betty; And Others

    This paper shows how the content of a World Wide Web page is selected and how an examination of the intended audience influences content. Examples from the New Mexico Tech (NMT) Library homepage show what sources are selected and what level of detail is appropriate for the intended audience. Six fundamental functions of libraries and information…

  7. Contemporary Approaches to Critical Thinking and the World Wide Web

    Science.gov (United States)

    Buffington, Melanie L.

    2007-01-01

    Teaching critical thinking skills is often endorsed as a means to help students develop their abilities to navigate the complex world in which people live and, in addition, as a way to help students succeed in school. Over the past few years, this author explored the idea of teaching critical thinking using the World Wide Web (WWW). She began…

  8. Autonomous Satellite Command and Control through the World Wide Web: Phase 3

    Science.gov (United States)

    Cantwell, Brian; Twiggs, Robert

    1998-01-01

    NASA's New Millennium Program (NMP) has identified a variety of revolutionary technologies that will support orders of magnitude improvements in the capabilities of spacecraft missions. This program's Autonomy team has focused on science and engineering automation technologies. In doing so, it has established a clear development roadmap specifying the experiments and demonstrations required to mature these technologies. The primary developmental thrusts of this roadmap are in the areas of remote agents, PI/operator interface, planning/scheduling, fault management, and smart execution architectures. Phases 1 and 2 of the ASSET Project (previously known as the WebSat project) have focused on establishing World Wide Web-based commanding and telemetry services as an advanced means of interfacing a spacecraft system with the PI and operators. Current automated capabilities include Web-based command submission, limited contact scheduling, command list generation and transfer to the ground station, spacecraft support for demonstration experiments, data transfer from the ground station back to the ASSET system, data archiving, and Web-based telemetry distribution. Phase 2 was finished in December 1996. During January-December 1997, work commenced on Phase 3 of the ASSET Project. Phase 3 is the subject of this report. This phase permitted SSDL and its project partners to expand the ASSET system in a variety of ways. These added capabilities included the advancement of ground station capabilities, the adaptation of spacecraft on-board software, and the expansion of capabilities of the ASSET management algorithms. 
Specific goals of Phase 3 were: (1) Extend Web-based goal-level commanding for both the payload PI and the spacecraft engineer; (2) Support prioritized handling of multiple PIs as well as associated payload experimenters; (3) Expand the number and types of experiments supported by the ASSET system and its associated spacecraft; (4) Implement more advanced resource

  9. Marketing and Selling CD-ROM Products on the World-Wide Web.

    Science.gov (United States)

    Walker, Becki

    1995-01-01

    Describes three companies' approaches to marketing and selling CD-ROM products on the World Wide Web. Benefits include low overhead for Internet-based sales, allowance for creativity, and ability to let customers preview products online. Discusses advertising, information delivery, content, information services, and security. (AEF)

  10. Semantic Advertising for Web 3.0

    Science.gov (United States)

    Thomas, Edward; Pan, Jeff Z.; Taylor, Stuart; Ren, Yuan; Jekjantuk, Nophadol; Zhao, Yuting

    Advertising on the World Wide Web is based around automatically matching web pages with appropriate advertisements, in the form of banner ads, interactive adverts, or text links. Traditionally this has been done by manual classification of pages, or more recently using information retrieval techniques to find the most important keywords from the page, and match these to keywords being used by adverts. In this paper, we propose a new model for online advertising, based around lightweight embedded semantics. This will improve the relevancy of adverts on the World Wide Web and help to kick-start the use of RDFa as a mechanism for adding lightweight semantic attributes to the Web. Furthermore, we propose a system architecture for the proposed new model, based on our scalable ontology reasoning infrastructure TrOWL.
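The record above proposes matching adverts against lightweight embedded semantics such as RDFa attributes. As a rough illustration of what "harvesting" such attributes involves, the sketch below pulls RDFa-style `property`/`content` pairs out of a page with Python's standard `html.parser`. The HTML snippet and vocabulary terms are invented for this example; this is not the paper's TrOWL-based architecture.

```python
from html.parser import HTMLParser

class RDFaHarvester(HTMLParser):
    """Collect (property, value) pairs from RDFa-style markup (sketch only)."""
    def __init__(self):
        super().__init__()
        self.triples = []     # (property, value) pairs found so far
        self._pending = None  # property whose value is the element's text

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if "property" in a:
            if "content" in a:            # value supplied inline
                self.triples.append((a["property"], a["content"]))
            else:                         # value is the element's text content
                self._pending = a["property"]

    def handle_data(self, data):
        if self._pending and data.strip():
            self.triples.append((self._pending, data.strip()))
            self._pending = None

# Invented example page carrying lightweight semantics an ad matcher could key on.
page = ('<div vocab="http://schema.org/">'
        '<span property="about">knee surgery</span>'
        '<meta property="audience" content="patients"/></div>')
h = RDFaHarvester()
h.feed(page)
print(h.triples)
```

An ad-selection engine could then match these property values against advertiser keywords instead of guessing from raw page text.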

  11. Radiation protection and environmental radioactivity. A voyage to the World Wide Web for beginners

    International Nuclear Information System (INIS)

    Weimer, S.

    1998-01-01

    Owing to the enormous growth of the Internet service 'World Wide Web', there has also been a big growth in the number of web sites connected with radiation protection. An introduction is given to some practical basics of the WWW. The structure of WWW addresses and navigating through the web with hyperlinks are explained. Further, some search engines are presented. The paper lists a number of WWW addresses of interesting sites with radiological protection information. (orig.) [de

  12. Quality analysis of patient information about knee arthroscopy on the World Wide Web.

    Science.gov (United States)

    Sambandam, Senthil Nathan; Ramasamy, Vijayaraj; Priyanka, Priyanka; Ilango, Balakrishnan

    2007-05-01

    This study was designed to ascertain the quality of patient information available on the World Wide Web on the topic of knee arthroscopy. For the purpose of quality analysis, we used a pool of 232 search results obtained from 7 different search engines. We used a modified assessment questionnaire to assess the quality of these Web sites. This questionnaire was developed based on similar studies evaluating Web site quality and includes items on illustrations, accessibility, availability, accountability, and content of the Web site. We also compared results obtained with different search engines and tried to establish the best possible search strategy to attain the most relevant, authentic, and adequate information with minimum time consumption. For this purpose, we first compared 100 search results from the single most commonly used search engine (AltaVista) with the pooled sample containing 20 search results from each of the 7 different search engines. The search engines used were metasearch (Copernic and Mamma), general search (Google, AltaVista, and Yahoo), and health topic-related search engines (MedHunt and Healthfinder). The phrase "knee arthroscopy" was used as the search terminology. Excluding the repetitions, there were 117 Web sites available for quality analysis. These sites were analyzed for accessibility, relevance, authenticity, adequacy, and accountability by use of a specially designed questionnaire. Our analysis showed that most of the sites providing patient information on knee arthroscopy contained outdated information, were inadequate, and were not accountable. Only 16 sites were found to be providing reasonably good patient information and hence can be recommended to patients. Understandably, most of these sites were from nonprofit organizations and educational institutions. Furthermore, our study revealed that using multiple search engines increases patients' chances of obtaining more relevant information rather than using a single search

  13. Thesauri and the World Wide Web

    OpenAIRE

    Murakami, Tiago R. M.

    2005-01-01

    Thesauri are tools of growing importance in the Web context. For this, it is necessary to adapt thesauri to Web technologies and functionalities. The present work is an exploratory study that aims to identify how documentary thesauri are being utilized and/or incorporated for the management of information on the Web.

  14. Radar Images of the Earth and the World Wide Web

    Science.gov (United States)

    Chapman, B.; Freeman, A.

    1995-01-01

    A perspective of NASA's Jet Propulsion Laboratory as a center of planetary exploration, and its involvement in studying the earth from space is given. Remote sensing, radar maps, land topography, snow cover properties, vegetation type, biomass content, moisture levels, and ocean data are items discussed related to earth orbiting satellite imaging radar. World Wide Web viewing of this content is discussed.

  15. Documenting historical data and accessing it on the World Wide Web

    Science.gov (United States)

    Malchus B. Baker; Daniel P. Huebner; Peter F. Ffolliott

    2000-01-01

    New computer technologies facilitate the storage, retrieval, and summarization of watershed-based data sets on the World Wide Web. These data sets are used by researchers when testing and validating predictive models, managers when planning and implementing watershed management practices, educators when learning about hydrologic processes, and decisionmakers when...

  16. A design method for an intuitive web site

    Energy Technology Data Exchange (ETDEWEB)

    Quinniey, M.L.; Diegert, K.V.; Baca, B.G.; Forsythe, J.C.; Grose, E.

    1999-11-03

    The paper describes a methodology for designing a web site for human factors engineers that is applicable to designing a web site for any group of people. Many web pages on the World Wide Web are not organized in a format that allows a user to efficiently find information. Often the information and hypertext links on web pages are not organized into intuitive groups. Intuition implies that a person is able to use their knowledge of a paradigm to solve a problem. Intuitive groups are categories that allow web page users to find information by using their intuition or mental models of categories. In order to improve human factors engineers' efficiency in finding information on the World Wide Web, research was performed to develop a web site that serves as a tool for finding information effectively. The paper describes a methodology for designing a web site for a group of people who perform similar tasks in an organization.

  17. Wired World-Wide Web Interactive Remote Event Display

    Energy Technology Data Exchange (ETDEWEB)

    De Groot, Nicolo

    2003-05-07

    WIRED (World-Wide Web Interactive Remote Event Display) is a framework, written in the Java™ language, for building High Energy Physics event displays. An event display based on the WIRED framework enables users of a HEP collaboration to visualize and analyze events remotely using ordinary WWW browsers, on any type of machine. In addition, event displays using WIRED may provide the general public with access to the research of high energy physics. The recent introduction of the object-oriented Java™ language enables the transfer of machine-independent code across the Internet, to be safely executed by a Java-enhanced WWW browser. We have employed this technology to create a remote event display in WWW. The combined Java-WWW technology hence assures a world wide availability of such an event display, an always up-to-date program and a platform-independent implementation, which is easy to use and to install.

  18. Nessi: An EEG-Controlled Web Browser for Severely Paralyzed Patients

    Directory of Open Access Journals (Sweden)

    Michael Bensch

    2007-01-01

    Full Text Available We have previously demonstrated that an EEG-controlled web browser based on self-regulation of slow cortical potentials (SCPs) enables severely paralyzed patients to browse the internet independently of any voluntary muscle control. However, this system had several shortcomings, among them that patients could only browse within a limited number of web pages and had to select links from an alphabetical list, causing problems if the link names were identical or if they were unknown to the user (as in graphical links). Here we describe a new EEG-controlled web browser, called Nessi, which overcomes these shortcomings. In Nessi, the open-source browser Mozilla was extended by graphical in-place markers, whereby different brain responses correspond to different frame colors placed around selectable items, enabling the user to select any link on a web page. Besides links, other interactive elements are accessible to the user, such as e-mail and virtual keyboards, opening up a wide range of hypertext-based applications.

  19. Outreach to International Students and Scholars Using the World Wide Web.

    Science.gov (United States)

    Wei, Wei

    1998-01-01

    Describes the creation of a World Wide Web site for the Science Library International Outreach Program at the University of California, Santa Cruz. Discusses design elements, content, and promotion of the site. Copies of the home page and the page containing the outreach program's statement of purpose are included. (AEF)

  20. World-Wide Web Tools for Locating Planetary Images

    Science.gov (United States)

    Kanefsky, Bob; Deiss, Ron (Technical Monitor)

    1995-01-01

    The explosive growth of the World-Wide Web (WWW) in the past year has made it feasible to provide interactive graphical tools to assist scientists in locating planetary images. The highest available resolution images of any site of interest can be quickly found on a map or plot, and, if online, displayed immediately on nearly any computer equipped with a color screen, an Internet connection, and any of the free WWW browsers. The same tools may also be of interest to educators, students, and the general public. Image finding tools have been implemented covering most of the solar system: Earth, Mars, and the moons and planets imaged by Voyager. The Mars image-finder, which plots the footprints of all the high-resolution Viking Orbiter images and can be used to display any that are available online, also contains a complete scrollable atlas and hypertext gazetteer to help locate areas. The Earth image-finder is linked to thousands of Shuttle images stored at NASA/JSC, and displays them as red dots on a globe. The Voyager image-finder plots images as dots, by longitude and apparent target size, linked to online images. The locator (URL) for the top-level page is http://ic-www.arc.nasa.gov/ic/projects/bayes-group/Atlas/. Through the efforts of the Planetary Data System and other organizations, hundreds of thousands of planetary images are now available on CD-ROM, and many of these have been made available on the WWW. However, locating images of a desired site is still problematic in practice. For example, many scientists studying Mars use digital image maps, which are one third the resolution of Viking Orbiter survey images. When they do use Viking Orbiter images, they often work with photographically printed hardcopies, which lack the flexibility of digital images: magnification, contrast stretching, and other basic image-processing techniques offered by off-the-shelf software. 
From the perspective of someone working on an experimental image processing technique for

  1. Comparison of Standard Link Color Visibility Between Young Adults and Elderly Adults

    Science.gov (United States)

    Saito, Daisuke; Saito, Keiichi; Notomi, Kazuhiro; Saito, Masao

    The rapid dissemination of the World Wide Web raises the issue of Web accessibility, and one of the important factors is the combination of a foreground color and a background color. In our previous study, the visibility of web-safe colors on a white background was examined, and the blue used for the unvisited standard link color was found to have high visibility across a wide range of ages. Since the use of blue and an underline is recommended for links, in this study we examined high-visibility background colors for the unvisited standard link color, i.e. blue. One hundred and twenty-three background colors behind the blue were examined using the pair comparison method, and the relationship between visibility and color difference was discussed on the uniform color space CIELAB (L*a*b* color space). As a result, effective background colors for the standard link color were determined on CIELAB: L* larger than 68, a* smaller than 50, and b* larger than -50 provided high visibility across a wide range of ages.
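The thresholds reported in this abstract translate directly into a simple check, sketched below together with the standard CIE76 color difference on CIELAB. Two assumptions are worth flagging: the study may use a different ΔE formula, and the CIELAB coordinates for link blue are an approximate conversion of sRGB blue, not values taken from the paper.

```python
import math

def high_visibility_background(L, a, b):
    """Thresholds reported in the abstract for backgrounds behind the
    standard blue link color: L* > 68, a* < 50, b* > -50."""
    return L > 68 and a < 50 and b > -50

def delta_e_cie76(lab1, lab2):
    """CIE76 color difference, i.e. Euclidean distance in CIELAB
    (assumed here; the study may use a different formula)."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(lab1, lab2)))

white = (100.0, 0.0, 0.0)          # white background in CIELAB
link_blue = (32.3, 79.2, -107.9)   # approximate CIELAB of sRGB blue (assumption)
print(high_visibility_background(*white))          # → True
print(round(delta_e_cie76(white, link_blue), 1))   # large difference, as expected
```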

  2. Where to find nutritional science journals on the World Wide Web.

    Science.gov (United States)

    Brown, C M

    1997-08-01

    The World Wide Web (WWW) is a burgeoning information resource that can be utilized for current awareness and assistance in manuscript preparation and submission. The ever changing and expanding nature of the WWW allows it to provide up to the minute information, but this inherent changeability often makes information access difficult. To assist nutrition scientists in locating useful information about nutritional science journals on the WWW, this article critically reviews and describes the WWW sites for seventeen highly ranked nutrition and dietetics journals. Included in each annotation are the site's title, web address or Universal Resource Locator (URL), journal ranking and site authorship. Also listed is whether or not the site makes available the guidelines for authors, tables of contents, abstracts, online ordering, as well as information about the editorial board. This critical survey illustrates that the information on the web, regardless of its authority, is not of equal quality.

  3. World wide web and virtual reality in developing and using environmental models

    International Nuclear Information System (INIS)

    Guariso, G.

    2001-01-01

    The application of the World Wide Web as an active component of environmental decision support systems is still largely unexplored. Environmental problems are distributed in nature, both from the physical and from the social point of view; the Web is thus an ideal tool to share concepts and decisions among multiple interested parties. The same holds for Virtual Reality (VR), which has not found, up to now, a large application in the development and teaching of environmental models. The paper shows some recent applications that highlight the potential of these tools [it

  4. A review of images of nurses and smoking on the World Wide Web.

    Science.gov (United States)

    Sarna, Linda; Bialous, Stella Aguinaga

    2012-01-01

    With the advent of the World Wide Web, historic images previously having limited distributions are now widely available. As tobacco use has evolved, so have images of nurses related to smoking. Using a systematic search, the purpose of this article is to describe types of images of nurses and smoking available on the World Wide Web. Approximately 10,000 images of nurses and smoking published over the past century were identified through search engines and digital archives. Seven major themes were identified: nurses smoking, cigarette advertisements, helping patients smoke, "naughty" nurse, teaching women to smoke, smoking in and outside of health care facilities, and antitobacco images. The use of nursing images to market cigarettes was known but the extent of the use of these images has not been reported previously. Digital archives can be used to explore the past, provide a perspective for understanding the present, and suggest directions for the future in confronting negative images of nursing. Copyright © 2012 Elsevier Inc. All rights reserved.

  5. Using web services for linking genomic data to medical information systems.

    Science.gov (United States)

    Maojo, V; Crespo, J; de la Calle, G; Barreiro, J; Garcia-Remesal, M

    2007-01-01

    To develop a new perspective for biomedical information systems, regarding the introduction of ideas, methods and tools related to the new scenario of genomic medicine. Technological aspects related to the analysis and integration of heterogeneous clinical and genomic data include mapping clinical and genetic concepts, potential future standards or the development of integrated biomedical ontologies. In this clinicomics scenario, we describe the use of Web services technologies to improve access to and integrate different information sources. We give a concrete example of the use of Web services technologies: the OntoFusion project. Web services provide new biomedical informatics (BMI) approaches related to genomic medicine. Customized workflows will aid research tasks by linking heterogeneous Web services. Two significant examples of these European Commission-funded efforts are the INFOBIOMED Network of Excellence and the Advancing Clinico-Genomic Trials on Cancer (ACGT) integrated project. Supplying medical researchers and practitioners with omics data and biologists with clinical datasets can help to develop genomic medicine. BMI is contributing by providing the informatics methods and technological infrastructure needed for these collaborative efforts.
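The core integration idea described above, joining heterogeneous clinical and genomic records, can be shown in miniature. In the sketch below, two JSON documents stand in for the responses of two Web services and are merged on a shared patient identifier. All field names and values are invented for illustration; this is not the OntoFusion system itself.

```python
import json

# Stand-ins for responses from a clinical service and a genomic service.
# All identifiers and values are invented examples.
clinical_json = ('[{"patient_id": "P1", "diagnosis": "melanoma"},'
                 ' {"patient_id": "P2", "diagnosis": "healthy"}]')
genomic_json = '[{"patient_id": "P1", "variant": "BRAF V600E"}]'

def integrate(clinical_doc: str, genomic_doc: str) -> dict:
    """Merge per-patient clinical and genomic records into one view,
    keyed by patient ID (toy sketch of web-service data integration)."""
    merged = {}
    for rec in json.loads(clinical_doc):
        merged[rec["patient_id"]] = dict(rec)
    for rec in json.loads(genomic_doc):
        merged.setdefault(rec["patient_id"], {}).update(rec)
    return merged

view = integrate(clinical_json, genomic_json)
print(view["P1"])   # clinical record enriched with the genomic variant
```

A real workflow would fetch these documents from service endpoints and reconcile vocabularies through shared ontologies, but the join-on-identifier step is the same.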

  6. Efficacy of the World Wide Web in K-12 environmental education

    Science.gov (United States)

    York, Kimberly Jane

    1998-11-01

    Despite support by teachers, students, and the American public in general, environmental education is not a priority in U.S. schools. Teachers face many barriers to integrating environmental education into K--12 curricula. The focus of this research is teachers' lack of access to environmental education resources. New educational reforms combined with emerging mass communication technologies such as the Internet and World Wide Web present new opportunities for the infusion of environmental content into the curriculum. New technologies can connect teachers and students to a wealth of resources previously unavailable to them. However, significant barriers to using technologies exist that must be overcome to make this promise a reality. Web-based environmental education is a new field and research is urgently needed. If teachers are to use the Web meaningfully in their classrooms, it is essential that their attitudes and perceptions about using this new technology be brought to light. Therefore, this exploratory research investigates teachers' attitudes toward using the Web to share environmental education resources. Both qualitative and quantitative methods were used to investigate this problem. Two surveys were conducted---self-administered mail survey and a Web-based online survey---to elicit teachers perceptions and comments about environmental education and the Web. Preliminary statistical procedures including frequencies, percentages and correlational measures were performed to interpret the data. In-depth interviews and participant-observation methods were used during an extended environmental education curriculum development project with two practicing teachers to gain insights into the process of creating curricula and placing it online. Findings from the both the mail survey and the Web-based survey suggest that teachers are interested in environmental education---97% of respondents for each survey agreed that environmental education should be taught in K

  7. The World-Wide Web: An Interface between Research and Teaching in Bioinformatics

    Directory of Open Access Journals (Sweden)

    James F. Aiton

    1994-01-01

    Full Text Available The rapid expansion occurring in World-Wide Web activity is beginning to make the concepts of ‘global hypermedia’ and ‘universal document readership’ realistic objectives of the new revolution in information technology. One consequence of this increase in usage is that educators and students are becoming more aware of the diversity of the knowledge base which can be accessed via the Internet. Although computerised databases and information services have long played a key role in bioinformatics, these same resources can also be used to provide core materials for teaching and learning. The large datasets and archives that have been compiled for biomedical research can be enhanced with the addition of a variety of multimedia elements (images, digital videos, animation, etc.). The use of this digitally stored information in structured and self-directed learning environments is likely to increase as activity across the World-Wide Web increases.

  8. Reliable and Persistent Identification of Linked Data Elements

    Science.gov (United States)

    Wood, David

    Linked Data techniques rely upon common terminology in a manner similar to a relational database's reliance on a schema. Linked Data terminology anchors metadata descriptions and facilitates navigation of information. Common vocabularies ease the human, social tasks of understanding datasets sufficiently to construct queries and help to relate otherwise disparate datasets. Vocabulary terms must, when using the Resource Description Framework, be grounded in URIs. A current best practice on the World Wide Web is to serve vocabulary terms as Uniform Resource Locators (URLs) and present both human-readable and machine-readable representations to the public. Linked Data terminology published to the World Wide Web may be used by others without reference or notification to the publishing party. That presents a problem: vocabulary publishers take on an implicit responsibility to maintain and publish their terms via the URLs originally assigned, regardless of the inconvenience such a responsibility may cause. Over the course of years, people change jobs, publishing organizations change Internet domain names, computers change IP addresses, systems administrators publish old material in new ways. Clearly, a mechanism is required to manage Web-based vocabularies over the long term. This chapter places Linked Data vocabularies in context with the wider concepts of metadata in general and specifically metadata on the Web. Persistent identifier mechanisms are reviewed, with a particular emphasis on Persistent URLs, or PURLs. PURLs and PURL services are discussed in the context of Linked Data. Finally, historic weaknesses of PURLs are resolved by the introduction of a federation of PURL services to address needs specific to Linked Data.
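The PURL indirection described above can be sketched as a redirect table: the persistent URL cited in documents never changes, and only the table entry mapping it to the current location is updated when a vocabulary moves. The sketch below is an offline toy, not the behavior of any real PURL service; all URLs are invented, and a real resolver would issue HTTP 302 responses instead of dictionary lookups.

```python
# Toy redirect table: persistent URL -> current location.
# Entries can chain (e.g. after a second move). All URLs are invented.
REDIRECTS = {
    "http://purl.example.org/vocab/term1": "http://old-host.example.com/term1",
    "http://old-host.example.com/term1": "http://new-host.example.org/terms/term1",
}

def resolve(url: str, max_hops: int = 10) -> str:
    """Follow the redirect chain to the current location, guarding
    against loops and overly long chains."""
    seen = set()
    while url in REDIRECTS:
        if url in seen or len(seen) >= max_hops:
            raise RuntimeError("redirect loop or chain too long: " + url)
        seen.add(url)
        url = REDIRECTS[url]
    return url

print(resolve("http://purl.example.org/vocab/term1"))
# When the vocabulary moves again, only REDIRECTS is edited; published
# documents keep citing the persistent purl.example.org URL.
```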

  9. Distance Learning Courses on the Web: The Authoring Approach.

    Science.gov (United States)

    Santos, Neide; Diaz, Alicia; Bibbo, Luis Mariano

    This paper proposes a framework for supporting the authoring process of distance learning courses. An overview of distance learning courses and the World Wide Web is presented. The proposed framework is then described, including: (1) components of the framework--a hypermedia design methodology for authoring the course, links to related Web sites,…

  10. Environmental Reporting for Global Higher Education Institutions using the World Wide Web.

    Science.gov (United States)

    Walton, J.; Alabaster, T.; Richardson, S.; Harrison, R.

    1997-01-01

    Proposes the value of voluntary environmental reporting by higher education institutions as an aid to implementing environmental policies. Suggests that the World Wide Web can provide a fast, up-to-date, flexible, participatory, multidimensional medium for information exchange and management. Contains 29 references. (PVD)

  11. Behavior of Shear Link of WF Section with Diagonal Web Stiffener of Eccentrically Braced Frame (EBF of Steel Structure

    Directory of Open Access Journals (Sweden)

    Yurisman

    2010-11-01

    Full Text Available This paper presents results of a numerical and experimental study of shear link behavior, utilizing a diagonal stiffener on the web of a steel profile to increase shear link performance in an eccentrically braced frame (EBF) of a steel structure system. The specimens examine the behavior of the shear link, using a diagonal stiffener on the web part, under static monotonic and cyclic loads. The cyclic loading pattern used in the experiment follows the AISC 2005 loading standard. Analysis was carried out with the non-linear finite element method using MSC/NASTRAN software. The link was modeled with CQUAD shell elements. Along the boundary of the loading area the nodes are constrained to produce loading in one direction only. The link length in this analysis is 400 mm on a steel profile of WF 200.100. Parameters considered to affect the performance of the shear link significantly have been analyzed, namely flange and web thicknesses, thickness and length of the web stiffener, thickness of the diagonal stiffener, and geometry of the diagonal stiffener. The behavior of the shear link with a diagonal web stiffener was compared with that of a standard link designed to AISC 2005 criteria. Analysis results show that the diagonal web stiffener is capable of increasing shear link performance in terms of stiffness, strength, and energy dissipation in supporting lateral load. However, differences in displacement ductility between shear links with diagonal stiffeners and shear links based on AISC standards have not been shown to be significant. Results also show the thickness of the diagonal stiffener and the geometric model of the stiffener to have a significant influence on the performance of shear links. To validate the numerical study, the research is followed by experimental work conducted in the Structures and Mechanics Laboratory, PAU-ITB. The experiments were carried out using three test

  12. Lexical Link Analysis (LLA) Application: Improving Web Service to Defense Acquisition Visibility Environment (DAVE)

    Science.gov (United States)

    2015-05-01

    LEXICAL LINK ANALYSIS (LLA) APPLICATION: IMPROVING WEB SERVICE TO DEFENSE ACQUISITION VISIBILITY ENVIRONMENT (DAVE), May 13-14, 2015, Dr. Ying... Report date: May 2015; dates covered: 2015. Methods: Lexical Link Analysis (LLA) Core (LLA reports and visualizations); Collaborative Learning Agents (CLA) for

  13. WWW.Cell Biology Education: Using the World Wide Web to Develop a New Teaching Topic

    Science.gov (United States)

    Blystone, Robert V.; MacAlpine, Barbara

    2005-01-01

    "Cell Biology Education" calls attention each quarter to several Web sites of educational interest to the biology community. The Internet provides access to an enormous array of potential teaching materials. In this article, the authors describe one approach for using the World Wide Web to develop a new college biology laboratory exercise. As a…

  14. Semantic Web Technologies for the Adaptive Web

    DEFF Research Database (Denmark)

    Dolog, Peter; Nejdl, Wolfgang

    2007-01-01

    Ontologies and reasoning are the key terms brought into focus by the semantic web community. Formal representation of ontologies in a common data model on the web can be taken as a foundation for adaptive web technologies as well. This chapter describes how ontologies shared on the semantic web provide conceptualization for the links which are a main vehicle to access information on the web. The subject domain ontologies serve as constraints for generating only those links which are relevant for the domain a user is currently interested in. Furthermore, user model ontologies provide additional means for deciding which links to show, annotate, hide, generate, and reorder. The semantic web technologies provide means to formalize the domain ontologies and metadata created from them. The formalization enables reasoning for personalization decisions. This chapter describes which components...
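A minimal sketch of the idea described in this abstract: links annotated with ontology concepts are filtered so that only those relevant to the user's current domain are generated. The ontology, concept names, and link URLs below are invented for illustration and are not from the chapter.

```python
# Toy domain "ontology": each concept lists its direct subconcepts.
ONTOLOGY = {
    "programming": ["java", "python"],
    "java": [],
    "python": [],
    "history": [],
}

def concepts_under(root, ontology):
    """All concepts reachable from `root`, including `root` itself."""
    seen, stack = set(), [root]
    while stack:
        c = stack.pop()
        if c not in seen:
            seen.add(c)
            stack.extend(ontology.get(c, []))
    return seen

def relevant_links(links, domain, ontology):
    """Generate only links whose concept falls inside the user's current domain."""
    allowed = concepts_under(domain, ontology)
    return [url for url, concept in links if concept in allowed]

links = [
    ("/tutorials/java-generics", "java"),
    ("/tutorials/python-intro", "python"),
    ("/articles/rome", "history"),
]

print(relevant_links(links, "programming", ONTOLOGY))
```

A user-model ontology could be layered on top in the same way, further narrowing which of the allowed links to show or annotate.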

  15. Distributing flight dynamics products via the World Wide Web

    Science.gov (United States)

    Woodard, Mark; Matusow, David

    1996-01-01

    The NASA Flight Dynamics Products Center (FDPC), which makes available selected operations products via the World Wide Web, is reported on. The FDPC can be accessed from any host machine connected to the Internet. It is a multi-mission service which provides Internet users with unrestricted access to the following standard products: antenna contact predictions; ground tracks; orbit ephemerides; mean and osculating orbital elements; earth sensor sun and moon interference predictions; space flight tracking data network summaries; and Shuttle transport system predictions. Several scientific data bases are available through the service.

  16. The World Wide Web: A Web Even a Fly Would Love

    Science.gov (United States)

    Bryson, E.

    Ever since my introduction to the World Wide Web (WWW), it's been love at first byte. Searching on the WWW is similar to being able to go to a public library and allow yourself to be transported to any other book or library around the world by looking at a reference or index and clicking your heels together like Dorothy did in "The Wizard of Oz", only the clicking is done with a computer mouse. During this presentation, we will explore the WWW protocols which allow clients and servers to communicate on the Internet. We will demonstrate the ease with which users can navigate the virtual tidal wave of information available with a mere click of a button. In addition, the workshop will discuss the revolutionary aspects of this network information system and how it's impacting our libraries as a primary mechanism for rapid dissemination of knowledge.

  17. Expert knowledge in palliative care on the World Wide Web: palliativedrugs.org.

    Science.gov (United States)

    Gavrin, Jonathan

    2009-01-01

    In my last Internet-related article, I speculated that social networking would be the coming wave in the effort to share knowledge among experts in various disciplines. At the time I did not know that a palliative care site on the World Wide Web (WWW), palliativedrugs.com, already provided the infrastructure for sharing expert knowledge in the field. The Web site is an excellent traditional formulary but it is primarily devoted to "unlicensed" ("off-label") use of medications in palliative care, something we in the specialty often do with little to support our interventions except shared knowledge and experience. There is nothing fancy about this Web site. In a good way, its format is a throwback to Web sites of the 1990s. In only the loosest sense can one describe it as "multimedia." Yet, it provides the perfect forum for expert knowledge and is a "must see" resource. Its existing content is voluminous and reliable, filtered and reviewed by renowned clinicians and educators in the field. Although its origin and structure were not specifically designed for social or professional networking, the Web site's format makes it a natural way for practitioners around the world to contribute to an ever-growing body of expertise in palliative care.

  18. Landscaping climate change: a mapping technique for understanding science and technology debates on the world wide web

    NARCIS (Netherlands)

    Rogers, R.; Marres, N.

    2000-01-01

    New World Wide Web (web) mapping techniques may inform and ultimately facilitate meaningful participation in current science and technology debates. The technique described here "landscapes" a debate by displaying key "webby" relationships between organizations. "Debate-scaping" plots two

  19. DW3 Classical Music Resources: Managing Mozart on the Web.

    Science.gov (United States)

    Fineman, Yale

    2001-01-01

    Discusses the development of DW3 (Duke World Wide Web) Classical Music Resources, a vertical portal that comprises the most comprehensive collection of classical music resources on the Web with links to more than 2800 non-commercial pages/sites in over a dozen languages. Describes the hierarchical organization of subject headings and considers…

  20. Securing the anonymity of content providers in the World Wide Web

    Science.gov (United States)

    Demuth, Thomas; Rieke, Andreas

    1999-04-01

    Nowadays the World Wide Web (WWW) is an established service used by people all over the world. Most of them do not recognize the fact that they reveal plenty of information about themselves or their affiliation and computer equipment to the providers of web pages they connect to. As a result, a lot of services offer users access to web pages unrecognized or without risk of being backtracked, respectively. This kind of anonymity is called user or client anonymity. But on the other hand, an equivalent protection for content providers does not exist, although this feature is desirable for many situations in which the identity of a publisher or content provider shall be hidden. We call this property server anonymity. We introduce the first system whose primary target is to offer anonymity for providers of information in the WWW. Besides this property, it also provides client anonymity. Based on David Chaum's idea of mixes and in relation to the context of the WWW, we explain the term 'server anonymity', motivating the system JANUS, which offers both client and server anonymity.
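To illustrate the Chaum-style mix layering this abstract builds on: the sender wraps a message in one layer per mix, and each mix strips exactly one layer, so no single mix links sender to receiver. This toy uses XOR with a hash-derived keystream as a stand-in for real public-key encryption (the key names are invented; do not use this for actual privacy).

```python
import hashlib

def keystream(key: bytes, n: int) -> bytes:
    """Deterministic pseudo-random bytes derived from a mix's key."""
    out, counter = b"", 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:n]

def xor_layer(data: bytes, key: bytes) -> bytes:
    ks = keystream(key, len(data))
    return bytes(a ^ b for a, b in zip(data, ks))

def wrap(message: bytes, mix_keys):
    # Sender applies layers in reverse so the first mix removes the outermost.
    for key in reversed(mix_keys):
        message = xor_layer(message, key)
    return message

def route(message: bytes, mix_keys):
    # Each mix in turn strips exactly one layer (XOR is its own inverse).
    for key in mix_keys:
        message = xor_layer(message, key)
    return message

keys = [b"mix-1", b"mix-2", b"mix-3"]
cipher = wrap(b"hidden server content", keys)
print(route(cipher, keys))
```

In JANUS-like server anonymity the same layering is applied on the reply path, so the client never learns which host actually served the page.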

  1. Pre-Service Teachers Critically Evaluate Scientific Information on the World-Wide Web: What Makes Information Believable?

    Science.gov (United States)

    Iding, Marie; Klemm, E. Barbara

    2005-01-01

    The present study addresses the need for teachers to critically evaluate the credibility, validity, and cognitive load associated with scientific information on Web sites, in order to effectively teach students to evaluate scientific information on the World Wide Web. A line of prior research investigating high school and university students'…

  2. THE NEW “UNIVERSAL TRUTH” OF THE WORLD WIDE WEB

    OpenAIRE

    Alexandru Tăbușcă

    2011-01-01

    We all see that the world wide web is permanently evolving and developing. New websites are created continuously and push the limits of the old HTML specs in all respects. HTML4 has been the de facto standard for almost 10 years, and developers are starting to look for new and improved technologies to help them provide greater functionality. In order to give authors flexibility and interoperability and to enable much more interactive and innovative websites and applications, HTML5 introduces and enh...

  3. PhenoLink - a web-tool for linking phenotype to ~omics data for bacteria: application to gene-trait matching for Lactobacillus plantarum strains

    Directory of Open Access Journals (Sweden)

    Bayjanov Jumamurat R

    2012-05-01

    Abstract Background Linking phenotypes to high-throughput molecular biology information generated by ~omics technologies allows revealing cellular mechanisms underlying an organism's phenotype. ~Omics datasets are often very large and noisy with many features (e.g., genes, metabolite abundances). Thus, associating phenotypes to ~omics data requires an approach that is robust to noise and can handle large and diverse data sets. Results We developed a web-tool PhenoLink (http://bamics2.cmbi.ru.nl/websoftware/phenolink/) that links phenotype to ~omics data sets using well-established as well as new techniques. PhenoLink imputes missing values and preprocesses input data (i) to decrease inherent noise in the data and (ii) to counterbalance pitfalls of the Random Forest algorithm, on which feature (e.g., gene) selection is based. Preprocessed data is used in feature (e.g., gene) selection to identify relations to phenotypes. We applied PhenoLink to identify gene-phenotype relations based on the presence/absence of 2847 genes in 42 Lactobacillus plantarum strains and phenotypic measurements of these strains in several experimental conditions, including growth on sugars and nitrogen-dioxide production. Genes were ranked based on their importance (predictive value) to correctly predict the phenotype of a given strain. In addition to known gene-to-phenotype relations we also found novel relations. Conclusions PhenoLink is an easily accessible web-tool to facilitate identifying relations from large and often noisy phenotype and ~omics datasets. Visualization of links to phenotypes offered in PhenoLink allows prioritizing links, finding relations between features, finding relations between phenotypes, and identifying outliers in phenotype data. PhenoLink can be used to uncover phenotype links to a multitude of ~omics data, e.g., gene presence/absence (determined by e.g. CGH or next-generation sequencing), gene expression (determined by e.g. microarrays or RNA
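A drastically simplified stand-in for the gene-trait matching described above: PhenoLink itself ranks genes by Random Forest importance, but the core idea of scoring how well each gene's presence/absence pattern predicts a phenotype across strains can be sketched with a single-gene match score. The gene names and data below are invented.

```python
def gene_score(presence, phenotype):
    """Fraction of strains where gene presence matches the phenotype label,
    folded so that a perfectly anti-correlated gene also scores 1.0."""
    matches = sum(p == y for p, y in zip(presence, phenotype))
    frac = matches / len(phenotype)
    return max(frac, 1.0 - frac)

def rank_genes(gene_matrix, phenotype):
    """Rank genes by predictive value for the phenotype, best first."""
    scores = {g: gene_score(col, phenotype) for g, col in gene_matrix.items()}
    return sorted(scores, key=scores.get, reverse=True)

# Columns: presence (1) / absence (0) of each gene across six strains.
genes = {
    "gene_sugar": [1, 1, 1, 0, 0, 0],   # tracks the phenotype exactly
    "gene_noise": [1, 0, 1, 0, 1, 0],   # largely uninformative
    "gene_anti":  [0, 0, 0, 1, 1, 1],   # anti-correlated, equally informative
}
grows_on_sugar = [1, 1, 1, 0, 0, 0]     # phenotype label per strain

print(rank_genes(genes, grows_on_sugar))
```

A Random Forest additionally captures combinations of genes; this single-gene score only captures marginal associations, which is why PhenoLink's preprocessing and ensemble approach matter for real data.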

  4. The World Wide Web as a Medium of Instruction: What Works and What Doesn't

    Science.gov (United States)

    McCarthy, Marianne; Grabowski, Barbara; Hernandez, Angel; Koszalka, Tiffany; Duke, Lee

    1997-01-01

    A conference was held on March 18-20, 1997 to investigate the lessons learned by the Aeronautics Cooperative Agreement Projects with regard to the most effective strategies for developing instruction for the World Wide Web. The conference was a collaboration among the NASA Aeronautics and Space Transportation Technology Centers (Ames, Dryden, Langley, and Lewis), NASA Headquarters, the University of Idaho and The Pennsylvania State University. The conference consisted of presentations by the Aeronautics Cooperative Agreement Teams, the University of Idaho, and working sessions in which the participants addressed teacher training and support, technology, evaluation and pedagogy. The conference was also undertaken as part of the Dryden Learning Technologies Project which is a collaboration between the Dryden Education Office and The Pennsylvania State University. The DFRC Learning Technology Project goals relevant to the conference are as follows: conducting an analysis of current teacher needs, classroom infrastructure and exemplary instructional World Wide Web sites, and developing models for Web-enhanced learning environments that optimize teaching practices and student learning.

  5. A systematic review of patient inflammatory bowel disease information resources on the World Wide Web.

    Science.gov (United States)

    Bernard, André; Langille, Morgan; Hughes, Stephanie; Rose, Caren; Leddin, Desmond; Veldhuyzen van Zanten, Sander

    2007-09-01

    The Internet is a widely used information resource for patients with inflammatory bowel disease, but there is variation in the quality of Web sites that have patient information regarding Crohn's disease and ulcerative colitis. The purpose of the current study is to systematically evaluate the quality of these Web sites. The top 50 Web sites appearing in Google using the terms "Crohn's disease" or "ulcerative colitis" were included in the study. Web sites were evaluated using (a) a Quality Evaluation Instrument (QEI) that awarded Web sites points (0-107) for specific information on various aspects of inflammatory bowel disease, (b) a five-point Global Quality Score (GQS), (c) two reading grade level scores, and (d) a six-point integrity score. Thirty-four Web sites met the inclusion criteria; 16 Web sites were excluded because they were portals or non-IBD oriented. The median QEI score was 57, with five Web sites scoring higher than 75 points. The median Global Quality Score was 2.0, with five Web sites achieving scores of 4 or 5. The average reading grade level score was 11.2. The median integrity score was 3.0. There is marked variation in the quality of the Web sites containing information on Crohn's disease and ulcerative colitis. Many Web sites suffered from poor quality, but there were five high-scoring Web sites.

  6. WEBSLIDE: A "Virtual" Slide Projector Based on World Wide Web

    Science.gov (United States)

    Barra, Maria; Ferrandino, Salvatore; Scarano, Vittorio

    1999-03-01

    We present here the design key concepts of WEBSLIDE, a software project whose objective is to provide a simple, cheap and efficient solution for showing slides during lessons in computer labs. In fact, WEBSLIDE allows the video monitors of several client machines (the "STUDENTS") to be synchronously updated by the actions of a particular client machine, called the "INSTRUCTOR." The system is based on the World Wide Web and the software components of WEBSLIDE mainly consist of a WWW server, browsers and small CGI-Bin scripts. What makes WEBSLIDE particularly appealing for small educational institutions is that WEBSLIDE is built with "off the shelf" products: it does not involve using a specifically designed program but any Netscape browser, one of the most popular browsers available on the market, is sufficient. Another possibility is to use our system to implement "guided automatic tours" through several pages or Intranets' internal news bulletins: the company Web server can broadcast to all employees relevant information on their browser.
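The synchronization model described above can be sketched without any networking: an "instructor" holds the current slide and "student" clients poll it, redrawing only when it changed. In WEBSLIDE each poll would be an HTTP request answered by a CGI script; the class and method names here are invented for illustration.

```python
class Instructor:
    """Holds the slide currently being shown."""
    def __init__(self):
        self.slide = 0

    def next_slide(self):
        self.slide += 1

class Student:
    """Polls the instructor's state and redraws only on change."""
    def __init__(self, instructor):
        self.instructor = instructor
        self.shown = None
        self.redraws = 0

    def poll(self):
        # In WEBSLIDE this would be an HTTP request to the WWW server.
        current = self.instructor.slide
        if current != self.shown:
            self.shown = current
            self.redraws += 1

teacher = Instructor()
students = [Student(teacher) for _ in range(3)]
for s in students:
    s.poll()            # initial draw of slide 0
teacher.next_slide()    # instructor advances to slide 1
for s in students:
    s.poll()
    s.poll()            # second poll is a no-op: slide unchanged

print([(s.shown, s.redraws) for s in students])  # -> [(1, 2), (1, 2), (1, 2)]
```

The same polling pattern underlies the "guided automatic tour" use: the server-side state advances and every polling browser follows.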

  7. Do We Need to Impose More Regulation Upon the World Wide Web? -A Metasystem Analysis

    Directory of Open Access Journals (Sweden)

    John P. van Gigch

    2000-01-01

    Every day a new problem attributable to the World Wide Web's lack of formal structure and/or organization is made public. What arguably could be represented as one of its main strengths is rapidly turning out to be one of its most flagrant weaknesses. The intent of this article is to show the need to establish a more formal organization than presently exists over the World Wide Web. (This article will use the terms the Internet and Cyberspace interchangeably.) It is proposed that this formal organization take the form of a metacontrol system--to be explained--and rely, at least in part, on this control to self-regulate. The so-called metasystem would be responsible for preventing some of the unanticipated situations that take place in cyberspace and that, due to the web's lack of maturity, have not been encountered heretofore. Some activities, such as the denial-of-service (DoS) attacks, may well be illicit. Others, like the question of establishing a world-wide democratic board to administer the Internet's address system, are so new that there are no technical, legal or political precedents to ensure its design will succeed. What is needed is a formal, over-arching control system, i.e. a "metasystem," to arbitrate over controversies, decide on the legality of new policies and, in general, act as a metalevel controller over the activities of the virtual community called Cyberspace. The World Wide Web Consortium has emerged as a possible candidate for this role. This paper uses control theory to define both the problem and the proposed solution. Cyberspace lacks a metacontroller that can be used to resolve the many problems that arise when a new organizational configuration, such as the Internet, is created and when questions surface about the extent to which new activities interfere with individual or corporate freedoms.

  8. Reading on the World Wide Web: Dealing with conflicting information from multiple sources

    NARCIS (Netherlands)

    Van Strien, Johan; Brand-Gruwel, Saskia; Boshuizen, Els

    2011-01-01

    Van Strien, J. L. H., Brand-Gruwel, S., & Boshuizen, H. P. A. (2011, August). Reading on the World Wide Web: Dealing with conflicting information from multiple sources. Poster session presented at the biannual conference of the European Association for Research on Learning and Instruction, Exeter,

  9. Trait- and size-based descriptions of trophic links in freshwater food webs: current status and perspectives

    Directory of Open Access Journals (Sweden)

    David S. Boukal

    2014-04-01

    Biotic interactions in aquatic communities are dominated by predation, and the distribution of trophic link strengths in aquatic food webs crucially impacts their dynamics and stability. Although individual body size explains a large proportion of variation in trophic link strengths in aquatic habitats, current, predominantly body size-based views can gain additional realism by incorporating further traits. Functional traits that potentially affect the strength of trophic links can be classified into three groups: (i) body size, (ii) traits that identify the spatiotemporal overlap between the predators and their prey, and (iii) predator foraging and prey vulnerability traits, which are readily available for many taxa. The relationship between these trait groups and trophic link strength may be further modified by population densities, habitat complexity, temperature and other abiotic factors. I propose here that this broader multi-trait framework can utilize concepts, ideas and existing data from research on metabolic ecology, ecomorphology, animal personalities and the role of habitats in community structuring. The framework can be used to investigate non-additive effects of traits on trophic interactions, shed more light on the structuring of local food webs and evaluate the merits of taxonomic and functional group approaches in the description of predator-prey interactions. Development of trait- and size-based descriptions of food webs could be particularly fruitful in limnology given the relative paucity of well resolved datasets in standing waters.
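The three trait groups above can be combined into a toy link-strength function: a size-based preference term modulated by spatiotemporal overlap and prey vulnerability. The functional form (a Gaussian around an optimal predator:prey mass ratio) and all numbers are invented for illustration, not taken from the paper.

```python
import math

def link_strength(pred_mass, prey_mass, overlap, vulnerability,
                  optimal_ratio=100.0, width=1.0):
    """Strength in [0, 1]: Gaussian preference around an optimal
    predator:prey mass ratio (on a log10 scale), scaled by the
    spatiotemporal overlap and prey vulnerability terms."""
    ratio = pred_mass / prey_mass
    size_term = math.exp(-((math.log10(ratio) - math.log10(optimal_ratio)) ** 2)
                         / (2 * width ** 2))
    return size_term * overlap * vulnerability

# A predator 100x its prey's mass, fully co-occurring, moderately vulnerable prey:
s_good = link_strength(100.0, 1.0, overlap=1.0, vulnerability=0.8)
# Same sizes, but predator and prey barely co-occur in space and time:
s_rare = link_strength(100.0, 1.0, overlap=0.1, vulnerability=0.8)
print(round(s_good, 3), round(s_rare, 3))
```

Because the terms multiply, a mismatch in any one trait group (size, overlap, or vulnerability) weakens the link regardless of the others, which is one simple way to represent the non-additive effects the author discusses.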

  10. Ecological-network models link diversity, structure and function in the plankton food-web

    Science.gov (United States)

    D'Alelio, Domenico; Libralato, Simone; Wyatt, Timothy; Ribera D'Alcalà, Maurizio

    2016-02-01

    A planktonic food-web model including sixty-three functional nodes (representing auto-, mixo- and heterotrophs) was developed to integrate most trophic diversity present in the plankton. The model was implemented in two variants - which we named ‘green’ and ‘blue’ - characterized by opposite amounts of phytoplankton biomass and representing, respectively, bloom and non-bloom states of the system. Taxonomically disaggregated food-webs described herein allowed us to shed light on how components of the plankton community changed their trophic behavior in the two different conditions, and modified the overall functioning of the plankton food web. The green and blue food-webs showed distinct organizations in terms of trophic roles of the nodes and carbon fluxes between them. Such re-organization stemmed from switches in selective grazing by both metazoan and protozoan consumers. Switches in food-web structure resulted in relatively small differences in the efficiency of material transfer towards higher trophic levels. For instance, from green to blue states, a seven-fold decrease in phytoplankton biomass translated into only a two-fold decrease in potential planktivorous fish biomass. By linking diversity, structure and function in the plankton food-web, we discuss the role of internal mechanisms, relying on species-specific functionalities, in driving the ‘adaptive’ responses of plankton communities to perturbations.

  11. Delivering an Alternative Medicine Resource to the User's Desktop via World Wide Web.

    Science.gov (United States)

    Li, Jie; Wu, Gang; Marks, Ellen; Fan, Weiyu

    1998-01-01

    Discusses the design and implementation of a World Wide Web-based alternative medicine virtual resource. This homepage integrates regional, national, and international resources and delivers library services to the user's desktop. Goals, structure, and organizational schemes of the system are detailed, and design issues for building such a…

  12. World Wide Web Usage Mining Systems and Technologies

    Directory of Open Access Journals (Sweden)

    Wen-Chen Hu

    2003-08-01

    Web usage mining is used to discover interesting user navigation patterns and can be applied to many real-world problems, such as improving Web sites/pages, making additional topic or product recommendations, user/customer behavior studies, etc. This article provides a survey and analysis of current Web usage mining systems and technologies. A Web usage mining system performs five major tasks: (i) data gathering, (ii) data preparation, (iii) navigation pattern discovery, (iv) pattern analysis and visualization, and (v) pattern applications. Each task is explained in detail and its related technologies are introduced. A list of major research systems and projects concerning Web usage mining is also presented, and a summary of Web usage mining is given in the last section.
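A toy run through tasks (i)-(iii) of the survey's pipeline: gather raw requests, prepare them into per-user navigation sessions, then discover the most common page-to-page transition. The log lines and their format are invented for illustration.

```python
from collections import Counter

# (i) data gathering: raw "user page" request records
raw_log = [
    "u1 /home", "u1 /products", "u1 /cart",
    "u2 /home", "u2 /products", "u2 /checkout",
    "u3 /home", "u3 /products",
]

# (ii) data preparation: group requests into per-user navigation sessions
sessions = {}
for line in raw_log:
    user, page = line.split()
    sessions.setdefault(user, []).append(page)

# (iii) navigation pattern discovery: count consecutive page transitions
transitions = Counter()
for pages in sessions.values():
    for a, b in zip(pages, pages[1:]):
        transitions[(a, b)] += 1

print(transitions.most_common(1))  # -> [(('/home', '/products'), 3)]
```

Tasks (iv) and (v) would then visualize these transition counts and apply them, e.g. to recommend /products to visitors landing on /home.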

  13. A STUDY ON RANKING METHOD IN RETRIEVING WEB PAGES BASED ON CONTENT AND LINK ANALYSIS: COMBINATION OF FOURIER DOMAIN SCORING AND PAGERANK SCORING

    Directory of Open Access Journals (Sweden)

    Diana Purwitasari

    2008-01-01

    The ranking module is an important component of the search process, which sorts through relevant pages. Since a collection of Web pages has additional information inherent in the hyperlink structure of the Web, it can be represented as a link score and then combined with the usual information retrieval techniques of content score. In this paper we report our studies on ranking scores of Web pages combined from link analysis, PageRank Scoring, and content analysis, Fourier Domain Scoring. Our experiments use a collection of Web pages related to the subject of Statistics from Wikipedia, with the objectives of checking correctness and evaluating the performance of the combined ranking method. Evaluation of PageRank Scoring shows that the highest score does not always relate to Statistics. Since links within Wikipedia articles exist so that users are always one click away from more information on any point that has a link attached, it is possible that topics unrelated to Statistics are frequently mentioned in the collection. The combination method shows that a link score given a proportional weight relative to the content score of Web pages does affect the retrieval results.
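The combination described above can be sketched as a weighted sum of a link-analysis score and a content score. PageRank is computed below by plain power iteration; the pages, link graph, content scores (standing in for Fourier Domain Scoring output) and the weight `alpha` are invented for illustration.

```python
def pagerank(links, d=0.85, iters=50):
    """Power-iteration PageRank over a dict: page -> list of outlinks."""
    pages = list(links)
    pr = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iters):
        new = {}
        for p in pages:
            incoming = sum(pr[q] / len(links[q]) for q in pages if p in links[q])
            new[p] = (1 - d) / len(pages) + d * incoming
        pr = new
    return pr

def combined_score(pr, content, alpha=0.5):
    """alpha weights the link score against the content score."""
    return {p: alpha * pr[p] + (1 - alpha) * content.get(p, 0.0) for p in pr}

links = {"A": ["B", "C"], "B": ["C"], "C": ["A"]}
content = {"A": 0.9, "B": 0.2, "C": 0.4}   # e.g., content-analysis output
scores = combined_score(pagerank(links), content, alpha=0.3)
best = max(scores, key=scores.get)
print(best)  # -> A: strong content score outweighs its link score here
```

Varying `alpha` reproduces the paper's observation in miniature: a heavily link-weighted ranking can surface well-connected but off-topic pages, while the content term pulls topical pages back up.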

  14. An Ontology of Quality Initiatives and a Model for Decentralized, Collaborative Quality Management on the (Semantic) World Wide Web

    Science.gov (United States)

    2001-01-01

    This editorial provides a model of how quality initiatives concerned with health information on the World Wide Web may in the future interact with each other. This vision fits into the evolving "Semantic Web" architecture - i.e., the prospect that the World Wide Web may evolve from a mess of unstructured, human-readable information sources into a global knowledge base with an additional layer providing richer and more meaningful relationships between resources. One first prerequisite for forming such a "Semantic Web" or "web of trust" among the players active in quality management of health information is that these initiatives make statements about themselves and about each other in a machine-processable language. I present a concrete model of how this collaboration could look, and provide some recommendations on what the role of the World Health Organization (WHO) and other policy makers in this framework could be. PMID:11772549
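The "statements about themselves and about each other" idea can be sketched with simple (subject, predicate, object) triples, the basic data model of the Semantic Web, plus a query over chains of trust. The initiative names, predicates, and site below are invented, not taken from the editorial.

```python
# Machine-processable statements, one triple each.
triples = [
    ("InitiativeA", "accredits", "HealthSiteX"),
    ("InitiativeB", "endorses", "InitiativeA"),
    ("WHO", "recognizes", "InitiativeB"),
]

def query(triples, subject=None, predicate=None, obj=None):
    """Return triples matching the given (possibly partial) pattern."""
    return [t for t in triples
            if (subject is None or t[0] == subject)
            and (predicate is None or t[1] == predicate)
            and (obj is None or t[2] == obj)]

def trusted_by_chain(triples, root):
    """Initiatives reachable from `root` via endorsement/recognition links."""
    trusted, frontier = set(), {root}
    while frontier:
        node = frontier.pop()
        for _, pred, obj in query(triples, subject=node):
            if pred in ("endorses", "recognizes") and obj not in trusted:
                trusted.add(obj)
                frontier.add(obj)
    return trusted

print(trusted_by_chain(triples, "WHO"))
```

A policy maker such as the WHO acting as a trust root is exactly the kind of decision this data model leaves open: the triples only record who says what about whom, and each consumer chooses which chains to believe.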

  15. The World Wide Web and Technology Transfer at NASA Langley Research Center

    Science.gov (United States)

    Nelson, Michael L.; Bianco, David J.

    1994-01-01

    NASA Langley Research Center (LaRC) began using the World Wide Web (WWW) in the summer of 1993, becoming the first NASA installation to provide a Center-wide home page. This coincided with a reorganization of LaRC to provide a more concentrated focus on technology transfer to both aerospace and non-aerospace industry. Use of the WWW and NCSA Mosaic not only provides automated information dissemination, but also allows for the implementation, evolution and integration of many technology transfer applications. This paper describes several of these innovative applications, including the on-line presentation of the entire Technology Opportunities Showcase (TOPS), an industrial partnering showcase that exists on the Web long after the actual 3-day event ended. During its first year on the Web, LaRC also developed several WWW-based information repositories. The Langley Technical Report Server (LTRS), a technical paper delivery system with integrated searching and retrieval, has proved to be quite popular. The NASA Technical Report Server (NTRS), an outgrowth of LTRS, provides uniform access to many logically similar, yet physically distributed NASA report servers. WWW is also the foundation of the Langley Software Server (LSS), an experimental software distribution system which will distribute LaRC-developed software with the possible phase-out of NASA's COSMIC program. In addition to the more formal technology distribution projects, WWW has been successful in connecting people with technologies and people with other people. With the completion of the LaRC reorganization, the Technology Applications Group, charged with interfacing with non-aerospace companies, opened for business with a popular home page.

  16. Lexical Link Analysis Application: Improving Web Service to Acquisition Visibility Portal Phase II

    Science.gov (United States)

    2014-04-30

    Eleventh Annual Acquisition Research Symposium, Thursday Sessions, Volume II. Lexical Link Analysis Application: Improving Web Service to... Report date: 30 APR 2014; dates covered: 2014. ...vocabulary or lexicon, to describe the attributes and surrounding environment of the system. Lexical Link Analysis (LLA) is a form of text mining in which

  17. Application of World Wide Web (W3) Technologies in Payload Operations

    Science.gov (United States)

    Sun, Charles; Windrem, May; Picinich, Lou

    1996-01-01

    World Wide Web (W3) technologies are considered in relation to their application to space missions. It is considered that such technologies, including the hypertext transfer protocol and the Java object-oriented language, offer a powerful and relatively inexpensive framework for distributed application software development. The suitability of these technologies for payload monitoring systems development is discussed, and the experience gained from the development of an insect habitat monitoring system based on W3 technologies is reported.

  18. Software Project Management and Measurement on the World-Wide-Web (WWW)

    Science.gov (United States)

    Callahan, John; Ramakrishnan, Sudhaka

    1996-01-01

    We briefly describe a system for forms-based, work-flow management that helps members of a software development team overcome geographical barriers to collaboration. Our system, called the Web Integrated Software Environment (WISE), is implemented as a World-Wide-Web service that allows for management and measurement of software development projects based on dynamic analysis of change activity in the workflow. WISE tracks issues in a software development process, provides informal communication between the users with different roles, supports to-do lists, and helps in software process improvement. WISE minimizes the time devoted to metrics collection and analysis by providing implicit delivery of messages between users based on the content of project documents. The use of a database in WISE is hidden from the users who view WISE as maintaining a personal 'to-do list' of tasks related to the many projects on which they may play different roles.

  19. Tapping the Resources of the World Wide Web for Inquiry in Middle Schools.

    Science.gov (United States)

    Windschitl, Mark; Irby, Janet

    1999-01-01

    Argues for the cautiously expanded use of the World Wide Web for inquiry across the middle school curriculum, noting how the Internet can be used in schools. Describes the Internet and appraises its distractions and academic utility, identifying features that support student inquiry in science, mathematics, social studies, and language arts. (JPB)

  20. Cardiac Resynchronization Therapy Online: What Patients Find when Searching the World Wide Web.

    Science.gov (United States)

    Modi, Minal; Laskar, Nabila; Modi, Bhavik N

    2016-06-01

    To objectively assess the quality of information available on the World Wide Web on cardiac resynchronization therapy (CRT). Patients frequently search the internet regarding their healthcare issues. It has been shown that patients seeking information can help or hinder their healthcare outcomes depending on the quality of information consulted. On the internet, this information can be produced and published by anyone, resulting in the risk of patients accessing inaccurate and misleading information. The search term "Cardiac Resynchronisation Therapy" was entered into the three most popular search engines and the first 50 pages from each were pooled and analyzed, after excluding websites inappropriate for objective review. The "LIDA" instrument (a validated tool for assessing the quality of healthcare information websites) was used to generate scores for Accessibility, Reliability, and Usability. Readability was assessed using the Flesch Reading Ease Score (FRES). Of the 150 web-links, 41 sites met the eligibility criteria. The sites were assessed using the LIDA instrument and the FRES. The mean total LIDA score for all the websites assessed was 123.5 of a possible 165 (74.8%). The average Accessibility score was 50.1 of 60 (84.3%), the average Usability score was 41.4 of 54 (76.6%), the average Reliability score was 31.5 of 51 (61.7%), and the average FRES was 41.8. There was significant variability among sites and, interestingly, there was no correlation between a site's search engine ranking and its scores. This study has illustrated the variable quality of online material on the topic of CRT. Furthermore, there was no apparent correlation between highly ranked, popular websites and their quality. Healthcare professionals should be encouraged to guide their patients toward online material that contains reliable information. © 2016 Wiley Periodicals, Inc.
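The FRES used in this study has a standard published formula: 206.835 − 1.015 × (words/sentences) − 84.6 × (syllables/words). A sketch follows; the syllable counter is a crude vowel-group heuristic (real readability tools use dictionaries), so scores are approximate, and the sample sentences are invented.

```python
import re

def count_syllables(word):
    """Rough heuristic: each run of vowels counts as one syllable."""
    groups = re.findall(r"[aeiouy]+", word.lower())
    return max(1, len(groups))

def flesch_reading_ease(text):
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (206.835
            - 1.015 * (len(words) / len(sentences))
            - 84.6 * (syllables / len(words)))

easy = flesch_reading_ease("The cat sat. The dog ran. It was fun.")
hard = flesch_reading_ease(
    "Cardiac resynchronization therapy necessitates multidisciplinary "
    "periprocedural electrophysiological evaluation.")
print(round(easy, 1), round(hard, 1))
```

Higher FRES means easier reading (90+ is very easy, below 30 is very difficult), so the study's average of 41.8 places these CRT sites at a difficult college-level readability.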

  1. Spiders and Worms and Crawlers, Oh My: Searching on the World Wide Web.

    Science.gov (United States)

    Eagan, Ann; Bender, Laura

    Searching on the world wide web can be confusing. A myriad of search engines exist, often with little or no documentation, and many of these search engines work differently from the standard search engines people are accustomed to using. Intended for librarians, this paper defines search engines, directories, spiders, and robots, and covers basics…

  2. Integration of Web mining and web crawler: Relevance and State of Art

    OpenAIRE

Subhendu Kumar Pani; Deepak Mohapatra; Bikram Keshari Ratha

    2010-01-01

This study presents the role of the web crawler in the web mining environment. As the growth of the World Wide Web has exceeded all expectations, research on Web mining is growing rapidly. Web mining is a research topic that combines two active research areas: Data Mining and the World Wide Web. The World Wide Web is therefore a very promising area for data mining research. Search engines based on a web crawling framework are also used in web mining to find the interlinked web pages. This paper discu...

  3. Dynamic Link Inclusion in Online PDF Journals

    OpenAIRE

    Probets, Steve; Brailsford, David; Carr, Les; Hall, Wendy

    1998-01-01

Two complementary de facto standards for the publication of electronic documents are HTML on the World Wide Web and Adobe's PDF (Portable Document Format) language for use with Acrobat viewers. Both these formats provide support for hypertext features to be embedded within documents. We present a method that allows links and other hypertext material to be kept in an abstract form in separate link databases. The links can then be interpreted or compiled at any stage and applied, in the correct ...

  4. Information consumerism on the World Wide Web: implications for dermatologists and patients.

    Science.gov (United States)

    Travers, Robin L

    2002-09-01

The World Wide Web (WWW) is continuing to grow exponentially both in terms of numbers of users and numbers of web pages. There is a trend toward the increasing use of the WWW for medical educational purposes, among physicians and patients alike. The multimedia capabilities of this evolving medium are particularly relevant to visual medical specialties such as dermatology. The origins of information consumerism on the WWW are examined, and the public health issues surrounding dermatologic information and misinformation, and how consumers navigate through the WWW, are reviewed. The economic realities of medical information as a "capital good," and the impact this has on dermatologic information sources on the WWW, are also discussed. Finally, strategies for guiding consumers and ourselves toward credible medical information sources on the WWW are outlined.

  5. An open source web interface for linking models to infrastructure system databases

    Science.gov (United States)

    Knox, S.; Mohamed, K.; Harou, J. J.; Rheinheimer, D. E.; Medellin-Azuara, J.; Meier, P.; Tilmant, A.; Rosenberg, D. E.

    2016-12-01

Models of networked engineered resource systems such as water or energy systems are often built collaboratively by developers from different domains working at different locations. These models can be linked to large-scale real-world databases, and they are constantly being improved and extended. As the development and application of these models becomes more sophisticated, and the computing power required for simulations and/or optimisations increases, so does the need for online services and tools which enable the efficient development and deployment of these models. Hydra Platform is an open source, web-based data management system which allows modellers of network-based models to remotely store network topology and associated data in a generalised manner, allowing it to serve multiple disciplines. Hydra Platform exposes a web API using JSON to allow external programs (referred to as 'Apps') to interact with its stored networks and perform actions such as importing data, running models, or exporting the networks to different formats. Hydra Platform supports multiple users accessing the same network and has a suite of functions for managing users and data. We present ongoing development in Hydra Platform, the Hydra Web User Interface, through which users can collaboratively manage network data and models in a web browser. The web interface allows multiple users to graphically access, edit and share their networks, run apps and view results. Through apps, which are located on the server, the web interface can give users access to external data sources and models without the need to install or configure any software. This also ensures model results can be reproduced by removing platform or version dependence. Managing data and deploying models via the web interface provides a way for multiple modellers to collaboratively manage data, deploy and monitor model runs, and analyse results.
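The abstract describes external 'Apps' exchanging network topology with Hydra Platform through a JSON web API. As an illustration only (the field names below are hypothetical, not Hydra Platform's actual schema), such a payload might be assembled like this:

```python
import json

def build_network_payload(name, node_names, links):
    """Assemble a JSON document describing a network topology.

    `node_names` is a list of node labels; `links` is a list of
    (source_index, target_index) pairs. All field names are illustrative
    placeholders, not Hydra Platform's real schema.
    """
    return json.dumps({
        "network": {
            "name": name,
            "nodes": [{"id": i, "name": n} for i, n in enumerate(node_names)],
            "links": [{"source": s, "target": t} for s, t in links],
        }
    })
```

A client App would POST a document like this to the server and receive results in the same JSON form.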

  6. SAMP: Application Messaging for Desktop and Web Applications

    Science.gov (United States)

    Taylor, M. B.; Boch, T.; Fay, J.; Fitzpatrick, M.; Paioro, L.

    2012-09-01

SAMP, the Simple Application Messaging Protocol, is a technology which allows tools to communicate. It is deployed in a number of desktop astronomy applications including ds9, Aladin, TOPCAT, World Wide Telescope and numerous others, and makes it straightforward for a user to treat a selection of these tools as a loosely integrated suite, combining the most powerful features of each. It has been widely used within Virtual Observatory contexts, but is equally suitable for non-VO use. Enabling SAMP communication from web-based content has long been desirable. An obvious use case is arranging for a click on a web page link to deliver an image, table or spectrum to a desktop viewer, but more sophisticated two-way interaction with rich internet applications would also be possible. Use from the web, however, presents some problems related to browser sandboxing. We explain how the SAMP Web Profile, introduced in version 1.3 of the SAMP protocol, addresses these issues, and discuss the resulting security implications.
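A SAMP message is essentially a map carrying an mtype that names the operation plus its parameters. The "deliver an image to a desktop viewer" use case above could be encoded roughly as follows (a sketch of the message structure only; actually sending it requires a registered connection to a SAMP hub):

```python
def image_load_message(url, name="web-delivered image"):
    """Build a SAMP-style message map for delivering a FITS image.

    "image.load.fits" is a commonly used SAMP mtype; the exact parameters
    an application honours may vary, so treat this as illustrative.
    """
    return {
        "samp.mtype": "image.load.fits",
        "samp.params": {"url": url, "name": name},
    }
```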

  7. Compact Optical Discs and the World Wide Web: Two Mediums in Digitized Information Delivery Services

    Directory of Open Access Journals (Sweden)

    Ziyu Lin

    1999-10-01

    Full Text Available

Pages: 40-52

Compact optical discs (CDs) and the World Wide Web (the Web) are two mechanisms that contemporary libraries extensively use for digitized information storage, dissemination, and retrieval. The Web features unparalleled global accessibility, free from many previously known temporal and spatial restrictions. Its real-time update capability is impossible for CDs. Web-based information delivery can reduce a local library's cost of hardware and software ownership and management, and provide one-to-one customization to better serve a library's clients. The current limitations of the Web include inadequate speed in data transmission, particularly for multimedia applications, and insufficient reliability, search capabilities, and security. In comparison, speed, quality, portability, and reliability are the current advantages of CDs over the Web. These features, together with trends in the PC industry and market, suggest that CDs will persist and continue to develop. CD/Web hybrids can combine the best of both developing mechanisms and offer optimal results. Through a comparison of CDs and the Web, it is argued that the functionality and unique features of a technology determine its future.

  8. Comparison of student outcomes and preferences in a traditional vs. World Wide Web-based baccalaureate nursing research course.

    Science.gov (United States)

    Leasure, A R; Davis, L; Thievon, S L

    2000-04-01

The purpose of this project was to compare student outcomes in an undergraduate research course taught using both World Wide Web-based distance learning technology and traditional pedagogy. Reasons given for enrolling in the traditional classroom section included the perception of increased opportunity for interaction, decreased opportunity to procrastinate, immediate feedback, and more meaningful learning activities. Reasons for selecting the Web-based section included cost, convenience, and flexibility. Overall, there was no significant difference in examination scores between the two groups on the three multiple-choice examinations or for the course grades (t = -.96, P = .343). Students who reported that they were self-directed and had the ability to maintain their own pace and avoid procrastination were most suited to Web-based courses. Web-based classes can help provide opportunities for methods of communication that are not typically nurtured in traditional classroom settings. Secondary benefits of the World Wide Web-based course were increased student confidence with the computer and an introduction to skills and opportunities the students would not have had in the classroom. Additionally, over time and with practice, students' writing skills improved.

  9. Beyond Piñatas, Fortune Cookies, and Wooden Shoes: Using the World Wide Web to Help Children Explore the Whole Wide World

    Science.gov (United States)

    Kirkwood, Donna; Shulsky, Debra; Willis, Jana

    2014-01-01

    The advent of technology and access to the internet through the World Wide Web have stretched the traditional ways of teaching social studies beyond classroom boundaries. This article explores how teachers can create authentic and contextualized cultural studies experiences for young children by integrating social studies and technology. To…

  10. E-Learning and Role of World Wide Web in E-Learning

    OpenAIRE

    Jahankhani, Hossein

    2012-01-01

This paper reviews some aspects of E-learning through the World Wide Web. The E-revolution, as a new phenomenon, has influenced society through its means and strategies. E-learning is one of the sub-products of the E-revolution, aimed at making learning more convenient and effective. Over time the Internet has become a source of information, and people have started to learn through the Internet instead of from books. It gives them the flexibility of remote access at any time. Working people and students are inspired by th...

  11. CNA web server: rigidity theory-based thermal unfolding simulations of proteins for linking structure, (thermo-)stability, and function.

    Science.gov (United States)

    Krüger, Dennis M; Rathi, Prakash Chandra; Pfleger, Christopher; Gohlke, Holger

    2013-07-01

    The Constraint Network Analysis (CNA) web server provides a user-friendly interface to the CNA approach developed in our laboratory for linking results from rigidity analyses to biologically relevant characteristics of a biomolecular structure. The CNA web server provides a refined modeling of thermal unfolding simulations that considers the temperature dependence of hydrophobic tethers and computes a set of global and local indices for quantifying biomacromolecular stability. From the global indices, phase transition points are identified where the structure switches from a rigid to a floppy state; these phase transition points can be related to a protein's (thermo-)stability. Structural weak spots (unfolding nuclei) are automatically identified, too; this knowledge can be exploited in data-driven protein engineering. The local indices are useful in linking flexibility and function and to understand the impact of ligand binding on protein flexibility. The CNA web server robustly handles small-molecule ligands in general. To overcome issues of sensitivity with respect to the input structure, the CNA web server allows performing two ensemble-based variants of thermal unfolding simulations. The web server output is provided as raw data, plots and/or Jmol representations. The CNA web server, accessible at http://cpclab.uni-duesseldorf.de/cna or http://www.cnanalysis.de, is free and open to all users with no login requirement.

  12. Microbiome-wide association studies link dynamic microbial consortia to disease

    Energy Technology Data Exchange (ETDEWEB)

    Gilbert, Jack A.; Quinn, Robert A.; Debelius, Justine; Xu, Zhenjiang Z.; Morton, James; Garg, Neha; Jansson, Janet K.; Dorrestein, Pieter C.; Knight, Rob

    2016-07-06

    Rapid advances in DNA sequencing, metabolomics, proteomics and computational tools are dramatically increasing access to the microbiome and identification of its links with disease. In particular, time-series studies and multiple molecular perspectives are facilitating microbiome-wide association studies, which are analogous to genome-wide association studies. Early findings point to actionable outcomes of microbiome-wide association studies, although their clinical application has yet to be approved. An appreciation of the complexity of interactions among the microbiome and the host's diet, chemistry and health, as well as determining the frequency of observations that are needed to capture and integrate this dynamic interface, is paramount for developing precision diagnostics and therapies that are based on the microbiome.

  13. Trends in the wide web converting markets for UV curing

    International Nuclear Information System (INIS)

    Fisher, R.

    1999-01-01

    As we prepare to enter a new decade, the use of ultraviolet (UV) energy to initiate the polymerization of coatings in the wide web segment of the Converting industry continues to increase. As is typical in the Converting industry, while many of the significant advances in technology have been developed around the world, they have been driven initially by the Western European markets. This was true with regards to the introduction of water-borne Pressure Sensitive Adhesives and thermal curing 100% solids silicone release coatings during the late 1970s and early 1980s, but this trend has changed with regards to the current state-of-the-art in UV curing

  14. Development of a web-based, underground coalmine gas outburst information management system

    Energy Technology Data Exchange (ETDEWEB)

    Naj Aziz; Richard Caladine; Lucia Tome; Ken Cram; Devendra Vyas [University of Wollongong, NSW (Australia)

    2007-04-15

The primary objective of this project was to develop an online coal mine outburst information management system to provide the coal mining industry with the necessary information and knowledge on outbursts via the World Wide Web. The website has been constructed using the standard web format and is accessible with standard web browsers at http://www.uow.edu.au/eng/outburst. The website contains 85 papers from conferences held in Australia, dating as far back as the 1980s, various seminar presentations, more than 250 references, a limited but important collection of international papers, direct links to ACARP and NERRDC publication lists, links to several leading private and government organisations of particular interest in mine gas and outburst control, and a forum for discussion.

  15. Affordances of students' using the World Wide Web as a publishing medium in project-based learning environments

    Science.gov (United States)

    Bos, Nathan Daniel

This dissertation investigates the emerging affordance of the World Wide Web as a place for high school students to become authors and publishers of information. Two empirical studies lay groundwork for student publishing by examining learning issues related to audience adaptation in writing, motivation and engagement with hypermedia, design, problem-solving, and critical evaluation. Two models of student publishing on the World Wide Web were investigated over the course of two 11th-grade project-based science curriculums. In the first curricular model, students worked in pairs to design informative hypermedia projects about infectious diseases that were published on the Web. Four case studies were written, drawing on both product- and process-related data sources. Four theoretically important findings are illustrated through these cases: (1) multimedia, especially graphics, seemed to catalyze some students' design processes by affecting the sequence of their design process and by providing a connection between the science content and their personal interest areas, (2) hypermedia design can demand high levels of analysis and synthesis of science content, (3) students can learn to think about science content representation through engagement with challenging design tasks, and (4) students' consideration of an outside audience can be facilitated by teacher-given design principles. The second Web-publishing model examines how students critically evaluate scientific resources on the Web, and how students can contribute to the Web's organization and usability by publishing critical reviews. Students critically evaluated Web resources using a four-part scheme: summarization of content, evaluation of credibility, evaluation of organizational structure, and evaluation of appearance. Content analyses comparing students' reviews and reviewed Web documents showed that students were proficient at summarizing content of Web documents, identifying their publishing

  16. Caught in the Web

    Energy Technology Data Exchange (ETDEWEB)

    Gillies, James

    1995-06-15

The World-Wide Web may have taken the Internet by storm, but many people would be surprised to learn that it owes its existence to CERN. Around half the world's particle physicists come to CERN for their experiments, and the Web is the result of their need to share information quickly and easily on a global scale. Six years after Tim Berners-Lee's inspired idea to marry hypertext to the Internet in 1989, CERN is handing over future Web development to the World-Wide Web Consortium, run by the French National Institute for Research in Computer Science and Control, INRIA, and the Laboratory for Computer Science of the Massachusetts Institute of Technology, MIT, leaving itself free to concentrate on physics. The Laboratory marked this transition with a conference designed to give a taste of what the Web can do, whilst firmly stamping it with the label "Made in CERN". Over 200 European journalists and educationalists came to CERN on 8 - 9 March for the World-Wide Web Days, resulting in wide media coverage. The conference was opened by UK Science Minister David Hunt, who stressed the importance of fundamental research in generating new ideas. "Who could have guessed 10 years ago", he said, "that particle physics research would lead to a communication system which would allow every school to have the biggest library in the world in a single computer?". In his introduction, the Minister also pointed out that "CERN and other basic research laboratories help to break new technological ground and sow the seeds of what will become mainstream manufacturing in the future." Learning the jargon is often the hardest part of coming to grips with any new invention, so CERN put it at the top of the agenda. Jacques Altaber, who helped introduce the Internet to CERN in the early 1980s, explained that without the Internet, the Web couldn't exist. The Internet began as a US Defense Department research project in the 1970s and has grown into a global network-of-networks linking some

  17. Operational Marine Data Acquisition and Delivery Powered by Web and Geospatial Standards

    Science.gov (United States)

    Thomas, R.; Buck, J. J. H.

    2015-12-01

As novel sensor types and new platforms are deployed to monitor the global oceans, the volumes of scientific and environmental data collected in the marine context are rapidly growing. In order to use these data both in traditional operational modes and in innovative "Big Data" applications, the data must be readily understood by software agents. One approach to achieving this is the application of both World Wide Web and Open Geospatial Consortium standards: namely Linked Data [1] and Sensor Web Enablement [2] (SWE). The British Oceanographic Data Centre (BODC) is adopting this strategy in a number of European Commission funded projects (NETMAR; SenseOCEAN; Ocean Data Interoperability Platform - ODIP; and AtlantOS) to combine its existing data archiving architecture with SWE components (such as Sensor Observation Services) and a Linked Data interface. These will evolve data management and data transfer from a process that requires significant manual intervention to an automated operational process enabling the rapid, standards-based ingestion and delivery of data. This poster will show the current capabilities of BODC and the status of the ongoing implementation of this strategy. References: 1. World Wide Web Consortium. (2013). Linked Data. Available: http://www.w3.org/standards/semanticweb/data. Last accessed 7th April 2015. 2. Open Geospatial Consortium. (2014). Sensor Web Enablement (SWE). Available: http://www.opengeospatial.org/ogc/markets-technologies/swe. Last accessed 8th October 2014.
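A Sensor Observation Service of the kind mentioned above is typically queried over HTTP with key-value-pair requests. A hedged sketch (the endpoint and vocabulary values are placeholders; the parameter names follow the SOS 2.0 KVP binding):

```python
from urllib.parse import urlencode

def sos_get_observation_url(base_url, offering, observed_property):
    """Build an OGC SOS 2.0 GetObservation request URL (KVP binding).

    `base_url`, `offering` and `observed_property` are caller-supplied
    placeholders; a real service defines its own offering identifiers.
    """
    params = {
        "service": "SOS",
        "version": "2.0.0",
        "request": "GetObservation",
        "offering": offering,
        "observedProperty": observed_property,
    }
    return base_url + "?" + urlencode(params)
```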

  18. How Students Evaluate Information and Sources when Searching the World Wide Web for Information

    Science.gov (United States)

    Walraven, Amber; Brand-Gruwel, Saskia; Boshuizen, Henny P. A.

    2009-01-01

    The World Wide Web (WWW) has become the biggest information source for students while solving information problems for school projects. Since anyone can post anything on the WWW, information is often unreliable or incomplete, and it is important to evaluate sources and information before using them. Earlier research has shown that students have…

  19. World Wide Webs: Crossing the Digital Divide through Promotion of Public Access

    Science.gov (United States)

    Coetzee, Liezl

    “As Bill Gates and Steve Case proclaim the global omnipresence of the Internet, the majority of non-Western nations and 97 per cent of the world's population remain unconnected to the net for lack of money, access, or knowledge. This exclusion of so vast a share of the global population from the Internet sharply contradicts the claims of those who posit the World Wide Web as a ‘universal' medium of egalitarian communication.” (Trend 2001:2)

  20. Enhancing Enterprise 2.0 Ecosystems Using Semantic Web and Linked Data Technologies:The SemSLATES Approach

    Science.gov (United States)

    Passant, Alexandre; Laublet, Philippe; Breslin, John G.; Decker, Stefan

    During the past few years, various organisations embraced the Enterprise 2.0 paradigms, providing their employees with new means to enhance collaboration and knowledge sharing in the workplace. However, while tools such as blogs, wikis, and principles like free-tagging or content syndication allow user-generated content to be more easily created and shared in the enterprise, in spite of some social issues, these new practices lead to various problems in terms of knowledge management. In this chapter, we provide an approach based on Semantic Web and Linked Data technologies for (1) integrating heterogeneous data from distinct Enterprise 2.0 applications, and (2) bridging the gap between raw text and machine-readable Linked Data. We discuss the theoretical background of our proposal as well as a practical case-study in enterprise, focusing on the various add-ons that have been provided to the original information system, as well as presenting how public Linked Open Data from the Web can be used to enhance existing Enterprise 2.0 ecosystems.

  1. Digital libraries and World Wide Web sites and page persistence.

    Directory of Open Access Journals (Sweden)

    Wallace Koehler

    1999-01-01

Full Text Available

Web pages and Web sites, some argue, can either be collected as elements of digital or hybrid libraries, or, as others would have it, the WWW is itself a library. We begin with the assumption that Web pages and Web sites can be collected and categorized. The paper explores the proposition that the WWW constitutes a library. We conclude that the Web is not a digital library. However, its component parts can be aggregated and included as parts of digital library collections. These, in turn, can be incorporated into "hybrid libraries": libraries with both traditional and digital collections. Material on the Web can be organized and managed. Native documents can be collected in situ, disseminated, distributed, catalogued, indexed, and controlled in traditional library fashion. The Web therefore is not a library, but material for library collections is selected from the Web. That said, the Web and its component parts are dynamic. Web documents undergo two kinds of change. The first type, the type addressed in this paper, is "persistence": the existence or disappearance of Web pages and sites, or in a word, the lifecycle of Web documents. "Intermittence" is a variant of persistence, defined as the disappearance but reappearance of Web documents. At any given time, about five percent of Web pages are intermittent, which is to say they are gone but will return. Over time a Web collection erodes. Based on a 120-week longitudinal study of a sample of Web documents, it appears that the half-life of a Web page is somewhat less than two years and the half-life of a Web site is somewhat more than two years. That is to say, an unweeded Web document collection created two years ago would contain the same number of URLs, but only half of those URLs would point to content. The second type of change Web documents experience is change in Web page or Web site content.
Again based on the Web document samples, very nearly all Web pages and sites undergo some
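The half-life figures above imply simple exponential decay of an unweeded collection; the surviving fraction of URLs after a given age can be estimated as:

```python
def surviving_fraction(age_years, half_life_years=2.0):
    """Fraction of Web documents in a collection still resolving after
    age_years, assuming exponential decay with the given half-life
    (roughly two years for both pages and sites, per the study above)."""
    return 0.5 ** (age_years / half_life_years)
```

By this estimate a two-year-old collection retains about 50% live URLs and a four-year-old one about 25%, matching the paper's characterization of collection erosion.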

  2. Web Page Recommendation Using Web Mining

    OpenAIRE

    Modraj Bhavsar; Mrs. P. M. Chavan

    2014-01-01

On the World Wide Web, various kinds of content are generated in huge amounts, so web recommendation has become an important part of web applications in order to give relevant results to users. On the web, different kinds of recommendations are made available to users every day, including images, video, audio, query suggestions and web pages. In this paper we aim at providing a framework for web page recommendation. 1) First we describe the basics of web mining and the types of web mining. 2) Details of each...

  3. Greater freedom of speech on Web 2.0 correlates with dominance of views linking vaccines to autism.

    Science.gov (United States)

    Venkatraman, Anand; Garg, Neetika; Kumar, Nilay

    2015-03-17

    It is suspected that Web 2.0 web sites, with a lot of user-generated content, often support viewpoints that link autism to vaccines. We assessed the prevalence of the views supporting a link between vaccines and autism online by comparing YouTube, Google and Wikipedia with PubMed. Freedom of speech is highest on YouTube and progressively decreases for the others. Support for a link between vaccines and autism is most prominent on YouTube, followed by Google search results. It is far lower on Wikipedia and PubMed. Anti-vaccine activists use scientific arguments, certified physicians and official-sounding titles to gain credibility, while also leaning on celebrity endorsement and personalized stories. Online communities with greater freedom of speech lead to a dominance of anti-vaccine voices. Moderation of content by editors can offer balance between free expression and factual accuracy. Health communicators and medical institutions need to step up their activity on the Internet. Copyright © 2015 Elsevier Ltd. All rights reserved.

  4. Breast cancer on the world wide web: cross sectional survey of quality of information and popularity of websites

    Science.gov (United States)

    Meric, Funda; Bernstam, Elmer V; Mirza, Nadeem Q; Hunt, Kelly K; Ames, Frederick C; Ross, Merrick I; Kuerer, Henry M; Pollock, Raphael E; Musen, Mark A; Singletary, S Eva

    2002-01-01

Objectives: To determine the characteristics of popular breast cancer related websites and whether more popular sites are of higher quality. Design: The search engine Google was used to generate a list of websites about breast cancer. Google ranks search results by measures of link popularity: the number of links to a site from other sites. The top 200 sites returned in response to the query "breast cancer" were divided into "more popular" and "less popular" subgroups by three different measures of link popularity: Google rank and the number of links reported independently by Google and by AltaVista (another search engine). Main outcome measures: Type and quality of content. Results: More popular sites according to Google rank were more likely than less popular ones to contain information on ongoing clinical trials (27% v 12%, P=0.01), results of trials (12% v 3%, P=0.02), and opportunities for psychosocial adjustment (48% v 23%). More popular sites by number of linking sites were also more likely to provide updates on other breast cancer research, information on legislation and advocacy, and a message board service. Measures of quality such as display of authorship, attribution or references, currency of information, and disclosure did not differ between groups. Conclusions: Popularity of websites is associated with type rather than quality of content. Sites that include content correlated with popularity may best meet the public's desire for information about breast cancer.

What is already known on this topic: Patients are using the world wide web to search for health information. Breast cancer is one of the most popular search topics. Characteristics of popular websites may reflect the information needs of patients.

What this study adds: Type rather than quality of content correlates with popularity of websites. Measures of quality correlate with accuracy of medical information. PMID:11884322
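The link-popularity ranking the study relies on is based on the PageRank idea: a page's score is derived iteratively from the scores of the pages linking to it. A toy sketch of that computation (a simplified power iteration, not Google's production algorithm):

```python
def pagerank(links, damping=0.85, iterations=50):
    """Iteratively compute PageRank scores for a small link graph.

    `links` maps each page to the list of pages it links to. Scores
    always sum to 1; pages with more (and better-ranked) inbound links
    score higher.
    """
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iterations):
        new = {p: (1.0 - damping) / n for p in pages}
        for p in pages:
            out = links[p]
            if out:  # share this page's rank among its outgoing links
                for q in out:
                    new[q] += damping * rank[p] / len(out)
            else:    # dangling page: spread its rank evenly
                for q in pages:
                    new[q] += damping * rank[p] / n
        rank = new
    return rank
```

On a three-page graph where both "b" and "c" link to "a", page "a" ends up with the highest score, mirroring the "number of linking sites" popularity measure used in the study.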

  5. Parasites in food webs: the ultimate missing links

    Science.gov (United States)

    Lafferty, Kevin D.; Allesina, Stefano; Arim, Matias; Briggs, Cherie J.; De Leo, Giulio A.; Dobson, Andrew P.; Dunne, Jennifer A.; Johnson, Pieter T.J.; Kuris, Armand M.; Marcogliese, David J.; Martinez, Neo D.; Memmott, Jane; Marquet, Pablo A.; McLaughlin, John P.; Mordecai, Eerin A.; Pascual, Mercedes; Poulin, Robert; Thieltges, David W.

    2008-01-01

    Parasitism is the most common consumer strategy among organisms, yet only recently has there been a call for the inclusion of infectious disease agents in food webs. The value of this effort hinges on whether parasites affect food-web properties. Increasing evidence suggests that parasites have the potential to uniquely alter food-web topology in terms of chain length, connectance and robustness. In addition, parasites might affect food-web stability, interaction strength and energy flow. Food-web structure also affects infectious disease dynamics because parasites depend on the ecological networks in which they live. Empirically, incorporating parasites into food webs is straightforward. We may start with existing food webs and add parasites as nodes, or we may try to build food webs around systems for which we already have a good understanding of infectious processes. In the future, perhaps researchers will add parasites while they construct food webs. Less clear is how food-web theory can accommodate parasites. This is a deep and central problem in theoretical biology and applied mathematics. For instance, is representing parasites with complex life cycles as a single node equivalent to representing other species with ontogenetic niche shifts as a single node? Can parasitism fit into fundamental frameworks such as the niche model? Can we integrate infectious disease models into the emerging field of dynamic food-web modelling? Future progress will benefit from interdisciplinary collaborations between ecologists and infectious disease biologists.

  6. From theater to the world wide web--a new online era for surgical education.

    LENUS (Irish Health Repository)

    O'Leary, D Peter

    2012-07-01

    Traditionally, surgical education has been confined to operating and lecture theaters. Access to the World Wide Web and services, such as YouTube and iTunes, has expanded enormously. Each week throughout Ireland, nonconsultant hospital doctors work hard to create presentations for surgical teaching. Once presented, these valuable presentations are often never used again.

  7. Enhancement of shear strength and ductility for reinforced concrete wide beams due to web reinforcement

    Directory of Open Access Journals (Sweden)

    M. Said

    2013-12-01

    Full Text Available The shear behavior of reinforced concrete wide beams was investigated. The experimental program consisted of nine beams of 29 MPa concrete strength tested with a shear span-depth ratio equal to 3.0. One of the tested beams had no web reinforcement as a control specimen. The flexure mode of failure was secured for all of the specimens to allow for shear mode of failure. The key parameters covered in this investigation are the effect of the existence, spacing, amount and yield stress of the vertical stirrups on the shear capacity and ductility of the tested wide beams. The study shows that the contribution of web reinforcement to the shear capacity is significant and directly proportional to the amount and spacing of the shear reinforcement. The increase in the shear capacity ranged from 32% to 132% for the range of the tested beams compared with the control beam. High grade steel was more effective in the contribution of the shear strength of wide beams. Also, test results demonstrate that the shear reinforcement significantly enhances the ductility of the wide beams. In addition, shear resistances at failure recorded in this study are compared to the analytical strengths calculated according to the current Egyptian Code and the available international codes. The current study highlights the need to include the contribution of shear reinforcement in the Egyptian Code requirements for shear capacity of wide beams.

  8. Microbiome-wide association studies link dynamic microbial consortia to disease

    Energy Technology Data Exchange (ETDEWEB)

    Gilbert, Jack A.; Quinn, Robert A.; Debelius, Justine; Xu, Zhenjiang Z.; Morton, James; Garg, Neha; Jansson, Janet K.; Dorrestein, Pieter C.; Knight, Rob

    2016-07-06

    Rapid advances in DNA sequencing, metabolomics, proteomics and computation dramatically increase accessibility of microbiome studies and identify links between the microbiome and disease. Microbial time-series and multiple molecular perspectives enable Microbiome-Wide Association Studies (MWAS), analogous to Genome-Wide Association Studies (GWAS). Rapid research advances point towards actionable results, although approved clinical tests based on MWAS are still in the future. Appreciating the complexity of interactions between diet, chemistry, health and the microbiome, and determining the frequency of observations needed to capture and integrate this dynamic interface, is paramount for addressing the need for personalized and precision microbiome-based diagnostics and therapies.

  9. Two virtual astro refresher courses on the world-wide-web

    International Nuclear Information System (INIS)

    Goldwein, Joel W.

    1997-01-01

    Purpose/Objective: The Internet offers a novel venue for providing educational material to radiation oncologists. This exhibit demonstrates its utility for providing the complete content of two past ASTRO refresher courses. Materials and Methods: The audio recording, handout and slides from the 1995 ASTRO refresher course entitled 'Radiation Therapy for Pediatric Brain Tumors; Standards of Care, Current Clinical Trials and New Directions' and the 1996 ASTRO refresher course entitled 'Internet-based communications in Radiation Oncology' were digitized and placed on an Internet World-Wide-Web site. The Web address was posted on the refresher course handout and in the meeting book ('http://goldwein1.xrt.upenn.edu/brain95.html' and 'http://goldwein1.xrt.upenn.edu/astro96/'). The computer distributing this material is an Intel-based 486 DEC50 personal computer with a 50 MHz processor running Windows NT 3.51 workstation. Software utilized to distribute the material is in the public domain and includes EWMAC's 'httpd', and Progressive Network's 'RealAudio Server' and 'Encoder'. The University's dedicated Internet connection is used to 'serve' this material. Results: The two approximately 100 minute lectures have been encoded into several 'RealAudio' files totaling 10 Megabytes in size. These files are accessible with moderate to excellent quality and speed utilizing as little as a 14.4k modem connection to the Internet. Use of 'streaming' technology provides a means for playing the audio files over the Internet after downloading only a small portion of the files. The time required to digitize the material has been approximately 40 hours, with most time related to digitizing slides from a Powerpoint presentation. Not all slides have been digitized as of this time. To date, approximately 400 accesses to this resource have been logged on the system. Seven electronic comment forms for the second course have all rated it as 'superior'. Pitfalls include the difficulty

  10. Navigational Structure on the World Wide Web: Usability Concerns, User Preferences, and "Browsing Behavior."

    Science.gov (United States)

    Frick, Theodore; Monson, John A.; Xaver, Richard F.; Kilic, Gulsen; Conley, Aaron T.; Wamey, Beatrice

    There are several approaches a World Wide Web site designer considers in developing a menu structure. One consideration is the content of the menus (what choices are available to the user). Another consideration is the physical layout of the menu structure. The physical layout of a menu may be described as being one of at least three different…

  11. A World Wide Web Human Dimensions Framework and Database for Wildlife and Forest Planning

    Science.gov (United States)

    Michael A. Tarrant; Alan D. Bright; H. Ken Cordell

    1999-01-01

    The paper describes a human dimensions framework (HDF) for application in wildlife and forest planning. The HDF is delivered via the world wide web and retrieves data on-line from the Social, Economic, Environmental, Leisure, and Attitudes (SEELA) database. The proposed HDF is guided by ten fundamental HD principles, and is applied to wildlife and forest planning using...

  12. The protozooplankton-ichthyoplankton trophic link: an overlooked aspect of aquatic food webs.

    Science.gov (United States)

    Montagnes, David J S; Dower, John F; Figueiredo, Gisela M

    2010-01-01

    Since the introduction of the microbial loop concept, awareness of the role played by protozooplankton in marine food webs has grown. By consuming bacteria, and then being consumed by metazooplankton, protozoa form a trophic link that channels dissolved organic material into the "classic" marine food chain. Beyond enhancing energy transfer to higher trophic levels, protozoa play a key role in improving the food quality of metazooplankton. Here, we consider a third role played by protozoa, but one that has received comparatively little attention: that as prey items for ichthyoplankton. For >100 years it has been known that fish larvae consume protozoa. Despite this, fisheries scientists and biological oceanographers still largely ignore protozoa when assessing the foodweb dynamics that regulate the growth and survival of larval fish. We review evidence supporting the importance of the protozooplankton-ichthyoplankton link, including examples from the amateur aquarium trade, the commercial aquaculture industry, and contemporary studies of larval fish. We then consider why this potentially important link continues to receive very little attention. We conclude by offering suggestions for quantifying the importance of the protozooplankton-ichthyoplankton trophic link, using both existing methods and new technologies.

  13. Rendimiento de los sistemas de recuperación en la world wide web: revisión metodológica.

    Directory of Open Access Journals (Sweden)

    Olvera Lobo, María Dolores

    2000-03-01

    Full Text Available This study is an attempt to establish a methodology for the evaluation of information retrieval with search engines in the World Wide Web. The method, which is explained in detail, adapts traditional evaluation techniques to the peculiarities of the Web and makes use of precision and recall scores based on the relevance of the first 20 results retrieved. This method has been successfully applied to the evaluation of ten different search engines.

    This study aims to help establish a methodology for evaluating the information retrieval performance of search tools in the World Wide Web environment. The method designed (and successfully applied) to evaluate search results is described in detail, adapting traditional evaluation techniques to the peculiarities of the Web and employing relevance-based precision and recall measures for the first 20 results retrieved.

  14. An Image Retrieval and Processing Expert System for the World Wide Web

    Science.gov (United States)

    Rodriguez, Ricardo; Rondon, Angelica; Bruno, Maria I.; Vasquez, Ramon

    1998-01-01

    This paper presents a system that is being developed in the Laboratory of Applied Remote Sensing and Image Processing at the University of P.R. at Mayaguez. It describes the components that constitute its architecture. The main elements are: a Data Warehouse, an Image Processing Engine, and an Expert System. Together, they provide a complete solution to researchers from different fields that make use of images in their investigations. Also, since it is available on the World Wide Web, it provides remote access and processing of images.

  15. From theater to the world wide web--a new online era for surgical education.

    Science.gov (United States)

    O'Leary, D Peter; Corrigan, Mark A; McHugh, Seamus M; Hill, A D; Redmond, H Paul

    2012-01-01

    Traditionally, surgical education has been confined to operating and lecture theaters. Access to the World Wide Web and services, such as YouTube and iTunes, has expanded enormously. Each week throughout Ireland, nonconsultant hospital doctors work hard to create presentations for surgical teaching. Once presented, these valuable presentations are often never used again. We aimed to compile surgical presentations online and establish a new online surgical education tool. We also sought to measure the effect of this educational tool on surgical presentation quality. Surgical presentations from Cork University Hospital and Beaumont Hospital presented between January 2010 and April 2011 were uploaded to http://www.pilgrimshospital.com/presentations. A YouTube channel and iTunes application were created. Web site hits were monitored. Quality of presentations was assessed by 4 independent senior surgical judges using a validated PowerPoint assessment form. Judges were randomly given 6 presentations; 3 presentations were pre-web site setup and 3 were post-web site setup. Once uploading commenced, presenters were informed. A total of 89 presentations have been uploaded to date. This includes 55 cases, 17 journal club, and 17 short bullet presentations. This has been associated with 46,037 web site page views. Establishment of the web site was associated with a significant improvement in the quality of presentations. Mean scores for the pre- and post-web site groups were 6.2 vs 7.7 out of 9, respectively, p = 0.037. This novel educational tool provides a unique method to enable surgical education to become more accessible to trainees, while also improving the overall quality of surgical teaching PowerPoint presentations. Copyright © 2012 Association of Program Directors in Surgery. Published by Elsevier Inc. All rights reserved.

  16. Validity and client use of information from the World Wide Web regarding veterinary anesthesia in dogs.

    Science.gov (United States)

    Hofmeister, Erik H; Watson, Victoria; Snyder, Lindsey B C; Love, Emma J

    2008-12-15

    To determine the validity of the information on the World Wide Web concerning veterinary anesthesia in dogs and to determine the methods dog owners use to obtain that information. Web-based search and client survey. 73 Web sites and 92 clients. Web sites were scored on a 5-point scale for completeness and accuracy of information about veterinary anesthesia by 3 board-certified anesthesiologists. A search for anesthetic information regarding 49 specific breeds of dogs was also performed. A survey was distributed to the clients who visited the University of Georgia Veterinary Teaching Hospital during a 4-month period to solicit data about sources used by clients to obtain veterinary medical information and the manner in which information obtained from Web sites was used. The general search identified 73 Web sites that included information on veterinary anesthesia; these sites received a mean score of 3.4 for accuracy and 2.5 for completeness. Of 178 Web sites identified through the breed-specific search, 57 (32%) indicated that a particular breed was sensitive to anesthesia. Of 83 usable, completed surveys, 72 (87%) indicated the client used the Web for veterinary medical information. Fifteen clients (18%) indicated they believed their animal was sensitive to anesthesia because of its breed. Information available on the internet regarding anesthesia in dogs is generally not complete and may be misleading with respect to risks to specific breeds. Consequently, veterinarians should appropriately educate clients regarding anesthetic risk to their particular dog.

  17. The readability of pediatric patient education materials on the World Wide Web.

    Science.gov (United States)

    D'Alessandro, D M; Kingsley, P; Johnson-West, J

    2001-07-01

    Literacy is a national and international problem. Studies have shown the readability of adult and pediatric patient education materials to be too high for average adults. Materials should be written at the 8th-grade level or lower. To determine the general readability of pediatric patient education materials designed for adults on the World Wide Web (WWW). GeneralPediatrics.com (http://www.generalpediatrics.com) is a digital library serving the medical information needs of pediatric health care providers, patients, and families. Documents from 100 different authoritative Web sites designed for laypersons were evaluated using a built-in computer software readability formula (Flesch Reading Ease and Flesch-Kincaid reading levels) and hand calculation methods (Fry Formula and SMOG methods). Analysis of variance and paired t tests determined significance. Eighty-nine documents constituted the final sample; they covered a wide spectrum of pediatric topics. The overall Flesch Reading Ease score was 57.0. The overall mean Fry Formula was 12.0 (12th grade, 0 months of schooling) and SMOG was 12.2. The overall Flesch-Kincaid grade level was significantly lower than the Fry and SMOG grade levels. Pediatric patient education materials on the WWW are not written at an appropriate reading level for the average adult. We propose that a practical reading level and how it was determined be included on all patient education materials on the WWW for general guidance in material selection. We discuss suggestions for improved readability of patient education materials.
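
The scores this study relies on are simple functions of word, sentence, and syllable counts. The formulas below are the published Flesch Reading Ease, Flesch-Kincaid grade, and SMOG definitions; the sample counts are invented for illustration.

```python
# Published readability formulas used in the study, as plain functions.
import math

def flesch_reading_ease(words, sentences, syllables):
    # Higher is easier; ~60-70 corresponds to plain English.
    return 206.835 - 1.015 * (words / sentences) - 84.6 * (syllables / words)

def flesch_kincaid_grade(words, sentences, syllables):
    # US school-grade level of the text.
    return 0.39 * (words / sentences) + 11.8 * (syllables / words) - 15.59

def smog_grade(polysyllables, sentences):
    # SMOG: count of words with 3+ syllables, normalized to 30 sentences.
    return 1.0430 * math.sqrt(polysyllables * (30 / sentences)) + 3.1291

# Hypothetical document: 300 words, 15 sentences, 480 syllables,
# 45 polysyllabic words.
print(flesch_reading_ease(300, 15, 480))
print(flesch_kincaid_grade(300, 15, 480))
print(smog_grade(45, 15))
```

Note the structural reason for the study's finding that Flesch-Kincaid reads lower than Fry or SMOG: the formulas weight sentence length and syllable density differently, so the same counts yield different grade estimates.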

  18. World wide web implementation of the Langley technical report server

    Science.gov (United States)

    Nelson, Michael L.; Gottlich, Gretchen L.; Bianco, David J.

    1994-01-01

    On January 14, 1993, NASA Langley Research Center (LaRC) made approximately 130 formal, 'unclassified, unlimited' technical reports available via the anonymous FTP Langley Technical Report Server (LTRS). LaRC was the first organization to provide a significant number of aerospace technical reports for open electronic dissemination. LTRS has been successful in its first 18 months of operation, with over 11,000 reports distributed and has helped lay the foundation for electronic document distribution for NASA. The availability of World Wide Web (WWW) technology has revolutionized the Internet-based information community. This paper describes the transition of LTRS from a centralized FTP site to a distributed data model using the WWW, and suggests how the general model for LTRS can be applied to other similar systems.

  19. Caught in the Web

    International Nuclear Information System (INIS)

    Gillies, James

    1995-01-01

    The World-Wide Web may have taken the Internet by storm, but many people would be surprised to learn that it owes its existence to CERN. Around half the world's particle physicists come to CERN for their experiments, and the Web is the result of their need to share information quickly and easily on a global scale. Six years after Tim Berners-Lee's inspired idea to marry hypertext to the Internet in 1989, CERN is handing over future Web development to the World-Wide Web Consortium, run by the French National Institute for Research in Computer Science and Control, INRIA, and the Laboratory for Computer Science of the Massachusetts Institute of Technology, MIT, leaving itself free to concentrate on physics. The Laboratory marked this transition with a conference designed to give a taste of what the Web can do, whilst firmly stamping it with the label ''Made in CERN''. Over 200 European journalists and educationalists came to CERN on 8 - 9 March for the World-Wide Web Days, resulting in wide media coverage. The conference was opened by UK Science Minister David Hunt who stressed the importance of fundamental research in generating new ideas. ''Who could have guessed 10 years ago'', he said, ''that particle physics research would lead to a communication system which would allow every school to have the biggest library in the world in a single computer?''. In his introduction, the Minister also pointed out that ''CERN and other basic research laboratories help to break new technological ground and sow the seeds of what will become mainstream manufacturing in the future.'' Learning the jargon is often the hardest part of coming to grips with any new invention, so CERN put it at the top of the agenda. Jacques Altaber, who helped introduce the Internet to CERN in the early 1980s, explained that without the Internet, the Web couldn't exist. The Internet began as a US Defense

  20. Studying Acute Coronary Syndrome Through the World Wide Web: Experiences and Lessons.

    Science.gov (United States)

    Alonzo, Angelo A

    2017-10-13

    This study details my viewpoint on the experiences, lessons, and assessments of conducting a national study on care-seeking behavior for heart attack in the United States utilizing the World Wide Web. The Yale Heart Study (YHS) was funded by the National Heart, Lung, and Blood Institute (NHLBI) of the National Institutes of Health (NIH). Grounded on two prior studies, the YHS combined a Web-based interview survey instrument; ads placed on the Internet; flyers and posters in public libraries, senior centers, and rehabilitation centers; information on chat rooms; a viral marketing strategy; and print ads to attract potential participants to share their heart attack experiences. Along the way, the grant was transferred from Ohio State University (OSU) to Yale University, and significant administrative, information technology, and personnel challenges ensued that materially delayed the study's execution. Overall, the use of the Internet to collect data on care-seeking behavior is very time consuming and emergent. The cost of using the Web was approximately 31% less expensive than that of face-to-face interviews. However, the quality of the data may have suffered because of the absence of some data compared with interviewing participants. Yet the representativeness of the 1154 usable surveys appears good, with the exception of a dearth of African American participants. ©Angelo A Alonzo. Originally published in JMIR Research Protocols (http://www.researchprotocols.org), 13.10.2017.

  1. Developing as new search engine and browser for libraries to search and organize the World Wide Web library resources

    OpenAIRE

    Sreenivasulu, V.

    2000-01-01

    Internet Granthalaya urges worldwide advocates and targets the task of creating a new search engine and dedicated browser. Internet Granthalaya may be the ultimate search engine exclusively dedicated to library use, for searching and organizing World Wide Web library resources.

  2. Traitor: associating concepts using the world wide web

    NARCIS (Netherlands)

    Drijfhout, Wanno; Jundt, Oliver; Wevers, L.; Hiemstra, Djoerd

    We use Common Crawl's 25TB data set of web pages to construct a database of associated concepts using Hadoop. The database can be queried through a web application with two query interfaces. A textual interface allows searching for similarities and differences between multiple concepts using a query

  3. Born semantic: linking data from sensors to users and balancing hardware limitations with data standards

    Science.gov (United States)

    Buck, Justin; Leadbetter, Adam

    2015-04-01

    New users for the growing volume of ocean data, for purposes such as 'big data' data products and operational data assimilation/ingestion, require data to be readily ingestible. This can be achieved via the application of World Wide Web Consortium (W3C) Linked Data and Open Geospatial Consortium (OGC) Sensor Web Enablement (SWE) standards to data management. As part of several Horizon 2020 European projects (SenseOCEAN, ODIP, AtlantOS), the British Oceanographic Data Centre (BODC) is working on combining existing data centre architecture and SWE software such as Sensor Observation Services with a Linked Data front end. The standards to enable data delivery are proven and well documented [1,2]. There are practical difficulties when SWE standards are applied to real-time data because of internal hardware bandwidth restrictions and a requirement to constrain data transmission costs. A pragmatic approach is proposed where sensor metadata and data output in OGC standards are implemented "shore-side", with sensors and instruments transmitting unique resolvable web linkages to persistent OGC SensorML records published at the BODC. References: 1. World Wide Web Consortium. (2013). Linked Data. Available: http://www.w3.org/standards/semanticweb/data. Last accessed 8th October 2014. 2. Open Geospatial Consortium. (2014). Sensor Web Enablement (SWE). Available: http://www.opengeospatial.org/ogc/markets-technologies/swe. Last accessed 8th October 2014.

  4. Teleconsultation in school settings: linking classroom teachers and behavior analysts through web-based technology.

    Science.gov (United States)

    Frieder, Jessica E; Peterson, Stephanie M; Woodward, Judy; Crane, Jaelee; Garner, Marlane

    2009-01-01

    This paper describes a technically driven, collaborative approach to assessing the function of problem behavior using web-based technology. A case example is provided to illustrate the process used in this pilot project. A school team conducted a functional analysis with a child who demonstrated challenging behaviors in a preschool setting. Behavior analysts at a university setting provided the school team with initial workshop trainings, on-site visits, e-mail and phone communication, as well as live web-based feedback on functional analysis sessions. The school personnel implemented the functional analysis with high fidelity and scored the data reliably. Outcomes of the project suggest that there is great potential for collaboration via the use of web-based technologies for ongoing assessment and development of effective interventions. However, an empirical evaluation of this model should be conducted before wide-scale adoption is recommended.

  5. Finding Emotional-Laden Resources on the World Wide Web

    Directory of Open Access Journals (Sweden)

    Diane Rasmussen Neal

    2011-03-01

    Full Text Available Some content in multimedia resources can depict or evoke certain emotions in users. The aim of Emotional Information Retrieval (EmIR) and of our research is to identify knowledge about emotional-laden documents and to use these findings in a new kind of World Wide Web information service that allows users to search and browse by emotion. Our prototype, called Media EMOtion SEarch (MEMOSE), is largely based on the results of research regarding emotive music pieces, images and videos. In order to index both evoked and depicted emotions in these three media types and to make them searchable, we work with a controlled vocabulary, slide controls to adjust the emotions’ intensities, and broad folksonomies to identify and separate the correct resource-specific emotions. This separation of so-called power tags is based on a tag distribution which follows either an inverse power law (only one emotion was recognized) or an inverse-logistical shape (two or three emotions were recognized). Both distributions are well known in information science. MEMOSE consists of a tool for tagging basic emotions with the help of slide controls, a processing device to separate power tags, a retrieval component consisting of a search interface (for any topic in combination with one or more emotions) and a results screen. The latter shows two separately ranked lists of items for each media type (depicted and felt emotions), displaying thumbnails of resources, ranked by the mean values of intensity. In the evaluation of the MEMOSE prototype, study participants described our EmIR system as an enjoyable Web 2.0 service.

  6. Survey of Techniques for Deep Web Source Selection and Surfacing the Hidden Web Content

    OpenAIRE

    Khushboo Khurana; M.B. Chandak

    2016-01-01

    Large and continuously growing dynamic web content has created new opportunities for large-scale data analysis in the recent years. There is huge amount of information that the traditional web crawlers cannot access, since they use link analysis technique by which only the surface web can be accessed. Traditional search engine crawlers require the web pages to be linked to other pages via hyperlinks causing large amount of web data to be hidden from the crawlers. Enormous data is available in...

  7. Enhancing Student Performance in First-Semester General Chemistry Using Active Feedback through the World Wide Web

    Science.gov (United States)

    Chambers, Kent A.; Blake, Bob

    2007-01-01

    A new interactive feedback system delivered through the World Wide Web gives instructors a better understanding of their students and their problems. The feedback, in combination with tailored lectures, is expected to enhance student performance in the first semester of general chemistry.

  8. The OceanLink Project

    Science.gov (United States)

    Narock, T.; Arko, R. A.; Carbotte, S. M.; Chandler, C. L.; Cheatham, M.; Finin, T.; Hitzler, P.; Krisnadhi, A.; Raymond, L. M.; Shepherd, A.; Wiebe, P. H.

    2014-12-01

    A wide spectrum of maturing methods and tools, collectively characterized as the Semantic Web, is helping to vastly improve the dissemination of scientific research. Creating semantic integration requires input from both domain and cyberinfrastructure scientists. OceanLink, an NSF EarthCube Building Block, is demonstrating semantic technologies through the integration of geoscience data repositories, library holdings, conference abstracts, and funded research awards. Meeting project objectives involves applying semantic technologies to support data representation, discovery, sharing and integration. Our semantic cyberinfrastructure components include ontology design patterns, Linked Data collections, semantic provenance, and associated services to enhance data and knowledge discovery, interoperation, and integration. We discuss how these components are integrated, the continued automated and semi-automated creation of semantic metadata, and techniques we have developed to integrate ontologies, link resources, and preserve provenance and attribution.

  9. Mining the Social Web Analyzing Data from Facebook, Twitter, LinkedIn, and Other Social Media Sites

    CERN Document Server

    Russell, Matthew

    2011-01-01

    Want to tap the tremendous amount of valuable social data in Facebook, Twitter, LinkedIn, and Google+? This refreshed edition helps you discover who's making connections with social media, what they're talking about, and where they're located. You'll learn how to combine social web data, analysis techniques, and visualization to find what you've been looking for in the social haystack, as well as useful information you didn't know existed. Each standalone chapter introduces techniques for mining data in different areas of the social Web, including blogs and email. All you need to get started

  10. Development of a world wide web-based interactive education program to improve detectability of pulmonary nodules on chest radiographs

    International Nuclear Information System (INIS)

    Ohm, Joon Young; Kim, Jin Hwan; Kim, Sung Soo; Han, Ki Tae; Ahn, Young Seob; Shin, Byung Seok; Bae, Kyongtae T.

    2007-01-01

    To design and develop a World Wide Web-based education program that will allow trainees to interactively learn and improve the diagnostic capability of detecting pulmonary nodules on chest radiographs. Chest radiographs with known diagnoses were retrieved and selected from our institutional clinical archives. A database was constructed by sorting radiographs into three groups: normal, nodule, and false positive (i.e., nodule-like focal opacity). Each nodule was assigned with the degree of detectability: easy, intermediate, difficult, and likely missed. Nodules were characterized by their morphology (well-defined, ill-defined, irregular, faint) and by other associated pathologies or potentially obscuring structures. The Web site was organized into four sections: study, test, record and information. The Web site allowed a user to interactively undergo the training section appropriate to the user's diagnostic capability. The training was enhanced by means of clinical and other pertinent radiological findings included in the database. The outcome of the training was tested with clinical test radiographs that presented nodules or false positives with varying diagnostic difficulties. A World Wide Web-based education program is a promising technique that would allow trainees to interactively learn and improve the diagnostic capability of detecting and characterizing pulmonary nodules.

  11. Linked Data: what does it offer Earth Sciences?

    Science.gov (United States)

    Cox, Simon; Schade, Sven

    2010-05-01

    'Linked Data' is a current buzz-phrase promoting access to various forms of data on the internet. It starts from the two principles that have underpinned the architecture and scalability of the World Wide Web: 1. Universal Resource Identifiers - using the http protocol which is supported by the DNS system. 2. Hypertext - in which URIs of related resources are embedded within a document. Browsing is the key mode of interaction, with traversal of links between resources under control of the client. Linked Data also adds, or re-emphasizes: • Content negotiation - whereby the client uses http headers to tell the service what representation of a resource is acceptable, • Semantic Web principles - formal semantics for links, following the RDF data model and encoding, and • The 'mashup' effect - in which original and unexpected value may emerge from reuse of data, even if published in raw or unpolished form. Linked Data promotes typed links to all kinds of data, so is where the semantic web meets the 'deep web', i.e. resources which may be accessed using web protocols, but are in representations not indexed by search engines. Earth sciences are data rich, but with a strong legacy of specialized formats managed and processed by disconnected applications. However, most contemporary research problems require a cross-disciplinary approach, in which the heterogeneity resulting from that legacy is a significant challenge. In this context, Linked Data clearly has much to offer the earth sciences. But, there are some important questions to answer. What is a resource? Most earth science data is organized in arrays and databases. A subset useful for a particular study is usually identified by a parameterized query. The Linked Data paradigm emerged from the world of documents, and will often only resolve data-sets. It is impractical to create even nested navigation resources containing links to all potentially useful objects or subsets. From the viewpoint of human user
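
The content negotiation the abstract re-emphasizes can be sketched as a server-side dispatch: one URI identifies the resource, and the client's Accept header selects the representation (HTML for people, RDF for machines). The media types below are real; the example URI, the Turtle snippet, and the simplified dispatch logic (no q-value parsing) are illustrative assumptions.

```python
# Minimal sketch of HTTP content negotiation in the Linked Data style.
# One resource, several representations; the Accept header picks one.

REPRESENTATIONS = {
    "text/html": "<html>...human-readable page about the resource...</html>",
    "text/turtle": "<http://example.org/id/sample/42> a <http://example.org/def/Sample> .",
    "application/rdf+xml": "<rdf:RDF>...</rdf:RDF>",
}

def negotiate(accept_header: str) -> tuple[str, str]:
    """Return (media_type, body) for the first acceptable representation."""
    for media_type in (t.split(";")[0].strip() for t in accept_header.split(",")):
        if media_type in REPRESENTATIONS:
            return media_type, REPRESENTATIONS[media_type]
    # Default to HTML, the representation a browser expects.
    return "text/html", REPRESENTATIONS["text/html"]

# A semantic-web client asks for Turtle; a browser asks for HTML.
print(negotiate("text/turtle, application/rdf+xml")[0])  # text/turtle
print(negotiate("text/html,application/xhtml+xml")[0])   # text/html
```

This is the mechanism that lets the same earth-science dataset URI serve both a landing page and machine-readable RDF without minting separate identifiers.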

  12. Use of World Wide Web-based directories for tracing subjects in epidemiologic studies.

    Science.gov (United States)

    Koo, M M; Rohan, T E

    2000-11-01

    The recent availability of World Wide Web-based directories has opened up a new approach for tracing subjects in epidemiologic studies. The completeness of two World Wide Web-based directories (Canada411 and InfoSpace Canada) for subject tracing was evaluated by using a randomized crossover design for 346 adults randomly selected from respondents in an ongoing cohort study. About half (56.4%) of the subjects were successfully located by using either Canada411 or InfoSpace. Of the 43.6% of the subjects who could not be located using either directory, the majority (73.5%) were female. Overall, there was no clear advantage of one directory over the other. Although Canada411 could find significantly more subjects than InfoSpace, the number of potential matches returned by Canada411 was also higher, which meant that a longer list of potential matches had to be examined before a true match could be found. One strategy to minimize the number of potential matches per true match is to first search by InfoSpace with the last name and first name, then by Canada411 with the last name and first name, and finally by InfoSpace with the last name and first initial. Internet-based searches represent a potentially useful approach to tracing subjects in epidemiologic studies.
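    The three-step search strategy recommended in the abstract can be sketched as a cascade over the two directories. The dict-based directories and the matching logic below are stand-ins for the real Canada411 and InfoSpace services, purely for illustration.

```python
# Sketch of the cascading subject-tracing strategy from the abstract:
# 1) InfoSpace by last name + first name,
# 2) Canada411 by last name + first name,
# 3) InfoSpace by last name + first initial.
# The directories are plain lists of dicts standing in for the real services.

def search(directory, last, first_part):
    """Return candidate matches on last name and (a prefix of) first name."""
    return [rec for rec in directory
            if rec["last"].lower() == last.lower()
            and rec["first"].lower().startswith(first_part.lower())]

def trace_subject(infospace, canada411, last, first):
    """Apply the three search steps in order, stopping at the first hit."""
    for directory, first_part in ((infospace, first),
                                  (canada411, first),
                                  (infospace, first[0])):
        matches = search(directory, last, first_part)
        if matches:
            return matches
    return []

infospace = [{"last": "Smith", "first": "Jane", "city": "Toronto"}]
canada411 = [{"last": "Smith", "first": "Jane", "city": "Ottawa"}]
print(trace_subject(infospace, canada411, "Smith", "Jane"))
```

Ordering the steps this way keeps the list of potential matches per true match short, which is the abstract's stated goal.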

  13. Using Open Web APIs in Teaching Web Mining

    Science.gov (United States)

    Chen, Hsinchun; Li, Xin; Chau, M.; Ho, Yi-Jen; Tseng, Chunju

    2009-01-01

    With the advent of the World Wide Web, many business applications that utilize data mining and text mining techniques to extract useful business information on the Web have evolved from Web searching to Web mining. It is important for students to acquire knowledge and hands-on experience in Web mining during their education in information systems…

  14. Web 3.0 Emerging

    Energy Technology Data Exchange (ETDEWEB)

    Hendler, James [Rensselaer Polytechnic Institute

    2012-02-22

    As more and more data and information becomes available on the Web, new technologies that use explicit semantics for information organization are becoming desirable. New terms such as Linked Data, Semantic Web and Web 3.0 are used more and more, although there is increasing confusion as to what each means. In this talk, I will describe how different sorts of models can be used to link data in different ways. I will particularly explore different kinds of Web applications, from Enterprise Data Integration to Web 3.0 startups, government data release, the different needs of Web 2.0 and 3.0, the growing interest in “semantic search”, and the underlying technologies that power these new approaches.

  15. Migrating the facility profile information management system into the world wide web

    Energy Technology Data Exchange (ETDEWEB)

    Kero, R.E.; Swietlik, C.E.

    1994-09-01

    The Department of Energy - Office of Special Projects and Argonne National Laboratory (ANL), along with the Department of Energy - Office of Scientific and Technical Information, have previously designed and implemented the Environment, Safety and Health Facility Profile Information Management System (FPIMS) to facilitate greater efficiency in searching, analyzing and disseminating information found within environment, safety and health oversight documents. This information retrieval based system serves as a central repository for full-text electronic oversight documents, as well as a management planning and decision making tool that can assist in trend and root cause analyses. Continuous improvement of environment, safety and health programs is currently aided through this personal computer-based system by providing a means for the open communication of lessons learned across the department. Overall benefits have included reductions in costs and improvements in past information management capabilities. Access to the FPIMS has historically been possible through a headquarters-based local area network equipped with modems. Continued demand for greater accessibility of the system by remote DOE field offices and sites, in conjunction with the Secretary of Energy's call for greater public accessibility to Department of Energy (DOE) information resources, has been the impetus to expand access through the use of Internet technologies. Therefore, the following paper will discuss reasons for migrating the FPIMS system into the World Wide Web (Web), various lessons learned from the FPIMS migration effort, as well as future plans for enhancing the Web-based FPIMS.

  16. Linked data meets ontology matching: enhancing data linking through ontology alignments

    OpenAIRE

    Scharffe , François; Euzenat , Jérôme

    2011-01-01

    The Web of data consists of publishing data on the Web in such a way that they can be connected together and interpreted. It is thus critical to establish links between these data, both for the Web of data and for the Semantic Web that it contributes to feed. We consider here the various techniques which have been developed for that purpose and analyze their commonalities and differences. This provides a general framework that the diverse data linking sy...

  17. A Webometric Analysis of ISI Medical Journals Using Yahoo, AltaVista, and All the Web Search Engines

    Directory of Open Access Journals (Sweden)

    Zohreh Zahedi

    2010-12-01

    Full Text Available The World Wide Web is an important information source for scholarly communications. Examining inlinks via webometric studies has attracted particular interest among information researchers. In this study, the number of inlinks to 69 ISI medical journals retrieved by the Yahoo, AltaVista, and All the Web search engines was examined via a comparative webometric study. For data analysis, SPSS software was employed. Findings revealed that the British Medical Journal website attracted the most links of all in the three search engines. There is a significant correlation between the number of external links and the ISI impact factor. The most significant correlation in the three search engines exists between external links of Yahoo and AltaVista (100%), and the least correlation is found between external links of All the Web and the number of pages of AltaVista (0.51). There is no significant difference between the internal links and the number of pages found by the three search engines. But in the case of impact factors, significant differences are found between these three search engines. So, the study shows that journals with higher impact factors attract more links to their websites. It also indicates that the three search engines are significantly different in terms of total links, outlinks and web impact factors.

  18. Using the World-Wide Web to Facilitate Communications of Non-Destructive Evaluation

    Science.gov (United States)

    McBurney, Sean

    1995-01-01

    The high reliability required for aeronautical components is a major reason for extensive Nondestructive Testing and Evaluation. Here at Langley Research Center (LaRC), there are highly trained and certified personnel to conduct such testing to prevent hazards from occurring in the workplace and on the research projects for the National Aeronautics and Space Administration (NASA). The purpose of my studies was to develop a communication source to educate others about the services and equipment offered here. This was accomplished by creating documents that are accessible to all in the industry via the World Wide Web.

  19. WebTag: Web browsing into sensor tags over NFC.

    Science.gov (United States)

    Echevarria, Juan Jose; Ruiz-de-Garibay, Jonathan; Legarda, Jon; Alvarez, Maite; Ayerbe, Ana; Vazquez, Juan Ignacio

    2012-01-01

    Information and Communication Technologies (ICTs) continue to overcome many of the challenges related to wireless sensor monitoring, such as the design of smarter embedded processors, the improvement of network architectures, the development of efficient communication protocols and the maximization of life-cycle autonomy. This work aims to improve the data-transmission link in wireless sensor monitoring. The upstream communication link is usually based on standard IP technologies, but the downstream side is always masked by the proprietary protocols used for the wireless link (such as ZigBee, Bluetooth or RFID). This work presents a novel solution (WebTag) for direct IP-based access to a sensor tag over Near Field Communication (NFC) technology for secure applications. WebTag allows direct web access to the sensor tag by means of a standard web browser: it reads the sensor data, configures the sampling rate and implements IP-based security policies. It is, in short, a new step in the evolution of the Internet of Things paradigm.

  20. Web party effect: a cocktail party effect in the web environment.

    Science.gov (United States)

    Rigutti, Sara; Fantoni, Carlo; Gerbino, Walter

    2015-01-01

    In goal-directed web navigation, labels compete for selection: this process often involves knowledge integration and requires selective attention to manage the dizziness of web layouts. Here we ask whether the competition for selection depends on all web navigation options or only on those options that are more likely to be useful for information seeking, and provide evidence in favor of the latter alternative. Participants in our experiment navigated a representative set of real websites of variable complexity, in order to reach an information goal located two clicks away from the starting home page. The time needed to reach the goal was accounted for by a novel measure of home page complexity based on a part of (not all) web options: the number of links embedded within web navigation elements weighted by the number and type of embedding elements. Our measure fully mediated the effect of several standard complexity metrics (the overall number of links, words, images, graphical regions, the JPEG file size of home page screenshots) on information seeking time and usability ratings. Furthermore, it predicted the cognitive demand of web navigation, as revealed by the duration judgment ratio (i.e., the ratio of subjective to objective duration of information search). Results demonstrate that focusing on relevant links while ignoring other web objects optimizes the deployment of attentional resources necessary to navigation. This is in line with a web party effect (i.e., a cocktail party effect in the web environment): users tune into web elements that are relevant for the achievement of their navigation goals and tune out all others.
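    The complexity measure described above (links embedded within navigation elements, weighted by the number and type of embedding elements) could be approximated along the following lines. The element types and weights below are invented for illustration; the study's actual weighting scheme may differ.

```python
# Toy approximation of the home-page complexity measure described above:
# count only links inside navigation elements, weighted by the kind of
# element embedding them. Weights are hypothetical, not from the paper.

NAV_WEIGHTS = {"menu": 1.0, "sidebar": 1.5, "footer": 0.5}  # illustrative

def nav_complexity(nav_elements):
    """nav_elements: list of (element_type, link_count) pairs."""
    return sum(NAV_WEIGHTS.get(kind, 0.0) * links
               for kind, links in nav_elements)

# A "banner" is not a navigation element here, so its links are ignored.
home_page = [("menu", 8), ("sidebar", 4), ("footer", 10), ("banner", 3)]
print(nav_complexity(home_page))  # → 19.0
```

The point of the measure is exactly this selectivity: options outside navigation elements contribute nothing, mirroring how users tune them out.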

  1. User Interface on the World Wide Web: How to Implement a Multi-Level Program Online

    Science.gov (United States)

    Cranford, Jonathan W.

    1995-01-01

    The objective of this Langley Aerospace Research Summer Scholars (LARSS) research project was to write a user interface that utilizes current World Wide Web (WWW) technologies for an existing computer program written in C, entitled LaRCRisk. The project entailed researching data presentation and script execution on the WWW and then writing input/output procedures for the database management portion of LaRCRisk.

  2. Creation and utilization of a World Wide Web based space radiation effects code: SIREST

    Science.gov (United States)

    Singleterry, R. C. Jr; Wilson, J. W.; Shinn, J. L.; Tripathi, R. K.; Thibeault, S. A.; Noor, A. K.; Cucinotta, F. A.; Badavi, F. F.; Chang, C. K.; Qualls, G. D.; hide

    2001-01-01

    In order for humans and electronics to fully and safely operate in the space environment, codes like HZETRN (High Charge and Energy Transport) must be included in any designer's toolbox for design evaluation with respect to radiation damage. Currently, spacecraft designers do not have easy access to accurate radiation codes like HZETRN to evaluate their design for radiation effects on humans and electronics. Today, the World Wide Web is sophisticated enough to support the entire HZETRN code and all of the associated pre- and post-processing tools. This package is called SIREST (Space Ionizing Radiation Effects and Shielding Tools). There are many advantages to SIREST. The most important advantage is the instant update capability of the web. Another major advantage is the modularity that the web imposes on the code. Right now, the major disadvantage of SIREST is its modularity inside the designer's system. This mostly stems from the fact that a consistent interface between the designer and the computer system used to evaluate the design is incomplete. This, however, is to be solved in the Intelligent Synthesis Environment (ISE) program currently being funded by NASA.

  3. Design and development of linked data from the National Map

    Science.gov (United States)

    Usery, E. Lynn; Varanka, Dalia E.

    2012-01-01

    The development of linked data on the World-Wide Web provides the opportunity for the U.S. Geological Survey (USGS) to supply its extensive volumes of geospatial data, information, and knowledge in a machine interpretable form and reach users and applications that heretofore have been unavailable. To pilot a process to take advantage of this opportunity, the USGS is developing an ontology for The National Map and converting selected data from nine research test areas to a Semantic Web format to support machine processing and linked data access. In a case study, the USGS has developed initial methods for legacy vector and raster formatted geometry, attributes, and spatial relationships to be accessed in a linked data environment maintaining the capability to generate graphic or image output from semantic queries. The description of an initial USGS approach to developing ontology, linked data, and initial query capability from The National Map databases is presented.

  4. Webmail: an Automated Web Publishing System

    Science.gov (United States)

    Bell, David

    A system for publishing frequently updated information to the World Wide Web will be described. Many documents now hosted by the NOAO Web server require timely posting and frequent updates, but need only minor changes in markup or are in a standard format requiring only conversion to HTML. These include information from outside the organization, such as electronic bulletins, and a number of internal reports, both human and machine generated. Webmail uses procmail and Perl scripts to process incoming email messages in a variety of ways. This processing may include wrapping or conversion to HTML, posting to the Web or internal newsgroups, updating search indices or links on related pages, and sending email notification of the new pages to interested parties. The Webmail system has been in use at NOAO since early 1997 and has steadily grown to include fourteen recipes that together handle about fifty messages per week.
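    The wrapping step Webmail performs on an incoming plain-text bulletin can be sketched as follows. The real NOAO system used procmail and Perl recipes; this Python version is purely illustrative, and the page template is invented.

```python
# Minimal sketch of Webmail's "wrapping" step: escape the body of a
# plain-text email message and embed it in a bare HTML page.
# The original system used procmail and Perl; this only shows the idea.
import html

def wrap_as_html(subject, body):
    """Return a minimal HTML page for a plain-text email message."""
    return (
        "<html><head><title>{t}</title></head>\n"
        "<body><h1>{t}</h1>\n<pre>{b}</pre></body></html>"
    ).format(t=html.escape(subject), b=html.escape(body))

page = wrap_as_html("Electronic Bulletin 42", "Observing schedule <updated>.")
print(page)
```

A mail-filtering recipe would call such a function on each matching message, then write the result into the web server's document tree.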

  5. ePlant and the 3D data display initiative: integrative systems biology on the world wide web.

    Science.gov (United States)

    Fucile, Geoffrey; Di Biase, David; Nahal, Hardeep; La, Garon; Khodabandeh, Shokoufeh; Chen, Yani; Easley, Kante; Christendat, Dinesh; Kelley, Lawrence; Provart, Nicholas J

    2011-01-10

    Visualization tools for biological data are often limited in their ability to interactively integrate data at multiple scales. These computational tools are also typically limited by two-dimensional displays and programmatic implementations that require separate configurations for each of the user's computing devices and recompilation for functional expansion. Towards overcoming these limitations we have developed "ePlant" (http://bar.utoronto.ca/eplant) - a suite of open-source world wide web-based tools for the visualization of large-scale data sets from the model organism Arabidopsis thaliana. These tools display data spanning multiple biological scales on interactive three-dimensional models. Currently, ePlant consists of the following modules: a sequence conservation explorer that includes homology relationships and single nucleotide polymorphism data, a protein structure model explorer, a molecular interaction network explorer, a gene product subcellular localization explorer, and a gene expression pattern explorer. The ePlant's protein structure explorer module represents experimentally determined and theoretical structures covering >70% of the Arabidopsis proteome. The ePlant framework is accessed entirely through a web browser, and is therefore platform-independent. It can be applied to any model organism. To facilitate the development of three-dimensional displays of biological data on the world wide web we have established the "3D Data Display Initiative" (http://3ddi.org).

  6. Data Warehouse on the Web for Accelerator Fabrication And Maintenance

    International Nuclear Information System (INIS)

    Chan, A.; Crane, G.; Macgregor, I.; Meyer, S.

    2011-01-01

    A data warehouse grew out of the needs for a view of accelerator information from a lab-wide or project-wide standpoint (often needing off-site data access for the multi-lab PEP-II collaborators). A World Wide Web interface is used to link legacy database systems of the various labs and departments related to the PEP-II Accelerator. In this paper, we describe how links are made via the 'Formal Device Name' field(s) in the disparate databases. We also describe the functionality of a data warehouse in an accelerator environment. One can pick devices from the PEP-II Component List and find the actual components filling the functional slots, any calibration measurements, fabrication history, associated cables and modules, and operational maintenance records for the components. Information on inventory, drawings, publications, and purchasing history are also part of the PEP-II Database. A strategy of relying on a small team, and of linking existing databases rather than rebuilding systems is outlined.
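    Linking disparate databases via a shared 'Formal Device Name' field amounts to a join on that key. The sketch below illustrates the idea with an in-memory SQLite database; the table layouts and records are invented examples, not the actual PEP-II schema.

```python
# Sketch of linking two legacy tables on a shared 'Formal Device Name'
# key, as the PEP-II data warehouse does across lab databases.
# Table names, columns, and records are hypothetical.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE fabrication (device_name TEXT, serial TEXT);
    CREATE TABLE maintenance (device_name TEXT, note TEXT);
    INSERT INTO fabrication VALUES ('PR02:MAG:QF5', 'SN-1001');
    INSERT INTO maintenance VALUES ('PR02:MAG:QF5', 'recalibrated');
""")

# The join recovers fabrication history and maintenance records
# for the component currently filling a functional slot.
rows = conn.execute("""
    SELECT f.device_name, f.serial, m.note
    FROM fabrication f JOIN maintenance m USING (device_name)
""").fetchall()
print(rows)
```

The strategy of linking existing databases, rather than rebuilding them, reduces to maintaining this one consistent key across systems.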

  7. Continuous country-wide rainfall observation using a large network of commercial microwave links: Challenges, solutions and applications

    Science.gov (United States)

    Chwala, Christian; Boose, Yvonne; Smiatek, Gerhard; Kunstmann, Harald

    2017-04-01

    Commercial microwave link (CML) networks have proven to be a valuable source for rainfall information over the last years. However, up to now, analysis of CML data was always limited to certain snapshots of data for historic periods due to limited data access. With the real-time availability of CML data in Germany (Chwala et al. 2016) this situation has improved significantly. We are continuously acquiring and processing data from 3000 CMLs in Germany in near real-time with one minute temporal resolution. Currently the data acquisition system is extended to 10000 CMLs so that the whole of Germany is covered and a continuous country-wide rainfall product can be provided. In this contribution we will elaborate on the challenges and solutions regarding data acquisition, data management and robust processing. We will present the details of our data acquisition system that we run operationally at the network of the CML operator Ericsson Germany to solve the problem of limited data availability. Furthermore we will explain the implementation of our data base, its web-frontend for easy data access and present our data processing algorithms. Finally we will showcase an application of our data in hydrological modeling and its potential usage to improve radar QPE. Bibliography: Chwala, C., Keis, F., and Kunstmann, H.: Real-time data acquisition of commercial microwave link networks for hydrometeorological applications, Atmos. Meas. Tech., 9, 991-999, doi:10.5194/amt-9-991-2016, 2016
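    The core retrieval step behind CML rainfall estimation is the standard power-law relation between specific attenuation and rain rate, k = a·R^b, inverted to R = (k/a)^(1/b). The sketch below assumes that relation; the coefficient values are illustrative only, since a and b depend on link frequency and polarization.

```python
# Sketch of rain-rate retrieval from a commercial microwave link using the
# power-law relation k = a * R**b (k: specific attenuation in dB/km).
# Inverting gives R = (k / a) ** (1 / b). The default coefficients below
# are illustrative; real values depend on frequency and polarization.

def rain_rate(path_attenuation_db, path_length_km, a=0.3, b=1.1):
    """Estimate path-averaged rain rate (mm/h) from rain-induced attenuation."""
    k = path_attenuation_db / path_length_km  # specific attenuation, dB/km
    return (k / a) ** (1.0 / b)

# 6 dB of rain-induced attenuation over a 5 km link:
print(round(rain_rate(6.0, 5.0), 2))
```

In practice the raw signal must first be corrected for baseline level and wet-antenna attenuation before this inversion is applied, which is part of the robust processing the abstract refers to.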

  8. OrthoVenn: a web server for genome wide comparison and annotation of orthologous clusters across multiple species

    Science.gov (United States)

    Genome wide analysis of orthologous clusters is an important component of comparative genomics studies. Identifying the overlap among orthologous clusters can enable us to elucidate the function and evolution of proteins across multiple species. Here, we report a web platform named OrthoVenn that i...

  9. 07051 Executive Summary -- Programming Paradigms for the Web: Web Programming and Web Services

    OpenAIRE

    Hull, Richard; Thiemann, Peter; Wadler, Philip

    2007-01-01

    The world-wide web raises a variety of new programming challenges. To name a few: programming at the level of the web browser, data-centric approaches, and attempts to automatically discover and compose web services. This seminar brought together researchers from the web programming and web services communities and strove to engage them in communication with each other. The seminar was held in an unusual style, in a mixture of short presentations and in-depth discussio...

  10. Transitioning from XML to RDF: Considerations for an effective move towards Linked Data and the Semantic Web

    Directory of Open Access Journals (Sweden)

    Juliet L. Hardesty

    2016-04-01

    Full Text Available Metadata, particularly within the academic library setting, is often expressed in eXtensible Markup Language (XML) and managed with XML tools, technologies, and workflows. Managing a library’s metadata currently takes on a greater level of complexity as libraries are increasingly adopting the Resource Description Framework (RDF). Semantic Web initiatives are surfacing in the library context with experiments in publishing metadata as Linked Data sets and also with development efforts such as BIBFRAME and the Fedora 4 Digital Repository incorporating RDF. Use cases show that transitions into RDF are occurring in both XML standards and in libraries with metadata encoded in XML. It is vital to understand that transitioning from XML to RDF requires a shift in perspective from replicating structures in XML to defining meaningful relationships in RDF. Establishing coordination and communication among these efforts will help as more libraries move to use RDF, produce Linked Data, and approach the Semantic Web.
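    The shift in perspective the article describes, from replicating XML structure to stating relationships, can be made concrete with a minimal sketch: a nested XML element becomes an explicit subject-predicate-object statement in Turtle. The namespace URIs and identifiers below are invented for illustration; real mappings would target vocabularies such as Dublin Core or BIBFRAME.

```python
# Minimal sketch of the XML-to-RDF shift: element nesting in an XML
# record is re-expressed as explicit subject-predicate-object triples.
# URIs and element names are hypothetical, stdlib only (no rdflib).
import xml.etree.ElementTree as ET

xml_record = "<record id='item1'><creator>Jane Doe</creator></record>"

def record_to_turtle(xml_text):
    root = ET.fromstring(xml_text)
    subject = "<http://example.org/items/{}>".format(root.get("id"))
    triples = []
    for child in root:
        # the containment relation becomes a named predicate
        predicate = "<http://example.org/terms/{}>".format(child.tag)
        triples.append('{} {} "{}" .'.format(subject, predicate, child.text))
    return "\n".join(triples)

print(record_to_turtle(xml_record))
```

Note what changes: in XML the creator's meaning is implicit in where the element sits; in RDF the relationship is named and carries its meaning with it, independent of document structure.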

  11. Health and medication information resources on the World Wide Web.

    Science.gov (United States)

    Grossman, Sara; Zerilli, Tina

    2013-04-01

    Health care practitioners have increasingly used the Internet to obtain health and medication information. The vast number of Internet Web sites providing such information and concerns with their reliability makes it essential for users to carefully select and evaluate Web sites prior to use. To this end, this article reviews the general principles to consider in this process. Moreover, as cost may limit access to subscription-based health and medication information resources with established reputability, freely accessible online resources that may serve as an invaluable addition to one's reference collection are highlighted. These include government- and organization-sponsored resources (eg, US Food and Drug Administration Web site and the American Society of Health-System Pharmacists' Drug Shortage Resource Center Web site, respectively) as well as commercial Web sites (eg, Medscape, Google Scholar). Familiarity with such online resources can assist health care professionals in their ability to efficiently navigate the Web and may potentially expedite the information gathering and decision-making process, thereby improving patient care.

  12. Empirical studies assessing the quality of health information for consumers on the world wide web: a systematic review.

    Science.gov (United States)

    Eysenbach, Gunther; Powell, John; Kuss, Oliver; Sa, Eun-Ryoung

    The quality of consumer health information on the World Wide Web is an important issue for medicine, but to date no systematic and comprehensive synthesis of the methods and evidence has been performed. To establish a methodological framework on how quality on the Web is evaluated in practice, to determine the heterogeneity of the results and conclusions, and to compare the methodological rigor of these studies, to determine to what extent the conclusions depend on the methodology used, and to suggest future directions for research. We searched MEDLINE and PREMEDLINE (1966 through September 2001), Science Citation Index (1997 through September 2001), Social Sciences Citation Index (1997 through September 2001), Arts and Humanities Citation Index (1997 through September 2001), LISA (1969 through July 2001), CINAHL (1982 through July 2001), PsychINFO (1988 through September 2001), EMBASE (1988 through June 2001), and SIGLE (1980 through June 2001). We also conducted hand searches, general Internet searches, and a personal bibliographic database search. We included published and unpublished empirical studies in any language in which investigators searched the Web systematically for specific health information, evaluated the quality of Web sites or pages, and reported quantitative results. We screened 7830 citations and retrieved 170 potentially eligible full articles. A total of 79 distinct studies met the inclusion criteria, evaluating 5941 health Web sites and 1329 Web pages, and reporting 408 evaluation results for 86 different quality criteria. Two reviewers independently extracted study characteristics, medical domains, search strategies used, methods and criteria of quality assessment, results (percentage of sites or pages rated as inadequate pertaining to a quality criterion), and quality and rigor of study methods and reporting. 
The most frequently used quality criteria include accuracy, completeness, readability, design, disclosures, and references provided.

  13. Architecture for biomedical multimedia information delivery on the World Wide Web

    Science.gov (United States)

    Long, L. Rodney; Goh, Gin-Hua; Neve, Leif; Thoma, George R.

    1997-10-01

    Research engineers at the National Library of Medicine are building a prototype system for the delivery of multimedia biomedical information on the World Wide Web. This paper discusses the architecture and design considerations for the system, which will be used initially to make images and text from the third National Health and Nutrition Examination Survey (NHANES) publicly available. We categorized our analysis as follows: (1) fundamental software tools: we analyzed trade-offs among use of conventional HTML/CGI, X Window Broadway, and Java; (2) image delivery: we examined the use of unconventional TCP transmission methods; (3) database manager and database design: we discuss the capabilities and planned use of the Informix object-relational database manager and the planned schema for the NHANES database; (4) storage requirements for our Sun server; (5) user interface considerations; (6) the compatibility of the system with other standard research and analysis tools; (7) image display: we discuss considerations for consistent image display for end users. Finally, we discuss the scalability of the system in terms of incorporating larger or more databases of similar data, and the extendibility of the system for supporting content-based retrieval of biomedical images. The system prototype is called the Web-based Medical Information Retrieval System. An early version was built as a Java applet and tested on Unix, PC, and Macintosh platforms. This prototype used the MiniSQL database manager to do text queries on a small database of records of participants in the second NHANES survey. The full records and associated x-ray images were retrievable and displayable on a standard Web browser. A second version has now been built, also a Java applet, using the MySQL database manager.

  14. Web party effect: a cocktail party effect in the web environment

    Directory of Open Access Journals (Sweden)

    Sara Rigutti

    2015-03-01

    Full Text Available In goal-directed web navigation, labels compete for selection: this process often involves knowledge integration and requires selective attention to manage the dizziness of web layouts. Here we ask whether the competition for selection depends on all web navigation options or only on those options that are more likely to be useful for information seeking, and provide evidence in favor of the latter alternative. Participants in our experiment navigated a representative set of real websites of variable complexity, in order to reach an information goal located two clicks away from the starting home page. The time needed to reach the goal was accounted for by a novel measure of home page complexity based on a part of (not all) web options: the number of links embedded within web navigation elements weighted by the number and type of embedding elements. Our measure fully mediated the effect of several standard complexity metrics (the overall number of links, words, images, graphical regions, the JPEG file size of home page screenshots) on information seeking time and usability ratings. Furthermore, it predicted the cognitive demand of web navigation, as revealed by the duration judgment ratio (i.e., the ratio of subjective to objective duration of information search). Results demonstrate that focusing on relevant links while ignoring other web objects optimizes the deployment of attentional resources necessary to navigation. This is in line with a web party effect (i.e., a cocktail party effect in the web environment): users tune into web elements that are relevant for the achievement of their navigation goals and tune out all others.

  15. Web party effect: a cocktail party effect in the web environment

    Science.gov (United States)

    Gerbino, Walter

    2015-01-01

    In goal-directed web navigation, labels compete for selection: this process often involves knowledge integration and requires selective attention to manage the dizziness of web layouts. Here we ask whether the competition for selection depends on all web navigation options or only on those options that are more likely to be useful for information seeking, and provide evidence in favor of the latter alternative. Participants in our experiment navigated a representative set of real websites of variable complexity, in order to reach an information goal located two clicks away from the starting home page. The time needed to reach the goal was accounted for by a novel measure of home page complexity based on a part of (not all) web options: the number of links embedded within web navigation elements weighted by the number and type of embedding elements. Our measure fully mediated the effect of several standard complexity metrics (the overall number of links, words, images, graphical regions, the JPEG file size of home page screenshots) on information seeking time and usability ratings. Furthermore, it predicted the cognitive demand of web navigation, as revealed by the duration judgment ratio (i.e., the ratio of subjective to objective duration of information search). Results demonstrate that focusing on relevant links while ignoring other web objects optimizes the deployment of attentional resources necessary to navigation. This is in line with a web party effect (i.e., a cocktail party effect in the web environment): users tune into web elements that are relevant for the achievement of their navigation goals and tune out all others. PMID:25802803

  16. Distributed nuclear medicine applications using World Wide Web and Java technology

    International Nuclear Information System (INIS)

    Knoll, P.; Hoell, K.; Koriska, K.; Mirzaei, S.; Koehn, H.

    2000-01-01

    At present, medical applications applying World Wide Web (WWW) technology are mainly used to view static images and to retrieve some information. The Java platform is a relatively new way of computing, especially designed for network computing and distributed applications, which enables interactive connection between user and information via the WWW. The Java 2 Software Development Kit (SDK) including the Java2D API, Java Remote Method Invocation (RMI) technology, Object Serialization and the Java Advanced Imaging (JAI) extension was used to achieve a robust, platform independent and network centric solution. Medical image processing software based on this technology is presented, and adequate performance capability of Java is demonstrated by an iterative reconstruction algorithm for single photon emission computerized tomography (SPECT). (orig.)

  17. A Survey on the Exchange of Linguistic Resources: Publishing Linguistic Linked Open Data on the Web

    Science.gov (United States)

    Lezcano, Leonardo; Sanchez-Alonso, Salvador; Roa-Valverde, Antonio J.

    2013-01-01

    Purpose: The purpose of this paper is to provide a literature review of the principal formats and frameworks that have been used in the last 20 years to exchange linguistic resources. It aims to give special attention to the most recent approaches to publishing linguistic linked open data on the Web. Design/methodology/approach: Research papers…

  18. Semantic Web Requirements through Web Mining Techniques

    OpenAIRE

    Hassanzadeh, Hamed; Keyvanpour, Mohammad Reza

    2012-01-01

    In recent years, the Semantic Web has become a topic of active research in several fields of computer science and has been applied in a wide range of domains such as bioinformatics, life sciences, and knowledge management. The two fast-developing research areas, the Semantic Web and web mining, can complement each other, and their different techniques can be used jointly or separately to solve the issues in both areas. In addition, since shifting from the current web to the Semantic Web mainly depends on the enhance...

  19. Assessing the quality of infertility resources on the World Wide Web: tools to guide clients through the maze of fact and fiction.

    Science.gov (United States)

    Okamura, Kyoko; Bernstein, Judith; Fidler, Anne T

    2002-01-01

    The Internet has become a major source of health information for women, but information placed on the World Wide Web does not routinely undergo a peer review process before dissemination. In this study, we present an analysis of 197 infertility-related Web sites for quality and accountability, using JAMA's minimal core standards for responsible print. Only 2% of the web sites analyzed met all four recommended standards, and 50.8% failed to report any of the four. Commercial web sites were more likely to fail to meet minimum standards (71.2%) than those with educational (46.8%) or supportive (29.8%) elements. Web sites with educational and informational components were most common (70.6%), followed by commercial sites (52.8%) and sites that offered a forum for infertility support and activism (28.9%). Internet resources available to infertile patients are at best variable. The current state of infertility-related materials on the World Wide Web offers unprecedented opportunities to improve services to a growing number of e-health users. Because of variations in quality of site content, women's health clinicians must assume responsibility for a new role as information monitor. This study provides assessment tools clinicians can apply and share with clients.

  20. Der Wandel in der Benutzung des World Wide Webs

    NARCIS (Netherlands)

    Weinreich, H.; Heinecke, A.; Obendorf, H.; Paul, H.; Mayer, M.; Herder, E.

    2006-01-01

    This contribution presents selected results of a long-term study of Web usage with 25 participants. A comparison with the results of the most recent comparable studies reveals a clear change in users' navigation behaviour. New offerings and services of the Web

  1. PhLeGrA: Graph Analytics in Pharmacology over the Web of Life Sciences Linked Open Data.

    Science.gov (United States)

    Kamdar, Maulik R; Musen, Mark A

    2017-04-01

    Integrated approaches for pharmacology are required for the mechanism-based predictions of adverse drug reactions that manifest due to concomitant intake of multiple drugs. These approaches require the integration and analysis of biomedical data and knowledge from multiple, heterogeneous sources with varying schemas, entity notations, and formats. To tackle these integrative challenges, the Semantic Web community has published and linked several datasets in the Life Sciences Linked Open Data (LSLOD) cloud using established W3C standards. We present the PhLeGrA platform for Linked Graph Analytics in Pharmacology in this paper. Through query federation, we integrate four sources from the LSLOD cloud and extract a drug-reaction network, composed of distinct entities. We represent this graph as a hidden conditional random field (HCRF), a discriminative latent variable model that is used for structured output predictions. We calculate the underlying probability distributions in the drug-reaction HCRF using the datasets from the U.S. Food and Drug Administration's Adverse Event Reporting System. We predict the occurrence of 146 adverse reactions due to multiple drug intake with an AUROC statistic greater than 0.75. The PhLeGrA platform can be extended to incorporate other sources published using Semantic Web technologies, as well as to discover other types of pharmacological associations.
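    The query-federation step described above can be illustrated with a minimal sketch: records about the same entity from heterogeneous sources are merged on a shared URI. The source names and fields below are invented for illustration, not the actual LSLOD schemas.

```python
# Toy federation: two sources describe the same drug under one URI; the
# merged view combines their fields into a single record.

def federate(*sources):
    """Merge dicts keyed by entity URI; later sources add fields."""
    merged = {}
    for source in sources:
        for uri, fields in source.items():
            merged.setdefault(uri, {}).update(fields)
    return merged

drugbank = {"drug:warfarin": {"target": "VKORC1"}}
faers = {"drug:warfarin": {"reaction": "haemorrhage"}}

graph = federate(drugbank, faers)
print(graph["drug:warfarin"])
```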

  2. CMS: a web-based system for visualization and analysis of genome-wide methylation data of human cancers.

    Science.gov (United States)

    Gu, Fei; Doderer, Mark S; Huang, Yi-Wen; Roa, Juan C; Goodfellow, Paul J; Kizer, E Lynette; Huang, Tim H M; Chen, Yidong

    2013-01-01

    DNA methylation of promoter CpG islands is associated with gene suppression, and its unique genome-wide profiles have been linked to tumor progression. Coupled with high-throughput sequencing technologies, researchers can now efficiently determine genome-wide methylation profiles in cancer cells. Experimental and computational technologies also make it possible to find the functional relationship between cancer-specific methylation patterns and their clinicopathological parameters. The Cancer Methylome System (CMS) is a web-based database application designed for the visualization, comparison and statistical analysis of human cancer-specific DNA methylation. Methylation intensities were obtained from MBDCap sequencing, pre-processed and stored in the database. 191 patient samples (169 tumor and 22 normal specimens) and 41 breast cancer cell lines are deposited in the database, comprising about 6.6 billion uniquely mapped sequence reads. This provides comprehensive genome-wide epigenetic portraits of human breast cancer and endometrial cancer. Two views are proposed for users to better understand methylation structure at the genomic level or systemic methylation alteration at the gene level. In addition, a variety of annotation tracks are provided to cover genomic information. CMS includes important analytic functions for interpretation of methylation data, such as the detection of differentially methylated regions, statistical calculation of global methylation intensities, multiple gene sets of biologically significant categories, and interactivity with UCSC via custom-track data. We also present examples of discoveries utilizing the framework. CMS provides visualization and analytic functions for cancer methylome datasets. A comprehensive collection of datasets, a variety of embedded analytic functions and extensive applications with biological and translational significance make this system powerful and unique in cancer methylation research. CMS is freely accessible
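    The detection of differentially methylated regions mentioned above can be sketched with a simple windowed comparison; the window size, threshold, and intensities below are illustrative assumptions, not CMS's actual parameters.

```python
# Toy DMR detection: slide a fixed window over per-position methylation
# intensities and flag windows whose tumor/normal mean difference exceeds
# a threshold. Real pipelines use statistical tests, not a raw cutoff.

def find_dmrs(tumor, normal, window=4, min_diff=0.3):
    dmrs = []
    for start in range(0, len(tumor) - window + 1, window):
        t = sum(tumor[start:start + window]) / window
        n = sum(normal[start:start + window]) / window
        if abs(t - n) >= min_diff:
            dmrs.append((start, start + window, round(t - n, 2)))
    return dmrs

tumor  = [0.9, 0.8, 0.9, 0.8, 0.1, 0.2, 0.1, 0.2]
normal = [0.2, 0.3, 0.2, 0.3, 0.1, 0.2, 0.2, 0.1]
print(find_dmrs(tumor, normal))  # first window hypermethylated in tumor
```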

  3. Persistent Identifiers for Improved Accessibility for Linked Data Querying

    Science.gov (United States)

    Shepherd, A.; Chandler, C. L.; Arko, R. A.; Fils, D.; Jones, M. B.; Krisnadhi, A.; Mecum, B.

    2016-12-01

    The adoption of linked open data principles within the geosciences has increased the amount of accessible information available on the Web. However, this data is difficult to consume for those who are unfamiliar with Semantic Web technologies such as the Web Ontology Language (OWL), the Resource Description Framework (RDF) and SPARQL, the RDF query language. Consumers would need to understand the structure of the data and how to efficiently query it. Furthermore, understanding how to query doesn't solve problems of poor precision and recall in search results. For consumers unfamiliar with the data, full-text searches are most accessible, but not ideal, as they forfeit the advantages of data disambiguation and co-reference resolution efforts. Conversely, URI searches across linked data can deliver improved search results, but knowledge of these exact URIs may remain difficult to obtain. The increased adoption of Persistent Identifiers (PIDs) can lead to improved linked data querying by a wide variety of consumers. Because PIDs resolve to a single entity, they are an excellent data point for disambiguating content. At the same time, PIDs are more accessible and prominent than a single data provider's linked data URI. When present in linked open datasets, PIDs provide balance between the technical and social hurdles of linked data querying, as evidenced by the NSF EarthCube GeoLink project. The GeoLink project, funded by NSF's EarthCube initiative, has brought together data repositories whose content includes field expeditions, laboratory analyses, journal publications, conference presentations, theses/reports, and funding awards that span scientific studies from marine geology to marine ecosystems and biogeochemistry to paleoclimatology.
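    The co-reference role of PIDs described above can be sketched as a simple grouping step: locally minted URIs from different providers that share a persistent identifier are recognised as the same entity. All URIs and DOIs below are invented for illustration.

```python
# Toy PID-based co-reference resolution: group provider-local URIs by a
# shared persistent identifier (here a DOI), disambiguating records that
# different repositories minted under different URIs.

from collections import defaultdict

records = [
    {"uri": "http://repoA.example/doc/42", "pid": "doi:10.1000/xyz"},
    {"uri": "http://repoB.example/item/7", "pid": "doi:10.1000/xyz"},
    {"uri": "http://repoB.example/item/8", "pid": "doi:10.1000/abc"},
]

same_entity = defaultdict(list)
for rec in records:
    same_entity[rec["pid"]].append(rec["uri"])

print(len(same_entity["doi:10.1000/xyz"]))  # two local URIs, one entity
```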

  4. When He Said Linking, He Really Meant Linking

    Science.gov (United States)

    Chudnov, Daniel

    2009-01-01

    There are many reasons to improve web links, starting with their design. The author tends to think about "design" on the web in terms of two things: (1) graphic/industrial design; and (2) human usability. A nice, clean URI (uniform resource identifier) that does not change, is readable to humans, is amenable to common web behaviors such as…

  5. Treatment of Wide-Neck Bifurcation Aneurysm Using "WEB Device Waffle Cone Technique".

    Science.gov (United States)

    Mihalea, Cristian; Caroff, Jildaz; Rouchaud, Aymeric; Pescariu, Sorin; Moret, Jacques; Spelle, Laurent

    2018-05-01

    The endovascular treatment of wide-neck bifurcation aneurysms can be challenging and often requires the use of adjunctive techniques and devices. We report our first experience of using a waffle-cone technique adapted to the Woven EndoBridge (WEB) device in a large-neck basilar tip aneurysm, suitable in cases where the use of Y stenting or other techniques is limited due to anatomic restrictions. The procedure was completed, and angiographic occlusion of the aneurysm was achieved 24 hours post treatment, as confirmed by digital subtraction angiography. No complications occurred. The case reported here was not suitable for Y stenting or deployment of the WEB device alone, due to the small caliber of both posterior cerebral arteries and their origin at the neck level. The main advantage of this technique is that both devices have a controlled detachment system and are fully independent. To our knowledge, this technique has not been reported previously and this modality of treatment has never been described in the literature. Copyright © 2018 Elsevier Inc. All rights reserved.

  6. Improving local clustering based top-L link prediction methods via asymmetric link clustering information

    Science.gov (United States)

    Wu, Zhihao; Lin, Youfang; Zhao, Yiji; Yan, Hongyan

    2018-02-01

    Networks can represent a wide range of complex systems, such as social, biological and technological systems. Link prediction is one of the most important problems in network analysis, and has attracted much research interest recently. Many link prediction methods have been proposed to solve this problem with various techniques. We can note that clustering information plays an important role in solving the link prediction problem. In the previous literature, we find that the node clustering coefficient appears frequently in many link prediction methods. However, the node clustering coefficient is limited in describing the role of a common neighbor in different local networks, because it cannot distinguish a node's different clustering abilities with respect to different node pairs. In this paper, we shift our focus from nodes to links, and propose the concept of the asymmetric link clustering (ALC) coefficient. Further, we improve three node-clustering-based link prediction methods via the concept of ALC. The experimental results demonstrate that ALC-based methods outperform node-clustering-based methods, especially achieving remarkable improvements on food web, hamster friendship and Internet networks. Moreover, compared with other methods, the performance of ALC-based methods is very stable in both globalized and personalized top-L link prediction tasks.
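    The node-clustering-based scoring the paper improves on can be sketched as follows: each common neighbour of a candidate pair contributes its local clustering coefficient to the link score. (The paper's ALC coefficient is defined per link; this shows only the node-based baseline, on an invented toy graph.)

```python
# Common-neighbour link scoring weighted by node clustering coefficients.
# graph: adjacency dict mapping each node to a set of neighbours.

def clustering_coefficient(graph, node):
    """Fraction of a node's neighbour pairs that are themselves linked."""
    nbrs = graph[node]
    k = len(nbrs)
    if k < 2:
        return 0.0
    links = sum(1 for a in nbrs for b in nbrs if a < b and b in graph[a])
    return 2.0 * links / (k * (k - 1))

def cn_clustering_score(graph, x, y):
    """Score a candidate link (x, y) by its common neighbours' clustering."""
    return sum(clustering_coefficient(graph, z) for z in graph[x] & graph[y])

graph = {
    "a": {"b", "c"}, "b": {"a", "c", "d"},
    "c": {"a", "b", "d"}, "d": {"b", "c"},
}
print(cn_clustering_score(graph, "a", "d"))  # common neighbours: b and c
```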

  7. Why should we publish Linked Data?

    Science.gov (United States)

    Blower, Jon; Riechert, Maik; Koubarakis, Manolis; Pace, Nino

    2016-04-01

    We use the Web every day to access information from all kinds of different sources. But the complexity and diversity of scientific data mean that discovering, accessing and interpreting data remains a large challenge to researchers, decision-makers and other users. Different sources of useful information on data, algorithms, instruments and publications are scattered around the Web. How can we link all these things together to help users to better understand and exploit earth science data? How can we combine scientific data with other relevant data sources, when standards for describing and sharing data vary so widely between communities? "Linked Data" is a term that describes a set of standards and "best practices" for sharing data on the Web (http://www.w3.org/standards/semanticweb/data). These principles can be summarised as follows: 1. Create unique and persistent identifiers for the important "things" in a community (e.g. datasets, publications, algorithms, instruments). 2. Allow users to "look up" these identifiers on the web to find out more information about them. 3. Make this information machine-readable in a community-neutral format (such as RDF, Resource Description Framework). 4. Within this information, embed links to other things and concepts and say how these are related. 5. Optionally, provide web service interfaces to allow the user to perform sophisticated queries over this information (using a language such as SPARQL). The promise of Linked Data is that, through these techniques, data will be more discoverable, more comprehensible and more usable by different communities, not just the community that produced the data. As a result, many data providers (particularly public-sector institutions) are now publishing data in this way. However, this area is still in its infancy in terms of real-world applications. Data users need guidance and tools to help them use Linked Data. Data providers need reassurance that the investments they are making in
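    Principles 1 to 4 above can be illustrated with a tiny in-memory triple store: identifiers for "things", machine-readable statements about them, and typed links between them. All identifiers below are invented for the example.

```python
# A minimal triple store: each statement is a (subject, predicate, object)
# triple, mirroring the RDF data model the principles describe.

triples = [
    ("ex:dataset42", "rdf:type",      "ex:Dataset"),
    ("ex:dataset42", "ex:producedBy", "ex:instrument7"),
    ("ex:paper9",    "ex:describes",  "ex:dataset42"),
]

def describe(subject):
    """'Look up' an identifier: everything stated about it (principle 2)."""
    return [(p, o) for s, p, o in triples if s == subject]

def incoming_links(obj):
    """Who points at this thing? (principle 4: embedded, typed links)."""
    return [(s, p) for s, p, o in triples if o == obj]

print(describe("ex:dataset42"))
print(incoming_links("ex:dataset42"))
```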

  8. Use of World Wide Web Server and Browser Software To Support a First-Year Medical Physiology Course.

    Science.gov (United States)

    Davis, Michael J.; And Others

    1997-01-01

    Describes the use of a World Wide Web server to support a team-taught physiology course for first-year medical students. The students' evaluations indicate that computer use in class made lecture material more interesting, while the online documents helped reinforce lecture materials and textbooks. Lists factors which contribute to the…

  9. Linked Registries: Connecting Rare Diseases Patient Registries through a Semantic Web Layer.

    Science.gov (United States)

    Sernadela, Pedro; González-Castro, Lorena; Carta, Claudio; van der Horst, Eelke; Lopes, Pedro; Kaliyaperumal, Rajaram; Thompson, Mark; Thompson, Rachel; Queralt-Rosinach, Núria; Lopez, Estrella; Wood, Libby; Robertson, Agata; Lamanna, Claudia; Gilling, Mette; Orth, Michael; Merino-Martinez, Roxana; Posada, Manuel; Taruscio, Domenica; Lochmüller, Hanns; Robinson, Peter; Roos, Marco; Oliveira, José Luís

    2017-01-01

    Patient registries are an essential tool to increase current knowledge regarding rare diseases. Understanding these data is a vital step to improve patient treatments and to create the most adequate tools for personalized medicine. However, the growing number of disease-specific patient registries also brings new technical challenges. Usually, these systems are developed as closed data silos, with independent formats and models, lacking comprehensive mechanisms to enable data sharing. To tackle these challenges, we developed a Semantic Web based solution that allows connecting distributed and heterogeneous registries, enabling the federation of knowledge between multiple independent environments. This semantic layer creates a holistic view over a set of anonymised registries, supporting semantic data representation, integrated access, and querying. The implemented system gave us the opportunity to answer challenging questions across dispersed rare disease patient registries. The interconnection between those registries using Semantic Web technologies benefits our final solution by allowing us to query single or multiple instances according to our needs. The outcome is a unique semantic layer, connecting miscellaneous registries and delivering a lightweight holistic perspective over the wealth of knowledge stemming from linked rare disease patient registries.

  10. VennDiagramWeb: a web application for the generation of highly customizable Venn and Euler diagrams.

    Science.gov (United States)

    Lam, Felix; Lalansingh, Christopher M; Babaran, Holly E; Wang, Zhiyuan; Prokopec, Stephenie D; Fox, Natalie S; Boutros, Paul C

    2016-10-03

    Visualization of data generated by high-throughput, high-dimensionality experiments is rapidly becoming a rate-limiting step in computational biology. There is an ongoing need to quickly develop high-quality visualizations that can be easily customized or incorporated into automated pipelines. This often requires an interface for manual plot modification, rapid cycles of tweaking visualization parameters, and the generation of graphics code. To facilitate this process for the generation of highly-customizable, high-resolution Venn and Euler diagrams, we introduce VennDiagramWeb: a web application for the widely used VennDiagram R package. VennDiagramWeb is hosted at http://venndiagram.res.oicr.on.ca/ . VennDiagramWeb allows real-time modification of Venn and Euler diagrams, with parameter setting through a web interface and immediate visualization of results. It allows customization of essentially all aspects of figures, but also supports integration into computational pipelines via download of R code. Users can upload data and download figures in a range of formats, and there is exhaustive support documentation. VennDiagramWeb allows the easy creation of Venn and Euler diagrams for computational biologists, and indeed many other fields. Its ability to support real-time graphics changes that are linked to downloadable code that can be integrated into automated pipelines will greatly facilitate the improved visualization of complex datasets. For application support please contact Paul.Boutros@oicr.on.ca.
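    The region counts a Venn diagram displays are just set-algebra sizes, which is worth seeing concretely. A sketch for two sets follows; the gene lists are invented for illustration (the actual tool is an R package, shown here in Python for consistency with the other examples).

```python
# Compute the three region sizes of a two-set Venn diagram.

def venn2_counts(a, b):
    return {
        "a_only": len(a - b),
        "b_only": len(b - a),
        "both":   len(a & b),
    }

genes_expA = {"TP53", "BRCA1", "MYC"}
genes_expB = {"MYC", "EGFR"}
print(venn2_counts(genes_expA, genes_expB))
```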

  11. Web Mining

    Science.gov (United States)

    Fürnkranz, Johannes

    The World-Wide Web provides every internet citizen with access to an abundance of information, but it becomes increasingly difficult to identify the relevant pieces of information. Research in web mining tries to address this problem by applying techniques from data mining and machine learning to Web data and documents. This chapter provides a brief overview of web mining techniques and research areas, most notably hypertext classification, wrapper induction, recommender systems and web usage mining.

  12. Virtual Web Services

    OpenAIRE

    Rykowski, Jarogniew

    2007-01-01

    In this paper we propose an application of software agents to provide Virtual Web Services. A Virtual Web Service is a linked collection of several real and/or virtual Web Services, and public and private agents, accessed by the user in the same way as a single real Web Service. A Virtual Web Service allows unrestricted comparison, information merging, pipelining, etc., of data coming from different sources and in different forms. Detailed architecture and functionality of a single Virtual We...

  13. CERN in a historic Global Web-cast

    CERN Multimedia

    2005-01-01

    On Thursday 1st December, CERN will be involved in 'Beyond Einstein', a 12-hour live world-wide web-cast, which will feature participants from across the globe, marking the World Year of Physics. CERN goes global: the 12-hour web-cast will unite different world time zones by means of the web. Viewers on the web will be able to tune into one of the most extensive videoconferences in the world to learn more about Einstein's physics and how it continues to influence cutting-edge research worldwide. The event kicks off at midday (CET) with a live presentation at CERN's Globe of Science and Innovation, featuring a symbolic link-up with the New Library of Alexandria in Egypt. There will then be transmissions from a host of research institutions, such as Imperial College, Fermilab and SLAC. There will also be live connections with Jerusalem, Taipei, San Francisco, Tasmania and even Antarctica. 'Connections will be established among virtually all the time zones on Earth, a perfect way to celebrate Einstein, who rev...

  14. Local Type Checking for Linked Data Consumers

    Directory of Open Access Journals (Sweden)

    Gabriel Ciobanu

    2013-07-01

    Full Text Available The Web of Linked Data is the culmination of over a decade of work by the Web standards community in their effort to make data more Web-like. We provide an introduction to the Web of Linked Data from the perspective of a Web developer who would like to build an application using Linked Data. We identify a weakness in the development stack: a lack of domain-specific scripting languages for designing background processes that consume Linked Data. To address this weakness, we design a scripting language with a simple but appropriate type system. In our proposed architecture some data is consumed from sources outside of the control of the system and some data is held locally. Stronger type assumptions can be made about the local data than about external data, hence our type system mixes static and dynamic typing. Throughout, we relate our work to the W3C recommendations that drive Linked Data, so our syntax is accessible to Web developers.
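    The idea that local data can carry stronger (static-like) type assumptions than external Linked Data, which must be checked at run time, can be sketched as follows. The record shape and checker are assumptions for illustration, not the paper's language.

```python
# External records are dynamically type-checked on arrival; local records
# are trusted to already satisfy the expected shape.

EXPECTED = {"name": str, "year": int}

def check_external(record):
    """Dynamically type-check a record fetched from an external source."""
    return all(isinstance(record.get(field), ty)
               for field, ty in EXPECTED.items())

local_record    = {"name": "CERN", "year": 1954}    # trusted, assumed typed
external_record = {"name": "CERN", "year": "1954"}  # year arrived as text

print(check_external(local_record))    # True
print(check_external(external_record)) # False: fails the dynamic check
```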

  15. EpiCollect+: linking smartphones to web applications for complex data collection projects.

    Science.gov (United States)

    Aanensen, David M; Huntley, Derek M; Menegazzo, Mirko; Powell, Chris I; Spratt, Brian G

    2014-01-01

    Previously, we have described the development of the generic mobile phone data gathering tool, EpiCollect, and an associated web application, providing two-way communication between multiple data gatherers and a project database. This software only allows data collection on the phone using a single questionnaire form that is tailored to the needs of the user (including a single GPS point and photo per entry), whereas many applications require a more complex structure, allowing users to link a series of forms in a linear or branching hierarchy, along with the addition of any number of media types accessible from smartphones and/or tablet devices (e.g., GPS, photos, videos, sound clips and barcode scanning). A much enhanced version of EpiCollect has been developed (EpiCollect+). The individual data collection forms in EpiCollect+ provide more design complexity than the single form used in EpiCollect, and the software allows the generation of complex data collection projects through the ability to link many forms together in a linear (or branching) hierarchy. Furthermore, EpiCollect+ allows the collection of multiple media types as well as standard text fields, increased data validation and form logic. The entire process of setting up a complex mobile phone data collection project to the specification of a user (project and form definitions) can be undertaken at the EpiCollect+ website using a simple 'drag and drop' procedure, with visualisation of the data gathered using Google Maps and charts at the project website. EpiCollect+ is suitable for situations where multiple users transmit complex data by mobile phone (or other Android devices) to a single project web database and is already being used for a range of field projects, particularly public health projects in sub-Saharan Africa. However, many uses can be envisaged from education, ecology and epidemiology to citizen science.
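    The linking of forms into a linear or branching hierarchy described above can be sketched as a small tree of form definitions. The form names and fields are invented for illustration, not EpiCollect+'s actual project format.

```python
# Toy model of a branching data-collection project: each form may link to
# child forms, and the project is the tree reachable from the root form.

class Form:
    def __init__(self, name, fields, children=None):
        self.name = name
        self.fields = fields
        self.children = children or []

    def flatten(self):
        """All forms reachable from this one, in definition order."""
        forms = [self.name]
        for child in self.children:
            forms.extend(child.flatten())
        return forms

visit = Form("visit", ["gps", "photo"], [
    Form("patient", ["age"], [Form("sample", ["barcode"])]),
    Form("site_notes", ["text"]),
])
print(visit.flatten())
```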

  16. An End User Development Approach for Mobile Web Augmentation

    Directory of Open Access Journals (Sweden)

    Gabriela Bosetti

    2017-01-01

    Full Text Available The trend towards mobile devices usage has made it possible for the Web to be conceived not only as an information space but also as a ubiquitous platform where users perform all kinds of tasks. In some cases, users access the Web with native mobile applications developed for well-known sites, such as, LinkedIn, Facebook, and Twitter. These native applications might offer further (e.g., location-based functionalities to their users in comparison with their corresponding Web sites because they were developed with mobile features in mind. However, many Web applications have no native counterpart and users access them using a mobile Web browser. Although the access to context information is not a complex issue nowadays, not all Web applications adapt themselves according to it or diversely improve the user experience by listening to a wide range of sensors. At some point, users might want to add mobile features to these Web sites, even if those features were not originally supported. In this paper, we present a novel approach to allow end users to augment their preferred Web sites with mobile features. We support our claims by presenting a framework for mobile Web augmentation, an authoring tool, and an evaluation with 21 end users.

  17. Taking risks on the world wide web: The impact of families and societies on adolescents' risky online behavior

    NARCIS (Netherlands)

    Notten, N.J.W.R.; Hof, S. van der; Berg, B. van den; Schermer, B.W.

    2014-01-01

    Children’s engagement in risky online behavior—such as providing personal information or agreeing to meet with a stranger—is an important predictor of whether they will encounter harmful content on the World Wide Web or be confronted with situations such as sexual harassment and privacy violations.

  18. Interfacce Web per database bibliografici il sistema di informazioni scientifiche del CERN

    CERN Document Server

    Brugnolo, F

    1997-01-01

    Analysis of how to develop and organise a scientific information service based on the World Wide Web, the specific characteristics of word-oriented databases, and the problems linked to information retrieval on the WWW. The analysis is done from both a theoretical and a practical point of view. The case of the CERN scientific information service is taken into account. We study the reorganisation of the whole architecture and the development of the Web user interface. We conclude with a description of the Personal Virtual Library service, developed for the CERN Library Catalogue.

  19. Real-Time Payload Control and Monitoring on the World Wide Web

    Science.gov (United States)

    Sun, Charles; Windrem, May; Givens, John J. (Technical Monitor)

    1998-01-01

    World Wide Web (W3) technologies such as the Hypertext Transfer Protocol (HTTP) and the Java object-oriented programming environment offer a powerful, yet relatively inexpensive, framework for distributed application software development. This paper describes the design of a real-time payload control and monitoring system that was developed with W3 technologies at NASA Ames Research Center. Based on the Java Development Kit (JDK) 1.1, the system uses an event-driven "publish and subscribe" approach to inter-process communication and graphical user-interface construction. A C Language Integrated Production System (CLIPS) compatible inference engine provides the back-end intelligent data processing capability, while the Oracle Relational Database Management System (RDBMS) provides the data management function. Preliminary evaluation shows acceptable performance for some classes of payloads, with Java's portability and multimedia support identified as the most significant benefits.
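    The event-driven publish-and-subscribe pattern mentioned above can be sketched in a few lines; the topic names and payloads are invented for illustration (the original system used Java, shown here in Python for consistency with the other examples).

```python
# Minimal publish/subscribe event bus: subscribers register callbacks for
# a topic; publishing a topic invokes every registered callback.

class EventBus:
    def __init__(self):
        self.subscribers = {}

    def subscribe(self, topic, callback):
        self.subscribers.setdefault(topic, []).append(callback)

    def publish(self, topic, payload):
        for callback in self.subscribers.get(topic, []):
            callback(payload)

bus = EventBus()
readings = []
bus.subscribe("payload/temperature", readings.append)
bus.publish("payload/temperature", 21.5)
bus.publish("payload/pressure", 101.3)  # no subscriber: silently dropped
print(readings)
```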

  20. The importance of link evidence in Wikipedia

    NARCIS (Netherlands)

    Kamps, J.; Koolen, M.

    2008-01-01

    Wikipedia is one of the most popular information sources on the Web. The free encyclopedia is densely linked. The link structure in Wikipedia differs from the Web at large: internal links in Wikipedia are typically based on words naturally occurring in a page, and link to another semantically

  1. EpiCollect: linking smartphones to web applications for epidemiology, ecology and community data collection.

    Directory of Open Access Journals (Sweden)

    David M Aanensen

    2009-09-01

    Full Text Available Epidemiologists and ecologists often collect data in the field and, on returning to their laboratory, enter their data into a database for further analysis. The recent introduction of mobile phones that utilise the open source Android operating system, and which include (among other features) both GPS and Google Maps, provides new opportunities for developing mobile phone applications, which in conjunction with web applications, allow two-way communication between field workers and their project databases. Here we describe a generic framework, consisting of mobile phone software, EpiCollect, and a web application located within www.spatialepidemiology.net. Data collected by multiple field workers can be submitted by phone, together with GPS data, to a common web database and can be displayed and analysed, along with previously collected data, using Google Maps (or Google Earth). Similarly, data from the web database can be requested and displayed on the mobile phone, again using Google Maps. Data filtering options allow the display of data submitted by the individual field workers or, for example, those data within certain values of a measured variable or a time period. Data collection frameworks utilising mobile phones with data submission to and from central databases are widely applicable and can give a field worker similar display and analysis tools on their mobile phone that they would have if viewing the data in their laboratory via the web. We demonstrate their utility for epidemiological data collection and display, and briefly discuss their application in ecological and community data collection. Furthermore, such frameworks offer great potential for recruiting 'citizen scientists' to contribute data easily to central databases through their mobile phone.

  2. EpiCollect: linking smartphones to web applications for epidemiology, ecology and community data collection.

    Science.gov (United States)

    Aanensen, David M; Huntley, Derek M; Feil, Edward J; al-Own, Fada'a; Spratt, Brian G

    2009-09-16

    Epidemiologists and ecologists often collect data in the field and, on returning to their laboratory, enter their data into a database for further analysis. The recent introduction of mobile phones that utilise the open source Android operating system, and which include (among other features) both GPS and Google Maps, provides new opportunities for developing mobile phone applications, which in conjunction with web applications, allow two-way communication between field workers and their project databases. Here we describe a generic framework, consisting of mobile phone software, EpiCollect, and a web application located within www.spatialepidemiology.net. Data collected by multiple field workers can be submitted by phone, together with GPS data, to a common web database and can be displayed and analysed, along with previously collected data, using Google Maps (or Google Earth). Similarly, data from the web database can be requested and displayed on the mobile phone, again using Google Maps. Data filtering options allow the display of data submitted by the individual field workers or, for example, those data within certain values of a measured variable or a time period. Data collection frameworks utilising mobile phones with data submission to and from central databases are widely applicable and can give a field worker similar display and analysis tools on their mobile phone that they would have if viewing the data in their laboratory via the web. We demonstrate their utility for epidemiological data collection and display, and briefly discuss their application in ecological and community data collection. Furthermore, such frameworks offer great potential for recruiting 'citizen scientists' to contribute data easily to central databases through their mobile phone.
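    The filtering described above, showing only entries from one field worker, within a value range, or within a time window, can be sketched as a simple record filter. The records and field names below are invented for illustration.

```python
# Filter submitted entries by worker, measured-value range, or date window.
# Any criterion left as None is ignored.

from datetime import date

entries = [
    {"worker": "fw1", "value": 3.0, "when": date(2009, 6, 1)},
    {"worker": "fw2", "value": 9.5, "when": date(2009, 6, 3)},
    {"worker": "fw1", "value": 7.2, "when": date(2009, 7, 9)},
]

def filter_entries(entries, worker=None, vmin=None, vmax=None,
                   start=None, end=None):
    keep = []
    for e in entries:
        if worker is not None and e["worker"] != worker:
            continue
        if vmin is not None and e["value"] < vmin:
            continue
        if vmax is not None and e["value"] > vmax:
            continue
        if start is not None and e["when"] < start:
            continue
        if end is not None and e["when"] > end:
            continue
        keep.append(e)
    return keep

print(len(filter_entries(entries, worker="fw1")))                     # 2
print(len(filter_entries(entries, vmin=5.0, end=date(2009, 6, 30))))  # 1
```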

  3. Augmenting the Web through Open Hypermedia

    DEFF Research Database (Denmark)

    Bouvin, N.O.

    2003-01-01

    Based on an overview of Web augmentation and detailing the three basic approaches to extending the hypermedia functionality of the Web, the author presents a general open hypermedia framework (the Arakne framework) to augment the Web. The aim is to provide users with the ability to link, annotate, and otherwise structure Web pages as they see fit. The paper further discusses the possibilities of the concept through the description of various experiments performed with an implementation of the framework, the Arakne Environment.

  4. Web Transfer Over Satellites Being Improved

    Science.gov (United States)

    Allman, Mark

    1999-01-01

    Extensive research conducted by NASA Lewis Research Center's Satellite Networks and Architectures Branch and Ohio University has demonstrated performance improvements in World Wide Web transfers over satellite-based networks. The use of a new version of the Hypertext Transfer Protocol (HTTP) reduced the time required to load web pages over a single Transmission Control Protocol (TCP) connection traversing a satellite channel. However, an older technique of simultaneously making multiple requests of a given server has been shown to provide even faster transfer time. Unfortunately, the use of multiple simultaneous requests has been shown to be harmful to the network in general. Therefore, we are developing new mechanisms for the HTTP protocol which may allow a single request at any given time to perform as well as, or better than, multiple simultaneous requests. In the course of study, we also demonstrated that the time for web pages to load is at least as short via a satellite link as it is via a standard 28.8-kbps dialup modem channel. This demonstrates that satellites are a viable means of accessing the Internet.
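
    The trade-off the abstract describes, where round-trip latency rather than bandwidth dominates page load time on a geostationary satellite path, can be illustrated with a toy first-order timing model. The formula and numbers below are a rough sketch for intuition only, not measurements or a model from the study.

```python
def page_load_time(n_objects, obj_kb, rtt_s, bw_kbps, conns):
    """Crude page-load estimate: handshakes for all connections proceed in
    parallel (one RTT of setup), then each connection serially issues a
    request/response (one RTT each) for its share of the objects, with the
    bottleneck bandwidth split across concurrent connections."""
    conns = min(conns, n_objects)
    per_conn = -(-n_objects // conns)                  # ceil division
    transfer_s = obj_kb * 8.0 / (bw_kbps / conns)      # slower on a shared link
    return rtt_s + per_conn * (rtt_s + transfer_s)

rtt, bw = 0.54, 1000      # GEO satellite RTT ~540 ms, 1 Mbit/s bottleneck
one = page_load_time(10, 8, rtt, bw, conns=1)
four = page_load_time(10, 8, rtt, bw, conns=4)
print(f"1 connection: {one:.2f}s, 4 connections: {four:.2f}s")
```

    Even in this crude model, parallel connections win because they overlap the per-object round trips, which is exactly why they were attractive over satellite links despite being harmful to the network at large.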

  5. Pilot using World Wide Web to prevent diabetes in adolescents.

    Science.gov (United States)

    Long, Joann D; Armstrong, Myrna L; Amos, Elizabeth; Shriver, Brent; Roman-Shriver, Carmen; Feng, Du; Harrison, Lanell; Luker, Scott; Nash, Anita; Blevins, Monica Witcher

    2006-02-01

    This pilot study tested the effects of an interactive nutrition education Web site on fruit, vegetable, and fat consumption in minority adolescents genetically at risk for Type 2 diabetes. A one-group, nonexperimental pretest-posttest focus group design was used. Twenty-one minority junior high adolescents in the sixth to eighth grades volunteered to participate. Participants received 5 hours of Web-based nutrition education over 3 weeks. A significant difference in fat consumption was supported by the computerized dietary assessment. No difference was found in fruit or vegetable consumption. Comparative data indicated a rise in body mass index (BMI) percentile from 88.03 (1999) to 88.40 (2002; boys) and from 88.25 (1999) to 91.2 (2002; girls). Focus group responses supported the adolescents' satisfaction with the Web-based intervention for nutrition education. Healthy eating interventions using Web-based nutrition education should be further investigated with adolescents.

  6. Open Hypermedia as User Controlled Meta Data for the Web

    DEFF Research Database (Denmark)

    Grønbæk, Kaj; Sloth, Lennert; Bouvin, Niels Olof

    2000-01-01

    By means of the Webvise system, OHIF structures can be authored, imposed on Web pages, and finally linked on the Web as any ordinary Web resource. Following a link to an OHIF file automatically invokes a Webvise download of the meta data structures, and the annotated Web content will be displayed in the browser. Moreover, the Webvise system provides support for users to create, manipulate, and share the OHIF structures together with custom-made web pages and MS Office 2000 documents on WebDAV servers. These Webvise facilities go beyond earlier open hypermedia systems in that they now allow fully distributed open hypermedia linking between Web pages and WebDAV-aware desktop applications. The paper describes the OHIF format and demonstrates how the Webvise system handles OHIF. Finally, it argues for better support for handling user-controlled meta data, e.g. support for linking in non-XML data.

  7. The Evolution of Web Searching.

    Science.gov (United States)

    Green, David

    2000-01-01

    Explores the interrelation between Web publishing and information retrieval technologies and lists new approaches to Web indexing and searching. Highlights include Web directories; search engines; portalisation; Internet service providers; browser providers; meta search engines; popularity based analysis; natural language searching; links-based…

  8. Interactive fluka: a world wide web version for a simulation code in proton therapy

    International Nuclear Information System (INIS)

    Garelli, S.; Giordano, S.; Piemontese, G.; Squarcia, S.

    1998-01-01

    We considered the possibility of using the simulation code FLUKA in the framework of TERA. We provided a World Wide Web interface in which an interactive version of the code is available. The user can find instructions for the installation, an on-line FLUKA manual, and interactive windows for inserting, in a very simple way, all the data required by the configuration running file. The choice of a database allows more versatile use for data verification and updates, recall of old simulations, and comparison with selected examples. A completely new tool for geometry drawing under Java has also been developed. (authors)

  9. Web Search Engines

    OpenAIRE

    Rajashekar, TB

    1998-01-01

    The World Wide Web is emerging as an all-in-one information source. Tools for searching Web-based information include search engines, subject directories and meta search tools. We take a look at key features of these tools and suggest practical hints for effective Web searching.

  10. The use of web ontology languages and other semantic web tools in drug discovery.

    Science.gov (United States)

    Chen, Huajun; Xie, Guotong

    2010-05-01

    To optimize drug development processes, pharmaceutical companies require principled approaches to integrate disparate data on a unified infrastructure, such as the web. The semantic web, developed on the web technology, provides a common, open framework capable of harmonizing diversified resources to enable networked and collaborative drug discovery. We survey the state of art of utilizing web ontologies and other semantic web technologies to interlink both data and people to support integrated drug discovery across domains and multiple disciplines. Particularly, the survey covers three major application categories including: i) semantic integration and open data linking; ii) semantic web service and scientific collaboration and iii) semantic data mining and integrative network analysis. The reader will gain: i) basic knowledge of the semantic web technologies; ii) an overview of the web ontology landscape for drug discovery and iii) a basic understanding of the values and benefits of utilizing the web ontologies in drug discovery. i) The semantic web enables a network effect for linking open data for integrated drug discovery; ii) The semantic web service technology can support instant ad hoc collaboration to improve pipeline productivity and iii) The semantic web encourages publishing data in a semantic way such as resource description framework attributes and thus helps move away from a reliance on pure textual content analysis toward more efficient semantic data mining.

  11. Web Science emerges

    OpenAIRE

    Shadbolt, Nigel; Berners-Lee, Tim

    2008-01-01

    The relentless rise in Web pages and links is creating emergent properties, from social networks to virtual identity theft, that are transforming society. A new discipline, Web Science, aims to discover how Web traits arise and how they can be harnessed or held in check to benefit society. Important advances are beginning to be made; more work can solve major issues such as securing privacy and conveying trust.

  12. Cpf1-Database: web-based genome-wide guide RNA library design for gene knockout screens using CRISPR-Cpf1.

    Science.gov (United States)

    Park, Jeongbin; Bae, Sangsu

    2018-03-15

    Following the type II CRISPR-Cas9 system, type V CRISPR-Cpf1 endonucleases have been found to be applicable for genome editing in various organisms in vivo. However, there are as yet no web-based tools capable of optimally selecting guide RNAs (gRNAs) among all possible genome-wide target sites. Here, we present Cpf1-Database, a genome-wide gRNA library design tool for LbCpf1 and AsCpf1, which have DNA recognition sequences of 5'-TTTN-3' at the 5' ends of target sites. Cpf1-Database provides a sophisticated but simple way to design gRNAs for AsCpf1 nucleases on the genome scale. One can easily access the data using a straightforward web interface, and the powerful collections feature makes it possible to design gRNAs for thousands of genes in a short time. Free access at http://www.rgenome.net/cpf1-database/. sangsubae@hanyang.ac.kr.
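
    The first step such a tool performs can be sketched as a scan for the 5'-TTTN-3' recognition sequence followed by extraction of the downstream protospacer. The 23-nt guide length and the sense-strand-only scan below are simplifying assumptions for illustration; Cpf1-Database's actual design rules (both strands, scoring, genome-scale indexing) are more involved.

```python
def find_cpf1_targets(seq, guide_len=23):
    """Scan a DNA sequence (sense strand only, for simplicity) for
    5'-TTTN-3' PAM sites and return (position, PAM, protospacer) tuples
    for the guide_len bases immediately downstream of each PAM."""
    seq = seq.upper()
    hits = []
    for i in range(len(seq) - 4 - guide_len + 1):
        pam = seq[i:i + 4]
        if pam.startswith("TTT"):          # TTTN: any base in the 4th position
            hits.append((i, pam, seq[i + 4:i + 4 + guide_len]))
    return hits

demo = "AATTTG" + "ACGT" * 6              # one TTTN PAM (TTTG) at position 2
print(find_cpf1_targets(demo))
```

    A real designer would also scan the reverse complement and filter candidates for off-target matches across the whole genome.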

  13. La salud de las web universitarias españolas

    Directory of Open Access Journals (Sweden)

    Thelwall, Mike

    2003-09-01

    The Web has become an important tool for universities, and one that is employed in a variety of ways. Examples are: disseminating and publicising research findings and activities; publishing teaching and administrative information for students; and collaborating with other institutions nationally and internationally. But how effectively are Spanish universities using the Web, and what information can be gained about online communication patterns through the study of Web links? This paper reports on an investigation into 64 university Web sites that were indexed using a specialist information science Web crawler and analysed using associated software. University Web sites varied widely in size, and universities attracted links from others broadly in proportion to their site size. The Spanish academic Web was found to lag behind those of the four countries that it was compared to. However, the most commonly targeted top-level Internet domains were from non-Spanish-speaking countries around the world with high Web use, showing a broad international perspective and a high degree of multilingualism among Web authors. The most highly targeted pages were mainly those that attracted automatically generated links, but several government ministries were a surprise inclusion.

    The Web has become an important tool for universities, where it is used in a wide variety of ways, such as publishing and disseminating research activities and results, providing administrative and academic information of interest to students, or facilitating collaboration with other national and international institutions. But how are Spanish universities actually using the Web, and what information can be obtained about their modes of online communication through the study of web links? To find answers, 64 university Web sites were investigated using a robot

  14. Semantic Web Primer

    NARCIS (Netherlands)

    Antoniou, Grigoris; Harmelen, Frank van

    2004-01-01

    The development of the Semantic Web, with machine-readable content, has the potential to revolutionize the World Wide Web and its use. A Semantic Web Primer provides an introduction and guide to this still emerging field, describing its key ideas, languages, and technologies. Suitable for use as a

  15. Educational Applications on the World Wide Web: An Example Using Amphion

    Science.gov (United States)

    Friedman, Jane

    1998-01-01

    There is a great deal of excitement about using the internet and the World Wide Web in education. The possibilities are exciting, and there is a wealth and variety of material on the web. There are, however, many problems: problems of access and resources; problems of quality, since for every excellent resource there are many poor ones; and insufficiently explored problems of teacher training and motivation. For example, Wiesenmayer and Meadows report on a study of 347 West Virginia science teachers. These teachers were enrolled in a week-long summer workshop to introduce them to the internet and its educational potential. The teachers were asked to review science sites as to overall quality and then about their usefulness in their own classrooms. The teachers were enthusiastic about the web, and gave two-thirds of the sites high ratings, and essentially all the rest average ratings. But alarmingly, over 80% of these sites were viewed as having no direct applicability in the teachers' own classrooms. This summer I was assigned to work on the Amphion project in the Automated Software Engineering Group under the leadership of Michael Lowry. I wished to find educational applications of the Amphion system, which in its current implementation can be used to create Fortran programs and animations using the SPICE libraries created by the NAIF group at JPL. Specifically, I sought an application that provided real added educational value, was in line with educational curriculum standards, and served a documented need of the educational community. The application selected was teaching about the causes of the seasons, at approximately the fourth- to sixth-grade level. This topic was chosen because it is in line with national curriculum standards. The fourth- to sixth-grade level was selected to coincide with the grade level served by the Ames Aerospace Encounter, which serves 10,000 children a year on field trips. The hope is that

  16. Cytological analysis of atypical squamous epithelial cells of undetermined significance using the world wide web.

    Science.gov (United States)

    Washiya, Kiyotada; Abe, Ichinosuke; Ambo, Junichi; Iwai, Muneo; Okusawa, Estuko; Asanuma, Kyousuke; Watanabe, Jun

    2011-01-01

    The low-level consistency of the cytodiagnosis of uterine cervical atypical squamous epithelial cells of undetermined significance (ASC-US) employing the Bethesda System has been reported, suggesting the necessity of a wide survey. We presented cases judged as ASC-US on the Web and analyzed the voting results to investigate ASC-US cytologically. Cytology samples from 129 patients diagnosed with ASC-US were used. Images of several atypical cells observed in these cases were presented on the Web. The study, based on the voting results, was presented and opinions were exchanged at the meeting of the Japanese Society of Clinical Cytology. The final diagnosis of ASC-US was benign lesions in 76 cases and low- and high-grade squamous intraepithelial lesions in 44, but no definite diagnosis could be made for the remaining 9. The total number of votes was 17,884, with a 36.5% consistency of cases judged as ASC-US. Benign cases were divided into 6 categories. Four categories not corresponding to the features of koilocytosis and small abnormal keratinized cells were judged as negative for an intraepithelial lesion or malignancy at a high rate. A Web-based survey, which can be viewed at any time, would be useful for facilitating the sharing of cases and increasing consistency. Copyright © 2011 S. Karger AG, Basel.

  17. TOGA COARE Satellite data summaries available on the World Wide Web

    Science.gov (United States)

    Chen, S. S.; Houze, R. A., Jr.; Mapes, B. E.; Brodzick, S. R.; Yutler, S. E.

    1995-01-01

    Satellite data summary images and analysis plots from the Tropical Ocean Global Atmosphere Coupled Ocean-Atmosphere Response Experiment (TOGA COARE), which were initially prepared in the field at the Honiara Operations Center, are now available on the Internet via World Wide Web browsers such as Mosaic. These satellite data summaries consist of products derived from the Japanese Geosynchronous Meteorological Satellite IR data: a time-size series of the distribution of contiguous cold cloudiness areas, weekly percent high cloudiness (PHC) maps, and a five-month time-longitudinal diagram illustrating the zonal motion of large areas of cold cloudiness. The weekly PHC maps are overlaid with weekly mean 850-hPa wind calculated from the European Centre for Medium-Range Weather Forecasts (ECMWF) global analysis field and can be viewed as an animation loop. These satellite summaries provide an overview of spatial and temporal variabilities of the cloud population and a large-scale context for studies concerning specific processes of various components of TOGA COARE.

  18. A brief history of the World Wide Web: where it was invented, how it's used, and where it's headed

    CERN Document Server

    Kyrnin, Jennifer

    2005-01-01

    The World Wide Web has its historical roots in things such as the creation of the telegraph, the launching of Sputnik and more, but it really all started in March 1989, when Tim Berners-Lee, a computer scientist at CERN in Geneva, wrote a paper called "Information Management: A Proposal".

  19. Effects of Learning Style and Training Method on Computer Attitude and Performance in World Wide Web Page Design Training.

    Science.gov (United States)

    Chou, Huey-Wen; Wang, Yu-Fang

    1999-01-01

    Compares the effects of two training methods on computer attitude and performance in a World Wide Web page design program in a field experiment with high school students in Taiwan. Discusses individual differences, Kolb's Experiential Learning Theory and Learning Style Inventory, Computer Attitude Scale, and results of statistical analyses.…

  20. Reasoning techniques for the web of data

    CERN Document Server

    Hogan, A

    2014-01-01

    Linked Data publishing has brought about a novel "Web of Data": a wealth of diverse, interlinked, structured data published on the Web. These Linked Datasets are described using the Semantic Web standards and are openly available to all, produced by governments, businesses, communities and academia alike. However, the heterogeneity of such data - in terms of how resources are described and identified - poses major challenges to potential consumers. Herein, we examine use cases for pragmatic, lightweight reasoning techniques that leverage Web vocabularies (described in RDFS and OWL) to better i

  1. Cooperative Mobile Web Browsing

    DEFF Research Database (Denmark)

    Perrucci, GP; Fitzek, FHP; Zhang, Qi

    2009-01-01

    This paper advocates a novel approach for mobile web browsing based on cooperation among wireless devices within close proximity operating in a cellular environment. In the actual state of the art, mobile phones can access the web using different cellular technologies. However, the supported data......-range links can then be used for cooperative mobile web browsing. By implementing the cooperative web browsing on commercial mobile phones, it will be shown that better performance is achieved in terms of increased data rate and therefore reduced access times, resulting in a significantly enhanced web...

  2. Editorial for the special issue on "The Semantic Web for all" of the Semantic Web Journal (SWJ)

    NARCIS (Netherlands)

    Guéret, Christophe; Boyera, Stephane; Powell, Mike; Murillo, Martin

    2014-01-01

    Over the past few years Semantic Web technologies have brought significant changes in the way structured data is published, shared and consumed on the Web. Emerging online applications based on the Web of Objects or Linked Open Data can use the Web as a platform to exchange and reason over

  3. Web wisdom how to evaluate and create information quality on the Web

    CERN Document Server

    Alexander, Janet E

    1999-01-01

    Web Wisdom is an essential reference for anyone needing to evaluate or establish information quality on the World Wide Web. The book includes easy to use checklists for step-by-step quality evaluations of virtually any Web page. The checklists can also be used by Web authors to help them ensure quality information on their pages. In addition, Web Wisdom addresses other important issues, such as understanding the ways that advertising and sponsorship may affect the quality of Web information. It features: * a detailed discussion of the items involved in evaluating Web information; * checklists

  4. Presentation of klystron history and statistics by World-Wide Web

    International Nuclear Information System (INIS)

    Kamikubota, N.; Furukawa, K.

    2000-01-01

    A web-based system for browsing klystron histories and statistics has been developed for the KEKB e-/e+ linac. This system enables linac staff to investigate various klystron histories, such as recent trends of ES (down frequency/reflection/high voltage), from any convenient PC, Mac, or console where a web browser is available. This system started in January 2000 and has become an indispensable tool for the linac staff. (author)

  5. Students' Perceptions of the Effectiveness of the World Wide Web as a Research and Teaching Tool in Science Learning.

    Science.gov (United States)

    Ng, Wan; Gunstone, Richard

    2002-01-01

    Investigates the use of the World Wide Web (WWW) as a research and teaching tool in promoting self-directed learning groups of 15-year-old students. Discusses the perceptions of students of the effectiveness of the WWW in assisting them with the construction of knowledge on photosynthesis and respiration. (Contains 33 references.) (Author/YDS)

  6. "Così abbiamo creato il World Wide Web"

    CERN Multimedia

    Sigiani, GianLuca

    2002-01-01

    Meeting with Robert Cailliau, scientist and pioneer of the web, who, in a book, tells how his team at CERN in Geneva transformed the Internet (an instrument originally used for military purposes) into one of the most revolutionary mass-media tools ever (1 page).

  7. WEB STRUCTURE MINING USING PAGERANK, IMPROVED PAGERANK – AN OVERVIEW

    Directory of Open Access Journals (Sweden)

    V. Lakshmi Praba

    2011-03-01

    Web Mining is the extraction of interesting and potentially useful patterns and information from the Web. It includes Web documents, hyperlinks between documents, and usage logs of web sites. The significant tasks for web mining can be listed as Information Retrieval, Information Selection/Extraction, Generalization, and Analysis. Web information retrieval tools consider only the text on pages and ignore information in the links. The goal of Web structure mining is to explore structural summaries of the web. Web structure mining focuses on link information, an important aspect of web data. This paper presents an overview of PageRank and Improved PageRank and their working functionality in web structure mining.
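
    As a concrete reference point for the algorithm the paper surveys, here is plain power-iteration PageRank over a toy link graph. This is the classic formulation only, not the "Improved PageRank" variant the paper discusses.

```python
def pagerank(links, d=0.85, iters=50):
    """Power-iteration PageRank over a dict {page: [outgoing links]}."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iters):
        new = {p: (1 - d) / n for p in pages}
        for p, outs in links.items():
            if outs:
                for q in outs:                 # split p's rank over its links
                    new[q] += d * rank[p] / len(outs)
            else:
                for q in pages:                # dangling page: spread evenly
                    new[q] += d * rank[p] / n
        rank = new
    return rank

web = {"A": ["B", "C"], "B": ["C"], "C": ["A"]}
pr = pagerank(web)
print(sorted(pr, key=pr.get, reverse=True))    # pages by descending rank
```

    In this tiny graph, C ranks highest because it receives links from both A and B, which is exactly the link-information signal that pure text-based retrieval tools ignore.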

  8. An Exploratory Survey of Digital Libraries on the World Wide Web: Art and Literature of the Early Italian Renaissance.

    Science.gov (United States)

    McKibben, Suzanne J.

    This study assessed the ongoing development of digital libraries (DLs) on the World Wide Web. DLs of art and literature were surveyed for selected works from the early Italian Renaissance in order to gain insight into the current trends prevalent throughout the larger population of DLs. The following artists and authors were selected for study:…

  9. Up in the Air: When Homes Meet the Web of Things

    OpenAIRE

    Yao, Lina; Sheng, Quan Z.; Benatallah, Boualem; Dustdar, Schahram; Wang, Xianzhi; Shemshadi, Ali; Ngu, Anne H. H.

    2015-01-01

    The emerging Internet of Things (IoT) will comprise billions of Web-enabled objects (or "things") where such objects can sense, communicate, compute and potentially actuate. WoT is essentially the embodiment of the evolution from systems linking digital documents to systems relating digital information to real-world physical items. It is widely understood that significant technical challenges exist in developing applications in the WoT environment. In this paper, we report our practical exper...

  10. Testing Quantum Models of Conjunction Fallacy on the World Wide Web

    Science.gov (United States)

    Aerts, Diederik; Arguëlles, Jonito Aerts; Beltran, Lester; Beltran, Lyneth; de Bianchi, Massimiliano Sassoli; Sozzo, Sandro; Veloz, Tomas

    2017-12-01

    The `conjunction fallacy' has been extensively debated by scholars in cognitive science and, in recent times, the discussion has been enriched by the proposal of modeling the fallacy using the quantum formalism. Two major quantum approaches have been put forward: the first assumes that respondents use a two-step sequential reasoning and that the fallacy results from the presence of `question order effects'; the second assumes that respondents evaluate the cognitive situation as a whole and that the fallacy results from the `emergence of new meanings', as an `effect of overextension' in the conceptual conjunction. Thus, the question arises whether and to what extent conjunction fallacies result from `order effects' or, instead, from `emergence effects'. To help clarify this situation, we propose to use the World Wide Web as an `information space' that can be interrogated both in a sequential and non-sequential way, to test these two quantum approaches. We find that `emergence effects', and not `order effects', should be considered the main cognitive mechanism producing the observed conjunction fallacies.

  11. A World Wide Web-based antimicrobial stewardship program improves efficiency, communication, and user satisfaction and reduces cost in a tertiary care pediatric medical center.

    Science.gov (United States)

    Agwu, Allison L; Lee, Carlton K K; Jain, Sanjay K; Murray, Kara L; Topolski, Jason; Miller, Robert E; Townsend, Timothy; Lehmann, Christoph U

    2008-09-15

    Antimicrobial stewardship programs aim to reduce inappropriate hospital antimicrobial use. At the Johns Hopkins Children's Medical and Surgical Center (Baltimore, MD), we implemented a World Wide Web-based antimicrobial restriction program to address problems with the existing restriction program. A user survey identified opportunities for improvement of an existing antimicrobial restriction program and resulted in subsequent design, implementation, and evaluation of a World Wide Web-based antimicrobial restriction program at a 175-bed, tertiary care pediatric teaching hospital. The program provided automated clinical decision support, facilitated approval, and enhanced real-time communication among prescribers, pharmacists, and pediatric infectious diseases fellows. Approval status, duration, and rationale; missing request notifications; and expiring approvals were stored in a database that is accessible via a secure Intranet site. Before and after implementation of the program, user satisfaction, reports of missed and/or delayed doses, antimicrobial dispensing times, and cost were evaluated. After implementation of the program, there was a $370,069 reduction in projected annual cost associated with restricted antimicrobial use and an 11.6% reduction in the number of dispensed doses. User satisfaction increased from 22% to 68% and from 13% to 69% among prescribers and pharmacists, respectively. There were 21% and 32% reductions in the number of prescriber reports of missed and delayed doses, respectively, and there was a 37% reduction in the number of pharmacist reports of delayed approvals; measured dispensing times were unchanged (P = .24). In addition, 40% fewer restricted antimicrobial-related phone calls were noted by the pharmacy. The World Wide Web-based antimicrobial approval program led to improved communication, more-efficient antimicrobial administration, increased user satisfaction, and significant cost savings. Integrated tools, such as this World

  12. Architecture and the Web.

    Science.gov (United States)

    Money, William H.

    Instructors should be concerned with how to incorporate the World Wide Web into an information systems (IS) curriculum organized across three areas of knowledge: information technology, organizational and management concepts, and theory and development of systems. The Web fits broadly into the information technology component. For the Web to be…

  13. The BiSciCol Triplifier: bringing biodiversity data to the Semantic Web.

    Science.gov (United States)

    Stucky, Brian J; Deck, John; Conlin, Tom; Ziemba, Lukasz; Cellinese, Nico; Guralnick, Robert

    2014-07-29

    Recent years have brought great progress in efforts to digitize the world's biodiversity data, but integrating data from many different providers, and across research domains, remains challenging. Semantic Web technologies have been widely recognized by biodiversity scientists for their potential to help solve this problem, yet these technologies have so far seen little use for biodiversity data. Such slow uptake has been due, in part, to the relative complexity of Semantic Web technologies along with a lack of domain-specific software tools to help non-experts publish their data to the Semantic Web. The BiSciCol Triplifier is new software that greatly simplifies the process of converting biodiversity data in standard, tabular formats, such as Darwin Core-Archives, into Semantic Web-ready Resource Description Framework (RDF) representations. The Triplifier uses a vocabulary based on the popular Darwin Core standard, includes both Web-based and command-line interfaces, and is fully open-source software. Unlike most other RDF conversion tools, the Triplifier does not require detailed familiarity with core Semantic Web technologies, and it is tailored to a widely popular biodiversity data format and vocabulary standard. As a result, the Triplifier can often fully automate the conversion of biodiversity data to RDF, thereby making the Semantic Web much more accessible to biodiversity scientists who might otherwise have relatively little knowledge of Semantic Web technologies. Easy availability of biodiversity data as RDF will allow researchers to combine data from disparate sources and analyze them with powerful linked data querying tools. However, before software like the Triplifier, and Semantic Web technologies in general, can reach their full potential for biodiversity science, the biodiversity informatics community must address several critical challenges, such as the widespread failure to use robust, globally unique identifiers for biodiversity data.
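
    A much-simplified sketch of what a "triplifier" does: turn each row of a Darwin Core-style table into subject-predicate-object triples. The term URIs below follow the public Darwin Core namespace, but the record URI scheme and output shape are invented for illustration and are not the BiSciCol Triplifier's actual output format.

```python
import csv
import io

DWC = "http://rs.tdwg.org/dwc/terms/"      # Darwin Core term namespace

def triplify(csv_text, id_column="occurrenceID"):
    """Convert a Darwin Core-style CSV into a list of RDF-like triples,
    one triple per non-empty, non-identifier cell."""
    rows = csv.DictReader(io.StringIO(csv_text))
    triples = []
    for row in rows:
        subject = f"<urn:example:{row[id_column]}>"   # illustrative URI scheme
        for column, value in row.items():
            if column != id_column and value:
                triples.append((subject, f"<{DWC}{column}>", f'"{value}"'))
    return triples

table = "occurrenceID,scientificName,country\nocc1,Puma concolor,Brazil\n"
for s, p, o in triplify(table):
    print(s, p, o, ".")
```

    Once data from many providers is expressed as triples against a shared vocabulary like Darwin Core, it can be merged and queried with standard linked-data tools, which is the integration benefit the abstract describes.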

  14. Medical knowledge packages and their integration into health-care information systems and the World Wide Web.

    Science.gov (United States)

    Adlassnig, Klaus-Peter; Rappelsberger, Andrea

    2008-01-01

    Software-based medical knowledge packages (MKPs) are packages of highly structured medical knowledge that can be integrated into various health-care information systems or the World Wide Web. They have been established to provide different forms of clinical decision support, such as textual interpretation of combinations of laboratory test results, generating diagnostic hypotheses as well as confirmed and excluded diagnoses to support differential diagnosis in internal medicine, or early identification and automatic monitoring of hospital-acquired infections. Technically, an MKP may consist of a number of inter-connected Arden Medical Logic Modules. Several MKPs have been integrated thus far into hospital, laboratory, and departmental information systems. This has resulted in useful and widely accepted software-based clinical decision support for the benefit of the patient, the physician, and the organization funding the health care system.

  15. Medium-sized Universities Connect to Their Libraries: Links on University Home Pages and User Group Pages

    Directory of Open Access Journals (Sweden)

    Pamela Harpel-Burk

    2006-03-01

    Full Text Available From major tasks—such as recruitment of new students and staff—to the more mundane but equally important tasks—such as providing directions to campus—college and university Web sites perform a wide range of tasks for a varied assortment of users. Overlapping functions and user needs combine to call for a Web site with three major functions: promotion and marketing, access to online services, and providing a means of communication between individuals and groups. In turn, college and university Web sites that provide links to their library home page can be valuable assets for recruitment and public relations, and for helping users locate online services.

  16. A Web-Based Comparative Genomics Tutorial for Investigating Microbial Genomes

    Directory of Open Access Journals (Sweden)

    Michael Strong

    2009-12-01

    Full Text Available As the number of completely sequenced microbial genomes continues to rise at an impressive rate, it is important to prepare students with the skills necessary to investigate microorganisms at the genomic level. As a part of the core curriculum for first-year graduate students in the biological sciences, we have implemented a web-based tutorial to introduce students to the fields of comparative and functional genomics. The tutorial focuses on recent computational methods for identifying functionally linked genes and proteins on a genome-wide scale and was used to introduce students to the Rosetta Stone, Phylogenetic Profile, conserved Gene Neighbor, and Operon computational methods. Students learned to use a number of publicly available web servers and databases to identify functionally linked genes in the Escherichia coli genome, with emphasis on genome organization and operon structure. The overall effectiveness of the tutorial was assessed based on student evaluations and homework assignments. The tutorial is available to other educators at http://www.doe-mbi.ucla.edu/~strong/m253.php.
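    Of the methods the tutorial covers, the Phylogenetic Profile method is the simplest to illustrate: genes whose presence/absence patterns across many genomes match are predicted to be functionally linked. The sketch below uses invented gene names and a toy set of five genomes; it is an illustration of the idea, not the tutorial's own exercises.

```python
# Toy Phylogenetic Profile illustration: each gene gets a presence (1) /
# absence (0) vector across 5 hypothetical genomes; gene pairs with the
# smallest profile distance are candidate functional partners.

profiles = {
    "geneA": (1, 0, 1, 1, 0),
    "geneB": (1, 0, 1, 1, 0),   # identical profile to geneA
    "geneC": (0, 1, 0, 0, 1),
}

def hamming(p, q):
    """Number of genomes in which the two profiles disagree."""
    return sum(a != b for a, b in zip(p, q))

pairs = sorted(
    (hamming(profiles[g1], profiles[g2]), g1, g2)
    for g1 in profiles for g2 in profiles if g1 < g2)

# The lowest-distance pair is the best candidate for a functional link.
print(pairs[0])  # → (0, 'geneA', 'geneB')
```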

  17. Spectral properties of the Google matrix of the World Wide Web and other directed networks.

    Science.gov (United States)

    Georgeot, Bertrand; Giraud, Olivier; Shepelyansky, Dima L

    2010-05-01

    We study numerically the spectrum and eigenstate properties of the Google matrix of various examples of directed networks such as vocabulary networks of dictionaries and university World Wide Web networks. The spectra have gapless structure in the vicinity of the maximal eigenvalue for Google damping parameter α equal to unity. The vocabulary networks have relatively homogeneous spectral density, while university networks have pronounced spectral structures which change from one university to another, reflecting specific properties of the networks. We also determine specific properties of eigenstates of the Google matrix, including the PageRank. The fidelity of the PageRank is proposed as a characterization of its stability.
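    The object under study can be reproduced in miniature. The sketch below builds the Google matrix G = αS + (1−α)/N for a toy three-node directed network and iterates it to the PageRank vector; the network and the damping parameter α = 0.85 are illustrative choices, not taken from the paper.

```python
# Build the Google matrix of a toy directed network (column-stochastic
# convention) and obtain PageRank by power iteration. Network and alpha
# are illustrative, not the networks analyzed in the abstract above.

def google_matrix(links, n, alpha=0.85):
    """links: {node: [out-neighbours]}; dangling nodes spread uniformly."""
    G = [[(1 - alpha) / n] * n for _ in range(n)]
    for i in range(n):
        outs = links.get(i, [])
        if outs:
            for j in outs:
                G[j][i] += alpha / len(outs)   # column i distributes to rows j
        else:
            for j in range(n):
                G[j][i] += alpha / n           # dangling node: uniform spread
    return G

def pagerank(G, iterations=100):
    n = len(G)
    p = [1.0 / n] * n
    for _ in range(iterations):
        p = [sum(G[i][j] * p[j] for j in range(n)) for i in range(n)]
    return p

links = {0: [1, 2], 1: [2], 2: [0]}   # toy 3-node directed network
p = pagerank(google_matrix(links, 3))
print([round(x, 3) for x in p])       # components sum to 1
```

    Each column of G sums to one by construction, so the iteration preserves total probability; the α = 1 limit discussed in the abstract corresponds to dropping the uniform teleportation term.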

  18. Historical Quantitative Reasoning on the Web

    NARCIS (Netherlands)

    Meroño-Peñuela, A.; Ashkpour, A.

    2016-01-01

    The Semantic Web is an extension of the Web through standards set by the World Wide Web Consortium (W3C) [4]. These standards promote common data formats and exchange protocols on the Web, most fundamentally the Resource Description Framework (RDF). Its ultimate goal is to make the Web a suitable data

  19. Linked Data Reactor: a Framework for Building Reactive Linked Data Applications

    NARCIS (Netherlands)

    Khalili, Ali

    2016-01-01

    This paper presents Linked Data Reactor (LD-Reactor or LD-R) as a framework for developing flexible and reusable User Interface components for Linked Data applications. LD-Reactor utilizes Facebook's ReactJS components, Flux architecture and Yahoo's Fluxible framework for isomorphic Web applications.

  20. Stochastic analysis of web page ranking

    NARCIS (Netherlands)

    Volkovich, Y.

    2009-01-01

    Today, the study of the World Wide Web is one of the most challenging subjects. In this work we consider the Web from a probabilistic point of view. We analyze the relations between various characteristics of the Web. In particular, we are interested in the Web properties that affect the Web page

  1. Personalizing Web Search based on User Profile

    OpenAIRE

    Utage, Sharyu; Ahire, Vijaya

    2016-01-01

    Web search engines are the most widely used tools for information retrieval from the World Wide Web, helping users find the most useful information. When different users search for the same information, a search engine returns the same results without understanding who submitted the query. Personalized web search is a technique for providing results that are useful to the particular user. This paper models the preferences of users as hierarchical user profiles. A framework called UPS is proposed. It generalizes profile and m...

  2. EarthCube GeoLink: Semantics and Linked Data for the Geosciences

    Science.gov (United States)

    Arko, R. A.; Carbotte, S. M.; Chandler, C. L.; Cheatham, M.; Fils, D.; Hitzler, P.; Janowicz, K.; Ji, P.; Jones, M. B.; Krisnadhi, A.; Lehnert, K. A.; Mickle, A.; Narock, T.; O'Brien, M.; Raymond, L. M.; Schildhauer, M.; Shepherd, A.; Wiebe, P. H.

    2015-12-01

    The NSF EarthCube initiative is building next-generation cyberinfrastructure to aid geoscientists in collecting, accessing, analyzing, sharing, and visualizing their data and knowledge. The EarthCube GeoLink Building Block project focuses on a specific set of software protocols and vocabularies, often characterized as the Semantic Web and "Linked Data", to publish data online in a way that is easily discoverable, accessible, and interoperable. GeoLink brings together specialists from the computer science, geoscience, and library science domains, and includes data from a network of NSF-funded repositories that support scientific studies in marine geology, marine ecosystems, biogeochemistry, and paleoclimatology. We are working collaboratively with closely-related Building Block projects including EarthCollab and CINERGI, and solicit feedback from RCN projects including Cyberinfrastructure for Paleogeosciences (C4P) and iSamples. GeoLink has developed a modular ontology that describes essential geoscience research concepts; published data from seven collections (to date) on the Web as geospatially-enabled Linked Data using this ontology; matched and mapped data between collections using shared identifiers for investigators, repositories, datasets, funding awards, platforms, research cruises, physical specimens, and gazetteer features; and aggregated the results in a shared knowledgebase that can be queried via a standard SPARQL endpoint. Client applications have been built around the knowledgebase, including a Web/map-based data browser using the Leaflet JavaScript library and a simple query service using the OpenSearch format. Future development will include extending and refining the GeoLink ontology, adding content from additional repositories, developing semi-automated algorithms to enhance metadata, and further work on client applications.
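    The "standard SPARQL endpoint" mentioned above evaluates graph patterns against the knowledgebase. The following dependency-free sketch shows, in miniature, what a single SPARQL basic graph pattern does; the triples, prefixes, and entity names are invented for illustration and a real query would be sent to the GeoLink endpoint instead.

```python
# Miniature illustration of evaluating one SPARQL-style triple pattern
# against an in-memory triple store. All identifiers are hypothetical.

triples = [
    ("cruise:AT26-13", "geolink:hasInvestigator", "person:Smith"),
    ("cruise:AT26-13", "geolink:aboardPlatform", "vessel:Atlantis"),
    ("cruise:KN210-04", "geolink:hasInvestigator", "person:Jones"),
]

def match(pattern, store):
    """Bind ?variables in a single triple pattern against the store."""
    results = []
    for triple in store:
        binding = {}
        for p, t in zip(pattern, triple):
            if p.startswith("?"):
                binding[p] = t      # variable: bind it
            elif p != t:
                break               # constant mismatch: reject triple
        else:
            results.append(binding)
    return results

# Analogue of: SELECT ?cruise WHERE { ?cruise geolink:hasInvestigator person:Smith }
print(match(("?cruise", "geolink:hasInvestigator", "person:Smith"), triples))
```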

  3. Development of a laboratory niche Web site.

    Science.gov (United States)

    Dimenstein, Izak B; Dimenstein, Simon I

    2013-10-01

    This technical note presents the development of a methodological laboratory niche Web site. The "Grossing Technology in Surgical Pathology" (www.grossing-technology.com) Web site is used as an example. Although common steps in creation of most Web sites are followed, there are particular requirements for structuring the template's menu on methodological laboratory Web sites. The "nested doll principle," in which one object is placed inside another, most adequately describes the methodological approach to laboratory Web site design. Fragmentation in presenting the Web site's material highlights the discrete parts of the laboratory procedure. An optimally minimal triad of components can be recommended for the creation of a laboratory niche Web site: a main set of media, a blog, and an ancillary component (host, contact, and links). The inclusion of a blog makes the Web site a dynamic forum for professional communication. By forming links and portals, cloud computing opens opportunities for connecting a niche Web site with other Web sites and professional organizations. As an additional source of information exchange, methodological laboratory niche Web sites are destined to parallel both traditional and new forms, such as books, journals, seminars, webinars, and internal educational materials. Copyright © 2013 Elsevier Inc. All rights reserved.

  4. Provenance Usage in the OceanLink Project

    Science.gov (United States)

    Narock, T.; Arko, R. A.; Carbotte, S. M.; Chandler, C. L.; Cheatham, M.; Fils, D.; Finin, T.; Hitzler, P.; Janowicz, K.; Jones, M.; Krisnadhi, A.; Lehnert, K. A.; Mickle, A.; Raymond, L. M.; Schildhauer, M.; Shepherd, A.; Wiebe, P. H.

    2014-12-01

    A wide spectrum of maturing methods and tools, collectively characterized as the Semantic Web, is helping to vastly improve the dissemination of scientific research. The OceanLink project, an NSF EarthCube Building Block, is utilizing semantic technologies to integrate geoscience data repositories, library holdings, conference abstracts, and funded research awards. Provenance is a vital component in meeting both the scientific and engineering requirements of OceanLink. Provenance plays a key role in justification and understanding when presenting users with results aggregated from multiple sources. In the engineering sense, provenance enables the identification of new data and the ability to determine which data sources to query. Additionally, OceanLink will leverage human and machine computation for crowdsourcing, text mining, and co-reference resolution. The results of these computations, and their associated provenance, will be folded back into the constituent systems to continually enhance precision and utility. We will touch on the various roles provenance is playing in OceanLink as well as present our use of the PROV Ontology and associated Ontology Design Patterns.

  5. Linked Ocean Data

    Science.gov (United States)

    Leadbetter, Adam; Arko, Robert; Chandler, Cynthia; Shepherd, Adam

    2014-05-01

    "Linked Data" is a term used in Computer Science to encapsulate a methodology for publishing data and metadata in a structured format so that links may be created and exploited between objects. Berners-Lee (2006) outlines the following four design principles of a Linked Data system: Use Uniform Resource Identifiers (URIs) as names for things. Use HyperText Transfer Protocol (HTTP) URIs so that people can look up those names. When someone looks up a URI, provide useful information, using the standards (Resource Description Framework [RDF] and the RDF query language [SPARQL]). Include links to other URIs so that they can discover more things. In 2010, Berners-Lee revisited his original design plan for Linked Data to encourage data owners along a path to "good Linked Data". This revision involved the creation of a five-star rating system for Linked Data outlined below. One star: Available on the web (in any format). Two stars: Available as machine-readable structured data (e.g. an Excel spreadsheet instead of an image scan of a table). Three stars: As two stars plus the use of a non-proprietary format (e.g. Comma Separated Values instead of Excel). Four stars: As three stars plus the use of open standards from the World Wide Web Consortium (W3C) (i.e. RDF and SPARQL) to identify things, so that people can point to your data and metadata. Five stars: All the above plus link your data to other people's data to provide context. Here we present work building on the SeaDataNet common vocabularies served by the NERC Vocabulary Server, connecting projects such as the Rolling Deck to Repository (R2R) and the Biological and Chemical Oceanography Data Management Office (BCO-DMO) and other vocabularies such as the Marine Metadata Interoperability Ontology Register and Repository and the NASA Global Change Master Directory to create a Linked Ocean Data cloud. Publishing the vocabularies and metadata in standard RDF XML and exposing SPARQL endpoints renders them five-star Linked
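    A key point of the five-star ladder is that the stars are cumulative: a dataset cannot earn a later star without the earlier ones. That can be made concrete with a small checklist sketch (the field names below are my own shorthand, not part of the rating scheme).

```python
# Encode Berners-Lee's five-star Linked Data ladder as a cumulative
# checklist: scoring stops at the first unmet criterion.

def linked_data_stars(on_web, structured, open_format, uses_rdf, links_out):
    stars = 0
    for reached in (on_web, structured, open_format, uses_rdf, links_out):
        if not reached:
            break          # later criteria cannot compensate for earlier ones
        stars += 1
    return stars

# A CSV on the web that links out but is not yet RDF still rates 3 stars:
print(linked_data_stars(True, True, True, False, True))  # → 3
```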

  6. Web-based, Interactive, Nuclear Reactor Transient Analyzer using LabVIEW and RELAP5 (ATHENA)

    International Nuclear Information System (INIS)

    Kim, K. D.; Chung, B. D.; Rizwan-uddin

    2006-01-01

    In nuclear engineering, large system analysis codes such as RELAP5, TRAC-M, etc. play an important role in evaluating reactor system behavior during a wide range of transient conditions. One limitation that restricts their use on a wider scale is that these codes often have a complicated I/O structure. This has motivated the development of GUI tools for best-estimate codes, such as SNAP and ViSA. In addition to a user interface, a greater degree of freedom in simulation and analyses of nuclear transient phenomena can be achieved if computer codes and their outputs are accessible from anywhere through the web. Such a web-based interactive interface can be very useful for geographically distributed groups when there is a need to share real-time data. Using mostly off-the-shelf technology, such a capability - a web-based transient analyzer based on a best-estimate code - has been developed. Specifically, the widely used best-estimate code RELAP5 is linked with a graphical interface. Moreover, a capability to web-cast is also available. This has been achieved by using LabVIEW virtual instruments (VIs). In addition to the graphical display of the results, interactive control functions have also been added that allow operator actions to be taken locally as well as, if permitted, by a distant user through the web

  7. Using the World Wide Web to Connect Research and Professional Practice: Towards Evidence-Based Practice

    Directory of Open Access Journals (Sweden)

    Daniel L. Moody

    2003-01-01

    Full Text Available In most professional (applied) disciplines, research findings take a long time to filter into practice, if they ever do at all. The result is under-utilisation of research results and sub-optimal practices. There are a number of reasons for the lack of knowledge transfer. On the "demand side", people working in professional practice have little time available to keep up with the latest research in their field. In addition, the volume of research published each year means that the average practitioner would not have time to read all the research articles in their area of interest even if they devoted all their time to it. On the "supply side", academic research is primarily focused on the production rather than the distribution of knowledge. While researchers have highly developed mechanisms for transferring knowledge among themselves, there is little investment in the distribution of research results beyond research communities. The World Wide Web provides a potential solution to this problem, as it provides a global information infrastructure for connecting those who produce knowledge (researchers) and those who need to apply this knowledge (practitioners). This paper describes two projects which use the World Wide Web to make research results directly available to support decision making in the workplace. The first is a successful knowledge management project in a health department which provides medical staff with on-line access to the latest medical research at the point of care. The second is a project currently in progress to implement a similar system to support decision making in IS practice. Finally, we draw some general lessons about how to improve the transfer of knowledge from research to practice, which could be applied in any discipline.

  8. AstroWeb -- Internet Resources for Astronomers

    Science.gov (United States)

    Jackson, R. E.; Adorf, H.-M.; Egret, D.; Heck, A.; Koekemoer, A.; Murtagh, F.; Wells, D. C.

    AstroWeb is a World Wide Web (WWW) interface to a collection of Internet-accessible resources aimed at the astronomical community. The collection currently contains more than 1000 WWW, Gopher, Wide Area Information System (WAIS), Telnet, and Anonymous FTP resources, and it is still growing. AstroWeb provides additional value-added services: categorization of each resource; descriptive paragraphs for some resources; a searchable index of all resource information; and a three-times-daily search for "dead" or "unreliable" resources.

  9. The inverse niche model for food webs with parasites

    Science.gov (United States)

    Warren, Christopher P.; Pascual, Mercedes; Lafferty, Kevin D.; Kuris, Armand M.

    2010-01-01

    Although parasites represent an important component of ecosystems, few field and theoretical studies have addressed the structure of parasites in food webs. We evaluate the structure of parasitic links in an extensive salt marsh food web, with a new model distinguishing parasitic links from non-parasitic links among free-living species. The proposed model is an extension of the niche model for food web structure, motivated by the potential role of size (and related metabolic rates) in structuring food webs. The proposed extension captures several properties observed in the data, including patterns of clustering and nestedness, better than does a random model. By relaxing specific assumptions, we demonstrate that two essential elements of the proposed model are the similarity of a parasite's hosts and the increasing degree of parasite specialization, along a one-dimensional niche axis. Thus, inverting one of the basic rules of the original model, the one determining consumers' generality appears critical. Our results support the role of size as one of the organizing principles underlying niche space and food web topology. They also strengthen the evidence for the non-random structure of parasitic links in food webs and open the door to addressing questions concerning the consequences and origins of this structure.
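    The original niche model that this parasite extension builds on can be sketched compactly: each species receives a niche value on [0, 1], a feeding range drawn so that expected connectance matches a target, and a range centre; it then consumes every species whose niche value falls inside its range. The sketch below follows that standard construction with made-up parameters; it does not reproduce the authors' inverse extension for parasitic links.

```python
# Compact sketch of the classic niche model for food web structure.
# Parameters (20 species, connectance 0.15) are illustrative only.
import random

def niche_model(n_species, connectance, seed=0):
    rng = random.Random(seed)
    beta = 1.0 / (2.0 * connectance) - 1.0      # so E[range/n] = 2C
    species = []
    for _ in range(n_species):
        n = rng.random()                        # niche value on [0, 1]
        r = n * (1.0 - rng.random() ** (1.0 / beta))  # Beta(1, beta) draw
        c = rng.uniform(r / 2.0, n)             # range centre
        species.append((n, c, r))
    links = set()
    for i, (n_i, _, _) in enumerate(species):
        for j, (_, c_j, r_j) in enumerate(species):
            if c_j - r_j / 2.0 <= n_i <= c_j + r_j / 2.0:
                links.add((i, j))               # species j eats species i
    return links

print(len(niche_model(20, 0.15)))
```

    The abstract's key structural move, inverting the rule that determines a consumer's generality so that parasites grow more specialized along the niche axis, would modify how r is drawn for the parasite species.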

  10. Obtaining Streamflow Statistics for Massachusetts Streams on the World Wide Web

    Science.gov (United States)

    Ries, Kernell G.; Steeves, Peter A.; Freeman, Aleda; Singh, Raj

    2000-01-01

    A World Wide Web application has been developed to make it easy to obtain streamflow statistics for user-selected locations on Massachusetts streams. The Web application, named STREAMSTATS (available at http://water.usgs.gov/osw/streamstats/massachusetts.html ), can provide peak-flow frequency, low-flow frequency, and flow-duration statistics for most streams in Massachusetts. These statistics describe the magnitude (how much), frequency (how often), and duration (how long) of flow in a stream. The U.S. Geological Survey (USGS) has published streamflow statistics, such as the 100-year peak flow, the 7-day, 10-year low flow, and flow-duration statistics, for its data-collection stations in numerous reports. Federal, State, and local agencies need these statistics to plan and manage use of water resources and to regulate activities in and around streams. Engineering and environmental consulting firms, utilities, industry, and others use the statistics to design and operate water-supply systems, hydropower facilities, industrial facilities, wastewater treatment facilities, and roads, bridges, and other structures. Until now, streamflow statistics for data-collection stations have often been difficult to obtain because they are scattered among many reports, some of which are not readily available to the public. In addition, streamflow statistics are often needed for locations where no data are available. STREAMSTATS helps solve these problems. STREAMSTATS was developed jointly by the USGS and MassGIS, the State Geographic Information Systems (GIS) agency, in cooperation with the Massachusetts Departments of Environmental Management and Environmental Protection. The application consists of three major components: (1) a user interface that displays maps and allows users to select stream locations for which they want streamflow statistics (fig. 1), (2) a data base of previously published streamflow statistics and descriptive information for 725 USGS data
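    One of the statistics served by such an application, a flow-duration value (the discharge equalled or exceeded a given percent of the time), is straightforward to compute from a daily flow record. The sketch below uses the common Weibull plotting-position convention and made-up flow values; it is not STREAMSTATS code or USGS data.

```python
# Flow-duration statistic sketch: the discharge equalled or exceeded a
# given percent of the time, from a (hypothetical) daily flow record.

def flow_duration(daily_flows, percent_exceeded):
    """Discharge equalled or exceeded `percent_exceeded`% of the time."""
    ranked = sorted(daily_flows, reverse=True)
    n = len(ranked)
    # Weibull plotting position: P = 100 * m / (n + 1) for rank m
    for m, q in enumerate(ranked, start=1):
        if 100.0 * m / (n + 1) >= percent_exceeded:
            return q
    return ranked[-1]

flows = [120, 95, 80, 60, 55, 40, 30, 22, 15, 8]   # made-up daily flows, cfs
print(flow_duration(flows, 50))   # flow exceeded half the time
```

    Peak-flow and low-flow frequency statistics (e.g. the 100-year peak, the 7-day 10-year low flow) require fitting a probability distribution to annual extremes rather than simple ranking, which is why published station statistics are so valuable.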

  11. WebVR: an interactive web browser for virtual environments

    Science.gov (United States)

    Barsoum, Emad; Kuester, Falko

    2005-03-01

    The pervasive nature of web-based content has led to the development of applications and user interfaces that port between a broad range of operating systems and databases, while providing intuitive access to static and time-varying information. However, the integration of this vast resource into virtual environments has remained elusive. In this paper we present an implementation of a 3D Web Browser (WebVR) that enables the user to search the internet for arbitrary information and to seamlessly augment this information into virtual environments. WebVR provides access to the standard data input and query mechanisms offered by conventional web browsers, with the difference that it generates active texture-skins of the web contents that can be mapped onto arbitrary surfaces within the environment. Once mapped, the corresponding texture functions as a fully integrated web-browser that will respond to traditional events such as the selection of links or text input. As a result, any surface within the environment can be turned into a web-enabled resource that provides access to user-definable data. In order to leverage the continuous advancement of browser technology and to support both static and streamed content, WebVR uses ActiveX controls to extract the desired texture skin from industry-strength browsers, providing a unique mechanism for data fusion and extensibility.

  12. The Semantic Web in Teacher Education

    Science.gov (United States)

    Czerkawski, Betül Özkan

    2014-01-01

    The Semantic Web enables increased collaboration among computers and people by organizing unstructured data on the World Wide Web. Rather than a separate body, the Semantic Web is a functional extension of the current Web made possible by defining relationships among websites and other online content. When explicitly defined, these relationships…

  13. Constructing the Web of Events from Raw Data in the Web of Things

    Directory of Open Access Journals (Sweden)

    Yunchuan Sun

    2014-01-01

    Full Text Available An exciting paradise of data is emerging into our daily life along with the development of the Web of Things. Nowadays, volumes of heterogeneous raw data are continuously generated and captured by trillions of smart devices such as sensors, smart controls, readers, and other monitoring devices, while various events occur in the physical world. It is hard for users, including people and smart things, to extract the valuable information hidden in this massive data, information that is more useful and understandable than raw data for getting at the crucial points of a problem. Thus, how to automatically and actively extract events and their internal links from big data is a key challenge for the future Web of Things. This paper proposes an effective approach to extract events and their internal links from large-scale data leveraging predefined event schemas in the Web of Things, which starts with grasping the critical data for useful events by filtering data with well-defined event types in the schema. A case study in the context of a smart campus is presented to show the application of the proposed approach for the extraction of events and their internal semantic links.
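    The first step described, filtering raw readings against well-defined event types in a schema, can be sketched simply. The schema fields, device names, and readings below are all invented for illustration and are not the paper's actual schema.

```python
# Sketch of schema-driven event extraction from raw Web-of-Things
# readings: keep only readings that match a predefined event type.

event_schema = {
    "door_opened": {"device": "door_sensor", "value": "open"},
    "overheating": {"device": "thermometer", "min_value": 80},
}

def extract_events(readings, schema):
    events = []
    for r in readings:
        for event_type, rule in schema.items():
            if r["device"] != rule["device"]:
                continue                     # wrong device for this rule
            if "value" in rule and r["value"] == rule["value"]:
                events.append((event_type, r))
            elif "min_value" in rule and r["value"] >= rule["min_value"]:
                events.append((event_type, r))
    return events

readings = [
    {"device": "thermometer", "value": 85},
    {"device": "door_sensor", "value": "closed"},
    {"device": "door_sensor", "value": "open"},
]
for event_type, reading in extract_events(readings, event_schema):
    print(event_type, reading)
```

    Linking the extracted events to each other (the "internal semantic links" of the abstract) would be a second pass over this event list, e.g. by shared time, place, or device.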

  14. ETDEWEB versus the World-Wide-Web: a specific database/web comparison

    Energy Technology Data Exchange (ETDEWEB)

    Cutler, Debbie

    2010-06-28

    A study was performed comparing user search results from the specialized scientific database on energy-related information, ETDEWEB, with search results from the internet search engines Google and Google Scholar. The primary objective of the study was to determine if ETDEWEB (the Energy Technology Data Exchange – World Energy Base) continues to bring the user search results that are not being found by Google and Google Scholar. As a multilateral information exchange initiative, ETDE’s member countries and partners contribute cost- and task-sharing resources to build the largest database of energy-related information in the world. As of early 2010, the ETDEWEB database has 4.3 million citations to world-wide energy literature. One of ETDEWEB’s strengths is its focused scientific content and direct access to full text for its grey literature (over 300,000 documents in PDF available for viewing from the ETDE site and over a million additional links to where the documents can be found at research organizations and major publishers globally). Google and Google Scholar are well-known for the wide breadth of the information they search, with Google bringing in news, factual and opinion-related information, and Google Scholar also emphasizing scientific content across many disciplines. The analysis compared the results of 15 energy-related queries performed on all three systems using identical words/phrases. A variety of subjects was chosen, although the topics were mostly in renewable energy areas due to broad international interest. Over 40,000 search result records from the three sources were evaluated. The study concluded that ETDEWEB is a significant resource to energy experts for discovering relevant energy information. For the 15 topics in this study, ETDEWEB was shown to bring the user unique results not shown by Google or Google Scholar 86.7% of the time. Much was learned from the study beyond just metric comparisons. Observations about the strengths of each

  15. Dynamic Interactive Educational Diabetes Simulations Using the World Wide Web: An Experience of More Than 15 Years with AIDA Online.

    Science.gov (United States)

    Lehmann, Eldon D; Dewolf, Dennis K; Novotny, Christopher A; Reed, Karen; Gotwals, Robert R

    2014-01-01

    Background. AIDA is a widely available downloadable educational simulator of glucose-insulin interaction in diabetes. Methods. A web-based version of AIDA was developed that utilises a server-based architecture with HTML FORM commands to submit numerical data from a web-browser client to a remote web server. AIDA online, located on a remote server, passes the received data through Perl scripts which interactively produce 24 hr insulin and glucose simulations. Results. AIDA online allows users to modify the insulin regimen and diet of 40 different prestored "virtual diabetic patients" on the internet or create new "patients" with user-generated regimens. Multiple simulations can be run, with graphical results viewed via a standard web-browser window. To date, over 637,500 diabetes simulations have been run at AIDA online, from all over the world. Conclusions. AIDA online's functionality is similar to the downloadable AIDA program, but the mode of implementation and usage is different. An advantage to utilising a server-based application is the flexibility that can be offered. New modules can be added quickly to the online simulator. This has facilitated the development of refinements to AIDA online, which have instantaneously become available around the world, with no further local downloads or installations being required.
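    The submission mechanism described, an HTML FORM posting numeric fields to a server-side script, boils down to parsing URL-encoded form data before running the simulation. The field names below are invented; this sketch does not reproduce AIDA online's actual Perl scripts.

```python
# Sketch of server-side handling of an HTML FORM submission: parse the
# URL-encoded body into numeric fields, as a simulation backend would.
# Field names (insulin_dose, carbs_g, weight_kg) are hypothetical.
from urllib.parse import parse_qs

form_body = "insulin_dose=12&carbs_g=60&weight_kg=70"   # as sent by the browser
fields = {k: float(v[0]) for k, v in parse_qs(form_body).items()}
print(fields)
```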

  17. A giant protogalactic disk linked to the cosmic web

    Science.gov (United States)

    Martin, D. Christopher; Matuszewski, Mateusz; Morrissey, Patrick; Neill, James D.; Moore, Anna; Cantalupo, Sebastiano; Prochaska, J. Xavier; Chang, Daphne

    2015-08-01

    The specifics of how galaxies form from, and are fuelled by, gas from the intergalactic medium remain uncertain. Hydrodynamic simulations suggest that 'cold accretion flows' -- relatively cool (temperatures of the order of 10^4 kelvin), unshocked gas streaming along filaments of the cosmic web into dark-matter halos -- are important. These flows are thought to deposit gas and angular momentum into the circumgalactic medium, creating disk- or ring-like structures that eventually coalesce into galaxies that form at filamentary intersections. Recently, a large and luminous filament, consistent with such a cold accretion flow, was discovered near the quasi-stellar object QSO UM287 at redshift 2.279 using narrow-band imaging. Unfortunately, imaging is not sufficient to constrain the physical characteristics of the filament, to determine its kinematics, to explain how it is linked to nearby sources, or to account for its unusual brightness, more than a factor of ten above what is expected for a filament. Here we report a two-dimensional spectroscopic investigation of the emitting structure. We find that the brightest emission region is an extended rotating hydrogen disk with a velocity profile that is characteristic of gas in a dark-matter halo with a mass of 10^13 solar masses. This giant protogalactic disk appears to be connected to a quiescent filament that may extend beyond the virial radius of the halo. The geometry is strongly suggestive of a cold accretion flow.

  18. Engineering Web Applications

    DEFF Research Database (Denmark)

    Casteleyn, Sven; Daniel, Florian; Dolog, Peter

    Nowadays, Web applications are almost omnipresent. The Web has become a platform not only for information delivery, but also for eCommerce systems, social networks, mobile services, and distributed learning environments. Engineering Web applications involves many intrinsic challenges due to their distributed nature, content orientation, and the requirement to make them available to a wide spectrum of users who are unknown in advance. The authors discuss these challenges in the context of well-established engineering processes, covering the whole product lifecycle from requirements engineering through design and implementation to deployment and maintenance. They stress the importance of models in Web application development, and they compare well-known Web-specific development processes like WebML, WSDM and OOHDM to traditional software development approaches like the waterfall model and the spiral

  19. Engineering semantic web information systems in Hera

    NARCIS (Netherlands)

    Vdovják, R.; Frasincar, F.; Houben, G.J.P.M.; Barna, P.

    2003-01-01

    The success of the World Wide Web has caused the concept of information system to change. Web Information Systems (WIS) borrow from the Web its paradigm and technologies in order to retrieve information from sources on the Web, and to present the information in terms of a Web or hypermedia

  20. Women, pharmacy and the World Wide Web: could they be the answer to the obesity epidemic?

    Science.gov (United States)

    Fakih, Souhiela; Hussainy, Safeera; Marriott, Jennifer

    2014-04-01

    The objective of this article is to explore how giving women access to evidence-based information in weight management through pharmacies, and by utilising the World Wide Web, is a much needed step towards dealing with the obesity crisis. Women's needs should be considered when developing evidence-based information on weight. Excess weight places them at high risk of diabetes and cardiovascular disease, infertility and complications following pregnancy and giving birth. Women are also an important population group because they influence decision-making around meal choices for their families and are the biggest consumers of weight-loss products, many of which can be purchased in pharmacies. Pharmacies are readily accessible primary healthcare locations and given the pharmacist's expertise in being able to recognise underlying causes of obesity (e.g. medications, certain disease states), pharmacies are an ideal location to provide women with evidence-based information on all facets of weight management. Considering the exponential rise in the use of the World Wide Web, this information could be delivered as an online educational resource supported by other flexible formats. The time has come for the development of an online, evidence-based educational resource on weight management, which is combined with other flexible formats and targeted at women in general and according to different phases of their lives (pregnancy, post-partum, menopause). By empowering women with this knowledge it will allow them and their families to take better control of their health and wellbeing, and it may just be the much needed answer to complement already existing resources to help curb the obesity epidemic. © 2013 Royal Pharmaceutical Society.

  1. Integration of the White Sands Complex into a Wide Area Network

    Science.gov (United States)

    Boucher, Phillip Larry; Horan, Sheila, B.

    1996-01-01

    The NASA White Sands Complex (WSC) satellite communications facility consists of two main ground stations, an auxiliary ground station, a technical support facility, and a power plant building located on White Sands Missile Range. When constructed, terrestrial communication access to these facilities was limited to copper telephone circuits. There was no local or wide area communications network capability. This project incorporated a baseband local area network (LAN) topology at WSC and connected it to NASA's wide area network using the Program Support Communications Network-Internet (PSCN-I). A campus-style LAN is configured in conformance with the International Standards Organization (ISO) Open Systems Interconnection (OSI) model. Ethernet provides the physical and data link layers. Transmission Control Protocol and Internet Protocol (TCP/IP) are used for the network and transport layers. The session, presentation, and application layers employ commercial software packages. Copper-based Ethernet collision domains are constructed in each of the primary facilities and these are interconnected by routers over optical fiber links. The network and each of its collision domains are shown to meet IEEE technical configuration guidelines. The optical fiber links are analyzed for the optical power budget and bandwidth allocation and are found to provide sufficient margin for this application. Personal computers and workstations attached to the LAN communicate with and apply a wide variety of local and remote administrative software tools. The Internet connection provides wide area network (WAN) electronic access to other NASA centers and the world wide web (WWW). The WSC network reduces and simplifies the administrative workload while providing enhanced and advanced inter-communications capabilities among White Sands Complex departments and with other NASA centers.
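    The optical power-budget analysis mentioned in this record follows a standard pattern: launch power minus receiver sensitivity gives the available budget, and the sum of fiber, connector, and splice losses plus a safety margin must fit inside it. The sketch below uses generic, illustrative component values, not figures from the White Sands installation.

```python
# Generic optical power-budget check for a campus fiber link.
# All component values are illustrative assumptions.
tx_power_dbm = -15.0        # transmitter launch power
rx_sensitivity_dbm = -31.0  # receiver sensitivity

fiber_km = 2.0              # campus-scale run
fiber_loss_db_per_km = 1.0  # multimode fiber near 1300 nm
n_connectors = 4
connector_loss_db = 0.75
n_splices = 2
splice_loss_db = 0.1
safety_margin_db = 3.0      # aging, temperature, future repairs

total_loss = (fiber_km * fiber_loss_db_per_km
              + n_connectors * connector_loss_db
              + n_splices * splice_loss_db
              + safety_margin_db)

budget = tx_power_dbm - rx_sensitivity_dbm   # dB available end to end
margin = budget - total_loss                 # what is left over

print(f"available budget: {budget:.1f} dB")
print(f"total loss:       {total_loss:.1f} dB")
print(f"residual margin:  {margin:+.1f} dB")
```

    A positive residual margin is the "sufficient margin" criterion the record refers to; a negative value would mean the link cannot close reliably.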

  2. Web Security, Privacy & Commerce

    CERN Document Server

    Garfinkel, Simson

    2011-01-01

    Since the first edition of this classic reference was published, World Wide Web use has exploded and e-commerce has become a daily part of business and personal life. As Web use has grown, so have the threats to our security and privacy--from credit card fraud to routine invasions of privacy by marketers to web site defacements to attacks that shut down popular web sites. Web Security, Privacy & Commerce goes behind the headlines, examines the major security risks facing us today, and explains how we can minimize them. It describes risks for Windows and Unix, Microsoft Internet Exp

  3. XML and Better Web Searching.

    Science.gov (United States)

    Jackson, Joe; Gilstrap, Donald L.

    1999-01-01

    Addresses the implications of the new Web metalanguage XML for searching on the World Wide Web and considers the future of XML on the Web. Compared to HTML, XML is more concerned with structure of data than documents, and these data structures should prove conducive to precise, context rich searching. (Author/LRW)

  4. The World-Wide Inaccessible Web, Part 1: Browsing

    Science.gov (United States)

    Baggaley, Jon; Batpurev, Batchuluun

    2007-01-01

    Two studies are reported, comparing the browser loading times of webpages created using common Web development techniques. The loading speeds were estimated in 12 Asian countries by members of the "PANdora" network, funded by the International Development Research Centre (IDRC) to conduct collaborative research in the development of…

  5. Web-based services for drug design and discovery.

    Science.gov (United States)

    Frey, Jeremy G; Bird, Colin L

    2011-09-01

    Reviews of the development of drug discovery through the 20th century recognised the importance of chemistry and increasingly bioinformatics, but had relatively little to say about the importance of computing and networked computing in particular. However, the design and discovery of new drugs is arguably the most significant single application of bioinformatics and cheminformatics to have benefitted from the increases in the range and power of the computational techniques since the emergence of the World Wide Web, commonly now referred to as simply 'the Web'. Web services have enabled researchers to access shared resources and to deploy standardized calculations in their search for new drugs. This article first considers the fundamental principles of Web services and workflows, and then explores the facilities and resources that have evolved to meet the specific needs of chem- and bio-informatics. This strategy leads to a more detailed examination of the basic components that characterise molecules and the essential predictive techniques, followed by a discussion of the emerging networked services that transcend the basic provisions, and the growing trend towards embracing modern techniques, in particular the Semantic Web. In the opinion of the authors, the issues that require community action are: increasing the amount of chemical data available for open access; validating the data as provided; and developing more efficient links between the worlds of cheminformatics and bioinformatics. The goal is to create ever better drug design services.

  6. Web OPAC Interfaces: An Overview.

    Science.gov (United States)

    Babu, B. Ramesh; O'Brien, Ann

    2000-01-01

    Discussion of Web-based online public access catalogs (OPACs) focuses on a review of six Web OPAC interfaces in use in academic libraries in the United Kingdom. Presents a checklist and guidelines of important features and functions that are currently available, including search strategies, access points, display, links, and layout. (Author/LRW)

  7. Drugs + HIV, Learn the Link

    Medline Plus

    Full Text Available ... and what to do to counter these trends. Online Resources NIDA for Teens Web site : This Web ... projects/learn-link-drugs-hiv . Social Media Send the message to young people and ...

  8. Drugs + HIV, Learn the Link

    Medline Plus

    Full Text Available ... and what to do to counter these trends. Online Resources NIDA for Teens Web site : This Web ... at: http://www.drugabuse.gov/news-events/public-education-projects/learn-link-drugs-hiv . ...

  9. Capataz: a framework for distributing algorithms via the World Wide Web

    Directory of Open Access Journals (Sweden)

    Gonzalo J. Martínez

    2015-08-01

    Full Text Available In recent years, some scientists have embraced the distributed computing paradigm. As experiments and simulations demand ever more computing power, coordinating the efforts of many different processors is often the only reasonable resort. We developed an open-source distributed computing framework based on web technologies, and named it Capataz. It acts as an HTTP server to which web browsers running on many different devices can connect and contribute to the execution of distributed algorithms written in JavaScript. Capataz takes advantage of architectures with many cores by using web workers. This paper presents an improvement in Capataz's usability and why it was needed. In previous experiments the total time of distributed algorithms proved to be susceptible to changes in the execution time of the jobs. The system now adapts by bundling jobs together if they are too simple. The computational experiment to test the solution is a brute-force estimation of pi. The benchmark results show that by bundling jobs, the overall performance is greatly increased.
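    The two ideas in this record, brute-force Monte Carlo estimation of pi and bundling small jobs before dispatch, can be sketched on a single machine. This is an illustrative Python stand-in: Capataz itself ships JavaScript jobs to browsers, and the job sizes and threshold below are made-up numbers.

```python
import random

# Single-machine sketch of the experiment described above: estimate pi by
# brute force, splitting the work into many small "jobs" and bundling
# simple jobs together before dispatch. All sizes are illustrative.
def pi_job(n_samples, rng):
    """One job: count random points inside the unit quarter-circle."""
    hits = 0
    for _ in range(n_samples):
        x, y = rng.random(), rng.random()
        if x * x + y * y <= 1.0:
            hits += 1
    return hits

def bundle(jobs, min_work=5000):
    """Merge consecutive small jobs so each bundle carries enough work."""
    bundled, current = [], 0
    for n in jobs:
        current += n
        if current >= min_work:
            bundled.append(current)
            current = 0
    if current:
        bundled.append(current)
    return bundled

rng = random.Random(42)
jobs = [1000] * 100            # 100 tiny jobs of 1000 samples each
bundles = bundle(jobs)         # far fewer, larger bundles to dispatch
total = sum(bundles)
hits = sum(pi_job(n, rng) for n in bundles)
pi_est = 4.0 * hits / total
print(len(bundles), pi_est)
```

    Bundling does not change the statistical result (the same total number of samples is drawn); it only reduces per-job dispatch overhead, which is exactly the susceptibility the abstract describes.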

  10. Parasites affect food web structure primarily through increased diversity and complexity.

    Directory of Open Access Journals (Sweden)

    Jennifer A Dunne

    Full Text Available Comparative research on food web structure has revealed generalities in trophic organization, produced simple models, and allowed assessment of robustness to species loss. These studies have mostly focused on free-living species. Recent research has suggested that inclusion of parasites alters structure. We assess whether such changes in network structure result from unique roles and traits of parasites or from changes to diversity and complexity. We analyzed seven highly resolved food webs that include metazoan parasite data. Our analyses show that adding parasites usually increases link density and connectance (simple measures of complexity, particularly when including concomitant links (links from predators to parasites of their prey. However, we clarify prior claims that parasites "dominate" food web links. Although parasites can be involved in a majority of links, in most cases classic predation links outnumber classic parasitism links. Regarding network structure, observed changes in degree distributions, 14 commonly studied metrics, and link probabilities are consistent with scale-dependent changes in structure associated with changes in diversity and complexity. Parasite and free-living species thus have similar effects on these aspects of structure. However, two changes point to unique roles of parasites. First, adding parasites and concomitant links strongly alters the frequency of most motifs of interactions among three taxa, reflecting parasites' roles as resources for predators of their hosts, driven by trophic intimacy with their hosts. Second, compared to free-living consumers, many parasites' feeding niches appear broader and less contiguous, which may reflect complex life cycles and small body sizes. 
This study provides new insights about generic versus unique impacts of parasites on food web structure, extends the generality of food web theory, gives a more rigorous framework for assessing the impact of any species on trophic
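    The complexity measures named in this record, link density (L/S) and connectance (L/S^2), are straightforward to compute. The toy web below, with one hypothetical parasite and one concomitant link, is purely illustrative and is not data from the study.

```python
# Link density and connectance for a small made-up web, before and
# after adding a parasite. Species and links are illustrative only.
def link_density(links, n_species):
    return len(links) / n_species          # L / S

def connectance(links, n_species):
    return len(links) / n_species ** 2     # C = L / S^2

# Free-living web: (resource, consumer) pairs.
free_living = {("algae", "snail"), ("algae", "amphipod"),
               ("snail", "fish"), ("amphipod", "fish")}
S = 4
print(link_density(free_living, S), connectance(free_living, S))

# Add a trematode with a complex life cycle: it parasitizes the snail
# (first host) and the fish (final host), and a "concomitant" link is
# added because the fish consumes the parasite along with its snail prey.
with_parasite = free_living | {("snail", "trematode"),
                               ("fish", "trematode"),
                               ("trematode", "fish")}
S2 = 5
print(link_density(with_parasite, S2), connectance(with_parasite, S2))
```

    In this toy case both link density (1.0 to 1.4) and connectance (0.25 to 0.28) rise when the parasite and its concomitant link are added, matching the direction of change the abstract reports for most of the analyzed webs.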

  11. The Atlas of Chinese World Wide Web Ecosystem Shaped by the Collective Attention Flows.

    Science.gov (United States)

    Lou, Xiaodan; Li, Yong; Gu, Weiwei; Zhang, Jiang

    2016-01-01

    The web can be regarded as an ecosystem of digital resources connected and shaped by collective successive behaviors of users. Knowing how people allocate limited attention on different resources is of great importance. To answer this, we embed the most popular Chinese web sites into a high dimensional Euclidean space based on the open flow network model of a large number of Chinese users' collective attention flows, which both considers the connection topology of hyperlinks between the sites and the collective behaviors of the users. With these tools, we rank the web sites and compare their centralities based on flow distances with other metrics. We also study the patterns of attention flow allocation, and find that a large number of web sites concentrate on the central area of the embedding space, and only a small fraction of web sites disperse in the periphery. The entire embedding space can be separated into 3 regions (core, interim, and periphery). The sites in the core (1%) occupy a majority of the attention flows (40%), and the sites (34%) in the interim attract 40%, whereas other sites (65%) only take 20% flows. What's more, we clustered the web sites into 4 groups according to their positions in the space, and found that similar web sites in contents and topics are grouped together. In short, by incorporating the open flow network model, we can clearly see how collective attention allocates and flows on different web sites, and how web sites connect to each other.

  12. The Atlas of Chinese World Wide Web Ecosystem Shaped by the Collective Attention Flows

    Science.gov (United States)

    Lou, Xiaodan; Li, Yong; Gu, Weiwei; Zhang, Jiang

    2016-01-01

    The web can be regarded as an ecosystem of digital resources connected and shaped by collective successive behaviors of users. Knowing how people allocate limited attention on different resources is of great importance. To answer this, we embed the most popular Chinese web sites into a high dimensional Euclidean space based on the open flow network model of a large number of Chinese users’ collective attention flows, which both considers the connection topology of hyperlinks between the sites and the collective behaviors of the users. With these tools, we rank the web sites and compare their centralities based on flow distances with other metrics. We also study the patterns of attention flow allocation, and find that a large number of web sites concentrate on the central area of the embedding space, and only a small fraction of web sites disperse in the periphery. The entire embedding space can be separated into 3 regions (core, interim, and periphery). The sites in the core (1%) occupy a majority of the attention flows (40%), and the sites (34%) in the interim attract 40%, whereas other sites (65%) only take 20% flows. What’s more, we clustered the web sites into 4 groups according to their positions in the space, and found that similar web sites in contents and topics are grouped together. In short, by incorporating the open flow network model, we can clearly see how collective attention allocates and flows on different web sites, and how web sites connect to each other. PMID:27812133

  13. Object Lessons: Material Culture on the World Wide Web.

    Science.gov (United States)

    Mires, Charlene

    2001-01-01

    Describes the content of a course on material culture for undergraduate students that was separated into two sections: (1) first students read books and analyzed artifacts; and (2) then the class explored the Centennial Exhibition held in Philadelphia (Pennsylvania) in 1876, applying material culture methods and constructing a Web site from their…

  14. TMFoldWeb: a web server for predicting transmembrane protein fold class.

    Science.gov (United States)

    Kozma, Dániel; Tusnády, Gábor E

    2015-09-17

    Here we present TMFoldWeb, the web server implementation of TMFoldRec, a transmembrane protein fold recognition algorithm. TMFoldRec uses statistical potentials and utilizes topology filtering and a gapless threading algorithm. It ranks template structures and selects the most likely candidates and estimates the reliability of the obtained lowest energy model. The statistical potential was developed in a maximum likelihood framework on a representative set of the PDBTM database. According to the benchmark test the performance of TMFoldRec is about 77 % in correctly predicting fold class for a given transmembrane protein sequence. An intuitive web interface has been developed for the recently published TMFoldRec algorithm. The query sequence goes through a pipeline of topology prediction and a systematic sequence to structure alignment (threading). Resulting templates are ordered by energy and reliability values and are colored according to their significance level. Besides the graphical interface, a programmatic access is available as well, via a direct interface for developers or for submitting genome-wide data sets. The TMFoldWeb web server is unique and currently the only web server that is able to predict the fold class of transmembrane proteins while assigning reliability scores for the prediction. This method is prepared for genome-wide analysis with its easy-to-use interface, informative result page and programmatic access. Considering the info-communication evolution in the last few years, the developed web server, as well as the molecule viewer, is responsive and fully compatible with the prevalent tablets and mobile devices.

  15. Plankton food-webs: to what extent can they be simplified?

    Directory of Open Access Journals (Sweden)

    Domenico D'Alelio

    2016-05-01

    Full Text Available Plankton is a hugely diverse community including both unicellular and multicellular organisms, whose individual dimensions span over seven orders of magnitude. Plankton is a fundamental part of biogeochemical cycles and food-webs in aquatic systems. While knowledge has progressively accumulated at the level of single species and single trophic processes, the overwhelming biological diversity of plankton interactions is insufficiently known and a coherent and unifying trophic framework is virtually lacking. We performed an extensive review of the plankton literature to provide a compilation of data suitable for implementing food-web models including plankton trophic processes at high taxonomic resolution. We identified the components of the plankton community at the Long Term Ecological Research Station MareChiara in the Gulf of Naples. These components represented the sixty-three nodes of a plankton food-web. To each node we attributed biomass and vital rates, i.e. production, consumption, assimilation rates and ratio between autotrophy and heterotrophy in mixotrophic protists. Biomasses and rates values were defined for two opposite system conditions: relatively eutrophic and oligotrophic states. We finally identified 817 possible trophic links within the web and provided each of them with a relative weight, in order to define a diet-matrix, valid for both trophic states, which included all consumers, from nanoflagellates to carnivorous plankton. Vital rates for plankton turned out, as expected, to be very wide; this strongly contrasts with the narrow ranges considered in plankton system models implemented so far. Moreover, the amount and variety of trophic links highlighted by our review is largely excluded by state-of-the-art biogeochemical and food-web models for aquatic systems. Plankton models could potentially benefit from the integration of the trophic diversity outlined in this paper: first, by using more realistic rates; second, by better

  16. Compilation and network analyses of cambrian food webs.

    Directory of Open Access Journals (Sweden)

    Jennifer A Dunne

    2008-04-01

    Full Text Available A rich body of empirically grounded theory has developed about food webs--the networks of feeding relationships among species within habitats. However, detailed food-web data and analyses are lacking for ancient ecosystems, largely because of the low resolution of taxa coupled with uncertain and incomplete information about feeding interactions. These impediments appear insurmountable for most fossil assemblages; however, a few assemblages with excellent soft-body preservation across trophic levels are candidates for food-web data compilation and topological analysis. Here we present plausible, detailed food webs for the Chengjiang and Burgess Shale assemblages from the Cambrian Period. Analyses of degree distributions and other structural network properties, including sensitivity analyses of the effects of uncertainty associated with Cambrian diet designations, suggest that these early Paleozoic communities share remarkably similar topology with modern food webs. Observed regularities reflect a systematic dependence of structure on the numbers of taxa and links in a web. Most aspects of Cambrian food-web structure are well-characterized by a simple "niche model," which was developed for modern food webs and takes into account this scale dependence. However, a few aspects of topology differ between the ancient and recent webs: longer path lengths between species and more species in feeding loops in the earlier Chengjiang web, and higher variability in the number of links per species for both Cambrian webs. Our results are relatively insensitive to the exclusion of low-certainty or random links. The many similarities between Cambrian and recent food webs point toward surprisingly strong and enduring constraints on the organization of complex feeding interactions among metazoan species. The few differences could reflect a transition to more strongly integrated and constrained trophic organization within ecosystems following the rapid
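    The "niche model" this record compares the Cambrian webs against is a simple generative model: species are points on a one-dimensional niche axis, and each consumer eats everything inside a contiguous feeding interval. The sketch below follows the commonly cited Williams-and-Martinez-style formulation; the exact parameterization details are assumptions of this sketch, not taken from the record.

```python
import random

# Minimal sketch of a niche-model food web: S species on a niche axis,
# target connectance C. Each species eats all species whose niche value
# falls in its feeding range. Parameter choices follow the common
# formulation; treat this as an illustration, not the paper's code.
def niche_model(S, C, rng):
    beta = 1.0 / (2.0 * C) - 1.0              # so E[range] gives ~C
    n = sorted(rng.random() for _ in range(S))  # niche values
    links = set()
    for i in range(S):
        # Feeding-range width: n_i times a Beta(1, beta) draw,
        # sampled by inverting the CDF 1 - (1 - x)^beta.
        x = 1.0 - (1.0 - rng.random()) ** (1.0 / beta)
        r = x * n[i]
        # Range centre, placed at or below the species' own niche value.
        c = rng.uniform(r / 2.0, n[i])
        for j in range(S):
            if c - r / 2.0 <= n[j] <= c + r / 2.0:
                links.add((j, i))             # species j is eaten by i
    return links

rng = random.Random(7)
S, C = 30, 0.15
links = niche_model(S, C, rng)
print(len(links) / S ** 2)   # realized connectance, scattered around C
```

    Because the model is driven only by S and C, it captures the scale dependence the abstract emphasizes: structural metrics of a generated web depend systematically on the numbers of taxa and links, which is why it serves as the null model for both Cambrian and modern webs.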

  17. Compilation and network analyses of cambrian food webs.

    Science.gov (United States)

    Dunne, Jennifer A; Williams, Richard J; Martinez, Neo D; Wood, Rachel A; Erwin, Douglas H

    2008-04-29

    A rich body of empirically grounded theory has developed about food webs--the networks of feeding relationships among species within habitats. However, detailed food-web data and analyses are lacking for ancient ecosystems, largely because of the low resolution of taxa coupled with uncertain and incomplete information about feeding interactions. These impediments appear insurmountable for most fossil assemblages; however, a few assemblages with excellent soft-body preservation across trophic levels are candidates for food-web data compilation and topological analysis. Here we present plausible, detailed food webs for the Chengjiang and Burgess Shale assemblages from the Cambrian Period. Analyses of degree distributions and other structural network properties, including sensitivity analyses of the effects of uncertainty associated with Cambrian diet designations, suggest that these early Paleozoic communities share remarkably similar topology with modern food webs. Observed regularities reflect a systematic dependence of structure on the numbers of taxa and links in a web. Most aspects of Cambrian food-web structure are well-characterized by a simple "niche model," which was developed for modern food webs and takes into account this scale dependence. However, a few aspects of topology differ between the ancient and recent webs: longer path lengths between species and more species in feeding loops in the earlier Chengjiang web, and higher variability in the number of links per species for both Cambrian webs. Our results are relatively insensitive to the exclusion of low-certainty or random links. The many similarities between Cambrian and recent food webs point toward surprisingly strong and enduring constraints on the organization of complex feeding interactions among metazoan species. 
The few differences could reflect a transition to more strongly integrated and constrained trophic organization within ecosystems following the rapid diversification of species, body

  18. Exposing SAMOS Data and Vocabularies within the Semantic Web

    Science.gov (United States)

    Dockery, Nkemdirim; Elya, Jocelyn; Smith, Shawn

    2014-05-01

    As part of the Ocean Data Interoperability Platform (ODIP), we at the Center for Ocean-Atmospheric Prediction Studies (COAPS) will present the development process for the exposure of quality-controlled data and core vocabularies managed by the Shipboard Automated Meteorological Oceanographic System (SAMOS) initiative using Semantic Web technologies. Participants in the SAMOS initiative collect continuous navigational (position, course, heading, speed), meteorological (winds, pressure, temperature, humidity, radiation), and near-surface oceanographic (sea temperature, salinity) parameters while at sea. One-minute interval observations are packaged and transmitted back to COAPS via daily emails, where they undergo standardized formatting and quality control. The authors will present methods used to expose these daily datasets. The Semantic Web, a vision of the World Wide Web Consortium, focuses on extending the principles of the web from connecting documents to connecting data. The creation of a web of Linked Data that can be used across different applications in a machine-readable way is the ultimate goal. The Resource Description Framework (RDF) is the standard language and format used in the Semantic Web. RDF pages may be queried using the SPARQL Protocol and RDF Query Language (SPARQL). The authors will showcase the development of RDF resources that map SAMOS vocabularies to internationally served vocabularies such as those found in the Natural Environment Research Council (NERC) Vocabulary Server. Each individual SAMOS vocabulary term (data parameter and quality control flag) will be described in an RDF resource page. These RDF resources will define each SAMOS vocabulary term and provide a link to the mapped vocabulary term (or multiple terms) served externally. Along with enhanced retrieval by parameter, time, and location, we will be able to add additional parameters with the confidence that they follow an international standard. The production of RDF
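    The RDF resource pages this record describes pair a locally defined vocabulary term with a mapping to an externally served concept. A minimal way to picture that shape is a few hand-rolled N-Triples using SKOS mapping properties; both URIs below are invented for illustration and are not actual SAMOS or NERC identifiers.

```python
# Hand-rolled N-Triples for one hypothetical SAMOS vocabulary term
# mapped to a NERC Vocabulary Server concept. Both subject and object
# URIs are made up for illustration.
SKOS = "http://www.w3.org/2004/02/skos/core#"

def triple(s, p, o):
    """Serialize one N-Triples line; URIs get angle brackets, other
    objects become plain string literals."""
    obj = f"<{o}>" if o.startswith("http") else f'"{o}"'
    return f"<{s}> <{p}> {obj} ."

term = "http://example.org/samos/vocab/ATMP"                  # assumed
nerc = "http://vocab.nerc.ac.uk/collection/P02/current/CAPH/" # assumed

doc = "\n".join([
    triple(term, SKOS + "prefLabel", "atmospheric pressure"),
    triple(term, SKOS + "definition",
           "One-minute average air pressure measured underway"),
    triple(term, SKOS + "exactMatch", nerc),
])
print(doc)
```

    A resource page like this is what a SPARQL endpoint would query against; the skos:exactMatch link is what lets a new parameter "follow an international standard," as the abstract puts it.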

  19. The Electron Microscopy Outreach Program: A Web-based resource for research and education.

    Science.gov (United States)

    Sosinsky, G E; Baker, T S; Hand, G; Ellisman, M H

    1999-01-01

    We have developed a centralized World Wide Web (WWW)-based environment that serves as a resource of software tools and expertise for biological electron microscopy. A major focus is molecular electron microscopy, but the site also includes information and links on structural biology at all levels of resolution. This site serves to help integrate or link structural biology techniques in accordance with user needs. The WWW site, called the Electron Microscopy (EM) Outreach Program (URL: http://emoutreach.sdsc.edu), provides scientists with computational and educational tools for their research and edification. In particular, we have set up a centralized resource containing course notes, references, and links to image analysis and three-dimensional reconstruction software for investigators wanting to learn about EM techniques either within or outside of their fields of expertise. Copyright 1999 Academic Press.

  20. Ten years for the public Web

    CERN Multimedia

    2003-01-01

    Ten years ago, CERN issued a statement declaring that a little known piece of software called the World Wide Web was in the public domain. Nowadays, the Web is an indispensable part of modern communications. The idea for the Web goes back to March 1989 when CERN Computer scientist Tim Berners-Lee wrote a proposal for a 'Distributed Information Management System' for the high-energy physics community. The Web was originally conceived and developed to meet the demand for information sharing between scientists working all over the world. There were many obstacles in the 1980s to the effective exchange of information. There was, for example a great variety of computer and network systems, with hardly any common features. The main purpose of the web was to allow scientists to access information from any source in a consistent and simple way. By Christmas 1990, Berners-Lee's idea had become the World Wide Web, with its first server and browser running at CERN. Through 1991, the Web spread to other particle physics ...

  1. Linking open vocabularies

    CERN Document Server

    Greifender, Elke; Seadle, Michael

    2013-01-01

    Linked Data (LD), Linked Open Data (LOD) and generating a web of data, present the new knowledge sharing frontier. In a philosophical context, LD is an evolving environment that reflects humankind's desire to understand the world by drawing on the latest technologies and capabilities of the time. LD, while seemingly a new phenomenon, did not emerge overnight; rather it represents the natural progression by which knowledge structures are developed, used, and shared. Linked Open Vocabularies is a significant trajectory of LD. Linked Open Vocabularies targets vocabularies that have traditionally b

  2. Integration of public procurement data using linked data

    OpenAIRE

    Jindrich Mynarz

    2014-01-01

    Linked data is frequently cast as a technology for performing integration of distributed datasets on the Web. In this paper, we propose a generic workflow for data integration based on linked data and semantic web technologies. The workflow comes out of an analysis of the application of linked data to integration of public procurement data. It organizes common data integration tasks, including schema alignment, data translation, entity reconciliation, and data fusion, into a sequence of rep...

  3. The Development of Interactive World Wide Web Based Teaching Material in Forensic Science.

    Science.gov (United States)

    Daeid, Niamh Nic

    2001-01-01

    Describes the development of a Web-based tutorial in the forensic science teaching program at the University of Strathclyde (Scotland). Highlights include the theoretical basis for course development; objectives; Web site design; student feedback; and staff feedback. (LRW)

  4. Storage Manager and File Transfer Web Services

    International Nuclear Information System (INIS)

    William A Watson III; Ying Chen; Jie Chen; Walt Akers

    2002-01-01

    Web services are emerging as an interesting mechanism for a wide range of grid services, particularly those focused upon information services and control. When coupled with efficient data transfer services, they provide a powerful mechanism for building a flexible, open, extensible data grid for science applications. In this paper we present our prototype work on a Java Storage Resource Manager (JSRM) web service and a Java Reliable File Transfer (JRFT) web service. A Java client (Grid File Manager) on top of JSRM is developed to demonstrate the capabilities of these web services. The purpose of this work is to show the extent to which SOAP based web services are an appropriate direction for building a grid-wide data management system, and eventually grid-based portals.

  5. Orthopaedic Patient Information on the World Wide Web: An Essential Review.

    Science.gov (United States)

    Cassidy, John Tristan; Baker, Joseph F

    2016-02-17

    Patients increasingly use the Internet to research health-related issues. Internet content, unlike other forms of media, is not regulated. Although information accessed online can impact patients' opinions and expectations, there is limited information about the quality or readability of online orthopaedic information. PubMed, MEDLINE, and Google Scholar were searched using anatomic descriptors and three title keywords ("Internet," "web," and "online"). Articles examining online orthopaedic information from January 1, 2000, until April 1, 2015, were recorded. Articles were assessed for the number of reviewers evaluating the online material, whether the article examined the link between authorship and quality, and the use of recognized quality and readability assessment tools. To facilitate a contemporary discussion, only publications since January 1, 2010, were considered for analysis. A total of thirty-eight peer-reviewed articles published since 2010 examining the quality and/or readability of online orthopaedic information were reviewed. For information quality, there was marked variation in the quality assessment methods utilized, the number of reviewers, and the manner of reporting. To date, the majority of examined information is of poor quality. Studies examining readability have focused on pages produced by professional orthopaedic societies. The quality and readability of online orthopaedic information are generally poor. For modern practices to adapt to the Internet and to prevent misinformation, the orthopaedic community should develop high-quality, readable online patient information. Copyright © 2016 by The Journal of Bone and Joint Surgery, Incorporated.

  6. Food-web dynamics in a large river discontinuum

    Science.gov (United States)

    Cross, Wyatt F.; Baxter, Colden V.; Rosi-Marshall, Emma J.; Hall, Robert O.; Kennedy, Theodore A.; Donner, Kevin C.; Kelly, Holly A. Wellard; Seegert, Sarah E.Z.; Behn, Kathrine E.; Yard, Michael D.

    2013-01-01

    Nearly all ecosystems have been altered by human activities, and most communities are now composed of interacting species that have not co-evolved. These changes may modify species interactions, energy and material flows, and food-web stability. Although structural changes to ecosystems have been widely reported, few studies have linked such changes to dynamic food-web attributes and patterns of energy flow. Moreover, there have been few tests of food-web stability theory in highly disturbed and intensely managed freshwater ecosystems. Such synthetic approaches are needed for predicting the future trajectory of ecosystems, including how they may respond to natural or anthropogenic perturbations. We constructed flow food webs at six locations along a 386-km segment of the Colorado River in Grand Canyon (Arizona, USA) for three years. We characterized food-web structure and production, trophic basis of production, energy efficiencies, and interaction-strength distributions across a spatial gradient of perturbation (i.e., distance from Glen Canyon Dam), as well as before and after an experimental flood. We found strong longitudinal patterns in food-web characteristics that strongly correlated with the spatial position of large tributaries. Above tributaries, food webs were dominated by nonnative New Zealand mudsnails (62% of production) and nonnative rainbow trout (100% of fish production). The simple structure of these food webs led to few dominant energy pathways (diatoms to few invertebrate taxa to rainbow trout), large energy inefficiencies (i.e., Below large tributaries, invertebrate production declined ∼18-fold, while fish production remained similar to upstream sites and comprised predominately native taxa (80–100% of production). Sites below large tributaries had increasingly reticulate and detritus-based food webs with a higher prevalence of omnivory, as well as interaction strength distributions more typical of theoretically stable food webs (i

  7. Web cache location

    Directory of Open Access Journals (Sweden)

    Boffey Brian

    2004-01-01

    Full Text Available Stress placed on network infrastructure by the popularity of the World Wide Web may be partially relieved by keeping multiple copies of Web documents at geographically dispersed locations. In particular, use of proxy caches and replication provide a means of storing information 'nearer to end users'. This paper concentrates on the locational aspects of Web caching giving both an overview, from an operational research point of view, of existing research and putting forward avenues for possible further research. This area of research is in its infancy and the emphasis will be on themes and trends rather than on algorithm construction. Finally, Web caching problems are briefly related to referral systems more generally.

  8. grlc Makes GitHub Taste Like Linked Data APIs

    NARCIS (Netherlands)

    Meroño-Peñuela, A.; Hoekstra, Rinke; Sack, H; Rizzo, G; Steinmetz, N; Mladenić, D; Auer, S; Lange, C

    2016-01-01

    Building Web APIs on top of SPARQL endpoints is becoming a common practice to enable universal access to the integration favorable dataspace of Linked Data. However, the Linked Data community cannot expect users to learn SPARQL to query this dataspace, and Web APIs are the most extended way of

  9. grlc Makes GitHub Taste Like Linked Data APIs

    NARCIS (Netherlands)

    Meroño-Peñuela, A.; Hoekstra, R.

    2016-01-01

    Building Web APIs on top of SPARQL endpoints is becoming a common practice to enable universal access to the integration favorable dataspace of Linked Data. However, the Linked Data community cannot expect users to learn SPARQL to query this dataspace, and Web APIs are the most common way of
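The approach described in the two grlc records above, hiding SPARQL behind a plain Web API so users need not learn the query language, can be sketched minimally as follows. This is a hypothetical illustration, not grlc's actual code; the endpoint URL, function name, and parameters are assumptions.

```python
from urllib.parse import urlencode

def sparql_get_url(endpoint, query, fmt="application/sparql-results+json"):
    """Build the HTTP GET URL a grlc-style wrapper could send to a
    SPARQL endpoint on behalf of one API route."""
    return endpoint + "?" + urlencode({"query": query, "format": fmt})

# Example: one API route backed by a fixed query (endpoint is illustrative).
url = sparql_get_url(
    "https://dbpedia.org/sparql",
    "SELECT ?s WHERE { ?s a ?type } LIMIT 5",
)
print(url)
```

A real wrapper would also set an Accept header and stream the JSON results back to the API caller; the point here is only that each API route maps to one canned query.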

  10. LSD Dimensions: Use and Reuse of Linked Statistical Data

    NARCIS (Netherlands)

    Meroño-Peñuela, Albert

    2014-01-01

    RDF Data Cube (QB) has boosted the publication of Linked Statistical Data (LSD) on the Web, making them linkable to other related datasets and concepts following the Linked Data paradigm. In this demo we present LSD Dimensions, a web based application that monitors the usage of dimensions and codes

  11. Moving toward a universally accessible web: Web accessibility and education.

    Science.gov (United States)

    Kurt, Serhat

    2017-12-08

    The World Wide Web is an extremely powerful source of information, inspiration, ideas, and opportunities. As such, it has become an integral part of daily life for a great majority of people. Yet, for a significant number of others, the internet offers only limited value due to the existence of barriers which make accessing the Web difficult, if not impossible. This article illustrates some of the reasons that achieving equality of access to the online world of education is so critical, explores the current status of Web accessibility, discusses evaluative tools and methods that can help identify accessibility issues in educational websites, and provides practical recommendations and guidelines for resolving some of the obstacles that currently hinder the achievability of the goal of universal Web access.
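One concrete instance of the accessibility barriers described above is an image published without alternative text. As a hedged sketch (not drawn from the article), a minimal WCAG-style check for this single issue can be written with Python's standard html.parser; real accessibility audits cover far more than this one rule.

```python
from html.parser import HTMLParser

class AltTextChecker(HTMLParser):
    """Collects the src of every <img> tag that lacks an alt attribute."""
    def __init__(self):
        super().__init__()
        self.missing = []

    def handle_starttag(self, tag, attrs):
        if tag == "img" and "alt" not in dict(attrs):
            self.missing.append(dict(attrs).get("src", "(no src)"))

    # Treat self-closing <img .../> the same way.
    handle_startendtag = handle_starttag

checker = AltTextChecker()
checker.feed('<img src="logo.png"><img src="chart.png" alt="Sales chart">')
print(checker.missing)  # ['logo.png']
```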

  12. Happy birthday WWW: the web is now old enough to drive

    CERN Document Server

    Gilbertson, Scott

    2007-01-01

    "The World Wide Web can now drive. Sixteen years ago yesterday, in a short post to the alt.hypertext newsgroup, Tim Berners-Lee revealed the first public web pages summarizing his World Wide Web project." (1/4 page)

  13. Bioprocess-Engineering Education with Web Technology

    NARCIS (Netherlands)

    Sessink, O.

    2006-01-01

    Development of learning material that is distributed through and accessible via the World Wide Web. Various options from web technology are exploited to improve the quality and efficiency of learning material.

  14. A cross disciplinary study of link decay and the effectiveness of mitigation techniques.

    Science.gov (United States)

    Hennessey, Jason; Ge, Steven

    2013-01-01

    The dynamic, decentralized World Wide Web has become an essential part of scientific research and communication. Researchers create thousands of web sites every year to share software, data and services. These valuable resources tend to disappear over time. The problem has been documented in many subject areas. Our goal is to conduct a cross-disciplinary investigation of the problem and test the effectiveness of existing remedies. We accessed 14,489 unique web pages found in the abstracts within Thomson Reuters' Web of Science citation index that were published between 1996 and 2010 and found that the median lifespan of these web pages was 9.3 years with 62% of them being archived. Survival analysis and logistic regression were used to find significant predictors of URL lifespan. The availability of a web page is most dependent on the time it is published and the top-level domain names. Similar statistical analysis revealed biases in current solutions: the Internet Archive favors web pages with fewer layers in the Uniform Resource Locator (URL) while WebCite is significantly influenced by the source of publication. We also created a prototype for a process to submit web pages to the archives and increased coverage of our list of scientific webpages in the Internet Archive and WebCite by 22% and 255%, respectively. Our results show that link decay continues to be a problem across different disciplines and that current solutions for static web pages are helping and can be improved.
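The study notes that the Internet Archive favors web pages with fewer layers in the URL. That "layers" metric is simple to compute with the standard library; the function name below is illustrative, not taken from the paper.

```python
from urllib.parse import urlparse

def url_layers(url):
    """Count non-empty path segments ('layers') in a URL."""
    return sum(1 for seg in urlparse(url).path.split("/") if seg)

print(url_layers("http://example.org/"))                         # 0
print(url_layers("http://example.org/lab/tools/v2/index.html"))  # 4
```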

  15. International use of an academic nephrology World Wide Web site: from medical information resource to business tool.

    Science.gov (United States)

    Abbott, Kevin C; Oliver, David K; Boal, Thomas R; Gadiyak, Grigorii; Boocks, Carl; Yuan, Christina M; Welch, Paul G; Poropatich, Ronald K

    2002-04-01

    Studies of the use of the World Wide Web to obtain medical knowledge have largely focused on patients. In particular, neither the international use of academic nephrology World Wide Web sites (websites) as primary information sources nor the use of search engines (and search strategies) to obtain medical information have been described. Visits ("hits") to the Walter Reed Army Medical Center (WRAMC) Nephrology Service website from April 30, 2000, to March 14, 2001, were analyzed for the location of originating source using Webtrends, and search engines (Google, Lycos, etc.) were analyzed manually for search strategies used. From April 30, 2000 to March 14, 2001, the WRAMC Nephrology Service website received 1,007,103 hits and 12,175 visits. These visits were from 33 different countries, and the most frequent regions were Western Europe, Asia, Australia, the Middle East, Pacific Islands, and South America. The most frequent organization using the site was the military Internet system, followed by America Online and automated search programs of online search engines, most commonly Google. The online lecture series was the most frequently visited section of the website. Search strategies used in search engines were extremely technical. The use of "robots" by standard Internet search engines to locate websites, which may be blocked by mandatory registration, has allowed users worldwide to access the WRAMC Nephrology Service website to answer very technical questions. This suggests that it is being used as an alternative to other primary sources of medical information and that the use of mandatory registration may hinder users from finding valuable sites. With current Internet technology, even a single service can become a worldwide information resource without sacrificing its primary customers.

  16. Linked Data - the story so far

    OpenAIRE

    Bizer, Christian; Heath, Tom; Berners-Lee, Tim

    2009-01-01

    The term “Linked Data” refers to a set of best practices for publishing and connecting structured data on the Web. These best practices have been adopted by an increasing number of data providers over the last three years, leading to the creation of a global data space containing billions of assertions— the Web of Data. In this article, the authors present the concept and technical principles of Linked Data, and situate these within the broader context of related technological developments. T...

  17. A comprehensive and cost-effective preparticipation exam implemented on the World Wide Web.

    Science.gov (United States)

    Peltz, J E; Haskell, W L; Matheson, G O

    1999-12-01

    Mandatory preparticipation examinations (PPE) are labor intensive, offer little routine health maintenance and are poor predictors of future injury or illness. Our objective was to develop a new PPE for the Stanford University varsity athletes that improved both quality of primary and preventive care and physician time efficiency. This PPE is based on the annual submission, by each athlete, of a comprehensive medical history questionnaire that is then summarized in a two-page report for the examining physician. The questionnaire was developed through a search of MEDLINE from 1966 to 1997, review of PPE from 11 other institutions, and discussion with two experts from each of seven main content areas: medical and musculoskeletal history, eating, menstrual and sleep disorders, stress and health risk behaviors. Content validity was assessed by 10 sports medicine physicians and four epidemiologists. It was then programmed for the World Wide Web (http:// www.stanford.edu/dept/sportsmed/). The questionnaire demonstrated a 97 +/- 2% sensitivity in detecting positive responses requiring physician attention. Sixteen physicians administered the 1997/98 PPE; using the summary reports, 15 found improvement in their ability to provide overall medical care including health issues beyond clearance; 13 noted a decrease in time needed for each athlete exam. Over 90% of athletes who used the web site found it "easy" or "moderately easy" to access and complete. Initial assessment of this new PPE format shows good athlete compliance, improved exam efficiency and a strong increase in subjective physician satisfaction with the quality of screening and medical care provided. The data indicate a need for improvement of routine health maintenance in this population. The database offers opportunities to study trends, risk factors, and results of interventions.

  18. The Use of Web Search Engines in Information Science Research.

    Science.gov (United States)

    Bar-Ilan, Judit

    2004-01-01

    Reviews the literature on the use of Web search engines in information science research, including: ways users interact with Web search engines; social aspects of searching; structure and dynamic nature of the Web; link analysis; other bibliometric applications; characterizing information on the Web; search engine evaluation and improvement; and…

  19. [Preliminary construction of three-dimensional visual educational system for clinical dentistry based on world wide web webpage].

    Science.gov (United States)

    Hu, Jian; Xu, Xiang-yang; Song, En-min; Tan, Hong-bao; Wang, Yi-ning

    2009-09-01

    To establish a new visual educational system of virtual reality for clinical dentistry based on world wide web (WWW) webpage in order to provide more three-dimensional multimedia resources to dental students and an online three-dimensional consulting system for patients. Based on computer graphics and three-dimensional webpage technologies, the software of 3Dsmax and Webmax were adopted in the system development. In the Windows environment, the architecture of whole system was established step by step, including three-dimensional model construction, three-dimensional scene setup, transplanting three-dimensional scene into webpage, reediting the virtual scene, realization of interactions within the webpage, initial test, and necessary adjustment. Five cases of three-dimensional interactive webpage for clinical dentistry were completed. The three-dimensional interactive webpage could be accessible through web browser on personal computer, and users could interact with the webpage through rotating, panning and zooming the virtual scene. It is technically feasible to implement the visual educational system of virtual reality for clinical dentistry based on WWW webpage. Information related to clinical dentistry can be transmitted properly, visually and interactively through three-dimensional webpage.

  20. Drugs + HIV, Learn the Link

    Medline Plus

    Full Text Available ... Learn the Link campaign uses TV, print, and Web public service announcements (PSAs), as well as posters, e-cards, ... to misuse drugs. The Learn the Link public service campaign is just one ... site. Sincerely, Nora D. Volkow, M.D. Director ...

  1. Food-web structure of seagrass communities across different spatial scales and human impacts.

    Science.gov (United States)

    Coll, Marta; Schmidt, Allison; Romanuk, Tamara; Lotze, Heike K

    2011-01-01

    Seagrass beds provide important habitat for a wide range of marine species but are threatened by multiple human impacts in coastal waters. Although seagrass communities have been well-studied in the field, a quantification of their food-web structure and functioning, and how these change across space and human impacts, has been lacking. Motivated by extensive field surveys and literature information, we analyzed the structural features of food webs associated with Zostera marina across 16 study sites in 3 provinces in Atlantic Canada. Our goals were to (i) quantify differences in food-web structure across local and regional scales and human impacts, (ii) assess the robustness of seagrass webs to simulated species loss, and (iii) compare food-web structure in temperate Atlantic seagrass beds with those of other aquatic ecosystems. We constructed individual food webs for each study site and cumulative webs for each province and the entire region based on presence/absence of species, and calculated 16 structural properties for each web. Our results indicate that food-web structure was similar among low impact sites across regions. With increasing human impacts associated with eutrophication, however, food-web structure shows evidence of degradation as indicated by fewer trophic groups, lower maximum trophic level of the highest top predator, fewer trophic links connecting top to basal species, higher fractions of herbivores and intermediate consumers, and higher number of prey per species. These structural changes translate into functional changes, with impacted sites being less robust to simulated species loss. Temperate Atlantic seagrass webs are similar to a tropical seagrass web, yet differed from other aquatic webs, suggesting consistent food-web characteristics across seagrass ecosystems in different regions. Our study illustrates that food-web structure and functioning of seagrass habitats change with human impacts and that the spatial scale of food-web analysis

  2. Food-web structure of seagrass communities across different spatial scales and human impacts.

    Directory of Open Access Journals (Sweden)

    Marta Coll

    Full Text Available Seagrass beds provide important habitat for a wide range of marine species but are threatened by multiple human impacts in coastal waters. Although seagrass communities have been well-studied in the field, a quantification of their food-web structure and functioning, and how these change across space and human impacts, has been lacking. Motivated by extensive field surveys and literature information, we analyzed the structural features of food webs associated with Zostera marina across 16 study sites in 3 provinces in Atlantic Canada. Our goals were to (i) quantify differences in food-web structure across local and regional scales and human impacts, (ii) assess the robustness of seagrass webs to simulated species loss, and (iii) compare food-web structure in temperate Atlantic seagrass beds with those of other aquatic ecosystems. We constructed individual food webs for each study site and cumulative webs for each province and the entire region based on presence/absence of species, and calculated 16 structural properties for each web. Our results indicate that food-web structure was similar among low impact sites across regions. With increasing human impacts associated with eutrophication, however, food-web structure shows evidence of degradation as indicated by fewer trophic groups, lower maximum trophic level of the highest top predator, fewer trophic links connecting top to basal species, higher fractions of herbivores and intermediate consumers, and higher number of prey per species. These structural changes translate into functional changes, with impacted sites being less robust to simulated species loss. Temperate Atlantic seagrass webs are similar to a tropical seagrass web, yet differed from other aquatic webs, suggesting consistent food-web characteristics across seagrass ecosystems in different regions. Our study illustrates that food-web structure and functioning of seagrass habitats change with human impacts and that the spatial scale of

  3. Webs on the Web (WOW): 3D visualization of ecological networks on the WWW for collaborative research and education

    Science.gov (United States)

    Yoon, Ilmi; Williams, Rich; Levine, Eli; Yoon, Sanghyuk; Dunne, Jennifer; Martinez, Neo

    2004-06-01

    This paper describes information technology being developed to improve the quality, sophistication, accessibility, and pedagogical simplicity of ecological network data, analysis, and visualization. We present designs for a WWW demonstration/prototype web site that provides database, analysis, and visualization tools for research and education related to food web research. Our early experience with a prototype 3D ecological network visualization guides our design of a more flexible architecture. 3D visualization algorithms include variable node and link sizes, placements according to node connectivity and trophic levels, and visualization of other node and link properties in food web data. The flexible architecture includes an XML application design, FoodWebML, and pipelining of computational components. Based on users' choices of data and visualization options, the WWW prototype site will connect to an XML database (Xindice) and return the visualization in VRML format for browsing and further interactions.
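For illustration, the trophic levels used to place nodes in visualizations like the one above can be computed from diet data. The sketch below uses one common convention (a basal taxon has level 1; a consumer is 1 plus the mean level of its diet); the taxa and function name are hypothetical, not drawn from the WOW system, and published analyses often use flow-weighted variants instead.

```python
def trophic_levels(diets):
    """diets maps consumer -> list of resources; basal taxa never appear
    as keys. Assumes the web is acyclic. Returns a level for every taxon."""
    memo = {}

    def level(taxon):
        if taxon not in memo:
            prey = diets.get(taxon, [])
            memo[taxon] = 1.0 if not prey else 1.0 + sum(map(level, prey)) / len(prey)
        return memo[taxon]

    taxa = set(diets) | {p for d in diets.values() for p in d}
    return {t: level(t) for t in taxa}

# Toy web: diatoms -> grazers -> trout
web = {"trout": ["mayfly", "snail"], "mayfly": ["diatom"], "snail": ["diatom"]}
print(trophic_levels(web)["trout"])  # 3.0
```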

  4. Blueprint of a Cross-Lingual Web Retrieval Collection

    NARCIS (Netherlands)

    Sigurbjörnsson, B.; Kamps, J.; de Rijke, M.; van Zwol, R.

    2005-01-01

    The world wide web is a natural setting for cross-lingual information retrieval; web content is essentially multilingual, and web searchers are often polyglots. Even though English has emerged as the lingua franca of the web, planning for a business trip or holiday usually involves digesting pages

  5. Is Wikipedia link structure different?

    NARCIS (Netherlands)

    Kamps, J.; Koolen, M.; Baeza-Yates, R.; Boldi, P.; Ribeiro-Neto, B.; Cambazoglu, B.B.

    2010-01-01

    In this paper, we investigate the difference between Wikipedia and Web link structure with respect to their value as indicators of the relevance of a page for a given topic of request. Our experimental evidence is from two IR test-collections: the .GOV collection used at the TREC Web tracks and the

  6. Web document clustering using hyperlink structures

    Energy Technology Data Exchange (ETDEWEB)

    He, Xiaofeng; Zha, Hongyuan; Ding, Chris H.Q.; Simon, Horst D.

    2001-05-07

    With the exponential growth of information on the World Wide Web, there is great demand for developing efficient and effective methods for organizing and retrieving the information available. Document clustering plays an important role in information retrieval and taxonomy management for the World Wide Web and remains an interesting and challenging problem in the field of web computing. In this paper we consider document clustering methods exploring textual information, hyperlink structure, and co-citation relations. In particular, we apply the normalized cut clustering method developed in computer vision to the task of hyperdocument clustering. We also explore some theoretical connections of the normalized-cut method to the K-means method. We then experiment with the normalized-cut method in the context of clustering query result sets for web search engines.
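The normalized-cut method itself requires building a similarity graph and solving an eigenproblem; as a minimal, hedged stand-in, the kind of pairwise link-overlap similarity such clustering starts from can be illustrated with a Jaccard measure over outgoing links. The page names and data below are invented for the example.

```python
def link_similarity(out_a, out_b):
    """Jaccard overlap of two pages' outgoing-link sets -- the sort of
    pairwise edge weight a spectral method like normalized cut operates on."""
    a, b = set(out_a), set(out_b)
    return len(a & b) / len(a | b) if a | b else 0.0

outlinks = {
    "page1": {"a", "b", "c"},
    "page2": {"a", "b"},
    "page3": {"x", "y"},
}
print(round(link_similarity(outlinks["page1"], outlinks["page2"]), 3))  # 0.667
print(link_similarity(outlinks["page1"], outlinks["page3"]))            # 0.0
```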

  7. Multimedia radiology self-learning course on the world wide web

    International Nuclear Information System (INIS)

    Sim, Jung Suk; Kim, Jong Hyo; Kim, Tae Kyoung; Han, Joon Koo; Kang, Heung Sik; Yeon, Kyung Mo; Han, Man Chung

    1997-01-01

    The creation and maintenance of radiology teaching materials is both laborious and very time-consuming, but is important at a teaching hospital. Through use of the technology offered by today's World Wide Web, this problem can be efficiently solved, and on this basis, we devised a multimedia radiology self-learning course for abdominal ultrasound and CT. A combination of video and audio tapes has been used as teaching material; the authors digitized and converted these to Hypertext Mark-up Language (HTML) format. Films were digitized with a digital camera and compressed to Joint Photographic Experts Group (JPEG) format, while audio tapes were digitized with a sound recorder and compressed to RealAudio format. Multimedia on the World Wide Web will facilitate easy management and maintenance of a self-learning course. To make this more suitable for practical use, continual upgrading on the basis of experience is needed. (author). 3 refs., 4 figs

  8. Open Hypermedia as User Controlled Meta Data for the Web

    DEFF Research Database (Denmark)

    Grønbæk, Kaj; Sloth, Lennert; Bouvin, Niels Olof

    2000-01-01

    This paper introduces an approach to utilise open hypermedia structures such as links, annotations, collections and guided tours as meta data for Web resources. The paper introduces an XML based data format, called Open Hypermedia Interchange Format - OHIF, for such hypermedia structures. OHIF resembles XLink with respect to its representation of out-of-line links, but it goes beyond XLink with a more rich set of structuring mechanisms, including e.g. composites. Moreover, OHIF includes an addressing mechanism (LocSpecs) that goes beyond XPointer and URL in its ability to locate non-XML data segments, and it supports distributed open hypermedia linking between Web pages and WebDAV aware desktop applications. The paper describes the OHIF format and demonstrates how the Webvise system handles OHIF. Finally, it argues for better support for handling user controlled meta data, e.g. support for linking in non-XML data

  9. Sign Language Web Pages

    Science.gov (United States)

    Fels, Deborah I.; Richards, Jan; Hardman, Jim; Lee, Daniel G.

    2006-01-01

    The World Wide Web has changed the way people interact. It has also become an important equalizer of information access for many social sectors. However, for many people, including some sign language users, Web accessing can be difficult. For some, it not only presents another barrier to overcome but has left them without cultural equality. The…

  10. Hacking web intelligence open source intelligence and web reconnaissance concepts and techniques

    CERN Document Server

    Chauhan, Sudhanshu

    2015-01-01

    Open source intelligence (OSINT) and web reconnaissance are rich topics for infosec professionals looking for the best ways to sift through the abundance of information widely available online. In many cases, the first stage of any security assessment (that is, reconnaissance) is not given enough attention by security professionals, hackers, and penetration testers. Often, the information openly present is as critical as the confidential data. Hacking Web Intelligence shows you how to dig into the Web and uncover the information many don't even know exists. The book takes a holistic approach

  11. WebQuests: Are They Developmentally Appropriate?

    Science.gov (United States)

    Maddux, Cleborne D.; Cummings, Rhoda

    2007-01-01

    A topic that currently is receiving a great deal of attention by educators is the nature and use of WebQuests--computer-based activities that guide student learning through use of the World Wide Web (Sharp 2004). Despite their popularity, questions remain about the effectiveness with which WebQuests are being used with students. This article…

  12. A Survey On Various Web Template Detection And Extraction Methods

    Directory of Open Access Journals (Sweden)

    Neethu Mary Varghese

    2015-03-01

    Full Text Available Abstract In today's digital world, reliance on the World Wide Web as a source of information is extensive. Users increasingly rely on web based search engines to provide accurate search results on a wide range of topics that interest them. The search engines in turn parse the vast repository of web pages searching for relevant information. However, the majority of web portals are designed using web templates, which are intended to provide a consistent look and feel to end users. The presence of these templates, however, can influence search results, leading to inaccurate results being delivered to the users. Therefore, to improve the accuracy and reliability of search results, identification and removal of web templates from the actual content is essential. A wide range of approaches are commonly employed to achieve this, and this paper focuses on the study of the various approaches of template detection and extraction that can be applied across homogenous as well as heterogeneous web pages.

  13. Minimalist instruction for learning to search the World Wide Web

    NARCIS (Netherlands)

    Lazonder, Adrianus W.

    2001-01-01

    This study examined the efficacy of minimalist instruction to develop self-regulatory skills involved in Web searching. Two versions of minimalist self-regulatory skill instruction were compared to a control group that was merely taught procedural skills to operate the search engine. Acquired skills

  14. A Technique to Speedup Access to Web Contents

    Indian Academy of Sciences (India)

    Web Caching - A Technique to Speedup Access to Web Contents. Harsha Srinath and Shiva Shankar Ramanna. General Article, Resonance – Journal of Science Education, Volume 7, Issue 7, July 2002, pp. 54-62. Keywords: World Wide Web; data caching; internet traffic; web page access.
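As a toy illustration of the caching idea in this record (not code from the article), a least-recently-used page cache can be built on the standard library's OrderedDict; real proxy caches layer freshness and validation rules on top of simple eviction.

```python
from collections import OrderedDict

class LRUWebCache:
    """Minimal least-recently-used cache mapping URL -> page body."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.pages = OrderedDict()

    def get(self, url):
        if url not in self.pages:
            return None
        self.pages.move_to_end(url)         # mark as recently used
        return self.pages[url]

    def put(self, url, body):
        self.pages[url] = body
        self.pages.move_to_end(url)
        if len(self.pages) > self.capacity:
            self.pages.popitem(last=False)  # evict least recently used

cache = LRUWebCache(2)
cache.put("/a", "<html>A</html>")
cache.put("/b", "<html>B</html>")
cache.get("/a")                    # touch /a so /b becomes oldest
cache.put("/c", "<html>C</html>")  # evicts /b
print(sorted(cache.pages))         # ['/a', '/c']
```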

  15. Spinning the web of knowledge

    CERN Multimedia

    Knight, Matthew

    2007-01-01

    "On August 6, 1991, Tim Berners-Lee posted the World Wide Web's first Web site. Fifteen years on, there are estimated to be over 100 million. The pace of growth has happened at a bewildering rate and its success has even confounded its inventor." (1/2 page)

  16. A new generation of tools for search, recovery and quality evaluation of World Wide Web medical resources.

    Science.gov (United States)

    Aguillo, I

    2000-01-01

    Although the Internet is already a valuable information resource in medicine, there are important challenges to be faced before physicians and general users will have extensive access to this information. As a result of a research effort to compile a health-related Internet directory, new tools and strategies have been developed to solve key problems derived from the explosive growth of medical information on the Net and the great concern over the quality of such critical information. The current Internet search engines lack some important capabilities. We suggest using second generation tools (client-side based) able to deal with large quantities of data and to increase the usability of the records recovered. We tested the capabilities of these programs to solve health-related information problems, recognising six groups according to the kind of topics addressed: Z39.50 clients, downloaders, multisearchers, tracing agents, indexers and mappers. The evaluation of the quality of health information available on the Internet could require a large amount of human effort. A possible solution may be to use quantitative indicators based on the hypertext visibility of the Web sites. The cybermetric measures are valid for quality evaluation if they are derived from indirect peer review by experts with Web pages citing the site. The hypertext links acting as citations need to be extracted from a controlled sample of quality super-sites.
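The "hypertext visibility" indicator proposed above amounts to counting how many external pages link to (cite) each site. A minimal sketch, with invented data and names:

```python
from collections import Counter

def visibility(citations):
    """Count inbound links per site from (citing_page, cited_site) pairs --
    a crude cybermetric proxy for quality via indirect peer review."""
    return Counter(target for _, target in citations)

links = [
    ("pageA", "site1"),
    ("pageB", "site1"),
    ("pageC", "site2"),
]
print(visibility(links)["site1"])  # 2
```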

  17. Web 3.0: implicaciones educativas

    OpenAIRE

    Grupo TACE. Tecnologías Aplicadas a las Ciencias de la Educación

    2012-01-01

    Web 3.0 is considered the stage that follows Web 2.0, or the social Web. It is not yet an unambiguous term and often appears together with the semantic web. It is an extension of the World Wide Web that makes it possible to express natural language and also to use a language that can be understood, interpreted and used by software agents, allowing information to be found, shared and integrated more easily. It is a new cycle in which artificial intelligence is combined with the capacity of...

  18. Flow Webs: Mechanism and Architecture for the Implementation of Sensor Webs

    Science.gov (United States)

    Gorlick, M. M.; Peng, G. S.; Gasster, S. D.; McAtee, M. D.

    2006-12-01

    The sensor web is a distributed, federated infrastructure much like its predecessors, the internet and the world wide web. It will be a federation of many sensor webs, large and small, under many distinct spans of control, that loosely cooperates and shares information for many purposes. Realistically, it will grow piecemeal as distinct, individual systems are developed and deployed, some expressly built for a sensor web while many others were created for other purposes. Therefore, the architecture of the sensor web is of fundamental import, and architectural strictures that inhibit innovation, experimentation, sharing or scaling may prove fatal. Drawing upon the architectural lessons of the world wide web, we offer a novel system architecture, the flow web, that elevates flows, sequences of messages over a domain of interest and constrained in both time and space, to a position of primacy as a dynamic, real-time medium of information exchange for computational services. The flow web captures, in a single, uniform architectural style, the conflicting demands of the sensor web including dynamic adaptations to changing conditions, ease of experimentation, rapid recovery from the failures of sensors and models, automated command and control, incremental development and deployment, and integration at multiple levels—in many cases, at different times. Our conception of sensor webs—dynamic amalgamations of sensor webs each constructed within a flow web infrastructure—holds substantial promise for earth science missions in general, and for weather, air quality, and disaster management in particular. Flow webs are, by philosophy, design and implementation, a dynamic infrastructure that permits massive adaptation in real-time. Flows may be attached to and detached from services at will, even while information is in transit through the flow. This concept, flow mobility, permits dynamic integration of earth science products and modeling resources in response to real

  19. Turkish University Students’ Perceptions of the World Wide Web as a Learning Tool: An Investigation Based on Gender, Socio-Economic Background, and Web Experience

    Directory of Open Access Journals (Sweden)

    Erkan Tekinarslan

    2009-04-01

    Full Text Available The main purpose of the study is to investigate Turkish undergraduate students’ perceptions of the Web as a learning tool and to analyze whether their perceptions differ significantly based on gender, socio-economic background, and Web experience. Data obtained from 722 undergraduate students (331 males and 391 females) were used in the analyses. The findings indicated significant differences based on gender, socio-economic background, and Web experience. The students from higher socio-economic backgrounds indicated significantly higher attitude scores on the self-efficacy subscale of the Web attitude scale. Similarly, the male students indicated significantly higher scores on the self-efficacy subscale than the females. Also, the students with higher Web experience in terms of usage frequency indicated higher scores on all subscales (i.e., self-efficacy, affective, usefulness, Web-based learning). Moreover, the two-way ANOVA results indicated that the students’ PC ownership has significant main effects on their Web attitudes and on the usefulness, self-efficacy, and affective subscales.

  20. E-Learning 3.0 = E-Learning 2.0 + Web 3.0?

    Science.gov (United States)

    Hussain, Fehmida

    2012-01-01

    Web 3.0, termed the semantic web or the web of data, is the transformed version of Web 2.0 with technologies and functionalities such as intelligent collaborative filtering, cloud computing, big data, linked data, openness, interoperability and smart mobility. If Web 2.0 is about social networking and mass collaboration between the creator and…

  1. The Semantic Web and Educational Technology

    Science.gov (United States)

    Maddux, Cleborne D., Ed.

    2008-01-01

    The "Semantic Web" is an idea proposed by Tim Berners-Lee, the inventor of the "World Wide Web." The topic has been generating a great deal of interest and enthusiasm, and there is a rapidly growing body of literature dealing with it. This article attempts to explain how the Semantic Web would work, and explores short-term and long-term…

  2. Requirements of a security framework for the semantic web

    CSIR Research Space (South Africa)

    Mbaya, IR

    2009-02-01

    Full Text Available The vision of the Semantic Web is to provide the World Wide Web with the ability to automate, interoperate, and reason about resources and services on the Web. However, the autonomous, dynamic, open, distributed, and heterogeneous nature of the Semantic Web...

  3. Web Mining of Hotel Customer Survey Data

    Directory of Open Access Journals (Sweden)

    Richard S. Segall

    2008-12-01

    Full Text Available This paper provides an extensive literature review and list of references on the background of web mining as applied specifically to hotel customer survey data. This research applies the techniques of web mining to the actual text of written comments from hotel customers, using Megaputer PolyAnalyst®. Web mining functionalities utilized include clustering, link analysis, keyword and phrase extraction, taxonomy, and dimension matrices. This paper provides screen shots of the web mining applications using Megaputer PolyAnalyst®. Conclusions and future directions of the research are presented.

  4. Simple Enough--Even for Web Virgins: Lisa Mitten's Access to Native American Web Sites. Web Site Review Essay.

    Science.gov (United States)

    Belgarde, Mary Jiron

    1998-01-01

    A mixed-blood Mohawk urban Indian and university librarian, Lisa Mitten provides access to Web sites with solid information about American Indians. Links are provided to 10 categories--Native nations, Native organizations, Indian education, Native media, powwows and festivals, Indian music, Native arts, Native businesses, and Indian-oriented home…

  5. Deriving a Typology of Web 2.0 Learning Technologies

    Science.gov (United States)

    Bower, Matt

    2016-01-01

    This paper presents the methods and outcomes of a typological analysis of Web 2.0 technologies. A comprehensive review incorporating over 2000 links led to identification of over 200 Web 2.0 technologies that were suitable for learning and teaching purposes. The typological analysis involved development of relevant Web 2.0 dimensions, grouping…

  6. Programming NET Web Services

    CERN Document Server

    Ferrara, Alex

    2007-01-01

    Web services are poised to become a key technology for a wide range of Internet-enabled applications, spanning everything from straight B2B systems to mobile devices and proprietary in-house software. While there are several tools and platforms that can be used for building web services, developers are finding a powerful tool in Microsoft's .NET Framework and Visual Studio .NET. Designed from scratch to support the development of web services, the .NET Framework simplifies the process--programmers find that tasks that took an hour using the SOAP Toolkit take just minutes. Programming .NET

  7. Discovering More Accurate Frequent Web Usage Patterns

    OpenAIRE

    Bayir, Murat Ali; Toroslu, Ismail Hakki; Cosar, Ahmet; Fidan, Guven

    2008-01-01

    Web usage mining is a type of web mining, which exploits data mining techniques to discover valuable information from navigation behavior of World Wide Web users. As in classical data mining, data preparation and pattern discovery are the main issues in web usage mining. The first phase of web usage mining is the data processing phase, which includes the session reconstruction operation from server logs. Session reconstruction success directly affects the quality of the frequent patterns disc...
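The session reconstruction step described above is commonly implemented with a time-oriented heuristic. Below is a minimal sketch, assuming the conventional 30-minute inactivity threshold; the log format and field names are illustrative, not taken from the paper.

```python
from datetime import datetime, timedelta

# Conventional inactivity threshold for time-oriented session splitting.
SESSION_GAP = timedelta(minutes=30)

def reconstruct_sessions(log_entries):
    """Group (user, timestamp, url) log entries into per-user sessions."""
    sessions = {}
    for user, ts, url in sorted(log_entries, key=lambda e: (e[0], e[1])):
        user_sessions = sessions.setdefault(user, [])
        # Continue the current session if the gap since the last request
        # is within the threshold; otherwise start a new session.
        if user_sessions and ts - user_sessions[-1][-1][0] <= SESSION_GAP:
            user_sessions[-1].append((ts, url))
        else:
            user_sessions.append([(ts, url)])
    return sessions

t0 = datetime(2015, 8, 6, 10, 0)
log = [
    ("u1", t0, "/home"),
    ("u1", t0 + timedelta(minutes=5), "/products"),
    ("u1", t0 + timedelta(hours=2), "/home"),  # gap > 30 min: new session
]
sessions = reconstruct_sessions(log)
print(len(sessions["u1"]))  # → 2
```

As the abstract notes, the quality of frequent patterns discovered downstream depends directly on how well this reconstruction matches real user visits.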

  8. Using EMBL-EBI Services via Web Interface and Programmatically via Web Services.

    Science.gov (United States)

    Lopez, Rodrigo; Cowley, Andrew; Li, Weizhong; McWilliam, Hamish

    2014-12-12

    The European Bioinformatics Institute (EMBL-EBI) provides access to a wide range of databases and analysis tools that are of key importance in bioinformatics. As well as providing Web interfaces to these resources, Web Services are available using SOAP and REST protocols that enable programmatic access to our resources and allow their integration into other applications and analytical workflows. This unit describes the various options available to a typical researcher or bioinformatician who wishes to use our resources via Web interface or programmatically via a range of programming languages. Copyright © 2014 John Wiley & Sons, Inc.

  9. SoyBase Simple Semantic Web Architecture and Protocol (SSWAP) Services

    Science.gov (United States)

    Semantic web technologies offer the potential to link internet resources and data by shared concepts without having to rely on absolute lexical matches. Thus two web sites or web resources which are concerned with similar data types could be identified based on similar semantics. In the biological...

  10. Trust estimation of the semantic web using semantic web clustering

    Science.gov (United States)

    Shirgahi, Hossein; Mohsenzadeh, Mehran; Haj Seyyed Javadi, Hamid

    2017-05-01

    Development of the semantic web and social networks is undeniable in today's Internet world. The widespread nature of the semantic web makes assessing trust in this field very challenging. In recent years, extensive research has been done to estimate the trust of the semantic web. Since trust of the semantic web is a multidimensional problem, in this paper we used parameters of social network authority, page link authority, and semantic authority to assess trust. Due to the large space of the semantic network, we restricted the problem scope to clusters of semantic subnetworks, obtained the trust of each cluster's elements locally, and calculated the trust of outside resources according to their local trusts and the clusters' trust in each other. According to the experimental results, the proposed method shows an F-score of more than 79%, which is about 11.9% higher on average than the Eigen, Tidal, and centralised trust methods. The mean error of the proposed method is 12.936, which is on average 9.75% less than the Eigen and Tidal trust methods.
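The cluster-based idea described in the record above can be caricatured as follows: trust inside a cluster is known locally, and trust in an outside resource is scaled by the trust between clusters. The numbers and the multiplicative combination rule here are illustrative assumptions, not the paper's actual model.

```python
# Hypothetical local trust scores within each cluster of the
# semantic subnetwork, and trust between clusters.
local_trust = {
    "c1": {"a": 0.9, "b": 0.6},
    "c2": {"x": 0.8},
}
cluster_trust = {("c1", "c2"): 0.7}

def trust(from_cluster, resource):
    """Local trust if available, else local trust at the remote
    cluster scaled by inter-cluster trust (an assumed rule)."""
    if resource in local_trust[from_cluster]:
        return local_trust[from_cluster][resource]
    for (src, dst), ct in cluster_trust.items():
        if src == from_cluster and resource in local_trust[dst]:
            return ct * local_trust[dst][resource]
    return 0.0

print(trust("c1", "a"))            # → 0.9 (local)
print(round(trust("c1", "x"), 2))  # → 0.56 (0.7 * 0.8, scaled)
```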

  11. Semantic Web Technologies and Big Data Infrastructures: SPARQL Federated Querying of Heterogeneous Big Data Stores

    OpenAIRE

    Konstantopoulos, Stasinos; Charalambidis, Angelos; Mouchakis, Giannis; Troumpoukis, Antonis; Jakobitsch, Jürgen; Karkaletsis, Vangelis

    2016-01-01

    The ability to cross-link large scale data with each other and with structured Semantic Web data, and the ability to uniformly process Semantic Web and other data adds value to both the Semantic Web and to the Big Data community. This paper presents work in progress towards integrating Big Data infrastructures with Semantic Web technologies, allowing for the cross-linking and uniform retrieval of data stored in both Big Data infrastructures and Semantic Web data. The technical challenges invo...
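The kind of cross-store retrieval the paper describes is what SPARQL 1.1 federation provides: the SERVICE clause delegates part of a query's graph pattern to a remote endpoint. The sketch below shows the query shape only; both endpoint URL and predicates are placeholders, not infrastructure from the paper.

```python
# A SPARQL 1.1 federated query sketch: the outer pattern is evaluated
# against the local store, while the SERVICE block is sent to a remote
# (here, hypothetical) Big Data endpoint.
FEDERATED_QUERY = """
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
SELECT ?station ?label ?reading
WHERE {
  ?station rdfs:label ?label .
  SERVICE <http://bigdata.example.org/sparql> {
    ?station <http://example.org/hasReading> ?reading .
  }
}
"""
print("SERVICE" in FEDERATED_QUERY)  # → True
```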

  12. Linked data management

    CERN Document Server

    Hose, Katja; Schenkel, Ralf

    2014-01-01

    Linked Data Management presents techniques for querying and managing Linked Data that is available on today’s Web. The book shows how the abundance of Linked Data can serve as fertile ground for research and commercial applications. The text focuses on aspects of managing large-scale collections of Linked Data. It offers a detailed introduction to Linked Data and related standards, including the main principles distinguishing Linked Data from standard database technology. Chapters also describe how to generate links between datasets and explain the overall architecture of data integration systems based on Linked Data. A large part of the text is devoted to query processing in different setups. After presenting methods to publish relational data as Linked Data and efficient centralized processing, the book explores lookup-based, distributed, and parallel solutions. It then addresses advanced topics, such as reasoning, and discusses work related to read-write Linked Data for system interoperation. Desp...
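Publishing relational data as Linked Data, one of the book's topics, can be illustrated with a direct-mapping-style sketch: each row becomes a subject URI and each non-key column a predicate. The base URI, table name, and row are illustrative assumptions, not examples from the book.

```python
# Hypothetical base URI for minting subject and predicate URIs.
BASE = "http://example.org/resource/"

def row_to_triples(table, pk, row):
    """Map one relational row to RDF triples, direct-mapping style:
    subject from the primary key, one triple per non-key column."""
    subject = f"<{BASE}{table}/{row[pk]}>"
    triples = []
    for column, value in row.items():
        if column == pk:
            continue
        predicate = f"<{BASE}{table}#{column}>"
        triples.append((subject, predicate, f'"{value}"'))
    return triples

row = {"id": 7, "title": "Linked Data Management", "year": 2014}
triples = row_to_triples("book", "id", row)
for t in triples:
    print(" ".join(t) + " .")  # emits one triple per non-key column
```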

  13. Safety and efficacy of aneurysm treatment with WEB

    DEFF Research Database (Denmark)

    Pierot, Laurent; Costalat, Vincent; Moret, Jacques

    2016-01-01

    OBJECT WEB is an innovative intrasaccular treatment for intracranial aneurysms. Preliminary series have shown good safety and efficacy. The WEB Clinical Assessment of Intrasaccular Aneurysm Therapy (WEBCAST) trial is a prospective European trial evaluating the safety and efficacy of WEB in wide-neck bifurcation aneurysms. METHODS Patients with wide-neck bifurcation aneurysms for which WEB treatment was indicated were included in this multicenter good clinical practices study. Clinical data including adverse events and clinical status at 1 and 6 months were collected and independently analyzed by a medical.... RESULTS Ten European neurointerventional centers enrolled 51 patients with 51 aneurysms. Treatment with WEB was achieved in 48 of 51 aneurysms (94.1%). Adjunctive implants (coils/stents) were used in 4 of 48 aneurysms (8.3%). Thromboembolic events were observed in 9 of 51 patients (17.6%), resulting...

  14. WebQuest y anotaciones semánticas WebQuest and semantic annotations

    Directory of Open Access Journals (Sweden)

    Santiago Blanco Suárez

    2007-03-01

    Full Text Available This article presents a system for searching and retrieving metadata of educational activities that follow the WebQuest model. It is a relational database, accessible through the web, complemented with a module for semantic annotations, whose goal is to capture and enrich knowledge about the use of these exercises by the community of teachers who experiment with them, and to document resources or websites of didactic interest in order to build a repository of quality educational links.

  15. Deploying Linked Open Vocabulary (LOV) to Enhance Library Linked Data

    Directory of Open Access Journals (Sweden)

    Oh, Sam Gyun

    2015-06-01

    Full Text Available Since the advent of Linked Data (LD) as a method for building webs of data, there have been many attempts to apply and implement LD in various settings. Efforts have been made to convert bibliographic data in libraries into Linked Data, thereby generating Library Linked Data (LLD). However, when memory institutions have tried to link their data with external sources based on principles suggested by Tim Berners-Lee, identifying appropriate vocabularies for use in describing their bibliographic data has proved challenging. The objective of this paper is to discuss the potential role of Linked Open Vocabularies (LOV) in providing better access to various open datasets and facilitating effective linking. The paper will also examine the ways in which memory institutions can utilize LOV to enhance the quality of LLD and LLD-based ontology design.

  16. The definitive guide to HTML5 WebSocket

    CERN Document Server

    Wang, Vanessa; Moskovits, Peter

    2013-01-01

    The Definitive Guide to HTML5 WebSocket is the ultimate insider's WebSocket resource. This revolutionary new web technology enables you to harness the power of true real-time connectivity and build responsive, modern web applications.   This book contains everything web developers and architects need to know about WebSocket. It discusses how WebSocket-based architectures provide a dramatic reduction in unnecessary network overhead and latency compared to older HTTP (Ajax) architectures, how to layer widely used protocols such as XMPP and STOMP on top of WebSocket, and how to secure WebSocket c

  17. Classical Hypermedia Virtues on the Web with Webstrates

    DEFF Research Database (Denmark)

    Bouvin, Niels Olof; Klokmose, Clemens Nylandsted

    2016-01-01

    We show and analyze herein how Webstrates can augment the Web from a classical hypermedia perspective. Webstrates turns the DOM of Web pages into persistent and collaborative objects. We demonstrate how this can be applied to realize bidirectional links, shared collaborative annotations, and in...

  18. Study on online community user motif using web usage mining

    Science.gov (United States)

    Alphy, Meera; Sharma, Ajay

    2016-04-01

    Web usage mining is the application of data mining used to extract useful information from online communities. The World Wide Web contained at least 4.73 billion pages according to the Indexed Web, and at least 228.52 million pages according to the Dutch Indexed Web, on Thursday, 6 August 2015. It is difficult to get the data one needs from these billions of web pages, which is where web usage mining becomes important. Personalizing the search engine helps the web user identify the most used data in an easy way: it reduces time consumption through automatic site search and automatic restoration of useful sites. This study surveys the techniques used in pattern discovery and analysis in web usage mining from 1996 to 2015. Analyzing user motifs helps in the improvement of business, e-commerce, personalisation, and websites.

  19. WebMGA: a customizable web server for fast metagenomic sequence analysis.

    Science.gov (United States)

    Wu, Sitao; Zhu, Zhengwei; Fu, Liming; Niu, Beifang; Li, Weizhong

    2011-09-07

    The new field of metagenomics studies microorganism communities by culture-independent sequencing. With the advances in next-generation sequencing techniques, researchers are facing tremendous challenges in metagenomic data analysis due to the huge quantity and high complexity of sequence data. Analyzing large datasets is extremely time-consuming; metagenomic annotation also involves a wide range of computational tools, which are difficult for common users to install and maintain. The tools provided by the few available web servers are also limited and have various constraints such as login requirements, long waiting times, and the inability to configure pipelines. We developed WebMGA, a customizable web server for fast metagenomic analysis. WebMGA includes over 20 commonly used tools for tasks such as ORF calling, sequence clustering, quality control of raw reads, removal of sequencing artifacts and contaminations, taxonomic analysis, and functional annotation. WebMGA provides users with rapid metagenomic data analysis using fast and effective tools, which have been implemented to run in parallel on our local computer cluster. Users can access WebMGA through web browsers or programming scripts to perform individual analyses or to configure and run customized pipelines. WebMGA is freely available at http://weizhongli-lab.org/metagenomic-analysis. WebMGA offers researchers many fast and unique tools and great flexibility for complex metagenomic data analysis.

  20. WebMGA: a customizable web server for fast metagenomic sequence analysis

    Directory of Open Access Journals (Sweden)

    Niu Beifang

    2011-09-01

    Full Text Available Abstract Background The new field of metagenomics studies microorganism communities by culture-independent sequencing. With the advances in next-generation sequencing techniques, researchers are facing tremendous challenges in metagenomic data analysis due to the huge quantity and high complexity of sequence data. Analyzing large datasets is extremely time-consuming; metagenomic annotation also involves a wide range of computational tools, which are difficult for common users to install and maintain. The tools provided by the few available web servers are also limited and have various constraints such as login requirements, long waiting times, and the inability to configure pipelines. Results We developed WebMGA, a customizable web server for fast metagenomic analysis. WebMGA includes over 20 commonly used tools for tasks such as ORF calling, sequence clustering, quality control of raw reads, removal of sequencing artifacts and contaminations, taxonomic analysis, and functional annotation. WebMGA provides users with rapid metagenomic data analysis using fast and effective tools, which have been implemented to run in parallel on our local computer cluster. Users can access WebMGA through web browsers or programming scripts to perform individual analyses or to configure and run customized pipelines. WebMGA is freely available at http://weizhongli-lab.org/metagenomic-analysis. Conclusions WebMGA offers researchers many fast and unique tools and great flexibility for complex metagenomic data analysis.

  1. Linking oceanic food webs to coastal production and growth rates of Pacific salmon ( Oncorhynchus spp.), using models on three scales

    Science.gov (United States)

    Aydin, Kerim Y.; McFarlane, Gordon A.; King, Jacquelynne R.; Megrey, Bernard A.; Myers, Katherine W.

    2005-03-01

    Three independent modeling methods—a nutrient-phytoplankton-zooplankton (NPZ) model (NEMURO), a food web model (Ecopath/Ecosim), and a bioenergetics model for pink salmon ( Oncorhynchus gorbuscha)—were linked to examine the relationship between seasonal zooplankton dynamics and annual food web productive potential for Pacific salmon feeding and growing in the Alaskan subarctic gyre ecosystem. The linked approach shows the importance of seasonal and ontogenetic prey switching for zooplanktivorous pink salmon, and illustrates the critical role played by lipid-rich forage species, especially the gonatid squid Berryteuthis anonychus, in connecting zooplankton to upper trophic level production in the subarctic North Pacific. The results highlight the need to uncover natural mechanisms responsible for accelerated late winter and early spring growth of salmon, especially with respect to climate change and zooplankton bloom timing. Our results indicate that the best match between modeled and observed high-seas pink salmon growth requires the inclusion of two factors into bioenergetics models: (1) decreasing energetic foraging costs for salmon as zooplankton are concentrated by the spring shallowing of pelagic mixed-layer depth and (2) the ontogenetic switch of salmon diets from zooplankton to squid. Finally, we varied the timing and input levels of coastal salmon production to examine effects of density-dependent coastal processes on ocean feeding; coastal processes that place relatively minor limitations on salmon growth may delay the seasonal timing of ontogenetic diet shifts and thus have a magnified effect on overall salmon growth rates.

  2. The effect of new links on Google Pagerank

    NARCIS (Netherlands)

    Avrachenkov, K.; Litvak, Nelli

    2006-01-01

    PageRank is one of the principal criteria according to which Google ranks Web pages. PageRank can be interpreted as the frequency with which a random surfer visits a Web page, and thus it reflects the popularity of a Web page. We study the effect of newly created links on Google PageRank. We discuss to
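The random-surfer interpretation in the record above can be illustrated with a minimal power-iteration sketch. The damping factor 0.85 is the conventional choice, and the toy graphs are illustrative, not taken from the paper.

```python
def pagerank(links, d=0.85, iters=100):
    """Power-iteration PageRank over a dict {page: [outlinks]}.
    With damping d, a random surfer follows a link with probability d
    and teleports to a uniformly random page otherwise."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iters):
        new = {p: (1 - d) / n for p in pages}
        for p, outs in links.items():
            share = rank[p] / len(outs) if outs else 0.0
            for q in outs:
                new[q] += d * share
        rank = new
    return rank

# In a symmetric 3-cycle every page gets rank 1/3.
graph = {"A": ["B"], "B": ["C"], "C": ["A"]}
ranks = pagerank(graph)
print({p: round(r, 3) for p, r in ranks.items()})  # → all 0.333

# Adding a new link B -> A raises A's rank, the kind of effect the
# paper studies.
ranks2 = pagerank({"A": ["B"], "B": ["C", "A"], "C": ["A"]})
print(ranks2["A"] > ranks["A"])  # → True
```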

  3. Integrating Web Services into Map Image Applications

    National Research Council Canada - National Science Library

    Tu, Shengru

    2003-01-01

    Web services have been opening a wide avenue for software integration. In this paper, we have reported our experiments with three applications that are built by utilizing and providing web services for Geographic Information Systems (GIS...

  4. Publication, discovery and interoperability of Clinical Decision Support Systems: A Linked Data approach.

    Science.gov (United States)

    Marco-Ruiz, Luis; Pedrinaci, Carlos; Maldonado, J A; Panziera, Luca; Chen, Rong; Bellika, J Gustav

    2016-08-01

    The high costs involved in the development of Clinical Decision Support Systems (CDSS) make it necessary to share their functionality across different systems and organizations. Service Oriented Architectures (SOA) have been proposed to allow reusing CDSS by encapsulating them in a Web service. However, strong barriers to sharing CDS functionality are still present as a consequence of the lack of expressiveness of services' interfaces. Linked Services are the evolution of the Semantic Web Services paradigm to process Linked Data. They aim to provide semantic descriptions over SOA implementations to overcome the limitations derived from the syntactic nature of Web services technologies. Our objective is to facilitate the publication, discovery, and interoperability of CDS services by evolving them into Linked Services that expose their interfaces as Linked Data. We developed methods and models to enhance CDS SOA as Linked Services that define a rich semantic layer based on machine-interpretable ontologies that power their interoperability and reuse. These ontologies provided unambiguous descriptions of CDS services' properties to expose them to the Web of Data. We developed models compliant with Linked Data principles to create a semantic representation of the components that compose CDS services. To evaluate our approach we implemented a set of CDS Linked Services using a Web service definition ontology. The definitions of Web services were linked to the models developed in order to attach unambiguous semantics to the service components. All models were bound to SNOMED-CT and public ontologies (e.g. Dublin Core) in order to provide a lingua franca for exploring them. Discovery and analysis of CDS services based on machine-interpretable models was performed by reasoning over the ontologies built. Linked Services can be used effectively to expose CDS services to the Web of Data by building on current CDS standards. This allows building shared Linked Knowledge Bases to provide machine

  5. THE IMAGE OF INVESTMENT AND FINANCIAL SERVICES COMPANIES IN WWW LANDSCAPE (WORLD WIDE WEB)

    Directory of Open Access Journals (Sweden)

    Iancu Ioana Ancuta

    2011-07-01

    Full Text Available In a world where the internet and its image are becoming more and more important, this study addresses the importance of the web sites of Investment and Financial Services Companies. Market competition creates the need for studies focused on assessing and analyzing the websites of companies active in this sector. Our study aims to answer several questions related to the web sites of Romanian Investment and Financial Services Companies through four dimensions: content, layout, handling, and interactivity. Which web sites are best, and from what point of view? Where should financial services companies direct their investments to differentiate themselves and their sites? In short, we want to rank the 58 Investment and Financial Services Companies' web sites based on 127 criteria. There are numerous methods for evaluating web pages. The evaluation methods are similar from the structural point of view, and the most popular are SERVQUAL, SITEQUAL, WebQual/eQual, eTailQ, EWAM, e-SERVQUAL, and WebQEM (Badulescu, 2008:58). In the paper "Assessment of Romanian Banks E-Image: A Marketing Perspective" (Catana, Catana and Constantinescu, 2006:4), the authors point out that there are at least four complex variables: accessibility, functionality, performance, and usability. Each of these can be decomposed into simple ones. We used the same method and examined, from the utility point of view, 58 web sites of Investment and Financial Services Companies based on 127 criteria, following a procedure developed by Institut fur ProfNet Internet Marketing, Munster (Germany). The data collection period was 1-30 September 2010. The results show that there are very large differences between corporate sites; their creators concentrate on the information required by law and on aesthetics, neglecting other aspects such as communication and online service. In the future we want to extend this study to the international level by applying the same methods of research in 5 countries from

  6. Provenance-Based Approaches to Semantic Web Service Discovery and Usage

    Science.gov (United States)

    Narock, Thomas William

    2012-01-01

    The World Wide Web Consortium defines a Web Service as "a software system designed to support interoperable machine-to-machine interaction over a network." Web Services have become increasingly important both within and across organizational boundaries. With the recent advent of the Semantic Web, web services have evolved into semantic…

  7. Europeana in Linked Open Data: Semantic Web Concepts in the Applied Dimension of the Digital Humanities

    Directory of Open Access Journals (Sweden)

    Caio Saraiva Coneglian

    2017-01-01

    Full Text Available http://dx.doi.org/10.5007/1518-2924.2017v22n48p88 The emergence of new technologies has introduced means of disseminating and making information available more efficiently. An initiative called Europeana has been promoting this adaptation of informational objects within the Web, and more specifically within Linked Data. This study therefore aims to present a discussion of the relationship between the Digital Humanities and Linked Open Data, as embodied by Europeana. To this end, we use an exploratory methodology that examines questions related to the Europeana data model, EDM, by means of SPARQL. As a result, we come to understand the characteristics of the EDM through the use of SPARQL. We also identify the importance of the concept of the Digital Humanities within the context of Europeana.
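The SPARQL exploration of the EDM mentioned in this record might look like the sketch below. edm:ProvidedCHO is a class of the Europeana Data Model, but the overall query shape, the title predicate, and any endpoint it would be sent to are assumptions for illustration.

```python
# A sketch of a SPARQL query over the Europeana Data Model (EDM),
# selecting provided cultural heritage objects and a title; the
# dc:title placement is an assumption, not the study's exact query.
EDM_QUERY = """
PREFIX edm: <http://www.europeana.eu/schemas/edm/>
PREFIX dc:  <http://purl.org/dc/elements/1.1/>
SELECT ?cho ?title
WHERE {
  ?cho a edm:ProvidedCHO ;
       dc:title ?title .
}
LIMIT 10
"""
print("edm:ProvidedCHO" in EDM_QUERY)  # → True
```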

  8. Critical Reading of the Web

    Science.gov (United States)

    Griffin, Teresa; Cohen, Deb

    2012-01-01

    The ubiquity and familiarity of the world wide web means that students regularly turn to it as a source of information. In doing so, they "are said to rely heavily on simple search engines, such as Google to find what they want." Researchers have also investigated how students use search engines, concluding that "the young web users tended to…

  9. WebNet 99 : proceedings of WebNet 99 - World Conference on the WWW and Internet, Honolulu, Hawaii, October 24-30, 1999

    NARCIS (Netherlands)

    De Bra, P.M.E.; Leggett, J.

    1999-01-01

    The 1999 WebNet conference addressed research, new developments, and experiences related to the Internet and World Wide Web. The 394 contributions of WebNet 99 contained in this proceedings comprise the full and short papers accepted for presentation at the conference. Major topics covered include:

  10. Simple rules describe bottom-up and top-down control in food webs with alternative energy pathways.

    Science.gov (United States)

    Wollrab, Sabine; Diehl, Sebastian; De Roos, André M

    2012-09-01

    Many human influences on the world's ecosystems have their largest direct impacts at either the top or the bottom of the food web. To predict their ecosystem-wide consequences we must understand how these impacts propagate. A long-standing, but so far elusive, problem in this endeavour is how to reduce food web complexity to a mathematically tractable, but empirically relevant system. Simplification to main energy channels linking primary producers to top consumers has been recently advocated. Following this approach, we propose a general framework for the analysis of bottom-up and top-down forcing of ecosystems by reducing food webs to two energy pathways originating from a limiting resource shared by competing guilds of primary producers (e.g. edible vs. defended plants). Exploring dynamical models of such webs we find that their equilibrium responses to nutrient enrichment and top consumer harvesting are determined by only two easily measurable topological properties: the lengths of the component food chains (odd-odd, odd-even, or even-even) and presence vs. absence of a generalist top consumer reconnecting the two pathways (yielding looped vs. branched webs). Many results generalise to other looped or branched web structures and the model can be easily adapted to include a detrital pathway. © 2012 Blackwell Publishing Ltd/CNRS.

  11. Maintenance-Ready Web Application Development

    Directory of Open Access Journals (Sweden)

    Ion IVAN

    2016-01-01

    Full Text Available The current paper tackles the subject of developing maintenance-ready web applications. Maintenance is presented as a core stage in a web application’s lifecycle. The concept of maintenance-ready is defined in the context of web application development. Web application maintenance task types are enunciated and suitable task types are identified for further analysis. The research hypothesis is formulated based on a direct link between tackling maintenance in the development stage and reducing overall maintenance costs. A live maintenance-ready web application is presented and maintenance-related aspects are highlighted. The web application’s features that render it maintenance-ready are emphasized. The costs of designing and building the web application to be maintenance-ready are disclosed. The savings in maintenance development effort facilitated by maintenance-ready features are also disclosed. Maintenance data are collected from 40 projects implemented by a web development company. Homogeneity and diversity of the collected data are evaluated. A data sample is presented, and the size and comprehensive nature of the entire dataset are depicted. Research hypotheses are validated and conclusions are formulated on the topic of developing maintenance-ready web applications. The limits of the research process which represented the basis for the current paper are enunciated. Future research topics are submitted for debate.

  12. Capturing Trust in Social Web Applications

    Science.gov (United States)

    O'Donovan, John

    The Social Web constitutes a shift in information flow from the traditional Web. Previously, content was provided by the owners of a website, for consumption by the end-user. Nowadays, these websites are being replaced by Social Web applications which are frameworks for the publication of user-provided content. Traditionally, Web content could be `trusted' to some extent based on the site it originated from. Algorithms such as Google's PageRank were (and still are) used to compute the importance of a website, based on analysis of underlying link topology. In the Social Web, analysis of link topology merely tells us about the importance of the information framework which hosts the content. Consumers of information still need to know about the importance/reliability of the content they are reading, and therefore about the reliability of the producers of that content. Research into trust and reputation of the producers of information in the Social Web is still very much in its infancy. Every day, people are forced to make trusting decisions about strangers on the Web based on a very limited amount of information. For example, purchasing a product from an eBay seller with a `reputation' of 99%, downloading a file from a peer-to-peer application such as Bit-Torrent, or allowing Amazon.com tell you what products you will like. Even something as simple as reading comments on a Web-blog requires the consumer to make a trusting decision about the quality of that information. In all of these example cases, and indeed throughout the Social Web, there is a pressing demand for increased information upon which we can make trusting decisions. This chapter examines the diversity of sources from which trust information can be harnessed within Social Web applications and discusses a high level classification of those sources. Three different techniques for harnessing and using trust from a range of sources are presented. These techniques are deployed in two sample Social Web
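    The link-topology analysis mentioned above can be illustrated with a minimal PageRank power iteration. This is a toy sketch only: the three-page graph and the standard damping factor of 0.85 are illustrative assumptions, and Google's production algorithm differs in many respects.

    ```python
    # Minimal PageRank power iteration over a toy link graph.
    # The graph and damping factor are illustrative assumptions.

    def pagerank(links, damping=0.85, iterations=50):
        """links: dict mapping each page to the list of pages it links to."""
        pages = list(links)
        n = len(pages)
        rank = {p: 1.0 / n for p in pages}
        for _ in range(iterations):
            # Each page keeps a (1 - damping) "teleport" share...
            new_rank = {p: (1.0 - damping) / n for p in pages}
            for page, outlinks in links.items():
                if not outlinks:
                    # ...and a dangling page spreads its rank evenly.
                    for p in pages:
                        new_rank[p] += damping * rank[page] / n
                else:
                    share = damping * rank[page] / len(outlinks)
                    for target in outlinks:
                        new_rank[target] += share
            rank = new_rank
        return rank

    # Page "c" is linked by both "a" and "b", so it accumulates the most rank.
    ranks = pagerank({"a": ["b", "c"], "b": ["c"], "c": ["a"]})
    ```

    Note that the iteration conserves total rank (it always sums to 1), which is why the scores can be read as a probability distribution over pages.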

  13. Tracing where and who provenance in Linked Data: A calculus

    OpenAIRE

    Dezani-Ciancaglini, Mariangiola; Horne, Ross; Sassone, Vladimiro

    2012-01-01

    Linked Data provides some sensible guidelines for publishing and consuming data on the Web. Data published on the Web has no inherent truth, yet its quality can often be assessed based on its provenance. This work introduces a new approach to provenance for Linked Data. The simplest notion of provenance-viz., a named graph indicating where the data is now-is extended with a richer provenance format. The format reflects the behaviour of processes interacting with Linked Data, tracing where the...

  14. World wide web for database of Japanese translation on international nuclear event scale reports

    International Nuclear Information System (INIS)

    Watanabe, Norio; Hirano, Masashi

    1999-01-01

    The International Nuclear Event Scale (INES) is a means designed for providing prompt, clear and consistent information related to nuclear events that occurred at nuclear facilities, and for facilitating communication between the nuclear community, the media and the public. The INES is jointly operated by the IAEA and the OECD-NEA. Nuclear events reported are rated by the Scale, a consistent safety significance indicator. The scale runs from level 0, for events with no safety significance, to level 7, for a major accident with widespread health and environmental effects. The Japan Atomic Energy Research Institute (JAERI) has been promptly translating the INES reports into Japanese and developing a world-wide-web database for the Japanese translation, aiming at more efficient utilization of the INES information inside Japan. The present paper briefly introduces the definitions of the INES rating levels and the scope of the Scale, and describes the outlines of the database (the information stored in the database, its functions and how to use it). As well, technical use of the INES reports and the availability/effectiveness of the database are discussed. (author)

  15. Core Web Sites of Universities of Islamic world Countries Capitals

    Directory of Open Access Journals (Sweden)

    Farshid Danesh

    2012-07-01

    Full Text Available In order to serve Islamic researchers, providing a web site is inevitable for Islamic universities, which are in transition from the real world to the virtual one. Today, almost all major universities in the Islamic community have websites, but it is not clear to what extent these universities have been successful in realizing their mission of information dissemination. The aim of this paper was to determine the core web sites and evaluate the effectiveness, ranking and collaboration rate among these websites. The formulas of core website determination, co-link and in-link analysis and revised web impact factor were used beside cluster and multidimensional analysis methods in this study. Results showed that the "King Saud University" website in Saudi Arabia had the highest visibility and was the most authoritative among all university websites. Also, co-link analysis showed that major Islamic university websites collaborated in 12 clusters based on clustering analysis and in 11 clusters based on multidimensional analysis, where two of them (Iran and Turkey) were national clusters in the cluster analysis method. The results indicated that web designers in these universities must identify how to attract links and web traffic in order to promote the quality and content of their websites. However, the ultimate success of a website depends upon factors such as quality, size, language, and approximate age, and is not limited to one or two factors.

  16. WebCN: A web-based computation tool for in situ-produced cosmogenic nuclides

    Energy Technology Data Exchange (ETDEWEB)

    Ma Xiuzeng [Department of Physics, Purdue University, West Lafayette, IN 47907 (United States)]. E-mail: hongju@purdue.edu; Li Yingkui [Department of Geography, University of Missouri-Columbia, Columbia, MO 65211 (United States); Bourgeois, Mike [Department of Physics, Purdue University, West Lafayette, IN 47907 (United States); Caffee, Marc [Department of Physics, Purdue University, West Lafayette, IN 47907 (United States); Elmore, David [Department of Physics, Purdue University, West Lafayette, IN 47907 (United States); Granger, Darryl [Department of Earth and Atmospheric Sciences, Purdue University, West Lafayette, IN 47907 (United States); Muzikar, Paul [Department of Physics, Purdue University, West Lafayette, IN 47907 (United States); Smith, Preston [Department of Physics, Purdue University, West Lafayette, IN 47907 (United States)

    2007-06-15

    Cosmogenic nuclide techniques are increasingly being utilized in geoscience research. For this it is critical to establish an effective, easily accessible and well defined tool for cosmogenic nuclide computations. We have been developing a web-based tool (WebCN) to calculate surface exposure ages and erosion rates based on the nuclide concentrations measured by accelerator mass spectrometry. WebCN for {sup 10}Be and {sup 26}Al has been finished and published at http://www.physics.purdue.edu/primelab/for_users/rockage.html. WebCN for {sup 36}Cl is under construction. WebCN is designed as a three-tier client/server model and uses the open source PostgreSQL for database management and PHP for the interface design and calculations. On the client side, an internet browser and Microsoft Access are used as application interfaces to access the system. Open Database Connectivity is used to link PostgreSQL and Microsoft Access. WebCN accounts for both spatial and temporal distributions of the cosmic ray flux to calculate the production rates of in situ-produced cosmogenic nuclides at the Earth's surface.
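    The core computation behind such a tool is solving the simple-exposure equation N = (P/λ)(1 − e^(−λt)) for the exposure age t. The sketch below is a minimal, erosion-free illustration with assumed inputs (the decay constant is a standard value for {sup 10}Be; the production rate is a placeholder); WebCN itself additionally models the spatial and temporal variation of the cosmic-ray flux.

    ```python
    import math

    # Solve the simple (no-erosion) exposure equation
    #   N = (P / lam) * (1 - exp(-lam * t))
    # for exposure age t. Values below are illustrative, not WebCN's
    # calibrated inputs.

    LAMBDA_BE10 = math.log(2) / 1.387e6  # 10Be decay constant (1/yr), T1/2 ~ 1.387 Myr

    def exposure_age(concentration, production_rate, decay_const=LAMBDA_BE10):
        """concentration in atoms/g, production_rate in atoms/g/yr."""
        ratio = concentration * decay_const / production_rate
        if ratio >= 1.0:
            # Concentration at secular equilibrium: age is unbounded.
            raise ValueError("concentration at or above saturation")
        return -math.log(1.0 - ratio) / decay_const

    # e.g. 50,000 atoms/g at an assumed local production rate of 5 atoms/g/yr
    age = exposure_age(5.0e4, 5.0)  # roughly a 10 kyr exposure age
    ```

    For young surfaces (λt ≪ 1) the equation reduces to the familiar linear estimate t ≈ N/P, which is a useful sanity check on the output.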

  17. Specification of application logic in web information systems

    NARCIS (Netherlands)

    Barna, P.

    2007-01-01

    The importance of the World Wide Web has grown tremendously over the past decade (or decade and a half). With a quickly growing amount of information published on the Web and its rapidly growing audience, requirements put on Web-based Information Systems (WIS), their developers and maintainers have

  18. Web document engineering

    International Nuclear Information System (INIS)

    White, B.

    1996-05-01

    This tutorial provides an overview of several document engineering techniques which are applicable to the authoring of World Wide Web documents. It illustrates how pre-WWW hypertext research is applicable to the development of WWW information resources

  19. Overview of the TREC 2014 Federated Web Search Track

    OpenAIRE

    Demeester, Thomas; Trieschnigg, Rudolf Berend; Nguyen, Dong-Phuong; Zhou, Ke; Hiemstra, Djoerd

    2014-01-01

    The TREC Federated Web Search track facilitates research in topics related to federated web search, by providing a large realistic data collection sampled from a multitude of online search engines. The FedWeb 2013 challenges of Resource Selection and Results Merging challenges are again included in FedWeb 2014, and we additionally introduced the task of vertical selection. Other new aspects are the required link between the Resource Selection and Results Merging, and the importance of diversi...

  20. The Rise and Fall of Text on the Web: A Quantitative Study of Web Archives

    Science.gov (United States)

    Cocciolo, Anthony

    2015-01-01

    Introduction: This study addresses the following research question: is the use of text on the World Wide Web declining? If so, when did it start declining, and by how much has it declined? Method: Web pages are downloaded from the Internet Archive for the years 1999, 2002, 2005, 2008, 2011 and 2014, producing 600 captures of 100 prominent and…

  1. An Algebraic Specification of the Semantic Web

    OpenAIRE

    Ksystra, Katerina; Triantafyllou, Nikolaos; Stefaneas, Petros; Frangos, Panayiotis

    2011-01-01

    We present a formal specification of the Semantic Web, as an extension of the World Wide Web using the well known algebraic specification language CafeOBJ. Our approach allows the description of the key elements of the Semantic Web technologies, in order to give a better understanding of the system, without getting involved with their implementation details that might not yet be standardized. This specification is part of our work in progress concerning the modeling the Social Semantic Web.

  2. Online Access to Weather Satellite Imagery Through the World Wide Web

    Science.gov (United States)

    Emery, W.; Baldwin, D.

    1998-01-01

    Both global area coverage (GAC) and high-resolution picture transmission (HRPT) data from the Advanced Very High Resolution Radiometer (AVHRR) are made available to Internet users through an online data access system. Older GOES-7 data are also available. Created as a "testbed" data system for NASA's future Earth Observing System Data and Information System (EOSDIS), this testbed provides an opportunity to test both the technical requirements of an online data system and the different ways in which the general user community would employ such a system. Initiated in December 1991, the basic data system experienced five major evolutionary changes in response to user requests and requirements. Features added with these changes were the addition of online browse, user subsetting, dynamic image processing/navigation, a stand-alone data storage system, and movement from an X-windows graphical user interface (GUI) to a World Wide Web (WWW) interface. Over its lifetime, the system has had as many as 2500 registered users. The system on the WWW has had over 2500 hits since October 1995. Many of these hits are by casual users that only take the GIF images directly from the interface screens and do not specifically order digital data. Still, there is a consistent stream of users ordering the navigated image data and related products (maps and so forth). We have recently added a real-time, seven-day, northwestern United States normalized difference vegetation index (NDVI) composite that has generated considerable interest. Index terms: data system, earth science, online access, satellite data.

  3. Histories of Public Service Broadcasters on the Web

    DEFF Research Database (Denmark)

    This edited volume details multiple and dynamic histories of relations between public service broadcasters and the World Wide Web. What does it mean to be a national broadcaster in a global communications environment? What are the commercial and public service pressures that were brought to bear...... when public service broadcasters implemented web services? How did “one- to-many” broadcasters adapt to the “many-to-many” medium of the internet? The thematic organisation of this collection addresses such major issues, while each chapter offers a particular historical account of relations between...... public service broadcasters and the World Wide Web....

  4. Soil-Web: An online soil survey for California, Arizona, and Nevada

    Science.gov (United States)

    Beaudette, D. E.; O'Geen, A. T.

    2009-10-01

    Digital soil survey products represent one of the largest and most comprehensive inventories of soils information currently available. The complex structure of these databases, intensive use of codes and scientific jargon make it difficult for non-specialists to utilize digital soil survey resources. A project was initiated to construct a web-based interface to digital soil survey products (STATSGO and SSURGO) for California, Arizona, and Nevada that would be accessible to the general public. A collection of mature, open source applications (including Mapserver, PostGIS and Apache Web Server) were used as a framework to support data storage, querying, map composition, data presentation, and contextual links to related materials. Application logic was written in the PHP language to "glue" together the many components of an online soil survey. A comprehensive website ( http://casoilresource.lawr.ucdavis.edu/map) was created to facilitate access to digital soil survey databases through several interfaces including: interactive map, Google Earth and HTTP-based application programming interface (API). Each soil polygon is linked to a map unit summary page, which includes links to soil component summary pages. The most commonly used soil properties, land interpretations and ratings are presented. Graphical and tabular summaries of soil profile information are dynamically created, and aid with rapid assessment of key soil properties. Quick links to official series descriptions (OSD) and other such information are presented. All terminology is linked back to the USDA-NRCS Soil Survey Handbook which contains extended definitions. The Google Earth interface to Soil-Web can be used to explore soils information in three dimensions. A flexible web API was implemented to allow advanced users of soils information to access our website via simple web page requests. Soil-Web has been successfully used in soil science curriculum, outreach activities, and current research projects

  5. What lies beneath? : Linking litter and canopy food webs to protect ornamental crops

    NARCIS (Netherlands)

    Muñoz Cárdenas, K.A.

    2017-01-01

    The main research question of this thesis was how interactions between above-ground and below-ground food webs affect biological control. Arthropod food webs associated with plants are commonly composed of several species of herbivores, the detritivore community, specialist and generalist predators

  6. Publishing high-quality climate data on the semantic web

    Science.gov (United States)

    Woolf, Andrew; Haller, Armin; Lefort, Laurent; Taylor, Kerry

    2013-04-01

    The effort over more than a decade to establish the semantic web [Berners-Lee et. al., 2001] has received a major boost in recent years through the Open Government movement. Governments around the world are seeking technical solutions to enable more open and transparent access to Public Sector Information (PSI) they hold. Existing technical protocols and data standards tend to be domain specific, and so limit the ability to publish and integrate data across domains (health, environment, statistics, education, etc.). The web provides a domain-neutral platform for information publishing, and has proven itself beyond expectations for publishing and linking human-readable electronic documents. Extending the web pattern to data (often called Web 3.0) offers enormous potential. The semantic web applies the basic web principles to data [Berners-Lee, 2006]: using URIs as identifiers (for data objects and real-world 'things', instead of documents); making the URIs actionable by providing useful information via HTTP; using a common exchange standard (serialised RDF for data instead of HTML for documents); and establishing typed links between information objects to enable linking and integration. Leading examples of 'linked data' for publishing PSI may be found in both the UK (http://data.gov.uk/linked-data) and US (http://www.data.gov/page/semantic-web). The Bureau of Meteorology (BoM) is Australia's national meteorological agency, and has a new mandate to establish a national environmental information infrastructure (under the National Plan for Environmental Information, NPEI [BoM, 2012a]). While the initial approach is based on the existing best practice Spatial Data Infrastructure (SDI) architecture, linked-data is being explored as a technological alternative that shows great promise for the future. We report here the first trial of government linked-data in Australia under data.gov.au. In this initial pilot study, we have taken BoM's new high-quality reference surface
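    The linked-data principles described above (URIs as identifiers, dereferenceable via HTTP, data as triples, typed links between resources) can be sketched with a tiny in-memory triple store. All URIs below are hypothetical placeholders, not real endpoints; a real deployment would serve this information as RDF over HTTP.

    ```python
    # Toy illustration of linked-data triples and typed-link traversal.
    # All URIs are hypothetical placeholders.

    triples = [
        ("http://example.org/station/1", "rdf:type", "ex:WeatherStation"),
        ("http://example.org/station/1", "ex:observes", "http://example.org/obs/42"),
        ("http://example.org/obs/42", "ex:airTemperature", "21.3"),
    ]

    def describe(subject, store):
        """All (predicate, object) pairs for a subject URI -- the kind of
        answer an HTTP dereference of that URI should return."""
        return [(p, o) for s, p, o in store if s == subject]

    def follow(subject, predicate, store):
        """Traverse a typed link from subject via predicate."""
        return [o for s, p, o in store if s == subject and p == predicate]

    # Follow the typed link from the station to its observation,
    # then read a property off the linked resource.
    obs = follow("http://example.org/station/1", "ex:observes", triples)
    temps = [follow(o, "ex:airTemperature", triples) for o in obs]
    ```

    Because the object of one triple is the subject of another, integration across datasets reduces to following links, which is the property that makes openly published government data composable.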

  7. An Empirical Comparison of Navigation Effect of Pull-Down Menu Style on The World Wide Web.

    Science.gov (United States)

    Yu, Byeong-Min; Han, Sungwook

    Effective navigation is becoming more and more critical to the success of electronic commerce (E-commerce). It remains a challenge for educational technologists and Web designers to develop Web systems that can help customers find products or services without experiencing disorientation problems and cognitive overload. Many E-commerce Web sites…

  8. Web Sitings.

    Science.gov (United States)

    Lo, Erika

    2001-01-01

    Presents seven mathematics games, located on the World Wide Web, for elementary students, including: Absurd Math: Pre-Algebra from Another Dimension; The Little Animals Activity Centre; MathDork Game Room (classic video games focusing on algebra); Lemonade Stand (students practice math and business skills); Math Cats (teaches the artistic beauty…

  9. Deep iCrawl: An Intelligent Vision-Based Deep Web Crawler

    OpenAIRE

    R.Anita; V.Ganga Bharani; N.Nityanandam; Pradeep Kumar Sahoo

    2011-01-01

    The explosive growth of World Wide Web has posed a challenging problem in extracting relevant data. Traditional web crawlers focus only on the surface web while the deep web keeps expanding behind the scene. Deep web pages are created dynamically as a result of queries posed to specific web databases. The structure of the deep web pages makes it impossible for traditional web crawlers to access deep web contents. This paper, Deep iCrawl, gives a novel and vision-based app...

  10. The emergent discipline of health web science.

    Science.gov (United States)

    Luciano, Joanne S; Cumming, Grant P; Wilkinson, Mark D; Kahana, Eva

    2013-08-22

    The transformative power of the Internet on all aspects of daily life, including health care, has been widely recognized both in the scientific literature and in public discourse. Viewed through the various lenses of diverse academic disciplines, these transformations reveal opportunities realized, the promise of future advances, and even potential problems created by the penetration of the World Wide Web for both individuals and for society at large. Discussions about the clinical and health research implications of the widespread adoption of information technologies, including the Internet, have been subsumed under the disciplinary label of Medicine 2.0. More recently, however, multi-disciplinary research has emerged that is focused on the achievement and promise of the Web itself, as it relates to healthcare issues. In this paper, we explore and interrogate the contributions of the burgeoning field of Web Science in relation to health maintenance, health care, and health policy. From this, we introduce Health Web Science as a subdiscipline of Web Science, distinct from but overlapping with Medicine 2.0. This paper builds on the presentations and subsequent interdisciplinary dialogue that developed among Web-oriented investigators present at the 2012 Medicine 2.0 Conference in Boston, Massachusetts.

  11. Understanding the Web from an Economic Perspective: The Evolution of Business Models and the Web

    Directory of Open Access Journals (Sweden)

    Louis Rinfret

    2014-08-01

    Full Text Available The advent of the World Wide Web is arguably amongst the most important changes that have occurred since the 1990s in the business landscape. It has fueled the rise of new industries, supported the convergence and reshaping of existing ones and enabled the development of new business models. During this time the web has evolved tremendously from a relatively static page-display tool to a massive network of user-generated content, collective intelligence, applications and hypermedia. As technical standards continue to evolve, business models catch up to the new capabilities. New ways of creating value, distributing it and profiting from it emerge more rapidly than ever. In this paper we explore how the World Wide Web and business models evolve and we identify avenues for future research in light of the web's ever-evolving nature and its influence on business models.

  12. Evaluation of the content and accessibility of web sites for accredited orthopaedic sports medicine fellowships.

    Science.gov (United States)

    Mulcahey, Mary K; Gosselin, Michelle M; Fadale, Paul D

    2013-06-19

    The Internet is a common source of information for orthopaedic residents applying for sports medicine fellowships, with the web sites of the American Orthopaedic Society for Sports Medicine (AOSSM) and the San Francisco Match serving as central databases. We sought to evaluate the web sites for accredited orthopaedic sports medicine fellowships with regard to content and accessibility. We reviewed the existing web sites of the ninety-five accredited orthopaedic sports medicine fellowships included in the AOSSM and San Francisco Match databases from February to March 2012. A Google search was performed to determine the overall accessibility of program web sites and to supplement information obtained from the AOSSM and San Francisco Match web sites. The study sample consisted of the eighty-seven programs whose web sites connected to information about the fellowship. Each web site was evaluated for its informational value. Of the ninety-five programs, fifty-one (54%) had links listed in the AOSSM database. Three (3%) of all accredited programs had web sites that were linked directly to information about the fellowship. Eighty-eight (93%) had links listed in the San Francisco Match database; however, only five (5%) had links that connected directly to information about the fellowship. Of the eighty-seven programs analyzed in our study, all eighty-seven web sites (100%) provided a description of the program and seventy-six web sites (87%) included information about the application process. Twenty-one web sites (24%) included a list of current fellows. Fifty-six web sites (64%) described the didactic instruction, seventy (80%) described team coverage responsibilities, forty-seven (54%) included a description of cases routinely performed by fellows, forty-one (47%) described the role of the fellow in seeing patients in the office, eleven (13%) included call responsibilities, and seventeen (20%) described a rotation schedule. 
Two Google searches identified direct links for

  13. Models and methods for building web recommendation systems

    OpenAIRE

    Stekh, Yu.; Artsibasov, V.

    2012-01-01

    The modern World Wide Web contains a large number of Web sites and pages within each Web site. Web recommendation systems (recommendation systems for web pages) are typically implemented on web servers and use data obtained from collections of viewed web templates (implicit data) or user registration data (explicit data). This article considers methods and algorithms for web recommendation systems based on data mining technology (web mining).

  14. Safety and Efficacy of Aneurysm Treatment with the WEB

    DEFF Research Database (Denmark)

    Pierot, L; Gubucz, I; Buhk, J H

    2017-01-01

    BACKGROUND AND PURPOSE: Flow disruption with the Woven EndoBridge (WEB) device is an innovative technique for the endovascular treatment of wide-neck bifurcation aneurysms. The initial version of the device (WEB Double-Layer) was evaluated in the WEB Clinical Assessment of IntraSaccular Aneurysm ...

  15. Information Waste on the World Wide Web and Combating the Clutter

    NARCIS (Netherlands)

    Amrit, Chintan Amrit; Wijnhoven, Alphonsus B.J.M.; Beckers, David

    2015-01-01

    The Internet has become a critical part of the infrastructure supporting modern life. The high degree of openness and autonomy of information providers determines the access to a vast amount of information on the Internet. However, this makes the web vulnerable to inaccurate, misleading, or outdated

  16. Fast 3D Net Expeditions: Tools for Effective Scientific Collaboration on the World Wide Web

    Science.gov (United States)

    Watson, Val; Chancellor, Marisa K. (Technical Monitor)

    1996-01-01

    Two new technologies, the FASTexpedition and Remote FAST, have been developed that provide remote, 3D (three dimensional), high resolution, dynamic, interactive viewing of scientific data. The FASTexpedition permits one to access scientific data from the World Wide Web, take guided expeditions through the data, and continue with self-controlled expeditions through the data. Remote FAST permits collaborators at remote sites to simultaneously view an analysis of scientific data being controlled by one of the collaborators. Control can be transferred between sites. These technologies are now being used for remote collaboration in joint university, industry, and NASA projects. Also, NASA Ames Research Center has initiated a project to make scientific data and guided expeditions through the data available as FASTexpeditions on the World Wide Web for educational purposes. Previously, remote visualization of dynamic data was done using video format (transmitting pixel information) such as video conferencing or MPEG (Motion Picture Expert Group) movies on the Internet. The concept for this new technology is to send the raw data (e.g., grids, vectors, and scalars) along with viewing scripts over the Internet and have the pixels generated by a visualization tool running on the viewer's local workstation. The visualization tool that is currently used is FAST (Flow Analysis Software Toolkit). The advantages of this new technology over using video format are: (1) The visual is much higher in resolution (1280x1024 pixels with 24 bits of color) than typical video format transmitted over the network. (2) The form of the visualization can be controlled interactively (because the viewer is interactively controlling the visualization tool running on his workstation). (3) A rich variety of guided expeditions through the data can be included easily. (4) A capability is provided for other sites to see a visual analysis of one site as the analysis is interactively performed. Control of

  17. A Typology for Web 2.0

    DEFF Research Database (Denmark)

    Dalsgaard, Christian; Sorensen, Elsebeth Korsgaard

    2008-01-01

    of a learning environment: 1) organizing communicative processes and 2) organizing resources. Organizing communicative processes is supported by Web 2.0’s ability to provide a range of communicative tools that can be organized flexibly by students. Web 2.0 provides opportunities for communities and groups...... to organize their own communicative processes. Further, Web 2.0 supports organization of resources by empowering students to create, construct, manage and share content themselves. However, the main potential lies within collaborative creation and sharing in networks. Potentially, networking tools......Web 2.0 is a term used to describe recent developments on the World Wide Web. The term is often used to describe the increased use of the web for user-generated content, collaboration, and social networking. However, Web 2.0 is a weakly defined concept, and it is unclear exactly what kind...

  18. Web corpus construction

    CERN Document Server

    Schafer, Roland

    2013-01-01

    The World Wide Web constitutes the largest existing source of texts written in a great variety of languages. A feasible and sound way of exploiting this data for linguistic research is to compile a static corpus for a given language. There are several advantages of this approach: (i) Working with such corpora obviates the problems encountered when using Internet search engines in quantitative linguistic research (such as non-transparent ranking algorithms). (ii) Creating a corpus from web data is virtually free. (iii) The size of corpora compiled from the WWW may exceed by several orders of magnitudes the size of language resources offered elsewhere. (iv) The data is locally available to the user, and it can be linguistically post-processed and queried with the tools preferred by her/him. This book addresses the main practical tasks in the creation of web corpora up to giga-token size. Among these tasks are the sampling process (i.e., web crawling) and the usual cleanups including boilerplate removal and rem...

  19. Automatically exposing OpenLifeData via SADI semantic Web Services.

    Science.gov (United States)

    González, Alejandro Rodríguez; Callahan, Alison; Cruz-Toledo, José; Garcia, Adrian; Egaña Aranguren, Mikel; Dumontier, Michel; Wilkinson, Mark D

    2014-01-01

    Two distinct trends are emerging with respect to how data is shared, collected, and analyzed within the bioinformatics community. First, Linked Data, exposed as SPARQL endpoints, promises to make data easier to collect and integrate by moving towards the harmonization of data syntax, descriptive vocabularies, and identifiers, as well as providing a standardized mechanism for data access. Second, Web Services, often linked together into workflows, normalize data access and create transparent, reproducible scientific methodologies that can, in principle, be re-used and customized to suit new scientific questions. Constructing queries that traverse semantically-rich Linked Data requires substantial expertise, yet traditional RESTful or SOAP Web Services cannot adequately describe the content of a SPARQL endpoint. We propose that content-driven Semantic Web Services can enable facile discovery of Linked Data, independent of their location. We use a well-curated Linked Dataset - OpenLifeData - and utilize its descriptive metadata to automatically configure a series of more than 22,000 Semantic Web Services that expose all of its content via the SADI set of design principles. The OpenLifeData SADI services are discoverable via queries to the SHARE registry and easy to integrate into new or existing bioinformatics workflows and analytical pipelines. We demonstrate the utility of this system through comparison of Web Service-mediated data access with traditional SPARQL, and note that this approach not only simplifies data retrieval, but simultaneously provides protection against resource-intensive queries. We show, through a variety of different clients and examples of varying complexity, that data from the myriad OpenLifeData can be recovered without any need for prior-knowledge of the content or structure of the SPARQL endpoints. We also demonstrate that, via clients such as SHARE, the complexity of federated SPARQL queries is dramatically reduced.

  20. Primer on client-side web security

    CERN Document Server

    De Ryck, Philippe; Piessens, Frank; Johns, Martin

    2014-01-01

    This volume illustrates the continuous arms race between attackers and defenders of the Web ecosystem by discussing a wide variety of attacks. In the first part of the book, the foundation of the Web ecosystem is briefly recapped and discussed. Based on this model, the assets of the Web ecosystem are identified, and the set of capabilities an attacker may have are enumerated. In the second part, an overview of the web security vulnerability landscape is constructed. Included are selections of the most representative attack techniques reported in great detail. In addition to descriptions of the

  1. Where the Semantic Web and Web 2.0 Meet Format Risk Management: P2 Registry

    Directory of Open Access Journals (Sweden)

    David Tarrant

    2011-03-01

    Full Text Available The Web is increasingly becoming a platform for linked data. This means making connections and adding value to data on the Web. As more data becomes openly available and more people are able to use the data, it becomes more powerful. An example is file format registries and the evaluation of format risks. Here the requirement for information is now greater than the effort that any single institution can put into gathering and collating this information. Recognising that more is better, the creators of PRONOM, JHOVE, GDFR and others are joining to lead a new initiative: the Unified Digital Format Registry. Ahead of this effort, a new RDF-based framework for structuring and facilitating file format data from multiple sources, including PRONOM, has demonstrated it is able to produce more links, and thus provide more answers to digital preservation questions - about format risks, applications, viewers and transformations - than the native data alone. This paper will describe this registry, P2, and its services, show how it can be used, and provide examples where it delivers more answers than the contributing resources. The P2 Registry is a reference platform to allow and encourage publication of preservation data, and also an exemplar of what can be achieved if more data is published openly online as simple machine-readable documents. This approach calls for the active participation of the digital preservation community to contribute data by simply publishing it openly on the Web as linked data.

  2. The effect of new links on Google PageRank

    NARCIS (Netherlands)

    Avrachenkov, Konstatin; Litvak, Nelli

    2004-01-01

    PageRank is one of the principal criteria according to which Google ranks Web pages. PageRank can be interpreted as a frequency of visiting a Web page by a random surfer and thus it reflects the popularity of a Web page. We study the effect of newly created links on Google PageRank. We discuss to
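    The effect the record studies can be reproduced numerically with a minimal power-iteration PageRank on a toy graph. The graph and the damping factor 0.85 are illustrative choices, not values from the paper.

```python
# Minimal PageRank by power iteration, to illustrate how a newly
# created link changes a page's score.
def pagerank(links, d=0.85, iters=100):
    """links: dict node -> list of nodes it links to."""
    nodes = list(links)
    n = len(nodes)
    r = {v: 1.0 / n for v in nodes}
    for _ in range(iters):
        nxt = {v: (1 - d) / n for v in nodes}
        for v, outs in links.items():
            if outs:
                share = d * r[v] / len(outs)
                for w in outs:
                    nxt[w] += share
            else:  # dangling node: spread its rank uniformly
                for w in nodes:
                    nxt[w] += d * r[v] / n
        r = nxt
    return r

g = {"A": ["B"], "B": ["C"], "C": ["A"], "D": ["A"]}
before = pagerank(g)["C"]
g["D"] = ["A", "C"]          # D creates a new outgoing link to C
after = pagerank(g)["C"]
print(after > before)        # the new inbound link raises C's PageRank
```

    Note that the new link also dilutes D's contribution to A, so the scores of all pages shift, which is exactly why the effect of new links merits analysis.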

  3. Drugs + HIV, Learn the Link

    Medline Plus

    Full Text Available ... your Flickr, Pinterest, Instagram or other visually interesting page using pictures from NIDA ... Link campaign uses TV, print, and Web public service announcements (PSAs), as well as posters, ...

  4. Semantic web technologies for enterprise 2.0

    CERN Document Server

    Passant, A

    2010-01-01

    In this book, we detail different theories, methods and implementations combining Web 2.0 paradigms and Semantic Web technologies in Enterprise environments. After introducing those terms, we present the current shortcomings of tools such as blogs and wikis as well as tagging practices in an Enterprise 2.0 context. We define the SemSLATES methodology and the global vision of a middleware architecture based on Semantic Web technologies and Linked Data principles (languages, models, tools and protocols) to solve these issues. Then, we detail the various ontologies that we build to achieve this g

  5. Conducting Web-based Surveys.

    OpenAIRE

    David J. Solomon

    2001-01-01

    Web-based surveying is becoming widely used in social science and educational research. The Web offers significant advantages over more traditional survey techniques; however, there are still serious methodological challenges with using this approach. Currently, coverage bias (the fact that significant numbers of people do not have access to, or choose not to use, the Internet) is of most concern to researchers. Survey researchers also have much to learn concerning the most effective ways to conduct s...

  6. Preservation of the Digital Culture: Archiving the World Wide Web / Sayısal (Dijital) Kültürün Korunması: Web Arşivleme

    Directory of Open Access Journals (Sweden)

    Ahmet Aldemir

    2006-09-01

    Full Text Available Information growth in the web medium has made it necessary to archive this information so that it can be transmitted to future generations. Web archiving is a versatile application which covers technical, legal and organizational dimensions. Every stage within the life cycle of digital information is critically important for information in the web environment. All over the world, many countries have started web archiving efforts under the leadership of their national libraries and have attempted to place these initiatives on a legal basis. In the light of these developments, this paper examines the necessity of web archiving and its major techniques, and it also discusses national and international web archiving projects.

  7. Web interface for plasma analysis codes

    Energy Technology Data Exchange (ETDEWEB)

    Emoto, M. [National Institute for Fusion Science, 322-6 Oroshi, Toki, Gifu 509-5292 (Japan)], E-mail: emo@nifs.ac.jp; Murakami, S. [Kyoto University, Yoshida-Honmachi, Sakyo-ku, Kyoto 606-8501 (Japan); Yoshida, M.; Funaba, H.; Nagayama, Y. [National Institute for Fusion Science, 322-6 Oroshi, Toki, Gifu 509-5292 (Japan)

    2008-04-15

    There are many analysis codes that analyze various aspects of plasma physics. However, most of them are FORTRAN programs written to be run on supercomputers. On the other hand, many scientists use GUI (graphical user interface)-based operating systems. For those who are not familiar with supercomputers, running analysis codes on them is a difficult task, and they often hesitate to use these programs to substantiate their ideas. Furthermore, these analysis codes are written for personal use, and the programmers do not expect them to be run by other users. In order to make these programs widely usable, the authors developed user-friendly Web interfaces. Since the Web browser is one of the most common applications, it is convenient for both users and developers. To realize an interactive Web interface, the AJAX technique is widely used, and the authors adopted it as well. In building such an AJAX-based Web system, Ruby on Rails plays an important role. Since this application framework, written in Ruby, abstracts the Web interfaces necessary to implement AJAX and database functions, it enables programmers to develop Web-based applications efficiently. In this paper, the authors introduce the system and demonstrate the usefulness of this approach.

  8. Web interface for plasma analysis codes

    International Nuclear Information System (INIS)

    Emoto, M.; Murakami, S.; Yoshida, M.; Funaba, H.; Nagayama, Y.

    2008-01-01

    There are many analysis codes that analyze various aspects of plasma physics. However, most of them are FORTRAN programs written to be run on supercomputers. On the other hand, many scientists use GUI (graphical user interface)-based operating systems. For those who are not familiar with supercomputers, running analysis codes on them is a difficult task, and they often hesitate to use these programs to substantiate their ideas. Furthermore, these analysis codes are written for personal use, and the programmers do not expect them to be run by other users. In order to make these programs widely usable, the authors developed user-friendly Web interfaces. Since the Web browser is one of the most common applications, it is convenient for both users and developers. To realize an interactive Web interface, the AJAX technique is widely used, and the authors adopted it as well. In building such an AJAX-based Web system, Ruby on Rails plays an important role. Since this application framework, written in Ruby, abstracts the Web interfaces necessary to implement AJAX and database functions, it enables programmers to develop Web-based applications efficiently. In this paper, the authors introduce the system and demonstrate the usefulness of this approach.

  9. Web malware spread modelling and optimal control strategies

    Science.gov (United States)

    Liu, Wanping; Zhong, Shouming

    2017-02-01

    The popularity of the Web has also fueled the growth of web threats. Formulating mathematical models for the accurate prediction of malicious propagation over networks is of great importance. The aim of this paper is to understand the propagation mechanisms of web malware and the impact of human intervention on the spread of malicious hyperlinks. Considering the characteristics of web malware, a new differential epidemic model, which extends the traditional SIR model by adding a delitescent (latent) compartment, is proposed to address the spreading behavior of malicious links over networks. The spreading threshold of the model system is calculated, and the dynamics of the model are theoretically analyzed. Moreover, optimal control theory is employed to study malware immunization strategies, aiming to keep the total economic loss from security investment and infection as low as possible. The existence and uniqueness of the results concerning the optimality system are confirmed. Finally, numerical simulations show that the spread of malware links can be controlled effectively with a proper control strategy and specific parameter choices.
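    A compartmental model of this shape - SIR plus a latent stage, with an immunization control - is easy to simulate numerically. The equations and parameter values below are illustrative assumptions for demonstration, not the paper's exact system.

```python
# Sketch of an SIR-type model with an extra latent ("delitescent")
# compartment and an immunization control rate u, integrated with
# forward Euler steps. Parameters are illustrative, not from the paper.
def simulate(beta=0.4, sigma=0.2, gamma=0.1, u=0.0, days=200, dt=0.1):
    """S: susceptible, L: latent (link received, not yet active),
    I: infected, R: recovered/immunized; u is the immunization rate."""
    S, L, I, R = 0.99, 0.0, 0.01, 0.0
    peak = I
    for _ in range(int(days / dt)):
        new_exposed = beta * S * I
        dS = -new_exposed - u * S
        dL = new_exposed - sigma * L
        dI = sigma * L - gamma * I
        dR = gamma * I + u * S
        S += dS * dt; L += dL * dt; I += dI * dt; R += dR * dt
        peak = max(peak, I)
    return S, L, I, R, peak

*_, peak_free = simulate(u=0.0)
*_, peak_ctrl = simulate(u=0.05)   # modest immunization effort
print(peak_ctrl < peak_free)       # control lowers the epidemic peak
```

    The derivatives sum to zero, so the total population is conserved, and raising the control rate u trades security investment for a lower infection peak - the balance the optimal control analysis formalizes.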

  10. BOWS (bioinformatics open web services) to centralize bioinformatics tools in web services.

    Science.gov (United States)

    Velloso, Henrique; Vialle, Ricardo A; Ortega, J Miguel

    2015-06-02

    Bioinformaticians face a range of difficulties in getting locally installed tools running and producing results; they would greatly benefit from a system that could centralize most of these tools behind an easy interface for input and output. Web services, due to their universal nature and widely known interface, are a very good option for achieving this goal. Bioinformatics open web services (BOWS) is a system based on generic web services built to allow programmatic access to applications running on high-performance computing (HPC) clusters. BOWS mediates access to registered tools by providing front-end and back-end web services. Programmers can install applications on HPC clusters in any programming language and use the back-end service to check for new jobs and their parameters, and then to send the results to BOWS. Programs running on ordinary computers consume the BOWS front-end service to submit new processes and read results. BOWS compiles Java clients, which encapsulate the front-end web service requests, and automatically creates a web page that displays the registered applications and clients. BOWS-registered applications can be accessed from virtually any programming language through web services, or using the standard Java clients. The back-end can run on HPC clusters, allowing bioinformaticians to remotely run high-processing-demand applications directly from their machines.
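    The front-end/back-end split described above can be sketched as a minimal in-memory job store: the front-end accepts submissions, the back-end worker polls for pending jobs and posts results. Real BOWS does this over web services; the function names here are invented for illustration.

```python
# Minimal in-memory sketch of the BOWS job pattern (illustrative only).
jobs = {}          # job_id -> {"params": ..., "status": ..., "result": ...}
next_id = 0

def submit(params):                     # front-end: client submits a job
    global next_id
    next_id += 1
    jobs[next_id] = {"params": params, "status": "pending", "result": None}
    return next_id

def poll_pending():                     # back-end: worker asks for work
    return [jid for jid, j in jobs.items() if j["status"] == "pending"]

def post_result(jid, result):           # back-end: worker returns output
    jobs[jid].update(status="done", result=result)

def read_result(jid):                   # front-end: client fetches output
    j = jobs[jid]
    return j["result"] if j["status"] == "done" else None

jid = submit({"tool": "blast", "query": "ATGC"})
for pending in poll_pending():          # one pass of the worker loop
    post_result(pending, "alignment-report")
print(read_result(jid))                 # -> alignment-report
```

    Because the worker pulls jobs rather than being called directly, the HPC cluster never has to accept inbound connections - one reason this design suits locked-down computing environments.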

  11. Semantic Web

    Directory of Open Access Journals (Sweden)

    Anna Lamandini

    2011-06-01

    Full Text Available The semantic Web is a technology at the service of knowledge, aimed at accessibility and the sharing of content, facilitating interoperability between different systems, and as such is one of the nine key technological pillars of TIC (technologies for information and communication) within the third theme, programme-specific cooperation, of the seventh programme framework for research and development (7°PQRS, 2007-2013). As a system it seeks to overcome the overload or excess of irrelevant information on the Internet, in order to facilitate specific or pertinent research. It is an extension of the existing Web in which the aim is cooperation between computers and people (the dream of Sir Tim Berners-Lee), where machines can give more support to people in integrating and elaborating data in order to obtain inferences and a global sharing of data. It is a technology able to favour the development of a "data web", in other words the creation of a space of interconnected and shared data (Linked Data) which allows users to link different types of data coming from different sources. It is a technology that will have a great effect on everyday life since it will permit the planning of "intelligent applications" in various sectors such as education and training, research, the business world, public information, tourism, health, and e-government. It is an innovative technology that activates a social transformation (the socio-semantic Web) on a world level, since it redefines the cognitive universe of users and enables the sharing not only of information but of meaning (collective and connected intelligence).

  12. Graph Structure in Three National Academic Webs: Power Laws with Anomalies.

    Science.gov (United States)

    Thelwall, Mike; Wilkinson, David

    2003-01-01

    Explains how the Web can be modeled as a mathematical graph and analyzes the graph structures of three national university publicly indexable Web sites from Australia, New Zealand, and the United Kingdom. Topics include commercial search engines and academic Web link research; method-analysis environment and data sets; and power laws. (LRW)

  13. Consolidating drug data on a global scale using Linked Data.

    Science.gov (United States)

    Jovanovik, Milos; Trajanov, Dimitar

    2017-01-21

    Drug product data is available on the Web in a distributed fashion. The reasons lie within the regulatory domains, which exist on a national level. As a consequence, the drug data available on the Web are independently curated by national institutions from each country, leaving the data in varying languages, with a varying structure, granularity level and format, on different locations on the Web. Therefore, one of the main challenges in the realm of drug data is the consolidation and integration of large amounts of heterogeneous data into a comprehensive dataspace, for the purpose of developing data-driven applications. In recent years, the adoption of the Linked Data principles has enabled data publishers to provide structured data on the Web and contextually interlink them with other public datasets, effectively de-siloing them. Defining methodological guidelines and specialized tools for generating Linked Data in the drug domain, applicable on a global scale, is a crucial step to achieving the necessary levels of data consolidation and alignment needed for the development of a global dataset of drug product data. This dataset would then enable a myriad of new usage scenarios, which can, for instance, provide insight into the global availability of different drug categories in different parts of the world. We developed a methodology and a set of tools which support the process of generating Linked Data in the drug domain. Using them, we generated the LinkedDrugs dataset by seamlessly transforming, consolidating and publishing high-quality, 5-star Linked Drug Data from twenty-three countries, containing over 248,000 drug products, over 99,000,000 RDF triples and over 278,000 links to generic drugs from the LOD Cloud. Using the linked nature of the dataset, we demonstrate its ability to support advanced usage scenarios in the drug domain. The process of generating the LinkedDrugs dataset demonstrates the applicability of the methodological guidelines and the
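    The value of the links in a dataset like LinkedDrugs can be illustrated with a toy triple-pattern query: national drug products from different countries become joinable once both point to the same generic drug in the LOD Cloud. All URIs below are invented for illustration; the real dataset uses its own vocabularies.

```python
# Toy RDF-style triples linking national drug products to a shared
# generic drug (URIs are hypothetical, for illustration only).
triples = [
    ("mk:product/123",  "schema:name",       "Aspirin 500mg (MK)"),
    ("mk:product/123",  "ldc:sameGenericAs", "dbpedia:Aspirin"),
    ("uk:product/987",  "schema:name",       "Aspirin 500mg (UK)"),
    ("uk:product/987",  "ldc:sameGenericAs", "dbpedia:Aspirin"),
    ("dbpedia:Aspirin", "schema:category",   "NSAID"),
]

def query(s=None, p=None, o=None):
    """Match a single triple pattern; None acts as a wildcard."""
    return [(ts, tp, to) for ts, tp, to in triples
            if s in (None, ts) and p in (None, tp) and o in (None, to)]

# Which national products share the same generic drug?
generic = query(s="mk:product/123", p="ldc:sameGenericAs")[0][2]
siblings = [s for s, _, _ in query(p="ldc:sameGenericAs", o=generic)]
print(sorted(siblings))   # -> ['mk:product/123', 'uk:product/987']
```

    This two-step join - product to generic, generic to all products - is the kind of cross-country availability question the consolidated dataset is meant to answer at scale.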

  14. Going, going, still there: using the WebCite service to permanently archive cited web pages.

    Science.gov (United States)

    Eysenbach, Gunther; Trudel, Mathieu

    2005-12-30

    Scholars are increasingly citing electronic "web references" which are not preserved in libraries or full-text archives. WebCite is a new standard for citing web references. To "webcite" a document involves archiving the cited Web page through www.webcitation.org and citing the WebCite permalink instead of (or in addition to) the unstable live Web page. This journal has amended its "instructions for authors" accordingly, asking authors to archive cited Web pages before submitting a manuscript. Almost 200 other journals are already using the system. We discuss the rationale for WebCite, its technology, and how scholars, editors, and publishers can benefit from the service. Citing scholars initiate an archiving process of all cited Web references, ideally before they submit a manuscript. Authors of online documents and websites which are expected to be cited by others can ensure that their work is permanently available by creating an archived copy using WebCite and providing the citation information, including the WebCite link, on their Web document(s). Editors should ask their authors to cache all cited Web addresses (Uniform Resource Locators, or URLs) "prospectively" before submitting their manuscripts to their journal. Editors and publishers should also instruct their copyeditors to cache cited Web material if the author has not done so already. WebCite can also process publisher-submitted "citing articles" (submitted, for example, as eXtensible Markup Language [XML] documents) to automatically archive all cited Web pages shortly before or on publication. Finally, WebCite can act as a focussed crawler, retrospectively caching the references of already published articles. Copyright issues are addressed by honouring respective Internet standards (robot exclusion files, no-cache and no-archive tags). Long-term preservation is ensured by agreements with libraries and digital preservation organizations. The resulting WebCite Index may also have applications for research

  15. Production and Consumption of University Linked Data

    Science.gov (United States)

    Zablith, Fouad; Fernandez, Miriam; Rowe, Matthew

    2015-01-01

    Linked Data increases the value of an organisation's data over the web by introducing explicit and machine processable links at the data level. We have adopted this new stream of data representation to produce and expose existing data within The Open University (OU) as Linked Data. We present in this paper our approach for producing the data,…

  16. Teaching Web 2.0 technologies using Web 2.0 technologies.

    Science.gov (United States)

    Rethlefsen, Melissa L; Piorun, Mary; Prince, J Dale

    2009-10-01

    The research evaluated participant satisfaction with the content and format of the "Web 2.0 101: Introduction to Second Generation Web Tools" course and measured the impact of the course on participants' self-evaluated knowledge of Web 2.0 tools. The "Web 2.0 101" online course was based loosely on the Learning 2.0 model. Content was provided through a course blog and covered a wide range of Web 2.0 tools. All Medical Library Association members were invited to participate. Participants were asked to complete a post-course survey. Respondents who completed the entire course or who completed part of the course self-evaluated their knowledge of nine social software tools and concepts prior to and after the course using a Likert scale. Additional qualitative information about course strengths and weaknesses was also gathered. Respondents' self-ratings showed a significant change in perceived knowledge for each tool, using a matched pair Wilcoxon signed rank analysis (P<0.0001 for each tool/concept). Overall satisfaction with the course appeared high. Hands-on exercises were the most frequently identified strength of the course; the length and time-consuming nature of the course were considered weaknesses by some. Learning 2.0-style courses, though demanding time and self-motivation from participants, can increase knowledge of Web 2.0 tools.

  17. Marketing your medical practice with an effective web presence.

    Science.gov (United States)

    Finch, Tammy

    2004-01-01

    The proliferation of the World Wide Web has provided an opportunity for medical practices to sell themselves through low-cost marketing on the Internet. A Web site is a quick and effective way to provide patients with up-to-date treatment and procedure information. This article provides suggestions on what to include on a medical practice's Web site, explains how the Web can assist office staff and physicians, and outlines cost options for a Web site. The article also discusses design tips, such as Web site optimization.

  18. Drugs + HIV, Learn the Link

    Medline Plus

    Full Text Available ... Learn the Link campaign uses TV, print, and Web public service announcements (PSAs), as well as posters, e-cards, and other tools to send the message to America's youth that ...

  19. A Framework for Dynamic Web Services Composition

    NARCIS (Netherlands)

    Lécué, F.; Goncalves da Silva, Eduardo; Ferreira Pires, Luis

    2007-01-01

    Dynamic composition of web services is a promising approach and at the same time a challenging research area for the dissemination of service-oriented applications. It is widely recognised that service semantics is a key element for the dynamic composition of Web services, since it allows the

  20. Quality of web-based information on social phobia: a cross-sectional study.

    Science.gov (United States)

    Khazaal, Yasser; Fernandez, Sebastien; Cochand, Sophie; Reboh, Isabel; Zullino, Daniele

    2008-01-01

    The objective of the study is to evaluate the quality of web-based information on social phobia and to investigate particular quality indicators. Two keywords, "Social phobia" and "Social Anxiety Disorder", were entered into five popular World Wide Web search engines. Websites were assessed with a standardized proforma designed to rate sites on the basis of accountability, presentation, interactivity, readability, and content quality. The "Health On the Net" (HON) quality label and DISCERN scale scores, which help people without content expertise to assess the quality of written health publications, were used to verify their efficiency as quality indicators. Of the 200 identified links, 58 were included. On the basis of outcome measures, the overall quality of the sites turned out to be poor. The DISCERN scale and the HON label were good indicators of quality, whereas accountability criteria were poor indicators of site quality. Although social phobia education websites for patients are common, the educational material varies widely in quality and content. There is a need for better evidence-based information about social phobia on the Web, and a need to reconsider the role of accountability criteria as indicators of site quality. Clinicians should advise patients of the HON label and DISCERN as useful indicators of site quality. (c) 2007 Wiley-Liss, Inc.