WorldWideScience

Sample records for project web page

  1. Interstellar Initiative Web Page Design

    Science.gov (United States)

    Mehta, Alkesh

    1999-01-01

    This summer at NASA/MSFC, I have contributed to two projects: Interstellar Initiative Web Page Design and Lenz's Law Relative Motion Demonstration. In the Web Design Project, I worked on an Outline. The Web Design Outline was developed to provide a foundation for a Hierarchy Tree Structure. The Outline would help design a Website information base for future and near-term missions. The Website would give in-depth information on Propulsion Systems and Interstellar Travel. The Lenz's Law Relative Motion Demonstrator is discussed in this volume by Russell Lee.

  2. Web Page Recommendation Using Web Mining

    OpenAIRE

    Modraj Bhavsar; Mrs. P. M. Chavan

    2014-01-01

    On the World Wide Web, various kinds of content are generated in huge amounts, so to give relevant results to users, web recommendation has become an important part of web applications. On the web, different kinds of web recommendations are made available to users every day, including images, video, audio, query suggestions and web pages. In this paper we aim at providing a framework for web page recommendation. 1) First we describe the basics of web mining and the types of web mining. 2) Details of each...

  3. Creating Web Pages Simplified

    CERN Document Server

    Wooldridge, Mike

    2011-01-01

    The easiest way to learn how to create a Web page for your family or organization. Do you want to share photos and family lore with relatives far away? Have you been put in charge of communication for your neighborhood group or nonprofit organization? A Web page is the way to get the word out, and Creating Web Pages Simplified offers an easy, visual way to learn how to build one. Full-color illustrations and concise instructions take you through all phases of Web publishing, from laying out and formatting text to enlivening pages with graphics and animation. This easy-to-follow visual guide shows…

  4. Enriching the trustworthiness of health-related web pages.

    Science.gov (United States)

    Gaudinat, Arnaud; Cruchet, Sarah; Boyer, Celia; Chrawdhry, Pravir

    2011-06-01

    We present an experimental mechanism for enriching web content with quality metadata. This mechanism is based on a simple and well-known initiative in the field of the health-related web, the HONcode. The Resource Description Framework (RDF) format and the Dublin Core Metadata Element Set were used to formalize these metadata. The model of trust proposed is based on a quality model for health-related web pages that has been tested in practice over a period of thirteen years. Our model has been explored in the context of a project to develop a research tool that automatically detects the occurrence of quality criteria in health-related web pages.

  5. The Faculty Web Page: Contrivance or Continuation?

    Science.gov (United States)

    Lennex, Lesia

    2007-01-01

    In an age of Internet education, what does it mean for a tenure/tenure-track faculty to have a web page? How many professors have web pages? If they have a page, what does it look like? Do they really need a web page at all? Many universities have faculty web pages. What do those collective pages look like? In what way do they represent the…

  6. WebScore: An Effective Page Scoring Approach for Uncertain Web Social Networks

    Directory of Open Access Journals (Sweden)

    Shaojie Qiao

    2011-10-01

    To effectively score pages with uncertainty in web social networks, we first proposed a new concept called the transition probability matrix and formally defined the uncertainty in web social networks. Second, we proposed a hybrid page scoring algorithm, called WebScore, based on the PageRank algorithm and three centrality measures: degree, betweenness, and closeness. In particular, WebScore takes full account of the uncertainty of web social networks by computing the transition probability from one page to another. The basic idea of WebScore is to: (1) integrate uncertainty into PageRank in order to accurately rank pages, and (2) apply the centrality measures to calculate the importance of pages in web social networks. In order to verify the performance of WebScore, we developed a web social network analysis system which can partition web pages into distinct groups and score them in an effective fashion. Finally, we conducted extensive experiments on real data, and the results show that WebScore is effective at scoring uncertain pages and does so in less time than PageRank and the centrality-based page scoring algorithms.
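
    As a rough illustration of the scoring idea above, the following Python sketch runs power iteration over a row-stochastic transition probability matrix P (the edge uncertainty is assumed to be folded into the transition weights) and blends the result with a degree-centrality score. The blending weight alpha and the convex combination are assumptions for illustration; the abstract does not give WebScore's exact combination rule.

        import numpy as np

        def pagerank_with_uncertainty(P, d=0.85, tol=1e-9, max_iter=200):
            # Power iteration over a row-stochastic transition probability
            # matrix P, where P[i, j] is the (uncertain) probability of
            # moving from page i to page j.
            n = P.shape[0]
            r = np.full(n, 1.0 / n)
            for _ in range(max_iter):
                r_next = (1 - d) / n + d * (P.T @ r)
                if np.abs(r_next - r).sum() < tol:
                    return r_next
                r = r_next
            return r

        def webscore_like(P, degree_centrality, alpha=0.5):
            # Hypothetical blend: the abstract combines PageRank with
            # degree/betweenness/closeness but does not state the exact
            # formula, so a convex combination is assumed here.
            pr = pagerank_with_uncertainty(P)
            c = degree_centrality / degree_centrality.sum()
            return alpha * pr + (1 - alpha) * c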

  7. Dynamic Web Pages: Performance Impact on Web Servers.

    Science.gov (United States)

    Kothari, Bhupesh; Claypool, Mark

    2001-01-01

    Discussion of Web servers and requests for dynamic pages focuses on experimentally measuring and analyzing the performance of the three dynamic Web page generation technologies: CGI, FastCGI, and Servlets. Develops a multivariate linear regression model and predicts Web server performance under some typical dynamic requests. (Author/LRW)

  8. Exploiting link structure for web page genre identification

    KAUST Repository

    Zhu, Jia; Xie, Qing; Yu, Shoou I.; Wong, Wai Hung

    2015-07-07

    As the World Wide Web develops at an unprecedented pace, identifying web page genre has recently attracted increasing attention because of its importance in web search. A common approach for identifying genre is to use textual features that can be extracted directly from a web page, that is, On-Page features. The extracted features are subsequently inputted into a machine learning algorithm that will perform classification. However, these approaches may be ineffective when the web page contains limited textual information (e.g., the page is full of images). In this study, we address genre identification of web pages under the aforementioned situation. We propose a framework that uses On-Page features while simultaneously considering information in neighboring pages, that is, the pages that are connected to the original page by backward and forward links. We first introduce a graph-based model called GenreSim, which selects an appropriate set of neighboring pages. We then construct a multiple classifier combination module that utilizes information from the selected neighboring pages and On-Page features to improve performance in genre identification. Experiments are conducted on well-known corpora, and favorable results indicate that our proposed framework is effective, particularly in identifying web pages with limited textual information. © 2015 The Author(s)

  9. Finding Specification Pages from the Web

    Science.gov (United States)

    Yoshinaga, Naoki; Torisawa, Kentaro

    This paper presents a method of finding a specification page on the Web for a given object (e.g., "Ch. d'Yquem") and its class label (e.g., "wine"). A specification page for an object is a Web page which gives concise attribute-value information about the object (e.g., "county"-"Sauternes") in well-formatted structures. A simple unsupervised method using layout and symbolic decoration cues was applied to a large number of Web pages to acquire candidate attributes for each class (e.g., "county" for the class "wine"). We then filter out irrelevant words from the putative attributes through an author-aware scoring function that we call site frequency. We used the acquired attributes to select a representative specification page for a given object from the Web pages retrieved by a normal search engine. Experimental results revealed that our system greatly outperformed the normal search engine in this specification retrieval task.

  10. Classifying web pages with visual features

    NARCIS (Netherlands)

    de Boer, V.; van Someren, M.; Lupascu, T.; Filipe, J.; Cordeiro, J.

    2010-01-01

    To automatically classify and process web pages, current systems use the textual content of those pages, including both the displayed content and the underlying (HTML) code. However, a very important feature of a web page is its visual appearance. In this paper, we show that using generic visual…

  11. Metadata Schema Used in OCLC Sampled Web Pages

    Directory of Open Access Journals (Sweden)

    Fei Yu

    2005-12-01

    The tremendous growth of Web resources has made information organization and retrieval more and more difficult. As one approach to this problem, metadata schemas have been developed to characterize Web resources. However, many questions have been raised about the use of metadata schemas, such as: Which metadata schemas have been used on the Web? How do they describe Web-accessible information? What is the distribution of these metadata schemas among Web pages? Do certain schemas dominate the others? To address these issues, this study analyzed 16,383 Web pages with meta tags extracted from 200,000 OCLC sampled Web pages in 2000. It found that only 8.19% of Web pages used meta tags; description tags, keyword tags, and Dublin Core tags were the only three schemas used in the Web pages. This article reveals the use of meta tags in terms of their function distribution, syntax characteristics, granularity of the Web pages, and the length distribution and word-number distribution of both description and keyword tags.

  12. Web page classification on child suitability

    NARCIS (Netherlands)

    C. Eickhoff (Carsten); P. Serdyukov; A.P. de Vries (Arjen)

    2010-01-01

    Children spend significant amounts of time on the Internet. Recent studies showed that during these periods they are often not under adult supervision. This work presents an automatic approach to identifying suitable web pages for children based on topical and non-topical web page…

  13. Web Log Explorer – Control of Multidimensional Dynamics of Web Pages

    Directory of Open Access Journals (Sweden)

    Mislav Šimunić

    2012-07-01

    Demand markets dictate and pose increasingly more requirements to the supply market that are not easily satisfied. The supply market presenting its web pages to the demand market should find the best and quickest ways to respond promptly to the changes dictated by the demand market. The question is how to do that in the most efficient and quickest way. The data on the usage of web pages on a specific web site are recorded in a log file. The data in a log file are stochastic and unordered and require systematic monitoring, categorization, analyses, and weighing. From the data processed in this way, it is necessary to single out and sort the data by their importance that would be a basis for a continuous generation of dynamics/changes to the web site pages in line with the criterion chosen. To perform those tasks successfully, a new software solution is required. For that purpose, the authors have developed the first version of the WLE (WebLogExplorer) software solution, which is actually a realization of web page multidimensionality and the web site as a whole. The WebLogExplorer enables statistical and semantic analysis of a log file and, on the basis thereof, multidimensional control of the web page dynamics. The experimental part of the work was done within the web site of HTZ (Croatian National Tourist Board), the main portal of the global tourist supply in the Republic of Croatia (on average, the daily log consists of c. 600,000 sets, the average size of a log file is 127 Mb, and there are c. 7000-8000 daily visitors on the web site).

  14. Migrating Multi-page Web Applications to Single-page AJAX Interfaces

    NARCIS (Netherlands)

    Mesbah, A.; Van Deursen, A.

    2006-01-01

    Recently, a new web development technique for creating interactive web applications, dubbed AJAX, has emerged. In this new model, the single-page web interface is composed of individual components which can be updated/replaced independently. With the rise of AJAX web applications, classical…

  15. Socorro Students Translate NRAO Web Pages Into Spanish

    Science.gov (United States)

    2002-07-01

    Six Socorro High School students are spending their summer working at the National Radio Astronomy Observatory (NRAO) on a unique project that gives them experience in language translation, World Wide Web design, and technical communication. Under the project, called "Un puente a los cielos," the students are translating many of NRAO's Web pages on astronomy into Spanish. "These students are using their bilingual skills to help us make basic information about astronomy and radio telescopes available to the Spanish-speaking community," said Kristy Dyer, who works at NRAO as a National Science Foundation postdoctoral fellow and who developed the project and obtained funding for it from the National Aeronautics and Space Administration. The students are: Daniel Acosta, 16; Rossellys Amarante, 15; Sandra Cano, 16; Joel Gonzalez, 16; Angelica Hernandez, 16; and Cecilia Lopez, 16. The translation project, a joint effort of NRAO and the NM Tech physics department, also includes Zammaya Moreno, a teacher from Ecuador, Robyn Harrison, NRAO's education officer, and NRAO computer specialist Allan Poindexter. The students are translating NRAO Web pages aimed at the general public. These pages cover the basics of radio astronomy and frequently-asked questions about NRAO and the scientific research done with NRAO's telescopes. "Writing about science for non-technical audiences has to be done carefully. Scientific concepts must be presented in terms that are understandable to non-scientists but also that remain scientifically accurate," Dyer said. "When translating this type of writing from one language to another, we need to preserve both the understandability and the accuracy," she added. For that reason, Dyer recruited 14 Spanish-speaking astronomers from Argentina, Mexico and the U.S. to help verify the scientific accuracy of the Spanish translations. The astronomers will review the translations. The project is giving the students a broad range of experience. "They are

  16. Measurement of Web Usability: Web Page of Hacettepe University Department of Information Management

    OpenAIRE

    Nazan Özenç Uçak; Tolga Çakmak

    2009-01-01

    Today, information is increasingly produced in electronic form, and retrieval of information is provided via web pages. As a result of the rise in the number of web pages, many of them seem to comprise similar contents but different designs. In this respect, presenting information over web pages according to user expectations and specifications is important in terms of effective usage of information. This study provides an insight into web usability studies that are executed for measuring...

  17. Web-page Prediction for Domain Specific Web-search using Boolean Bit Mask

    OpenAIRE

    Sinha, Sukanta; Duttagupta, Rana; Mukhopadhyay, Debajyoti

    2012-01-01

    A search engine is a Web-page retrieval tool, and nowadays Web searchers make good use of their time by using an efficient search engine. To improve the performance of the search engine, we introduce a unique mechanism which will give Web searchers more prominent search results. In this paper, we discuss a domain-specific Web search prototype which will generate the predicted Web-page list for a user-given search string using a Boolean bit mask.
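
    The Boolean bit-mask matching step can be sketched in a few lines of Python. The vocabulary, the index layout, and the AND-style coverage test below are illustrative assumptions, not the paper's actual design.

        VOCAB = ["web", "search", "engine", "domain", "ontology", "crawler"]  # illustrative

        def bitmask(words):
            # Fold a set of words into an integer bit mask over VOCAB.
            mask = 0
            for i, term in enumerate(VOCAB):
                if term in words:
                    mask |= 1 << i
            return mask

        def predict_pages(query, index):
            # Return pages whose mask covers every query bit (Boolean AND).
            q = bitmask(set(query.lower().split()))
            return [url for url, mask in index.items() if mask & q == q]

        index = {
            "example.org/a": bitmask({"web", "search", "engine"}),
            "example.org/b": bitmask({"domain", "crawler"}),
        }
        print(predict_pages("search engine", index))  # -> ['example.org/a']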

  18. Virtual real-time inspection of nuclear material via VRML and secure web pages

    International Nuclear Information System (INIS)

    Nilsen, C.; Jortner, J.; Damico, J.; Friesen, J.; Schwegel, J.

    1996-01-01

    Sandia National Laboratories' Straight-Line project is working to provide the right sensor information to the right user to enhance the safety, security, and international accountability of nuclear material. One of Straight-Line's efforts is to create a system to securely disseminate this data on the Internet's World-Wide-Web. To make the user interface more intuitive, Sandia has generated a three-dimensional VRML (virtual reality modeling language) interface for a secure web page. This paper will discuss the implementation of the Straight-Line secure 3-D web page. A discussion of the pros and cons of a 3-D web page is also presented. The public VRML demonstration described in this paper can be found on the Internet at this address, http://www.ca.sandia.gov/NMM/. A Netscape browser, version 3, is strongly recommended.

  19. Virtual real-time inspection of nuclear material via VRML and secure web pages

    International Nuclear Information System (INIS)

    Nilsen, C.; Jortner, J.; Damico, J.; Friesen, J.; Schwegel, J.

    1997-04-01

    Sandia National Laboratories' Straight Line project is working to provide the right sensor information to the right user to enhance the safety, security, and international accountability of nuclear material. One of Straight Line's efforts is to create a system to securely disseminate this data on the Internet's World-Wide-Web. To make the user interface more intuitive, Sandia has generated a three-dimensional VRML (virtual reality modeling language) interface for a secure web page. This paper will discuss the implementation of the Straight Line secure 3-D web page. A discussion of the "pros and cons" of a 3-D web page is also presented. The public VRML demonstration described in this paper can be found on the Internet at the following address: http://www.ca.sandia.gov/NMM/. A Netscape browser, version 3, is strongly recommended.

  1. Code AI Personal Web Pages

    Science.gov (United States)

    Garcia, Joseph A.; Smith, Charles A. (Technical Monitor)

    1998-01-01

    The document consists of a publicly available web site (george.arc.nasa.gov) for Joseph A. Garcia's personal web pages in the AI division. Only general information will be posted and no technical material. All the information is unclassified.

  2. Stochastic analysis of web page ranking

    NARCIS (Netherlands)

    Volkovich, Y.

    2009-01-01

    Today, the study of the World Wide Web is one of the most challenging subjects. In this work we consider the Web from a probabilistic point of view. We analyze the relations between various characteristics of the Web. In particular, we are interested in the Web properties that affect the Web page…

  3. Classroom Web Pages: A "How-To" Guide for Educators.

    Science.gov (United States)

    Fehling, Eric E.

    This manual provides teachers, with very little or no technology experience, with a step-by-step guide for developing the necessary skills for creating a class Web Page. The first part of the manual is devoted to the thought processes preceding the actual creation of the Web Page. These include looking at other Web Pages, deciding what should be…

  4. A thorough spring-clean for CERN's Web pages

    CERN Multimedia

    2001-01-01

    This coming Tuesday will see the unveiling of CERN's new user pages on the Web. Their simplified layout and design will make everybody's lives a whole lot easier. Stand by for Tuesday 17 April when, as announced in the Weekly Bulletin of 2 April (n°14/2001), the newly designed users' welcome page will be hitting our screens as the default CERN home page. But don't worry, if you've got the blues for the good old blue-green home page, it's still in service and, to ensure a smooth transition, will be maintained in parallel until 25 May. But in all likelihood you'll be quickly won over by the new-look pages, which are so much simpler to use. Welcome to the new Web! The aim of this revamp, led by the WPE (Web Public Education) group, is to simplify and introduce a more logical hierarchy into the menus and welcome pages on CERN's Intranet. In a second stage, the 'General Public' pages will get a similar makeover. The fact is that the number of links on the user pages, and in particular the welcome page…

  5. Educational use of World Wide Web pages on CD-ROM.

    Science.gov (United States)

    Engel, Thomas P; Smith, Michael

    2002-01-01

    The World Wide Web is increasingly important for medical education. Internet served pages may also be used on a local hard disk or CD-ROM without a network or server. This allows authors to reuse existing content and provide access to users without a network connection. CD-ROM offers several advantages over network delivery of Web pages for several applications. However, creating Web pages for CD-ROM requires careful planning. Issues include file names, relative links, directory names, default pages, server created content, image maps, other file types and embedded programming. With care, it is possible to create server based pages that can be copied directly to CD-ROM. In addition, Web pages on CD-ROM may reference Internet served pages to provide the best features of both methods.

  6. Business Systems Branch Abilities, Capabilities, and Services Web Page

    Science.gov (United States)

    Cortes-Pena, Aida Yoguely

    2009-01-01

    During the INSPIRE summer internship I acted as the Business Systems Branch Capability Owner for the Kennedy Web-based Initiative for Communicating Capabilities System (KWICC), with the responsibility of creating a portal that describes the services provided by this Branch. This project will help others achieve a clear view of the services that the Business Systems Branch provides to NASA and the Kennedy Space Center. After collecting the data through interviews with subject matter experts and the literature in Business World and other web sites, I identified discrepancies, made the necessary corrections to the sites and placed the information from the report into the KWICC web page.

  7. Required Discussion Web Pages in Psychology Courses and Student Outcomes

    Science.gov (United States)

    Pettijohn, Terry F., II; Pettijohn, Terry F.

    2007-01-01

    We conducted 2 studies that investigated student outcomes when using discussion Web pages in psychology classes. In Study 1, we assigned 213 students enrolled in Introduction to Psychology courses to either a mandatory or an optional Web page discussion condition. Students used the discussion Web page significantly more often and performed…

  8. Recognition of pornographic web pages by classifying texts and images.

    Science.gov (United States)

    Hu, Weiming; Wu, Ou; Chen, Zhouyao; Fu, Zhouyu; Maybank, Steve

    2007-06-01

    With the rapid development of the World Wide Web, people benefit more and more from the sharing of information. However, Web pages with obscene, harmful, or illegal content can be easily accessed. It is important to recognize such unsuitable, offensive, or pornographic Web pages. In this paper, a novel framework for recognizing pornographic Web pages is described. A C4.5 decision tree is used to divide Web pages, according to content representations, into continuous text pages, discrete text pages, and image pages. These three categories of Web pages are handled, respectively, by a continuous text classifier, a discrete text classifier, and an algorithm that fuses the results from the image classifier and the discrete text classifier. In the continuous text classifier, statistical and semantic features are used to recognize pornographic texts. In the discrete text classifier, the naive Bayes rule is used to calculate the probability that a discrete text is pornographic. In the image classifier, the object's contour-based features are extracted to recognize pornographic images. In the text and image fusion algorithm, the Bayes theory is used to combine the recognition results from images and texts. Experimental results demonstrate that the continuous text classifier outperforms the traditional keyword-statistics-based classifier, the contour-based image classifier outperforms the traditional skin-region-based image classifier, the results obtained by our fusion algorithm outperform those by either of the individual classifiers, and our framework can be adapted to different categories of Web pages.
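
    The discrete text classifier described above applies the naive Bayes rule. A minimal, self-contained Python sketch with add-one smoothing follows; the token-level features and training data are placeholders, and the C4.5 dispatcher and image-fusion stages of the framework are not reproduced here.

        import math
        from collections import Counter

        def train_nb(docs):
            # docs: list of (tokens, label) pairs. Returns per-label priors
            # and add-one-smoothed word log-probabilities.
            labels = Counter(lbl for _, lbl in docs)
            word_counts = {lbl: Counter() for lbl in labels}
            for tokens, lbl in docs:
                word_counts[lbl].update(tokens)
            vocab = {w for c in word_counts.values() for w in c}
            model = {}
            for lbl in labels:
                total = sum(word_counts[lbl].values())
                model[lbl] = {
                    "prior": math.log(labels[lbl] / len(docs)),
                    "cond": {w: math.log((word_counts[lbl][w] + 1) / (total + len(vocab)))
                             for w in vocab},
                    "unseen": math.log(1.0 / (total + len(vocab))),
                }
            return model

        def classify(model, tokens):
            # Pick the label maximizing the posterior log-probability.
            def score(lbl):
                m = model[lbl]
                return m["prior"] + sum(m["cond"].get(t, m["unseen"]) for t in tokens)
            return max(model, key=score)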

  9. Automatically annotating web pages using Google Rich Snippets

    NARCIS (Netherlands)

    Hogenboom, F.P.; Frasincar, F.; Vandic, D.; Meer, van der J.; Boon, F.; Kaymak, U.

    2011-01-01

    We propose the Automatic Review Recognition and annOtation of Web pages (ARROW) framework, a framework for Web page review identification and annotation using RDFa Google Rich Snippets. The ARROW framework consists of four steps: hotspot identification, subjectivity analysis, information…

  10. A teen's guide to creating web pages and blogs

    CERN Document Server

    Selfridge, Peter; Osburn, Jennifer

    2008-01-01

    Whether using a social networking site like MySpace or Facebook or building a Web page from scratch, millions of teens are actively creating a vibrant part of the Internet. This is the definitive teen's guide to publishing exciting web pages and blogs on the Web. This easy-to-follow guide shows teenagers how to: Create great MySpace and Facebook pages Build their own unique, personalized Web site Share the latest news with exciting blogging ideas Protect themselves online with cyber-safety tips Written by a teenager for other teens, this book leads readers step-by-step through the basics of web and blog design. In this book, teens learn to go beyond clicking through web sites to learning winning strategies for web design and great ideas for writing blogs that attract attention and readership.

  11. Identify Web-page Content meaning using Knowledge based System for Dual Meaning Words

    OpenAIRE

    Sinha, Sukanta; Dattagupta, Rana; Mukhopadhyay, Debajyoti

    2012-01-01

    The meaning of Web-page content plays a big role when a search engine produces a search result. In most cases the Web-page meaning is stored in the title or meta-tag area, but those meanings do not always match the Web-page content. To overcome this situation we need to go through the Web-page content to identify the Web-page meaning. Where the Web-page content holds dual-meaning words, it is really difficult to identify the meaning of the Web-page. In this paper, we are introdu...

  12. Building interactive simulations in a Web page design program.

    Science.gov (United States)

    Kootsey, J Mailen; Siriphongs, Daniel; McAuley, Grant

    2004-01-01

    A new Web software architecture, NumberLinX (NLX), has been integrated into a commercial Web design program to produce a drag-and-drop environment for building interactive simulations. NLX is a library of reusable objects written in Java, including input, output, calculation, and control objects. The NLX objects were added to the palette of available objects in the Web design program to be selected and dropped on a page. Inserting an object in a Web page is accomplished by adding a template block of HTML code to the page file. HTML parameters in the block must be set to user-supplied values, so the HTML code is generated dynamically, based on user entries in a popup form. Implementing the object inspector for each object permits the user to edit object attributes in a form window. Except for model definition, the combination of the NLX architecture and the Web design program permits construction of interactive simulation pages without writing or inspecting code.

  13. D5.2 Project web environment

    OpenAIRE

    Fernie, Kate; Usher, Carol

    2011-01-01

    This deliverable presents a snapshot of the project web environment in July 2011. The project website http://www.digcur-education.org/eng was launched in month one of the project. The aim of this site is to provide information about the project to stakeholders and to related projects, as well as provide an Intranet for members of the project consortium. The website includes a ‘Join Us’ page to encourage interested parties to register as members of the DigCurV network, to receive the p...

  14. Is Domain Highlighting Actually Helpful in Identifying Phishing Web Pages?

    Science.gov (United States)

    Xiong, Aiping; Proctor, Robert W; Yang, Weining; Li, Ninghui

    2017-06-01

    To evaluate the effectiveness of domain highlighting in helping users identify whether Web pages are legitimate or spurious. As a component of the URL, a domain name can be overlooked. Consequently, browsers highlight the domain name to help users identify which Web site they are visiting. Nevertheless, few studies have assessed the effectiveness of domain highlighting, and the only formal study confounded highlighting with instructions to look at the address bar. We conducted two phishing detection experiments. Experiment 1 was run online: Participants judged the legitimacy of Web pages in two phases. In Phase 1, participants were to judge the legitimacy based on any information on the Web page, whereas in Phase 2, they were to focus on the address bar. Whether the domain was highlighted was also varied. Experiment 2 was conducted similarly but with participants in a laboratory setting, which allowed tracking of fixations. Participants differentiated the legitimate and fraudulent Web pages better than chance. There was some benefit of attending to the address bar, but domain highlighting did not provide effective protection against phishing attacks. Analysis of eye-gaze fixation measures was in agreement with the task performance, but heat-map results revealed that participants' visual attention was attracted by the highlighted domains. Failure to detect many fraudulent Web pages even when the domain was highlighted implies that users lacked knowledge of Web page security cues or how to use those cues. Potential applications include development of phishing prevention training incorporating domain highlighting with other methods to help users identify phishing Web pages.

  15. Digital libraries and World Wide Web sites and page persistence.

    Directory of Open Access Journals (Sweden)

    Wallace Koehler

    1999-01-01

    Web pages and Web sites, some argue, can either be collected as elements of digital or hybrid libraries, or, as others would have it, the WWW is itself a library. We begin with the assumption that Web pages and Web sites can be collected and categorized. The paper explores the proposition that the WWW constitutes a library. We conclude that the Web is not a digital library. However, its component parts can be aggregated and included as parts of digital library collections. These, in turn, can be incorporated into "hybrid libraries": libraries with both traditional and digital collections. Material on the Web can be organized and managed. Native documents can be collected in situ, disseminated, distributed, catalogued, indexed, and controlled, in traditional library fashion. The Web therefore is not a library, but material for library collections is selected from the Web. That said, the Web and its component parts are dynamic. Web documents undergo two kinds of change. The first type, the type addressed in this paper, is "persistence", or the existence or disappearance of Web pages and sites: in a word, the lifecycle of Web documents. "Intermittence" is a variant of persistence, and is defined as the disappearance but reappearance of Web documents. At any given time, about five percent of Web pages are intermittent, which is to say they are gone but will return. Over time a Web collection erodes. Based on a 120-week longitudinal study of a sample of Web documents, it appears that the half-life of a Web page is somewhat less than two years and the half-life of a Web site is somewhat more than two years. That is to say, an unweeded Web document collection created two years ago would contain the same number of URLs, but only half of those URLs point to content. The second type of change Web documents experience is change in Web page or Web site content. Again based on the Web document samples, very nearly all Web pages and sites undergo some…
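
    The half-life figures reported above suggest a simple survival computation if one assumes exponential decay (the study reports half-lives, not a functional form). A short sketch:

        def surviving_fraction(weeks, half_life_weeks):
            # Fraction of URLs still resolving after `weeks`, assuming
            # exponential decay with the given half-life.
            return 0.5 ** (weeks / half_life_weeks)

        # With a page half-life of roughly 2 years (104 weeks), a collection
        # built 1 year ago still resolves for about 71% of its URLs:
        print(round(surviving_fraction(52, 104), 2))  # 0.71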

  16. A web-based repository of surgical simulator projects.

    Science.gov (United States)

    Leskovský, Peter; Harders, Matthias; Székely, Gábor

    2006-01-01

    The use of computer-based surgical simulators for training of prospective surgeons has been a topic of research for more than a decade. As a result, a large number of academic projects have been carried out, and a growing number of commercial products are available on the market. Keeping track of all these endeavors for established groups as well as for newly started projects can be quite arduous. Gathering information on existing methods, already traveled research paths, and problems encountered is a time consuming task. To alleviate this situation, we have established a modifiable online repository of existing projects. It contains detailed information about a large number of simulator projects gathered from web pages, papers and personal communication. The database is modifiable (with password protected sections) and also allows for a simple statistical analysis of the collected data. For further information, the surgical repository web page can be found at www.virtualsurgery.vision.ee.ethz.ch.

  17. [An evaluation of the quality of health web pages using a validated questionnaire].

    Science.gov (United States)

    Conesa Fuentes, Maria del Carmen; Aguinaga Ontoso, Enrique; Hernández Morante, Juan José

    2011-01-01

    The objective of the present study was to evaluate the quality of general health information in Spanish-language web pages, and of the official Regional Services web pages from the different Autonomous Regions. It is a cross-sectional study. We used a previously validated questionnaire to study the present state of health information on the Internet from a lay-user point of view. By means of PageRank (Google®), we obtained a group of websites, including a total of 65 health web pages. We applied some exclusion criteria, and finally obtained a total of 36 websites. We also analyzed the official web pages from the different Health Services in Spain (19 web pages), making a total of 54 health web pages. In the light of our data, we observed that the quality of the general-information health web pages was generally rather low, especially regarding information quality. Not one page reached the maximum score (19 points). The mean score of the web pages was 9.8±2.8. In conclusion, to avoid the problems arising from the lack of quality, health professionals should design advertising campaigns and other media to teach the lay-user how to evaluate information quality. Copyright © 2009 Elsevier España, S.L. All rights reserved.

  18. An Analysis of Academic Library Web Pages for Faculty

    Science.gov (United States)

    Gardner, Susan J.; Juricek, John Eric; Xu, F. Grace

    2008-01-01

    Web sites are increasingly used by academic libraries to promote key services and collections to teaching faculty. This study analyzes the content, location, language, and technological features of fifty-four academic library Web pages designed especially for faculty to expose patterns in the development of these pages.

  19. Near-Duplicate Web Page Detection: An Efficient Approach Using Clustering, Sentence Feature and Fingerprinting

    Directory of Open Access Journals (Sweden)

    J. Prasanna Kumar

    2013-02-01

    Duplicate and near-duplicate web pages are chief concerns for web search engines. In reality, they incur enormous space to store the indexes, ultimately slowing down and increasing the cost of serving results. A variety of techniques have been developed to identify pairs of web pages that are "similar" to each other. The problem of finding near-duplicate web pages has been a subject of research in the database and web-search communities for some years. In order to identify near-duplicate web pages, we make use of sentence-level features along with a fingerprinting method. When a large number of web documents are in consideration for the detection of near-duplicate pages, we first use K-mode clustering, and subsequently sentence-feature and fingerprint comparison is used. Using these steps, we exactly identify the near-duplicate web pages in an efficient manner. The experimentation is carried out on web page collections, and the results ensured the efficiency of the proposed approach in detecting near-duplicate web pages.
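
    A minimal Python sketch of sentence-level fingerprint comparison in the spirit of this approach; the naive sentence splitting, the truncated-MD5 hashing, and the Jaccard threshold are assumptions for illustration, and the K-mode clustering stage is omitted.

        import hashlib

        def sentence_fingerprints(text):
            # Hash each (naively split) sentence to a 64-bit fingerprint.
            sentences = [s.strip() for s in text.split(".") if s.strip()]
            return {int(hashlib.md5(s.lower().encode()).hexdigest()[:16], 16)
                    for s in sentences}

        def near_duplicate(a, b, threshold=0.8):
            # Declare near-duplicates when the Jaccard overlap of the
            # sentence fingerprint sets exceeds the threshold.
            fa, fb = sentence_fingerprints(a), sentence_fingerprints(b)
            if not fa or not fb:
                return False
            return len(fa & fb) / len(fa | fb) >= threshold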

  20. Page sample size in web accessibility testing: how many pages is enough?

    NARCIS (Netherlands)

    Velleman, Eric Martin; van der Geest, Thea

    2013-01-01

    Various countries and organizations use a different sampling approach and sample size of web pages in accessibility conformance tests. We are conducting a systematic analysis to determine how many pages is enough for testing whether a website is compliant with standard accessibility guidelines. This…

  1. Reporting on post-menopausal hormone therapy: an analysis of gynaecologists' web pages.

    Science.gov (United States)

    Bucksch, Jens; Kolip, Petra; Deitermann, Bernhilde

    2004-01-01

    The present study was designed to analyse Web pages of German gynaecologists with regard to postmenopausal hormone therapy (HT). There is a growing body of evidence that the overall health risks of HT exceed the benefits. Making one's own informed choice has become a central concern for menopausal women. The Internet is an important source of health information, but the quality is often dubious. The study focused on the analysis of basic criteria such as last modification date and quality of the HT information content. The results of the Women's Health Initiative Study (WHI) were used as a benchmark. We searched for relevant Web pages by entering a combination of key words (9 x 13 = 117) into the search engine www.google.de. Each Web page was analysed using a standardized questionnaire. The basic criteria and the quality of content on each Web page were separately categorized by two evaluators. Disagreements were resolved by discussion. Of the 97 websites identified, basic criteria were not met by the majority. For example, the modification date was displayed by only 23 (23.7%) Web pages. The quality of content of most Web pages regarding HT was inaccurate and incomplete. Whilst only nine (9.3%) took up a balanced position, 66 (68%) recommended HT without any restrictions. In 22 cases the recommendation was indistinct and none of the sites refused HT. With regard to basic criteria, there was no difference between HT-recommending Web pages and sites with a balanced position. Evidence-based information resulting from the WHI trial was insufficiently represented on gynaecologists' Web pages. Because of the growing number of consumers looking online for health information, the danger of obtaining harmful information has to be minimized. Web pages of gynaecologists do not appear to be recommendable for women because they do not provide recent evidence-based findings about HT.

  2. Going, going, still there: using the WebCite service to permanently archive cited web pages.

    Science.gov (United States)

    Eysenbach, Gunther; Trudel, Mathieu

    2005-12-30

    Scholars are increasingly citing electronic "web references" which are not preserved in libraries or full text archives. WebCite is a new standard for citing web references. To "webcite" a document involves archiving the cited Web page through www.webcitation.org and citing the WebCite permalink instead of (or in addition to) the unstable live Web page. This journal has amended its "instructions for authors" accordingly, asking authors to archive cited Web pages before submitting a manuscript. Almost 200 other journals are already using the system. We discuss the rationale for WebCite, its technology, and how scholars, editors, and publishers can benefit from the service. Citing scholars initiate an archiving process of all cited Web references, ideally before they submit a manuscript. Authors of online documents and websites which are expected to be cited by others can ensure that their work is permanently available by creating an archived copy using WebCite and providing the citation information including the WebCite link on their Web document(s). Editors should ask their authors to cache all cited Web addresses (Uniform Resource Locators, or URLs) "prospectively" before submitting their manuscripts to their journal. Editors and publishers should also instruct their copyeditors to cache cited Web material if the author has not done so already. In addition, WebCite can process publisher-submitted "citing articles" (submitted for example as eXtensible Markup Language [XML] documents) to automatically archive all cited Web pages shortly before or on publication. Finally, WebCite can act as a focussed crawler, retrospectively caching references of already published articles. Copyright issues are addressed by honouring respective Internet standards (robot exclusion files, no-cache and no-archive tags). Long-term preservation is ensured by agreements with libraries and digital preservation organizations. The resulting WebCite Index may also have applications for research…

  3. Web page sorting algorithm based on query keyword distance relation

    Science.gov (United States)

    Yang, Han; Cui, Hong Gang; Tang, Hao

    2017-08-01

    In order to optimize page sorting, we propose a clustering idea for query keywords based on the distance relationships between the search keywords within a web page, and convert it into a degree of aggregation of the search keywords in the web page. Based on the PageRank algorithm, the clustering-degree factor of the query keywords is added so that it can participate in the quantitative calculation. This paper thus proposes an improved PageRank algorithm based on the distance relation between search keywords. The experimental results show the feasibility and effectiveness of the method.
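
    One plausible way to turn keyword distances into an aggregation degree is sketched below in Python; the inverse-mean-pairwise-distance measure and the multiplicative score adjustment in the final comment are illustrative assumptions, not the paper's exact formula.

        def aggregation_degree(tokens, keywords):
            # Inverse of the mean pairwise distance (in token positions)
            # between occurrences of the query keywords: tightly clustered
            # keywords yield a value near 1, scattered ones near 0.
            positions = [i for i, t in enumerate(tokens) if t in keywords]
            if len(positions) < 2:
                return 0.0
            pairs = [(a, b) for i, a in enumerate(positions) for b in positions[i + 1:]]
            mean_dist = sum(b - a for a, b in pairs) / len(pairs)
            return 1.0 / (1.0 + mean_dist)

        # A page score could then be adjusted, e.g.:
        # score = pagerank * (1 + aggregation_degree(tokens, query_terms))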

  4. In-Degree and PageRank of web pages: why do they follow similar power laws?

    NARCIS (Netherlands)

    Litvak, Nelli; Scheinhardt, Willem R.W.; Volkovich, Y.

    2009-01-01

    PageRank is a popularity measure designed by Google to rank Web pages. Experiments confirm that PageRank values obey a power law with the same exponent as In-Degree values. This paper presents a novel mathematical model that explains this phenomenon. The relation between PageRank and In-Degree is…

  5. Learning Structural Classification Rules for Web-page Categorization

    NARCIS (Netherlands)

    Stuckenschmidt, Heiner; Hartmann, Jens; Van Harmelen, Frank

    2002-01-01

    Content-related metadata plays an important role in the effort of developing intelligent web applications. One of the most established forms of providing content-related metadata is the assignment of web pages to content categories. We describe the Spectacle system for classifying individual web…

  6. In-degree and pageRank of web pages: Why do they follow similar power laws?

    NARCIS (Netherlands)

    Litvak, Nelli; Scheinhardt, Willem R.W.; Volkovich, Y.

    The PageRank is a popularity measure designed by Google to rank Web pages. Experiments confirm that the PageRank obeys a 'power law' with the same exponent as the In-Degree. This paper presents a novel mathematical model that explains this phenomenon. The relation between the PageRank and In-Degree…

  7. Enhancing the Ranking of a Web Page in the Ocean of Data

    Directory of Open Access Journals (Sweden)

    Hitesh KUMAR SHARMA

    2013-10-01

    In today's world, the web is considered an ocean of data and information (like text, videos, multimedia, etc.), consisting of millions and millions of web pages linked with each other like a tree. It is often argued that, especially considering the dynamics of the internet, too much time has passed since the scientific work on PageRank for it still to be the basis of the ranking methods of the Google search engine. There is no doubt that within the past years most likely many changes, adjustments and modifications regarding the ranking methods of Google have taken place, but PageRank was absolutely crucial for Google's success, so at least the fundamental concept behind PageRank should still be constitutive. This paper describes the components which affect the ranking of web pages and helps in increasing the popularity of a web site. By adapting these factors, website developers can increase their site's page rank. Within the PageRank concept, the rank of a document is given by the rank of those documents which link to it. Their rank again is given by the rank of documents which link to them. The PageRank of a document is always determined recursively by the PageRank of other documents.
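
    The recursive definition restated in the last sentences is the standard PageRank equation. With damping factor d, N documents, M(p_i) the set of documents linking to p_i, and L(p_j) the number of out-links of p_j, it reads:

        PR(p_i) = \frac{1 - d}{N} + d \sum_{p_j \in M(p_i)} \frac{PR(p_j)}{L(p_j)}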

  8. Measuring consistency of web page design and its effects on performance and satisfaction.

    Science.gov (United States)

    Ozok, A A; Salvendy, G

    2000-04-01

    This study examines the methods for measuring the consistency levels of web pages and the effect of consistency on the performance and satisfaction of the world-wide web (WWW) user. For clarification, a home page is referred to as a single page that is the default page of a web site on the WWW. A web page refers to a single screen that indicates a specific address on the WWW. This study has tested a series of web pages that were mostly hyperlinked. Therefore, the term 'web page' has been adopted for the nomenclature while referring to the objects of which the features were tested. It was hypothesized that participants would perform better and be more satisfied using web pages that have consistent rather than inconsistent interface design; that the overall consistency level of an interface design would significantly correlate with the three elements of consistency, physical, communicational and conceptual consistency; and that physical and communicational consistencies would interact with each other. The hypotheses were tested in a four-group, between-subject design, with 10 participants in each group. The results partially support the hypothesis regarding error rate, but not regarding satisfaction and performance time. The results also support the hypothesis that each of the three elements of consistency significantly contribute to the overall consistency of a web page, and that physical and communicational consistencies interact with each other, while conceptual consistency does not interact with them.

  9. An ant colony optimization based feature selection for web page classification.

    Science.gov (United States)

    Saraç, Esra; Özel, Selma Ayşe

    2014-01-01

    The increased popularity of the web has caused the inclusion of huge amount of information to the web, and as a result of this explosive information growth, automated web page classification systems are needed to improve search engines' performance. Web pages have a large number of features such as HTML/XML tags, URLs, hyperlinks, and text contents that should be considered during an automated classification process. The aim of this study is to reduce the number of features to be used to improve runtime and accuracy of the classification of web pages. In this study, we used an ant colony optimization (ACO) algorithm to select the best features, and then we applied the well-known C4.5, naive Bayes, and k nearest neighbor classifiers to assign class labels to web pages. We used the WebKB and Conference datasets in our experiments, and we showed that using the ACO for feature selection improves both accuracy and runtime performance of classification. We also showed that the proposed ACO based algorithm can select better features with respect to the well-known information gain and chi square feature selection methods.
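
    A much-simplified Python sketch of pheromone-guided feature-subset selection in the spirit of the ACO step; the sampling rule, evaporation rate, and best-subset reinforcement below are generic ACO ingredients, not the authors' exact transition and update rules.

        import random

        def aco_select(features, eval_subset, n_ants=10, n_iter=20,
                       subset_size=5, rho=0.1):
            # Simplified ant colony feature selection. Each ant samples a
            # feature subset with probability proportional to pheromone;
            # the best subset found so far deposits pheromone each iteration.
            # Assumes len(features) >= subset_size and that eval_subset maps
            # a subset to a quality score in [0, 1] (e.g., validation
            # accuracy of a C4.5, naive Bayes, or kNN classifier).
            tau = {f: 1.0 for f in features}
            best, best_score = None, -1.0
            for _ in range(n_iter):
                weights = [tau[f] for f in features]
                for _ in range(n_ants):
                    subset = set()
                    while len(subset) < subset_size:
                        subset.add(random.choices(features, weights=weights)[0])
                    score = eval_subset(subset)
                    if score > best_score:
                        best, best_score = subset, score
                # Evaporate, then reinforce features of the best-so-far subset.
                for f in features:
                    tau[f] *= (1 - rho)
                for f in best:
                    tau[f] += best_score
            return best, best_score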

  10. Readability of the web: a study on 1 billion web pages

    NARCIS (Netherlands)

    de Heus, Marije; Hiemstra, Djoerd

    We have performed a readability study on more than 1 billion web pages. The Automated Readability Index was used to determine the average grade level required to easily comprehend a website. Some of the results are that a 16-year-old can easily understand 50% of the web and an 18-year-old can easily…
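
    The Automated Readability Index used above is a fixed formula over character, word, and sentence counts. A small Python sketch, with naive sentence and word splitting as a simplifying assumption:

        def automated_readability_index(text):
            # ARI = 4.71*(chars/words) + 0.5*(words/sentences) - 21.43,
            # where chars counts letters and digits only. The result
            # approximates the US school grade level needed to read the text.
            words = text.split()
            if not words:
                return 0.0
            sentences = max(1, sum(text.count(c) for c in ".!?"))
            chars = sum(ch.isalnum() for w in words for ch in w)
            return 4.71 * chars / len(words) + 0.5 * len(words) / sentences - 21.43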

  11. The impact of visual layout factors on performance in Web pages: a cross-language study.

    Science.gov (United States)

    Parush, Avi; Shwarts, Yonit; Shtub, Avy; Chandra, M Jeya

    2005-01-01

    Visual layout has a strong impact on performance and is a critical factor in the design of graphical user interfaces (GUIs) and Web pages. Many design guidelines employed in Web page design were inherited from human performance literature and GUI design studies and practices. However, few studies have investigated the more specific patterns of performance with Web pages that may reflect some differences between Web page and GUI design. We investigated interactions among four visual layout factors in Web page design (quantity of links, alignment, grouping indications, and density) in two experiments: one with pages in Hebrew, entailing right-to-left reading, and the other with English pages, entailing left-to-right reading. Some performance patterns (measured by search times and eye movements) were similar between languages. Performance was particularly poor in pages with many links and variable densities, but it improved with the presence of uniform density. Alignment was not shown to be a performance-enhancing factor. The findings are discussed in terms of the similarities and differences in the impact of layout factors between GUIs and Web pages. Actual or potential applications of this research include specific guidelines for Web page design.

  12. Digital Ethnography: Library Web Page Redesign among Digital Natives

    Science.gov (United States)

    Klare, Diane; Hobbs, Kendall

    2011-01-01

    Presented with an opportunity to improve Wesleyan University's dated library home page, a team of librarians employed ethnographic techniques to explore how its users interacted with Wesleyan's current library home page and web pages in general. Based on the data that emerged, a group of library staff and members of the campus' information…

  13. ARL Physics Web Pages: An Evaluation by Established, Transitional and Emerging Benchmarks.

    Science.gov (United States)

    Duffy, Jane C.

    2002-01-01

    Provides an overview of characteristics among Association of Research Libraries (ARL) physics Web pages. Examines current academic Web literature and from that develops six benchmarks to measure physics Web pages: ease of navigation; logic of presentation; representation of all forms of information; engagement of the discipline; interactivity of…

  14. Network and User-Perceived Performance of Web Page Retrievals

    Science.gov (United States)

    Kruse, Hans; Allman, Mark; Mallasch, Paul

    1998-01-01

    The development of the HTTP protocol has been driven by the need to improve the network performance of the protocol by allowing the efficient retrieval of multiple parts of a web page without the need for multiple simultaneous TCP connections between a client and a server. We suggest that the retrieval of multiple page elements sequentially over a single TCP connection may result in a degradation of the perceived performance experienced by the user. We attempt to quantify this perceived degradation through the use of a model which combines a web retrieval simulation and an analytical model of TCP operation. Starting with the current HTTP/1.1 specification, we first suggest a client-side heuristic to improve the perceived transfer performance. We show that the perceived speed of the page retrieval can be increased without sacrificing data transfer efficiency. We then propose a new client/server extension to the HTTP/1.1 protocol to allow for the interleaving of page element retrievals. We finally address the issue of the display of advertisements on web pages, and in particular suggest a number of mechanisms which can make efficient use of IP multicast to send advertisements to a number of clients within the same network.

  15. Document representations for classification of short web-page descriptions

    Directory of Open Access Journals (Sweden)

    Radovanović Miloš

    2008-01-01

    Motivated by applying Text Categorization to classification of Web search results, this paper describes an extensive experimental study of the impact of bag-of-words document representations on the performance of five major classifiers: Naïve Bayes, SVM, Voted Perceptron, kNN and C4.5. The texts, representing short Web-page descriptions sorted into a large hierarchy of topics, are taken from the dmoz Open Directory Web-page ontology, and classifiers are trained to automatically determine the topics which may be relevant to a previously unseen Web-page. Different transformations of input data: stemming, normalization, logtf and idf, together with dimensionality reduction, are found to have a statistically significant improving or degrading effect on classification performance measured by classical metrics: accuracy, precision, recall, F1 and F2. The emphasis of the study is not on determining the best document representation which corresponds to each classifier, but rather on describing the effects of every individual transformation on classification, together with their mutual relationships.

  16. Social Responsibility and Corporate Web Pages: Self-Presentation or Agenda-Setting?

    Science.gov (United States)

    Esrock, Stuart L.; Leichty, Greg B.

    1998-01-01

    Examines how corporate entities use the Web to present themselves as socially responsible citizens and to advance policy positions. Samples randomly "Fortune 500" companies, revealing that, although 90% had Web pages and 82% of the sites addressed a corporate social responsibility issue, few corporations used their pages to monitor…

  17. Does content affect whether users remember that Web pages were hyperlinked?

    Science.gov (United States)

    Jones, Keith S; Ballew, Timothy V; Probst, C Adam

    2008-10-01

    We determined whether memory for hyperlinks improved when they represented relations between the contents of the Web pages. J. S. Farris (2003) found that memory for hyperlinks improved when they represented relations between the contents of the Web pages. However, Farris's (2003) participants could have used their knowledge of site content to answer questions about relations that were instantiated via the site's content and its hyperlinks. In Experiment 1, users navigated a Web site and then answered questions about relations that were instantiated only via content, only via hyperlinks, and via content and hyperlinks. Unlike Farris (2003), we split the latter into two sets. One asked whether certain content elements were related, and the other asked whether certain Web pages were hyperlinked. Experiment 2 replicated Experiment 1 with one modification: The questions that were asked about relations instantiated via content and hyperlinks were changed so that each question's wrong answer was also related to the question's target. Memory for hyperlinks improved when they represented relations instantiated within the content of the Web pages. This was true when (a) questions about content and hyperlinks were separated (Experiment 1) and (b) each question's wrong answer was also related to the question's target (Experiment 2). The accuracy of users' mental representations of local architecture depended on whether hyperlinks were related to the site's content. Designers who want users to remember hyperlinks should associate those hyperlinks with content that reflects the relation between the contents on the Web pages.

  18. AUTOMATIC TAGGING OF PERSIAN WEB PAGES BASED ON N-GRAM LANGUAGE MODELS USING MAPREDUCE

    Directory of Open Access Journals (Sweden)

    Saeed Shahrivari

    2015-07-01

    Full Text Available Page tagging is one of the most important facilities for increasing the accuracy of information retrieval in the web. Tags are simple pieces of data that usually consist of one or several words, and briefly describe a page. Tags provide useful information about a page and can be used for boosting the accuracy of searching, document clustering, and result grouping. The most accurate solution to page tagging is using human experts. However, when the number of pages is large, humans cannot be used, and some automatic solutions should be used instead. We propose a solution called PerTag which can automatically tag a set of Persian web pages. PerTag is based on n-gram models and uses the tf-idf method plus some effective Persian language rules to select proper tags for each web page. Since our target is huge sets of web pages, PerTag is built on top of the MapReduce distributed computing framework. We used a set of more than 500 million Persian web pages during our experiments, and extracted tags for each page using a cluster of 40 machines. The experimental results show that PerTag is both fast and accurate.
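
    Setting aside the MapReduce distribution and the Persian-specific language rules, the core tag-selection step amounts to ranking a page's n-grams by tf-idf. A minimal sketch with scikit-learn and an invented three-page corpus:

    ```python
    from sklearn.feature_extraction.text import TfidfVectorizer

    pages = [
        "football match results and league standings",
        "stock market analysis and daily trading tips",
        "football transfer news and match highlights",
    ]

    vec = TfidfVectorizer(ngram_range=(1, 2))   # unigrams and bigrams as candidate tags
    X = vec.fit_transform(pages)
    terms = vec.get_feature_names_out()

    for i in range(len(pages)):
        row = X[i].toarray().ravel()
        top = row.argsort()[::-1][:3]           # the three highest tf-idf n-grams become tags
        print(f"page {i}: tags = {[terms[j] for j in top]}")
    ```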

  19. Improving Web Page Retrieval using Search Context from Clicked Domain Names

    NARCIS (Netherlands)

    Li, R.

    Search context is a crucial factor that helps to understand a user’s information need in ad-hoc Web page retrieval. A query log of a search engine contains rich information on issued queries and their corresponding clicked Web pages. The clicked data implies its relevance to the query and can be

  20. Identification of Malicious Web Pages by Inductive Learning

    Science.gov (United States)

    Liu, Peishun; Wang, Xuefang

    Malicious web pages have become an increasing threat to computer systems in recent years. Traditional anti-virus techniques focus typically on detection of the static signatures of Malware and are ineffective against these new threats because they cannot deal with zero-day attacks. In this paper, a novel classification method for detecting malicious web pages is presented. This method performs generalization and specialization of attack patterns based on inductive learning, which can be used for updating and expanding the knowledge database. The attack pattern is established from an example and generalized by inductive learning, which can be used to detect unknown attacks whose behavior is similar to the example.
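
    The generalization step can be illustrated with a toy example: tokens shared by all known malicious examples are kept, and differing tokens are replaced by a wildcard, so the learned pattern also matches unseen variants. The URLs and the path-based tokenization below are invented simplifications, not the paper's actual pattern language.

    ```python
    # Generalize a set of example attack strings into a single pattern.
    def generalize(examples):
        token_lists = [e.split("/") for e in examples]
        assert len({len(t) for t in token_lists}) == 1, "same-length paths only"
        pattern = []
        for parts in zip(*token_lists):
            # keep tokens that agree across all examples, wildcard the rest
            pattern.append(parts[0] if len(set(parts)) == 1 else "*")
        return "/".join(pattern)

    attacks = [
        "evil.example/download/payload1.exe",
        "evil.example/download/payload2.exe",
    ]
    print(generalize(attacks))   # evil.example/download/*
    ```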

  1. Referencing web pages and e-journals.

    Science.gov (United States)

    Bryson, David

    2013-12-01

    One of the areas that can confuse students and authors alike is how to reference web pages and electronic journals (e-journals). The aim of this professional development article is to go back to first principles for referencing and to show, with examples, how these sources should be referenced.

  2. Automatic Hidden-Web Table Interpretation by Sibling Page Comparison

    Science.gov (United States)

    Tao, Cui; Embley, David W.

    The longstanding problem of automatic table interpretation still eludes us. Its solution would not only be an aid to table processing applications such as large volume table conversion, but would also be an aid in solving related problems such as information extraction and semi-structured data management. In this paper, we offer a conceptual modeling solution for the common special case in which so-called sibling pages are available. The sibling pages we consider are pages on the hidden web, commonly generated from underlying databases. We compare them to identify and connect nonvarying components (category labels) and varying components (data values). We tested our solution using more than 2,000 tables in source pages from three different domains—car advertisements, molecular biology, and geopolitical information. Experimental results show that the system can successfully identify sibling tables, generate structure patterns, interpret tables using the generated patterns, and automatically adjust the structure patterns, if necessary, as it processes a sequence of hidden-web pages. For these activities, the system was able to achieve an overall F-measure of 94.5%.
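
    The comparison at the heart of the approach can be sketched in a few lines: aligning the cells of two sibling tables, strings that stay constant across pages are taken as category labels, and strings that vary are taken as data values. The rows below are invented stand-ins for two database-generated hidden-web pages.

    ```python
    # Two sibling tables rendered as (cell, cell) rows; hypothetical data.
    rows_a = [("Make", "Toyota"), ("Model", "Corolla"), ("Price", "9500")]
    rows_b = [("Make", "Honda"),  ("Model", "Civic"),   ("Price", "8700")]

    flat_a = [cell for row in rows_a for cell in row]
    flat_b = [cell for row in rows_b for cell in row]

    labels = [a for a, b in zip(flat_a, flat_b) if a == b]       # nonvarying -> category labels
    values = [(a, b) for a, b in zip(flat_a, flat_b) if a != b]  # varying -> data values

    print("category labels:", labels)       # ['Make', 'Model', 'Price']
    print("data values:", values)           # [('Toyota', 'Honda'), ('Corolla', 'Civic'), ('9500', '8700')]
    ```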

  3. Search Engine Ranking, Quality, and Content of Web Pages That Are Critical Versus Noncritical of Human Papillomavirus Vaccine.

    Science.gov (United States)

    Fu, Linda Y; Zook, Kathleen; Spoehr-Labutta, Zachary; Hu, Pamela; Joseph, Jill G

    2016-01-01

    Online information can influence attitudes toward vaccination. The aim of the present study was to provide a systematic evaluation of the search engine ranking, quality, and content of Web pages that are critical versus noncritical of human papillomavirus (HPV) vaccination. We identified HPV vaccine-related Web pages with the Google search engine by entering 20 terms. We then assessed each Web page for critical versus noncritical bias and for the following quality indicators: authorship disclosure, source disclosure, attribution of at least one reference, currency, exclusion of testimonial accounts, and readability level less than ninth grade. We also determined Web page comprehensiveness in terms of mention of 14 HPV vaccine-relevant topics. Twenty searches yielded 116 unique Web pages. HPV vaccine-critical Web pages comprised roughly a third of the top-, top 5-, and top 10-ranking Web pages. The prevalence of HPV vaccine-critical Web pages was higher for queries that included term modifiers in addition to root terms. Web pages critical of HPV vaccine overall had a lower quality score than those with a noncritical bias. HPV vaccine-critical Web pages thus achieve high rankings in search engine queries despite being of lower quality and less comprehensive than noncritical Web pages. Copyright © 2016 Society for Adolescent Health and Medicine. Published by Elsevier Inc. All rights reserved.

  4. Design of a Web Page as a complement of educative innovation through MOODLE

    Science.gov (United States)

    Mendiola Ubillos, M. A.; Aguado Cortijo, Pedro L.

    2010-05-01

    In the context of using Information Technology to impart knowledge, and to establish the MOODLE system as a support and complementary tool to on-site educational methodology (b-learning), a Web page was designed for Agronomic and Food Industry Crops (Plantas de interés Agroalimentario) during the 2006-07 course. This web was inserted in the Technical University of Madrid (Universidad Politécnica de Madrid) computer system to give students a first contact with the contents of this subject. On this page the objectives and methodology, personal work planning, and the subject program plus the activities are shown. At another web site, the evaluation criteria and recommended bibliography are located. The objective of this web page has been to make the information necessary for the learning process more transparent and accessible, and to present it in a more attractive frame. The page has been updated and modified in each academic course offered since its first implementation. In some cases we have added new specific links to increase its usefulness. At the end of each course a test is applied to the students that take this subject, asking which elements they would like to modify, delete, or add to the web page. In this way the direct users give their point of view and help to improve the web page each course.

  5. Categorization of web pages - Performance enhancement to search engine

    Digital Repository Service at National Institute of Oceanography (India)

    Lakshminarayana, S.


  6. Developing a web page: bringing clinics online.

    Science.gov (United States)

    Peterson, Ronnie; Berns, Susan

    2004-01-01

    Introducing clinical staff education, along with new policies and procedures, to over 50 different clinical sites can be a challenge. As any staff educator will confess, getting people to attend an educational inservice session can be difficult. Clinical staff request training, but no one has time to attend training sessions. Putting the training along with the policies and other information into "neat" concise packages via the computer and over the company's intranet was the way to go. However, how do you bring the clinics online when some of the clinical staff may still be reluctant to turn on their computers for anything other than to gather laboratory results? Developing an easy, fun, and accessible Web page was the answer. This article outlines the development of the first training Web page at the University of Wisconsin Medical Foundation, Madison, WI.

  7. A reverse engineering approach for automatic annotation of Web pages

    NARCIS (Netherlands)

    R. de Virgilio (Roberto); F. Frasincar (Flavius); W. Hop (Walter); S. Lachner (Stephan)

    2013-01-01

    The Semantic Web is gaining increasing interest to fulfill the need of sharing, retrieving, and reusing information. Since Web pages are designed to be read by people, not machines, searching and reusing information on the Web is a difficult task without human participation. To this aim

  8. Environment: General; Grammar & Usage; Money Management; Music History; Web Page Creation & Design.

    Science.gov (United States)

    Web Feet, 2001

    2001-01-01

    Describes Web site resources for elementary and secondary education in the topics of: environment, grammar, money management, music history, and Web page creation and design. Each entry includes an illustration of a sample page on the site and an indication of the grade levels for which it is appropriate. (AEF)

  9. Key-phrase based classification of public health web pages.

    Science.gov (United States)

    Dolamic, Ljiljana; Boyer, Célia

    2013-01-01

    This paper describes and evaluates a public health web page classification model based on key phrase extraction and matching. Easily extensible both in terms of new classes and new languages, this method proves to be a good solution for text classification in the face of a total lack of training data. To evaluate the proposed solution we have used a small collection of public health related web pages created by a double blind manual classification. Our experiments have shown that by choosing an adequate threshold value, the desired value for either precision or recall can be achieved.
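
    The matching scheme described here needs no training data: each class is described by a hand-built phrase list, and a page is assigned every class whose matching score clears a threshold. A minimal sketch; the phrase lists and threshold below are illustrative assumptions, not the paper's actual lexicon.

    ```python
    # Hypothetical class lexicons for a key-phrase based classifier.
    CLASS_PHRASES = {
        "nutrition":   ["balanced diet", "vitamin", "calorie intake"],
        "vaccination": ["vaccine", "immunization schedule", "booster dose"],
    }

    def classify(text, threshold=0.34):
        """Return every class whose fraction of matched phrases clears the threshold."""
        text = text.lower()
        return [cls for cls, phrases in CLASS_PHRASES.items()
                if sum(p in text for p in phrases) / len(phrases) >= threshold]

    page = "The immunization schedule recommends a booster dose for adults."
    print(classify(page))   # ['vaccination']
    ```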

  10. Web Project Management

    OpenAIRE

    Suralkar, Sunita; Joshi, Nilambari; Meshram, B B

    2013-01-01

    This paper describes the need for Web project management and the fundamentals of project management for web projects: what it is, why projects go wrong, and what's different about web projects. We also discuss Cost Estimation Techniques based on Size Metrics. Though Web project development is similar to traditional software development applications, the special characteristics of Web application development require adaptation of many software engineering approaches or even development of comple...

  11. A framework for automatic annotation of web pages using the Google Rich Snippets vocabulary

    NARCIS (Netherlands)

    Meer, van der J.; Boon, F.; Hogenboom, F.P.; Frasincar, F.; Kaymak, U.

    2011-01-01

    One of the latest developments for the Semantic Web is Google Rich Snippets, a service that uses Web page annotations for displaying search results in a visually appealing manner. In this paper we propose the Automatic Review Recognition and annOtation of Web pages (ARROW) framework, which is able

  12. Building Interactive Simulations in Web Pages without Programming.

    Science.gov (United States)

    Mailen Kootsey, J; McAuley, Grant; Bernal, Julie

    2005-01-01

    A software system is described for building interactive simulations and other numerical calculations in Web pages. The system is based on a new Java-based software architecture named NumberLinX (NLX) that isolates each function required to build the simulation so that a library of reusable objects could be assembled. The NLX objects are integrated into a commercial Web design program for coding-free page construction. The model description is entered through a wizard-like utility program that also functions as a model editor. The complete system permits very rapid construction of interactive simulations without coding. A wide range of applications is possible with the system beyond interactive calculations, including remote data collection and processing and collaboration over a network.

  13. A Quantitative Comparison of Semantic Web Page Segmentation Approaches

    NARCIS (Netherlands)

    Kreuzer, Robert; Hage, J.; Feelders, A.J.

    2015-01-01

    We compare three known semantic web page segmentation algorithms, each serving as an example of a particular approach to the problem, and one self-developed algorithm, WebTerrain, that combines two of the approaches. We compare the performance of the four algorithms for a large benchmark of modern

  14. Modeling user navigation behavior in web by colored Petri nets to determine the user's interest in recommending web pages

    Directory of Open Access Journals (Sweden)

    Mehdi Sadeghzadeh

    2013-01-01

    Full Text Available One of the existing challenges in web personalization is increasing the efficiency of a website in meeting users' requirements for the content they seek. Information associated with the current user's navigation behavior, together with data obtained from previous users' interactions with the web, can provide the keys needed to recommend services, products, and the information users require. This study aims at presenting a formal model based on colored Petri nets to identify the present user's interest, which is utilized to recommend the most appropriate pages ahead. In the proposed design, recommendation of pages takes into account information obtained from previous users' profiles as well as the current session of the present user. The model updates the proposed pages as the user clicks through the web pages. Moreover, an example web is modeled using CPN Tools. The results of the simulation show that this design improves the precision factor: the dynamic recommendations improve the precision criterion by 15% over the static method.
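
    The colored Petri net machinery is beyond a short example, but the transition probabilities that drive such recommendations can be illustrated with a plain first-order Markov sketch over invented click sessions — a deliberate simplification, not the paper's CPN model.

    ```python
    from collections import Counter, defaultdict

    # Hypothetical navigation sessions from previous users.
    sessions = [
        ["home", "products", "cart"],
        ["home", "products", "reviews"],
        ["home", "about"],
    ]

    # Estimate click-transition counts between consecutive pages.
    transitions = defaultdict(Counter)
    for s in sessions:
        for cur, nxt in zip(s, s[1:]):
            transitions[cur][nxt] += 1

    def recommend(page, k=2):
        """Top-k next pages ranked by estimated transition probability."""
        counts = transitions[page]
        total = sum(counts.values())
        return [(nxt, round(c / total, 2)) for nxt, c in counts.most_common(k)]

    print(recommend("home"))      # [('products', 0.67), ('about', 0.33)]
    print(recommend("products"))  # [('cart', 0.5), ('reviews', 0.5)]
    ```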

  15. Exploring Cultural Variation in Eye Movements on a Web Page between Americans and Koreans

    Science.gov (United States)

    Yang, Changwoo

    2009-01-01

    This study explored differences in eye movement on a Web page between members of two different cultures to provide insight and guidelines for implementation of global Web site development. More specifically, the research examines whether differences of eye movement exist between the two cultures (American vs. Korean) when viewing a Web page, and…

  16. Analysis and Testing of Ajax-based Single-page Web Applications

    NARCIS (Netherlands)

    Mesbah, A.

    2009-01-01

    This dissertation has focused on better understanding the shifting web paradigm and the consequences of moving from the classical multi-page model to an Ajax-based single-page style. Specifically to that end, this work has examined this new class of software from three main software engineering

  17. Web pages: What can you see in a single fixation?

    Science.gov (United States)

    Jahanian, Ali; Keshvari, Shaiyan; Rosenholtz, Ruth

    2018-01-01

    Research in human vision suggests that in a single fixation, humans can extract a significant amount of information from a natural scene, e.g. the semantic category, spatial layout, and object identities. This ability is useful, for example, for quickly determining location, navigating around obstacles, detecting threats, and guiding eye movements to gather more information. In this paper, we ask a new question: What can we see at a glance at a web page - an artificial yet complex "real world" stimulus? Is it possible to notice the type of website, or where the relevant elements are, with only a glimpse? We find that observers, fixating at the center of a web page shown for only 120 milliseconds, are well above chance at classifying the page into one of ten categories. Furthermore, this ability is supported in part by text that they can read at a glance. Users can also understand the spatial layout well enough to reliably localize the menu bar and to detect ads, even though the latter are often camouflaged among other graphical elements. We discuss the parallels between web page gist and scene gist, and the implications of our findings for both vision science and human-computer interaction.

  18. Detection of spam web page using content and link-based techniques

    Indian Academy of Sciences (India)

    Spam pages are generally insufficient and inappropriate results for the user. ... kinds of Web spamming techniques: content spam and link spam. 1. Content spam: The ... of the spam pages are machine generated and hence the technique of ...

  19. Arabic web pages clustering and annotation using semantic class features

    Directory of Open Access Journals (Sweden)

    Hanan M. Alghamdi

    2014-12-01

    Full Text Available Effectively managing the great amount of data on Arabic web pages and enabling the classification of relevant information are very important research problems. Studies on sentiment text mining have been very limited in the Arabic language because they need to involve deep semantic processing. Therefore, in this paper, we aim to retrieve machine-understandable data with the help of a Web content mining technique to detect covert knowledge within these data. We propose an approach to achieve clustering with semantic similarities. This approach comprises integrating k-means document clustering with semantic feature extraction and document vectorization to group Arabic web pages according to semantic similarities and then show the semantic annotation. The document vectorization helps to transform text documents into a semantic class probability distribution or semantic class density. To reach semantic similarities, the approach extracts the semantic class features and integrates them into the similarity weighting schema. The quality of the clustering result has been evaluated using the purity and the mean intra-cluster distance (MICD) measures. We have evaluated the proposed approach on a set of common Arabic news web pages. We have acquired favorable clustering results that are effective in minimizing the MICD, increasing the purity and lowering the runtime.
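
    The vectorization-then-cluster pipeline can be sketched compactly: each document is mapped to a semantic-class probability distribution and the distributions are grouped with k-means. The English class lexicons below are invented placeholders; the paper derives its semantic class features from deeper processing of Arabic text.

    ```python
    import numpy as np
    from sklearn.cluster import KMeans

    # Hypothetical class lexicons standing in for extracted semantic class features.
    CLASS_LEXICON = {
        "politics": {"election", "parliament", "minister"},
        "sports":   {"match", "team", "league"},
        "economy":  {"market", "inflation", "trade"},
    }

    def to_class_distribution(text):
        """Map a document to a probability distribution over semantic classes."""
        tokens = text.lower().split()
        counts = np.array([sum(t in lex for t in tokens)
                           for lex in CLASS_LEXICON.values()], dtype=float)
        return counts / counts.sum() if counts.sum() else counts

    docs = [
        "the minister addressed parliament before the election",
        "the team won the league match",
        "market trade slowed as inflation rose",
        "parliament debated the election law",
    ]
    X = np.array([to_class_distribution(d) for d in docs])
    print(KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X))
    ```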

  20. Web Page Classification Method Using Neural Networks

    Science.gov (United States)

    Selamat, Ali; Omatu, Sigeru; Yanagimoto, Hidekazu; Fujinaka, Toru; Yoshioka, Michifumi

    Automatic categorization is the only viable method to deal with the scaling problem of the World Wide Web (WWW). In this paper, we propose a news web page classification method (WPCM). The WPCM uses a neural network with inputs obtained by both the principal components and class profile-based features (CPBF). Each news web page is represented by the term-weighting scheme. As the number of unique words in the collection set is large, principal component analysis (PCA) has been used to select the most relevant features for the classification. Then the final output of the PCA is combined with the feature vectors from the class profile, which contains the most regular words in each class, before feeding them to the neural networks. We have manually selected the most regular words that exist in each class and weighted them using an entropy weighting scheme. The fixed number of regular words from each class is used as a feature vector together with the reduced principal components from the PCA. These feature vectors are then used as the input to the neural networks for classification. The experimental evaluation demonstrates that the WPCM method provides acceptable classification accuracy with the sports news datasets.
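
    A rough sketch of this feature construction: tf-idf vectors are reduced with PCA and concatenated with class-profile features before training a neural network. The toy corpus and hand-picked "regular words" are invented, and the simple word counts below stand in for the paper's entropy weighting scheme.

    ```python
    import numpy as np
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.decomposition import PCA
    from sklearn.neural_network import MLPClassifier

    docs = ["goal scored in the final match", "stock prices fell sharply",
            "the striker scored twice", "markets rallied on trade news"]
    labels = ["sport", "business", "sport", "business"]

    # Hypothetical "most regular words" per class (class profile).
    CLASS_PROFILE = {"sport": {"goal", "match", "striker"},
                     "business": {"stock", "markets", "trade"}}

    tfidf = TfidfVectorizer().fit_transform(docs).toarray()
    pcs = PCA(n_components=2).fit_transform(tfidf)          # reduced principal components
    profile = np.array([[sum(w in d.split() for w in words)
                         for words in CLASS_PROFILE.values()] for d in docs])

    X = np.hstack([pcs, profile])                            # combined feature vectors
    clf = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000,
                        random_state=0).fit(X, labels)
    print(clf.predict(X))
    ```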

  1. Around power law for PageRank components in Buckley-Osthus model of web graph

    OpenAIRE

    Gasnikov, Alexander; Zhukovskii, Maxim; Kim, Sergey; Noskov, Fedor; Plaunov, Stepan; Smirnov, Daniil

    2017-01-01

    In the paper we investigate the power law for PageRank components in the Buckley-Osthus model for the web graph. We compare different numerical methods for PageRank calculation and carry out extensive numerical experiments with the best of them. These experiments confirm the hypothesis about the power law. At the end we discuss a real model of web ranking based on the classical PageRank approach.
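
    The quantity under study is the stationary vector of the standard PageRank iteration. A minimal power-iteration sketch on a tiny invented graph (the paper benchmarks several numerical methods on much larger Buckley-Osthus random graphs):

    ```python
    import numpy as np

    links = {0: [1, 2], 1: [2], 2: [0], 3: [2]}   # node -> outgoing links
    n, d = 4, 0.85                                 # graph size, damping factor

    # Column-stochastic transition matrix: each page splits its rank
    # uniformly over its outlinks.
    P = np.zeros((n, n))
    for src, outs in links.items():
        for dst in outs:
            P[dst, src] = 1.0 / len(outs)

    r = np.full(n, 1.0 / n)
    for _ in range(100):                           # power iteration
        r = (1 - d) / n + d * P @ r
    print(r)                                       # PageRank components
    ```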

  2. First U.S. Web page went up 10 years ago

    CERN Multimedia

    Kornblum, J

    2001-01-01

    Wednesday marks the 10th anniversary of the first U.S. Web page, created by Paul Kunz, a physicist at SLAC. He says that if World Wide Web creator Tim Berners-Lee hadn't been so persistent in getting him to attend a demonstration meeting on a visit to CERN, the Web wouldn't have taken off when it did -- maybe not at all.

  3. Ecosystem Food Web Lift-The-Flap Pages

    Science.gov (United States)

    Atwood-Blaine, Dana; Rule, Audrey C.; Morgan, Hannah

    2016-01-01

    In the lesson on which this practical article is based, third grade students constructed a "lift-the-flap" page to explore food webs on the prairie. The moveable papercraft focused student attention on prairie animals' external structures and how the inferred functions of those structures could support further inferences about the…

  4. Design of an Interface for Page Rank Calculation using Web Link Attributes Information

    Directory of Open Access Journals (Sweden)

    Jeyalatha SIVARAMAKRISHNAN

    2010-01-01

    Full Text Available This paper deals with Web Structure Mining and different structure mining algorithms like PageRank, HITS, Trust Rank and Sel-HITS. The functioning of these algorithms is discussed. An incremental algorithm for the calculation of PageRank using an interface has been formulated. This algorithm makes use of Web link attribute information as key parameters and has been implemented using the visibility and position of a link. The application of a Web Structure Mining algorithm in an academic search application has been discussed. The present work can be a useful input to Web users, faculty, students and Web administrators in a university environment.
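
    The key idea, weighting a page's outlinks by attributes such as visibility and position rather than splitting rank uniformly, can be sketched as follows. The attribute scores and their interpretation are invented for illustration.

    ```python
    # (source, target, visible, position_weight); position 1.0 = top of page.
    links = [
        ("A", "B", True, 1.0),
        ("A", "C", True, 0.4),
        ("A", "D", False, 0.9),   # hidden links contribute nothing
    ]

    def transition_weights(outlinks):
        """Attribute-weighted transition probabilities for one source page."""
        raw = {t: (pos if vis else 0.0) for _, t, vis, pos in outlinks}
        total = sum(raw.values()) or 1.0
        return {t: w / total for t, w in raw.items()}

    print(transition_weights(links))  # {'B': ~0.71, 'C': ~0.29, 'D': 0.0}
    ```

    In a full computation these weights would replace the uniform 1/outdegree split inside a standard PageRank iteration.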

  5. Standards opportunities around data-bearing Web pages.

    Science.gov (United States)

    Karger, David

    2013-03-28

    The evolving Web has seen ever-growing use of structured data, thanks to the way it enhances information authoring, querying, visualization and sharing. To date, however, most structured data authoring and management tools have been oriented towards programmers and Web developers. End users have been left behind, unable to leverage structured data for information management and communication as well as professionals. In this paper, I will argue that many of the benefits of structured data management can be provided to end users as well. I will describe an approach and tools that allow end users to define their own schemas (without knowing what a schema is), manage data and author (not program) interactive Web visualizations of that data using the Web tools with which they are already familiar, such as plain Web pages, blogs, wikis and WYSIWYG document editors. I will describe our experience deploying these tools and some lessons relevant to their future evolution.

  6. Collaborative Design of World Wide Web Pages: A Case Study.

    Science.gov (United States)

    Andrew, Paige G; Musser, Linda R.

    1997-01-01

    This case study of the collaborative design of an earth science World Wide Web page at Pennsylvania State University highlights the role of librarians. Discusses the original Web site and links, planning, the intended audience, and redesign and recommended changes; and considers the potential contributions of librarians. (LRW)

  7. Project Management - Development of course materiale as WEB pages

    DEFF Research Database (Denmark)

    Thorsteinsson, Uffe; Bjergø, Søren

    1997-01-01

    Development of Internet pages with lesson plans, slideshows, links, a conference system and an interactive student section for communication among students and with the teacher as well.

  8. RDFa Primer, Embedding Structured Data in Web Pages

    NARCIS (Netherlands)

    institution W3C; M. Birbeck (Mark); not CWI et al

    2007-01-01

    Current Web pages, written in XHTML, contain inherent structured data: calendar events, contact information, photo captions, song titles, copyright licensing information, etc. When authors and publishers can express this data precisely, and when tools can read it robustly, a new world of

  9. A step-by-step solution for embedding user-controlled cines into educational Web pages.

    Science.gov (United States)

    Cornfeld, Daniel

    2008-03-01

    The objective of this article is to introduce a simple method for embedding user-controlled cines into a Web page using a simple JavaScript. Step-by-step instructions are included and the source code is made available. This technique allows the creation of portable Web pages that allow the user to scroll through cases as if seated at a PACS workstation. A simple JavaScript allows scrollable image stacks to be included on Web pages. With this technique, you can quickly and easily incorporate entire stacks of CT or MR images into online teaching files. This technique has the potential for use in case presentations, online didactics, teaching archives, and resident testing.

  10. Teaching Materials to Enhance the Visual Expression of Web Pages for Students Not in Art or Design Majors

    Science.gov (United States)

    Ariga, T.; Watanabe, T.

    2008-01-01

    The explosive growth of the Internet has made the knowledge and skills for creating Web pages into general subjects that all students should learn. It is now common to teach the technical side of the production of Web pages and many teaching materials have been developed. However, teaching the aesthetic side of Web page design has been neglected,…

  11. Citations to Web pages in scientific articles: the permanence of archived references.

    Science.gov (United States)

    Thorp, Andrea W; Schriger, David L

    2011-02-01

    We validate the use of archiving Internet references by comparing the accessibility of published uniform resource locators (URLs) with corresponding archived URLs over time. We scanned the "Articles in Press" section in Annals of Emergency Medicine from March 2009 through June 2010 for Internet references in research articles. If an Internet reference produced the authors' expected content, the Web page was archived with WebCite (http://www.webcitation.org). Because the archived Web page does not change, we compared it with the original URL to determine whether the original Web page had changed. We attempted to access each original URL and archived Web site URL at 3-month intervals from the time of online publication during an 18-month study period. Once a URL no longer existed or failed to contain the original authors' expected content, it was excluded from further study. The number of original URLs and archived URLs that remained accessible over time was totaled and compared. A total of 121 articles were reviewed and 144 Internet references were found within 55 articles. Of the original URLs, 15% (21/144; 95% confidence interval [CI] 9% to 21%) were inaccessible at publication. During the 18-month observation period, there was no loss of archived URLs (apart from the 4% [5/123; 95% CI 2% to 9%] that could not be archived), whereas 35% (49/139) of the original URLs were lost (46% loss; 95% CI 33% to 61% by the Kaplan-Meier method; difference between curves statistically significant). Archiving a Web page at publication can help preserve the authors' expected information. Copyright © 2010 American College of Emergency Physicians. Published by Mosby, Inc. All rights reserved.
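
    The periodic accessibility checks behind these numbers are straightforward to script. A minimal sketch using the Python requests library, with placeholder URLs; the study's comparison of page content against the archived snapshot is omitted here.

    ```python
    import requests

    # Placeholder (original URL, archived URL) pairs; not the study's data.
    references = [
        ("http://example.com/cited-page", "http://www.webcitation.org/EXAMPLE"),
    ]

    for original, archived in references:
        for label, url in (("original", original), ("archived", archived)):
            try:
                # A 200 response is treated as "still accessible".
                ok = requests.get(url, timeout=10).status_code == 200
            except requests.RequestException:
                ok = False
            print(f"{label:8s} {url}: {'accessible' if ok else 'lost'}")
    ```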

  12. Investigating Large-Scale Internet Abuse Through Web Page Classification

    OpenAIRE

    Der, Matthew Francis

    2015-01-01

    The Internet is rife with abuse: examples include spam, phishing, malicious advertising, DNS abuse, search poisoning, click fraud, and so on. To detect, investigate, and defend against such abuse, security efforts frequently crawl large sets of Web sites that need to be classified into categories, e.g., the attacker behind the abuse or the type of abuse. Domain expertise is often required at first, but classifying thousands to even millions of Web pages manually is infeasible. In this disse...

  13. Teaching E-Commerce Web Page Evaluation and Design: A Pilot Study Using Tourism Destination Sites

    Science.gov (United States)

    Susser, Bernard; Ariga, Taeko

    2006-01-01

    This study explores a teaching method for improving business students' skills in e-commerce page evaluation and making Web design majors aware of business content issues through cooperative learning. Two groups of female students at a Japanese university studying either tourism or Web page design were assigned tasks that required cooperation to…

  14. Automatic Removal of Advertising from Web-Page Display / Extended Abstract

    OpenAIRE

    Rowe, Neil C.; Coffman, Jim; Degirmenci, Yilmaz; Hall, Scott; Lee, Shong; Williams, Clifton

    2002-01-01

    Joint Conference on Digital Libraries ’02, July 8-12, Portland, Oregon. The usefulness of the World Wide Web as a digital library of precise and reliable information is reduced by the increasing presence of advertising on Web pages. But no one is required to read or see advertising, and this cognitive censorship can be automated by software. Such filters can be useful to the U.S. government which must permit its employees to use the Web but which is prohibited by law from endorsing c...

  15. Searchers' relevance judgments and criteria in evaluating Web pages in a learning style perspective

    DEFF Research Database (Denmark)

    Papaeconomou, Chariste; Zijlema, Annemarie F.; Ingwersen, Peter

    2008-01-01

    The paper presents the results of a case study of searchers' relevance criteria used for assessments of Web pages in a perspective of learning style. 15 test persons participated in the experiments, based on two simulated work tasks that provided cover stories to trigger their information needs. Two learning styles were examined: Global and Sequential learners. The study applied eye-tracking for the observation of relevance hot spots on Web pages, learning style index analysis, and post-search interviews to gain more in-depth information on relevance behavior. Findings reveal that, with respect to use of relevance criteria between the two learning styles, differences are statistically insignificant. When interviewed in retrospect, the resulting profiles tend to become even more similar across learning styles, but a shift occurs from instant assessments, with content features of web pages replacing topicality judgments as the predominant relevance criteria.

  16. Het WEB leert begrijpen

    CERN Multimedia

    Stroeykens, Steven

    2004-01-01

    The Web could be much more useful if computers understood some of the information on Web pages. That is the goal of the "semantic Web", a project in which, among others, Tim Berners-Lee, the inventor of the original Web, takes part.

  17. Evaluating the usability of web pages: a case study

    NARCIS (Netherlands)

    Lautenbach, M.A.E.; Schegget, I.E. ter; Schoute, A.E.; Witteman, C.L.M.

    1999-01-01

    An evaluation of the Utrecht University website was carried out with 240 students. New criteria were drawn from the literature and operationalized for the study. These criteria are surveyability and findability. Web pages can be said to satisfy a usability criterion if their efficiency and

  18. Internet resources and web pages for pediatric surgeons.

    Science.gov (United States)

    Lugo-Vicente, H

    2000-02-01

    The Internet, the largest network of connected computers, provides immediate, dynamic, and downloadable information. By re-architecting the workplace and becoming familiar with Internet resources, pediatric surgeons have anticipated the informatics capabilities of this computer-based technology, creating a new vision of work and organization in such areas as patient care, teaching, and research. This review aims to highlight how Internet navigational technology can be a useful educational resource in pediatric surgery, examines web pages of interest, and defines ideas of network communication. Basic Internet resources are electronic mail, discussion groups, file transfer, and the World Wide Web (WWW). Electronic mail is the most useful resource, extending the avenue of learning to an international audience through news or list-server groups. Pediatric Surgery List Server, the most popular discussion group, is a constant forum for exchange of ideas, difficult cases, consensus on management, and development of our specialty. The WWW provides an all-in-one medium of text, image, sound, and video. Associations, departments, educational sites, organizations, peer-reviewed scientific journals and Medline database web pages of prime interest to pediatric surgeons have been developing at an amazing pace. Future developments of technological advances nurturing our specialty will consist of online journals, telemedicine, international chatting, computer-based training for surgical education, and centralization of cyberspace information into database search sites.

  19. Future Trends in Children's Web Pages: Probing Hidden Biases for Information Quality

    Science.gov (United States)

    Kurubacak, Gulsun

    2007-01-01

    As global digital communication continues to flourish, Children's Web pages become more critical for children to realize not only the surface but also breadth and deeper meanings in presenting these milieus. These pages not only are very diverse and complex but also enable intense communication across social, cultural and political restrictions…

  20. What Snippets Say About Pages in Federated Web Search

    NARCIS (Netherlands)

    Demeester, Thomas; Nguyen, Dong-Phuong; Trieschnigg, Rudolf Berend; Develder, Chris; Hiemstra, Djoerd; Hou, Yuexian; Nie, Jian-Yun; Sun, Le; Wang, Bo; Zhang, Peng

    2012-01-01

    What is the likelihood that a Web page is considered relevant to a query, given the relevance assessment of the corresponding snippet? Using a new federated IR test collection that contains search results from over a hundred search engines on the internet, we are able to investigate such research

  1. Do-It-Yourself: A Special Library's Approach to Creating Dynamic Web Pages Using Commercial Off-The-Shelf Applications

    Science.gov (United States)

    Steeman, Gerald; Connell, Christopher

    2000-01-01

    Many librarians may feel that dynamic Web pages are out of their reach, financially and technically. Yet we are reminded in library and Web design literature that static home pages are a thing of the past. This paper describes how librarians at the Institute for Defense Analyses (IDA) library developed a database-driven, dynamic intranet site using commercial off-the-shelf applications. Administrative issues include surveying a library users group for interest and needs evaluation; outlining metadata elements; and committing resources, from managing time to populate the database to training in Microsoft FrontPage and Web-to-database design. Technical issues covered include Microsoft Access database fundamentals and lessons learned in the Web-to-database process (including setting up Data Source Names (DSNs), redesigning queries to accommodate the Web interface, and understanding Access 97 query language vs. Structured Query Language (SQL)). This paper also offers tips on editing Active Server Pages (ASP) scripting to create desired results. A how-to annotated resource list closes out the paper.

  2. Happy birthday WWW: the web is now old enough to drive

    CERN Document Server

    Gilbertson, Scott

    2007-01-01

    "The World Wide Web can now drive. Sixteen years ago yeterday, in a short post to the alt.hypertext newsgroup, tim Berners-Lee revealed the first public web pages summarizing his World Wide Web project." (1/4 page)

  3. Is This Information Source Commercially Biased? How Contradictions between Web Pages Stimulate the Consideration of Source Information

    Science.gov (United States)

    Kammerer, Yvonne; Kalbfell, Eva; Gerjets, Peter

    2016-01-01

    In two experiments we systematically examined whether contradictions between two web pages--of which one was commercially biased as stated in an "about us" section--stimulated university students' consideration of source information both during and after reading. In Experiment 1 "about us" information of the web pages was…

  4. Why Web Pages Annotation Tools Are Not Killer Applications? A New Approach to an Old Problem.

    Science.gov (United States)

    Ronchetti, Marco; Rizzi, Matteo

    The idea of annotating Web pages is not a new one: early proposals date back to 1994. A tool providing the ability to add notes to a Web page, and to share the notes with other users, seems to be particularly well suited to an e-learning environment. Although several tools already provide such a possibility, they are not widely popular. This paper…

  5. Domainwise Web Page Optimization Based On Clustered Query Sessions Using Hybrid Of Trust And ACO For Effective Information Retrieval

    Directory of Open Access Journals (Sweden)

    Dr. Suruchi Chawla

    2015-08-01

    Full Text Available Abstract In this paper a hybrid of Ant Colony Optimization (ACO) and trust has been used for domain-wise web page optimization in clustered query sessions for effective Information Retrieval. The trust of a web page identifies its degree of relevance in satisfying the specific information need of the user. The trusted web pages, when optimized using pheromone updates in ACO, identify the trusted colonies of web pages that are relevant to the user's information need in a given domain. Hence in this paper the hybrid of trust and ACO has been applied to clustered query sessions to identify a larger number of relevant documents in a given domain in order to better satisfy the information need of the user. An experiment was conducted on a data set of web query sessions to test the effectiveness of the proposed approach in three selected domains (Academics, Entertainment and Sports), and the results confirm the improvement in the precision of search results.
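
    The pheromone update at the heart of such a hybrid can be sketched in a few lines: each page's pheromone evaporates at a fixed rate and is reinforced in proportion to its trust score when the page keeps being selected. The evaporation rate, trust values, and clicked set below are invented, not the paper's parameters.

    ```python
    RHO = 0.1   # evaporation rate (hypothetical)

    def update_pheromone(pheromone, trust, clicked):
        """One ACO-style update: evaporate, then deposit trust on selected pages."""
        for page in pheromone:
            deposit = trust[page] if page in clicked else 0.0
            pheromone[page] = (1 - RHO) * pheromone[page] + deposit
        return pheromone

    pheromone = {"p1": 1.0, "p2": 1.0, "p3": 1.0}
    trust = {"p1": 0.9, "p2": 0.2, "p3": 0.6}
    print(update_pheromone(pheromone, trust, clicked={"p1", "p3"}))
    # {'p1': 1.8, 'p2': 0.9, 'p3': 1.5} -- high-trust pages that keep being
    # selected accumulate pheromone, forming "trusted colonies" of pages
    ```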

  6. The ATLAS Public Web Pages: Online Management of HEP External Communication Content

    CERN Document Server

    Goldfarb, Steven; Phoboo, Abha Eli; Shaw, Kate

    2015-01-01

    The ATLAS Education and Outreach Group is in the process of migrating its public online content to a professionally designed set of web pages built on the Drupal content management system. Development of the front-end design passed through several key stages, including audience surveys, stakeholder interviews, usage analytics, and a series of fast design iterations, called sprints. Implementation of the web site involves application of the html design using Drupal templates, refined development iterations, and the overall population of the site with content. We present the design and development processes and share the lessons learned along the way, including the results of the data-driven discovery studies. We also demonstrate the advantages of selecting a back-end supported by content management, with a focus on workflow. Finally, we discuss usage of the new public web pages to implement outreach strategy through implementation of clearly presented themes, consistent audience targeting and messaging, and th...

  7. What Can Pictures Tell Us About Web Pages? Improving Document Search Using Images.

    Science.gov (United States)

    Rodriguez-Vaamonde, Sergio; Torresani, Lorenzo; Fitzgibbon, Andrew W

    2015-06-01

    Traditional Web search engines do not use the images in the HTML pages to find relevant documents for a given query. Instead, they typically operate by computing a measure of agreement between the keywords provided by the user and only the text portion of each page. In this paper we study whether the content of the pictures appearing in a Web page can be used to enrich the semantic description of an HTML document and consequently boost the performance of a keyword-based search engine. We present a Web-scalable system that exploits a pure text-based search engine to find an initial set of candidate documents for a given query. Then, the candidate set is reranked using visual information extracted from the images contained in the pages. The resulting system retains the computational efficiency of traditional text-based search engines with only a small additional storage cost needed to encode the visual information. We test our approach on one of the TREC Million Query Track benchmarks where we show that the exploitation of visual content yields improvement in accuracies for two distinct text-based search engines, including the system with the best reported performance on this benchmark. We further validate our approach by collecting document relevance judgements on our search results using Amazon Mechanical Turk. The results of this experiment confirm the improvement in accuracy produced by our image-based reranker over a pure text-based system.
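
    The blending step this abstract describes, a text-based candidate score enriched by a visual score computed from each page's images, can be sketched as follows. The feature vectors, scores, and mixing weight alpha are all invented for illustration.

    ```python
    import numpy as np

    def rerank(text_scores, image_feats, query_feat, alpha=0.7):
        """Blend text-engine scores with cosine similarity to a visual query model."""
        q = query_feat / np.linalg.norm(query_feat)
        visual = np.array([f @ q / np.linalg.norm(f) for f in image_feats])
        return alpha * np.array(text_scores) + (1 - alpha) * visual

    text_scores = [0.80, 0.78, 0.40]                  # from the text-based engine
    image_feats = [np.array([0.1, 0.9]),              # hypothetical image descriptors
                   np.array([0.9, 0.1]),
                   np.array([0.8, 0.2])]
    query_feat = np.array([1.0, 0.1])                 # visual model of the query

    order = np.argsort(-rerank(text_scores, image_feats, query_feat))
    print(order)   # [1 0 2]: the visually matching page overtakes the text-only leader
    ```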

  8. Science on the Web: Secondary School Students' Navigation Patterns and Preferred Pages' Characteristics

    Science.gov (United States)

    Dimopoulos, Kostas; Asimakopoulos, Apostolos

    2010-01-01

    This study aims to explore navigation patterns and preferred pages' characteristics of ten secondary school students searching the web for information about cloning. The students navigated the Web for as long as they wished in a context of minimum support of teaching staff. Their navigation patterns were analyzed using audit trail data software.…

  9. Toward automated assessment of health Web page quality using the DISCERN instrument.

    Science.gov (United States)

    Allam, Ahmed; Schulz, Peter J; Krauthammer, Michael

    2017-05-01

    As the Internet becomes the number one destination for obtaining health-related information, there is an increasing need to identify health Web pages that convey an accurate and current view of medical knowledge. In response, the research community has created multicriteria instruments for reliably assessing online medical information quality. One such instrument is DISCERN, which measures health Web page quality by assessing an array of features. In order to scale up use of the instrument, there is interest in automating the quality evaluation process by building machine learning (ML)-based DISCERN Web page classifiers. The paper addresses 2 key issues that are essential before constructing automated DISCERN classifiers: (1) generation of a robust DISCERN training corpus useful for training classification algorithms, and (2) assessment of the usefulness of the current DISCERN scoring schema as a metric for evaluating the performance of these algorithms. Using DISCERN, 272 Web pages discussing treatment options in breast cancer, arthritis, and depression were evaluated and rated by trained coders. First, different consensus models were compared to obtain a robust aggregated rating among the coders, suitable for a DISCERN ML training corpus. Second, a new DISCERN scoring criterion was proposed (features-based score) as an ML performance metric that is more reflective of the score distribution across different DISCERN quality criteria. First, we found that a probabilistic consensus model applied to the DISCERN instrument was robust against noise (random ratings) and superior to other approaches for building a training corpus. Second, we found that the established DISCERN scoring schema (overall score) is ill-suited to measure ML performance for automated classifiers. Use of a probabilistic consensus model is advantageous for building a training corpus for the DISCERN instrument, and use of a features-based score is an appropriate ML metric for automated DISCERN classifiers.
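
    The contrast between the two scoring schemata can be made concrete with a toy sketch. The coder ratings are invented, a simple per-criterion median stands in for the paper's richer probabilistic consensus models, and the features-based score is computed as the mean over per-criterion consensus ratings.

    ```python
    # Hypothetical ratings (1-5) from three coders per DISCERN criterion.
    coder_ratings = {
        "aims_clear":      [5, 4, 5],
        "sources_given":   [2, 3, 2],
        "risks_described": [4, 4, 3],
    }

    # Per-criterion consensus via the median (a stand-in for probabilistic consensus).
    consensus = {c: sorted(r)[len(r) // 2] for c, r in coder_ratings.items()}

    # Features-based score: average over criterion-level consensus ratings,
    # tracking the score distribution instead of one holistic overall rating.
    features_based = sum(consensus.values()) / len(consensus)
    print(consensus, "features-based score:", round(features_based, 2))
    ```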

  10. Key word placing in Web page body text to increase visibility to search engines

    Directory of Open Access Journals (Sweden)

    W. T. Kritzinger

    2007-11-01

    Full Text Available The growth of the World Wide Web has spawned a wide variety of new information sources, which has also left users with the daunting task of determining which sources are valid. Many users rely on the Web as an information source because of the low cost of information retrieval. It is also claimed that the Web has evolved into a powerful business tool. Examples include highly popular business services such as Amazon.com and Kalahari.net. It is estimated that around 80% of users utilize search engines to locate information on the Internet. This, by implication, places emphasis on the underlying importance of Web pages being listed on search engine indices. Empirical evidence that the placement of key words in certain areas of the body text will have an influence on Web sites' visibility to search engines could not be found in the literature. The results of two experiments indicated that key words should be concentrated towards the top, and diluted towards the bottom, of a Web page to increase visibility. However, care should be taken in terms of key word density, to prevent search engine algorithms from raising the spam alarm.
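
    The measurement behind such experiments can be sketched by splitting the body text into regions and comparing key word density per region. The example text and the three-way split are invented for illustration.

    ```python
    def region_density(words, keyword):
        """Key word density in the top, middle, and bottom third of the body text."""
        third = max(len(words) // 3, 1)
        regions = [words[:third], words[third:2 * third], words[2 * third:]]
        return [sum(w.lower() == keyword for w in r) / len(r) for r in regions]

    body = ("solar energy solar panels solar systems cut installation costs "
            "panels store energy for later use "
            "contact us for general information and details").split()

    print(region_density(body, "solar"))   # e.g. [0.43, 0.0, 0.0]: concentrated at the top
    ```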

  11. The Evaluation of Web pages of State Universities’ Usability via Content Analysis

    Directory of Open Access Journals (Sweden)

    Ezgi CEVHER

    2015-12-01

    Full Text Available Within the scope of the e-transformation project in Turkey, the "Preparation of Guideline for State Institutions' Web Pages" action has been carried out to ensure minimal cohesiveness among government institutions' and organizations' Web pages in terms of design and content. As a result of those efforts, the first edition of the "Guideline for State Institutions' Web Pages" was prepared in 2006. The second edition of this guideline was published in 2009, in simpler form, under the name "Guideline and Suggestions for the Standards of Governmental Institutions' Web Pages". It became compulsory for local and central governmental institutions and organizations to follow the procedures and principles stated in the Guideline. Through this Guideline, preparing the websites of governmental institutions in harmony with the mentioned standards, and updating them in parallel with changing conditions and requirements, have been brought to the agenda, especially in recent years. In this study, by considering the characteristics stated in the Guideline, the web pages of state universities have been assessed through content analysis. Considering that the web pages of universities are visited by hundreds of visitors daily, their effective, productive, and comfortable usability must be ensured. For this reason, the objective is to determine, by analyzing their web pages, to what extent the state universities implement the compulsory principles stated in the Guideline; the web pages have been assessed from the aspects of compliance with standards, usability, and accessibility.

  12. Appraisals of Salient Visual Elements in Web Page Design

    Directory of Open Access Journals (Sweden)

    Johanna M. Silvennoinen

    2016-01-01

    Full Text Available Visual elements in user interfaces elicit emotions in users and are, therefore, essential to users interacting with different software. Although there is research on the relationship between emotional experience and visual user interface design, the focus has been on the overall visual impression and not on visual elements. Additionally, often in a software development process, programming and general usability guidelines are considered the most important parts of the process. Therefore, knowledge of programmers' appraisals of visual elements can be utilized to understand the web page designs we interact with. In this study, appraisal theory of emotion is utilized to elaborate the relationship of emotional experience and visual elements from the programmers' perspective. Participants (N=50) used 3E-templates to express their visual and emotional experiences of web page designs. Content analysis of textual data illustrates how emotional experiences are elicited by salient visual elements. Eight hierarchical visual element categories were found and connected to various emotions, such as frustration, boredom, and calmness, via relational emotion themes. The emotional emphasis was on centered, symmetrical, and balanced composition, which was experienced as pleasant and calming. The results benefit user-centered visual interface design and researchers of visual aesthetics in human-computer interaction.

  13. Learning Hierarchical User Interest Models from Web Pages

    Institute of Scientific and Technical Information of China (English)

    2006-01-01

    We propose an algorithm for learning hierarchical user interest models according to the Web pages users have browsed. In this algorithm, the interests of a user are represented as a tree, called a user interest tree, the content and the structure of which can change simultaneously to adapt to changes in the user's interests. This expression represents a user's specific and general interests as a continuum. In some sense, specific interests correspond to short-term interests, while general interests correspond to long-term interests, so this representation more realistically reflects the user's interests. The algorithm can automatically model a user's multiple interest domains, dynamically generate the interest models and prune a user interest tree when the number of nodes in it exceeds a given value. Finally, we show experimental results from a Chinese Web site.
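
    A toy sketch of such a hierarchical interest model: general interests sit near the root, specific ones at the leaves, and the tree is pruned back to a node budget by removing the weakest leaves. The structure, weights, and pruning policy below are invented simplifications of the paper's algorithm.

    ```python
    class InterestNode:
        def __init__(self, topic, weight=1.0):
            self.topic, self.weight, self.children = topic, weight, []

        def add(self, topic, weight=1.0):
            child = InterestNode(topic, weight)
            self.children.append(child)
            return child

    def count(node):
        return 1 + sum(count(c) for c in node.children)

    def leaves_with_parent(node):
        out = []
        for c in node.children:
            out.extend(leaves_with_parent(c) if c.children else [(node, c)])
        return out

    def prune(root, max_nodes):
        # repeatedly remove the globally weakest leaf interest
        while count(root) > max_nodes:
            parent, leaf = min(leaves_with_parent(root), key=lambda pc: pc[1].weight)
            parent.children.remove(leaf)

    root = InterestNode("interests")
    sports = root.add("sports", 0.9)          # general, long-term interest
    sports.add("football", 0.8)               # specific, short-term interests
    sports.add("curling", 0.1)
    root.add("cooking", 0.3)

    prune(root, max_nodes=4)
    print([(c.topic, [g.topic for g in c.children]) for c in root.children])
    # [('sports', ['football']), ('cooking', [])] -- the weakest leaf is dropped
    ```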

  14. Elementary Algebra + Student-Written Web Illustrations = Math Mastery.

    Science.gov (United States)

    Veteto, Bette R.

    This project focuses on the construction and use of a student-made elementary algebra tutorial World Wide Web page at the University of Memphis (Tennessee), how this helps students further explore the topics studied in elementary algebra, and how students can publish their work on the class Web page for use by other students. Practical,…

  15. A STUDY ON RANKING METHOD IN RETRIEVING WEB PAGES BASED ON CONTENT AND LINK ANALYSIS: COMBINATION OF FOURIER DOMAIN SCORING AND PAGERANK SCORING

    Directory of Open Access Journals (Sweden)

    Diana Purwitasari

    2008-01-01

    Full Text Available The ranking module is an important component of the search process, sorting through relevant pages. Since a collection of Web pages carries additional information inherent in the hyperlink structure of the Web, this can be represented as a link score and then combined with the usual information retrieval techniques of a content score. In this paper we report our studies of a ranking score for Web pages that combines link analysis, PageRank Scoring, with content analysis, Fourier Domain Scoring. Our experiments use a collection of Web pages related to the subject of Statistics from Wikipedia, with the objectives of checking correctness and evaluating performance of the combined ranking method. Evaluation of PageRank Scoring shows that the highest score does not always relate to Statistics. Since links within Wikipedia articles exist so that users are always one click away from more information on any point that has a link attached, it is possible that topics unrelated to Statistics are frequently mentioned in the collection. The combination method shows that giving the link score a weight proportional to the content score of Web pages does affect the retrieval results.
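
    The combination itself reduces to a weighted blend of the two scores per page. A minimal sketch, with invented scores and weight; any content relevance score could stand in for Fourier Domain Scoring here.

    ```python
    def combined_score(content, pagerank, w=0.7):
        """Blend a content score with a link (PageRank) score; w weights content."""
        return {p: w * content[p] + (1 - w) * pagerank[p] for p in content}

    content  = {"p1": 0.9, "p2": 0.4, "p3": 0.7}   # content-analysis (FDS-style) scores
    pagerank = {"p1": 0.1, "p2": 0.8, "p3": 0.3}   # link-analysis scores

    ranking = sorted(combined_score(content, pagerank).items(),
                     key=lambda kv: -kv[1])
    print(ranking)   # [('p1', ~0.66), ('p3', ~0.58), ('p2', ~0.52)]
    ```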

  16. Age differences in search of web pages: the effects of link size, link number, and clutter.

    Science.gov (United States)

    Grahame, Michael; Laberge, Jason; Scialfa, Charles T

    2004-01-01

    Reaction time, eye movements, and errors were measured during visual search of Web pages to determine age-related differences in performance as a function of link size, link number, link location, and clutter. Participants (15 young adults, M = 23 years; 14 older adults, M = 57 years) searched Web pages for target links that varied from trial to trial. During one half of the trials, links were enlarged from 10-point to 12-point font. Target location was distributed among the left, center, and bottom portions of the screen. Clutter was manipulated according to the percentage of used space, including graphics and text, and the number of potentially distracting nontarget links was varied. Increased link size improved performance, whereas increased clutter and links hampered search, especially for older adults. Results also showed that links located in the left region of the page were found most easily. Actual or potential applications of this research include Web site design to increase usability, particularly for older adults.

  17. Credibility judgments in web page design - a brief review.

    Science.gov (United States)

    Selejan, O; Muresanu, D F; Popa, L; Muresanu-Oloeriu, I; Iudean, D; Buzoianu, A; Suciu, S

    2016-01-01

    Today, more than ever, it is accepted that interface appearance analysis is a crucial point in the human-computer interaction field. As nowadays virtually anyone can publish information on the web, the role of credibility has grown increasingly important in relation to web-based content. Areas like trust, credibility, and behavior, together with overall impression and user expectation, are today in the spotlight of research, whereas in the past more pragmatic areas such as usability and utility were emphasized. Credibility has been discussed as a theoretical construct in the field of communication in the past decades, and research has revealed that people tend to evaluate the credibility of communication primarily by the communicator's expertise. Other factors involved in the content communication process are trustworthiness and dynamism, as well as various other criteria, but to a lower extent. In this brief review, factors like web page aesthetics, browsing experiences and user experience are considered.

  18. Web-based Project Reporting System

    Data.gov (United States)

    US Agency for International Development — Web-PRS is a web-based system that captures financial information and project status information that is sortable by geographical location, pillar, project type and...

  19. The Recognition of Web Pages' Hyperlinks by People with Intellectual Disabilities: An Evaluation Study

    Science.gov (United States)

    Rocha, Tania; Bessa, Maximino; Goncalves, Martinho; Cabral, Luciana; Godinho, Francisco; Peres, Emanuel; Reis, Manuel C.; Magalhaes, Luis; Chalmers, Alan

    2012-01-01

    Background: One of the most mentioned problems of web accessibility, as recognized in several different studies, is related to the difficulty regarding the perception of what is or is not clickable in a web page. In particular, a key problem is the recognition of hyperlinks by a specific group of people, namely those with intellectual…

  20. The effects of link format and screen location on visual search of web pages.

    Science.gov (United States)

    Ling, Jonathan; Van Schaik, Paul

    2004-06-22

    Navigation of web pages is of critical importance to the usability of web-based systems such as the World Wide Web and intranets. The primary means of navigation is through the use of hyperlinks. However, few studies have examined the impact of the presentation format of these links on visual search. The present study used a two-factor mixed measures design to investigate whether there was an effect of link format (plain text, underlined, bold, or bold and underlined) upon speed and accuracy of visual search and subjective measures in both the navigation and content areas of web pages. An effect of link format on speed of visual search for both hits and correct rejections was found. This effect was observed in the navigation and the content areas. Link format did not influence accuracy in either screen location. Participants showed highest preference for links that were in bold and underlined, regardless of screen area. These results are discussed in the context of visual search processes and design recommendations are given.

  1. Application of Project Portfolio Management

    Science.gov (United States)

    Pankowska, Malgorzata

    The main goal of the chapter is to present the application of the project portfolio management approach to support the development of e-Municipality and public administration information systems. The models of how people publish and utilize information on the web have been transformed continually. Instead of simply viewing static web pages, users publish their own content through blogs and photo- and video-sharing sites. The ICT (Information Communication Technology) projects for municipalities analysed in this chapter cover a mixture of static web pages, e-Government information systems, and wikis. Hence, the project portfolio management approach is proposed for the management of this mixture of ICT projects.

  2. Upgrade of CERN OP Webtools IRRAD Page

    CERN Document Server

    Vik, Magnus Bjerke

    2017-01-01

    CERN Beams Department maintains a website with various tools for the Operations Group, one of them being specific to the Proton Irradiation Facility (IRRAD). The IRRAD team uses the tool to follow up and optimize the operation of the facility. The original version of the tool was difficult to maintain and adding new features to the page was challenging. This summer student project was therefore aimed at upgrading the web page by rewriting it with maintainability and flexibility in mind. The new application uses a server-client architecture with a REST API on the back end, which is used by the front end to request data for visualization. PHP is used on the back end to implement the APIs and Swagger is used to document them. Vue, Semantic UI, Webpack, Node and ECMAScript 5 are used on the front end to visualize and administrate the data. The result is a new IRRAD operations web application with extended functionality, improved structure and an improved user interface. It includes a new Status Panel page th...

  3. Affordances of students' using the World Wide Web as a publishing medium in project-based learning environments

    Science.gov (United States)

    Bos, Nathan Daniel

    source, and evaluating their organizational features; however, students struggled to identify scientific evidence, bias, or sophisticated use of media in Web pages. Shortcomings were shown to be partly due to deficiencies in the Web pages themselves and partly due to students' inexperience with the medium or lack of critical evaluation skills. Future directions of this idea are discussed, including how students' reviews have been integrated into a current digital library development project.

  4. What is the title of a Web page? A study of Webography practice

    Directory of Open Access Journals (Sweden)

    Timothy C. Craven

    2002-01-01

    Few style guides recommend a specific source for citing the title of a Web page that is not a duplicate of a printed format. Sixteen Web bibliographies were analyzed for uses of two different recommended sources: (1) the tagged title; (2) the title as it appears when viewing the beginning of the page in the browser (the apparent title). In all sixteen, the proportion of tagged titles was much lower than that of apparent titles, and only rarely did the bibliography title match the tagged title and not the apparent title. Convenience of copying may partly explain the preference for the apparent title. Contrary to expectation, the correlation between the proportion of valid links in a bibliography and the proportion of accurately reproduced apparent titles was slightly negative.
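
    For readers unfamiliar with the two title sources the study compares, the sketch below (my own illustration, not from the paper; the `BeautifulSoup` usage and the first-heading heuristic for the apparent title are assumptions) contrasts the tagged title, taken from the <title> element, with a crude proxy for the apparent title:

    ```python
    # Hedged sketch: the <title> element vs. the first heading as a rough
    # stand-in for the "apparent" title seen at the top of the rendered page.
    from bs4 import BeautifulSoup  # pip install beautifulsoup4

    def extract_titles(html: str) -> dict:
        soup = BeautifulSoup(html, "html.parser")
        tagged = soup.title.get_text(strip=True) if soup.title else None
        heading = soup.find(["h1", "h2"])  # first prominent heading, a heuristic
        apparent = heading.get_text(strip=True) if heading else None
        return {"tagged": tagged, "apparent": apparent}

    page = "<html><head><title>pg1</title></head><body><h1>My Research Links</h1></body></html>"
    print(extract_titles(page))  # {'tagged': 'pg1', 'apparent': 'My Research Links'}
    ```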

  5. SurveyWiz and factorWiz: JavaScript Web pages that make HTML forms for research on the Internet.

    Science.gov (United States)

    Birnbaum, M H

    2000-05-01

    SurveyWiz and factorWiz are Web pages that act as wizards to create HTML forms that enable one to collect data via the Web. SurveyWiz allows the user to enter survey questions or personality test items with a mixture of text boxes and scales of radio buttons. One can add demographic questions of age, sex, education, and nationality with the push of a button. FactorWiz creates the HTML for within-subjects, two-factor designs as large as 9 x 9, or higher order factorial designs up to 81 cells. The user enters levels of the row and column factors, which can be text, images, or other multimedia. FactorWiz generates the stimulus combinations, randomizes their order, and creates the page. In both programs HTML is displayed in a window, and the user copies it to a text editor to save it. When uploaded to a Web server and supported by a CGI script, the created Web pages allow data to be collected, coded, and saved on the server. These programs are intended to assist researchers and students in quickly creating studies that can be administered via the Web.
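
    To illustrate the kind of HTML generation the abstract describes, here is a minimal sketch in the same spirit (Python rather than the original JavaScript wizards; names such as `radio_scale` and the `/cgi-bin/save.py` action are illustrative placeholders, not Birnbaum's code):

    ```python
    # Generate a SurveyWiz-style form: one row of radio buttons per item.
    def radio_scale(name: str, prompt: str, points: int = 5) -> str:
        buttons = "".join(
            f'<label><input type="radio" name="{name}" value="{v}"> {v}</label>'
            for v in range(1, points + 1)
        )
        return f"<p>{prompt}<br>{buttons}</p>"

    def make_form(items, action="/cgi-bin/save.py"):
        # The action URL is a placeholder for the server-side CGI script
        # that would collect, code, and save the submitted data.
        rows = "\n".join(radio_scale(f"q{i}", text) for i, text in enumerate(items, 1))
        return f'<form method="post" action="{action}">\n{rows}\n<input type="submit">\n</form>'

    print(make_form(["I enjoy web experiments.", "The instructions were clear."]))
    ```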

  6. Children's recognition of advertisements on television and on Web pages.

    Science.gov (United States)

    Blades, Mark; Oates, Caroline; Li, Shiying

    2013-03-01

    In this paper we consider the issue of advertising to children. Advertising to children raises a number of concerns, in particular the effects of food advertising on children's eating habits. We point out that virtually all the research into children's understanding of advertising has focused on traditional television advertisements, but much marketing aimed at children is now via the Internet and little is known about children's awareness of advertising on the Web. One important component of understanding advertisements is the ability to distinguish advertisements from other messages, and we suggest that young children's ability to recognise advertisements on a Web page is far behind their ability to recognise advertisements on television. Copyright © 2012 Elsevier Ltd. All rights reserved.

  7. Web page of the Ibero-American laboratories network of radioactivity analysis in foods: a tool for inter regional diffusion

    International Nuclear Information System (INIS)

    Melo Ferreira, Ana C. de; Osores, Jose M.; Fernandez Gomez, Isis M.; Iglicki, Flora A.; Vazquez Bolanos, Luis R.; Romero, Maria de L.; Aguirre Gomez, Jaime; Flores, Yasmine

    2008-01-01

    One objective of thematic networks is the exchange of knowledge among participants; for this reason, actions focused on the diffusion of their respective work are prioritized, evidencing the result of the cooperation among the participant groups and also among different networks. The Ibero-American Laboratories Network of Radioactivity Analysis in Foods (RILARA) was constituted in 2007, and one of the first actions carried out in this framework was the design and development of a web page. Web pages have become a powerful means for the diffusion of specialized information. Their power, as well as their continuous upgrading and the specificity of the topics they can cover, allows the user to obtain fast information on a wide range of products, services and organizations at local and world level. The main objective of the RILARA web page is to provide updated relevant information to specialists interested in the subject, and also to the public in general, about the work developed by the network laboratories regarding the control of radioactive pollutants in foods and related scientific issues. The web has been developed based on a Content Management System, which helps to eliminate potential barriers to web communication by reducing the costs of creating, contributing to and maintaining content. The tool used for its design is very effective for teaching, learning and organizing information. This paper describes how the design of this web page was conceived, the information it contains, and how it can be accessed and contributed to; the value of the page depends directly on how up to date its contents are kept, so that it remains useful and attractive to users. (author)

  8. SChiSM2: creating interactive web page annotations of molecular structure models using Jmol.

    Science.gov (United States)

    Cammer, Stephen

    2007-02-01

    SChiSM2 is a web server-based program for creating web pages that include interactive molecular graphics using the freely-available applet, Jmol, for illustration. The program works with Internet Explorer and Firefox on Windows, Safari and Firefox on Mac OSX and Firefox on Linux. The program can be accessed at the following address: http://ci.vbi.vt.edu/cammer/schism2.html.

  9. A Survey on PageRank Computing

    OpenAIRE

    Berkhin, Pavel

    2005-01-01

    This survey reviews the research related to PageRank computing. Components of a PageRank vector serve as authority weights for web pages independent of their textual content, solely based on the hyperlink structure of the web. PageRank is typically used as a web search ranking component. This defines the importance of the model and the data structures that underlie PageRank processing. Computing even a single PageRank is a difficult computational task. Computing many PageRanks is a much mor...
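
    For orientation, the basic computation the survey builds on can be stated in a few lines. The sketch below is a generic power-iteration implementation with uniform teleportation (damping 0.85), not code from the survey:

    ```python
    import numpy as np

    def pagerank(adj: np.ndarray, d: float = 0.85, tol: float = 1e-8) -> np.ndarray:
        """Power iteration on a dense adjacency matrix (toy scale only)."""
        n = adj.shape[0]
        out = adj.sum(axis=1)
        # Row-stochastic transitions; dangling pages jump uniformly.
        P = np.where(out[:, None] > 0, adj / np.maximum(out, 1)[:, None], 1.0 / n)
        r = np.full(n, 1.0 / n)
        while True:
            r_new = d * r @ P + (1 - d) / n
            if np.abs(r_new - r).sum() < tol:
                return r_new
            r = r_new

    links = np.array([[0, 1, 1],
                      [1, 0, 0],
                      [0, 1, 0]], dtype=float)
    print(pagerank(links))  # the most-linked-to page gets the largest score
    ```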

  10. Web accessibility: a longitudinal study of college and university home pages in the northwestern United States.

    Science.gov (United States)

    Thompson, Terrill; Burgstahler, Sheryl; Moore, Elizabeth J

    2010-01-01

    This article reports on a follow-up assessment to Thompson et al. (Proceedings of The First International Conference on Technology-based Learning with Disability, July 19-20, Dayton, Ohio, USA; 2007. pp 127-136), in which higher education home pages were evaluated over a 5-year period on their accessibility to individuals with disabilities. The purpose of this article is to identify trends in web accessibility and long-term impact of outreach and education. Home pages from 127 higher education institutions in the Northwest were evaluated for accessibility three times over a 6-month period in 2004-2005 (Phase I), and again in 2009 (Phase II). Schools in the study were offered varying degrees of training and/or support on web accessibility during Phase I. Pages were evaluated for accessibility using a set of manual checkpoints developed by the researchers. Over the 5-year period reported in this article, significant positive gains in accessibility were revealed on some measures, but accessibility declined on other measures. The areas of improvement are arguably the more basic, easy-to-implement accessibility features, while the area of decline is keyboard accessibility, which is likely associated with the emergence of dynamic new technologies on web pages. Even on those measures where accessibility is improving, it is still strikingly low. In Phase I of the study, institutions that received extensive training and support were more likely than other institutions to show improved accessibility on the measures where institutions improved overall, but were equally or more likely than others to show a decline on measures where institutions showed an overall decline. In Phase II, there was no significant difference between institutions who had received support earlier in the study, and those who had not. Results suggest that growing numbers of higher education institutions in the Northwest are motivated to add basic accessibility features to their home pages, and that

  11. Web Page Layout: A Comparison Between Left- and Right-justified Site Navigation Menus

    OpenAIRE

    Kalbach, James; Bosenick, Tim

    2006-01-01

    The usability of two Web page layouts was directly compared: one with the main site navigation menu on the left of the page, and one with the main site navigation menu on the right. Sixty-four participants were divided equally into two groups and assigned to either the left- or the right-hand navigation test condition. Using a stopwatch, the time to complete each of five tasks was measured. The hypothesis that the left-hand navigation would perform significantly faster than the right-hand nav...

  12. The Impact of Salient Advertisements on Reading and Attention on Web Pages

    Science.gov (United States)

    Simola, Jaana; Kuisma, Jarmo; Oorni, Anssi; Uusitalo, Liisa; Hyona, Jukka

    2011-01-01

    Human vision is sensitive to salient features such as motion. Therefore, animation and onset of advertisements on Websites may attract visual attention and disrupt reading. We conducted three eye tracking experiments with authentic Web pages to assess whether (a) ads are efficiently ignored, (b) ads attract overt visual attention and disrupt…

  13. WebVis: a hierarchical web homepage visualizer

    Science.gov (United States)

    Renteria, Jose C.; Lodha, Suresh K.

    2000-02-01

    WebVis, the Hierarchical Web Home Page Visualizer, is a tool for managing home web pages. The user can access this tool via the WWW and obtain a hierarchical visualization of their home web pages. WebVis is a real-time interactive tool that supports many different queries on the statistics of internal files, such as size, age, and type. In addition, statistics on embedded information such as VRML files, Java applets, images and sound files can be extracted and queried. Results of these queries are visualized using the color, shape and size of different nodes of the hierarchy. The visualization assists the user in a variety of tasks, such as quickly finding outdated information or locating large files. WebVis is one solution to the growing web space maintenance problem. The implementation of WebVis is realized with Perl and Java. Perl pattern matching and file handling routines are used to collect and process web space linkage information and web document information. Java utilizes the collected information to produce a visualization of the web space. Java also provides WebVis with real-time interactivity while running off the WWW. Some WebVis examples of home web page visualization are presented.
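
    The collection step the abstract attributes to Perl is easy to picture; a minimal sketch of gathering per-file size, age, and type over a local web space (my own illustration, not the WebVis source) could look like this:

    ```python
    import time
    from pathlib import Path

    def collect_stats(root: str):
        """Walk a web-space directory and record the statistics WebVis queries."""
        records = []
        for path in Path(root).rglob("*"):
            if path.is_file():
                st = path.stat()
                records.append({
                    "file": str(path),
                    "type": path.suffix.lstrip(".") or "unknown",
                    "size_kb": st.st_size / 1024,
                    "age_days": (time.time() - st.st_mtime) / 86400,
                })
        return records

    # e.g. flag outdated pages, one of the tasks mentioned above
    for rec in collect_stats("public_html"):  # "public_html" is a placeholder root
        if rec["age_days"] > 365:
            print("outdated:", rec["file"])
    ```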

  14. A comprehensive analysis of Italian web pages mentioning squalene-based influenza vaccine adjuvants reveals a high prevalence of misinformation.

    Science.gov (United States)

    Panatto, Donatella; Amicizia, Daniela; Arata, Lucia; Lai, Piero Luigi; Gasparini, Roberto

    2018-04-03

    Squalene-based adjuvants have been included in influenza vaccines since 1997. Despite several advantages of adjuvanted seasonal and pandemic influenza vaccines, laypeople's perception of such formulations may be hesitant or even negative under certain circumstances. Moreover, in Italian, the term "squalene" has the same root as such common words as "shark" (squalo), "squalid" and "squalidness" that tend to have negative connotations. This study aimed to quantitatively and qualitatively analyze a representative sample of Italian web pages mentioning squalene-based adjuvants used in influenza vaccines. Every effort was made to limit the subjectivity of judgments. Eighty-four unique web pages were assessed. A high prevalence (47.6%) of pages with negative or ambiguous attitudes toward squalene-based adjuvants was established. Compared with web pages reporting balanced information on squalene-based adjuvants, those categorized as negative/ambiguous had significantly lower odds of belonging to a professional institution [adjusted odds ratio (aOR) = 0.12, p = .004], and significantly higher odds of containing pictures (aOR = 1.91, p = .034) and being more readable (aOR = 1.34, p = .006). Some differences in wording between positive/neutral and negative/ambiguous web pages were also observed. The most common scientifically unsound claims concerned safety issues and, in particular, claims linking squalene-based adjuvants to the Gulf War Syndrome and autoimmune disorders. Italian users searching the web for information on vaccine adjuvants have a high likelihood of finding unbalanced and misleading material. Information provided by institutional websites should be not only evidence-based but also carefully targeted towards laypeople. Conversely, authors writing for non-institutional websites should avoid sensationalism and provide their readers with more balanced information.

  15. Architecture for large-scale automatic web accessibility evaluation based on the UWEM methodology

    DEFF Research Database (Denmark)

    Ulltveit-Moe, Nils; Olsen, Morten Goodwin; Pillai, Anand B.

    2008-01-01

    The European Internet Accessibility project (EIAO) has developed an Observatory for performing large scale automatic web accessibility evaluations of public sector web sites in Europe. The architecture includes a distributed web crawler that crawls web sites for links until either a given budget...... of web pages have been identified or the web site has been crawled exhaustively. Subsequently, a uniform random subset of the crawled web pages is sampled and sent for accessibility evaluation and the evaluation results are stored in a Resource Description Format (RDF) database that is later loaded...... challenges that the project faced and the solutions developed towards building a system capable of regular large-scale accessibility evaluations with sufficient capacity and stability. It also outlines some possible future architectural improvements....

  16. The One-Page Project Manager Comunicate and Manage Any Project With a Single Sheet of Paper

    CERN Document Server

    Campbell, Clark A

    2007-01-01

    The One-Page Project Manager shows you how to boil down any project into a simple, one-page document that can be used to communicate all essential details to upper management, other departments, suppliers, and audiences. This practical guide will save time and effort, helping you identify the vital parts of a project and communicate those parts and duties to other team members.

  17. Personal Web home pages of adolescents with cancer: self-presentation, information dissemination, and interpersonal connection.

    Science.gov (United States)

    Suzuki, Lalita K; Beale, Ivan L

    2006-01-01

    The content of personal Web home pages created by adolescents with cancer is a new source of information about this population of potential benefit to oncology nurses and psychologists. Individual Internet elements found on 21 home pages created by youths with cancer (14-22 years old) were rated for cancer-related self-presentation, information dissemination, and interpersonal connection. Examples of adolescents' online narratives were also recorded. Adolescents with cancer used various Internet elements on their home pages for cancer-related self-presentation (e.g., welcome messages, essays, personal history and diary pages, news articles, and poetry), information dissemination (e.g., through personal interest pages, multimedia presentations, lists, charts, and hyperlinks), and interpersonal connection (e.g., guestbook entries). Results suggest that various elements found on personal home pages are being used by a limited number of young patients with cancer for self-expression, information access, and contact with peers.

  18. THE NEW PURCHASING SERVICE PAGE NOW ON THE WEB!

    CERN Multimedia

    SPL Division

    2000-01-01

    Users of CERN's Purchasing Service are encouraged to visit the new Purchasing Service web page, accessible from the CERN homepage or directly at: http://spl-purchasing.web.cern.ch/spl-purchasing/ There, you will find answers to questions such as: Who are the buyers? What do I need to know before creating a DAI? How many offers do I need? Where shall I send the offer I received? I know the amount of my future requirement, how do I proceed? How are contracts adjudicated at CERN? Which exhibitions and visits of Member State companies are foreseen in the future? A company I know is interested in making a presentation at CERN, who should they contact? Additionally, you will find information concerning: The Purchasing procedures Market Surveys and Invitations to Tender The Industrial Liaison Officers appointed in each Member State The Purchasing Broker at CERN

  19. Effects of picture amount on preference, balance, and dynamic feel of Web pages.

    Science.gov (United States)

    Chiang, Shu-Ying; Chen, Chien-Hsiung

    2012-04-01

    This study investigates the effects of picture amount on subjective evaluation. The experiment herein adopted two variables to define picture amount: column ratio and picture size. Six column ratios were employed: 7:93, 15:85, 24:76, 33:67, 41:59, and 50:50. Five picture sizes were examined: 140 x 81, 220 x 127, 300 x 173, 380 x 219, and 460 x 266 pixels. The experiment implemented a within-subject design; 104 participants were asked to evaluate 30 web page layouts. Repeated measurements revealed that the column ratio and picture size have significant effects on preference, balance, and dynamic feel. The results indicated the most appropriate picture amount for display: column ratios of 15:85 and 24:76, and picture sizes of 220 x 127, 300 x 173, and 380 x 219. The research findings can serve as the basis for the application of design guidelines for future web page interface design.

  20. Developing Dynamic Single Page Web Applications Using Meteor : Comparing JavaScript Frameworks: Blaze and React

    OpenAIRE

    Yetayeh, Asabeneh

    2017-01-01

    This paper studies Meteor, a JavaScript full-stack framework for developing interactive single page web applications. Meteor allows building web applications entirely in JavaScript. Meteor uses Blaze, React or AngularJS as a view layer and Node.js and MongoDB as a back end. The main purpose of this study is to compare the performance of Blaze and React. Multi-user Blaze and React web applications with similar HTML and CSS were developed. Both applications were deployed on Heroku’s w...

  1. Hormone replacement therapy advertising: sense and nonsense on the web pages of the best-selling pharmaceuticals in Spain.

    Science.gov (United States)

    Chilet-Rosell, Elisa; Martín Llaguno, Marta; Ruiz Cantero, María Teresa; Alonso-Coello, Pablo

    2010-03-16

    The balance of the benefits and risks of long term use of hormone replacement therapy (HRT) has been a matter of debate for decades. In Europe, HRT requires medical prescription and its advertising is only permitted when aimed at health professionals (direct to consumer advertising is allowed in some non European countries). The objective of this study is to analyse the appropriateness and quality of Internet advertising about HRT in Spain. A search was carried out on the Internet (January 2009) using the eight best-selling HRT drugs in Spain. The brand name of each drug was entered into Google's search engine. The web sites appearing on the first page of results and the corresponding companies were analysed using the European Code of Good Practice as the reference point. Five corporate web pages: none of them included bibliographic references or measures to ensure that the advertising was only accessible by health professionals. Regarding non-corporate web pages (n = 27): 41% did not include the company name or address, 44% made no distinction between patient and health professional information, 7% contained bibliographic references, 26% provided unspecific information for the use of HRT for osteoporosis and 19% included menstrual cycle regulation or boosting femininity as an indication. Two online pharmacies sold HRT drugs which could be bought online in Spain, did not include the name or contact details of the registered company, nor did they stipulate the need for a medical prescription or differentiate between patient and health professional information. Even though pharmaceutical companies have committed themselves to compliance with codes of good practice, deficiencies were observed regarding the identification, information and promotion of HRT medications on their web pages. Unaffected by legislation, non-corporate web pages are an ideal place for indirect HRT advertising, but they often contain misleading information. HRT can be bought online from Spain

  2. NUCLEAR STRUCTURE AND DECAY DATA: INTRODUCTION TO RELEVANT WEB PAGES

    International Nuclear Information System (INIS)

    BURROWS, T.W.; MCLAUGHLIN, P.D.; NICHOLS, A.L.

    2005-01-01

    A brief description is given of the nuclear data centres around the world able to provide access to those databases and programs of highest relevance to nuclear structure and decay data specialists. A number of Web-page addresses are also provided for the reader to inspect and investigate these data and codes for study, evaluation and calculation. These instructions are not meant to be comprehensive, but should provide the reader with a reasonable means of electronic access to the most important data sets and programs

  3. Beginning ASP.NET Web Pages with WebMatrix

    CERN Document Server

    Brind, Mike

    2011-01-01

    Learn to build dynamic web sites with Microsoft WebMatrix Microsoft WebMatrix is designed to make developing dynamic ASP.NET web sites much easier. This complete Wrox guide shows you what it is, how it works, and how to get the best from it right away. It covers all the basic foundations and also introduces HTML, CSS, and Ajax using jQuery, giving beginning programmers a firm foundation for building dynamic web sites.Examines how WebMatrix is expected to become the new recommended entry-level tool for developing web sites using ASP.NETArms beginning programmers, students, and educators with al

  4. THE DIFFERENCE BETWEEN DEVELOPING SINGLE PAGE APPLICATION AND TRADITIONAL WEB APPLICATION BASED ON MECHATRONICS ROBOT LABORATORY ONAFT APPLICATION

    Directory of Open Access Journals (Sweden)

    V. Solovei

    2018-04-01

    Today most desktop and mobile applications have analogues in the form of web-based applications. With the evolution of development and web technologies, web applications have grown to match desktop applications in functionality. A web application consists of two parts: the client part and the server part. The client part is responsible for providing the user with visual information through the browser. The server part is responsible for processing and storing data. MPAs (multiple-page applications) appeared simultaneously with the Internet and work in the "traditional" way: every change, e.g. displaying data or submitting data back to the server, loads a new page. With the advent of AJAX, MPAs learned to load not the whole page but only a part of it, which eventually led to the appearance of the SPA (single page application). SPA is a development principle in which only one page is transferred to the client, and content is downloaded only into a certain part of the page without reloading it, which speeds up the application and brings the user experience close to that of desktop applications. Based on the SPA principle, the Mechatronics Robot Laboratory ONAFT application was designed to automate the management process. The application implements the client-server architecture. The server part consists of a RESTful API, which provides unified access to the application functionality, and a database for storing information. Since the client part is an SPA, this reduces the load on the connection to the server and improves the user experience
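
    As a rough illustration of the SPA pattern described above (a sketch under assumed details, using Python/Flask in place of the application's actual stack), the server delivers one page plus a JSON endpoint, and the page updates a region of itself via fetch() instead of reloading:

    ```python
    from flask import Flask, jsonify

    app = Flask(__name__)

    @app.route("/api/robots")
    def robots():
        # In a real system this data would come from the database tier.
        return jsonify([{"id": 1, "name": "arm-1", "status": "idle"}])

    @app.route("/")
    def index():
        # The single page is sent once; later updates arrive as JSON.
        return """<div id="list"></div>
        <script>
          fetch('/api/robots').then(r => r.json()).then(rows => {
            document.getElementById('list').textContent =
              rows.map(r => r.name + ': ' + r.status).join(', ');
          });
        </script>"""

    if __name__ == "__main__":
        app.run()
    ```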

  5. Users page feedback

    CERN Multimedia

    2010-01-01

    In October last year the Communication Group proposed an interim redesign of the users’ web pages in order to improve the visibility of key news items, events and announcements to the CERN community.

    [Image: The proposed update to the users' page (right), and the current version (left, behind)]

    This proposed redesign was seen as a small step on the way to much wider reforms of the CERN web landscape proposed in the group’s web communication plan. The results are available here. Some of the key points:

    - the balance between news / events / announcements and access to links on the users’ pages was not right: many people asked to see a reversal of the order so that links appeared first, news/events/announcements last;
    - many people felt that we should keep the primary function of the users’ pages as an index to other CERN websites;
    - many people found the sections of the front page to be poorly delineated;
    - people do not like scrolling;
    - there were performance...

  6. MPEG-7 low level image descriptors for modeling users' web pages visual appeal opinion

    OpenAIRE

    Uribe Mayoral, Silvia; Alvarez Garcia, Federico; Menendez Garcia, Jose Manuel

    2015-01-01

    The study of users' first impression of web pages is an important factor for interface designers, due to its influence over the final opinion about a site. In this regard, the analysis of web aesthetics can be considered an interesting tool for evaluating this early impression, and the use of low level image descriptors for modeling it in an objective way represents an innovative research field. According to this, in this paper we present a new model for website aesthetics evaluation and ...

  7. The effect of new links on Google PageRank

    NARCIS (Netherlands)

    Avrachenkov, Konstatin; Litvak, Nelli

    2004-01-01

    PageRank is one of the principle criteria according to which Google ranks Web pages. PageRank can be interpreted as a frequency of visiting a Web page by a random surfer and thus it reflects the popularity of a Web page. We study the effect of newly created links on Google PageRank. We discuss to

  8. A quality evaluation methodology of health web-pages for non-professionals.

    Science.gov (United States)

    Currò, Vincenzo; Buonuomo, Paola Sabrina; Onesimo, Roberta; de Rose, Paola; Vituzzi, Andrea; di Tanna, Gian Luca; D'Atri, Alessandro

    2004-06-01

    The proposal of an evaluation methodology for determining the quality of healthcare web sites for the dissemination of medical information to non-professionals. Three (macro) factors are considered for the quality evaluation: medical contents, accountability of the authors, and usability of the web site. Starting from two results in the literature the problem of whether or not to introduce a weighting function has been investigated. This methodology has been validated on a specialized information content, i.e., sore throats, due to the large interest such a topic enjoys with target users. The World Wide Web was accessed using a meta-search system merging several search engines. A statistical analysis was made to compare the proposed methodology with the obtained ranks of the sample web pages. The statistical analysis confirms that the variables examined (per item and sub factor) show substantially similar ranks and are capable of contributing to the evaluation of the main quality macro factors. A comparison between the aggregation functions in the proposed methodology (non-weighted averages) and the weighting functions, derived from the literature, allowed us to verify the suitability of the method. The proposed methodology suggests a simple approach which can quickly award an overall quality score for medical web sites oriented to non-professionals.

  9. Automatic categorization of web pages and user clustering with mixtures of hidden Markov models

    NARCIS (Netherlands)

    Ypma, A.; Heskes, T.M.; Zaiane, O.R.; Srivastav, J.

    2003-01-01

    We propose mixtures of hidden Markov models for modelling clickstreams of web surfers. Hence, the page categorization is learned from the data without the need for a (possibly cumbersome) manual categorization. We provide an EM algorithm for training a mixture of HMMs and show that additional static
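
    The clustering idea can be sketched compactly if, for brevity, the hidden states are dropped and each component is an observable-state Markov chain (a simplification of the paper's HMM mixture; the function name and toy data are mine):

    ```python
    import numpy as np

    def em_markov_mixture(seqs, n_states, K, iters=50, seed=0):
        """EM for a K-component mixture of first-order Markov chains."""
        rng = np.random.default_rng(seed)
        pi = np.full(K, 1.0 / K)                                  # mixture weights
        T = rng.dirichlet(np.ones(n_states), size=(K, n_states))  # transition rows
        for _ in range(iters):
            # E-step: responsibility of each component for each clickstream
            logp = np.zeros((len(seqs), K))
            for i, s in enumerate(seqs):
                for k in range(K):
                    logp[i, k] = np.log(pi[k]) + sum(
                        np.log(T[k, a, b]) for a, b in zip(s, s[1:]))
            logp -= logp.max(axis=1, keepdims=True)
            resp = np.exp(logp)
            resp /= resp.sum(axis=1, keepdims=True)
            # M-step: re-estimate weights and smoothed transition counts
            pi = resp.mean(axis=0)
            T = np.full((K, n_states, n_states), 1e-3)
            for i, s in enumerate(seqs):
                for a, b in zip(s, s[1:]):
                    T[:, a, b] += resp[i]
            T /= T.sum(axis=2, keepdims=True)
        return pi, T, resp

    # toy clickstreams over 3 page categories (0, 1, 2)
    seqs = [[0, 1, 0, 1], [0, 1, 1, 0], [2, 2, 1, 2], [2, 1, 2, 2]]
    pi, T, resp = em_markov_mixture(seqs, n_states=3, K=2)
    print(resp.argmax(axis=1))  # hard cluster assignment per surfer
    ```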

  10. Hormone Replacement Therapy advertising: sense and nonsense on the web pages of the best-selling pharmaceuticals in Spain

    Directory of Open Access Journals (Sweden)

    Cantero María

    2010-03-01

    Abstract Background The balance of the benefits and risks of long term use of hormone replacement therapy (HRT) has been a matter of debate for decades. In Europe, HRT requires medical prescription and its advertising is only permitted when aimed at health professionals (direct to consumer advertising is allowed in some non European countries). The objective of this study is to analyse the appropriateness and quality of Internet advertising about HRT in Spain. Methods A search was carried out on the Internet (January 2009) using the eight best-selling HRT drugs in Spain. The brand name of each drug was entered into Google's search engine. The web sites appearing on the first page of results and the corresponding companies were analysed using the European Code of Good Practice as the reference point. Results Five corporate web pages: none of them included bibliographic references or measures to ensure that the advertising was only accessible by health professionals. Regarding non-corporate web pages (n = 27): 41% did not include the company name or address, 44% made no distinction between patient and health professional information, 7% contained bibliographic references, 26% provided unspecific information for the use of HRT for osteoporosis and 19% included menstrual cycle regulation or boosting femininity as an indication. Two online pharmacies sold HRT drugs which could be bought online in Spain, did not include the name or contact details of the registered company, nor did they stipulate the need for a medical prescription or differentiate between patient and health professional information. Conclusions Even though pharmaceutical companies have committed themselves to compliance with codes of good practice, deficiencies were observed regarding the identification, information and promotion of HRT medications on their web pages. Unaffected by legislation, non-corporate web pages are an ideal place for indirect HRT advertising, but they often contain

  11. PROTOTIPE PEMESANAN BAHAN PUSTAKA MELALUI WEB MENGGUNAKAN ACTIVE SERVER PAGE (ASP)

    Directory of Open Access Journals (Sweden)

    Djoni Haryadi Setiabudi

    2002-01-01

    Electronic commerce is one of the components of the internet that is growing fast in the world. In this research, a prototype is developed for a library service that offers ordering of library collections, especially books and articles, through the World Wide Web. In order to have interaction between seller and buyer, a dynamic web site is needed, which requires supporting technology and software. One such programming language is Active Server Pages (ASP), combined with a database system to store data. The other component, acting as an interface between the application and the database, is ActiveX Data Objects (ADO). ASP has an advantage in its scripting method and is easy to configure with a database. The application consists of two major parts: administrator and user. The prototype has facilities for editing, searching and browsing ordering information online. Users can also download search results and order articles. The payment method in this e-commerce system is quite essential, because in Indonesia not everybody has a credit card. As a solution, the prototype provides a form for users who do not have a credit card; once the bill has been paid, the user can complete the transaction online. Here one of the advantages of ASP is used, the "session", whereby data in process is not lost as long as the user remains in that session. This is used in the user and admin areas, where users and the administrator can carry out various processes.

  12. Web page quality: can we measure it and what do we find? A report of exploratory findings.

    Science.gov (United States)

    Abbott, V P

    2000-06-01

    The aim of this study was to report exploratory findings from an attempt to quantify the quality of a sample of World Wide Web (WWW) pages relating to MMR vaccine that a typical user might locate. Forty pages obtained from a search of the WWW using two search engines and the search expression 'mmr vaccine' were analysed using a standard proforma. The proforma looked at the information the pages contained in terms of three categories: content, authorship and aesthetics. The information from each category was then quantified into a summary statistic, and receiver operating characteristic (ROC) curves were generated using a 'gold standard' of quality derived from the published literature. Optimal cut-off points for each of the three sections were calculated that best discriminated 'good' from 'bad' pages. Pages were also assessed as to whether they were pro- or anti-vaccination. For this sample, the combined contents and authorship score, with a cut-off of five, was a good discriminator, having 88 per cent sensitivity and 92 per cent specificity. Aesthetics was not a good discriminator. In the sample, 32.5 per cent of pages were pro-vaccination; 42.5 per cent were anti-vaccination and 25 per cent were neutral. The relative risk of being of poor quality if anti-vaccination was 3.3 (95 per cent confidence interval 1.8, 6.1). The sample of Web pages did contain some quality information on MMR vaccine. It also contained a great deal of misleading, inaccurate data. The proforma, combined with a knowledge of the literature, may help to distinguish between the two.
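
    The cut-off selection step generalizes readily; below is a hedged sketch (with made-up scores, not the study's data) of reading an optimal threshold off a ROC curve via Youden's J statistic:

    ```python
    import numpy as np
    from sklearn.metrics import roc_curve

    gold = np.array([1, 1, 1, 0, 0, 1, 0, 0])    # 1 = 'good' page (hypothetical)
    score = np.array([7, 6, 5, 4, 6, 8, 2, 3])   # combined content + authorship

    fpr, tpr, thresholds = roc_curve(gold, score)
    j = tpr - fpr                                # Youden's J statistic
    best = j.argmax()
    print(f"cut-off {thresholds[best]}: sensitivity {tpr[best]:.2f}, "
          f"specificity {1 - fpr[best]:.2f}")
    ```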

  13. JERHRE's New Web Pages.

    Science.gov (United States)

    2006-06-01

    JERHRE'S WEBSITE, www.csueastbay.edu/JERHRE/ has two new pages. One of those pages is devoted to curriculum that may be used to educate students, investigators and ethics committee members about issues in the ethics of human subjects research, and to evaluate their learning. It appears at www.csueastbay.edu/JERHRE/cur.html. The other is devoted to emailed letters from readers. Appropriate letters will be posted as soon as they are received by the editor. Letters from readers appear at www.csueastbay.edu/JERHRE/let.html.

  14. Communicating public health preparedness information to pregnant and postpartum women: an assessment of Centers for Disease Control and Prevention web pages.

    Science.gov (United States)

    McDonough, Brianna; Felter, Elizabeth; Downes, Amia; Trauth, Jeanette

    2015-04-01

    Pregnant and postpartum women have special needs during public health emergencies but often have inadequate levels of disaster preparedness. Thus, improving maternal emergency preparedness is a public health priority. More research is needed to identify the strengths and weaknesses of various approaches to how preparedness information is communicated to these women. A sample of web pages from the Centers for Disease Control and Prevention intended to address the preparedness needs of pregnant and postpartum populations was examined for suitability for this audience. Five of the 7 web pages examined were considered adequate. One web page was considered not suitable, and on one the raters were split between not suitable and adequate. None of the resources examined were considered superior. If these resources are considered some of the best available to pregnant and postpartum women, more work is needed to improve the suitability of educational resources, especially for audiences with low literacy and low incomes.

  15. Book Reviews, Annotation, and Web Technology.

    Science.gov (United States)

    Schulze, Patricia

    From reading texts to annotating web pages, grade 6-8 students rely on group cooperation and individual reading and writing skills in this research project that spans six 50-minute lessons. Student objectives for this project are that they will: read, discuss, and keep a journal on a book in literature circles; understand the elements of and…

  16. Cluster Analysis of Customer Reviews Extracted from Web Pages

    Directory of Open Access Journals (Sweden)

    S. Shivashankar

    2010-01-01

    As e-commerce gains popularity day by day, the web has become an excellent source for market researchers to gather customer reviews and opinions. The number of customer reviews that a product receives is growing at a very fast rate (it can be in the hundreds or thousands). Customer reviews posted on websites vary greatly in quality, yet a potential customer must read all of them, irrespective of their quality, to decide whether or not to purchase the product. In this paper, we make an attempt to assess a review based on its quality, to help the customer make a proper buying decision. The quality of a customer review is assessed as most significant, more significant, significant or insignificant. A novel and effective web mining technique is proposed for assessing a customer review of a particular product based on feature clustering techniques, namely the k-means method and the fuzzy c-means method. This is performed in three steps: (1) identify review regions and extract reviews from them, (2) extract and cluster the features of reviews by a clustering technique and then assign weights to the features belonging to each of the clusters (groups), and (3) assess the review by considering the feature weights and group belongingness. The k-means and fuzzy c-means clustering techniques are implemented and tested on customer reviews extracted from web pages, and their performance is analyzed.
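
    Step (2) can be pictured with a few lines of scikit-learn; the sketch below substitutes TF-IDF vectors for the paper's extracted review features and is illustrative only:

    ```python
    from sklearn.cluster import KMeans
    from sklearn.feature_extraction.text import TfidfVectorizer

    reviews = [
        "battery life is excellent and charging is fast",
        "battery drains quickly, poor charging",
        "screen is bright with vivid colors",
        "display colors look washed out",
    ]
    X = TfidfVectorizer().fit_transform(reviews)
    km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
    # Reviews sharing a label talk about the same feature group; weights
    # would then be assigned per cluster, as in step (2) above.
    print(km.labels_)
    ```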

  17. Onto-Agents-Enabling Intelligent Agents on the Web

    Science.gov (United States)

    2005-05-01

    Manual annotation is tedious, and often done poorly. Even within the funded DAML project fewer pages were annotated than was hoped. In eCommerce , there...been overcome, congratulations! The DAML project was initiated at the birth of the semantic web. It contributed greatly to define a new research

  18. User Perceptions of the Library's Web Pages: A Focus Group Study at Texas A&M University.

    Science.gov (United States)

    Crowley, Gwyneth H.; Leffel, Rob; Ramirez, Diana; Hart, Judith L.; Armstrong, Tommy S., II

    2002-01-01

    This focus group study explored library patrons' opinions about Texas A&M library's Web pages. It discusses information-seeking behavior, which indicated that patrons are confused when trying to navigate the Public Access Menu, and suggests the need for a more intuitive interface. (Author/LRW)

  19. Using Frames and JavaScript To Automate Teacher-Side Web Page Navigation for Classroom Presentations.

    Science.gov (United States)

    Snyder, Robin M.

    HTML provides a platform-independent way of creating and making multimedia presentations for classroom instruction and making that content available on the Internet. However, time in class is very valuable, so that any way to automate or otherwise assist the presenter in Web page navigation during class can save valuable seconds. This paper…

  20. PSB goes personal: The failure of personalised PSB web pages

    Directory of Open Access Journals (Sweden)

    Jannick Kirk Sørensen

    2013-12-01

    Between 2006 and 2011, a number of European public service broadcasting (PSB) organisations offered their website users the opportunity to create their own PSB homepage. The web customisation was conceived by the editors as a response to developments in commercial web services, particularly social networking and content aggregation services, but the customisation projects revealed tensions between the ideals of customer sovereignty and the editorial agenda-setting. This paper presents an overview of the PSB activities as well as reflections on the failure of the customisable PSB homepages. The analysis is based on interviews with the PSB editors involved in the projects and on studies of the interfaces and user comments. Commercial media customisation is discussed along with the PSB projects to identify similarities and differences.

  1. Bad on the net, or bipolars' lives on the web: analyzing discussion web pages for individuals with bipolar affective disorder.

    Science.gov (United States)

    Latalova, Klara; Prasko, Jan; Kamaradova, Dana; Ivanova, Katerina; Jurickova, Lubica

    2014-01-01

    The main therapeutic approach in the treatment of bipolar affective disorder is the administration of drugs. The effectiveness of this approach can be increased by specific psychotherapeutic interventions. Not much is known about self-help initiatives in this field. Anonymous internet communication may be beneficial, regardless of the fact that it is non-professional. It offers a chance to confide and share symptoms with other patients, to open up for persons with feelings of shame, and to obtain relevant information without having direct contact with an expert. A qualitative analysis of Czech-language web discussions used by patients with bipolar disorder was performed. Using the key words "diskuze" (discussion), "maniodeprese" (manic depression) and "bipolární porucha" (bipolar disorder), 8 discussions were found, but only 3 of them were anonymous and non-professional. Individual discussion entries were analyzed for basic categories and subcategories, and these were subsequently assessed so that their relationships could be better understood. A total of 436 entries from 3 discussion web pages were analyzed. Subsequently, six categories were identified (participant, diagnosis, relationships, communication, topic and treatment), each having 5-12 subcategories. These were analyzed in terms of relationships and patterns. Czech discussion web pages for people suffering from bipolar disorder form a lively community of users supporting each other that may be characterized as a compact body open to newcomers. They seem to fulfill patients' needs that are not fully met by health care services. The community also has a "self-cleaning" ability, effectively dealing with posts that are inappropriate, provocative, criticizing, aggressive or meaningless.

  2. DATA EXTRACTION AND LABEL ASSIGNMENT FOR WEB DATABASES

    OpenAIRE

    T. Rajesh; T. Prathap; S.Naveen Nambi; A.R. Arunachalam

    2015-01-01

    Deep Web contents are accessed via queries submitted to Web databases, and the returned data records are wrapped in dynamically generated Web pages (called deep Web pages in this paper). Extracting structured data from deep Web pages is a challenging problem due to the underlying intricate structures of such pages. Until now, a large number of techniques have been proposed to address this problem, but all of them have limitations because they are Web-page-programming...

  3. Monte Carlo methods in PageRank computation: When one iteration is sufficient

    NARCIS (Netherlands)

    Avrachenkov, K.; Litvak, Nelli; Nemirovsky, D.; Osipova, N.

    2005-01-01

    PageRank is one of the principle criteria according to which Google ranks Web pages. PageRank can be interpreted as a frequency of visiting a Web page by a random surfer and thus it reflects the popularity of a Web page. Google computes the PageRank using the power iteration method which requires
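
    The Monte Carlo idea itself fits in a short sketch (a generic random-surfer estimator, not the authors' code): run a handful of walks from every page and estimate PageRank from visit frequencies.

    ```python
    import random
    from collections import Counter

    def mc_pagerank(graph, d=0.85, walks_per_node=100, seed=0):
        """Estimate PageRank as the visit frequency of random surfers."""
        rng = random.Random(seed)
        visits = Counter()
        for start in graph:
            for _ in range(walks_per_node):
                node = start
                while True:
                    visits[node] += 1
                    # teleport (end the walk) with probability 1 - d,
                    # or always at a dangling node
                    if not graph[node] or rng.random() > d:
                        break
                    node = rng.choice(graph[node])
        total = sum(visits.values())
        return {n: visits[n] / total for n in graph}

    web = {"a": ["b", "c"], "b": ["a"], "c": ["b"]}
    print(mc_pagerank(web))
    ```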

  4. Monte Carlo methods in PageRank computation: When one iteration is sufficient

    NARCIS (Netherlands)

    Avrachenkov, K.; Litvak, Nelli; Nemirovsky, D.; Osipova, N.

    PageRank is one of the principle criteria according to which Google ranks Web pages. PageRank can be interpreted as a frequency of visiting a Web page by a random surfer, and thus it reflects the popularity of a Web page. Google computes the PageRank using the power iteration method, which requires

  5. Web Based Project Management System

    OpenAIRE

    Aadamsoo, Anne-Mai

    2010-01-01

    To increase efficiency, many web development companies nowadays use project management systems. A company may run a number of projects at a time and require input from a number of individuals or teams for a multi-level development plan, whereby a good project management system is needed. Project management systems represent a rapidly growing technology in the IT industry. As the number of users who utilize project management applications continues to grow, w...

  6. Personal and Public Start Pages in a library setting

    NARCIS (Netherlands)

    Kieft-Wondergem, Dorine

    Personal and Public Start Pages are web-based resources. With this kind of tool it is possible to make your own free start page. A Start Page allows you to put all your web resources into one page, including blogs, email, podcasts, and RSS feeds. It is possible to share the content of the page with

  7. [Analysis of the web pages of the intensive care units of Spain].

    Science.gov (United States)

    Navarro-Arnedo, J M

    2009-01-01

    In order to determine which Intensive Care Units (ICUs) of Spanish hospitals had a web site, to analyze the information they offered, and to establish what information they should offer according to a sample of ICU nurses, a cross-sectional, observational, descriptive study was carried out between January and September 2008. For each ICU website, an analysis was made of the information available on the unit and its care, teaching and research activity, as well as on nursing. Simultaneously, based on a sample of intensive care nurses, the information that should be contained on an ICU website was determined. The results, expressed in absolute numbers and percentages, showed that 66 of the 292 hospitals with an ICU (22.6%) had a web site; 50.7% of the sites showed the number of beds, 19.7% the activity report, 11.3% the published articles/studies and research lines followed, and 9.9% the organized training courses. Fourteen sites (19.7%) displayed images of nurses. However, only 1 (1.4%) offered guides on the procedures followed. No web site offered a navigation section for nursing, the e-mail address of the head nurse, the nursing documentation used, or whether a nursing model of their own was used. It is concluded that only one-fourth of Spanish hospitals with an ICU have a web site; the number of beds was the item most often offered, whereas information on care, teaching and research activities was very limited, and information on nursing was practically omitted from the web pages of intensive care units.

  8. [Improving vaccination social marketing by monitoring the web].

    Science.gov (United States)

    Ferro, A; Bonanni, P; Castiglia, P; Montante, A; Colucci, M; Miotto, S; Siddu, A; Murrone, L; Baldo, V

    2014-01-01

    Immunisation is one of the most important and cost-effective interventions in Public Health because of its significant positive impact on population health. However, since Jenner's discovery there has always been a lively debate between supporters and opponents of vaccination; today the antivaccination movement spreads its message mostly on the web, disseminating inaccurate data through blogs and forums and increasing vaccine rejection. In this context, the Società Italiana di Igiene (SItI) created a web project in order to fight misinformation on the web regarding vaccinations through a series of information tools, including scientific articles, educational information, and video and multimedia presentations. The web portal (http://www.vaccinarsi.org) was published in May 2013 and over one hundred web pages related to vaccinations are already available. Recently a forum, a periodic newsletter and a Twitter page have been created. There has been an average of 10,000 hits per month. Currently our users are mostly healthcare professionals. The visibility of the site is very good and it currently ranks first in Google's search engine for the word "vaccinarsi". The results of the first four months of activity are extremely encouraging and show the importance of this project; furthermore, an application for quality certification by independent international organizations has been submitted.

  9. PSB goes personal: The failure of personalised PSB web pages

    OpenAIRE

    Jannick Kirk Sørensen

    2013-01-01

    Between 2006 and 2011, a number of European public service broadcasting (PSB) organisations offered their website users the opportunity to create their own PSB homepage. The web customisation was conceived by the editors as a response to developments in commercial web services, particularly social networking and content aggregation services, but the customisation projects revealed tensions between the ideals of customer sovereignty and the editorial agenda-setting. This paper presents an over...

  10. The Importance of Prior Probabilities for Entry Page Search

    NARCIS (Netherlands)

    Kraaij, W.; Westerveld, T.H.W.; Hiemstra, Djoerd

    An important class of searches on the world-wide-web has the goal of finding an entry page (homepage) of an organisation. Entry page search is quite different from Ad Hoc search. Indeed, a plain Ad Hoc system performs disappointingly. We explored three non-content features of web pages: page length,

  11. The sources and popularity of online drug information: an analysis of top search engine results and web page views.

    Science.gov (United States)

    Law, Michael R; Mintzes, Barbara; Morgan, Steven G

    2011-03-01

    The Internet has become a popular source of health information. However, there is little information on what drug information and which Web sites are being searched. To investigate the sources of online information about prescription drugs by assessing the most common Web sites returned in online drug searches and to assess the comparative popularity of Web pages for particular drugs. This was a cross-sectional study of search results for the most commonly dispensed drugs in the US (n=278 active ingredients) on 4 popular search engines: Bing, Google (both US and Canada), and Yahoo. We determined the number of times a Web site appeared as the first result. A linked retrospective analysis counted Wikipedia page hits for each of these drugs in 2008 and 2009. About three quarters of the first result on Google USA for both brand and generic names linked to the National Library of Medicine. In contrast, Wikipedia was the first result for approximately 80% of generic name searches on the other 3 sites. On these other sites, over two thirds of brand name searches led to industry-sponsored sites. The Wikipedia pages with the highest number of hits were mainly for opiates, benzodiazepines, antibiotics, and antidepressants. Wikipedia and the National Library of Medicine rank highly in online drug searches. Further, our results suggest that patients most often seek information on drugs with the potential for dependence, for stigmatized conditions, that have received media attention, and for episodic treatments. Quality improvement efforts should focus on these drugs.

  12. Microsoft Expression Web for dummies

    CERN Document Server

    Hefferman, Linda

    2013-01-01

    Expression Web is Microsoft's newest tool for creating and maintaining dynamic Web sites. This FrontPage replacement offers all the simple "what-you-see-is-what-you-get" tools for creating a Web site along with some pumped up new features for working with Cascading Style Sheets and other design options. Microsoft Expression Web For Dummies arrives in time for early adopters to get a feel for how to build an attractive Web site. Author Linda Hefferman teams up with longtime FrontPage For Dummies author Asha Dornfest to show the easy way for first-time Web designers, FrontPage ve

  13. Designing and Implementing a Unique Website Design Project in an Undergraduate Course

    Science.gov (United States)

    Kontos, George

    2016-01-01

    The following paper describes a distinctive collaborative service-learning project done in an undergraduate class on web design. In this project, students in a web design class contacted local community non-profit organizations to create websites (collections of web pages) to benefit these organizations. The two phases of creating a website,…

  14. An Improved Approach to the PageRank Problems

    Directory of Open Access Journals (Sweden)

    Yue Xie

    2013-01-01

    We introduce a partition of the web pages particularly suited to the PageRank problems in which the web link graph has a nested block structure. Based on the partition of the web pages into dangling nodes, common nodes, and general nodes, the hyperlink matrix can be reordered to have a simpler block structure. Then, based on a parallel computation method, we propose an algorithm for the PageRank problems. In this algorithm, the dimension of the linear system becomes smaller, and the vector for general nodes in each block can be calculated separately in every iteration. Numerical experiments show that this approach speeds up the computation of PageRank.

  15. A construction scheme of web page comment information extraction system based on frequent subtree mining

    Science.gov (United States)

    Zhang, Xiaowen; Chen, Bingfeng

    2017-08-01

    Based on a frequent subtree mining algorithm, this paper proposes a construction scheme for a web page comment information extraction system, referred to as the FSM system. The paper briefly introduces the overall system architecture and its modules, then describes the core of the system in detail, and finally presents a system prototype.

  16. Web wisdom how to evaluate and create information quality on the Web

    CERN Document Server

    Alexander, Janet E

    1999-01-01

    Web Wisdom is an essential reference for anyone needing to evaluate or establish information quality on the World Wide Web. The book includes easy to use checklists for step-by-step quality evaluations of virtually any Web page. The checklists can also be used by Web authors to help them ensure quality information on their pages. In addition, Web Wisdom addresses other important issues, such as understanding the ways that advertising and sponsorship may affect the quality of Web information. It features: * a detailed discussion of the items involved in evaluating Web information; * checklists

  17. Using Power-Law Degree Distribution to Accelerate PageRank

    Directory of Open Access Journals (Sweden)

    Zhaoyan Jin

    2012-12-01

    Full Text Available The PageRank vector of a network is very important, for it can reflect the importance of a Web page in the World Wide Web, or of a person in a social network. However, with the growth of the World Wide Web and social networks, it needs more and more time to compute the PageRank vector of a network. In many real-world applications, the degree and PageRank distributions of these complex networks conform to the Power-Law distribution. This paper utilizes the degree distribution of a network to initialize its PageRank vector, and presents a Power-Law degree distribution accelerating algorithm for PageRank computation. Experiments on four real-world datasets show that the proposed algorithm converges more quickly than the original PageRank algorithm.
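
    The paper's core idea, seeding the power iteration with the degree distribution instead of the uniform vector, can be sketched in a few lines. The following NumPy snippet is a hedged illustration on a random test graph (the graph generator, tolerance and damping factor are assumptions, not the paper's setup); it simply counts iterations under the two initializations.

    ```python
    import numpy as np

    def power_iterations(P, dangling, r0, alpha=0.85, tol=1e-9):
        """Count iterations until the PageRank power method converges from r0."""
        n = len(r0)
        r, it = r0.copy(), 0
        while True:
            it += 1
            r_next = alpha * (r @ P + r[dangling].sum() / n) + (1 - alpha) / n
            if np.abs(r_next - r).sum() < tol:
                return r_next, it
            r = r_next

    # Random graph with heterogeneous in-degrees (illustrative only).
    rng = np.random.default_rng(0)
    n = 500
    A = (rng.random((n, n)) < rng.power(1.7, n)[None, :] * 0.05).astype(float)
    out = A.sum(axis=1)
    dangling = out == 0
    P = np.divide(A, out[:, None], out=np.zeros_like(A), where=out[:, None] > 0)

    uniform = np.full(n, 1.0 / n)
    in_deg = A.sum(axis=0)
    degree_init = in_deg / in_deg.sum()      # the paper's idea: seed with degrees

    _, it_u = power_iterations(P, dangling, uniform)
    _, it_d = power_iterations(P, dangling, degree_init)
    print(f"uniform start: {it_u} iterations, degree start: {it_d} iterations")
    ```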

  18. MANAGEMENT CONSULTANCIES DISCURSIVE CONSTRUCTION OF WORK-LIFE BALANCE : A DISCOURSE ANALYSIS OF WEB PAGES

    OpenAIRE

    Bergqvist, Sofie; Vestin, Mikaela

    2014-01-01

    Academics, practitioners and media agree that the topic of work-life balance is on the agenda and valued by the new business generation. Although Sweden might be considered a work-friendly country, the management consultancy industry is not recognized as such. Taking an institutional perspective, we investigate, through a discourse analysis, the communication on Swedish management consultancies' web pages in order to explore how consultancies relate to the work-life balance discourse...

  19. The ICAP (Interactive Course Assignment Pages Publishing System

    Directory of Open Access Journals (Sweden)

    Kim Griggs

    2008-03-01

    Full Text Available The ICAP publishing system is an open source custom content management system that enables librarians to easily and quickly create and manage library help pages for course assignments (ICAPs), without requiring knowledge of HTML or other web technologies. The system's unique features include an emphasis on collaboration and content reuse and an easy-to-use interface that includes in-line help, simple forms and drag-and-drop functionality. The system generates dynamic, attractive course assignment pages that blend Web 2.0 features with traditional library resources, and makes the pages easier to find by providing a central web page for the course assignment pages. As of December 2007, the code is available as free, open-source software under the GNU General Public License.

  20. The Top 100 Linked-To Pages on UK University Web Sites: High Inlink Counts Are Not Usually Associated with Quality Scholarly Content.

    Science.gov (United States)

    Thelwall, Mike

    2002-01-01

    Reports on an investigation into the most highly linked pages on United Kingdom university Web sites. Concludes that simple link counts are highly unreliable indicators of the average behavior of scholars, and that the most highly linked-to pages are those that facilitate access to a wide range of information rather than providing specific…

  1. Analysis of co-occurrence toponyms in web pages based on complex networks

    Science.gov (United States)

    Zhong, Xiang; Liu, Jiajun; Gao, Yong; Wu, Lun

    2017-01-01

    A large number of geographical toponyms exist in web pages and other documents, providing abundant geographical resources for GIS. It is very common for toponyms to co-occur in the same documents. To investigate these relations associated with geographic entities, a novel complex network model for co-occurrence toponyms is proposed. Then, 12 toponym co-occurrence networks are constructed from the toponym sets extracted from the People's Daily Paper documents of 2010. It is found that two toponyms have a high co-occurrence probability if they are at the same administrative level or if they possess a part-whole relationship. By applying complex network analysis methods to toponym co-occurrence networks, we find the following characteristics. (1) The navigation vertices of the co-occurrence networks can be found by degree centrality analysis. (2) The networks express strong cluster characteristics, and it takes only several steps to reach one vertex from another one, implying that the networks are small-world graphs. (3) The degree distribution satisfies the power law with an exponent of 1.7, so the networks are scale-free. (4) The networks are disassortative and have similar assortative modes, with assortative exponents of approximately 0.18 and assortative indexes less than 0. (5) The frequency of toponym co-occurrence is weakly negatively correlated with geographic distance, but more strongly negatively correlated with administrative hierarchical distance. Considering the toponym frequencies and co-occurrence relationships, a novel method based on link analysis is presented to extract the core toponyms from web pages. This method is suitable and effective for geographical information retrieval.
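
    Several of the measurements listed above (degree centrality, clustering, assortativity) are one-liners once the co-occurrence network is built. A small illustrative sketch using networkx, with made-up toponym sets standing in for the People's Daily corpus:

    ```python
    import itertools
    import networkx as nx

    # Toy documents: each entry is the set of toponyms extracted from one page
    # (hypothetical data, not the paper's corpus).
    docs = [
        {"Beijing", "Shanghai", "China"},
        {"Beijing", "Haidian", "China"},
        {"Shanghai", "Pudong"},
        {"Beijing", "Shanghai"},
    ]

    G = nx.Graph()
    for toponyms in docs:
        for a, b in itertools.combinations(sorted(toponyms), 2):
            w = G.get_edge_data(a, b, {"weight": 0})["weight"]
            G.add_edge(a, b, weight=w + 1)   # co-occurrence frequency as weight

    # The analyses named in the abstract:
    print(nx.degree_centrality(G))                   # (1) navigation vertices
    print(nx.average_clustering(G))                  # (2) cluster characteristics
    print(nx.degree_assortativity_coefficient(G))    # (4) assortativity
    ```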

  2. Sign Language Web Pages

    Science.gov (United States)

    Fels, Deborah I.; Richards, Jan; Hardman, Jim; Lee, Daniel G.

    2006-01-01

    The World Wide Web has changed the way people interact. It has also become an important equalizer of information access for many social sectors. However, for many people, including some sign language users, Web accessing can be difficult. For some, it not only presents another barrier to overcome but has left them without cultural equality. The…

  3. Adaptive web data extraction policies

    Directory of Open Access Journals (Sweden)

    Provetti, Alessandro

    2008-12-01

    Full Text Available Web data extraction is concerned, among other things, with routine data accessing and downloading from continuously-updated dynamic Web pages. There is a relevant trade-off between the rate at which the external Web sites are accessed and the computational burden on the accessing client. We address the problem by proposing a predictive model, typical of the Operating Systems literature, of the rate-of-update of each Web source. The presented model has been implemented into a new version of the Dynamo project: a middleware that assists in generating informative RSS feeds out of traditional HTML Web sites. To be effective (i.e., to make RSS feeds timely and informative) and scalable, Dynamo needs careful tuning and customization of its polling policies, which are described in detail.

  4. Quality of drug information on the World Wide Web and strategies to improve pages with poor information quality. An intervention study on pages about sildenafil.

    Science.gov (United States)

    Martin-Facklam, Meret; Kostrzewa, Michael; Martin, Peter; Haefeli, Walter E

    2004-01-01

    The generally poor quality of health information on the world wide web (WWW) has caused preventable adverse outcomes. Quality management of information on the internet is therefore critical given its widespread use. In order to develop strategies for the safe use of drugs, we scored the general and content quality of pages about sildenafil and performed an intervention to improve their quality. The internet was searched with Yahoo and AltaVista for pages about sildenafil, and 303 pages were included. For the assessment of content quality, a score based on the accuracy and completeness of essential drug information was assigned. For the assessment of general quality, four criteria were evaluated and their association with high content quality was determined by multivariate logistic regression analysis. The pages were randomly allocated to either a control or an intervention group. Evaluation took place before, as well as 7 and 22 weeks after, an intervention which consisted of two letters containing individualized feedback on the respective page, sent electronically to the address mentioned on the page. Providing references to scientific publications or prescribing information was significantly associated with high content quality (odds ratio: 8.2, 95% CI 3.2, 20.5). The intervention had no influence on general or content quality. To prevent adverse outcomes caused by misinformation on the WWW, individualized feedback to the address mentioned on the page was ineffective. Currently, the most straightforward approach is probably to inform lay persons about indicators of high information quality, i.e. the provision of references.

  5. When the Web meets the cell: using personalized PageRank for analyzing protein interaction networks.

    Science.gov (United States)

    Iván, Gábor; Grolmusz, Vince

    2011-02-01

    Enormous and constantly increasing quantity of biological information is represented in metabolic and in protein interaction network databases. Most of these data are freely accessible through large public depositories. The robust analysis of these resources needs novel technologies, being developed today. Here we demonstrate a technique, originating from the PageRank computation for the World Wide Web, for analyzing large interaction networks. The method is fast, scalable and robust, and its capabilities are demonstrated on metabolic network data of the tuberculosis bacterium and the proteomics analysis of the blood of melanoma patients. The Perl script for computing the personalized PageRank in protein networks is available for non-profit research applications (together with sample input files) at the address: http://uratim.com/pp.zip.
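
    The personalization described here corresponds to biasing the random walk's restart distribution toward a set of seed proteins. A minimal sketch with networkx (placeholder protein names and seed choice; the paper's own Perl script is referenced above):

    ```python
    import networkx as nx

    # Toy protein interaction network; in the paper this would come from a
    # metabolic or proteomics database (names here are placeholders).
    G = nx.Graph([("P1", "P2"), ("P2", "P3"), ("P3", "P4"),
                  ("P1", "P4"), ("P4", "P5")])

    # Personalized PageRank: restarts are biased toward proteins of interest
    # (e.g., those measured in the blood samples of the study).
    seeds = {node: 0.0 for node in G}
    seeds["P1"] = 1.0
    ppr = nx.pagerank(G, alpha=0.85, personalization=seeds)
    print(sorted(ppr.items(), key=lambda kv: -kv[1]))
    ```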

  6. Web server for priority ordered multimedia services

    Science.gov (United States)

    Celenk, Mehmet; Godavari, Rakesh K.; Vetnes, Vermund

    2001-10-01

    In this work, our aim is to provide finer priority levels in the design of a general-purpose Web multimedia server with provisions for CM services. The types of services provided include reading/writing a web page, downloading/uploading an audio/video stream, navigating the Web through browsing, and interactive video teleconferencing. The selected priority encoding levels for such operations follow the order of admin read/write, hot page CM and Web multicasting, CM read, Web read, CM write and Web write. Hot pages are the most requested CM streams (e.g., the newest movies, video clips, and HDTV channels) and Web pages (e.g., portal pages of the commercial Internet search engines). Maintaining a list of these hot Web pages and CM streams in a content addressable buffer enables a server to multicast hot streams with lower latency and higher system throughput. Cold Web pages and CM streams are treated as regular Web and CM requests. Interactive CM operations such as pause (P), resume (R), fast-forward (FF), and rewind (RW) have to be executed without the allocation of extra resources. The proposed multimedia server model is a part of a distributed network with load-balancing schedulers. The SM is connected to an integrated disk scheduler (IDS), which supervises an allocated disk manager. The IDS follows the same priority handling as the SM, and implements a SCAN disk-scheduling method for improved disk access and higher throughput. Different disks are used for the Web and CM services in order to meet the QoS requirements of CM services. The IDS output is forwarded to an Integrated Transmission Scheduler (ITS). The ITS creates a priority-ordered buffering of the retrieved Web pages and CM data streams that are fed into an autoregressive moving average (ARMA) based traffic-shaping circuitry before being transmitted through the network.
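
    The fixed priority ordering described above maps naturally onto a priority queue. A minimal Python sketch of such a scheduler, using the six levels from the abstract (the request strings are illustrative):

    ```python
    import heapq
    import itertools

    # Priority encoding levels from the abstract (lower number = served first).
    PRIORITY = {
        "admin_rw": 0, "hot_cm_multicast": 1, "cm_read": 2,
        "web_read": 3, "cm_write": 4, "web_write": 5,
    }

    counter = itertools.count()   # tie-breaker keeps FIFO order within a level
    queue = []

    def submit(kind, request):
        heapq.heappush(queue, (PRIORITY[kind], next(counter), request))

    def serve():
        _, _, request = heapq.heappop(queue)
        return request

    submit("web_read", "GET /index.html")
    submit("admin_rw", "PUT /config")
    submit("cm_read", "STREAM movie.mp4")
    print([serve() for _ in range(3)])   # admin first, then CM read, then Web read
    ```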

  7. Research of Subgraph Estimation Page Rank Algorithm for Web Page Rank

    Directory of Open Access Journals (Sweden)

    LI Lan-yin

    2017-04-01

    Full Text Available The traditional PageRank algorithm cannot efficiently handle the large-scale web page ranking problem. This paper proposes an accelerated algorithm named topK-Rank, which is based on PageRank on the MapReduce platform. It can find the top k nodes efficiently for a given graph without sacrificing accuracy. In order to identify the top k nodes, the topK-Rank algorithm prunes unnecessary nodes and edges in each iteration to dynamically construct subgraphs, and iteratively estimates lower/upper bounds of PageRank scores through the subgraphs. Theoretical analysis shows that this method guarantees result exactness. Experiments show that the topK-Rank algorithm can find k nodes much faster than the existing approaches.
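
    The published algorithm's per-subgraph bound estimation is more elaborate than can be reproduced from the abstract alone; the following NumPy sketch conveys the general idea using a single global L1 error bound to derive per-node score intervals, stopping as soon as the top-k set is separated. It is a simplification, not the paper's method (and assumes k is smaller than the number of nodes, with no exact score ties).

    ```python
    import numpy as np

    def topk_rank(P, dangling, k, alpha=0.85):
        """Return top-k node indices once PageRank score intervals separate them.

        After each power iteration, |r_t - r*|_1 <= |r_{t+1} - r_t|_1 / (1 - alpha)
        gives every node a lower/upper score interval; we stop as soon as the
        k best lower bounds beat every other node's upper bound.
        """
        n = P.shape[0]
        r = np.full(n, 1.0 / n)
        while True:
            r_next = alpha * (r @ P + r[dangling].sum() / n) + (1 - alpha) / n
            eps = np.abs(r_next - r).sum() / (1 - alpha)   # global error bound
            order = np.argsort(r_next)[::-1]
            top, rest = order[:k], order[k:]
            if (r_next[top] - eps).min() >= (r_next[rest] + eps).max():
                return top
            r = r_next

    # Tiny demo graph.
    A = np.array([[0, 1, 1, 0], [1, 0, 1, 0], [0, 0, 0, 1], [1, 0, 0, 0]], float)
    out = A.sum(axis=1)
    dang = out == 0
    P = np.divide(A, out[:, None], out=np.zeros_like(A), where=out[:, None] > 0)
    print(topk_rank(P, dang, k=2))
    ```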

  8. Web technology for emergency medicine and secure transmission of electronic patient records.

    Science.gov (United States)

    Halamka, J D

    1998-01-01

    The American Heritage dictionary defines the word "web" as "something intricately contrived, especially something that ensnares or entangles." The wealth of medical resources on the World Wide Web is now so extensive, yet disorganized and unmonitored, that such a definition seems fitting. In emergency medicine, for example, a field in which accurate and complete information, including patients' records, is urgently needed, more than 5000 Web pages are available today, whereas fewer than 50 were available in December 1994. Most sites are static Web pages using the Internet to publish textbook material, but new technology is extending the scope of the Internet to include online medical education and secure exchange of clinical information. This article lists some of the best Web sites for use in emergency medicine and then describes a project in which the Web is used for transmission and protection of electronic medical records.

  9. CrazyEgg Reports for Single Page Analysis

    Science.gov (United States)

    CrazyEgg provides an in depth look at visitor behavior on one page. While you can use GA to do trend analysis of your web area, CrazyEgg helps diagnose the design of a single Web page by visually displaying all visitor clicks during a specified time.

  10. Association and Sequence Mining in Web Usage

    Directory of Open Access Journals (Sweden)

    Claudia Elena DINUCA

    2011-06-01

    Full Text Available Web servers worldwide generate a vast amount of information on web users' browsing activities. Several researchers have studied these so-called clickstream or web access log data to better understand and characterize web users. Clickstream data can be enriched with information about the content of visited pages and the origin (e.g., geographic, organizational) of the requests. The goal of this project is to analyse user behaviour by mining enriched web access log data. With the continued growth and proliferation of e-commerce, Web services, and Web-based information systems, the volumes of clickstream and user data collected by Web-based organizations in their daily operations have reached astronomical proportions. This information can be exploited in various ways, such as enhancing the effectiveness of websites or developing directed web marketing campaigns. The discovered patterns are usually represented as collections of pages, objects, or resources that are frequently accessed by groups of users with common needs or interests. The focus of this paper is to provide an overview of how to use frequent pattern techniques for discovering different types of patterns in a Web log database. In this paper we focus on finding associations as a data mining technique to extract potentially useful knowledge from web usage data. I implemented in Java, using the NetBeans IDE, a program for identifying page associations from sessions. For exemplification, we used the log files from a commercial web site.
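
    In place of the Java/NetBeans implementation mentioned in the abstract, here is a hedged Python sketch of the underlying idea: count, per session, which page pairs co-occur, and keep those above a minimum support (toy sessions and an illustrative threshold):

    ```python
    from collections import Counter
    from itertools import combinations

    # Sessions reconstructed from a web access log (toy data; in the paper
    # these come from a commercial site's log files).
    sessions = [
        ["/home", "/products", "/cart"],
        ["/home", "/products", "/faq"],
        ["/products", "/cart", "/checkout"],
        ["/home", "/faq"],
    ]

    pair_counts = Counter()
    for s in sessions:
        for pair in combinations(sorted(set(s)), 2):   # one count per session
            pair_counts[pair] += 1

    min_support = 2   # assumed threshold
    rules = {p: c / len(sessions) for p, c in pair_counts.items() if c >= min_support}
    print(rules)      # page pairs frequently visited together, with support
    ```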

  11. The ADAM project: a generic web interface for retrieval and display of ATLAS TDAQ information

    International Nuclear Information System (INIS)

    Harwood, A; Miotto, G Lehmann; Magnoni, L; Vandelli, W; Savu, D

    2012-01-01

    This paper describes a new approach to the visualization of information about the operation of the ATLAS Trigger and Data Acquisition system. ATLAS is one of the two general purpose detectors positioned along the Large Hadron Collider at CERN. Its data acquisition system consists of several thousand computers interconnected via multiple gigabit Ethernet networks that are constantly monitored via different tools. Operational parameters ranging from the temperature of the computers to the network utilization are stored in several databases for later analysis. Although the ability to view these data-sets individually is already in place, currently there is no way to view this data together, in a uniform format, from one location. The ADAM project has been launched in order to overcome this limitation. It defines a uniform web interface to collect data from multiple providers that have different structures. It is capable of aggregating and correlating the data according to user defined criteria. Finally, it visualizes the collected data using a flexible and interactive front-end web system. Structurally, the project comprises 3 main levels of the data collection cycle: The Level 0 represents the information sources within ATLAS. These providers do not store information in a uniform fashion. The first step of the project was to define a common interface with which to expose stored data. The interface designed for the project originates from the Google Data Protocol API. The idea is to allow read-only access to data providers, through HTTP requests similar in format to the SQL query structure. This provides a standardized way to access these different information sources within ATLAS. The Level 1 can be considered the engine of the system. The primary task of the Level 1 is to gather data from multiple data sources via the common interface, to correlate this data together, or over a defined time series, and expose the combined data as a whole to the Level 2 web

  12. The ADAM project: a generic web interface for retrieval and display of ATLAS TDAQ information

    Science.gov (United States)

    Harwood, A.; Lehmann Miotto, G.; Magnoni, L.; Vandelli, W.; Savu, D.

    2012-06-01

    This paper describes a new approach to the visualization of information about the operation of the ATLAS Trigger and Data Acquisition system. ATLAS is one of the two general purpose detectors positioned along the Large Hadron Collider at CERN. Its data acquisition system consists of several thousand computers interconnected via multiple gigabit Ethernet networks that are constantly monitored via different tools. Operational parameters ranging from the temperature of the computers to the network utilization are stored in several databases for later analysis. Although the ability to view these data-sets individually is already in place, currently there is no way to view this data together, in a uniform format, from one location. The ADAM project has been launched in order to overcome this limitation. It defines a uniform web interface to collect data from multiple providers that have different structures. It is capable of aggregating and correlating the data according to user defined criteria. Finally, it visualizes the collected data using a flexible and interactive front-end web system. Structurally, the project comprises 3 main levels of the data collection cycle: The Level 0 represents the information sources within ATLAS. These providers do not store information in a uniform fashion. The first step of the project was to define a common interface with which to expose stored data. The interface designed for the project originates from the Google Data Protocol API. The idea is to allow read-only access to data providers, through HTTP requests similar in format to the SQL query structure. This provides a standardized way to access these different information sources within ATLAS. The Level 1 can be considered the engine of the system. The primary task of the Level 1 is to gather data from multiple data sources via the common interface, to correlate this data together, or over a defined time series, and expose the combined data as a whole to the Level 2 web
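
    The abstract describes read-only HTTP access with an SQL-like query structure but gives no concrete URL scheme, so the endpoint and parameter names below are hypothetical. A sketch of what a Level-0 provider query might look like from a Python client:

    ```python
    import requests

    # Hypothetical provider endpoint and parameter names; the paper describes
    # the style (read-only HTTP, SQL-like query structure) but not the scheme.
    BASE = "https://adam.example.cern.ch/provider/farm-monitoring"

    params = {
        "select": "hostname,cpu_temperature,network_utilisation",
        "where":  "rack='TDAQ-07'",
        "from":   "2012-06-01T00:00:00",
        "to":     "2012-06-01T23:59:59",
    }

    resp = requests.get(BASE, params=params, timeout=10)
    resp.raise_for_status()
    for row in resp.json()["rows"]:     # assumed response envelope
        print(row)
    ```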

  13. EPA Web Training Classes

    Science.gov (United States)

    Scheduled webinars can help you better manage EPA web content. Class topics include Drupal basics, creating different types of pages in the WebCMS such as document pages and forms, using Google Analytics, and best practices for metadata and accessibility.

  14. Introduction to Webometrics Quantitative Web Research for the Social Sciences

    CERN Document Server

    Thelwall, Michael

    2009-01-01

    Webometrics is concerned with measuring aspects of the web: web sites, web pages, parts of web pages, words in web pages, hyperlinks, web search engine results. The importance of the web itself as a communication medium and for hosting an increasingly wide array of documents, from journal articles to holiday brochures, needs no introduction. Given this huge and easily accessible source of information, there are limitless possibilities for measuring or counting on a huge scale (e.g., the number of web sites, the number of web pages, the number of blogs) or on a smaller scale (e.g., the number o

  15. On Page Rank

    NARCIS (Netherlands)

    Hoede, C.

    In this paper the concept of page rank for the world wide web is discussed. The possibility of describing the distribution of page rank by an exponential law is considered. It is shown that the concept is essentially equal to that of status score, a centrality measure discussed already in 1953 by
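
    For reference, the standard textbook form of the PageRank recurrence under discussion (not quoted from the paper):

    ```latex
    % Standard PageRank recurrence: d is the damping factor, N the number of
    % pages, B(i) the set of pages linking to i, and L(j) the out-degree of j.
    \[
      PR(i) \;=\; \frac{1-d}{N} \;+\; d \sum_{j \in B(i)} \frac{PR(j)}{L(j)}
    \]
    ```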

  16. Improving Interdisciplinary Provider Communication Through a Unified Paging System.

    Science.gov (United States)

    Heidemann, Lauren; Petrilli, Christopher; Gupta, Ashwin; Campbell, Ian; Thompson, Maureen; Cinti, Sandro; Stewart, David A

    2016-06-01

    Interdisciplinary communication at a Veterans Affairs (VA) academic teaching hospital is largely dependent on alphanumeric paging, which has limitations as a result of one-way communication and lack of reliable physician identification. Adverse patient outcomes related to difficulty contacting the correct consulting provider in a timely manner have been reported. House officers were surveyed on the level of satisfaction with the current VA communication system and the rate of perceived adverse patient outcomes caused by potential delays within this system. Respondents were then asked to identify the ideal paging system. These results were used to develop and deploy a new Web site. A postimplementation survey was repeated 1 year later. This study was conducted as a quality improvement project. House officer satisfaction with the preintervention system was 3%. The majority used more than four modalities to identify consultants, with 59% stating that word of mouth was a typical source. The preferred mode of paging was the university hospital paging system, a Web-based program that is used at the partnering academic institution. Following integration of VA consulting services within the university hospital paging system, the level of satisfaction improved to 87%. Significant decreases were seen in perceived adverse patient outcomes (from 16% to 2%), delays in patient care (from 90% to 16%), and extended hospitalizations (from 46% to 4%). Our study demonstrates significant improvement in physician satisfaction with a newly implemented paging system that was associated with a decreased perceived number of adverse patient events and delays in care.

  17. Health on the Net Foundation: assessing the quality of health web pages all over the world.

    Science.gov (United States)

    Boyer, Célia; Gaudinat, Arnaud; Baujard, Vincent; Geissbühler, Antoine

    2007-01-01

    The Internet provides a great amount of information and has become one of the most widely used communication media [1]. However, the problem is no longer finding information but assessing the credibility of the publishers as well as the relevance and accuracy of the documents retrieved from the web. This problem is particularly relevant in the medical area, which has a direct impact on the well-being of citizens. In this paper, we assume that the quality of web pages can be controlled, even when a huge amount of documents has to be reviewed. But this must be supported by both specific automatic tools and human expertise. In this context, we present various initiatives of the Health on the Net Foundation informing citizens about the reliability of medical content on the web.

  18. Survey of Techniques for Deep Web Source Selection and Surfacing the Hidden Web Content

    OpenAIRE

    Khushboo Khurana; M.B. Chandak

    2016-01-01

    Large and continuously growing dynamic web content has created new opportunities for large-scale data analysis in recent years. There is a huge amount of information that traditional web crawlers cannot access, since they use a link analysis technique by which only the surface web can be accessed. Traditional search engine crawlers require the web pages to be linked to other pages via hyperlinks, causing a large amount of web data to be hidden from the crawlers. Enormous data is available in...

  19. Food Enterprise Web Design Based on User Experience

    OpenAIRE

    Fei Wang

    2015-01-01

    Excellent food enterprise web design conveys a good visual communication effect through user experience. This study took food enterprise managers and customers as the main operating objects in assessing web page creation; web page design should focus not only on function and work efficiency, but most importantly on the user experience during web page interaction.

  20. Identification of the unidentified deceased and locating next of kin: experience with a UID web site page, Fulton County, Georgia.

    Science.gov (United States)

    Hanzlick, Randy

    2006-06-01

    Medical examiner and coroner offices may face difficulties in trying to achieve identification of deceased persons who are unidentified or in locating next of kin for deceased persons who have been identified. The Fulton County medical examiner (FCME) has an office web site which includes information about unidentified decedents and cases for which next of kin are being sought. Information about unidentified deceased and cases in need of next of kin has been posted on the FCME web site for 3 years and 1 year, respectively. FCME investigators and staff medical examiners were surveyed about the web site's usefulness for making identifications and locating next of kin. No cases were recalled in which the web site led to making an identification. Two cases were reported in which next of kin were located, and another case involved a missing person being ruled out as one of the decedents. The web site page is visited by agencies interested in missing and unidentified persons, and employees do find it useful for follow-up because information about all unidentified decedents is located and easily accessible, electronically, in a single location. Despite low yield in making identifications and locating next of kin, the UID web site is useful in some respects, and there is no compelling reason to discontinue its existence. It is proposed that UID pages on office web sites be divided into "hot" (less than 30 days, for example) and "warm" (31 days to 1 year, for example) cases and that cases older than a year be designated as "cold cases." It is conceivable that all unidentified deceased cases nationally could be placed on a single web site designed for such purposes, to remain in public access until identity is established and confirmed.

  1. ELAN - the web page based information system for emergency preparedness in Germany

    International Nuclear Information System (INIS)

    Zaehringer, M.; Hoebler, Ch.; Bieringer, P.

    2002-01-01

    A plan for a Web-page-based system was developed which compiles all important information in the case of a nuclear emergency with an actual or potential release of radioactivity into the environment. A prototype system providing information from the Federal Ministry for Environment, Nature Conservation and Reactor Safety (BMU) was tested successfully. The implementation at the National Emergency Operations Centre of Switzerland was used as a template. However, further planning takes into account the special conditions of the federal structure in Germany. The main purpose of the system is to compile, clearly arrange, and timely provide on a central server all relevant information from the federal government, the states (Laender), and, if available, from foreign authorities that is needed for decision making. It is envisaged to integrate similar existing systems in some states conceptually and technically. ELAN makes use of standardised and secure web technology. Uploading of information and delivery to national and foreign authorities, international organisations and the public is managed by role-specific access control. (orig.)

  2. World Wide Web Usage Mining Systems and Technologies

    Directory of Open Access Journals (Sweden)

    Wen-Chen Hu

    2003-08-01

    Full Text Available Web usage mining is used to discover interesting user navigation patterns and can be applied to many real-world problems, such as improving Web sites/pages, making additional topic or product recommendations, user/customer behavior studies, etc. This article provides a survey and analysis of current Web usage mining systems and technologies. A Web usage mining system performs five major tasks: (i) data gathering, (ii) data preparation, (iii) navigation pattern discovery, (iv) pattern analysis and visualization, and (v) pattern applications. Each task is explained in detail and its related technologies are introduced. A list of major research systems and projects concerning Web usage mining is also presented, and a summary of Web usage mining is given in the last section.

  3. PACS project management utilizing web-based tools

    Science.gov (United States)

    Patel, Sunil; Levin, Brad; Gac, Robert J., Jr.; Harding, Douglas, Jr.; Chacko, Anna K.; Radvany, Martin; Romlein, John R.

    2000-05-01

    As Picture Archiving and Communications Systems (PACS) implementations become more widespread, the management of deploying large, multi-facility PACS will become a more frequent occurrence. The tools and usability of the World Wide Web to disseminate project management information obviate time, distance, participant availability, and data format constraints, allowing for the effective collection and dissemination of PACS planning and implementation information for a potentially limitless number of concurrent PACS sites. This paper will speak to tools such as (1) a topic-specific discussion board and (2) a 'restricted' Intranet within a 'project' Intranet. We will also discuss project-specific methods currently in use in a leading-edge, regional PACS implementation concerning the sharing of project schedules, physical drawings, images of implementations, site-specific data, point-of-contact lists, project milestones, and a general project overview. The individual benefits realized by the end user from each tool will also be covered. These details will be presented, balanced with a spotlight on communication as a critical component of any project management undertaking. Using today's technology, the web arguably provides the most cost- and resource-effective vehicle to facilitate the broad-based, interactive sharing of project information.

  4. WEB STRUCTURE MINING

    Directory of Open Access Journals (Sweden)

    CLAUDIA ELENA DINUCĂ

    2011-01-01

    Full Text Available The World Wide Web has become one of the most valuable resources for information retrieval and knowledge discovery due to the permanent increase in the amount of data available online. Given the web's dimensions, users easily get lost in its rich hyper structure. The application of data mining methods is the right solution for knowledge discovery on the Web. The knowledge extracted from the Web can be used to improve the performance of Web information retrieval, question answering and Web-based data warehousing. In this paper, I provide an introduction to Web mining categories and I focus on one of these categories: Web structure mining. Web structure mining, one of the three categories of web mining, is a tool used to identify the relationship between Web pages linked by information or direct link connection. It offers information about how different pages are linked together to form this huge web. Web structure mining finds hidden basic structures and uses hyperlinks for more web applications such as web search.

  5. Introduction to the world wide web.

    Science.gov (United States)

    Downes, P K

    2007-05-12

    The World Wide Web used to be nicknamed the 'World Wide Wait'. Now, thanks to high speed broadband connections, browsing the web has become a much more enjoyable and productive activity. Computers need to know where web pages are stored on the Internet, in just the same way as we need to know where someone lives in order to post them a letter. This section explains how the World Wide Web works and how web pages can be viewed using a web browser.

  6. Constructing a web recommender system using web usage mining and user’s profiles

    Directory of Open Access Journals (Sweden)

    T. Mombeini

    2014-12-01

    Full Text Available The World Wide Web is a great source of information, which is nowadays being widely used due to the availability of useful, dynamically changing information. However, the large number of webpages often confuses many users, and it is hard for them to find information matching their interests. Therefore, it is necessary to provide a system capable of guiding users towards their desired choices and services. Recommender systems search among a large collection of user interests and recommend those which are likely to be favored the most by the user. Web usage mining was designed to operate on web server logs, which record users' browsing activity. Therefore, recommender servers use web usage mining techniques to predict users' browsing patterns and recommend those patterns in the form of a suggestion list. In this article, a recommender system based on the web usage mining phases (online and offline) was proposed. In the offline phase, the first step is to analyze user access records to identify user sessions. Next, user profiles are built from the server records based on the frequency of access to pages, the time spent by the user on each page, and the date of page views. The date is of importance since users are more likely to request new pages than old ones, and old pages are less likely to be viewed, as users mostly look for new information. Following the creation of user profiles, users are grouped into clusters using the Fuzzy C-means clustering algorithm and the S(c) criterion, based on their similarities. In the online phase, a neural network is offered to identify the suggested model, while online suggestions are generated using the suggestion module for the active user. Search engines analyze suggestion lists based on the rate of user interest in pages and page rank, and finally suggest appropriate pages to the active user. Experiments show that the proposed method of predicting user recent requested pages has more accuracy and
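
    The offline phase's clustering step uses Fuzzy C-means, which assigns each user a degree of membership in every cluster rather than a hard label. A compact NumPy sketch (the profile matrix is toy data, and the paper's S(c) criterion for choosing the number of clusters is not reproduced here):

    ```python
    import numpy as np

    def fuzzy_c_means(X, c, m=2.0, iters=100, seed=0):
        """Plain NumPy fuzzy c-means, as used in the paper's offline phase."""
        rng = np.random.default_rng(seed)
        U = rng.random((len(X), c))
        U /= U.sum(axis=1, keepdims=True)        # fuzzy memberships, rows sum to 1
        for _ in range(iters):
            Um = U ** m
            centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
            d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
            U = 1.0 / d ** (2 / (m - 1))
            U /= U.sum(axis=1, keepdims=True)
        return centers, U

    # Toy user profiles: one row per user, one column per page; entries weight
    # access frequency by time-on-page and recency, as the abstract describes.
    profiles = np.array([
        [0.9, 0.8, 0.0, 0.1],
        [0.8, 0.9, 0.1, 0.0],
        [0.0, 0.1, 0.9, 0.8],
        [0.1, 0.0, 0.8, 0.9],
    ])
    centers, memberships = fuzzy_c_means(profiles, c=2)
    print(memberships.round(2))   # each user's degree of membership per cluster
    ```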

  7. Finding pages on the unarchived Web

    NARCIS (Netherlands)

    J. Kamps; A. Ben-David; H.C. Huurdeman; A.P. de Vries (Arjen); T. Samar (Thaer)

    2014-01-01

    Web archives preserve the fast changing Web, yet are highly incomplete due to crawling restrictions, crawling depth and frequency, or restrictive selection policies; most of the Web is unarchived and therefore lost to posterity. In this paper, we propose an approach to recover significant

  8. 16 CFR 1130.8 - Requirements for Web site registration or alternative e-mail registration.

    Science.gov (United States)

    2010-01-01

    ... registration. (a) Link to registration page. The manufacturer's Web site, or other Web site established for the... web page that goes directly to “Product Registration.” (b) Purpose statement. The registration page... registration page. The Web site registration page shall request only the consumer's name, address, telephone...

  9. Towards Second and Third Generation Web-Based Multimedia

    OpenAIRE

    Ossenbruggen, Jacco; Geurts, Joost; Cornelissen, F.J.; Rutledge, Lloyd; Hardman, Lynda

    2001-01-01

    First generation Web-content encodes information in handwritten (HTML) Web pages. Second generation Web content generates HTML pages on demand, e.g. by filling in templates with content retrieved dynamically from a database or transformation of structured documents using style sheets (e.g. XSLT). Third generation Web pages will make use of rich markup (e.g. XML) along with metadata (e.g. RDF) schemes to make the content not only machine readable but also machine processable - a ne...

  10. Effects of Learning Style and Training Method on Computer Attitude and Performance in World Wide Web Page Design Training.

    Science.gov (United States)

    Chou, Huey-Wen; Wang, Yu-Fang

    1999-01-01

    Compares the effects of two training methods on computer attitude and performance in a World Wide Web page design program in a field experiment with high school students in Taiwan. Discusses individual differences, Kolb's Experiential Learning Theory and Learning Style Inventory, Computer Attitude Scale, and results of statistical analyses.…

  11. Project Management Methodology for the Development of M-Learning Web Based Applications

    Directory of Open Access Journals (Sweden)

    Adrian VISOIU

    2010-01-01

    Full Text Available M-learning web based applications are a particular case of web applications designed to be operated from mobile devices. Their purpose is also to implement learning functionality. Project management of such applications takes into account the identified peculiarities. M-learning web based application characteristics are identified. M-learning functionality covers the needs of an educational process. Development is described taking into account the mobile web and its influences over the analysis, design, construction and testing phases. Activities building up a work breakdown structure for the development of m-learning web based applications are presented. Project monitoring and control techniques are proposed. Resources required for projects are discussed.

  12. Beginning JSP, JSF, and Tomcat web development from novice to professional

    CERN Document Server

    Zambon, Giulio

    2008-01-01

    A comprehensive introduction to JavaServer Pages (JSP), JavaServer Faces (JSF), and the Apache Tomcat Web application server, this manual makes key concepts easy to grasp by numerous working examples and a walk-through of the development of a complete e-commerce project.

  13. Give your feedback on the new Users’ page

    CERN Multimedia

    CERN Bulletin

    If you haven't already done so, visit the new Users’ page and provide the Communications group with your feedback. You can do this quickly and easily via an online form. A dedicated web steering group will design the future page on the basis of your comments. As a first step towards reforming the CERN website, the Communications group is proposing a ‘beta’ version of the Users’ pages. The primary aim of this version is to improve the visibility of key news items, events and announcements to the CERN community. The beta version is very much work in progress: your input is needed to make sure that the final site meets the needs of CERN’s wide and mixed community. The Communications group will read all your comments and suggestions, and will establish a web steering group that will make sure that the future CERN web pages match the needs of the community. More information on this process, including the gradual 'retirement' of the grey Users' pages we are a...

  14. WEB STRUCTURE MINING USING PAGERANK, IMPROVED PAGERANK – AN OVERVIEW

    Directory of Open Access Journals (Sweden)

    V. Lakshmi Praba

    2011-03-01

    Full Text Available Web Mining is the extraction of interesting and potentially useful patterns and information from the Web. It includes Web documents, hyperlinks between documents, and usage logs of web sites. The significant tasks for web mining can be listed as Information Retrieval, Information Selection/Extraction, Generalization and Analysis. Web information retrieval tools consider only the text on pages and ignore information in the links. The goal of Web structure mining is to explore structural summaries of the web. Web structure mining, which focuses on link information, addresses an important aspect of web data. This paper presents an overview of PageRank and Improved PageRank and their working functionality in web structure mining.

  15. Towards Second and Third Generation Web-Based Multimedia

    NARCIS (Netherlands)

    J.R. van Ossenbruggen (Jacco); J.P.T.M. Geurts (Joost); F.J. Cornelissen; L. Rutledge (Lloyd); L. Hardman (Lynda)

    2001-01-01

    First generation Web-content encodes information in handwritten (HTML) Web pages. Second generation Web content generates HTML pages on demand, e.g. by filling in templates with content retrieved dynamically from a database or transformation of structured documents using style sheets

  16. Web party effect: a cocktail party effect in the web environment.

    Science.gov (United States)

    Rigutti, Sara; Fantoni, Carlo; Gerbino, Walter

    2015-01-01

    In goal-directed web navigation, labels compete for selection: this process often involves knowledge integration and requires selective attention to manage the dizziness of web layouts. Here we ask whether the competition for selection depends on all web navigation options or only on those options that are more likely to be useful for information seeking, and provide evidence in favor of the latter alternative. Participants in our experiment navigated a representative set of real websites of variable complexity, in order to reach an information goal located two clicks away from the starting home page. The time needed to reach the goal was accounted for by a novel measure of home page complexity based on a part of (not all) web options: the number of links embedded within web navigation elements weighted by the number and type of embedding elements. Our measure fully mediated the effect of several standard complexity metrics (the overall number of links, words, images, graphical regions, the JPEG file size of home page screenshots) on information seeking time and usability ratings. Furthermore, it predicted the cognitive demand of web navigation, as revealed by the duration judgment ratio (i.e., the ratio of subjective to objective duration of information search). Results demonstrate that focusing on relevant links while ignoring other web objects optimizes the deployment of attentional resources necessary to navigation. This is in line with a web party effect (i.e., a cocktail party effect in the web environment): users tune into web elements that are relevant for the achievement of their navigation goals and tune out all others.
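
    The complexity measure can be illustrated with a toy computation: only links inside navigation elements count, weighted by element type. The element types and weights below are illustrative assumptions, not the paper's calibrated values:

    ```python
    # Hypothetical re-implementation of the home-page complexity measure:
    # links are counted only inside navigation elements, weighted by element
    # type. The weights are illustrative, not taken from the paper.
    ELEMENT_WEIGHT = {"menu": 1.0, "navbar": 1.0, "link_list": 0.8, "footer": 0.5}

    def navigation_complexity(elements):
        """elements: list of (element_type, number_of_embedded_links) pairs."""
        return sum(ELEMENT_WEIGHT.get(kind, 0.0) * n_links
                   for kind, n_links in elements)

    home_page = [("navbar", 12), ("menu", 30), ("footer", 25), ("banner", 3)]
    print(navigation_complexity(home_page))   # banner links are not navigation
    ```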

  17. Web pages of Slovenian public libraries

    Directory of Open Access Journals (Sweden)

    Silva Novljan

    2002-01-01

    Full Text Available Libraries should offer their patrons web sites which establish the unmistakable concept of a (public) library, a concept that cannot be mistaken for other information brokers and services available on the Internet, but which, inside this framework, shows a diversity that directs patrons to other (public) libraries. This can be achieved by reliability, quality of information and services, and safety of usage. When this is achieved, patrons regard library web sites as important reference sources deserving continuous usage for obtaining relevant information. Libraries justify investment in the development and maintenance of their web sites by the number of visits and by patron satisfaction. The presented research, made on a sample of Slovene public libraries' web sites, determines how the libraries establish their purpose and role, and how they follow professional recommendations in web site design. The results reveal the libraries' striving for the modernisation of their functions; major attention is directed to the presentation of classic libraries and their activities, lesser attention to the expansion of available contents and electronic sources. Pointing to their diversity is significant, since it is not a result of patrons' needs but rather the consequence of improvisation and too little attention to the selection, availability, organisation and presentation of different kinds of information and services on the web sites. Based on the analysis of a common concept of the public library web site, certain activities for improving the existing state of affairs are presented in the paper.

  18. EVALUATION OF WEB SEARCHING METHOD USING A NOVEL WPRR ALGORITHM FOR TWO DIFFERENT CASE STUDIES

    Directory of Open Access Journals (Sweden)

    V. Lakshmi Praba

    2012-04-01

    Full Text Available The World-Wide Web provides every internet citizen with access to an abundance of information, but it becomes increasingly difficult to identify the relevant pieces of information. Research in web mining tries to address this problem by applying techniques from data mining and machine learning to web data and documents. Web content mining and web structure mining have important roles in identifying the relevant web page. The relevancy of a web page denotes how well a retrieved web page or set of web pages meets the information need of the user. PageRank, Weighted PageRank and Hypertext Induced Topic Selection (HITS) are existing algorithms which consider only web structure mining. Vector Space Model (VSM), Cover Density Ranking (CDR), Okapi similarity measurement (Okapi) and Three-Level Scoring method (TLS) are some of the existing relevancy scoring methods, which consider only web content mining. In this paper, we propose a new algorithm, Weighted Page with Relevant Rank (WPRR), which is a blend of both web content mining and web structure mining and demonstrates the relevancy of a page with respect to a given query for two different case scenarios. It is shown that WPRR's performance is better than that of the existing algorithms.
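
    The abstract does not give WPRR's exact combination formula, so the sketch below simply mixes a structure score (PageRank) with a content score (TF-IDF cosine similarity to the query) through an assumed weight beta; it illustrates the blend of the two mining styles, not the published algorithm:

    ```python
    import networkx as nx
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    # Toy pages with both link structure and text.
    pages = {
        "a.html": "nuclear power plant safety regulation",
        "b.html": "football scores and league tables",
        "c.html": "nuclear reactor safety inspection report",
    }
    G = nx.DiGraph([("a.html", "c.html"), ("b.html", "c.html"), ("c.html", "a.html")])

    query = "nuclear safety"
    docs = list(pages)
    tfidf = TfidfVectorizer().fit(list(pages.values()) + [query])
    content = cosine_similarity(tfidf.transform([query]),
                                tfidf.transform(list(pages.values())))[0]
    structure = nx.pagerank(G, alpha=0.85)

    beta = 0.5   # assumed mixing weight; the paper's combination may differ
    scores = {d: beta * structure[d] + (1 - beta) * content[i]
              for i, d in enumerate(docs)}
    print(sorted(scores.items(), key=lambda kv: -kv[1]))
    ```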

  19. Probabilistic relation between In-Degree and PageRank

    NARCIS (Netherlands)

    Litvak, Nelli; Scheinhardt, Willem R.W.; Volkovich, Y.

    2008-01-01

    This paper presents a novel stochastic model that explains the relation between power laws of In-Degree and PageRank. PageRank is a popularity measure designed by Google to rank Web pages. We model the relation between PageRank and In-Degree through a stochastic equation, which is inspired by the
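
    The distributional fixed-point equations used in this line of work have the general shape below (notation simplified; the paper's precise assumptions on the coefficients are not reproduced here):

    ```latex
    % Distributional fixed-point equation relating PageRank R to In-Degree N
    % (general shape; N is the power-law in-degree of a page, the R_j are
    % i.i.d. copies of R for the linking pages, the A_j are damping/out-degree
    % factors, and B accounts for the teleportation term):
    \[
      R \;\stackrel{d}{=}\; \sum_{j=1}^{N} A_j R_j \;+\; B
    \]
    ```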

  20. Nuclear expert web search and crawler algorithm

    International Nuclear Information System (INIS)

    Reis, Thiago; Barroso, Antonio C.O.; Baptista, Benedito Filho D.

    2013-01-01

    In this paper we present preliminary research on a web search and crawling algorithm applied specifically to nuclear-related web information. We designed a web-based nuclear-oriented expert system guided by a web crawler algorithm and a neural network able to search and retrieve nuclear-related hypertextual web information in an autonomous and massive fashion. Preliminary experimental results show a retrieval precision of 80% for web pages related to any nuclear theme and a retrieval precision of 72% for web pages related only to the nuclear power theme. (author)

  1. Nuclear expert web search and crawler algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Reis, Thiago; Barroso, Antonio C.O.; Baptista, Benedito Filho D., E-mail: thiagoreis@usp.br, E-mail: barroso@ipen.br, E-mail: bdbfilho@ipen.br [Instituto de Pesquisas Energeticas e Nucleares (IPEN/CNEN-SP), Sao Paulo, SP (Brazil)

    2013-07-01

    In this paper we present preliminary research on a web search and crawling algorithm applied specifically to nuclear-related web information. We designed a web-based nuclear-oriented expert system guided by a web crawler algorithm and a neural network able to search and retrieve nuclear-related hypertextual web information in an autonomous and massive fashion. Preliminary experimental results show a retrieval precision of 80% for web pages related to any nuclear theme and a retrieval precision of 72% for web pages related only to the nuclear power theme. (author)
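
    The neural-network ranking component cannot be reconstructed from the abstract, so the sketch below substitutes a plain keyword score, named as such, inside an otherwise standard best-first (priority-queue) crawler; the seed URL and keyword list are illustrative:

    ```python
    import heapq
    from urllib.parse import urljoin

    import requests
    from bs4 import BeautifulSoup

    KEYWORDS = ("nuclear", "reactor", "radiation")

    def relevance(text):
        """Keyword count standing in for the paper's neural-network scorer."""
        t = text.lower()
        return sum(t.count(k) for k in KEYWORDS)

    def crawl(seed, max_fetch=50, want=10):
        frontier = [(0, seed)]               # min-heap of (-parent score, url)
        seen, hits, fetched = {seed}, [], 0
        while frontier and fetched < max_fetch and len(hits) < want:
            _, url = heapq.heappop(frontier)
            try:
                html = requests.get(url, timeout=5).text
            except requests.RequestException:
                continue
            fetched += 1
            soup = BeautifulSoup(html, "html.parser")
            score = relevance(soup.get_text(" "))
            if score > 0:
                hits.append((url, score))
            for a in soup.find_all("a", href=True):
                link = urljoin(url, a["href"])
                if link.startswith("http") and link not in seen:
                    seen.add(link)
                    heapq.heappush(frontier, (-score, link))   # best-first
        return hits

    # Example (hypothetical seed): crawl("https://example.org/nuclear/")
    ```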

  2. Soil-Web: An online soil survey for California, Arizona, and Nevada

    Science.gov (United States)

    Beaudette, D. E.; O'Geen, A. T.

    2009-10-01

    Digital soil survey products represent one of the largest and most comprehensive inventories of soils information currently available. The complex structure of these databases, intensive use of codes and scientific jargon make it difficult for non-specialists to utilize digital soil survey resources. A project was initiated to construct a web-based interface to digital soil survey products (STATSGO and SSURGO) for California, Arizona, and Nevada that would be accessible to the general public. A collection of mature, open source applications (including Mapserver, PostGIS and Apache Web Server) were used as a framework to support data storage, querying, map composition, data presentation, and contextual links to related materials. Application logic was written in the PHP language to "glue" together the many components of an online soil survey. A comprehensive website ( http://casoilresource.lawr.ucdavis.edu/map) was created to facilitate access to digital soil survey databases through several interfaces including: interactive map, Google Earth and HTTP-based application programming interface (API). Each soil polygon is linked to a map unit summary page, which includes links to soil component summary pages. The most commonly used soil properties, land interpretations and ratings are presented. Graphical and tabular summaries of soil profile information are dynamically created, and aid with rapid assessment of key soil properties. Quick links to official series descriptions (OSD) and other such information are presented. All terminology is linked back to the USDA-NRCS Soil Survey Handbook which contains extended definitions. The Google Earth interface to Soil-Web can be used to explore soils information in three dimensions. A flexible web API was implemented to allow advanced users of soils information to access our website via simple web page requests. Soil-Web has been successfully used in soil science curriculum, outreach activities, and current research projects

  3. Working with WebQuests: Making the Web Accessible to Students with Disabilities.

    Science.gov (United States)

    Kelly, Rebecca

    2000-01-01

    This article describes how students with disabilities in regular classes are using the WebQuest lesson format to access the Internet. It explains essential WebQuest principles, creating a draft Web page, and WebQuest components. It offers an example of a WebQuest about salvaging the sunken ships, Titanic and Lusitania. A WebQuest planning form is…

  4. Funnel-web spider bite

    Science.gov (United States)

    MedlinePlus page: //medlineplus.gov/ency/article/002844.htm. Describes the effects of a bite from the funnel-web spider. Male funnel-web spiders are more poisonous ...

  5. UPGRADE OF THE CENTRAL WEB SERVERS

    CERN Multimedia

    WEB Services

    2000-01-01

    During the weekend of the 25-26 March, the infrastructure of the CERN central web servers will undergo a major upgrade.As a result, the web services hosted by the central servers (that is, the services the address of which starts with www.cern.ch) will be unavailable Friday 24th, from 17:30 to 18:30, and may suffer from short interruptions until 20:00. This includes access to the CERN top-level page as well as the services referenced by this page (such as access to the scientific program and events information, or training, recruitment, housing services).After the upgrade, the change will be transparent to the users. Expert readers may however notice that when they connect to a web page starting with www.cern.ch this address is slightly changed when the page is actually displayed on their screen (e.g. www.cern.ch/Press will be changed to Press.web.cern.ch/Press). They should not worry: this behaviour, necessary for technical reasons, is normal.web.services@cern.chTel 74989

  6. Project Assessment Skills Web Application

    Science.gov (United States)

    Goff, Samuel J.

    2013-01-01

    The purpose of this project is to utilize Ruby on Rails to create a web application that will replace a spreadsheet keeping track of training courses and tasks. The goal is to create a fast and easy to use web application that will allow users to track progress on training courses. This application will allow users to update and keep track of all of the training required of them. The training courses will be organized by group and by user, making readability easier. This will also allow group leads and administrators to get a sense of how everyone is progressing in training. Currently, updating and finding information from this spreadsheet is a long and tedious task. By upgrading to a web application, finding and updating information will be easier than ever as well as adding new training courses and tasks. Accessing this data will be much easier in that users just have to go to a website and log in with NDC credentials rather than request the relevant spreadsheet from the holder. In addition to Ruby on Rails, I will be using JavaScript, CSS, and jQuery to help add functionality and ease of use to my web application. This web application will include a number of features that will help update and track progress on training. For example, one feature will be to track progress of a whole group of users to be able to see how the group as a whole is progressing. Another feature will be to assign tasks to either a user or a group of users. All of these together will create a user friendly and functional web application.

  7. Google Analytics: Single Page Traffic Reports

    Science.gov (United States)

    These are pages that live outside of Google Analytics (GA) but allow you to view GA data for any individual page on either the public EPA web or EPA intranet. You do need to log in to Google Analytics to view them.

  8. Development of portal Web pages for the LHD experiment

    International Nuclear Information System (INIS)

    Emoto, Masahiko; Funaba, Hisamichi; Nakanishi, Hideya; Iwata, Chie; Yoshida, Masanori; Nagayama, Yoshio

    2011-01-01

    Because the LHD project has been operating with the cooperation of many institutes in Japan, remote participation facilities play an important role, and NIFS has been introducing these facilities to its remote participants. Regarding Web services as essential tools for current Internet communication, the authors have developed Web services for remote participation. However, because these services are dispersed among several servers in NIFS, users cannot find the required services easily. Therefore, the authors developed a portal Web server that lists the existing and new Web services for the LHD experiment. The server provides services such as the summary graph, a plasma movie of the last plasma discharge, daily experiment logs, and daily experimental schedules. One of the most important pieces of information from these services is the summary graph. Usually, the plasma discharges of the LHD experiment are executed every three minutes. Between discharges, the summary graph of the last plasma discharge is displayed on the front screen in the control room soon after the discharge is complete. The graph is useful in evaluating the last discharge, which is important information for determining the subsequent experiment schedule. It is therefore required to display the summary graph, which plots data from more than 10 diagnostics, as soon as possible. On the other hand, the data-appearance time varies from one diagnostic to another. To display the graph faster, the new system retrieves the data asynchronously; several data retrieval processes work simultaneously, and the system plots the data all at once. (author)
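
    The asynchronous retrieval pattern described here, several fetches in flight at once so the summary plot is not gated by the slowest diagnostic, can be sketched with a thread pool. The endpoint URLs, diagnostic names and JSON payload below are hypothetical:

    ```python
    from concurrent.futures import ThreadPoolExecutor, as_completed

    import requests

    # Hypothetical per-diagnostic data URLs; the LHD data system's real
    # endpoints are not described in the abstract.
    DIAGNOSTICS = {
        "ip":        "https://lhd.example.nifs.ac.jp/data/ip/latest",
        "wp":        "https://lhd.example.nifs.ac.jp/data/wp/latest",
        "ne_bar":    "https://lhd.example.nifs.ac.jp/data/ne_bar/latest",
        "ech_power": "https://lhd.example.nifs.ac.jp/data/ech_power/latest",
    }

    def fetch(item):
        name, url = item
        r = requests.get(url, timeout=30)
        r.raise_for_status()
        return name, r.json()           # assumed JSON time-series payload

    # Retrieve every diagnostic concurrently; each trace can be handled as
    # soon as its data arrives instead of waiting for the slowest diagnostic.
    with ThreadPoolExecutor(max_workers=8) as pool:
        futures = [pool.submit(fetch, item) for item in DIAGNOSTICS.items()]
        for fut in as_completed(futures):
            name, series = fut.result()
            print(f"{name}: {len(series)} samples")   # plot this trace now
    ```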

  9. EPA Web Taxonomy

    Data.gov (United States)

    U.S. Environmental Protection Agency — EPA's Web Taxonomy is a faceted hierarchical vocabulary used to tag web pages with terms from a controlled vocabulary. Tagging enables search and discovery of EPA's...

  10. Web party effect: a cocktail party effect in the web environment

    Directory of Open Access Journals (Sweden)

    Sara Rigutti

    2015-03-01

    In goal-directed web navigation, labels compete for selection: this process often involves knowledge integration and requires selective attention to manage the dizziness of web layouts. Here we ask whether the competition for selection depends on all web navigation options or only on those options that are more likely to be useful for information seeking, and provide evidence in favor of the latter alternative. Participants in our experiment navigated a representative set of real websites of variable complexity, in order to reach an information goal located two clicks away from the starting home page. The time needed to reach the goal was accounted for by a novel measure of home page complexity based on a part of (not all) web options: the number of links embedded within web navigation elements weighted by the number and type of embedding elements. Our measure fully mediated the effect of several standard complexity metrics (the overall number of links, words, images, graphical regions, the JPEG file size of home page screenshots) on information seeking time and usability ratings. Furthermore, it predicted the cognitive demand of web navigation, as revealed by the duration judgment ratio (i.e., the ratio of subjective to objective duration of information search). Results demonstrate that focusing on relevant links while ignoring other web objects optimizes the deployment of attentional resources necessary to navigation. This is in line with a web party effect (i.e., a cocktail party effect in the web environment): users tune into web elements that are relevant for the achievement of their navigation goals and tune out all others.

  11. Web party effect: a cocktail party effect in the web environment

    Science.gov (United States)

    Gerbino, Walter

    2015-01-01

    In goal-directed web navigation, labels compete for selection: this process often involves knowledge integration and requires selective attention to manage the dizziness of web layouts. Here we ask whether the competition for selection depends on all web navigation options or only on those options that are more likely to be useful for information seeking, and provide evidence in favor of the latter alternative. Participants in our experiment navigated a representative set of real websites of variable complexity, in order to reach an information goal located two clicks away from the starting home page. The time needed to reach the goal was accounted for by a novel measure of home page complexity based on a part of (not all) web options: the number of links embedded within web navigation elements weighted by the number and type of embedding elements. Our measure fully mediated the effect of several standard complexity metrics (the overall number of links, words, images, graphical regions, the JPEG file size of home page screenshots) on information seeking time and usability ratings. Furthermore, it predicted the cognitive demand of web navigation, as revealed by the duration judgment ratio (i.e., the ratio of subjective to objective duration of information search). Results demonstrate that focusing on relevant links while ignoring other web objects optimizes the deployment of attentional resources necessary to navigation. This is in line with a web party effect (i.e., a cocktail party effect in the web environment): users tune into web elements that are relevant for the achievement of their navigation goals and tune out all others. PMID:25802803

  12. A Collaborative Writing Project Using the Worldwide Web.

    Science.gov (United States)

    Sylvester, Allen; Essex, Christopher

    A student in a distance education course, as part of a midterm project, set out to build a Web site that had written communication as its main focus. The Web site, "The Global Campfire," was modeled on the old Appalachian tradition of the "Story Tree," where a storyteller begins a story and allows group members to add to it.…

  13. Sharing and reusing cardiovascular anatomical models over the Web: a step towards the implementation of the virtual physiological human project.

    Science.gov (United States)

    Gianni, Daniele; McKeever, Steve; Yu, Tommy; Britten, Randall; Delingette, Hervé; Frangi, Alejandro; Hunter, Peter; Smith, Nicolas

    2010-06-28

    Sharing and reusing anatomical models over the Web offers a significant opportunity to progress the investigation of cardiovascular diseases. However, the current sharing methodology suffers from the limitations of static model delivery (i.e. embedding static links to the models within Web pages) and of a disaggregated view of the model metadata produced by publications and cardiac simulations in isolation. In the context of euHeart--a research project targeting the description and representation of cardiovascular models for disease diagnosis and treatment purposes--we aim to overcome the above limitations with the introduction of euHeartDB, a Web-enabled database for anatomical models of the heart. The database implements a dynamic sharing methodology by managing data access and by tracing all applications. In addition to this, euHeartDB establishes a knowledge link with the physiome model repository by linking geometries to CellML models embedded in the simulation of cardiac behaviour. Furthermore, euHeartDB uses the exFormat--a preliminary version of the interoperable FieldML data format--to effectively promote reuse of anatomical models, and currently incorporates Continuum Mechanics, Image Analysis, Signal Processing and System Identification Graphical User Interface (CMGUI), a rendering engine, to provide three-dimensional graphical views of the models populating the database. Currently, euHeartDB stores 11 cardiac geometries developed within the euHeart project consortium.

  14. A design method for an intuitive web site

    Energy Technology Data Exchange (ETDEWEB)

    Quinniey, M.L.; Diegert, K.V.; Baca, B.G.; Forsythe, J.C.; Grose, E.

    1999-11-03

    The paper describes a methodology for designing a web site for human factors engineers that is applicable to designing a web site for any group of people. Many web pages on the World Wide Web are not organized in a format that allows a user to find information efficiently. Often the information and hypertext links on web pages are not organized into intuitive groups. Intuition implies that a person is able to use their knowledge of a paradigm to solve a problem. Intuitive groups are categories that allow web page users to find information by using their intuition or mental models of categories. In order to improve human factors engineers' efficiency in finding information on the World Wide Web, research was performed to develop a web site that serves as a tool for finding information effectively. The paper describes a methodology for designing a web site for a group of people who perform similar tasks in an organization.

  15. Semantic Advertising for Web 3.0

    Science.gov (United States)

    Thomas, Edward; Pan, Jeff Z.; Taylor, Stuart; Ren, Yuan; Jekjantuk, Nophadol; Zhao, Yuting

    Advertising on the World Wide Web is based around automatically matching web pages with appropriate advertisements, in the form of banner ads, interactive adverts, or text links. Traditionally this has been done by manual classification of pages, or more recently using information retrieval techniques to find the most important keywords from the page, and match these to keywords being used by adverts. In this paper, we propose a new model for online advertising, based around lightweight embedded semantics. This will improve the relevancy of adverts on the World Wide Web and help to kick-start the use of RDFa as a mechanism for adding lightweight semantic attributes to the Web. Furthermore, we propose a system architecture for the proposed new model, based on our scalable ontology reasoning infrastructure TrOWL.

  16. CT colonography: Project of High National Interest No. 2005062137 of the Italian Ministry of Education, University and Research (MIUR).

    Science.gov (United States)

    Neri, E; Laghi, A; Regge, D; Sacco, P; Gallo, T; Turini, F; Talini, E; Ferrari, R; Mellaro, M; Rengo, M; Marchi, S; Caramella, D; Bartolozzi, C

    2008-12-01

    The aim of this paper is to describe the Web site of the Italian Project on CT Colonography (Research Project of High National Interest, PRIN No. 2005062137) and present the prototype of the online database. The Web site was created with Microsoft Office Publisher 2003 software, which allows the realisation of multiple Web pages linked through a main menu located on the home page. The Web site contains a database of computed tomography (CT) colonography studies in the Digital Imaging and Communications in Medicine (DICOM) standard, all acquired with multidetector-row CT according to the parameters defined by the European Society of Abdominal and Gastrointestinal Radiology (ESGAR). The cases present different bowel-cleansing and tagging methods, and each case has been anonymised and classified according to the Colonography Reporting and Data System (C-RADS). The Web site is available at http address www.ctcolonography.org and is composed of eight pages. Download times for a 294-Mbyte file were 33 min from a residential ADSL (6 Mbit/s) network, 200 s from a local university network (100 Mbit/s) and 2 h and 50 min from a remote academic site in the USA. The Web site received 256 accesses in the 22 days since it went online. The Web site is an immediate and up-to-date tool for publicising the activity of the research project and a valuable learning resource for CT colonography.

  17. What’s New? Deploying a Library New Titles Page with Minimal Programming

    Directory of Open Access Journals (Sweden)

    John Meyerhofer

    2017-01-01

    With a new titles web page, a library has a place to show faculty, students, and staff the items it is purchasing for its community. However, heavy programming knowledge, a LAMP stack (Linux, Apache, MySQL, PHP), or APIs often separate a library's data from making a new titles web page a reality. Without IT staff, a new titles page can become nearly impossible or not worth the effort. Here we will demonstrate how a small liberal arts college took its acquisition data and combined it with a Google Sheet, HTML, and a little JavaScript to create a new titles web page that was dynamic and engaging to its users.

  18. Evaluating Web Usability

    Science.gov (United States)

    Snider, Jean; Martin, Florence

    2012-01-01

    Web usability focuses on design elements and processes that make web pages easy to use. A website for college students was evaluated for underutilization. One-on-one testing, focus groups, web analytics, peer university review and marketing focus group and demographic data were utilized to conduct usability evaluation. The results indicated that…

  19. Web-based communication tools in a European research project: the example of the TRACE project

    Directory of Open Access Journals (Sweden)

    Baeten V.

    2009-01-01

    The multi-disciplinary and international nature of large European projects requires powerful managerial and communicative tools to ensure the transmission of information to the end-users. One such project is TRACE, entitled “Tracing Food Commodities in Europe”. One of its objectives is to provide a communication system intended to be the central source of information on food authenticity and traceability in Europe. This paper explores the web tools used and communication vehicles offered to scientists involved in the TRACE project to communicate internally as well as to the public. Two main tools have been built: an Intranet and a public website. The TRACE website can be accessed at http://www.trace.eu.org. Particular emphasis was placed on the efficiency, relevance and accessibility of the information, the publicity of the website, and the use of the collaborative utilities. The rationale of the web space design, as well as the integration of proprietary software solutions, is presented. Perspectives on the use of web tools in research projects are discussed.

  20. Deep iCrawl: An Intelligent Vision-Based Deep Web Crawler

    OpenAIRE

    R.Anita; V.Ganga Bharani; N.Nityanandam; Pradeep Kumar Sahoo

    2011-01-01

    The explosive growth of World Wide Web has posed a challenging problem in extracting relevant data. Traditional web crawlers focus only on the surface web while the deep web keeps expanding behind the scene. Deep web pages are created dynamically as a result of queries posed to specific web databases. The structure of the deep web pages makes it impossible for traditional web crawlers to access deep web contents. This paper, Deep iCrawl, gives a novel and vision-based app...

  1. An Enhanced Rule-Based Web Scanner Based on Similarity Score

    Directory of Open Access Journals (Sweden)

    LEE, M.

    2016-08-01

    This paper proposes an enhanced rule-based web scanner that achieves better accuracy in detecting web vulnerabilities than existing tools, which have relatively high false alarm rates when web pages are installed in unconventional directory paths. Using the proposed matching method based on a similarity score, the scheme can determine whether two pages have the same vulnerabilities. With this method, the scheme can identify vulnerable target web pages by comparing them to web pages that are known to have vulnerabilities. Performance evaluation via various experiments shows that the proposed scanner reduces the false alarm rate by 12% compared to an existing well-known scanner. The scheme is especially helpful in detecting vulnerabilities in web applications derived from well-known open-source web applications after small customization, which happens frequently in many small companies.
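
    The abstract does not give the scoring function itself; the following sketch substitutes a generic sequence-similarity measure from Python's standard library, with a hypothetical 0.8 threshold, to show the matching idea.

```python
from difflib import SequenceMatcher

def similarity(page_a: str, page_b: str) -> float:
    # Similarity score in [0, 1] between two page bodies.
    return SequenceMatcher(None, page_a, page_b).ratio()

def looks_vulnerable(candidate_html, known_vulnerable, threshold=0.8):
    # Flag the candidate if it closely matches any page known to be
    # vulnerable; the threshold is a tunable, hypothetical parameter.
    return any(similarity(candidate_html, page) >= threshold
               for page in known_vulnerable)

print(looks_vulnerable("<html>login form v1.2</html>",
                       ["<html>login form v1.1</html>"]))
```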

  2. Importance of intrinsic and non-network contribution in PageRank centrality and its effect on PageRank localization

    OpenAIRE

    Deyasi, Krishanu

    2016-01-01

    PageRank centrality is used by Google for ranking web pages to present search results for a user query. Here, we have shown that the PageRank value of a vertex also depends on its intrinsic, non-network contribution. If the intrinsic, non-network contributions of the vertices are proportional to their degrees, or are zero, then their PageRank centralities become proportional to their degrees. Some simulations and empirical data are used to support our study. In addition, we have shown that localization ...
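
    The degree-proportional special case can be probed numerically. A minimal sketch, assuming the networkx library, whose pagerank function accepts a personalization vector that here plays the role of the intrinsic, non-network contribution:

```python
import networkx as nx

# A scale-free test graph with heterogeneous degrees.
G = nx.barabasi_albert_graph(200, 3, seed=1)

uniform = nx.pagerank(G, alpha=0.85)
degree_weights = dict(G.degree())
degree_pers = nx.pagerank(G, alpha=0.85, personalization=degree_weights)

# With a degree-proportional intrinsic term, rank should track degree.
top = sorted(degree_pers, key=degree_pers.get, reverse=True)[:5]
print([(v, G.degree(v)) for v in top])
```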

  3. Testing the visual consistency of web sites

    NARCIS (Netherlands)

    van der Geest, Thea; Loorbach, N.R.

    2005-01-01

    Consistency in the visual appearance of Web pages is often checked by experts, such as designers or reviewers. This article reports a card sort study conducted to determine whether users rather than experts could distinguish visual (in-)consistency in Web elements and pages. The users proved to

  4. Integration of Web mining and web crawler: Relevance and State of Art

    OpenAIRE

    Subhendu kumar pani; Deepak Mohapatra,; Bikram Keshari Ratha

    2010-01-01

    This study presents the role of the web crawler in the web mining environment. As the growth of the World Wide Web has exceeded all expectations, research on web mining is growing more and more. Web mining is a research topic which combines two active research areas: data mining and the World Wide Web. The World Wide Web is therefore a very advanced area for data mining research. Search engines that are based on a web crawling framework are also used in web mining to find the interlinked web pages. This paper discu...

  5. A Web Service for File-Level Access to Disk Images

    Directory of Open Access Journals (Sweden)

    Sunitha Misra

    2014-07-01

    Digital forensics tools have many potential applications in the curation of digital materials in libraries, archives and museums (LAMs). Open source digital forensics tools can help LAM professionals to extract digital contents from born-digital media and make more informed preservation decisions. Many of these tools have ways to display the metadata of the digital media, but few provide file-level access without having to mount the device or use complex command-line utilities. This paper describes a project to develop software that supports access to the contents of digital media without having to mount or download the entire image. The work examines two approaches to creating this tool: first, a graphical user interface running on a local machine; second, a web-based application running in a web browser. The project incorporates existing open source forensics tools and libraries, including The Sleuth Kit and libewf, along with the Flask web application framework and custom Python scripts to generate web pages supporting disk image browsing.
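
    A minimal sketch of the web-based approach, assuming the pytsk3 bindings to The Sleuth Kit together with Flask; the image path and route are hypothetical, and the project's actual scripts may differ.

```python
import pytsk3
from flask import Flask, jsonify

app = Flask(__name__)
IMAGE_PATH = "evidence.dd"  # hypothetical raw disk image

@app.route("/ls/", defaults={"directory": ""})
@app.route("/ls/<path:directory>")
def list_directory(directory):
    # Open the image and its file system, then list one directory,
    # all without mounting the device.
    img = pytsk3.Img_Info(IMAGE_PATH)
    fs = pytsk3.FS_Info(img)
    names = []
    for entry in fs.open_dir(path="/" + directory):
        name = entry.info.name.name.decode("utf-8", "replace")
        if name not in (".", ".."):
            names.append(name)
    return jsonify(names)

if __name__ == "__main__":
    app.run()
```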

  6. A Runtime System for Interactive Web Services

    DEFF Research Database (Denmark)

    Brabrand, Claus; Møller, Anders; Sandholm, Anders

    1999-01-01

    Interactive web services are increasingly replacing traditional static web pages. Producing web services seems to require a tremendous amount of laborious low-level coding due to the primitive nature of CGI programming. We present ideas for an improved runtime system for interactive web services built on top of CGI, running on virtually every combination of browser and HTTP/CGI server. The runtime system has been implemented and used extensively in <bigwig>, a tool for producing interactive web services.

  7. PageRank of integers

    International Nuclear Information System (INIS)

    Frahm, K M; Shepelyansky, D L; Chepelianskii, A D

    2012-01-01

    We set up a directed network tracing links from a given integer to its divisors and analyze the properties of the Google matrix of this network. The PageRank vector of this matrix is computed numerically, and it is shown that its probability is approximately inversely proportional to the PageRank index, thus being similar to the Zipf law and to the dependence established for the World Wide Web. The spectrum of the Google matrix of integers is characterized by a large gap and a relatively small number of nonzero eigenvalues. A simple semi-analytical expression for the PageRank of integers is derived that allows us to find this vector for matrices of billion size. This network provides a new PageRank order of integers. (paper)
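
    The divisor network is easy to reproduce at small scale. A sketch, assuming networkx, for integers up to 1,000 rather than the billion-size matrices the paper treats semi-analytically:

```python
import networkx as nx

# Directed edges run from each integer n to every proper divisor of n.
N = 1000
G = nx.DiGraph()
G.add_nodes_from(range(1, N + 1))
for n in range(2, N + 1):
    for d in range(1, n):
        if n % d == 0:
            G.add_edge(n, d)

pr = nx.pagerank(G, alpha=0.85)
top = sorted(pr, key=pr.get, reverse=True)[:10]
print(top)  # small, highly divisible integers collect the most rank
```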

  8. Preprocessing and Content/Navigational Pages Identification as Premises for an Extended Web Usage Mining Model Development

    Directory of Open Access Journals (Sweden)

    Daniel MICAN

    2009-01-01

    From its appearance until today, the Internet has seen spectacular growth, not only in the number of websites and the volume of information, but also in the number of visitors. An overall analysis of both web sites and the content they provide has therefore become necessary. Thus, a new branch of research was developed, namely web mining, which aims to discover useful information and knowledge based not only on the analysis of websites and content, but also on the way users interact with them. The aim of the present paper is to design a database that captures only the relevant data from logs, in a way that allows large sets of temporal data to be stored and managed with common tools in real time. In our work, we rely on different web sites or website sections with known architecture, and we test several hypotheses from the literature in order to extend the framework to sites with unknown or chaotic structure, which are non-transparent in determining the type of visited pages. In doing this, we start from non-proprietary, preexisting raw server logs.
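
    As an illustration of this kind of preprocessing, here is a sketch that parses Apache combined-format log lines, keeps only successful page requests, and stores the relevant fields in SQLite; the filters and schema are hypothetical, not the paper's design.

```python
import re
import sqlite3

LOG_RE = re.compile(
    r'(?P<ip>\S+) \S+ \S+ \[(?P<ts>[^\]]+)\] "(?P<method>\S+) (?P<url>\S+) [^"]*" '
    r'(?P<status>\d{3}) \S+ "(?P<referrer>[^"]*)" "(?P<agent>[^"]*)"')

SKIP = (".css", ".js", ".png", ".jpg", ".gif", ".ico")  # non-page requests

def load(log_path, db_path="usage.db"):
    con = sqlite3.connect(db_path)
    con.execute("""CREATE TABLE IF NOT EXISTS hits
                   (ip TEXT, ts TEXT, url TEXT, referrer TEXT, agent TEXT)""")
    with open(log_path) as fh:
        for line in fh:
            m = LOG_RE.match(line)
            if not m or m["status"] != "200" or m["url"].lower().endswith(SKIP):
                continue
            con.execute("INSERT INTO hits VALUES (?, ?, ?, ?, ?)",
                        (m["ip"], m["ts"], m["url"], m["referrer"], m["agent"]))
    con.commit()
    con.close()
```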

  9. MedlinePlus Connect: Web Service

    Science.gov (United States)

    URL of this page: https://medlineplus.gov/connect/service.html

  10. MedlinePlus Connect: Web Application

    Science.gov (United States)

    URL of this page: https://medlineplus.gov/connect/application.html

  11. Models and methods for building web recommendation systems

    OpenAIRE

    Stekh, Yu.; Artsibasov, V.

    2012-01-01

    The modern World Wide Web contains a large number of web sites and many pages within each site. Web recommendation systems (recommendation systems for web pages) are typically implemented on web servers and use data obtained from collections of viewed web templates (implicit data) or from user registration data (explicit data). The article considers methods and algorithms for web recommendation systems based on data mining technology (web mining).

  12. Global Land Survey Impervious Mapping Project Web Site

    Science.gov (United States)

    DeColstoun, Eric Brown; Phillips, Jacqueline

    2014-01-01

    The Global Land Survey Impervious Mapping Project (GLS-IMP) aims to produce the first global maps of impervious cover at the 30 m spatial resolution of Landsat. The project uses Global Land Survey (GLS) Landsat data as its base but incorporates training data generated from very high resolution commercial satellite data using a hierarchical segmentation program called Hseg. The web site contains general project information, a high-level description of the science, examples of input and output data, as well as links to other relevant projects.

  13. Info.cern.ch returns to the Web

    CERN Document Server

    2006-01-01

    First web address is reincarnated as a historical reference on the birth of the Web. Tim Berners-Lee, inventor of the Web, with one of the first Web pages on his computer. CERN invites you to take a virtual trip back in time and have a look at what the very first URL, which led to a revolution of the way we communicate and share information, was all about. The original web server, whose address was info.cern.ch, centred on information regarding the WorldWideWeb (WWW) project. Visitors could learn more about hypertext, technical details for creating one's own webpage, and even an explanation on how to search the Web for information-something 5 year-olds of today have mastered since it all started 17 years ago. Now info.cern.ch has been re-launched with a much brighter façade and a focus on the ideas that inspired this new wave of technology. The first browser created by Tim Berners-Lee, inventor of the Web, contained just about everything we see today on a web browser, including graphics, menus, layouts and...

  14. AstrodyToolsWeb an e-Science project in Astrodynamics and Celestial Mechanics fields

    Science.gov (United States)

    López, R.; San-Juan, J. F.

    2013-05-01

    Astrodynamics Web Tools, AstrodyToolsWeb (http://tastrody.unirioja.es), is an ongoing collaborative Web Tools computing infrastructure project which has been specially designed to support scientific computation. AstrodyToolsWeb provides project collaborators with all the technical and human facilities in order to wrap, manage, and use specialized noncommercial software tools in Astrodynamics and Celestial Mechanics fields, with the aim of optimizing the use of resources, both human and material. However, this project is open to collaboration from the whole scientific community in order to create a library of useful tools and their corresponding theoretical backgrounds. AstrodyToolsWeb offers a user-friendly web interface in order to choose applications, introduce data, and select appropriate constraints in an intuitive and easy way for the user. After that, the application is executed in real time, whenever possible; then the critical information about program behavior (errors and logs) and output, including the postprocessing and interpretation of its results (graphical representation of data, statistical analysis or whatever manipulation therein), are shown via the same web interface or can be downloaded to the user's computer.

  15. The Web Lecture Archive Project: Archiving ATLAS Presentations and Tutorials

    CERN Multimedia

    Herr, J

    2004-01-01

    The geographical diversity of the ATLAS Collaboration presents constant challenges in the communication between and training of its members. One important example is the need for training of new collaboration members and/or current members on new developments. The Web Lecture Archive Project (WLAP), a joint project between the University of Michigan and CERN Technical Training, has addressed this challenge by recording ATLAS tutorials in the form of streamed "Web Lectures," consisting of synchronized audio, video and high-resolution slides, available on demand to anyone in the world with a Web browser. ATLAS software tutorials recorded by WLAP include ATHENA, ATLANTIS, Monte Carlo event generators, Object Oriented Analysis and Design, GEANT4, and Physics EDM and tools. All ATLAS talks, including both tutorials and meetings, are available at http://www.wlap.org/browser.php?ID=atlas. Members of the University of Michigan Physics Department and Media Union, under the framework of the ATLAS Collaboratory Project ...

  16. Even Faster Web Sites Performance Best Practices for Web Developers

    CERN Document Server

    Souders, Steve

    2009-01-01

    Performance is critical to the success of any web site, and yet today's web applications push browsers to their limits with increasing amounts of rich content and heavy use of Ajax. In this book, Steve Souders, web performance evangelist at Google and former Chief Performance Yahoo!, provides valuable techniques to help you optimize your site's performance. Souders' previous book, the bestselling High Performance Web Sites, shocked the web development world by revealing that 80% of the time it takes for a web page to load is on the client side. In Even Faster Web Sites, Souders and eight exp

  17. Study on online community user motif using web usage mining

    Science.gov (United States)

    Alphy, Meera; Sharma, Ajay

    2016-04-01

    Web usage mining is the application of data mining used to extract useful information from the online community. The World Wide Web contained at least 4.73 billion pages according to the Indexed Web, and at least 228.52 million pages according to the Dutch Indexed Web, on Thursday, 6 August 2015. It is difficult to get the data one needs from these billions of web pages, and herein lies the importance of web usage mining. Personalizing the search engine helps the web user identify the most used data in an easy way; it reduces time consumption through automatic site search and automatic restoration of useful sites. This study surveys the techniques used in pattern discovery and analysis in web usage mining, from the earliest in 1996 to the latest in 2015. Analyzing user motifs helps in the improvement of business, e-commerce, personalization and website design.

  18. Web Environments for Group-Based Project Work in Higher Education

    NARCIS (Netherlands)

    Andernach, J.A.; van Diepen, N.M.; Collis, Betty; Andernach, Toine

    1997-01-01

    We discuss problems confronting the use of group-based project work as an instructional strategy in higher education and describe two courses in which course-specific World Wide Web (Web) environments have evolved over a series of course sequences and are used both as tool environments for

  19. Instant responsive web design

    CERN Document Server

    Simmons, Cory

    2013-01-01

    A step-by-step tutorial approach which will teach the readers what responsive web design is and how it is used in designing a responsive web page.If you are a web-designer looking to expand your skill set by learning the quickly growing industry standard of responsive web design, this book is ideal for you. Knowledge of CSS is assumed.

  20. Automated grading of homework assignments and tests in introductory and intermediate statistics courses using active server pages.

    Science.gov (United States)

    Stockburger, D W

    1999-05-01

    Active server pages permit a software developer to customize the Web experience for users by inserting server-side script and database access into Web pages. This paper describes applications of these techniques and provides a primer on the use of these methods. Applications include a system that generates and grades individualized homework assignments and tests for statistics students. The student accesses the system as a Web page, prints out the assignment, does the assignment, and enters the answers on the Web page. The server, running on NT Server 4.0, grades the assignment, updates the grade book (on a database), and returns the answer key to the student.
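
    The workflow ports readily to other server-side stacks. The sketch below is not the paper's ASP code but a Flask analog of the same idea: an individualized yet reproducible assignment, server-side grading, and a gradebook update; the routes and schema are hypothetical.

```python
import random
import sqlite3
from flask import Flask, jsonify, request

app = Flask(__name__)

def make_items(student_id, n=5):
    # Seeding with the student id individualizes the assignment while
    # letting the server regenerate the same items for grading.
    rng = random.Random(student_id)
    return [(rng.randint(1, 50), rng.randint(1, 50)) for _ in range(n)]

@app.route("/assignment/<student_id>")
def assignment(student_id):
    return jsonify([f"{a} + {b} = ?" for a, b in make_items(student_id)])

@app.route("/submit/<student_id>", methods=["POST"])
def submit(student_id):
    key = [a + b for a, b in make_items(student_id)]
    answers = request.get_json()
    score = sum(k == a for k, a in zip(key, answers))
    with sqlite3.connect("gradebook.db") as con:  # the grade book
        con.execute("CREATE TABLE IF NOT EXISTS grades (sid TEXT, score INT)")
        con.execute("INSERT INTO grades VALUES (?, ?)", (student_id, score))
    return jsonify({"score": score, "key": key})
```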

  1. WebSelF: A Web Scraping Framework

    DEFF Research Database (Denmark)

    Thomsen, Jakob; Ernst, Erik; Brabrand, Claus

    2012-01-01

    We present WebSelF, a framework for web scraping which models the process of web scraping and decomposes it into four conceptually independent, reusable, and composable constituents. We have validated our framework through a full parameterized implementation that is flexible enough to capture previous work on web scraping. We have experimentally evaluated our framework and implementation in an experiment that evaluated several qualitatively different web scraping constituents (including previous work and combinations hereof) on about 11,000 HTML pages on daily versions of 17 web sites over a period of more than one year. Our framework solves three concrete problems with current web scraping, and our experimental results indicate that composition of previous and our new techniques achieves a higher degree of accuracy, precision and specificity than existing techniques alone.

  2. Fluid annotations through open hypermedia: Using and extending emerging Web standards

    DEFF Research Database (Denmark)

    Bouvin, Niels Olof; Zellweger, Polle Trescott; Grønbæk, Kaj

    2002-01-01

    and browsing of fluid annotations on third-party Web pages. This prototype is an extension of the Arakne Environment, an open hypermedia application that can augment Web pages with externally stored hypermedia structures. This paper describes how various Web standards, including DOM, CSS, XLink, XPointer...

  3. Open Hypermedia as User Controlled Meta Data for the Web

    DEFF Research Database (Denmark)

    Grønbæk, Kaj; Sloth, Lennert; Bouvin, Niels Olof

    2000-01-01

    segments. By means of the Webvise system, OHIF structures can be authored, imposed on Web pages, and finally linked on the Web as any ordinary Web resource. Following a link to an OHIF file automatically invokes a Webvise download of the meta data structures, and the annotated Web content will be displayed in the browser. Moreover, the Webvise system provides support for users to create, manipulate, and share the OHIF structures together with custom-made web pages and MS Office 2000 documents on WebDAV servers. These Webvise facilities go beyond earlier open hypermedia systems in that they now allow fully distributed open hypermedia linking between Web pages and WebDAV-aware desktop applications. The paper describes the OHIF format and demonstrates how the Webvise system handles OHIF. Finally, it argues for better support for handling user-controlled meta data, e.g. support for linking in non-XML data...

  4. JavaScript: Convenient Interactivity for the Class Web Page.

    Science.gov (United States)

    Gray, Patricia

    This paper shows how JavaScript can be used within HTML pages to add interactive review sessions and quizzes incorporating graphics and sound files. JavaScript has the advantage of providing basic interactive functions without the use of separate software applications and players. Because it can be part of a standard HTML page, it is…

  5. PageRank, HITS and a unified framework for link analysis

    Energy Technology Data Exchange (ETDEWEB)

    Ding, Chris; He, Xiaofeng; Husbands, Parry; Zha, Hongyuan; Simon, Horst

    2001-10-01

    Two popular webpage ranking algorithms are HITS and PageRank. HITS emphasizes mutual reinforcement between authority and hub webpages, while PageRank emphasizes hyperlink weight normalization and web surfing based on random walk models. We systematically generalize/combine these concepts into a unified framework. The ranking framework contains a large algorithm space; HITS and PageRank are two extreme ends of this space. We study several normalized ranking algorithms which are intermediate between HITS and PageRank, and obtain closed-form solutions. We show that, to first-order approximation, all ranking algorithms in this framework, including PageRank and HITS, lead to the same ranking, which is highly correlated with ranking by indegree. These results support the notion that in web resource ranking, indegree and outdegree are of fundamental importance. Rankings of webgraphs of different sizes and queries are presented to illustrate our analysis.
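
    The reported correlation among the rankings can be observed on a toy graph. A sketch, assuming networkx, comparing HITS authority scores, PageRank, and in-degree:

```python
import networkx as nx

# Toy web graph; edges are hyperlinks.
G = nx.DiGraph([(1, 3), (2, 3), (5, 3), (3, 4), (4, 3), (2, 4)])

hubs, auths = nx.hits(G)            # mutual reinforcement (HITS)
pr = nx.pagerank(G, alpha=0.85)     # random-surfer model (PageRank)
deg = dict(G.in_degree())

order = lambda s: sorted(s, key=s.get, reverse=True)
print("HITS authorities:", order(auths))
print("PageRank:        ", order(pr))
print("In-degree:       ", order(deg))  # typically highly correlated
```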

  6. Clinical software development for the Web: lessons learned from the BOADICEA project.

    Science.gov (United States)

    Cunningham, Alex P; Antoniou, Antonis C; Easton, Douglas F

    2012-04-10

    In the past 20 years, society has witnessed the following landmark scientific advances: (i) the sequencing of the human genome, (ii) the distribution of software by the open source movement, and (iii) the invention of the World Wide Web. Together, these advances have provided a new impetus for clinical software development: developers now translate the products of human genomic research into clinical software tools; they use open-source programs to build them; and they use the Web to deliver them. Whilst this open-source component-based approach has undoubtedly made clinical software development easier, clinical software projects are still hampered by problems that traditionally accompany the software process. This study describes the development of the BOADICEA Web Application, a computer program used by clinical geneticists to assess risks to patients with a family history of breast and ovarian cancer. The key challenge of the BOADICEA Web Application project was to deliver a program that was safe, secure and easy for healthcare professionals to use. We focus on the software process, problems faced, and lessons learned. Our key objectives are: (i) to highlight key clinical software development issues; (ii) to demonstrate how software engineering tools and techniques can facilitate clinical software development for the benefit of individuals who lack software engineering expertise; and (iii) to provide a clinical software development case report that can be used as a basis for discussion at the start of future projects. We developed the BOADICEA Web Application using an evolutionary software process. Our approach to Web implementation was conservative and we used conventional software engineering tools and techniques. The principal software development activities were: requirements, design, implementation, testing, documentation and maintenance. The BOADICEA Web Application has now been widely adopted by clinical geneticists and researchers.

  7. Calculating PageRank in a changing network with added or removed edges

    Science.gov (United States)

    Engström, Christopher; Silvestrov, Sergei

    2017-01-01

    PageRank was initially developed by S. Brin and L. Page in 1998 to rank homepages on the Internet using the stationary distribution of a Markov chain created from the web graph. Due to the large size of the web graph and of many other real-world networks, fast methods to calculate PageRank are needed, and even though the original way of calculating PageRank using power iteration is rather fast, many other approaches have been developed to improve the speed further. In this paper we consider the problem of recalculating the PageRank of a changing network where the PageRank of a previous version of the network is known. In particular, we consider the special case of adding or removing edges at a single vertex in the graph or graph component.
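
    For reference, the power-iteration baseline that incremental methods start from fits in a few lines; the paper's update scheme for added or removed edges is not reproduced here. A sketch using NumPy, with dangling vertices given uniform outgoing probability:

```python
import numpy as np

def pagerank(adj, alpha=0.85, tol=1e-10):
    # Power iteration on the Google matrix of a dense adjacency matrix.
    n = adj.shape[0]
    out = adj.sum(axis=1, keepdims=True)
    # Normalize rows with outlinks; dangling rows become uniform.
    P = np.where(out > 0, adj / np.where(out == 0, 1, out), 1.0 / n)
    v = np.full(n, 1.0 / n)
    while True:
        v_next = alpha * (v @ P) + (1 - alpha) / n
        if np.abs(v_next - v).sum() < tol:
            return v_next
        v = v_next

A = np.array([[0, 1, 1, 0],
              [0, 0, 1, 0],
              [1, 0, 0, 1],
              [0, 0, 0, 0]], dtype=float)  # last vertex is dangling
print(pagerank(A))
```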

  8. TDCCREC: AN EFFICIENT AND SCALABLE WEB-BASED RECOMMENDATION SYSTEM

    Directory of Open Access Journals (Sweden)

    K.Latha

    2010-10-01

    Web users are confronted with a complex information space in which the volume of available information is huge. Recommender systems address this by recommending web pages related to the current page, providing the user with further customized reading material. To enhance the performance of recommender systems, we propose an elegant web-based recommendation system, the Truth Discovery based Content and Collaborative RECommender (TDCCREC), which is capable of addressing scalability. Existing approaches such as learning automata deal with the usage and navigational patterns of users. On the other hand, weighted association rules are applied for recommending web pages by assigning weights to each page in all the transactions. Both have their own disadvantages. The websites recommended by search engines carry no guarantee of information correctness and often deliver conflicting information. To solve this, content-based filtering and collaborative filtering techniques are introduced for recommending web pages to the active user, along with the trustworthiness of the website and the confidence of facts, which outperforms the existing methods. Our results show how the proposed recommender system performs better in predicting the next request of web users.
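
    The abstract does not spell out how the content, collaborative, and truth-discovery signals are weighted; the sketch below is purely illustrative, combining such scores with hypothetical weights.

```python
# Illustrative only: the weights and scores are hypothetical, not TDCCREC's.
def hybrid_score(content_sim, collab_score, trust,
                 w_content=0.4, w_collab=0.4, w_trust=0.2):
    # Linear blend of content similarity, collaborative score,
    # and website trustworthiness.
    return w_content * content_sim + w_collab * collab_score + w_trust * trust

candidates = {
    "pageA": hybrid_score(0.9, 0.3, 0.8),
    "pageB": hybrid_score(0.5, 0.7, 0.9),
}
print(max(candidates, key=candidates.get))  # recommend the top candidate
```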

  9. News from the Library: The CERN Web Archive

    CERN Multimedia

    CERN Library

    2012-01-01

    The World Wide Web was born at CERN in 1989. However, although historic paper documents from over 50 years ago survive in the CERN Archive, it is by no means certain that we will be able to consult today's web pages 50 years from now.   The Internet Archive's Wayback Machine includes an impressive collection of archived CERN web pages from 1996 onwards. However, their coverage is not complete - they aim for broad coverage of the whole Internet, rather than in-depth coverage of particular organisations. To try to fill this gap, the CERN Archive has entered into a partnership agreement with the Internet Memory Foundation. Harvesting of CERN's publicly available web pages is now being carried out on a regular basis, and the results are available online.

  10. EuroGOV: Engineering a Multilingual Web Corpus

    NARCIS (Netherlands)

    Sigurbjörnsson, B.; Kamps, J.; de Rijke, M.

    2005-01-01

    EuroGOV is a multilingual web corpus that was created to serve as the document collection for WebCLEF, the CLEF 2005 web retrieval task. EuroGOV is a collection of web pages crawled from the European Union portal, European Union member state governmental web sites, and Russian government web sites.

  11. Analysis of Web Spam for Non-English Content: Toward More Effective Language-Based Classifiers.

    Directory of Open Access Journals (Sweden)

    Mansour Alsaleh

    Web spammers aim to obtain higher ranks for their web pages by including spam content that deceives search engines into including their pages in search results even when they are not related to the search terms. Search engines continue to develop new web spam detection mechanisms, but spammers also aim to improve their tools to evade detection. In this study, we first explore the effect of the page language on spam detection features, and we demonstrate how the best set of detection features varies according to the page language. We also study the performance of Google Penguin, a newly developed anti-web-spamming technique for Google's search engine. Using spam pages in Arabic as a case study, we show that, unlike for similar English pages, Google's anti-spamming techniques are ineffective against a high proportion of Arabic spam pages. We then explore multiple detection features for spam pages to identify an appropriate set of features that yields high detection accuracy compared with the integrated Google Penguin technique. In order to build and evaluate our classifier, as well as to help researchers conduct consistent measurement studies, we collected and manually labeled a corpus of Arabic web pages, including both benign and spam pages. Furthermore, we developed a browser plug-in that utilizes our classifier to warn users about spam pages after clicking on a URL and by filtering out search engine results.
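
    A minimal sketch of a language-aware classifier in the spirit of the study, assuming scikit-learn; the tiny corpus and labels below are toy stand-ins for the manually labeled Arabic corpus, and character n-grams keep the features language-agnostic.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

pages = ["win a free prize now click here",
         "cheap pills best offer free prize",
         "weather forecast and local news for today",
         "university lecture notes on linear algebra"]
labels = [1, 1, 0, 0]  # 1 = spam, 0 = benign

clf = make_pipeline(TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),
                    LogisticRegression(max_iter=1000))
clf.fit(pages, labels)
print(clf.predict(["free prize offer click"]))  # expected: [1]
```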

  12. Web Science emerges

    OpenAIRE

    Shadbolt, Nigel; Berners-Lee, Tim

    2008-01-01

    The relentless rise in Web pages and links is creating emergent properties, from social networks to virtual identity theft, that are transforming society. A new discipline, Web Science, aims to discover how Web traits arise and how they can be harnessed or held in check to benefit society. Important advances are beginning to be made; more work can solve major issues such as securing privacy and conveying trust.

  13. Web-based training in German university eye hospitals - Education 2.0?

    Science.gov (United States)

    Handzel, Daniel M; Hesse, L

    2011-01-01

    To analyse web-based training in ophthalmology offered by German university eye hospitals. In January 2010 the websites of all 36 German university hospitals were searched for information provided for visitors, students and doctors alike. We evaluated the offerings in terms of quantity and quality. All websites could be accessed at the time of the study. 28 pages provided information for students and doctors, one page only for students, and three exclusively for doctors. Four pages didn't offer any information for these target groups. The websites offered information on events such as congresses or students' curricular education; there was also material for download for these events or for other purposes. We found complex e-learning platforms on 9 pages, dealing with special ophthalmological topics in a didactic arrangement. In spite of the extensive possibilities offered by Web 2.0 technology, many conceivable tools were only rarely made available. It was not always possible to determine whether the information provided was up to date; very often the content had last been updated long ago. On one page the date of the last change was stated as 2004. Currently there are 9 functional e-learning applications offered by German university eye hospitals. Two additional hospitals present links to a project of the German Ophthalmological Society. There was considerable variation in quantity and quality. No website made use of crediting successful studying, e.g. with CME points or OSCE credits. All German university eye hospitals present themselves on the World Wide Web. However, the lack of modern, technically as well as didactically state-of-the-art learning applications is alarming, as it leaves an essential medium of today's communication unused.

  14. First in the web, but where are the pieces

    Energy Technology Data Exchange (ETDEWEB)

    Deken, J.M.

    1998-04-01

    The World Wide Web (WWW) does matter to the SLAC Archives and History Office for two very important, and related, reasons. The first reason is that the early Web at SLAC is historically significant: it was the first of its kind on this continent, and it achieved new and important things. The second reason is that the Web at SLAC--in its present and future forms--is a large and changing collection of official documents of the organization, many of which exist in no other form or environment. As of the first week of August, 1997, SLAC had 8,940 administratively-accounted-for web pages, and an estimated 2,000 to 4,000 additional pages that are hard to track administratively because they either reside on the main server in users' directories several levels below their top-level pages, or they reside on one of the more than 60 non-main servers at the Center. A very small sampling of the information that SLAC WWW pages convey includes: information for the general public about programs and activities at SLAC; pages which allow physics experiment collaborators to monitor data, arrange work schedules and analyze results; pages that convey information to staff and visiting scientists about seminar and activity schedules, publication procedures, and ongoing experiments; and pages that allow staff and outside users to access databases maintained at SLAC. So, when SLAC's Archives and History Office begins to approach collecting the documents of its WWW presence, what is it collecting, and how is it to go about the process of collecting it? In this paper, the author discusses the effort to archive SLAC's Web in two parts, concentrating on the first task that has been undertaken: the initial effort to identify and gather into the archives evidence and documentation of the early days of the SLAC Web. The second task, the effort to collect present and future web pages at SLAC, is also covered, although in less detail, since it is an effort that is only

  15. Analyzing Web pages visual scanpaths: between and within tasks variability.

    Science.gov (United States)

    Drusch, Gautier; Bastien, J M Christian

    2012-01-01

    In this paper, we propose a new method for comparing scanpaths in a bottom-up approach, and a test of the scanpath theory. To do so, we conducted a laboratory experiment in which 113 participants were invited to accomplish a set of tasks on two different websites. For each site, they had to perform two tasks, each of which was repeated once. The data were analyzed using a procedure similar to the one used by Duchowski et al. [8]. The first step was to automatically identify, then label, AOIs with the mean-shift clustering procedure [19]. Then, scanpaths were compared two by two with a modified version of the string-edit method, which takes into account the order of AOI visualizations [2]. Our results show that scanpath variability between tasks but within participants seems to be lower than the variability within a task for a given participant. In other words, participants seem to be more consistent when they perform different tasks than when they repeat the same task. In addition, participants viewed more of the same AOIs when they performed a different task on the same Web page than when they repeated the same task. These results are quite different from what the scanpath theory predicts.
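
    A plain Levenshtein distance over sequences of AOI labels illustrates the string-edit comparison; the paper's modified version, which accounts for the order of AOI visualizations, is not reproduced here.

```python
# Scanpaths encoded as strings: each letter is one fixated AOI, in order.
def edit_distance(a, b):
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

print(edit_distance("ABCAD", "ABDAC"))  # 2 substitutions apart
```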

  16. Building single-page web apps with meteor

    CERN Document Server

    Vogelsteller, Fabian

    2015-01-01

    If you are a web developer with basic knowledge of JavaScript and want to take on Web 2.0, build real-time applications, or simply want to write a complete application using only JavaScript and HTML/CSS, this is the book for you.This book is based on Meteor 1.0.

  17. Project Photofly: New 3d Modeling Online Web Service (case Studies and Assessments)

    Science.gov (United States)

    Abate, D.; Furini, G.; Migliori, S.; Pierattini, S.

    2011-09-01

    During the summer of 2010, Autodesk released a still-ongoing project called Project Photofly, freely downloadable from the AutodeskLab web site until August 1, 2011. Project Photofly, based on computer-vision and photogrammetric principles and exploiting the power of cloud computing, is a web service able to convert collections of photographs into 3D models. The aim of our research was to evaluate Project Photofly, through different case studies, for the 3D modeling of cultural heritage monuments and objects, mostly to identify the goals and objects for which it is suitable. The analysis focuses mainly on the automatic approach.

  18. Experiment Software and Projects on the Web with VISPA

    Science.gov (United States)

    Erdmann, M.; Fischer, B.; Fischer, R.; Geiser, E.; Glaser, C.; Müller, G.; Rieger, M.; Urban, M.; von Cube, R. F.; Welling, C.

    2017-10-01

    The Visual Physics Analysis (VISPA) project defines a toolbox for accessing software via the web. It is based on the latest web technologies and provides a powerful extension mechanism that enables it to interface a wide range of applications. Beyond basic applications such as a code editor, a file browser, or a terminal, it meets the demands of sophisticated experiment-specific use cases that focus on physics data analyses and typically require a high degree of interactivity. As an example, we developed a data inspector that is capable of browsing interactively through the event content of several data formats, e.g., MiniAOD, which is utilized by the CMS collaboration. The VISPA extension mechanism can also be used to embed external web-based applications that benefit from dynamic allocation of user-defined computing resources via SSH. For example, by wrapping the JSROOT project, ROOT files located on any remote machine can be inspected directly through a VISPA server instance. We introduced domains that combine groups of users and role-based permissions. Thereby, tailored projects are enabled, e.g. for teaching, where access to students' homework is restricted to a team of tutors, or for experiment-specific data that may only be accessible to members of the collaboration. We present the extension mechanism, including corresponding applications, and give an outlook on the new permission system.

  19. Web-based surveillance of public information needs for informing preconception interventions.

    Directory of Open Access Journals (Sweden)

    Angelo D'Ambrosio

    The risk of adverse pregnancy outcomes can be minimized through the adoption of healthy lifestyles before pregnancy by women of childbearing age. Initiatives for the promotion of preconception health may be difficult to implement. The Internet can be used to build tailored health interventions through identification of the public's information needs. To this aim, we developed a semi-automatic web-based system for monitoring Google searches, web pages and activity on social networks regarding preconception health. Based on the American College of Obstetricians and Gynecologists (ACOG) guidelines and on the actual search behaviors of Italian Internet users, we defined a set of keywords targeting preconception care topics. Using these keywords, we analyzed the usage of the Google search engine and identified web pages containing preconception care recommendations. We also monitored how the selected web pages were shared on social networks, and we analyzed discrepancies between searched and published information as well as the sharing pattern of the topics. We identified 1,807 Google search queries which generated a total of 1,995,030 searches during the study period. Less than 10% of the reviewed pages contained preconception care information, and in 42.8% the information was consistent with ACOG guidelines. Facebook was the most used social network for sharing. Nutrition, Chronic Diseases and Infectious Diseases were the most published and searched topics. Regarding Genetic Risk and Folic Acid, a high search volume was not associated with a high web page production, while Medication pages were more frequently published than searched. Vaccinations elicited high sharing although web page production was low; this effect was quite variable in time. Our study represents a resource to prioritize communication on specific topics on the web, to address misconceptions, and to tailor interventions to specific populations.

  20. Decomposition of the Google PageRank and Optimal Linking Strategy

    NARCIS (Netherlands)

    Avrachenkov, Konstatin; Litvak, Nelli

    We provide the analysis of the Google PageRank from the perspective of the Markov Chain Theory. First we study the Google PageRank for a Web that can be decomposed into several connected components which do not have any links to each other. We show that in order to determine the Google PageRank for

  1. Study of a Random Navigation on the Web Using Software Simulation

    Directory of Open Access Journals (Sweden)

    Mirella-Amelia Mioc

    2015-12-01

    General information about the World Wide Web is nowadays especially useful in all types of communication. The most widely used model for simulating the functioning of the web is the hypergraph. For this simulation, the surfer model was chosen from among the known algorithms used for web navigation. The main objective of this paper is to analyze PageRank and its dependency on the Markov chain length. Several software implementations are presented and used. The experimental results demonstrate the differences between the algorithmic PageRank and the experimentally estimated PageRank.
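
    The surfer model can be simulated directly: PageRank is estimated from the visit frequencies of a damped random walk, and the Markov chain length (number of steps) governs how stable the estimate is. A sketch on a hypothetical four-page graph:

```python
import random

links = {0: [1, 2], 1: [2], 2: [0, 3], 3: [2]}  # toy web graph

def surf(links, steps=100_000, alpha=0.85, seed=7):
    rng = random.Random(seed)
    nodes = list(links)
    visits = dict.fromkeys(nodes, 0)
    v = rng.choice(nodes)
    for _ in range(steps):
        visits[v] += 1
        if rng.random() < alpha and links[v]:
            v = rng.choice(links[v])   # follow a random outlink
        else:
            v = rng.choice(nodes)      # teleport anywhere
    return {n: c / steps for n, c in visits.items()}

print(surf(links))  # longer chains give stabler estimates
```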

  2. Using centrality to rank web snippets

    NARCIS (Netherlands)

    Jijkoun, V.; de Rijke, M.; Peters, C.; Jijkoun, V.; Mandl, T.; Müller, H.; Oard, D.W.; Peñas, A.; Petras, V.; Santos, D.

    2008-01-01

    We describe our participation in the WebCLEF 2007 task, targeted at snippet retrieval from web data. Our system ranks snippets based on a simple similarity-based centrality, inspired by the web page ranking algorithms. We experimented with retrieval units (sentences and paragraphs) and with the

  3. Limitations of existing web services

    Indian Academy of Sciences (India)

    Limitations of existing web services: uploading or downloading large data; serving too many users from a single source; difficulty in providing compute-intensive jobs; dependence on the Internet and its bandwidth; security of data in transit; maintaining confidentiality of data ...

  4. Heuristic evaluation of paper-based Web pages: a simplified inspection usability methodology.

    Science.gov (United States)

    Allen, Mureen; Currie, Leanne M; Bakken, Suzanne; Patel, Vimla L; Cimino, James J

    2006-08-01

    Online medical information, when presented to clinicians, must be well-organized and intuitive to use, so that the clinicians can conduct their daily work efficiently and without error. It is essential to actively seek to produce good user interfaces that are acceptable to the user. This paper describes the methodology used to develop a simplified heuristic evaluation (HE) suitable for the evaluation of screen shots of Web pages, the development of an HE instrument used to conduct the evaluation, and the results of the evaluation of the aforementioned screen shots. In addition, this paper presents examples of the process of categorizing problems identified by the HE and the technological solutions identified to resolve these problems. Four usability experts reviewed 18 paper-based screen shots and made a total of 108 comments. Each expert completed the task in about an hour. We were able to implement solutions to approximately 70% of the violations. Our study found that a heuristic evaluation using paper-based screen shots of a user interface was expeditious, inexpensive, and straightforward to implement.

  5. Web-based Tools for Educators: Outreach Activities of the Polar Radar for Ice Sheet Measurements (PRISM) Project

    Science.gov (United States)

    Braaten, D. A.; Holvoet, J. F.; Gogineni, S.

    2003-12-01

    (if necessary) by teachers everywhere. The PRISM project has added a search engine for polar related tracks, and has developed numerous new tracks on robotics, polar exploration, and climate change under the guidance of a K-12 teacher advisory group. The PRISM project is also developing and hosting several other web-based lesson design tools and resources for K-12 educators and students on the PRISM project web page (http://www.ku-prism.org). These tools and resources include: i) "Polar Scientists and Explorers, Past and Present" covering the travels and/or unknown fate of polar explorers and scientists; ii) "Polar News" providing links to current news articles related to polar regions; iii) "Letter of Global Concern", which is a tool to help students draft a letter to a politician, government official, or business leader; iv) "Graphic Sleuth", which is an online utility that allows teachers to make lessons for student use; v) "Bears on Ice" for students in grades K - 6 that can follow the adventures of two stuffed bears that travel with scientists into polar regions; and vi) "K-12 Polar Resources," which provides teachers with images, information, TrackStar lessons, and a search engine designed to identify polar related lessons. In our presentation, we will describe and show examples of these tools and resources, and provide an assessment of their popularity with teachers nationwide.

  6. Indian accent text-to-speech system for web browsing

    Indian Academy of Sciences (India)

    This paper describes a 'web reader' which 'reads out' the textual contents of a selected web page in Hindi or in English with Indian accent. The content of the page is downloaded and parsed into suitable textual form. It is then passed on to an indigenously developed text-to-speech system for Hindi/Indian English, ...

  7. Application of FrontPage 98 to the Development of Web Sites for the Science Division and the Center for the Advancement of Learning and Teaching (CALT) at Anne Arundel Community College.

    Science.gov (United States)

    Bird, Bruce

    This paper discusses the development of two World Wide Web sites at Anne Arundel Community College (Maryland). The criteria for the selection of hardware and software for Web site development that led to the decision to use Microsoft FrontPage 98 are described along with its major components and features. The discussion of the Science Division Web…

  8. Instant PageSpeed optimization

    CERN Document Server

    Jaiswal, Sanjeev

    2013-01-01

Filled with practical, step-by-step instructions and clear explanations for the most important and useful tasks. Instant PageSpeed Optimization is a hands-on guide that provides a number of clear, step-by-step exercises for optimizing your websites for better performance and improving their efficiency. Instant PageSpeed Optimization is aimed at website developers and administrators who wish to make their websites load faster without any errors and consume less bandwidth. It's assumed that you will have some experience in basic web technologies like HTML, CSS3, JavaScript, and the basics of networking.

  9. Web Spam, Social Propaganda and the Evolution of Search Engine Rankings

    Science.gov (United States)

    Metaxas, Panagiotis Takis

Search Engines have greatly influenced the way we experience the web. Since the early days of the web, users have been relying on them to get informed and make decisions. When the web was relatively small, web directories were built and maintained using human experts to screen and categorize pages according to their characteristics. By the mid-1990s, however, it was apparent that the human expert model of categorizing web pages did not scale. The first search engines appeared and they have been evolving ever since, taking over the role that web directories used to play.

  10. The electronic image of the energy. A functional and aesthetical study of the web pages of the companies of energy.

    OpenAIRE

    Álvarez Ruiz, Antón; Reyes Moreno, María

    2011-01-01

This article analyzes the web pages of the major companies in the energy market (electricity and gas) of the more developed countries (EU and North America). The present situation of the market is very significant: for the first time, the governments of these countries have given permission to liberalize the power market. This process is advancing very slowly, but we can already see some changes that lead to a process of concentration, through acquisitions, mergers and commercial agreements between the ...

  11. Process evaluation of Project WebHealth: a nondieting Web-based intervention for obesity prevention in college students.

    Science.gov (United States)

    Dour, Colleen A; Horacek, Tanya M; Schembre, Susan M; Lohse, Barbara; Hoerr, Sharon; Kattelmann, Kendra; White, Adrienne A; Shoff, Suzanne; Phillips, Beatrice; Greene, Geoffrey

    2013-01-01

Objective: To evaluate the motivational effect of the Project WebHealth study procedures and intervention components on weight-related health behavior changes in male and female college students. Design: Process evaluation. Setting: Eight universities in the United States. Participants: Project WebHealth participants (n = 653; 29% men). Main Outcome Measures: Participants rated motivational effects of study procedures and intervention components. Participants were grouped into outcome-based health behavior categories based on achievement of desired targets for fruit and vegetable intake, physical activity, and/or body weight. Analysis: Differences in motivation from each procedure and component were analyzed by gender- and outcome-based health behavior category. Results: Women were generally more motivated than men. Compared to those who did not meet any target health behaviors, men with improved health outcomes (68%) were significantly more motivated by the skills to fuel the body lesson, goal setting, and research snippets. Their female counterparts (63%) were significantly more motivated by the lessons on body size and eating enjoyment, and by the suggested weekly activities. Conclusions and Implications: Specific study procedures and components of Project WebHealth motivated study participants to improve their weight-related health behaviors, and they differed by gender. Findings support the need for gender-tailored interventions in this population. Copyright © 2013 Society for Nutrition Education and Behavior. Published by Elsevier Inc. All rights reserved.

  12. Aesthetic design of e-commerce web pages—Complexity, order, and preferences

    NARCIS (Netherlands)

    Poole, M.S.

    2012-01-01

    This study was conducted to understand the perceptual structure of e-commerce webpage visual aesthetics and to provide insight into how physical design features of web pages influence users' aesthetic perception of and preference for web pages. Drawing on the environmental aesthetics, human-computer

  13. A Note on the PageRank of Undirected Graphs

    OpenAIRE

    Grolmusz, Vince

    2012-01-01

The PageRank is a widely used scoring function of networks in general and of the World Wide Web graph in particular. The PageRank is defined for directed graphs, but in some special cases applications for undirected graphs occur. In the literature it is widely noted that the PageRank for undirected graphs is proportional to the degrees of the vertices of the graph. We prove that statement for a particular personalization vector in the definition of the PageRank, and we also show that in gene...
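
    The claimed degree proportionality is easy to check empirically. A minimal sketch, assuming networkx and a random undirected graph (not the paper's setting): as the damping parameter approaches 1, the PageRank of a vertex approaches its degree divided by twice the number of edges, while for smaller damping it only approximates that value.

```python
# Sketch: empirically compare PageRank of an undirected graph with degrees.
# The random graph and alpha value are illustrative assumptions.
import networkx as nx

G = nx.erdos_renyi_graph(n=200, p=0.05, seed=42)
pr = nx.pagerank(G, alpha=0.85)

total_degree = sum(dict(G.degree()).values())  # equals 2 * number of edges
for v in list(G.nodes())[:5]:
    degree_share = G.degree(v) / total_degree
    print(f"node {v}: pagerank={pr[v]:.5f}  degree/2m={degree_share:.5f}")
# For alpha -> 1 the two columns coincide; for alpha < 1 they only
# approximate each other, in line with the caveat the paper proves.
```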

  14. Classifying web genres in context: a case study documenting the web genres used by a software engineer

    NARCIS (Netherlands)

    Montesi, M.; Navarrete, T.

    2008-01-01

    This case study analyzes the Internet-based resources that a software engineer uses in his daily work. Methodologically, we studied the web browser history of the participant, classifying all the web pages he had seen over a period of 12 days into web genres. We interviewed him before and after the

  15. Scientist who weaves wonderful web

    CERN Multimedia

    Wills, D

    2000-01-01

Mr Berners-Lee's unique standing makes him a sought-after speaker. People want to know how he developed the Web and where he thinks it is headed. 'Weaving the Web', which he wrote with Mark Fischetti, is his attempt to answer these questions (1 page).

  16. INVENTORY OF ECOSYSTEM RESTORATION PROJECTS - PUBLISHED ON THE OFFICE OF WATER WEB PAGE

    Science.gov (United States)

    USEPA's National Risk Management Research Laboratory working jointly with the Office of Water, has developed an Internet-accessible database of ecosystem restoration projects within the Mid-Atlantic Integrated Assessment (MAIA) region. This article informs project owners of the i...

  17. How Useful are Orthopedic Surgery Residency Web Pages?

    Science.gov (United States)

    Oladeji, Lasun O; Yu, Jonathan C; Oladeji, Afolayan K; Ponce, Brent A

    2015-01-01

Medical students interested in orthopedic surgery residency positions frequently use the Internet as a modality to gather information about individual residency programs. Students often invest a painstaking amount of time and effort in determining programs that they are interested in, and the Internet is central to this process. Numerous studies have concluded that program websites are a valuable resource for residency and fellowship applicants. The purpose of the present study was to provide an update on the web pages of academic orthopedic surgery departments in the United States and to rate their utility in providing information on quality of education, faculty and resident information, environment, and applicant information. We reviewed existing websites for the 156 departments or divisions of orthopedic surgery that are currently accredited for resident education by the Accreditation Council for Graduate Medical Education. Each website was assessed for quality of information regarding quality of education, faculty and resident information, environment, and applicant information. We noted that 152 of the 156 departments (97%) had functioning websites that could be accessed. There was high variability regarding the comprehensiveness of orthopedic residency websites. Most of the orthopedic websites provided information on conferences, didactics, and resident rotations. Less than 50% of programs provided information on resident call schedules, resident or faculty research and publications, resident hometowns, or resident salary. There is a lack of consistency regarding the content presented on orthopedic residency websites. As the competition for orthopedic residency positions continues to increase, applicants flock to the Internet in greater numbers to learn more about residency programs. A well-constructed website has the potential to increase the caliber of students applying to a given program. Copyright © 2015 Association of Program Directors in Surgery. Published by

  18. Blueprint of a Cross-Lingual Web Retrieval Collection

    NARCIS (Netherlands)

    Sigurbjörnsson, B.; Kamps, J.; de Rijke, M.; van Zwol, R.

    2005-01-01

    The world wide web is a natural setting for cross-lingual information retrieval; web content is essentially multilingual, and web searchers are often polyglots. Even though English has emerged as the lingua franca of the web, planning for a business trip or holiday usually involves digesting pages

  19. INTERNET and information about nuclear sciences. The world wide web virtual library: nuclear sciences

    International Nuclear Information System (INIS)

    Kuruc, J.

    1999-01-01

In this work the author proposes to constitute a new virtual library which should centralize information from the nuclear disciplines on the INTERNET, in order first and foremost to provide connections to the most important links in the nuclear sciences. The author has entitled this new virtual library The World Wide Web Virtual Library: Nuclear Sciences. In constituting this virtual library, the following basic principles were chosen: home pages of international organizations important from the point of view of the nuclear disciplines; home pages of the national nuclear commissions and governments; home pages of nuclear scientific societies; web pages specialized in nuclear problems, in general; periodic tables of elements and isotopes; web pages focused on the Chernobyl accident and its consequences; web pages with an antinuclear aim. The links then continue, grouped on web pages according to individual nuclear areas: nuclear arsenals; nuclear astrophysics; nuclear aspects of biology (radiobiology); nuclear chemistry; nuclear company; nuclear data centres; nuclear energy; nuclear energy, environmental aspects of (radioecology); nuclear energy info centres; nuclear engineering; nuclear industries; nuclear magnetic resonance; nuclear material monitoring; nuclear medicine and radiology; nuclear physics; nuclear power (plants); nuclear reactors; nuclear risk; nuclear technologies and defence; nuclear testing; nuclear tourism; nuclear wastes. In each of these groups, web links are concentrated into the following categories: virtual libraries and specialized servers; science; nuclear societies; nuclear departments of the academic institutes; nuclear research institutes and laboratories; centres, info links

  20. An Efficient PageRank Approach for Urban Traffic Optimization

    Directory of Open Access Journals (Sweden)

    Florin Pop

    2012-01-01

to determine optimal decisions for each traffic light, based on the solution given by Larry Page for page ranking in the Web environment (Page et al., 1999). Our approach is similar to the work presented by Sheng-Chung et al. (2009) and Yousef et al. (2010). We consider that the traffic lights are controlled by servers, and a score for each road is computed based on an efficient PageRank approach and is used in a cost function to determine optimal decisions. We demonstrate that the cumulative contribution of each car in the traffic respects the main constraint of the PageRank approach, preserving all the properties of the matrix considered in our model.
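
    The core computation described, scores obtained from a transition probability matrix, can be illustrated with plain power iteration. The 4-road matrix below is invented for illustration; the paper's cost function and traffic-light control loop are not shown.

```python
# Sketch: score roads by power iteration over a column-stochastic transition
# probability matrix, in the spirit of the PageRank-based traffic model.
import numpy as np

# P[i, j] = probability that a car on road j moves to road i next
# (each column sums to 1; the values are made up for illustration).
P = np.array([
    [0.0, 0.5, 0.3, 0.0],
    [0.4, 0.0, 0.3, 0.5],
    [0.3, 0.5, 0.0, 0.5],
    [0.3, 0.0, 0.4, 0.0],
])
alpha, n = 0.85, P.shape[0]
G = alpha * P + (1 - alpha) / n * np.ones((n, n))  # Google matrix

score = np.full(n, 1.0 / n)   # start from a uniform distribution
for _ in range(100):          # power iteration converges geometrically
    score = G @ score
print("road scores:", np.round(score, 4))
```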

  1. Universal emergence of PageRank

    Energy Technology Data Exchange (ETDEWEB)

    Frahm, K M; Georgeot, B; Shepelyansky, D L, E-mail: frahm@irsamc.ups-tlse.fr, E-mail: georgeot@irsamc.ups-tlse.fr, E-mail: dima@irsamc.ups-tlse.fr [Laboratoire de Physique Theorique du CNRS, IRSAMC, Universite de Toulouse, UPS, 31062 Toulouse (France)

    2011-11-18

The PageRank algorithm enables us to rank the nodes of a network through a specific eigenvector of the Google matrix, using a damping parameter α ∈ ]0, 1[. Using extensive numerical simulations of large web networks, with a special accent on British University networks, we determine numerically and analytically the universal features of the PageRank vector at its emergence when α → 1. The whole network can be divided into a core part and a group of invariant subspaces. For α → 1, PageRank converges to a universal power-law distribution on the invariant subspaces whose size distribution also follows a universal power law. The convergence of PageRank at α → 1 is controlled by eigenvalues of the core part of the Google matrix, which are extremely close to unity, leading to large relaxation times as, for example, in spin glasses. (paper)

  2. Universal emergence of PageRank

    International Nuclear Information System (INIS)

    Frahm, K M; Georgeot, B; Shepelyansky, D L

    2011-01-01

    The PageRank algorithm enables us to rank the nodes of a network through a specific eigenvector of the Google matrix, using a damping parameter α ∈ ]0, 1[. Using extensive numerical simulations of large web networks, with a special accent on British University networks, we determine numerically and analytically the universal features of the PageRank vector at its emergence when α → 1. The whole network can be divided into a core part and a group of invariant subspaces. For α → 1, PageRank converges to a universal power-law distribution on the invariant subspaces whose size distribution also follows a universal power law. The convergence of PageRank at α → 1 is controlled by eigenvalues of the core part of the Google matrix, which are extremely close to unity, leading to large relaxation times as, for example, in spin glasses. (paper)

  3. The poor quality of information about laparoscopy on the World Wide Web as indexed by popular search engines.

    Science.gov (United States)

    Allen, J W; Finch, R J; Coleman, M G; Nathanson, L K; O'Rourke, N A; Fielding, G A

    2002-01-01

    This study was undertaken to determine the quality of information on the Internet regarding laparoscopy. Four popular World Wide Web search engines were used with the key word "laparoscopy." Advertisements, patient- or physician-directed information, and controversial material were noted. A total of 14,030 Web pages were found, but only 104 were unique Web sites. The majority of the sites were duplicate pages, subpages within a main Web page, or dead links. Twenty-eight of the 104 pages had a medical product for sale, 26 were patient-directed, 23 were written by a physician or group of physicians, and six represented corporations. The remaining 21 were "miscellaneous." The 46 pages containing educational material were critically reviewed. At least one of the senior authors found that 32 of the pages contained controversial or misleading statements. All of the three senior authors (LKN, NAO, GAF) independently agreed that 17 of the 46 pages contained controversial information. The World Wide Web is not a reliable source for patient or physician information about laparoscopy. Authenticating medical information on the World Wide Web is a difficult task, and no government or surgical society has taken the lead in regulating what is presented as fact on the World Wide Web.

  4. Spinning the web of knowledge

    CERN Multimedia

    Knight, Matthew

    2007-01-01

    "On August 6, 1991, Tim Berners-Lee posted the World Wide Web's first Web site. Fifteen years on there are estimated to be over 100 million. The space of growth has happened at a bewildering rate and its success has even confounded its inventor." (1/2 page)

  5. Developing Web Services for Technology Education. The Graphic Communication Electronic Publishing Project.

    Science.gov (United States)

    Sanders, Mark

    1999-01-01

    Graphic Communication Electronic Publishing Project supports a Web site (http://TechEd.vt.edu/gcc/) for graphic communication teachers and students, providing links to Web materials, conversion of print materials to electronic formats, and electronic products and services including job listings, resume posting service, and a listserv. (SK)

  6. A Technique to Speedup Access to Web Contents

    Indian Academy of Sciences (India)

Web Caching – A Technique to Speedup Access to Web Contents. Harsha Srinath and Shiva Shankar Ramanna. General Article, Volume 7, Issue 7, July 2002, pp. 54-62. Keywords: world wide web; data caching; internet traffic; web page access.

  7. SPIAR: an architectural style for single page internet applications

    NARCIS (Netherlands)

    A. Mesbah (Ali); K. Broenink; A. van Deursen (Arie)

    2006-01-01

A new breed of Web application, dubbed AJAX, is emerging in response to a limited degree of interactivity in large-grain stateless Web interactions. At the heart of this new approach lies a single page interaction model that facilitates rich interactivity. In this paper, we examine the

  8. Project Leadership Lived Experiences with Web-Based Social Networking: A Phenomenological Study

    Science.gov (United States)

    Scroggins, Charles W.

    2010-01-01

    This study explores the lived experiences of project leaders adopting and using Web-2.0 social networking collaboration applications for their project leadership activities. The experiences of 20 project leaders in a Fortune 500 aerospace and defense enterprise in the northeastern United States of America were explored using a qualitative…

  9. Grouping of Items in Mobile Web Questionnaires

    Science.gov (United States)

    Mavletova, Aigul; Couper, Mick P.

    2016-01-01

    There is some evidence that a scrolling design may reduce breakoffs in mobile web surveys compared to a paging design, but there is little empirical evidence to guide the choice of the optimal number of items per page. We investigate the effect of the number of items presented on a page on data quality in two types of questionnaires: with or…

  10. Web-based pathology practice examination usage

    Directory of Open Access Journals (Sweden)

    Edward C Klatt

    2014-01-01

Context: General and subject-specific practice examinations for students in health sciences studying pathology were placed onto a free public internet web site entitled WebPath and were accessed four clicks from the home web site menu. Subjects and Methods: Multiple choice questions were coded into .html files with JavaScript functions for web browser viewing in a timed format. A Perl programming language script with common gateway interface for web page forms scored examinations and placed results into a log file on an internet computer server. The four general review examinations of 30 questions each could be completed in up to 30 min. The 17 subject-specific examinations of 10 questions each with accompanying images could be completed in up to 15 min each. The results of scores and user educational field of study from log files were compiled from June 2006 to January 2014. Results: The four general review examinations had 31,639 accesses with completion of all questions, for a completion rate of 54% and an average score of 75%. A score of 100% was achieved by 7% of users, ≥90% by 21%, and ≥50% by 95% of users. In top-to-bottom web page menu order, review examination usage was 44%, 24%, 17%, and 15% of all accessions. The 17 subject-specific examinations had 103,028 completions, with a completion rate of 73% and an average score of 74%. Scoring at 100% was 20% overall, ≥90% by 37%, and ≥50% by 90% of users. The first three menu items on the web page accounted for 12.6%, 10.0%, and 8.2% of all completions, and the bottom three accounted for no more than 2.2% each. Conclusions: Completion rates were higher for the shorter 10-question subject examinations. Users identifying themselves as MD/DO scored higher than other users, averaging 75%. Usage was higher for examinations at the top of the web page menu. Scores achieved suggest that a cohort of serious users fully completing the examinations had sufficient preparation to use them to support
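
    The reported statistics (completion rates, average scores) come from parsing the server-side results log. A minimal sketch of that kind of tally, assuming a hypothetical CSV log format rather than WebPath's actual one:

```python
# Sketch: tally completion rate and average score from an exam results log.
# The column layout (exam, answered, total, score) is a made-up assumption.
import csv

def summarize(log_path: str) -> None:
    attempts, completions, scores = 0, 0, []
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):
            attempts += 1
            # Count an attempt as complete when every question was answered.
            if int(row["answered"]) == int(row["total"]):
                completions += 1
                scores.append(float(row["score"]))
    print(f"completion rate: {completions / attempts:.0%}")
    print(f"average score:   {sum(scores) / len(scores):.0f}%")

summarize("exam_log.csv")  # hypothetical log file
```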

  11. Assembly and concept of a web-based GIS within the paleolimnological project CONTINENT (Lake Baikal, Russia)

    OpenAIRE

    B. Heim; Jens Klump; N. Fagel; Hedi Oberhänsli

    2008-01-01

    Web-based Geographical Information Systems (GIS) are excellent tools within interdisciplinary and multi-national geoscience projects to exchange and visualize project data. The web-based GIS presented in this paper was designed for the paleolimnological project 'High-resolution CONTINENTal paleoclimate record in Lake Baikal' (CONTINENT) (Lake Baikal, Siberia, Russia) to allow the interactive handling of spatial data. The GIS database combines project data (core positions, sample positions, th...

  12. A Web-Based Monitoring System for Multidisciplinary Design Projects

    Science.gov (United States)

    Rogers, James L.; Salas, Andrea O.; Weston, Robert P.

    1998-01-01

    In today's competitive environment, both industry and government agencies are under pressure to reduce the time and cost of multidisciplinary design projects. New tools have been introduced to assist in this process by facilitating the integration of and communication among diverse disciplinary codes. One such tool, a framework for multidisciplinary computational environments, is defined as a hardware and software architecture that enables integration, execution, and communication among diverse disciplinary processes. An examination of current frameworks reveals weaknesses in various areas, such as sequencing, displaying, monitoring, and controlling the design process. The objective of this research is to explore how Web technology, integrated with an existing framework, can improve these areas of weakness. This paper describes a Web-based system that optimizes and controls the execution sequence of design processes; and monitors the project status and results. The three-stage evolution of the system with increasingly complex problems demonstrates the feasibility of this approach.

  13. Improving the interactivity and functionality of Web-based radiology teaching files with the Java programming language.

    Science.gov (United States)

    Eng, J

    1997-01-01

    Java is a programming language that runs on a "virtual machine" built into World Wide Web (WWW)-browsing programs on multiple hardware platforms. Web pages were developed with Java to enable Web-browsing programs to overlay transparent graphics and text on displayed images so that the user could control the display of labels and annotations on the images, a key feature not available with standard Web pages. This feature was extended to include the presentation of normal radiologic anatomy. Java programming was also used to make Web browsers compatible with the Digital Imaging and Communications in Medicine (DICOM) file format. By enhancing the functionality of Web pages, Java technology should provide greater incentive for using a Web-based approach in the development of radiology teaching material.

  14. Default Parallels Plesk Panel Page

    Science.gov (United States)

services that small businesses want and need. Our software includes key building blocks of cloud service: virtualized servers. Service Provider Products: Parallels® Automation, the leading hosting automation software for hosting, SaaS, and cloud computing. You see this page because there is no Web site at this

  15. SOAP based web services and their future role in VO projects

    Science.gov (United States)

    Topf, F.; Jacquey, C.; Génot, V.; Cecconi, B.; André, N.; Zhang, T. L.; Kallio, E.; Lammer, H.; Facsko, G.; Stöckler, R.; Khodachenko, M.

    2011-10-01

Modern state-of-the-art web services are of crucial importance for the interoperability of the different VO tools existing in the planetary community. SOAP-based web services assure interoperability between different data sources and tools by providing a common protocol for communication. This paper will point out a best-practice approach with the Automated Multi-Dataset Analysis Tool (AMDA) developed by CDPP, Toulouse, and the provision of VEX/MAG data from a remote database located at IWF, Graz. Furthermore, a new FP7 project, IMPEx, will be introduced with a potential usage example of AMDA web services in conjunction with simulation models.
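
    In Python, a SOAP service described by a WSDL can be called generically. A minimal sketch with the zeep library, where the WSDL URL, operation name and parameter ID are placeholders rather than AMDA's real interface:

```python
# Sketch: calling a SOAP web service from Python with the zeep library.
# The WSDL URL, operation and parameter names are hypothetical placeholders.
from zeep import Client

client = Client("https://example.org/amda/services?wsdl")  # hypothetical WSDL

# Each operation described by the WSDL becomes a method on client.service.
result = client.service.getParameter(        # hypothetical operation
    startTime="2011-01-01T00:00:00",
    stopTime="2011-01-02T00:00:00",
    parameterID="vex_mag_b",                 # hypothetical VEX/MAG parameter
)
print(result)
```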

  16. Guide on Project Web Access of SFR R and D and Technology Monitoring System

    International Nuclear Information System (INIS)

    Lee, Dong Uk; Won, Byung Chool; Lee, Yong Bum; Kim, Young In; Hahn, Do Hee

    2008-09-01

The SFR R and D and technology monitoring system, based on MS enterprise project management, was developed for the systematic and effective management of the 'Development of Basic Key Technologies for Gen IV SFR' project, which was performed under the Mid- and Long-term Nuclear R and D Program sponsored by the Ministry of Education, Science and Technology. This system is a tool for project management based on web access. Therefore this manual is a detailed guide for Project Web Access (PWA). Section 1 describes the common guide for using system functions such as Project Server 2007 client connection setting, additional Outlook function setting, etc. Section 2 describes the guide for the system administrator. Sections 3 and 4 describe the guide for project management.

  17. Guide on Project Web Access of SFR R and D and Technology Monitoring System

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Dong Uk; Won, Byung Chool; Lee, Yong Bum; Kim, Young In; Hahn, Do Hee

    2008-09-15

The SFR R and D and technology monitoring system, based on MS enterprise project management, was developed for the systematic and effective management of the 'Development of Basic Key Technologies for Gen IV SFR' project, which was performed under the Mid- and Long-term Nuclear R and D Program sponsored by the Ministry of Education, Science and Technology. This system is a tool for project management based on web access. Therefore this manual is a detailed guide for Project Web Access (PWA). Section 1 describes the common guide for using system functions such as Project Server 2007 client connection setting, additional Outlook function setting, etc. Section 2 describes the guide for the system administrator. Sections 3 and 4 describe the guide for project management.

  18. Web-based Surveys: Changing the Survey Process

    OpenAIRE

    Gunn, Holly

    2002-01-01

    Web-based surveys are having a profound influence on the survey process. Unlike other types of surveys, Web page design skills and computer programming expertise play a significant role in the design of Web-based surveys. Survey respondents face new and different challenges in completing a Web-based survey. This paper examines the different types of Web-based surveys, the advantages and challenges of using Web-based surveys, the design of Web-based surveys, and the issues of validity, error, ...

  19. Modelling Safe Interface Interactions in Web Applications

    Science.gov (United States)

    Brambilla, Marco; Cabot, Jordi; Grossniklaus, Michael

    Current Web applications embed sophisticated user interfaces and business logic. The original interaction paradigm of the Web based on static content pages that are browsed by hyperlinks is, therefore, not valid anymore. In this paper, we advocate a paradigm shift for browsers and Web applications, that improves the management of user interaction and browsing history. Pages are replaced by States as basic navigation nodes, and Back/Forward navigation along the browsing history is replaced by a full-fledged interactive application paradigm, supporting transactions at the interface level and featuring Undo/Redo capabilities. This new paradigm offers a safer and more precise interaction model, protecting the user from unexpected behaviours of the applications and the browser.

  20. Using Web-Based Peer Benchmarking to Manage the Client-Based Project

    Science.gov (United States)

    Raska, David; Keller, Eileen Weisenbach; Shaw, Doris

    2013-01-01

    The complexities of integrating client-based projects into marketing courses provide challenges for the instructor but produce richness of context and active learning for the student. This paper explains the integration of Web-based peer benchmarking as a means of improving student performance on client-based projects within a single semester in…

  1. Grey Guide: A Community Driven Open Resource Project in Grey Literature

    OpenAIRE

    Biagioni, Stefania; Giannini, Silvia

    2017-01-01

In December 2013, the GreyGuide Project was formally launched as an online forum and repository of good practice in grey literature. The GreyGuide manages Open Source Repositories and provides a unique resource in the field of grey literature that is long awaited and which responds to the information needs of a diverse, international grey literature community. As GreyNet's web access Portal, the GreyGuide now provides a wealth of content that was previously either confined to web pages or was...

  2. The use of the TWiki Web in ATLAS

    International Nuclear Information System (INIS)

    Amram, Nir; Antonelli, Stefano; Haywood, Stephen; Lloyd, Steve; Luehring, Frederick; Poulard, Gilbert

    2010-01-01

    The ATLAS Experiment, with over 2000 collaborators, needs efficient and effective means of communicating information. The Collaboration has been using the TWiki Web at CERN for over three years and now has more than 7000 web pages, some of which are protected. This number greatly exceeds the number of 'static' HTML pages, and in the last year, there has been a significant migration to the TWiki. The TWiki is one example of the many different types of Wiki web which exist. In this paper, a description is given of the ATLAS TWiki at CERN. The tools used by the Collaboration to manage the TWiki are described and some of the problems encountered explained. A very useful development has been the creation of a set of Workbooks (Users' Guides) - these have benefitted from the TWiki environment and, in particular, a tool to extract pdf from the associated pages.

  3. Medium-sized Universities Connect to Their Libraries: Links on University Home Pages and User Group Pages

    Directory of Open Access Journals (Sweden)

    Pamela Harpel-Burk

    2006-03-01

From major tasks—such as recruitment of new students and staff—to the more mundane but equally important tasks—such as providing directions to campus—college and university Web sites perform a wide range of tasks for a varied assortment of users. Overlapping functions and user needs meld to create the need for a Web site with three major functions: promotion and marketing, access to online services, and providing a means of communication between individuals and groups. In turn, college and university Web sites that provide links to their library home page can be valuable assets for recruitment, public relations, and for helping users locate online services.

  4. Software Project Management and Measurement on the World-Wide-Web (WWW)

    Science.gov (United States)

    Callahan, John; Ramakrishnan, Sudhaka

    1996-01-01

    We briefly describe a system for forms-based, work-flow management that helps members of a software development team overcome geographical barriers to collaboration. Our system, called the Web Integrated Software Environment (WISE), is implemented as a World-Wide-Web service that allows for management and measurement of software development projects based on dynamic analysis of change activity in the workflow. WISE tracks issues in a software development process, provides informal communication between the users with different roles, supports to-do lists, and helps in software process improvement. WISE minimizes the time devoted to metrics collection and analysis by providing implicit delivery of messages between users based on the content of project documents. The use of a database in WISE is hidden from the users who view WISE as maintaining a personal 'to-do list' of tasks related to the many projects on which they may play different roles.

  5. Dental practice websites: creating a Web presence.

    Science.gov (United States)

    Miller, Syrene A; Forrest, Jane L

    2002-07-01

Web technology provides an opportunity for dentists to showcase their practice philosophy, quality of care, office setting, and staff in a creative manner. Having a Website provides a practice with innovative and cost-effective communications and marketing tools for current and potential patients who use the Internet. The main benefits of using a Website to promote one's practice are: making office time more productive, tasks more timely, and follow-up less necessary; engaging patients in an interactive and visual learning process; providing online forms and procedure examples for patients; projecting a competent and current image; and tracking the usage of Web pages. Several options are available when considering the development of a Website. These options range in cost based on customization of the site and ongoing support services, such as site updates, technical assistance, and Web usage statistics. In most cases, Websites are less expensive than advertising in the phone book. Options in creating a Website include building one's own, employing a company that offers Website templates, and employing a company that offers customized sites. These development options and benefits will continue to grow as individuals access the Web and more information and sites become available.

  6. Automatic web site authoring with SiteGuide

    NARCIS (Netherlands)

    de Boer, V.; Hollink, V.; van Someren, M.W.; Kłopotek, M.A.; Przepiórkowski, A.; Wierzchoń, S.T.; Trojanowski, K.

    2009-01-01

    An important step in the design process for a web site is to determine which information is to be included and how the information should be organized on the web site’s pages. In this paper we describe ’SiteGuide’, a tool that automatically produces an information architecture for a web site that a

  7. Hiding in Plain Sight: The Anatomy of Malicious Facebook Pages

    OpenAIRE

    Dewan, Prateek; Kumaraguru, Ponnurangam

    2015-01-01

    Facebook is the world's largest Online Social Network, having more than 1 billion users. Like most other social networks, Facebook is home to various categories of hostile entities who abuse the platform by posting malicious content. In this paper, we identify and characterize Facebook pages that engage in spreading URLs pointing to malicious domains. We used the Web of Trust API to determine domain reputations of URLs published by pages, and identified 627 pages publishing untrustworthy info...

  8. Facilitating Adoption of Web Tools for Problem and Project Based Learning Activities

    DEFF Research Database (Denmark)

    Khalid, Md. Saifuddin; Rongbutsri, Nikorn; Buus, Lillian

    2012-01-01

This paper builds on research directions from 'activity theory' and 'learning design' to provide 'facilitation' for students standing within decision making related to the selection of web 2.0 tools and university-provided web-based applications for supporting student activities within problem and project based learning. In the area of problem and project based learning, facilitation is the core term and the teacher often has the role of facilitator or moderator instead of a teacher teaching. Technology adoption for learning activities needs facilitation, which is mostly absent. Sustainable adoption might be facilitated based on tool appropriation with activities associated with courses and projects. Our mapping of different tools in a framework is reported based on interviews, observations, narratives and a survey. A direction towards a facilitation process for adoption is discussed as part...

  9. A web implementation: the good and the not-so-good.

    Science.gov (United States)

    Bergsneider, C; Piraino, D; Fuerst, M

    2001-06-01

E-commerce, e-mail, e-greeting, e-this, and e-that: everywhere you turn there is a new "e" word for an internet or Web application. We, at the Cleveland Clinic Foundation, have been "e-nlightened" and will discuss in this report the implementation of a web-based radiology information system (RIS) in our radiology division, or "e-radiology" division. The application, IDXRad Version 10.0 from IDX Corp, Burlington, VT, is in use at the Cleveland Clinic Foundation and has both intranet (for use in Radiology) and internet (referring physician viewing) modules. We will concentrate on the features of using a web browser for the application's front-end, including easy prototyping for screen review, easier mock-ups of demonstrations by vendors and developers, and easier training as more people become web-addicted. Project communication can be facilitated with an internal project web page, and use of the web browser can accommodate quicker turnaround of software upgrades as the software code is centrally located. Compared with other technologies, including client/server, there is a smaller roll-out cost when using a standard web browser. However, the new technology requires change, and changes are never implemented without challenges. A seasoned technologist using a legacy system can enter data quicker using function keys than using a graphical user interface and pointing and clicking through a series of pop-up windows. Also, effective use of a web browser depends on intuitive design for it to be easily implemented and accepted by the user. Some software packages will not work on both of the popular web browsers and then are tailored to specific release levels. As computer-based patient records become a standard, patient confidentiality must be enforced. The technical design and application security features that support the web-based software package will be discussed. Also web technologies have their own implementation issues.

  10. A Web Server for MACCS Magnetometer Data

    Science.gov (United States)

    Engebretson, Mark J.

    1998-01-01

    NASA Grant NAG5-3719 was provided to Augsburg College to support the development of a web server for the Magnetometer Array for Cusp and Cleft Studies (MACCS), a two-dimensional array of fluxgate magnetometers located at cusp latitudes in Arctic Canada. MACCS was developed as part of the National Science Foundation's GEM (Geospace Environment Modeling) Program, which was designed in part to complement NASA's Global Geospace Science programs during the decade of the 1990s. This report describes the successful use of these grant funds to support a working web page that provides both daily plots and file access to any user accessing the worldwide web. The MACCS home page can be accessed at http://space.augsburg.edu/space/MaccsHome.html.

  11. A Runtime System for Interactive Web Services

    DEFF Research Database (Denmark)

    Brabrand, Claus; Møller, Anders; Sandholm, Anders

    1999-01-01

Interactive web services are increasingly replacing traditional static web pages. Producing web services seems to require a tremendous amount of laborious low-level coding due to the primitive nature of CGI programming. We present ideas for an improved runtime system for interactive web services built on top of CGI, running on virtually every combination of browser and HTTP/CGI server. The runtime system has been implemented and used extensively in <bigwig>, a tool for producing interactive web services.
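
    For contrast, a bare CGI script shows the low-level style such runtime systems aim to replace. This sketch is generic, not taken from the paper:

```python
#!/usr/bin/env python3
# Sketch: a bare CGI script of the kind the record calls "primitive":
# one process per request, headers emitted by hand, and no session state.
import os
from urllib.parse import parse_qs

query = parse_qs(os.environ.get("QUERY_STRING", ""))
name = query.get("name", ["stranger"])[0]

# The script must print the HTTP header block itself, then the page body.
print("Content-Type: text/html")
print()
print(f"<html><body>Hello, {name}!</body></html>")
# Any multi-step interaction has to be re-threaded by hand on the next
# request (hidden form fields, cookies), which a runtime system can automate.
```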

  12. A web-based online collaboration platform for formulating engineering design projects

    Science.gov (United States)

    Varikuti, Sainath

    Effective communication and collaboration among students, faculty and industrial sponsors play a vital role while formulating and solving engineering design projects. With the advent in the web technology, online platforms and systems have been proposed to facilitate interactions and collaboration among different stakeholders in the context of senior design projects. However, there are noticeable gaps in the literature with respect to understanding the effects of online collaboration platforms for formulating engineering design projects. Most of the existing literature is focused on exploring the utility of online platforms on activities after the problem is defined and teams are formed. Also, there is a lack of mechanisms and tools to guide the project formation phase in senior design projects, which makes it challenging for students and faculty to collaboratively develop and refine project ideas and to establish appropriate teams. In this thesis a web-based online collaboration platform is designed and implemented to share, discuss and obtain feedback on project ideas and to facilitate collaboration among students and faculty prior to the start of the semester. The goal of this thesis is to understand the impact of an online collaboration platform for formulating engineering design projects, and how a web-based online collaboration platform affects the amount of interactions among stakeholders during the early phases of design process. A survey measuring the amount of interactions among students and faculty is administered. Initial findings show a marked improvement in the students' ability to share project ideas and form teams with other students and faculty. Students found the online platform simple to use. The suggestions for improving the tool generally included features that were not necessarily design specific, indicating that the underlying concept of this collaborative platform provides a strong basis and can be extended for future online platforms

  13. A fuzzy method for improving the functionality of search engines based on user's web interactions

    Directory of Open Access Journals (Sweden)

    Farzaneh Kabirbeyk

    2015-04-01

Web mining has been widely used to discover knowledge from various sources on the web. One of the important tools in web mining is the mining of web users' behavior, which is considered a way to discover the potential knowledge of web users' interactions. Nowadays, website personalization is a popular phenomenon among web users and plays an important role in facilitating user access and providing information matched to users' requirements and interests. Extracting important features of web user behavior plays a significant role in web usage mining. Such features are page visit frequency in each session, visit duration, and the dates of visiting certain pages. This paper presents a method to predict users' interests and to propose a list of pages based on those interests, by identifying users' behavior with a fuzzy technique called fuzzy clustering. Because users have different interests and use one or more of them at a time, a user's interests may belong to several clusters, and fuzzy clustering provides for this possible overlap. The resulting clusters help extract fuzzy rules. This helps detect users' movement patterns, and, using a neural network, a list of suggested pages is provided to the users.
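
    Fuzzy c-means is the standard algorithm behind this kind of overlapping clustering. A self-contained numpy sketch over invented per-user features (the paper's feature set, cluster count and parameters are not reproduced):

```python
# Sketch: fuzzy c-means over simple per-user features (e.g. visit frequency,
# mean visit duration), letting a user belong to several clusters at once.
import numpy as np

rng = np.random.default_rng(0)
X = rng.random((50, 2))          # 50 users x 2 behavioural features (invented)
c, m, iters = 3, 2.0, 100        # clusters, fuzzifier, iterations

U = rng.random((c, X.shape[0]))
U /= U.sum(axis=0)               # each user's memberships sum to 1

for _ in range(iters):
    Um = U ** m
    # cluster centers: membership-weighted means of the users
    centers = (Um @ X) / Um.sum(axis=1, keepdims=True)
    # distance of every user to every center, (c, N)
    d = np.linalg.norm(X[None, :, :] - centers[:, None, :], axis=2) + 1e-9
    # standard membership update: u_ik proportional to d_ik^(-2/(m-1))
    U = 1.0 / (d ** (2 / (m - 1)))
    U /= U.sum(axis=0)

print("cluster centers:\n", np.round(centers, 3))
print("user 0 memberships:", np.round(U[:, 0], 3))  # may overlap clusters
```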

  14. WEB 2.0 SERVICES AS A TECHNOLOGICAL FOUNDATION OF A NETWORK PROJECT

    Directory of Open Access Journals (Sweden)

    Tatiana Ivanovna Kanyanina

    2015-02-01

In light of the requirements of the federal state educational standard, the value of network project design based on the use of Web 2.0 services increases. The article outlines the range of technological challenges faced by the developer of a network project, such as the choice of the network platform of the project, design tools and tools for placing project results, project coordination and the organization of lines of communication, and the choice of instruments for promoting the project on the Internet. The main attention is focused on examples of network services that address these challenges, with descriptions of their features. The authors rely on specific network projects implemented in the Nizhny Novgorod region. The paper analyzes possible project network sites (network environments). The article gives a general description of each environment and marks its distinctive features, advantages and disadvantages. The article presents examples of project tasks designed on the basis of a variety of Web 2.0 services that functionally meet the requirements of a particular task, as well as examples of services used to represent project products and to summarize project results. Attention is paid to the organization of lines of communication between project participants and organizers, and to the role of network services in promoting the project on the Internet.

  15. A Web Browser Interface to Manage the Searching and Organizing of Information on the Web by Learners

    Science.gov (United States)

    Li, Liang-Yi; Chen, Gwo-Dong

    2010-01-01

    Information Gathering is a knowledge construction process. Web learners make a plan for their Information Gathering task based on their prior knowledge. The plan is evolved with new information encountered and their mental model is constructed through continuously assimilating and accommodating new information gathered from different Web pages. In…

  16. The Rise and Fall of Text on the Web: A Quantitative Study of Web Archives

    Science.gov (United States)

    Cocciolo, Anthony

    2015-01-01

    Introduction: This study addresses the following research question: is the use of text on the World Wide Web declining? If so, when did it start declining, and by how much has it declined? Method: Web pages are downloaded from the Internet Archive for the years 1999, 2002, 2005, 2008, 2011 and 2014, producing 600 captures of 100 prominent and…

  17. Workflow and web application for annotating NCBI BioProject transcriptome data.

    Science.gov (United States)

    Vera Alvarez, Roberto; Medeiros Vidal, Newton; Garzón-Martínez, Gina A; Barrero, Luz S; Landsman, David; Mariño-Ramírez, Leonardo

    2017-01-01

    The volume of transcriptome data is growing exponentially due to rapid improvement of experimental technologies. In response, large central resources such as those of the National Center for Biotechnology Information (NCBI) are continually adapting their computational infrastructure to accommodate this large influx of data. New and specialized databases, such as Transcriptome Shotgun Assembly Sequence Database (TSA) and Sequence Read Archive (SRA), have been created to aid the development and expansion of centralized repositories. Although the central resource databases are under continual development, they do not include automatic pipelines to increase annotation of newly deposited data. Therefore, third-party applications are required to achieve that aim. Here, we present an automatic workflow and web application for the annotation of transcriptome data. The workflow creates secondary data such as sequencing reads and BLAST alignments, which are available through the web application. They are based on freely available bioinformatics tools and scripts developed in-house. The interactive web application provides a search engine and several browser utilities. Graphical views of transcript alignments are available through SeqViewer, an embedded tool developed by NCBI for viewing biological sequence data. The web application is tightly integrated with other NCBI web applications and tools to extend the functionality of data processing and interconnectivity. We present a case study for the species Physalis peruviana with data generated from BioProject ID 67621. URL: http://www.ncbi.nlm.nih.gov/projects/physalis/. Published by Oxford University Press 2017. This work is written by US Government employees and is in the public domain in the US.
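
    Data deposited under a BioProject can be located programmatically through NCBI's public E-utilities. A minimal sketch; the search-term syntax is a common pattern and error handling is omitted:

```python
# Sketch: querying NCBI E-utilities for records linked to a BioProject.
# The endpoint is NCBI's public esearch service; the term filter is assumed
# to follow the usual "[BioProject]" field syntax.
import requests

EUTILS = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils"

# Find SRA entries that belong to BioProject 67621 (Physalis peruviana).
resp = requests.get(f"{EUTILS}/esearch.fcgi", params={
    "db": "sra",
    "term": "67621[BioProject]",
    "retmode": "json",
})
ids = resp.json()["esearchresult"]["idlist"]
print(f"{len(ids)} SRA records found:", ids[:5])
```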

  18. Project Management Web Tools at the MICE experiment

    CERN Multimedia

    CERN. Geneva

    2012-01-01

    Project management tools like Trac are commonly used within the open-source community to coordinate projects. The Muon Ionization Cooling Experiment (MICE) uses the project management web application Redmine to host mice.rl.ac.uk. Many groups within the experiment have a Redmine project: analysis, computing and software (including offline, online, controls and monitoring, and database subgroups), executive board, and operations. All of these groups use the website to communicate, track effort, develop schedules, and maintain documentation. The issue tracker is a rich tool that is used to identify tasks and monitor progress within groups on timescales ranging from immediate and unexpected problems to milestones that cover the life of the experiment. It allows the prioritization of tasks according to time-sensitivity, while providing a searchable record of work that has been done. This record of work can be used to measure both individual and overall group activity, identify areas lacking sufficient personne...

  19. Fuzzy Clustering: An Approachfor Mining Usage Profilesfrom Web

    OpenAIRE

    Ms.Archana N. Boob; Prof. D. M. Dakhane

    2012-01-01

Web usage mining is an application of data mining technology to mining the data of the web server log file. It can discover the browsing patterns of users and some kinds of correlations between web pages. Web usage mining provides support for web site design, personalization services and other business decision making, etc. Web mining applies data mining, artificial intelligence, chart technology and so on to web data and traces users' visiting characteris...

  20. Design of remote weather monitor system based on embedded web database

    International Nuclear Information System (INIS)

    Gao Jiugang; Zhuang Along

    2010-01-01

The remote weather monitoring system is designed by employing embedded Web database technology and the S3C2410 microprocessor as the core. The monitoring system can simultaneously monitor multi-channel sensor signals, and can give a dynamic Web page display of various types of meteorological information on a remote computer. The paper gives a detailed introduction to the construction and application of the Web database under embedded Linux. Test results show that the client accesses the Web page via GPRS or the Internet, acquires data, and uses an intuitive graphical way to display the values of various types of meteorological information. (authors)
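
    The pattern of serving readings from an embedded database as a dynamic page can be shown in miniature. A sketch assuming a local SQLite table, not the S3C2410 system's actual stack:

```python
# Sketch: a tiny HTTP endpoint that renders current sensor readings from a
# local SQLite database. Table layout and sensor names are invented.
import sqlite3
from http.server import BaseHTTPRequestHandler, HTTPServer

def latest_readings():
    with sqlite3.connect("weather.db") as db:
        # SQLite's bare-column-with-MAX() behaviour picks each sensor's
        # most recent row (the one with the maximum timestamp).
        return db.execute(
            "SELECT sensor, value, unit, MAX(ts) FROM readings "
            "GROUP BY sensor").fetchall()

class Monitor(BaseHTTPRequestHandler):
    def do_GET(self):
        rows = "".join(f"<tr><td>{s}</td><td>{v} {u}</td></tr>"
                       for s, v, u, _ in latest_readings())
        page = f"<html><body><table>{rows}</table></body></html>".encode()
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.end_headers()
        self.wfile.write(page)

HTTPServer(("", 8080), Monitor).serve_forever()
```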

  1. Lost but Not Forgotten: Finding Pages on the Unarchived Web

    NARCIS (Netherlands)

    Huurdeman, H.C.; Kamps, J.; Samar, T.; de Vries, A.P.; Ben-David, A.; Rogers, R.A.

    2015-01-01

    Web archives attempt to preserve the fast changing web, yet they will always be incomplete. Due to restrictions in crawling depth, crawling frequency, and restrictive selection policies, large parts of the Web are unarchived and, therefore, lost to posterity. In this paper, we propose an approach to

  2. Lost but not forgotten: finding pages on the unarchived web

    NARCIS (Netherlands)

    H.C. Huurdeman; J. Kamps; T. Samar (Thaer); A.P. de Vries (Arjen); A. Ben-David; R.A. Rogers (Richard)

    2015-01-01

Web archives attempt to preserve the fast changing web, yet they will always be incomplete. Due to restrictions in crawling depth, crawling frequency, and restrictive selection policies, large parts of the Web are unarchived and, therefore, lost to posterity. In this paper, we propose an

  3. Comparing Web, Group and Telehealth Formats of a Military Parenting Program

    Science.gov (United States)

    2016-06-01

materials are available upon request: • Online questionnaire for baseline data collection (9 pages) • Online parent survey for time point 1 (69 pages) ... web-based parenting intervention for military families with school-aged children, we expect to strengthen parenting practices in families and ... AWARD NUMBER: W81XWH-14-1-0143. TITLE: Comparing Web, Group and Telehealth Formats of a Military Parenting Program. PRINCIPAL INVESTIGATOR:

  4. Looking back over the LHC Project

    CERN Multimedia

    2007-01-01

    Have you always wanted to delve into the history of the phenomenal LHC Project? Well, now you can. A chronological history of the LHC Project is now available on the web. It traces the Project's key milestones, from its first approval in 1994 to the most recent spectacular transport operations for detector components. The photographs used to illustrate these events are linked to the CDS database, allowing visitors who wish to do so the opportunity to download them or to search for photographs associated with subjects that are of interest to them. To explore the history of the LHC Project, go to the CERN Public Welcome page and click on 'LHC Milestones' or simply go directly to the following link: http://cern.ch/LHC-Milestones/

  5. Vague but exciting…CERN celebrates 20 years of the Web

    CERN Multimedia

    2009-01-01

    Twenty years ago work started on something that would change the world forever. It would change the way we work, the way we communicate and the way we make our voices heard. On 13 March CERN will celebrate the 20th anniversary of the birth of the World Wide Web. Tim Berners-Lee with Nicola Pellow, next to the NeXT computer.In March 1989 here at CERN, Tim Berners-Lee submitted a proposal for a new information management system to his boss, Mike Sendall. ‘Vague, but exciting’, were the words that Sendall wrote on the proposal, allowing Berners-Lee to continue with the project, but unaware that it would evolve into one of the most important communication tools ever created. Tim Berners-Lee used a NeXT computer at CERN to create the first web server running a single website – info.cern.ch. Since then the World Wide Web has grown into the incredible phenomenon that we know today, a web of more than 60 billion pages, and hundreds of ...

  6. UK Web Archive programme: a brief history of opportunities and challenges

    Directory of Open Access Journals (Sweden)

    Aquiles Alencar Brayner

    2016-05-01

Webpages have been playing a key role in the creation and dissemination of information in recent decades. However, given their ephemeral nature, many Web pages published on the World Wide Web have had their content changed or have been permanently deleted without leaving any trace of their existence. In order to avoid the loss of this important material that represents our contemporary cultural heritage, various institutions have launched programmes to harvest and archive Web pages registered in specific national domains. Based on the example of the development of the Web archive programme in the UK, this article raises some key questions in relation to the technological obstacles and curatorial models adopted for the preservation of and access to content published on the Web.

  7. A Survey On Various Web Template Detection And Extraction Methods

    Directory of Open Access Journals (Sweden)

    Neethu Mary Varghese

    2015-03-01

In today's digital world, reliance on the World Wide Web as a source of information is extensive. Users increasingly rely on web-based search engines to provide accurate search results on a wide range of topics that interest them. The search engines in turn parse the vast repository of web pages searching for relevant information. However, the majority of web portals are designed using web templates, which are intended to provide a consistent look and feel to end users. The presence of these templates, however, can influence search results, leading to inaccurate results being delivered to the users. Therefore, to improve the accuracy and reliability of search results, identification and removal of web templates from the actual content is essential. A wide range of approaches are commonly employed to achieve this, and this paper focuses on the study of the various approaches to template detection and extraction that can be applied across homogeneous as well as heterogeneous web pages.

  8. Demonstrating the use of web analytics and an online survey to understand user groups of a national network of river level data

    Science.gov (United States)

    Macleod, Christopher Kit; Braga, Joao; Arts, Koen; Ioris, Antonio; Han, Xiwu; Sripada, Yaji; van der Wal, Rene

    2016-04-01

    The number of local, national and international networks of online environmental sensors is rapidly increasing. Where environmental data are made available online for public consumption, there is a need to advance our understanding of the relationships between the supply of and the different demands for such information. Understanding how individuals and groups of users are using online information resources may provide valuable insights into their activities and decision making. As part of the 'dot.rural wikiRivers' project we investigated the potential of web analytics and an online survey to generate insights into the use of a national network of river level data from across Scotland. These sources of online information were collected alongside phone interviews with volunteers sampled from the online survey, and interviews with providers of online river level data, as part of a larger project that set out to help improve the communication of Scotland's online river data. Our web analytics analysis was based on over 100 online sensors which are maintained by the Scottish Environmental Protection Agency (SEPA). Through use of Google Analytics data accessed via the R Ganalytics package we assessed: whether the quality of data provided by the free Google Analytics service is good enough for research purposes; what sensors were being used, when and where; how the nature and pattern of sensor data may affect web traffic; and whether we can identify and profile users based on information from traffic sources. Web analytics data consist of a series of quantitative metrics which capture and summarize various dimensions of the traffic to a certain web page or set of pages. Examples of commonly used metrics include the number of total visits to a site and the number of total page views. Our analyses of the traffic sources from 2009 to 2011 identified several different major user groups. To improve our understanding of how the use of this national
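
    The kind of aggregation described above is easy to reproduce. The study itself used the R Ganalytics package; the following is a minimal Python/pandas sketch of the same idea, summarizing visits and page views by traffic source and by day. The column names and figures are illustrative, not taken from the project.

        # Minimal sketch (hypothetical data layout) of summarizing
        # web-analytics metrics: total visits and page views per traffic
        # source, and visits per day. Requires the pandas package.
        import pandas as pd

        # Each row: one day of traffic to one sensor page, as it might be
        # exported from an analytics service.
        data = pd.DataFrame({
            "date":      ["2010-05-01", "2010-05-01", "2010-05-02"],
            "sensor":    ["river_A", "river_B", "river_A"],
            "source":    ["search", "direct", "referral"],
            "visits":    [120, 40, 95],
            "pageviews": [310, 55, 180],
        })
        data["date"] = pd.to_datetime(data["date"])

        # Aggregate by traffic source to profile user groups, and by date
        # to see how events (e.g. river spates) may drive traffic peaks.
        by_source = data.groupby("source")[["visits", "pageviews"]].sum()
        by_day = data.groupby("date")["visits"].sum()
        print(by_source)
        print(by_day)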

  9. The rendering context for stereoscopic 3D web

    Science.gov (United States)

    Chen, Qinshui; Wang, Wenmin; Wang, Ronggang

    2014-03-01

    3D technologies on the Web have been studied for many years, but they are basically monoscopic 3D. With stereoscopic technology gradually maturing, we are researching how to integrate binocular 3D technology into the Web, creating a stereoscopic 3D browser that will provide users with a brand new experience of human-computer interaction. In this paper, we propose a novel approach to applying stereoscopy technologies to CSS3 3D Transforms. Under our model, each element can create or participate in a stereoscopic 3D rendering context, in which 3D Transforms such as scaling, translation and rotation can be applied and perceived in a truly 3D space. We first discuss the underlying principles of stereoscopy. After that we discuss how these principles can be applied to the Web. A stereoscopic 3D browser with backward compatibility was also created for demonstration purposes. We take advantage of the open-source WebKit project, integrating the 3D display ability into the rendering engine of the web browser. For each 3D web page, our 3D browser creates two slightly different images, representing the left-eye and right-eye views, which are combined on the 3D display to generate the illusion of depth. As the results show, elements can be manipulated in a truly 3D space.
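
    The "two slightly different images" come down to a horizontal offset per element. As a minimal sketch of that arithmetic (not the paper's actual renderer), one common approximation for the on-screen parallax of a point at depth z, given interocular distance e and viewer-to-screen distance v, is p = e * z / (v + z):

        # Minimal sketch of the parallax arithmetic behind left/right eye
        # views. z > 0 places a point behind the screen plane, z < 0 in
        # front of it; the default distances are typical values in mm.
        def screen_parallax(z_mm, eye_separation_mm=65.0, viewer_distance_mm=600.0):
            return eye_separation_mm * z_mm / (viewer_distance_mm + z_mm)

        def eye_views(x_mm, z_mm):
            # Shift the element by half the parallax in opposite directions
            # to obtain the two slightly different images mentioned above.
            p = screen_parallax(z_mm)
            return x_mm - p / 2.0, x_mm + p / 2.0  # (left eye x, right eye x)

        print(eye_views(100.0, 50.0))   # behind the screen: positive parallax
        print(eye_views(100.0, -50.0))  # in front of the screen: negative parallax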

  10. Undergraduate Students’ Evaluation Criteria When Using Web Resources for Class Papers

    Directory of Open Access Journals (Sweden)

    Tsai-Youn Hung

    2004-09-01

    Full Text Available The growth in popularity of the World Wide Web has dramatically changed the way undergraduate students conduct information searches. The purpose of this study is to investigate what core quality criteria undergraduate students use to evaluate Web resources for their class papers, and to what extent they evaluate Web resources. This study reports on five Web page evaluations and a questionnaire survey of thirty-five undergraduate students in the Information Technology and Informatics Program at Rutgers University. Results show that undergraduate students have become increasingly sophisticated about using Web resources, but not yet sophisticated about searching them. Undergraduate students used only one or two surface quality criteria to evaluate Web resources. They made immediate judgments about the surface features of Web pages and ignored the content of the documents themselves. This research suggests that undergraduate instructors should take responsibility for instructing students in basic Web use or work with librarians to develop undergraduate students' information literacy skills.

  11. The Ensembl Web site: mechanics of a genome browser.

    Science.gov (United States)

    Stalker, James; Gibbins, Brian; Meidl, Patrick; Smith, James; Spooner, William; Hotz, Hans-Rudolf; Cox, Antony V

    2004-05-01

    The Ensembl Web site (http://www.ensembl.org/) is the principal user interface to the data of the Ensembl project, and currently serves >500,000 pages (approximately 2.5 million hits) per week, providing access to >80 GB (gigabyte) of data to users in more than 80 countries. Built atop an open-source platform comprising Apache/mod_perl and the MySQL relational database management system, it is modular, extensible, and freely available. It is being actively reused and extended in several different projects, and has been downloaded and installed in companies and academic institutions worldwide. Here, we describe some of the technical features of the site, with particular reference to its dynamic configuration that enables it to handle disparate data from multiple species.

  12. Webmail: an Automated Web Publishing System

    Science.gov (United States)

    Bell, David

    A system for publishing frequently updated information to the World Wide Web will be described. Many documents now hosted by the NOAO Web server require timely posting and frequent updates, but need only minor changes in markup or are in a standard format requiring only conversion to HTML. These include information from outside the organization, such as electronic bulletins, and a number of internal reports, both human and machine generated. Webmail uses procmail and Perl scripts to process incoming email messages in a variety of ways. This processing may include wrapping or conversion to HTML, posting to the Web or internal newsgroups, updating search indices or links on related pages, and sending email notification of the new pages to interested parties. The Webmail system has been in use at NOAO since early 1997 and has steadily grown to include fourteen recipes that together handle about fifty messages per week.

  13. Final Report for DOE Project: Portal Web Services: Support of DOE SciDAC Collaboratories

    Energy Technology Data Exchange (ETDEWEB)

    Mary Thomas, PI; Geoffrey Fox, Co-PI; Gannon, D; Pierce, M; Moore, R; Schissel, D; Boisseau, J

    2007-10-01

    Grid portals provide the scientific community with familiar and simplified interfaces to the Grid and Grid services, and it is important to deploy grid portals onto the SciDAC grids and collaboratories. The goal of this project is the research, development and deployment of interoperable portal and web services that can be used on SciDAC National Collaboratory grids. This project has four primary task areas: development of portal systems; management of data collections; DOE science application integration; and development of web and grid services in support of the above activities.

  14. Reactor Engineering Division Material for World Wide Web Pages

    International Nuclear Information System (INIS)

    1996-01-01

    This document presents the home page of the Reactor Engineering Division of Argonne National Laboratory. The WWW site describes the activities of the Division, provides an introduction to its wide variety of programs, and offers samples of the results of research by people in the Division.

  15. Intelligent Agent Based Semantic Web in Cloud Computing Environment

    OpenAIRE

    Mukhopadhyay, Debajyoti; Sharma, Manoj; Joshi, Gajanan; Pagare, Trupti; Palwe, Adarsha

    2013-01-01

    Considering today's web scenario, there is a need for effective and meaningful search over the web, which is provided by the Semantic Web. Existing search engines are keyword based. They are vulnerable in answering intelligent queries from the user due to the dependence of their results on information available in web pages. Semantic search engines, in contrast, provide efficient and relevant results, as the semantic web is an extension of the current web in which information is given well-defined meaning....

  16. The Partial Mapping of the Web Graph

    Directory of Open Access Journals (Sweden)

    Kristina Machova

    2009-06-01

    Full Text Available The paper presents an approach to partial mapping of a web sub-graph. This sub-graph contains the nearest surroundings of an actual web page. Our work deals with acquiring the relevant hyperlinks of a base web site, generating the adjacency matrix, the nearest-distance matrix and the matrix of converted distances of hyperlinks, detecting the compactness of the web representation, and visualizing its graphical representation. The paper introduces the LWP algorithm, a technique for hyperlink filtration. This work attempts to help users with orientation within the web graph.
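
    As a minimal illustration of the data structures named above (not the LWP algorithm itself), the following Python sketch builds the adjacency matrix of a small hyperlink graph and derives the nearest-distance matrix by breadth-first search; the link data are invented:

        # Minimal sketch: adjacency matrix and nearest-distance matrix of a
        # tiny web sub-graph, computed by BFS from every page.
        from collections import deque

        links = {                      # page -> pages it links to
            "base": ["a", "b"],
            "a": ["b", "c"],
            "b": ["base"],
            "c": [],
        }
        pages = sorted(links)
        idx = {p: i for i, p in enumerate(pages)}
        n = len(pages)

        adjacency = [[1 if q in links[p] else 0 for q in pages] for p in pages]

        INF = float("inf")
        distance = [[0 if i == j else INF for j in range(n)] for i in range(n)]
        for start in pages:
            queue = deque([start])
            while queue:
                p = queue.popleft()
                for q in links[p]:
                    if distance[idx[start]][idx[q]] == INF:  # first discovery
                        distance[idx[start]][idx[q]] = distance[idx[start]][idx[p]] + 1
                        queue.append(q)

        print(adjacency)
        print(distance)   # shortest hyperlink distances between pages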

  17. Head First Web Design

    CERN Document Server

    Watrall, Ethan

    2008-01-01

    Want to know how to make your pages look beautiful, communicate your message effectively, guide visitors through your website with ease, and get everything approved by the accessibility and usability police at the same time? Head First Web Design is your ticket to mastering all of these complex topics, and understanding what's really going on in the world of web design. Whether you're building a personal blog or a corporate website, there's a lot more to web design than div's and CSS selectors, but what do you really need to know? With this book, you'll learn the secrets of designing effecti

  18. Effective Web and Desktop Retrieval with Enhanced Semantic Spaces

    Science.gov (United States)

    Daoud, Amjad M.

    We describe the design and implementation of the NETBOOK prototype system for collecting, structuring and efficiently creating semantic vectors for concepts, noun phrases, and documents from a corpus of free full-text ebooks available on the World Wide Web. Automatic generation of concept maps from correlated index terms and extracted noun phrases is used to build a powerful conceptual index of individual pages. To ensure scalability of our system, dimension reduction is performed using Random Projection [13]. Furthermore, we present a complete evaluation of the relative effectiveness of the NETBOOK system versus the Google Desktop [8].
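
    Random Projection itself is compact enough to sketch. Below is a minimal, illustrative Python/NumPy version of the dimension-reduction step; the vocabulary size, target dimension, and term counts are invented for the example:

        # Minimal sketch of Random Projection: map sparse high-dimensional
        # term vectors into a small random subspace (Johnson-Lindenstrauss
        # style), preserving distances approximately. Requires numpy.
        import numpy as np

        rng = np.random.default_rng(0)
        vocabulary_size, reduced_dim = 10_000, 256

        # Random Gaussian projection matrix, scaled by 1/sqrt(k).
        projection = rng.normal(0.0, 1.0 / np.sqrt(reduced_dim),
                                size=(vocabulary_size, reduced_dim))

        def semantic_vector(term_counts):
            # term_counts: {term index: frequency} for one page or concept.
            v = np.zeros(vocabulary_size)
            for term, count in term_counts.items():
                v[term] = count
            return v @ projection  # 256-dimensional semantic vector

        page = semantic_vector({17: 3, 4_021: 1, 9_985: 2})
        print(page.shape)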

  19. A combined paging alert and web-based instrument alters clinician behavior and shortens hospital length of stay in acute pancreatitis.

    Science.gov (United States)

    Dimagno, Matthew J; Wamsteker, Erik-Jan; Rizk, Rafat S; Spaete, Joshua P; Gupta, Suraj; Sahay, Tanya; Costanzo, Jeffrey; Inadomi, John M; Napolitano, Lena M; Hyzy, Robert C; Desmond, Jeff S

    2014-03-01

    There are many published clinical guidelines for acute pancreatitis (AP). Implementation of these recommendations is variable. We hypothesized that a clinical decision support (CDS) tool would change clinician behavior and shorten hospital length of stay (LOS). Observational study, entitled The AP Early Response (TAPER) Project. Tertiary center emergency department (ED) and hospital. Two consecutive samplings of patients having ICD-9 code (577.0) for AP were generated from the emergency department (ED) or hospital admissions. Diagnosis of AP was based on conventional Atlanta criteria. The pre-TAPER-CDS-Tool group (5/30/06-6/22/07) had 110 patients presenting to the ED with AP per 976 ICD-9 (577.0) codes and the post-TAPER-CDS-Tool group (7/14/10-5/5/11) had 113 per 907 ICD-9 codes. The TAPER-CDS-Tool, developed 12/2008-7/14/2010, is a combined early, automated paging-alert system, which text-pages ED clinicians about a patient with AP, and an intuitive web-based point-of-care instrument consisting of seven early management recommendations. The pre- vs. post-TAPER-CDS-Tool groups had similar baseline characteristics. The post-TAPER-CDS-Tool group met two management goals more frequently than the pre-TAPER-CDS-Tool group: risk stratification (P<0.0001) and fluid resuscitation >6 L in the first 0-24 h (P=0.0003). Mean (s.d.) hospital LOS was significantly shorter in the post-TAPER-CDS-Tool group (4.6 (3.1) vs. 6.7 (7.0) days, P=0.0126). Multivariate analysis identified four independent variables for hospital LOS: the TAPER-CDS-Tool, associated with shorter LOS (P=0.0049), and three variables associated with longer LOS: Japanese severity score (P=0.0361), persistent organ failure (P=0.0088), and local pancreatic complications (P<0.0001). The TAPER-CDS-Tool is associated with changed clinician behavior and shortened hospital LOS, which has significant financial implications.

  20. Snippet-based relevance predictions for federated web search

    NARCIS (Netherlands)

    Demeester, Thomas; Nguyen, Dong-Phuong; Trieschnigg, Rudolf Berend; Develder, Chris; Hiemstra, Djoerd

    How well can the relevance of a page be predicted, purely based on snippets? This would be highly useful in a Federated Web Search setting where caching large amounts of result snippets is more feasible than caching entire pages. The experiments reported in this paper make use of result snippets and

  1. Classical Hypermedia Virtues on the Web with Webstrates

    DEFF Research Database (Denmark)

    Bouvin, Niels Olof; Klokmose, Clemens Nylandsted

    2016-01-01

    We show and analyze herein how Webstrates can augment the Web from a classical hypermedia perspective. Webstrates turns the DOM of Web pages into persistent and collaborative objects. We demonstrate how this can be applied to realize bidirectional links, shared collaborative annotations, and in...

  2. Ten years on, the web spans the globe

    CERN Multimedia

    Dalton, A W

    2003-01-01

    Short article on the history of the WWW. Prof. Berners-Lee states that one of the main reasons the web was such a success was CERN's decision to make the web foundations and protocols available on a royalty-free basis (1/2 page).

  3. Twelve Theses on Reactive Rules for the Web

    OpenAIRE

    Bry, François; Eckert, Michael

    2006-01-01

    Reactivity, the ability to detect and react to events, is an essential functionality in many information systems. In particular, Web systems such as online marketplaces, adaptive (e.g., recommender) systems, and Web services, react to events such as Web page updates or data posted to a server. This article investigates issues of relevance in designing high-level programming languages dedicated to reactivity on the Web. It presents twelve theses on features desira...

  4. [DianaWeb: a demonstration project to improve breast cancer prognosis through lifestyles].

    Science.gov (United States)

    Villarini, Anna; Villarini, Milena; Gargano, Giuliana; Moretti, Massimo; Berrino, Franco

    2015-01-01

    In the field of cancer prevention, the public asks to be involved more actively in scientific research and in the production of knowledge. This is leading to an increase of participatory projects in the field of epidemiology. Community-based participatory research (CBPR) has received considerable attention in the past 15 years; it is becoming a recognized and important approach to addressing health disparities in cancer prevention. The increasing accessibility of new methods of comparison, discussion and information makes it possible to link a large number of people. The DianaWeb project was born in 2015 at the Department of Predictive Medicine and Prevention of the National Cancer Institute, Milan. This CBPR involves women with a diagnosis of breast cancer (BC). DianaWeb communications are based on an interactive online platform developed ad hoc (www.dianaweb.org). With very few exceptions, all communication between participants and the research team takes place on the web. Recruitment is done through the Internet, hospitals, physicians, media and word of mouth. Women can join the project independently, under the control of researchers, and the aim of the study is to assess whether healthy eating and regular physical activity can improve the quality of life and increase survival rates in women with a diagnosis of BC. About 50,000 Italian women with a diagnosis of BC should be recruited into the DianaWeb project, with or without metastasis, local recurrence or second cancers; with in situ or invasive cancer; whatever the disease stage at diagnosis, the histological diagnosis, or the time elapsed since diagnosis. The volunteers are asked to send clinical information about their condition from diagnosis onwards, on their weight and other anthropometric measures, lifestyles and nutrition through online questionnaires. Moreover, the women enrolled in the study, after login, can access evidence-based information and results obtained during the project (individual and whole community

  5. Incorporating the surfing behavior of web users into PageRank

    OpenAIRE

    Ashyralyyev, Shatlyk

    2013-01-01

    Ankara : The Department of Computer Engineering and the Graduate School of Engineering and Science of Bilkent University, 2013. Thesis (Master's) -- Bilkent University, 2013. Includes bibliographical references leaves 68-73 One of the most crucial factors that determines the effectiveness of a large-scale commercial web search engine is the ranking (i.e., order) in which web search results are presented to the end user. In modern web search engines, the skeleton for the rank...

  6. Web-Based Dissemination System for the Trusted Computing Exemplar Project

    Science.gov (United States)

    2005-06-01

    Only fragments of the report survive in this record: a table of contents listing sections on the Fiasco Microkernel and the Apache Web Server; the sentence "The next project examined was the Fiasco Microkernel developed by the Dresden University of Technology"; and two references, http://www.eros-os.org (1999, accessed May 2005) and "The Fiasco Microkernel" (http://os.inf.tu-dresden.de/fiasco/, February 2004).

  7. Client-Side Event Processing for Personalized Web Advertisement

    Science.gov (United States)

    Stühmer, Roland; Anicic, Darko; Sen, Sinan; Ma, Jun; Schmidt, Kay-Uwe; Stojanovic, Nenad

    The market for Web advertisement is continuously growing and, correspondingly, the number of approaches that can be used for realizing Web advertisement is increasing. However, current approaches fail to generate highly personalized ads for the user who is currently visiting a particular piece of Web content. They mainly try to develop a profile based on the content of that Web page or on a long-term user profile, without taking the user's current preferences into account. We argue that by discovering a user's interest from his current Web behavior we can support the process of ad generation, especially the relevance of an ad for the user. In this paper we present the conceptual architecture and implementation of such an approach. The approach is based on the extraction of simple events from the user's interaction with a Web page and their combination in order to discover the user's interests. We use semantic technologies in order to build such an interpretation out of many simple events. We present results from preliminary evaluation studies. The main contribution of the paper is a very efficient, semantic-based client-side architecture for generating and combining Web events. The architecture ensures the agility of the whole advertisement system by processing complex events on the client. In general, this work contributes to the realization of new, event-driven applications for the (Semantic) Web.
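
    To make the idea of combining simple events concrete, here is a minimal Python sketch (the paper's system runs client-side with semantic technologies; the event types, weights, and threshold below are invented for illustration):

        # Minimal sketch: combine simple interaction events into a derived
        # "interest" event once a topic's accumulated score passes a threshold.
        from collections import Counter

        simple_events = [
            {"type": "mouseover", "topic": "cameras"},
            {"type": "scroll",    "topic": "cameras"},
            {"type": "click",     "topic": "laptops"},
            {"type": "mouseover", "topic": "cameras"},
        ]

        weights = {"click": 3, "mouseover": 1, "scroll": 1}

        def infer_interest(events, threshold=3):
            score = Counter()
            for e in events:
                score[e["topic"]] += weights.get(e["type"], 0)
            # Emit a complex "interest" event for every topic above threshold.
            return [{"type": "interest", "topic": t}
                    for t, s in score.items() if s >= threshold]

        print(infer_interest(simple_events))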

  8. Commentary: Building Web Research Strategies for Teachers and Students

    Science.gov (United States)

    Maloy, Robert W.

    2016-01-01

    This paper presents web research strategies for teachers and students to use in building Dramatic Event, Historical Biography, and Influential Literature wiki pages for history/social studies learning. Dramatic Events refer to milestone or turning point moments in history. Historical Biographies and Influential Literature pages feature…

  9. Web Approach for Ontology-Based Classification, Integration, and Interdisciplinary Usage of Geoscience Metadata

    Directory of Open Access Journals (Sweden)

    B Ritschel

    2012-10-01

    Full Text Available The Semantic Web is a W3C approach that integrates the different sources of semantics within documents and services using ontology-based techniques. The main objective of this approach in the geoscience domain is the improvement of understanding, integration, and usage of Earth and space science related web content in terms of data, information, and knowledge for machines and people. The modeling and representation of semantic attributes and relations within and among documents can be realized by human-readable concept maps and machine-readable OWL documents. The objectives for the usage of the Semantic Web approach in the GFZ data center ISDC project are the design of an extended classification of metadata documents for product types related to instruments, platforms, and projects, as well as the integration of different types of metadata related to data product providers, users, and data centers. Sources of content and semantics for the description of Earth and space science product types and related classes are standardized metadata documents (e.g., DIF documents), publications, grey literature, and Web pages. Other sources are information provided by users, such as tagging data and social navigation information. The integration of controlled vocabularies as well as folksonomies plays an important role in the design of well-formed ontologies.

  10. Page 170 Use of Electronic Resources by Undergraduates in Two ...

    African Journals Online (AJOL)

    undergraduate students use electronic resources such as NUC virtual library, HINARI, ... web pages articles from magazines, encyclopedias, pamphlets and other .... of Nigerian university libraries have Internet connectivity, some of the system.

  11. Content and Design Features of Academic Health Sciences Libraries' Home Pages.

    Science.gov (United States)

    McConnaughy, Rozalynd P; Wilson, Steven P

    2018-01-01

    The goal of this content analysis was to identify commonly used content and design features of academic health sciences library home pages. After developing a checklist, data were collected from 135 academic health sciences library home pages. The core components of these library home pages included a contact phone number, a contact email address, an Ask-a-Librarian feature, the physical address listed, a feedback/suggestions link, subject guides, a discovery tool or database-specific search box, multimedia, social media, a site search option, a responsive web design, and a copyright year or update date.

  12. From web 2.0 to learning games

    DEFF Research Database (Denmark)

    Duus Henriksen, Thomas

    2011-01-01

    a game-based learning process that made active use of the participants’ current knowledge, both on leadership and on the context they were to lead. The solution developed for doing so was based on some of the ideas from web 2.0, by which the web-page or game merely provides a structure that allows...

  13. Wetis – a Web based tourist information system for East Slovakia

    Directory of Open Access Journals (Sweden)

    Jana Jablonská

    2009-12-01

    Full Text Available Services like tourism have to use the possibilities of modern advertising and data presentation. In particular, these include the World Wide Web (WWW). This platform brings its products and services directly to the customers. To place the information about a region and its tourist offer in an optimal manner requires an exact definition of elements in the planned tourist info systems and their presentation. WETIS is the acronym of the "Web based tourist information system". The WETIS elements and their integration into other services, the data structures and types, maps and their interactivity are considered. Furthermore, the paper presents tourism-related data of selected Slovakian villages and cities. Finally, a report regarding the demonstration web page and its future development is given. Usually, the tourist information systems on the web relate to the most interesting historical or cultural sights in the region. In this way, the beauty of many towns and villages is not appreciable and thus they tend to be left out of tourist programmes. In order to fill the gap, a project was initiated at the Institute of Geotourism, where students cooperated in mapping the actual situation in the field and gradually covering all the villages and cities of the region of East Slovakia. The project is still ongoing and the data upload is still incomplete. The WETIS contains useful information on a given locality. The detailed photo-documentation makes it possible to see each building of interest at a given locality. In order to avoid excessive data load for the WETIS users, the detailed data interesting only for professionals are available only on demand.

  14. Sample-based XPath Ranking for Web Information Extraction

    NARCIS (Netherlands)

    Jundt, Oliver; van Keulen, Maurice

    Web information extraction typically relies on a wrapper, i.e., program code or a configuration that specifies how to extract some information from web pages at a specific website. Manually creating and maintaining wrappers is a cumbersome and error-prone task. It may even be prohibitive as some

  15. Traitor: associating concepts using the world wide web

    NARCIS (Netherlands)

    Drijfhout, Wanno; Jundt, Oliver; Wevers, L.; Hiemstra, Djoerd

    We use Common Crawl's 25TB data set of web pages to construct a database of associated concepts using Hadoop. The database can be queried through a web application with two query interfaces. A textual interface allows searching for similarities and differences between multiple concepts using a query

  16. Caching web service for TICF project

    International Nuclear Information System (INIS)

    Pais, V.F.; Stancalie, V.

    2008-01-01

    A caching web service was developed to allow caching of any object in a network cache, presented in the form of a web service. This application was used to increase the speed of previously implemented web services and of new ones. Various tests were conducted to determine the impact of using this caching web service in the existing network environment and where it should be placed in order to achieve the greatest increase in performance. Since the cache is presented to applications as a web service, it can also be used for remote access to stored data and data sharing between applications.
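
    The paper does not publish its code; as a minimal sketch of the idea (a cache exposed to other applications as a network service), the following standalone Python server stores any object under its request path and serves it back. The host, port, and choice of HTTP verbs are assumptions for illustration; a real service would add expiry and concurrency control.

        # Minimal sketch: PUT stores a value under the request path,
        # GET returns it (404 on a cache miss). Standard library only.
        from http.server import BaseHTTPRequestHandler, HTTPServer

        cache = {}

        class CacheHandler(BaseHTTPRequestHandler):
            def do_PUT(self):
                length = int(self.headers.get("Content-Length", 0))
                cache[self.path] = self.rfile.read(length)
                self.send_response(204)
                self.end_headers()

            def do_GET(self):
                value = cache.get(self.path)
                self.send_response(200 if value is not None else 404)
                self.end_headers()
                if value is not None:
                    self.wfile.write(value)

        if __name__ == "__main__":
            HTTPServer(("localhost", 8080), CacheHandler).serve_forever()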

  17. A cool Web challenge on a hot weekend

    CERN Multimedia

    Antonella Del Rosso

    2013-01-01

    The CERN Summer Student Webfest took place last weekend and brought dozens of young web enthusiasts to the Main Auditorium. Fifteen projects were presented in the Friday pitching session and after that the challenge was launched. And the winner is…   Five of the six members of the team behind the “Mother hunting” project during a brainstorming session. Image: Jiannan Zhang. … the “Mother hunting” game! An end-state particle explores CERN to try to reconstruct his (or her) family history of decay mothers and ancestors. Along the way, the particle meets famous physicists who teach it physics. A globe sphinx asks physics questions before the player can progress to various stages (read the game description on the dedicated page). Targeted at high school students, the game features very appealing 3D graphics, which accurately reproduce the layout of CERN. “Mother hunting” was one of the 15 projects presented at the...

  18. Improving the web site's effectiveness by considering each page's temporal information

    NARCIS (Netherlands)

    Li, ZG; Sun, MT; Dunham, MH; Xiao, YQ; Dong, G; Tang, C; Wang, W

    2003-01-01

    Improving the effectiveness of a web site is always one of its owner's top concerns. By focusing on analyzing web users' visiting behavior, web mining researchers have developed a variety of helpful methods, based upon association rules, clustering, prediction and so on. However, we have found

  19. The co-inventor of the Web tells us

    CERN Multimedia

    Ungar, Christophe

    2004-01-01

    For fifty years CERN has made landmark discoveries. Since 1990, a particular idea was developed by a Belgian, Robert Cailliau, in collaboration with Tim Berners-Lee. Cailliau, the co-inventor of the Web, speaks about what brought him to CERN, what he likes about Geneva, and how he uses the Web from day to day (1 page)

  20. Post-processing of Deep Web Information Extraction Based on Domain Ontology

    Directory of Open Access Journals (Sweden)

    PENG, T.

    2013-11-01

    Full Text Available Many methods are utilized to extract and process query results in the deep Web, relying on the different structures of Web pages and the various design modes of databases. However, some semantic meanings and relations are ignored. So, in this paper, we present an approach for post-processing deep Web query results based on domain ontology which can utilize these semantic meanings and relations. A block identification model (BIM) based on node similarity is defined to extract data blocks that are relevant to a specific domain after reducing noisy nodes. A feature vector of domain books is obtained by a result set extraction model (RSEM) based on the vector space model (VSM). RSEM, in combination with BIM, builds the domain ontology on books, which not only removes the limit of Web page structures when extracting data information, but also makes use of the semantic meanings of the domain ontology. After extracting basic information of Web pages, a ranking algorithm is adopted to offer an ordered list of data records to users. Experimental results show that BIM and RSEM extract data blocks and build the domain ontology accurately. In addition, relevant data records and basic information are extracted and ranked. Precision and recall show that our proposed method is feasible and efficient.
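
    As a minimal illustration of the vector-space step (not the paper's actual BIM/RSEM implementation), the following Python sketch scores candidate page blocks against a domain feature vector by cosine similarity; the term weights are invented:

        # Minimal sketch: rank page blocks by cosine similarity to a domain
        # feature vector, so the data block relevant to the domain comes first.
        import math

        def cosine(a, b):
            terms = set(a) | set(b)
            dot = sum(a.get(t, 0) * b.get(t, 0) for t in terms)
            na = math.sqrt(sum(v * v for v in a.values()))
            nb = math.sqrt(sum(v * v for v in b.values()))
            return dot / (na * nb) if na and nb else 0.0

        domain_books = {"title": 3, "author": 3, "isbn": 2, "price": 1}
        blocks = {
            "result_list": {"title": 5, "author": 4, "price": 2},
            "nav_bar":     {"home": 3, "login": 2},
        }
        ranked = sorted(blocks, key=lambda b: cosine(domain_books, blocks[b]),
                        reverse=True)
        print(ranked)  # 'result_list' ranks ahead of 'nav_bar'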

  1. Web Based VRML Modelling

    NARCIS (Netherlands)

    Kiss, S.; Sarfraz, M.

    2004-01-01

    Presents a method to connect VRML (Virtual Reality Modeling Language) and Java components in a Web page using EAI (External Authoring Interface), which makes it possible to interactively generate and edit VRML meshes. The meshes used are based on regular grids, to provide an interaction and modeling

  2. Web Development Simplified

    Science.gov (United States)

    Becker, Bernd W.

    2010-01-01

    The author has discussed the Multimedia Educational Resource for Teaching and Online Learning site, MERLOT, in a recent Electronic Roundup column. In this article, he discusses an entirely new Web page development tool that MERLOT has added for its members. The new tool is called the MERLOT Content Builder and is directly integrated into the…

  3. Talking physics in the social web

    CERN Multimedia

    Griffiths, Martin

    2007-01-01

    "From "blogs" to "wikis", the Web is now more than a mere repository of information. Martin Griffiths investigates how this new interactivity is affecting the way physicists communicate and access information." (5 pages)

  4. Learning in a sheltered Internet environment: The use of WebQuests

    NARCIS (Netherlands)

    Segers, P.C.J.; Verhoeven, L.T.W.

    2009-01-01

    The present study investigated the effects on learning in a sheltered Internet environment using so-called WebQuests in elementary school classrooms in the Netherlands. A WebQuest is an assignment presented together with a series of web pages to help guide children's learning. The learning gains and

  5. Augmenting the Web through Open Hypermedia

    DEFF Research Database (Denmark)

    Bouvin, N.O.

    2003-01-01

    Based on an overview of Web augmentation and detailing the three basic approaches to extend the hypermedia functionality of the Web, the author presents a general open hypermedia framework (the Arakne framework) to augment the Web. The aim is to provide users with the ability to link, annotate, and otherwise structure Web pages, as they see fit. The paper further discusses the possibilities of the concept through the description of various experiments performed with an implementation of the framework, the Arakne Environment.

  6. Analysis of Technique to Extract Data from the Web for Improved Performance

    Science.gov (United States)

    Gupta, Neena; Singh, Manish

    2010-11-01

    The World Wide Web is rapidly guiding the world into an amazing electronic world, where everyone can publish anything in electronic form and extract almost all the information. Extraction of information from semi-structured or unstructured documents, such as web pages, is a useful yet complex task. Data extraction, which is important for many applications, extracts the records from HTML files automatically. Ontologies can achieve a high degree of accuracy in data extraction. We analyze a method for data extraction, OBDE (Ontology-Based Data Extraction), which automatically extracts query result records from the web with the help of agents. OBDE first constructs an ontology for a domain according to information matching between the query interfaces and query result pages from different web sites within the same domain. Then, the constructed domain ontology is used during data extraction to identify the query result section in a query result page and to align and label the data values in the extracted records. The ontology-assisted data extraction method is fully automatic and overcomes many of the deficiencies of current automatic data extraction methods.

  7. 78 FR 67881 - Nondiscrimination on the Basis of Disability in Air Travel: Accessibility of Web Sites and...

    Science.gov (United States)

    2013-11-12

    ... ticket agents are providing schedule and fare information and marketing covered air transportation... corresponding accessible pages on a mobile Web site by one year after the final rule's effective date; and (3... criteria) as the required accessibility standard for all public-facing Web pages involved in marketing air...

  8. Sensor Webs and Virtual Globes: Enabling Understanding of Changes in a partially Glaciated Watershed

    Science.gov (United States)

    Heavner, M.; Fatland, D. R.; Habermann, M.; Berner, L.; Hood, E.; Connor, C.; Galbraith, J.; Knuth, E.; O'Brien, W.

    2008-12-01

    The University of Alaska Southeast is currently implementing a sensor web identified as the SouthEast Alaska MOnitoring Network for Science, Telecommunications, Education, and Research (SEAMONSTER). SEAMONSTER is operating in the partially glaciated Mendenhall and Lemon Creek watersheds, in the Juneau area, on the margins of the Juneau Icefield. These watersheds are studied both for long-term monitoring of changes and for detection and analysis of transient events (such as glacier lake outburst floods). The heterogeneous sensors (meteorological, dual-frequency GPS, water quality, lake level, etc.), power and bandwidth constraints, and competing time scales of interest require autonomous reactivity of the sensor web. They also present challenges for operational management of the sensor web. The harsh conditions on the glaciers impose additional operating constraints. The tight integration of the sensor web and virtual globe enabling technology enhances the project in multiple ways. We are utilizing virtual globe infrastructures to enhance both sensor web management and data access. SEAMONSTER utilizes virtual globes for education and public outreach, sensor web management, data dissemination, and enabling collaboration. Using a PostgreSQL database with GIS extensions coupled to the Open Geospatial Consortium (OGC) GeoServer, we generate near-real-time, auto-updating geobrowser files of the data in multiple OGC standard formats (e.g. KML, WCS). Additionally, embedding wiki pages in this database allows the development of a geospatially aware wiki describing the projects for better public outreach and education. In this presentation we will describe how we have implemented these technologies to date, the lessons learned, and our efforts towards greater OGC standard implementation. A major focus will be on demonstrating how geobrowsers and virtual globes have made this project possible.
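
    The geobrowser-file generation step is easy to picture. Below is a minimal Python sketch that turns sensor rows (station names, coordinates and a latest reading, all invented here) into a KML document of placemarks of the kind a virtual globe can display and auto-refresh:

        # Minimal sketch: emit a KML file of placemarks from sensor rows, as
        # they might come out of the PostGIS database. Data are illustrative.
        sensors = [
            ("Lemon Creek lake level", -134.35, 58.38, "level: 2.1 m"),
            ("Mendenhall weather",     -134.55, 58.42, "temp: 4.3 C"),
        ]

        placemark = ("  <Placemark><name>{name}</name>"
                     "<description>{desc}</description>"
                     "<Point><coordinates>{lon},{lat},0</coordinates></Point>"
                     "</Placemark>")

        body = "\n".join(placemark.format(name=n, desc=d, lon=lon, lat=lat)
                         for n, lon, lat, d in sensors)
        kml = ('<?xml version="1.0" encoding="UTF-8"?>\n'
               '<kml xmlns="http://www.opengis.net/kml/2.2">\n'
               '<Document>\n%s\n</Document>\n</kml>' % body)

        with open("seamonster.kml", "w") as f:
            f.write(kml)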

  9. Web resources for myrmecologists

    DEFF Research Database (Denmark)

    Nash, David Richard

    2005-01-01

    The world wide web provides many resources that are useful to the myrmecologist. Here I provide a brief introduction to the types of information currently available, and to recent developments in data provision over the internet which are likely to become important resources for myrmecologists in the near future. I discuss the following types of web site, and give some of the most useful examples of each: taxonomy, identification and distribution; conservation; myrmecological literature; individual species sites; news and discussion; picture galleries; personal pages; portals.

  10. PSB goes personal: The failure of personalised PSB web pages

    DEFF Research Database (Denmark)

    Sørensen, Jannick Kirk

    2013-01-01

    Between 2006 and 2011, a number of European public service broadcasting (PSB) organisations offered their website users the opportunity to create their own PSB homepage. The web customisation was conceived by the editors as a response to developments in commercial web services, particularly social...

  11. TREC2002 Web, Novelty and Filtering Track Experiments Using PIRCS

    National Research Council Canada - National Science Library

    Kwok, K. L; Deng, P; Dinstl, N; Chan, M

    2006-01-01

    .... The Web track has two tasks: distillation and named-page retrieval. Distillation is a new utility concept for ranking documents, and needs a new design for the output ranked document list after an ad-hoc retrieval from the web (.gov) collection...

  12. Sources of Militaria on the World Wide Web | Walker | Scientia ...

    African Journals Online (AJOL)

    Having an interest in military-type topics is one thing; finding information on the web to quench your thirst for knowledge is another. The World Wide Web (WWW) is a universal electronic library that contains millions of web pages. As well as being fun, it is an addictive tool on which to search for information. To prevent hours ...

  13. Archives at the U.S. Naval Observatory - Recent Projects

    Science.gov (United States)

    Corbin, B. G.

    2004-12-01

    In 1874, like many other astronomical institutions, the U.S. Naval Observatory sent eight expeditions to different parts of the globe to observe the Transit of Venus. After all results were in, William Harkness was placed in charge of preparing the results and observations for publication. Page proofs of these observations appeared in 1881, but due to lack of funds and other reasons, these volumes were never published. Recently funds became available to have photocopies made on acid-free paper. The Astrophysics Data System (ADS) agreed to scan the photocopied pages and has made this publication available via the ADS so it now may be seen by anyone with access to the web. The compilation of a historical photograph archive at the USNO is continuing. Photographs and glass plates are being scanned by students and placed on the web. As the Naval Observatory has many thousands of plates and photographs, this project will take quite some time to complete. The images are of instruments, buildings, and staff members. The URL for this collection is http://www.usno.navy.mil/library/search.shtml

  14. Project management web tools at the MICE experiment

    International Nuclear Information System (INIS)

    Coney, L R; Tunnell, C D

    2012-01-01

    Project management tools like Trac are commonly used within the open-source community to coordinate projects. The Muon Ionization Cooling Experiment (MICE) uses the project management web application Redmine to host mice.rl.ac.uk. Many groups within the experiment have a Redmine project: analysis, computing and software (including offline, online, controls and monitoring, and database subgroups), executive board, and operations. All of these groups use the website to communicate, track effort, develop schedules, and maintain documentation. The issue tracker is a rich tool that is used to identify tasks and monitor progress within groups on timescales ranging from immediate and unexpected problems to milestones that cover the life of the experiment. It allows the prioritization of tasks according to time-sensitivity, while providing a searchable record of work that has been done. This record of work can be used to measure both individual and overall group activity, identify areas lacking sufficient personnel or effort, and as a measure of progress against the schedule. Given that MICE, like many particle physics experiments, is an international community, such a system is required to allow easy communication within a global collaboration. Unlike systems that are purely wiki-based, the structure of a project management tool like Redmine allows information to be maintained in a more structured and logical fashion.

  15. CSAR-web: a web server of contig scaffolding using algebraic rearrangements.

    Science.gov (United States)

    Chen, Kun-Tze; Lu, Chin Lung

    2018-05-04

    CSAR-web is a web-based tool that allows the users to efficiently and accurately scaffold (i.e. order and orient) the contigs of a target draft genome based on a complete or incomplete reference genome from a related organism. It takes as input a target genome in multi-FASTA format and a reference genome in FASTA or multi-FASTA format, depending on whether the reference genome is complete or incomplete, respectively. In addition, it requires the users to choose either 'NUCmer on nucleotides' or 'PROmer on translated amino acids' for CSAR-web to identify conserved genomic markers (i.e. matched sequence regions) between the target and reference genomes, which are used by the rearrangement-based scaffolding algorithm in CSAR-web to order and orient the contigs of the target genome based on the reference genome. In the output page, CSAR-web displays its scaffolding result in a graphical mode (i.e. scalable dotplot) allowing the users to visually validate the correctness of scaffolded contigs and in a tabular mode allowing the users to view the details of scaffolds. CSAR-web is available online at http://genome.cs.nthu.edu.tw/CSAR-web.

  16. Dynamic Web Expression for Near-real-time Sensor Networks

    Science.gov (United States)

    Lindquist, K. G.; Newman, R. L.; Nayak, A.; Vernon, F. L.; Nelson, C.; Hansen, T. S.; Yuen-Wong, R.

    2003-12-01

    As near-real-time sensor grids become more widespread, and processing systems based on them become more powerful, summarizing the raw and derived information products and delivering them to the end user become increasingly important both for ongoing monitoring and as a platform for cross-disciplinary research. We have re-engineered the dbrecenteqs program, which was designed to express real-time earthquake databases into dynamic web pages, with several powerful new technologies. While the application is still most fully developed for seismic data, the infrastructure is extensible (and being extended) to create a real-time information architecture for numerous signal domains. This work provides a practical, lightweight approach suitable for individual seismic and sensor networks, which does not require a full 'web-services' implementation. Nevertheless, the technologies here are extensible to larger applications such as the Storage-Resource-Broker based VORB project. The technologies included in the new system blend real-time relational databases as a focus for processing and data handling; an XML->XSLT architecture as the core of the web mirroring; PHP extensions to Antelope (the environmental monitoring-system context adopted for RoadNET) in order to support complex, user-driven interactivity; and VRML output for expression of information as web-browsable three-dimensional worlds.
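
    The XML to XSLT core of the web mirroring described above is a short pipeline in code. Here is a minimal Python sketch using the third-party lxml package; both file names are illustrative, not the project's own:

        # Minimal sketch: transform an XML export of recent events into a
        # dynamic web page with an XSLT stylesheet. Requires lxml.
        from lxml import etree

        transform = etree.XSLT(etree.parse("recent_events.xsl"))
        events = etree.parse("recent_events.xml")   # exported from the database
        page = transform(events)                    # the rendered web page

        with open("recent_events.html", "w") as f:
            f.write(str(page))                      # serialized XSLT output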

  17. Web Transfer Over Satellites Being Improved

    Science.gov (United States)

    Allman, Mark

    1999-01-01

    Extensive research conducted by NASA Lewis Research Center's Satellite Networks and Architectures Branch and the Ohio University has demonstrated performance improvements in World Wide Web transfers over satellite-based networks. The use of a new version of the Hypertext Transfer Protocol (HTTP) reduced the time required to load web pages over a single Transmission Control Protocol (TCP) connection traversing a satellite channel. However, an older technique of simultaneously making multiple requests of a given server has been shown to provide even faster transfer time. Unfortunately, the use of multiple simultaneous requests has been shown to be harmful to the network in general. Therefore, we are developing new mechanisms for the HTTP protocol which may allow a single request at any given time to perform as well as, or better than, multiple simultaneous requests. In the course of study, we also demonstrated that the time for web pages to load is at least as short via a satellite link as it is via a standard 28.8-kbps dialup modem channel. This demonstrates that satellites are a viable means of accessing the Internet.
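
    The trade-off discussed above can be sketched directly. The following Python comparison issues the same requests over one persistent HTTP/1.1 connection and over a fresh connection per object; on a high-latency satellite link, every new connection pays an extra TCP handshake round trip. The host name is a placeholder.

        # Minimal sketch: persistent connection reuse vs. one connection per
        # request, timed end to end. Standard library only.
        import http.client
        import time

        HOST, PATHS = "example.com", ["/", "/", "/"]

        def persistent():
            conn = http.client.HTTPConnection(HOST)      # one TCP connection
            for path in PATHS:
                conn.request("GET", path)
                conn.getresponse().read()                # reused via keep-alive
            conn.close()

        def one_connection_per_request():
            for path in PATHS:
                conn = http.client.HTTPConnection(HOST)  # handshake every time
                conn.request("GET", path)
                conn.getresponse().read()
                conn.close()

        for fetch in (persistent, one_connection_per_request):
            start = time.time()
            fetch()
            print(fetch.__name__, round(time.time() - start, 2), "s")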

  18. Web Enabled DROLS Verity TopicSets

    National Research Council Canada - National Science Library

    Tong, Richard

    1999-01-01

    The focus of this effort has been the design and development of automatically generated TopicSets and HTML pages that provide the basis of the required search and browsing capability for DTIC's Web Enabled DROLS System...

  19. Teaching Critical Evaluation Skills for World Wide Web Resources.

    Science.gov (United States)

    Tate, Marsha; Alexander, Jan

    1996-01-01

    Outlines a lesson plan used by an academic library to evaluate the quality of World Wide Web information. Discusses the traditional evaluation criteria of accuracy, authority, objectivity, currency, and coverage as it applies to the unique characteristics of Web pages: their marketing orientation, variety of information, and instability. The…

  20. Developing Large Web Applications

    CERN Document Server

    Loudon, Kyle

    2010-01-01

    How do you create a mission-critical site that provides exceptional performance while remaining flexible, adaptable, and reliable 24/7? Written by the manager of a UI group at Yahoo!, Developing Large Web Applications offers practical steps for building rock-solid applications that remain effective even as you add features, functions, and users. You'll learn how to develop large web applications with the extreme precision required for other types of software. Avoid common coding and maintenance headaches as small websites add more pages, more code, and more programmersGet comprehensive soluti

  1. Reflect: a practical approach to web semantics

    DEFF Research Database (Denmark)

    O'Donoghue, S.I.; Horn, Heiko; Pafilisa, E.

    2010-01-01

    To date, adding semantic capabilities to web content usually requires considerable server-side re-engineering, thus only a tiny fraction of all web content currently has semantic annotations. Recently, we announced Reflect (http://reflect.ws), a free service that takes a more practical approach: Reflect uses augmented browsing to allow end-users to add systematic semantic annotations to any web-page in real-time, typically within seconds. In this paper we describe the tagging process in detail and show how further entity types can be added to Reflect; we also describe how publishers and content... web technologies....

  2. WebWise 2.0: The Power of Community. WebWise Conference on Libraries and Museums in the Digital World Proceedings (9th, Miami Beach, Florida, March 5-7, 2008)

    Science.gov (United States)

    Green, David

    2009-01-01

    Since it was coined by Tim O'Reilly in formulating the first Web 2.0 Conference in 2004, the term "Web 2.0" has definitely caught on as a designation of a second generation of Web design and experience that emphasizes a high degree of interaction with, and among, users. Rather than simply consulting and reading Web pages, the Web 2.0 generation is…

  3. A URI-based approach for addressing fragments of media resources on the Web

    NARCIS (Netherlands)

    E. Mannens; D. van Deursen; R. Troncy (Raphael); S. Pfeiffer; C. Parker (Conrad); Y. Lafon; A.J. Jansen (Jack); M. Hausenblas; R. van de Walle

    2011-01-01

    To make media resources a prime citizen on the Web, we have to go beyond simply replicating digital media files. The Web is based on hyperlinks between Web resources, and that includes hyperlinking out of resources (e.g., from a word or an image within a Web page) as well as hyperlinking

  4. Distributed data collection and supervision based on web sensor

    Science.gov (United States)

    He, Pengju; Dai, Guanzhong; Fu, Lei; Li, Xiangjun

    2006-11-01

    As a node in the Internet/Intranet, the web sensor has been promoted in recent years and widely applied in remote manufacturing, workshop measurement and control fields. However, the conventional scheme can only support the HTTP protocol, and remote users supervise and control the collected data published via the web in a standard browser, because of the limited resources of the microprocessor in the sensor; moreover, only one data-acquisition node can be supervised and controlled at any one instant, so the requirement of centralized remote supervision, control and data processing cannot be satisfied in some fields. In this paper, centralized remote supervision, control and data processing with web sensors are proposed and implemented on the principle of a device driver program. The useless information of every collected web page embedded in the sensor is filtered, and the useful data is transmitted to a real-time database in the workstation; different filter algorithms are designed for different sensors possessing independent web pages. Every sensor node has its own web filter program, called a "web data collection driver program"; the collection details are shielded, and the supervision, control and configuration software can be implemented by calling the web data collection driver program, just like the use of an I/O driver program. The proposed technology can be applied in data acquisition where relatively low real-time performance is required.
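
    A minimal Python sketch of such a "web data collection driver" follows: it fetches the sensor's embedded web page, filters out the markup to keep only the value, and stores the reading in a workstation-side database. The URL, regular expression and table layout are illustrative only.

        # Minimal sketch: scrape one sensor's embedded page and persist the
        # extracted value. Standard library only (urllib, re, sqlite3).
        import re
        import sqlite3
        import urllib.request

        def read_sensor(url, pattern=r"temperature:\s*([\d.]+)"):
            html = urllib.request.urlopen(url, timeout=5).read().decode()
            match = re.search(pattern, html)   # filter: keep only the value
            return float(match.group(1)) if match else None

        db = sqlite3.connect("plant.db")
        db.execute("CREATE TABLE IF NOT EXISTS readings (node TEXT, value REAL)")
        value = read_sensor("http://192.168.0.20/status.html")  # hypothetical node
        if value is not None:
            db.execute("INSERT INTO readings VALUES (?, ?)", ("node20", value))
            db.commit()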

  5. Graphic Data Display from Manufacturing on Web Pages

    Directory of Open Access Journals (Sweden)

    Martin VALAS

    2009-06-01

    Full Text Available Industrial data can be displayed in graphical form, which is usually used by three types of users. The first are nonstop users, most frequently operational engineers, who check the currently displayed values and then intervene in the operation. The second are occasional users who are interested in historical data, e.g. for servicing reasons. The last user type is tradesmen and managers. Comparing the current state with that of a few days or months ago serves as decision-making support. A graph component with a web application, which provides data as an XML document, was designed for the second user group. The graph component displays historical data. Students can fully understand all the problems that go along with web application creation in ASP.NET, which provides data in an XML document, as well as graph component creation in the integrated development environment Flash, thanks to the solution described in detail using ActionScript.

  6. EpiCollect+: linking smartphones to web applications for complex data collection projects.

    Science.gov (United States)

    Aanensen, David M; Huntley, Derek M; Menegazzo, Mirko; Powell, Chris I; Spratt, Brian G

    2014-01-01

    Previously, we have described the development of the generic mobile phone data gathering tool, EpiCollect, and an associated web application, providing two-way communication between multiple data gatherers and a project database. This software only allows data collection on the phone using a single questionnaire form that is tailored to the needs of the user (including a single GPS point and photo per entry), whereas many applications require a more complex structure, allowing users to link a series of forms in a linear or branching hierarchy, along with the addition of any number of media types accessible from smartphones and/or tablet devices (e.g., GPS, photos, videos, sound clips and barcode scanning). A much enhanced version of EpiCollect has been developed (EpiCollect+). The individual data collection forms in EpiCollect+ provide more design complexity than the single form used in EpiCollect, and the software allows the generation of complex data collection projects through the ability to link many forms together in a linear (or branching) hierarchy. Furthermore, EpiCollect+ allows the collection of multiple media types as well as standard text fields, increased data validation and form logic. The entire process of setting up a complex mobile phone data collection project to the specification of a user (project and form definitions) can be undertaken at the EpiCollect+ website using a simple 'drag and drop' procedure, with visualisation of the data gathered using Google Maps and charts at the project website. EpiCollect+ is suitable for situations where multiple users transmit complex data by mobile phone (or other Android devices) to a single project web database and is already being used for a range of field projects, particularly public health projects in sub-Saharan Africa. However, many uses can be envisaged from education, ecology and epidemiology to citizen science.
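
    The linked-form hierarchy is the key structural idea. A minimal sketch of how such a project definition might be represented follows; the form names and fields are invented for illustration, not EpiCollect+'s actual schema:

        # Minimal sketch: forms linked in a hierarchy, each child form
        # recording its parent, so entries chain from top-level form down.
        project = {
            "household": {"fields": ["gps", "photo"], "parent": None},
            "person":    {"fields": ["name", "age"],  "parent": "household"},
            "sample":    {"fields": ["barcode"],      "parent": "person"},
        }

        def lineage(form):
            # Walk parent links to recover the chain of linked forms.
            chain = []
            while form is not None:
                chain.append(form)
                form = project[form]["parent"]
            return list(reversed(chain))

        print(lineage("sample"))  # ['household', 'person', 'sample']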

  7. U.S. Geological Survey (USGS) Earthquake Web Applications

    Science.gov (United States)

    Fee, J.; Martinez, E.

    2015-12-01

    USGS Earthquake web applications provide access to earthquake information from USGS and other Advanced National Seismic System (ANSS) contributors. One of the primary goals of these applications is to provide a consistent experience for accessing both near-real time information as soon as it is available and historic information after it is thoroughly reviewed. Millions of people use these applications every month, including people who feel an earthquake, emergency responders looking for the latest information about a recent event, and scientists researching historic earthquakes and their effects. Information from multiple catalogs and contributors is combined by the ANSS Comprehensive Catalog into one composite catalog, identifying the most preferred information from any source for each event. A web service and near-real time feeds provide access to all contributed data, and are used by a number of users and software packages. The Latest Earthquakes application displays summaries of many events, either near-real time feeds or custom searches, and the Event Page application shows detailed information for each event. Because all data is accessed through the web service, it can also be downloaded by users. The applications are maintained as open source projects on github, and use mobile-first and responsive-web-design approaches to work well on both mobile devices and desktop computers. http://earthquake.usgs.gov/earthquakes/map/
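
    Consuming the near-real-time feeds mentioned above takes only a few lines. A minimal Python sketch, using the past-hour GeoJSON summary feed as documented at the time of writing:

        # Minimal sketch: fetch the past-hour earthquake summary and print
        # magnitude and place for each event. Standard library only.
        import json
        import urllib.request

        FEED = ("https://earthquake.usgs.gov/earthquakes/"
                "feed/v1.0/summary/all_hour.geojson")

        with urllib.request.urlopen(FEED) as response:
            catalog = json.load(response)

        for feature in catalog["features"]:
            props = feature["properties"]
            print(props["mag"], props["place"])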

  8. Learning in a Sheltered Internet Environment: The Use of WebQuests

    Science.gov (United States)

    Segers, Eliane; Verhoeven, Ludo

    2009-01-01

    The present study investigated the effects on learning in a sheltered Internet environment using so-called WebQuests in elementary school classrooms in the Netherlands. A WebQuest is an assignment presented together with a series of web pages to help guide children's learning. The learning gains and quality of the work of 229 sixth graders…

  9. SiteGuide: An example-based approach to web site development assistance

    NARCIS (Netherlands)

    Hollink, V.; de Boer, V.; van Someren, M.; Filipe, J.; Cordeiro, J.

    2009-01-01

    We present ‘SiteGuide’, a tool that helps web designers to decide which information will be included in a new web site and how the information will be organized. SiteGuide takes as input URLs of web sites from the same domain as the site the user wants to create. It automatically searches the pages

  10. The impact of Arizona Highways Magazine's facebook page.

    Science.gov (United States)

    2014-02-01

    This project examined the relationship between use of the Arizona Highways magazine (AHM) Facebook Page and the decision to travel to or within Arizona. Key purposes were to: (1) provide a thorough understanding of AHM Facebook Page users, includin...

  11. A hybrid BCI web browser based on EEG and EOG signals.

    Science.gov (United States)

    Shenghong He; Tianyou Yu; Zhenghui Gu; Yuanqing Li

    2017-07-01

    In this study, we propose a new web browser based on a hybrid brain computer interface (BCI) combining electroencephalographic (EEG) and electrooculography (EOG) signals. Specifically, the user can control the horizontal movement of the mouse by imagining left/right hand motion, and control the vertical movement of the mouse, select/reject a target, or input text in an edit box by blinking eyes in synchrony with the flashes of the corresponding buttons on the GUI. Based on mouse control, target selection and text input, the user can open a web page of interest, select an intended target in the web and read the page content. An online experiment was conducted involving five healthy subjects. The experimental results demonstrated the effectiveness of the proposed method.

  12. Nessi: An EEG-Controlled Web Browser for Severely Paralyzed Patients

    Directory of Open Access Journals (Sweden)

    Michael Bensch

    2007-01-01

    We have previously demonstrated that an EEG-controlled web browser based on self-regulation of slow cortical potentials (SCPs) enables severely paralyzed patients to browse the internet independently of any voluntary muscle control. However, this system had several shortcomings, among them that patients could only browse within a limited number of web pages and had to select links from an alphabetical list, causing problems if the link names were identical or if they were unknown to the user (as in graphical links). Here we describe a new EEG-controlled web browser, called Nessi, which overcomes these shortcomings. In Nessi, the open source browser Mozilla was extended by graphical in-place markers, whereby different brain responses correspond to different frame colors placed around selectable items, enabling the user to select any link on a web page. Besides links, other interactive elements are accessible to the user, such as e-mail and virtual keyboards, opening up a wide range of hypertext-based applications.

  13. Pro single page application development using Backbone.js and ASP.NET

    CERN Document Server

    Fink, Gil

    2014-01-01

    One of the most important and exciting trends in web development in recent years is the move towards single page applications, or SPAs. Instead of clicking through hyperlinks and waiting for each page to load, the user loads a site once and all the interactivity is handled fluidly by a rich JavaScript front end. If you come from a background in ASP.NET development, you'll be used to handling most interactions on the server side. Pro Single Page Application Development will guide you through your transition to this powerful new application type. The book starts in Part I by laying the groundwork

  14. Knighthood for 'father of the web'

    CERN Multimedia

    Uhlig, R

    2003-01-01

    "Tim Berners-Lee, the father of the world wide web, was awarded a knighthood for services to the internet, which his efforts transformed from a haunt of computer geeks, scientists and the military into a global phenomenon" (1/2 page).

  15. Semantic similarity measure in biomedical domain leverage web search engine.

    Science.gov (United States)

    Chen, Chi-Huang; Hsieh, Sheau-Ling; Weng, Yung-Ching; Chang, Wen-Yung; Lai, Feipei

    2010-01-01

    Semantic similarity measures play an essential role in Information Retrieval and Natural Language Processing. In this paper we propose a page-count-based semantic similarity measure and apply it in biomedical domains. Previous research in semantic-web-related applications has deployed various semantic similarity measures. Despite the usefulness of these measures in those applications, measuring the semantic similarity between two terms remains a challenging task. The proposed method exploits page counts returned by a web search engine. We define various similarity scores for two given terms P and Q, using the page counts for the queries P, Q and P AND Q. Moreover, we propose a novel approach to compute semantic similarity using lexico-syntactic patterns with page counts. These different similarity scores are integrated using support vector machines, to leverage the robustness of semantic similarity measures. Experimental results on two datasets achieve correlation coefficients of 0.798 on the dataset provided by A. Hliaoutakis, 0.705 on the dataset provided by T. Pedersen with physician scores and 0.496 on the dataset provided by T. Pedersen et al. with expert scores.
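
    The abstract does not reproduce the authors' exact score definitions, so the sketch below only illustrates the general shape of page-count-based similarity scores (Jaccard- and PMI-style variants are common in this literature); the counts and the web-scale constant are hypothetical.

        import math

        def web_jaccard(count_p, count_q, count_pq):
            # Jaccard-style score from page counts for P, Q and "P AND Q".
            if count_pq == 0:
                return 0.0
            return count_pq / (count_p + count_q - count_pq)

        def web_pmi(count_p, count_q, count_pq, n_pages=1e10):
            # PMI-style score; n_pages approximates the size of the search
            # engine's index (an assumption, as in the page-count literature).
            if 0 in (count_p, count_q, count_pq):
                return 0.0
            return math.log2((count_pq * n_pages) / (count_p * count_q))

        # Hypothetical page counts for two biomedical terms:
        p, q, pq = 3_200_000, 1_500_000, 420_000
        print(web_jaccard(p, q, pq), web_pmi(p, q, pq))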

  16. Off the Beaten tracks: Exploring Three Aspects of Web Navigation

    NARCIS (Netherlands)

    Weinreich, H.; Obendorf, H.; Herder, E.; Mayer, M.; Edmonds, H.; Hawkey, K.; Kellar, M.; Turnbull, D.

    2006-01-01

    This paper presents results of a long-term client-side Web usage study, updating previous studies that range in age from five to ten years. We focus on three aspects of Web navigation: changes in the distribution of navigation actions, speed of navigation and within-page navigation. “Navigation

  17. Outreach to International Students and Scholars Using the World Wide Web.

    Science.gov (United States)

    Wei, Wei

    1998-01-01

    Describes the creation of a World Wide Web site for the Science Library International Outreach Program at the University of California, Santa Cruz. Discusses design elements, content, and promotion of the site. Copies of the home page and the page containing the outreach program's statement of purpose are included. (AEF)

  18. Checklist of accessibility in Web informational environments

    Directory of Open Access Journals (Sweden)

    Christiane Gomes dos Santos

    2017-01-01

    This research deals with the process of search, navigation and retrieval of information by people with blindness in the web environment, drawing on the areas of information retrieval and information architecture to understand the strategies these people use to access information on the web. It aims to propose the construction of an accessibility verification instrument, a checklist, to be used to analyze the behavior of people with blindness in search, navigation and retrieval actions on sites and pages. It is exploratory and descriptive research of a qualitative nature; the research methodology is a case study, in which search, navigation and information retrieval are simulated using the NonVisual Desktop Access speech synthesis system in an assistive technologies laboratory, to substantiate the construction of the checklist for accessibility verification. The reliability of the performed research is considered, as is its importance for the evaluation of accessibility in the web environment, so as to improve access to information for people with limited reading when analyzing the accessibility of websites and pages.

  19. Designing a Web Spam Classifier Based on Feature Fusion in the Layered Multi-Population Genetic Programming Framework

    Directory of Open Access Journals (Sweden)

    Amir Hosein KEYHANIPOUR

    2013-11-01

    Nowadays, Web spam pages are a critical challenge for Web retrieval systems and have a drastic influence on the performance of such systems. Although these systems try to combat the impact of spam pages on their final results lists, spammers increasingly use more sophisticated techniques to increase the number of views for their intended pages in order to have more commercial success. This paper employs the recently proposed Layered Multi-population Genetic Programming model for the Web spam detection task, together with the application of correlation coefficient analysis for feature space reduction. Based on our tentative results, the designed classifier, which is based on a combination of easy-to-compute features, has a very reasonable performance in comparison with similar methods.
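
    The abstract does not detail the correlation-coefficient analysis, so the following is only a minimal sketch of a common variant: dropping one feature from every highly correlated pair before training the classifier. The threshold and the toy feature names are assumptions.

        import numpy as np
        import pandas as pd

        def drop_correlated(features, threshold=0.9):
            # Absolute Pearson correlations between all feature pairs.
            corr = features.corr().abs()
            # Keep only the upper triangle so each pair is inspected once.
            upper = corr.where(np.triu(np.ones(corr.shape, dtype=bool), k=1))
            drop = [c for c in upper.columns if (upper[c] > threshold).any()]
            return features.drop(columns=drop)

        # Toy link/content spam features (hypothetical):
        X = pd.DataFrame({"in_links":   [3, 9, 1, 7],
                          "out_links":  [4, 10, 2, 8],
                          "kw_density": [0.4, 0.1, 0.7, 0.2]})
        print(drop_correlated(X, threshold=0.95).columns.tolist())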

  20. Distribution of pagerank mass among principle components of the web

    NARCIS (Netherlands)

    Avrachenkov, Konstatin; Litvak, Nelli; Pham, Kim Son; Bonato, A.; Chung, F.R.K.

    2007-01-01

    We study the PageRank mass of principal components in a bow-tie Web Graph, as a function of the damping factor c. Using a singular perturbation approach, we show that the PageRank share of IN and SCC components remains high even for very large values of the damping factor, in spite of the fact that
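
    The role of the damping factor c can be made concrete with a standard power-iteration PageRank sketch (not the authors' singular-perturbation analysis); varying c shows how the probability mass shifts between components.

        import numpy as np

        def pagerank(adj, c=0.85, iters=100):
            # Row-stochastic transition matrix; dangling nodes get uniform links.
            n = adj.shape[0]
            out = adj.sum(axis=1, keepdims=True)
            p = np.where(out > 0, adj / np.maximum(out, 1), 1.0 / n)
            r = np.full(n, 1.0 / n)
            for _ in range(iters):
                r = c * (r @ p) + (1.0 - c) / n   # damping-factor mixing
            return r

        # Tiny bow-tie-like graph: IN node 0 feeds an SCC formed by nodes 1 and 2.
        A = np.array([[0, 1, 0],
                      [0, 0, 1],
                      [0, 1, 0]], dtype=float)
        for c in (0.5, 0.85, 0.99):
            print(c, pagerank(A, c).round(3))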

  1. WEBSLIDE: A "Virtual" Slide Projector Based on World Wide Web

    Science.gov (United States)

    Barra, Maria; Ferrandino, Salvatore; Scarano, Vittorio

    1999-03-01

    We present here the key design concepts of WEBSLIDE, a software project whose objective is to provide a simple, cheap and efficient solution for showing slides during lessons in computer labs. In fact, WEBSLIDE allows the video monitors of several client machines (the "STUDENTS") to be synchronously updated by the actions of a particular client machine, called the "INSTRUCTOR." The system is based on the World Wide Web, and the software components of WEBSLIDE mainly consist of a WWW server, browsers and small CGI-bin scripts. What makes WEBSLIDE particularly appealing for small educational institutions is that WEBSLIDE is built with "off the shelf" products: it does not involve using a specifically designed program; any Netscape browser, one of the most popular browsers available on the market, is sufficient. Another possibility is to use our system to implement "guided automatic tours" through several pages, or intranet internal news bulletins: the company Web server can broadcast relevant information to all employees' browsers.

  2. Neutralizing SQL Injection Attack Using Server Side Code Modification in Web Applications

    OpenAIRE

    Dalai, Asish Kumar; Jena, Sanjay Kumar

    2017-01-01

    Reports on web application security risks show that SQL injection is the topmost vulnerability. The journey from static to dynamic web pages led to the use of databases in web applications. Due to the lack of secure coding techniques, SQL injection vulnerabilities prevail in a large set of web applications. A successful SQL injection attack imposes a serious threat to the database, the web application, and the entire web server. In this article, the authors have proposed a novel method for prevent...
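
    The classic server-side defence in this area, binding user input as parameters rather than concatenating it into the query string, can be sketched in Python as follows; the table, column and payload are illustrative, and this is not necessarily the authors' proposed method.

        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE users (name TEXT, pw TEXT)")
        conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

        user_input = "nobody' OR '1'='1"   # classic injection payload

        # Vulnerable: concatenation lets the payload rewrite the WHERE clause.
        unsafe = f"SELECT * FROM users WHERE name = '{user_input}'"
        print(conn.execute(unsafe).fetchall())   # returns every row

        # Safe: a bound parameter is treated strictly as data, never as SQL.
        print(conn.execute("SELECT * FROM users WHERE name = ?",
                           (user_input,)).fetchall())   # returns []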

  3. Training project on Radiological Protection in medicine. Use of new technologies

    International Nuclear Information System (INIS)

    Ruis-Cruces, R.; Perez-Martinez, M.; Pastor Vega, J. M.; Diez de los Rios Delgado, A.

    2003-01-01

    Radiological protection training addressed to physicians should start during graduate and postgraduate studies, with a third phase only for those physicians using X-rays and radioactive sources in the diagnosis and treatment of diseases. The aim is to show a training project addressed to graduate students based on new technologies, such as an online website and an interactive CD-ROM. The work involves the development of a website including information in .pdf (Adobe Acrobat) format and additional tools (such as databases, videos, news and class meetings, FAQs and tutorials). Moreover, we propose to develop an interactive CD-ROM which will be used as a practical tool to complete the obligatory subject on radiological protection at the University of Malaga (Spain). We show the preliminary phase of the project. The website is being developed with the Microsoft FrontPage software. The first version of the CD-ROM is being developed in HTML format. These tools based on new technologies will be a very important support for radiological protection training, which is recommended by International Organizations (EC Report R116 and IAEA Action Plan 2002-2006). (Author) 4 refs

  4. Trust estimation of the semantic web using semantic web clustering

    Science.gov (United States)

    Shirgahi, Hossein; Mohsenzadeh, Mehran; Haj Seyyed Javadi, Hamid

    2017-05-01

    Development of the semantic web and social networks is undeniable in today's Internet world. The widespread nature of the semantic web makes assessing trust in this field very challenging. In recent years, extensive research has been done to estimate the trust of the semantic web. Since trust of the semantic web is a multidimensional problem, in this paper we used the parameters of social network authority, the value of page link authority and semantic authority to assess trust. Due to the large space of the semantic network, we restricted the problem scope to clusters of semantic subnetworks, obtained the trust of each cluster's elements locally, and calculated the trust of outside resources according to their local trusts and the trust of clusters in each other. According to the experimental results, the proposed method achieves an F-score of more than 79%, which is on average about 11.9% higher than the Eigen, Tidal and centralised trust methods. The mean error of the proposed method is 12.936, which is on average 9.75% lower than the Eigen and Tidal trust methods.

  5. The World Wide Web and Technology Transfer at NASA Langley Research Center

    Science.gov (United States)

    Nelson, Michael L.; Bianco, David J.

    1994-01-01

    NASA Langley Research Center (LaRC) began using the World Wide Web (WWW) in the summer of 1993, becoming the first NASA installation to provide a Center-wide home page. This coincided with a reorganization of LaRC to provide a more concentrated focus on technology transfer to both aerospace and non-aerospace industry. Use of the WWW and NCSA Mosaic not only provides automated information dissemination, but also allows for the implementation, evolution and integration of many technology transfer applications. This paper describes several of these innovative applications, including the on-line presentation of the entire Technology Opportunities Showcase (TOPS), an industrial partnering showcase that exists on the Web long after the actual 3-day event ended. During its first year on the Web, LaRC also developed several WWW-based information repositories. The Langley Technical Report Server (LTRS), a technical paper delivery system with integrated searching and retrieval, has proved to be quite popular. The NASA Technical Report Server (NTRS), an outgrowth of LTRS, provides uniform access to many logically similar, yet physically distributed NASA report servers. WWW is also the foundation of the Langley Software Server (LSS), an experimental software distribution system which will distribute LaRC-developed software with the possible phase-out of NASA's COSMIC program. In addition to the more formal technology distribution projects, WWW has been successful in connecting people with technologies and people with other people. With the completion of the LaRC reorganization, the Technology Applications Group, charged with interfacing with non-aerospace companies, opened for business with a popular home page.

  6. Longitudinal navigation log data on a large web domain

    NARCIS (Netherlands)

    Verberne, S.; Arends, B.; Kraaij, W.; Vries, A. de

    2016-01-01

    We have collected the access logs for our university's web domain over a time span of 4.5 years. We now release the pre-processed data of a 3-month period for research into user navigation behavior. We preprocessed the data so that only successful GET requests of web pages by non-bot users are kept.
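
    The released dataset is already pre-processed, but the kind of filtering described (successful GET requests of web pages by non-bot users) can be sketched for combined-format access logs roughly as follows; the regular expression, file name and bot heuristic are assumptions, not the authors' actual pipeline.

        import re

        # "request" status size "referer" "user-agent" in combined log format.
        LINE = re.compile(r'"(?P<method>\w+) \S+ [^"]*" (?P<status>\d{3}) \S+ '
                          r'"[^"]*" "(?P<agent>[^"]*)"')
        BOT_HINTS = ("bot", "crawler", "spider")   # crude user-agent heuristic

        def keep(line):
            m = LINE.search(line)
            return bool(m) and m.group("method") == "GET" \
                and m.group("status") == "200" \
                and not any(h in m.group("agent").lower() for h in BOT_HINTS)

        with open("access.log") as src:              # hypothetical log file
            page_views = [line for line in src if keep(line)]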

  7. Adding a visualization feature to web search engines: it's time.

    Science.gov (United States)

    Wong, Pak Chung

    2008-01-01

    It's widely recognized that all Web search engines today are almost identical in presentation layout and behavior. In fact, the same presentation approach has been applied to depicting search engine results pages (SERPs) since the first Web search engine launched in 1993. In this Visualization Viewpoints article, I propose to add a visualization feature to Web search engines and suggest that the new addition can improve search engines' performance and capabilities, which in turn lead to better Web search technology.

  8. WAPTT - Web Application Penetration Testing Tool

    Directory of Open Access Journals (Sweden)

    DURIC, Z.

    2014-02-01

    Web application vulnerabilities allow attackers to perform malicious actions that range from gaining unauthorized account access to obtaining sensitive data. The number of reported web application vulnerabilities has increased dramatically in the last decade. Most of these vulnerabilities result from improper input validation and sanitization. The most important of them are: SQL injection (SQLI), Cross-Site Scripting (XSS) and Buffer Overflow (BOF). In order to address these vulnerabilities we designed and developed WAPTT (Web Application Penetration Testing Tool), a web application penetration testing tool. Unlike other web application penetration testing tools, this tool is modular, and can be easily extended by the end user. In order to improve the efficiency of SQLI vulnerability detection, WAPTT uses an efficient algorithm for page similarity detection. The proposed tool showed promising results as compared to six well-known web application scanners in detecting various web application vulnerabilities.
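
    The abstract does not specify WAPTT's page similarity algorithm; one common way to decide whether two HTTP responses are essentially the same page (useful when probing for SQLI) is word-shingle Jaccard similarity, sketched here as an assumption rather than the tool's actual method.

        def shingles(text, k=4):
            # Set of k-word shingles from a page's visible text.
            words = text.split()
            return {" ".join(words[i:i + k])
                    for i in range(max(1, len(words) - k + 1))}

        def page_similarity(page_a, page_b):
            # Jaccard similarity of shingle sets; values near 1.0 suggest the
            # application returned the same page for two different inputs.
            a, b = shingles(page_a), shingles(page_b)
            return len(a & b) / len(a | b) if a and b else 0.0

        print(page_similarity("syntax error near line 3 of query",
                              "syntax error near line 7 of query"))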

  9. It's Time to Use a Wiki as Part of Your Web Site

    Science.gov (United States)

    Ribaric, Tim

    2007-01-01

    Without a doubt, the term "wiki" has leaked into almost every discussion concerning Web 2.0. The real question becomes: Is there a place for a wiki on every library Web site? The answer should be an emphatic "yes." People often praise the wiki because it offers simple page creation and provides instant gratification for amateur Web developers.…

  10. Web-based X-ray quality control documentation.

    Science.gov (United States)

    David, George; Burnett, Lou Ann; Schenkel, Robert

    2003-01-01

    The department of radiology at the Medical College of Georgia Hospital and Clinics has developed an equipment quality control web site. Our goal is to provide immediate access to virtually all medical physics survey data. The web site is designed to assist equipment engineers, department management and technologists. By improving communications and access to equipment documentation, we believe productivity is enhanced. The creation of the quality control web site was accomplished in three distinct steps. First, survey data had to be placed in a computer format. The second step was to convert these various computer files to a format supported by commercial web browsers. Third, a comprehensive home page had to be designed to provide convenient access to the multitude of surveys done in the various x-ray rooms. Because we had spent years previously fine-tuning the computerization of the medical physics quality control program, most survey documentation was already in spreadsheet or database format. A major technical decision was the method of conversion of survey spreadsheet and database files into documentation appropriate for the web. After an unsatisfactory experience with a HyperText Markup Language (HTML) converter (packaged with spreadsheet and database software), we tried creating Portable Document Format (PDF) files using Adobe Acrobat software. This process preserves the original formatting of the document and takes no longer than conventional printing; therefore, it has been very successful. Although the PDF file generated by Adobe Acrobat is a proprietary format, it can be displayed through a conventional web browser using the freely distributed Adobe Acrobat Reader program that is available for virtually all platforms. Once a user installs the software, it is automatically invoked by the web browser whenever the user follows a link to a file with a PDF extension. Although no confidential patient information is available on the web site, our legal

  11. Fifteen-year trend in information on the World Wide Web for patients with rheumatoid arthritis: evolving, but opportunities for improvement remain.

    Science.gov (United States)

    Castillo-Ortiz, Jose Dionisio; de Jesus Valdivia-Nuno, Jose; Ramirez-Gomez, Andrea; Garagarza-Mariscal, Heber; Gallegos-Rios, Carlos; Flores-Hernandez, Gabriel; Hernandez-Sanchez, Luis; Brambila-Barba, Victor; Castaneda-Sanchez, Jose Juan; Barajas-Ochoa, Zalathiel; Suarez-Rico, Angel; Sanchez-Gonzalez, Jorge Manuel; Ramos-Remus, Cesar

    2016-09-01

    The aim of this study was to assess the changes in the characteristics of rheumatoid arthritis information on the Internet over a 15-year period and the positioning of Web sites posted by universities, hospitals, and medical associations. We replicated the methods of a 2001 study assessing rheumatoid arthritis information on the Internet using WebCrawler. All Web sites and pages were critically assessed for relevance, scope, authorship, type of publication, and financial objectives. Differences between studies were considered significant if 95 % confidence intervals did not overlap. Additionally, we added a Google search with assessments of the quality of content of web pages and of the Web sites posted by medical institutions. There were significant differences between the present study's WebCrawler search and the 2001-referent study. There were increases in information sites (82 vs 36 %) and rheumatoid arthritis-specific discussion pages (59 vs 8 %), and decreases in advertisements (2 vs 48 %) and alternative therapies (27 vs 45 %). The quality of content of web pages is still dispersed; just 37 % were rated as good. Among the first 300 hits, 30 (10 %) were posted by medical institutions, 17 of them in the USA. Regarding readability, 7 % of these 30 web pages required 6 years, 27 % required 7-9 years, 27 % required 10-12 years, and 40 % required 12 or more years of schooling. The Internet has evolved in the last 15 years. Medical institutions are also better positioned. However, there are still areas for improvement, such as the quality of the content, leadership of medical institutions, and readability of information.

  12. REVIEW PAPER ON THE DEEP WEB DATA EXTRACTION

    OpenAIRE

    Patil, V. S.; Sitafale, Sneha; Kale, Priyanka; Bhujbal, Poonam; Dandge, Mohini

    2018-01-01

    Deep web data extraction is the process of extracting a set of data records and the items that they contain from a query result page. Such structured data can be later integrated into results from other data sources and given to the user in a single, cohesive view. Domain identification is used to identify the query interfaces related to the domain from the forms obtained in the search process. The surface web contains a large amount of unfiltered information, whereas the deep web includes hi...

  13. A new means of communication with the populations: the Extremadura Regional Government Radiological Monitoring alert WEB Page

    International Nuclear Information System (INIS)

    Baeza, A.; Vasco, J.; Miralles, Y.; Torrado, L.; Gil, J. M.

    2003-01-01

    XXI a summary sheet, relatively easy to interpret, giving the radiation levels and dosimetry detected during the immediately preceding semester. Recently too, the challenge has been taken on of providing constantly updated information on as complex a topic as the radiological monitoring of the environment. To this end, a Web page has been developed dealing with the operation and results provided by the aforementioned Radiological Warning Network of Extremadura. The page structure consists of seven major blocks: (i) the origin and objectives of the network; (ii) a description of the stations of the network; (iii) their modes of operation in normal circumstances and in the case of an operational or radiological anomaly; (iv) the results that the network provides; (v) a glossary of terms to clarify as straightforwardly as possible some of the terms and concepts that are of unavoidable use, but are unfamiliar to the population in general; (vi) information about links to other Web sites that also deal with this issue to some degree; and (vii) the option of questions and contacts between the visitor to the page and those responsible for its creation and maintenance. Actions such as that described here will doubtless contribute positively to increasing the necessary trust that the population deserves to have in the correct operation of the measures adopted to guarantee their adequate radiological protection. (Author)

  14. DW3 Classical Music Resources: Managing Mozart on the Web.

    Science.gov (United States)

    Fineman, Yale

    2001-01-01

    Discusses the development of DW3 (Duke World Wide Web) Classical Music Resources, a vertical portal that comprises the most comprehensive collection of classical music resources on the Web with links to more than 2800 non-commercial pages/sites in over a dozen languages. Describes the hierarchical organization of subject headings and considers…

  15. Food marketing on popular children's web sites: a content analysis.

    Science.gov (United States)

    Alvy, Lisa M; Calvert, Sandra L

    2008-04-01

    In 2006 the Institute of Medicine (IOM) concluded that food marketing was a contributor to childhood obesity in the United States. One recommendation of the IOM committee was for research on newer marketing venues, such as Internet Web sites. The purpose of this cross-sectional study was to answer the IOM's call by examining food marketing on popular children's Web sites. Ten Web sites were selected based on market research conducted by KidSay, which identified favorite sites of children aged 8 to 11 years during February 2005. Using a standardized coding form, these sites were examined page by page for the existence, type, and features of food marketing. Web sites were compared using chi-square analyses. Although food marketing was not pervasive on the majority of the sites, seven of the 10 Web sites contained food marketing. The products marketed were primarily candy, cereal, quick serve restaurants, and snacks. Candystand.com, a food product site, contained a significantly greater amount of food marketing than the other popular children's Web sites. Because the foods marketed to children are not consistent with a healthful diet, nutrition professionals should consider joining advocacy groups to pressure industry to reduce online food marketing directed at youth.

  16. Approaching the largest ‘API’: extracting information from the Internet with Python

    Directory of Open Access Journals (Sweden)

    Jonathan E. Germann

    2018-02-01

    This article explores the need for libraries to algorithmically access and manipulate the world's largest API: the Internet. The billions of pages on the 'Internet API' (HTTP, HTML, CSS, XPath, DOM, etc.) are easily accessible and manipulable. Libraries can assist in creating meaning through the datafication of information on the world wide web. Because most information is created for human consumption, some programming is required for automated extraction. Python is an easy-to-learn programming language with extensive packages and community support for web page automation. Four packages (Urllib, Selenium, BeautifulSoup, Scrapy) in Python can automate almost any web page for projects of all sizes. An example warrant data project is explained to illustrate how well Python packages can manipulate web pages to create meaning through assembling custom datasets.
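
    In the spirit of the article, a minimal sketch using two of the packages it names (the standard library's urllib and BeautifulSoup) turns the anchors of a page into simple records; the URL is a placeholder.

        import urllib.request
        from bs4 import BeautifulSoup   # pip install beautifulsoup4

        URL = "https://example.org/"    # placeholder target page

        with urllib.request.urlopen(URL) as resp:
            html = resp.read()

        soup = BeautifulSoup(html, "html.parser")
        # Datafication in miniature: anchor tags become (text, href) pairs.
        records = [(a.get_text(strip=True), a.get("href"))
                   for a in soup.find_all("a")]
        for text, href in records:
            print(text, "->", href)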

  17. Demonstration: SpaceExplorer - A Tool for Designing Ubiquitous Web Applications for Collections of Displays

    DEFF Research Database (Denmark)

    Hansen, Thomas Riisgaard

    2007-01-01

    This demonstration presents a simple browser plug-in that grants web applications the ability to use multiple nearby devices for displaying web content. A web page can, e.g., be designed to present additional information on nearby devices. The demonstration introduces a light-weight peer-to-peer arc...

  18. Life Cycle Project Plan Outline: Web Sites and Web-based Applications

    Science.gov (United States)

    This tool is a guideline for planning and checking for 508 compliance on web sites and web based applications. Determine which EIT components are covered or excepted, which 508 standards and requirements apply, and how to implement them.

  19. Co-clustering Analysis of Weblogs Using Bipartite Spectral Projection Approach

    DEFF Research Database (Denmark)

    Xu, Guandong; Zong, Yu; Dolog, Peter

    2010-01-01

    Web clustering is an approach for aggregating Web objects into various groups according to underlying relationships among them. Finding co-clusters of Web objects is an interesting topic in the context of Web usage mining, which is able to capture the underlying user navigational interest and content preference simultaneously. In this paper we will present an algorithm using bipartite spectral clustering to co-cluster Web users and pages. The usage data of users visiting Web sites is modeled as a bipartite graph and the spectral clustering is then applied to the graph representation of usage data. The proposed approach is evaluated by experiments performed on real datasets, and the impact of using various clustering algorithms is also investigated. Experimental results have demonstrated the employed method can effectively reveal the subset aggregates of Web users and pages which...
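
    A compact sketch of the standard bipartite spectral co-clustering recipe that such work builds on: scale the user-page matrix by its row and column degrees, take leading singular vectors, and k-means the stacked embeddings. Details such as the cluster count, the toy matrix and the use of all leading singular vectors are simplifying assumptions.

        import numpy as np
        from scipy.sparse.linalg import svds
        from sklearn.cluster import KMeans

        def cocluster(usage, k=2):
            # Degree-normalize: D1^(-1/2) A D2^(-1/2).
            d1 = np.sqrt(usage.sum(axis=1))
            d2 = np.sqrt(usage.sum(axis=0))
            an = usage / np.outer(d1, d2)
            u, _, vt = svds(an.astype(float), k=k)     # leading singular vectors
            # Stack scaled user and page embeddings, then cluster them jointly.
            z = np.vstack([u / d1[:, None], vt.T / d2[:, None]])
            labels = KMeans(n_clusters=k, n_init=10).fit_predict(z)
            return labels[: usage.shape[0]], labels[usage.shape[0]:]

        # Toy usage matrix: 4 users x 4 pages with two visible blocks.
        A = np.array([[3, 2, 0, 0],
                      [2, 3, 0, 0],
                      [0, 0, 4, 1],
                      [0, 0, 1, 4]])
        users, pages = cocluster(A, k=2)
        print(users, pages)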

  20. Corporate Writing in the Web of Postmodern Culture and Postindustrial Capitalism.

    Science.gov (United States)

    Boje, David M.

    2001-01-01

    Uses Nike as an example to explore the impact of corporate writing (in annual reports, press releases, advertisements, web pages, sponsored research, and consultant reports). Shows how the intertextual web of "Nike Writing," as it legitimates industry-wide labor and ecological practices has significant, negative consequences for academic…

  1. Oh What a Tangled Biofilm Web Bacteria Weave

    Science.gov (United States)

    By Elia Ben-Ari, posted May 1. ... a suitable surface, some water and nutrients, and bacteria will likely put down stakes and form biofilms. ...

  2. Web-enabled work permit system for fast breeder test reactor

    International Nuclear Information System (INIS)

    Madurai Meenachi, N.; Vinolia, K.; Ramanathan, V.

    2003-01-01

    The objective of this project is to computerize and web-enable the Work Permit System for the Fast Breeder Test Reactor (FBTR) at IGCAR, Kalpakkam. The existing Work Permit System at FBTR was studied in detail. Since all the formalities were paper-based, the risk of human error in scrutinizing all permits before reactor start-up was high, and compilation of reports (daily, monthly, yearly etc.) was tedious. The work permit system was therefore automated in order to enable the operation group to manage the maintenance work carried out in the plant systematically. The entire project was classified into five permit modules: maintenance, transfer, return, cancellation and reissue. Each module takes care of the entry and maintenance of data in its respective fields and tables. The user is also provided with an option to take a hard copy of the report of his/her choice. A client/server based system was designed to web-enable the entire project. The server program was designed using VB 6.0 as the front end and an MS Access database as the back end to store the data. The client software was developed using Active Server Pages and published using Personal Web Server on the intranet. A number of administrative tools have been incorporated in the software to ensure access security and integrity of the database. An online help feature with search facilities was added to the software. The work permit system software is already in use at FBTR and has been deemed an invaluable aid in improving the availability of the reactor and determining the performance history of the equipment. (author)

  3. Improvements to Web Toolkits for Antelope-based Real-time Monitoring Systems

    Science.gov (United States)

    Lindquist, K. G.; Newman, R. L.; Vernon, F. L.; Hansen, T. S.; Orcutt, J.

    2005-12-01

    The Antelope Environmental Monitoring System (http://www.brtt.com) is a robust middleware architecture for near-real-time data collection, analysis, archiving and distribution. Antelope has an extensive toolkit allowing users to interact directly with their datasets. A rudimentary interface was developed in previous work between Antelope and the web-scripting language PHP (The PHP language is described in more detail at http://www.php.net). This interface allowed basic application development for remote access to and interaction with near-real-time data through a World Wide Web interface. We have added over 70 new functions for the Antelope interface to PHP, providing a solid base for web-scripting of near-real-time Antelope database applications. In addition, we have designed a new structure for web sites to be created from the Antelope platform, including PHP applications and Perl CGI scripts as well as static pages. Finally we have constructed the first version of the dbwebproject program, designed to dynamically create and maintain web-sites from specified recipes. These tools have already proven valuable for the creation of web tools for the dissemination of and interaction with near-real-time data streams from multi-signal-domain real-time sensor networks. We discuss current and future directions of this work in the context of the ROADNet project. Examples and applications of these core tools are elaborated in a companion presentation in this session (Newman et al., AGU 2005, session IN06).

  4. 77 FR 60138 - Trinity Adaptive Management Working Group; Public Teleconference/Web-Based Meeting

    Science.gov (United States)

    2012-10-02

    ... meeting. Background: The TAMWG affords stakeholders the opportunity to give policy, management, and ... Trinity Adaptive Management Working Group; Public Teleconference/Web-Based Meeting. AGENCY: Fish ..., announce a public teleconference/web-based meeting of ...

  5. BOWS (bioinformatics open web services) to centralize bioinformatics tools in web services.

    Science.gov (United States)

    Velloso, Henrique; Vialle, Ricardo A; Ortega, J Miguel

    2015-06-02

    Bioinformaticians face a range of difficulties in getting locally-installed tools running and producing results; they would greatly benefit from a system that could centralize most of the tools, using an easy interface for input and output. Web services, due to their universal nature and widely known interface, constitute a very good option to achieve this goal. Bioinformatics open web services (BOWS) is a system based on generic web services produced to allow programmatic access to applications running on high-performance computing (HPC) clusters. BOWS mediates access to registered tools by providing front-end and back-end web services. Programmers can install applications on HPC clusters in any programming language and use the back-end service to check for new jobs and their parameters, and then to send the results to BOWS. Programs running on ordinary computers consume the BOWS front-end service to submit new processes and read results. BOWS compiles Java clients, which encapsulate the front-end web service requests, and automatically creates a web page that displays the registered applications and clients. Applications registered in BOWS can be accessed from virtually any programming language through web services, or using the standard Java clients. The back-end can run on HPC clusters, allowing bioinformaticians to remotely run high-processing-demand applications directly from their machines.
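
    The abstract describes a submit-then-poll pattern between clients and the BOWS front-end. The sketch below shows that generic pattern only; the base URL, route names and JSON fields are hypothetical placeholders, not the real BOWS API.

        import time
        import requests   # pip install requests

        BASE = "https://bows.example/api"   # hypothetical endpoint

        def run_job(tool, params):
            # Submit a job, then poll until the HPC back-end reports a result.
            job = requests.post(f"{BASE}/jobs",
                                json={"tool": tool, "params": params}).json()
            while True:
                status = requests.get(f"{BASE}/jobs/{job['id']}").json()
                if status["state"] in ("finished", "failed"):
                    return status
                time.sleep(5)   # cluster jobs may queue for a while

        result = run_job("blast", {"query": "ATGGCC", "db": "nr"})
        print(result["state"])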

  6. An Optimization Model for Product Placement on Product Listing Pages

    Directory of Open Access Journals (Sweden)

    Yan-Kwang Chen

    2014-01-01

    The design of product listing pages is a key component of website design because it has a significant influence on the sales volume of a website. This study focuses on product placement in designing product listing pages. Product placement concerns how vendors of online stores place their products on the product listing pages to maximize profit. This problem is very similar to the offline shelf management problem. Since product information on a web page is typically communicated through text and images, visual stimuli such as color, shape, size, and spatial arrangement often have an effect on the visual attention of online shoppers and, in turn, influence their eventual purchase decisions. In view of the above, this study synthesizes the visual attention literature and the theory of shelf-space allocation to develop a mathematical programming model, solved with genetic algorithms, for finding optimal solutions to the focused issue. The validity of the model is illustrated with example problems.
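
    The abstract gives neither the model nor the constraints, so the following is only a toy genetic-algorithm sketch of the idea: assign products to listing-page slots so that high-margin items sit in high-attention positions. The slot weights and margins are made-up numbers.

        import random

        SLOT_WEIGHT = [1.0, 0.8, 0.6, 0.4]                  # attention per slot (assumed)
        PROFIT = {"A": 5.0, "B": 3.0, "C": 2.0, "D": 1.0}   # margin per product (assumed)

        def fitness(layout):
            # Expected profit: margin weighted by the slot's attention value.
            return sum(SLOT_WEIGHT[i] * PROFIT[p] for i, p in enumerate(layout))

        def crossover(a, b):
            # Order crossover: keep a prefix of one parent, fill from the other.
            cut = random.randrange(1, len(a))
            return a[:cut] + [p for p in b if p not in a[:cut]]

        def mutate(layout, rate=0.2):
            if random.random() < rate:
                i, j = random.sample(range(len(layout)), 2)
                layout[i], layout[j] = layout[j], layout[i]
            return layout

        population = [random.sample(list(PROFIT), len(PROFIT)) for _ in range(20)]
        for _ in range(50):
            population.sort(key=fitness, reverse=True)
            parents = population[:10]
            population = parents + [mutate(crossover(*random.sample(parents, 2)))
                                    for _ in range(10)]
        best = max(population, key=fitness)
        print(best, fitness(best))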

  7. The invisible Web uncovering information sources search engines can't see

    CERN Document Server

    Sherman, Chris

    2001-01-01

    Enormous expanses of the Internet are unreachable with standard web search engines. This book provides the key to finding these hidden resources by identifying how to uncover and use invisible web resources. Mapping the invisible Web, when and how to use it, assessing the validity of the information, and the future of Web searching are topics covered in detail. Only 16 percent of Net-based information can be located using a general search engine. The other 84 percent is what is referred to as the invisible Web-made up of information stored in databases. Unlike pages on the visible Web, informa

  8. Developing heuristics for Web communication: an introduction to this special issue

    NARCIS (Netherlands)

    van der Geest, Thea; Spyridakis, Jan H.

    2000-01-01

    This article describes the role of heuristics in the Web design process. The five sets of heuristics that appear in this issue are also described, as well as the research methods used in their development. The heuristics were designed to help designers and developers of Web pages or sites to

  9. PageRank model of opinion formation on Ulam networks

    Science.gov (United States)

    Chakhmakhchyan, L.; Shepelyansky, D.

    2013-12-01

    We consider a PageRank model of opinion formation on Ulam networks, generated by the intermittency map and the typical Chirikov map. The Ulam networks generated by these maps have certain similarities with scale-free networks such as the World Wide Web (WWW), showing an algebraic decay of the PageRank probability. We find that the opinion formation process on Ulam networks has certain similarities to, but also distinct features from, the WWW. We attribute these distinctions to internal differences in the network structure of the Ulam and WWW networks. We also analyze the process of opinion formation in the frame of the generalized Sznajd model, which protects the opinion of small communities.
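
    To make the Ulam-network construction concrete, a sketch: discretize [0, 1] into cells, push sample points in each cell through a Pomeau-Manneville-style intermittency map, and count transitions between cells; the resulting row-normalized matrix can then be fed to a PageRank routine such as the one sketched earlier. The map exponent, resolution and sample count are assumptions.

        import numpy as np

        def ulam_network(n_cells=100, samples=50, z=2.0, seed=0):
            # Transition counts for the intermittency map x -> x + x**z (mod 1).
            rng = np.random.default_rng(seed)
            adj = np.zeros((n_cells, n_cells))
            for i in range(n_cells):
                x = (i + rng.random(samples)) / n_cells   # points inside cell i
                y = (x + x**z) % 1.0                      # one step of the map
                for j in (y * n_cells).astype(int):
                    adj[i, j] += 1                        # edge cell i -> cell j
            return adj

        G = ulam_network()
        print(G.shape, G.sum(axis=1)[:3])   # each row sums to `samples`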

  10. Using ant-behavior-based simulation model AntWeb to improve website organization

    Science.gov (United States)

    Li, Weigang; Pinheiro Dib, Marcos V.; Teles, Wesley M.; Morais de Andrade, Vlaudemir; Alves de Melo, Alba C. M.; Cariolano, Judas T.

    2002-03-01

    Some web usage mining algorithms have shown the potential to find differences between the organization of a website and the organization expected by its visitors. However, there is still no efficient method and criterion for a web administrator to measure the performance of a modification. In this paper, we developed AntWeb, a model inspired by ants' behavior, to simulate the sequence of visits to a website in order to measure the efficiency of the web structure. We implemented a web usage mining algorithm using backtracking on the intranet website of Politec Informatic Ltd., Brazil. We defined throughput (the number of visitors who reach their target pages per time unit, relative to the total number of visitors) as an index to measure the website's performance. We also used the links in a web page to represent the effect of visitors' pheromone trails. For every modification in the website organization, for example putting a link from the expected location to the target object, the simulation reported the value of throughput as a quick answer about this modification. The experiment showed the stability of our simulation model, and a positive effect of the modification to the intranet website of Politec.

  11. SWORS: a system for the efficient retrieval of relevant spatial web objects

    DEFF Research Database (Denmark)

    Cao, Xin; Cong, Gao; Jensen, Christian S.

    2012-01-01

    Spatial web objects that possess both a geographical location and a textual description are gaining in prevalence. This gives prominence to spatial keyword queries that exploit both location and textual arguments. Such queries are used in many web services such as yellow pages and maps services....

  12. Perché è necessario capire il web

    CERN Multimedia

    Rosenthal, Edward C

    2007-01-01

    The birth of the web of content, Web 2.0, Wikipedia, global collaboration, blogs, net neutrality, digital freedom and the future of the net: an interview with Robert Cailliau, who invented the WWW with Tim Berners-Lee. (2 pages)

  13. Using Web Server Logs to Track Users through the Electronic Forest

    Science.gov (United States)

    Coombs, Karen A.

    2005-01-01

    This article analyzes server logs, providing helpful information in making decisions about Web-based services. The author indicates, as a result of analyzing server logs, several interesting things about the users' behavior were learned. The resulting findings are discussed in this article. Certain pages of the author's Web site, for instance, are…

  14. Estadísticas de visitas en portales web institucionales como indicador de respuesta del público a propuestas de divulgación

    Science.gov (United States)

    Lares, M.

    The presence of institutions on the internet is nowadays very important to strengthen communication channels, both internal and with the general public. The Córdoba Observatory has several web portals, including the official web page, a blog and a presence on several social networks. These are among the fundamental pillars for outreach activities, and serve as communication channels for events and scientific, academic, and outreach news. They are also a source of information for the staff, as well as of data related to the Observatory's internal organization and scientific production. Several statistical studies are presented, based on data taken from visits to the official web pages. I comment on some aspects of the role of web pages as a source of consultation and as a quick response to information needs. FULL TEXT IN SPANISH

  15. A Study of HTML Title Tag Creation Behavior of Academic Web Sites

    Science.gov (United States)

    Noruzi, Alireza

    2007-01-01

    The HTML title tag information should identify and describe exactly what a Web page contains. This paper analyzes the "Title element" and raises a significant question: "Why is the title tag important?" Search engines base search results and page rankings on certain criteria. Among the most important criteria is the presence of the search keywords…

  16. Discovering How Students Search a Library Web Site: A Usability Case Study.

    Science.gov (United States)

    Augustine, Susan; Greene, Courtney

    2002-01-01

    Discusses results of a usability study at the University of Illinois Chicago that investigated whether Internet search engines have influenced the way students search library Web sites. Results show students use the Web site's internal search engine rather than navigating through the pages; have difficulty interpreting library terminology; and…

  17. The LifeWebs project: A call for data describing plant-herbivore interaction networks

    Czech Academy of Sciences Publication Activity Database

    Fayle, Tom Maurice; Sam, Kateřina; Humlová, A.; Cagnolo, L.; Novotný, Vojtěch

    2016-01-01

    Roč. 8, č. 4 (2016), č. článku e31122. ISSN 1948-6596 R&D Projects: GA ČR GB14-36098G Institutional support: RVO:60077344 Keywords : herbivory * food web * trophic interaction Subject RIV: EH - Ecology, Behaviour https://escholarship.org/uc/item/4mv2t52s

  18. RCrawler: An R package for parallel web crawling and scraping

    Directory of Open Access Journals (Sweden)

    Salim Khalil

    2017-01-01

    RCrawler is a contributed R package for domain-based web crawling and content scraping. As the first implementation of a parallel web crawler in the R environment, RCrawler can crawl, parse, store pages, extract contents, and produce data that can be directly employed for web content mining applications. However, it is also flexible, and could be adapted to other applications. The main features of RCrawler are multi-threaded crawling, content extraction, and duplicate content detection. In addition, it includes functionalities such as URL and content-type filtering, depth level controlling, and a robots.txt parser. Our crawler has a highly optimized system, and can download a large number of pages per second while being robust against certain crashes and spider traps. In this paper, we describe the design and functionality of RCrawler, and report on our experience of implementing it in an R environment, including different optimizations that handle the limitations of R. Finally, we discuss our experimental results.

  19. 76 FR 14034 - Proposed Collection; Comment Request; NCI Cancer Genetics Services Directory Web-Based...

    Science.gov (United States)

    2011-03-15

    ... DEPARTMENT OF HEALTH AND HUMAN SERVICES, National Institutes of Health. Proposed Collection; Comment Request; NCI Cancer Genetics Services Directory Web-Based Application Form and Update Mailer. Summary: In ... Cancer Genetics Services Directory Web-based Application Form and Update Mailer. ...

  20. Web Annotation and Threaded Forum: How Did Learners Use the Two Environments in an Online Discussion?

    Science.gov (United States)

    Sun, Yanyan; Gao, Fei

    2014-01-01

    Web annotation is a Web 2.0 technology that allows learners to work collaboratively on web pages or electronic documents. This study explored the use of Web annotation as an online discussion tool by comparing it to a traditional threaded discussion forum. Ten graduate students participated in the study. Participants had access to both a Web…

  1. Web-based Logbook System for EAST Experiments

    International Nuclear Information System (INIS)

    Yang Fei; Xiao Bingjia

    2010-01-01

    The implementation of a web-based logbook system on EAST is introduced; the system stores comments on the experiments in a database and makes the documents accessible via various web browsers. A three-tier software architecture and asynchronous access technology are adopted to make the system effective. Authorized users can view real-time discharge information, comments from others and signal plots; add, delete, or revise their own comments; search signal data or comments under complicated search conditions; and collect relevant information and output it to an Excel file. The web pages are automatically updated after a new discharge is completed, without manual refreshing.

  2. Academic medical center libraries on the Web.

    Science.gov (United States)

    Tannery, N H; Wessel, C B

    1998-10-01

    Academic medical center libraries are moving towards publishing electronically, utilizing networked technologies, and creating digital libraries. The catalyst for this movement has been the Web. An analysis of academic medical center library Web pages was undertaken to assess the information created and communicated in early 1997. A summary of present uses and suggestions for future applications is provided. A method for evaluating and describing the content of library Web sites was designed. The evaluation included categorizing basic information such as description and access to library services, access to commercial databases, and use of interactive forms. The main goal of the evaluation was to assess original resources produced by these libraries.

  3. CREATION OF A WEB MAP AND MOBILE APPLICATION BASED ON A PRINTED BOOK

    Directory of Open Access Journals (Sweden)

    V. Holubec

    2016-06-01

    The project describes a process of converting printed books into a web map and mobile application. The goal of the project is to make the spatial data in the books accessible to the wide public using GIS, especially on the web, in order to spread information about this topic. Moreover, as a result of the analysis and of the new perspectives gained from the data context, historians will be able to find new connections. The books that serve as sources for the project (two books of about 1400 pages in total, featuring hundreds of locations, each associated with several events of different types) refer to places with many addresses in Prague and some villages in the Czech Republic which are related to events that took place during World War II. The paper describes the steps of the conversion, the design of the data model in an Esri geodatabase and examples of outputs. The historical data are connected to current addresses, and thanks to such a combination of historical and current locations, the project will help to discover a part of the history of the Czech Republic and show new context in the data via GIS capabilities. This project is a continuation of a project which recorded a death march on a map. It is a unique project created in cooperation with Academia Publishing. The outputs of the project will serve as a core resource for a multimedia history portal. The author of the books is currently writing sequels from the post-war period and at least two other books are envisioned, so the future of the project is ensured.

  4. Web Accessibility and Guidelines

    Science.gov (United States)

    Harper, Simon; Yesilada, Yeliz

    Access to, and movement around, complex online environments, of which the World Wide Web (Web) is the most popular example, has long been considered an important and major issue in the Web design and usability field. The commonly used slang phrase ‘surfing the Web’ implies rapid and free access, pointing to its importance among designers and users alike. It has also been long established that this potentially complex and difficult access is further complicated, and becomes neither rapid nor free, if the user is disabled. There are millions of people who have disabilities that affect their use of the Web. Web accessibility aims to help these people to perceive, understand, navigate, and interact with, as well as contribute to, the Web, and thereby the society in general. This accessibility is, in part, facilitated by the Web Content Accessibility Guidelines (WCAG) currently moving from version one to two. These guidelines are intended to encourage designers to make sure their sites conform to specifications, and in that conformance enable the assistive technologies of disabled users to better interact with the page content. In this way, it was hoped that accessibility could be supported. While this is in part true, guidelines do not solve all problems and the new WCAG version two guidelines are surrounded by controversy and intrigue. This chapter aims to establish the published literature related to Web accessibility and Web accessibility guidelines, and discuss limitations of the current guidelines and future directions.

  5. A Web-Based System for Monitoring and Controlling Multidisciplinary Design Projects

    Science.gov (United States)

    Salas, Andrea O.; Rogers, James L.

    1997-01-01

    In today's competitive environment, both industry and government agencies are under enormous pressure to reduce the time and cost of multidisciplinary design projects. A number of frameworks have been introduced to assist in this process by facilitating the integration of and communication among diverse disciplinary codes. An examination of current frameworks reveals weaknesses in various areas such as sequencing, displaying, monitoring, and controlling the design process. The objective of this research is to explore how Web technology, in conjunction with an existing framework, can improve these areas of weakness. This paper describes a system that executes a sequence of programs, monitors and controls the design process through a Web-based interface, and visualizes intermediate and final results through the use of Java™ applets. A small sample problem, which includes nine processes with two analysis programs that are coupled to an optimizer, is used to demonstrate the feasibility of this approach.

  6. Construction of a bibliographic information database and a web directory for the nuclear science and engineering

    Energy Technology Data Exchange (ETDEWEB)

    Oh, Jeong Hoon; Kim, Tae Whan; Lee, Ji Ho; Chun, Young Chun; Yu, An Na

    2005-11-15

    The objective of this project is to construct a bibliographic information database and a web directory in the nuclear field. Its construction is very timely and important, because nuclear science and technology, being a giant and complex engineering field, has a considerable effect on all other sciences and technologies. We aimed to firmly build up a basis for efficient management of the bibliographic information database and the web directory in the nuclear field. The results of this project that we achieved in this year are as follows: first, construction of the bibliographic information database in the nuclear field (target: 1,500 titles; research reports: 1,000 titles, full-text reports: 250 titles, full-text articles: 250 titles). Second, completion of the construction of the web directory in the nuclear field by using SWING (total figure achieved: 2,613 titles). We plan to actively provide more information to the general public interested in the nuclear field and to the experts of the field through this bibliographic information database on KAERI's home page, KAERI's electronic library and other related sites, as well as through participation at various seminars and meetings related to the nuclear field.

  7. A user-friendly, dynamic web environment for remote data browsing and analysis of multiparametric geophysical data within the MULTIMO project

    Science.gov (United States)

    Carniel, Roberto; Di Cecca, Mauro; Jaquet, Olivier

    2006-05-01

    In the framework of the EU-funded project "Multi-disciplinary monitoring, modelling and forecasting of volcanic hazard" (MULTIMO), multiparametric data have been recorded at the MULTIMO station in Montserrat. Moreover, several other long time series, recorded at Montserrat and at other volcanoes, have been acquired in order to test stochastic and deterministic methodologies under development. Creating a general framework to handle data efficiently is a considerable task even for homogeneous data; in the case of heterogeneous data, it becomes a major issue. A need therefore arose for a consistent way of browsing such a heterogeneous dataset in a user-friendly fashion. Additionally, a framework for applying the calculation of the developed dynamical parameters to the data series was needed in order to easily keep these parameters under control, e.g. for monitoring, research or forecasting purposes. The solution which we present is completely based on Open Source software, including the Linux operating system, the MySQL database management system, the Apache web server, the Zope application server, the Scilab math engine, the Plone content management framework, and the Unified Modelling Language. From the user's point of view, the main advantage is the possibility of browsing through datasets recorded on different volcanoes, with different instruments, at different sampling frequencies, and stored in different formats, all via a consistent, user-friendly interface that transparently runs queries against the database, retrieves the data from the main storage units, generates the graphs and produces dynamically generated web pages to interact with the user.

  8. Mobile Web for Pervasive environments - designing web experiences for multiple mobile devices

    DEFF Research Database (Denmark)

    Hansen, Thomas Riisgaard

    2008-01-01

    In this paper we present an architecture for designing web pages that use multiple mobile and stationary devices to present web content. The architecture extends standard web technology with a number of functions for expressing how web content might migrate between and use multiple displays. The architecture was developed to support desktop applications, but in this paper we describe how it can be extended to mobile devices by using AJAX technology. The paper also presents an implementation and a number of applications for mobile devices developed with this framework.

  9. Accessing NASA Technology with the World Wide Web

    Science.gov (United States)

    Nelson, Michael L.; Bianco, David J.

    1995-01-01

    NASA Langley Research Center (LaRC) began using the World Wide Web (WWW) in the summer of 1993, becoming the first NASA installation to provide a Center-wide home page. This coincided with a reorganization of LaRC to provide a more concentrated focus on technology transfer to both aerospace and non-aerospace industry. Use of WWW and NCSA Mosaic not only provides automated information dissemination, but also allows for the implementation, evolution and integration of many technology transfer and technology awareness applications. This paper describes several of these innovative applications, including the on-line presentation of the entire Technology OPportunities Showcase (TOPS), an industrial partnering showcase that exists on the Web long after the actual 3-day event ended. The NASA Technical Report Server (NTRS) provides uniform access to many logically similar, yet physically distributed NASA report servers. WWW is also the foundation of the Langley Software Server (LSS), an experimental software distribution system which will distribute LaRC-developed software. In addition to the more formal technology distribution projects, WWW has been successful in connecting people with technologies and people with other people.

  10. Development and Enhancement of a Web-based Nuclear Education System

    International Nuclear Information System (INIS)

    Rho, Sipyo; Lee, K. B.; Nam, Y. M; Kim, H. K.; Hwang, I. A.; Yang, S. W.; Nam, J. S.; Yoo, H. W.

    2012-02-01

    To deliver rapidly changing technologies effectively and economically, e-learning in the field of nuclear technology is being adopted gradually. In the first year of this project, 'Development and Enhancement of Web-based Nuclear Education System', we established a server system, set up several home pages at the NTC (Nuclear Training and Education Center in KAERI) and newly developed an LMS (Learning Management System). We selected MOODLE because it is a popular open-source LMS, and connected it to the ANENT (Asian Network for Education in Nuclear Technology) web portal, which cooperates with IAEA/NKM. We produced e-learning content mainly composed of video clips filmed during lectures in NTC training and education courses. The total running time of the content is 100 hours. This e-learning content will be reinforced by adding quizzes and Q&A. Another activity was web-conferencing between NWU in South Africa and KAERI, which was carried out successfully 4 times. We are going to prepare a pre-course for the foreigners who will take part in our training and education courses

  11. Multigraph: Interactive Data Graphs on the Web

    Science.gov (United States)

    Phillips, M. B.

    2010-12-01

    Many aspects of geophysical science involve time-dependent data that are often presented in the form of a graph. Considering that the web has become a primary means of communication, there are surprisingly few good tools and techniques available for presenting time-series data on the web. The most common solution is to use a desktop tool such as Excel or Matlab to create a graph which is saved as an image and then included in a web page like any other image. This technique is straightforward, but it limits the user to one particular view of the data, and disconnects the graph from the data in a way that makes updating a graph with new data an often cumbersome manual process. This situation is somewhat analogous to the state of mapping before the advent of GIS. Maps existed only in printed form, and creating a map was a laborious process. In the last several years, however, the world of mapping has experienced a revolution in the form of web-based and other interactive computer technologies, so that it is now commonplace for anyone to easily browse through gigabytes of geographic data. Multigraph seeks to bring a similar ease of access to time-series data. Multigraph is a program for displaying interactive time-series data graphs in web pages; it includes a simple way of configuring the appearance of the graph and the data to be included. It allows multiple data sources to be combined into a single graph, and allows the user to explore the data interactively. Multigraph lets users explore and visualize "data space" in the same way that interactive mapping applications such as Google Maps facilitate exploring and visualizing geography. Viewing a Multigraph graph is extremely simple and intuitive, and requires no instructions. Creating a new graph for inclusion in a web page involves writing a simple XML configuration file and requires no programming. Multigraph can read data in a variety of formats, and can display data from a web service, allowing users to "surf
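
    The claim that a new graph needs only a simple XML configuration file can be pictured with the hypothetical sketch below; the element and attribute names are invented for illustration and are not Multigraph's actual schema (Python 3.9+ is assumed for ET.indent).

```python
# Build and print a toy graph-configuration document of the kind the
# abstract describes: data source plus axis/appearance settings.
import xml.etree.ElementTree as ET

graph = ET.Element("graph")
ET.SubElement(graph, "data", {"src": "http://example.org/timeseries.csv",
                              "format": "csv"})
plot = ET.SubElement(graph, "plot")
ET.SubElement(plot, "horizontalaxis", {"type": "datetime", "label": "Time"})
ET.SubElement(plot, "verticalaxis", {"label": "Temperature (C)"})

ET.indent(graph)
print(ET.tostring(graph, encoding="unicode"))
```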

  12. Upgrading to Web 2.0: An Experiential Project to Build a Marketing Wiki

    Science.gov (United States)

    Cronin, John J.

    2009-01-01

    Wikis are one of the newest features of Web 2.0. This article describes the implementation of a project in a marketing course in which students created an interactive textbook using wiki software. Several surprises encountered along the way are described, and the unique problem of grading individual contributions to a wiki is discussed. The author…

  13. Hue combinations in web design for Swedish and Thai users : Guidelines for combining color hues onscreen for Swedish and Thai users in the context of designing web sites

    OpenAIRE

    Ruse, Vidal

    2017-01-01

    Users can assess the visual appeal of a web page within 50 milliseconds and color is the first thing noticed onscreen. That directly influences user perception of the website, and choosing appealing color combinations is therefore crucial for successful web design. Recent scientific research has identified which individual colors are culturally preferred in web design in different countries but there is no similar research on hue combinations. Currently no effective, scientifically based guid...

  14. A Web text acquisition method based on Delphi

    Institute of Scientific and Technical Information of China (English)

    刘建培

    2016-01-01

    This paper proposes a Delphi-based method for acquiring Web text: it obtains the source file of a Web page (.html file), analyzes its structural information, processes its control characters, and extracts the text by parsing and filtering the source file's markup. The method uses punctuation marks to preprocess the text into chapters, paragraphs and sentences, converting it into a sequence of sentences so that users can quickly locate the content they need. This keeps users away from phishing sites, malicious advertising, fraudulent information and the distractions encountered while browsing web pages, improving their Internet experience.
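
    As a rough illustration of the extract-then-segment idea above, here is a sketch in Python rather than Delphi; the tag filtering and sentence splitting are simplified assumptions, not the paper's algorithm.

```python
# Strip the HTML control structure, keep the text, then split it into
# sentences on Western and CJK sentence-ending punctuation.
import re
from html.parser import HTMLParser

class TextExtractor(HTMLParser):
    """Collects text content, ignoring tags plus script/style bodies."""
    def __init__(self):
        super().__init__()
        self.parts = []
        self._skip = 0
    def handle_starttag(self, tag, attrs):
        if tag in ("script", "style"):
            self._skip += 1
    def handle_endtag(self, tag):
        if tag in ("script", "style") and self._skip:
            self._skip -= 1
    def handle_data(self, data):
        if not self._skip and data.strip():
            self.parts.append(data.strip())

def sentences(html):
    parser = TextExtractor()
    parser.feed(html)
    text = " ".join(parser.parts)
    return [s for s in re.split(r"(?<=[.!?。！？])\s*", text) if s]

print(sentences("<html><body><p>First sentence. Second one!</p></body></html>"))
```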

  15. Analysis of Trust-Based Approaches for Web Service Selection

    DEFF Research Database (Denmark)

    Dragoni, Nicola; Miotto, Nicola

    2011-01-01

    The basic tenet of Service-Oriented Computing (SOC) is the possibility of building distributed applications on the Web by using Web services as fundamental building blocks. The proliferation of such services is considered the second wave of evolution in the Internet age, moving the Web from a collection of pages to a collection of services. Consensus is growing that this Web service revolution won't eventuate until we resolve trust-related issues. Indeed, the intrinsic openness of the SOC vision makes it crucial to locate useful services and recognize them as trustworthy. In this paper we review the field of trust-based Web service selection, providing a structured classification of current approaches and highlighting the main limitations of each class and of the overall field.

  16. A Literature Review of Academic Library Web Page Studies

    Science.gov (United States)

    Blummer, Barbara

    2007-01-01

    In the early 1990s, numerous academic libraries adopted the web as a communication tool with users. The literature on academic library websites includes research on both design and navigation. Early studies typically focused on design characteristics, since websites initially merely provided information on the services and collections available in…

  17. Three loud cheers for the father of the Web

    CERN Multimedia

    2005-01-01

    World Wide Web creator Sir Tim Berners-Lee could have been a very rich man - but he gave away his invention for the good of mankind. Tom Leonard meets the modest genius voted Great Briton 2004 (2 pages)

  18. JavaScript and interactive web pages in radiology.

    Science.gov (United States)

    Gurney, J W

    2001-10-01

    Web publishing is becoming a more common method of disseminating information. JavaScript is an object-oriented language embedded into modern browsers and has a wide variety of uses. The use of JavaScript in radiology is illustrated by calculating the indices of sensitivity, specificity, and predictive values from a table of true positives, true negatives, false positives, and false negatives. In addition, a single line of JavaScript code can be used to annotate images, which has a wide variety of uses.
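
    The calculation the article illustrates with JavaScript can be sketched as follows (shown in Python for consistency with the other examples here; the function name and example counts are invented):

```python
# Derive the standard diagnostic indices from a 2x2 table of counts.
def diagnostic_indices(tp, fn, fp, tn):
    return {
        "sensitivity": tp / (tp + fn),  # true positive rate
        "specificity": tn / (tn + fp),  # true negative rate
        "ppv": tp / (tp + fp),          # positive predictive value
        "npv": tn / (tn + fn),          # negative predictive value
    }

# Example: 45 true positives, 5 false negatives,
# 10 false positives, 40 true negatives.
print(diagnostic_indices(45, 5, 10, 40))
```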

  19. The EMBRACE web service collection

    DEFF Research Database (Denmark)

    Pettifer, S.; Ison, J.; Kalas, M.

    2010-01-01

    The EMBRACE (European Model for Bioinformatics Research and Community Education) web service collection is the culmination of a 5-year project that set out to investigate issues involved in developing and deploying web services for use in the life sciences. The project concluded that in order for web services to achieve widespread adoption, standards must be defined for the choice of web service technology and for semantically annotating both service function and the data exchanged, and a mechanism for discovering services must be provided. Building on this, the project developed: EDAM, an ontology for describing life science web services; BioXSD, a schema for exchanging data between services; and a centralized registry (http://www.embraceregistry.net) that collects together around 1000 services developed by the consortium partners. This article presents the current status of the collection...

  20. Improving web site performance using commercially available analytical tools.

    Science.gov (United States)

    Ogle, James A

    2010-10-01

    It is easy to accurately measure web site usage and to quantify key parameters such as page views, site visits, and more complex variables using commercially available tools that analyze web site log files and search engine use. This information can be used strategically to guide the design or redesign of a web site (templates, look-and-feel, and navigation infrastructure) to improve overall usability. The data can also be used tactically to assess the popularity and use of new pages and modules that are added and to rectify problems that surface. This paper describes software tools used to: (1) inventory search terms that lead to available content; (2) propose synonyms for commonly used search terms; (3) evaluate the effectiveness of calls to action; (4) conduct path analyses to targeted content. The American Academy of Orthopaedic Surgeons (AAOS) uses SurfRay's Behavior Tracking software (Santa Clara CA, USA, and Copenhagen, Denmark) to capture and archive the search terms that have been entered into the site's Google Mini search engine. The AAOS also uses Unica's NetInsight program to analyze its web site log files. These tools provide the AAOS with information that quantifies how well its web sites are operating and insights for making improvements to them. Although it is easy to quantify many aspects of an association's web presence, it also takes human involvement to analyze the results and then recommend changes. Without a dedicated resource to do this, the work often is accomplished only sporadically and on an ad hoc basis.
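
    As an illustration of the tactical log analysis described above, the sketch below counts page views per URL and tallies search terms from a web server log; the Common Log Format and the "q" query parameter are assumptions, not details of the AAOS setup.

```python
# Count page views per path and tally search terms from query strings.
from collections import Counter
from urllib.parse import urlparse, parse_qs

page_views, search_terms = Counter(), Counter()
with open("access.log") as log:
    for line in log:
        # Common Log Format: the request is the quoted 'GET /path HTTP/1.1'.
        try:
            request = line.split('"')[1]
            url = request.split()[1]
        except IndexError:
            continue  # skip malformed lines
        parsed = urlparse(url)
        page_views[parsed.path] += 1
        for term in parse_qs(parsed.query).get("q", []):
            search_terms[term.lower()] += 1

print("Top pages:", page_views.most_common(5))
print("Top search terms:", search_terms.most_common(5))
```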

  1. "Così abbiamo creato il World Wide Web"

    CERN Multimedia

    Sigiani, GianLuca

    2002-01-01

    Meeting with Robert Cailliau, scientist and pioneer of the Web, who, in a book, tells how his team at CERN in Geneva transformed the Internet (an instrument originally used for military purposes) into one of the most revolutionary mass media tools ever (1 page)

  2. Computing Principal Eigenvectors of Large Web Graphs: Algorithms and Accelerations Related to PageRank and HITS

    Science.gov (United States)

    Nagasinghe, Iranga

    2010-01-01

    This thesis investigates and develops a few acceleration techniques for the search engine algorithms used in PageRank and HITS computations. PageRank and HITS are two highly successful applications of modern linear algebra in computer science and engineering. They constitute essential technologies that account for the immense growth and…
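
    Both PageRank and HITS reduce to computing the principal eigenvector of a matrix derived from the web graph. The sketch below shows the textbook power iteration for PageRank on a toy link graph; it is a baseline illustration, not one of the acceleration techniques developed in the thesis.

```python
# Power iteration for PageRank: repeatedly apply the damped transition
# matrix until the rank vector stops changing.
import numpy as np

def pagerank(adj, damping=0.85, tol=1e-10):
    n = adj.shape[0]
    out = adj.sum(axis=1, keepdims=True)
    # Column-stochastic transition matrix; dangling pages jump uniformly.
    P = np.where(out > 0, adj / np.where(out == 0, 1, out), 1.0 / n).T
    r = np.full(n, 1.0 / n)
    while True:
        r_new = damping * P @ r + (1 - damping) / n
        if np.abs(r_new - r).sum() < tol:
            return r_new
        r = r_new

# 4-page toy web: adj[i, j] = 1 means page i links to page j.
adj = np.array([[0, 1, 1, 0],
                [0, 0, 1, 0],
                [1, 0, 0, 1],
                [0, 0, 1, 0]], dtype=float)
print(pagerank(adj))
```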

  3. TMFoldWeb: a web server for predicting transmembrane protein fold class.

    Science.gov (United States)

    Kozma, Dániel; Tusnády, Gábor E

    2015-09-17

    Here we present TMFoldWeb, the web server implementation of TMFoldRec, a transmembrane protein fold recognition algorithm. TMFoldRec uses statistical potentials and utilizes topology filtering and a gapless threading algorithm. It ranks template structures, selects the most likely candidates and estimates the reliability of the obtained lowest-energy model. The statistical potential was developed in a maximum likelihood framework on a representative set of the PDBTM database. According to the benchmark test, the performance of TMFoldRec is about 77% in correctly predicting the fold class for a given transmembrane protein sequence. An intuitive web interface has been developed for the recently published TMFoldRec algorithm. The query sequence goes through a pipeline of topology prediction and a systematic sequence-to-structure alignment (threading). The resulting templates are ordered by energy and reliability values and are colored according to their significance level. Besides the graphical interface, programmatic access is available as well, via a direct interface for developers or for submitting genome-wide data sets. The TMFoldWeb web server is unique and currently the only web server that is able to predict the fold class of transmembrane proteins while assigning reliability scores to the predictions. The method is prepared for genome-wide analysis with its easy-to-use interface, informative result page and programmatic access. Considering the evolution of information and communication technology in the last few years, the web server, as well as the molecule viewer, is responsive and fully compatible with today's tablets and mobile devices.

  4. The ATLAS Computing Agora: a resource web site for citizen science projects

    CERN Document Server

    Bourdarios, Claire; The ATLAS collaboration

    2016-01-01

    The ATLAS collaboration has recently set up a number of citizen science projects which have a strong IT component and could not have been envisaged without the growth of general public computing resources and network connectivity: event simulation through volunteer computing, algorithm improvement via Machine Learning challenges, event display analysis on citizen science platforms, use of open data, etc. Most of the interactions with volunteers are handled through message boards, but specific outreach material was also developed, giving enhanced visibility to the ATLAS software and computing techniques, challenges and community. In this talk the ATLAS Computing Agora (ACA) web platform will be presented, as well as some of the specific material developed for the projects.

  5. 78 FR 76391 - Proposed Enhancements to the Motor Carrier Safety Measurement System (SMS) Public Web Site

    Science.gov (United States)

    2013-12-17

    ...-0392] Proposed Enhancements to the Motor Carrier Safety Measurement System (SMS) Public Web Site AGENCY... proposed enhancements to the display of information on the Agency's Safety Measurement System (SMS) public Web site. On December 6, 2013, Advocates ...

  6. Policy Analysis on Growth and Employment - PAGE II | IDRC ...

    International Development Research Centre (IDRC) Digital Library (Canada)

    Policy Analysis on Growth and Employment - PAGE II. This project by the Partnership for Economic Policy (PEP) will support quality ... The project is supported by the UK's Department for International Development, and additional funding is ...

  7. Responsive web design workflow

    OpenAIRE

    LAAK, TIMO

    2013-01-01

    Responsive Web Design Workflow is a literature review of responsive web design, a modern, web-standards-based design paradigm. The goals of this research were to define what responsive web design is, determine its importance in building modern websites, and describe a workflow for responsive web design projects. Responsive web design is a paradigm for creating adaptive websites, which respond to the properties of the media used to render them. The three key elements of responsi...

  8. SISTEM INFORMASI RUMAH SAKIT BERBASIS WEB MENGGUNAKAN JAVA SERVER PAGES

    Directory of Open Access Journals (Sweden)

    Heru Cahya Rustamaji

    2010-01-01

    The technology used to build this web-based information system is JSP with the Apache Tomcat server. Tomcat is an open-source servlet engine that is part of the Jakarta project developed by the Apache Software Foundation.

  9. Web Based Projects Enhancing English Language and Generic Skills Development for Asian Hospitality Industry Students

    Science.gov (United States)

    Wang, Mei-jung

    2009-01-01

    This study investigated hospitality students' responses toward their learning experiences from undertaking group projects based upon a College web platform, the "Ubiquitous Hospitality English Learning Platform" (U-HELP). Twenty-six students in the Department of Applied Foreign Languages participated in this study. Their attitudes toward…

  10. PageRank for low frequency earthquake detection

    Science.gov (United States)

    Aguiar, A. C.; Beroza, G. C.

    2013-12-01

    We have analyzed Hi-Net seismic waveform data during the April 2006 tremor episode in the Nankai Trough in SW Japan using the autocorrelation approach of Brown et al. (2008), which detects low frequency earthquakes (LFEs) based on pair-wise waveform matching. We have generalized this to exploit the fact that waveforms may repeat multiple times, on more than just a pair-wise basis. We are working towards developing a sound statistical basis for event detection, but that is complicated by two factors. First, the statistical behavior of the autocorrelations varies between stations. Analyzing one station at a time ensures that the detection threshold depends only on the station being analyzed. Second, the positive detections do not satisfy "closure". That is, if window A correlates with window B, and window B correlates with window C, then window A and window C do not necessarily correlate with one another. We want to evaluate whether or not a linked set of windows is correlated due to chance. To do this, we map our problem onto one that has previously been solved for web search, and apply Google's PageRank algorithm. PageRank is the probability that a 'random surfer' visits a particular web page; it assigns a ranking to a web page based on the number of links associated with that page. For windows of seismic data instead of web pages, the windows with high probabilities suggest likely LFE signals. Once identified, we stack the matched windows to improve the signal-to-noise ratio and use these stacks as template signals to find other LFEs within continuous data. We compare the results among stations and declare a detection if an event is found at a statistically significant number of stations, based on multinomial statistics. We compare our detections using the single-station method to detections found by Shelly et al. (2007) for the April 2006 tremor sequence in Shikoku, Japan. We find strong similarity between the results, as well as many new detections that were not found using
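
    A minimal sketch of the idea: treat data windows as nodes, add a link for each above-threshold pair-wise correlation (closure is not assumed), and read the PageRank scores as evidence for families of repeating LFEs. The threshold, window sizes and the networkx implementation are illustrative choices, not those of the study.

```python
# Rank candidate LFE windows by PageRank on a correlation graph.
import numpy as np
import networkx as nx

def lfe_candidates(windows, cc_threshold=0.7, top=10):
    """windows: 2-D array, one normalized waveform window per row."""
    n = len(windows)
    g = nx.Graph()
    g.add_nodes_from(range(n))
    for i in range(n):
        for j in range(i + 1, n):
            cc = float(np.dot(windows[i], windows[j]))
            if cc > cc_threshold:   # closure is NOT assumed: A-B and B-C
                g.add_edge(i, j)    # links do not imply an A-C link
    scores = nx.pagerank(g, alpha=0.85)
    return sorted(scores, key=scores.get, reverse=True)[:top]

rng = np.random.default_rng(0)
w = rng.standard_normal((50, 200))
w /= np.linalg.norm(w, axis=1, keepdims=True)
w[10] = w[20] = w[30] = w[5]        # plant a repeating "LFE" family
print(lfe_candidates(w))            # the planted family ranks highest
```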

  11. Web Application Vulnerabilities

    OpenAIRE

    Yadav, Bhanu

    2014-01-01

    Web application security has been a major issue in information technology since the advent of dynamic web applications. The main objective of this project was to carry out a detailed study of the top three web application vulnerabilities: injection; cross-site scripting; and broken authentication and session management. It presents situations in which an application can be vulnerable to these web threats and finally provides preventative measures against them.
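
    To make the first of the three vulnerability classes concrete, the sketch below contrasts a string-built SQL query, which an injection payload subverts, with the standard mitigation of binding user input as a parameter; the table and data are illustrative.

```python
# Demonstrate SQL injection and its mitigation with an in-memory table.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE users (name TEXT, password TEXT)")
con.execute("INSERT INTO users VALUES ('alice', 'secret')")

user_input = "' OR '1'='1"  # a classic injection payload

# VULNERABLE: the payload rewrites the query and matches every row.
rows = con.execute(
    f"SELECT * FROM users WHERE name = '{user_input}'"
).fetchall()
print("string-built query returned:", rows)

# SAFE: the driver binds the payload as data, so nothing matches.
rows = con.execute(
    "SELECT * FROM users WHERE name = ?", (user_input,)
).fetchall()
print("parameterized query returned:", rows)
```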

  12. The use of the World Wide Web by medical journals in 2003 and 2005: an observational study.

    Science.gov (United States)

    Schriger, David L; Ouk, Sripha; Altman, Douglas G

    2007-01-01

    The 2- to 6-page print journal article has been the standard for 200 years, yet this format severely limits the amount of detailed information that can be conveyed. The World Wide Web provides a low-cost option for posting extended text and supplementary information. It can also enhance the experience of journal editors, reviewers, readers, and authors through added functionality (e.g., online submission and peer review, post-publication critique, and e-mail notification of tables of contents). Our aim was to characterize ways that journals were using the World Wide Web in 2005 and note changes since 2003. We analyzed the Web sites of 138 high-impact print journals in 3 ways. First, we compared the print and Web versions of March 2003 and 2005 issues of 28 journals (20 of which were randomly selected from the 138) to determine how often articles were published Web only and how often print articles were augmented by Web-only supplements. Second, we examined what functions were offered by each journal Web site. Third, for journals that offered Web pages for reader commentary on each article, we analyzed the number of comments and characterized them. Fifty-six articles (7%) in 5 journals were Web only. Thirteen of the 28 journals had no supplementary online content. By 2005, several journals were including Web-only supplements in >20% of their papers. Supplementary methods, tables, and figures predominated. The use of supplementary material increased from 2% to 7% of articles in the 20-journal random sample between 2003 and 2005. Web sites had similar functionality, with an emphasis on linking each article to related material and e-mailing readers about activity related to each article. There was little evidence of journals using the Web to provide readers an interactive experience with the data or with each other. Seventeen of the 138 journals offered rapid-response pages. Only 18% of eligible articles had any comments after 5 months. Journal Web sites offer similar

  13. Fermilab joins in global live Web cast

    CERN Multimedia

    Polansek, Tom

    2005-01-01

    From 2 to 3:30 p.m., Lederman, who won the Nobel Prize for physics in 1988, will host his own wacky, science-centered talk show at Fermi National Accelerator Laboratory as part of a live, 12-hour, international Web cast celebrating Albert Einstein and the World Year of Physics (2/3 page)

  14. Resolving person names in web people search

    NARCIS (Netherlands)

    Balog, K.; Azzopardi, L.; de Rijke, M.; King, I.; Baeza-Yates, R.

    2009-01-01

    Disambiguating person names in a set of documents (such as a set of web pages returned in response to a person name) is a key task for the presentation of results and the automatic profiling of experts. With largely unstructured documents and an unknown number of people with the same name the

  15. Discovery and Selection of Semantic Web Services

    CERN Document Server

    Wang, Xia

    2013-01-01

    For advanced web search engines to be able to search not only for semantically related information dispersed over different web pages, but also for semantic services providing certain functionalities, discovering semantic services is the key issue. Addressing four problems of current solutions, this book presents the following contributions. A novel service model independent of semantic service description models is proposed, which clearly defines all elements necessary for service discovery and selection. It takes service selection as its centerpiece and improves efficiency. Corresponding selection algorithms and their implementation as components of the extended Semantically Enabled Service-oriented Architecture in the Web Service Modeling Environment are detailed. Many applications of semantic web services, e.g. discovery, composition and mediation, can benefit from a general approach for building application ontologies. With application ontologies thus built, services are discovered in the same way as with single...

  16. Página web de la Sociedad Española para los Recursos Genéticos Animales (SERGA): justificación y descripción

    OpenAIRE

    Barba Capote, C.J.; Rodero Serrano, E.; Delgado-Bermejo, J.V.; Castro, R.; Zamora Lozano, R.

    2000-01-01

    In the present paper the web page of SERGA is described as a structure for permanent information exchange among members and for disseminating the society's activities beyond our field. After a brief justification, the different options of the page are described, among which we highlight the database of Spanish breeds, the historical photographic archive, the library, the links to other web addresses, and other minor functionalities. The paper finishes by presenting proposals for the web's future...

  17. Web-Based Distributed Simulation of Aeronautical Propulsion System

    Science.gov (United States)

    Zheng, Desheng; Follen, Gregory J.; Pavlik, William R.; Kim, Chan M.; Liu, Xianyou; Blaser, Tammy M.; Lopez, Isaac

    2001-01-01

    An application was developed to allow users to run and view the Numerical Propulsion System Simulation (NPSS) engine simulations from web browsers. Simulations were performed on multiple Information Power Grid (IPG) test beds. The Common Object Request Broker Architecture (CORBA) was used for brokering data exchange among machines, and IPG/Globus for job scheduling and remote process invocation. Web server scripting was performed by JavaServer Pages (JSP). This application has proven to be an effective and efficient way to couple heterogeneous distributed components.
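
    The shape of the architecture, a thin web tier invoking a remote simulation process through a broker, can be sketched as follows. CORBA itself requires an ORB, so Python's standard xmlrpc stands in for the brokering layer here, and the engine function and its parameters are invented for illustration.

```python
# A remote "engine" process exposes a simulation call; a thin front
# tier (a JSP page in the paper) invokes it over the broker.
import threading
import xmlrpc.client
from xmlrpc.server import SimpleXMLRPCServer

def run_engine_sim(thrust_setting):
    # Placeholder for the remote engine-cycle computation.
    return {"thrust_setting": thrust_setting,
            "fuel_flow": 0.042 * thrust_setting}

server = SimpleXMLRPCServer(("localhost", 8765), logRequests=False)
server.register_function(run_engine_sim)
threading.Thread(target=server.serve_forever, daemon=True).start()

# The front tier calls the remote process as if it were local:
proxy = xmlrpc.client.ServerProxy("http://localhost:8765")
print(proxy.run_engine_sim(0.8))
```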

  18. Web-дизайн: landing page

    OpenAIRE

    ЗОРИНА Е.В.; БЕЖИТСКАЯ Е.А.

    2014-01-01

    While landing pages have been used in the USA and Europe for a long time, in Russia the phenomenon has only recently become popular, and it has already gained wide adoption.

  19. Sustainable Materials Management (SMM) Web Academy Webinar: The Changing Waste Stream

    Science.gov (United States)

    This is a webinar page for the Sustainable Materials Management (SMM) Web Academy webinar titled "Let's WRAP (Wrap Recycling Action Program): Best Practices to Boost Plastic Film Recycling in Your Community".

  20. On HTML and XML based web design and implementation techniques

    International Nuclear Information System (INIS)

    Bezboruah, B.; Kalita, M.

    2006-05-01

    Web implementation is truly a multidisciplinary field with influences from programming, the choice of scripting languages, graphic design, user interface design, and database design. The challenge for a Web designer/implementer is the ability to create an attractive and informative Web site. To work with the universal framework and link diagrams from the design process, as well as the Web specifications and domain information, it is essential to create Hypertext Markup Language (HTML) or other software and multimedia to accomplish the Web's objective. In this article we discuss Web design standards and the techniques involved in Web implementation based on HTML and the Extensible Markup Language (XML). We also discuss the advantages and disadvantages of HTML compared with its successor, XML, in designing and implementing a Web site. We have developed two Web pages, one utilizing the features of HTML and the other based on the features of XML, to carry out the present investigation. (author)
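
    A small sketch of the HTML-versus-XML contrast investigated above: the XML document carries only structure and content, and a short transform renders it as HTML for presentation; the element names are invented for illustration.

```python
# Transform a content-only XML document into presentational HTML.
import xml.etree.ElementTree as ET

xml_source = """
<report>
  <title>Beam diagnostics</title>
  <section><heading>Overview</heading><body>Status of the detector.</body></section>
</report>
"""

report = ET.fromstring(xml_source)

html = ET.Element("html")
body = ET.SubElement(html, "body")
h1 = ET.SubElement(body, "h1")
h1.text = report.findtext("title")
for section in report.findall("section"):
    h2 = ET.SubElement(body, "h2")
    h2.text = section.findtext("heading")
    p = ET.SubElement(body, "p")
    p.text = section.findtext("body")

print(ET.tostring(html, encoding="unicode"))
```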