WorldWideScience

Sample records for single web page

  1. Migrating Multi-page Web Applications to Single-page AJAX Interfaces

    NARCIS (Netherlands)

    Mesbah, A.; Van Deursen, A.

    2006-01-01

    Recently, a new web development technique for creating interactive web applications, dubbed AJAX, has emerged. In this new model, the single-page web interface is composed of individual components which can be updated/replaced independently. With the rise of AJAX web applications classical

  2. Analysis and Testing of Ajax-based Single-page Web Applications

    NARCIS (Netherlands)

    Mesbah, A.

    2009-01-01

    This dissertation has focused on better understanding the shifting web paradigm and the consequences of moving from the classical multi-page model to an Ajax-based single-page style. Specifically to that end, this work has examined this new class of software from three main software engineering

  3. Web pages: What can you see in a single fixation?

    Science.gov (United States)

    Jahanian, Ali; Keshvari, Shaiyan; Rosenholtz, Ruth

    2018-01-01

    Research in human vision suggests that in a single fixation, humans can extract a significant amount of information from a natural scene, e.g. the semantic category, spatial layout, and object identities. This ability is useful, for example, for quickly determining location, navigating around obstacles, detecting threats, and guiding eye movements to gather more information. In this paper, we ask a new question: What can we see at a glance at a web page - an artificial yet complex "real world" stimulus? Is it possible to notice the type of website, or where the relevant elements are, with only a glimpse? We find that observers, fixating at the center of a web page shown for only 120 milliseconds, are well above chance at classifying the page into one of ten categories. Furthermore, this ability is supported in part by text that they can read at a glance. Users can also understand the spatial layout well enough to reliably localize the menu bar and to detect ads, even though the latter are often camouflaged among other graphical elements. We discuss the parallels between web page gist and scene gist, and the implications of our findings for both vision science and human-computer interaction.

  4. Web Page Recommendation Using Web Mining

    OpenAIRE

    Modraj Bhavsar; Mrs. P. M. Chavan

    2014-01-01

    On the World Wide Web, many kinds of content are generated in huge amounts, so giving users relevant results through web recommendation has become an important part of web applications. On the web, different kinds of recommendations are made available to users every day, including images, video, audio, query suggestions and web pages. In this paper we aim at providing a framework for web page recommendation. 1) First we describe the basics of web mining and the types of web mining. 2) Details of each...

  5. WEB LOG EXPLORER – CONTROL OF MULTIDIMENSIONAL DYNAMICS OF WEB PAGES

    Directory of Open Access Journals (Sweden)

    Mislav Šimunić

    2012-07-01

    Demand markets dictate and pose increasingly more requirements to the supply market that are not easily satisfied. The supply market, presenting its web pages to the demand market, should find the best and quickest ways to respond promptly to the changes dictated by the demand market. The question is how to do that in the most efficient and quickest way. The data on the usage of web pages on a specific web site are recorded in a log file. The data in a log file are stochastic and unordered and require systematic monitoring, categorization, analyses, and weighing. From the data processed in this way, it is necessary to single out and sort the data by their importance, as a basis for a continuous generation of dynamics/changes to the web site pages in line with the criterion chosen. To perform those tasks successfully, a new software solution is required. For that purpose, the authors have developed the first version of the WLE (WebLogExplorer) software solution, which is actually a realization of web page multidimensionality and the web site as a whole. The WebLogExplorer enables statistical and semantic analysis of a log file and, on the basis thereof, multidimensional control of the web page dynamics. The experimental part of the work was done within the web site of HTZ (Croatian National Tourist Board), the main portal of the global tourist supply in the Republic of Croatia (on average, the daily "log" consists of c. 600,000 sets, the average size of the log file is 127 MB, and there are c. 7,000-8,000 daily visitors on the web site).
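
    The abstract describes mining a raw server log (roughly 600,000 entries per day for the HTZ portal) into per-page usage statistics. As a rough illustration of that first statistical step only, the sketch below parses Apache/NCSA-style log lines and counts successful requests per page; the log format and sample lines are assumptions for the example, not details taken from the WLE software.

```python
import re
from collections import Counter

# Assumed NCSA-style log line: host ident user [time] "METHOD path HTTP/x" status bytes ...
LOG_LINE = re.compile(
    r'(?P<host>\S+) \S+ \S+ \[(?P<time>[^\]]+)\] '
    r'"(?P<method>\S+) (?P<path>\S+) \S+" (?P<status>\d{3}) \S+'
)

def page_hits(log_lines):
    """Count successful (2xx) requests per page path."""
    hits = Counter()
    for line in log_lines:
        m = LOG_LINE.match(line)
        if m and m.group("status").startswith("2"):
            hits[m.group("path")] += 1
    return hits

if __name__ == "__main__":
    sample = [
        '203.0.113.7 - - [01/Jul/2012:10:00:00 +0200] "GET /index.html HTTP/1.1" 200 5120 "-" "Mozilla"',
        '203.0.113.8 - - [01/Jul/2012:10:00:02 +0200] "GET /hotels.html HTTP/1.1" 200 2048 "-" "Mozilla"',
        '203.0.113.7 - - [01/Jul/2012:10:00:05 +0200] "GET /index.html HTTP/1.1" 304 0 "-" "Mozilla"',
    ]
    for path, n in page_hits(sample).most_common():
        print(path, n)
```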

  6. CrazyEgg Reports for Single Page Analysis

    Science.gov (United States)

    CrazyEgg provides an in depth look at visitor behavior on one page. While you can use GA to do trend analysis of your web area, CrazyEgg helps diagnose the design of a single Web page by visually displaying all visitor clicks during a specified time.

  7. Creating Web Pages Simplified

    CERN Document Server

    Wooldridge, Mike

    2011-01-01

    The easiest way to learn how to create a Web page for your family or organization Do you want to share photos and family lore with relatives far away? Have you been put in charge of communication for your neighborhood group or nonprofit organization? A Web page is the way to get the word out, and Creating Web Pages Simplified offers an easy, visual way to learn how to build one. Full-color illustrations and concise instructions take you through all phases of Web publishing, from laying out and formatting text to enlivening pages with graphics and animation. This easy-to-follow visual guide sho

  8. Developing Dynamic Single Page Web Applications Using Meteor : Comparing JavaScript Frameworks: Blaze and React

    OpenAIRE

    Yetayeh, Asabeneh

    2017-01-01

    This paper studies Meteor, a JavaScript full-stack framework for developing interactive single-page web applications. Meteor allows building web applications entirely in JavaScript. Meteor uses Blaze, React or AngularJS as a view layer and Node.js and MongoDB as a back-end. The main purpose of this study is to compare the performance of Blaze and React. Multi-user Blaze and React web applications with similar HTML and CSS were developed. Both applications were deployed on Heroku’s w...

  9. Measuring consistency of web page design and its effects on performance and satisfaction.

    Science.gov (United States)

    Ozok, A A; Salvendy, G

    2000-04-01

    This study examines the methods for measuring the consistency levels of web pages and the effect of consistency on the performance and satisfaction of the world-wide web (WWW) user. For clarification, a home page is referred to as a single page that is the default page of a web site on the WWW. A web page refers to a single screen that indicates a specific address on the WWW. This study has tested a series of web pages that were mostly hyperlinked. Therefore, the term 'web page' has been adopted for the nomenclature while referring to the objects of which the features were tested. It was hypothesized that participants would perform better and be more satisfied using web pages that have consistent rather than inconsistent interface design; that the overall consistency level of an interface design would significantly correlate with the three elements of consistency, physical, communicational and conceptual consistency; and that physical and communicational consistencies would interact with each other. The hypotheses were tested in a four-group, between-subject design, with 10 participants in each group. The results partially support the hypothesis regarding error rate, but not regarding satisfaction and performance time. The results also support the hypothesis that each of the three elements of consistency significantly contribute to the overall consistency of a web page, and that physical and communicational consistencies interact with each other, while conceptual consistency does not interact with them.

  10. The Faculty Web Page: Contrivance or Continuation?

    Science.gov (United States)

    Lennex, Lesia

    2007-01-01

    In an age of Internet education, what does it mean for a tenure/tenure-track faculty to have a web page? How many professors have web pages? If they have a page, what does it look like? Do they really need a web page at all? Many universities have faculty web pages. What do those collective pages look like? In what way do they represent the…

  11. WebScore: An Effective Page Scoring Approach for Uncertain Web Social Networks

    Directory of Open Access Journals (Sweden)

    Shaojie Qiao

    2011-10-01

    To effectively score pages with uncertainty in web social networks, we first proposed a new concept called the transition probability matrix and formally defined the uncertainty in web social networks. Second, we proposed a hybrid page scoring algorithm, called WebScore, based on the PageRank algorithm and three centrality measures including degree, betweenness, and closeness. In particular, WebScore takes into full consideration the uncertainty of web social networks by computing the transition probability from one page to another. The basic idea of WebScore is to: (1) integrate uncertainty into PageRank in order to accurately rank pages, and (2) apply the centrality measures to calculate the importance of pages in web social networks. In order to verify the performance of WebScore, we developed a web social network analysis system which can partition web pages into distinct groups and score them in an effective fashion. Finally, we conducted extensive experiments on real data and the results show that WebScore is effective at scoring uncertain pages with less time deficiency than PageRank and centrality-measure-based page scoring algorithms.
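
    The abstract combines PageRank over a transition-probability matrix with classical centrality measures. The exact WebScore weighting is not given here, so the sketch below is only a generic illustration under the assumption that the final score is a simple weighted blend of a power-iteration PageRank and normalized in-degree centrality; the toy graph and the alpha weight are invented for the example.

```python
import numpy as np

def pagerank(P, damping=0.85, iters=100):
    """Power iteration on a row-stochastic transition-probability matrix P."""
    n = P.shape[0]
    r = np.full(n, 1.0 / n)
    for _ in range(iters):
        r = (1 - damping) / n + damping * (P.T @ r)
    return r / r.sum()

def web_score(P, alpha=0.7):
    """Hypothetical blend of PageRank and in-degree centrality (alpha is an assumption)."""
    in_degree = (P > 0).sum(axis=0).astype(float)
    return alpha * pagerank(P) + (1 - alpha) * in_degree / in_degree.sum()

if __name__ == "__main__":
    # Transition probabilities for a toy 4-page graph (row i gives the move probabilities from page i).
    P = np.array([[0.0, 0.5, 0.5, 0.0],
                  [0.0, 0.0, 1.0, 0.0],
                  [0.5, 0.0, 0.0, 0.5],
                  [1.0, 0.0, 0.0, 0.0]])
    print(web_score(P).round(3))
```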

  12. Dynamic Web Pages: Performance Impact on Web Servers.

    Science.gov (United States)

    Kothari, Bhupesh; Claypool, Mark

    2001-01-01

    Discussion of Web servers and requests for dynamic pages focuses on experimentally measuring and analyzing the performance of the three dynamic Web page generation technologies: CGI, FastCGI, and Servlets. Develops a multivariate linear regression model and predicts Web server performance under some typical dynamic requests. (Author/LRW)

  13. Exploiting link structure for web page genre identification

    KAUST Repository

    Zhu, Jia

    2015-07-07

    As the World Wide Web develops at an unprecedented pace, identifying web page genre has recently attracted increasing attention because of its importance in web search. A common approach for identifying genre is to use textual features that can be extracted directly from a web page, that is, On-Page features. The extracted features are subsequently inputted into a machine learning algorithm that will perform classification. However, these approaches may be ineffective when the web page contains limited textual information (e.g., the page is full of images). In this study, we address genre identification of web pages under the aforementioned situation. We propose a framework that uses On-Page features while simultaneously considering information in neighboring pages, that is, the pages that are connected to the original page by backward and forward links. We first introduce a graph-based model called GenreSim, which selects an appropriate set of neighboring pages. We then construct a multiple classifier combination module that utilizes information from the selected neighboring pages and On-Page features to improve performance in genre identification. Experiments are conducted on well-known corpora, and favorable results indicate that our proposed framework is effective, particularly in identifying web pages with limited textual information. © 2015 The Author(s)

  14. Exploiting link structure for web page genre identification

    KAUST Repository

    Zhu, Jia; Xie, Qing; Yu, Shoou I.; Wong, Wai Hung

    2015-01-01

    As the World Wide Web develops at an unprecedented pace, identifying web page genre has recently attracted increasing attention because of its importance in web search. A common approach for identifying genre is to use textual features that can be extracted directly from a web page, that is, On-Page features. The extracted features are subsequently inputted into a machine learning algorithm that will perform classification. However, these approaches may be ineffective when the web page contains limited textual information (e.g., the page is full of images). In this study, we address genre identification of web pages under the aforementioned situation. We propose a framework that uses On-Page features while simultaneously considering information in neighboring pages, that is, the pages that are connected to the original page by backward and forward links. We first introduce a graph-based model called GenreSim, which selects an appropriate set of neighboring pages. We then construct a multiple classifier combination module that utilizes information from the selected neighboring pages and On-Page features to improve performance in genre identification. Experiments are conducted on well-known corpora, and favorable results indicate that our proposed framework is effective, particularly in identifying web pages with limited textual information. © 2015 The Author(s)

  15. Finding Specification Pages from the Web

    Science.gov (United States)

    Yoshinaga, Naoki; Torisawa, Kentaro

    This paper presents a method of finding a specification page on the Web for a given object (e.g., "Ch. d'Yquem") and its class label (e.g., "wine"). A specification page for an object is a Web page which gives concise attribute-value information about the object (e.g., "county"-"Sauternes") in well formatted structures. A simple unsupervised method using layout and symbolic decoration cues was applied to a large number of Web pages to acquire candidate attributes for each class (e.g., "county" for the class "wine"). We then filter out irrelevant words from the putative attributes through an author-aware scoring function that we call site frequency. We used the acquired attributes to select a representative specification page for a given object from the Web pages retrieved by a normal search engine. Experimental results revealed that our system greatly outperformed the normal search engine in terms of this specification retrieval.

  16. Classifying web pages with visual features

    NARCIS (Netherlands)

    de Boer, V.; van Someren, M.; Lupascu, T.; Filipe, J.; Cordeiro, J.

    2010-01-01

    To automatically classify and process web pages, current systems use the textual content of those pages, including both the displayed content and the underlying (HTML) code. However, a very important feature of a web page is its visual appearance. In this paper, we show that using generic visual

  17. Metadata Schema Used in OCLC Sampled Web Pages

    Directory of Open Access Journals (Sweden)

    Fei Yu

    2005-12-01

    The tremendous growth of Web resources has made information organization and retrieval more and more difficult. As one approach to this problem, metadata schemas have been developed to characterize Web resources. However, many questions have been raised about the use of metadata schemas, such as: which metadata schemas have been used on the Web? How do they describe Web-accessible information? What is the distribution of these metadata schemas among Web pages? Do certain schemas dominate the others? To address these issues, this study analyzed 16,383 Web pages with meta tags extracted from 200,000 OCLC sampled Web pages in 2000. It found that only 8.19% of Web pages used meta tags; description tags, keyword tags, and Dublin Core tags were the only three schemas used in the Web pages. This article revealed the use of meta tags in terms of their function distribution, syntax characteristics, granularity of the Web pages, and the length distribution and word number distribution of both description and keywords tags.
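
    Extracting meta tags is the kind of preprocessing step a study like this relies on. Below is a minimal sketch using Python's standard html.parser; the counting categories (description, keywords, and "DC."-prefixed Dublin Core names) follow the three schemas named in the abstract, but the code is an illustration, not the study's original tooling.

```python
from collections import Counter
from html.parser import HTMLParser

class MetaTagCollector(HTMLParser):
    """Collect the name/content pairs of <meta> tags."""
    def __init__(self):
        super().__init__()
        self.meta = []

    def handle_starttag(self, tag, attrs):
        if tag == "meta":
            d = dict(attrs)
            if "name" in d:
                self.meta.append((d["name"].lower(), d.get("content", "")))

def schema_counts(html_text):
    parser = MetaTagCollector()
    parser.feed(html_text)
    counts = Counter()
    for name, _ in parser.meta:
        if name == "description":
            counts["description"] += 1
        elif name == "keywords":
            counts["keywords"] += 1
        elif name.startswith("dc."):
            counts["dublin core"] += 1
    return counts

if __name__ == "__main__":
    page = ('<html><head><meta name="keywords" content="web, metadata">'
            '<meta name="DC.Title" content="Example"></head><body></body></html>')
    print(schema_counts(page))
```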

  18. Network and User-Perceived Performance of Web Page Retrievals

    Science.gov (United States)

    Kruse, Hans; Allman, Mark; Mallasch, Paul

    1998-01-01

    The development of the HTTP protocol has been driven by the need to improve the network performance of the protocol by allowing the efficient retrieval of multiple parts of a web page without the need for multiple simultaneous TCP connections between a client and a server. We suggest that the retrieval of multiple page elements sequentially over a single TCP connection may result in a degradation of the perceived performance experienced by the user. We attempt to quantify this perceived degradation through the use of a model which combines a web retrieval simulation and an analytical model of TCP operation. Starting with the current HTTP/1.1 specification, we first suggest a client-side heuristic to improve the perceived transfer performance. We show that the perceived speed of the page retrieval can be increased without sacrificing data transfer efficiency. We then propose a new client/server extension to the HTTP/1.1 protocol to allow for the interleaving of page element retrievals. We finally address the issue of the display of advertisements on web pages, and in particular suggest a number of mechanisms which can make efficient use of IP multicast to send advertisements to a number of clients within the same network.

  19. Web page classification on child suitability

    NARCIS (Netherlands)

    C. Eickhoff (Carsten); P. Serdyukov; A.P. de Vries (Arjen)

    2010-01-01

    Children spend significant amounts of time on the Internet. Recent studies showed that during these periods they are often not under adult supervision. This work presents an automatic approach to identifying suitable web pages for children based on topical and non-topical web page

  20. SPIAR: an architectural style for single page internet applications

    NARCIS (Netherlands)

    A. Mesbah (Ali); K. Broenink; A. van Deursen (Arie)

    2006-01-01

    A new breed of Web application, dubbed AJAX, is emerging in response to a limited degree of interactivity in large-grain stateless Web interactions. At the heart of this new approach lies a single page interaction model that facilitates rich interactivity. In this paper, we examine the

  1. Measurement of Web Usability: Web Page of Hacettepe University Department of Information Management

    OpenAIRE

    Nazan Özenç Uçak; Tolga Çakmak

    2009-01-01

    Today, information is increasingly produced in electronic form and retrieved via web pages. As a result of the rising number of web pages, many of them seem to have similar contents but different designs. In this respect, presenting information on web pages according to user expectations and specifications is important for the effective use of information. This study provides insight into web usability studies that are carried out for measuring...

  2. Web-page Prediction for Domain Specific Web-search using Boolean Bit Mask

    OpenAIRE

    Sinha, Sukanta; Duttagupta, Rana; Mukhopadhyay, Debajyoti

    2012-01-01

    A search engine is a Web-page retrieval tool, and Web searchers nowadays rely on an efficient search engine to make good use of their time. To improve the performance of the search engine, we introduce a unique mechanism which will give Web searchers more prominent search results. In this paper, we discuss a domain-specific Web search prototype which will generate the predicted Web-page list for a user-given search string using a Boolean bit mask.

  3. Code AI Personal Web Pages

    Science.gov (United States)

    Garcia, Joseph A.; Smith, Charles A. (Technical Monitor)

    1998-01-01

    The document consists of a publicly available web site (george.arc.nasa.gov) for Joseph A. Garcia's personal web pages in the AI division. Only general information will be posted and no technical material. All the information is unclassified.

  4. Stochastic analysis of web page ranking

    NARCIS (Netherlands)

    Volkovich, Y.

    2009-01-01

    Today, the study of the World Wide Web is one of the most challenging subjects. In this work we consider the Web from a probabilistic point of view. We analyze the relations between various characteristics of the Web. In particular, we are interested in the Web properties that affect the Web page

  5. Classroom Web Pages: A "How-To" Guide for Educators.

    Science.gov (United States)

    Fehling, Eric E.

    This manual provides teachers, with very little or no technology experience, with a step-by-step guide for developing the necessary skills for creating a class Web Page. The first part of the manual is devoted to the thought processes preceding the actual creation of the Web Page. These include looking at other Web Pages, deciding what should be…

  6. A thorough spring-clean for CERN's Web pages

    CERN Multimedia

    2001-01-01

    This coming Tuesday will see the unveiling of CERN's new user pages on the Web. Their simplified layout and design will make everybody's lives a whole lot easier. Stand by for Tuesday 17 April when, as announced in the Weekly Bulletin of 2 April (n°14/2001), the newly-designed users welcome page will be hitting our screens as the default CERN home page. But don't worry, if you've got the blues for the good old blue-green home page it's still in service and, to ensure a smooth transition, will be maintained in parallel until 25 May. But in all likelihood you'll be quickly won over by the new-look pages, which are so much simpler to use. Welcome to the new Web! The aim of this revamp, led by the WPE (Web Public Education) group, is to simplify and introduce a more logical hierarchy into the menus and welcome pages on CERN's Intranet. In a second stage, the 'General Public' pages will get a similar makeover. The fact is that the number of links on the user pages, and in particular the welcome page...

  7. Educational use of World Wide Web pages on CD-ROM.

    Science.gov (United States)

    Engel, Thomas P; Smith, Michael

    2002-01-01

    The World Wide Web is increasingly important for medical education. Internet served pages may also be used on a local hard disk or CD-ROM without a network or server. This allows authors to reuse existing content and provide access to users without a network connection. CD-ROM offers several advantages over network delivery of Web pages for several applications. However, creating Web pages for CD-ROM requires careful planning. Issues include file names, relative links, directory names, default pages, server created content, image maps, other file types and embedded programming. With care, it is possible to create server based pages that can be copied directly to CD-ROM. In addition, Web pages on CD-ROM may reference Internet served pages to provide the best features of both methods.

  8. Required Discussion Web Pages in Psychology Courses and Student Outcomes

    Science.gov (United States)

    Pettijohn, Terry F., II; Pettijohn, Terry F.

    2007-01-01

    We conducted 2 studies that investigated student outcomes when using discussion Web pages in psychology classes. In Study 1, we assigned 213 students enrolled in Introduction to Psychology courses to either a mandatory or an optional Web page discussion condition. Students used the discussion Web page significantly more often and performed…

  9. Recognition of pornographic web pages by classifying texts and images.

    Science.gov (United States)

    Hu, Weiming; Wu, Ou; Chen, Zhouyao; Fu, Zhouyu; Maybank, Steve

    2007-06-01

    With the rapid development of the World Wide Web, people benefit more and more from the sharing of information. However, Web pages with obscene, harmful, or illegal content can be easily accessed. It is important to recognize such unsuitable, offensive, or pornographic Web pages. In this paper, a novel framework for recognizing pornographic Web pages is described. A C4.5 decision tree is used to divide Web pages, according to content representations, into continuous text pages, discrete text pages, and image pages. These three categories of Web pages are handled, respectively, by a continuous text classifier, a discrete text classifier, and an algorithm that fuses the results from the image classifier and the discrete text classifier. In the continuous text classifier, statistical and semantic features are used to recognize pornographic texts. In the discrete text classifier, the naive Bayes rule is used to calculate the probability that a discrete text is pornographic. In the image classifier, the object's contour-based features are extracted to recognize pornographic images. In the text and image fusion algorithm, the Bayes theory is used to combine the recognition results from images and texts. Experimental results demonstrate that the continuous text classifier outperforms the traditional keyword-statistics-based classifier, the contour-based image classifier outperforms the traditional skin-region-based image classifier, the results obtained by our fusion algorithm outperform those by either of the individual classifiers, and our framework can be adapted to different categories of Web pages.
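
    One component of this framework is a discrete text classifier that applies the naive Bayes rule to short, keyword-like text. As a rough illustration of that single component (not the authors' full pipeline), the sketch below implements a multinomial naive Bayes classifier with Laplace smoothing from scratch; the training examples and class names are invented for the example.

```python
import math
from collections import Counter

class NaiveBayesText:
    """Multinomial naive Bayes with Laplace smoothing for short, discrete texts."""

    def fit(self, docs, labels):
        self.classes = sorted(set(labels))
        self.prior = {c: labels.count(c) / len(labels) for c in self.classes}
        self.word_counts = {c: Counter() for c in self.classes}
        self.vocab = set()
        for doc, c in zip(docs, labels):
            words = doc.lower().split()
            self.word_counts[c].update(words)
            self.vocab.update(words)
        return self

    def predict(self, doc):
        words = doc.lower().split()
        best_class, best_logprob = None, -math.inf
        for c in self.classes:
            total = sum(self.word_counts[c].values())
            logprob = math.log(self.prior[c])
            for w in words:
                # Laplace-smoothed estimate of P(word | class)
                logprob += math.log((self.word_counts[c][w] + 1) / (total + len(self.vocab)))
            if logprob > best_logprob:
                best_class, best_logprob = c, logprob
        return best_class

if __name__ == "__main__":
    docs = ["adult explicit photos", "family holiday photos",
            "explicit adult videos", "school science project"]
    labels = ["objectionable", "benign", "objectionable", "benign"]
    clf = NaiveBayesText().fit(docs, labels)
    print(clf.predict("explicit holiday videos"))
```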

  10. Automatically annotating web pages using Google Rich Snippets

    NARCIS (Netherlands)

    Hogenboom, F.P.; Frasincar, F.; Vandic, D.; Meer, van der J.; Boon, F.; Kaymak, U.

    2011-01-01

    We propose the Automatic Review Recognition and annOtation of Web pages (ARROW) framework, a framework for Web page review identification and annotation using RDFa Google Rich Snippets. The ARROW framework consists of four steps: hotspot identification, subjectivity analysis, information

  11. A teen's guide to creating web pages and blogs

    CERN Document Server

    Selfridge, Peter; Osburn, Jennifer

    2008-01-01

    Whether using a social networking site like MySpace or Facebook or building a Web page from scratch, millions of teens are actively creating a vibrant part of the Internet. This is the definitive teen's guide to publishing exciting web pages and blogs on the Web. This easy-to-follow guide shows teenagers how to: Create great MySpace and Facebook pages Build their own unique, personalized Web site Share the latest news with exciting blogging ideas Protect themselves online with cyber-safety tips Written by a teenager for other teens, this book leads readers step-by-step through the basics of web and blog design. In this book, teens learn to go beyond clicking through web sites to learning winning strategies for web design and great ideas for writing blogs that attract attention and readership.

  12. Identify Web-page Content meaning using Knowledge based System for Dual Meaning Words

    OpenAIRE

    Sinha, Sukanta; Dattagupta, Rana; Mukhopadhyay, Debajyoti

    2012-01-01

    The meaning of Web-page content plays a big role in producing a search result from a search engine. In most cases the Web-page meaning is stored in the title or meta-tag area, but those meanings do not always match the Web-page content. To overcome this situation we need to go through the Web-page content to identify the Web-page meaning. In cases where the Web-page content holds dual-meaning words, it is really difficult to identify the meaning of the Web-page. In this paper, we are introdu...

  13. Building interactive simulations in a Web page design program.

    Science.gov (United States)

    Kootsey, J Mailen; Siriphongs, Daniel; McAuley, Grant

    2004-01-01

    A new Web software architecture, NumberLinX (NLX), has been integrated into a commercial Web design program to produce a drag-and-drop environment for building interactive simulations. NLX is a library of reusable objects written in Java, including input, output, calculation, and control objects. The NLX objects were added to the palette of available objects in the Web design program to be selected and dropped on a page. Inserting an object in a Web page is accomplished by adding a template block of HTML code to the page file. HTML parameters in the block must be set to user-supplied values, so the HTML code is generated dynamically, based on user entries in a popup form. Implementing the object inspector for each object permits the user to edit object attributes in a form window. Except for model definition, the combination of the NLX architecture and the Web design program permits construction of interactive simulation pages without writing or inspecting code.

  14. Is Domain Highlighting Actually Helpful in Identifying Phishing Web Pages?

    Science.gov (United States)

    Xiong, Aiping; Proctor, Robert W; Yang, Weining; Li, Ninghui

    2017-06-01

    To evaluate the effectiveness of domain highlighting in helping users identify whether Web pages are legitimate or spurious. As a component of the URL, a domain name can be overlooked. Consequently, browsers highlight the domain name to help users identify which Web site they are visiting. Nevertheless, few studies have assessed the effectiveness of domain highlighting, and the only formal study confounded highlighting with instructions to look at the address bar. We conducted two phishing detection experiments. Experiment 1 was run online: Participants judged the legitimacy of Web pages in two phases. In Phase 1, participants were to judge the legitimacy based on any information on the Web page, whereas in Phase 2, they were to focus on the address bar. Whether the domain was highlighted was also varied. Experiment 2 was conducted similarly but with participants in a laboratory setting, which allowed tracking of fixations. Participants differentiated the legitimate and fraudulent Web pages better than chance. There was some benefit of attending to the address bar, but domain highlighting did not provide effective protection against phishing attacks. Analysis of eye-gaze fixation measures was in agreement with the task performance, but heat-map results revealed that participants' visual attention was attracted by the highlighted domains. Failure to detect many fraudulent Web pages even when the domain was highlighted implies that users lacked knowledge of Web page security cues or how to use those cues. Potential applications include development of phishing prevention training incorporating domain highlighting with other methods to help users identify phishing Web pages.

  15. Digital libraries and World Wide Web sites and page persistence.

    Directory of Open Access Journals (Sweden)

    Wallace Koehler

    1999-01-01

    Web pages and Web sites, some argue, can either be collected as elements of digital or hybrid libraries, or, as others would have it, the WWW is itself a library. We begin with the assumption that Web pages and Web sites can be collected and categorized. The paper explores the proposition that the WWW constitutes a library. We conclude that the Web is not a digital library. However, its component parts can be aggregated and included as parts of digital library collections. These, in turn, can be incorporated into "hybrid libraries." These are libraries with both traditional and digital collections. Material on the Web can be organized and managed. Native documents can be collected in situ, disseminated, distributed, catalogued, indexed, controlled, in traditional library fashion. The Web therefore is not a library, but material for library collections is selected from the Web. That said, the Web and its component parts are dynamic. Web documents undergo two kinds of change. The first type, the type addressed in this paper, is "persistence" or the existence or disappearance of Web pages and sites, or in a word the lifecycle of Web documents. "Intermittence" is a variant of persistence, and is defined as the disappearance but reappearance of Web documents. At any given time, about five percent of Web pages are intermittent, which is to say they are gone but will return. Over time a Web collection erodes. Based on a 120-week longitudinal study of a sample of Web documents, it appears that the half-life of a Web page is somewhat less than two years and the half-life of a Web site is somewhat more than two years. That is to say, an unweeded Web document collection created two years ago would contain the same number of URLs, but only half of those URLs point to content. The second type of change Web documents experience is change in Web page or Web site content. Again based on the Web document samples, very nearly all Web pages and sites undergo some
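
    The half-life framing lends itself to a quick check of the claim that a two-year-old, unweeded collection retains roughly half of its working URLs. A minimal sketch, assuming simple exponential decay with the page half-life of about two years reported in the abstract (the exact decay model is an assumption):

```python
def surviving_fraction(age_years, half_life_years=2.0):
    """Fraction of URLs still resolving after age_years, under exponential decay."""
    return 0.5 ** (age_years / half_life_years)

if __name__ == "__main__":
    for age in (0.5, 1, 2, 4):
        print(f"{age} yr: {surviving_fraction(age):.0%} of page URLs still live")
```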

  16. Pro single page application development using Backbone.js and ASP.NET

    CERN Document Server

    Fink, Gil

    2014-01-01

    One of the most important and exciting trends in web development in recent years is the move towards single page applications, or SPAs. Instead of clicking through hyperlinks and waiting for each page to load, the user loads a site once and all the interactivity is handled fluidly by a rich JavaScript front end. If you come from a background in ASP.NET development, you'll be used to handling most interactions on the server side. Pro Single Page Application Development will guide you through your transition to this powerful new application type.The book starts in Part I by laying the groundwork

  17. [An evaluation of the quality of health web pages using a validated questionnaire].

    Science.gov (United States)

    Conesa Fuentes, Maria del Carmen; Aguinaga Ontoso, Enrique; Hernández Morante, Juan José

    2011-01-01

    The objective of the present study was to evaluate the quality of general health information in Spanish-language web pages, and of the official Regional Services web pages from the different Autonomous Regions. It is a cross-sectional study. We used a previously validated questionnaire to study the present state of health information on the Internet from a lay-user's point of view. By means of PageRank (Google®), we obtained a group of web pages, including a total of 65 health web pages. We applied some exclusion criteria, and finally obtained a total of 36 web pages. We also analyzed the official web pages from the different Health Services in Spain (19 web pages), making a total of 54 health web pages. In the light of our data, we observed that the quality of the general health information web pages was generally rather low, especially regarding the information quality. Not one page reached the maximum score (19 points). The mean score of the web pages was 9.8±2.8. In conclusion, to avoid the problems arising from the lack of quality, health professionals should design advertising campaigns and other media to teach the lay-user how to evaluate information quality. Copyright © 2009 Elsevier España, S.L. All rights reserved.

  18. An Analysis of Academic Library Web Pages for Faculty

    Science.gov (United States)

    Gardner, Susan J.; Juricek, John Eric; Xu, F. Grace

    2008-01-01

    Web sites are increasingly used by academic libraries to promote key services and collections to teaching faculty. This study analyzes the content, location, language, and technological features of fifty-four academic library Web pages designed especially for faculty to expose patterns in the development of these pages.

  19. Enriching the trustworthiness of health-related web pages.

    Science.gov (United States)

    Gaudinat, Arnaud; Cruchet, Sarah; Boyer, Celia; Chrawdhry, Pravir

    2011-06-01

    We present an experimental mechanism for enriching web content with quality metadata. This mechanism is based on a simple and well-known initiative in the field of the health-related web, the HONcode. The Resource Description Framework (RDF) format and the Dublin Core Metadata Element Set were used to formalize these metadata. The model of trust proposed is based on a quality model for health-related web pages that has been tested in practice over a period of thirteen years. Our model has been explored in the context of a project to develop a research tool that automatically detects the occurrence of quality criteria in health-related web pages.

  20. Interstellar Initiative Web Page Design

    Science.gov (United States)

    Mehta, Alkesh

    1999-01-01

    This summer at NASA/MSFC, I have contributed to two projects: Interstellar Initiative Web Page Design and Lenz's Law Relative Motion Demonstration. In the Web Design Project, I worked on an Outline. The Web Design Outline was developed to provide a foundation for a Hierarchy Tree Structure. The Outline would help design a Website information base for future and near-term missions. The Website would give in-depth information on Propulsion Systems and Interstellar Travel. The Lenz's Law Relative Motion Demonstrator is discussed in this volume by Russell Lee.

  1. Near-Duplicate Web Page Detection: An Efficient Approach Using Clustering, Sentence Feature and Fingerprinting

    Directory of Open Access Journals (Sweden)

    J. Prasanna Kumar

    2013-02-01

    Duplicate and near-duplicate web pages are the chief concerns for web search engines. In reality, they incur enormous space to store the indexes, ultimately slowing down and increasing the cost of serving results. A variety of techniques have been developed to identify pairs of web pages that are "similar" to each other. The problem of finding near-duplicate web pages has been a subject of research in the database and web-search communities for some years. In order to identify the near-duplicate web pages, we make use of sentence-level features along with a fingerprinting method. When a large number of web documents are under consideration for the detection of near-duplicate web pages, we first use K-mode clustering, and subsequently sentence feature and fingerprint comparison are used. Using these steps, we exactly identify the near-duplicate web pages in an efficient manner. The experimentation is carried out on web page collections and the results ensured the efficiency of the proposed approach in detecting near-duplicate web pages.
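
    Fingerprinting plus sentence-level features is the core of the comparison step. The sketch below is a generic illustration of that idea, not the paper's exact method: it hashes word shingles of each sentence into a fingerprint set and measures overlap with Jaccard similarity; the shingle size and the sample pages are assumptions.

```python
import hashlib
import re

def sentence_fingerprints(text, k=3):
    """Hash every k-word shingle of every sentence into a fingerprint set."""
    fingerprints = set()
    for sentence in re.split(r"[.!?]+", text.lower()):
        words = sentence.split()
        for i in range(len(words) - k + 1):
            shingle = " ".join(words[i:i + k])
            fingerprints.add(hashlib.md5(shingle.encode()).hexdigest()[:8])
    return fingerprints

def jaccard(a, b):
    return len(a & b) / len(a | b) if (a | b) else 0.0

if __name__ == "__main__":
    p1 = "Cheap flights to Zagreb. Book your summer holiday today. Offers end soon."
    p2 = "Cheap flights to Zagreb. Book your summer holiday today. Offers end on Friday."
    similarity = jaccard(sentence_fingerprints(p1), sentence_fingerprints(p2))
    print(f"fingerprint similarity: {similarity:.2f}")  # flag as near-duplicate above a chosen threshold
```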

  2. Page sample size in web accessibility testing: how many pages is enough?

    NARCIS (Netherlands)

    Velleman, Eric Martin; van der Geest, Thea

    2013-01-01

    Various countries and organizations use a different sampling approach and sample size of web pages in accessibility conformance tests. We are conducting a systematic analysis to determine how many pages is enough for testing whether a website is compliant with standard accessibility guidelines. This

  3. Reporting on post-menopausal hormone therapy: an analysis of gynaecologists' web pages.

    Science.gov (United States)

    Bucksch, Jens; Kolip, Petra; Deitermann, Bernhilde

    2004-01-01

    The present study was designed to analyse Web pages of German gynaecologists with regard to postmenopausal hormone therapy (HT). There is a growing body of evidence, that the overall health risks of HT exceed the benefits. Making one's own informed choice has become a central concern for menopausal women. The Internet is an important source of health information, but the quality is often dubious. The study focused on the analysis of basic criteria such as last modification date and quality of the HT information content. The results of the Women's Health Initiative Study (WHI) were used as a benchmark. We searched for relevant Web pages by entering a combination of key words (9 x 13 = 117) into the search engine www.google.de. Each Web page was analysed using a standardized questionnaire. The basic criteria and the quality of content on each Web page were separately categorized by two evaluators. Disagreements were resolved by discussion. Of the 97 websites identified, basic criteria were not met by the majority. For example, the modification date was displayed by only 23 (23.7%) Web pages. The quality of content of most Web pages regarding HT was inaccurate and incomplete. Whilst only nine (9.3%) took up a balanced position, 66 (68%) recommended HT without any restrictions. In 22 cases the recommendation was indistinct and none of the sites refused HT. With regard to basic criteria, there was no difference between HT-recommending Web pages and sites with balanced position. Evidence-based information resulting from the WHI trial was insufficiently represented on gynaecologists' Web pages. Because of the growing number of consumers looking online for health information, the danger of obtaining harmful information has to be minimized. Web pages of gynaecologists do not appear to be recommendable for women because they do not provide recent evidence-based findings about HT.

  4. Going, going, still there: using the WebCite service to permanently archive cited web pages.

    Science.gov (United States)

    Eysenbach, Gunther; Trudel, Mathieu

    2005-12-30

    Scholars are increasingly citing electronic "web references" which are not preserved in libraries or full text archives. WebCite is a new standard for citing web references. To "webcite" a document involves archiving the cited Web page through www.webcitation.org and citing the WebCite permalink instead of (or in addition to) the unstable live Web page. This journal has amended its "instructions for authors" accordingly, asking authors to archive cited Web pages before submitting a manuscript. Almost 200 other journals are already using the system. We discuss the rationale for WebCite, its technology, and how scholars, editors, and publishers can benefit from the service. Citing scholars initiate an archiving process of all cited Web references, ideally before they submit a manuscript. Authors of online documents and websites which are expected to be cited by others can ensure that their work is permanently available by creating an archived copy using WebCite and providing the citation information including the WebCite link on their Web document(s). Editors should ask their authors to cache all cited Web addresses (Uniform Resource Locators, or URLs) "prospectively" before submitting their manuscripts to their journal. Editors and publishers should also instruct their copyeditors to cache cited Web material if the author has not done so already. Finally, WebCite can process publisher submitted "citing articles" (submitted for example as eXtensible Markup Language [XML] documents) to automatically archive all cited Web pages shortly before or on publication. Finally, WebCite can act as a focussed crawler, caching retrospectively references of already published articles. Copyright issues are addressed by honouring respective Internet standards (robot exclusion files, no-cache and no-archive tags). Long-term preservation is ensured by agreements with libraries and digital preservation organizations. The resulting WebCite Index may also have applications for research

  5. Web page sorting algorithm based on query keyword distance relation

    Science.gov (United States)

    Yang, Han; Cui, Hong Gang; Tang, Hao

    2017-08-01

    In order to optimize page sorting, we propose a clustering idea for query keywords based on the distance relations among the search keywords within a web page. This is converted into a degree of aggregation of the search keywords in the web page. Building on the PageRank algorithm, a clustering-degree factor for the query keywords is added so that it can take part in the quantitative calculation. This paper thus proposes an improved PageRank-based algorithm built on the distance relation between search keywords. The experimental results show the feasibility and effectiveness of the method.
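
    The key quantity here is an aggregation (clustering) degree of the query keywords inside a page, which is then folded into PageRank. The exact formula is not reproduced in the abstract, so the sketch below is one plausible reading under stated assumptions: score keyword aggregation as the inverse of the mean pairwise distance between keyword occurrences, then blend it with an externally supplied PageRank value; the beta weight and example pages are invented.

```python
from itertools import combinations

def aggregation_degree(text, keywords):
    """Inverse of the mean pairwise token distance between query-keyword occurrences."""
    tokens = [t.strip(".,!?;:") for t in text.lower().split()]
    positions = [i for i, tok in enumerate(tokens) if tok in keywords]
    if len(positions) < 2:
        return 0.0
    distances = [abs(a - b) for a, b in combinations(positions, 2)]
    return 1.0 / (sum(distances) / len(distances))

def blended_score(pagerank_value, text, keywords, beta=0.5):
    """Hypothetical blend of a link-based PageRank value and keyword aggregation (beta is an assumption)."""
    return (1 - beta) * pagerank_value + beta * aggregation_degree(text, keywords)

if __name__ == "__main__":
    query = {"solar", "panel"}
    close = "Install a solar panel on your roof to cut your bills."
    spread = "Solar energy is popular. Many sentences later this page mentions a panel."
    print(blended_score(0.01, close, query))   # keywords adjacent: higher aggregation
    print(blended_score(0.01, spread, query))  # keywords far apart: lower aggregation
```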

  6. In-Degree and PageRank of web pages: why do they follow similar power laws?

    NARCIS (Netherlands)

    Litvak, Nelli; Scheinhardt, Willem R.W.; Volkovich, Y.

    2009-01-01

    PageRank is a popularity measure designed by Google to rank Web pages. Experiments confirm that PageRank values obey a power law with the same exponent as In-Degree values. This paper presents a novel mathematical model that explains this phenomenon. The relation between PageRank and In-Degree is

  7. Learning Structural Classification Rules for Web-page Categorization

    NARCIS (Netherlands)

    Stuckenschmidt, Heiner; Hartmann, Jens; Van Harmelen, Frank

    2002-01-01

    Content-related metadata plays an important role in the effort of developing intelligent web applications. One of the most established form of providing content-related metadata is the assignment of web-pages to content categories. We describe the Spectacle system for classifying individual web

  8. In-degree and pageRank of web pages: Why do they follow similar power laws?

    NARCIS (Netherlands)

    Litvak, Nelli; Scheinhardt, Willem R.W.; Volkovich, Y.

    The PageRank is a popularity measure designed by Google to rank Web pages. Experiments confirm that the PageRank obeys a 'power law' with the same exponent as the In-Degree. This paper presents a novel mathematical model that explains this phenomenon. The relation between the PageRank and In-Degree

  9. Enhancing the Ranking of a Web Page in the Ocean of Data

    Directory of Open Access Journals (Sweden)

    Hitesh KUMAR SHARMA

    2013-10-01

    In today's world, the web is considered an ocean of data and information (text, videos, multimedia, etc.) consisting of millions and millions of web pages in which web pages are linked with each other like a tree. It is often argued that, especially considering the dynamics of the internet, too much time has passed since the scientific work on PageRank for it still to be the basis for the ranking methods of the Google search engine. There is no doubt that within the past years many changes, adjustments and modifications regarding the ranking methods of Google have most likely taken place, but PageRank was absolutely crucial for Google's success, so that at least the fundamental concept behind PageRank should still be constitutive. This paper describes the components which affect the ranking of web pages and helps in increasing the popularity of a web site. By adapting these factors, website developers can increase their site's page rank. Within the PageRank concept, the rank of a document is given by the rank of those documents which link to it. Their rank again is given by the rank of documents which link to them. The PageRank of a document is therefore always determined recursively by the PageRank of other documents.
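
    The recursive definition paraphrased in this abstract is the original PageRank formula of Brin and Page. With damping factor d (commonly 0.85), pages T_1, ..., T_n that link to page A, and C(T_i) the number of outgoing links of T_i, it reads:

```latex
PR(A) = (1 - d) + d \left( \frac{PR(T_1)}{C(T_1)} + \cdots + \frac{PR(T_n)}{C(T_n)} \right)
```

    Some presentations divide the (1 - d) term by the total number of pages N so that the PageRank values form a probability distribution; either way, the vector is computed iteratively until it converges.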

  10. THE DIFFERENCE BETWEEN DEVELOPING SINGLE PAGE APPLICATION AND TRADITIONAL WEB APPLICATION BASED ON MECHATRONICS ROBOT LABORATORY ONAFT APPLICATION

    Directory of Open Access Journals (Sweden)

    V. Solovei

    2018-04-01

    Today most desktop and mobile applications have analogues in the form of web-based applications. With the evolution of development technologies and web technologies, web applications have grown in functionality to match desktop applications. A web application consists of two parts: the client part and the server part. The client part is responsible for providing the user with visual information through the browser. The server part is responsible for processing and storing data. MPAs appeared simultaneously with the Internet. Multiple-page applications work in a "traditional" way: every change, e.g. displaying data or submitting data back to the server, loads a new page. With the advent of AJAX, MPAs learned to load not the whole page but only a part of it, which eventually led to the appearance of the SPA. SPA is a development principle in which only one page is transferred to the client part and content is loaded into a certain part of that page without reloading it, which speeds up the application and brings the user experience close to the level of desktop applications. Based on the SPA approach, the Mechatronics Robot Laboratory ONAFT application was designed to automate the management process. The application implements a client-server architecture. The server part consists of a RESTful API, which provides unified access to the application functionality, and a database for storing information. Since the client part is an SPA, this reduces the load on the connection to the server and improves the user experience.
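
    The architectural split described here (a RESTful API plus a single downloaded page that swaps content in place) can be shown with a tiny server-side sketch. This is a generic illustration, not the laboratory's actual code: it assumes Flask and a made-up /api/robots resource. An SPA client would fetch the JSON endpoint and redraw one region of the already-loaded page, whereas the MPA route returns a complete HTML document for every request.

```python
from flask import Flask, jsonify, render_template_string

app = Flask(__name__)

# Toy data standing in for the laboratory's equipment records (invented for the example).
ROBOTS = [{"id": 1, "name": "Delta arm"}, {"id": 2, "name": "Mobile platform"}]

@app.route("/api/robots")
def robots_json():
    # SPA style: the single page stays loaded; the client fetches JSON and updates part of the page.
    return jsonify(robots=ROBOTS)

@app.route("/robots")
def robots_page():
    # MPA style: every request returns a complete HTML document that replaces the current page.
    return render_template_string(
        "<html><body><ul>{% for r in robots %}<li>{{ r.name }}</li>{% endfor %}</ul></body></html>",
        robots=ROBOTS)

if __name__ == "__main__":
    app.run(debug=True)
```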

  11. An ant colony optimization based feature selection for web page classification.

    Science.gov (United States)

    Saraç, Esra; Özel, Selma Ayşe

    2014-01-01

    The increased popularity of the web has caused the inclusion of huge amount of information to the web, and as a result of this explosive information growth, automated web page classification systems are needed to improve search engines' performance. Web pages have a large number of features such as HTML/XML tags, URLs, hyperlinks, and text contents that should be considered during an automated classification process. The aim of this study is to reduce the number of features to be used to improve runtime and accuracy of the classification of web pages. In this study, we used an ant colony optimization (ACO) algorithm to select the best features, and then we applied the well-known C4.5, naive Bayes, and k nearest neighbor classifiers to assign class labels to web pages. We used the WebKB and Conference datasets in our experiments, and we showed that using the ACO for feature selection improves both accuracy and runtime performance of classification. We also showed that the proposed ACO based algorithm can select better features with respect to the well-known information gain and chi square feature selection methods.
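
    ACO-based feature selection, as summarized here, has artificial ants sample feature subsets with probability proportional to pheromone values, evaluates each subset with a classifier, and reinforces the pheromone of features that appear in good subsets. The sketch below is a heavily simplified, generic version of that loop, not the authors' algorithm; it assumes scikit-learn is available and uses a kNN classifier on synthetic data purely for illustration.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

def aco_feature_selection(X, y, n_select=5, n_ants=10, n_iter=20, evaporation=0.1, seed=0):
    """Simplified ACO: ants sample feature subsets by pheromone, good subsets reinforce pheromone."""
    rng = np.random.default_rng(seed)
    n_features = X.shape[1]
    pheromone = np.ones(n_features)
    best_subset, best_score = None, -np.inf
    for _ in range(n_iter):
        for _ in range(n_ants):
            probs = pheromone / pheromone.sum()
            subset = rng.choice(n_features, size=n_select, replace=False, p=probs)
            score = cross_val_score(KNeighborsClassifier(), X[:, subset], y, cv=3).mean()
            if score > best_score:
                best_subset, best_score = subset, score
            pheromone[subset] += score            # deposit pheromone on the features this ant used
        pheromone *= (1 - evaporation)            # evaporation keeps old trails from dominating
    return sorted(int(i) for i in best_subset), best_score

if __name__ == "__main__":
    X, y = make_classification(n_samples=300, n_features=20, n_informative=5, random_state=0)
    print(aco_feature_selection(X, y))
```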

  12. Google Analytics: Single Page Traffic Reports

    Science.gov (United States)

    These are pages that live outside of Google Analytics (GA) but allow you to view GA data for any individual page on either the public EPA web or EPA intranet. You do need to log in to Google Analytics to view them.

  13. A Survey on PageRank Computing

    OpenAIRE

    Berkhin, Pavel

    2005-01-01

    This survey reviews the research related to PageRank computing. Components of a PageRank vector serve as authority weights for web pages independent of their textual content, solely based on the hyperlink structure of the web. PageRank is typically used as a web search ranking component. This defines the importance of the model and the data structures that underlie PageRank processing. Computing even a single PageRank is a difficult computational task. Computing many PageRanks is a much mor...

  14. Readability of the web: a study on 1 billion web pages

    NARCIS (Netherlands)

    de Heus, Marije; Hiemstra, Djoerd

    We have performed a readability study on more than 1 billion web pages. The Automated Readability Index was used to determine the average grade level required to easily comprehend a website. Some of the results are that a 16-year-old can easily understand 50% of the web and an 18-year-old can easily
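
    The Automated Readability Index used in this study is a closed formula over character, word, and sentence counts: ARI = 4.71·(characters/words) + 0.5·(words/sentences) − 21.43, where the result approximates the US school grade level needed to read the text. A minimal sketch with naive tokenization (the exact preprocessing used in the study is not specified here):

```python
import re

def automated_readability_index(text):
    """ARI = 4.71*(chars/words) + 0.5*(words/sentences) - 21.43 (approximate US grade level)."""
    words = re.findall(r"[A-Za-z0-9']+", text)
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    chars = sum(len(w) for w in words)
    return 4.71 * chars / len(words) + 0.5 * len(words) / len(sentences) - 21.43

if __name__ == "__main__":
    sample = ("We have performed a readability study on more than one billion web pages. "
              "The index estimates the grade level required to understand a page.")
    print(round(automated_readability_index(sample), 1))
```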

  15. The impact of visual layout factors on performance in Web pages: a cross-language study.

    Science.gov (United States)

    Parush, Avi; Shwarts, Yonit; Shtub, Avy; Chandra, M Jeya

    2005-01-01

    Visual layout has a strong impact on performance and is a critical factor in the design of graphical user interfaces (GUIs) and Web pages. Many design guidelines employed in Web page design were inherited from human performance literature and GUI design studies and practices. However, few studies have investigated the more specific patterns of performance with Web pages that may reflect some differences between Web page and GUI design. We investigated interactions among four visual layout factors in Web page design (quantity of links, alignment, grouping indications, and density) in two experiments: one with pages in Hebrew, entailing right-to-left reading, and the other with English pages, entailing left-to-right reading. Some performance patterns (measured by search times and eye movements) were similar between languages. Performance was particularly poor in pages with many links and variable densities, but it improved with the presence of uniform density. Alignment was not shown to be a performance-enhancing factor. The findings are discussed in terms of the similarities and differences in the impact of layout factors between GUIs and Web pages. Actual or potential applications of this research include specific guidelines for Web page design.

  16. Digital Ethnography: Library Web Page Redesign among Digital Natives

    Science.gov (United States)

    Klare, Diane; Hobbs, Kendall

    2011-01-01

    Presented with an opportunity to improve Wesleyan University's dated library home page, a team of librarians employed ethnographic techniques to explore how its users interacted with Wesleyan's current library home page and web pages in general. Based on the data that emerged, a group of library staff and members of the campus' information…

  17. ARL Physics Web Pages: An Evaluation by Established, Transitional and Emerging Benchmarks.

    Science.gov (United States)

    Duffy, Jane C.

    2002-01-01

    Provides an overview of characteristics among Association of Research Libraries (ARL) physics Web pages. Examines current academic Web literature and from that develops six benchmarks to measure physics Web pages: ease of navigation; logic of presentation; representation of all forms of information; engagement of the discipline; interactivity of…

  18. Document representations for classification of short web-page descriptions

    Directory of Open Access Journals (Sweden)

    Radovanović Miloš

    2008-01-01

    Motivated by applying Text Categorization to classification of Web search results, this paper describes an extensive experimental study of the impact of bag-of-words document representations on the performance of five major classifiers - Naïve Bayes, SVM, Voted Perceptron, kNN and C4.5. The texts, representing short Web-page descriptions sorted into a large hierarchy of topics, are taken from the dmoz Open Directory Web-page ontology, and classifiers are trained to automatically determine the topics which may be relevant to a previously unseen Web-page. Different transformations of input data: stemming, normalization, logtf and idf, together with dimensionality reduction, are found to have a statistically significant improving or degrading effect on classification performance measured by classical metrics - accuracy, precision, recall, F1 and F2. The emphasis of the study is not on determining the best document representation which corresponds to each classifier, but rather on describing the effects of every individual transformation on classification, together with their mutual relationships.
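
    The transformations listed in this abstract (stemming, normalization, logtf, idf) map onto options of a standard bag-of-words vectorizer. As a rough illustration only, with scikit-learn in place of whatever toolkit the authors used and without a stemmer, the sketch below compares a plain term-count representation with a logtf+idf representation on a few invented short descriptions.

```python
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

docs = [
    "open source web server software",
    "apache http server configuration guide",
    "chocolate cake recipe with dark cocoa",
    "easy banana bread baking recipe",
]
labels = ["computers", "computers", "cooking", "cooking"]

representations = {
    "raw counts": CountVectorizer(),
    "logtf + idf": TfidfVectorizer(sublinear_tf=True, use_idf=True),  # sublinear_tf applies 1 + log(tf)
}

for name, vectorizer in representations.items():
    clf = make_pipeline(vectorizer, MultinomialNB())
    score = cross_val_score(clf, docs, labels, cv=2).mean()
    print(f"{name}: mean accuracy {score:.2f}")
```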

  19. Social Responsibility and Corporate Web Pages: Self-Presentation or Agenda-Setting?

    Science.gov (United States)

    Esrock, Stuart L.; Leichty, Greg B.

    1998-01-01

    Examines how corporate entities use the Web to present themselves as socially responsible citizens and to advance policy positions. Samples randomly "Fortune 500" companies, revealing that, although 90% had Web pages and 82% of the sites addressed a corporate social responsibility issue, few corporations used their pages to monitor…

  20. Does content affect whether users remember that Web pages were hyperlinked?

    Science.gov (United States)

    Jones, Keith S; Ballew, Timothy V; Probst, C Adam

    2008-10-01

    We determined whether memory for hyperlinks improved when they represented relations between the contents of the Web pages. J. S. Farris (2003) found that memory for hyperlinks improved when they represented relations between the contents of the Web pages. However, Farris's (2003) participants could have used their knowledge of site content to answer questions about relations that were instantiated via the site's content and its hyperlinks. In Experiment 1, users navigated a Web site and then answered questions about relations that were instantiated only via content, only via hyperlinks, and via content and hyperlinks. Unlike Farris (2003), we split the latter into two sets. One asked whether certain content elements were related, and the other asked whether certain Web pages were hyperlinked. Experiment 2 replicated Experiment 1 with one modification: The questions that were asked about relations instantiated via content and hyperlinks were changed so that each question's wrong answer was also related to the question's target. Memory for hyperlinks improved when they represented relations instantiated within the content of the Web pages. This was true when (a) questions about content and hyperlinks were separated (Experiment 1) and (b) each question's wrong answer was also related to the question's target (Experiment 2). The accuracy of users' mental representations of local architecture depended on whether hyperlinks were related to the site's content. Designers who want users to remember hyperlinks should associate those hyperlinks with content that reflects the relation between the contents on the Web pages.

  1. AUTOMATIC TAGGING OF PERSIAN WEB PAGES BASED ON N-GRAM LANGUAGE MODELS USING MAPREDUCE

    Directory of Open Access Journals (Sweden)

    Saeed Shahrivari

    2015-07-01

Full Text Available Page tagging is one of the most important facilities for increasing the accuracy of information retrieval in the web. Tags are simple pieces of data that usually consist of one or several words, and briefly describe a page. Tags provide useful information about a page and can be used for boosting the accuracy of searching, document clustering, and result grouping. The most accurate solution to page tagging is using human experts. However, when the number of pages is large, humans cannot be used, and some automatic solutions should be used instead. We propose a solution called PerTag which can automatically tag a set of Persian web pages. PerTag is based on n-gram models and uses the tf-idf method plus some effective Persian language rules to select proper tags for each web page. Since our target is huge sets of web pages, PerTag is built on top of the MapReduce distributed computing framework. We used a set of more than 500 million Persian web pages during our experiments, and extracted tags for each page using a cluster of 40 machines. The experimental results show that PerTag is both fast and accurate.
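
PerTag itself combines n-gram language models, Persian-specific rules and a MapReduce deployment, none of which is reproduced here; the sketch below shows only a tf-idf style tag-scoring core in plain Python, on a hypothetical two-document corpus:

```python
# Select the k highest tf-idf terms of each page as candidate tags.
import math
from collections import Counter

def top_tags(pages, k=3):
    """pages: list of token lists; returns the k highest-scoring terms per page."""
    n = len(pages)
    df = Counter()
    for page in pages:
        df.update(set(page))                      # document frequency per term
    tags = []
    for page in pages:
        tf = Counter(page)
        scores = {t: (1 + math.log(c)) * math.log(n / df[t]) for t, c in tf.items()}
        tags.append([t for t, _ in sorted(scores.items(), key=lambda x: -x[1])[:k]])
    return tags

docs = [
    "pagerank web graph ranking web".split(),
    "persian language model tagging model".split(),
]
print(top_tags(docs))
```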

  2. Improving Web Page Retrieval using Search Context from Clicked Domain Names

    NARCIS (Netherlands)

    Li, R.

    Search context is a crucial factor that helps to understand a user’s information need in ad-hoc Web page retrieval. A query log of a search engine contains rich information on issued queries and their corresponding clicked Web pages. The clicked data implies its relevance to the query and can be

  3. Identification of Malicious Web Pages by Inductive Learning

    Science.gov (United States)

    Liu, Peishun; Wang, Xuefang

Malicious web pages have become an increasing threat to computer systems in recent years. Traditional anti-virus techniques focus typically on detection of the static signatures of malware and are ineffective against these new threats because they cannot deal with zero-day attacks. In this paper, a novel classification method for detecting malicious web pages is presented. This method performs generalization and specialization of attack patterns based on inductive learning, which can be used for updating and expanding the knowledge database. The attack pattern is established from an example and generalized by inductive learning, so that it can be used to detect unknown attacks whose behavior is similar to the example.

  4. Referencing web pages and e-journals.

    Science.gov (United States)

    Bryson, David

    2013-12-01

One of the areas that can confuse students and authors alike is how to reference web pages and electronic journals (e-journals). The aim of this professional development article is to go back to first principles for referencing and to show, with examples, how these sources should be referenced.

  5. Automatic Hidden-Web Table Interpretation by Sibling Page Comparison

    Science.gov (United States)

    Tao, Cui; Embley, David W.

The longstanding problem of automatic table interpretation still eludes us. Its solution would not only be an aid to table processing applications such as large volume table conversion, but would also be an aid in solving related problems such as information extraction and semi-structured data management. In this paper, we offer a conceptual modeling solution for the common special case in which so-called sibling pages are available. The sibling pages we consider are pages on the hidden web, commonly generated from underlying databases. We compare them to identify and connect nonvarying components (category labels) and varying components (data values). We tested our solution using more than 2,000 tables in source pages from three different domains—car advertisements, molecular biology, and geopolitical information. Experimental results show that the system can successfully identify sibling tables, generate structure patterns, interpret tables using the generated patterns, and automatically adjust the structure patterns, if necessary, as it processes a sequence of hidden-web pages. For these activities, the system was able to achieve an overall F-measure of 94.5%.
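
The core of the sibling-page comparison can be illustrated with a small sketch: cells that stay identical across two sibling pages are treated as category labels, cells that differ as data values. The token lists below are hypothetical stand-ins for already-extracted table cells:

```python
# Hypothetical cell sequences extracted from two sibling pages.
page_a = ["Make", "Toyota", "Model", "Corolla", "Price", "$9,500"]
page_b = ["Make", "Honda", "Model", "Civic", "Price", "$10,200"]

labels = [x for x, y in zip(page_a, page_b) if x == y]       # nonvarying components
values = [(x, y) for x, y in zip(page_a, page_b) if x != y]  # varying components

print(labels)  # ['Make', 'Model', 'Price']
print(values)  # [('Toyota', 'Honda'), ('Corolla', 'Civic'), ('$9,500', '$10,200')]
```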

  6. Search Engine Ranking, Quality, and Content of Web Pages That Are Critical Versus Noncritical of Human Papillomavirus Vaccine.

    Science.gov (United States)

    Fu, Linda Y; Zook, Kathleen; Spoehr-Labutta, Zachary; Hu, Pamela; Joseph, Jill G

    2016-01-01

Online information can influence attitudes toward vaccination. The aim of the present study was to provide a systematic evaluation of the search engine ranking, quality, and content of Web pages that are critical versus noncritical of human papillomavirus (HPV) vaccination. We identified HPV vaccine-related Web pages with the Google search engine by entering 20 terms. We then assessed each Web page for critical versus noncritical bias and for the following quality indicators: authorship disclosure, source disclosure, attribution of at least one reference, currency, exclusion of testimonial accounts, and readability level less than ninth grade. We also determined Web page comprehensiveness in terms of mention of 14 HPV vaccine-relevant topics. Twenty searches yielded 116 unique Web pages. HPV vaccine-critical Web pages comprised roughly a third of the top, top 5- and top 10-ranking Web pages. The prevalence of HPV vaccine-critical Web pages was higher for queries that included term modifiers in addition to root terms. Web pages critical of HPV vaccine overall had a lower quality score than those with a noncritical bias, and they appeared prominently in search engine queries despite being of lower quality and less comprehensive than noncritical Web pages. Copyright © 2016 Society for Adolescent Health and Medicine. Published by Elsevier Inc. All rights reserved.

  7. Design of a Web Page as a complement of educative innovation through MOODLE

    Science.gov (United States)

    Mendiola Ubillos, M. A.; Aguado Cortijo, Pedro L.

    2010-05-01

In the context of Information Technology to impart knowledge and to establish the MOODLE system as a support and complementary tool to on-site educational methodology (b-learning), a Web Page was designed for Agronomic and Food Industry Crops (Plantas de interés Agroalimentario) during the 2006-07 course. This web was inserted in the Technical University of Madrid (Universidad Politécnica de Madrid) computer system to facilitate the students' first contact with the contents of this subject. On this page the objectives and methodology, personal work planning, and the subject program, plus the activities, are shown. At another web site, the evaluation criteria and recommended bibliography are located. The objective of this web page has been to make the information necessary in the learning process more transparent and accessible, presenting it in a more attractive frame. This page has been updated and modified in each academic course offered since its first implementation. We have added, in some cases, new specific links to increase its usefulness. At the end of each course a test is applied to the students that take this subject. We have asked which elements they would like to modify, delete and add to this web page. In this way the direct users give their point of view and help to improve the web page each course.

  8. Categorization of web pages - Performance enhancement to search engine

    Digital Repository Service at National Institute of Oceanography (India)

    Lakshminarayana, S.


  9. Developing a web page: bringing clinics online.

    Science.gov (United States)

    Peterson, Ronnie; Berns, Susan

    2004-01-01

    Introducing clinical staff education, along with new policies and procedures, to over 50 different clinical sites can be a challenge. As any staff educator will confess, getting people to attend an educational inservice session can be difficult. Clinical staff request training, but no one has time to attend training sessions. Putting the training along with the policies and other information into "neat" concise packages via the computer and over the company's intranet was the way to go. However, how do you bring the clinics online when some of the clinical staff may still be reluctant to turn on their computers for anything other than to gather laboratory results? Developing an easy, fun, and accessible Web page was the answer. This article outlines the development of the first training Web page at the University of Wisconsin Medical Foundation, Madison, WI.

  10. A reverse engineering approach for automatic annotation of Web pages

    NARCIS (Netherlands)

    R. de Virgilio (Roberto); F. Frasincar (Flavius); W. Hop (Walter); S. Lachner (Stephan)

    2013-01-01

The Semantic Web is gaining increasing interest in fulfilling the need of sharing, retrieving, and reusing information. Since Web pages are designed to be read by people, not machines, searching and reusing information on the Web is a difficult task without human participation. To this aim

  11. Environment: General; Grammar & Usage; Money Management; Music History; Web Page Creation & Design.

    Science.gov (United States)

    Web Feet, 2001

    2001-01-01

    Describes Web site resources for elementary and secondary education in the topics of: environment, grammar, money management, music history, and Web page creation and design. Each entry includes an illustration of a sample page on the site and an indication of the grade levels for which it is appropriate. (AEF)

  12. Key-phrase based classification of public health web pages.

    Science.gov (United States)

    Dolamic, Ljiljana; Boyer, Célia

    2013-01-01

This paper describes and evaluates a public health web page classification model based on key phrase extraction and matching. Easily extendable both to new classes and to new languages, this method proves to be a good solution for text classification in the face of a total lack of training data. To evaluate the proposed solution we have used a small collection of public health related web pages created by a double blind manual classification. Our experiments have shown that by choosing an adequate threshold value the desired value for either precision or recall can be achieved.
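
A minimal sketch of the matching step, with made-up classes, key phrases and threshold (the paper's actual vocabulary and multilingual phrase extraction are not reproduced):

```python
# A page is assigned every class whose key phrases it matches at least
# `threshold` times; classes and phrases below are illustrative.
KEY_PHRASES = {
    "vaccination": {"vaccine", "immunization", "booster dose"},
    "nutrition":   {"balanced diet", "vitamin", "calorie intake"},
}

def classify(text, threshold=1):
    text = text.lower()
    matched = {}
    for label, phrases in KEY_PHRASES.items():
        hits = sum(1 for p in phrases if p in text)
        if hits >= threshold:
            matched[label] = hits
    return matched

print(classify("The clinic offers a free vaccine and booster dose schedule."))
# -> {'vaccination': 2}
```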

  13. A framework for automatic annotation of web pages using the Google Rich Snippets vocabulary

    NARCIS (Netherlands)

    Meer, van der J.; Boon, F.; Hogenboom, F.P.; Frasincar, F.; Kaymak, U.

    2011-01-01

    One of the latest developments for the Semantic Web is Google Rich Snippets, a service that uses Web page annotations for displaying search results in a visually appealing manner. In this paper we propose the Automatic Review Recognition and annOtation of Web pages (ARROW) framework, which is able

  14. Building Interactive Simulations in Web Pages without Programming.

    Science.gov (United States)

    Mailen Kootsey, J; McAuley, Grant; Bernal, Julie

    2005-01-01

    A software system is described for building interactive simulations and other numerical calculations in Web pages. The system is based on a new Java-based software architecture named NumberLinX (NLX) that isolates each function required to build the simulation so that a library of reusable objects could be assembled. The NLX objects are integrated into a commercial Web design program for coding-free page construction. The model description is entered through a wizard-like utility program that also functions as a model editor. The complete system permits very rapid construction of interactive simulations without coding. A wide range of applications are possible with the system beyond interactive calculations, including remote data collection and processing and collaboration over a network.

  15. A Quantitative Comparison of Semantic Web Page Segmentation Approaches

    NARCIS (Netherlands)

    Kreuzer, Robert; Hage, J.; Feelders, A.J.

    2015-01-01

    We compare three known semantic web page segmentation algorithms, each serving as an example of a particular approach to the problem, and one self-developed algorithm, WebTerrain, that combines two of the approaches. We compare the performance of the four algorithms for a large benchmark of modern

  16. Modeling user navigation behavior in web by colored Petri nets to determine the user's interest in recommending web pages

    Directory of Open Access Journals (Sweden)

    Mehdi Sadeghzadeh

    2013-01-01

Full Text Available One of the existing challenges in personalization of the web is increasing the efficiency with which a web site meets users' requirements for the content they need. All the information associated with the current user's behavior while navigating the web, together with data obtained from previous users' interactions with the web, can provide the keys needed to recommend services, products, and the information users require. This study presents a formal model based on colored Petri nets to identify the present user's interest, which is then used to recommend the most appropriate pages ahead. In the proposed design, recommendation of pages takes into account information obtained from previous users' profiles as well as the current session of the present user. The model updates the proposed pages as the user clicks through web pages. Moreover, an example web is modeled using CPN Tools. The results of the simulation show that this design improves precision: the evaluation indicates that the results of this method are more objective, and the dynamic recommendations improve the precision criterion by 15% over the static method.

  17. Exploring Cultural Variation in Eye Movements on a Web Page between Americans and Koreans

    Science.gov (United States)

    Yang, Changwoo

    2009-01-01

    This study explored differences in eye movement on a Web page between members of two different cultures to provide insight and guidelines for implementation of global Web site development. More specifically, the research examines whether differences of eye movement exist between the two cultures (American vs. Korean) when viewing a Web page, and…

  18. Building single-page web apps with meteor

    CERN Document Server

    Vogelsteller, Fabian

    2015-01-01

If you are a web developer with basic knowledge of JavaScript and want to take on Web 2.0, build real-time applications, or simply want to write a complete application using only JavaScript and HTML/CSS, this is the book for you. This book is based on Meteor 1.0.

  19. Detection of spam web page using content and link-based techniques

    Indian Academy of Sciences (India)

Spam pages are generally insufficient and inappropriate results for the user. ... kinds of Web spamming techniques: content spam and link spam. 1. Content spam: the ... of the spam pages are machine generated and hence the technique of ...

  20. Arabic web pages clustering and annotation using semantic class features

    Directory of Open Access Journals (Sweden)

    Hanan M. Alghamdi

    2014-12-01

Full Text Available Effectively managing the great amount of data on Arabic web pages and enabling the classification of relevant information are very important research problems. Studies on sentiment text mining have been very limited in the Arabic language because they need to involve deep semantic processing. Therefore, in this paper, we aim to retrieve machine-understandable data with the help of a Web content mining technique to detect covert knowledge within these data. We propose an approach to achieve clustering with semantic similarities. This approach comprises integrating k-means document clustering with semantic feature extraction and document vectorization to group Arabic web pages according to semantic similarities and then show the semantic annotation. The document vectorization helps to transform text documents into a semantic class probability distribution or semantic class density. To reach semantic similarities, the approach extracts the semantic class features and integrates them into the similarity weighting schema. The quality of the clustering result was evaluated using the purity and the mean intra-cluster distance (MICD) measures. We have evaluated the proposed approach on a set of common Arabic news web pages. We have acquired favorable clustering results that are effective in minimizing the MICD, improving the purity and lowering the runtime.
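
The clustering step can be sketched as follows; note that the paper vectorizes pages into semantic-class probability distributions, whereas this sketch, assuming scikit-learn, substitutes plain tf-idf vectors and an invented four-page corpus:

```python
# Group pages by k-means after vectorization; tf-idf stands in for the
# semantic-class features used in the paper.
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

pages = [
    "football league match goal score",
    "league cup football final score",
    "stock market shares fall",
    "investors trade shares on the stock market",
]
X = TfidfVectorizer().fit_transform(pages)
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
# The two football pages and the two finance pages should end up in different clusters.
print(km.labels_)
```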

  1. Web Page Classification Method Using Neural Networks

    Science.gov (United States)

    Selamat, Ali; Omatu, Sigeru; Yanagimoto, Hidekazu; Fujinaka, Toru; Yoshioka, Michifumi

Automatic categorization is the only viable method to deal with the scaling problem of the World Wide Web (WWW). In this paper, we propose a news web page classification method (WPCM). The WPCM uses a neural network with inputs obtained from both the principal components and class profile-based features (CPBF). Each news web page is represented by the term-weighting scheme. As the number of unique words in the collection set is large, principal component analysis (PCA) has been used to select the most relevant features for the classification. The final output of the PCA is then combined with the feature vectors from the class profile, which contains the most regular words in each class, before feeding them to the neural networks. We have manually selected the most regular words that exist in each class and weighted them using an entropy weighting scheme. A fixed number of regular words from each class is used as feature vectors together with the reduced principal components from the PCA. These feature vectors are then used as the input to the neural networks for classification. The experimental evaluation demonstrates that the WPCM method provides acceptable classification accuracy with the sports news datasets.
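
A minimal sketch of the PCA-plus-neural-network part, assuming scikit-learn and a tiny invented corpus (the class-profile features and entropy weighting described above are omitted):

```python
# Reduce term-weight vectors with PCA, then classify with a small neural network.
from sklearn.decomposition import PCA
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neural_network import MLPClassifier

docs = [
    "striker scores twice in cup final",
    "goalkeeper saves penalty in league match",
    "parliament passes new budget bill",
    "senate debates the election law",
]
labels = ["sports", "sports", "politics", "politics"]

X = TfidfVectorizer().fit_transform(docs).toarray()
X_reduced = PCA(n_components=2).fit_transform(X)   # principal components
clf = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
clf.fit(X_reduced, labels)
print(clf.predict(X_reduced))  # should reproduce the training labels on this toy set
```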

  2. Around power law for PageRank components in Buckley-Osthus model of web graph

    OpenAIRE

    Gasnikov, Alexander; Zhukovskii, Maxim; Kim, Sergey; Noskov, Fedor; Plaunov, Stepan; Smirnov, Daniil

    2017-01-01

In this paper we investigate the power law for PageRank components in the Buckley-Osthus model of the web graph. We compare different numerical methods for PageRank calculation and, with the best-performing method, run extensive numerical experiments. These experiments confirm the power-law hypothesis. Finally, we discuss a real model of web ranking based on the classical PageRank approach.
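
For reference, the quantity whose components are studied can be computed with a simple power iteration; the sketch below uses a hypothetical four-node graph and does not reproduce the Buckley-Osthus graph generation or the numerical methods compared in the paper:

```python
# Plain PageRank power iteration on a small directed graph.
def pagerank(links, damping=0.85, iters=100):
    nodes = list(links)
    n = len(nodes)
    rank = {v: 1.0 / n for v in nodes}
    for _ in range(iters):
        new = {v: (1 - damping) / n for v in nodes}
        for v, outs in links.items():
            if outs:
                share = damping * rank[v] / len(outs)
                for w in outs:
                    new[w] += share
            else:                      # dangling node: spread its rank uniformly
                for w in nodes:
                    new[w] += damping * rank[v] / n
        rank = new
    return rank

graph = {"a": ["b", "c"], "b": ["c"], "c": ["a"], "d": ["c"]}
print(pagerank(graph))
```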

  3. First U.S. Web page went up 10 years ago

    CERN Multimedia

    Kornblum, J

    2001-01-01

Wednesday marks the 10th anniversary of the first U.S. Web page, created by Paul Kunz, a physicist at SLAC. He says that if World Wide Web creator Tim Berners-Lee hadn't been so persistent in getting him to attend a demonstration during a visit to CERN, the Web wouldn't have taken off when it did -- maybe not at all.

  4. Ecosystem Food Web Lift-The-Flap Pages

    Science.gov (United States)

    Atwood-Blaine, Dana; Rule, Audrey C.; Morgan, Hannah

    2016-01-01

    In the lesson on which this practical article is based, third grade students constructed a "lift-the-flap" page to explore food webs on the prairie. The moveable papercraft focused student attention on prairie animals' external structures and how the inferred functions of those structures could support further inferences about the…

  5. Design of an Interface for Page Rank Calculation using Web Link Attributes Information

    Directory of Open Access Journals (Sweden)

    Jeyalatha SIVARAMAKRISHNAN

    2010-01-01

Full Text Available This paper deals with Web Structure Mining and different structure mining algorithms like Page Rank, HITS, Trust Rank and Sel-HITS. The functioning of these algorithms is discussed. An incremental algorithm for the calculation of PageRank using an interface has been formulated. This algorithm makes use of Web Link Attributes Information as key parameters and has been implemented using the Visibility and Position of a Link. The application of a Web Structure Mining Algorithm in an Academic Search Application has been discussed. The present work can be a useful input to Web Users, Faculty, Students and Web Administrators in a University Environment.

  6. Virtual real-time inspection of nuclear material via VRML and secure web pages

    International Nuclear Information System (INIS)

    Nilsen, C.; Jortner, J.; Damico, J.; Friesen, J.; Schwegel, J.

    1996-01-01

Sandia National Laboratories' Straight-Line project is working to provide the right sensor information to the right user to enhance the safety, security, and international accountability of nuclear material. One of Straight-Line's efforts is to create a system to securely disseminate this data on the Internet's World-Wide-Web. To make the user interface more intuitive, Sandia has generated a three dimensional VRML (virtual reality modeling language) interface for a secure web page. This paper will discuss the implementation of the Straight-Line secure 3-D web page. A discussion of the pros and cons of a 3-D web page is also presented. The public VRML demonstration described in this paper can be found on the Internet at this address, http://www.ca.sandia.gov/NMM/. A Netscape browser, version 3, is strongly recommended.

  7. Virtual real-time inspection of nuclear material via VRML and secure web pages

    International Nuclear Information System (INIS)

    Nilsen, C.; Jortner, J.; Damico, J.; Friesen, J.; Schwegel, J.

    1997-04-01

Sandia National Laboratories' Straight Line project is working to provide the right sensor information to the right user to enhance the safety, security, and international accountability of nuclear material. One of Straight Line's efforts is to create a system to securely disseminate this data on the Internet's World-Wide-Web. To make the user interface more intuitive, Sandia has generated a three dimensional VRML (virtual reality modeling language) interface for a secure web page. This paper will discuss the implementation of the Straight Line secure 3-D web page. A discussion of the "pros and cons" of a 3-D web page is also presented. The public VRML demonstration described in this paper can be found on the Internet at the following address: http://www.ca.sandia.gov/NMM/. A Netscape browser, version 3, is strongly recommended.

  8. Standards opportunities around data-bearing Web pages.

    Science.gov (United States)

    Karger, David

    2013-03-28

    The evolving Web has seen ever-growing use of structured data, thanks to the way it enhances information authoring, querying, visualization and sharing. To date, however, most structured data authoring and management tools have been oriented towards programmers and Web developers. End users have been left behind, unable to leverage structured data for information management and communication as well as professionals. In this paper, I will argue that many of the benefits of structured data management can be provided to end users as well. I will describe an approach and tools that allow end users to define their own schemas (without knowing what a schema is), manage data and author (not program) interactive Web visualizations of that data using the Web tools with which they are already familiar, such as plain Web pages, blogs, wikis and WYSIWYG document editors. I will describe our experience deploying these tools and some lessons relevant to their future evolution.

  9. Collaborative Design of World Wide Web Pages: A Case Study.

    Science.gov (United States)

    Andrew, Paige G; Musser, Linda R.

    1997-01-01

    This case study of the collaborative design of an earth science World Wide Web page at Pennsylvania State University highlights the role of librarians. Discusses the original Web site and links, planning, the intended audience, and redesign and recommended changes; and considers the potential contributions of librarians. (LRW)

  10. RDFa Primer, Embedding Structured Data in Web Pages

    NARCIS (Netherlands)

W3C; M. Birbeck (Mark); et al.

    2007-01-01

Current Web pages, written in XHTML, contain inherent structured data: calendar events, contact information, photo captions, song titles, copyright licensing information, etc. When authors and publishers can express this data precisely, and when tools can read it robustly, a new world of

  11. A step-by-step solution for embedding user-controlled cines into educational Web pages.

    Science.gov (United States)

    Cornfeld, Daniel

    2008-03-01

    The objective of this article is to introduce a simple method for embedding user-controlled cines into a Web page using a simple JavaScript. Step-by-step instructions are included and the source code is made available. This technique allows the creation of portable Web pages that allow the user to scroll through cases as if seated at a PACS workstation. A simple JavaScript allows scrollable image stacks to be included on Web pages. With this technique, you can quickly and easily incorporate entire stacks of CT or MR images into online teaching files. This technique has the potential for use in case presentations, online didactics, teaching archives, and resident testing.

  12. Teaching Materials to Enhance the Visual Expression of Web Pages for Students Not in Art or Design Majors

    Science.gov (United States)

    Ariga, T.; Watanabe, T.

    2008-01-01

    The explosive growth of the Internet has made the knowledge and skills for creating Web pages into general subjects that all students should learn. It is now common to teach the technical side of the production of Web pages and many teaching materials have been developed. However teaching the aesthetic side of Web page design has been neglected,…

  13. Citations to Web pages in scientific articles: the permanence of archived references.

    Science.gov (United States)

    Thorp, Andrea W; Schriger, David L

    2011-02-01

We validate the use of archiving Internet references by comparing the accessibility of published uniform resource locators (URLs) with corresponding archived URLs over time. We scanned the "Articles in Press" section in Annals of Emergency Medicine from March 2009 through June 2010 for Internet references in research articles. If an Internet reference produced the authors' expected content, the Web page was archived with WebCite (http://www.webcitation.org). Because the archived Web page does not change, we compared it with the original URL to determine whether the original Web page had changed. We attempted to access each original URL and archived Web site URL at 3-month intervals from the time of online publication during an 18-month study period. Once a URL no longer existed or failed to contain the original authors' expected content, it was excluded from further study. The number of original URLs and archived URLs that remained accessible over time was totaled and compared. A total of 121 articles were reviewed and 144 Internet references were found within 55 articles. Of the original URLs, 15% (21/144; 95% confidence interval [CI] 9% to 21%) were inaccessible at publication. During the 18-month observation period, there was no loss of archived URLs (apart from the 4% [5/123; 95% CI 2% to 9%] that could not be archived), whereas 35% (49/139) of the original URLs were lost (46% loss; 95% CI 33% to 61% by the Kaplan-Meier method; the difference between the curves was statistically significant). Archiving a referenced Web page at publication can help preserve the authors' expected information. Copyright © 2010 American College of Emergency Physicians. Published by Mosby, Inc. All rights reserved.

  14. Investigating Large-Scale Internet Abuse Through Web Page Classification

    OpenAIRE

    Der, Matthew Francis

    2015-01-01

The Internet is rife with abuse: examples include spam, phishing, malicious advertising, DNS abuse, search poisoning, click fraud, and so on. To detect, investigate, and defend against such abuse, security efforts frequently crawl large sets of Web sites that need to be classified into categories, e.g., the attacker behind the abuse or the type of abuse. Domain expertise is often required at first, but classifying thousands to even millions of Web pages manually is infeasible. In this disse...

  15. Teaching E-Commerce Web Page Evaluation and Design: A Pilot Study Using Tourism Destination Sites

    Science.gov (United States)

    Susser, Bernard; Ariga, Taeko

    2006-01-01

    This study explores a teaching method for improving business students' skills in e-commerce page evaluation and making Web design majors aware of business content issues through cooperative learning. Two groups of female students at a Japanese university studying either tourism or Web page design were assigned tasks that required cooperation to…

  16. Automatic Removal of Advertising from Web-Page Display / Extended Abstract

    OpenAIRE

    Rowe, Neil C.; Coffman, Jim; Degirmenci, Yilmaz; Hall, Scott; Lee, Shong; Williams, Clifton

    2002-01-01

    Joint Conference on Digital Libraries ’02, July 8-12, Portland, Oregon. The usefulness of the World Wide Web as a digital library of precise and reliable information is reduced by the increasing presence of advertising on Web pages. But no one is required to read or see advertising, and this cognitive censorship can be automated by software. Such filters can be useful to the U.S. government which must permit its employees to use the Web but which is prohibited by law from endorsing c...

  17. Searchers' relevance judgments and criteria in evaluating Web pages in a learning style perspective

    DEFF Research Database (Denmark)

    Papaeconomou, Chariste; Zijlema, Annemarie F.; Ingwersen, Peter

    2008-01-01

The paper presents the results of a case study of searchers' relevance criteria used for assessments of Web pages from a learning style perspective. 15 test persons participated in the experiments based on two simulated work tasks that provided cover stories to trigger their information needs. Two learning styles were examined: Global and Sequential learners. The study applied eye-tracking for the observation of relevance hot spots on Web pages, learning style index analysis and post-search interviews to gain more in-depth information on relevance behavior. Findings reveal that with respect to use ... they are statistically insignificant. When interviewed in retrospect, the resulting profiles tend to become even more similar across learning styles, but a shift occurs from instant assessments, with content features of web pages replacing topicality judgments as the predominant relevance criteria.

  18. Evaluating the usability of web pages: a case study

    NARCIS (Netherlands)

    Lautenbach, M.A.E.; Schegget, I.E. ter; Schoute, A.E.; Witteman, C.L.M.

    1999-01-01

    An evaluation of the Utrecht University website was carried out with 240 students. New criteria were drawn from the literature and operationalized for the study. These criteria are surveyability and findability. Web pages can be said to satisfy a usability criterion if their efficiency and

  19. Internet resources and web pages for pediatric surgeons.

    Science.gov (United States)

    Lugo-Vicente, H

    2000-02-01

    The Internet, the largest network of connected computers, provides immediate, dynamic, and downloadable information. By re-architecturing the work place and becoming familiar with Internet resources, pediatric surgeons have anticipated the informatics capabilities of this computer-based technology creating a new vision of work and organization in such areas as patient care, teaching, and research. This review aims to highlight how Internet navigational technology can be a useful educational resource in pediatric surgery, examines web pages of interest, and defines ideas of network communication. Basic Internet resources are electronic mail, discussion groups, file transfer, and the Worldwide Web (WWW). Electronic mailing is the most useful resource extending the avenue of learning to an international audience through news or list-servers groups. Pediatric Surgery List Server, the most popular discussion group, is a constant forum for exchange of ideas, difficult cases, consensus on management, and development of our specialty. The WWW provides an all-in-one medium of text, image, sound, and video. Associations, departments, educational sites, organizations, peer-reviewed scientific journals and Medline database web pages of prime interest to pediatric surgeons have been developing at an amazing pace. Future developments of technological advance nurturing our specialty will consist of online journals, telemedicine, international chatting, computer-based training for surgical education, and centralization of cyberspace information into database search sites.

  20. Business Systems Branch Abilities, Capabilities, and Services Web Page

    Science.gov (United States)

    Cortes-Pena, Aida Yoguely

    2009-01-01

During the INSPIRE summer internship I acted as the Business Systems Branch Capability Owner for the Kennedy Web-based Initiative for Communicating Capabilities System (KWICC), with the responsibility of creating a portal that describes the services provided by this Branch. This project will help others achieve a clear view of the services that the Business Systems Branch provides to NASA and the Kennedy Space Center. After collecting the data through interviews with subject matter experts and the literature in Business World and other web sites, I identified discrepancies, made the necessary corrections to the sites and placed the information from the report into the KWICC web page.

  1. Future Trends in Children's Web Pages: Probing Hidden Biases for Information Quality

    Science.gov (United States)

    Kurubacak, Gulsun

    2007-01-01

    As global digital communication continues to flourish, Children's Web pages become more critical for children to realize not only the surface but also breadth and deeper meanings in presenting these milieus. These pages not only are very diverse and complex but also enable intense communication across social, cultural and political restrictions…

  2. What Snippets Say About Pages in Federated Web Search

    NARCIS (Netherlands)

    Demeester, Thomas; Nguyen, Dong-Phuong; Trieschnigg, Rudolf Berend; Develder, Chris; Hiemstra, Djoerd; Hou, Yuexian; Nie, Jian-Yun; Sun, Le; Wang, Bo; Zhang, Peng

    2012-01-01

    What is the likelihood that a Web page is considered relevant to a query, given the relevance assessment of the corresponding snippet? Using a new federated IR test collection that contains search results from over a hundred search engines on the internet, we are able to investigate such research

  3. Do-It-Yourself: A Special Library's Approach to Creating Dynamic Web Pages Using Commercial Off-The-Shelf Applications

    Science.gov (United States)

    Steeman, Gerald; Connell, Christopher

    2000-01-01

    Many librarians may feel that dynamic Web pages are out of their reach, financially and technically. Yet we are reminded in library and Web design literature that static home pages are a thing of the past. This paper describes how librarians at the Institute for Defense Analyses (IDA) library developed a database-driven, dynamic intranet site using commercial off-the-shelf applications. Administrative issues include surveying a library users group for interest and needs evaluation; outlining metadata elements; and, committing resources from managing time to populate the database and training in Microsoft FrontPage and Web-to-database design. Technical issues covered include Microsoft Access database fundamentals, lessons learned in the Web-to-database process (including setting up Database Source Names (DSNs), redesigning queries to accommodate the Web interface, and understanding Access 97 query language vs. Standard Query Language (SQL)). This paper also offers tips on editing Active Server Pages (ASP) scripting to create desired results. A how-to annotated resource list closes out the paper.

  4. Identification of the unidentified deceased and locating next of kin: experience with a UID web site page, Fulton County, Georgia.

    Science.gov (United States)

    Hanzlick, Randy

    2006-06-01

    Medical examiner and coroner offices may face difficulties in trying to achieve identification of deceased persons who are unidentified or in locating next of kin for deceased persons who have been identified. The Fulton County medical examiner (FCME) has an office web site which includes information about unidentified decedents and cases for which next of kin are being sought. Information about unidentified deceased and cases in need of next of kin has been posted on the FCME web site for 3 years and 1 year, respectively. FCME investigators and staff medical examiners were surveyed about the web site's usefulness for making identifications and locating next of kin. No cases were recalled in which the web site led to making an identification. Two cases were reported in which next of kin were located, and another case involved a missing person being ruled out as one of the decedents. The web site page is visited by agencies interested in missing and unidentified persons, and employees do find it useful for follow-up because information about all unidentified decedents is located and easily accessible, electronically, in a single location. Despite low yield in making identifications and locating next of kin, the UID web site is useful in some respects, and there is no compelling reason to discontinue its existence. It is proposed that UID pages on office web sites be divided into "hot" (less than 30 days, for example) and "warm" (31 days to 1 year, for example) cases and that cases older than a year be designated as "cold cases." It is conceivable that all unidentified deceased cases nationally could be placed on a single web site designed for such purposes, to remain in public access until identity is established and confirmed.

  5. Is This Information Source Commercially Biased? How Contradictions between Web Pages Stimulate the Consideration of Source Information

    Science.gov (United States)

    Kammerer, Yvonne; Kalbfell, Eva; Gerjets, Peter

    2016-01-01

    In two experiments we systematically examined whether contradictions between two web pages--of which one was commercially biased as stated in an "about us" section--stimulated university students' consideration of source information both during and after reading. In Experiment 1 "about us" information of the web pages was…

  6. Why Web Pages Annotation Tools Are Not Killer Applications? A New Approach to an Old Problem.

    Science.gov (United States)

    Ronchetti, Marco; Rizzi, Matteo

    The idea of annotating Web pages is not a new one: early proposals date back to 1994. A tool providing the ability to add notes to a Web page, and to share the notes with other users seems to be particularly well suited to an e-learning environment. Although several tools already provide such possibility, they are not widely popular. This paper…

  7. Domainwise Web Page Optimization Based On Clustered Query Sessions Using Hybrid Of Trust And ACO For Effective Information Retrieval

    Directory of Open Access Journals (Sweden)

    Dr. Suruchi Chawla

    2015-08-01

Full Text Available In this paper a hybrid of Ant Colony Optimization (ACO) and trust has been used for domainwise web page optimization in clustered query sessions for effective information retrieval. The trust of a web page identifies its degree of relevance in satisfying the specific information need of the user. The trusted web pages, when optimized using pheromone updates in ACO, identify the trusted colonies of web pages which are relevant to the user's information need in a given domain. Hence in this paper the hybrid of trust and ACO has been used on clustered query sessions to identify more and more relevant documents in a given domain in order to better satisfy the information need of the user. An experiment was conducted on a data set of web query sessions to test the effectiveness of the proposed approach in three selected domains (Academics, Entertainment and Sports), and the results confirm the improvement in the precision of search results.
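
A toy sketch of the reinforcement idea (pheromone that evaporates and is deposited in proportion to a page's trust score) is shown below; the constants, trust values and update rule are illustrative, not those of the paper:

```python
# Pages with higher trust accumulate more pheromone and rank higher.
def update_pheromone(pheromone, trust, evaporation=0.1, deposit=1.0):
    return {page: (1 - evaporation) * tau + deposit * trust.get(page, 0.0)
            for page, tau in pheromone.items()}

pheromone = {"pageA": 1.0, "pageB": 1.0, "pageC": 1.0}
trust = {"pageA": 0.9, "pageB": 0.2, "pageC": 0.6}

for _ in range(5):                      # a few iterations of reinforcement
    pheromone = update_pheromone(pheromone, trust)

# Rank pages by pheromone for recommendation within the domain.
print(sorted(pheromone, key=pheromone.get, reverse=True))
```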

  8. The ATLAS Public Web Pages: Online Management of HEP External Communication Content

    CERN Document Server

    Goldfarb, Steven; Phoboo, Abha Eli; Shaw, Kate

    2015-01-01

    The ATLAS Education and Outreach Group is in the process of migrating its public online content to a professionally designed set of web pages built on the Drupal content management system. Development of the front-end design passed through several key stages, including audience surveys, stakeholder interviews, usage analytics, and a series of fast design iterations, called sprints. Implementation of the web site involves application of the html design using Drupal templates, refined development iterations, and the overall population of the site with content. We present the design and development processes and share the lessons learned along the way, including the results of the data-driven discovery studies. We also demonstrate the advantages of selecting a back-end supported by content management, with a focus on workflow. Finally, we discuss usage of the new public web pages to implement outreach strategy through implementation of clearly presented themes, consistent audience targeting and messaging, and th...

  9. Socorro Students Translate NRAO Web Pages Into Spanish

    Science.gov (United States)

    2002-07-01

    Six Socorro High School students are spending their summer working at the National Radio Astronomy Observatory (NRAO) on a unique project that gives them experience in language translation, World Wide Web design, and technical communication. Under the project, called "Un puente a los cielos," the students are translating many of NRAO's Web pages on astronomy into Spanish. "These students are using their bilingual skills to help us make basic information about astronomy and radio telescopes available to the Spanish-speaking community," said Kristy Dyer, who works at NRAO as a National Science Foundation postdoctoral fellow and who developed the project and obtained funding for it from the National Aeronautics and Space Administration. The students are: Daniel Acosta, 16; Rossellys Amarante, 15; Sandra Cano, 16; Joel Gonzalez, 16; Angelica Hernandez, 16; and Cecilia Lopez, 16. The translation project, a joint effort of NRAO and the NM Tech physics department, also includes Zammaya Moreno, a teacher from Ecuador, Robyn Harrison, NRAO's education officer, and NRAO computer specialist Allan Poindexter. The students are translating NRAO Web pages aimed at the general public. These pages cover the basics of radio astronomy and frequently-asked questions about NRAO and the scientific research done with NRAO's telescopes. "Writing about science for non-technical audiences has to be done carefully. Scientific concepts must be presented in terms that are understandable to non-scientists but also that remain scientifically accurate," Dyer said. "When translating this type of writing from one language to another, we need to preserve both the understandability and the accuracy," she added. For that reason, Dyer recruited 14 Spanish-speaking astronomers from Argentina, Mexico and the U.S. to help verify the scientific accuracy of the Spanish translations. The astronomers will review the translations. The project is giving the students a broad range of experience. "They are

  10. What Can Pictures Tell Us About Web Pages? Improving Document Search Using Images.

    Science.gov (United States)

    Rodriguez-Vaamonde, Sergio; Torresani, Lorenzo; Fitzgibbon, Andrew W

    2015-06-01

    Traditional Web search engines do not use the images in the HTML pages to find relevant documents for a given query. Instead, they typically operate by computing a measure of agreement between the keywords provided by the user and only the text portion of each page. In this paper we study whether the content of the pictures appearing in a Web page can be used to enrich the semantic description of an HTML document and consequently boost the performance of a keyword-based search engine. We present a Web-scalable system that exploits a pure text-based search engine to find an initial set of candidate documents for a given query. Then, the candidate set is reranked using visual information extracted from the images contained in the pages. The resulting system retains the computational efficiency of traditional text-based search engines with only a small additional storage cost needed to encode the visual information. We test our approach on one of the TREC Million Query Track benchmarks where we show that the exploitation of visual content yields improvement in accuracies for two distinct text-based search engines, including the system with the best reported performance on this benchmark. We further validate our approach by collecting document relevance judgements on our search results using Amazon Mechanical Turk. The results of this experiment confirm the improvement in accuracy produced by our image-based reranker over a pure text-based system.
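
The reranking step can be sketched as a weighted combination of the text-engine score and a visual score derived from the page's images; the candidate list and weights below are hypothetical, and a real system would compute the visual score from image features rather than hard-code it:

```python
# Re-order the text engine's candidates by a combined text + visual score.
candidates = [
    {"url": "page1", "text_score": 0.92, "visual_score": 0.10},
    {"url": "page2", "text_score": 0.85, "visual_score": 0.80},
    {"url": "page3", "text_score": 0.70, "visual_score": 0.95},
]

def rerank(cands, alpha=0.6):
    def combined(c):
        return alpha * c["text_score"] + (1 - alpha) * c["visual_score"]
    return sorted(cands, key=combined, reverse=True)

for c in rerank(candidates):
    print(c["url"])
```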

  11. Science on the Web: Secondary School Students' Navigation Patterns and Preferred Pages' Characteristics

    Science.gov (United States)

    Dimopoulos, Kostas; Asimakopoulos, Apostolos

    2010-01-01

    This study aims to explore navigation patterns and preferred pages' characteristics of ten secondary school students searching the web for information about cloning. The students navigated the Web for as long as they wished in a context of minimum support of teaching staff. Their navigation patterns were analyzed using audit trail data software.…

  12. Toward automated assessment of health Web page quality using the DISCERN instrument.

    Science.gov (United States)

    Allam, Ahmed; Schulz, Peter J; Krauthammer, Michael

    2017-05-01

    As the Internet becomes the number one destination for obtaining health-related information, there is an increasing need to identify health Web pages that convey an accurate and current view of medical knowledge. In response, the research community has created multicriteria instruments for reliably assessing online medical information quality. One such instrument is DISCERN, which measures health Web page quality by assessing an array of features. In order to scale up use of the instrument, there is interest in automating the quality evaluation process by building machine learning (ML)-based DISCERN Web page classifiers. The paper addresses 2 key issues that are essential before constructing automated DISCERN classifiers: (1) generation of a robust DISCERN training corpus useful for training classification algorithms, and (2) assessment of the usefulness of the current DISCERN scoring schema as a metric for evaluating the performance of these algorithms. Using DISCERN, 272 Web pages discussing treatment options in breast cancer, arthritis, and depression were evaluated and rated by trained coders. First, different consensus models were compared to obtain a robust aggregated rating among the coders, suitable for a DISCERN ML training corpus. Second, a new DISCERN scoring criterion was proposed (features-based score) as an ML performance metric that is more reflective of the score distribution across different DISCERN quality criteria. First, we found that a probabilistic consensus model applied to the DISCERN instrument was robust against noise (random ratings) and superior to other approaches for building a training corpus. Second, we found that the established DISCERN scoring schema (overall score) is ill-suited to measure ML performance for automated classifiers. Use of a probabilistic consensus model is advantageous for building a training corpus for the DISCERN instrument, and use of a features-based score is an appropriate ML metric for automated DISCERN

  13. Key word placing in Web page body text to increase visibility to search engines

    Directory of Open Access Journals (Sweden)

    W. T. Kritzinger

    2007-11-01

Full Text Available The growth of the World Wide Web has spawned a wide variety of new information sources, which has also left users with the daunting task of determining which sources are valid. Many users rely on the Web as an information source because of the low cost of information retrieval. It is also claimed that the Web has evolved into a powerful business tool. Examples include highly popular business services such as Amazon.com and Kalahari.net. It is estimated that around 80% of users utilize search engines to locate information on the Internet. This, by implication, places emphasis on the underlying importance of Web pages being listed on search engine indices. Empirical evidence that the placement of key words in certain areas of the body text will have an influence on the Web sites' visibility to search engines could not be found in the literature. The results of two experiments indicated that key words should be concentrated towards the top, and diluted towards the bottom of a Web page to increase visibility. However, care should be taken in terms of key word density, to prevent search engine algorithms from raising the spam alarm.

  14. Appraisals of Salient Visual Elements in Web Page Design

    Directory of Open Access Journals (Sweden)

    Johanna M. Silvennoinen

    2016-01-01

Full Text Available Visual elements in user interfaces elicit emotions in users and are, therefore, essential to users interacting with different software. Although there is research on the relationship between emotional experience and visual user interface design, the focus has been on the overall visual impression and not on visual elements. Additionally, often in a software development process, programming and general usability guidelines are considered as the most important parts of the process. Therefore, knowledge of programmers' appraisals of visual elements can be utilized to understand the web page designs we interact with. In this study, appraisal theory of emotion is utilized to elaborate the relationship of emotional experience and visual elements from programmers' perspective. Participants (N=50) used 3E-templates to express their visual and emotional experiences of web page designs. Content analysis of textual data illustrates how emotional experiences are elicited by salient visual elements. Eight hierarchical visual element categories were found and connected to various emotions, such as frustration, boredom, and calmness, via relational emotion themes. The emotional emphasis was on centered, symmetrical, and balanced composition, which was experienced as pleasant and calming. The results benefit user-centered visual interface design and researchers of visual aesthetics in human-computer interaction.

  15. Learning Hierarchical User Interest Models from Web Pages

    Institute of Scientific and Technical Information of China (English)

    2006-01-01

We propose an algorithm for learning hierarchical user interest models according to the Web pages users have browsed. In this algorithm, the interests of a user are represented as a tree, called a user interest tree, whose content and structure can change simultaneously to adapt to changes in the user's interests. This expression represents a user's specific and general interests as a continuum. In some sense, specific interests correspond to short-term interests, while general interests correspond to long-term interests, so this representation more accurately reflects the user's interests. The algorithm can automatically model a user's multiple interest domains, dynamically generate the interest models and prune a user interest tree when the number of nodes in it exceeds a given value. Finally, we show experimental results on a Chinese Web site.
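
A minimal sketch of such an interest tree, with made-up topics and weights and a simple prune-weakest-leaf rule standing in for the paper's pruning criterion:

```python
# General interests sit near the root, specific interests at the leaves;
# the weakest leaf is removed while the tree exceeds a size limit.
class InterestNode:
    def __init__(self, topic, weight=1.0):
        self.topic, self.weight, self.children = topic, weight, []

    def add(self, child):
        self.children.append(child)
        return child

def count(node):
    return 1 + sum(count(c) for c in node.children)

def weakest_leaf(node, parent=None, best=None):
    if parent is not None and not node.children:
        if best is None or node.weight < best[1].weight:
            best = (parent, node)
    for c in node.children:
        best = weakest_leaf(c, node, best)
    return best

def prune(node, limit):
    while count(node) > limit:
        parent, leaf = weakest_leaf(node)
        parent.children.remove(leaf)

root = InterestNode("interests")
sports = root.add(InterestNode("sports", 3.0))
sports.add(InterestNode("tennis", 0.5))
sports.add(InterestNode("football", 2.0))
root.add(InterestNode("cooking", 1.0))
prune(root, limit=4)
print([c.topic for c in sports.children])  # -> ['football']
```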

  16. A STUDY ON RANKING METHOD IN RETRIEVING WEB PAGES BASED ON CONTENT AND LINK ANALYSIS: COMBINATION OF FOURIER DOMAIN SCORING AND PAGERANK SCORING

    Directory of Open Access Journals (Sweden)

    Diana Purwitasari

    2008-01-01

Full Text Available The ranking module is an important component of the search process, sorting through relevant pages. Since a collection of Web pages has additional information inherent in the hyperlink structure of the Web, this can be represented as a link score and then combined with the content score obtained from the usual information retrieval techniques. In this paper we report our studies on a ranking score for Web pages that combines link analysis (PageRank Scoring) and content analysis (Fourier Domain Scoring). Our experiments use a collection of Web pages related to the Statistics subject from Wikipedia, with the objective of checking the correctness and evaluating the performance of the combined ranking method. Evaluation of PageRank Scoring shows that the highest score does not always relate to Statistics. Since links within Wikipedia articles exist so that users are always one click away from more information on any point that has a link attached, it is possible that topics unrelated to Statistics are frequently mentioned in the collection. The combination method shows that a link score given a proportional weight relative to the content score of Web pages does affect the retrieval results.

  17. Age differences in search of web pages: the effects of link size, link number, and clutter.

    Science.gov (United States)

    Grahame, Michael; Laberge, Jason; Scialfa, Charles T

    2004-01-01

    Reaction time, eye movements, and errors were measured during visual search of Web pages to determine age-related differences in performance as a function of link size, link number, link location, and clutter. Participants (15 young adults, M = 23 years; 14 older adults, M = 57 years) searched Web pages for target links that varied from trial to trial. During one half of the trials, links were enlarged from 10-point to 12-point font. Target location was distributed among the left, center, and bottom portions of the screen. Clutter was manipulated according to the percentage of used space, including graphics and text, and the number of potentially distracting nontarget links was varied. Increased link size improved performance, whereas increased clutter and links hampered search, especially for older adults. Results also showed that links located in the left region of the page were found most easily. Actual or potential applications of this research include Web site design to increase usability, particularly for older adults.

  18. Credibility judgments in web page design - a brief review.

    Science.gov (United States)

    Selejan, O; Muresanu, D F; Popa, L; Muresanu-Oloeriu, I; Iudean, D; Buzoianu, A; Suciu, S

    2016-01-01

Today, more than ever, it is accepted that the analysis of interface appearance is a crucial point in the field of human-computer interaction. As virtually anyone can nowadays publish information on the web, the role of credibility has grown increasingly important in relation to web-based content. Areas like trust, credibility, and behavior, together with overall impression and user expectation, are today in the spotlight of research, whereas in the past more pragmatic areas such as usability and utility were considered. Credibility has been discussed as a theoretical construct in the field of communication in past decades, and research has revealed that people tend to evaluate the credibility of communication primarily by the communicator's expertise. Other factors involved in the content communication process are trustworthiness and dynamism, as well as various other criteria, but to a lower extent. In this brief review, factors like web page aesthetics, browsing experiences and user experience are considered.

  19. The Recognition of Web Pages' Hyperlinks by People with Intellectual Disabilities: An Evaluation Study

    Science.gov (United States)

    Rocha, Tania; Bessa, Maximino; Goncalves, Martinho; Cabral, Luciana; Godinho, Francisco; Peres, Emanuel; Reis, Manuel C.; Magalhaes, Luis; Chalmers, Alan

    2012-01-01

    Background: One of the most frequently mentioned problems of web accessibility, as recognized in several different studies, is the difficulty of perceiving what is or is not clickable on a web page. In particular, a key problem is the recognition of hyperlinks by a specific group of people, namely those with intellectual…

  20. The effects of link format and screen location on visual search of web pages.

    Science.gov (United States)

    Ling, Jonathan; Van Schaik, Paul

    2004-06-22

    Navigation of web pages is of critical importance to the usability of web-based systems such as the World Wide Web and intranets. The primary means of navigation is through the use of hyperlinks. However, few studies have examined the impact of the presentation format of these links on visual search. The present study used a two-factor mixed measures design to investigate whether there was an effect of link format (plain text, underlined, bold, or bold and underlined) upon speed and accuracy of visual search and subjective measures in both the navigation and content areas of web pages. An effect of link format on speed of visual search for both hits and correct rejections was found. This effect was observed in the navigation and the content areas. Link format did not influence accuracy in either screen location. Participants showed highest preference for links that were in bold and underlined, regardless of screen area. These results are discussed in the context of visual search processes and design recommendations are given.

  1. Limitations of existing web services

    Indian Academy of Sciences (India)

    Limitations of existing web services: uploading or downloading large data; serving too many users from a single source; difficulty in providing compute-intensive jobs; dependence on the internet and its bandwidth; security of data in transit; maintaining confidentiality of data ...

  2. What is the title of a Web page? A study of Webography practice

    Directory of Open Access Journals (Sweden)

    Timothy C. Craven

    2002-01-01

    Full Text Available Few style guides recommend a specific source for citing the title of a Web page that is not a duplicate of a printed format. Sixteen Web bibliographies were analyzed for uses of two different recommended sources: (1) the tagged title; (2) the title as it would appear from viewing the beginning of the page in the browser (the apparent title). In all sixteen, the proportion of tagged titles was much less than that of apparent titles, and only rarely did the bibliography title match the tagged title and not the apparent title. Convenience of copying may partly explain the preference for the apparent title. Contrary to expectation, the correlation between the proportion of valid links in a bibliography and the proportion of accurately reproduced apparent titles was slightly negative.

  3. SurveyWiz and factorWiz: JavaScript Web pages that make HTML forms for research on the Internet.

    Science.gov (United States)

    Birnbaum, M H

    2000-05-01

    SurveyWiz and factorWiz are Web pages that act as wizards to create HTML forms that enable one to collect data via the Web. SurveyWiz allows the user to enter survey questions or personality test items with a mixture of text boxes and scales of radio buttons. One can add demographic questions of age, sex, education, and nationality with the push of a button. FactorWiz creates the HTML for within-subjects, two-factor designs as large as 9 x 9, or higher order factorial designs up to 81 cells. The user enters levels of the row and column factors, which can be text, images, or other multimedia. FactorWiz generates the stimulus combinations, randomizes their order, and creates the page. In both programs HTML is displayed in a window, and the user copies it to a text editor to save it. When uploaded to a Web server and supported by a CGI script, the created Web pages allow data to be collected, coded, and saved on the server. These programs are intended to assist researchers and students in quickly creating studies that can be administered via the Web.
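
    As an illustration of the kind of HTML such a wizard emits, the sketch below generates a survey item with a radio-button scale; the field names, scale labels and CGI action path are invented for the example and are not taken from SurveyWiz itself:

```python
# Sketch: generate an HTML survey item with a radio-button rating scale,
# in the spirit of what SurveyWiz produces. Names, labels and the form
# action are illustrative assumptions.

def radio_scale_item(name, question, n_points=5,
                     left_label="strongly disagree", right_label="strongly agree"):
    buttons = "\n".join(
        f'  <label><input type="radio" name="{name}" value="{i}"> {i}</label>'
        for i in range(1, n_points + 1)
    )
    return (f"<p>{question}</p>\n"
            f"<p>{left_label}</p>\n{buttons}\n<p>{right_label}</p>\n")

if __name__ == "__main__":
    form = ('<form method="post" action="/cgi-bin/save">\n'
            + radio_scale_item("q1", "I found the web page easy to use.")
            + '<input type="submit" value="Submit">\n</form>')
    print(form)  # copy the output into a page served alongside a CGI script
```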

  4. Children's recognition of advertisements on television and on Web pages.

    Science.gov (United States)

    Blades, Mark; Oates, Caroline; Li, Shiying

    2013-03-01

    In this paper we consider the issue of advertising to children. Advertising to children raises a number of concerns, in particular the effects of food advertising on children's eating habits. We point out that virtually all the research into children's understanding of advertising has focused on traditional television advertisements, but much marketing aimed at children is now via the Internet and little is known about children's awareness of advertising on the Web. One important component of understanding advertisements is the ability to distinguish advertisements from other messages, and we suggest that young children's ability to recognise advertisements on a Web page is far behind their ability to recognise advertisements on television. Copyright © 2012 Elsevier Ltd. All rights reserved.

  5. Web page of the Ibero-American laboratories network of radioactivity analysis in foods: a tool for inter regional diffusion

    International Nuclear Information System (INIS)

    Melo Ferreira, Ana C. de; Osores, Jose M.; Fernandez Gomez, Isis M.; Iglicki, Flora A.; Vazquez Bolanos, Luis R.; Romero, Maria de L.; Aguirre Gomez, Jaime; Flores, Yasmine

    2008-01-01

    One objective of thematic networks is the exchange of knowledge among participants; for this reason, actions focused on the diffusion of their respective work are prioritized, making visible the results of the cooperation among the participating groups and also among different networks. The Ibero-American Laboratories Network of Radioactivity Analysis in Foods (RILARA) was constituted in 2007, and one of the first actions carried out in this framework was the design and construction of a web page. Web pages have become a powerful means of diffusing specialized information. Their reach, their continuous updating and the specificity of the topics they can cover allow users to obtain information quickly on a wide range of products, services and organizations at local and world level. The main objective of the RILARA web page is to provide updated, relevant information to specialists interested in the subject, and also to the public in general, about the work developed by the network laboratories regarding the control of radioactive pollutants in foods and related scientific issues. The site has been developed on a Content Management System, which helps to eliminate potential barriers to web communication and reduces the costs of creating, contributing and maintaining content. The tool used for its design is very effective for teaching, learning and organizing information. This paper describes how the design of this web page was conceived, the information it contains and how it can be accessed or contributed to; the value of the page depends directly on how up to date its contents are kept, so that it remains useful and attractive to users. (author)

  6. SChiSM2: creating interactive web page annotations of molecular structure models using Jmol.

    Science.gov (United States)

    Cammer, Stephen

    2007-02-01

    SChiSM2 is a web server-based program for creating web pages that include interactive molecular graphics using the freely-available applet, Jmol, for illustration. The program works with Internet Explorer and Firefox on Windows, Safari and Firefox on Mac OSX and Firefox on Linux. The program can be accessed at the following address: http://ci.vbi.vt.edu/cammer/schism2.html.

  7. Web accessibility: a longitudinal study of college and university home pages in the northwestern United States.

    Science.gov (United States)

    Thompson, Terrill; Burgstahler, Sheryl; Moore, Elizabeth J

    2010-01-01

    This article reports on a follow-up assessment to Thompson et al. (Proceedings of The First International Conference on Technology-based Learning with Disability, July 19-20, Dayton, Ohio, USA; 2007. pp 127-136), in which higher education home pages were evaluated over a 5-year period on their accessibility to individuals with disabilities. The purpose of this article is to identify trends in web accessibility and long-term impact of outreach and education. Home pages from 127 higher education institutions in the Northwest were evaluated for accessibility three times over a 6-month period in 2004-2005 (Phase I), and again in 2009 (Phase II). Schools in the study were offered varying degrees of training and/or support on web accessibility during Phase I. Pages were evaluated for accessibility using a set of manual checkpoints developed by the researchers. Over the 5-year period reported in this article, significant positive gains in accessibility were revealed on some measures, but accessibility declined on other measures. The areas of improvement are arguably the more basic, easy-to-implement accessibility features, while the area of decline is keyboard accessibility, which is likely associated with the emergence of dynamic new technologies on web pages. Even on those measures where accessibility is improving, it is still strikingly low. In Phase I of the study, institutions that received extensive training and support were more likely than other institutions to show improved accessibility on the measures where institutions improved overall, but were equally or more likely than others to show a decline on measures where institutions showed an overall decline. In Phase II, there was no significant difference between institutions who had received support earlier in the study, and those who had not. Results suggest that growing numbers of higher education institutions in the Northwest are motivated to add basic accessibility features to their home pages, and that

  8. Web Page Layout: A Comparison Between Left- and Right-justified Site Navigation Menus

    OpenAIRE

    Kalbach, James; Bosenick, Tim

    2006-01-01

    The usability of two Web page layouts was directly compared: one with the main site navigation menu on the left of the page, and one with the main site navigation menu on the right. Sixty-four participants were divided equally into two groups and assigned to either the left- or the right-hand navigation test condition. Using a stopwatch, the time to complete each of five tasks was measured. The hypothesis that the left-hand navigation would perform significantly faster than the right-hand nav...

  9. The Impact of Salient Advertisements on Reading and Attention on Web Pages

    Science.gov (United States)

    Simola, Jaana; Kuisma, Jarmo; Oorni, Anssi; Uusitalo, Liisa; Hyona, Jukka

    2011-01-01

    Human vision is sensitive to salient features such as motion. Therefore, animation and onset of advertisements on Websites may attract visual attention and disrupt reading. We conducted three eye tracking experiments with authentic Web pages to assess whether (a) ads are efficiently ignored, (b) ads attract overt visual attention and disrupt…

  10. WebVis: a hierarchical web homepage visualizer

    Science.gov (United States)

    Renteria, Jose C.; Lodha, Suresh K.

    2000-02-01

    WebVis, the Hierarchical Web Home Page Visualizer, is a tool for managing home web pages. The user can access this tool via the WWW and obtain a hierarchical visualization of one's home web pages. WebVis is a real-time interactive tool that supports many different queries on the statistics of internal files, such as size, age, and type. In addition, statistics on embedded information such as VRML files, Java applets, images and sound files can be extracted and queried. Results of these queries are visualized using the color, shape and size of different nodes of the hierarchy. The visualization assists the user in a variety of tasks, such as quickly finding outdated information or locating large files. WebVis is one solution to the growing web space maintenance problem. The implementation of WebVis is realized with Perl and Java. Perl pattern-matching and file-handling routines are used to collect and process web space linkage information and web document information. Java utilizes the collected information to produce a visualization of the web space. Java also provides WebVis with real-time interactivity while running off the WWW. Some WebVis examples of home web page visualization are presented.

  11. A comprehensive analysis of Italian web pages mentioning squalene-based influenza vaccine adjuvants reveals a high prevalence of misinformation.

    Science.gov (United States)

    Panatto, Donatella; Amicizia, Daniela; Arata, Lucia; Lai, Piero Luigi; Gasparini, Roberto

    2018-04-03

    Squalene-based adjuvants have been included in influenza vaccines since 1997. Despite several advantages of adjuvanted seasonal and pandemic influenza vaccines, laypeople's perception of such formulations may be hesitant or even negative under certain circumstances. Moreover, in Italian, the term "squalene" has the same root as such common words as "shark" (squalo), "squalid" and "squalidness" that tend to have negative connotations. This study aimed to quantitatively and qualitatively analyze a representative sample of Italian web pages mentioning squalene-based adjuvants used in influenza vaccines. Every effort was made to limit the subjectivity of judgments. Eighty-four unique web pages were assessed. A high prevalence (47.6%) of pages with negative or ambiguous attitudes toward squalene-based adjuvants was established. Compared with web pages reporting balanced information on squalene-based adjuvants, those categorized as negative/ambiguous had significantly lower odds of belonging to a professional institution [adjusted odds ratio (aOR) = 0.12, p = .004], and significantly higher odds of containing pictures (aOR = 1.91, p = .034) and being more readable (aOR = 1.34, p = .006). Some differences in wording between positive/neutral and negative/ambiguous web pages were also observed. The most common scientifically unsound claims concerned safety issues and, in particular, claims linking squalene-based adjuvants to the Gulf War Syndrome and autoimmune disorders. Italian users searching the web for information on vaccine adjuvants have a high likelihood of finding unbalanced and misleading material. Information provided by institutional websites should be not only evidence-based but also carefully targeted towards laypeople. Conversely, authors writing for non-institutional websites should avoid sensationalism and provide their readers with more balanced information.

  12. Calculating PageRank in a changing network with added or removed edges

    Science.gov (United States)

    Engström, Christopher; Silvestrov, Sergei

    2017-01-01

    PageRank was initially developed by S. Brin and L. Page in 1998 to rank homepages on the Internet using the stationary distribution of a Markov chain created from the web graph. Due to the large size of the web graph and many other real-world networks, fast methods to calculate PageRank are needed, and even though the original way of calculating PageRank using power iterations is rather fast, many other approaches have been proposed to improve the speed further. In this paper we consider the problem of recalculating the PageRank of a changing network where the PageRank of a previous version of the network is known. In particular, we consider the special case of adding or removing edges to a single vertex in the graph or graph component.
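
    For reference, a compact sketch of the ordinary power-iteration computation that such recalculation schemes start from (the tiny example graph and the damping factor 0.85 are illustrative assumptions, not the paper's data):

```python
# Sketch: PageRank by power iteration on an adjacency-list graph.
# Example graph and damping factor are illustrative only.

def pagerank(graph, damping=0.85, tol=1e-10, max_iter=100):
    nodes = list(graph)
    n = len(nodes)
    rank = {v: 1.0 / n for v in nodes}
    for _ in range(max_iter):
        new = {v: (1.0 - damping) / n for v in nodes}
        for v, out in graph.items():
            if out:
                share = damping * rank[v] / len(out)
                for w in out:
                    new[w] += share
            else:  # dangling node: spread its rank uniformly
                for w in nodes:
                    new[w] += damping * rank[v] / n
        if sum(abs(new[v] - rank[v]) for v in nodes) < tol:
            return new
        rank = new
    return rank

if __name__ == "__main__":
    graph = {"A": ["B", "C"], "B": ["C"], "C": ["A"], "D": ["C"]}
    print(pagerank(graph))
```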

  13. Personal Web home pages of adolescents with cancer: self-presentation, information dissemination, and interpersonal connection.

    Science.gov (United States)

    Suzuki, Lalita K; Beale, Ivan L

    2006-01-01

    The content of personal Web home pages created by adolescents with cancer is a new source of information about this population of potential benefit to oncology nurses and psychologists. Individual Internet elements found on 21 home pages created by youths with cancer (14-22 years old) were rated for cancer-related self-presentation, information dissemination, and interpersonal connection. Examples of adolescents' online narratives were also recorded. Adolescents with cancer used various Internet elements on their home pages for cancer-related self-presentation (eg, welcome messages, essays, personal history and diary pages, news articles, and poetry), information dissemination (e.g., through personal interest pages, multimedia presentations, lists, charts, and hyperlinks), and interpersonal connection (eg, guestbook entries). Results suggest that various elements found on personal home pages are being used by a limited number of young patients with cancer for self-expression, information access, and contact with peers.

  14. THE NEW PURCHASING SERVICE PAGE NOW ON THE WEB!

    CERN Multimedia

    SPL Division

    2000-01-01

    Users of CERN's Purchasing Service are encouraged to visit the new Purchasing Service web page, accessible from the CERN homepage or directly at: http://spl-purchasing.web.cern.ch/spl-purchasing/ There, you will find answers to questions such as: Who are the buyers? What do I need to know before creating a DAI? How many offers do I need? Where shall I send the offer I received? I know the amount of my future requirement, how do I proceed? How are contracts adjudicated at CERN? Which exhibitions and visits of Member State companies are foreseen in the future? A company I know is interested in making a presentation at CERN, who should they contact? Additionally, you will find information concerning: The Purchasing procedures Market Surveys and Invitations to Tender The Industrial Liaison Officers appointed in each Member State The Purchasing Broker at CERN

  15. Effects of picture amount on preference, balance, and dynamic feel of Web pages.

    Science.gov (United States)

    Chiang, Shu-Ying; Chen, Chien-Hsiung

    2012-04-01

    This study investigates the effects of picture amount on subjective evaluation. The experiment herein adopted two variables to define picture amount: column ratio and picture size. Six column ratios were employed: 7:93, 15:85, 24:76, 33:67, 41:59, and 50:50. Five picture sizes were examined: 140 x 81, 220 x 127, 300 x 173, 380 x 219, and 460 x 266 pixels. The experiment implemented a within-subject design; 104 participants were asked to evaluate 30 web page layouts. Repeated measurements revealed that the column ratio and picture size have significant effects on preference, balance, and dynamic feel. The results indicated the most appropriate picture amount for display: column ratios of 15:85 and 24:76, and picture sizes of 220 x 127, 300 x 173, and 380 x 219. The research findings can serve as the basis for the application of design guidelines for future web page interface design.

  16. INTERNET and information about nuclear sciences. The world wide web virtual library: nuclear sciences

    International Nuclear Information System (INIS)

    Kuruc, J.

    1999-01-01

    In this work the author proposes to constitute a new virtual library that should centralize information from the nuclear disciplines on the INTERNET, in order, first and foremost, to provide connections to the most important links in the nuclear sciences. The author has entitled this new virtual library The World Wide Web Virtual Library: Nuclear Sciences. In constituting this virtual library, the following basic principles were chosen: home pages of international organizations important from the point of view of the nuclear disciplines; home pages of the national nuclear commissions and governments; home pages of nuclear scientific societies; web pages specialized in nuclear topics in general; periodic tables of elements and isotopes; web pages on the Chernobyl accident and its consequences; web pages with an antinuclear orientation. The links then continue, grouped into web pages according to individual nuclear areas: nuclear arsenals; nuclear astrophysics; nuclear aspects of biology (radiobiology); nuclear chemistry; nuclear companies; nuclear data centres; nuclear energy; environmental aspects of nuclear energy (radioecology); nuclear energy info centres; nuclear engineering; nuclear industries; nuclear magnetic resonance; nuclear material monitoring; nuclear medicine and radiology; nuclear physics; nuclear power (plants); nuclear reactors; nuclear risk; nuclear technologies and defence; nuclear testing; nuclear tourism; nuclear wastes. In each of these groups, web links are concentrated into the following categories: virtual libraries and specialized servers; science; nuclear societies; nuclear departments of academic institutes; nuclear research institutes and laboratories; centres, info links

  17. Hormone replacement therapy advertising: sense and nonsense on the web pages of the best-selling pharmaceuticals in Spain.

    Science.gov (United States)

    Chilet-Rosell, Elisa; Martín Llaguno, Marta; Ruiz Cantero, María Teresa; Alonso-Coello, Pablo

    2010-03-16

    The balance of the benefits and risks of long-term use of hormone replacement therapy (HRT) has been a matter of debate for decades. In Europe, HRT requires a medical prescription and its advertising is only permitted when aimed at health professionals (direct-to-consumer advertising is allowed in some non-European countries). The objective of this study is to analyse the appropriateness and quality of Internet advertising about HRT in Spain. A search was carried out on the Internet (January 2009) using the eight best-selling HRT drugs in Spain. The brand name of each drug was entered into Google's search engine. The web sites appearing on the first page of results and the corresponding companies were analysed using the European Code of Good Practice as the reference point. Five corporate web pages were identified: none of them included bibliographic references or measures to ensure that the advertising was only accessible to health professionals. Regarding non-corporate web pages (n = 27): 41% did not include the company name or address, 44% made no distinction between patient and health professional information, 7% contained bibliographic references, 26% provided unspecific information for the use of HRT for osteoporosis and 19% included menstrual cycle regulation or boosting femininity as an indication. Two online pharmacies sold HRT drugs that could be bought online in Spain; they did not include the name or contact details of the registered company, nor did they stipulate the need for a medical prescription or differentiate between patient and health professional information. Even though pharmaceutical companies have committed themselves to compliance with codes of good practice, deficiencies were observed regarding the identification, information and promotion of HRT medications on their web pages. Unaffected by legislation, non-corporate web pages are an ideal place for indirect HRT advertising, but they often contain misleading information. HRT can be bought online from Spain

  18. NUCLEAR STRUCTURE AND DECAY DATA: INTRODUCTION TO RELEVANT WEB PAGES

    International Nuclear Information System (INIS)

    BURROWS, T.W.; MCLAUGHLIN, P.D.; NICHOLS, A.L.

    2005-01-01

    A brief description is given of the nuclear data centres around the world able to provide access to those databases and programs of highest relevance to nuclear structure and decay data specialists. A number of Web-page addresses are also provided for the reader to inspect and investigate these data and codes for study, evaluation and calculation. These instructions are not meant to be comprehensive, but should provide the reader with a reasonable means of electronic access to the most important data sets and programs

  19. Beginning ASP.NET Web Pages with WebMatrix

    CERN Document Server

    Brind, Mike

    2011-01-01

    Learn to build dynamic web sites with Microsoft WebMatrix Microsoft WebMatrix is designed to make developing dynamic ASP.NET web sites much easier. This complete Wrox guide shows you what it is, how it works, and how to get the best from it right away. It covers all the basic foundations and also introduces HTML, CSS, and Ajax using jQuery, giving beginning programmers a firm foundation for building dynamic web sites.Examines how WebMatrix is expected to become the new recommended entry-level tool for developing web sites using ASP.NETArms beginning programmers, students, and educators with al

  20. Users page feedback

    CERN Multimedia

    2010-01-01

    In October last year the Communication Group proposed an interim redesign of the users' web pages in order to improve the visibility of key news items, events and announcements to the CERN community. (Image caption: the proposed update to the users' page, right, and the current version, left, behind.) This proposed redesign was seen as a small step on the way to much wider reforms of the CERN web landscape proposed in the group's web communication plan. The results are available here. Some of the key points: - the balance between news / events / announcements and access to links on the users' pages was not right; - many people asked to see a reversal of the order so that links appeared first, news/events/announcements last; - many people felt that we should keep the primary function of the users' pages as an index to other CERN websites; - many people found the sections of the front page to be poorly delineated; - people do not like scrolling; - there were performance...

  1. MPEG-7 low level image descriptors for modeling users' web pages visual appeal opinion

    OpenAIRE

    Uribe Mayoral, Silvia; Alvarez Garcia, Federico; Menendez Garcia, Jose Manuel

    2015-01-01

    The study of the users' web pages first impression is an important factor for interface designers, due to its influence over the final opinion about a site. In this regard, the analysis of web aesthetics can be considered as an interesting tool for evaluating this early impression, and the use of low level image descriptors for modeling it in an objective way represents an innovative research field. According to this, in this paper we present a new model for website aesthetics evaluation and ...

  2. The effect of new links on Google PageRank

    NARCIS (Netherlands)

    Avrachenkov, Konstatin; Litvak, Nelli

    2004-01-01

    PageRank is one of the principal criteria according to which Google ranks Web pages. PageRank can be interpreted as the frequency with which a random surfer visits a Web page, and thus it reflects the popularity of a Web page. We study the effect of newly created links on Google PageRank. We discuss to

  3. A quality evaluation methodology of health web-pages for non-professionals.

    Science.gov (United States)

    Currò, Vincenzo; Buonuomo, Paola Sabrina; Onesimo, Roberta; de Rose, Paola; Vituzzi, Andrea; di Tanna, Gian Luca; D'Atri, Alessandro

    2004-06-01

    The proposal of an evaluation methodology for determining the quality of healthcare web sites for the dissemination of medical information to non-professionals. Three (macro) factors are considered for the quality evaluation: medical contents, accountability of the authors, and usability of the web site. Starting from two results in the literature the problem of whether or not to introduce a weighting function has been investigated. This methodology has been validated on a specialized information content, i.e., sore throats, due to the large interest such a topic enjoys with target users. The World Wide Web was accessed using a meta-search system merging several search engines. A statistical analysis was made to compare the proposed methodology with the obtained ranks of the sample web pages. The statistical analysis confirms that the variables examined (per item and sub factor) show substantially similar ranks and are capable of contributing to the evaluation of the main quality macro factors. A comparison between the aggregation functions in the proposed methodology (non-weighted averages) and the weighting functions, derived from the literature, allowed us to verify the suitability of the method. The proposed methodology suggests a simple approach which can quickly award an overall quality score for medical web sites oriented to non-professionals.

  4. Automatic categorization of web pages and user clustering with mixtures of hidden Markov models

    NARCIS (Netherlands)

    Ypma, A.; Heskes, T.M.; Zaiane, O.R.; Srivastav, J.

    2003-01-01

    We propose mixtures of hidden Markov models for modelling clickstreams of web surfers. Hence, the page categorization is learned from the data without the need for a (possibly cumbersome) manual categorization. We provide an EM algorithm for training a mixture of HMMs and show that additional static

  5. Hormone Replacement Therapy advertising: sense and nonsense on the web pages of the best-selling pharmaceuticals in Spain

    Directory of Open Access Journals (Sweden)

    Cantero María

    2010-03-01

    Full Text Available Abstract Background The balance of the benefits and risks of long-term use of hormone replacement therapy (HRT) has been a matter of debate for decades. In Europe, HRT requires medical prescription and its advertising is only permitted when aimed at health professionals (direct-to-consumer advertising is allowed in some non-European countries). The objective of this study is to analyse the appropriateness and quality of Internet advertising about HRT in Spain. Methods A search was carried out on the Internet (January 2009) using the eight best-selling HRT drugs in Spain. The brand name of each drug was entered into Google's search engine. The web sites appearing on the first page of results and the corresponding companies were analysed using the European Code of Good Practice as the reference point. Results Five corporate web pages were identified: none of them included bibliographic references or measures to ensure that the advertising was only accessible to health professionals. Regarding non-corporate web pages (n = 27): 41% did not include the company name or address, 44% made no distinction between patient and health professional information, 7% contained bibliographic references, 26% provided unspecific information for the use of HRT for osteoporosis and 19% included menstrual cycle regulation or boosting femininity as an indication. Two online pharmacies sold HRT drugs that could be bought online in Spain; they did not include the name or contact details of the registered company, nor did they stipulate the need for a medical prescription or differentiate between patient and health professional information. Conclusions Even though pharmaceutical companies have committed themselves to compliance with codes of good practice, deficiencies were observed regarding the identification, information and promotion of HRT medications on their web pages. Unaffected by legislation, non-corporate web pages are an ideal place for indirect HRT advertising, but they often contain

  6. PROTOTIPE PEMESANAN BAHAN PUSTAKA MELALUI WEB MENGGUNAKAN ACTIVE SERVER PAGE (ASP

    Directory of Open Access Journals (Sweden)

    Djoni Haryadi Setiabudi

    2002-01-01

    Full Text Available Electronic commerce is one of the components of the internet that is growing fast worldwide. In this research, a prototype is developed for a library service that offers ordering of library collections, especially books and articles, through the World Wide Web. In order to obtain interaction between seller and buyer, a dynamic web site needs to be developed, which requires appropriate technology and software. One of these programming languages is Active Server Pages (ASP), combined with a database system to store data. The other component, serving as an interface between the application and the database, is ActiveX Data Objects (ADO). ASP has an advantage in its scripting method and is easy to configure with a database. This application consists of two major parts: administrator and user. The prototype has facilities for editing, searching and viewing ordering information online. Users can also perform downloads when searching for and ordering articles. The payment method in this e-commerce system is quite essential because in Indonesia not everybody has a credit card. As a solution, the prototype has a form for users who do not have a credit card; once the bill has been paid, the transaction can be completed online. In this case, one of ASP's advantages is used: the "session", in which data being processed are not lost as long as the user remains in that session. This is used in the user area and the admin area, where users and the administrator can carry out various processes. Abstract in Bahasa Indonesia (translated): Electronic commerce is a part of the internet that is currently developing rapidly in the world. In this research, a prototype application program was built for the development of library services, in particular the ordering of articles and books through the World Wide Web. To build a web-based application, technology and software that support the creation of dynamic web sites are needed, so that there is interaction between buyer and seller

  7. Web page quality: can we measure it and what do we find? A report of exploratory findings.

    Science.gov (United States)

    Abbott, V P

    2000-06-01

    The aim of this study was to report exploratory findings from an attempt to quantify the quality of a sample of World Wide Web (WWW) pages relating to MMR vaccine that a typical user might locate. Forty pages obtained from a search of the WWW using two search engines and the search expression 'mmr vaccine' were analysed using a standard proforma. The proforma looked at the information the pages contained in terms of three categories: content, authorship and aesthetics. The information from each category was then quantified into a summary statistic, and receiver operating characteristic (ROC) curves were generated using a 'gold standard' of quality derived from the published literature. Optimal cut-off points for each of the three sections were calculated that best discriminated 'good' from 'bad' pages. Pages were also assessed as to whether they were pro- or anti-vaccination. For this sample, the combined contents and authorship score, with a cut-off of five, was a good discriminator, having 88 per cent sensitivity and 92 per cent specificity. Aesthetics was not a good discriminator. In the sample, 32.5 per cent of pages were pro-vaccination; 42.5 per cent were anti-vaccination and 25 per cent were neutral. The relative risk of being of poor quality if anti-vaccination was 3.3 (95 per cent confidence interval 1.8, 6.1). The sample of Web pages did contain some quality information on MMR vaccine. It also contained a great deal of misleading, inaccurate data. The proforma, combined with a knowledge of the literature, may help to distinguish between the two.
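
    A hedged sketch of the ROC step, with invented scores and gold-standard labels (the paper's reported cut-off of five and its 88%/92% sensitivity and specificity come from its own data, not from this example); Youden's J is used here as one common way to pick a cut-off, which the paper does not specify:

```python
# Sketch: derive an optimal cut-off for a quality score against a binary
# "gold standard" using an ROC curve. Scores and labels are invented.
import numpy as np
from sklearn.metrics import roc_curve

gold = np.array([1, 1, 1, 0, 0, 1, 0, 0, 1, 0])    # 1 = good page per the literature
score = np.array([8, 7, 6, 5, 3, 9, 4, 2, 7, 5])   # e.g. combined content+authorship score

fpr, tpr, thresholds = roc_curve(gold, score)
youden = tpr - fpr                       # Youden's J: one common cut-off criterion
best = thresholds[np.argmax(youden)]
print("optimal cut-off:", best)
```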

  8. JERHRE's New Web Pages.

    Science.gov (United States)

    2006-06-01

    JERHRE'S WEBSITE, www.csueastbay.edu/JERHRE/ has two new pages. One of those pages is devoted to curriculum that may be used to educate students, investigators and ethics committee members about issues in the ethics of human subjects research, and to evaluate their learning. It appears at www.csueastbay.edu/JERHRE/cur.html. The other is devoted to emailed letters from readers. Appropriate letters will be posted as soon as they are received by the editor. Letters from readers appear at www.csueastbay.edu/JERHRE/let.html.

  9. Communicating public health preparedness information to pregnant and postpartum women: an assessment of Centers for Disease Control and Prevention web pages.

    Science.gov (United States)

    McDonough, Brianna; Felter, Elizabeth; Downes, Amia; Trauth, Jeanette

    2015-04-01

    Pregnant and postpartum women have special needs during public health emergencies but often have inadequate levels of disaster preparedness. Thus, improving maternal emergency preparedness is a public health priority. More research is needed to identify the strengths and weaknesses of various approaches to how preparedness information is communicated to these women. A sample of web pages from the Centers for Disease Control and Prevention intended to address the preparedness needs of pregnant and postpartum populations was examined for suitability for this audience. Five of the 7 web pages examined were considered adequate. One web page was considered not suitable, and for one the raters were split between not suitable and adequate. None of the resources examined was considered superior. If these resources are considered among the best available to pregnant and postpartum women, more work is needed to improve the suitability of educational resources, especially for audiences with low literacy and low incomes.

  10. Web Transfer Over Satellites Being Improved

    Science.gov (United States)

    Allman, Mark

    1999-01-01

    Extensive research conducted by NASA Lewis Research Center's Satellite Networks and Architectures Branch and the Ohio University has demonstrated performance improvements in World Wide Web transfers over satellite-based networks. The use of a new version of the Hypertext Transfer Protocol (HTTP) reduced the time required to load web pages over a single Transmission Control Protocol (TCP) connection traversing a satellite channel. However, an older technique of simultaneously making multiple requests of a given server has been shown to provide even faster transfer time. Unfortunately, the use of multiple simultaneous requests has been shown to be harmful to the network in general. Therefore, we are developing new mechanisms for the HTTP protocol which may allow a single request at any given time to perform as well as, or better than, multiple simultaneous requests. In the course of study, we also demonstrated that the time for web pages to load is at least as short via a satellite link as it is via a standard 28.8-kbps dialup modem channel. This demonstrates that satellites are a viable means of accessing the Internet.
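
    An everyday illustration of why connection handling matters on high-latency links, not the HTTP mechanisms studied at Lewis: the sketch below contrasts one connection per object with reusing a single keep-alive session in Python's requests library. The URLs are placeholders.

```python
# Sketch: fetch several objects of a web page over one reused connection
# (requests.Session) instead of opening a connection per object. The URLs
# are placeholders for illustration only.
import requests

urls = [
    "https://example.org/index.html",
    "https://example.org/style.css",
    "https://example.org/logo.png",
]

# Roughly one TCP connection per request, as early HTTP clients did;
# each handshake costs a full round trip, which is expensive over satellite.
for url in urls:
    requests.get(url, timeout=10)

# Reuse one session: keep-alive amortises the handshake across requests.
with requests.Session() as session:
    for url in urls:
        session.get(url, timeout=10)
```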

  11. Cluster Analysis of Customer Reviews Extracted from Web Pages

    Directory of Open Access Journals (Sweden)

    S. Shivashankar

    2010-01-01

    Full Text Available As e-commerce gains popularity day by day, the web has become an excellent source for gathering customer reviews and opinions by market researchers. The number of customer reviews that a product receives is growing at a very fast rate (it could be in the hundreds or thousands). Customer reviews posted on websites vary greatly in quality. The potential customer otherwise has to read all the reviews, irrespective of their quality, to decide whether or not to purchase the product. In this paper, we make an attempt to assess a review based on its quality, to help the customer make a proper buying decision. The quality of a customer review is assessed as most significant, more significant, significant or insignificant. A novel and effective web mining technique is proposed for assessing a customer review of a particular product based on feature clustering techniques, namely the k-means method and the fuzzy c-means method. This is performed in three steps: (1) identify review regions and extract reviews from them, (2) extract and cluster the features of reviews by a clustering technique and then assign weights to the features belonging to each of the clusters (groups), and (3) assess the review by considering the feature weights and group belongingness. The k-means and fuzzy c-means clustering techniques are implemented and tested on customer reviews extracted from web pages. The performance of these techniques is analyzed.
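
    As a rough illustration of the clustering step (2), the sketch below groups simple bag-of-words feature vectors of reviews with k-means; the toy reviews, the TF-IDF features and the choice of two clusters are assumptions for the example, not the paper's setup.

```python
# Sketch: cluster review feature vectors with k-means (step 2 of the scheme).
# Toy reviews, TF-IDF features and k=2 are illustrative assumptions.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

reviews = [
    "battery life is excellent and the screen is bright",
    "screen is dim but battery lasts long",
    "terrible customer service, slow delivery",
    "delivery was late and support never answered",
]

features = TfidfVectorizer(stop_words="english").fit_transform(reviews)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(features)

for review, label in zip(reviews, labels):
    print(label, review)
```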

  12. User Perceptions of the Library's Web Pages: A Focus Group Study at Texas A&M University.

    Science.gov (United States)

    Crowley, Gwyneth H.; Leffel, Rob; Ramirez, Diana; Hart, Judith L.; Armstrong, Tommy S., II

    2002-01-01

    This focus group study explored library patrons' opinions about Texas A&M library's Web pages. Discusses information seeking behavior which indicated that patrons are confused when trying to navigate the Public Access Menu and suggests the need for a more intuitive interface. (Author/LRW)

  13. Using Frames and JavaScript To Automate Teacher-Side Web Page Navigation for Classroom Presentations.

    Science.gov (United States)

    Snyder, Robin M.

    HTML provides a platform-independent way of creating and making multimedia presentations for classroom instruction and making that content available on the Internet. However, time in class is very valuable, so that any way to automate or otherwise assist the presenter in Web page navigation during class can save valuable seconds. This paper…

  14. Bad on the net, or bipolars' lives on the web: analyzing discussion web pages for individuals with bipolar affective disorder.

    Science.gov (United States)

    Latalova, Klara; Prasko, Jan; Kamaradova, Dana; Ivanova, Katerina; Jurickova, Lubica

    2014-01-01

    The main therapeutic approach in the treatment of bipolar affective disorder is the administration of drugs. The effectiveness of this approach can be increased by specific psychotherapeutic interventions. There is not much knowledge about self-help initiatives in this field. Anonymous internet communication may be beneficial, regardless of the fact that it is non-professional. It offers a chance to confide and share symptoms with other patients, to open up for persons with feelings of shame, and to obtain relevant information without having a direct contact with an expert. Qualitative analysis of web discussions used by patients with bipolar disorder in Czech language was performed. Using key words "diskuze" (discussion), "maniodeprese" (manic depression) and "bipolární porucha" (bipolar disorder), 8 discussions were found, but only 3 of them were anonymous and non-professional. Individual discussion entries were analyzed for basic categories or subcategories, and these were subsequently assessed so that their relationships could be better understood. A total of 436 entries from 3 discussion web pages were analyzed. Subsequently, six categories were identified (participant, diagnosis, relationships, communication, topic and treatment), each having 5-12 subcategories. These were analyzed in terms of relationships and patterns. Czech discussion web pages for people suffering from bipolar disorder are a lively community of users supporting each other, that may be characterized as a compact body open to newcomers. They seem to fulfill patients' needs that are not fully met by health care services. It also has a "self-cleaning" ability, effectively dealing with posts that are inappropriate, provocative, criticizing, aggressive or meaningless.

  15. DATA EXTRACTION AND LABEL ASSIGNMENT FOR WEB DATABASES

    OpenAIRE

    T. Rajesh; T. Prathap; S.Naveen Nambi; A.R. Arunachalam

    2015-01-01

    Deep Web contents are accessed by queries submitted to Web databases, and the returned data records are wrapped in dynamically generated Web pages (called deep Web pages in this paper). Extracting structured data from deep Web pages is a challenging problem due to the underlying intricate structures of such pages. Until now, a large number of techniques have been proposed to address this problem, but all of them have limitations because they are Web-page-programming...

  16. Monte Carlo methods in PageRank computation: When one iteration is sufficient

    NARCIS (Netherlands)

    Avrachenkov, K.; Litvak, Nelli; Nemirovsky, D.; Osipova, N.

    2005-01-01

    PageRank is one of the principal criteria according to which Google ranks Web pages. PageRank can be interpreted as a frequency of visiting a Web page by a random surfer and thus it reflects the popularity of a Web page. Google computes the PageRank using the power iteration method which requires

  17. Monte Carlo methods in PageRank computation: When one iteration is sufficient

    NARCIS (Netherlands)

    Avrachenkov, K.; Litvak, Nelli; Nemirovsky, D.; Osipova, N.

    PageRank is one of the principal criteria according to which Google ranks Web pages. PageRank can be interpreted as a frequency of visiting a Web page by a random surfer, and thus it reflects the popularity of a Web page. Google computes the PageRank using the power iteration method, which requires

  18. Personal and Public Start Pages in a library setting

    NARCIS (Netherlands)

    Kieft-Wondergem, Dorine

    Personal and Public Start Pages are web-based resources. With these kinds of tools it is possible to make your own free start page. A Start Page allows you to put all your web resources into one page, including blogs, email, podcasts and RSS feeds. It is possible to share the content of the page with

  19. [Analysis of the web pages of the intensive care units of Spain].

    Science.gov (United States)

    Navarro-Arnedo, J M

    2009-01-01

    In order to determine which Intensive Care Units (ICUs) of Spanish hospitals had a web site, to analyze the information they offered and to learn what information they should offer according to a sample of ICU nurses, a cross-sectional, observational, descriptive study was carried out between January and September 2008. For each ICU website, an analysis was made of the information available on the unit and on its care, teaching and research activity and on nursing. Simultaneously, based on a sample of intensive care nurses, the information that should be contained on an ICU website was determined. The results, expressed in absolute numbers and percentages, showed that 66 of the 292 hospitals with an ICU (22.6%) had a web site; 50.7% of the sites showed the number of beds, 19.7% the activity report, 11.3% the published articles/studies and ongoing research lines, and 9.9% the organized training courses. Fourteen sites (19.7%) displayed images of nurses. However, only 1 (1.4%) offered guides on the procedures followed. No web site offered a navigation section for nursing, the e-mail address of the chief nurse, the nursing documentation used, or whether any nursing model of their own was used. It is concluded that only one-fourth of Spanish hospitals with an ICU have a web site; the number of beds was the item offered by most sites, whereas information on care, educational and research activities was very limited, and information on nursing was practically absent from the web pages of intensive care units.

  20. The Importance of Prior Probabilities for Entry Page Search

    NARCIS (Netherlands)

    Kraaij, W.; Westerveld, T.H.W.; Hiemstra, Djoerd

    An important class of searches on the world-wide web has the goal of finding an entry page (homepage) of an organisation. Entry page search is quite different from Ad Hoc search; indeed, a plain Ad Hoc system performs disappointingly. We explored three non-content features of web pages: page length,

  1. The sources and popularity of online drug information: an analysis of top search engine results and web page views.

    Science.gov (United States)

    Law, Michael R; Mintzes, Barbara; Morgan, Steven G

    2011-03-01

    The Internet has become a popular source of health information. However, there is little information on what drug information and which Web sites are being searched. To investigate the sources of online information about prescription drugs by assessing the most common Web sites returned in online drug searches and to assess the comparative popularity of Web pages for particular drugs. This was a cross-sectional study of search results for the most commonly dispensed drugs in the US (n=278 active ingredients) on 4 popular search engines: Bing, Google (both US and Canada), and Yahoo. We determined the number of times a Web site appeared as the first result. A linked retrospective analysis counted Wikipedia page hits for each of these drugs in 2008 and 2009. About three quarters of the first result on Google USA for both brand and generic names linked to the National Library of Medicine. In contrast, Wikipedia was the first result for approximately 80% of generic name searches on the other 3 sites. On these other sites, over two thirds of brand name searches led to industry-sponsored sites. The Wikipedia pages with the highest number of hits were mainly for opiates, benzodiazepines, antibiotics, and antidepressants. Wikipedia and the National Library of Medicine rank highly in online drug searches. Further, our results suggest that patients most often seek information on drugs with the potential for dependence, for stigmatized conditions, that have received media attention, and for episodic treatments. Quality improvement efforts should focus on these drugs.

  2. Microsoft Expression Web for dummies

    CERN Document Server

    Hefferman, Linda

    2013-01-01

    Expression Web is Microsoft's newest tool for creating and maintaining dynamic Web sites. This FrontPage replacement offers all the simple "what-you-see-is-what-you-get" tools for creating a Web site along with some pumped up new features for working with Cascading Style Sheets and other design options. Microsoft Expression Web For Dummies arrives in time for early adopters to get a feel for how to build an attractive Web site. Author Linda Hefferman teams up with longtime FrontPage For Dummies author Asha Dornfest to show the easy way for first-time Web designers, FrontPage ve

  3. An Improved Approach to the PageRank Problems

    Directory of Open Access Journals (Sweden)

    Yue Xie

    2013-01-01

    Full Text Available We introduce a partition of the web pages particularly suited to the PageRank problems in which the web link graph has a nested block structure. Based on the partition of the web pages, dangling nodes, common nodes, and general nodes, the hyperlink matrix can be reordered to be a more simple block structure. Then based on the parallel computation method, we propose an algorithm for the PageRank problems. In this algorithm, the dimension of the linear system becomes smaller, and the vector for general nodes in each block can be calculated separately in every iteration. Numerical experiments show that this approach speeds up the computation of PageRank.

  4. A construction scheme of web page comment information extraction system based on frequent subtree mining

    Science.gov (United States)

    Zhang, Xiaowen; Chen, Bingfeng

    2017-08-01

    Based on a frequent subtree mining algorithm, this paper proposes a construction scheme for a web page comment information extraction system, referred to as the FSM system. The paper gives a brief introduction to the overall system architecture and its modules, then describes the core of the system in detail, and finally presents a system prototype.
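
    The underlying intuition, that record-like comment blocks repeat as structurally identical subtrees in a page's DOM, can be sketched as follows. This is a simplified stand-in using a structural signature and a frequency count, not the paper's FSM algorithm, and the sample markup is invented.

```python
# Sketch: find repeated (frequent) element subtrees in a page, a simplified
# stand-in for frequent-subtree mining. The sample markup is illustrative and
# kept well-formed so the standard-library XML parser can handle it.
import xml.etree.ElementTree as ET
from collections import Counter

SAMPLE = """
<div>
  <div class="comment"><span class="user">a</span><p>nice page</p></div>
  <div class="comment"><span class="user">b</span><p>agreed</p></div>
  <div class="comment"><span class="user">c</span><p>not so sure</p></div>
  <div class="sidebar"><ul><li>links</li></ul></div>
</div>
"""

def signature(elem):
    """Structural signature of a subtree: element tags only, text ignored."""
    return "<%s>[%s]" % (elem.tag, ",".join(signature(child) for child in elem))

root = ET.fromstring(SAMPLE)
counts = Counter(signature(e) for e in root.iter())

# Subtrees whose structure occurs at least 3 times are candidate comment blocks.
for sig, n in counts.items():
    if n >= 3:
        print(n, sig)
```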

  5. Web wisdom how to evaluate and create information quality on the Web

    CERN Document Server

    Alexander, Janet E

    1999-01-01

    Web Wisdom is an essential reference for anyone needing to evaluate or establish information quality on the World Wide Web. The book includes easy to use checklists for step-by-step quality evaluations of virtually any Web page. The checklists can also be used by Web authors to help them ensure quality information on their pages. In addition, Web Wisdom addresses other important issues, such as understanding the ways that advertising and sponsorship may affect the quality of Web information. It features: * a detailed discussion of the items involved in evaluating Web information; * checklists

  6. Using Power-Law Degree Distribution to Accelerate PageRank

    Directory of Open Access Journals (Sweden)

    Zhaoyan Jin

    2012-12-01

    Full Text Available The PageRank vector of a network is very important, for it can reflect the importance of a Web page in the World Wide Web, or of a person in a social network. However, with the growth of the World Wide Web and social networks, computing the PageRank vector of a network takes more and more time. In many real-world applications, the degree and PageRank distributions of these complex networks conform to a power-law distribution. This paper utilizes the degree distribution of a network to initialize its PageRank vector, and presents a power-law degree distribution algorithm for accelerating PageRank computation. Experiments on four real-world datasets show that the proposed algorithm converges more quickly than the original PageRank algorithm.
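
    A hedged sketch of the idea: start the power iteration from a vector proportional to in-degree rather than the uniform vector, which for power-law graphs is usually closer to the final PageRank and so needs fewer iterations. The toy graph below and the use of networkx's nstart argument are assumptions of the example, not the paper's implementation.

```python
# Sketch: initialise PageRank's power iteration with the normalised in-degree
# distribution instead of the uniform vector. Toy graph is illustrative only.
import networkx as nx

G = nx.DiGraph(nx.scale_free_graph(200, seed=1))   # power-law-ish toy graph

indeg = dict(G.in_degree())
total = sum(indeg.values()) or 1
nstart = {v: d / total for v, d in indeg.items()}

pr_uniform = nx.pagerank(G, alpha=0.85)                 # default uniform start
pr_degree = nx.pagerank(G, alpha=0.85, nstart=nstart)   # degree-based start

# Both converge to (numerically) the same ranking; the degree-based start is
# typically closer to the fixed point, so convergence tends to be faster.
top = sorted(pr_degree, key=pr_degree.get, reverse=True)[:5]
print(top)
```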

  7. MANAGEMENT CONSULTANCIES' DISCURSIVE CONSTRUCTION OF WORK-LIFE BALANCE: A DISCOURSE ANALYSIS OF WEB PAGES

    OpenAIRE

    Bergqvist, Sofie; Vestin, Mikaela

    2014-01-01

    Academics, practitioners and the media agree that the topic of work-life balance is on the agenda and valued by the new business generation. Although Sweden might be considered a work-friendly country, the management consultancy industry is not recognized as such. Taking an institutional perspective, we will, through a discourse analysis, investigate the communication on Swedish management consultancies' web pages in order to explore how consultancies relate to the work-life balance discourse...

  8. The ICAP (Interactive Course Assignment Pages Publishing System

    Directory of Open Access Journals (Sweden)

    Kim Griggs

    2008-03-01

    Full Text Available The ICAP publishing system is an open source custom content management system that enables librarians to easily and quickly create and manage library help pages for course assignments (ICAPs, without requiring knowledge of HTML or other web technologies. The system's unique features include an emphasis on collaboration and content reuse and an easy-to-use interface that includes in-line help, simple forms and drag and drop functionality. The system generates dynamic, attractive course assignment pages that blend Web 2.0 features with traditional library resources, and makes the pages easier to find by providing a central web page for the course assignment pages. As of December 2007, the code is available as free, open-source software under the GNU General Public License.

  9. The Top 100 Linked-To Pages on UK University Web Sites: High Inlink Counts Are Not Usually Associated with Quality Scholarly Content.

    Science.gov (United States)

    Thelwall, Mike

    2002-01-01

    Reports on an investigation into the most highly linked pages on United Kingdom university Web sites. Concludes that simple link counts are highly unreliable indicators of the average behavior of scholars, and that the most highly linked-to pages are those that facilitate access to a wide range of information rather than providing specific…

  10. Analysis of co-occurrence toponyms in web pages based on complex networks

    Science.gov (United States)

    Zhong, Xiang; Liu, Jiajun; Gao, Yong; Wu, Lun

    2017-01-01

    A large number of geographical toponyms exist in web pages and other documents, providing abundant geographical resources for GIS. It is very common for toponyms to co-occur in the same documents. To investigate these relations associated with geographic entities, a novel complex network model for co-occurrence toponyms is proposed. Then, 12 toponym co-occurrence networks are constructed from the toponym sets extracted from the People's Daily Paper documents of 2010. It is found that two toponyms have a high co-occurrence probability if they are at the same administrative level or if they possess a part-whole relationship. By applying complex network analysis methods to toponym co-occurrence networks, we find the following characteristics. (1) The navigation vertices of the co-occurrence networks can be found by degree centrality analysis. (2) The networks exhibit strong clustering characteristics, and it takes only a few steps to reach one vertex from another, implying that the networks are small-world graphs. (3) The degree distribution satisfies the power law with an exponent of 1.7, so the networks are scale-free. (4) The networks are disassortative and have similar assortative modes, with assortative exponents of approximately 0.18 and assortative indexes less than 0. (5) The frequency of toponym co-occurrence is weakly negatively correlated with geographic distance, but more strongly negatively correlated with administrative hierarchical distance. Considering the toponym frequencies and co-occurrence relationships, a novel method based on link analysis is presented to extract the core toponyms from web pages. This method is suitable and effective for geographical information retrieval.
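
    A small sketch of the network construction and the degree-centrality step, with an invented toy set of documents and toponyms standing in for the People's Daily corpus:

```python
# Sketch: build a toponym co-occurrence network from documents and find the
# "navigation" vertices by degree centrality. Documents/toponyms are invented.
from itertools import combinations
import networkx as nx

docs_toponyms = [
    {"China", "Beijing", "Shanghai"},
    {"China", "Beijing"},
    {"China", "Guangdong", "Shenzhen"},
    {"Guangdong", "Shenzhen"},
]

G = nx.Graph()
for toponyms in docs_toponyms:
    for a, b in combinations(sorted(toponyms), 2):
        if G.has_edge(a, b):
            G[a][b]["weight"] += 1          # co-occurrence frequency
        else:
            G.add_edge(a, b, weight=1)

centrality = nx.degree_centrality(G)
for name, c in sorted(centrality.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{name}: {c:.2f}")
```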

  11. Sign Language Web Pages

    Science.gov (United States)

    Fels, Deborah I.; Richards, Jan; Hardman, Jim; Lee, Daniel G.

    2006-01-01

    The World Wide Web has changed the way people interact. It has also become an important equalizer of information access for many social sectors. However, for many people, including some sign language users, Web accessing can be difficult. For some, it not only presents another barrier to overcome but has left them without cultural equality. The…

  12. Quality of drug information on the World Wide Web and strategies to improve pages with poor information quality. An intervention study on pages about sildenafil.

    Science.gov (United States)

    Martin-Facklam, Meret; Kostrzewa, Michael; Martin, Peter; Haefeli, Walter E

    2004-01-01

    The generally poor quality of health information on the world wide web (WWW) has caused preventable adverse outcomes. Quality management of information on the internet is therefore critical given its widespread use. In order to develop strategies for the safe use of drugs, we scored general and content quality of pages about sildenafil and performed an intervention to improve their quality. The internet was searched with Yahoo and AltaVista for pages about sildenafil and 303 pages were included. For assessment of content quality a score based on accuracy and completeness of essential drug information was assigned. For assessment of general quality, four criteria were evaluated and their association with high content quality was determined by multivariate logistic regression analysis. The pages were randomly allocated to either control or intervention group. Evaluation took place before, as well as 7 and 22 weeks after, an intervention which consisted of two letters with individualized feedback information on the respective page which were sent electronically to the address mentioned on the page. Providing references to scientific publications or prescribing information was significantly associated with high content quality (odds ratio: 8.2, 95% CI 3.2, 20.5). The intervention had no influence on general or content quality. To prevent adverse outcomes caused by misinformation on the WWW, individualized feedback to the address mentioned on the page was ineffective. It is currently probably the most straightforward approach to inform lay persons about indicators of high information quality, i.e. the provision of references.

  13. When the Web meets the cell: using personalized PageRank for analyzing protein interaction networks.

    Science.gov (United States)

    Iván, Gábor; Grolmusz, Vince

    2011-02-01

    An enormous and constantly increasing quantity of biological information is represented in metabolic and in protein interaction network databases. Most of these data are freely accessible through large public depositories. The robust analysis of these resources requires novel technologies, which are being developed today. Here we demonstrate a technique, originating from the PageRank computation for the World Wide Web, for analyzing large interaction networks. The method is fast, scalable and robust, and its capabilities are demonstrated on metabolic network data of the tuberculosis bacterium and the proteomics analysis of the blood of melanoma patients. The Perl script for computing the personalized PageRank in protein networks is available for non-profit research applications (together with sample input files) at the address: http://uratim.com/pp.zip.
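
    The core computation described above, a personalized PageRank biased towards a set of seed vertices, can be sketched as a plain power iteration. This is not the authors' Perl script; it is a minimal Python illustration on a hypothetical interaction graph, with dangling nodes ignored for brevity.

        # Minimal personalized PageRank by power iteration (illustrative only).
        def personalized_pagerank(adj, seeds, damping=0.85, iters=100):
            nodes = list(adj)
            restart = {n: (1.0 / len(seeds) if n in seeds else 0.0) for n in nodes}
            rank = dict(restart)
            for _ in range(iters):
                new = {n: (1 - damping) * restart[n] for n in nodes}
                for n in nodes:
                    targets = adj[n]
                    if not targets:
                        continue  # dangling node: its mass is dropped here
                    share = damping * rank[n] / len(targets)
                    for m in targets:
                        new[m] += share
                rank = new
            return rank

        # Hypothetical protein interaction edges, directed for simplicity.
        adj = {"P1": ["P2", "P3"], "P2": ["P3"], "P3": ["P1"], "P4": ["P1"]}
        print(personalized_pagerank(adj, seeds={"P1"}))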

  14. Web server for priority ordered multimedia services

    Science.gov (United States)

    Celenk, Mehmet; Godavari, Rakesh K.; Vetnes, Vermund

    2001-10-01

    In this work, our aim is to provide finer priority levels in the design of a general-purpose Web multimedia server with provisions of the CM services. The type of services provided include reading/writing a web page, downloading/uploading an audio/video stream, navigating the Web through browsing, and interactive video teleconferencing. The selected priority encoding levels for such operations follow the order of admin read/write, hot page CM and Web multicasting, CM read, Web read, CM write and Web write. Hot pages are the most requested CM streams (e.g., the newest movies, video clips, and HDTV channels) and Web pages (e.g., portal pages of the commercial Internet search engines). Maintaining a list of these hot Web pages and CM streams in a content addressable buffer enables a server to multicast hot streams with lower latency and higher system throughput. Cold Web pages and CM streams are treated as regular Web and CM requests. Interactive CM operations such as pause (P), resume (R), fast-forward (FF), and rewind (RW) have to be executed without allocation of extra resources. The proposed multimedia server model is a part of the distributed network with load balancing schedulers. The SM is connected to an integrated disk scheduler (IDS), which supervises an allocated disk manager. The IDS follows the same priority handling as the SM, and implements a SCAN disk-scheduling method for an improved disk access and a higher throughput. Different disks are used for the Web and CM services in order to meet the QoS requirements of CM services. The IDS output is forwarded to an Integrated Transmission Scheduler (ITS). The ITS creates a priority ordered buffering of the retrieved Web pages and CM data streams that are fed into an auto regressive moving average (ARMA) based traffic shaping circuitry before being transmitted through the network.
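
    A scheduler that serves requests in the stated priority order (admin read/write first, Web write last) can be sketched with an ordinary priority queue. The request labels below are an illustrative reduction of the paper's scheme, not its actual implementation.

        import heapq
        import itertools

        # Priority order taken from the abstract; a lower number is served first.
        PRIORITY = {"admin_rw": 0, "hot_cm_multicast": 1, "cm_read": 2,
                    "web_read": 3, "cm_write": 4, "web_write": 5}

        queue, counter = [], itertools.count()

        def submit(request_type, payload):
            # The counter preserves FIFO order among requests of equal priority.
            heapq.heappush(queue, (PRIORITY[request_type], next(counter), payload))

        def serve_next():
            _, _, payload = heapq.heappop(queue)
            return payload

        submit("web_read", "GET /index.html")
        submit("admin_rw", "PUT /config")
        submit("cm_read", "STREAM movie.mp4")
        print(serve_next())  # the admin request is served first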

  15. Research of Subgraph Estimation Page Rank Algorithm for Web Page Rank

    Directory of Open Access Journals (Sweden)

    LI Lan-yin

    2017-04-01

    Full Text Available The traditional PageRank algorithm cannot efficiently handle the ranking of Web pages over large data sets. This paper proposes an accelerated algorithm named topK-Rank, which is based on PageRank on the MapReduce platform. It can find the top k nodes efficiently for a given graph without sacrificing accuracy. In order to identify the top k nodes, the topK-Rank algorithm prunes unnecessary nodes and edges in each iteration to dynamically construct subgraphs, and iteratively estimates lower/upper bounds of PageRank scores through subgraphs. Theoretical analysis shows that this method guarantees result exactness. Experiments show that the topK-Rank algorithm can find the top k nodes much faster than the existing approaches.
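
    The pruning idea behind topK-Rank, keep per-node lower and upper bounds on the PageRank score and stop propagating from nodes whose upper bound cannot reach the k-th best lower bound, can be illustrated roughly as follows. The bounds used here are crude heuristics for flavour only; the paper derives bounds with exactness guarantees, and its implementation runs on MapReduce rather than on a single machine.

        # Simplified single-machine illustration of top-k pruning during PageRank
        # iteration. The interval is a crude heuristic (score +/- last per-node change).
        def topk_rank(adj, k, damping=0.85, iters=50):
            nodes = list(adj)
            rank = {n: 1.0 / len(nodes) for n in nodes}
            active = set(nodes)
            for _ in range(iters):
                new = {n: (1 - damping) / len(nodes) for n in nodes}
                for n in active:
                    targets = adj[n]
                    if not targets:
                        continue
                    share = damping * rank[n] / len(targets)
                    for m in targets:
                        new[m] += share
                delta = {n: abs(new[n] - rank[n]) for n in nodes}
                lower = {n: new[n] - delta[n] for n in nodes}
                upper = {n: new[n] + delta[n] for n in nodes}
                kth_lower = sorted(lower.values(), reverse=True)[k - 1]
                # Prune nodes that cannot reach the top k under this heuristic;
                # pruned nodes stop propagating, shrinking the working subgraph.
                active = {n for n in nodes if upper[n] >= kth_lower}
                rank = new
            return sorted(rank, key=rank.get, reverse=True)[:k]

        adj = {"A": ["B", "C"], "B": ["C"], "C": ["A"], "D": ["A", "C"]}
        print(topk_rank(adj, k=2))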

  16. CERN single sign on solution

    International Nuclear Information System (INIS)

    Ormancey, E

    2008-01-01

    The need for Single Sign On has always been restricted by the absence of cross-platform solutions: a single sign on working only on one platform or technology is nearly useless. The recent improvements in the Web Services Federation (WS-Federation) standard, enabling federation of identity, attribute, authentication and authorization information, can now provide real extended Single Sign On solutions. Various solutions have been investigated at CERN and now a Web SSO solution using some parts of WS-Federation technology is available. Using the Shibboleth Service Provider module for Apache hosted web sites and Microsoft ADFS as the identity provider linked to Active Directory users, users can now authenticate on any web application using a single authentication platform, providing identity and user information (building, phone...) as well as group membership, enabling authorization possibilities. A typical scenario: a CERN user can now authenticate on a Linux/Apache website using Windows Integrated credentials, and his Active Directory group membership can be checked before allowing access to a specific web page.

  17. REVIEW PAPER ON THE DEEP WEB DATA EXTRACTION

    OpenAIRE

    Prof. V. S. Patil; Miss Sneha Sitafale; Miss Priyanka Kale; Miss Poonam Bhujbal; Miss Mohini Dandge

    2018-01-01

    Deep web data extraction is the process of extracting a set of data records and the items that they contain from a query result page. Such structured data can be later integrated into results from other data sources and given to the user in a single, cohesive view. Domain identification is used to identify the query interfaces related to the domain from the forms obtained in the search process. The surface web contains a large amount of unfiltered information, whereas the deep web includes hi...

  18. EPA Web Training Classes

    Science.gov (United States)

    Scheduled webinars can help you better manage EPA web content. Class topics include Drupal basics, creating different types of pages in the WebCMS such as document pages and forms, using Google Analytics, and best practices for metadata and accessibility.

  19. The Evaluation of Web pages of State Universities’ Usability via Content Analysis

    Directory of Open Access Journals (Sweden)

    Ezgi CEVHER

    2015-12-01

    Full Text Available Within the scope of the e-transformation project in Turkey, the “Preparation of Guideline for State Institutions’ Web Pages” action has been carried out to ensure minimal cohesiveness among government institutions’ and organizations’ Web pages in terms of design and content. As a result of those efforts, the first edition of the “Guideline for State Institutions’ Web Pages” was prepared in 2006. The second edition of this guideline was published in 2009 in simpler form under the name “Guideline and Suggestions for the Standards of Governmental Institutions’ Web Pages”. It became compulsory for local and central level governmental institutions and organizations to obey the procedures and principles stated in the Guideline. Through this Guideline, the preparation of governmental institutions’ websites in harmony with the mentioned standards, and updating them in parallel with changing conditions and requirements, have been brought to the agenda especially in recent years. In this study, by considering the characteristics stated in the Guideline, the webpages of state universities have been assessed through “content analysis”. Considering that the webpages of universities are visited by hundreds of visitors daily, it is necessary to ensure effective, productive and comfortable usability. For this reason, by analyzing their webpages, the objective is to determine to what extent the state universities implement the compulsory principles stated in the Guideline; the webpages of universities have been assessed in this study from the aspects of compliance with standards, usability, and accessibility.

  20. Introduction to Webometrics Quantitative Web Research for the Social Sciences

    CERN Document Server

    Thelwall, Michael

    2009-01-01

    Webometrics is concerned with measuring aspects of the web: web sites, web pages, parts of web pages, words in web pages, hyperlinks, web search engine results. The importance of the web itself as a communication medium and for hosting an increasingly wide array of documents, from journal articles to holiday brochures, needs no introduction. Given this huge and easily accessible source of information, there are limitless possibilities for measuring or counting on a huge scale (e.g., the number of web sites, the number of web pages, the number of blogs) or on a smaller scale (e.g., the number o

  1. On Page Rank

    NARCIS (Netherlands)

    Hoede, C.

    In this paper the concept of page rank for the world wide web is discussed. The possibility of describing the distribution of page rank by an exponential law is considered. It is shown that the concept is essentially equal to that of status score, a centrality measure discussed already in 1953 by

  2. Health on the Net Foundation: assessing the quality of health web pages all over the world.

    Science.gov (United States)

    Boyer, Célia; Gaudinat, Arnaud; Baujard, Vincent; Geissbühler, Antoine

    2007-01-01

    The Internet provides a great amount of information and has become one of the communication media which is most widely used [1]. However, the problem is no longer finding information but assessing the credibility of the publishers as well as the relevance and accuracy of the documents retrieved from the web. This problem is particularly relevant in the medical area which has a direct impact on the well-being of citizens. In this paper, we assume that the quality of web pages can be controlled, even when a huge amount of documents has to be reviewed. But this must be supported by both specific automatic tools and human expertise. In this context, we present various initiatives of the Health on the Net Foundation informing the citizens about the reliability of the medical content on the web.

  3. Survey of Techniques for Deep Web Source Selection and Surfacing the Hidden Web Content

    OpenAIRE

    Khushboo Khurana; M.B. Chandak

    2016-01-01

    Large and continuously growing dynamic web content has created new opportunities for large-scale data analysis in the recent years. There is huge amount of information that the traditional web crawlers cannot access, since they use link analysis technique by which only the surface web can be accessed. Traditional search engine crawlers require the web pages to be linked to other pages via hyperlinks causing large amount of web data to be hidden from the crawlers. Enormous data is available in...

  4. Food Enterprise Web Design Based on User Experience

    OpenAIRE

    Fei Wang

    2015-01-01

    Excellent food enterprise web design conveys a good visual communication effect through the user experience. This study took food enterprise managers and customers as the main operating subjects in order to assess the performance of web page creation; web page design should focus not only on function and work efficiency, but above all on the user experience during web page interaction.

  5. ELAN - the web page based information system for emergency preparedness in Germany

    International Nuclear Information System (INIS)

    Zaehringer, M.; Hoebler, Ch.; Bieringer, P.

    2002-01-01

    A plan for a WEB-page based system was developed which compiles all important information in case of a nuclear emergency with actual or potential release of radioactivity into the environment. A prototype system providing information from the Federal Ministry for Environment, Nature Conservation and Reactor Safety (BMU) was tested successfully. The implementation at the National Emergency Operations Centre of Switzerland was used as a template. However, further planning takes into account the special conditions of the federal structure in Germany. The main purpose of the system is to compile, to clearly arrange, and to timely provide on a central server all relevant information of the federal government, the states (Laender), and, if available, from foreign authorities that is needed for decision making. It is envisaged to integrate similar existing systems in some states conceptually and technically. ELAN makes use of standardised and secure web technology. Uploading of information and delivery to national and foreign authorities, international organisations and the public is managed by role-specific access control. (orig.)

  6. WEB STRUCTURE MINING

    Directory of Open Access Journals (Sweden)

    CLAUDIA ELENA DINUCĂ

    2011-01-01

    Full Text Available The World Wide Web became one of the most valuable resources for information retrieval and knowledge discovery due to the permanent increase in the amount of data available online. Given the dimension of the web, users easily get lost in the web’s rich hyper structure. The application of data mining methods is the right solution for knowledge discovery on the Web. The knowledge extracted from the Web can be used to improve the performance of Web information retrieval, question answering and Web-based data warehousing. In this paper, I provide an introduction to the Web mining categories and focus on one of these categories: Web structure mining. Web structure mining, one of the three categories of web mining, is a tool used to identify the relationship between Web pages linked by information or by direct link connection. It offers information about how different pages are linked together to form this huge web. Web structure mining finds hidden basic structures and uses hyperlinks for further web applications such as web search.

  7. Introduction to the world wide web.

    Science.gov (United States)

    Downes, P K

    2007-05-12

    The World Wide Web used to be nicknamed the 'World Wide Wait'. Now, thanks to high speed broadband connections, browsing the web has become a much more enjoyable and productive activity. Computers need to know where web pages are stored on the Internet, in just the same way as we need to know where someone lives in order to post them a letter. This section explains how the World Wide Web works and how web pages can be viewed using a web browser.

  8. JavaScript and interactive web pages in radiology.

    Science.gov (United States)

    Gurney, J W

    2001-10-01

    Web publishing is becoming a more common method of disseminating information. JavaScript is an object-oriented language embedded into modern browsers and has a wide variety of uses. The use of JavaScript in radiology is illustrated by calculating the indices of sensitivity, specificity, and predictive values from a table of true positives, true negatives, false positives, and false negatives. In addition, a single line of JavaScript code can be used to annotate images, which has a wide variety of uses.
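
    The diagnostic indices mentioned above are simple ratios over a 2x2 table of test outcomes. A short sketch of the same arithmetic follows (shown here in Python, whereas the article embeds the calculation in a web page with JavaScript).

        # Diagnostic indices from a 2x2 table of test results.
        def diagnostic_indices(tp, fn, fp, tn):
            return {
                "sensitivity": tp / (tp + fn),   # true positive rate
                "specificity": tn / (tn + fp),   # true negative rate
                "ppv": tp / (tp + fp),           # positive predictive value
                "npv": tn / (tn + fn),           # negative predictive value
            }

        print(diagnostic_indices(tp=90, fn=10, fp=20, tn=80))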

  9. Constructing a web recommender system using web usage mining and user’s profiles

    Directory of Open Access Journals (Sweden)

    T. Mombeini

    2014-12-01

    Full Text Available The World Wide Web is a great source of information, which is nowadays being widely used due to the availability of useful, dynamically changing information. However, the large number of webpages often confuses many users and it is hard for them to find information matching their interests. Therefore, it is necessary to provide a system capable of guiding users towards their desired choices and services. Recommender systems search among a large collection of user interests and recommend those which are likely to be favored the most by the user. Web usage mining was designed to function on web server records, which are included in user search results. Therefore, recommender servers use the web usage mining technique to predict users’ browsing patterns and recommend those patterns in the form of a suggestion list. In this article, a recommender system based on web usage mining phases (online and offline) is proposed. In the offline phase, the first step is to analyze user access records to identify user sessions. Next, user profiles are built using data from server records based on the frequency of access to pages, the time spent by the user on each page and the date of page view. Date is important since users are more likely to request new pages than old ones, and old pages are less likely to be viewed, as users mostly look for new information. Following the creation of user profiles, users are categorized in clusters using the Fuzzy C-means clustering algorithm and the S(c) criterion based on their similarities. In the online phase, a neural network is employed to identify the suggestion model, while online suggestions are generated using the suggestion module for the active user. Search engines analyze suggestion lists based on the rate of user interest in pages and on page rank and finally suggest appropriate pages to the active user. Experiments show that the proposed method of predicting user recent requested pages has more accuracy and
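
    The offline profile-building step described above, weighting each page by access frequency, dwell time and recency before clustering similar users, might be sketched roughly as below. The weighting scheme and record fields are illustrative assumptions, and a plain k-means stands in for the Fuzzy C-means clustering used in the article.

        from collections import defaultdict
        from sklearn.cluster import KMeans

        # Hypothetical session records: (user, page, seconds_on_page, days_ago).
        records = [("u1", "/news", 120, 1), ("u1", "/sport", 30, 10),
                   ("u2", "/news", 90, 2), ("u2", "/tech", 200, 1),
                   ("u3", "/sport", 60, 3)]

        # Profile per user: pages weighted by frequency, dwell time and recency.
        profiles = defaultdict(lambda: defaultdict(float))
        for user, page, seconds, days_ago in records:
            recency = 1.0 / (1 + days_ago)      # newer visits weigh more
            profiles[user][page] += seconds * recency

        pages = sorted({p for prof in profiles.values() for p in prof})
        users = sorted(profiles)
        vectors = [[profiles[u][p] for p in pages] for u in users]

        # Cluster similar users (k-means as a stand-in for Fuzzy C-means).
        labels = KMeans(n_clusters=2, n_init=10).fit_predict(vectors)
        print(dict(zip(users, labels)))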

  10. Finding pages on the unarchived Web

    NARCIS (Netherlands)

    J. Kamps; A. Ben-David; H.C. Huurdeman; A.P. de Vries (Arjen); T. Samar (Thaer)

    2014-01-01

    Web archives preserve the fast-changing Web, yet are highly incomplete due to crawling restrictions, crawling depth and frequency, or restrictive selection policies: most of the Web is unarchived and therefore lost to posterity. In this paper, we propose an approach to recover significant

  11. 16 CFR 1130.8 - Requirements for Web site registration or alternative e-mail registration.

    Science.gov (United States)

    2010-01-01

    ... registration. (a) Link to registration page. The manufacturer's Web site, or other Web site established for the... web page that goes directly to “Product Registration.” (b) Purpose statement. The registration page... registration page. The Web site registration page shall request only the consumer's name, address, telephone...

  12. Towards Second and Third Generation Web-Based Multimedia

    OpenAIRE

    Ossenbruggen, Jacco; Geurts, Joost; Cornelissen, F.J.; Rutledge, Lloyd; Hardman, Lynda

    2001-01-01

    First generation Web content encodes information in handwritten (HTML) Web pages. Second generation Web content generates HTML pages on demand, e.g. by filling in templates with content retrieved dynamically from a database or by transformation of structured documents using style sheets (e.g. XSLT). Third generation Web pages will make use of rich markup (e.g. XML) along with metadata (e.g. RDF) schemes to make the content not only machine readable but also machine processable - a ne...

  13. Effects of Learning Style and Training Method on Computer Attitude and Performance in World Wide Web Page Design Training.

    Science.gov (United States)

    Chou, Huey-Wen; Wang, Yu-Fang

    1999-01-01

    Compares the effects of two training methods on computer attitude and performance in a World Wide Web page design program in a field experiment with high school students in Taiwan. Discusses individual differences, Kolb's Experiential Learning Theory and Learning Style Inventory, Computer Attitude Scale, and results of statistical analyses.…

  14. Give your feedback on the new Users’ page

    CERN Multimedia

    CERN Bulletin

    If you haven't already done so, visit the new Users’ page and provide the Communications group with your feedback. You can do this quickly and easily via an online form. A dedicated web steering group will design the future page on the basis of your comments. As a first step towards reforming the CERN website, the Communications group is proposing a ‘beta’ version of the Users’ pages. The primary aim of this version is to improve the visibility of key news items, events and announcements to the CERN community. The beta version is very much work in progress: your input is needed to make sure that the final site meets the needs of CERN’s wide and mixed community. The Communications group will read all your comments and suggestions, and will establish a web steering group that will make sure that the future CERN web pages match the needs of the community. More information on this process, including the gradual 'retirement' of the grey Users' pages we are a...

  15. WEB STRUCTURE MINING USING PAGERANK, IMPROVED PAGERANK – AN OVERVIEW

    Directory of Open Access Journals (Sweden)

    V. Lakshmi Praba

    2011-03-01

    Full Text Available Web Mining is the extraction of interesting and potentially useful patterns and information from the Web. It includes Web documents, hyperlinks between documents, and usage logs of web sites. The significant tasks for web mining can be listed as Information Retrieval, Information Selection/Extraction, Generalization and Analysis. Web information retrieval tools consider only the text on pages and ignore information in the links. The goal of Web structure mining is to explore the structural summary of the web. Web structure mining, focusing on link information, is an important aspect of web data. This paper presents an overview of PageRank and Improved PageRank and of their working functionality in web structure mining.

  16. Towards Second and Third Generation Web-Based Multimedia

    NARCIS (Netherlands)

    J.R. van Ossenbruggen (Jacco); J.P.T.M. Geurts (Joost); F.J. Cornelissen; L. Rutledge (Lloyd); L. Hardman (Lynda)

    2001-01-01

    First generation Web content encodes information in handwritten (HTML) Web pages. Second generation Web content generates HTML pages on demand, e.g. by filling in templates with content retrieved dynamically from a database or by transformation of structured documents using style sheets

  17. Web party effect: a cocktail party effect in the web environment.

    Science.gov (United States)

    Rigutti, Sara; Fantoni, Carlo; Gerbino, Walter

    2015-01-01

    In goal-directed web navigation, labels compete for selection: this process often involves knowledge integration and requires selective attention to manage the dizziness of web layouts. Here we ask whether the competition for selection depends on all web navigation options or only on those options that are more likely to be useful for information seeking, and provide evidence in favor of the latter alternative. Participants in our experiment navigated a representative set of real websites of variable complexity, in order to reach an information goal located two clicks away from the starting home page. The time needed to reach the goal was accounted for by a novel measure of home page complexity based on a part of (not all) web options: the number of links embedded within web navigation elements weighted by the number and type of embedding elements. Our measure fully mediated the effect of several standard complexity metrics (the overall number of links, words, images, graphical regions, the JPEG file size of home page screenshots) on information seeking time and usability ratings. Furthermore, it predicted the cognitive demand of web navigation, as revealed by the duration judgment ratio (i.e., the ratio of subjective to objective duration of information search). Results demonstrate that focusing on relevant links while ignoring other web objects optimizes the deployment of attentional resources necessary to navigation. This is in line with a web party effect (i.e., a cocktail party effect in the web environment): users tune into web elements that are relevant for the achievement of their navigation goals and tune out all others.

  18. Web pages of Slovenian public libraries

    Directory of Open Access Journals (Sweden)

    Silva Novljan

    2002-01-01

    Full Text Available Libraries should offer their patrons web sites which establish the unmistakeable concept of a (public) library, a concept that cannot be mistaken for other information brokers and services available on the Internet, but which, inside this framework of the concept of library, shows a diversity that directs patrons to other (public) libraries. This can be achieved by reliability, quality of information and services, and safety of usage. Achieving this, patrons regard library web sites as important reference sources deserving continuous usage for obtaining relevant information. Libraries justify investment in the development and sustenance of their web sites by the number of visits and by patron satisfaction. The presented research, made on a sample of Slovene public libraries’ web sites, determines how the libraries establish their purpose and role, as well as the given professional recommendations in web site design. The results uncover the striving of libraries for the modernisation of their functions; major attention is directed to the presentation of classic libraries and their activities, lesser to the expansion of available contents and electronic sources. Pointing to their diversity is significant since it is not a result of patrons’ needs, but more the consequence of improvisation and too little attention to selection, availability, organisation and formation of different kinds of information and services on the web sites. Based on the analysis of a common concept of the public library web site, certain activities for improving the existing state of affairs are presented in the paper.

  19. EVALUATION OF WEB SEARCHING METHOD USING A NOVEL WPRR ALGORITHM FOR TWO DIFFERENT CASE STUDIES

    Directory of Open Access Journals (Sweden)

    V. Lakshmi Praba

    2012-04-01

    Full Text Available The World-Wide Web provides every internet citizen with access to an abundance of information, but it becomes increasingly difficult to identify the relevant pieces of information. Research in web mining tries to address this problem by applying techniques from data mining and machine learning to web data and documents. Web content mining and web structure mining have important roles in identifying the relevant web page. Relevancy of a web page denotes how well a retrieved web page or set of web pages meets the information need of the user. Page Rank, Weighted Page Rank and Hypertext Induced Topic Selection (HITS) are existing algorithms which consider only web structure mining. The Vector Space Model (VSM), Cover Density Ranking (CDR), Okapi similarity measurement (Okapi) and the Three-Level Scoring method (TLS) are some of the existing relevancy score methods which consider only web content mining. In this paper, we propose a new algorithm, Weighted Page with Relevant Rank (WPRR), which is a blend of both web content mining and web structure mining and which demonstrates the relevancy of the page with respect to a given query for two different case scenarios. It is shown that WPRR’s performance is better than the existing algorithms.

  20. Probabilistic relation between In-Degree and PageRank

    NARCIS (Netherlands)

    Litvak, Nelli; Scheinhardt, Willem R.W.; Volkovich, Y.

    2008-01-01

    This paper presents a novel stochastic model that explains the relation between power laws of In-Degree and PageRank. PageRank is a popularity measure designed by Google to rank Web pages. We model the relation between PageRank and In-Degree through a stochastic equation, which is inspired by the

  1. Nuclear expert web search and crawler algorithm

    International Nuclear Information System (INIS)

    Reis, Thiago; Barroso, Antonio C.O.; Baptista, Benedito Filho D.

    2013-01-01

    In this paper we present preliminary research on a web search and crawling algorithm applied specifically to nuclear-related web information. We designed a web-based nuclear-oriented expert system guided by a web crawler algorithm and a neural network able to search and retrieve nuclear-related hypertextual web information in an autonomous and massive fashion. Preliminary experimental results show a retrieval precision of 80% for web pages related to any nuclear theme and a retrieval precision of 72% for web pages related only to the nuclear power theme. (author)

  2. Nuclear expert web search and crawler algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Reis, Thiago; Barroso, Antonio C.O.; Baptista, Benedito Filho D., E-mail: thiagoreis@usp.br, E-mail: barroso@ipen.br, E-mail: bdbfilho@ipen.br [Instituto de Pesquisas Energeticas e Nucleares (IPEN/CNEN-SP), Sao Paulo, SP (Brazil)

    2013-07-01

    In this paper we present preliminary research on a web search and crawling algorithm applied specifically to nuclear-related web information. We designed a web-based nuclear-oriented expert system guided by a web crawler algorithm and a neural network able to search and retrieve nuclear-related hypertextual web information in an autonomous and massive fashion. Preliminary experimental results show a retrieval precision of 80% for web pages related to any nuclear theme and a retrieval precision of 72% for web pages related only to the nuclear power theme. (author)
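
    The crawl-and-classify loop described in the two records above can be sketched with a keyword-frequency score standing in for the neural network classifier. The seed URL, term list and threshold are illustrative assumptions, not the authors' system.

        from collections import deque
        from urllib.parse import urljoin

        import requests
        from bs4 import BeautifulSoup

        NUCLEAR_TERMS = ("nuclear", "reactor", "radiation", "isotope")

        def relevance(text):
            # Stand-in for the neural-network classifier: keyword frequency score.
            text = text.lower()
            return sum(text.count(term) for term in NUCLEAR_TERMS)

        def crawl(seed, max_pages=20, threshold=3):
            seen, frontier, hits = {seed}, deque([seed]), []
            while frontier and len(seen) < max_pages:
                url = frontier.popleft()
                try:
                    html = requests.get(url, timeout=5).text
                except requests.RequestException:
                    continue
                soup = BeautifulSoup(html, "html.parser")
                if relevance(soup.get_text()) >= threshold:
                    hits.append(url)
                    # Follow links only from relevant pages (focused crawling).
                    for a in soup.find_all("a", href=True):
                        link = urljoin(url, a["href"])
                        if link.startswith("http") and link not in seen:
                            seen.add(link)
                            frontier.append(link)
            return hits

        # print(crawl("https://example.org/nuclear-index.html"))  # hypothetical seed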

  3. Working with WebQuests: Making the Web Accessible to Students with Disabilities.

    Science.gov (United States)

    Kelly, Rebecca

    2000-01-01

    This article describes how students with disabilities in regular classes are using the WebQuest lesson format to access the Internet. It explains essential WebQuest principles, creating a draft Web page, and WebQuest components. It offers an example of a WebQuest about salvaging the sunken ships, Titanic and Lusitania. A WebQuest planning form is…

  4. Upgrade of CERN OP Webtools IRRAD Page

    CERN Document Server

    Vik, Magnus Bjerke

    2017-01-01

    CERN Beams Department maintains a website with various tools for the Operations Group, one of them being specific to the Proton Irradiation Facility (IRRAD). The IRRAD team use the tool to follow up and optimize the operation of the facility. The original version of the tool was difficult to maintain and adding new features to the page was challenging. Thus this summer student project aimed to upgrade the web page by rewriting it with maintainability and flexibility in mind. The new application uses a server-client architecture with a REST API on the back end which is used by the front end to request data for visualization. PHP is used on the back end to implement the APIs and Swagger is used to document them. Vue, Semantic UI, Webpack, Node and ECMAScript 5 are used on the front end to visualize and administer the data. The result is a new IRRAD operations web application with extended functionality, improved structure and an improved user interface. It includes a new Status Panel page th...

  5. Het WEB leert begrijpen

    CERN Multimedia

    Stroeykens, Steven

    2004-01-01

    The Web could be much more useful if computers understood something of the information on Web pages. That is the goal of the "semantic Web", a project in which, amongst others, Tim Berners-Lee, the inventor of the original Web, takes part.

  6. Funnel-web spider bite

    Science.gov (United States)

    MedlinePlus encyclopedia page (medlineplus.gov/ency/article/002844.htm) on funnel-web spider bite, covering the effects of a bite from the funnel-web spider. Male funnel-web spiders are more poisonous ...

  7. UPGRADE OF THE CENTRAL WEB SERVERS

    CERN Multimedia

    WEB Services

    2000-01-01

    During the weekend of the 25-26 March, the infrastructure of the CERN central web servers will undergo a major upgrade.As a result, the web services hosted by the central servers (that is, the services the address of which starts with www.cern.ch) will be unavailable Friday 24th, from 17:30 to 18:30, and may suffer from short interruptions until 20:00. This includes access to the CERN top-level page as well as the services referenced by this page (such as access to the scientific program and events information, or training, recruitment, housing services).After the upgrade, the change will be transparent to the users. Expert readers may however notice that when they connect to a web page starting with www.cern.ch this address is slightly changed when the page is actually displayed on their screen (e.g. www.cern.ch/Press will be changed to Press.web.cern.ch/Press). They should not worry: this behaviour, necessary for technical reasons, is normal.web.services@cern.chTel 74989

  8. EPA Web Taxonomy

    Data.gov (United States)

    U.S. Environmental Protection Agency — EPA's Web Taxonomy is a faceted hierarchical vocabulary used to tag web pages with terms from a controlled vocabulary. Tagging enables search and discovery of EPA's...

  9. Web party effect: a cocktail party effect in the web environment

    Directory of Open Access Journals (Sweden)

    Sara Rigutti

    2015-03-01

    Full Text Available In goal-directed web navigation, labels compete for selection: this process often involves knowledge integration and requires selective attention to manage the dizziness of web layouts. Here we ask whether the competition for selection depends on all web navigation options or only on those options that are more likely to be useful for information seeking, and provide evidence in favor of the latter alternative. Participants in our experiment navigated a representative set of real websites of variable complexity, in order to reach an information goal located two clicks away from the starting home page. The time needed to reach the goal was accounted for by a novel measure of home page complexity based on a part of (not all) web options: the number of links embedded within web navigation elements weighted by the number and type of embedding elements. Our measure fully mediated the effect of several standard complexity metrics (the overall number of links, words, images, graphical regions, the JPEG file size of home page screenshots) on information seeking time and usability ratings. Furthermore, it predicted the cognitive demand of web navigation, as revealed by the duration judgment ratio (i.e., the ratio of subjective to objective duration of information search). Results demonstrate that focusing on relevant links while ignoring other web objects optimizes the deployment of attentional resources necessary to navigation. This is in line with a web party effect (i.e., a cocktail party effect in the web environment): users tune into web elements that are relevant for the achievement of their navigation goals and tune out all others.

  10. Web party effect: a cocktail party effect in the web environment

    Science.gov (United States)

    Gerbino, Walter

    2015-01-01

    In goal-directed web navigation, labels compete for selection: this process often involves knowledge integration and requires selective attention to manage the dizziness of web layouts. Here we ask whether the competition for selection depends on all web navigation options or only on those options that are more likely to be useful for information seeking, and provide evidence in favor of the latter alternative. Participants in our experiment navigated a representative set of real websites of variable complexity, in order to reach an information goal located two clicks away from the starting home page. The time needed to reach the goal was accounted for by a novel measure of home page complexity based on a part of (not all) web options: the number of links embedded within web navigation elements weighted by the number and type of embedding elements. Our measure fully mediated the effect of several standard complexity metrics (the overall number of links, words, images, graphical regions, the JPEG file size of home page screenshots) on information seeking time and usability ratings. Furthermore, it predicted the cognitive demand of web navigation, as revealed by the duration judgment ratio (i.e., the ratio of subjective to objective duration of information search). Results demonstrate that focusing on relevant links while ignoring other web objects optimizes the deployment of attentional resources necessary to navigation. This is in line with a web party effect (i.e., a cocktail party effect in the web environment): users tune into web elements that are relevant for the achievement of their navigation goals and tune out all others. PMID:25802803

  11. FlaME: Flash Molecular Editor - a 2D structure input tool for the web

    Directory of Open Access Journals (Sweden)

    Dallakian Pavel

    2011-02-01

    Full Text Available Abstract Background So far, there have been no Flash-based web tools available for chemical structure input. The authors herein present a feasibility study, aiming at the development of a compact and easy-to-use 2D structure editor, using Adobe's Flash technology and its programming language, ActionScript. As a reference model application from the Java world, we selected the Java Molecular Editor (JME). In this feasibility study, we made an attempt to realize a subset of JME's functionality in the Flash Molecular Editor (FlaME) utility. These basic capabilities are: structure input, editing and depiction of single molecules, data import and export in molfile format. Implementation The result of molecular diagram sketching in FlaME is accessible in V2000 molfile format. By integrating the molecular editor into a web page, its communication with the HTML elements on this page is established using the two JavaScript functions, getMol() and setMol(). In addition, structures can be copied to the system clipboard. Conclusion A first attempt was made to create a compact single-file application for 2D molecular structure input/editing on the web, based on Flash technology. With the application examples presented in this article, it could be demonstrated that the Flash methods are principally well-suited to provide the requisite communication between the Flash object (application) and the HTML elements on a web page, using JavaScript functions.

  12. A design method for an intuitive web site

    Energy Technology Data Exchange (ETDEWEB)

    Quinniey, M.L.; Diegert, K.V.; Baca, B.G.; Forsythe, J.C.; Grose, E.

    1999-11-03

    The paper describes a methodology for designing a web site for human factors engineers that is applicable to designing a web site for a group of people. Many web pages on the World Wide Web are not organized in a format that allows a user to efficiently find information. Often the information and hypertext links on web pages are not organized into intuitive groups. Intuition implies that a person is able to use their knowledge of a paradigm to solve a problem. Intuitive groups are categories that allow web page users to find information by using their intuition or mental models of categories. In order to improve human factors engineers' efficiency in finding information on the World Wide Web, research was performed to develop a web site that serves as a tool for finding information effectively. The paper describes a methodology for designing a web site for a group of people who perform similar tasks in an organization.

  13. Semantic Advertising for Web 3.0

    Science.gov (United States)

    Thomas, Edward; Pan, Jeff Z.; Taylor, Stuart; Ren, Yuan; Jekjantuk, Nophadol; Zhao, Yuting

    Advertising on the World Wide Web is based around automatically matching web pages with appropriate advertisements, in the form of banner ads, interactive adverts, or text links. Traditionally this has been done by manual classification of pages, or more recently using information retrieval techniques to find the most important keywords from the page, and match these to keywords being used by adverts. In this paper, we propose a new model for online advertising, based around lightweight embedded semantics. This will improve the relevancy of adverts on the World Wide Web and help to kick-start the use of RDFa as a mechanism for adding lightweight semantic attributes to the Web. Furthermore, we propose a system architecture for the proposed new model, based on our scalable ontology reasoning infrastructure TrOWL.

  14. What’s New? Deploying a Library New Titles Page with Minimal Programming

    Directory of Open Access Journals (Sweden)

    John Meyerhofer

    2017-01-01

    Full Text Available With a new titles web page, a library has a place to show faculty, students, and staff the items they are purchasing for their community. However, many times heavy programming knowledge and/or a LAMP stack (Linux, Apache, MySQL, PHP) or APIs separate a library’s data from making a new titles web page a reality. Without IT staff, a new titles page can become nearly impossible or not worth the effort. Here we will demonstrate how a small liberal arts college took its acquisition data and combined it with a Google Sheet, HTML, and a little JavaScript to create a new titles web page that was dynamic and engaging to its users.

  15. Learning Layouts for Single-Page Graphic Designs.

    Science.gov (United States)

    O'Donovan, Peter; Agarwala, Aseem; Hertzmann, Aaron

    2014-08-01

    This paper presents an approach for automatically creating graphic design layouts using a new energy-based model derived from design principles. The model includes several new algorithms for analyzing graphic designs, including the prediction of perceived importance, alignment detection, and hierarchical segmentation. Given the model, we use optimization to synthesize new layouts for a variety of single-page graphic designs. Model parameters are learned with Nonlinear Inverse Optimization (NIO) from a small number of example layouts. To demonstrate our approach, we show results for applications including generating design layouts in various styles, retargeting designs to new sizes, and improving existing designs. We also compare our automatic results with designs created using crowdsourcing and show that our approach performs slightly better than novice designers.

  16. Evaluating Web Usability

    Science.gov (United States)

    Snider, Jean; Martin, Florence

    2012-01-01

    Web usability focuses on design elements and processes that make web pages easy to use. A website for college students was evaluated for underutilization. One-on-one testing, focus groups, web analytics, peer university review and marketing focus group and demographic data were utilized to conduct usability evaluation. The results indicated that…

  17. Deep iCrawl: An Intelligent Vision-Based Deep Web Crawler

    OpenAIRE

    R.Anita; V.Ganga Bharani; N.Nityanandam; Pradeep Kumar Sahoo

    2011-01-01

    The explosive growth of World Wide Web has posed a challenging problem in extracting relevant data. Traditional web crawlers focus only on the surface web while the deep web keeps expanding behind the scene. Deep web pages are created dynamically as a result of queries posed to specific web databases. The structure of the deep web pages makes it impossible for traditional web crawlers to access deep web contents. This paper, Deep iCrawl, gives a novel and vision-based app...

  18. Discovery and Selection of Semantic Web Services

    CERN Document Server

    Wang, Xia

    2013-01-01

    For advanced web search engines to be able not only to search for semantically related information dispersed over different web pages, but also for semantic services providing certain functionalities, discovering semantic services is the key issue. Addressing four problems of current solutions, this book presents the following contributions. A novel service model independent of semantic service description models is proposed, which clearly defines all elements necessary for service discovery and selection. It takes service selection as its gist and improves efficiency. Corresponding selection algorithms and their implementation as components of the extended Semantically Enabled Service-oriented Architecture in the Web Service Modeling Environment are detailed. Many applications of semantic web services, e.g. discovery, composition and mediation, can benefit from a general approach for building application ontologies. With application ontologies thus built, services are discovered in the same way as with single...

  19. An Enhanced Rule-Based Web Scanner Based on Similarity Score

    Directory of Open Access Journals (Sweden)

    LEE, M.

    2016-08-01

    Full Text Available This paper proposes an enhanced rule-based web scanner in order to get better accuracy in detecting web vulnerabilities than the existing tools, which have a relatively high false alarm rate when the web pages are installed in unconventional directory paths. Using the proposed matching method based on a similarity score, the proposed scheme can determine whether two pages have the same vulnerabilities or not. With this method, the proposed scheme is able to determine whether the target web pages are vulnerable by comparing them to the web pages that are known to have vulnerabilities. We show that the proposed scanner reduces the false alarm rate by 12% compared to an existing well-known scanner through the performance evaluation via various experiments. The proposed scheme is especially helpful in detecting vulnerabilities of web applications which are derived from well-known open-source web applications after small customization, which happens frequently in many small-sized companies.
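
    The matching step, deciding whether a target page is effectively the same as a page already known to be vulnerable even when it is installed under an unconventional path, can be approximated with a token-level similarity score. The Jaccard measure, tokenisation and threshold below are illustrative assumptions, not the paper's exact metric.

        import re

        def tokens(html):
            # Very rough tokenisation of page markup and text.
            return set(re.findall(r"[A-Za-z_][A-Za-z0-9_]+", html.lower()))

        def similarity(page_a, page_b):
            a, b = tokens(page_a), tokens(page_b)
            return len(a & b) / len(a | b) if a | b else 0.0  # Jaccard score

        known_vulnerable = "<form action='login.php'><input name='user'></form>"
        target_page = "<form action='/app/custom/login.php'><input name='user'></form>"

        if similarity(known_vulnerable, target_page) > 0.7:  # illustrative threshold
            print("target page likely shares the known vulnerability")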

  20. Importance of intrinsic and non-network contribution in PageRank centrality and its effect on PageRank localization

    OpenAIRE

    Deyasi, Krishanu

    2016-01-01

    PageRank centrality is used by Google for ranking web pages to present search results for a user query. Here, we have shown that the PageRank value of a vertex also depends on its intrinsic, non-network contribution. If the intrinsic, non-network contributions of the vertices are proportional to their degrees or zero, then their PageRank centralities become proportional to their degrees. Some simulations and empirical data are used to support our study. In addition, we have shown that localization ...
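
    The claim that a teleportation (intrinsic) contribution proportional to degree makes PageRank itself proportional to degree is easy to check numerically for an undirected graph; below is a small sketch with networkx on an arbitrary random graph.

        import networkx as nx

        G = nx.erdos_renyi_graph(50, 0.1, seed=1)
        degrees = dict(G.degree())
        total = sum(degrees.values())

        # Intrinsic (teleportation) contribution proportional to each vertex's degree.
        personalization = {n: d / total for n, d in degrees.items()}
        pr = nx.pagerank(G, alpha=0.85, personalization=personalization)

        # PageRank divided by degree is then (nearly) constant across vertices.
        ratios = [pr[n] / degrees[n] for n in G if degrees[n] > 0]
        print(max(ratios) - min(ratios))  # close to zero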

  1. Testing the visual consistency of web sites

    NARCIS (Netherlands)

    van der Geest, Thea; Loorbach, N.R.

    2005-01-01

    Consistency in the visual appearance of Web pages is often checked by experts, such as designers or reviewers. This article reports a card sort study conducted to determine whether users rather than experts could distinguish visual (in-)consistency in Web elements and pages. The users proved to

  2. Integration of Web mining and web crawler: Relevance and State of Art

    OpenAIRE

    Subhendu kumar pani; Deepak Mohapatra,; Bikram Keshari Ratha

    2010-01-01

    This study presents the role of the web crawler in the web mining environment. As the growth of the World Wide Web has exceeded all expectations, research on Web mining is growing more and more. Web mining is a research topic which combines two active research areas: Data Mining and the World Wide Web. So, the World Wide Web is a very advanced area for data mining research. Search engines that are based on a web crawling framework are also used in web mining to find the interacted web pages. This paper discu...

  3. A Runtime System for Interactive Web Services

    DEFF Research Database (Denmark)

    Brabrand, Claus; Møller, Anders; Sandholm, Anders

    1999-01-01

    Interactive web services are increasingly replacing traditional static web pages. Producing web services seems to require a tremendous amount of laborious low-level coding due to the primitive nature of CGI programming. We present ideas for an improved runtime system for interactive web services built on top of CGI, running on virtually every combination of browser and HTTP/CGI server. The runtime system has been implemented and used extensively in , a tool for producing interactive web services.

  4. PageRank of integers

    International Nuclear Information System (INIS)

    Frahm, K M; Shepelyansky, D L; Chepelianskii, A D

    2012-01-01

    We build up a directed network tracing links from a given integer to its divisors and analyze the properties of the Google matrix of this network. The PageRank vector of this matrix is computed numerically and it is shown that its probability is approximately inversely proportional to the PageRank index, thus being similar to the Zipf law and the dependence established for the World Wide Web. The spectrum of the Google matrix of integers is characterized by a large gap and a relatively small number of nonzero eigenvalues. A simple semi-analytical expression for the PageRank of integers is derived that allows us to find this vector for matrices of billion size. This network provides a new PageRank order of integers. (paper)
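
    The divisor network is easy to rebuild at toy scale: each integer links to its proper divisors and PageRank is computed on the resulting directed graph. A small sketch with networkx follows; the paper's semi-analytical expression is what makes billion-size matrices tractable, so this is only a toy reconstruction.

        import networkx as nx

        N = 1000
        G = nx.DiGraph()
        G.add_nodes_from(range(1, N + 1))
        for n in range(2, N + 1):
            for d in range(1, n):
                if n % d == 0:
                    G.add_edge(n, d)  # link from an integer to each of its divisors

        pr = nx.pagerank(G, alpha=0.85)
        ranking = sorted(pr, key=pr.get, reverse=True)
        print(ranking[:10])           # small divisors such as 1, 2, 3 dominate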

  5. Preprocessing and Content/Navigational Pages Identification as Premises for an Extended Web Usage Mining Model Development

    Directory of Open Access Journals (Sweden)

    Daniel MICAN

    2009-01-01

    Full Text Available From its appearance until today, the internet has seen spectacular growth not only in terms of the number of websites and the volume of information, but also in terms of the number of visitors. Therefore, an overall analysis of both the web sites and the content they provide was required. Thus, a new branch of research was developed, namely web mining, which aims to discover useful information and knowledge based not only on the analysis of websites and content, but also on the way in which the users interact with them. The aim of the present paper is to design a database that captures only the relevant data from logs, in a way that allows storing and managing large sets of temporal data with common tools in real time. In our work, we rely on different web sites or website sections with known architecture and we test several hypotheses from the literature in order to extend the framework to sites with unknown or chaotic structure, which are non-transparent in determining the type of visited pages. In doing this, we will start from non-proprietary, preexisting raw server logs.
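
    The preprocessing the entry refers to, turning raw server log lines into per-visitor sessions, can be sketched as follows. The log format assumed here is the common Apache format and the 30-minute session timeout is a conventional choice, not necessarily the one used by the authors.

        import re
        from datetime import datetime, timedelta

        LOG_RE = re.compile(r'(\S+) \S+ \S+ \[(.*?)\] "(\S+) (\S+) \S+" (\d{3}) \S+')
        SESSION_TIMEOUT = timedelta(minutes=30)

        def parse(line):
            m = LOG_RE.match(line)
            if not m:
                return None
            host, ts, method, path, status = m.groups()
            when = datetime.strptime(ts.split()[0], "%d/%b/%Y:%H:%M:%S")
            return host, when, path, int(status)

        def sessionize(lines):
            sessions, last_seen = {}, {}
            for rec in filter(None, map(parse, lines)):
                host, when, path, status = rec
                if status != 200:
                    continue
                if host not in last_seen or when - last_seen[host] > SESSION_TIMEOUT:
                    sessions.setdefault(host, []).append([])  # start a new session
                sessions[host][-1].append(path)
                last_seen[host] = when
            return sessions

        sample = ['1.2.3.4 - - [10/Oct/2015:13:55:36 +0200] "GET /index.html HTTP/1.1" 200 2326']
        print(sessionize(sample))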

  6. MedlinePlus Connect: Web Service

    Science.gov (United States)

    MedlinePlus Connect: Web Service page (https://medlineplus.gov/connect/service.html), listing old and new URLs for the web application, e.g. https://apps.nlm.nih.gov/medlineplus/services/mpconnect.cfm? ...

  7. MedlinePlus Connect: Web Application

    Science.gov (United States)

    MedlinePlus Connect: Web Application page (https://medlineplus.gov/connect/application.html), listing old and new URLs for the web application, e.g. https://apps.nlm.nih.gov/medlineplus/services/mpconnect.cfm? ...

  8. Happy birthday WWW: the web is now old enough to drive

    CERN Document Server

    Gilbertson, Scott

    2007-01-01

    "The World Wide Web can now drive. Sixteen years ago yeterday, in a short post to the alt.hypertext newsgroup, tim Berners-Lee revealed the first public web pages summarizing his World Wide Web project." (1/4 page)

  9. Models and methods for building web recommendation systems

    OpenAIRE

    Stekh, Yu.; Artsibasov, V.

    2012-01-01

    The modern World Wide Web contains a large number of Web sites and pages within each Web site. Web recommendation systems (recommendation systems for web pages) are typically implemented on web servers and use data obtained from collected views of web templates (implicit data) or user registration data (explicit data). The article considers methods and algorithms of web recommendation systems based on data mining technology (web mining). The modern Internet contains a large number of web...

  10. Even Faster Web Sites Performance Best Practices for Web Developers

    CERN Document Server

    Souders, Steve

    2009-01-01

    Performance is critical to the success of any web site, and yet today's web applications push browsers to their limits with increasing amounts of rich content and heavy use of Ajax. In this book, Steve Souders, web performance evangelist at Google and former Chief Performance Yahoo!, provides valuable techniques to help you optimize your site's performance. Souders' previous book, the bestselling High Performance Web Sites, shocked the web development world by revealing that 80% of the time it takes for a web page to load is on the client side. In Even Faster Web Sites, Souders and eight exp

  11. Study on online community user motif using web usage mining

    Science.gov (United States)

    Alphy, Meera; Sharma, Ajay

    2016-04-01

    Web usage mining is the application of data mining used to extract useful information from the online community. The World Wide Web contained at least 4.73 billion pages according to the Indexed Web, and at least 228.52 million pages according to the Dutch Indexed Web, on Thursday 6 August 2015. It is difficult to get the needed data from these billions of web pages in the World Wide Web; herein lies the importance of web usage mining. Personalizing the search engine helps the web user to identify the most used data in an easy way: it reduces time consumption through automatic site search and automatic restoring of useful sites. This study surveys techniques, from the oldest to the latest, used in pattern discovery and analysis in web usage mining from 1996 to 2015. Analyzing user motifs helps in the improvement of business, e-commerce, personalisation and the improvement of websites.

  12. Project Management - Development of course materiale as WEB pages

    DEFF Research Database (Denmark)

    Thorsteinsson, Uffe; Bjergø, Søren

    1997-01-01

    Development of Internet pages with lesson plans, slideshows, links, a conference system and an interactive student section for communication between students and with the teacher as well.

  13. Instant responsive web design

    CERN Document Server

    Simmons, Cory

    2013-01-01

    A step-by-step tutorial approach which teaches readers what responsive web design is and how it is used in designing a responsive web page. If you are a web designer looking to expand your skill set by learning the quickly growing industry standard of responsive web design, this book is ideal for you. Knowledge of CSS is assumed.

  14. Automated grading of homework assignments and tests in introductory and intermediate statistics courses using active server pages.

    Science.gov (United States)

    Stockburger, D W

    1999-05-01

    Active server pages permit a software developer to customize the Web experience for users by inserting server-side script and database access into Web pages. This paper describes applications of these techniques and provides a primer on the use of these methods. Applications include a system that generates and grades individualized homework assignments and tests for statistics students. The student accesses the system as a Web page, prints out the assignment, does the assignment, and enters the answers on the Web page. The server, running on NT Server 4.0, grades the assignment, updates the grade book (on a database), and returns the answer key to the student.

  15. WebSelF: A Web Scraping Framework

    DEFF Research Database (Denmark)

    Thomsen, Jakob; Ernst, Erik; Brabrand, Claus

    2012-01-01

    We present WebSelF, a framework for web scraping which models the process of web scraping and decomposes it into four conceptually independent, reusable, and composable constituents. We have validated our framework through a full parameterized implementation that is flexible enough to capture...... previous work on web scraping. We have experimentally evaluated our framework and implementation in an experiment that evaluated several qualitatively different web scraping constituents (including previous work and combinations hereof) on about 11,000 HTML pages on daily versions of 17 web sites over...... a period of more than one year. Our framework solves three concrete problems with current web scraping, and our experimental results indicate that composition of previous and our new techniques achieves a higher degree of accuracy, precision and specificity than existing techniques alone....

  16. Fluid annotations through open hypermedia: Using and extending emerging Web standards

    DEFF Research Database (Denmark)

    Bouvin, Niels Olof; Zellweger, Polle Trescott; Grønbæk, Kaj

    2002-01-01

    and browsing of fluid annotations on third-party Web pages. This prototype is an extension of the Arakne Environment, an open hypermedia application that can augment Web pages with externally stored hypermedia structures. This paper describes how various Web standards, including DOM, CSS, XLink, XPointer...

  17. Open Hypermedia as User Controlled Meta Data for the Web

    DEFF Research Database (Denmark)

    Grønbæk, Kaj; Sloth, Lennert; Bouvin, Niels Olof

    2000-01-01

    segments. By means of the Webvise system, OHIF structures can be authored, imposed on Web pages, and finally linked on the Web as any ordinary Web resource. Following a link to an OHIF file automatically invokes a Webvise download of the meta data structures and the annotated Web content will be displayed...... in the browser. Moreover, the Webvise system provides support for users to create, manipulate, and share the OHIF structures together with custom made web pages and MS Office 2000 documents on WebDAV servers. These Webvise facilities go beyond earlier open hypermedia systems in that it now allows fully...... distributed open hypermedia linking between Web pages and WebDAV aware desktop applications. The paper describes the OHIF format and demonstrates how the Webvise system handles OHIF. Finally, it argues for better support for handling user controlled meta data, e.g. support for linking in non-XML data...

  18. JavaScript: Convenient Interactivity for the Class Web Page.

    Science.gov (United States)

    Gray, Patricia

    This paper shows how JavaScript can be used within HTML pages to add interactive review sessions and quizzes incorporating graphics and sound files. JavaScript has the advantage of providing basic interactive functions without the use of separate software applications and players. Because it can be part of a standard HTML page, it is…

  19. PageRank, HITS and a unified framework for link analysis

    Energy Technology Data Exchange (ETDEWEB)

    Ding, Chris; He, Xiaofeng; Husbands, Parry; Zha, Hongyuan; Simon, Horst

    2001-10-01

    Two popular webpage ranking algorithms are HITS and PageRank. HITS emphasizes mutual reinforcement between authority and hub webpages, while PageRank emphasizes hyperlink weight normalization and web surfing based on random walk models. We systematically generalize/combine these concepts into a unified framework. The ranking framework contains a large algorithm space; HITS and PageRank are two extreme ends in this space. We study several normalized ranking algorithms which are intermediate between HITS and PageRank, and obtain closed-form solutions. We show that, to first order approximation, all ranking algorithms in this framework, including PageRank and HITS, lead to the same ranking, which is highly correlated with ranking by indegree. These results support the notion that in web resource ranking indegree and outdegree are of fundamental importance. Rankings of webgraphs of different sizes and queries are presented to illustrate our analysis.
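
    As a concrete reference point for the PageRank end of this algorithm space, the sketch below computes plain PageRank by power iteration on a small adjacency matrix with numpy; the toy graph, damping value, and tolerance are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def pagerank(adj, damping=0.85, tol=1e-10, max_iter=200):
    """Power-iteration PageRank; adj[i, j] = 1 means page i links to page j."""
    n = adj.shape[0]
    out_deg = adj.sum(axis=1)
    # Row-stochastic transition matrix; dangling pages teleport uniformly.
    transition = np.where(out_deg[:, None] > 0,
                          adj / np.maximum(out_deg, 1)[:, None],
                          1.0 / n)
    rank = np.full(n, 1.0 / n)
    for _ in range(max_iter):
        new_rank = (1 - damping) / n + damping * (transition.T @ rank)
        done = np.abs(new_rank - rank).sum() < tol
        rank = new_rank
        if done:
            break
    return rank

# Tiny illustrative web graph.
adj = np.array([[0, 1, 1, 0],
                [0, 0, 1, 0],
                [1, 0, 0, 1],
                [0, 0, 1, 0]], dtype=float)
print(pagerank(adj))      # PageRank scores
print(adj.sum(axis=0))    # in-degrees, typically highly correlated with the scores
```

    Roughly speaking, the other algorithms in the unified framework correspond to different normalizations of this link matrix.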

  20. TDCCREC: AN EFFICIENT AND SCALABLE WEB-BASED RECOMMENDATION SYSTEM

    Directory of Open Access Journals (Sweden)

    K.Latha

    2010-10-01

    Full Text Available Web browsers are provided with a complex information space where the volume of information available to them is huge. This is where recommender systems come in: they effectively recommend web pages related to the current webpage, to provide the user with further customized reading material. To enhance the performance of recommender systems, we propose an elegant web-based recommendation system, the Truth Discovery based Content and Collaborative RECommender (TDCCREC), which is capable of addressing scalability. Existing approaches such as learning automata deal with the usage and navigational patterns of users. On the other hand, the Weighted Association Rule approach is applied for recommending web pages by assigning weights to each page in all the transactions. Both of them have their own disadvantages. The websites recommended by search engines have no guarantee of information correctness and often deliver conflicting information. To address this, content-based filtering and collaborative filtering techniques are introduced for recommending web pages to the active user, along with the trustworthiness of the website and the confidence of facts, which outperforms the existing methods. Our results show how the proposed recommender system performs better in predicting the next request of web users.

  1. News from the Library: The CERN Web Archive

    CERN Multimedia

    CERN Library

    2012-01-01

    The World Wide Web was born at CERN in 1989. However, although historic paper documents from over 50 years ago survive in the CERN Archive, it is by no means certain that we will be able to consult today's web pages 50 years from now.   The Internet Archive's Wayback Machine includes an impressive collection of archived CERN web pages from 1996 onwards. However, their coverage is not complete - they aim for broad coverage of the whole Internet, rather than in-depth coverage of particular organisations. To try to fill this gap, the CERN Archive has entered into a partnership agreement with the Internet Memory Foundation. Harvesting of CERN's publicly available web pages is now being carried out on a regular basis, and the results are available here. 

  2. EuroGOV: Engineering a Multilingual Web Corpus

    NARCIS (Netherlands)

    Sigurbjörnsson, B.; Kamps, J.; de Rijke, M.

    2005-01-01

    EuroGOV is a multilingual web corpus that was created to serve as the document collection for WebCLEF, the CLEF 2005 web retrieval task. EuroGOV is a collection of web pages crawled from the European Union portal, European Union member state governmental web sites, and Russian government web sites.

  3. Analysis of Web Spam for Non-English Content: Toward More Effective Language-Based Classifiers.

    Directory of Open Access Journals (Sweden)

    Mansour Alsaleh

    Full Text Available Web spammers aim to obtain higher ranks for their web pages by including spam contents that deceive search engines in order to include their pages in search results even when they are not related to the search terms. Search engines continue to develop new web spam detection mechanisms, but spammers also aim to improve their tools to evade detection. In this study, we first explore the effect of the page language on spam detection features and we demonstrate how the best set of detection features varies according to the page language. We also study the performance of Google Penguin, a newly developed anti-web spamming technique for their search engine. Using spam pages in Arabic as a case study, we show that unlike similar English pages, Google anti-spamming techniques are ineffective against a high proportion of Arabic spam pages. We then explore multiple detection features for spam pages to identify an appropriate set of features that yields a high detection accuracy compared with the integrated Google Penguin technique. In order to build and evaluate our classifier, as well as to help researchers to conduct consistent measurement studies, we collected and manually labeled a corpus of Arabic web pages, including both benign and spam pages. Furthermore, we developed a browser plug-in that utilizes our classifier to warn users about spam pages after clicking on a URL and by filtering out search engine results. Using Google Penguin as a benchmark, we provide an illustrative example to show that language-based web spam classifiers are more effective for capturing spam contents.
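
    The paper's exact feature sets are not reproduced here; as a hedged illustration of the general approach, the sketch below derives a few simple content features from a page's title and body and fits a logistic-regression classifier on a made-up toy corpus (scikit-learn assumed available). A real language-aware classifier would use far richer, per-language features.

```python
import re
import numpy as np
from sklearn.linear_model import LogisticRegression

def content_features(title, body):
    """Toy content-based features; keyword stuffing lowers the unique-word ratio."""
    words = re.findall(r"\w+", body.lower(), flags=re.UNICODE)
    unique_ratio = len(set(words)) / max(len(words), 1)
    title_words = len(re.findall(r"\w+", title, flags=re.UNICODE))
    avg_word_len = float(np.mean([len(w) for w in words])) if words else 0.0
    return [len(words), unique_ratio, title_words, avg_word_len]

# Hypothetical labelled pages: (title, body, is_spam)
corpus = [
    ("cheap phones cheap phones", "buy buy buy cheap cheap cheap phones phones now", 1),
    ("Introduction to web spam", "This page explains how search engines detect spam content.", 0),
    ("free free free download", "free download free download free download click click", 1),
    ("Arabic web corpus notes", "Notes on collecting and labelling a corpus of Arabic web pages.", 0),
]
X = np.array([content_features(t, b) for t, b, _ in corpus])
y = np.array([label for _, _, label in corpus])
clf = LogisticRegression().fit(X, y)
print(clf.predict(X))
```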

  4. Web Science emerges

    OpenAIRE

    Shadbolt, Nigel; Berners-Lee, Tim

    2008-01-01

    The relentless rise in Web pages and links is creating emergent properties, from social networks to virtual identity theft, that are transforming society. A new discipline, Web Science, aims to discover how Web traits arise and how they can be harnessed or held in check to benefit society. Important advances are beginning to be made; more work can solve major issues such as securing privacy and conveying trust.

  5. First in the web, but where are the pieces

    Energy Technology Data Exchange (ETDEWEB)

    Deken, J.M.

    1998-04-01

    The World Wide Web (WWW) does matter to the SLAC Archives and History Office for two very important, and related, reasons. The first reason is that the early Web at SLAC is historically significant: it was the first of its kind on this continent, and it achieved new and important things. The second reason is that the Web at SLAC--in its present and future forms--is a large and changing collection of official documents of the organization, many of which exist in no other form or environment. As of the first week of August, 1997, SLAC had 8,940 administratively-accounted-for web pages, and an estimated 2,000 to 4,000 additional pages that are hard to administratively track because they either reside on the main server in users' directories several levels below their top-level pages, or they reside on one of the more than 60 non-main servers at the Center. A very small sampling of the information that SLAC WWW pages convey includes: information for the general public about programs and activities at SLAC; pages which allow physics experiment collaborators to monitor data, arrange work schedules and analyze results; pages that convey information to staff and visiting scientists about seminar and activity schedules, publication procedures, and ongoing experiments; and pages that allow staff and outside users to access databases maintained at SLAC. So, when SLAC's Archives and History Office begins to approach collecting the documents of their WWW presence, what are they collecting, and how are they to go about the process of collecting it? In this paper, the author discusses the effort to archive SLAC's Web in two parts, concentrating on the first task that has been undertaken: the initial effort to identify and gather into the archives evidence and documentation of the early days of the SLAC Web. The second task, which is the effort to collect present and future web pages at SLAC, is also covered, although in less detail, since it is an effort that is only

  6. Analyzing Web pages visual scanpaths: between and within tasks variability.

    Science.gov (United States)

    Drusch, Gautier; Bastien, J M Christian

    2012-01-01

    In this paper, we propose a new method for comparing scanpaths in a bottom-up approach, and a test of the scanpath theory. To do so, we conducted a laboratory experiment in which 113 participants were invited to accomplish a set of tasks on two different websites. For each site, they had to perform two tasks that had to be repeated once. The data were analyzed using a procedure similar to the one used by Duchowski et al. [8]. The first step was to automatically identify, then label, AOIs with the mean-shift clustering procedure [19]. Then, scanpaths were compared two by two with a modified version of the string-edit method, which takes into account the order of AOI visualizations [2]. Our results show that scanpath variability between tasks but within participants seems to be lower than the variability within a task for a given participant. In other words, participants seem to be more coherent when they perform different tasks than when they repeat the same tasks. In addition, participants view more of the same AOIs when they perform a different task on the same Web page than when they repeat the same task. These results are quite different from what the scanpath theory predicts.

  7. Elementary Algebra + Student-Written Web Illustrations = Math Mastery.

    Science.gov (United States)

    Veteto, Bette R.

    This project focuses on the construction and use of a student-made elementary algebra tutorial World Wide Web page at the University of Memphis (Tennessee), how this helps students further explore the topics studied in elementary algebra, and how students can publish their work on the class Web page for use by other students. Practical,…

  8. Web-based surveillance of public information needs for informing preconception interventions.

    Directory of Open Access Journals (Sweden)

    Angelo D'Ambrosio

    Full Text Available The risk of adverse pregnancy outcomes can be minimized through the adoption of healthy lifestyles before pregnancy by women of childbearing age. Initiatives for promotion of preconception health may be difficult to implement. The Internet can be used to build tailored health interventions through identification of the public's information needs. To this aim, we developed a semi-automatic web-based system for monitoring Google searches, web pages and activity on social networks, regarding preconception health. Based on the American College of Obstetricians and Gynecologists guidelines and on the actual search behaviors of Italian Internet users, we defined a set of keywords targeting preconception care topics. Using these keywords, we analyzed the usage of the Google search engine and identified web pages containing preconception care recommendations. We also monitored how the selected web pages were shared on social networks. We analyzed discrepancies between searched and published information and the sharing pattern of the topics. We identified 1,807 Google search queries which generated a total of 1,995,030 searches during the study period. Less than 10% of the reviewed pages contained preconception care information and in 42.8% the information was consistent with ACOG guidelines. Facebook was the most used social network for sharing. Nutrition, Chronic Diseases and Infectious Diseases were the most published and searched topics. Regarding Genetic Risk and Folic Acid, a high search volume was not associated with a high web page production, while Medication pages were more frequently published than searched. Vaccinations elicited high sharing although web page production was low; this effect was quite variable in time. Our study represents a resource to prioritize communication on specific topics on the web, to address misconceptions, and to tailor interventions to specific populations.

  9. Decomposition of the Google PageRank and Optimal Linking Strategy

    NARCIS (Netherlands)

    Avrachenkov, Konstatin; Litvak, Nelli

    We provide the analysis of the Google PageRank from the perspective of the Markov Chain Theory. First we study the Google PageRank for a Web that can be decomposed into several connected components which do not have any links to each other. We show that in order to determine the Google PageRank for

  10. Study of a Random Navigation on the Web Using Software Simulation

    Directory of Open Access Journals (Sweden)

    Mirella-Amelia Mioc

    2015-12-01

    Full Text Available General information about the World Wide Web is nowadays especially useful in all types of communication. The most widely used model for simulating the functioning of the web is the hypergraph. For this simulation, the surfer model was chosen from the known algorithms used for web navigation. The main objective of this paper is to analyze the PageRank and its dependency on the Markov chain length. Some software implementations are presented and used. The experimental results demonstrate the differences between the algorithmic PageRank and the experimental PageRank.
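
    The record contrasts an analytically computed PageRank with one estimated from simulated navigation; as a rough reconstruction of that idea, the sketch below sends a random surfer over a small hypothetical graph for Markov chains of increasing length and reports the visit frequencies, which approach the PageRank vector as the chain grows.

```python
import random
import numpy as np

links = {0: [1, 2], 1: [2], 2: [0, 3], 3: [2]}   # hypothetical link structure
n, damping = len(links), 0.85

def surf(steps, seed=0):
    """Estimate PageRank as the visit frequencies of one long random-surfer walk."""
    rng = random.Random(seed)
    visits = np.zeros(n)
    page = rng.randrange(n)
    for _ in range(steps):
        visits[page] += 1
        if rng.random() < damping and links[page]:
            page = rng.choice(links[page])   # follow a random out-link
        else:
            page = rng.randrange(n)          # teleport to a random page
    return visits / steps

for steps in (10**3, 10**4, 10**5):          # longer chains give smaller estimation error
    print(steps, np.round(surf(steps), 3))
```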

  11. Using centrality to rank web snippets

    NARCIS (Netherlands)

    Jijkoun, V.; de Rijke, M.; Peters, C.; Jijkoun, V.; Mandl, T.; Müller, H.; Oard, D.W.; Peñas, A.; Petras, V.; Santos, D.

    2008-01-01

    We describe our participation in the WebCLEF 2007 task, targeted at snippet retrieval from web data. Our system ranks snippets based on a simple similarity-based centrality, inspired by the web page ranking algorithms. We experimented with retrieval units (sentences and paragraphs) and with the

  12. Heuristic evaluation of paper-based Web pages: a simplified inspection usability methodology.

    Science.gov (United States)

    Allen, Mureen; Currie, Leanne M; Bakken, Suzanne; Patel, Vimla L; Cimino, James J

    2006-08-01

    Online medical information, when presented to clinicians, must be well-organized and intuitive to use, so that the clinicians can conduct their daily work efficiently and without error. It is essential to actively seek to produce good user interfaces that are acceptable to the user. This paper describes the methodology used to develop a simplified heuristic evaluation (HE) suitable for the evaluation of screen shots of Web pages, the development of an HE instrument used to conduct the evaluation, and the results of the evaluation of the aforementioned screen shots. In addition, this paper presents examples of the process of categorizing problems identified by the HE and the technological solutions identified to resolve these problems. Four usability experts reviewed 18 paper-based screen shots and made a total of 108 comments. Each expert completed the task in about an hour. We were able to implement solutions to approximately 70% of the violations. Our study found that a heuristic evaluation using paper-based screen shots of a user interface was expeditious, inexpensive, and straightforward to implement.

  13. Indian accent text-to-speech system for web browsing

    Indian Academy of Sciences (India)

    This paper describes a 'web reader' which 'reads out' the textual contents of a selected web page in Hindi or in English with Indian accent. The content of the page is downloaded and parsed into suitable textual form. It is then passed on to an indigenously developed text-to-speech system for Hindi/Indian English, ...

  14. Application of FrontPage 98 to the Development of Web Sites for the Science Division and the Center for the Advancement of Learning and Teaching (CALT) at Anne Arundel Community College.

    Science.gov (United States)

    Bird, Bruce

    This paper discusses the development of two World Wide Web sites at Anne Arundel Community College (Maryland). The criteria for the selection of hardware and software for Web site development that led to the decision to use Microsoft FrontPage 98 are described along with its major components and features. The discussion of the Science Division Web…

  15. Instant PageSpeed optimization

    CERN Document Server

    Jaiswal, Sanjeev

    2013-01-01

    Filled with practical, step-by-step instructions and clear explanations for the most important and useful tasks. Instant PageSpeed Optimization is a hands-on guide that provides a number of clear, step-by-step exercises for optimizing your websites for better performance and improving their efficiency.Instant PageSpeed Optimization is aimed at website developers and administrators who wish to make their websites load faster without any errors and consume less bandwidth. It's assumed that you will have some experience in basic web technologies like HTML, CSS3, JavaScript, and the basics of netw

  16. Web Spam, Social Propaganda and the Evolution of Search Engine Rankings

    Science.gov (United States)

    Metaxas, Panagiotis Takis

    Search Engines have greatly influenced the way we experience the web. Since the early days of the web, users have been relying on them to get informed and make decisions. When the web was relatively small, web directories were built and maintained using human experts to screen and categorize pages according to their characteristics. By the mid 1990's, however, it was apparent that the human expert model of categorizing web pages does not scale. The first search engines appeared and they have been evolving ever since, taking over the role that web directories used to play.

  17. The electronic image of the energy. A functional and aesthetical study of the web pages of the companies of energy.

    OpenAIRE

    Álvarez Ruiz, Antón; Reyes Moreno, María

    2011-01-01

    This article analyzes the web pages of the major companies in the energy market (electricity and gas) in the more developed countries (EU and North America). The present situation of the market is very significant: for the first time, the governments of these countries have given permission to liberalize the power market. This process moves very slowly, but we can now see some changes that lead to a process of concentration, through acquisitions, mergers and commercial agreements between the ...

  18. Aesthetic design of e-commerce web pages—Complexity, order, and preferences

    NARCIS (Netherlands)

    Poole, M.S.

    2012-01-01

    This study was conducted to understand the perceptual structure of e-commerce webpage visual aesthetics and to provide insight into how physical design features of web pages influence users' aesthetic perception of and preference for web pages. Drawing on the environmental aesthetics, human-computer

  19. A Note on the PageRank of Undirected Graphs

    OpenAIRE

    Grolmusz, Vince

    2012-01-01

    The PageRank is a widely used scoring function of networks in general and of the World Wide Web graph in particular. The PageRank is defined for directed graphs, but in some special cases applications for undirected graphs occur. In the literature it is widely noted that the PageRank for undirected graphs is proportional to the degrees of the vertices of the graph. We prove that statement for a particular personalization vector in the definition of the PageRank, and we also show that in gene...

  20. Classifying web genres in context: a case study documenting the web genres used by a software engineer

    NARCIS (Netherlands)

    Montesi, M.; Navarrete, T.

    2008-01-01

    This case study analyzes the Internet-based resources that a software engineer uses in his daily work. Methodologically, we studied the web browser history of the participant, classifying all the web pages he had seen over a period of 12 days into web genres. We interviewed him before and after the

  1. Scientist who weaves wonderful web

    CERN Multimedia

    Wills, D

    2000-01-01

    Mr Berners-Lee's unique standing makes him a sought-after speaker. People want to know how he developed the Web and where he thinks it is headed. 'Weaving the Web', written by himself with Mark Fischetti, is his attempt to answer these questions (1 page).

  2. How Useful are Orthopedic Surgery Residency Web Pages?

    Science.gov (United States)

    Oladeji, Lasun O; Yu, Jonathan C; Oladeji, Afolayan K; Ponce, Brent A

    2015-01-01

    Medical students interested in orthopedic surgery residency positions frequently use the Internet as a modality to gather information about individual residency programs. Students often invest a painstaking amount of time and effort in determining programs that they are interested in, and the Internet is central to this process. Numerous studies have concluded that program websites are a valuable resource for residency and fellowship applicants. The purpose of the present study was to provide an update on the web pages of academic orthopedic surgery departments in the United States and to rate their utility in providing information on quality of education, faculty and resident information, environment, and applicant information. We reviewed existing websites for the 156 departments or divisions of orthopedic surgery that are currently accredited for resident education by the Accreditation Council for Graduate Medical Education. Each website was assessed for quality of information regarding quality of education, faculty and resident information, environment, and applicant information. We noted that 152 of the 156 departments (97%) had functioning websites that could be accessed. There was high variability regarding the comprehensiveness of orthopedic residency websites. Most of the orthopedic websites provided information on conferences, didactics, and resident rotations. Less than 50% of programs provided information on resident call schedules, resident or faculty research and publications, resident hometowns, or resident salary. There is a lack of consistency regarding the content presented on orthopedic residency websites. As the competition for orthopedic residency positions continues to increase, applicants flock to the Internet in greater numbers to learn more about residency programs. A well-constructed website has the potential to increase the caliber of students applying to a given program. Copyright © 2015 Association of Program Directors in Surgery. Published by

  3. Blueprint of a Cross-Lingual Web Retrieval Collection

    NARCIS (Netherlands)

    Sigurbjörnsson, B.; Kamps, J.; de Rijke, M.; van Zwol, R.

    2005-01-01

    The world wide web is a natural setting for cross-lingual information retrieval; web content is essentially multilingual, and web searchers are often polyglots. Even though English has emerged as the lingua franca of the web, planning for a business trip or holiday usually involves digesting pages

  4. An Efficient PageRank Approach for Urban Traffic Optimization

    Directory of Open Access Journals (Sweden)

    Florin Pop

    2012-01-01

    to determine optimal decisions for each traffic light, based on the solution given by Larry Page for page ranking in the Web environment (Page et al., 1999). Our approach is similar to the work presented by Sheng-Chung et al. (2009) and Yousef et al. (2010). We consider that the traffic lights are controlled by servers, and a score for each road is computed with an efficient PageRank approach and used in a cost function to determine optimal decisions. We demonstrate that the cumulative contribution of each car in the traffic respects the main constraint of the PageRank approach, preserving all the properties of the matrix considered in our model.

  5. Universal emergence of PageRank

    Energy Technology Data Exchange (ETDEWEB)

    Frahm, K M; Georgeot, B; Shepelyansky, D L, E-mail: frahm@irsamc.ups-tlse.fr, E-mail: georgeot@irsamc.ups-tlse.fr, E-mail: dima@irsamc.ups-tlse.fr [Laboratoire de Physique Theorique du CNRS, IRSAMC, Universite de Toulouse, UPS, 31062 Toulouse (France)

    2011-11-18

    The PageRank algorithm enables us to rank the nodes of a network through a specific eigenvector of the Google matrix, using a damping parameter α ∈ ]0, 1[. Using extensive numerical simulations of large web networks, with a special accent on British University networks, we determine numerically and analytically the universal features of the PageRank vector at its emergence when α → 1. The whole network can be divided into a core part and a group of invariant subspaces. For α → 1, PageRank converges to a universal power-law distribution on the invariant subspaces whose size distribution also follows a universal power law. The convergence of PageRank at α → 1 is controlled by eigenvalues of the core part of the Google matrix, which are extremely close to unity, leading to large relaxation times as, for example, in spin glasses. (paper)

  6. Universal emergence of PageRank

    International Nuclear Information System (INIS)

    Frahm, K M; Georgeot, B; Shepelyansky, D L

    2011-01-01

    The PageRank algorithm enables us to rank the nodes of a network through a specific eigenvector of the Google matrix, using a damping parameter α ∈ ]0, 1[. Using extensive numerical simulations of large web networks, with a special accent on British University networks, we determine numerically and analytically the universal features of the PageRank vector at its emergence when α → 1. The whole network can be divided into a core part and a group of invariant subspaces. For α → 1, PageRank converges to a universal power-law distribution on the invariant subspaces whose size distribution also follows a universal power law. The convergence of PageRank at α → 1 is controlled by eigenvalues of the core part of the Google matrix, which are extremely close to unity, leading to large relaxation times as, for example, in spin glasses. (paper)
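
    The effect described above can be reproduced qualitatively on a toy network: the sketch below builds a column-stochastic Google matrix for a graph that contains a small closed subgraph (an invariant subspace) and shows how PageRank mass concentrates on that subspace as the damping parameter approaches 1. The graph and parameter values are illustrative assumptions, not the university networks studied in the paper.

```python
import numpy as np

# Toy directed graph: nodes 0-3 form a core, nodes 4-5 a closed 2-cycle
# (an invariant subspace: as alpha -> 1 the surfer, once inside, rarely leaves).
links = {0: [1, 4], 1: [2], 2: [0, 3], 3: [0], 4: [5], 5: [4]}
n = len(links)
S = np.zeros((n, n))                       # column-stochastic link matrix
for i, outs in links.items():
    for j in outs:
        S[j, i] = 1.0 / len(outs)

def pagerank(alpha, iters=5000):
    G = alpha * S + (1.0 - alpha) / n      # Google matrix with uniform teleportation
    p = np.full(n, 1.0 / n)
    for _ in range(iters):
        p = G @ p
    return p

for alpha in (0.85, 0.99, 0.999):
    p = pagerank(alpha)
    print(alpha, "mass on subspace nodes {4, 5}:", round(p[4] + p[5], 3))
```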

  7. Vue.js 2 cookbook build modern, interactive web applications with Vue.js

    CERN Document Server

    Passaglia, Andrea

    2017-01-01

    Vue.js is an open source JavaScript library for building modern, interactive web applications. With a rapidly growing community and a strong ecosystem, Vue.js makes developing complex single page applications a breeze. Its component-based approach, intuitive API, blazing fast core, and compact size make Vue.js a great solution to craft your next front-end application. From basic to advanced recipes, this book arms you with practical solutions to common tasks when building an application using Vue. We start off by exploring the fundamentals of Vue.js: its reactivity system, data-binding syntax, and component-based architecture through practical examples. After that, we delve into integrating Webpack and Babel to enhance your development workflow using single file components. Finally, we take an in-depth look at Vuex for state management and Vue Router to route in your single page applications, and integrate a variety of technologies ranging from Node.js to Electron, and Socket.io to Firebase and HorizonDB. ...

  8. The poor quality of information about laparoscopy on the World Wide Web as indexed by popular search engines.

    Science.gov (United States)

    Allen, J W; Finch, R J; Coleman, M G; Nathanson, L K; O'Rourke, N A; Fielding, G A

    2002-01-01

    This study was undertaken to determine the quality of information on the Internet regarding laparoscopy. Four popular World Wide Web search engines were used with the key word "laparoscopy." Advertisements, patient- or physician-directed information, and controversial material were noted. A total of 14,030 Web pages were found, but only 104 were unique Web sites. The majority of the sites were duplicate pages, subpages within a main Web page, or dead links. Twenty-eight of the 104 pages had a medical product for sale, 26 were patient-directed, 23 were written by a physician or group of physicians, and six represented corporations. The remaining 21 were "miscellaneous." The 46 pages containing educational material were critically reviewed. At least one of the senior authors found that 32 of the pages contained controversial or misleading statements. All of the three senior authors (LKN, NAO, GAF) independently agreed that 17 of the 46 pages contained controversial information. The World Wide Web is not a reliable source for patient or physician information about laparoscopy. Authenticating medical information on the World Wide Web is a difficult task, and no government or surgical society has taken the lead in regulating what is presented as fact on the World Wide Web.

  9. Spinning the web of knowledge

    CERN Multimedia

    Knight, Matthew

    2007-01-01

    "On August 6, 1991, Tim Berners-Lee posted the World Wide Web's first Web site. Fifteen years on there are estimated to be over 100 million. The space of growth has happened at a bewildering rate and its success has even confounded its inventor." (1/2 page)

  10. A Technique to Speedup Access to Web Contents

    Indian Academy of Sciences (India)

    Web Caching – A Technique to Speedup Access to Web Contents. Harsha Srinath, Shiva Shankar Ramanna. General Article, Resonance – Journal of Science Education, Volume 7, Issue 7, July 2002, pp. 54-62. Keywords: World Wide Web; data caching; internet traffic; web page access.

  11. PageRank for low frequency earthquake detection

    Science.gov (United States)

    Aguiar, A. C.; Beroza, G. C.

    2013-12-01

    We have analyzed Hi-Net seismic waveform data during the April 2006 tremor episode in the Nankai Trough in SW Japan using the autocorrelation approach of Brown et al. (2008), which detects low frequency earthquakes (LFEs) based on pair-wise waveform matching. We have generalized this to exploit the fact that waveforms may repeat multiple times, on more than just a pair-wise basis. We are working towards developing a sound statistical basis for event detection, but that is complicated by two factors. First, the statistical behavior of the autocorrelations varies between stations. Analyzing one station at a time assures that the detection threshold will only depend on the station being analyzed. Second, the positive detections do not satisfy "closure." That is, if window A correlates with window B, and window B correlates with window C, then window A and window C do not necessarily correlate with one another. We want to evaluate whether or not a linked set of windows are correlated due to chance. To do this, we map our problem on to one that has previously been solved for web search, and apply Google's PageRank algorithm. PageRank is the probability of a 'random surfer' to visit a particular web page; it assigns a ranking for a webpage based on the amount of links associated with that page. For windows of seismic data instead of webpages, the windows with high probabilities suggest likely LFE signals. Once identified, we stack the matched windows to improve the snr and use these stacks as template signals to find other LFEs within continuous data. We compare the results among stations and declare a detection if they are found in a statistically significant number of stations, based on multinomial statistics. We compare our detections using the single-station method to detections found by Shelly et al. (2007) for the April 2006 tremor sequence in Shikoku, Japan. We find strong similarity between the results, as well as many new detections that were not found using
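
    A minimal sketch of the general idea, not of the authors' Hi-Net pipeline: windows of a single synthetic trace are correlated pairwise, pairs above a threshold become edges of a match graph, and PageRank over that graph highlights windows that repeat several times (networkx assumed available; the window length, step, and threshold are made-up values).

```python
import numpy as np
import networkx as nx

def rank_repeating_windows(trace, win=200, step=100, cc_thresh=0.7):
    """Correlate all window pairs, link those above threshold, rank by PageRank."""
    windows = [trace[i:i + win] for i in range(0, len(trace) - win, step)]
    windows = [(w - w.mean()) / (w.std() + 1e-12) for w in windows]
    g = nx.Graph()
    g.add_nodes_from(range(len(windows)))
    for a in range(len(windows)):
        for b in range(a + 1, len(windows)):
            cc = float(np.dot(windows[a], windows[b]) / win)  # zero-lag correlation
            if cc > cc_thresh:
                g.add_edge(a, b, weight=cc)
    return nx.pagerank(g, weight="weight")  # repeating windows collect high scores

# Synthetic trace: noise plus the same wavelet injected three times.
rng = np.random.default_rng(1)
trace = rng.normal(0.0, 1.0, 5000)
wavelet = 5.0 * np.sin(np.linspace(0, 20 * np.pi, 200)) * np.hanning(200)
for start in (600, 2200, 4100):
    trace[start:start + 200] += wavelet
scores = rank_repeating_windows(trace)
print(sorted(scores, key=scores.get, reverse=True)[:5])  # indices of likely repeats first
```

    In a real setting the top-ranked windows would then be stacked into higher signal-to-noise templates before scanning the continuous data, as the abstract describes.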

  12. Advanced express web application development

    CERN Document Server

    Keig, Andrew

    2013-01-01

    A practical book, guiding the reader through the development of a single page application using a feature-driven approach.If you are an experienced JavaScript developer who wants to build highly scalable, real-world applications using Express, this book is ideal for you. This book is an advanced title and assumes that the reader has some experience with node, Javascript MVC web development frameworks, and has heard of Express before, or is familiar with it. You should also have a basic understanding of Redis and MongoDB. This book is not a tutorial on Node, but aims to explore some of the more

  13. ONTOLOGY BASED MEANINGFUL SEARCH USING SEMANTIC WEB AND NATURAL LANGUAGE PROCESSING TECHNIQUES

    Directory of Open Access Journals (Sweden)

    K. Palaniammal

    2013-10-01

    Full Text Available The semantic web extends the current World Wide Web by adding facilities for the machine-understood description of meaning. The ontology-based search model is used to enhance the efficiency and accuracy of information retrieval. Ontology is the core technology for the semantic web and the mechanism for representing formal and shared domain descriptions. In this paper, we propose ontology-based meaningful search using semantic web and Natural Language Processing (NLP) techniques in the educational domain. First we build the educational ontology, then we present the semantic search system. The search model consists of three parts: an embedded spell-check, finding synonyms using the WordNet API, and querying the ontology using the SPARQL language. The results are sensitive to both spell check and synonym context. This paper provides more accurate results and the complete details for the selected field in a single page.
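
    A hedged sketch of the three-part search model on a made-up miniature educational ontology: synonyms come from NLTK's WordNet (requires a one-time nltk.download('wordnet')), and matching courses are retrieved with a SPARQL query through rdflib; the spell-check step is omitted. All names and URIs are illustrative assumptions, not the paper's ontology.

```python
from rdflib import Graph, Literal, Namespace, RDF
from nltk.corpus import wordnet as wn   # needs nltk.download('wordnet') once

# Tiny made-up educational ontology; the paper builds a much richer one.
EDU = Namespace("http://example.org/edu#")
g = Graph()
g.add((EDU.Algebra, RDF.type, EDU.Course))
g.add((EDU.Algebra, EDU.teaches, Literal("equations")))
g.add((EDU.Geometry, RDF.type, EDU.Course))
g.add((EDU.Geometry, EDU.teaches, Literal("shapes")))

def synonyms(term):
    """Synonym expansion via the WordNet API, one part of the search model."""
    lemmas = {l.name().replace("_", " ") for s in wn.synsets(term) for l in s.lemmas()}
    return lemmas | {term}

def search(term):
    hits = set()
    for word in synonyms(term):
        q = f"""SELECT ?course WHERE {{
                    ?course <http://example.org/edu#teaches> ?topic .
                    FILTER(CONTAINS(LCASE(STR(?topic)), "{word.lower()}"))
                }}"""
        hits |= {str(row.course) for row in g.query(q)}
    return sorted(hits)

print(search("equation"))   # finds the Algebra course via the topic "equations"
```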

  14. Grouping of Items in Mobile Web Questionnaires

    Science.gov (United States)

    Mavletova, Aigul; Couper, Mick P.

    2016-01-01

    There is some evidence that a scrolling design may reduce breakoffs in mobile web surveys compared to a paging design, but there is little empirical evidence to guide the choice of the optimal number of items per page. We investigate the effect of the number of items presented on a page on data quality in two types of questionnaires: with or…

  15. Multigraph: Interactive Data Graphs on the Web

    Science.gov (United States)

    Phillips, M. B.

    2010-12-01

    Many aspects of geophysical science involve time dependent data that is often presented in the form of a graph. Considering that the web has become a primary means of communication, there are surprisingly few good tools and techniques available for presenting time-series data on the web. The most common solution is to use a desktop tool such as Excel or Matlab to create a graph which is saved as an image and then included in a web page like any other image. This technique is straightforward, but it limits the user to one particular view of the data, and disconnects the graph from the data in a way that makes updating a graph with new data an often cumbersome manual process. This situation is somewhat analogous to the state of mapping before the advent of GIS. Maps existed only in printed form, and creating a map was a laborious process. In the last several years, however, the world of mapping has experienced a revolution in the form of web-based and other interactive computer technologies, so that it is now commonplace for anyone to easily browse through gigabytes of geographic data. Multigraph seeks to bring a similar ease of access to time series data. Multigraph is a program for displaying interactive time-series data graphs in web pages that includes a simple way of configuring the appearance of the graph and the data to be included. It allows multiple data sources to be combined into a single graph, and allows the user to explore the data interactively. Multigraph lets users explore and visualize "data space" in the same way that interactive mapping applications such as Google Maps facilitate exploring and visualizing geography. Viewing a Multigraph graph is extremely simple and intuitive, and requires no instructions. Creating a new graph for inclusion in a web page involves writing a simple XML configuration file and requires no programming. Multigraph can read data in a variety of formats, and can display data from a web service, allowing users to "surf

  16. Web-based pathology practice examination usage

    Directory of Open Access Journals (Sweden)

    Edward C Klatt

    2014-01-01

    Full Text Available Context: General and subject specific practice examinations for students in health sciences studying pathology were placed onto a free public internet web site entitled web path and were accessed four clicks from the home web site menu. Subjects and Methods: Multiple choice questions were coded into. html files with JavaScript functions for web browser viewing in a timed format. A Perl programming language script with common gateway interface for web page forms scored examinations and placed results into a log file on an internet computer server. The four general review examinations of 30 questions each could be completed in up to 30 min. The 17 subject specific examinations of 10 questions each with accompanying images could be completed in up to 15 min each. The results of scores and user educational field of study from log files were compiled from June 2006 to January 2014. Results: The four general review examinations had 31,639 accesses with completion of all questions, for a completion rate of 54% and average score of 75%. A score of 100% was achieved by 7% of users, ≥90% by 21%, and ≥50% score by 95% of users. In top to bottom web page menu order, review examination usage was 44%, 24%, 17%, and 15% of all accessions. The 17 subject specific examinations had 103,028 completions, with completion rate 73% and average score 74%. Scoring at 100% was 20% overall, ≥90% by 37%, and ≥50% score by 90% of users. The first three menu items on the web page accounted for 12.6%, 10.0%, and 8.2% of all completions, and the bottom three accounted for no more than 2.2% each. Conclusions: Completion rates were higher for shorter 10 questions subject examinations. Users identifying themselves as MD/DO scored higher than other users, averaging 75%. Usage was higher for examinations at the top of the web page menu. Scores achieved suggest that a cohort of serious users fully completing the examinations had sufficient preparation to use them to support

  17. Improving the interactivity and functionality of Web-based radiology teaching files with the Java programming language.

    Science.gov (United States)

    Eng, J

    1997-01-01

    Java is a programming language that runs on a "virtual machine" built into World Wide Web (WWW)-browsing programs on multiple hardware platforms. Web pages were developed with Java to enable Web-browsing programs to overlay transparent graphics and text on displayed images so that the user could control the display of labels and annotations on the images, a key feature not available with standard Web pages. This feature was extended to include the presentation of normal radiologic anatomy. Java programming was also used to make Web browsers compatible with the Digital Imaging and Communications in Medicine (DICOM) file format. By enhancing the functionality of Web pages, Java technology should provide greater incentive for using a Web-based approach in the development of radiology teaching material.

  18. Default Parallels Plesk Panel Page

    Science.gov (United States)

    services that small businesses want and need. Our software includes key building blocks of cloud service virtualized servers Service Provider Products Parallels® Automation Hosting, SaaS, and cloud computing , the leading hosting automation software. You see this page because there is no Web site at this

  19. Web-based Surveys: Changing the Survey Process

    OpenAIRE

    Gunn, Holly

    2002-01-01

    Web-based surveys are having a profound influence on the survey process. Unlike other types of surveys, Web page design skills and computer programming expertise play a significant role in the design of Web-based surveys. Survey respondents face new and different challenges in completing a Web-based survey. This paper examines the different types of Web-based surveys, the advantages and challenges of using Web-based surveys, the design of Web-based surveys, and the issues of validity, error, ...

  20. Modelling Safe Interface Interactions in Web Applications

    Science.gov (United States)

    Brambilla, Marco; Cabot, Jordi; Grossniklaus, Michael

    Current Web applications embed sophisticated user interfaces and business logic. The original interaction paradigm of the Web based on static content pages that are browsed by hyperlinks is, therefore, not valid anymore. In this paper, we advocate a paradigm shift for browsers and Web applications, that improves the management of user interaction and browsing history. Pages are replaced by States as basic navigation nodes, and Back/Forward navigation along the browsing history is replaced by a full-fledged interactive application paradigm, supporting transactions at the interface level and featuring Undo/Redo capabilities. This new paradigm offers a safer and more precise interaction model, protecting the user from unexpected behaviours of the applications and the browser.

  1. How Many Pages in a Single Word: Alternative Typo-poetics of Surrealist Magazines

    Directory of Open Access Journals (Sweden)

    Biljana Andonovska

    2013-07-01

    Full Text Available The paper examines the experimental design, typography and editorial strategies of the rare avant-garde publication Four Pages - Onanism of Death - And So On (1930), published by Oskar Davičo, Đorđe Kostić and Đorđe Jovanović, probably the first Surrealist Edition of the Belgrade surrealist group. Starting from its unconventional format and the way the authors (re)shape and (mis)direct each page in an autonomous fashion, I further analyze the intrinsic interaction between the text, its graphic embodiment and surrounding para-textual elements (illustrations, body text, titles, folding, dating, margins, comments). Special attention is given to the concepts of depersonalization, free association and automatic writing as primary poetical sources for the delinearisation of the reading process and 'emancipation' of the text, its content and syntax as well as its position, direction, and visual materiality on the page. Resisting conventional classifications and simplified distinctions between established print media and genres, this surrealist single-issue placard magazine mixes elements of the poster, magazine, and booklet. Its ambiguous nature leads us toward theoretical discussion of the avant-garde magazine as an autonomous literary genre and original, self-sufficient artwork, as was already suggested by the theory of Russian formalism.

  2. web cellHTS2: A web-application for the analysis of high-throughput screening data

    Directory of Open Access Journals (Sweden)

    Boutros Michael

    2010-04-01

    Full Text Available Abstract Background The analysis of high-throughput screening data sets is an expanding field in bioinformatics. High-throughput screens by RNAi generate large primary data sets which need to be analyzed and annotated to identify relevant phenotypic hits. Large-scale RNAi screens are frequently used to identify novel factors that influence a broad range of cellular processes, including signaling pathway activity, cell proliferation, and host cell infection. Here, we present a web-based application utility for the end-to-end analysis of large cell-based screening experiments by cellHTS2. Results The software guides the user through the configuration steps that are required for the analysis of single or multi-channel experiments. The web-application provides options for various standardization and normalization methods, annotation of data sets and a comprehensive HTML report of the screening data analysis, including a ranked hit list. Sessions can be saved and restored for later re-analysis. The web frontend for the cellHTS2 R/Bioconductor package interacts with it through an R-server implementation that enables highly parallel analysis of screening data sets. web cellHTS2 further provides a file import and configuration module for common file formats. Conclusions The implemented web-application facilitates the analysis of high-throughput data sets and provides a user-friendly interface. web cellHTS2 is accessible online at http://web-cellHTS2.dkfz.de. A standalone version as a virtual appliance and source code for platforms supporting Java 1.5.0 can be downloaded from the web cellHTS2 page. web cellHTS2 is freely distributed under GPL.

  3. The use of the TWiki Web in ATLAS

    International Nuclear Information System (INIS)

    Amram, Nir; Antonelli, Stefano; Haywood, Stephen; Lloyd, Steve; Luehring, Frederick; Poulard, Gilbert

    2010-01-01

    The ATLAS Experiment, with over 2000 collaborators, needs efficient and effective means of communicating information. The Collaboration has been using the TWiki Web at CERN for over three years and now has more than 7000 web pages, some of which are protected. This number greatly exceeds the number of 'static' HTML pages, and in the last year, there has been a significant migration to the TWiki. The TWiki is one example of the many different types of Wiki web which exist. In this paper, a description is given of the ATLAS TWiki at CERN. The tools used by the Collaboration to manage the TWiki are described and some of the problems encountered explained. A very useful development has been the creation of a set of Workbooks (Users' Guides) - these have benefitted from the TWiki environment and, in particular, a tool to extract pdf from the associated pages.

  4. Medium-sized Universities Connect to Their Libraries: Links on University Home Pages and User Group Pages

    Directory of Open Access Journals (Sweden)

    Pamela Harpel-Burk

    2006-03-01

    Full Text Available From major tasks—such as recruitment of new students and staff—to the more mundane but equally important tasks—such as providing directions to campus—college and university Web sites perform a wide range of tasks for a varied assortment of users. Overlapping functions and user needs meld to create the need for a Web site with three major functions: promotion and marketing, access to online services, and providing a means of communication between individuals and groups. In turn, college and university Web sites that provide links to their library home page can be valuable assets for recruitment, public relations, and for helping users locate online services.

  5. PageRank and rank-reversal dependence on the damping factor

    Science.gov (United States)

    Son, S.-W.; Christensen, C.; Grassberger, P.; Paczuski, M.

    2012-12-01

    PageRank (PR) is an algorithm originally developed by Google to evaluate the importance of web pages. Considering how deeply rooted Google's PR algorithm is to gathering relevant information or to the success of modern businesses, the question of rank stability and choice of the damping factor (a parameter in the algorithm) is clearly important. We investigate PR as a function of the damping factor d on a network obtained from a domain of the World Wide Web, finding that rank reversal happens frequently over a broad range of PR (and of d). We use three different correlation measures, Pearson, Spearman, and Kendall, to study rank reversal as d changes, and we show that the correlation of PR vectors drops rapidly as d changes from its frequently cited value, d0=0.85. Rank reversal is also observed by measuring the Spearman and Kendall rank correlation, which evaluate relative ranks rather than absolute PR. Rank reversal happens not only in directed networks containing rank sinks but also in a single strongly connected component, which by definition does not contain any sinks. We relate rank reversals to rank pockets and bottlenecks in the directed network structure. For the network studied, the relative rank is more stable by our measures around d=0.65 than at d=d0.
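
    A sketch of this kind of comparison on a random toy graph rather than the Web domain studied (networkx and scipy assumed available): PageRank is computed at the reference damping factor and at several alternatives, and the three correlation measures are reported; rank-based correlations dropping faster than Pearson is the signature of rank reversal.

```python
import networkx as nx
import numpy as np
from scipy import stats

# Toy stand-in for the web-domain graph studied in the paper.
g = nx.gnp_random_graph(300, 0.02, seed=42, directed=True)

def pr_vector(d):
    pr = nx.pagerank(g, alpha=d)
    return np.array([pr[node] for node in sorted(g.nodes())])

base = pr_vector(0.85)                 # the frequently cited value d0 = 0.85
for d in (0.50, 0.65, 0.95, 0.99):
    other = pr_vector(d)
    pearson = stats.pearsonr(base, other)[0]
    spearman = stats.spearmanr(base, other)[0]   # relative ranks
    kendall = stats.kendalltau(base, other)[0]   # pairwise order agreement
    print(f"d={d:.2f}  Pearson={pearson:.3f}  Spearman={spearman:.3f}  Kendall={kendall:.3f}")
```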

  6. PageRank and rank-reversal dependence on the damping factor.

    Science.gov (United States)

    Son, S-W; Christensen, C; Grassberger, P; Paczuski, M

    2012-12-01

    PageRank (PR) is an algorithm originally developed by Google to evaluate the importance of web pages. Considering how deeply rooted Google's PR algorithm is to gathering relevant information or to the success of modern businesses, the question of rank stability and choice of the damping factor (a parameter in the algorithm) is clearly important. We investigate PR as a function of the damping factor d on a network obtained from a domain of the World Wide Web, finding that rank reversal happens frequently over a broad range of PR (and of d). We use three different correlation measures, Pearson, Spearman, and Kendall, to study rank reversal as d changes, and we show that the correlation of PR vectors drops rapidly as d changes from its frequently cited value, d_{0}=0.85. Rank reversal is also observed by measuring the Spearman and Kendall rank correlation, which evaluate relative ranks rather than absolute PR. Rank reversal happens not only in directed networks containing rank sinks but also in a single strongly connected component, which by definition does not contain any sinks. We relate rank reversals to rank pockets and bottlenecks in the directed network structure. For the network studied, the relative rank is more stable by our measures around d=0.65 than at d=d_{0}.

  7. Automatic web site authoring with SiteGuide

    NARCIS (Netherlands)

    de Boer, V.; Hollink, V.; van Someren, M.W.; Kłopotek, M.A.; Przepiórkowski, A.; Wierzchoń, S.T.; Trojanowski, K.

    2009-01-01

    An important step in the design process for a web site is to determine which information is to be included and how the information should be organized on the web site’s pages. In this paper we describe ’SiteGuide’, a tool that automatically produces an information architecture for a web site that a

  8. Hiding in Plain Sight: The Anatomy of Malicious Facebook Pages

    OpenAIRE

    Dewan, Prateek; Kumaraguru, Ponnurangam

    2015-01-01

    Facebook is the world's largest Online Social Network, having more than 1 billion users. Like most other social networks, Facebook is home to various categories of hostile entities who abuse the platform by posting malicious content. In this paper, we identify and characterize Facebook pages that engage in spreading URLs pointing to malicious domains. We used the Web of Trust API to determine domain reputations of URLs published by pages, and identified 627 pages publishing untrustworthy info...

  9. A Web Server for MACCS Magnetometer Data

    Science.gov (United States)

    Engebretson, Mark J.

    1998-01-01

    NASA Grant NAG5-3719 was provided to Augsburg College to support the development of a web server for the Magnetometer Array for Cusp and Cleft Studies (MACCS), a two-dimensional array of fluxgate magnetometers located at cusp latitudes in Arctic Canada. MACCS was developed as part of the National Science Foundation's GEM (Geospace Environment Modeling) Program, which was designed in part to complement NASA's Global Geospace Science programs during the decade of the 1990s. This report describes the successful use of these grant funds to support a working web page that provides both daily plots and file access to any user accessing the worldwide web. The MACCS home page can be accessed at http://space.augsburg.edu/space/MaccsHome.html.

  10. A Runtime System for Interactive Web Services

    DEFF Research Database (Denmark)

    Brabrand, Claus; Møller, Anders; Sandholm, Anders

    1999-01-01

    Interactive web services are increasingly replacing traditional static web pages. Producing web services seems to require a tremendous amount of laborious low-level coding due to the primitive nature of CGI programming. We present ideas for an improved runtime system for interactive web services...... built on top of CGI running on virtually every combination of browser and HTTP/CGI server. The runtime system has been implemented and used extensively in , a tool for producing interactive web services....

  11. A fuzzy method for improving the functionality of search engines based on user's web interactions

    Directory of Open Access Journals (Sweden)

    Farzaneh Kabirbeyk

    2015-04-01

    Full Text Available Web mining has been widely used to discover knowledge from various sources on the web. One of the important tools in web mining is the mining of web users’ behavior, which is considered a way to discover the potential knowledge in users’ interactions. Nowadays, website personalization is a popular phenomenon among web users; it plays an important role in facilitating user access and provides information matching users’ requirements based on their own interests. Extracting important features of web user behavior plays a significant role in web usage mining. Such features are the page visit frequency in each session, the visit duration, and the dates on which certain pages are visited. This paper presents a method to predict users’ interests and to propose a list of pages based on those interests by identifying user behavior with a fuzzy technique, the fuzzy clustering method. Because users have different interests and pursue one or more of them at a time, a user’s interest may belong to several clusters, and fuzzy clustering allows this overlap. The resulting clusters are used to extract fuzzy rules, which help detect users’ movement patterns, and a neural network then provides a list of suggested pages to the users.
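
    As a rough illustration of the clustering step only (not the paper's implementation), the snippet below runs a plain fuzzy c-means loop over synthetic session features; the three behavioural features and all numbers are assumptions made for the example.

    ```python
    # Sketch of fuzzy c-means over session features (visit frequency, duration, recency).
    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.random((300, 3))          # 300 user sessions x 3 behavioural features (synthetic)
    c, m, iters = 4, 2.0, 100         # clusters, fuzzifier, iterations

    U = rng.random((300, c))
    U /= U.sum(axis=1, keepdims=True)  # random fuzzy memberships, rows sum to 1

    for _ in range(iters):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]           # weighted cluster centres
        dist = np.linalg.norm(X[:, None, :] - centers[None], axis=2) + 1e-9
        U = 1.0 / (dist ** (2 / (m - 1)))                         # standard FCM membership update
        U /= U.sum(axis=1, keepdims=True)

    # A session may belong to several interest clusters; memberships overlap.
    print(U[:5].round(2))
    ```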

  12. A Web Browser Interface to Manage the Searching and Organizing of Information on the Web by Learners

    Science.gov (United States)

    Li, Liang-Yi; Chen, Gwo-Dong

    2010-01-01

    Information Gathering is a knowledge construction process. Web learners make a plan for their Information Gathering task based on their prior knowledge. The plan is evolved with new information encountered and their mental model is constructed through continuously assimilating and accommodating new information gathered from different Web pages. In…

  13. The Rise and Fall of Text on the Web: A Quantitative Study of Web Archives

    Science.gov (United States)

    Cocciolo, Anthony

    2015-01-01

    Introduction: This study addresses the following research question: is the use of text on the World Wide Web declining? If so, when did it start declining, and by how much has it declined? Method: Web pages are downloaded from the Internet Archive for the years 1999, 2002, 2005, 2008, 2011 and 2014, producing 600 captures of 100 prominent and…

  14. Fuzzy Clustering: An Approachfor Mining Usage Profilesfrom Web

    OpenAIRE

    Ms.Archana N. Boob; Prof. D. M. Dakhane

    2012-01-01

    Web usage mining is an application of data mining technology to the data in web server log files. It can discover the browsing patterns of users and certain correlations between web pages. Web usage mining provides support for web site design, personalization services and other business decision making, etc. Web mining applies data mining, artificial intelligence, chart technology and so on to web data and traces users' visiting characteris...

  15. Association and Sequence Mining in Web Usage

    Directory of Open Access Journals (Sweden)

    Claudia Elena DINUCA

    2011-06-01

    Full Text Available Web servers worldwide generate a vast amount of information on web users’ browsing activities. Several researchers have studied these so-called clickstream or web access log data to better understand and characterize web users. Clickstream data can be enriched with information about the content of visited pages and the origin (e.g., geographic, organizational) of the requests. The goal of this project is to analyse user behaviour by mining enriched web access log data. With the continued growth and proliferation of e-commerce, Web services, and Web-based information systems, the volumes of clickstream and user data collected by Web-based organizations in their daily operations have reached astronomical proportions. This information can be exploited in various ways, such as enhancing the effectiveness of websites or developing directed web marketing campaigns. The discovered patterns are usually represented as collections of pages, objects, or resources that are frequently accessed by groups of users with common needs or interests. The focus of this paper is to provide an overview of how to use frequent pattern techniques for discovering different types of patterns in a Web log database. In this paper we will focus on finding associations as a data mining technique to extract potentially useful knowledge from web usage data. I implemented in Java, using NetBeans IDE, a program for identifying page associations from sessions. For exemplification, we used the log files from a commercial web site.
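
    The association step can be sketched with plain support/confidence counting. This is not the Java/NetBeans program described above; the sessions are invented and the thresholds are arbitrary.

    ```python
    # Sketch: mining page associations from sessions with support/confidence counting.
    from itertools import combinations
    from collections import Counter

    sessions = [                      # hypothetical sessions reconstructed from a web access log
        {"/home", "/products", "/cart"},
        {"/home", "/products"},
        {"/home", "/blog"},
        {"/products", "/cart", "/checkout"},
    ]
    min_support, min_conf = 0.5, 0.6

    page_counts = Counter(p for s in sessions for p in s)
    pair_counts = Counter(frozenset(pair) for s in sessions for pair in combinations(sorted(s), 2))

    n = len(sessions)
    for pair, cnt in pair_counts.items():
        a, b = tuple(pair)
        if cnt / n >= min_support:
            for x, y in ((a, b), (b, a)):
                conf = cnt / page_counts[x]
                if conf >= min_conf:
                    print(f"{x} -> {y}  support={cnt/n:.2f} confidence={conf:.2f}")
    ```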

  16. Design of remote weather monitor system based on embedded web database

    International Nuclear Information System (INIS)

    Gao Jiugang; Zhuang Along

    2010-01-01

    The remote weather monitoring system is designed by employing embedded Web database technology with the S3C2410 microprocessor as the core. The monitoring system can simultaneously monitor multi-channel sensor signals and can give a dynamic Web page display of various types of meteorological information on the remote computer. The paper gives an elaborated introduction to the construction and application of the Web database under embedded Linux. Test results show that the client accesses the Web page via GPRS or the Internet, acquires data, and displays the values of various types of meteorological information in an intuitive graphical way. (authors)

  17. Lost but Not Forgotten: Finding Pages on the Unarchived Web

    NARCIS (Netherlands)

    Huurdeman, H.C.; Kamps, J.; Samar, T.; de Vries, A.P.; Ben-David, A.; Rogers, R.A.

    2015-01-01

    Web archives attempt to preserve the fast changing web, yet they will always be incomplete. Due to restrictions in crawling depth, crawling frequency, and restrictive selection policies, large parts of the Web are unarchived and, therefore, lost to posterity. In this paper, we propose an approach to

  18. Lost but not forgotten: finding pages on the unarchived web

    NARCIS (Netherlands)

    H.C. Huurdeman; J. Kamps; T. Samar (Thaer); A.P. de Vries (Arjen); A. Ben-David; R.A. Rogers (Richard)

    2015-01-01

    Web archives attempt to preserve the fast changing web, yet they will always be incomplete. Due to restrictions in crawling depth, crawling frequency, and restrictive selection policies, large parts of the Web are unarchived and, therefore, lost to posterity. In this paper, we propose an

  19. Comparing Web, Group and Telehealth Formats of a Military Parenting Program

    Science.gov (United States)

    2016-06-01

    AWARD NUMBER: W81XWH-14-1-0143. TITLE: Comparing Web, Group and Telehealth Formats of a Military Parenting Program. Through a web-based parenting intervention for military families with school-aged children, we expect to strengthen parenting practices in families and... Materials are available upon request: • Online questionnaire for baseline data collection (9 pages) • Online parent survey for time point 1 (69 pages...)

  20. UK Web Archive programme: a brief history of opportunities and challenges

    Directory of Open Access Journals (Sweden)

    Aquiles Alencar Brayner

    2016-05-01

    Full Text Available Webpages have been playing a key role in the creation and dissemination of information in recent decades. However, given their ephemeral nature, many Web pages published on the World Wide Web have had their content changed or have been permanently deleted without leaving any trace of their existence. In order to avoid the loss of this important material that represents our contemporary cultural heritage, various institutions have launched programmes to harvest and archive Web pages registered in specific national domains. Based on the example of the development of the Web archive program in the UK, this article raises some key questions in relation to the technological obstacles and curatorial models adopted for the preservation and access to the content published on the Web.

  1. A Survey On Various Web Template Detection And Extraction Methods

    Directory of Open Access Journals (Sweden)

    Neethu Mary Varghese

    2015-03-01

    Full Text Available In today's digital world, reliance on the World Wide Web as a source of information is extensive. Users increasingly rely on web-based search engines to provide accurate search results on a wide range of topics that interest them. The search engines in turn parse the vast repository of web pages searching for relevant information. However, the majority of web portals are designed using web templates, which are intended to provide a consistent look and feel to end users. The presence of these templates, however, can influence search results, leading to inaccurate results being delivered to the users. Therefore, to improve the accuracy and reliability of search results, identification and removal of web templates from the actual content is essential. A wide range of approaches are commonly employed to achieve this, and this paper focuses on the study of the various approaches to template detection and extraction that can be applied across homogeneous as well as heterogeneous web pages.

  2. Improving Interdisciplinary Provider Communication Through a Unified Paging System.

    Science.gov (United States)

    Heidemann, Lauren; Petrilli, Christopher; Gupta, Ashwin; Campbell, Ian; Thompson, Maureen; Cinti, Sandro; Stewart, David A

    2016-06-01

    Interdisciplinary communication at a Veterans Affairs (VA) academic teaching hospital is largely dependent on alphanumeric paging, which has limitations as a result of one-way communication and lack of reliable physician identification. Adverse patient outcomes related to difficulty contacting the correct consulting provider in a timely manner have been reported. House officers were surveyed on the level of satisfaction with the current VA communication system and the rate of perceived adverse patient outcomes caused by potential delays within this system. Respondents were then asked to identify the ideal paging system. These results were used to develop and deploy a new Web site. A postimplementation survey was repeated 1 year later. This study was conducted as a quality improvement project. House officer satisfaction with the preintervention system was 3%. The majority used more than four modalities to identify consultants, with 59% stating that word of mouth was a typical source. The preferred mode of paging was the university hospital paging system, a Web-based program that is used at the partnering academic institution. Following integration of VA consulting services within the university hospital paging system, the level of satisfaction improved to 87%. Significant decreases were seen in perceived adverse patient outcomes (from 16% to 2%), delays in patient care (from 90% to 16%), and extended hospitalizations (from 46% to 4%). Our study demonstrates significant improvement in physician satisfaction with a newly implemented paging system that was associated with a decreased perceived number of adverse patient events and delays in care.

  3. Architecture for large-scale automatic web accessibility evaluation based on the UWEM methodology

    DEFF Research Database (Denmark)

    Ulltveit-Moe, Nils; Olsen, Morten Goodwin; Pillai, Anand B.

    2008-01-01

    The European Internet Accessibility project (EIAO) has developed an Observatory for performing large scale automatic web accessibility evaluations of public sector web sites in Europe. The architecture includes a distributed web crawler that crawls web sites for links until either a given budget...... of web pages have been identified or the web site has been crawled exhaustively. Subsequently, a uniform random subset of the crawled web pages is sampled and sent for accessibility evaluation and the evaluation results are stored in a Resource Description Format (RDF) database that is later loaded...... challenges that the project faced and the solutions developed towards building a system capable of regular large-scale accessibility evaluations with sufficient capacity and stability. It also outlines some possible future architectural improvements....
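
    The sampling stage lends itself to a compact illustration. The sketch below is not the EIAO Observatory code; it merely shows, with assumed numbers, how a uniform random subset of a budget-limited crawl could be drawn for accessibility evaluation.

    ```python
    # Sketch: draw a uniform random subset of crawled pages for evaluation.
    import random

    def sample_for_evaluation(crawled_urls, sample_size, seed=42):
        """Uniform random subset of the crawled pages, as in the UWEM-based workflow."""
        rng = random.Random(seed)
        if len(crawled_urls) <= sample_size:
            return list(crawled_urls)
        return rng.sample(list(crawled_urls), sample_size)

    budget = 6000                                  # hypothetical crawling budget
    crawled = [f"https://example.org/page{i}" for i in range(budget)]
    subset = sample_for_evaluation(crawled, sample_size=600)
    print(len(subset), subset[:3])
    ```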

  4. Oracle Application Express 5 for beginners a practical guide to rapidly develop data-centric web applications accessible from desktop, laptops, tablets, and smartphones

    CERN Document Server

    2015-01-01

    Oracle Application Express has taken another big leap towards becoming a true next generation RAD tool. It has entered into its fifth version to build robust web applications. One of the most significant feature in this release is a new page designer that helps developers create and edit page elements within a single page design view, which enormously maximizes developer productivity. Without involving the audience too much into the boring bits, this full colored edition adopts an inspiring approach that helps beginners practically evaluate almost every feature of Oracle Application Express, including all features new to version 5. The most convincing way to explore a technology is to apply it to a real world problem. In this book, you’ll develop a sales application that demonstrates almost every feature to practically expose the anatomy of Oracle Application Express 5. The short list below presents some main topics of Oracle APEX covered in this book: Rapid web application development for desktops, la...

  5. Vague but exciting…CERN celebrates 20 years of the Web

    CERN Multimedia

    2009-01-01

    Twenty years ago work started on something that would change the world forever. It would change the way we work, the way we communicate and the way we make our voices heard. On 13 March CERN will celebrate the 20th anniversary of the birth of the World Wide Web. Tim Berners-Lee with Nicola Pellow, next to the NeXT computer. In March 1989 here at CERN, Tim Berners-Lee submitted a proposal for a new information management system to his boss, Mike Sendall. ‘Vague, but exciting’, were the words that Sendall wrote on the proposal, allowing Berners-Lee to continue with the project, but unaware that it would evolve into one of the most important communication tools ever created. Tim Berners-Lee used a NeXT computer at CERN to create the first web server running a single website – info.cern.ch. Since then the World Wide Web has grown into the incredible phenomenon that we know today, a web of more than 60 billion pages, and hundreds of ...

  6. Undergraduate Students’ Evaluation Criteria When Using Web Resources for Class Papers

    Directory of Open Access Journals (Sweden)

    Tsai-Youn Hung

    2004-09-01

    Full Text Available The growth in popularity of the World Wide Web has dramatically changed the way undergraduate students conduct information searches. The purpose of this study is to investigate what core quality criteria undergraduate students use to evaluate Web resources for their class papers and to what extent they evaluate the Web resources. This study reports on five Web page evaluations and a questionnaire survey of thirty-five undergraduate students in the Information Technology and Informatics Program at Rutgers University. Results show that undergraduate students have become increasingly sophisticated about using Web resources, but not yet sophisticated about searching them. Undergraduate students only used one or two surface quality criteria to evaluate Web resources. They made immediate judgments about the surface features of Web pages and ignored the content of the documents themselves. This research suggests that undergraduate instructors should take the responsibility for instructing students on basic Web use knowledge or work with librarians to develop undergraduate students’ information literacy skills.

  7. Webmail: an Automated Web Publishing System

    Science.gov (United States)

    Bell, David

    A system for publishing frequently updated information to the World Wide Web will be described. Many documents now hosted by the NOAO Web server require timely posting and frequent updates, but need only minor changes in markup or are in a standard format requiring only conversion to HTML. These include information from outside the organization, such as electronic bulletins, and a number of internal reports, both human and machine generated. Webmail uses procmail and Perl scripts to process incoming email messages in a variety of ways. This processing may include wrapping or conversion to HTML, posting to the Web or internal newsgroups, updating search indices or links on related pages, and sending email notification of the new pages to interested parties. The Webmail system has been in use at NOAO since early 1997 and has steadily grown to include fourteen recipes that together handle about fifty messages per week.

  8. The iMars WebGIS - Spatio-Temporal Data Queries and Single Image Map Web Services

    Science.gov (United States)

    Walter, Sebastian; Steikert, Ralf; Schreiner, Bjoern; Muller, Jan-Peter; van Gasselt, Stephan; Sidiropoulos, Panagiotis; Lanz-Kroechert, Julia

    2017-04-01

    Introduction: Web-based planetary image dissemination platforms usually show outline coverages of the data and offer querying for metadata as well as preview and download, e.g. the HRSC Mapserver (Walter & van Gasselt, 2014). Here we introduce a new approach for a system dedicated to change detection by simultaneous visualisation of single-image time series in a multi-temporal context. While the usual form of presenting multi-orbit datasets is to merge the data into a larger mosaic, we want to stay with the single image as an important snapshot of the planetary surface at a specific time. In the context of the EU FP-7 iMars project we process and ingest vast amounts of automatically co-registered (ACRO) images. The co-registration is based on the high-precision HRSC multi-orbit quadrangle image mosaics, which are based on bundle-block-adjusted multi-orbit HRSC DTMs. Additionally we make use of the existing bundle-adjusted HRSC single images available at the PDS archives. A prototype demonstrating the presented features is available at http://imars.planet.fu-berlin.de. Multi-temporal database: In order to locate multiple coverage of images and select images based on spatio-temporal queries, we converge available coverage catalogs for various NASA imaging missions into a relational database management system with geometry support. We harvest available metadata entries during our processing pipeline using the Integrated Software for Imagers and Spectrometers (ISIS) software. Currently, this database contains image outlines from the MGS/MOC, MRO/CTX and the MO/THEMIS instruments with imaging dates ranging from 1996 to the present. For the MEx/HRSC data, we already maintain a database which we automatically update with custom software based on the VICAR environment. Web Map Service with time support: The MapServer software is connected to the database and provides Web Map Services (WMS) with time support based on the START_TIME image attribute. It allows temporal
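
    As an illustration of the kind of spatio-temporal footprint query such a geometry-enabled database supports, the snippet below assumes a PostGIS-style table; the database name, table, and column names are invented for the example and are not the actual iMars schema.

    ```python
    # Sketch: spatio-temporal query for image footprints in a PostGIS-style table.
    import psycopg2

    query = """
        SELECT product_id, instrument, start_time
        FROM image_footprints
        WHERE ST_Intersects(footprint, ST_MakeEnvelope(%s, %s, %s, %s))
          AND start_time BETWEEN %s AND %s
        ORDER BY start_time;
    """

    with psycopg2.connect("dbname=imars") as conn, conn.cursor() as cur:
        cur.execute(query, (137.0, -5.0, 138.0, -4.0, "1997-01-01", "2017-01-01"))
        for product_id, instrument, start_time in cur.fetchall():
            print(product_id, instrument, start_time)
    ```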

  9. Reactor Engineering Division Material for World Wide Web Pages

    International Nuclear Information System (INIS)

    1996-01-01

    This document presents the home page of the Reactor Engineering Division of Argonne National Laboratory. This WWW site describes the activities of the Division, an introduction to its wide variety of programs and samples of the results of research by people in the division

  10. Intelligent Agent Based Semantic Web in Cloud Computing Environment

    OpenAIRE

    Mukhopadhyay, Debajyoti; Sharma, Manoj; Joshi, Gajanan; Pagare, Trupti; Palwe, Adarsha

    2013-01-01

    Considering today's web scenario, there is a need for effective and meaningful search over the web, which is provided by the Semantic Web. Existing search engines are keyword based. They are weak at answering intelligent queries from the user because their results depend on the information available in web pages. Semantic search engines, in contrast, provide efficient and relevant results, as the semantic web is an extension of the current web in which information is given well defined meaning....

  11. The Partial Mapping of the Web Graph

    Directory of Open Access Journals (Sweden)

    Kristina Machova

    2009-06-01

    Full Text Available The paper presents an approach to partial mapping of a web sub-graph. This sub-graph contains the nearest surroundings of an actual web page. Our work deals with acquiring relevant Hyperlinks of a base web site, generation of adjacency matrix, the nearest distance matrix and matrix of converted distances of Hyperlinks, detection of compactness of web representation, and visualization of its graphical representation. The paper introduces an LWP algorithm – a technique for Hyperlink filtration.  This work attempts to help users with the orientation within the web graph.
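
    A compact sketch of the ideas mentioned above (not the LWP implementation): build an adjacency matrix from a hypothetical hyperlink neighbourhood and derive the nearest-distance matrix by breadth-first search.

    ```python
    # Sketch: adjacency matrix and nearest-distance matrix of a web sub-graph.
    import numpy as np
    from collections import deque

    links = {                                   # hypothetical hyperlink neighbourhood of a base page
        "base": ["a", "b"], "a": ["b", "c"], "b": ["c"], "c": ["base"],
    }
    pages = sorted(set(links) | {t for ts in links.values() for t in ts})
    idx = {p: i for i, p in enumerate(pages)}

    A = np.zeros((len(pages), len(pages)), dtype=int)
    for src, targets in links.items():
        for t in targets:
            A[idx[src], idx[t]] = 1              # adjacency matrix of the web sub-graph

    def bfs_distances(start):
        dist = {start: 0}
        q = deque([start])
        while q:
            u = q.popleft()
            for v in links.get(u, []):
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        return [dist.get(p, -1) for p in pages]   # -1 marks unreachable pages

    D = np.array([bfs_distances(p) for p in pages])  # nearest-distance matrix
    print(A, D, sep="\n\n")
    ```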

  12. Head First Web Design

    CERN Document Server

    Watrall, Ethan

    2008-01-01

    Want to know how to make your pages look beautiful, communicate your message effectively, guide visitors through your website with ease, and get everything approved by the accessibility and usability police at the same time? Head First Web Design is your ticket to mastering all of these complex topics, and understanding what's really going on in the world of web design. Whether you're building a personal blog or a corporate website, there's a lot more to web design than div's and CSS selectors, but what do you really need to know? With this book, you'll learn the secrets of designing effecti

  13. Snippet-based relevance predictions for federated web search

    NARCIS (Netherlands)

    Demeester, Thomas; Nguyen, Dong-Phuong; Trieschnigg, Rudolf Berend; Develder, Chris; Hiemstra, Djoerd

    How well can the relevance of a page be predicted, purely based on snippets? This would be highly useful in a Federated Web Search setting where caching large amounts of result snippets is more feasible than caching entire pages. The experiments reported in this paper make use of result snippets and

  14. Classical Hypermedia Virtues on the Web with Webstrates

    DEFF Research Database (Denmark)

    Bouvin, Niels Olof; Klokmose, Clemens Nylandsted

    2016-01-01

    We show and analyze herein how Webstrates can augment the Web from a classical hypermedia perspective. Webstrates turns the DOM of Web pages into persistent and collaborative objects. We demonstrate how this can be applied to realize bidirectional links, shared collaborative annotations, and in...

  15. Ten years on, the web spans the globe

    CERN Multimedia

    Dalton, A W

    2003-01-01

    Short article on the history of the WWW. Prof Berners-Lee states that one of the main reasons the web was such a success was CERN's decision to make the web foundations and protocols available on a royalty-free basis (1/2 page).

  16. Twelve Theses on Reactive Rules for the Web

    OpenAIRE

    Bry, François; Eckert, Michael

    2006-01-01

    Reactivity, the ability to detect and react to events, is an essential functionality in many information systems. In particular, Web systems such as online marketplaces, adaptive (e.g., recommender) systems, and Web services, react to events such as Web page updates or data posted to a server. This article investigates issues of relevance in designing high-level programming languages dedicated to reactivity on the Web. It presents twelve theses on features desira...

  17. Incorporating the surfing behavior of web users into PageRank

    OpenAIRE

    Ashyralyyev, Shatlyk

    2013-01-01

    Ankara : The Department of Computer Engineering and the Graduate School of Engineering and Science of Bilkent University, 2013. Thesis (Master's) -- Bilkent University, 2013. Includes bibliographical references (leaves 68-73). One of the most crucial factors that determines the effectiveness of a large-scale commercial web search engine is the ranking (i.e., order) in which web search results are presented to the end user. In modern web search engines, the skeleton for the rank...

  18. Client-Side Event Processing for Personalized Web Advertisement

    Science.gov (United States)

    Stühmer, Roland; Anicic, Darko; Sen, Sinan; Ma, Jun; Schmidt, Kay-Uwe; Stojanovic, Nenad

    The market for Web advertisement is continuously growing and, correspondingly, the number of approaches that can be used for realizing Web advertisement is increasing. However, current approaches fail to generate very personalized ads for a Web user who is currently visiting particular Web content. They mainly try to develop a profile based on the content of that Web page or on a long-term user profile, without taking into account the user's current preferences. We argue that by discovering a user's interest from his current Web behavior we can support the process of ad generation, especially the relevance of an ad for the user. In this paper we present the conceptual architecture and implementation of such an approach. The approach is based on the extraction of simple events from the user interaction with a Web page and their combination in order to discover the user's interests. We use semantic technologies in order to build such an interpretation out of many simple events. We present results from preliminary evaluation studies. The main contribution of the paper is a very efficient, semantic-based client-side architecture for generating and combining Web events. The architecture ensures the agility of the whole advertisement system by performing complex event processing on the client. In general, this work contributes to the realization of new, event-driven applications for the (Semantic) Web.

  19. Commentary: Building Web Research Strategies for Teachers and Students

    Science.gov (United States)

    Maloy, Robert W.

    2016-01-01

    This paper presents web research strategies for teachers and students to use in building Dramatic Event, Historical Biography, and Influential Literature wiki pages for history/social studies learning. Dramatic Events refer to milestone or turning point moments in history. Historical Biographies and Influential Literature pages feature…

  20. Page 170 Use of Electronic Resources by Undergraduates in Two ...

    African Journals Online (AJOL)

    undergraduate students use electronic resources such as NUC virtual library, HINARI, ... web pages articles from magazines, encyclopedias, pamphlets and other .... of Nigerian university libraries have Internet connectivity, some of the system.

  1. Content and Design Features of Academic Health Sciences Libraries' Home Pages.

    Science.gov (United States)

    McConnaughy, Rozalynd P; Wilson, Steven P

    2018-01-01

    The goal of this content analysis was to identify commonly used content and design features of academic health sciences library home pages. After developing a checklist, data were collected from 135 academic health sciences library home pages. The core components of these library home pages included a contact phone number, a contact email address, an Ask-a-Librarian feature, the physical address listed, a feedback/suggestions link, subject guides, a discovery tool or database-specific search box, multimedia, social media, a site search option, a responsive web design, and a copyright year or update date.

  2. From web 2.0 to learning games

    DEFF Research Database (Denmark)

    Duus Henriksen, Thomas

    2011-01-01

    a game-based learning process that made active use of the participants’ current knowledge, both on leadership and on the context they were to lead. The solution developed for doing so was based on some of the ideas from web 2.0, by which the web-page or game merely provides a structure that allows...

  3. Sample-based XPath Ranking for Web Information Extraction

    NARCIS (Netherlands)

    Jundt, Oliver; van Keulen, Maurice

    Web information extraction typically relies on a wrapper, i.e., program code or a configuration that specifies how to extract some information from web pages at a specific website. Manually creating and maintaining wrappers is a cumbersome and error-prone task. It may even be prohibitive as some

  4. Traitor: associating concepts using the world wide web

    NARCIS (Netherlands)

    Drijfhout, Wanno; Oliver, J.; Oliver, Jundt; Wevers, L.; Hiemstra, Djoerd

    We use Common Crawl's 25TB data set of web pages to construct a database of associated concepts using Hadoop. The database can be queried through a web application with two query interfaces. A textual interface allows searching for similarities and differences between multiple concepts using a query

  5. Improving the web site's effectiveness by considering each page's temporal information

    NARCIS (Netherlands)

    Li, ZG; Sun, MT; Dunham, MH; Xiao, YQ; Dong, G; Tang, C; Wang, W

    2003-01-01

    Improving the effectiveness of a web site is always one of its owner's top concerns. By focusing on analyzing web users' visiting behavior, web mining researchers have developed a variety of helpful methods, based upon association rules, clustering, prediction and so on. However, we have found

  6. Le co-inventeur du Web nous dit

    CERN Multimedia

    Ungar, Christophe

    2004-01-01

    For fifty years CERN has made landmark discoveries. Since 1990, a particular idea was developed by a Belgian, Robert Cailliau in collaboration with Tim Berners-Lee. Cailliau, the co-inventor of the Web, speaks about what brought him to CERN, what he likes about Geneva, and how he uses the Web from day to day (1 page)

  7. Customisable Scientific Web Portal for Fusion Research

    Energy Technology Data Exchange (ETDEWEB)

    Abla, G; Kim, E; Schissel, D; Flannagan, S [General Atomics, San Diego (United States)

    2009-07-01

    The Web browser has become one of the major application interfaces for remotely participating in magnetic fusion. Web portals are used to present very diverse sources of information in a unified way. While a web portal has several benefits over other software interfaces, such as providing single point of access for multiple computational services, and eliminating the need for client software installation, the design and development of a web portal has unique challenges. One of the challenges is that a web portal needs to be fast and interactive despite a high volume of tools and information that it presents. Another challenge is the visual output on the web portal often is overwhelming due to the high volume of data generated by complex scientific instruments and experiments; therefore the applications and information should be customizable depending on the needs of users. An appropriate software architecture and web technologies can meet these problems. A web-portal has been designed to support the experimental activities of DIII-D researchers worldwide. It utilizes a multi-tier software architecture, and web 2.0 technologies, such as AJAX, Django, and Memcached, to develop a highly interactive and customizable user interface. It offers a customizable interface with personalized page layouts and list of services for users to select. Customizable services are: real-time experiment status monitoring, diagnostic data access, interactive data visualization. The web-portal also supports interactive collaborations by providing collaborative logbook, shared visualization and online instant message services. Furthermore, the web portal will provide a mechanism to allow users to create their own applications on the web portal as well as bridging capabilities to external applications such as Twitter and other social networks. In this series of slides, we describe the software architecture of this scientific web portal and our experiences in utilizing web 2.0 technologies. A

  8. Post-processing of Deep Web Information Extraction Based on Domain Ontology

    Directory of Open Access Journals (Sweden)

    PENG, T.

    2013-11-01

    Full Text Available Many methods are utilized to extract and process query results in the deep Web, which rely on the different structures of Web pages and various designing modes of databases. However, some semantic meanings and relations are ignored. So, in this paper, we present an approach for post-processing deep Web query results based on domain ontology which can utilize the semantic meanings and relations. A block identification model (BIM) based on node similarity is defined to extract data blocks that are relevant to a specific domain after reducing noisy nodes. The feature vector of domain books is obtained by a result set extraction model (RSEM) based on the vector space model (VSM). RSEM, in combination with BIM, builds the domain ontology on books, which not only removes the limitations of Web page structures when extracting data information, but also makes use of the semantic meanings of the domain ontology. After extracting basic information of Web pages, a ranking algorithm is adopted to offer an ordered list of data records to users. Experimental results show that BIM and RSEM extract data blocks and build the domain ontology accurately. In addition, relevant data records and basic information are extracted and ranked. The precision and recall results show that our proposed method is feasible and efficient.
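
    The vector space model step can be illustrated in a few lines. This is a hedged sketch, not the BIM/RSEM implementation: it assumes scikit-learn is available, and the page blocks and domain profile are invented.

    ```python
    # Sketch: score extracted page blocks against a domain profile with TF-IDF vectors.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    blocks = [                                   # hypothetical text blocks extracted from a result page
        "title author publisher isbn price hardcover",
        "home contact about copyright sitemap",
        "edition pages language publication date",
    ]
    domain_profile = ["book title author isbn publisher edition pages price"]

    vectorizer = TfidfVectorizer()
    matrix = vectorizer.fit_transform(domain_profile + blocks)
    scores = cosine_similarity(matrix[0:1], matrix[1:]).ravel()

    for block, score in zip(blocks, scores):
        print(f"{score:.2f}  {block[:40]}")
    ```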

  9. Web technology for emergency medicine and secure transmission of electronic patient records.

    Science.gov (United States)

    Halamka, J D

    1998-01-01

    The American Heritage dictionary defines the word "web" as "something intricately contrived, especially something that ensnares or entangles." The wealth of medical resources on the World Wide Web is now so extensive, yet disorganized and unmonitored, that such a definition seems fitting. In emergency medicine, for example, a field in which accurate and complete information, including patients' records, is urgently needed, more than 5000 Web pages are available today, whereas fewer than 50 were available in December 1994. Most sites are static Web pages using the Internet to publish textbook material, but new technology is extending the scope of the Internet to include online medical education and secure exchange of clinical information. This article lists some of the best Web sites for use in emergency medicine and then describes a project in which the Web is used for transmission and protection of electronic medical records.

  10. Web Based VRML Modelling

    NARCIS (Netherlands)

    Kiss, S.; Sarfraz, M.

    2004-01-01

    Presents a method to connect VRML (Virtual Reality Modeling Language) and Java components in a Web page using EAI (External Authoring Interface), which makes it possible to interactively generate and edit VRML meshes. The meshes used are based on regular grids, to provide an interaction and modeling

  11. Web Development Simplified

    Science.gov (United States)

    Becker, Bernd W.

    2010-01-01

    The author has discussed the Multimedia Educational Resource for Teaching and Online Learning site, MERLOT, in a recent Electronic Roundup column. In this article, he discusses an entirely new Web page development tool that MERLOT has added for its members. The new tool is called the MERLOT Content Builder and is directly integrated into the…

  12. Talking physics in the social web

    CERN Multimedia

    Griffiths, Martin

    2007-01-01

    "From "blogs" to "wikis", the Web is now more than a mere repository of information. Martin Griffiths investigates how this new interactivity is affecting the way physicists communicate and access information." (5 pages)

  13. Learning in a sheltered Internet environment: The use of WebQuests

    NARCIS (Netherlands)

    Segers, P.C.J.; Verhoeven, L.T.W.

    2009-01-01

    The present study investigated the effects on learning in a sheltered Internet environment using so-called WebQuests in elementary school classrooms in the Netherlands. A WebQuest is an assignment presented together with a series of web pages to help guide children's learning. The learning gains and

  14. Augmenting the Web through Open Hypermedia

    DEFF Research Database (Denmark)

    Bouvin, N.O.

    2003-01-01

    Based on an overview of Web augmentation and detailing the three basic approaches to extend the hypermedia functionality of the Web, the author presents a general open hypermedia framework (the Arakne framework) to augment the Web. The aim is to provide users with the ability to link, annotate, a......, and otherwise structure Web pages, as they see fit. The paper further discusses the possibilities of the concept through the description of various experiments performed with an implementation of the framework, the Arakne Environment.

  15. Analysis of Technique to Extract Data from the Web for Improved Performance

    Science.gov (United States)

    Gupta, Neena; Singh, Manish

    2010-11-01

    The World Wide Web is rapidly guiding the world into an amazing new electronic world, where everyone can publish anything in electronic form and extract almost all the information. Extraction of information from semi-structured or unstructured documents, such as web pages, is a useful yet complex task. Data extraction, which is important for many applications, extracts the records from the HTML files automatically. Ontologies can achieve a high degree of accuracy in data extraction. We analyze a method for data extraction, OBDE (Ontology-Based Data Extraction), which automatically extracts the query result records from the web with the help of agents. OBDE first constructs an ontology for a domain according to information matching between the query interfaces and query result pages from different web sites within the same domain. Then, the constructed domain ontology is used during data extraction to identify the query result section in a query result page and to align and label the data values in the extracted records. The ontology-assisted data extraction method is fully automatic and overcomes many of the deficiencies of current automatic data extraction methods.

  16. 78 FR 67881 - Nondiscrimination on the Basis of Disability in Air Travel: Accessibility of Web Sites and...

    Science.gov (United States)

    2013-11-12

    ... ticket agents are providing schedule and fare information and marketing covered air transportation... corresponding accessible pages on a mobile Web site by one year after the final rule's effective date; and (3... criteria) as the required accessibility standard for all public-facing Web pages involved in marketing air...

  17. Web resources for myrmecologists

    DEFF Research Database (Denmark)

    Nash, David Richard

    2005-01-01

    The world wide web provides many resources that are useful to the myrmecologist. Here I provide a brief introduction to the types of information currently available, and to recent developments in data provision over the internet which are likely to become important resources for myrmecologists...... in the near future. I discuss the following types of web site, and give some of the most useful examples of each: taxonomy, identification and distribution; conservation; myrmecological literature; individual species sites; news and discussion; picture galleries; personal pages; portals.

  18. PSB goes personal: The failure of personalised PSB web pages

    DEFF Research Database (Denmark)

    Sørensen, Jannick Kirk

    2013-01-01

    Between 2006 and 2011, a number of European public service broadcasting (PSB) organisations offered their website users the opportunity to create their own PSB homepage. The web customisation was conceived by the editors as a response to developments in commercial web services, particularly social...

  19. TREC2002 Web, Novelty and Filtering Track Experiments Using PIRCS

    National Research Council Canada - National Science Library

    Kwok, K. L; Deng, P; Dinstl, N; Chan, M

    2006-01-01

    .... The Web track has two tasks: distillation and named-page retrieval. Distillation is a new utility concept for ranking documents, and needs new design on the output document ranked list after an ad-hoc retrieval from the web (.gov) collection...

  20. Sources of Militaria on the World Wide Web | Walker | Scientia ...

    African Journals Online (AJOL)

    Having an interest in military-type topics is one thing, finding information on the web to quench your thirst for knowledge is another. The World Wide Web (WWW) is a universal electronic library that contains millions of web pages. As well as being fun, it is an addictive tool on which to search for information. To prevent hours ...

  1. Adaptive web data extraction policies

    Directory of Open Access Journals (Sweden)

    Provetti, Alessandro

    2008-12-01

    Full Text Available Web data extraction is concerned, among other things, with routine data accessing and downloading from continuously-updated dynamic Web pages. There is a significant trade-off between the rate at which the external Web sites are accessed and the computational burden on the accessing client. We address the problem by proposing a predictive model, typical of the Operating Systems literature, of the rate-of-update of each Web source. The presented model has been implemented into a new version of the Dynamo project: a middleware that assists in generating informative RSS feeds out of traditional HTML Web sites. To be effective, i.e., to make RSS feeds timely and informative, and to be scalable, Dynamo needs careful tuning and customization of its polling policies, which are described in detail.
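
    A predictive polling policy of the kind described can be sketched as follows; the smoothing constant, bounds, and interface are assumptions made for illustration, not Dynamo's actual design.

    ```python
    # Sketch: estimate a source's update rate and derive the next polling interval.
    import time

    class SourcePoller:
        def __init__(self, alpha=0.3, min_gap=300, max_gap=86400):
            self.alpha, self.min_gap, self.max_gap = alpha, min_gap, max_gap
            self.estimate = 3600.0            # initial guess for seconds between updates
            self.last_change = time.time()

        def record_poll(self, content_changed):
            now = time.time()
            if content_changed:
                observed = now - self.last_change
                # exponentially weighted moving average of inter-update times
                self.estimate = self.alpha * observed + (1 - self.alpha) * self.estimate
                self.last_change = now

        def next_interval(self):
            # poll somewhat faster than the estimated update rate, within bounds
            return max(self.min_gap, min(self.max_gap, self.estimate / 2))

    poller = SourcePoller()
    poller.record_poll(content_changed=True)
    print(round(poller.next_interval()))
    ```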

  2. CSAR-web: a web server of contig scaffolding using algebraic rearrangements.

    Science.gov (United States)

    Chen, Kun-Tze; Lu, Chin Lung

    2018-05-04

    CSAR-web is a web-based tool that allows the users to efficiently and accurately scaffold (i.e. order and orient) the contigs of a target draft genome based on a complete or incomplete reference genome from a related organism. It takes as input a target genome in multi-FASTA format and a reference genome in FASTA or multi-FASTA format, depending on whether the reference genome is complete or incomplete, respectively. In addition, it requires the users to choose either 'NUCmer on nucleotides' or 'PROmer on translated amino acids' for CSAR-web to identify conserved genomic markers (i.e. matched sequence regions) between the target and reference genomes, which are used by the rearrangement-based scaffolding algorithm in CSAR-web to order and orient the contigs of the target genome based on the reference genome. In the output page, CSAR-web displays its scaffolding result in a graphical mode (i.e. scalable dotplot) allowing the users to visually validate the correctness of scaffolded contigs and in a tabular mode allowing the users to view the details of scaffolds. CSAR-web is available online at http://genome.cs.nthu.edu.tw/CSAR-web.

  3. Customizable Scientific Web Portal for Fusion Research

    Energy Technology Data Exchange (ETDEWEB)

    Abla, G; Kim, E; Schissel, D; Flannagan, S [General Atomics, San Diego (United States)

    2009-07-01

    The Web browser has become one of the major application interfaces for remotely participating in magnetic fusion experiments. Recently in other areas, web portals have begun to be deployed. These portals are used to present very diverse sources of information in a unified way. While a web portal has several benefits over other software interfaces, such as providing single point of access for multiple computational services, and eliminating the need for client software installation, the design and development of a web portal has unique challenges. One of the challenges is that a web portal needs to be fast and interactive despite a high volume of tools and information that it presents. Another challenge is the visual output on the web portal often is overwhelming due to the high volume of data generated by complex scientific instruments and experiments; therefore the applications and information should be customizable depending on the needs of users. An appropriate software architecture and web technologies can meet these problems. A web-portal has been designed to support the experimental activities of DIII-D researchers worldwide. It utilizes a multi-tier software architecture, and web 2.0 technologies, such as AJAX, Django, and Memcached, to develop a highly interactive and customizable user interface. It offers a customizable interface with personalized page layouts and list of services for users to select. The users can create a unique personalized working environment to fit their own needs and interests. Customizable services are: real-time experiment status monitoring, diagnostic data access, interactive data visualization. The web-portal also supports interactive collaborations by providing collaborative logbook, shared visualization and online instant message services. Furthermore, the web portal will provide a mechanism to allow users to create their own applications on the web portal as well as bridging capabilities to external applications such as

  4. Web Enabled DROLS Verity TopicSets

    National Research Council Canada - National Science Library

    Tong, Richard

    1999-01-01

    The focus of this effort has been the design and development of automatically generated TopicSets and HTML pages that provide the basis of the required search and browsing capability for DTIC's Web Enabled DROLS System...

  5. Teaching Critical Evaluation Skills for World Wide Web Resources.

    Science.gov (United States)

    Tate, Marsha; Alexander, Jan

    1996-01-01

    Outlines a lesson plan used by an academic library to evaluate the quality of World Wide Web information. Discusses the traditional evaluation criteria of accuracy, authority, objectivity, currency, and coverage as it applies to the unique characteristics of Web pages: their marketing orientation, variety of information, and instability. The…

  6. Developing Large Web Applications

    CERN Document Server

    Loudon, Kyle

    2010-01-01

    How do you create a mission-critical site that provides exceptional performance while remaining flexible, adaptable, and reliable 24/7? Written by the manager of a UI group at Yahoo!, Developing Large Web Applications offers practical steps for building rock-solid applications that remain effective even as you add features, functions, and users. You'll learn how to develop large web applications with the extreme precision required for other types of software. Avoid common coding and maintenance headaches as small websites add more pages, more code, and more programmersGet comprehensive soluti

  7. Reflect: a practical approach to web semantics

    DEFF Research Database (Denmark)

    O'Donoghue, S.I.; Horn, Heiko; Pafilisa, E.

    2010-01-01

    To date, adding semantic capabilities to web content usually requires considerable server-side re-engineering, thus only a tiny fraction of all web content currently has semantic annotations. Recently, we announced Reflect (http://reflect.ws), a free service that takes a more practical approach......: Reflect uses augmented browsing to allow end-users to add systematic semantic annotations to any web-page in real-time, typically within seconds. In this paper we describe the tagging process in detail and show how further entity types can be added to Reflect; we also describe how publishers and content...... web technologies....

  8. WebWise 2.0: The Power of Community. WebWise Conference on Libraries and Museums in the Digital World Proceedings (9th, Miami Beach, Florida, March 5-7, 2008)

    Science.gov (United States)

    Green, David

    2009-01-01

    Since it was coined by Tim O'Reilly in formulating the first Web 2.0 Conference in 2004, the term "Web 2.0" has definitely caught on as a designation of a second generation of Web design and experience that emphasizes a high degree of interaction with, and among, users. Rather than simply consulting and reading Web pages, the Web 2.0 generation is…

  9. A URI-based approach for addressing fragments of media resources on the Web

    NARCIS (Netherlands)

    E. Mannens; D. van Deursen; R. Troncy (Raphael); S. Pfeiffer; C. Parker (Conrad); Y. Lafon; A.J. Jansen (Jack); M. Hausenblas; R. van de Walle

    2011-01-01

    To make media resources a prime citizen on the Web, we have to go beyond simply replicating digital media files. The Web is based on hyperlinks between Web resources, and that includes hyperlinking out of resources (e.g., from a word or an image within a Web page) as well as hyperlinking

  10. Distributed data collection and supervision based on web sensor

    Science.gov (United States)

    He, Pengju; Dai, Guanzhong; Fu, Lei; Li, Xiangjun

    2006-11-01

    As a node on the Internet/Intranet, the web sensor has been promoted in recent years and widely applied in remote manufacturing and in workshop measurement and control. However, the conventional scheme supports only the HTTP protocol: because of the limited resources of the microprocessor in the sensor, remote users supervise and control the collected data published via the web in a standard browser; moreover, only one data-acquisition node can be supervised and controlled at a time, so the requirement of centralized remote supervision, control, and data processing cannot be satisfied in some fields. In this paper, centralized remote supervision, control, and data processing with web sensors are proposed and implemented following the principle of a device driver program. The useless information in each web page collected from the sensors is filtered out and the useful data is transmitted to the real-time database on the workstation, with different filter algorithms designed for different sensors possessing independent web pages. Every sensor node has its own web filter program, called the "web data collection driver program"; the collection details are hidden, and the supervision, control, and configuration software can be implemented by calling the web data collection driver program, just like using an I/O driver program. The proposed technology can be applied to data acquisition where relatively low real-time performance is required.

  11. Graphic Data Display from Manufacturing on Web Pages

    Directory of Open Access Journals (Sweden)

    Martin VALAS

    2009-06-01

    Full Text Available Industrial data can be displayed in graphical form, which is usually used by three types of users. The first are nonstop users, most frequently operational engineers, who check the currently displayed values and then intervene in the operation. The second are occasional users who are interested in historical data, e.g. for servicing reasons. The last type of users are tradesmen and managers. Comparing the state with that of a few days or months ago helps as decision-making support. A graph component with a web application, which provides data as an XML document, was designed for the second user group. The graph component displays historical data. Students can fully understand all the problems that go along with web application creation in ASP.NET, which provides data as an XML document, as well as graph component creation in the Flash integrated development environment, thanks to the solution described in detail using ActionScript.

  12. Virtual Web Services

    OpenAIRE

    Rykowski, Jarogniew

    2007-01-01

    In this paper we propose an application of software agents to provide Virtual Web Services. A Virtual Web Service is a linked collection of several real and/or virtual Web Services, and public and private agents, accessed by the user in the same way as a single real Web Service. A Virtual Web Service allows unrestricted comparison, information merging, pipelining, etc., of data coming from different sources and in different forms. Detailed architecture and functionality of a single Virtual We...

  13. Development of portal Web pages for the LHD experiment

    International Nuclear Information System (INIS)

    Emoto, Masahiko; Funaba, Hisamichi; Nakanishi, Hideya; Iwata, Chie; Yoshida, Masanori; Nagayama, Yoshio

    2011-01-01

    Because the LHD project has been operating with the cooperation of many institutes in Japan, the remote participation facilities play an important role. Therefore, NIFS has been introducing these facilities to its remote participants. Because the authors regard Web services as essential tools for current Internet communication, Web services for remote participation have been developed. However, because these services are dispersed among several servers in NIFS, users cannot find the required services easily. Therefore, the authors developed a portal Web server to list the existing and new Web services for the LHD experiment. The server provides services such as the summary graph, the plasma movie of the last plasma discharge, daily experiment logs, and daily experimental schedules. One of the most important pieces of information from these services is the summary graph. Usually, the plasma discharges of the LHD experiment are executed every three minutes. Between the discharges, the summary graph of the last plasma discharge is displayed on the front screen in the control room soon after the discharge is complete. The graph is useful in evaluating the last discharge, which is important information for determining the subsequent experiment schedule. Therefore, it is required to display the summary graph, which plots more than 10 data diagnostics, as soon as possible. On the other hand, the data-appearance time varies from one diagnostic to another. To display the graph faster, the new system retrieves the data asynchronously; several data retrieval processes work simultaneously, and the system plots the data all at once. (author)
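
    The asynchronous-retrieval idea can be sketched with a few coroutines; the diagnostic names and the fetch stub below are invented, since the real system talks to the LHD data servers.

    ```python
    # Sketch: fetch several diagnostics concurrently, then plot everything at once.
    import asyncio

    DIAGNOSTICS = ["ip", "nel", "wp", "prad", "te0"]   # hypothetical signal names

    async def fetch(diag):
        await asyncio.sleep(0.1)                       # stands in for a network/data-server call
        return diag, [0.0, 1.0, 0.5]                   # placeholder time series

    async def build_summary():
        # retrieve all diagnostics concurrently, then hand everything to the plotter at once
        results = await asyncio.gather(*(fetch(d) for d in DIAGNOSTICS))
        return dict(results)

    summary = asyncio.run(build_summary())
    print(list(summary))
    ```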

  14. Learning in a Sheltered Internet Environment: The Use of WebQuests

    Science.gov (United States)

    Segers, Eliane; Verhoeven, Ludo

    2009-01-01

    The present study investigated the effects on learning in a sheltered Internet environment using so-called WebQuests in elementary school classrooms in the Netherlands. A WebQuest is an assignment presented together with a series of web pages to help guide children's learning. The learning gains and quality of the work of 229 sixth graders…

  15. SiteGuide: An example-based approach to web site development assistance

    NARCIS (Netherlands)

    Hollink, V.; de Boer, V.; van Someren, M.; Filipe, J.; Cordeiro, J.

    2009-01-01

    We present ‘SiteGuide’, a tool that helps web designers to decide which information will be included in a new web site and how the information will be organized. SiteGuide takes as input URLs of web sites from the same domain as the site the user wants to create. It automatically searches the pages

  16. A hybrid BCI web browser based on EEG and EOG signals.

    Science.gov (United States)

    Shenghong He; Tianyou Yu; Zhenghui Gu; Yuanqing Li

    2017-07-01

    In this study, we propose a new web browser based on a hybrid brain computer interface (BCI) combining electroencephalographic (EEG) and electrooculography (EOG) signals. Specifically, the user can control the horizontal movement of the mouse by imagining left/right hand motion, and control the vertical movement of the mouse, select/reject a target, or input text in an edit box by blinking eyes in synchrony with the flashes of the corresponding buttons on the GUI. Based on mouse control, target selection and text input, the user can open a web page of interest, select an intended target in the web and read the page content. An online experiment was conducted involving five healthy subjects. The experimental results demonstrated the effectiveness of the proposed method.

  17. Nessi: An EEG-Controlled Web Browser for Severely Paralyzed Patients

    Directory of Open Access Journals (Sweden)

    Michael Bensch

    2007-01-01

    Full Text Available We have previously demonstrated that an EEG-controlled web browser based on self-regulation of slow cortical potentials (SCPs enables severely paralyzed patients to browse the internet independently of any voluntary muscle control. However, this system had several shortcomings, among them that patients could only browse within a limited number of web pages and had to select links from an alphabetical list, causing problems if the link names were identical or if they were unknown to the user (as in graphical links. Here we describe a new EEG-controlled web browser, called Nessi, which overcomes these shortcomings. In Nessi, the open source browser, Mozilla, was extended by graphical in-place markers, whereby different brain responses correspond to different frame colors placed around selectable items, enabling the user to select any link on a web page. Besides links, other interactive elements are accessible to the user, such as e-mail and virtual keyboards, opening up a wide range of hypertext-based applications.

  18. Knighthood for 'father of the web'

    CERN Multimedia

    Uhlig, R

    2003-01-01

    "Tim Berners-Lee, the father of the world wide web, was awarded a knighthood for services to the internet, which his efforts transformed from a haunt of computer geeks, scientists and the military into a global phenomenon" (1/2 page).

  19. [Improving vaccination social marketing by monitoring the web].

    Science.gov (United States)

    Ferro, A; Bonanni, P; Castiglia, P; Montante, A; Colucci, M; Miotto, S; Siddu, A; Murrone, L; Baldo, V

    2014-01-01

    Immunisation is one of the most important and cost-effective interventions in Public Health because of its significant positive impact on population health. However, since Jenner's discovery there has always been a lively debate between supporters and opponents of vaccination. Today the anti-vaccination movement spreads its message mostly on the web, disseminating inaccurate data through blogs and forums and increasing vaccine rejection. In this context, the Società Italiana di Igiene (SItI) created a web project to fight misinformation about vaccination on the web through a series of information tools, including scientific articles, educational information, video and multimedia presentations. The web portal (http://www.vaccinarsi.org) was published in May 2013, and over one hundred web pages related to vaccinations are already available. Recently a forum, a periodic newsletter and a Twitter page have been created. There has been an average of 10,000 hits per month, and current users are mostly healthcare professionals. The visibility of the site is very good: it currently ranks first in Google's search engine when typing the word "vaccinarsi". The results of the first four months of activity are extremely encouraging and show the importance of this project; furthermore, an application for quality certification by independent international organizations has been submitted.

  20. Semantic similarity measure in biomedical domain leverage web search engine.

    Science.gov (United States)

    Chen, Chi-Huang; Hsieh, Sheau-Ling; Weng, Yung-Ching; Chang, Wen-Yung; Lai, Feipei

    2010-01-01

    Semantic similarity measures play an essential role in Information Retrieval and Natural Language Processing. In this paper we propose a page-count-based semantic similarity measure and apply it in biomedical domains. Previous research in semantic-web-related applications has deployed various semantic similarity measures. Despite the usefulness of these measures in those applications, measuring the semantic similarity between two terms remains a challenging task. The proposed method exploits page counts returned by a Web search engine. We define various similarity scores for two given terms P and Q, using the page counts obtained by querying P, Q, and P AND Q. Moreover, we propose a novel approach to compute semantic similarity using lexico-syntactic patterns with page counts. These different similarity scores are integrated using support vector machines to leverage the robustness of semantic similarity measures. Experimental results on two datasets achieve correlation coefficients of 0.798 on the dataset provided by A. Hliaoutakis, 0.705 on the dataset provided by T. Pedersen with physician scores, and 0.496 on the dataset provided by T. Pedersen et al. with expert scores.
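
    To make the page-count idea concrete, the sketch below computes two commonly used page-count scores (a Jaccard-style score and a PMI-style score) from counts for P, Q and "P AND Q"; the counts are made up, and the exact score definitions and the SVM integration used in the paper may differ.

```python
import math

def web_jaccard(p: int, q: int, pq: int) -> float:
    # Jaccard-style score from page counts for P, Q, and the query "P AND Q".
    return 0.0 if pq == 0 else pq / (p + q - pq)

def web_pmi(p: int, q: int, pq: int, n: int) -> float:
    # Pointwise-mutual-information-style score; n approximates the total
    # number of pages indexed by the search engine.
    if p == 0 or q == 0 or pq == 0:
        return 0.0
    return math.log2((pq / n) / ((p / n) * (q / n)))

# Made-up page counts for two biomedical terms:
p, q, pq = 120_000, 80_000, 25_000
print(web_jaccard(p, q, pq))
print(web_pmi(p, q, pq, n=10_000_000_000))
```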

  1. Off the Beaten tracks: Exploring Three Aspects of Web Navigation

    NARCIS (Netherlands)

    Weinreich, H.; Obendorf, H.; Herder, E.; Mayer, M.; Edmonds, H.; Hawkey, K.; Kellar, M.; Turnbull, D.

    2006-01-01

    This paper presents results of a long-term client-side Web usage study, updating previous studies that range in age from five to ten years. We focus on three aspects of Web navigation: changes in the distribution of navigation actions, speed of navigation and within-page navigation. “Navigation

  2. PSB goes personal: The failure of personalised PSB web pages

    OpenAIRE

    Jannick Kirk Sørensen

    2013-01-01

    Between 2006 and 2011, a number of European public service broadcasting (PSB) organisations offered their website users the opportunity to create their own PSB homepage. The web customisation was conceived by the editors as a response to developments in commercial web services, particularly social networking and content aggregation services, but the customisation projects revealed tensions between the ideals of customer sovereignty and the editorial agenda-setting. This paper presents an over...

  3. Outreach to International Students and Scholars Using the World Wide Web.

    Science.gov (United States)

    Wei, Wei

    1998-01-01

    Describes the creation of a World Wide Web site for the Science Library International Outreach Program at the University of California, Santa Cruz. Discusses design elements, content, and promotion of the site. Copies of the home page and the page containing the outreach program's statement of purpose are included. (AEF)

  4. Checklist of accessibility in Web informational environments

    Directory of Open Access Journals (Sweden)

    Christiane Gomes dos Santos

    2017-01-01

    Full Text Available This research deals with the process by which people with blindness search, navigate and retrieve information in the web environment, drawing on the fields of information retrieval and information architecture to understand the strategies these users employ to access information on the web. It proposes the construction of an accessibility verification instrument, a checklist, to be used to analyze the behavior of people with blindness in search, navigation and retrieval actions on sites and pages. The study is exploratory and descriptive, of a qualitative nature, and uses a case study methodology: search, navigation and information retrieval were simulated with the NonVisual Desktop Access speech synthesis system in an assistive technologies laboratory to support the construction of the checklist. The reliability of the research and its importance for evaluating accessibility in the web environment are considered, with the aim of improving access to information for people with reading limitations and of providing a checklist for use in website and page accessibility analysis.

  5. Designing a Web Spam Classifier Based on Feature Fusion in the Layered Multi-Population Genetic Programming Framework

    Directory of Open Access Journals (Sweden)

    Amir Hosein KEYHANIPOUR

    2013-11-01

    Full Text Available Nowadays, Web spam pages are a critical challenge for Web retrieval systems, as they have a drastic influence on the performance of such systems. Although these systems try to combat the impact of spam pages on their final result lists, spammers increasingly use more sophisticated techniques to increase the number of views for their intended pages in order to have more commercial success. This paper employs the recently proposed Layered Multi-population Genetic Programming model for the Web spam detection task, together with correlation coefficient analysis for feature-space reduction. Based on our tentative results, the designed classifier, which is based on a combination of easy-to-compute features, shows very reasonable performance in comparison with similar methods.
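
    As a rough illustration of the correlation-coefficient feature-space reduction mentioned above (not the paper's exact procedure, and independent of the genetic-programming classifier), the following Python sketch drops one feature from every pair whose absolute Pearson correlation exceeds a threshold.

```python
import numpy as np

def drop_correlated(X: np.ndarray, threshold: float = 0.95) -> list[int]:
    # Return the indices of columns to keep after removing one feature from
    # every pair whose absolute Pearson correlation exceeds the threshold.
    corr = np.abs(np.corrcoef(X, rowvar=False))
    keep: list[int] = []
    for j in range(corr.shape[0]):
        if all(corr[j, k] <= threshold for k in keep):
            keep.append(j)
    return keep

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
X[:, 3] = 0.99 * X[:, 0] + rng.normal(scale=0.01, size=200)  # near-duplicate feature
print(drop_correlated(X))  # column 3 is dropped, the rest are kept
```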

  6. Distribution of pagerank mass among principle components of the web

    NARCIS (Netherlands)

    Avrachenkov, Konstatin; Litvak, Nelli; Pham, Kim Son; Bonato, A.; Chung, F.R.K.

    2007-01-01

    We study the PageRank mass of principal components in a bow-tie Web Graph, as a function of the damping factor c. Using a singular perturbation approach, we show that the PageRank share of IN and SCC components remains high even for very large values of the damping factor, in spite of the fact that

  7. Neutralizing SQL Injection Attack Using Server Side Code Modification in Web Applications

    OpenAIRE

    Dalai, Asish Kumar; Jena, Sanjay Kumar

    2017-01-01

    Reports on web application security risks show that SQL injection is the topmost vulnerability. The shift from static to dynamic web pages led to the use of databases in web applications. Due to the lack of secure coding techniques, the SQL injection vulnerability prevails in a large set of web applications. A successful SQL injection attack imposes a serious threat to the database, the web application, and the entire web server. In this article, the authors have proposed a novel method for prevent...
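
    The abstract does not detail the proposed server-side modification, so the sketch below shows only the standard defence against this vulnerability class, parameterized queries, using Python's sqlite3 module; it is not the method proposed in the article.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, password TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

def login_unsafe(name: str, password: str) -> bool:
    # Vulnerable: attacker input such as "' OR '1'='1" is spliced into the SQL text.
    query = f"SELECT 1 FROM users WHERE name = '{name}' AND password = '{password}'"
    return conn.execute(query).fetchone() is not None

def login_safe(name: str, password: str) -> bool:
    # Parameterized query: the driver passes the values separately from the SQL text.
    query = "SELECT 1 FROM users WHERE name = ? AND password = ?"
    return conn.execute(query, (name, password)).fetchone() is not None

print(login_unsafe("alice", "' OR '1'='1"))  # True: the injection bypasses the check
print(login_safe("alice", "' OR '1'='1"))    # False: the input is treated as data
```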

  8. Trust estimation of the semantic web using semantic web clustering

    Science.gov (United States)

    Shirgahi, Hossein; Mohsenzadeh, Mehran; Haj Seyyed Javadi, Hamid

    2017-05-01

    The development of the semantic web and social networks is undeniable in today's Internet world. The widespread nature of the semantic web makes assessing trust in this field very challenging. In recent years, extensive research has been done to estimate the trust of the semantic web. Since trust in the semantic web is a multidimensional problem, in this paper we use social network authority, the authority value of page links, and semantic authority as parameters to assess trust. Due to the large space of the semantic network, we restrict the problem scope to clusters of semantic subnetworks, obtain the trust of each cluster's elements locally, and calculate the trust of outside resources according to their local trust values and the trust of clusters in each other. According to the experimental results, the proposed method achieves an F-score of more than 79%, which is on average about 11.9% higher than the Eigen, Tidal and centralised trust methods. The mean error of the proposed method is 12.936, which is on average 9.75% lower than that of the Eigen and Tidal trust methods.

  9. Longitudinal navigation log data on a large web domain

    NARCIS (Netherlands)

    Verberne, S.; Arends, B.; Kraaij, W.; Vries, A. de

    2016-01-01

    We have collected the access logs for our university's web domain over a time span of 4.5 years. We now release the pre-processed data of a 3-month period for research into user navigation behavior. We preprocessed the data so that only successful GET requests of web pages by non-bot users are kept.
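
    As an illustration of the kind of preprocessing described above, the Python sketch below keeps only successful GET requests for pages (not static assets) from non-bot users; the combined log format and the bot heuristic are assumptions about raw access logs in general, not a description of the released dataset.

```python
import re

# Apache/nginx "combined" log format; the field layout is an assumption.
LINE = re.compile(
    r'(?P<ip>\S+) \S+ \S+ \[(?P<ts>[^\]]+)\] "(?P<method>\S+) (?P<url>\S+) \S+" '
    r'(?P<status>\d{3}) \S+ "(?P<referer>[^"]*)" "(?P<agent>[^"]*)"'
)
BOT = re.compile(r"bot|crawler|spider|slurp", re.IGNORECASE)
ASSET = re.compile(r"\.(css|js|png|gif|jpg|ico)(\?|$)")

def keep(line: str) -> bool:
    m = LINE.match(line)
    if not m:
        return False
    return (
        m["method"] == "GET"
        and m["status"].startswith("2")   # successful requests only
        and not BOT.search(m["agent"])    # drop known bots
        and not ASSET.search(m["url"])    # keep pages, not embedded assets
    )

sample = '1.2.3.4 - - [01/Jan/2016:12:00:00 +0100] "GET /studies/ HTTP/1.1" 200 5123 "-" "Mozilla/5.0"'
print(keep(sample))  # True
```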

  10. Adding a visualization feature to web search engines: it's time.

    Science.gov (United States)

    Wong, Pak Chung

    2008-01-01

    It's widely recognized that all Web search engines today are almost identical in presentation layout and behavior. In fact, the same presentation approach has been applied to depicting search engine results pages (SERPs) since the first Web search engine launched in 1993. In this Visualization Viewpoints article, I propose to add a visualization feature to Web search engines and suggest that the new addition can improve search engines' performance and capabilities, which in turn lead to better Web search technology.

  11. WAPTT - Web Application Penetration Testing Tool

    Directory of Open Access Journals (Sweden)

    DURIC, Z.

    2014-02-01

    Full Text Available Web application vulnerabilities allow attackers to perform malicious actions that range from gaining unauthorized account access to obtaining sensitive data. The number of reported web application vulnerabilities has increased dramatically in the last decade. Most of the vulnerabilities result from improper input validation and sanitization. The most important of these vulnerabilities are SQL injection (SQLI), Cross-Site Scripting (XSS) and Buffer Overflow (BOF). In order to address these vulnerabilities we designed and developed WAPTT (Web Application Penetration Testing Tool). Unlike other web application penetration testing tools, this tool is modular and can be easily extended by the end user. In order to improve the efficiency of SQLI vulnerability detection, WAPTT uses an efficient algorithm for page similarity detection. The proposed tool showed promising results as compared to six well-known web application scanners in detecting various web application vulnerabilities.

  12. It's Time to Use a Wiki as Part of Your Web Site

    Science.gov (United States)

    Ribaric, Tim

    2007-01-01

    Without a doubt, the term "wiki" has leaked into almost every discussion concerning Web 2.0. The real question becomes: Is there a place for a wiki on every library Web site? The answer should be an emphatic "yes." People often praise the wiki because it offers simple page creation and provides instant gratification for amateur Web developers.…

  13. Web-based X-ray quality control documentation.

    Science.gov (United States)

    David, George; Burnett, Lou Ann; Schenkel, Robert

    2003-01-01

    The department of radiology at the Medical College of Georgia Hospital and Clinics has developed an equipment quality control web site. Our goal is to provide immediate access to virtually all medical physics survey data. The web site is designed to assist equipment engineers, department management and technologists. By improving communications and access to equipment documentation, we believe productivity is enhanced. The creation of the quality control web site was accomplished in three distinct steps. First, survey data had to be placed in a computer format. The second step was to convert these various computer files to a format supported by commercial web browsers. Third, a comprehensive home page had to be designed to provide convenient access to the multitude of surveys done in the various x-ray rooms. Because we had spent years previously fine-tuning the computerization of the medical physics quality control program, most survey documentation was already in spreadsheet or database format. A major technical decision was the method of conversion of survey spreadsheet and database files into documentation appropriate for the web. After an unsatisfactory experience with a HyperText Markup Language (HTML) converter (packaged with spreadsheet and database software), we tried creating Portable Document Format (PDF) files using Adobe Acrobat software. This process preserves the original formatting of the document and takes no longer than conventional printing; therefore, it has been very successful. Although the PDF file generated by Adobe Acrobat is a proprietary format, it can be displayed through a conventional web browser using the freely distributed Adobe Acrobat Reader program that is available for virtually all platforms. Once a user installs the software, it is automatically invoked by the web browser whenever the user follows a link to a file with a PDF extension. Although no confidential patient information is available on the web site, our legal

  14. Fifteen-year trend in information on the World Wide Web for patients with rheumatoid arthritis: evolving, but opportunities for improvement remain.

    Science.gov (United States)

    Castillo-Ortiz, Jose Dionisio; de Jesus Valdivia-Nuno, Jose; Ramirez-Gomez, Andrea; Garagarza-Mariscal, Heber; Gallegos-Rios, Carlos; Flores-Hernandez, Gabriel; Hernandez-Sanchez, Luis; Brambila-Barba, Victor; Castaneda-Sanchez, Jose Juan; Barajas-Ochoa, Zalathiel; Suarez-Rico, Angel; Sanchez-Gonzalez, Jorge Manuel; Ramos-Remus, Cesar

    2016-09-01

    The aim of this study was to assess the changes in the characteristics of rheumatoid arthritis information on the Internet over a 15-year period and the positioning of Web sites posted by universities, hospitals, and medical associations. We replicated the methods of a 2001 study assessing rheumatoid arthritis information on the Internet using WebCrawler. All Web sites and pages were critically assessed for relevance, scope, authorship, type of publication, and financial objectives. Differences between studies were considered significant if 95 % confidence intervals did not overlap. Additionally, we added a Google search with assessments of the quality of content of web pages and of the Web sites posted by medical institutions. There were significant differences between the present study's WebCrawler search and the 2001-referent study. There were increases in information sites (82 vs 36 %) and rheumatoid arthritis-specific discussion pages (59 vs 8 %), and decreases in advertisements (2 vs 48 %) and alternative therapies (27 vs 45 %). The quality of content of web pages is still dispersed; just 37 % were rated as good. Among the first 300 hits, 30 (10 %) were posted by medical institutions, 17 of them in the USA. Regarding readability, 7 % of these 30 web pages required 6 years, 27 % required 7-9 years, 27 % required 10-12 years, and 40 % required 12 or more years of schooling. The Internet has evolved in the last 15 years. Medical institutions are also better positioned. However, there are still areas for improvement, such as the quality of the content, leadership of medical institutions, and readability of information.

  15. Untitled

    Directory of Open Access Journals (Sweden)

    Terrence A. Brooks

    2006-01-01

    Full Text Available Background: User scripting heralds a paradigm shift towards web reader empowerment. Powerful web writers of the first decade of the Web needed to be cautioned about usability and accessibility issues. As power shifts to web readers, they become capable of customizing web pages to their own tastes and purposes. This paper describes the development of the Greasemonkey extension of the Firefox browser. Argument: User scripting is a product of the development of the open source browser and of individual developers who wish to change webpages. The Greasemonkey extension of the Firefox browser permits web readers to write JavaScripts that (1) change the look and feel of Web pages, (2) change the functionality of web page controls, and (3) facilitate Web page 'mashups', hybrid web presentations composed of content from two or more web pages. The only naturally occurring limit to web page modification may be difficult Web page source code. Tools that shield Web readers from the complexity of HTML are being introduced. Conclusion: The paradigm shift to Web readers, armed with powerful and easy-to-use tools for customizing Web pages, heralds a new era of the Web. It threatens the idea that a Web page has a single look and feel, and emphasizes the trend to design Web pages as mere input to the reading experience, subject to modification by the presentation device as well as by reader taste and purpose.

  16. A new means of communication with the populations: the Extremadura Regional Government Radiological Monitoring alert WEB Page

    International Nuclear Information System (INIS)

    Baeza, A.; Vasco, J.; Miralles, Y.; Torrado, L.; Gil, J. M.

    2003-01-01

    XXI a summary sheet, relatively easy to interpret, giving the radiation levels and dosimetry detected during the immediately preceding semester. Recently, too, the challenge has been taken on of providing constantly updated information on as complex a topic as the radiological monitoring of the environment. To this end, a Web page has been developed dealing with the operation and results of the aforementioned Radiological Warning Network of Extremadura. The page is structured in seven major blocks: (i) origin and objectives of the network; (ii) a description of the stations of the network; (iii) their modes of operation in normal circumstances and in the case of an operational or radiological anomaly; (iv) the results that the network provides; (v) a glossary of terms to clarify, as straightforwardly as possible, some of the terms and concepts that are of unavoidable use but unfamiliar to the general population; (vi) links to other Web sites that also deal with this issue to some degree; and (vii) an option for questions and contact between visitors to the page and those responsible for its creation and maintenance. Actions such as that described here will doubtless contribute positively to increasing the necessary trust that the population deserves to have in the correct operation of the measures adopted to guarantee their adequate radiological protection. (Author)

  17. DW3 Classical Music Resources: Managing Mozart on the Web.

    Science.gov (United States)

    Fineman, Yale

    2001-01-01

    Discusses the development of DW3 (Duke World Wide Web) Classical Music Resources, a vertical portal that comprises the most comprehensive collection of classical music resources on the Web with links to more than 2800 non-commercial pages/sites in over a dozen languages. Describes the hierarchical organization of subject headings and considers…

  18. World Wide Web Usage Mining Systems and Technologies

    Directory of Open Access Journals (Sweden)

    Wen-Chen Hu

    2003-08-01

    Full Text Available Web usage mining is used to discover interesting user navigation patterns and can be applied to many real-world problems, such as improving Web sites/pages, making additional topic or product recommendations, user/customer behavior studies, etc. This article provides a survey and analysis of current Web usage mining systems and technologies. A Web usage mining system performs five major tasks: (i) data gathering, (ii) data preparation, (iii) navigation pattern discovery, (iv) pattern analysis and visualization, and (v) pattern applications. Each task is explained in detail and its related technologies are introduced. A list of major research systems and projects concerning Web usage mining is also presented, and a summary of Web usage mining is given in the last section.

  19. Food marketing on popular children's web sites: a content analysis.

    Science.gov (United States)

    Alvy, Lisa M; Calvert, Sandra L

    2008-04-01

    In 2006 the Institute of Medicine (IOM) concluded that food marketing was a contributor to childhood obesity in the United States. One recommendation of the IOM committee was for research on newer marketing venues, such as Internet Web sites. The purpose of this cross-sectional study was to answer the IOM's call by examining food marketing on popular children's Web sites. Ten Web sites were selected based on market research conducted by KidSay, which identified favorite sites of children aged 8 to 11 years during February 2005. Using a standardized coding form, these sites were examined page by page for the existence, type, and features of food marketing. Web sites were compared using chi2 analyses. Although food marketing was not pervasive on the majority of the sites, seven of the 10 Web sites contained food marketing. The products marketed were primarily candy, cereal, quick serve restaurants, and snacks. Candystand.com, a food product site, contained a significantly greater amount of food marketing than the other popular children's Web sites. Because the foods marketed to children are not consistent with a healthful diet, nutrition professionals should consider joining advocacy groups to pressure industry to reduce online food marketing directed at youth.

  20. Demonstration: SpaceExplorer - A Tool for Designing Ubiquitous Web Applications for Collections of Displays

    DEFF Research Database (Denmark)

    Hansen, Thomas Riisgaard

    2007-01-01

    This demonstration presents a simple browser plug-in that grants web applications the ability to use multiple nearby devices for displaying web content. A web page can, e.g., be designed to present additional information on nearby devices. The demonstration introduces a lightweight peer-to-peer arc...

  1. Corporate Writing in the Web of Postmodern Culture and Postindustrial Capitalism.

    Science.gov (United States)

    Boje, David M.

    2001-01-01

    Uses Nike as an example to explore the impact of corporate writing (in annual reports, press releases, advertisements, web pages, sponsored research, and consultant reports). Shows how the intertextual web of "Nike Writing," as it legitimates industry-wide labor and ecological practices has significant, negative consequences for academic…

  2. Oh What a Tangled Biofilm Web Bacteria Weave

    Science.gov (United States)

    By Elia Ben-Ari. Posted May 1. ... a suitable surface, some water and nutrients, and bacteria will likely put down stakes and form biofilms. ...

  3. 77 FR 60138 - Trinity Adaptive Management Working Group; Public Teleconference/Web-Based Meeting

    Science.gov (United States)

    2012-10-02

    ... meeting. Background The TAMWG affords stakeholders the opportunity to give policy, management, and...-FF08EACT00] Trinity Adaptive Management Working Group; Public Teleconference/ Web-Based Meeting AGENCY: Fish..., announce a public teleconference/web-based meeting of [[Page 60139

  4. BOWS (bioinformatics open web services) to centralize bioinformatics tools in web services.

    Science.gov (United States)

    Velloso, Henrique; Vialle, Ricardo A; Ortega, J Miguel

    2015-06-02

    Bioinformaticians face a range of difficulties in getting locally installed tools running and producing results; they would greatly benefit from a system that could centralize most of the tools behind an easy interface for input and output. Web services, due to their universal nature and widely known interface, constitute a very good option to achieve this goal. Bioinformatics Open Web Services (BOWS) is a system based on generic web services produced to allow programmatic access to applications running on high-performance computing (HPC) clusters. BOWS mediates access to registered tools by providing front-end and back-end web services. Programmers can install applications on HPC clusters in any programming language and use the back-end service to check for new jobs and their parameters, and then to send the results to BOWS. Programs running on simple computers consume the BOWS front-end service to submit new processes and read results. BOWS compiles Java clients, which encapsulate the front-end web service requests, and automatically creates a web page that lists the registered applications and clients. Applications registered with BOWS can be accessed from virtually any programming language through web services, or using the standard Java clients. The back-end can run on HPC clusters, allowing bioinformaticians to remotely run high-processing-demand applications directly from their machines.
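
    The job-polling pattern described above can be sketched as follows; the endpoint URLs, payload fields and the run_tool helper are hypothetical placeholders, since the abstract does not specify the actual BOWS interface.

```python
import time
import requests

# Hypothetical base URL and routes standing in for the BOWS back-end service;
# the real operation names and payloads are not given in the abstract.
BASE = "https://bows.example.org/backend"

def run_tool(parameters: dict) -> str:
    # Placeholder for the locally installed HPC application.
    return f"processed {parameters}"

def worker_loop(tool_id: str, poll_seconds: float = 10.0) -> None:
    # Cluster-side worker: poll for new jobs, run the tool, post the results back.
    while True:
        resp = requests.get(f"{BASE}/jobs/next", params={"tool": tool_id}, timeout=30)
        job = resp.json() if resp.ok else None
        if job:
            output = run_tool(job["parameters"])
            requests.post(f"{BASE}/jobs/{job['id']}/result", json={"output": output}, timeout=30)
        time.sleep(poll_seconds)

if __name__ == "__main__":
    worker_loop("example-tool")
```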

  5. An Optimization Model for Product Placement on Product Listing Pages

    Directory of Open Access Journals (Sweden)

    Yan-Kwang Chen

    2014-01-01

    Full Text Available The design of product listing pages is a key component of Web site design because it has a significant influence on the sales volume of a Web site. This study focuses on product placement in designing product listing pages. Product placement concerns how vendors of online stores place their products on the product listing pages to maximize profit. This problem is very similar to the offline shelf management problem. Since product information on a Web page is typically communicated through text and images, visual stimuli such as color, shape, size, and spatial arrangement often have an effect on the visual attention of online shoppers and, in turn, influence their eventual purchase decisions. In view of the above, this study synthesizes the visual attention literature and the theory of shelf-space allocation to develop a mathematical programming model, solved with genetic algorithms, for finding optimal solutions to the focused issue. The validity of the model is illustrated with example problems.

  6. The invisible Web uncovering information sources search engines can't see

    CERN Document Server

    Sherman, Chris

    2001-01-01

    Enormous expanses of the Internet are unreachable with standard web search engines. This book provides the key to finding these hidden resources by identifying how to uncover and use invisible web resources. Mapping the invisible Web, when and how to use it, assessing the validity of the information, and the future of Web searching are topics covered in detail. Only 16 percent of Net-based information can be located using a general search engine. The other 84 percent is what is referred to as the invisible Web-made up of information stored in databases. Unlike pages on the visible Web, informa

  7. Developing heuristics for Web communication: an introduction to this special issue

    NARCIS (Netherlands)

    van der Geest, Thea; Spyridakis, Jan H.

    2000-01-01

    This article describes the role of heuristics in the Web design process. The five sets of heuristics that appear in this issue are also described, as well as the research methods used in their development. The heuristics were designed to help designers and developers of Web pages or sites to

  8. PageRank model of opinion formation on Ulam networks

    Science.gov (United States)

    Chakhmakhchyan, L.; Shepelyansky, D.

    2013-12-01

    We consider a PageRank model of opinion formation on Ulam networks, generated by the intermittency map and the typical Chirikov map. The Ulam networks generated by these maps have certain similarities with such scale-free networks as the World Wide Web (WWW), showing an algebraic decay of the PageRank probability. We find that the opinion formation process on Ulam networks has certain similarities but also distinct features comparing to the WWW. We attribute these distinctions to internal differences in network structure of the Ulam and WWW networks. We also analyze the process of opinion formation in the frame of generalized Sznajd model which protects opinion of small communities.
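
    For reference, the PageRank vector discussed above can be computed by standard power iteration with damping factor c; the sketch below uses a tiny made-up graph and a dense matrix, which is sufficient for illustration but not for web-scale networks such as the WWW or large Ulam networks.

```python
import numpy as np

def pagerank(adj: np.ndarray, c: float = 0.85, tol: float = 1e-10) -> np.ndarray:
    # Power iteration for PageRank with damping factor c on a dense adjacency matrix.
    n = adj.shape[0]
    out = adj.sum(axis=1)
    # Dangling nodes (no out-links) are treated as linking to every node.
    P = np.where(out[:, None] > 0, adj / np.maximum(out, 1)[:, None], 1.0 / n)
    r = np.full(n, 1.0 / n)
    while True:
        r_next = c * (r @ P) + (1.0 - c) / n
        if np.abs(r_next - r).sum() < tol:
            return r_next
        r = r_next

# Tiny example graph: 0 -> 1, 1 -> 2, 2 -> 0 and 2 -> 1.
A = np.array([[0, 1, 0],
              [0, 0, 1],
              [1, 1, 0]], dtype=float)
print(pagerank(A, c=0.85))
```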

  9. Using ant-behavior-based simulation model AntWeb to improve website organization

    Science.gov (United States)

    Li, Weigang; Pinheiro Dib, Marcos V.; Teles, Wesley M.; Morais de Andrade, Vlaudemir; Alves de Melo, Alba C. M.; Cariolano, Judas T.

    2002-03-01

    Some web usage mining algorithms have shown potential for finding differences between the actual organization of a website and the organization expected by its visitors. However, there is still no efficient method or criterion for a web administrator to measure the effect of a modification. In this paper, we developed AntWeb, a model inspired by ants' behavior, to simulate the sequence of visits to a website and thereby measure the efficiency of the web structure. We implemented a web usage mining algorithm using backtracking on the intranet website of Politec Informatic Ltd., Brazil. We defined throughput (the number of visitors who reach their target pages per unit time, relative to the total number of visitors) as an index to measure the website's performance. We also used the links in a web page to represent the effect of visitors' pheromone trails. For every modification in the website organization, for example putting a link from the expected location to the target object, the simulation reported the value of throughput as a quick answer about this modification. The experiment showed the stability of our simulation model and a positive effect of the modification to the Politec intranet website.
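
    A toy version of the throughput index can be sketched as follows; the site graph, the uniform random choice of links (instead of pheromone-weighted choices) and the click limit are all assumptions made for the example, not the AntWeb model itself.

```python
import random

# Hypothetical site graph: page -> list of linked pages.
SITE = {
    "home": ["products", "news", "about"],
    "products": ["item_a", "item_b"],
    "news": ["home"],
    "about": ["home"],
    "item_a": [],
    "item_b": [],
}

def throughput(target: str, visitors: int = 10_000, max_clicks: int = 4, seed: int = 1) -> float:
    # Fraction of simulated visitors who reach the target page within max_clicks,
    # following links uniformly at random (a crude stand-in for pheromone-guided walks).
    rng = random.Random(seed)
    hits = 0
    for _ in range(visitors):
        page = "home"
        for _ in range(max_clicks + 1):
            if page == target:
                hits += 1
                break
            links = SITE.get(page, [])
            if not links:
                break
            page = rng.choice(links)
    return hits / visitors

print(throughput("item_a"))    # before the modification
SITE["home"].append("item_a")  # modification: add a link where visitors expect it
print(throughput("item_a"))    # throughput increases
```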

  10. SWORS: a system for the efficient retrieval of relevant spatial web objects

    DEFF Research Database (Denmark)

    Cao, Xin; Cong, Gao; Jensen, Christian S.

    2012-01-01

    Spatial web objects that possess both a geographical location and a textual description are gaining in prevalence. This gives prominence to spatial keyword queries that exploit both location and textual arguments. Such queries are used in many web services such as yellow pages and maps services....

  11. Perché è necessario capire il web

    CERN Multimedia

    Rosenthal, Edward C

    2007-01-01

    The birth of the web of contents, Web 2.0, Wikipedia, global collaboration, blogs, net neutrality, digital freedom and the future of the net: an interview with Robert Cailliau, who invented the WWW together with Tim Berners-Lee. (2 pages)

  12. PageMan: An interactive ontology tool to generate, display, and annotate overview graphs for profiling experiments

    Directory of Open Access Journals (Sweden)

    Hannah Matthew A

    2006-12-01

    Full Text Available Abstract Background Microarray technology has become a widely accepted and standardized tool in biology. The first microarray data analysis programs were developed to support pair-wise comparison. However, as microarray experiments have become more routine, large scale experiments have become more common, which investigate multiple time points or sets of mutants or transgenics. To extract biological information from such high-throughput expression data, it is necessary to develop efficient analytical platforms, which combine manually curated gene ontologies with efficient visualization and navigation tools. Currently, most tools focus on a few limited biological aspects, rather than offering a holistic, integrated analysis. Results Here we introduce PageMan, a multiplatform, user-friendly, and stand-alone software tool that annotates, investigates, and condenses high-throughput microarray data in the context of functional ontologies. It includes a GUI tool to transform different ontologies into a suitable format, enabling the user to compare and choose between different ontologies. It is equipped with several statistical modules for data analysis, including over-representation analysis and Wilcoxon statistical testing. Results are exported in a graphical format for direct use, or for further editing in graphics programs. PageMan provides a fast overview of single treatments, allows genome-level responses to be compared across several microarray experiments covering, for example, stress responses at multiple time points. This aids in searching for trait-specific changes in pathways using mutants or transgenics, analyzing development time-courses, and comparison between species. In a case study, we analyze the results of publicly available microarrays of multiple cold stress experiments using PageMan, and compare the results to a previously published meta-analysis. PageMan offers a complete user's guide, a web-based over-representation analysis as

  13. Using Web Server Logs to Track Users through the Electronic Forest

    Science.gov (United States)

    Coombs, Karen A.

    2005-01-01

    This article analyzes server logs, providing helpful information in making decisions about Web-based services. The author indicates, as a result of analyzing server logs, several interesting things about the users' behavior were learned. The resulting findings are discussed in this article. Certain pages of the author's Web site, for instance, are…

  14. Single-item screening for agoraphobic symptoms : validation of a web-based audiovisual screening instrument

    NARCIS (Netherlands)

    van Ballegooijen, Wouter; Riper, Heleen; Donker, Tara; Martin Abello, Katherina; Marks, Isaac; Cuijpers, Pim

    2012-01-01

    The advent of web-based treatments for anxiety disorders creates a need for quick and valid online screening instruments, suitable for a range of social groups. This study validates a single-item multimedia screening instrument for agoraphobia, part of the Visual Screener for Common Mental Disorders

  15. Estadísticas de visitas en portales web institucionales como indicador de respuesta del público a propuestas de divulgación

    Science.gov (United States)

    Lares, M.

    The presence of institutions on the internet is nowadays very important to strenghten communication channels, both internal and with the general public. The Córdoba Observatory has several web portals, including the official web page, a blog and presence on several social networks. These are one of the fundamental pillars for outreach activities, and serve as communication channel for events and scientific, academic, and outreach news. They are also a source of information for the staff, as well as data related to the Observatory internal organization and scientific production. Several statistical studies are presented, based on data taken from the visits to the official web pages. I comment on some aspects of the role of web pages as a source of consultation and as a quick response to information needs. FULL TEXT IN SPANISH

  16. A Study of HTML Title Tag Creation Behavior of Academic Web Sites

    Science.gov (United States)

    Noruzi, Alireza

    2007-01-01

    The HTML title tag information should identify and describe exactly what a Web page contains. This paper analyzes the "Title element" and raises a significant question: "Why is the title tag important?" Search engines base search results and page rankings on certain criteria. Among the most important criteria is the presence of the search keywords…

  17. Discovering How Students Search a Library Web Site: A Usability Case Study.

    Science.gov (United States)

    Augustine, Susan; Greene, Courtney

    2002-01-01

    Discusses results of a usability study at the University of Illinois Chicago that investigated whether Internet search engines have influenced the way students search library Web sites. Results show students use the Web site's internal search engine rather than navigating through the pages; have difficulty interpreting library terminology; and…

  18. RCrawler: An R package for parallel web crawling and scraping

    Directory of Open Access Journals (Sweden)

    Salim Khalil

    2017-01-01

    Full Text Available RCrawler is a contributed R package for domain-based web crawling and content scraping. As the first implementation of a parallel web crawler in the R environment, RCrawler can crawl, parse, and store pages, extract their contents, and produce data that can be directly employed for web content mining applications. However, it is also flexible, and could be adapted to other applications. The main features of RCrawler are multi-threaded crawling, content extraction, and duplicate content detection. In addition, it includes functionalities such as URL and content-type filtering, depth-level control, and a robots.txt parser. Our crawler has a highly optimized system, and can download a large number of pages per second while being robust against certain crashes and spider traps. In this paper, we describe the design and functionality of RCrawler, and report on our experience of implementing it in an R environment, including different optimizations that handle the limitations of R. Finally, we discuss our experimental results.
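
    RCrawler itself is an R package; as a language-neutral illustration of the same ideas (domain-restricted breadth-first crawling with hash-based duplicate-content detection), here is a minimal single-threaded Python sketch. It omits the robots.txt handling, politeness delays and multi-threading that RCrawler provides, and the start URL is only an example.

```python
from collections import deque
from urllib.parse import urljoin, urldefrag, urlparse
import hashlib
import re
import requests

def crawl(start: str, max_pages: int = 20) -> dict[str, str]:
    # Breadth-first, same-domain crawl with duplicate-content detection via hashing.
    domain = urlparse(start).netloc
    queue, seen, pages, digests = deque([start]), {start}, {}, set()
    while queue and len(pages) < max_pages:
        url = queue.popleft()
        try:
            resp = requests.get(url, timeout=5)
        except requests.RequestException:
            continue
        if "text/html" not in resp.headers.get("Content-Type", ""):
            continue
        digest = hashlib.sha1(resp.text.encode()).hexdigest()
        if digest in digests:          # duplicate content: skip the page
            continue
        digests.add(digest)
        pages[url] = resp.text
        for href in re.findall(r'href="([^"#]+)"', resp.text):
            link = urldefrag(urljoin(url, href)).url
            if urlparse(link).netloc == domain and link not in seen:
                seen.add(link)
                queue.append(link)
    return pages

print(len(crawl("https://example.org/")))
```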

  19. 76 FR 14034 - Proposed Collection; Comment Request; NCI Cancer Genetics Services Directory Web-Based...

    Science.gov (United States)

    2011-03-15

    ... DEPARTMENT OF HEALTH AND HUMAN SERVICES National Institutes of Health Proposed Collection; Comment Request; NCI Cancer Genetics Services Directory Web-Based Application Form and Update Mailer Summary: In... Cancer Genetics Services Directory Web-based Application Form and Update Mailer. [[Page 14035

  20. Web Annotation and Threaded Forum: How Did Learners Use the Two Environments in an Online Discussion?

    Science.gov (United States)

    Sun, Yanyan; Gao, Fei

    2014-01-01

    Web annotation is a Web 2.0 technology that allows learners to work collaboratively on web pages or electronic documents. This study explored the use of Web annotation as an online discussion tool by comparing it to a traditional threaded discussion forum. Ten graduate students participated in the study. Participants had access to both a Web…

  1. Web-based Logbook System for EAST Experiments

    International Nuclear Information System (INIS)

    Yang Fei; Xiao Bingjia

    2010-01-01

    The implementation of a web-based logbook system on EAST is introduced; it stores comments on the experiments in a database and makes the documents accessible via various web browsers. A three-tier software architecture and asynchronous access technology are adopted to improve the system's effectiveness. Authorized users can view information on the real-time discharge, comments from others and signal plots; add, delete, or revise their own comments; search signal data or comments under complicated search conditions; and collect relevant information and export it to an Excel file. The web pages are automatically updated after a new discharge is completed, without manual refreshing.

  2. Academic medical center libraries on the Web.

    Science.gov (United States)

    Tannery, N H; Wessel, C B

    1998-10-01

    Academic medical center libraries are moving towards publishing electronically, utilizing networked technologies, and creating digital libraries. The catalyst for this movement has been the Web. An analysis of academic medical center library Web pages was undertaken to assess the information created and communicated in early 1997. A summary of present uses and suggestions for future applications is provided. A method for evaluating and describing the content of library Web sites was designed. The evaluation included categorizing basic information such as description and access to library services, access to commercial databases, and use of interactive forms. The main goal of the evaluation was to assess original resources produced by these libraries.

  3. Using JavaScript and the FDSN web service to create an interactive earthquake information system

    Science.gov (United States)

    Fischer, Kasper D.

    2015-04-01

    The FDSN web service provides a web interface for accessing earthquake metadata (e.g. event or station information) and waveform data over the internet. Requests are sent to a server as URLs and the output is either XML or miniSEED. This makes it hard for humans to read but easy to process with different software. Several data centers already support the FDSN web service, e.g. USGS, IRIS, and ORFEUS. The FDSN web service is also part of the Seiscomp3 (http://www.seiscomp3.org) software. The Seismological Observatory of the Ruhr-University switched to Seiscomp3 as the standard software for the analysis of mining-induced earthquakes at the beginning of 2014. This made it necessary to create a new web-based earthquake information service for publishing results to the general public. This has been done by processing the output of an FDSN web service query with JavaScript running in a standard browser. The result is an interactive map presenting the observed events, together with further information on events and stations, on a single web page as a table and on a map. In addition, the user can download event information, waveform data and station data in different formats like miniSEED, quakeML or FDSNxml. The developed code and all used libraries are open source and freely available.
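
    A minimal client-side query against an FDSN event service can look like the following Python sketch; the IRIS endpoint is used only as an example, and the text output format and its column layout are conventions of that deployment that may differ at other data centres (the specification's default output is QuakeML XML).

```python
import requests

# Example FDSN event web service endpoint (IRIS); other data centres expose
# the same interface under their own host names.
URL = "https://service.iris.edu/fdsnws/event/1/query"
params = {
    "starttime": "2015-01-01",
    "endtime": "2015-01-31",
    "minmagnitude": 6.0,
    "format": "text",  # pipe-separated rows; omit to receive QuakeML XML instead
}

resp = requests.get(URL, params=params, timeout=30)
resp.raise_for_status()

for line in resp.text.splitlines():
    if not line or line.startswith("#"):
        continue
    fields = line.split("|")
    # Assumed column layout: EventID|Time|Latitude|Longitude|Depth/km|...|Magnitude|...|EventLocationName
    if len(fields) >= 13:
        print(fields[1], fields[2], fields[3], fields[10], fields[12])
```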

  4. Web Accessibility and Guidelines

    Science.gov (United States)

    Harper, Simon; Yesilada, Yeliz

    Access to, and movement around, complex online environments, of which the World Wide Web (Web) is the most popular example, has long been considered an important and major issue in the Web design and usability field. The commonly used slang phrase ‘surfing the Web’ implies rapid and free access, pointing to its importance among designers and users alike. It has also been long established that this potentially complex and difficult access is further complicated, and becomes neither rapid nor free, if the user is disabled. There are millions of people who have disabilities that affect their use of the Web. Web accessibility aims to help these people to perceive, understand, navigate, and interact with, as well as contribute to, the Web, and thereby the society in general. This accessibility is, in part, facilitated by the Web Content Accessibility Guidelines (WCAG) currently moving from version one to two. These guidelines are intended to encourage designers to make sure their sites conform to specifications, and in that conformance enable the assistive technologies of disabled users to better interact with the page content. In this way, it was hoped that accessibility could be supported. While this is in part true, guidelines do not solve all problems and the new WCAG version two guidelines are surrounded by controversy and intrigue. This chapter aims to establish the published literature related to Web accessibility and Web accessibility guidelines, and discuss limitations of the current guidelines and future directions.

  5. ASH External Web Portal (External Portal) -

    Data.gov (United States)

    Department of Transportation — The ASH External Web Portal is a web-based portal that provides single sign-on functionality, making the web portal a single location from which to be authenticated...

  6. Mobile Web for Pervasive environments - design webexperiences for multiple mobile devices

    DEFF Research Database (Denmark)

    Hansen, Thomas Riisgaard

    2008-01-01

    In this paper we present an architecture for designing web pages that use multiple mobile and stationary devices to present web content. The architecture extends standard web technology with a number of functions for expressing how web content might migrate to and use multiple displays. The architecture was developed to support desktop applications, but in this paper we describe how it can be extended to mobile devices by using AJAX technology. The paper also presents an implementation and a number of applications for mobile devices developed with this framework....
  7. Web-based training in German university eye hospitals - Education 2.0?

    Science.gov (United States)

    Handzel, Daniel M; Hesse, L

    2011-01-01

    To analyse web-based training in ophthalmology offered by German university eye hospitals. In January 2010 the websites of all 36 German university hospitals were searched for information provided for visitors, students and doctors alike. We evaluated what was offered in terms of quantity and quality. All websites could be accessed at the time of the study. 28 pages provided information for students and doctors, one page only for students, and three exclusively for doctors. Four pages didn't offer any information for these target groups. The websites offered information on events such as congresses or students' curricular education, and there was also material for download for these events or for other purposes. We found complex e-learning platforms on 9 pages. These dealt with special ophthalmological topics in a didactic arrangement. In spite of the extensive possibilities offered by Web 2.0 technology, many conceivable tools were only rarely made available. It was not always possible to determine whether the information provided was up to date; very often the last update of the content was long ago. On one page the date of the last change was stated as 2004. Currently there are 9 functional e-learning applications offered by German university eye hospitals. Two additional hospitals present links to a project of the German Ophthalmological Society. There was considerable variation in quantity and quality. No website made use of crediting successful studying, e.g. with CME points or OSCE credits. All German university eye hospitals present themselves on the World Wide Web. However, the lack of modern, technically as well as didactically state-of-the-art learning applications is alarming, as it leaves an essential medium of today's communication unused.

  8. Hue combinations in web design for Swedish and Thai users : Guidelines for combining color hues onscreen for Swedish and Thai users in the context of designing web sites

    OpenAIRE

    Ruse, Vidal

    2017-01-01

    Users can assess the visual appeal of a web page within 50 milliseconds and color is the first thing noticed onscreen. That directly influences user perception of the website, and choosing appealing color combinations is therefore crucial for successful web design. Recent scientific research has identified which individual colors are culturally preferred in web design in different countries but there is no similar research on hue combinations. Currently no effective, scientifically based guid...

  9. Soil-Web: An online soil survey for California, Arizona, and Nevada

    Science.gov (United States)

    Beaudette, D. E.; O'Geen, A. T.

    2009-10-01

    Digital soil survey products represent one of the largest and most comprehensive inventories of soils information currently available. The complex structure of these databases, intensive use of codes and scientific jargon make it difficult for non-specialists to utilize digital soil survey resources. A project was initiated to construct a web-based interface to digital soil survey products (STATSGO and SSURGO) for California, Arizona, and Nevada that would be accessible to the general public. A collection of mature, open source applications (including Mapserver, PostGIS and Apache Web Server) were used as a framework to support data storage, querying, map composition, data presentation, and contextual links to related materials. Application logic was written in the PHP language to "glue" together the many components of an online soil survey. A comprehensive website ( http://casoilresource.lawr.ucdavis.edu/map) was created to facilitate access to digital soil survey databases through several interfaces including: interactive map, Google Earth and HTTP-based application programming interface (API). Each soil polygon is linked to a map unit summary page, which includes links to soil component summary pages. The most commonly used soil properties, land interpretations and ratings are presented. Graphical and tabular summaries of soil profile information are dynamically created, and aid with rapid assessment of key soil properties. Quick links to official series descriptions (OSD) and other such information are presented. All terminology is linked back to the USDA-NRCS Soil Survey Handbook which contains extended definitions. The Google Earth interface to Soil-Web can be used to explore soils information in three dimensions. A flexible web API was implemented to allow advanced users of soils information to access our website via simple web page requests. Soil-Web has been successfully used in soil science curriculum, outreach activities, and current research projects

  10. A Web text acquisition method based on Delphi (基于Delphi的Web文本获取方法)

    Institute of Scientific and Technical Information of China (English)

    刘建培

    2016-01-01

    In this paper, a method of Web text acquisition based on Delphi is proposed. It obtains the source file of the Web page (an .html file), analyzes its structural information, handles its control characters, and extracts the text information from the page by parsing and filtering the source file's formatting. Punctuation marks are then used to preprocess the text into sections, paragraphs and sentences, converting the text information into a sequence of sentences so that users can quickly locate the content they need. This keeps users away from phishing sites, malicious advertising, fraudulent information and the distractions encountered while browsing Web pages, and improves their Internet experience.
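
    The tag-stripping and sentence-splitting steps described above can be illustrated in Python (the original work uses Delphi); the HTML snippet and the simple punctuation-based splitter below are only illustrative.

```python
import re
from html.parser import HTMLParser

class TextExtractor(HTMLParser):
    # Collect visible text while skipping script/style content, mirroring the
    # "filter the source file's formatting" step described above.
    SKIP = {"script", "style"}

    def __init__(self):
        super().__init__()
        self.skip_depth = 0
        self.chunks: list[str] = []

    def handle_starttag(self, tag, attrs):
        if tag in self.SKIP:
            self.skip_depth += 1

    def handle_endtag(self, tag):
        if tag in self.SKIP and self.skip_depth:
            self.skip_depth -= 1

    def handle_data(self, data):
        if not self.skip_depth and data.strip():
            self.chunks.append(data.strip())

html = "<html><head><style>p{}</style></head><body><h1>Title</h1><p>First sentence. Second one.</p></body></html>"
parser = TextExtractor()
parser.feed(html)
text = " ".join(parser.chunks)
# Split into sentences on end punctuation, as a simple form of the preprocessing step.
sentences = [s.strip() for s in re.split(r"[.!?]", text) if s.strip()]
print(sentences)
```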

  11. Web 2.0 and health 2.0: are you in?

    Science.gov (United States)

    Felkey, Bill G

    2008-01-01

    With over 6 billion web pages, over $100 billion in online sales every year in the U.S. alone and the average growth rate of online purchasing exceeding 26% over the last 5 years, the Internet is a powerful business tool. Today's shopper sees your actual pharmacy location and your Internet presence as one and the same. One study showed that 82% of the consumers surveyed who had a single frustrating experience online with a retailer would not return to the site for future dealings. A bad experience online made 28% of those surveyed unlikely to return to the retail location of the business, and over 55% said a bad online experience would have a negative impact on their overall opinion of the retailer. Web 2.0 refers to the social networking applications of the internet, Health 2.0 to its special health applications. Taking your Internet presence to the 2.0 level must be balanced with all of the other demands you are facing-but be aware that it's happening all around you.

  12. Uniform Page Migration Problem in Euclidean Space

    Directory of Open Access Journals (Sweden)

    Amanj Khorramian

    2016-08-01

    Full Text Available The page migration problem in Euclidean space is revisited. In this problem, online requests occur at any location to access a single page located at a server. Every request must be served, and the server has the choice to migrate from its current location to a new location in space. Each service costs the Euclidean distance between the server and the request. A migration costs the distance between the former and the new server location, multiplied by the page size. We study the problem in the uniform model, in which the page has size D = 1. The request locations are not known in advance; they are presented sequentially in an online fashion. We design a 2.75-competitive online algorithm that improves the current best upper bound for the problem with unit page size. We also provide a lower bound of 2.732 for our algorithm. It was already known that 2.5 is a lower bound for this problem.

  13. Analysis of Trust-Based Approaches for Web Service Selection

    DEFF Research Database (Denmark)

    Dragoni, Nicola; Miotto, Nicola

    2011-01-01

    The basic tenet of Service-Oriented Computing (SOC) is the possibility of building distributed applications on the Web by using Web services as fundamental building blocks. The proliferation of such services is considered the second wave of evolution in the Internet age, moving the Web from a collection of pages to a collection of services. Consensus is growing that this Web service revolution won't eventuate until we resolve trust-related issues. Indeed, the intrinsic openness of the SOC vision makes it crucial to locate useful services and recognize them as trustworthy. In this paper we review the field of trust-based Web service selection, providing a structured classification of current approaches and highlighting the main limitations of each class and of the overall field....

    Science.gov (United States)

    Blummer, Barbara

    2007-01-01

    In the early 1990s, numerous academic libraries adopted the web as a communication tool with users. The literature on academic library websites includes research on both design and navigation. Early studies typically focused on design characteristics, since websites initially merely provided information on the services and collections available in…

  15. Three loud cheers for the father of the Web

    CERN Multimedia

    2005-01-01

    World Wide Web creator Sir Tim Berners-Lee could have been a very rich man - but he gave away his invention for the good of mankind. Tom Leonard meets the modest genius voted Great Briton 2004 (2 pages)

  16. Info.cern.ch returns to the Web

    CERN Document Server

    2006-01-01

    First web address is reincarnated as a historical reference on the birth of the Web. Tim Berners-Lee, inventor of the Web, with one of the first Web pages on his computer. CERN invites you to take a virtual trip back in time and have a look at what the very first URL, which led to a revolution of the way we communicate and share information, was all about. The original web server, whose address was info.cern.ch, centred on information regarding the WorldWideWeb (WWW) project. Visitors could learn more about hypertext, technical details for creating one's own webpage, and even an explanation on how to search the Web for information-something 5 year-olds of today have mastered since it all started 17 years ago. Now info.cern.ch has been re-launched with a much brighter façade and a focus on the ideas that inspired this new wave of technology. The first browser created by Tim Berners-Lee, inventor of the Web, contained just about everything we see today on a web browser, including graphics, menus, layouts and...

  17. Improving web site performance using commercially available analytical tools.

    Science.gov (United States)

    Ogle, James A

    2010-10-01

    It is easy to accurately measure web site usage and to quantify key parameters such as page views, site visits, and more complex variables using commercially available tools that analyze web site log files and search engine use. This information can be used strategically to guide the design or redesign of a web site (templates, look-and-feel, and navigation infrastructure) to improve overall usability. The data can also be used tactically to assess the popularity and use of new pages and modules that are added and to rectify problems that surface. This paper describes software tools used to: (1) inventory search terms that lead to available content; (2) propose synonyms for commonly used search terms; (3) evaluate the effectiveness of calls to action; (4) conduct path analyses to targeted content. The American Academy of Orthopaedic Surgeons (AAOS) uses SurfRay's Behavior Tracking software (Santa Clara CA, USA, and Copenhagen, Denmark) to capture and archive the search terms that have been entered into the site's Google Mini search engine. The AAOS also uses Unica's NetInsight program to analyze its web site log files. These tools provide the AAOS with information that quantifies how well its web sites are operating and insights for making improvements to them. Although it is easy to quantify many aspects of an association's web presence, it also takes human involvement to analyze the results and then recommend changes. Without a dedicated resource to do this, the work often is accomplished only sporadically and on an ad hoc basis.

  18. Book Reviews, Annotation, and Web Technology.

    Science.gov (United States)

    Schulze, Patricia

    From reading texts to annotating web pages, grade 6-8 students rely on group cooperation and individual reading and writing skills in this research project that spans six 50-minute lessons. Student objectives for this project are that they will: read, discuss, and keep a journal on a book in literature circles; understand the elements of and…

  19. "Così abbiamo creato il World Wide Web"

    CERN Multimedia

    Sigiani, GianLuca

    2002-01-01

    Meeting with Robert Cailliau, scientist and pioneer of the web, who, in a book, tells how at CERN in Geneva his team transformed the Internet (an instrument originally used for military purposes) into one of the most revolutionary mass-media tools ever (1 page)

  20. Computing Principal Eigenvectors of Large Web Graphs: Algorithms and Accelerations Related to PageRank and HITS

    Science.gov (United States)

    Nagasinghe, Iranga

    2010-01-01

    This thesis investigates and develops a few acceleration techniques for the search engine algorithms used in PageRank and HITS computations. PageRank and HITS methods are two highly successful applications of modern Linear Algebra in computer science and engineering. They constitute essential technologies that account for the immense growth and…
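
    For orientation, the unaccelerated baseline that such work starts from is the PageRank power iteration. The sketch below shows that baseline on a small adjacency list; the damping factor and tolerance are conventional defaults, not values taken from the thesis.

```python
# Baseline power iteration for PageRank (no acceleration), for reference only.
def pagerank(links, damping=0.85, tol=1e-10, max_iter=200):
    """links: dict mapping node -> list of nodes it points to."""
    nodes = sorted(set(links) | {v for targets in links.values() for v in targets})
    n = len(nodes)
    rank = {u: 1.0 / n for u in nodes}
    for _ in range(max_iter):
        new = {u: (1.0 - damping) / n for u in nodes}
        for u in nodes:
            targets = links.get(u, [])
            if targets:                      # distribute along out-links
                share = damping * rank[u] / len(targets)
                for v in targets:
                    new[v] += share
            else:                            # dangling node: spread uniformly
                for v in nodes:
                    new[v] += damping * rank[u] / n
        if sum(abs(new[u] - rank[u]) for u in nodes) < tol:
            return new
        rank = new
    return rank

print(pagerank({"a": ["b", "c"], "b": ["c"], "c": ["a"]}))
```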

  1. TMFoldWeb: a web server for predicting transmembrane protein fold class.

    Science.gov (United States)

    Kozma, Dániel; Tusnády, Gábor E

    2015-09-17

    Here we present TMFoldWeb, the web server implementation of TMFoldRec, a transmembrane protein fold recognition algorithm. TMFoldRec uses statistical potentials and utilizes topology filtering and a gapless threading algorithm. It ranks template structures and selects the most likely candidates and estimates the reliability of the obtained lowest energy model. The statistical potential was developed in a maximum likelihood framework on a representative set of the PDBTM database. According to the benchmark test the performance of TMFoldRec is about 77 % in correctly predicting fold class for a given transmembrane protein sequence. An intuitive web interface has been developed for the recently published TMFoldRec algorithm. The query sequence goes through a pipeline of topology prediction and a systematic sequence to structure alignment (threading). Resulting templates are ordered by energy and reliability values and are colored according to their significance level. Besides the graphical interface, a programmatic access is available as well, via a direct interface for developers or for submitting genome-wide data sets. The TMFoldWeb web server is unique and currently the only web server that is able to predict the fold class of transmembrane proteins while assigning reliability scores for the prediction. This method is prepared for genome-wide analysis with its easy-to-use interface, informative result page and programmatic access. Considering the info-communication evolution in the last few years, the developed web server, as well as the molecule viewer, is responsive and fully compatible with the prevalent tablets and mobile devices.

  2. 78 FR 76391 - Proposed Enhancements to the Motor Carrier Safety Measurement System (SMS) Public Web Site

    Science.gov (United States)

    2013-12-17

    ...-0392] Proposed Enhancements to the Motor Carrier Safety Measurement System (SMS) Public Web Site AGENCY... proposed enhancements to the display of information on the Agency's Safety Measurement System (SMS) public Web site. On December 6, 2013, Advocates...

  3. SISTEM INFORMASI RUMAH SAKIT BERBASIS WEB MENGGUNAKAN JAVA SERVER PAGES

    Directory of Open Access Journals (Sweden)

    Heru Cahya Rustamaji

    2010-01-01

    The technology used to build this web-based information system is JSP with Apache Tomcat. Tomcat is an open-source servlet engine that is part of the Jakarta project developed by the Apache Software Foundation.

  4. Study of Search Engine Transaction Logs Shows Little Change in How Users Use Search Engines. A review of: Jansen, Bernard J., and Amanda Spink. “How Are We Searching the World Wide Web? A Comparison of Nine Search Engine Transaction Logs.” Information Processing & Management 42.1 (2006): 248-263.

    Directory of Open Access Journals (Sweden)

    David Hook

    2006-09-01

    Full Text Available Objective – To examine the interactions between users and search engines, and how they have changed over time. Design – Comparative analysis of search engine transaction logs. Setting – Nine major analyses of search engine transaction logs. Subjects – Nine web search engine studies (4 European, 5 American) over a seven-year period, covering the search engines Excite, Fireball, AltaVista, BWIE and AllTheWeb. Methods – The results from individual studies are compared by year of study for percentages of single query sessions, one-term queries, operator (and, or, not, etc.) usage and single result page viewing. As well, the authors group the search queries into eleven different topical categories and compare how the breakdown has changed over time. Main Results – Based on the percentage of single query sessions, it does not appear that the complexity of interactions has changed significantly for either the U.S.-based or the European-based search engines. As well, there was little change observed in the percentage of one-term queries over the years of study for either the U.S.-based or the European-based search engines. Few users (generally less than 20%) use Boolean or other operators in their queries, and these percentages have remained relatively stable. One area of noticeable change is in the percentage of users viewing only one results page, which has increased over the years of study. Based on the studies of the U.S.-based search engines, the topical categories of ‘People, Place or Things’ and ‘Commerce, Travel, Employment or Economy’ are becoming more popular, while the categories of ‘Sex and Pornography’ and ‘Entertainment or Recreation’ are declining. Conclusions – The percentage of users viewing only one results page increased during the years of the study, while the percentages of single query sessions, one-term sessions and operator usage remained stable. The increase in single result page viewing
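
    One of the measures compared across the nine studies is the percentage of single-query sessions and of one-term queries. A minimal way to compute those two figures from a transaction log is sketched below; the tab-separated layout (session id, query) and the file name are assumptions made for illustration.

```python
# Sketch: share of single-query sessions and of one-term queries
# from a tab-separated log with columns: session_id <TAB> query.
# The file name and column layout are assumptions for illustration.
from collections import defaultdict

def session_stats(log_path):
    sessions = defaultdict(list)
    with open(log_path, encoding="utf-8") as log:
        for line in log:
            session_id, _, query = line.rstrip("\n").partition("\t")
            if query:
                sessions[session_id].append(query)
    queries = [q for qs in sessions.values() for q in qs]
    single_query = sum(1 for qs in sessions.values() if len(qs) == 1)
    one_term = sum(1 for q in queries if len(q.split()) == 1)
    return {
        "sessions": len(sessions),
        "pct_single_query_sessions": 100.0 * single_query / max(len(sessions), 1),
        "pct_one_term_queries": 100.0 * one_term / max(len(queries), 1),
    }

print(session_stats("transactions.tsv"))
```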

  5. The use of the World Wide Web by medical journals in 2003 and 2005: an observational study.

    Science.gov (United States)

    Schriger, David L; Ouk, Sripha; Altman, Douglas G

    2007-01-01

    The 2- to 6-page print journal article has been the standard for 200 years, yet this format severely limits the amount of detailed information that can be conveyed. The World Wide Web provides a low-cost option for posting extended text and supplementary information. It also can enhance the experience of journal editors, reviewers, readers, and authors through added functionality (eg, online submission and peer review, postpublication critique, and e-mail notification of table of contents.) Our aim was to characterize ways that journals were using the World Wide Web in 2005 and note changes since 2003. We analyzed the Web sites of 138 high-impact print journals in 3 ways. First, we compared the print and Web versions of March 2003 and 2005 issues of 28 journals (20 of which were randomly selected from the 138) to determine how often articles were published Web only and how often print articles were augmented by Web-only supplements. Second, we examined what functions were offered by each journal Web site. Third, for journals that offered Web pages for reader commentary about each article, we analyzed the number of comments and characterized these comments. Fifty-six articles (7%) in 5 journals were Web only. Thirteen of the 28 journals had no supplementary online content. By 2005, several journals were including Web-only supplements in >20% of their papers. Supplementary methods, tables, and figures predominated. The use of supplementary material increased by 5% from 2% to 7% in the 20-journal random sample from 2003 to 2005. Web sites had similar functionality with an emphasis on linking each article to related material and e-mailing readers about activity related to each article. There was little evidence of journals using the Web to provide readers an interactive experience with the data or with each other. Seventeen of the 138 journals offered rapid-response pages. Only 18% of eligible articles had any comments after 5 months. Journal Web sites offer similar

  6. PSB goes personal: The failure of personalised PSB web pages

    Directory of Open Access Journals (Sweden)

    Jannick Kirk Sørensen

    2013-12-01

    Full Text Available Between 2006 and 2011, a number of European public service broadcasting (PSB organisations offered their website users the opportunity to create their own PSB homepage. The web customisation was conceived by the editors as a response to developments in commercial web services, particularly social networking and content aggregation services, but the customisation projects revealed tensions between the ideals of customer sovereignty and the editorial agenda-setting. This paper presents an overview of the PSB activities as well as reflections on the failure of the customisable PSB homepages. The analysis is based on interviews with the PSB editors involved in the projects and on studies of the interfaces and user comments. Commercial media customisation is discussed along with the PSB projects to identify similarities and differences.

  7. Fermilab joins in global live Web cast

    CERN Multimedia

    Polansek, Tom

    2005-01-01

    From 2 to 3:30 p.m., Lederman, who won the Nobel Prize for physics in 1988, will host his own wacky, science-centered talk show at Fermi National Accelerator Laboratory as part of a live, 12-hour, international Web cast celebrating Albert Einstein and the World Year of Physics (2/3 page)

  8. Resolving person names in web people search

    NARCIS (Netherlands)

    Balog, K.; Azzopardi, L.; de Rijke, M.; King, I.; Baeza-Yates, R.

    2009-01-01

    Disambiguating person names in a set of documents (such as a set of web pages returned in response to a person name) is a key task for the presentation of results and the automatic profiling of experts. With largely unstructured documents and an unknown number of people with the same name the

  9. A web-based repository of surgical simulator projects.

    Science.gov (United States)

    Leskovský, Peter; Harders, Matthias; Székely, Gábor

    2006-01-01

    The use of computer-based surgical simulators for training of prospective surgeons has been a topic of research for more than a decade. As a result, a large number of academic projects have been carried out, and a growing number of commercial products are available on the market. Keeping track of all these endeavors for established groups as well as for newly started projects can be quite arduous. Gathering information on existing methods, already traveled research paths, and problems encountered is a time consuming task. To alleviate this situation, we have established a modifiable online repository of existing projects. It contains detailed information about a large number of simulator projects gathered from web pages, papers and personal communication. The database is modifiable (with password protected sections) and also allows for a simple statistical analysis of the collected data. For further information, the surgical repository web page can be found at www.virtualsurgery.vision.ee.ethz.ch.

  10. Página web de la Sociedad Española para los Recursos Genéticos Animales (SERGA): justificación y descripción

    OpenAIRE

    Barba Capote, C.J.; Rodero Serrano, E.; Delgado-Bermejo, J.V.; Castro, R.; Zamora Lozano, R.

    2000-01-01

    In the present paper the web page of SERGA is described as a structure for permanent information exchange among members and for publicising the society's activities beyond our field. After a brief justification, the different options of the page are described, among which stand out the database of Spanish breeds, the historical photographic archive, the library, and the links to other web addresses, along with other minor functionality. The paper finishes by presenting proposals for the future of the web...

  11. Web-Based Distributed Simulation of Aeronautical Propulsion System

    Science.gov (United States)

    Zheng, Desheng; Follen, Gregory J.; Pavlik, William R.; Kim, Chan M.; Liu, Xianyou; Blaser, Tammy M.; Lopez, Isaac

    2001-01-01

    An application was developed to allow users to run and view the Numerical Propulsion System Simulation (NPSS) engine simulations from web browsers. Simulations were performed on multiple INFORMATION POWER GRID (IPG) test beds. The Common Object Request Broker Architecture (CORBA) was used for brokering data exchange among machines and IPG/Globus for job scheduling and remote process invocation. Web server scripting was performed by JavaServer Pages (JSP). This application has proven to be an effective and efficient way to couple heterogeneous distributed components.

  12. Web-дизайн: landing page

    OpenAIRE

    ЗОРИНА Е.В.; БЕЖИТСКАЯ Е.А.

    2014-01-01

    While in the USA and Europe the landing page phenomenon has been in use for a long time, in Russia it has only recently become popular and has already gained widespread adoption.

  13. Sustainable Materials Management (SMM) Web Academy Webinar: The Changing Waste Stream

    Science.gov (United States)

    This is a webinar page for the Sustainable Management of Materials (SMM) Web Academy webinar titled Let’s WRAP (Wrap Recycling Action Program): Best Practices to Boost Plastic Film Recycling in Your Community

  14. On HTML and XML based web design and implementation techniques

    International Nuclear Information System (INIS)

    Bezboruah, B.; Kalita, M.

    2006-05-01

    Web implementation is truly a multidisciplinary field with influences from programming, choosing of scripting languages, graphic design, user interface design, and database design. The challenge of a Web designer/implementer is his ability to create an attractive and informative Web. To work with the universal framework and link diagrams from the design process as well as the Web specifications and domain information, it is essential to create Hypertext Markup Language (HTML) or other software and multimedia to accomplish the Web's objective. In this article we will discuss Web design standards and the techniques involved in Web implementation based on HTML and Extensible Markup Language (XML). We will also discuss the advantages and disadvantages of HTML over its successor XML in designing and implementing a Web. We have developed two Web pages, one utilizing the features of HTML and the other based on the features of XML to carry out the present investigation. (author)

  15. The ARCOMEM Architecture for Social- and Semantic-Driven Web Archiving

    Directory of Open Access Journals (Sweden)

    Thomas Risse

    2014-11-01

    Full Text Available The constantly growing amount of Web content and the success of the Social Web lead to increasing needs for Web archiving. These needs go beyond the pure preservation of Web pages. Web archives are turning into “community memories” that aim at building a better understanding of the public view on, e.g., celebrities, court decisions and other events. Due to the size of the Web, the traditional “collect-all” strategy is in many cases not the best method to build Web archives. In this paper, we present the ARCOMEM (From Collect-All Archives to Community Memories) architecture and implementation that uses semantic information, such as entities, topics and events, complemented with information from the Social Web to guide a novel Web crawler. The resulting archives are automatically enriched with semantic meta-information to ease the access and allow retrieval based on conditions that involve high-level concepts.
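
    The key idea is that the crawler fetches pages in order of semantic and social relevance rather than breadth-first. The sketch below shows the general shape of such a priority-driven crawl loop; the keyword-counting relevance score is a stand-in for ARCOMEM's entity, topic and event analysis, and `fetch` and `extract_links` are caller-supplied functions.

```python
# Shape of a priority-guided crawler: URLs are fetched in order of an
# application-defined relevance score instead of breadth-first.
# score_page() is a placeholder for real semantic/social analysis.
import heapq

def score_page(text, topic_terms):
    text = text.lower()
    return sum(text.count(term) for term in topic_terms)

def focused_crawl(seeds, fetch, extract_links, topic_terms, budget=100):
    """fetch(url) -> page text; extract_links(url, text) -> iterable of URLs."""
    frontier = [(-1.0, url) for url in seeds]   # max-heap via negated scores
    heapq.heapify(frontier)
    seen, archive = set(seeds), {}
    while frontier and len(archive) < budget:
        _, url = heapq.heappop(frontier)
        text = fetch(url)
        archive[url] = text
        priority = score_page(text, topic_terms)  # parent page as a proxy score
        for link in extract_links(url, text):
            if link not in seen:
                seen.add(link)
                heapq.heappush(frontier, (-priority, link))
    return archive
```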

  16. Enhancing Accessibility of Web Content for the Print-Impaired and Blind People

    Science.gov (United States)

    Chalamandaris, Aimilios; Raptis, Spyros; Tsiakoulis, Pirros; Karabetsos, Sotiris

    Blind people, and print-impaired people in general, are often restricted to using their own computers, most often enhanced with expensive screen-reading programs, in order to access the web, and only in the form that each screen-reading program allows. In this paper we present SpellCast Navi, a tool that is intended for people with visual impairments, which attempts to combine advantages from both customized and generic web enhancement tools. It consists of a generically designed engine and a set of case-specific filters. It can run on a typical web browser and computer, without the need to install any additional application locally. It acquires and parses the content of web pages, converts bilingual text into synthetic speech using a high-quality speech synthesizer, and supports a set of common functionalities such as navigation through hotkeys, audible navigation lists and more. By using a post-hoc approach based on a-priori information about the website's layout, the audible presentation and navigation through the website is more intuitive and more efficient than with a typical screen-reading application. SpellCast Navi poses no requirements on web pages and introduces no overhead to the design and development of a website, as it functions as a hosted proxy service.

  17. Impression Management: The Web Presence of College Presidents

    Science.gov (United States)

    Morgan, Joanne Cardin

    2013-01-01

    Leadership profile pages on organizational websites have become staged opportunities for impression management. This research uses content analysis to examine the strategies of assertive impression management used to construct the leadership Web presence of the 70 presidents of national public universities, as identified in the "US News and…

  18. Onto-Agents-Enabling Intelligent Agents on the Web

    Science.gov (United States)

    2005-05-01

    Manual annotation is tedious, and often done poorly. Even within the funded DAML project, fewer pages were annotated than was hoped. In eCommerce, there... been overcome, congratulations! The DAML project was initiated at the birth of the semantic web. It contributed greatly to defining a new research

  19. The influence of social networking web sites on the evaluation of job candidates.

    Science.gov (United States)

    Bohnert, Daniel; Ross, William H

    2010-06-01

    This study investigated how the content of social networking Web site (SNW) pages influenced others' evaluation of job candidates. Students (N = 148) evaluated the suitability of hypothetical candidates for an entry-level managerial job. A 2 x 4 design was employed: résumés were either marginally qualified or well qualified for the job. SNW printouts reflected (a) an emphasis on drinking alcohol, (b) a family orientation, or (c) a professional orientation; participants in a control group received no Web page information. In addition to a main effect for résumé quality, applicants with either a family-oriented or a professional-oriented SNW were seen as more suitable for the job and more conscientious than applicants with alcohol-oriented SNW pages. They were more likely to be interviewed. If hired, they were also likely to be offered significantly higher starting salaries. Results are discussed in terms of implications for both managers and applicants.

  20. An Investigation of the Academic Information Finding and Re-finding Behavior on the Web

    Directory of Open Access Journals (Sweden)

    Hsiao-Tieh Pu

    2014-12-01

    Full Text Available Academic researchers often need to re-use relevant information they found earlier, after a period of time. This preliminary study used various methods, including experiments, interviews, search log analysis, sequential analysis, and observation, to investigate the characteristics of academic information finding and re-finding behavior. Overall, the participants in this study entered short queries in both the finding and re-finding phases. Comparatively speaking, the participants entered a greater number of queries, modified more queries, browsed more web pages, and stayed longer on web pages in the finding phase. In the re-finding phase, on the other hand, they utilized personal information management tools to re-find instead of searching again with a search engine, such as checking their browsing history; moreover, they tended to enter fewer queries and spent less time on web pages. In short, the participants interacted more with the retrieval system during the finding phase, while they increased the use of personal information management tools in the re-finding phase. As to the contextual clues used in the re-finding phase, the participants used fewer clues from the target itself; instead, they used indirect clues more often, especially location-related information. Based on the results of sequential analysis, the transition states in the re-finding phase were found to be more complex than those in the finding phase. Web information finding and re-finding behavior is an important and novel area of research. The preliminary results would benefit research on Web information re-finding behavior and provide useful suggestions for developing personal academic information management systems. [Article content in Chinese]

  1. Robotic Prostatectomy on the Web: A Cross-Sectional Qualitative Assessment.

    Science.gov (United States)

    Borgmann, Hendrik; Mager, René; Salem, Johannes; Bründl, Johannes; Kunath, Frank; Thomas, Christian; Haferkamp, Axel; Tsaur, Igor

    2016-08-01

    Many patients diagnosed with prostate cancer search for information on robotic prostatectomy (RobP) on the Web. We aimed to evaluate the qualitative characteristics of the most frequented Web sites on RobP with a particular emphasis on provider-dependent issues. Google was searched for the term "robotic prostatectomy" in Europe and North America. The most frequented Web sites were selected and classified as physician-provided and publicly provided. Quality was measured using Journal of the American Medical Association (JAMA) benchmark criteria, the DISCERN score, and the addressing of Trifecta surgical outcomes. Popularity was analyzed using Google PageRank and the Alexa tool. Accessibility, usability, and reliability were investigated using the LIDA tool, and readability was assessed using readability indices. Twenty-eight Web sites were physician-provided and 15 publicly provided. For all Web sites, 88% of JAMA benchmark criteria were fulfilled, the DISCERN quality score was high, and 81% of Trifecta outcome measurements were addressed. Popularity was average according to Google PageRank (mean 2.9 ± 1.5) and Alexa Traffic Rank (median, 49,109; minimum, 7; maximum, 8,582,295). Accessibility (85 ± 7%), usability (92 ± 3%), and reliability scores (88 ± 8%) were moderate to high. The Automated Readability Index was 7.2 ± 2.1 and the Flesch-Kincaid Grade Level was 9 ± 2, rating the Web sites as difficult to read. Physician-provided Web sites had higher quality scores and lower readability compared with publicly provided Web sites. Web sites providing information on RobP obtained medium to high ratings in all domains of quality in the current assessment. In contrast, readability needs to be significantly improved so that this content can become available to the populace. Copyright © 2015 Elsevier Inc. All rights reserved.

  2. Öğretim Amaçlı Web Sayfası Tasarımında Renk Kullanımı

    OpenAIRE

    KARATAŞ, Serçin

    2014-01-01

    In this paper, the issues that ought to be considered in the use of the color element in instructional web page design are identified. The following topics are briefly handled: why color is used, color theory and color information, principles of use, how to obtain browser-safe colors, the readability of various text/background color combinations, and color psychology. It is believed that these pieces of information could be helpful in the preparation of instructional web pages.

  3. The Other Homepage.

    Science.gov (United States)

    Kupersmith, John

    2003-01-01

    Examines special-purpose entry points to library Web sites. Discusses in-house homepages; branch-specific pages or single library system-wide pages; staff use pages; versions in different languages; "MyLibrary" pages where users can customize the menu; standalone "branded" sites; publicly accessible pages; and best practices.…

  4. PageRank without hyperlinks: Reranking with PubMed related article networks for biomedical text retrieval

    Directory of Open Access Journals (Sweden)

    Lin Jimmy

    2008-06-01

    Full Text Available Background: Graph analysis algorithms such as PageRank and HITS have been successful in Web environments because they are able to extract important inter-document relationships from manually-created hyperlinks. We consider the application of these techniques to biomedical text retrieval. In the current PubMed® search interface, a MEDLINE® citation is connected to a number of related citations, which are in turn connected to other citations. Thus, a MEDLINE record represents a node in a vast content-similarity network. This article explores the hypothesis that these networks can be exploited for text retrieval, in the same manner as hyperlink graphs on the Web. Results: We conducted a number of reranking experiments using the TREC 2005 genomics track test collection in which scores extracted from PageRank and HITS analysis were combined with scores returned by an off-the-shelf retrieval engine. Experiments demonstrate that incorporating PageRank scores yields significant improvements in terms of standard ranked-retrieval metrics. Conclusion: The link structure of content-similarity networks can be exploited to improve the effectiveness of information retrieval systems. These results generalize the applicability of graph analysis algorithms to text retrieval in the biomedical domain.

  5. PageRank without hyperlinks: reranking with PubMed related article networks for biomedical text retrieval.

    Science.gov (United States)

    Lin, Jimmy

    2008-06-06

    Graph analysis algorithms such as PageRank and HITS have been successful in Web environments because they are able to extract important inter-document relationships from manually-created hyperlinks. We consider the application of these techniques to biomedical text retrieval. In the current PubMed(R) search interface, a MEDLINE(R) citation is connected to a number of related citations, which are in turn connected to other citations. Thus, a MEDLINE record represents a node in a vast content-similarity network. This article explores the hypothesis that these networks can be exploited for text retrieval, in the same manner as hyperlink graphs on the Web. We conducted a number of reranking experiments using the TREC 2005 genomics track test collection in which scores extracted from PageRank and HITS analysis were combined with scores returned by an off-the-shelf retrieval engine. Experiments demonstrate that incorporating PageRank scores yields significant improvements in terms of standard ranked-retrieval metrics. The link structure of content-similarity networks can be exploited to improve the effectiveness of information retrieval systems. These results generalize the applicability of graph analysis algorithms to text retrieval in the biomedical domain.
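
    Both versions of the article above describe combining PageRank scores computed over the related-article network with scores from an off-the-shelf retrieval engine. A generic way to fuse two such rankings is a weighted linear combination of normalized scores, sketched below; the weight is illustrative, not a value reported in the article.

```python
# Sketch: rerank retrieval results by mixing in a graph-based score.
def normalize(scores):
    lo, hi = min(scores.values()), max(scores.values())
    span = (hi - lo) or 1.0
    return {doc: (s - lo) / span for doc, s in scores.items()}

def rerank(retrieval_scores, graph_scores, weight=0.3):
    """Linear fusion: (1 - weight) * retrieval + weight * graph (e.g. PageRank)."""
    retrieval = normalize(retrieval_scores)
    graph = normalize(graph_scores)
    fused = {doc: (1 - weight) * retrieval[doc] + weight * graph.get(doc, 0.0)
             for doc in retrieval}
    return sorted(fused, key=fused.get, reverse=True)

ranking = rerank({"d1": 12.1, "d2": 10.4, "d3": 9.8},
                 {"d2": 0.012, "d3": 0.002, "d1": 0.001})
print(ranking)
```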

  6. Dynamic and interactive web-based radiology teaching file using layer and javascript

    International Nuclear Information System (INIS)

    Park, Seong Ho; Han, Joon Koo; Lee, Kyoung Ho

    1999-01-01

    To improve the Web-based radiology teaching file by means of a dynamic and interactive interface using Layer and JavaScript. The radiology teaching file for medical students at the author's medical school was used. By means of a digital camera, films were digitized and compressed to Joint Photographic Experts Group (JPEG) format. Layers, which had arrows or lines pointing out lesions and anatomical structures, were converted to transparent CompuServe Graphics Interchange Format (GIF). Basically, HyperText Mark-up Language (HTML) was used for each Web page. Using JavaScript, Layers were made to overlap the radiologic images at the user's request. Each case page consisted of radiologic images and texts for additional information and explanation. By moving the cursor or clicking on key words, indicators pointing out the corresponding lesions and anatomical structures were automatically shown on the radiologic images. Although not compatible with some Web browsers, a dynamic and interactive interface using Layer and JavaScript has little effect on the time needed for data transfer through a network, and is therefore an effective method of accessing radiologic images using the World-Wide Web and using these for teaching and learning

  7. Comparing the Scale of Web Subject Directories Precision in Technical-Engineering Information Retrieval

    Directory of Open Access Journals (Sweden)

    Mehrdokht Wazirpour Keshmiri

    2012-07-01

    Full Text Available The main purpose of this research was to compare the precision of web subject directories in information retrieval for technical-engineering science. Information gathering was documentary and webometric. Technical-engineering keywords on twenty different subjects were chosen from the IEEE (Institute of Electrical and Electronics Engineers) and from engineering magazines hosted on the ScienceDirect site. These keywords were used in five high-utilization web subject directories: Yahoo, Google, Infomine, Intute, and Dmoz. Because the first results in search tools are usually most closely connected to the search keywords, the first ten results were evaluated in every search. The assessments consisted of the precision rate, the error rate, and the ratio of items retrieved in technical-engineering categories to all retrieved items. The criteria used for determining precision, drawn from widely used standards in the literature, consisted of the presence of the keywords in the title, the appearance of keywords in parts of the retrieved web pages, keyword adjacency, the URL of the page, the page description, and the subject categories. Data were analysed with the Kruskal-Wallis test and Fisher's L.S.D. Results revealed a significant difference in the precision of web subject directories in information retrieval for technical-engineering science, so this hypothesis was confirmed. Ranked by precision, the web subject directories were ordered as follows: Google, Yahoo, Intute, Dmoz, and Infomine. The error rate observed in the first results was another criterion used for comparing the web subject directories; in this research, Yahoo had the lowest error rate and Infomine the highest. The research also compared the ratio of items retrieved in all categories to items retrieved in technical-engineering categories, and the results revealed a significant difference between them. And

  8. A grammar checker based on web searching

    Directory of Open Access Journals (Sweden)

    Joaquim Moré

    2006-05-01

    Full Text Available This paper presents an English grammar and style checker for non-native English speakers. The main characteristic of this checker is the use of an Internet search engine. As the number of web pages written in English is immense, the system hypothesises that a piece of text not found on the Web is probably badly written. The system also hypothesises that the Web will provide examples of how the content of the text segment can be expressed in a grammatically correct and idiomatic way. Thus, when the checker warns the user about the odd nature of a text segment, the Internet engine searches for contexts that can help the user decide whether he/she should correct the segment or not. By means of a search engine, the checker also suggests use of other expressions that appear on the Web more often than the expression he/she actually wrote.
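
    The core test is whether an exact phrase occurs often enough on the Web to be considered idiomatic. A toy version of that test is sketched below; `hit_count` is a placeholder for a real search-engine query (no particular engine's API is assumed), and the threshold and n-gram size are arbitrary.

```python
# Toy n-gram plausibility test in the spirit of search-based grammar checking.
# hit_count(phrase) must be supplied by the caller; no particular search
# engine's API is assumed here.
def ngrams(words, n=3):
    return [" ".join(words[i:i + n]) for i in range(len(words) - n + 1)]

def suspicious_segments(sentence, hit_count, threshold=5, n=3):
    """Flag trigrams whose exact-phrase hit count falls below the threshold."""
    return [segment for segment in ngrams(sentence.split(), n)
            if hit_count(f'"{segment}"') < threshold]

# Example with a dummy counter that treats every phrase as unseen:
print(suspicious_segments("she suggested me to go home", hit_count=lambda q: 0))
```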

  9. Securing the anonymity of content providers in the World Wide Web

    Science.gov (United States)

    Demuth, Thomas; Rieke, Andreas

    1999-04-01

    Nowadays the World Wide Web (WWW) is an established service used by people all over the world. Most of them do not recognize the fact that they reveal plenty of information about themselves or their affiliation and computer equipment to the providers of the web pages they connect to. As a result, many services offer users the ability to access web pages unrecognized, that is, without risk of being tracked back. This kind of anonymity is called user or client anonymity. On the other hand, equivalent protection for content providers does not exist, although this feature is desirable in many situations in which the identity of a publisher or content provider should remain hidden. We call this property server anonymity. We introduce the first system whose primary goal is to offer anonymity for providers of information on the WWW. Besides this property, it also provides client anonymity. Based on David Chaum's idea of mixes and in the context of the WWW, we explain the term 'server anonymity', motivating the system JANUS, which offers both client and server anonymity.

  10. Web-based tool for subjective observer ranking of compressed medical images

    Science.gov (United States)

    Langer, Steven G.; Stewart, Brent K.; Andrew, Rex K.

    1999-05-01

    In the course of evaluating various compression schemes for ultrasound teleradiology applications, it became obvious that paper-based methods of data collection were time-consuming and error-prone. A method was sought which allowed participating radiologists to view the ultrasound video clips (compressed to varying degrees) at their desks. Furthermore, the method should allow observers to enter their evaluations and, when finished, automatically submit the data to our statistical analysis engine. We found that the World Wide Web offered a ready solution. A web page was constructed that contains 18 embedded AVI video clips. The 18 clips represent 6 distinct anatomical areas, compressed by various methods and amounts, and then randomly distributed through the web page. To the right of each video, a series of questions are presented which ask the observer to rank (1 - 5) his/her ability to answer diagnostically relevant questions. When completed, the observer presses 'Submit' and a file of tab-delimited text is created which can then be imported into an Excel workbook. Kappa analysis is then performed and the resulting plots demonstrate observer preferences.
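
    After the tab-delimited responses are collected, the abstract reports a kappa analysis of observer agreement. Purely as an illustration of that step (the authors used an Excel workbook), Cohen's kappa for two raters over 1-5 ratings can be computed as follows.

```python
# Illustrative Cohen's kappa for two raters over categorical ratings (e.g. 1-5).
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    assert len(rater_a) == len(rater_b) and rater_a
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    expected = sum(freq_a[c] * freq_b[c] for c in set(rater_a) | set(rater_b)) / n ** 2
    return (observed - expected) / (1 - expected) if expected < 1 else 1.0

print(cohens_kappa([5, 4, 4, 3, 2, 5], [5, 4, 3, 3, 2, 4]))
```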

  11. Geographic Information Systems and Web Page Development

    Science.gov (United States)

    Reynolds, Justin

    2004-01-01

    The Facilities Engineering and Architectural Branch is responsible for the design and maintenance of buildings, laboratories, and civil structures. In order to improve efficiency and quality, the FEAB has dedicated itself to establishing a data infrastructure based on Geographic Information Systems, GIS. The value of GIS was explained in an article dating back to 1980 entitled "Need for a Multipurpose Cadastre," which stated, "There is a critical need for a better land-information system in the United States to improve land-conveyance procedures, furnish a basis for equitable taxation, and provide much-needed information for resource management and environmental planning." Scientists and engineers both point to GIS as the solution. What is GIS? According to most textbooks, Geographic Information Systems is a class of software that stores, manages, and analyzes mappable features on, above, or below the surface of the earth. GIS software is basically database management software tailored to the management of spatial data and information. Simply put, Geographic Information Systems manage, analyze, chart, graph, and map spatial information. At the outset, I was given goals and expectations from my branch and from my mentor with regard to the further implementation of GIS. Those goals are as follows: (1) Continue the development of GIS for the underground structures. (2) Extract and export annotated data from AutoCAD drawing files and construct a database (to serve as a prototype for future work). (3) Examine existing underground record drawings to determine existing and non-existing underground tanks. Once this data was collected and analyzed, I set out on the task of creating a user-friendly database that could be accessed by all members of the branch. It was important that the database be built using programs that most employees already possess, ruling out most AutoCAD-based viewers. Therefore, I set out to create an Access database that translated onto the web using Internet

  12. Current uses of Web 2.0 applications in transportation : case studies of select state departments of transportation

    Science.gov (United States)

    2010-03-01

    Web 2.0 is an umbrella term for websites or online applications that are user-driven and emphasize collaboration and user interactivity. The trend away from static web pages to a more user-driven Internet model has also occurred in the public s...

  13. Development of Grid-like Applications for Public Health Using Web 2.0 Mashup Techniques

    OpenAIRE

    Scotch, Matthew; Yip, Kevin Y.; Cheung, Kei-Hoi

    2008-01-01

    Development of public health informatics applications often requires the integration of multiple data sources. This process can be challenging due to issues such as different file formats, schemas, naming systems, and having to scrape the content of web pages. A potential solution to these system development challenges is the use of Web 2.0 technologies. In general, Web 2.0 technologies are new internet services that encourage and value information sharing and collaboration among individuals....

  14. Net Survey: "Top Ten Mistakes" in Academic Web Design.

    Science.gov (United States)

    Petrik, Paula

    2000-01-01

    Highlights the top ten mistakes in academic Web design: (1) bloated graphics; (2) scaling images; (3) dense text; (4) lack of contrast; (5) font size; (6) looping animations; (7) courseware authoring software; (8) scrolling/long pages; (9) excessive download; and (10) the nothing site. Includes resources. (CMK)

  15. Doing Web history with the Internet Archive : Screencast documentaries

    NARCIS (Netherlands)

    Rogers, R.

    2017-01-01

    Among the conceptual and methodological opportunities afforded by the Internet Archive, and more specifically the WayBack Machine, is the capacity to capture and “play back” the history of a web page, most notably a website's homepage. These playbacks could be construed as “website histories”,

  16. The Visual Web User Interface Design in Augmented Reality Technology

    OpenAIRE

    Chouyin Hsu; Haui-Chih Shiau

    2013-01-01

    With the popularity of 3C devices, visual content is all around us, such as online games, touch pads, video, and animation. Therefore, text-based web pages will no longer satisfy users. With the popularity of webcams, digital cameras, stereoscopic glasses, and head-mounted displays, the user interface becomes more visual and multi-dimensional. Considering 3D and visual display in research on web user interface design, Augmented Reality technology providing the convenient ...

  17. Single Session Web-Based Counselling: A Thematic Analysis of Content from the Perspective of the Client

    Science.gov (United States)

    Rodda, S. N.; Lubman, D. I.; Cheetham, A.; Dowling, N. A.; Jackson, A. C.

    2015-01-01

    Despite the exponential growth of non-appointment-based web counselling, there is limited information on what happens in a single session intervention. This exploratory study, involving a thematic analysis of 85 counselling transcripts of people seeking help for problem gambling, aimed to describe the presentation and content of online…

  18. Human Factors in Web-Authentication

    Science.gov (United States)

    2009-02-06

    questions is vulnerable to man-in-the-middle (MITM) attacks [113, 144]. Since these attacks exploit similar human tendencies as attacks against... researchers have discovered a “Universal Man-in-the-Middle Phishing Kit” [75], a hacker toolkit which enables a phisher to easily set up a MITM... attacker spoofing the login page of the target Web site [113, 144]. When a user attempts to login... Challenge questions also have

  19. D-Iteration: diffusion approach for solving PageRank

    OpenAIRE

    Hong, Dohy; Huynh, The Dang; Mathieu, Fabien

    2015-01-01

    In this paper we present a new method that can accelerate the computation of the PageRank importance vector. Our method, called D-Iteration (DI), is based on the decomposition of the matrix-vector product that can be seen as a fluid diffusion model and is potentially adapted to asynchronous implementation. We give theoretical results about the convergence of our algorithm and we show through experiments on a real Web graph that DI can improve the computation efficiency compared to other ...
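
    The record frames PageRank computation as a fluid-diffusion process. The sketch below is a generic residual-pushing iteration that captures that diffusion idea; it is not the D-Iteration algorithm itself, and the damping factor and tolerance are conventional choices.

```python
# Generic residual-push ("diffusion") approximation of PageRank.
# Not the paper's D-Iteration; shown only to illustrate the diffusion idea.
from collections import deque

def push_pagerank(links, damping=0.85, eps=1e-8):
    """links: dict node -> list of out-neighbours; every node appears as a key."""
    nodes = list(links)
    rank = {u: 0.0 for u in nodes}
    residual = {u: (1.0 - damping) / len(nodes) for u in nodes}  # teleport mass
    queue = deque(nodes)
    while queue:
        u = queue.popleft()
        r, residual[u] = residual[u], 0.0
        rank[u] += r                      # settle the fluid held at u
        share = damping * r / len(links[u])
        for v in links[u]:                # diffuse the rest along out-links
            residual[v] += share
            if residual[v] > eps and v not in queue:
                queue.append(v)
    return rank

print(push_pagerank({"a": ["b"], "b": ["c"], "c": ["a"]}))
```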

  20. Judging nursing information on the world wide web.

    Science.gov (United States)

    Cader, Raffik

    2013-02-01

    The World Wide Web is increasingly becoming an important source of information for healthcare professionals. However, finding reliable information from unauthoritative Web sites to inform healthcare can pose a challenge to nurses. A study, using grounded theory, was undertaken in two phases to understand how qualified nurses judge the quality of Web nursing information. Data were collected using semistructured interviews and focus groups. An explanatory framework that emerged from the data showed that the judgment process involved the application of forms of knowing and modes of cognition to a range of evaluative tasks and depended on the nurses' critical skills, the time available, and the level of Web information cues. This article mainly focuses on the six evaluative tasks relating to assessing user-friendliness, outlook and authority of Web pages, and relationship to nursing practice; appraising the nature of evidence; and applying cross-checking strategies. The implications of these findings to nurse practitioners and publishers of nursing information are significant.

  1. Effects of Surrounding Information and Line Length on Text Comprehension from the Web

    Directory of Open Access Journals (Sweden)

    Jess McMullin

    2002-02-01

    Full Text Available The World Wide Web (Web is becoming a popular medium for transmission of information and online learning. We need to understand how people comprehend information from the Web to design Web sites that maximize the acquisition of information. We examined two features of Web page design that are easily modified by developers, namely line length and the amount of surrounding information, or whitespace. Undergraduate university student participants read text and answered comprehension questions on the Web. Comprehension was affected by whitespace; participants had better comprehension for information surrounded by whitespace than for information surrounded by meaningless information. Participants were not affected by line length. These findings demonstrate that reading from the Web is not the same as reading print and have implications for instructional Web design.

  2. SpaceExplorer

    DEFF Research Database (Denmark)

    Hansen, Thomas Riisgaard

    2007-01-01

    Web pages are designed to be displayed on a single screen, but as more and more screens are being introduced in our surroundings, a burning question becomes how to design, interact with, and display web pages on multiple devices and displays. In this paper I present the SpaceExplorer prototype, which is able to display standard HTML web pages on multiple displays with only a minor modification to the language. Based on the prototype, a number of different examples are presented and discussed, and some preliminary findings are reported.

  3. Characteristics of food industry web sites and "advergames" targeting children.

    Science.gov (United States)

    Culp, Jennifer; Bell, Robert A; Cassady, Diana

    2010-01-01

    To assess the content of food industry Web sites targeting children by describing strategies used to prolong their visits and foster brand loyalty; and to document health-promoting messages on these Web sites. A content analysis was conducted of Web sites advertised on 2 children's networks, Cartoon Network and Nickelodeon. A total of 290 Web pages and 247 unique games on 19 Internet sites were examined. Games, found on 81% of Web sites, were the most predominant promotion strategy used. All games had at least 1 brand identifier, with logos being most frequently used. On average Web sites contained 1 "healthful" message for every 45 exposures to brand identifiers. Food companies use Web sites to extend their television advertising to promote brand loyalty among children. These sites almost exclusively promoted food items high in sugar and fat. Health professionals need to monitor food industry marketing practices used in "new media." Published by Elsevier Inc.

  4. Web Services and Other Enhancements at the Northern California Earthquake Data Center

    Science.gov (United States)

    Neuhauser, D. S.; Zuzlewski, S.; Allen, R. M.

    2012-12-01

    The Northern California Earthquake Data Center (NCEDC) provides data archive and distribution services for seismological and geophysical data sets that encompass northern California. The NCEDC is enhancing its ability to deliver rapid information through Web Services. NCEDC Web Services use well-established web server and client protocols and REST software architecture to allow users to easily make queries using web browsers or simple program interfaces and to receive the requested data in real-time rather than through batch or email-based requests. Data are returned to the user in the appropriate format such as XML, RESP, or MiniSEED depending on the service, and are compatible with the equivalent IRIS DMC web services. The NCEDC is currently providing the following Web Services: (1) Station inventory and channel response information delivered in StationXML format, (2) Channel response information delivered in RESP format, (3) Time series availability delivered in text and XML formats, (4) Single channel and bulk data request delivered in MiniSEED format. The NCEDC is also developing a rich Earthquake Catalog Web Service to allow users to query earthquake catalogs based on selection parameters such as time, location or geographic region, magnitude, depth, azimuthal gap, and rms. It will return (in QuakeML format) user-specified results that can include simple earthquake parameters, as well as observations such as phase arrivals, codas, amplitudes, and computed parameters such as first motion mechanisms, moment tensors, and rupture length. The NCEDC will work with both IRIS and the International Federation of Digital Seismograph Networks (FDSN) to define a uniform set of web service specifications that can be implemented by multiple data centers to provide users with a common data interface across data centers. The NCEDC now hosts earthquake catalogs and waveforms from the US Department of Energy (DOE) Enhanced Geothermal Systems (EGS) monitoring networks. These
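
    Because the services follow a REST style over plain HTTP, a client can be as simple as a URL with query parameters. The sketch below illustrates that pattern with the Python standard library; the base URL and parameter names are placeholders, not the NCEDC's documented endpoint.

```python
# Illustrative REST client pattern for a station-metadata web service.
# BASE_URL and the parameter names are placeholders, not a documented endpoint.
from urllib.parse import urlencode
from urllib.request import urlopen

BASE_URL = "https://example.org/fdsnws/station/1/query"   # hypothetical host

def fetch_station_xml(network, station, starttime, endtime):
    params = urlencode({
        "net": network, "sta": station,
        "starttime": starttime, "endtime": endtime,
        "format": "xml",
    })
    with urlopen(f"{BASE_URL}?{params}", timeout=30) as response:
        return response.read().decode("utf-8")

# Example call (would fail against the placeholder host):
# print(fetch_station_xml("BK", "BKS", "2012-01-01", "2012-01-02"))
```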

  5. A Survey on Trust-Based Web Service Provision Approaches

    DEFF Research Database (Denmark)

    Dragoni, Nicola

    2010-01-01

    The basic tenet of Service-Oriented Computing (SOC) is the possibility of building distributed applications on the Web by using Web Services as fundamental building blocks. The proliferation of such services is considered the second wave of evolution in the Internet age, moving the Web from a collection of pages to a collection of services. Consensus is growing that this Web Service “revolution” won't eventuate until we resolve trust-related issues. Indeed, the intrinsic openness of the SOC vision makes it crucial to locate useful services and recognize them as trustworthy. In this paper we review the field of trust-based Web Service selection, providing a structured classification of current approaches and highlighting the main limitations of each class and of the overall field. As a result, we claim that a soft notion of trust lies behind such weaknesses and we advocate the need for a new approach...

  6. Deep Web Search Interface Identification: A Semi-Supervised Ensemble Approach

    OpenAIRE

    Hong Wang; Qingsong Xu; Lifeng Zhou

    2014-01-01

    To surface the Deep Web, one crucial task is to predict whether a given web page has a search interface (searchable HyperText Markup Language (HTML) form) or not. Previous studies have focused on supervised classification with labeled examples. However, labeled data are scarce, hard to get, and require tedious manual work, while unlabeled HTML forms are abundant and easy to obtain. In this research, we consider the plausibility of using both labeled and unlabeled data to train better models to...
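
    Before any learning can happen, a crawler must first recognize that a page contains a candidate search form at all. The sketch below is a naive heuristic baseline (a form containing a text-like input), not the semi-supervised ensemble proposed in the paper.

```python
# Naive baseline: flag pages containing a form with a text-like input field.
# This is only a heuristic, not the paper's semi-supervised ensemble classifier.
from html.parser import HTMLParser

class FormDetector(HTMLParser):
    def __init__(self):
        super().__init__()
        self.in_form = False
        self.has_search_form = False

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "form":
            self.in_form = True
        elif self.in_form and tag == "input":
            # HTML inputs default to type="text" when no type is given.
            if attrs.get("type", "text").lower() in ("text", "search"):
                self.has_search_form = True

    def handle_endtag(self, tag):
        if tag == "form":
            self.in_form = False

def looks_like_search_interface(html):
    detector = FormDetector()
    detector.feed(html)
    return detector.has_search_form

print(looks_like_search_interface(
    '<form action="/find"><input type="text" name="q"><input type="submit"></form>'))
```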

  7. El creador de World Wide Web gana premio Millennium de tecnologia

    CERN Multimedia

    Galan, J

    2004-01-01

    "El creador de la World Wide Web (WWW), el fisico britanico Tim Berners-Lee, gano hoy la primera edicion del Millennium Technology Prize, un galardon internacional creado por una fundacion finlandesa y dotado con un millon de euros" (1/2 page)

  8. Law on the web a guide for students and practitioners

    CERN Document Server

    Stein, Stuart

    2014-01-01

    Law on the Web is ideal for anyone who wants to access Law Internet resources quickly and efficiently without becoming an IT expert. The emphasis throughout is on the location of high quality law Internet resources for learning, teaching and research, from among the billions of publicly accessible Web pages. The book is structured so that it will be found useful by both beginners and intermediate level users, and be of continuing use over the course of higher education studies. In addition to extensive coverage on locating files and Web sites, Part III provides a substantial and annotated list of high quality resources for law students.

  9. Né à Genève, le Web embobine la planète

    CERN Multimedia

    Broute, Anne-Muriel

    2009-01-01

    Today, about one person in six on Earth is connected to the Web. Twenty years ago, there were only two: the English computer scientist Tim Berners-Lee and the Belgian engineer Robert Cailliau. (1.5 pages)

  10. The presence of English and Spanish dyslexia in the Web

    Science.gov (United States)

    Rello, Luz; Baeza-Yates, Ricardo

    2012-09-01

    In this study we present a lower bound of the prevalence of dyslexia in the Web for English and Spanish. On the basis of analysis of corpora written by dyslexic people, we propose a classification of the different kinds of dyslexic errors. A representative data set of dyslexic words is used to calculate this lower bound in web pages containing English and Spanish dyslexic errors. We also present an analysis of dyslexic errors in major Internet domains, social media sites, and throughout English- and Spanish-speaking countries. To show the independence of our estimations from the presence of other kinds of errors, we compare them with the overall lexical quality of the Web and with the error rate of noncorrected corpora. The presence of dyslexic errors in the Web motivates work in web accessibility for dyslexic users.
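
    The lower bound is obtained by counting known dyslexic misspellings in web text. A stripped-down version of that counting step is shown below; the tiny word list is purely illustrative and is not the study's data set.

```python
# Stripped-down counting step: frequency of known dyslexic-error forms in a text.
# The error list here is a toy illustration, not the study's data set.
import re

DYSLEXIC_FORMS = {"freind": "friend", "becuase": "because", "wich": "which"}

def dyslexic_error_rate(text):
    words = re.findall(r"[a-z']+", text.lower())
    errors = sum(1 for w in words if w in DYSLEXIC_FORMS)
    return errors / len(words) if words else 0.0

sample = "I asked my freind to come becuase the page wich I found was useful."
print(f"{dyslexic_error_rate(sample):.3f}")
```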

  11. What Are the Usage Conditions of Web 2.0 Tools Faculty of Education Students?

    Science.gov (United States)

    Agir, Ahmet

    2014-01-01

    As a result of advances in technology and the subsequent spread of Internet use into every part of life, the web, which provides access to documents such as pictures, audio, animation, and text on the Internet, came into use. At first, the web consisted only of static image and text pages that did not allow user interaction. However, it is seen that not…

  12. Opinion formation driven by PageRank node influence on directed networks

    Science.gov (United States)

    Eom, Young-Ho; Shepelyansky, Dima L.

    2015-10-01

    We study a two-state opinion formation model driven by PageRank node influence and report an extensive numerical study of how PageRank affects collective opinion formation in large-scale empirical directed networks. In our model the opinion of a node can be updated by the sum of its neighbor nodes' opinions weighted by the node influence of the neighbor nodes at each step. We consider PageRank probability and its sublinear power as node influence measures and investigate the evolution of opinion under various conditions. First, we observe that all networks reach a steady-state opinion after a certain relaxation time. This time scale decreases with the heterogeneity of node influence in the networks. Second, we find that our model shows consensus and non-consensus behavior in the steady state depending on the type of network: the Web graph, the citation network of physics articles, and the LiveJournal social network show non-consensus behavior, while the Wikipedia article network shows consensus behavior. Third, we find that a more heterogeneous influence distribution leads to a more uniform opinion state in the cases of the Web graph, Wikipedia, and LiveJournal. However, the opposite behavior is observed in the citation network. Finally we identify that a small number of influential nodes can impose their own opinion on a significant fraction of other nodes in all considered networks. Our study shows that the effects of heterogeneity of node influence on opinion formation can be significant and suggests further investigations on the interplay between node influence and collective opinion in networks.
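
    In the model, a node's next opinion follows the sign of the influence-weighted sum of its neighbours' opinions. A minimal synchronous version of that update rule, with precomputed PageRank values standing in for node influence, might look like the sketch below; tie handling and the update schedule are simplifications of the paper's model.

```python
# Minimal synchronous sketch of influence-weighted two-state opinion updates.
# influence[] would be PageRank (or a sublinear power of it) in the paper's model.
def update_opinions(opinions, in_neighbors, influence):
    """opinions: dict node -> +1/-1; in_neighbors: dict node -> list of nodes."""
    new = {}
    for node, neighbors in in_neighbors.items():
        if not neighbors:
            new[node] = opinions[node]
            continue
        weighted = sum(influence[v] * opinions[v] for v in neighbors)
        new[node] = opinions[node] if weighted == 0 else (1 if weighted > 0 else -1)
    return new

opinions = {"a": 1, "b": -1, "c": -1}
in_neighbors = {"a": ["b", "c"], "b": ["a"], "c": ["a", "b"]}
influence = {"a": 0.5, "b": 0.3, "c": 0.2}
print(update_opinions(opinions, in_neighbors, influence))
```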

  13. Simulator Posisi Matahari dan Bulan Berbasis Web dengan WebGL

    Directory of Open Access Journals (Sweden)

    Kamshory

    2014-09-01

    Full Text Available The moon, as a satellite of the earth, has an important role in life on earth. Apart from being a source of illumination, the moon also has an effect on the earth both on land and at sea. The influence of the moon on the earth is related to its position. This research aims to create a simulator of the positions of the sun and moon relative to the earth based on the time and location of observation. The simulator is web-based and rendered in three dimensions (3D) using WebGL technology. The positions of the sun and moon are expressed by the latitude and longitude of the point crossed by the line connecting the earth to the sun or the line connecting the earth to the moon. The positions of the sun and moon are obtained from calculations based on previous research. Once the positions are known, the earth, the sun, and the moon are rendered as 3D models. The camera position can be moved by dragging the web page so that the sun, earth, and moon can be viewed from various positions. The camera always points to the center of the earth to prevent the user from turning the camera so that the objects are no longer visible.

  14. Students' Evaluation Strategies in a Web Research Task: Are They Sensitive to Relevance and Reliability?

    Science.gov (United States)

    Rodicio, Héctor García

    2015-01-01

    When searching and using resources on the Web, students have to evaluate Web pages in terms of relevance and reliability. This evaluation can be done in a more or less systematic way, by either considering deep or superficial cues of relevance and reliability. The goal of this study was to examine how systematic students are when evaluating Web…

  15. Web Accessibility of the Higher Education Institute Websites Based on the World Wide Web Consortium and Section 508 of the Rehabilitation Act

    Science.gov (United States)

    Alam, Najma H.

    2014-01-01

    The problem observed in this study is the low level of compliance of higher education website accessibility with Section 508 of the Rehabilitation Act of 1973. The literature supports the non-compliance of websites with the federal policy in general. Studies were performed to analyze the accessibility of fifty-four sample web pages using automated…

  16. PERANCANGAN SISTEM PEMESANAN BARANG BERBASIS WEB DI TOKO ZENITH KOMPUTER DI PEKANBARU

    Directory of Open Access Journals (Sweden)

    Fery Wongso Johan Wyanaputra

    2016-03-01

    Full Text Available A web-based ordering information system is part of a marketing information system, developed to collect and process data so that the data can be retrieved and distributed as useful information. The outcome of developing this web-based ordering information system is a computer application capable of representing the information system as designed as a whole. The resulting ordering information system application is able to manage ordering data in an organized way and to produce complete, accurate, and always up-to-date reports for every level of management. The system is designed using PHP (Personal Home Page) and the database is designed using a XAMPP server. The results of designing the web-based ordering information system application show that computer applications play a very important role in information systems, supporting improvements in the quality of ordering activities and services at the Zenith Komputer store.

  17. Specification and Verification of Web Applications in Rewriting Logic

    Science.gov (United States)

    Alpuente, María; Ballis, Demis; Romero, Daniel

    This paper presents a Rewriting Logic framework that formalizes the interactions between Web servers and Web browsers through a communicating protocol abstracting HTTP. The proposed framework includes a scripting language that is powerful enough to model the dynamics of complex Web applications by encompassing the main features of the most popular Web scripting languages (e.g. PHP, ASP, Java Servlets). We also provide a detailed characterization of browser actions (e.g. forward/backward navigation, page refresh, and new window/tab openings) via rewrite rules, and show how our models can be naturally model-checked by using the Linear Temporal Logic of Rewriting (LTLR), which is a Linear Temporal Logic specifically designed for model-checking rewrite theories. Our formalization is particularly suitable for verification purposes, since it allows one to perform in-depth analyses of many subtle aspects related to Web interaction. Finally, the framework has been completely implemented in Maude, and we report on some successful experiments that we conducted by using the Maude LTLR model-checker.
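
    The browser actions that the paper encodes as rewrite rules can be pictured, outside of Maude, as transitions over a simple navigation state. The sketch below models only back/forward/refresh/new-tab on a history stack as an illustration of the state space being explored; it is not a re-implementation of the authors' rewrite theory or of the LTLR model checker.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Window:
    """One browser window/tab: a history of visited pages and a cursor into it."""
    history: List[str] = field(default_factory=lambda: ["start"])
    pos: int = 0

    def navigate(self, page: str):
        # Navigating to a new page discards any "forward" pages after the cursor.
        self.history = self.history[: self.pos + 1] + [page]
        self.pos += 1

    def back(self):
        if self.pos > 0:
            self.pos -= 1

    def forward(self):
        if self.pos < len(self.history) - 1:
            self.pos += 1

    def refresh(self) -> str:
        return self.history[self.pos]   # re-requests the current page

browser = [Window()]                    # opening a new window/tab appends another Window
browser[0].navigate("login")
browser[0].navigate("cart")
browser[0].back()
print(browser[0].refresh())             # -> "login"
```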

  18. Web revolution: 17 years ago, who knew about it?

    CERN Multimedia

    2007-01-01

    When Tim Berners-Lee, a CERN scientist, invented the World Wide Web in 1989, the aim was to enable automatic sharing of information among scientists working in universities and institutes around the world. (2 pages)

  19. Beginning JSP, JSF, and Tomcat web development from novice to professional

    CERN Document Server

    Zambon, Giulio

    2008-01-01

    A comprehensive introduction to JavaServer Pages (JSP), JavaServer Faces (JSF), and the Apache Tomcat Web application server, this manual makes key concepts easy to grasp through numerous working examples and a walk-through of the development of a complete e-commerce project.

  20. Machine learning approach for automatic quality criteria detection of health web pages.

    Science.gov (United States)

    Gaudinat, Arnaud; Grabar, Natalia; Boyer, Célia

    2007-01-01

    The number of medical websites is constantly growing [1]. Owing to the open nature of the Web, the reliability of information available on the Web is uneven. Internet users are overwhelmed by the quantity of information available on the Web. The situation is even more critical in the medical area, as the content proposed by health websites can have a direct impact on users' well-being. One way to control the reliability of health websites is to assess their quality and to make this assessment available to users. The HON Foundation has defined a set of eight ethical principles. HON's experts work to determine manually whether a given website complies with the required principles. As the number of medical websites is constantly growing, manual expertise becomes insufficient and automatic systems should be used in order to help medical experts. In this paper we present the design and the evaluation of an automatic system conceived for the categorisation of medical and health documents according to the HONcode ethical principles. A first evaluation shows promising results. Currently the system shows 0.78 micro precision and 0.73 F-measure, with 0.06 errors.
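
    The categorisation task described (deciding which HONcode-style principle a page's text relates to) can be prototyped as ordinary supervised text classification. The sketch below uses TF-IDF features and a linear SVM and reports micro-averaged precision and F-measure; the actual features, classifier, and expert-annotated corpus used in the paper are not specified here, so every name and label in the example is a stand-in.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline
from sklearn.metrics import precision_score, f1_score

# Toy stand-in corpus: (page text, HONcode-style label).  A real system would be
# trained on expert-annotated pages, typically one classifier per principle.
train_texts = [
    "All medical advice on this site is written by board-certified physicians.",
    "Information on this page complements, not replaces, your doctor's advice.",
    "This website is funded by advertising from pharmaceutical companies.",
    "Contact the webmaster at info@example.org with any question.",
]
train_labels = ["authority", "complementarity", "sponsorship", "contact"]

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LinearSVC())
model.fit(train_texts, train_labels)

test_texts = ["Our content is reviewed by qualified doctors before publication."]
test_labels = ["authority"]
pred = model.predict(test_texts)

print(pred)
print("micro precision:", precision_score(test_labels, pred, average="micro", zero_division=0))
print("micro F-measure:", f1_score(test_labels, pred, average="micro", zero_division=0))
```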

  1. Teaching web application development: Microsoft proprietary or open systems?

    Directory of Open Access Journals (Sweden)

    Stephen Corich

    Full Text Available This paper revisits the debate concerning which development environment should be used to teach server-side Web Application Development courses to undergraduate students. In 2002, following an industry-based survey of Web developers, a decision was made to adopt an open source platform consisting of PHP and MySQL rather than a Microsoft platform utilising Access and Active Server Pages. Since that date there have been a number of significant changes within the computing industry that suggest it is appropriate to revisit the original decision. This paper investigates expert opinion by reviewing current literature regarding web development environments, looks at the results of a survey of web development companies, and examines current employment trends in the web development area. The paper concludes by examining the decision to change the development environment used to teach Web Application Development to a third-year computing degree class and describes the impact the change has had on course delivery.

  2. An Efficient Monte Carlo Approach to Compute PageRank for Large Graphs on a Single PC

    Directory of Open Access Journals (Sweden)

    Sonobe Tomohiro

    2016-03-01

    Full Text Available This paper describes a novel Monte Carlo based random walk to compute PageRanks of nodes in a large graph on a single PC. The target graphs of this paper are ones whose size is larger than the physical memory. In such an environment, memory management is a difficult task for simulating the random walk among the nodes. We propose a novel method that partitions the graph into subgraphs in order to make them fit into the physical memory, and conducts the random walk for each subgraph. By evaluating the walks lazily, we can conduct the walks only in a subgraph and approximate the random walk by rotating the subgraphs. In computational experiments, the proposed method exhibits good performance for existing large graphs with several passes of the graph data.
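
    The core idea (approximating PageRank by counting the visits of many short random walks with restarts) can be illustrated in a few lines. The sketch below runs the walks over a small in-memory adjacency list; the paper's actual contribution of partitioning an out-of-core graph into memory-sized subgraphs and evaluating walks lazily per subgraph is not reproduced here.

```python
import random
from collections import Counter

def monte_carlo_pagerank(adj, walks_per_node=100, damping=0.85, seed=0):
    """Approximate PageRank by starting `walks_per_node` random walks at every
    node and counting visits; each step continues with probability `damping`
    and teleports to a random node otherwise or at a dangling node."""
    rng = random.Random(seed)
    nodes = list(adj)
    visits = Counter()
    for start in nodes:
        for _ in range(walks_per_node):
            v = start
            visits[v] += 1
            while rng.random() < damping:
                out = adj[v]
                v = rng.choice(out) if out else rng.choice(nodes)
                visits[v] += 1
    total = sum(visits.values())
    return {v: visits[v] / total for v in nodes}

adj = {"a": ["b", "c"], "b": ["c"], "c": ["a"], "d": ["c"]}
ranks = monte_carlo_pagerank(adj)
print(sorted(ranks.items(), key=lambda kv: -kv[1]))
```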

  3. Hospital Web site 'tops' in Louisiana. Hospital PR, marketing group cites East Jefferson General Hospital.

    Science.gov (United States)

    Rees, Tom

    2002-01-01

    East Jefferson General Hospital in Metairie, La., launched a new Web site in October 2001. Its user-friendly home page offers links to hospital services, medical staff, and employer information. Its jobline is a powerful tool for recruitment. The site was awarded the 2002 Pelican Award for Best Consumer Web site by the Louisiana Society for Hospital Public Relations & Marketing.

  4. VisBOL: Web-Based Tools for Synthetic Biology Design Visualization.

    Science.gov (United States)

    McLaughlin, James Alastair; Pocock, Matthew; Mısırlı, Göksel; Madsen, Curtis; Wipat, Anil

    2016-08-19

    VisBOL is a Web-based application that allows the rendering of genetic circuit designs, enabling synthetic biologists to visually convey designs in SBOL visual format. VisBOL designs can be exported to formats including PNG and SVG images to be embedded in Web pages, presentations and publications. The VisBOL tool enables the automated generation of visualizations from designs specified using the Synthetic Biology Open Language (SBOL) version 2.0, as well as a range of well-known bioinformatics formats including GenBank and Pigeoncad notation. VisBOL is provided both as a user accessible Web site and as an open-source (BSD) JavaScript library that can be used to embed diagrams within other content and software.

  5. W3C director Tim Berners-Lee to be Knighted by Queen Elizabeth web inventor recognized for contributions to internet development

    CERN Document Server

    2003-01-01

    "Tim Berners-Lee, the inventor of the World Wide Web and director of the World Wide Web Consortium (W3C), will be made a Knight Commander, Order of the British Empire (KBE) by Queen Elizabeth" (1/2 page).

  6. Sustainable Materials Management (SMM) Web Academy Webinar: Reducing Wasted Food: How Packaging Can Help

    Science.gov (United States)

    This is a webinar page for the Sustainable Materials Management (SMM) Web Academy webinar titled Let’s WRAP (Wrap Recycling Action Program): Best Practices to Boost Plastic Film Recycling in Your Community

  7. SAMP: Application Messaging for Desktop and Web Applications

    Science.gov (United States)

    Taylor, M. B.; Boch, T.; Fay, J.; Fitzpatrick, M.; Paioro, L.

    2012-09-01

    SAMP, the Simple Application Messaging Protocol, is a technology which allows tools to communicate. It is deployed in a number of desktop astronomy applications including ds9, Aladin, TOPCAT, World Wide Telescope and numerous others, and makes it straightforward for a user to treat a selection of these tools as a loosely-integrated suite, combining the most powerful features of each. It has been widely used within Virtual Observatory contexts, but is equally suitable for non-VO use. Enabling SAMP communication from web-based content has long been desirable. An obvious use case is arranging for a click on a web page link to deliver an image, table or spectrum to a desktop viewer, but more sophisticated two-way interaction with rich internet applications would also be possible. Use from the web however presents some problems related to browser sandboxing. We explain how the SAMP Web Profile, introduced in version 1.3 of the SAMP protocol, addresses these issues, and discuss the resulting security implications.

  8. A Hybrid Web Browser Architecture for Mobile Devices

    Directory of Open Access Journals (Sweden)

    CHO, J.

    2014-08-01

    Full Text Available Web browsing on mobile networks is slow in comparison to wired or Wi-Fi networks. Particularly, the connection establishment phase including DNS lookups and TCP handshakes takes a long time on mobile networks due to its long round-trip latency. In this paper, we propose a novel web browser architecture that aims to improve mobile web browsing performance. Our approach delegates the connection establishment and HTTP header field delivery tasks to a dedicated proxy server located at the joint point between the WAN and mobile network. Since the traffic for the connection establishment and HTTP header fields delivery passes only through the WAN between the proxy and web servers, our approach significantly reduces both the number and size of packets on the mobile network. Our evaluation showed that the proposed scheme reduces the number of mobile network packets by up to 42% and, consequently, the average page loading time is shortened by up to 52%.
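
    A back-of-envelope model of why delegating connection setup helps: DNS lookup and the TCP handshake each cost roughly one round trip before the HTTP request can be sent, so moving those round trips from the high-latency mobile link to the low-latency WAN link between the proxy and the web servers shrinks the per-connection setup time. The round-trip times and connection count below are illustrative assumptions, not figures from the paper's evaluation.

```python
def setup_time_ms(rtt_ms, dns_lookups, tcp_handshakes):
    """Rough connection-setup cost: one round trip per DNS lookup and per TCP handshake."""
    return rtt_ms * (dns_lookups + tcp_handshakes)

mobile_rtt, wan_rtt = 200.0, 20.0        # illustrative round-trip times in milliseconds
connections = 8                          # connections opened while loading one page

direct = setup_time_ms(mobile_rtt, dns_lookups=connections, tcp_handshakes=connections)
proxied = setup_time_ms(wan_rtt, dns_lookups=connections, tcp_handshakes=connections)

print(f"direct setup cost:  {direct:.0f} ms")
print(f"proxied setup cost: {proxied:.0f} ms (plus one persistent mobile connection to the proxy)")
```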

  9. HEPWEB - WEB-portal for Monte Carlo simulations in high-energy physics

    International Nuclear Information System (INIS)

    Aleksandrov, E.I.; Kotov, V.M.; Uzhinsky, V.V.; Zrelov, P.V.

    2011-01-01

    A WEB-portal HepWeb allows users to perform the most popular calculations in high-energy physics - calculations of hadron-hadron, hadron-nucleus, and nucleus-nucleus interaction cross sections as well as calculations of secondary-particle characteristics in the interactions using Monte Carlo event generators. The list of the generators includes Dubna version of the intranuclear cascade model (CASCADE), FRITIOF model, ultrarelativistic quantum molecular dynamics model (UrQMD), HIJING model, and AMPT model. Setting up the colliding particles/nucleus properties (collision energy, mass numbers and charges of nuclei, impact parameters of interactions, and number of generated events) is realized by a WEB-interface. A query is processed by a server, and results are presented to the user as a WEB-page. Short descriptions of the installed generators, the WEB-interface implementation and the server operation are given

  10. HEPWEB - WEB-portal for Monte Carlo simulations in high-energy physics

    Energy Technology Data Exchange (ETDEWEB)

    Aleksandrov, E I; Kotov, V M; Uzhinsky, V V; Zrelov, P V

    2011-07-01

    A WEB-portal HepWeb allows users to perform the most popular calculations in high-energy physics - calculations of hadron-hadron, hadron-nucleus, and nucleus-nucleus interaction cross sections as well as calculations of secondary-particle characteristics in the interactions using Monte Carlo event generators. The list of the generators includes Dubna version of the intranuclear cascade model (CASCADE), FRITIOF model, ultrarelativistic quantum molecular dynamics model (UrQMD), HIJING model, and AMPT model. Setting up the colliding particles/nucleus properties (collision energy, mass numbers and charges of nuclei, impact parameters of interactions, and number of generated events) is realized by a WEB-interface. A query is processed by a server, and results are presented to the user as a WEB-page. Short descriptions of the installed generators, the WEB-interface implementation and the server operation are given.

  11. Social Web Identity Established upon Trust and Reputations

    Directory of Open Access Journals (Sweden)

    Rajni Goel

    2014-11-01

    Full Text Available Online social networks have become a seamless and critical online communication platform for personal interactions. They are a powerful tool that businesses are using to expand into domestic markets. The increase in participation in online social networking can cause, and has caused, damage to individuals and organizations, and the issuance of trust has become a concern on the social web. The factors determining the reputation of persons (customers) in the real world may relate to the factors of reputation on the social web, though, relative to how trust is established in the physical world, establishing trust on the social web can be fairly difficult. Determining how to trust another individual’s online social profile becomes critical in initiating any interaction on the social web. Rather than focusing on the content of the social network page, this research proposes and examines the application of user reputations to determine whether trust should be issued on the social web. A top-level framework is presented for establishing trust in an identity on Social Network Sites (SNS) as a function of the user’s associations, usage patterns, and reputation on the social web.

  12. Checking an integrated model of web accessibility and usability evaluation for disabled people.

    Science.gov (United States)

    Federici, Stefano; Micangeli, Andrea; Ruspantini, Irene; Borgianni, Stefano; Corradi, Fabrizio; Pasqualotto, Emanuele; Olivetti Belardinelli, Marta

    2005-07-08

    A combined objective-oriented and subjective-oriented method for evaluating the accessibility and usability of web pages for students with disabilities was tested. The objective-oriented approach is devoted to verifying the conformity of interfaces to standard rules stated by national and international organizations responsible for web technology standardization, such as the W3C. Conversely, the subjective-oriented approach assesses how final users interact with the artificial system, measuring levels of user satisfaction based on personal factors and environmental barriers. Five kinds of measurements were applied as objective-oriented and subjective-oriented tests. Objective-oriented evaluations were performed on the Help Desk web page for students with disability, included in the website of a large Italian state university. Subjective-oriented tests were administered to 19 students labeled as disabled on the basis of their own declaration at university enrolment: 13 students were tested by means of the SUMI test and six students by means of 'cooperative evaluation'. Objective-oriented and subjective-oriented methods highlighted different and sometimes conflicting results. Both methods pointed out much more consistency regarding levels of accessibility than of usability. Since usability is largely affected by individual differences in users' own (dis)abilities, subjective-oriented measures underscored the fact that blind students encountered many more web surfing difficulties.
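
    An objective-oriented check of the kind described verifies machine-testable conformance rules against a page's markup. The sketch below implements two such rules (images must carry an alt attribute, links must have non-empty text) with Python's standard-library HTML parser; it is a simplified illustration of automated conformance checking, not the evaluation procedure used in the study.

```python
from html.parser import HTMLParser

class AccessibilityChecker(HTMLParser):
    """Flags <img> tags without an alt attribute and <a> tags with empty link text."""
    def __init__(self):
        super().__init__()
        self.issues = []
        self._link_text = None

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "img" and "alt" not in attrs:
            self.issues.append("image without alt text: " + attrs.get("src", "?"))
        if tag == "a":
            self._link_text = ""

    def handle_data(self, data):
        if self._link_text is not None:
            self._link_text += data

    def handle_endtag(self, tag):
        if tag == "a":
            if self._link_text is not None and not self._link_text.strip():
                self.issues.append("link with empty text")
            self._link_text = None

checker = AccessibilityChecker()
checker.feed('<img src="logo.png"><a href="/help"></a><a href="/faq">FAQ</a>')
print(checker.issues)   # -> ['image without alt text: logo.png', 'link with empty text']
```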

  13. Web analytics as tool for improvement of website taxonomies

    DEFF Research Database (Denmark)

    Jonasen, Tanja Svarre; Ådland, Marit Kristine; Lykke, Marianne

    The poster examines how web analytics can be used to provide information about users and inform design and redesign of taxonomies. It uses a case study of the website Cancer.dk by the Danish Cancer Society. The society is a private organization with an overall goal to prevent the development...... provides information about e.g. subjects of interest, searching behaviour, browsing patterns in website structure as well as tag clouds, page views. The poster discusses benefits and challenges of the two web metrics, with a model of how to use search and tag data for the design of taxonomies, e.g. choice...

  14. The world wide web: exploring a new advertising environment.

    Science.gov (United States)

    Johnson, C R; Neath, I

    1999-01-01

    The World Wide Web currently boasts millions of users in the United States alone and is likely to continue to expand both as a marketplace and as an advertising environment. Three experiments explored advertising in the Web environment, in particular memory for ads as they appear in everyday use across the Web. Experiments 1 and 2 examined the effect of advertising repetition on the retention of familiar and less familiar brand names, respectively. Experiment 1 demonstrated that repetition of a banner ad within multiple web pages can improve recall of familiar brand names, and Experiment 2 demonstrated that repetition can improve recognition of less familiar brand names. Experiment 3 directly compared the retention of familiar and less familiar brand names that were promoted by static and dynamic ads and demonstrated that the use of dynamic advertising can increase brand name recall, though only for familiar brand names. This study also demonstrated that, in the Web environment, much as in other advertising environments, familiar brand names possess a mnemonic advantage not possessed by less familiar brand names. Finally, data regarding Web usage gathered from all experiments confirm reports that Web usage among males tends to exceed that among females.

  15. Wikinews interviews World Wide Web co-inventor Robert Cailliau

    CERN Multimedia

    2007-01-01

    "The name Robert Caillau may not ring a bell to the general pbulic, but his invention is the reason why you are reading this: Dr. Cailliau together with his colleague Sir Tim Berners-Lee invented the World Wide Web, making the internet accessible so it could grow from an academic tool to a mass communication medium." (9 pages)

  16. Exploring determining factors of web transparency in the world's top universities

    Directory of Open Access Journals (Sweden)

    Laura Saraite-Sariene

    2018-01-01

    Full Text Available This paper aims to analyze the online transparency of the top 100 universities in the world and to determine which factors influence the degree of online transparency achieved by these institutions. To this end, a global transparency index was developed, comprising four dimensions (“E-Information”, “E-Services”, “E-Participation” and “Navigability, Design and Accessibility”). From the analysis of the various dimensions, it is worth noting that universities are aware of the importance of having a web page with adequate “Navigability, Design and Accessibility”. In contrast, “E-Information” is the least valued dimension, because universities focus their attention on the disclosure of general information rather than on more specific issues. In addition, a multivariate regression equation was used to test the relationship between the online information disclosed and a particular set of factors. The main findings indicate that younger, larger, and privately funded universities are the ones most interested in utilizing their web pages.
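
    The construction of a global index and the test of explanatory factors can both be illustrated compactly: the index is an aggregate of the four dimension scores, and the factor analysis is a multivariate regression of that index on university characteristics. The weights, variables, and data below are invented for illustration and do not come from the study.

```python
import numpy as np

# Dimension scores per university (rows): E-Information, E-Services,
# E-Participation, Navigability/Design/Accessibility -- all on a 0-1 scale.
scores = np.array([
    [0.40, 0.70, 0.55, 0.90],
    [0.35, 0.80, 0.60, 0.85],
    [0.50, 0.65, 0.40, 0.95],
    [0.30, 0.60, 0.35, 0.80],
])
weights = np.full(4, 0.25)                       # equal weighting, an assumption
transparency = scores @ weights                  # global transparency index per university

# Explanatory factors: age (years), size (thousand students), private funding (0/1).
factors = np.array([
    [150, 40, 0],
    [ 60, 55, 1],
    [200, 20, 0],
    [ 45, 30, 1],
])
X = np.column_stack([np.ones(len(factors)), factors])        # add an intercept column
coef, *_ = np.linalg.lstsq(X, transparency, rcond=None)      # ordinary least squares fit

print("transparency index:", np.round(transparency, 3))
print("intercept and coefficients (age, size, private):", np.round(coef, 4))
```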

  17. An Exploratory Study of Student Satisfaction with University Web Page Design

    Science.gov (United States)

    Gundersen, David E.; Ballenger, Joe K.; Crocker, Robert M.; Scifres, Elton L.; Strader, Robert

    2013-01-01

    This exploratory study evaluates the satisfaction of students with a web-based information system at a medium-sized regional university. The analysis provides a process for simplifying data interpretation in captured student user feedback. Findings indicate that student classifications, as measured by demographic and other factors, determine…

  18. Review Pages: Cities, Energy and Mobility

    Directory of Open Access Journals (Sweden)

    Gennaro Angiello

    2015-12-01

    Full Text Available Starting from the relationship between urban planning and mobility management, TeMA has gradually expanded the view of the covered topics, always remaining in the groove of rigorous scientific in-depth analysis. During the last two years particular attention has been paid to the Smart Cities theme and to the different meanings that come with it. The last section of the journal is formed by the Review Pages. They have different aims: to inform on the problems, trends and evolutionary processes; to investigate the paths by highlighting the advanced relationships among apparently distant disciplinary fields; to explore the interaction areas, experiences and potential applications; to underline interactions and disciplinary developments but also, if present, defeats and setbacks. Inside the journal the Review Pages have the task of stimulating as much as possible the circulation of ideas and the discovery of new points of view. For this reason the section is founded on a series of basic references, required for the identification of new and more advanced interactions. These references are the research, the planning acts, the actions and the applications, analysed and investigated both for their ability to give a systematic response to questions concerning urban and territorial planning, and for their attention to aspects such as environmental sustainability and innovation in practice. For this purpose the Review Pages are formed by five sections (Web Resources; Books; Laws; Urban Practices; News and Events), each of which examines a specific aspect of the broader information storage of interest for TeMA.

  19. PAGING IN COMMUNICATIONS

    DEFF Research Database (Denmark)

    2016-01-01

    A method and an apparatus are disclosed for managing paging in a communications system. The method may include, based on a received set of physical resources, determining, in a terminal apparatus, an original paging pattern defining potential time instants for paging, wherein the potential time instants for paging include a subset of a total amount of resources available at a network node for paging.

  20. Neutralizing SQL Injection Attack Using Server Side Code Modification in Web Applications

    Directory of Open Access Journals (Sweden)

    Asish Kumar Dalai

    2017-01-01

    Full Text Available Reports on web application security risks show that SQL injection is the topmost vulnerability. The journey from static to dynamic web pages has led to the use of databases in web applications. Due to the lack of secure coding techniques, the SQL injection vulnerability prevails in a large set of web applications. A successful SQL injection attack poses a serious threat to the database, the web application, and the entire web server. In this article, the authors propose a novel method for the prevention of SQL injection attacks. The classification of SQL injection attacks has been done based on the methods used to exploit this vulnerability. The proposed method proves to be efficient in the context of its ability to prevent all types of SQL injection attacks. Some popular SQL injection attack tools and web application security datasets have been used to validate the model. The results obtained are promising, with a high accuracy rate for the detection of SQL injection attacks.
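
    While the paper's own prevention method works by modifying server-side code, the baseline defence it guards against bypassing is worth showing: never interpolate user input into the SQL text, pass it as a bound parameter instead. A minimal sketch with Python's built-in sqlite3 driver, using an illustrative users table:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, password TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

user, pw = "alice", "' OR '1'='1"       # classic injection payload supplied as the password

# Vulnerable: the payload becomes part of the SQL text and the check always passes.
unsafe = f"SELECT * FROM users WHERE name = '{user}' AND password = '{pw}'"
print("unsafe query matches:", len(conn.execute(unsafe).fetchall()))                    # 1 row

# Safe: placeholders send the payload as data, so the bogus login attempt fails.
safe = "SELECT * FROM users WHERE name = ? AND password = ?"
print("parameterized query matches:", len(conn.execute(safe, (user, pw)).fetchall()))   # 0 rows
```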