WorldWideScience

Sample records for single web page

  1. Migrating Multi-page Web Applications to Single-page AJAX Interfaces

    NARCIS (Netherlands)

    Mesbah, A.; Van Deursen, A.

    2006-01-01

    Recently, a new web development technique for creating interactive web applications, dubbed AJAX, has emerged. In this new model, the single-page web interface is composed of individual components which can be updated/replaced independently. With the rise of AJAX web applications classical

  2. Analysis and Testing of Ajax-based Single-page Web Applications

    NARCIS (Netherlands)

    Mesbah, A.

    2009-01-01

    This dissertation has focused on better understanding the shifting web paradigm and the consequences of moving from the classical multi-page model to an Ajax-based single-page style. Specifically to that end, this work has examined this new class of software from three main software engineering

  3. Building single-page web apps with meteor

    CERN Document Server

    Vogelsteller, Fabian

    2015-01-01

    If you are a web developer with basic knowledge of JavaScript and want to take on Web 2.0, build real-time applications, or simply want to write a complete application using only JavaScript and HTML/CSS, this is the book for you. This book is based on Meteor 1.0.

  4. Developing Dynamic Single Page Web Applications Using Meteor : Comparing JavaScript Frameworks: Blaze and React

    OpenAIRE

    Yetayeh, Asabeneh

    2017-01-01

    This paper studies Meteor, a JavaScript full-stack framework for developing interactive single-page web applications. Meteor allows building web applications entirely in JavaScript. Meteor uses Blaze, React or AngularJS as a view layer, and Node.js and MongoDB as a back-end. The main purpose of this study is to compare the performance of Blaze and React. Multi-user Blaze and React web applications with similar HTML and CSS were developed. Both applications were deployed on Heroku’s w...

  5. Creating Web Pages Simplified

    CERN Document Server

    Wooldridge, Mike

    2011-01-01

    The easiest way to learn how to create a Web page for your family or organization Do you want to share photos and family lore with relatives far away? Have you been put in charge of communication for your neighborhood group or nonprofit organization? A Web page is the way to get the word out, and Creating Web Pages Simplified offers an easy, visual way to learn how to build one. Full-color illustrations and concise instructions take you through all phases of Web publishing, from laying out and formatting text to enlivening pages with graphics and animation. This easy-to-follow visual guide sho

  6. Web page recommending system; Web page suisen system

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2000-03-01

    This system allows a user to retrieve relevant but as-yet-unknown useful web pages from other users who have similar interests, by comparing the web pages collected by the first user with those collected by the others. The system collects web pages that a user likes through 'labeling', which allows information to be organized in a more flexible manner than a bookmark with a hierarchical folder structure. It recommends useful web pages, or users with similar interests, by sharing and comparing the correlations between the labels and web pages of all users. The system was also introduced into the comprehensive regional information and culture aiding system for Nagao Township in Kagawa Prefecture, where its effectiveness for community formation is being verified. (translated by NEDO)

  7. Code AI Personal Web Pages

    Science.gov (United States)

    Garcia, Joseph A.; Smith, Charles A. (Technical Monitor)

    1998-01-01

    The document consists of a publicly available web site (george.arc.nasa.gov) for Joseph A. Garcia's personal web pages in the AI division. Only general information will be posted and no technical material. All the information is unclassified.

  8. The Faculty Web Page: Contrivance or Continuation?

    Science.gov (United States)

    Lennex, Lesia

    2007-01-01

    In an age of Internet education, what does it mean for a tenure/tenure-track faculty to have a web page? How many professors have web pages? If they have a page, what does it look like? Do they really need a web page at all? Many universities have faculty web pages. What do those collective pages look like? In what way do they represent the…

  9. Sign Language Web Pages

    Science.gov (United States)

    Fels, Deborah I.; Richards, Jan; Hardman, Jim; Lee, Daniel G.

    2006-01-01

    The World Wide Web has changed the way people interact. It has also become an important equalizer of information access for many social sectors. However, for many people, including some sign language users, Web accessing can be difficult. For some, it not only presents another barrier to overcome but has left them without cultural equality. The…

  10. Interstellar Initiative Web Page Design

    Science.gov (United States)

    Mehta, Alkesh

    1999-01-01

    This summer at NASA/MSFC, I have contributed to two projects: Interstellar Initiative Web Page Design and Lenz's Law Relative Motion Demonstration. In the Web Design Project, I worked on an Outline. The Web Design Outline was developed to provide a foundation for a Hierarchy Tree Structure. The Outline would help design a Website information base for future and near-term missions. The Website would give in-depth information on Propulsion Systems and Interstellar Travel. The Lenz's Law Relative Motion Demonstrator is discussed in this volume by Russell Lee.

  11. Classifying web pages with visual features

    NARCIS (Netherlands)

    de Boer, V.; van Someren, M.; Lupascu, T.; Filipe, J.; Cordeiro, J.

    2010-01-01

    To automatically classify and process web pages, current systems use the textual content of those pages, including both the displayed content and the underlying (HTML) code. However, a very important feature of a web page is its visual appearance. In this paper, we show that using generic visual

  12. Stochastic analysis of web page ranking

    NARCIS (Netherlands)

    Volkovich, Y.

    2009-01-01

    Today, the study of the World Wide Web is one of the most challenging subjects. In this work we consider the Web from a probabilistic point of view. We analyze the relations between various characteristics of the Web. In particular, we are interested in the Web properties that affect the Web page

  13. Web page classification on child suitability

    NARCIS (Netherlands)

    C. Eickhoff (Carsten); P. Serdyukov; A.P. de Vries (Arjen)

    2010-01-01

    Children spend significant amounts of time on the Internet. Recent studies showed that during these periods they are often not under adult supervision. This work presents an automatic approach to identifying suitable web pages for children based on topical and non-topical web page

  14. An Efficient Web Page Ranking for Semantic Web

    Science.gov (United States)

    Chahal, P.; Singh, M.; Kumar, S.

    2014-01-01

    With the enormous amount of information presented on the web, the retrieval of relevant information has become a serious problem and has also been a topic of research for the last few years. The most common tools to retrieve information from the web are search engines like Google. Search engines are usually based on keyword searching and indexing of web pages. This approach is not very efficient, as the result-set of web pages obtained includes many irrelevant pages. Sometimes even the entire result-set may consist of irrelevant pages for the user. The next generation of search engines must address this problem. Recently, many semantic web search engines have been developed, like Ontolook and Swoogle, which help in searching meaningful documents presented on the semantic web. In this process the ranking of the retrieved web pages is very crucial. Some attempts have been made at ranking semantic web pages, but the ranking of these semantic web documents is still neither satisfactory nor up to the user's expectations. In this paper we have proposed a semantic-web-based document ranking scheme that relies not only on the keywords but also on the conceptual instances present between the keywords. As a result, only the relevant pages will be at the top of the result-set of searched web pages. We explore all relevant relations between the keywords, exploring the user's intention, and then calculate the fraction of these relations on each web page to determine their relevance. We have found that this ranking technique gives better results than the prevailing methods.
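
    The scoring idea in this abstract, ranking pages by the fraction of keyword-to-keyword relations they contain, can be sketched as follows. This is a minimal illustration in which co-occurrence within a token window stands in for relation extraction (my own simplification); the function names are hypothetical, not the authors'.

        from itertools import combinations

        def count_relations(page_text: str, keywords: list[str], window: int = 10) -> int:
            """Crude stand-in for relation extraction: count keyword pairs
            that co-occur within `window` tokens of each other."""
            tokens = page_text.lower().split()
            positions = {k: [i for i, t in enumerate(tokens) if t == k] for k in keywords}
            hits = 0
            for a, b in combinations(keywords, 2):
                hits += sum(1 for i in positions[a] for j in positions[b]
                            if abs(i - j) <= window)
            return hits

        def rank_pages(pages: dict[str, str], keywords: list[str]) -> list[tuple[str, float]]:
            """Score each page by its share of all observed keyword relations."""
            counts = {url: count_relations(text, keywords) for url, text in pages.items()}
            total = sum(counts.values()) or 1
            return sorted(((url, c / total) for url, c in counts.items()),
                          key=lambda kv: kv[1], reverse=True)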

  15. CrazyEgg Reports for Single Page Analysis

    Science.gov (United States)

    CrazyEgg provides an in-depth look at visitor behavior on one page. While you can use GA to do trend analysis of your web area, CrazyEgg helps diagnose the design of a single Web page by visually displaying all visitor clicks during a specified time.

  16. CERN Web Pages Receive a Makeover

    CERN Document Server

    2001-01-01

    A sudden allergic reaction to the colour turquoise? Never fear, from Monday 2 April you'll be able to click in the pink box at the top of the CERN users' welcome page to go to the all-new welcome page, which is simpler and better organized. CERN's new-look intranet is the first step in a complete Web-makeover being applied by the Web Public Education (WPE) group of ETT Division. The transition will be progressive, to allow users to familiarize themselves with the new pages. Until 17 April, CERN users will still get the familiar turquoise welcome page by default, with the new pages operating in parallel. From then on, the default will switch to the new pages, with the old ones being finally switched off on 25 May. Some 400 pages have received the makeover treatment. For more information about the changes to your Web, take a look at: http://www.cern.ch/CERN/NewUserPages/ Happy surfing!

  17. Query-Structure Based Web Page Indexing

    Science.gov (United States)

    2012-11-01

    Entity finding and Web page classification are among the tasks addressed. The design of highly-scalable indexing algorithms is needed, especially with an estimate of one... Content-specific queries target terms such as "Fibromyalgia" or "Lipoma"; "combining" queries are processed using primitive keywords from urls and/or titles that imply

  18. Categorization of web pages - Performance enhancement to search engine

    Digital Repository Service at National Institute of Oceanography (India)

    Lakshminarayana, S.

    search systems. Categorization of the web pages helps considerably in addressing this issue. The anatomy of the web pages, links, categorization of text, and their relations have come to be better understood over time. Search engines perform critical analysis using several inputs...

  19. Required Discussion Web Pages in Psychology Courses and Student Outcomes

    Science.gov (United States)

    Pettijohn, Terry F., II; Pettijohn, Terry F.

    2007-01-01

    We conducted 2 studies that investigated student outcomes when using discussion Web pages in psychology classes. In Study 1, we assigned 213 students enrolled in Introduction to Psychology courses to either a mandatory or an optional Web page discussion condition. Students used the discussion Web page significantly more often and performed…

  20. Network and User-Perceived Performance of Web Page Retrievals

    Science.gov (United States)

    Kruse, Hans; Allman, Mark; Mallasch, Paul

    1998-01-01

    The development of the HTTP protocol has been driven by the need to improve the network performance of the protocol by allowing the efficient retrieval of multiple parts of a web page without the need for multiple simultaneous TCP connections between a client and a server. We suggest that the retrieval of multiple page elements sequentially over a single TCP connection may result in a degradation of the perceived performance experienced by the user. We attempt to quantify this perceived degradation through the use of a model which combines a web retrieval simulation and an analytical model of TCP operation. Starting with the current HTTP/1.1 specification, we first suggest a client-side heuristic to improve the perceived transfer performance. We show that the perceived speed of the page retrieval can be increased without sacrificing data transfer efficiency. We then propose a new client/server extension to the HTTP/1.1 protocol to allow for the interleaving of page element retrievals. We finally address the issue of the display of advertisements on web pages, and in particular suggest a number of mechanisms which can make efficient use of IP multicast to send advertisements to a number of clients within the same network.
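
    The perceived-performance argument lends itself to a toy calculation. The sketch below is not the authors' simulation model; it simply contrasts when individual page elements finish arriving over a single connection, sequentially versus interleaved in fixed-size chunks, under an assumed constant bandwidth.

        def first_paint_times(sizes_kb: list[float], bandwidth_kbps: float,
                              interleave: bool) -> list[float]:
            """Toy model: completion time of each page element over one
            connection, either sequentially or in 1 KB round-robin chunks."""
            if not interleave:
                done, t = [], 0.0
                for s in sizes_kb:
                    t += s / bandwidth_kbps
                    done.append(t)
                return done
            remaining = sizes_kb[:]
            done = [0.0] * len(sizes_kb)
            t = 0.0
            while any(r > 0 for r in remaining):
                for i, r in enumerate(remaining):
                    if r <= 0:
                        continue
                    chunk = min(1.0, r)
                    t += chunk / bandwidth_kbps
                    remaining[i] = r - chunk
                    if remaining[i] <= 0:
                        done[i] = t
            return done

        # Three elements on a 64 kB/s link: interleaving lets the small
        # elements (and thus partial rendering) complete much earlier.
        print(first_paint_times([100, 5, 5], 64, interleave=False))
        print(first_paint_times([100, 5, 5], 64, interleave=True))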

  1. Developing a web page: bringing clinics online.

    Science.gov (United States)

    Peterson, Ronnie; Berns, Susan

    2004-01-01

    Introducing clinical staff education, along with new policies and procedures, to over 50 different clinical sites can be a challenge. As any staff educator will confess, getting people to attend an educational inservice session can be difficult. Clinical staff request training, but no one has time to attend training sessions. Putting the training along with the policies and other information into "neat" concise packages via the computer and over the company's intranet was the way to go. However, how do you bring the clinics online when some of the clinical staff may still be reluctant to turn on their computers for anything other than to gather laboratory results? Developing an easy, fun, and accessible Web page was the answer. This article outlines the development of the first training Web page at the University of Wisconsin Medical Foundation, Madison, WI.

  2. Measuring consistency of web page design and its effects on performance and satisfaction.

    Science.gov (United States)

    Ozok, A A; Salvendy, G

    2000-04-01

    This study examines the methods for measuring the consistency levels of web pages and the effect of consistency on the performance and satisfaction of the world-wide web (WWW) user. For clarification, a home page is referred to as a single page that is the default page of a web site on the WWW. A web page refers to a single screen that indicates a specific address on the WWW. This study has tested a series of web pages that were mostly hyperlinked. Therefore, the term 'web page' has been adopted for the nomenclature while referring to the objects of which the features were tested. It was hypothesized that participants would perform better and be more satisfied using web pages that have consistent rather than inconsistent interface design; that the overall consistency level of an interface design would significantly correlate with the three elements of consistency, physical, communicational and conceptual consistency; and that physical and communicational consistencies would interact with each other. The hypotheses were tested in a four-group, between-subject design, with 10 participants in each group. The results partially support the hypothesis regarding error rate, but not regarding satisfaction and performance time. The results also support the hypothesis that each of the three elements of consistency significantly contribute to the overall consistency of a web page, and that physical and communicational consistencies interact with each other, while conceptual consistency does not interact with them.

  3. Exploiting link structure for web page genre identification

    KAUST Repository

    Zhu, Jia

    2015-07-07

    As the World Wide Web develops at an unprecedented pace, identifying web page genre has recently attracted increasing attention because of its importance in web search. A common approach for identifying genre is to use textual features that can be extracted directly from a web page, that is, On-Page features. The extracted features are subsequently inputted into a machine learning algorithm that will perform classification. However, these approaches may be ineffective when the web page contains limited textual information (e.g., the page is full of images). In this study, we address genre identification of web pages under the aforementioned situation. We propose a framework that uses On-Page features while simultaneously considering information in neighboring pages, that is, the pages that are connected to the original page by backward and forward links. We first introduce a graph-based model called GenreSim, which selects an appropriate set of neighboring pages. We then construct a multiple classifier combination module that utilizes information from the selected neighboring pages and On-Page features to improve performance in genre identification. Experiments are conducted on well-known corpora, and favorable results indicate that our proposed framework is effective, particularly in identifying web pages with limited textual information. © 2015 The Author(s)

  4. Recovering alternative presentation models of a web page with VAQUITA

    OpenAIRE

    Bouillon, Laurent; Vanderdonckt, Jean; Souchon, Nathalie

    2002-01-01

    VAQUITA allows developers to reverse engineer a presentation model of a web page according to multiple reverse engineering options. The alternative models offered by these options not only widen the spectrum of possible presentation models but also encourage developers in exploring multiple reverse engineering strategies. The options provide filtering capabilities in a static analysis of HTML code that are targeted either at multiple widgets simultaneously or at single widgets ...

  5. Web Page Classification Method Using Neural Networks

    Science.gov (United States)

    Selamat, Ali; Omatu, Sigeru; Yanagimoto, Hidekazu; Fujinaka, Toru; Yoshioka, Michifumi

    Automatic categorization is the only viable method to deal with the scaling problem of the World Wide Web (WWW). In this paper, we propose a news web page classification method (WPCM). The WPCM uses a neural network with inputs obtained from both the principal components and class profile-based features (CPBF). Each news web page is represented by the term-weighting scheme. As the number of unique words in the collection set is large, principal component analysis (PCA) has been used to select the most relevant features for the classification. The final output of the PCA is then combined with the feature vectors from the class profile, which contains the most regular words in each class, before feeding them to the neural networks. We have manually selected the most regular words that exist in each class and weighted them using an entropy weighting scheme. The fixed number of regular words from each class is used as a feature vector together with the reduced principal components from the PCA. These feature vectors are then used as the input to the neural networks for classification. The experimental evaluation demonstrates that the WPCM method provides acceptable classification accuracy with the sports news datasets.
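
    A rough modern reconstruction of the WPCM pipeline in scikit-learn, under several simplifying assumptions: mean tf-idf class profiles stand in for the paper's entropy-weighted regular words, and the corpus and parameters are placeholders.

        import numpy as np
        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.decomposition import PCA
        from sklearn.neural_network import MLPClassifier

        docs = ["goal scored in the final", "stocks rallied today", "match ends in draw"]
        labels = ["sport", "business", "sport"]

        vec = TfidfVectorizer()
        X = vec.fit_transform(docs).toarray()

        pca = PCA(n_components=2)          # keep the leading components
        X_pca = pca.fit_transform(X)

        # Class-profile features: similarity of each document to the mean
        # tf-idf vector of each class (a stand-in for entropy weighting).
        profiles = {c: X[[i for i, l in enumerate(labels) if l == c]].mean(axis=0)
                    for c in set(labels)}
        X_cpbf = np.column_stack([X @ p for p in profiles.values()])

        clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
        clf.fit(np.hstack([X_pca, X_cpbf]), labels)
        print(clf.predict(np.hstack([X_pca, X_cpbf])[:1]))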

  6. Application of the multimedia for web page

    OpenAIRE

    Krstev, Dejan; Krstev, Aleksandar; Krstev, Boris

    2012-01-01

    Bargala is one of the most important ancient towns in Macedonia, whose name etymology is connected with the Bregalnica River. The town is located some 12 km from the town of Stip, along the Kozjacka River below the Plackovica mountain. The web page for Bargala is a unique way to represent and show what Macedonia is, and what its history and civilization were centuries earlier. The basic colour set on the background is approximately dark brown (#1D1A15), used in combination with a red colour (#9D1014). The dimension o...

  7. Google Analytics: Single Page Traffic Reports

    Science.gov (United States)

    These are pages that live outside of Google Analytics (GA) but allow you to view GA data for any individual page on either the public EPA web or EPA intranet. You do need to log in to Google Analytics to view them.

  8. WebScore: An Effective Page Scoring Approach for Uncertain Web Social Networks

    Directory of Open Access Journals (Sweden)

    Shaojie Qiao

    2011-10-01

    To effectively score pages with uncertainty in web social networks, we first propose a new concept called the transition probability matrix and formally define the uncertainty in web social networks. Second, we propose a hybrid page scoring algorithm, called WebScore, based on the PageRank algorithm and three centrality measures: degree, betweenness, and closeness. In particular, WebScore takes full consideration of the uncertainty of web social networks by computing the transition probability from one page to another. The basic idea of WebScore is to: (1) integrate uncertainty into PageRank in order to accurately rank pages, and (2) apply the centrality measures to calculate the importance of pages in web social networks. In order to verify the performance of WebScore, we developed a web social network analysis system which can partition web pages into distinct groups and score them in an effective fashion. Finally, we conducted extensive experiments on real data, and the results show that WebScore is effective at scoring uncertain pages, with less time overhead than PageRank and centrality-based page scoring algorithms.
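
    A minimal sketch of a WebScore-style hybrid score using networkx: PageRank over edge transition probabilities combined with the three centralities. The toy graph and the mixing weights are assumptions for illustration, not values from the paper.

        import networkx as nx

        G = nx.DiGraph()
        G.add_weighted_edges_from([("a", "b", 0.7), ("a", "c", 0.3),
                                   ("b", "c", 1.0), ("c", "a", 1.0)])

        pr = nx.pagerank(G, weight="weight")   # transition-probability PageRank
        deg = nx.degree_centrality(G)
        btw = nx.betweenness_centrality(G)
        clo = nx.closeness_centrality(G)

        w = (0.4, 0.2, 0.2, 0.2)               # hypothetical mixing weights
        score = {v: w[0]*pr[v] + w[1]*deg[v] + w[2]*btw[v] + w[3]*clo[v] for v in G}
        print(sorted(score.items(), key=lambda kv: kv[1], reverse=True))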

  9. Measurement of Web Usability: Web Page of Hacettepe University Department of Information Management

    OpenAIRE

    Nazan Özenç Uçak; Tolga Çakmak

    2009-01-01

    Today, information is increasingly produced in electronic form, and retrieval of information is provided via web pages. As a result of the rise in the number of web pages, many of them seem to comprise similar contents but different designs. In this respect, presenting information on web pages according to user expectations and specifications is important for the effective usage of information. This study provides insight into web usability studies that are executed for measuring...

  10. Digital Ethnography: Library Web Page Redesign among Digital Natives

    Science.gov (United States)

    Klare, Diane; Hobbs, Kendall

    2011-01-01

    Presented with an opportunity to improve Wesleyan University's dated library home page, a team of librarians employed ethnographic techniques to explore how its users interacted with Wesleyan's current library home page and web pages in general. Based on the data that emerged, a group of library staff and members of the campus' information…

  11. An Analysis of Academic Library Web Pages for Faculty

    Science.gov (United States)

    Gardner, Susan J.; Juricek, John Eric; Xu, F. Grace

    2008-01-01

    Web sites are increasingly used by academic libraries to promote key services and collections to teaching faculty. This study analyzes the content, location, language, and technological features of fifty-four academic library Web pages designed especially for faculty to expose patterns in the development of these pages.

  12. Web page sorting algorithm based on query keyword distance relation

    Science.gov (United States)

    Yang, Han; Cui, Hong Gang; Tang, Hao

    2017-08-01

    In order to optimize web page sorting, we propose a query-keyword clustering idea based on the relationships between the search keywords appearing in a web page, and convert it into a degree of aggregation of the search keywords in the page. Based on the PageRank algorithm, a clustering-degree factor for the query keywords is added so that it can participate in the quantitative calculation. This paper thus proposes an improved PageRank algorithm based on the distance relations between search keywords. The experimental results show the feasibility and effectiveness of the method.
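
    One plausible way to turn keyword distances into the "degree of aggregation" described here is the inverse mean pairwise token distance; this formula is an illustrative assumption, not necessarily the paper's.

        from itertools import combinations

        def aggregation_degree(tokens: list[str], keywords: set[str]) -> float:
            """Tighter clusters of query keywords score higher."""
            pos = [i for i, t in enumerate(tokens) if t in keywords]
            if len(pos) < 2:
                return 0.0
            dists = [abs(i - j) for i, j in combinations(pos, 2)]
            return 1.0 / (sum(dists) / len(dists))

        page = "cheap flights to rome cheap hotel deals near rome airport".split()
        print(aggregation_degree(page, {"cheap", "rome"}))

    Such a factor can then be blended with each page's PageRank value in the final ranking, as the abstract describes.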

  13. Metadata Schema Used in OCLC Sampled Web Pages

    Directory of Open Access Journals (Sweden)

    Fei Yu

    2005-12-01

    The tremendous growth of Web resources has made information organization and retrieval more and more difficult. As one approach to this problem, metadata schemas have been developed to characterize Web resources. However, many questions have been raised about the use of metadata schemas, such as: which metadata schemas have been used on the Web? How do they describe Web-accessible information? What is the distribution of these metadata schemas among Web pages? Do certain schemas dominate the others? To address these issues, this study analyzed 16,383 Web pages with meta tags extracted from 200,000 OCLC sampled Web pages in 2000. It found that only 8.19% of Web pages used meta tags; description tags, keyword tags, and Dublin Core tags were the only three schemas used in the Web pages. This article reveals the use of meta tags in terms of their function distribution, syntax characteristics, granularity of the Web pages, and the length distribution and word-number distribution of both description and keywords tags.
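
    The meta-tag census described here is straightforward to reproduce in outline. Below is a sketch using BeautifulSoup with a placeholder URL list; the schema test (description, keywords, and dc.* for Dublin Core) mirrors the three schemas the study found.

        from collections import Counter
        from bs4 import BeautifulSoup
        import urllib.request

        urls = ["https://example.com/"]        # placeholder sample
        schema_counts: Counter[str] = Counter()

        for url in urls:
            html = urllib.request.urlopen(url).read()
            soup = BeautifulSoup(html, "html.parser")
            for tag in soup.find_all("meta"):
                name = (tag.get("name") or "").lower()
                if name in ("description", "keywords") or name.startswith("dc."):
                    schema_counts[name] += 1   # description / keywords / Dublin Core

        print(schema_counts.most_common())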

  14. A teen's guide to creating web pages and blogs

    CERN Document Server

    Selfridge, Peter; Osburn, Jennifer

    2008-01-01

    Whether using a social networking site like MySpace or Facebook or building a Web page from scratch, millions of teens are actively creating a vibrant part of the Internet. This is the definitive teen's guide to publishing exciting web pages and blogs on the Web. This easy-to-follow guide shows teenagers how to: create great MySpace and Facebook pages; build their own unique, personalized Web site; share the latest news with exciting blogging ideas; and protect themselves online with cyber-safety tips. Written by a teenager for other teens, this book leads readers step-by-step through the basics of web and blog design. In this book, teens learn to go beyond clicking through web sites to learning winning strategies for web design and great ideas for writing blogs that attract attention and readership.

  15. A reverse engineering approach for automatic annotation of Web pages

    NARCIS (Netherlands)

    R. de Virgilio (Roberto); F. Frasincar (Flavius); W. Hop (Walter); S. Lachner (Stephan)

    2013-01-01

    The Semantic Web is gaining increasing interest to fulfill the need of sharing, retrieving, and reusing information. Since Web pages are designed to be read by people, not machines, searching and reusing information on the Web is a difficult task without human participation. To this aim

  16. Learning Structural Classification Rules for Web-page Categorization

    NARCIS (Netherlands)

    Stuckenschmidt, Heiner; Hartmann, Jens; Van Harmelen, Frank

    2002-01-01

    Content-related metadata plays an important role in the effort of developing intelligent web applications. One of the most established form of providing content-related metadata is the assignment of web-pages to content categories. We describe the Spectacle system for classifying individual web

  17. A thorough spring-clean for CERN's Web pages

    CERN Multimedia

    2001-01-01

    This coming Tuesday will see the unveiling of CERN's new user pages on the Web. Their simplified layout and design will make everybody's lives a whole lot easier. Stand by for Tuesday 17 April when, as announced in the Weekly Bulletin of 2 April (n°14/2001), the newly-designed users' welcome page will be hitting our screens as the default CERN home page. But don't worry, if you've got the blues for the good old blue-green home page it's still in service and, to ensure a smooth transition, will be maintained in parallel until 25 May. But in all likelihood you'll be quickly won over by the new-look pages, which are so much simpler to use. Welcome to the new Web! The aim of this revamp, led by the WPE (Web Public Education) group, is to simplify and introduce a more logical hierarchy into the menus and welcome pages on CERN's Intranet. In a second stage, the 'General Public' pages will get a similar makeover. The fact is that the number of links on the user pages, and in particular the welcome page...

  18. Is Domain Highlighting Actually Helpful in Identifying Phishing Web Pages?

    Science.gov (United States)

    Xiong, Aiping; Proctor, Robert W; Yang, Weining; Li, Ninghui

    2017-06-01

    To evaluate the effectiveness of domain highlighting in helping users identify whether Web pages are legitimate or spurious. As a component of the URL, a domain name can be overlooked. Consequently, browsers highlight the domain name to help users identify which Web site they are visiting. Nevertheless, few studies have assessed the effectiveness of domain highlighting, and the only formal study confounded highlighting with instructions to look at the address bar. We conducted two phishing detection experiments. Experiment 1 was run online: Participants judged the legitimacy of Web pages in two phases. In Phase 1, participants were to judge the legitimacy based on any information on the Web page, whereas in Phase 2, they were to focus on the address bar. Whether the domain was highlighted was also varied. Experiment 2 was conducted similarly but with participants in a laboratory setting, which allowed tracking of fixations. Participants differentiated the legitimate and fraudulent Web pages better than chance. There was some benefit of attending to the address bar, but domain highlighting did not provide effective protection against phishing attacks. Analysis of eye-gaze fixation measures was in agreement with the task performance, but heat-map results revealed that participants' visual attention was attracted by the highlighted domains. Failure to detect many fraudulent Web pages even when the domain was highlighted implies that users lacked knowledge of Web page security cues or how to use those cues. Potential applications include development of phishing prevention training incorporating domain highlighting with other methods to help users identify phishing Web pages.

  19. A Quantitative Comparison of Semantic Web Page Segmentation Approaches

    NARCIS (Netherlands)

    Kreuzer, Robert; Hage, J.; Feelders, A.J.

    2015-01-01

    We compare three known semantic web page segmentation algorithms, each serving as an example of a particular approach to the problem, and one self-developed algorithm, WebTerrain, that combines two of the approaches. We compare the performance of the four algorithms for a large benchmark of modern

  20. Page sample size in web accessibility testing: how many pages is enough?

    NARCIS (Netherlands)

    Velleman, Eric Martin; van der Geest, Thea

    2013-01-01

    Various countries and organizations use a different sampling approach and sample size of web pages in accessibility conformance tests. We are conducting a systematic analysis to determine how many pages is enough for testing whether a website is compliant with standard accessibility guidelines. This

  1. Beginning ASPNET Web Pages with WebMatrix

    CERN Document Server

    Brind, Mike

    2011-01-01

    Learn to build dynamic web sites with Microsoft WebMatrix. Microsoft WebMatrix is designed to make developing dynamic ASP.NET web sites much easier. This complete Wrox guide shows you what it is, how it works, and how to get the best from it right away. It covers all the basic foundations and also introduces HTML, CSS, and Ajax using jQuery, giving beginning programmers a firm foundation for building dynamic web sites. Examines how WebMatrix is expected to become the new recommended entry-level tool for developing web sites using ASP.NET. Arms beginning programmers, students, and educators with al

  2. Web pages of Slovenian public libraries

    Directory of Open Access Journals (Sweden)

    Silva Novljan

    2002-01-01

    Libraries should offer their patrons web sites which establish the unmistakable concept of a (public) library, a concept that cannot be mistaken for other information brokers and services available on the Internet, but which, inside this framework, shows a diversity that directs patrons to other (public) libraries. This can be achieved by reliability, quality of information and services, and safety of usage. When this is achieved, patrons regard library web sites as important reference sources deserving continuous usage for obtaining relevant information. Libraries justify investment in the development and maintenance of their web sites by the number of visits and by patron satisfaction. The presented research, made on a sample of Slovene public libraries' web sites, determines how the libraries establish their purpose and role, and how far they follow professional recommendations in web site design. The results uncover the striving of libraries for the modernisation of their functions; major attention is directed to the presentation of classic libraries and their activities, lesser attention to the expansion of available contents and electronic sources. Pointing to their diversity is significant, since it is not a result of patrons' needs but rather the consequence of improvisation and too little attention to the selection, availability, organisation and formation of different kinds of information and services on the web sites. Based on the analysis of a common concept of the public library web site, certain activities for improving the existing state of affairs are presented in the paper.

  3. Identification of Malicious Web Pages by Inductive Learning

    Science.gov (United States)

    Liu, Peishun; Wang, Xuefang

    Malicious web pages have become an increasing threat to computer systems in recent years. Traditional anti-virus techniques typically focus on detecting the static signatures of malware and are ineffective against these new threats because they cannot deal with zero-day attacks. In this paper, a novel classification method for detecting malicious web pages is presented. This method performs generalization and specialization of attack patterns based on inductive learning, which can be used for updating and expanding the knowledge database. The attack pattern is established from an example and generalized by inductive learning, so that it can be used to detect unknown attacks whose behavior is similar to the example.

  4. Identify Web-page Content meaning using Knowledge based System for Dual Meaning Words

    OpenAIRE

    Sinha, Sukanta; Dattagupta, Rana; Mukhopadhyay, Debajyoti

    2012-01-01

    The meaning of Web-page content plays a big role when a search engine produces a search result. In most cases the Web-page meaning is stored in the title or meta-tag area, but those meanings do not always match the Web-page content. To overcome this situation we need to go through the Web-page content to identify the Web-page meaning. In cases where the Web-page content holds dual-meaning words, it is really difficult to identify the meaning of the Web-page. In this paper, we are introdu...

  5. Building interactive simulations in a Web page design program.

    Science.gov (United States)

    Kootsey, J Mailen; Siriphongs, Daniel; McAuley, Grant

    2004-01-01

    A new Web software architecture, NumberLinX (NLX), has been integrated into a commercial Web design program to produce a drag-and-drop environment for building interactive simulations. NLX is a library of reusable objects written in Java, including input, output, calculation, and control objects. The NLX objects were added to the palette of available objects in the Web design program to be selected and dropped on a page. Inserting an object in a Web page is accomplished by adding a template block of HTML code to the page file. HTML parameters in the block must be set to user-supplied values, so the HTML code is generated dynamically, based on user entries in a popup form. Implementing the object inspector for each object permits the user to edit object attributes in a form window. Except for model definition, the combination of the NLX architecture and the Web design program permits construction of interactive simulation pages without writing or inspecting code.

  6. Arabic web pages clustering and annotation using semantic class features

    OpenAIRE

    Hanan M. Alghamdi; Ali Selamat; Nor Shahriza Abdul Karim

    2014-01-01

    Effectively managing the great amount of data on Arabic web pages and enabling the classification of relevant information are very important research problems. Studies on sentiment text mining have been very limited in the Arabic language because they need to involve deep semantic processing. Therefore, in this paper, we aim to retrieve machine-understandable data with the help of a Web content mining technique to detect covert knowledge within these data. We propose an approach to achieve ...

  7. In-Degree and PageRank of web pages: why do they follow similar power laws?

    NARCIS (Netherlands)

    Litvak, Nelli; Scheinhardt, Willem R.W.; Volkovich, Y.

    2009-01-01

    PageRank is a popularity measure designed by Google to rank Web pages. Experiments confirm that PageRank values obey a power law with the same exponent as In-Degree values. This paper presents a novel mathematical model that explains this phenomenon. The relation between PageRank and In-Degree is

  8. In-degree and pageRank of web pages: Why do they follow similar power laws?

    NARCIS (Netherlands)

    Litvak, Nelli; Scheinhardt, Willem R.W.; Volkovich, Y.

    The PageRank is a popularity measure designed by Google to rank Web pages. Experiments confirm that the PageRank obeys a 'power law' with the same exponent as the In-Degree. This paper presents a novel mathematical model that explains this phenomenon. The relation between the PageRank and In-Degree
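
    For reference, the PageRank recursion that both of these records analyze can be written in a standard form (the damping-factor notation c and the tail notation are conventional, not taken from the papers), together with the power-law behaviour they discuss:

        % PageRank of page i among n pages, with damping factor c and
        % out-degree d_j of page j:
        PR(i) = \frac{1-c}{n} + c \sum_{j \to i} \frac{PR(j)}{d_j}

        % Empirical finding: the tails of PageRank and In-Degree follow a
        % power law with the same exponent \alpha:
        \Pr(PR > x) \approx C \, x^{-\alpha}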

  9. What Snippets Say About Pages in Federated Web Search

    NARCIS (Netherlands)

    Demeester, Thomas; Nguyen, Dong-Phuong; Trieschnigg, Rudolf Berend; Develder, Chris; Hiemstra, Djoerd; Hou, Yuexian; Nie, Jian-Yun; Sun, Le; Wang, Bo; Zhang, Peng

    2012-01-01

    What is the likelihood that a Web page is considered relevant to a query, given the relevance assessment of the corresponding snippet? Using a new federated IR test collection that contains search results from over a hundred search engines on the internet, we are able to investigate such research

  10. Ecosystem Food Web Lift-The-Flap Pages

    Science.gov (United States)

    Atwood-Blaine, Dana; Rule, Audrey C.; Morgan, Hannah

    2016-01-01

    In the lesson on which this practical article is based, third grade students constructed a "lift-the-flap" page to explore food webs on the prairie. The moveable papercraft focused student attention on prairie animals' external structures and how the inferred functions of those structures could support further inferences about the…

  11. RDFa Primer, Embedding Structured Data in Web Pages

    NARCIS (Netherlands)

    institution W3C; M. Birbeck (Mark); not CWI et al

    2007-01-01

    Current Web pages, written in XHTML, contain inherent structured data: calendar events, contact information, photo captions, song titles, copyright licensing information, etc. When authors and publishers can express this data precisely, and when tools can read it robustly, a new world of

  12. Business Systems Branch Abilities, Capabilities, and Services Web Page

    Science.gov (United States)

    Cortes-Pena, Aida Yoguely

    2009-01-01

    During the INSPIRE summer internship I acted as the Business Systems Branch Capability Owner for the Kennedy Web-based Initiative for Communicating Capabilities System (KWICC), with the responsibility of creating a portal that describes the services provided by this Branch. This project will help others achieve a clear view of the services that the Business Systems Branch provides to NASA and the Kennedy Space Center. After collecting the data through interviews with subject matter experts and the literature in Business World and other web sites, I identified discrepancies, made the necessary corrections to the sites, and placed the information from the report into the KWICC web page.

  13. Readability of the web: a study on 1 billion web pages

    NARCIS (Netherlands)

    de Heus, Marije; Hiemstra, Djoerd

    We have performed a readability study on more than 1 billion web pages. The Automated Readability Index was used to determine the average grade level required to easily comprehend a website. Some of the results are that a 16-year-old can easily understand 50% of the web and an 18-year-old can easily
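
    The Automated Readability Index mentioned above has a simple closed form; a minimal implementation follows, with my own crude tokenization standing in for whatever preprocessing the study used.

        import re

        def automated_readability_index(text: str) -> float:
            """ARI = 4.71*(chars/words) + 0.5*(words/sentences) - 21.43;
            approximates the US grade level needed to comprehend the text."""
            words = re.findall(r"[A-Za-z0-9']+", text)
            sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
            chars = sum(len(w) for w in words)
            return 4.71 * chars / len(words) + 0.5 * len(words) / len(sentences) - 21.43

        print(automated_readability_index("The cat sat on the mat. It purred."))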

  14. Document representations for classification of short web-page descriptions

    Directory of Open Access Journals (Sweden)

    Radovanović Miloš

    2008-01-01

    Motivated by applying text categorization to the classification of Web search results, this paper describes an extensive experimental study of the impact of bag-of-words document representations on the performance of five major classifiers: Naïve Bayes, SVM, Voted Perceptron, kNN and C4.5. The texts, representing short Web-page descriptions sorted into a large hierarchy of topics, are taken from the dmoz Open Directory Web-page ontology, and classifiers are trained to automatically determine the topics which may be relevant to a previously unseen Web-page. Different transformations of the input data (stemming, normalization, logtf and idf, together with dimensionality reduction) are found to have a statistically significant improving or degrading effect on classification performance measured by classical metrics: accuracy, precision, recall, F1 and F2. The emphasis of the study is not on determining the best document representation corresponding to each classifier, but rather on describing the effects of every individual transformation on classification, together with their mutual relationships.
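
    The kind of representation experiment this abstract describes can be set up compactly in scikit-learn. The sketch below compares raw counts against log-tf/idf vectors for two of the five classifiers; the corpus, labels, and parameters are placeholders, not the study's dmoz data.

        from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer
        from sklearn.naive_bayes import MultinomialNB
        from sklearn.neighbors import KNeighborsClassifier
        from sklearn.model_selection import cross_val_score

        docs = ["open source database engine", "football world cup final",
                "relational query optimizer", "league match highlights"]
        labels = ["computers", "sports", "computers", "sports"]

        for name, vec in [("counts", CountVectorizer()),
                          ("log-tf/idf", TfidfVectorizer(sublinear_tf=True))]:
            X = vec.fit_transform(docs)
            for clf in (MultinomialNB(), KNeighborsClassifier(n_neighbors=1)):
                acc = cross_val_score(clf, X, labels, cv=2).mean()
                print(f"{name:10s} {type(clf).__name__:22s} acc={acc:.2f}")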

  15. Adaptation of web pages and images for mobile applications

    Science.gov (United States)

    Kopf, Stephan; Guthier, Benjamin; Lemelson, Hendrik; Effelsberg, Wolfgang

    2009-02-01

    In this paper, we introduce our new visualization service which presents web pages and images on arbitrary devices with differing display resolutions. We analyze the layout of a web page and simplify its structure and formatting rules. The small screen of a mobile device is used much better this way. Our new image adaptation service combines several techniques. In a first step, border regions which do not contain relevant semantic content are identified. Cropping is used to remove these regions. Attention objects are identified in a second step. We use face detection, text detection and contrast-based saliency maps to identify these objects and combine them into a region of interest. Optionally, the seam carving technique can be used to remove inner parts of an image. Additionally, we have developed a software tool to validate, add, delete, or modify all automatically extracted data. This tool also simulates different mobile devices, so that the user gets a feeling of what an adapted web page will look like. We have performed user studies to evaluate our web and image adaptation approach. Questions regarding software ergonomics, quality of the adapted content, and perceived benefit of the adaptation were asked.
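
    The attention-object step of such an adaptation pipeline can be approximated with off-the-shelf tools. The sketch below uses OpenCV Haar-cascade face detection and crops to the merged region of interest; the file names are placeholders, and the paper's text detection and saliency maps are omitted.

        import cv2

        img = cv2.imread("page_image.jpg")     # placeholder input
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
        cascade = cv2.CascadeClassifier(
            cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
        faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

        if len(faces) > 0:
            # Region of interest = bounding box of all detected attention objects.
            x0 = min(x for x, y, w, h in faces)
            y0 = min(y for x, y, w, h in faces)
            x1 = max(x + w for x, y, w, h in faces)
            y1 = max(y + h for x, y, w, h in faces)
            cv2.imwrite("adapted.jpg", img[y0:y1, x0:x1])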

  16. Arabic web pages clustering and annotation using semantic class features

    Directory of Open Access Journals (Sweden)

    Hanan M. Alghamdi

    2014-12-01

    Effectively managing the great amount of data on Arabic web pages and enabling the classification of relevant information are very important research problems. Studies on sentiment text mining have been very limited in the Arabic language because they need to involve deep semantic processing. Therefore, in this paper, we aim to retrieve machine-understandable data with the help of a Web content mining technique to detect covert knowledge within these data. We propose an approach to achieve clustering with semantic similarities. This approach comprises integrating k-means document clustering with semantic feature extraction and document vectorization to group Arabic web pages according to semantic similarities and then show the semantic annotation. The document vectorization helps to transform text documents into a semantic class probability distribution or semantic class density. To reach semantic similarities, the approach extracts the semantic class features and integrates them into the similarity weighting schema. The quality of the clustering result was evaluated using the purity and the mean intra-cluster distance (MICD) measures. We have evaluated the proposed approach on a set of common Arabic news web pages. We have acquired favorable clustering results that are effective in minimizing the MICD, expanding the purity and lowering the runtime.
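
    A short sketch of the clustering-plus-purity evaluation outlined above, with plain tf-idf vectors standing in for the paper's semantic class features and a toy corpus in place of Arabic news pages.

        from collections import Counter
        import numpy as np
        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.cluster import KMeans

        docs = ["economy inflation rates", "cup final goal",
                "market stocks bonds", "league match referee"]
        true_topics = np.array(["news", "sport", "news", "sport"])

        X = TfidfVectorizer().fit_transform(docs)
        pred = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

        # Purity: each cluster is credited with its majority true topic.
        purity = sum(Counter(true_topics[pred == c]).most_common(1)[0][1]
                     for c in set(pred)) / len(docs)
        print(f"purity = {purity:.2f}")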

  17. Detection of spam web page using content and link-based techniques

    Indian Academy of Sciences (India)

    of a Web page to rank it. Spammers try to understand the weaknesses of these models and try to manipulate the content of the target page. For example, increasing the frequencies of terms appearing in a page, repeating important terms many times on a target page, and putting all dictionary terms on a target page are some of ...

  18. Web information seeking by pages (keywords: World Wide Web, information seeking, personal development, navigation)

    Directory of Open Access Journals (Sweden)

    Jarkko Kari

    2004-01-01

    The intention of this paper is to look at how the World Wide Web is used when looking for information in the domain of personal development. The theoretical aim of the paper is to elaborate conceptual tools for better understanding the content of Web pages, as well as navigation through the Web. To obtain detailed and valid data, completely free-form Web searches by 15 individuals were observed and videotaped. The 1,812 pages visited by the informants, along with their actions therein, were examined and coded. The study explores the subject, language and content type of the viewed pages, as well as the tactics, strategies, interfaces and revisitation involved in moving from one page to another. Correlations between the variables are also analysed. One of the most interesting discoveries was the wide variety of tactics for moving around the Web, albeit that only clicking on links and pushing the Back button stood out from the rest. The paper ends by presenting sundry theoretical, methodological and practical contributions of the research to the field of Web searching.

  19. Credibility judgments in web page design - a brief review.

    Science.gov (United States)

    Selejan, O; Muresanu, D F; Popa, L; Muresanu-Oloeriu, I; Iudean, D; Buzoianu, A; Suciu, S

    2016-01-01

    Today, more than ever, it is accepted that the analysis of interface appearance is a crucial point in the field of human-computer interaction. As nowadays virtually anyone can publish information on the web, the role of credibility has grown increasingly important in relation to web-based content. Areas like trust, credibility, and behavior, coupled with overall impression and user expectation, are today in the spotlight of research, compared to the earlier period when more pragmatic areas such as usability and utility were considered. Credibility has been discussed as a theoretical construct in the field of communication in the past decades, and research revealed that people tend to evaluate the credibility of communication primarily by the communicator's expertise. Other factors involved in the content communication process are trustworthiness and dynamism, as well as various other criteria, but to a lower extent. In this brief review, factors like web page aesthetics, browsing experience and user experience are considered.

  20. Pro single page application development using Backbone.js and ASP.NET

    CERN Document Server

    Fink, Gil

    2014-01-01

    One of the most important and exciting trends in web development in recent years is the move towards single page applications, or SPAs. Instead of clicking through hyperlinks and waiting for each page to load, the user loads a site once and all the interactivity is handled fluidly by a rich JavaScript front end. If you come from a background in ASP.NET development, you'll be used to handling most interactions on the server side. Pro Single Page Application Development will guide you through your transition to this powerful new application type.The book starts in Part I by laying the groundwork

  1. Detection of spam web page using content and link-based techniques

    Indian Academy of Sciences (India)

    Web spam is a technique through which irrelevant pages get a higher rank than relevant pages in a search engine's results. Spam pages are generally insufficient and inappropriate results for the user. Many researchers are working in this area to detect spam pages. However, there is no universal efficient ...

  2. Detection of spam web page using content and link-based ...

    Indian Academy of Sciences (India)

    Web spam is a technique through which irrelevant pages get a higher rank than relevant pages in a search engine's results. Spam pages are generally insufficient and inappropriate results for the user. Many researchers are working in this area to detect spam pages. However, there is no universal efficient technique ...

  3. Internet resources and web pages for pediatric surgeons.

    Science.gov (United States)

    Lugo-Vicente, H

    2000-02-01

    The Internet, the largest network of connected computers, provides immediate, dynamic, and downloadable information. By re-architecting the workplace and becoming familiar with Internet resources, pediatric surgeons have anticipated the informatics capabilities of this computer-based technology, creating a new vision of work and organization in such areas as patient care, teaching, and research. This review aims to highlight how Internet navigational technology can be a useful educational resource in pediatric surgery, examines web pages of interest, and defines ideas of network communication. Basic Internet resources are electronic mail, discussion groups, file transfer, and the World Wide Web (WWW). Electronic mail is the most useful resource, extending the avenue of learning to an international audience through news or list-server groups. The Pediatric Surgery List Server, the most popular discussion group, is a constant forum for exchange of ideas, difficult cases, consensus on management, and development of our specialty. The WWW provides an all-in-one medium of text, image, sound, and video. Web pages of associations, departments, educational sites, organizations, peer-reviewed scientific journals and the Medline database, of prime interest to pediatric surgeons, have been developing at an amazing pace. Future technological developments nurturing our specialty will include online journals, telemedicine, international chatting, computer-based training for surgical education, and centralization of cyberspace information into database search sites.

  4. Overhaul of CERN's top-level web pages

    CERN Multimedia

    2004-01-01

    The pages for CERN users and for the general public have been given a face-lift before they become operational on the central web servers later this month. You can already inspect the new versions in their "waiting places" at: http://intranet.cern.ch/User/ and http://intranet.cern.ch/Public/ We hope you will like these improved versions, and you can report errors and omissions in the usual way ("comments and change requests" link at the bottom of the pages). The new versions will replace the existing ones at the end of the month, so you do not need to change your bookmarks or start-up URL. ETT/EC/EX

  5. Exploring Cultural Variation in Eye Movements on a Web Page between Americans and Koreans

    Science.gov (United States)

    Yang, Changwoo

    2009-01-01

    This study explored differences in eye movement on a Web page between members of two different cultures to provide insight and guidelines for implementation of global Web site development. More specifically, the research examines whether differences of eye movement exist between the two cultures (American vs. Korean) when viewing a Web page, and…

  6. [An evaluation of the quality of health web pages using a validated questionnaire].

    Science.gov (United States)

    Conesa Fuentes, Maria del Carmen; Aguinaga Ontoso, Enrique; Hernández Morante, Juan José

    2011-01-01

    The objective of the present study was to evaluate the quality of general health information in Spanish-language web pages and in the official web pages of the health services of the different Autonomous Regions. It is a cross-sectional study. We used a previously validated questionnaire to study the present state of health information on the Internet from a lay user's point of view. By means of PageRank (Google®), we obtained a group of webs comprising a total of 65 health web pages. We applied some exclusion criteria and finally obtained a total of 36 webs. We also analyzed the official web pages of the different health services in Spain (19 webs), making a total of 54 health web pages. In the light of our data, we observed that the quality of the general-information health web pages was generally rather low, especially regarding information quality. Not one page reached the maximum score (19 points). The mean score of the web pages was 9.8±2.8. In conclusion, to avoid the problems arising from this lack of quality, health professionals should design advertising campaigns and other media to teach the lay user how to evaluate information quality. Copyright © 2009 Elsevier España, S.L. All rights reserved.

  7. NUCLEAR STRUCTURE AND DECAY DATA: INTRODUCTION TO RELEVANT WEB PAGES

    International Nuclear Information System (INIS)

    BURROWS, T.W.; MCLAUGHLIN, P.D.; NICHOLS, A.L.

    2005-01-01

    A brief description is given of the nuclear data centres around the world able to provide access to those databases and programs of highest relevance to nuclear structure and decay data specialists. A number of Web-page addresses are also provided for the reader to inspect and investigate these data and codes for study, evaluation and calculation. These instructions are not meant to be comprehensive, but should provide the reader with a reasonable means of electronic access to the most important data sets and programs

  8. Near-Duplicate Web Page Detection: An Efficient Approach Using Clustering, Sentence Feature and Fingerprinting

    Directory of Open Access Journals (Sweden)

    J. Prasanna Kumar

    2013-02-01

    Duplicate and near-duplicate web pages are a chief concern for web search engines. In reality, they incur enormous space to store the indexes, ultimately slowing down and increasing the cost of serving results. A variety of techniques have been developed to identify pairs of web pages that are "similar" to each other. The problem of finding near-duplicate web pages has been a subject of research in the database and web-search communities for some years. In order to identify near-duplicate web pages, we make use of sentence-level features along with a fingerprinting method. When a large number of web documents are under consideration for the detection of near-duplicate pages, we first use k-mode clustering, and subsequently sentence-feature and fingerprint comparison. Using these steps, we exactly identify the near-duplicate web pages in an efficient manner. The experimentation is carried out on web page collections, and the results confirm the efficiency of the proposed approach in detecting near-duplicate web pages.
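
    Fingerprint comparison of this kind can be illustrated with a simhash over sentence features; the 64-bit simhash scheme below is a common choice, assumed here for illustration rather than taken from the paper.

        import hashlib

        def simhash(features: list[str], bits: int = 64) -> int:
            """Weighted bit-voting over feature hashes; similar feature sets
            yield fingerprints with small Hamming distance."""
            v = [0] * bits
            for f in features:
                h = int.from_bytes(hashlib.md5(f.encode()).digest()[:8], "big")
                for i in range(bits):
                    v[i] += 1 if (h >> i) & 1 else -1
            return sum(1 << i for i in range(bits) if v[i] > 0)

        def hamming(a: int, b: int) -> int:
            return bin(a ^ b).count("1")

        p1 = simhash("the quick brown fox. jumps over the lazy dog.".split(". "))
        p2 = simhash("the quick brown fox. leaps over the lazy dog.".split(". "))
        print(hamming(p1, p2))   # small distance => likely near-duplicates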

  9. THE NEW PURCHASING SERVICE PAGE NOW ON THE WEB!

    CERN Multimedia

    SPL Division

    2000-01-01

    Users of CERN's Purchasing Service are encouraged to visit the new Purchasing Service web page, accessible from the CERN homepage or directly at: http://spl-purchasing.web.cern.ch/spl-purchasing/ There, you will find answers to questions such as: Who are the buyers? What do I need to know before creating a DAI? How many offers do I need? Where shall I send the offer I received? I know the amount of my future requirement, how do I proceed? How are contracts adjudicated at CERN? Which exhibitions and visits of Member State companies are foreseen in the future? A company I know is interested in making a presentation at CERN, who should they contact? Additionally, you will find information concerning: The Purchasing procedures Market Surveys and Invitations to Tender The Industrial Liaison Officers appointed in each Member State The Purchasing Broker at CERN

  10. Appraisals of Salient Visual Elements in Web Page Design

    Directory of Open Access Journals (Sweden)

    Johanna M. Silvennoinen

    2016-01-01

    Full Text Available Visual elements in user interfaces elicit emotions in users and are, therefore, essential to users interacting with different software. Although there is research on the relationship between emotional experience and visual user interface design, the focus has been on the overall visual impression and not on individual visual elements. Additionally, in a software development process, programming and general usability guidelines are often considered the most important parts of the process. Therefore, knowledge of programmers' appraisals of visual elements can be utilized to understand the web page designs we interact with. In this study, appraisal theory of emotion is utilized to elaborate the relationship between emotional experience and visual elements from the programmers' perspective. Participants (N=50) used 3E-templates to express their visual and emotional experiences of web page designs. Content analysis of the textual data illustrates how emotional experiences are elicited by salient visual elements. Eight hierarchical visual element categories were found and connected to various emotions, such as frustration, boredom, and calmness, via relational emotion themes. The emotional emphasis was on centered, symmetrical, and balanced composition, which was experienced as pleasant and calming. The results benefit user-centered visual interface design and researchers of visual aesthetics in human-computer interaction.

  11. Cluster Analysis of Customer Reviews Extracted from Web Pages

    Directory of Open Access Journals (Sweden)

    S. Shivashankar

    2010-01-01

    Full Text Available As e-commerce gains popularity day by day, the web has become an excellent source for gathering customer reviews and opinions for market researchers. The number of customer reviews that a product receives is growing at a very fast rate (it could be in the hundreds or thousands). Customer reviews posted on websites vary greatly in quality, and the potential customer otherwise has to read all of them, irrespective of their quality, to decide whether or not to purchase the product. In this paper, we make an attempt to assess a review based on its quality, to help the customer make a proper buying decision. The quality of a customer review is assessed as most significant, more significant, significant, or insignificant. A novel and effective web mining technique is proposed for assessing a customer review of a particular product based on feature clustering techniques, namely, the k-means method and the fuzzy c-means method. This is performed in three steps: (1) identify review regions and extract reviews from them, (2) extract and cluster the features of reviews by a clustering technique and then assign weights to the features belonging to each of the clusters (groups), and (3) assess the review by considering the feature weights and group belongingness. The k-means and fuzzy c-means clustering techniques are implemented and tested on customer reviews extracted from web pages, and their performance is analyzed.
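
    As an illustration of step (2), a minimal k-means clustering of review feature vectors might look like the following sketch. This is illustrative only: the paper also uses fuzzy c-means and a cluster-based weighting scheme not shown here, and the reviews are made up.

```python
# Minimal sketch of clustering review features with k-means.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

reviews = [
    "battery life is excellent and charging is fast",
    "battery drains quickly, poor charging",
    "screen is bright with vivid colors",
    "display colors look washed out",
]
X = TfidfVectorizer().fit_transform(reviews)
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print(km.labels_)  # reviews grouped by shared feature vocabulary
```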

  12. Emerging Pattern-Based Clustering of Web Users Utilizing a Simple Page-Linked Graph

    Directory of Open Access Journals (Sweden)

    Xiuming Yu

    2016-03-01

    Full Text Available Web usage mining is a popular research area in data mining. With the extensive use of the Internet, it is essential to learn about the favorite web pages of its users and to cluster web users in order to understand the structural patterns of their usage behavior. In this paper, we propose an efficient approach to determining favorite web pages by generating large web pages and emerging patterns from generated simple page-linked graphs. We identify the favorite web pages of each user by eliminating noise due to overall popular pages, and we cluster web users according to the generated emerging patterns. Afterwards, we label the clusters by using Term Frequency-Inverse Document Frequency (TF-IDF). In the experiments, we evaluate the parameters used in our proposed approach, discuss the effect of the parameters on generating emerging patterns, and analyze the results of clustering web users. The results of the experiments prove that the exact patterns generated in the emerging-pattern step eliminate the need to consider noise pages, and consequently this step can improve the efficiency of subsequent mining tasks. Our proposed approach is capable of clustering web users from web log data.
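
    A small sketch of the TF-IDF labeling step (illustrative; it treats each cluster's visited-page terms as one document and picks the highest-scoring terms as the cluster label, which is one plausible reading of the abstract):

```python
# Illustrative TF-IDF cluster labeling: each cluster's page terms form one
# "document"; the top-weighted terms become the cluster's label.
import math
from collections import Counter

clusters = {
    "c1": ["sports", "football", "scores", "football", "league"],
    "c2": ["finance", "stocks", "markets", "stocks"],
}

def label(cluster_terms, all_clusters, top=2):
    tf = Counter(cluster_terms)
    n = len(all_clusters)
    def tfidf(t):
        df = sum(t in terms for terms in all_clusters.values())
        return tf[t] / len(cluster_terms) * math.log((1 + n) / (1 + df))
    return sorted(tf, key=tfidf, reverse=True)[:top]

for cid, terms in clusters.items():
    print(cid, label(terms, clusters))
```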

  13. Unlocking the Gates to the Kingdom: Designing Web Pages for Accessibility.

    Science.gov (United States)

    Mills, Steven C.

    As the use of the Web is perceived to be an effective tool for dissemination of research findings for the provision of asynchronous instruction, the issue of accessibility of Web page information will become more and more relevant. The World Wide Web consortium (W3C) has recognized a disparity in accessibility to the Web between persons with and…

  14. Improving Web Page Retrieval using Search Context from Clicked Domain Names

    NARCIS (Netherlands)

    Li, R.

    Search context is a crucial factor that helps to understand a user’s information need in ad-hoc Web page retrieval. A query log of a search engine contains rich information on issued queries and their corresponding clicked Web pages. The clicked data implies its relevance to the query and can be

  15. Social Responsibility and Corporate Web Pages: Self-Presentation or Agenda-Setting?

    Science.gov (United States)

    Esrock, Stuart L.; Leichty, Greg B.

    1998-01-01

    Examines how corporate entities use the Web to present themselves as socially responsible citizens and to advance policy positions. Samples randomly "Fortune 500" companies, revealing that, although 90% had Web pages and 82% of the sites addressed a corporate social responsibility issue, few corporations used their pages to monitor…

  16. Environment: General; Grammar & Usage; Money Management; Music History; Web Page Creation & Design.

    Science.gov (United States)

    Web Feet, 2001

    2001-01-01

    Describes Web site resources for elementary and secondary education in the topics of: environment, grammar, money management, music history, and Web page creation and design. Each entry includes an illustration of a sample page on the site and an indication of the grade levels for which it is appropriate. (AEF)

  17. Why Web Pages Annotation Tools Are Not Killer Applications? A New Approach to an Old Problem.

    Science.gov (United States)

    Ronchetti, Marco; Rizzi, Matteo

    The idea of annotating Web pages is not a new one: early proposals date back to 1994. A tool providing the ability to add notes to a Web page, and to share the notes with other users seems to be particularly well suited to an e-learning environment. Although several tools already provide such possibility, they are not widely popular. This paper…

  18. Teaching E-Commerce Web Page Evaluation and Design: A Pilot Study Using Tourism Destination Sites

    Science.gov (United States)

    Susser, Bernard; Ariga, Taeko

    2006-01-01

    This study explores a teaching method for improving business students' skills in e-commerce page evaluation and making Web design majors aware of business content issues through cooperative learning. Two groups of female students at a Japanese university studying either tourism or Web page design were assigned tasks that required cooperation to…

  19. Around power law for PageRank components in Buckley-Osthus model of web graph

    OpenAIRE

    Gasnikov, Alexander; Zhukovskii, Maxim; Kim, Sergey; Noskov, Fedor; Plaunov, Stepan; Smirnov, Daniil

    2017-01-01

    In the paper we investigate the power law for PageRank components in the Buckley-Osthus model of the web graph. We compare different numerical methods for PageRank calculation and use the best one to perform extensive numerical experiments. These experiments confirm the hypothesis about the power law. At the end we discuss a real model of web ranking based on the classical PageRank approach.

  20. AUTOMATIC TAGGING OF PERSIAN WEB PAGES BASED ON N-GRAM LANGUAGE MODELS USING MAPREDUCE

    Directory of Open Access Journals (Sweden)

    Saeed Shahrivari

    2015-07-01

    Full Text Available Page tagging is one of the most important facilities for increasing the accuracy of information retrieval on the web. Tags are simple pieces of data that usually consist of one or several words and briefly describe a page. Tags provide useful information about a page and can be used for boosting the accuracy of searching, document clustering, and result grouping. The most accurate solution to page tagging is using human experts. However, when the number of pages is large, humans cannot be used, and automatic solutions are needed instead. We propose a solution called PerTag that can automatically tag a set of Persian web pages. PerTag is based on n-gram models and uses the tf-idf method plus some effective Persian language rules to select proper tags for each web page. Since our target is huge sets of web pages, PerTag is built on top of the MapReduce distributed computing framework. We used a set of more than 500 million Persian web pages during our experiments, and extracted tags for each page using a cluster of 40 machines. The experimental results show that PerTag is both fast and accurate.
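
    A toy version of tf-idf tag selection over word n-grams follows (an illustrative sketch only: PerTag's Persian-specific rules and its MapReduce distribution are omitted, and the pages here are invented English stand-ins):

```python
# Toy tf-idf tag selection over unigrams and bigrams: the top-scoring
# n-grams for a page serve as its tags.
from sklearn.feature_extraction.text import TfidfVectorizer

pages = [
    "nuclear data centres provide access to decay data",
    "search engines rank web pages by link structure",
    "web pages are tagged with short descriptive phrases",
]
vec = TfidfVectorizer(ngram_range=(1, 2), stop_words="english")
X = vec.fit_transform(pages)
terms = vec.get_feature_names_out()
row = X[2].toarray().ravel()                 # scores for the third page
tags = [terms[i] for i in row.argsort()[::-1][:3]]
print(tags)
```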

  1. Enhancing the Ranking of a Web Page in the Ocean of Data

    Directory of Open Access Journals (Sweden)

    Hitesh KUMAR SHARMA

    2013-10-01

    Full Text Available In today's world, the web is considered an ocean of data and information (text, videos, multimedia, etc.) consisting of millions and millions of web pages, in which web pages are linked with each other like a tree. It is often argued that, especially considering the dynamics of the Internet, too much time has passed since the scientific work on PageRank for it still to be the basis of the ranking methods of the Google search engine. There is no doubt that within the past years many changes, adjustments, and modifications of Google's ranking methods have most likely taken place, but PageRank was absolutely crucial to Google's success, so at least the fundamental concept behind PageRank should still be constitutive. This paper describes the components that affect the ranking of web pages and help increase the popularity of a web site; by adapting these factors, website developers can increase their site's page rank. Within the PageRank concept, the rank of a document is given by the rank of those documents which link to it. Their rank, in turn, is given by the rank of the documents which link to them. The PageRank of a document is thus always determined recursively by the PageRank of other documents.
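
    The recursion described here is the classical PageRank formula: for a page $A$ with in-links from pages $T_1, \dots, T_n$, damping factor $d$ (commonly 0.85), and $C(T_i)$ the number of out-links of $T_i$,

$$PR(A) = (1 - d) + d \sum_{i=1}^{n} \frac{PR(T_i)}{C(T_i)}$$

    so a page's rank is high when highly ranked pages link to it, with each linking page's contribution diluted by its number of out-links.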

  2. An Improved Focused Crawler: Using Web Page Classification and Link Priority Evaluation

    Directory of Open Access Journals (Sweden)

    Houqing Lu

    2016-01-01

    Full Text Available A focused crawler is topic-specific and aims to selectively collect web pages that are relevant to a given topic from the Internet. However, the performance of current focused crawling can easily suffer from the environment of web pages and from multiple-topic web pages. In the crawling process, a highly relevant region may be ignored owing to the low overall relevance of the page, and anchor text or link context may misguide crawlers. In order to solve these problems, this paper proposes a new focused crawler. First, we build a web page classifier based on an improved term weighting approach (ITFIDF) in order to gain highly relevant web pages. In addition, this paper introduces a link evaluation approach, link priority evaluation (LPE), which combines a web page content block partition algorithm with a strategy of joint feature evaluation (JFE), to better judge the relevance between URLs on the web page and the given topic. The experimental results demonstrate that the classifier using ITFIDF outperforms TFIDF, and that our focused crawler is superior to focused crawlers based on breadth-first, best-first, anchor-text-only, link-context-only, and content block partition strategies in terms of harvest rate and target recall. In conclusion, our methods are significant and effective for focused crawling.
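
    The control loop of such a focused crawler is a best-first search over a URL frontier. The sketch below shows only the skeleton: page_relevance and link_priority are simple placeholders standing in for the paper's ITFIDF classifier and LPE evaluation, and the tiny in-memory "web" exists only so the code runs.

```python
# Best-first focused crawler skeleton (illustrative).
import heapq

def page_relevance(text, topic):
    # Placeholder: fraction of topic words present in the page text.
    return len(set(text.lower().split()) & topic) / len(topic)

def link_priority(anchor_text, topic):
    # Placeholder: score a link by topic words in its anchor text.
    return len(set(anchor_text.lower().split()) & topic)

def crawl(seed_pages, topic, fetch, max_pages=100, threshold=0.3):
    """fetch(url) -> (text, [(link_url, anchor_text), ...]); user-supplied."""
    frontier = [(-1.0, url) for url in seed_pages]  # max-heap via negation
    heapq.heapify(frontier)
    seen, harvested = set(seed_pages), []
    while frontier and len(harvested) < max_pages:
        _, url = heapq.heappop(frontier)
        text, links = fetch(url)
        if page_relevance(text, topic) >= threshold:
            harvested.append(url)
        for link, anchor in links:
            if link not in seen:
                seen.add(link)
                heapq.heappush(frontier, (-link_priority(anchor, topic), link))
    return harvested

# Tiny in-memory "web" for demonstration.
web = {
    "seed": ("space telescope launch",
             [("p1", "telescope optics"), ("p2", "cooking recipes")]),
    "p1": ("telescope mirror design and optics", []),
    "p2": ("pasta recipes", []),
}
print(crawl(["seed"], {"telescope", "optics", "mirror"}, lambda u: web[u]))
```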

  3. Teaching Materials to Enhance the Visual Expression of Web Pages for Students Not in Art or Design Majors

    Science.gov (United States)

    Ariga, T.; Watanabe, T.

    2008-01-01

    The explosive growth of the Internet has made the knowledge and skills for creating Web pages into general subjects that all students should learn. It is now common to teach the technical side of the production of Web pages, and many teaching materials have been developed. However, teaching the aesthetic side of Web page design has been neglected,…

  4. Review of Metadata Elements within the Web Pages Resulting from Searching in General Search Engines

    Directory of Open Access Journals (Sweden)

    Sima Shafi’ie Alavijeh

    2009-12-01

    Full Text Available The present investigation aimed to study the extent of the presence of Dublin Core metadata elements and HTML meta tags in web pages. Ninety web pages were chosen by searching general search engines (Google, Yahoo and MSN). The extent of the metadata elements (Dublin Core and HTML meta tags) present in these pages, as well as the existence of a significant correlation between the presence of meta elements and the type of search engine, were investigated. Findings indicated a very low presence of both Dublin Core metadata elements and HTML meta tags in the retrieved pages, which in turn illustrates the very low usage of metadata elements in web pages. Furthermore, findings indicated no significant correlation between the type of search engine used and the presence of metadata elements. From the standpoint of including metadata in the retrieval of web sources, search engines do not significantly differ from one another.
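
    A minimal sketch of how such meta elements can be detected in a page (illustrative; the inline HTML is a made-up example, and Dublin Core elements are recognized here by the conventional "DC." name prefix):

```python
# Separate Dublin Core meta elements from ordinary HTML meta tags.
from bs4 import BeautifulSoup

html = """<html><head>
<meta name="DC.Title" content="Sample record">
<meta name="keywords" content="metadata, web pages">
<title>Example</title>
</head><body></body></html>"""

soup = BeautifulSoup(html, "html.parser")
dublin_core, html_meta = [], []
for tag in soup.find_all("meta"):
    name = (tag.get("name") or "").lower()
    if not name:
        continue
    if name.startswith("dc."):
        dublin_core.append((name, tag.get("content")))
    else:
        html_meta.append((name, tag.get("content")))
print("Dublin Core:", dublin_core)
print("HTML meta:", html_meta)
```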

  5. Science on the Web: Secondary School Students' Navigation Patterns and Preferred Pages' Characteristics

    Science.gov (United States)

    Dimopoulos, Kostas; Asimakopoulos, Apostolos

    2010-06-01

    This study aims to explore the navigation patterns and preferred page characteristics of ten secondary school students searching the web for information about cloning. The students navigated the Web for as long as they wished in a context of minimal support from teaching staff. Their navigation patterns were analyzed using audit trail data software. The characteristics of their preferred Web pages were also analyzed using a scheme of analysis largely based on socio-linguistic and socio-semiotic approaches. Two distinct groups of students could be discerned. The first consisted of the more competent students, who during their navigation visited fewer relevant pages, though of higher credibility and more specialized content. The second group consisted of weaker students, who visited more pages, mainly of lower credibility and rather popularized content. Implications for designing educational web pages and for teaching are discussed.

  6. Design of an Interface for Page Rank Calculation using Web Link Attributes Information

    Directory of Open Access Journals (Sweden)

    Jeyalatha SIVARAMAKRISHNAN

    2010-01-01

    Full Text Available This paper deals with Web Structure Mining and different structure mining algorithms such as PageRank, HITS, Trust Rank and Sel-HITS, and discusses how these algorithms function. An incremental algorithm for the calculation of PageRank using an interface has been formulated. This algorithm makes use of Web link attribute information as key parameters and has been implemented using the visibility and position of a link. The application of a Web Structure Mining algorithm in an academic search application is discussed. The present work can be a useful input to Web users, faculty, students and Web administrators in a university environment.

  7. An efficient scheme for automatic web pages categorization using the support vector machine

    Science.gov (United States)

    Bhalla, Vinod Kumar; Kumar, Neeraj

    2016-07-01

    In the past few years, with the evolution of the Internet and related technologies, the number of Internet users has grown exponentially. These users demand access to relevant web pages within a fraction of a second, which requires efficient categorization of web page contents. Manual categorization of these billions of web pages with high accuracy is a challenging task, and most of the existing techniques reported in the literature are semi-automatic, so a higher level of accuracy cannot be achieved with them. To achieve these goals, this paper proposes an automatic categorization of web pages into domain categories. The proposed scheme is based on the identification of specific and relevant features of the web pages: extraction and evaluation of features are done first, followed by filtering of the feature set for the categorization of domain web pages. A feature extraction tool based on the HTML document object model of the web page is developed. Feature extraction and weight assignment are based on a collection of domain-specific keywords developed by considering various domain pages, and the keyword list is reduced on the basis of the ids of the keywords in the list. Also, stemming of keywords and tag text is done to achieve higher accuracy. An extensive feature set is generated to develop a robust classification technique. The proposed scheme was evaluated using a machine learning method in combination with feature extraction and statistical analysis, using a support vector machine kernel as the classification tool. The results obtained confirm the effectiveness of the proposed scheme in terms of its accuracy across different categories of web pages.
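
    In its simplest form, SVM-based page categorization looks like the sketch below (a minimal illustration with plain TF-IDF features and invented training snippets; the paper's DOM-based feature extraction and keyword weighting are not reproduced here):

```python
# Minimal web page categorization with a linear SVM.
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC

pages = [
    "stock market shares quarterly earnings report",
    "league match goal scored in final minutes",
    "central bank raises interest rates",
    "tournament semifinal players and coach",
]
labels = ["finance", "sports", "finance", "sports"]

clf = make_pipeline(TfidfVectorizer(), LinearSVC())
clf.fit(pages, labels)
print(clf.predict(["bank announces earnings"]))  # expected: ['finance']
```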

  8. JavaScript and interactive web pages in radiology.

    Science.gov (United States)

    Gurney, J W

    2001-10-01

    Web publishing is becoming a more common method of disseminating information. JavaScript is an object-oriented language embedded in modern browsers and has a wide variety of uses. The use of JavaScript in radiology is illustrated by calculating the indices of sensitivity, specificity, and predictive values from a table of true positives, true negatives, false positives, and false negatives. In addition, a single line of JavaScript code can be used to annotate images, which has a wide variety of uses.
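
    For reference, the indices computed from a 2x2 table of true positives (TP), true negatives (TN), false positives (FP), and false negatives (FN) are the standard ones:

$$\text{sensitivity} = \frac{TP}{TP + FN}, \qquad \text{specificity} = \frac{TN}{TN + FP}$$

$$\text{PPV} = \frac{TP}{TP + FP}, \qquad \text{NPV} = \frac{TN}{TN + FN}$$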

  9. Future Trends in Children's Web Pages: Probing Hidden Biases for Information Quality

    Science.gov (United States)

    Kurubacak, Gulsun

    2007-01-01

    As global digital communication continues to flourish, Children's Web pages become more critical for children to realize not only the surface but also breadth and deeper meanings in presenting these milieus. These pages not only are very diverse and complex but also enable intense communication across social, cultural and political restrictions…

  10. Design of a Web Page as a complement of educative innovation through MOODLE

    Science.gov (United States)

    Mendiola Ubillos, M. A.; Aguado Cortijo, Pedro L.

    2010-05-01

    In the context of using information technology to impart knowledge, and to establish the MOODLE system as a support and complementary tool to the on-site educational methodology (b-learning), a Web page was designed for Agronomic and Food Industry Crops (Plantas de interés Agroalimentario) during the 2006-07 course. This web was inserted into the Technical University of Madrid (Universidad Politécnica de Madrid) computer system to give students a first contact with the contents of this subject. On this page, the objectives and methodology, personal work planning, and the subject program plus activities are shown. At another location, the evaluation criteria and recommended bibliography are given. The objective of this web page has been to make the information necessary for the learning process more transparent and accessible, and to present it in a more attractive frame. The page has been updated and modified in each academic course offered since its first implementation, in some cases adding new specific links to increase its usefulness. At the end of each course a survey is given to the students taking this subject, asking which elements they would like to modify, delete, or add to the web page. In this way the direct users give their point of view and help improve the web page each course.

  11. How Useful are Orthopedic Surgery Residency Web Pages?

    Science.gov (United States)

    Oladeji, Lasun O; Yu, Jonathan C; Oladeji, Afolayan K; Ponce, Brent A

    2015-01-01

    Medical students interested in orthopedic surgery residency positions frequently use the Internet as a modality to gather information about individual residency programs. Students often invest a painstaking amount of time and effort in determining the programs that interest them, and the Internet is central to this process. Numerous studies have concluded that program websites are a valuable resource for residency and fellowship applicants. The purpose of the present study was to provide an update on the web pages of academic orthopedic surgery departments in the United States and to rate their utility in providing information on quality of education, faculty and resident information, environment, and applicant information. We reviewed existing websites for the 156 departments or divisions of orthopedic surgery that are currently accredited for resident education by the Accreditation Council for Graduate Medical Education. Each website was assessed for quality of information regarding quality of education, faculty and resident information, environment, and applicant information. We noted that 152 of the 156 departments (97%) had functioning websites that could be accessed. There was high variability regarding the comprehensiveness of orthopedic residency websites. Most of the orthopedic websites provided information on conferences, didactics, and resident rotations. Less than 50% of programs provided information on resident call schedules, resident or faculty research and publications, resident hometowns, or resident salary. There is a lack of consistency regarding the content presented on orthopedic residency websites. As competition continues to increase, applicants flock to the Internet in greater numbers to learn more about residency programs. A well-constructed website has the potential to increase the caliber of students applying to a given program. Copyright © 2015 Association of Program Directors in Surgery. Published by

  12. Modeling user navigation behavior in web by colored Petri nets to determine the user's interest in recommending web pages

    Directory of Open Access Journals (Sweden)

    Mehdi Sadeghzadeh

    2013-01-01

    Full Text Available One of the existing challenges in web personalization is increasing the efficiency of a web site in optimally meeting users' requirements for the content they seek. All the information associated with the current user's behavior on the web, together with data obtained from previous users' interactions, can provide the keys needed to recommend services, products, and the information users require. This study presents a formal model based on colored Petri nets to identify the current user's interest, which is then utilized to recommend the most appropriate pages ahead. In the proposed design, page recommendation takes into account information obtained from previous users' profiles as well as the current session of the present user, and the model offers updated page suggestions as the user clicks through the web. Moreover, an example web site is modeled using CPN Tools. The results of the simulation show that this design improves precision: evaluation indicates that the results of this method are more objective, and the dynamic recommendations improve the precision criterion by 15% compared with the static method.

  13. Search Engine Ranking, Quality, and Content of Web Pages That Are Critical Versus Noncritical of Human Papillomavirus Vaccine.

    Science.gov (United States)

    Fu, Linda Y; Zook, Kathleen; Spoehr-Labutta, Zachary; Hu, Pamela; Joseph, Jill G

    2016-01-01

    Online information can influence attitudes toward vaccination. The aim of the present study was to provide a systematic evaluation of the search engine ranking, quality, and content of Web pages that are critical versus noncritical of human papillomavirus (HPV) vaccination. We identified HPV vaccine-related Web pages with the Google search engine by entering 20 terms. We then assessed each Web page for critical versus noncritical bias and for the following quality indicators: authorship disclosure, source disclosure, attribution of at least one reference, currency, exclusion of testimonial accounts, and a readability level less than ninth grade. We also determined Web page comprehensiveness in terms of mention of 14 HPV vaccine-relevant topics. Twenty searches yielded 116 unique Web pages. HPV vaccine-critical Web pages comprised roughly a third of the top-, top 5-, and top 10-ranking Web pages. The prevalence of HPV vaccine-critical Web pages was higher for queries that included term modifiers in addition to root terms. Web pages critical of HPV vaccine overall had a lower quality score than those with a noncritical bias. They required viewers to have higher reading skills, were less likely to include an author byline, and were more likely to include testimonial accounts. They also were more likely to raise unsubstantiated concerns about vaccination. Web pages critical of HPV vaccine may be frequently returned and highly ranked by search engine queries despite being of lower quality and less comprehensive than noncritical Web pages. Copyright © 2016 Society for Adolescent Health and Medicine. Published by Elsevier Inc. All rights reserved.

  14. JavaScript: Convenient Interactivity for the Class Web Page.

    Science.gov (United States)

    Gray, Patricia

    This paper shows how JavaScript can be used within HTML pages to add interactive review sessions and quizzes incorporating graphics and sound files. JavaScript has the advantage of providing basic interactive functions without the use of separate software applications and players. Because it can be part of a standard HTML page, it is…

  15. Project Management - Development of course materiale as WEB pages

    DEFF Research Database (Denmark)

    Thorsteinsson, Uffe; Bjergø, Søren

    1997-01-01

    Development of Internet pages with lesson plans, slideshows, links, a conference system, and an interactive student section for communication among students and with the teacher.

  16. The Recognition of Web Pages' Hyperlinks by People with Intellectual Disabilities: An Evaluation Study

    Science.gov (United States)

    Rocha, Tania; Bessa, Maximino; Goncalves, Martinho; Cabral, Luciana; Godinho, Francisco; Peres, Emanuel; Reis, Manuel C.; Magalhaes, Luis; Chalmers, Alan

    2012-01-01

    Background: One of the most mentioned problems of web accessibility, as recognized in several different studies, is related to the difficulty regarding the perception of what is or is not clickable in a web page. In particular, a key problem is the recognition of hyperlinks by a specific group of people, namely those with intellectual…

  17. Science on the Web: Secondary School Students' Navigation Patterns and Preferred Pages' Characteristics

    Science.gov (United States)

    Dimopoulos, Kostas; Asimakopoulos, Apostolos

    2010-01-01

    This study aims to explore navigation patterns and preferred pages' characteristics of ten secondary school students searching the web for information about cloning. The students navigated the Web for as long as they wished in a context of minimum support of teaching staff. Their navigation patterns were analyzed using audit trail data software.…

  18. Lost but not forgotten: finding pages on the unarchived web

    NARCIS (Netherlands)

    H.C. Huurdeman; J. Kamps; T. Samar (Thaer); A.P. de Vries (Arjen); A. Ben-David; R.A. Rogers (Richard)

    2015-01-01

    Web archives attempt to preserve the fast changing web, yet they will always be incomplete. Due to restrictions in crawling depth, crawling frequency, and restrictive selection policies, large parts of the Web are unarchived and, therefore, lost to posterity. In this paper, we propose an

  19. Virtual real-time inspection of nuclear material via VRML and secure web pages

    International Nuclear Information System (INIS)

    Nilsen, C.; Jortner, J.; Damico, J.; Friesen, J.; Schwegel, J.

    1997-04-01

    Sandia National Laboratories' Straight Line project is working to provide the right sensor information to the right user to enhance the safety, security, and international accountability of nuclear material. One of Straight Line's efforts is to create a system to securely disseminate this data on the Internet's World-Wide-Web. To make the user interface more intuitive, Sandia has generated a three dimensional VRML (virtual reality modeling language) interface for a secure web page. This paper will discuss the implementation of the Straight Line secure 3-D web page. A discussion of the "pros and cons" of a 3-D web page is also presented. The public VRML demonstration described in this paper can be found on the Internet at the following address: http://www.ca.sandia.gov/NMM/. A Netscape browser, version 3, is strongly recommended.

  20. Virtual real-time inspection of nuclear material via VRML and secure web pages

    International Nuclear Information System (INIS)

    Nilsen, C.; Jortner, J.; Damico, J.; Friesen, J.; Schwegel, J.

    1996-01-01

    Sandia National Laboratories' Straight-Line project is working to provide the right sensor information to the right user to enhance the safety, security, and international accountability of nuclear material. One of Straight-Line's efforts is to create a system to securely disseminate this data on the Internet's World-Wide-Web. To make the user interface more intuitive, Sandia has generated a three dimensional VRML (virtual reality modeling language) interface for a secure web page. This paper will discuss the implementation of the Straight-Line secure 3-D web page. A discussion of the pros and cons of a 3-D web page is also presented. The public VRML demonstration described in this paper can be found on the Internet at this address: http://www.ca.sandia.gov/NMM/. A Netscape browser, version 3, is strongly recommended.

  1. Virtual real-time inspection of nuclear material via VRML and secure web pages

    Energy Technology Data Exchange (ETDEWEB)

    Nilsen, C.; Jortner, J.; Damico, J.; Friesen, J.; Schwegel, J.

    1997-04-01

    Sandia National Laboratories' Straight Line project is working to provide the right sensor information to the right user to enhance the safety, security, and international accountability of nuclear material. One of Straight Line's efforts is to create a system to securely disseminate this data on the Internet's World-Wide-Web. To make the user interface more intuitive, Sandia has generated a three dimensional VRML (virtual reality modeling language) interface for a secure web page. This paper will discuss the implementation of the Straight Line secure 3-D web page. A discussion of the "pros and cons" of a 3-D web page is also presented. The public VRML demonstration described in this paper can be found on the Internet at the following address: http://www.ca.sandia.gov/NMM/. A Netscape browser, version 3, is strongly recommended.

  2. Identification of the unidentified deceased and locating next of kin: experience with a UID web site page, Fulton County, Georgia.

    Science.gov (United States)

    Hanzlick, Randy

    2006-06-01

    Medical examiner and coroner offices may face difficulties in trying to achieve identification of deceased persons who are unidentified or in locating next of kin for deceased persons who have been identified. The Fulton County medical examiner (FCME) has an office web site which includes information about unidentified decedents and cases for which next of kin are being sought. Information about unidentified deceased and cases in need of next of kin has been posted on the FCME web site for 3 years and 1 year, respectively. FCME investigators and staff medical examiners were surveyed about the web site's usefulness for making identifications and locating next of kin. No cases were recalled in which the web site led to making an identification. Two cases were reported in which next of kin were located, and another case involved a missing person being ruled out as one of the decedents. The web site page is visited by agencies interested in missing and unidentified persons, and employees do find it useful for follow-up because information about all unidentified decedents is located and easily accessible, electronically, in a single location. Despite low yield in making identifications and locating next of kin, the UID web site is useful in some respects, and there is no compelling reason to discontinue its existence. It is proposed that UID pages on office web sites be divided into "hot" (less than 30 days, for example) and "warm" (31 days to 1 year, for example) cases and that cases older than a year be designated as "cold cases." It is conceivable that all unidentified deceased cases nationally could be placed on a single web site designed for such purposes, to remain in public access until identity is established and confirmed.

  3. An ant colony optimization based feature selection for web page classification.

    Science.gov (United States)

    Saraç, Esra; Özel, Selma Ayşe

    2014-01-01

    The increased popularity of the web has caused the inclusion of a huge amount of information on the web, and as a result of this explosive information growth, automated web page classification systems are needed to improve search engines' performance. Web pages have a large number of features, such as HTML/XML tags, URLs, hyperlinks, and text contents, that should be considered during an automated classification process. The aim of this study is to reduce the number of features used, in order to improve the runtime and accuracy of web page classification. In this study, we used an ant colony optimization (ACO) algorithm to select the best features, and then applied the well-known C4.5, naive Bayes, and k-nearest neighbor classifiers to assign class labels to web pages. We used the WebKB and Conference datasets in our experiments, and we showed that using ACO for feature selection improves both the accuracy and the runtime performance of classification. We also showed that the proposed ACO-based algorithm can select better features than the well-known information gain and chi-square feature selection methods.
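
    The flavor of ACO-based feature selection can be conveyed with a deliberately simplified sketch (not the authors' algorithm): ants sample feature subsets with probability proportional to pheromone, subsets are scored by a classifier, and pheromone is reinforced on features that appear in good subsets. All data and parameters below are invented for illustration.

```python
# Simplified ACO feature-selection sketch scored by a k-NN classifier.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.random((100, 20))                  # placeholder data
y = (X[:, 3] + X[:, 7] > 1).astype(int)    # only features 3 and 7 matter

n_features, n_ants, n_iters, subset_size = X.shape[1], 10, 15, 5
pheromone = np.ones(n_features)

def score(subset):
    clf = KNeighborsClassifier(n_neighbors=3)
    return cross_val_score(clf, X[:, subset], y, cv=3).mean()

best_subset, best_score = None, -1.0
for _ in range(n_iters):
    for _ in range(n_ants):
        probs = pheromone / pheromone.sum()
        subset = rng.choice(n_features, size=subset_size, replace=False, p=probs)
        s = score(subset)
        pheromone[subset] += s             # reinforce features in good subsets
        if s > best_score:
            best_subset, best_score = subset, s
    pheromone *= 0.9                       # evaporation
print(sorted(best_subset), round(best_score, 3))
```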

  4. Personal Web home pages of adolescents with cancer: self-presentation, information dissemination, and interpersonal connection.

    Science.gov (United States)

    Suzuki, Lalita K; Beale, Ivan L

    2006-01-01

    The content of personal Web home pages created by adolescents with cancer is a new source of information about this population of potential benefit to oncology nurses and psychologists. Individual Internet elements found on 21 home pages created by youths with cancer (14-22 years old) were rated for cancer-related self-presentation, information dissemination, and interpersonal connection. Examples of adolescents' online narratives were also recorded. Adolescents with cancer used various Internet elements on their home pages for cancer-related self-presentation (e.g., welcome messages, essays, personal history and diary pages, news articles, and poetry), information dissemination (e.g., through personal interest pages, multimedia presentations, lists, charts, and hyperlinks), and interpersonal connection (e.g., guestbook entries). Results suggest that various elements found on personal home pages are being used by a limited number of young patients with cancer for self-expression, information access, and contact with peers.

  5. Applying a Single Page Application to the Online Entry of Student Study Plan Data (Penerapan Single Page Application pada Proses Pengisian Online Data Rencana Studi Mahasiswa)

    Directory of Open Access Journals (Sweden)

    Aryani Ristyabudi

    2016-06-01

    Full Text Available Universitas Muhammadiyah Surakarta (UMS) has built a web-based application for entering students' study plan data and has used it for more than a decade. This paper describes a study considering the application of the Single Page Application (SPA) concept to that application. SPA is a recent technique that uses a single web page for the several steps that make up one unified activity. SPA was applied to the study plan entry process using current web technologies, namely HTML5 and AngularJS. The performance of the new application was measured by observing the amount of data transferred and the time needed to complete the study plan entry for one student, with measurements made using Wireshark. The test results show that study plan entry with the SPA application requires less than one tenth of the data transfer of the application without SPA. The SPA application also reduces the total time needed for the entry process to one third of the time required without SPA.

  6. Searchers' relevance judgments and criteria in evaluating Web pages in a learning style perspective

    DEFF Research Database (Denmark)

    Papaeconomou, Chariste; Zijlema, Annemarie F.; Ingwersen, Peter

    2008-01-01

    The paper presents the results of a case study of searchers' relevance criteria used for assessments of Web pages from a learning-style perspective. Fifteen test persons participated in the experiments, based on two simulated work tasks that provided cover stories to trigger their information needs. Two learning styles were examined: Global and Sequential learners. The study applied eye-tracking for the observation of relevance hot spots on Web pages, learning style index analysis, and post-search interviews to gain more in-depth information on relevance behavior. Findings reveal that differences between the learning styles in the use of relevance criteria are statistically insignificant. When interviewed in retrospect, the resulting profiles tend to become even more similar across learning styles, but a shift occurs from instant assessments, with content features of web pages replacing topicality judgments as the predominant relevance criteria.

  7. Reactor Engineering Division Material for World Wide Web Pages

    International Nuclear Information System (INIS)

    1996-01-01

    This document presents the home page of the Reactor Engineering Division of Argonne National Laboratory. This WWW site describes the activities of the Division and provides an introduction to its wide variety of programs and samples of the results of research by people in the Division.

  8. A Web Page That Provides Map-Based Interfaces for VRML/X3D Contents

    Science.gov (United States)

    Miyake, Yoshihiro; Suzaki, Kenichi; Araya, Shinji

    The electronic map is very useful for navigation in VRML/X3D virtual environments. So far, various map-based interfaces have been developed, but they lack generality because they have been developed separately for individual VRML/X3D contents, so users must use different interfaces for different contents. Therefore, we have developed a web page that provides a common map-based interface for VRML/X3D contents on the web. Users access VRML/X3D contents via the web page, which automatically generates a simplified map by analyzing the scene graph of the downloaded contents and embeds the mechanism that links the virtual world and the map. An avatar is automatically created and added to the map, and the user and its avatar are bi-directionally linked together. In the simplified map, obstructive objects are removed and the other objects are replaced by base boxes. This paper proposes the architecture of the web page and the method to generate simplified maps. Finally, an experimental system is developed in order to show the improvement in frame rates achieved by simplifying the map.

  9. MPEG-7 low level image descriptors for modeling users' web pages visual appeal opinion

    OpenAIRE

    Uribe Mayoral, Silvia; Alvarez Garcia, Federico; Menendez Garcia, Jose Manuel

    2015-01-01

    The study of the users' web pages first impression is an important factor for interface designers, due to its influence over the final opinion about a site. In this regard, the analysis of web aesthetics can be considered as an interesting tool for evaluating this early impression, and the use of low level image descriptors for modeling it in an objective way represents an innovative research field. According to this, in this paper we present a new model for website aesthetics evaluation and ...

  10. SChiSM2: creating interactive web page annotations of molecular structure models using Jmol.

    Science.gov (United States)

    Cammer, Stephen

    2007-02-01

    SChiSM2 is a web server-based program for creating web pages that include interactive molecular graphics using the freely-available applet, Jmol, for illustration. The program works with Internet Explorer and Firefox on Windows, Safari and Firefox on Mac OSX and Firefox on Linux. The program can be accessed at the following address: http://ci.vbi.vt.edu/cammer/schism2.html.

  11. Is This Information Source Commercially Biased? How Contradictions between Web Pages Stimulate the Consideration of Source Information

    Science.gov (United States)

    Kammerer, Yvonne; Kalbfell, Eva; Gerjets, Peter

    2016-01-01

    In two experiments we systematically examined whether contradictions between two web pages--of which one was commercially biased as stated in an "about us" section--stimulated university students' consideration of source information both during and after reading. In Experiment 1 "about us" information of the web pages was…

  12. Research of Subgraph Estimation Page Rank Algorithm for Web Page Rank

    Directory of Open Access Journals (Sweden)

    LI Lan-yin

    2017-04-01

    Full Text Available The traditional PageRank algorithm cannot efficiently handle the problem of ranking web pages at large scale. This paper proposes an accelerated algorithm named topK-Rank, based on PageRank, for the MapReduce platform. It can find the top k nodes efficiently for a given graph without sacrificing accuracy. In order to identify the top k nodes, the topK-Rank algorithm prunes unnecessary nodes and edges in each iteration to dynamically construct subgraphs, and iteratively estimates lower/upper bounds of PageRank scores through the subgraphs. Theoretical analysis shows that this method guarantees exact results. Experiments show that the topK-Rank algorithm can find the top k nodes much faster than existing approaches.
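
    For orientation, below is the plain power-iteration PageRank baseline that methods like topK-Rank accelerate through pruning and bound estimation (a minimal sketch; the graph and iteration count are illustrative):

```python
# Minimal power-iteration PageRank over an adjacency-list graph.
def pagerank(graph, d=0.85, iters=50):
    """graph: dict mapping node -> list of outgoing-link targets."""
    nodes = list(graph)
    n = len(nodes)
    rank = {u: 1.0 / n for u in nodes}
    for _ in range(iters):
        new = {u: (1.0 - d) / n for u in nodes}
        for u, targets in graph.items():
            if targets:
                share = d * rank[u] / len(targets)
                for v in targets:
                    new[v] += share
            else:  # dangling node: spread its rank uniformly
                for v in nodes:
                    new[v] += d * rank[u] / n
        rank = new
    return rank

g = {"a": ["b", "c"], "b": ["c"], "c": ["a"], "d": ["c"]}
top_k = sorted(pagerank(g).items(), key=lambda kv: -kv[1])[:2]
print(top_k)
```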

  13. The Impact of Salient Advertisements on Reading and Attention on Web Pages

    Science.gov (United States)

    Simola, Jaana; Kuisma, Jarmo; Oorni, Anssi; Uusitalo, Liisa; Hyona, Jukka

    2011-01-01

    Human vision is sensitive to salient features such as motion. Therefore, animation and onset of advertisements on Websites may attract visual attention and disrupt reading. We conducted three eye tracking experiments with authentic Web pages to assess whether (a) ads are efficiently ignored, (b) ads attract overt visual attention and disrupt…

  14. A construction scheme of web page comment information extraction system based on frequent subtree mining

    Science.gov (United States)

    Zhang, Xiaowen; Chen, Bingfeng

    2017-08-01

    Based on a frequent subtree mining algorithm, this paper proposes a construction scheme for a web page comment information extraction system, referred to as the FSM system. The overall system architecture and its modules are briefly introduced, the core of the system is then described in detail, and finally a system prototype is given.

  15. A Literature Review of Academic Library Web Page Studies

    Science.gov (United States)

    Blummer, Barbara

    2007-01-01

    In the early 1990s, numerous academic libraries adopted the web as a communication tool with users. The literature on academic library websites includes research on both design and navigation. Early studies typically focused on design characteristics, since websites initially merely provided information on the services and collections available in…

  16. Development of portal Web pages for the LHD experiment

    International Nuclear Information System (INIS)

    Emoto, Masahiko; Funaba, Hisamichi; Nakanishi, Hideya; Iwata, Chie; Yoshida, Masanori; Nagayama, Yoshio

    2011-01-01

    Because the LHD project has been operating with the cooperation of many institutes in Japan, remote participation facilities play an important role, and NIFS has been introducing these facilities to its remote participants. Because the authors regard Web services as essential tools for current Internet communication, Web services for remote participation have been developed. However, because these services are dispersed among several servers at NIFS, users cannot find the required services easily. Therefore, the authors developed a portal Web server to list the existing and new Web services for the LHD experiment. The server provides services such as a summary graph and plasma movie of the last plasma discharge, daily experiment logs, and daily experiment schedules. One of the most important of these services is the summary graph. Usually, plasma discharges in the LHD experiment are executed every three minutes. Between discharges, the summary graph of the last plasma discharge is displayed on the front screen in the control room soon after the discharge is complete. The graph is useful in evaluating the last discharge, which is important information for determining the subsequent experiment schedule. It is therefore necessary to display the summary graph, which plots data from more than 10 diagnostics, as soon as possible. On the other hand, the data-appearance time varies from one diagnostic to another. To display the graph faster, the new system retrieves the data asynchronously: several data retrieval processes work simultaneously, and the system plots the data all at once. (author)
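
    The asynchronous retrieval pattern described here can be sketched as follows (a minimal illustration only; fetch_diagnostic is a hypothetical stand-in, since the real system's interfaces are not specified in the abstract):

```python
# Fetch all diagnostics concurrently, then plot once everything has arrived.
import asyncio, random

async def fetch_diagnostic(name):
    # Hypothetical stand-in for one diagnostic's data retrieval; the
    # data-appearance time varies per diagnostic, as in the abstract.
    await asyncio.sleep(random.uniform(0.1, 1.0))
    return name, [random.random() for _ in range(5)]

async def gather_summary(diagnostics):
    results = await asyncio.gather(*(fetch_diagnostic(d) for d in diagnostics))
    return dict(results)

data = asyncio.run(gather_summary(["Wp", "ne_bar", "Te0", "Prad"]))
print(sorted(data))
```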

  17. Automatic Removal of Advertising from Web-Page Display / Extended Abstract

    OpenAIRE

    Rowe, Neil C.; Coffman, Jim; Degirmenci, Yilmaz; Hall, Scott; Lee, Shong; Williams, Clifton

    2002-01-01

    Joint Conference on Digital Libraries ’02, July 8-12, Portland, Oregon. The usefulness of the World Wide Web as a digital library of precise and reliable information is reduced by the increasing presence of advertising on Web pages. But no one is required to read or see advertising, and this cognitive censorship can be automated by software. Such filters can be useful to the U.S. government which must permit its employees to use the Web but which is prohibited by law from endorsing c...

  18. PSB goes personal: The failure of personalised PSB web pages

    Directory of Open Access Journals (Sweden)

    Jannick Kirk Sørensen

    2013-08-01

    Full Text Available Between 2006 and 2011, a number of European public service broadcasting (PSB organisations offered their website users the opportunity to create their own PSB homepage. The web customisation was conceived by the editors as a response to developments in commercial web services, particularly social networking and content aggregation services, but the customisation projects revealed tensions between the ideals of customer sovereignty and the editorial agenda-setting. This paper presents an overview of the PSB activities as well as reflections on the failure of the customisable PSB homepages. The analysis is based on interviews with the PSB editors involved in the projects and on studies of the interfaces and user comments. Commercial media customisation is discussed along with the PSB projects to identify similarities and differences.

  19. PSB goes personal: The failure of personalised PSB web pages

    Directory of Open Access Journals (Sweden)

    Jannick Kirk Sørensen

    2013-12-01

    Full Text Available Between 2006 and 2011, a number of European public service broadcasting (PSB organisations offered their website users the opportunity to create their own PSB homepage. The web customisation was conceived by the editors as a response to developments in commercial web services, particularly social networking and content aggregation services, but the customisation projects revealed tensions between the ideals of customer sovereignty and the editorial agenda-setting. This paper presents an overview of the PSB activities as well as reflections on the failure of the customisable PSB homepages. The analysis is based on interviews with the PSB editors involved in the projects and on studies of the interfaces and user comments. Commercial media customisation is discussed along with the PSB projects to identify similarities and differences.

  20. The ATLAS Public Web Pages: Online Management of HEP External Communication Content

    CERN Document Server

    Goldfarb, Steven; Phoboo, Abha Eli; Shaw, Kate

    2015-01-01

    The ATLAS Education and Outreach Group is in the process of migrating its public online content to a professionally designed set of web pages built on the Drupal content management system. Development of the front-end design passed through several key stages, including audience surveys, stakeholder interviews, usage analytics, and a series of fast design iterations, called sprints. Implementation of the web site involves application of the html design using Drupal templates, refined development iterations, and the overall population of the site with content. We present the design and development processes and share the lessons learned along the way, including the results of the data-driven discovery studies. We also demonstrate the advantages of selecting a back-end supported by content management, with a focus on workflow. Finally, we discuss usage of the new public web pages to implement outreach strategy through implementation of clearly presented themes, consistent audience targeting and messaging, and th...

  1. A Survey on PageRank Computing

    OpenAIRE

    Berkhin, Pavel

    2005-01-01

    This survey reviews the research related to PageRank computing. Components of a PageRank vector serve as authority weights for web pages independent of their textual content, based solely on the hyperlink structure of the web. PageRank is typically used as a web search ranking component. This defines the importance of the model and the data structures that underlie PageRank processing. Computing even a single PageRank is a difficult computational task. Computing many PageRanks is a much mor...

  2. What Can Pictures Tell Us About Web Pages? Improving Document Search Using Images.

    Science.gov (United States)

    Rodriguez-Vaamonde, Sergio; Torresani, Lorenzo; Fitzgibbon, Andrew W

    2015-06-01

    Traditional Web search engines do not use the images in the HTML pages to find relevant documents for a given query. Instead, they typically operate by computing a measure of agreement between the keywords provided by the user and only the text portion of each page. In this paper we study whether the content of the pictures appearing in a Web page can be used to enrich the semantic description of an HTML document and consequently boost the performance of a keyword-based search engine. We present a Web-scalable system that exploits a pure text-based search engine to find an initial set of candidate documents for a given query. Then, the candidate set is reranked using visual information extracted from the images contained in the pages. The resulting system retains the computational efficiency of traditional text-based search engines with only a small additional storage cost needed to encode the visual information. We test our approach on one of the TREC Million Query Track benchmarks where we show that the exploitation of visual content yields improvement in accuracies for two distinct text-based search engines, including the system with the best reported performance on this benchmark. We further validate our approach by collecting document relevance judgements on our search results using Amazon Mechanical Turk. The results of this experiment confirm the improvement in accuracy produced by our image-based reranker over a pure text-based system.
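
    The general shape of such a reranking stage is late fusion of a text score and a visual score (a generic sketch under that assumption, not the paper's learned model; the candidate scores below are invented):

```python
# Late-fusion reranking: combine a text-retrieval score with a visual score
# computed from a page's images, then re-sort the candidate set.
def rerank(candidates, alpha=0.7):
    """candidates: list of (doc_id, text_score, visual_score), scores in [0, 1]."""
    fused = [(doc, alpha * t + (1 - alpha) * v) for doc, t, v in candidates]
    return sorted(fused, key=lambda kv: -kv[1])

cands = [("page1", 0.9, 0.2), ("page2", 0.7, 0.9), ("page3", 0.6, 0.4)]
print(rerank(cands))
```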

  3. Table Extraction from Web Pages Using Conditional Random Fields to Extract Toponym Related Data

    Science.gov (United States)

    Luthfi Hanifah, Hayyu'; Akbar, Saiful

    2017-01-01

    Tables are one of the ways to visualize information on web pages. The abundant number of web pages that compose the World Wide Web has motivated research on information extraction and information retrieval, including research on table extraction. Besides, there is a need for systems designed specifically to handle location-related information. Against this background, this research provides a way to extract location-related data from web tables so that it can be used in the development of a Geographic Information Retrieval (GIR) system. The location-related data are identified by toponyms (location names). In this research, a rule-based approach with a gazetteer is used to recognize toponyms in web tables. Meanwhile, to extract data from a table, a combination of a rule-based approach and a statistical approach is used. In the statistical approach, a Conditional Random Fields (CRF) model is used to understand the schema of the table. The result of table extraction is presented in JSON format. If a web table contains a toponym, a field is added to the JSON document to store the toponym values. This field can be used to index the table data according to the toponym, which can then be used in the development of the GIR system.
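
    A minimal sketch of the rule-based half of this pipeline (the CRF schema model is omitted): parse an HTML table, flag cells that match a toponym gazetteer, and emit JSON with the added toponym field. The table, gazetteer, and field name here are illustrative assumptions.

```python
# Extract table rows to JSON, adding a "toponym" field for gazetteer matches.
import json
from bs4 import BeautifulSoup

GAZETTEER = {"jakarta", "bandung", "surabaya"}

html = """<table>
<tr><th>City</th><th>Population</th></tr>
<tr><td>Jakarta</td><td>10560000</td></tr>
<tr><td>Bandung</td><td>2440000</td></tr>
</table>"""

soup = BeautifulSoup(html, "html.parser")
headers = [th.get_text(strip=True) for th in soup.find_all("th")]
records = []
for row in soup.find_all("tr")[1:]:
    cells = [td.get_text(strip=True) for td in row.find_all("td")]
    record = dict(zip(headers, cells))
    toponyms = [c for c in cells if c.lower() in GAZETTEER]
    if toponyms:
        record["toponym"] = toponyms  # extra field for GIR indexing
    records.append(record)
print(json.dumps(records, indent=2))
```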

  4. Domainwise Web Page Optimization Based On Clustered Query Sessions Using Hybrid Of Trust And ACO For Effective Information Retrieval

    Directory of Open Access Journals (Sweden)

    Dr. Suruchi Chawla

    2015-08-01

    Full Text Available In this paper a hybrid of Ant Colony Optimization (ACO) and trust has been used for domain-wise web page optimization in clustered query sessions for effective information retrieval. The trust of a web page identifies its degree of relevance in satisfying the specific information need of the user. The trusted web pages, when optimized using pheromone updates in ACO, identify trusted colonies of web pages relevant to users' information needs in a given domain. Hence, in this paper the hybrid of trust and ACO has been used on clustered query sessions to identify more and more relevant documents in a given domain in order to better satisfy the information need of the user. An experiment was conducted on a dataset of web query sessions to test the effectiveness of the proposed approach in three selected domains (Academics, Entertainment, and Sports), and the results confirm the improvement in the precision of search results.

  5. Citations to Web pages in scientific articles: the permanence of archived references.

    Science.gov (United States)

    Thorp, Andrea W; Schriger, David L

    2011-02-01

    We validate the use of archiving Internet references by comparing the accessibility of published uniform resource locators (URLs) with corresponding archived URLs over time. We scanned the "Articles in Press" section in Annals of Emergency Medicine from March 2009 through June 2010 for Internet references in research articles. If an Internet reference produced the authors' expected content, the Web page was archived with WebCite (http://www.webcitation.org). Because the archived Web page does not change, we compared it with the original URL to determine whether the original Web page had changed. We attempted to access each original URL and archived Web site URL at 3-month intervals from the time of online publication during an 18-month study period. Once a URL no longer existed or failed to contain the original authors' expected content, it was excluded from further study. The number of original URLs and archived URLs that remained accessible over time was totaled and compared. A total of 121 articles were reviewed and 144 Internet references were found within 55 articles. Of the original URLs, 15% (21/144; 95% confidence interval [CI] 9% to 21%) were inaccessible at publication. During the 18-month observation period, there was no loss of archived URLs (apart from the 4% [5/123; 95% CI 2% to 9%] that could not be archived), whereas 35% (49/139) of the original URLs were lost (46% loss; 95% CI 33% to 61% by the Kaplan-Meier method; the difference between curves was significant). Archiving a Web page at publication can help preserve the authors' expected information. Copyright © 2010 American College of Emergency Physicians. Published by Mosby, Inc. All rights reserved.

  6. Web Pages Content Analysis Using Browser-Based Volunteer Computing

    Directory of Open Access Journals (Sweden)

    Wojciech Turek

    2013-01-01

    Full Text Available Existing solutions to the problem of finding valuable information on the Web suffer from several limitations, such as simplified query languages, out-of-date information or arbitrary sorting of results. In this paper a different approach to this problem is described. It is based on the idea of distributed processing of Web page content. To provide sufficient performance, the idea of browser-based volunteer computing is utilized, which requires the implementation of text-processing algorithms in JavaScript. In this paper the architecture of a Web page content analysis system is presented, details concerning the implementation of the system and the text-processing algorithms are described, and test results are provided.

  7. What is the title of a Web page? A study of Webography practice

    Directory of Open Access Journals (Sweden)

    Timothy C. Craven

    2002-01-01

    Full Text Available Few style guides recommend a specific source for citing the title of a Web page that is not a duplicate of a printed format. Sixteen Web bibliographies were analyzed for uses of two different recommended sources: (1) the tagged title; (2) the title as it would appear to be from viewing the beginning of the page in the browser (the apparent title). In all sixteen, the proportion of tagged titles was much less than that of apparent titles, and only rarely did the bibliography title match the tagged title and not the apparent title. Convenience of copying may partly explain the preference for the apparent title. Contrary to expectation, the correlation between the proportion of valid links in a bibliography and the proportion of accurately reproduced apparent titles was slightly negative.

  8. Research on Chinese web page SVM classifer based on information gain

    Directory of Open Access Journals (Sweden)

    PAN Zhengcai

    2013-06-01

    Full Text Available In order to improve the efficiency and accuracy of text classification, the feature dimensionality reduction method and the traditional information gain method used in text classification of Chinese web pages are optimized and improved. First, part-of-speech filtering and synonym merging are applied for a first reduction of the feature dimensions. Then, an improved information gain method is proposed for computing the weights of feature items. Finally, the Support Vector Machine (SVM) classification algorithm is used for text classification of Chinese web pages. Both theoretical analysis and experimental results show that this method has better performance and classification results than the traditional method.
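
    For orientation (this is the classical information-gain formula, not the paper's improved variant, which the abstract does not spell out), IG(t) = H(C) - [P(t)H(C|t) + P(not t)H(C|not t)]; a minimal sketch:

    ```python
    import math

    def entropy(probs):
        return -sum(p * math.log2(p) for p in probs if p > 0)

    def information_gain(docs, labels, term):
        """Classical IG of a term; docs are token sets, labels a parallel class list."""
        classes = sorted(set(labels))
        n = len(docs)

        def class_dist(indices):
            return [sum(1 for i in indices if labels[i] == c) / len(indices)
                    for c in classes]

        with_t = [i for i in range(n) if term in docs[i]]
        without_t = [i for i in range(n) if term not in docs[i]]
        cond = sum(len(s) / n * entropy(class_dist(s))
                   for s in (with_t, without_t) if s)
        return entropy(class_dist(range(n))) - cond

    docs = [{"sport", "ball"}, {"stock", "market"}, {"sport", "goal"}, {"market"}]
    labels = ["sports", "finance", "sports", "finance"]
    print(information_gain(docs, labels, "sport"))  # 1.0: perfectly class-separating
    ```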

  9. Some features of alt texts associated with images in Web pages

    Directory of Open Access Journals (Sweden)

    Timothy C. Craven

    2006-01-01

    Full Text Available Introduction. This paper extends a series on summaries of Web objects, in this case, the alt attribute of image files. Method. Data were logged from 1894 pages from Yahoo!'s random page service and 4703 pages from the Google directory; an img tag was extracted randomly from each where present; its alt attribute, if any, was recorded; and the header for the corresponding image file was retrieved if possible. Analysis. Associations were measured between image type and use of null alt values, image type and image file size, image file size and alt text length, and alt text length and number of images on the page. Results. 16.6% and 17.3% of pages respectively showed no img elements. Of 1579 and 3888 img tags randomly selected from the remainder, 47.7% and 49.4% had alt texts, of which 26.3% and 27.5% were null. Of the 1316 and 3384 images for which headers could be retrieved, 71.2% and 74.2% were GIF, 28.1% and 20.5%, JPEG; and 0.8% and 0.8% PNG. GIF images were more commonly assigned null alt texts than JPEG images, and GIF files tended to be shorter than JPEG files. Weak positive correlations were observed between image file size and alt text length, except for JPEG files in the Yahoo! set. Alt texts for images from pages containing more images tended to be slightly shorter. Conclusion. Possible explanations for the results include GIF files' being more suited to decorative images and the likelihood that many images on image-rich pages are content-poor.

  10. Learning Layouts for Single-Page Graphic Designs.

    Science.gov (United States)

    O'Donovan, Peter; Agarwala, Aseem; Hertzmann, Aaron

    2014-08-01

    This paper presents an approach for automatically creating graphic design layouts using a new energy-based model derived from design principles. The model includes several new algorithms for analyzing graphic designs, including the prediction of perceived importance, alignment detection, and hierarchical segmentation. Given the model, we use optimization to synthesize new layouts for a variety of single-page graphic designs. Model parameters are learned with Nonlinear Inverse Optimization (NIO) from a small number of example layouts. To demonstrate our approach, we show results for applications including generating design layouts in various styles, retargeting designs to new sizes, and improving existing designs. We also compare our automatic results with designs created using crowdsourcing and show that our approach performs slightly better than novice designers.

  11. Rotorcraft Aeromechanics Branch Home Page on the World Wide Web

    Science.gov (United States)

    Peterson, Randall L.; Warmbrodt, William (Technical Monitor)

    1996-01-01

    The tilt rotor aircraft holds great promise for improving air travel in the future. Its benefits include vertical take off and landing combined with airspeeds comparable to propeller-driven aircraft. However, the noise from a tilt rotor during approach to a landing is potentially a significant barrier to widespread acceptance of these aircraft. This approach noise is primarily caused by Blade Vortex Interactions (BVI), which are created when the blade passes near or through the vortex trailed by preceding blades. The XV-15 Aeroacoustic test will measure the noise from a tilt rotor during descent conditions and demonstrate several possible techniques to reduce the noise. The XV-15 Aeroacoustic test at NASA Ames Research Center will measure acoustics and performance for a full-scale XV-15 rotor. A single XV-15 rotor will be mounted on the Ames Rotor Test Apparatus (RTA) in the 80- by 120-Foot Wind Tunnel. The test will be conducted in helicopter mode with forward flight speeds up to 100 knots and tip path plane angles up to +/- 15 degrees. These operating conditions correspond to a wide range of tilt rotor descent and transition to forward flight cases. Rotor performance measurements will be made with the RTA rotor balance, while acoustic measurements will be made using an acoustic traverse and four fixed microphones. The acoustic traverse will provide limited directionality measurements on the advancing side of the rotor, where BVI noise is expected to be the highest. Baseline acoustics and performance measurements for the three-bladed rotor will be obtained over the entire test envelope. Acoustic measurements will also be obtained for correlation with the XV-15 aircraft Inflight Rotor Aeroacoustic Program (IRAP) recently conducted by Ames. Several techniques will be studied in an attempt to reduce the highest measured BVI noise conditions. The first of these techniques will use sub-wings mounted on the blade tips. These subwings are expected to alter the size

  12. TERM WEIGHTING BASED ON INDEX OF GENRE FOR WEB PAGE GENRE CLASSIFICATION

    Directory of Open Access Journals (Sweden)

    Sugiyanto Sugiyanto

    2014-01-01

    Full Text Available Automating the identification of the genre of web pages becomes an important area in web page classification, as it can be used to improve the quality of web search results and to reduce search time. To index the terms used in classification, the type of weighting generally selected is the document-based TF-IDF. However, this method does not consider genre, whereas web page documents have a type of categorization called genre. Given genre, a term appearing often in one genre should be more significant in document indexing than a term appearing frequently in many genres, despite the latter's high TF-IDF value. We propose a new weighting method for indexing web page documents called inverse genre frequency (IGF). This method is based on genre, a manual categorization done semantically in previous research. Experimental results show that term weighting based on the index of genre (TF-IGF) performed better than term weighting based on the index of document (TF-IDF), with the highest values of accuracy, precision, recall, and F-measure in the case of excluding the genre-specific keywords being 78%, 80.2%, 78%, and 77.4% respectively, and in the case of including the genre-specific keywords 78.9%, 78
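
    The record defines IGF only informally; by analogy with IDF, one plausible reading is igf(t) = log(G / g(t)), where G is the number of genres and g(t) is the number of genres in which term t occurs. A sketch under that assumption:

    ```python
    import math
    from collections import Counter

    def tf_igf(doc_tokens, genre_vocab):
        """genre_vocab maps genre -> set of terms occurring in that genre."""
        num_genres = len(genre_vocab)
        weights = {}
        for term, freq in Counter(doc_tokens).items():
            g_t = sum(1 for terms in genre_vocab.values() if term in terms)
            if g_t:
                weights[term] = freq * math.log(num_genres / g_t)  # assumed IGF form
        return weights

    genre_vocab = {
        "news":     {"report", "today", "price"},
        "shopping": {"price", "cart", "buy"},
        "forum":    {"reply", "thread", "today"},
    }
    print(tf_igf(["buy", "price", "buy"], genre_vocab))
    # "buy" (specific to one genre) outweighs "price" (spread across genres)
    ```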

  13. When the Web meets the cell: using personalized PageRank for analyzing protein interaction networks.

    Science.gov (United States)

    Iván, Gábor; Grolmusz, Vince

    2011-02-01

    Enormous and constantly increasing quantity of biological information is represented in metabolic and in protein interaction network databases. Most of these data are freely accessible through large public depositories. The robust analysis of these resources needs novel technologies, being developed today. Here we demonstrate a technique, originating from the PageRank computation for the World Wide Web, for analyzing large interaction networks. The method is fast, scalable and robust, and its capabilities are demonstrated on metabolic network data of the tuberculosis bacterium and the proteomics analysis of the blood of melanoma patients. The Perl script for computing the personalized PageRank in protein networks is available for non-profit research applications (together with sample input files) at the address: http://uratim.com/pp.zip.
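
    The record notes that a Perl script computes the personalized PageRank; as an editorial illustration only (node names, damping factor and iteration count are assumptions), a power-iteration sketch in Python:

    ```python
    def personalized_pagerank(adj, seeds, damping=0.85, iters=100):
        """Power iteration; adj maps node -> list of neighbours, seeds get restart mass."""
        pref = {n: (1.0 / len(seeds) if n in seeds else 0.0) for n in adj}
        rank = dict(pref)
        for _ in range(iters):
            new = {n: (1.0 - damping) * pref[n] for n in adj}
            for n, out in adj.items():
                if out:
                    share = damping * rank[n] / len(out)
                    for m in out:
                        new[m] += share
            rank = new
        return rank

    # Toy interaction network; restart mass keeps scores near the seed protein.
    network = {"A": ["B", "C"], "B": ["A"], "C": ["A", "B"]}
    scores = personalized_pagerank(network, seeds={"A"})
    print({k: round(v, 3) for k, v in scores.items()})
    ```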

  14. Toward automated assessment of health Web page quality using the DISCERN instrument.

    Science.gov (United States)

    Allam, Ahmed; Schulz, Peter J; Krauthammer, Michael

    2017-05-01

    As the Internet becomes the number one destination for obtaining health-related information, there is an increasing need to identify health Web pages that convey an accurate and current view of medical knowledge. In response, the research community has created multicriteria instruments for reliably assessing online medical information quality. One such instrument is DISCERN, which measures health Web page quality by assessing an array of features. In order to scale up use of the instrument, there is interest in automating the quality evaluation process by building machine learning (ML)-based DISCERN Web page classifiers. The paper addresses 2 key issues that are essential before constructing automated DISCERN classifiers: (1) generation of a robust DISCERN training corpus useful for training classification algorithms, and (2) assessment of the usefulness of the current DISCERN scoring schema as a metric for evaluating the performance of these algorithms. Using DISCERN, 272 Web pages discussing treatment options in breast cancer, arthritis, and depression were evaluated and rated by trained coders. First, different consensus models were compared to obtain a robust aggregated rating among the coders, suitable for a DISCERN ML training corpus. Second, a new DISCERN scoring criterion was proposed (a features-based score) as an ML performance metric that is more reflective of the score distribution across different DISCERN quality criteria. First, we found that a probabilistic consensus model applied to the DISCERN instrument was robust against noise (random ratings) and superior to other approaches for building a training corpus. Second, we found that the established DISCERN scoring schema (overall score) is ill-suited to measure ML performance for automated classifiers. Use of a probabilistic consensus model is advantageous for building a training corpus for the DISCERN instrument, and use of a features-based score is an appropriate ML metric for automated DISCERN classifiers.

  15. The Evaluation of Web pages of State Universities’ Usability via Content Analysis

    Directory of Open Access Journals (Sweden)

    Ezgi CEVHER

    2015-12-01

    Full Text Available Within the scope of the e-transformation project in Turkey, the "Preparation of Guideline for State Institutions' Web Pages" action has been carried out to ensure minimal cohesiveness among government institutions' and organizations' Web pages in terms of design and content. As a result of those efforts, the first edition of the "Guideline for State Institutions' Web Pages" was prepared in 2006. The second edition of this guideline was published in 2009 in a simpler form under the name "Guideline and Suggestions for the Standards of Governmental Institutions' Web Pages". It became compulsory for local and central government institutions and organizations to follow the procedures and principles stated in the Guideline. Through this Guideline, preparing the websites of governmental institutions in harmony with the mentioned standards, and updating them in parallel with changing conditions and requirements, have come onto the agenda, especially in recent years. In this study, considering the characteristics stated in the Guideline, the web pages of state universities have been assessed through content analysis. Considering that university web pages are visited by hundreds of visitors daily, effective, productive and comfortable usability must be ensured. For this reason, the objective is to determine to what extent state universities implement the compulsory principles stated in the Guideline; their web pages have been assessed from the aspects of compliance with standards, usability, and accessibility.

  16. Do-It-Yourself: A Special Library's Approach to Creating Dynamic Web Pages Using Commercial Off-The-Shelf Applications

    Science.gov (United States)

    Steeman, Gerald; Connell, Christopher

    2000-01-01

    Many librarians may feel that dynamic Web pages are out of their reach, financially and technically. Yet we are reminded in library and Web design literature that static home pages are a thing of the past. This paper describes how librarians at the Institute for Defense Analyses (IDA) library developed a database-driven, dynamic intranet site using commercial off-the-shelf applications. Administrative issues include surveying a library users group for interest and needs evaluation; outlining metadata elements; and committing resources, from managing the time to populate the database to training in Microsoft FrontPage and Web-to-database design. Technical issues covered include Microsoft Access database fundamentals and lessons learned in the Web-to-database process (including setting up Data Source Names (DSNs), redesigning queries to accommodate the Web interface, and understanding Access 97 query language vs. Structured Query Language (SQL)). This paper also offers tips on editing Active Server Pages (ASP) scripting to create desired results. A how-to annotated resource list closes out the paper.
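
    The record's stack is classic ASP over Access through a DSN. As a hedged modern analogue, not the authors' code, the same DSN-based connection can be sketched in Python with pyodbc (the DSN name, table and column names are placeholders):

    ```python
    import pyodbc  # requires an ODBC driver and a system DSN (here named "LibraryDB")

    # Connect through the Data Source Name, mirroring the DSN-based setup
    # described in the record, rather than a full connection string.
    conn = pyodbc.connect("DSN=LibraryDB")
    cursor = conn.cursor()

    # Placeholder query against a hypothetical metadata table.
    cursor.execute("SELECT title, url FROM resources WHERE subject = ?", "defense")
    for title, url in cursor.fetchall():
        print(title, url)
    conn.close()
    ```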

  17. A PROTOTYPE FOR ORDERING LIBRARY MATERIALS VIA THE WEB USING ACTIVE SERVER PAGES (ASP)

    Directory of Open Access Journals (Sweden)

    Djoni Haryadi Setiabudi

    2002-01-01

    Full Text Available Electronic commerce is one of the components of the Internet that is growing fast in the world. In this research, a prototype is developed for a library service that offers ordering of library collections, especially books and articles, through the World Wide Web. In order to get interaction between seller and buyer, there is a need to develop a dynamic web site, which requires supporting technology and software. One of the applicable programming technologies is Active Server Pages (ASP), combined with a database system to store the data. The other component, serving as an interface between the application and the database, is ActiveX Data Objects (ADO). ASP has an advantage in its scripting method, and it is easy to configure with a database. This application consists of two major parts: administrator and user. The prototype has facilities for editing, searching and browsing ordering information online. Users can also download articles found through searching and ordering. The payment method in this e-commerce system is quite essential, because in Indonesia not everybody has a credit card. As a solution, this prototype has a form for users who do not have a credit card: once the bill has been paid, the user can complete the transaction online. In this case, one of ASP's advantages, the "session", is used, whereby data in process are not lost as long as the user remains in that session. This is used in the user area and the admin area, where users and the admin can carry out various processes.

  18. Web page of the Ibero-American laboratories network of radioactivity analysis in foods: a tool for inter regional diffusion

    International Nuclear Information System (INIS)

    Melo Ferreira, Ana C. de; Osores, Jose M.; Fernandez Gomez, Isis M.; Iglicki, Flora A.; Vazquez Bolanos, Luis R.; Romero, Maria de L.; Aguirre Gomez, Jaime; Flores, Yasmine

    2008-01-01

    One objective of the thematic networks is the exchanges of knowledge among participants, for this reason, actions focused to the diffusion of their respective work are prioritized, evidencing the result of the cooperation among the participant groups and also among different networks. The Ibero-American Laboratories Network of Radioactivity Analysis in Foods (RILARA) was constituted in 2007, and one of the first actions carried out in this framework, was the design and conformation of a web page. The web pages have become a powerful means for diffusion of specialized information. Their power, as well as their continuous upgrading and the specificity of the topics that can develop, allow the user to obtain fast information on a wide range of products, services and organizations at local and world level. The main objective of the RILARA web page is to provide updated relevant information to interested specialists in the subject and also to public in general, about the work developed by the network laboratories regarding the control of radioactive pollutants in foods and related scientific issues. This web has been developed based on a Content Management Systems that helps to eliminate potential barriers to the communication web, reducing the creation costs, contribution and maintenance of the content. The tool used for its design is very effective to be used in the process of teaching, learning and for the organization of the information. This paper describes how was conceived the design of this web page, the information that contains and how can be accessed and/or to include any contribution, the value of this page depends directly on the grade of updating of the available contents so that it can be useful and attractive to the users. (author)

  19. Analysis of co-occurrence toponyms in web pages based on complex networks

    Science.gov (United States)

    Zhong, Xiang; Liu, Jiajun; Gao, Yong; Wu, Lun

    2017-01-01

    A large number of geographical toponyms exist in web pages and other documents, providing abundant geographical resources for GIS. It is very common for toponyms to co-occur in the same documents. To investigate these relations associated with geographic entities, a novel complex network model for co-occurring toponyms is proposed. Then, 12 toponym co-occurrence networks are constructed from the toponym sets extracted from the People's Daily newspaper documents of 2010. It is found that two toponyms have a high co-occurrence probability if they are at the same administrative level or if they possess a part-whole relationship. By applying complex network analysis methods to the toponym co-occurrence networks, we find the following characteristics. (1) The navigation vertices of the co-occurrence networks can be found by degree centrality analysis. (2) The networks express strong cluster characteristics, and it takes only several steps to reach one vertex from another, implying that the networks are small-world graphs. (3) The degree distribution satisfies a power law with an exponent of 1.7, so the networks are scale-free. (4) The networks are disassortative and have similar assortative modes, with assortative exponents of approximately 0.18 and assortative indexes less than 0. (5) The frequency of toponym co-occurrence is weakly negatively correlated with geographic distance, but more strongly negatively correlated with administrative hierarchical distance. Considering toponym frequencies and co-occurrence relationships, a novel method based on link analysis is presented to extract the core toponyms from web pages. This method is suitable and effective for geographical information retrieval.
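
    A minimal sketch of how such a co-occurrence network can be built and probed with standard tools (the toponym lists are illustrative; networkx supplies the centrality and clustering measures):

    ```python
    from itertools import combinations
    import networkx as nx

    # Each inner list holds the toponyms extracted from one document.
    docs = [
        ["Beijing", "China", "Shanghai"],
        ["Beijing", "Shanghai"],
        ["China", "Guangdong", "Shenzhen"],
    ]

    G = nx.Graph()
    for toponyms in docs:
        for a, b in combinations(sorted(set(toponyms)), 2):  # one co-occurrence pair
            if G.has_edge(a, b):
                G[a][b]["weight"] += 1
            else:
                G.add_edge(a, b, weight=1)

    # Navigation vertices stand out by degree centrality; cohesion shows in clustering.
    central = sorted(nx.degree_centrality(G).items(), key=lambda kv: -kv[1])
    print(central[:2])
    print(round(nx.average_clustering(G), 2))
    ```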

  20. Using Frames and JavaScript To Automate Teacher-Side Web Page Navigation for Classroom Presentations.

    Science.gov (United States)

    Snyder, Robin M.

    HTML provides a platform-independent way of creating and making multimedia presentations for classroom instruction and making that content available on the Internet. However, time in class is very valuable, so that any way to automate or otherwise assist the presenter in Web page navigation during class can save valuable seconds. This paper…

  1. ELAN - the web page based information system for emergency preparedness in Germany

    International Nuclear Information System (INIS)

    Zaehringer, M.; Hoebler, Ch.; Bieringer, P.

    2002-01-01

    A plan for a Web-page-based system was developed which compiles all important information in case of a nuclear emergency with an actual or potential release of radioactivity into the environment. A prototype system providing information of the Federal Ministry for Environment, Nature Conservation and Reactor Safety (BMU) was tested successfully. The implementation at the National Emergency Operations Centre of Switzerland was used as a template; however, further planning takes into account the special conditions of the federal structure in Germany. The main purpose of the system is to compile, clearly arrange, and provide in a timely manner, on a central server, all relevant information of the federal government, the states (Laender) and, if available, foreign authorities that is needed for decision making. It is envisaged to integrate similar existing systems in some states conceptually and technically. ELAN makes use of standardised and secure web technology. Uploading of information and delivery to national and foreign authorities, international organisations and the public is managed by role-specific access control. (orig.)

  2. A comprehensive analysis of Italian web pages mentioning squalene-based influenza vaccine adjuvants reveals a high prevalence of misinformation.

    Science.gov (United States)

    Panatto, Donatella; Amicizia, Daniela; Arata, Lucia; Lai, Piero Luigi; Gasparini, Roberto

    2017-11-27

    Squalene-based adjuvants have been included in influenza vaccines since 1997. Despite several advantages of adjuvanted seasonal and pandemic influenza vaccines, laypeople's perception of such formulations may be hesitant or even negative under certain circumstances. Moreover, in Italian, the term "squalene" has the same root as such common words as "shark" (squalo), "squalid" and "squalidness" that tend to have negative connotations. This study aimed to quantitatively and qualitatively analyze a representative sample of Italian web pages mentioning squalene-based adjuvants used in influenza vaccines. Every effort was made to limit the subjectivity of judgments. Eighty-four unique web pages were assessed. A high prevalence (47.6%) of pages with negative or ambiguous attitudes toward squalene-based adjuvants was established. Compared with web pages reporting balanced information on squalene-based adjuvants, those categorized as negative/ambiguous had significantly lower odds of belonging to a professional institution [adjusted odds ratio (aOR) = 0.12, p = .004], and significantly higher odds of containing pictures (aOR = 1.91, p = .034) and being more readable (aOR = 1.34, p = .006). Some differences in wording between positive/neutral and negative/ambiguous web pages were also observed. The most common scientifically unsound claims concerned safety issues and, in particular, claims linking squalene-based adjuvants to the Gulf War Syndrome and autoimmune disorders. Italian users searching the web for information on vaccine adjuvants have a high likelihood of finding unbalanced and misleading material. Information provided by institutional websites should be not only evidence-based but also carefully targeted towards laypeople. Conversely, authors writing for non-institutional websites should avoid sensationalism and provide their readers with more balanced information.

  3. Zooming Web browser

    Science.gov (United States)

    Bederson, Benjamin B.; Hollan, James D.; Stewart, Jason B.; Rogers, David; Druin, Allison; Vick, David

    1996-03-01

    The World Wide Web (WWW) is becoming increasingly important for business, education, and entertainment. Popular web browsers make access to Internet information resources relatively easy for novice users. Simply by clicking on a link, a new page of information replaces the current one on the screen. Unfortunately, however, after following a number of links, people can have difficulty remembering where they've been and navigating links they have followed. As one's collection of web pages grows and as more information of interest populates the web, effective navigation becomes an issue of fundamental importance. We are developing a prototype zooming browser to explore alternative mechanisms for navigating the WWW. Instead of having a single page visible at a time, multiple pages and the links between them are depicted on a large zoomable information surface. Pages are scaled so that the page in focus is clearly readable, with connected pages shown at smaller scales to provide context. As a link is followed the new page becomes the focus and existing pages are dynamically repositioned and scaled. Layout changes are animated so that the focus page moves smoothly to the center of the display surface while contextual information provided by linked pages scales down. While our browser supports multiscale representations of existing HTML pages, we have also extended HTML to support multiscale layout within a page. This extension, Multi-Scale Markup Language, is at an early stage of development. It currently supports inclusion within a page of variable-sized dynamic objects, graphics, and other interface mechanisms from our underlying Pad++ substrate. This provides sophisticated client-side interactions, permits annotations to be added to pages, and allows page constituents to be used as independent graphical objects. In this paper, we describe our prototype web browser and authoring facilities. We show how simple extensions to HTML can support sophisticated client

  4. The sources and popularity of online drug information: an analysis of top search engine results and web page views.

    Science.gov (United States)

    Law, Michael R; Mintzes, Barbara; Morgan, Steven G

    2011-03-01

    The Internet has become a popular source of health information. However, there is little information on what drug information and which Web sites are being searched. To investigate the sources of online information about prescription drugs by assessing the most common Web sites returned in online drug searches and to assess the comparative popularity of Web pages for particular drugs. This was a cross-sectional study of search results for the most commonly dispensed drugs in the US (n=278 active ingredients) on 4 popular search engines: Bing, Google (both US and Canada), and Yahoo. We determined the number of times a Web site appeared as the first result. A linked retrospective analysis counted Wikipedia page hits for each of these drugs in 2008 and 2009. About three quarters of the first result on Google USA for both brand and generic names linked to the National Library of Medicine. In contrast, Wikipedia was the first result for approximately 80% of generic name searches on the other 3 sites. On these other sites, over two thirds of brand name searches led to industry-sponsored sites. The Wikipedia pages with the highest number of hits were mainly for opiates, benzodiazepines, antibiotics, and antidepressants. Wikipedia and the National Library of Medicine rank highly in online drug searches. Further, our results suggest that patients most often seek information on drugs with the potential for dependence, for stigmatized conditions, that have received media attention, and for episodic treatments. Quality improvement efforts should focus on these drugs.

  5. Issues of Page Representation and Organisation in Web Browser's Revisitation Tools

    Directory of Open Access Journals (Sweden)

    Andy Cockburn

    2000-05-01

    Full Text Available Many commercial and research WWW browsers include a variety of graphical revisitation tools that let users return to previously seen pages. Examples include history lists, bookmarks and site maps. In this paper, we examine two fundamental design and usability issues that all graphical tools for revisitation must address. First, how can individual pages be represented to best support page identification? We discuss the problems and prospects of various page representations: the pages themselves, image thumbnails, text labels, and abstract page properties. Second, what display organisation schemes can be used to enhance the visualisation of large sets of previously visited pages? We compare temporal organisations, hub-and-spoke dynamic trees, spatial layouts and site maps.

  6. A STUDY ON RANKING METHOD IN RETRIEVING WEB PAGES BASED ON CONTENT AND LINK ANALYSIS: COMBINATION OF FOURIER DOMAIN SCORING AND PAGERANK SCORING

    Directory of Open Access Journals (Sweden)

    Diana Purwitasari

    2008-01-01

    Full Text Available The ranking module is an important component of the search process, as it sorts through relevant pages. Since a collection of Web pages has additional information inherent in the hyperlink structure of the Web, this can be represented as a link score and then combined with the usual information retrieval techniques of a content score. In this paper we report our studies on a ranking score for Web pages combining link analysis, PageRank Scoring, and content analysis, Fourier Domain Scoring. Our experiments use a collection of Web pages related to the Statistics subject from Wikipedia, with the objective of checking the correctness and evaluating the performance of the combined ranking method. Evaluation of PageRank Scoring shows that the highest score does not always relate to Statistics. Since links within Wikipedia articles exist so that users are always one click away from more information on any point that has a link attached, it is possible that topics unrelated to Statistics are frequently mentioned in the collection. The combined method shows that a link score given a weight proportional to the content score of Web pages does affect the retrieval results.
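
    The combination the study evaluates reduces to a convex blend of a content score and a link score; a minimal sketch (the weight and the example scores are illustrative):

    ```python
    def combined_score(content_score, link_score, alpha=0.3):
        """Blend content relevance (e.g. Fourier Domain Scoring) with link
        popularity (e.g. PageRank); alpha is the illustrative link weight."""
        return alpha * link_score + (1.0 - alpha) * content_score

    pages = {  # made-up scores for three Wikipedia pages
        "Statistics": {"content": 0.92, "link": 0.40},
        "Main_Page":  {"content": 0.10, "link": 0.99},  # popular but off-topic
        "Regression": {"content": 0.75, "link": 0.55},
    }
    ranking = sorted(pages, reverse=True,
                     key=lambda p: combined_score(pages[p]["content"], pages[p]["link"]))
    print(ranking)  # off-topic hubs no longer dominate on link score alone
    ```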

  7. Hormone Replacement Therapy advertising: sense and nonsense on the web pages of the best-selling pharmaceuticals in Spain

    Directory of Open Access Journals (Sweden)

    Cantero María

    2010-03-01

    Full Text Available Abstract Background The balance of the benefits and risks of long-term use of hormone replacement therapy (HRT) has been a matter of debate for decades. In Europe, HRT requires medical prescription and its advertising is only permitted when aimed at health professionals (direct-to-consumer advertising is allowed in some non-European countries). The objective of this study is to analyse the appropriateness and quality of Internet advertising about HRT in Spain. Methods A search was carried out on the Internet (January 2009) using the eight best-selling HRT drugs in Spain. The brand name of each drug was entered into Google's search engine. The web sites appearing on the first page of results and the corresponding companies were analysed using the European Code of Good Practice as the reference point. Results Five corporate web pages: none of them included bibliographic references or measures to ensure that the advertising was only accessible by health professionals. Regarding non-corporate web pages (n = 27): 41% did not include the company name or address, 44% made no distinction between patient and health professional information, 7% contained bibliographic references, 26% provided unspecific information for the use of HRT for osteoporosis and 19% included menstrual cycle regulation or boosting femininity as an indication. Two online pharmacies sold HRT drugs which could be bought online in Spain, did not include the name or contact details of the registered company, nor did they stipulate the need for a medical prescription or differentiate between patient and health professional information. Conclusions Even though pharmaceutical companies have committed themselves to compliance with codes of good practice, deficiencies were observed regarding the identification, information and promotion of HRT medications on their web pages. Unaffected by legislation, non-corporate web pages are an ideal place for indirect HRT advertising, but they often contain misleading information.

  8. Hormone Replacement Therapy advertising: sense and nonsense on the web pages of the best-selling pharmaceuticals in Spain

    Science.gov (United States)

    2010-01-01

    Background The balance of the benefits and risks of long-term use of hormone replacement therapy (HRT) has been a matter of debate for decades. In Europe, HRT requires medical prescription and its advertising is only permitted when aimed at health professionals (direct-to-consumer advertising is allowed in some non-European countries). The objective of this study is to analyse the appropriateness and quality of Internet advertising about HRT in Spain. Methods A search was carried out on the Internet (January 2009) using the eight best-selling HRT drugs in Spain. The brand name of each drug was entered into Google's search engine. The web sites appearing on the first page of results and the corresponding companies were analysed using the European Code of Good Practice as the reference point. Results Five corporate web pages: none of them included bibliographic references or measures to ensure that the advertising was only accessible by health professionals. Regarding non-corporate web pages (n = 27): 41% did not include the company name or address, 44% made no distinction between patient and health professional information, 7% contained bibliographic references, 26% provided unspecific information for the use of HRT for osteoporosis and 19% included menstrual cycle regulation or boosting femininity as an indication. Two online pharmacies sold HRT drugs which could be bought online in Spain, did not include the name or contact details of the registered company, nor did they stipulate the need for a medical prescription or differentiate between patient and health professional information. Conclusions Even though pharmaceutical companies have committed themselves to compliance with codes of good practice, deficiencies were observed regarding the identification, information and promotion of HRT medications on their web pages. Unaffected by legislation, non-corporate web pages are an ideal place for indirect HRT advertising, but they often contain misleading information.

  9. Hormone replacement therapy advertising: sense and nonsense on the web pages of the best-selling pharmaceuticals in Spain.

    Science.gov (United States)

    Chilet-Rosell, Elisa; Martín Llaguno, Marta; Ruiz Cantero, María Teresa; Alonso-Coello, Pablo

    2010-03-16

    The balance of the benefits and risks of long-term use of hormone replacement therapy (HRT) has been a matter of debate for decades. In Europe, HRT requires medical prescription and its advertising is only permitted when aimed at health professionals (direct-to-consumer advertising is allowed in some non-European countries). The objective of this study is to analyse the appropriateness and quality of Internet advertising about HRT in Spain. A search was carried out on the Internet (January 2009) using the eight best-selling HRT drugs in Spain. The brand name of each drug was entered into Google's search engine. The web sites appearing on the first page of results and the corresponding companies were analysed using the European Code of Good Practice as the reference point. Five corporate web pages: none of them included bibliographic references or measures to ensure that the advertising was only accessible by health professionals. Regarding non-corporate web pages (n = 27): 41% did not include the company name or address, 44% made no distinction between patient and health professional information, 7% contained bibliographic references, 26% provided unspecific information for the use of HRT for osteoporosis and 19% included menstrual cycle regulation or boosting femininity as an indication. Two online pharmacies sold HRT drugs which could be bought online in Spain, did not include the name or contact details of the registered company, nor did they stipulate the need for a medical prescription or differentiate between patient and health professional information. Even though pharmaceutical companies have committed themselves to compliance with codes of good practice, deficiencies were observed regarding the identification, information and promotion of HRT medications on their web pages. Unaffected by legislation, non-corporate web pages are an ideal place for indirect HRT advertising, but they often contain misleading information. HRT can be bought online from Spain

  10. Electronic Ramp to Success: Designing Campus Web Pages for Users with Disabilities.

    Science.gov (United States)

    Coombs, Norman

    2002-01-01

    Discusses key issues in addressing the challenge of Web accessibility for people with disabilities, including tools for Web authoring, repairing, and accessibility validation, and relevant legal issues. Presents standards for Web accessibility, including the Section 508 Standards from the Federal Access Board, and the World Wide Web Consortium's…

  11. Improving the web site's effectiveness by considering each page's temporal information

    NARCIS (Netherlands)

    Li, ZG; Sun, MT; Dunham, MH; Xiao, YQ; Dong, G; Tang, C; Wang, W

    2003-01-01

    Improving the effectiveness of a web site is always one of its owner's top concerns. By focusing on analyzing web users' visiting behavior, web mining researchers have developed a variety of helpful methods, based upon association rules, clustering, prediction and so on. However, we have found

  12. Limitations of existing web services

    Indian Academy of Sciences (India)

    Limitations of existing web services: uploading or downloading large data; serving too many users from a single source; difficulty in providing compute-intensive jobs; dependence on the Internet and its bandwidth; security of data in transit; maintaining confidentiality of data ...

  13. Children's Page

    Science.gov (United States)

    Kids, this page is for you. Learn about everything from how the body works to what happens when you go to the hospital. There are quizzes, games and lots of cool web sites for you to explore. Have fun!

  14. Preprocessing and Content/Navigational Pages Identification as Premises for an Extended Web Usage Mining Model Development

    Directory of Open Access Journals (Sweden)

    Daniel MICAN

    2009-01-01

    Full Text Available From its appearance until today, the Internet has seen spectacular growth, not only in the number of websites and the volume of information, but also in the number of visitors. Therefore, an overall analysis of both web sites and the content they provide became necessary. Thus, a new branch of research was developed, namely web mining, which aims to discover useful information and knowledge based not only on the analysis of websites and content, but also on the way users interact with them. The aim of the present paper is to design a database that captures only the relevant data from logs, in a way that allows storing and managing large sets of temporal data with common tools in real time. In our work, we rely on different web sites or website sections with known architecture, and we test several hypotheses from the literature in order to extend the framework to sites with unknown or chaotic structure, which are non-transparent in determining the type of visited pages. In doing this, we start from non-proprietary, preexisting raw server logs.
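
    A sketch of the preprocessing step, parsing raw server log lines into structured records before loading them into such a database; the Apache Common Log Format and the asset filter below are assumptions, not details from the paper:

    ```python
    import re

    # Apache Common Log Format assumed as the raw input.
    LOG_RE = re.compile(
        r'(?P<ip>\S+) \S+ \S+ \[(?P<ts>[^\]]+)\] '
        r'"(?P<method>\S+) (?P<path>\S+)[^"]*" (?P<status>\d{3}) (?P<size>\S+)'
    )

    def parse_line(line):
        m = LOG_RE.match(line)
        if not m:
            return None  # discard malformed lines
        rec = m.groupdict()
        # Keep only successful page views; drop obvious asset requests.
        if rec["status"] != "200" or rec["path"].endswith((".css", ".js", ".png")):
            return None
        return rec

    line = '10.0.0.1 - - [12/Jan/2009:10:15:32 +0200] "GET /courses/index.html HTTP/1.1" 200 5120'
    print(parse_line(line))
    ```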

  15. Applying Web Analytics to Online Finding Aids: Page Views, Pathways, and Learning about Users

    Directory of Open Access Journals (Sweden)

    Mark R. O'English

    2011-05-01

    Full Text Available Online finding aids, Internet search tools, and increased access to the World Wide Web have greatly changed how patrons find archival collections. Through analyzing eighteen months of access data collected via Web analytics tools, this article examines how patrons discover archival materials. Contrasts are drawn between access from library catalogs and from online search engines, with the latter outweighing the former by an overwhelming margin, and the article asks whether archival description practices should change accordingly.

  16. How Many Pages in a Single Word: Alternative Typo-poetics of Surrealist Magazines

    Directory of Open Access Journals (Sweden)

    Biljana Andonovska

    2013-07-01

    Full Text Available The paper examines the experimental design, typography and editorial strategies of the rare avant-garde publication Four Pages - Onanism of Death - And So On (1930), published by Oskar Davičo, Đorđe Kostić and Đorđe Jovanović, probably the first Surrealist Edition of the Belgrade surrealist group. Starting from its unconventional format and the way the authors (re)shape and (mis)direct each page in an autonomous fashion, I further analyze the intrinsic interaction between the text, its graphic embodiment and the surrounding para-textual elements (illustrations, body text, titles, folding, dating, margins, comments). Special attention is given to the concepts of depersonalization, free association and automatic writing as primary poetical sources for the delinearisation of the reading process and the 'emancipation' of the text, its content and syntax as well as its position, direction, and visual materiality on the page. Resisting conventional classifications and simplified distinctions between established print media and genres, this surrealist single-issue placard magazine mixes elements of the poster, magazine, and booklet. Its ambiguous nature leads us toward a theoretical discussion of the avant-garde magazine as an autonomous literary genre and an original, self-sufficient artwork, as was already suggested by the theory of Russian formalism.

  17. An Exploratory Study of Student Satisfaction with University Web Page Design

    Science.gov (United States)

    Gundersen, David E.; Ballenger, Joe K.; Crocker, Robert M.; Scifres, Elton L.; Strader, Robert

    2013-01-01

    This exploratory study evaluates the satisfaction of students with a web-based information system at a medium-sized regional university. The analysis provides a process for simplifying data interpretation in captured student user feedback. Findings indicate that student classifications, as measured by demographic and other factors, determine…

  18. SurveyWiz and factorWiz: JavaScript Web pages that make HTML forms for research on the Internet.

    Science.gov (United States)

    Birnbaum, M H

    2000-05-01

    SurveyWiz and factorWiz are Web pages that act as wizards to create HTML forms that enable one to collect data via the Web. SurveyWiz allows the user to enter survey questions or personality test items with a mixture of text boxes and scales of radio buttons. One can add demographic questions of age, sex, education, and nationality with the push of a button. FactorWiz creates the HTML for within-subjects, two-factor designs as large as 9 x 9, or higher order factorial designs up to 81 cells. The user enters levels of the row and column factors, which can be text, images, or other multimedia. FactorWiz generates the stimulus combinations, randomizes their order, and creates the page. In both programs HTML is displayed in a window, and the user copies it to a text editor to save it. When uploaded to a Web server and supported by a CGI script, the created Web pages allow data to be collected, coded, and saved on the server. These programs are intended to assist researchers and students in quickly creating studies that can be administered via the Web.
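
    The factorial core of factorWiz, crossing row and column levels and randomizing trial order, can be sketched as follows; the HTML emitted here is generic and not factorWiz's actual output:

    ```python
    import random
    from itertools import product

    def factorial_trials(rows, cols, seed=1):
        """Cross two factors and shuffle the resulting stimulus combinations."""
        trials = list(product(rows, cols))
        random.Random(seed).shuffle(trials)
        return trials

    def to_html(trials):
        """Emit one radio-button judgment scale per stimulus combination."""
        items = []
        for i, (r, c) in enumerate(trials):
            scale = " ".join(f'<input type="radio" name="t{i}" value="{v}">{v}'
                             for v in range(1, 10))
            items.append(f"<p>{r} / {c}<br>{scale}</p>")
        return "\n".join(items)

    print(to_html(factorial_trials(["low", "high"], ["A", "B", "C"]))[:120])
    ```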

  19. Research on the Extraction Technology of Hot-words in Tibetan WebPages

    Directory of Open Access Journals (Sweden)

    Wang Chang-Zhi

    2016-01-01

    Full Text Available The construction of a Tibetan corpus is basic work in the field of Tibetan information processing. This paper uses web crawling, preprocessing and real-time acquisition of web sites to obtain a large Tibetan corpus in a short time. Hot words reflect the focus of Tibetan readers' attention in a given period of time. The paper adapts TF-IDF for Tibetan text information extraction, and words in different locations are given different weights when extracting the hot words. This approach effectively realizes the construction of the raw Tibetan corpus and the extraction of hot words with self-made software.
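
    One plausible reading of the location-weighted TF-IDF described in this record (the weights and field names are editorial assumptions):

    ```python
    import math

    # Assumed location weights: a term in a title counts more than in body text.
    LOCATION_WEIGHTS = {"title": 3.0, "anchor": 2.0, "body": 1.0}

    def weighted_tf(doc_fields, term):
        """doc_fields maps a location to the list of tokens found there."""
        return sum(LOCATION_WEIGHTS[loc] * tokens.count(term)
                   for loc, tokens in doc_fields.items())

    def hot_word_score(term, doc_fields, docs_with_term, total_docs):
        idf = math.log(total_docs / (1 + docs_with_term))
        return weighted_tf(doc_fields, term) * idf

    doc = {"title": ["festival"], "anchor": [], "body": ["festival", "today"]}
    print(round(hot_word_score("festival", doc, docs_with_term=20, total_docs=1000), 2))
    ```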

  20. Improving online visibility of the web pages with Search Engine Optimization: Laurea University of Applied Sciences

    OpenAIRE

    Bhandari, Deepak

    2017-01-01

    This project was commissioned by Laurea University of Applied Sciences (UAS). The organization's website has a wide range of users from all over the world. It is important that the contents of the website are equally accessible to users with different abilities and disabilities (e.g. visual or hearing impairments). Search engine optimization (SEO) is one of the practices that contribute to improving web accessibility. A functioning website is one of the main means of communication for organizations...

  1. Direct-to-consumer advertising on Spanish-language web pages offering cannabinoids for medicinal use

    Directory of Open Access Journals (Sweden)

    Julio Cjuno

    2018-02-01

    Full Text Available Señor Editor: Los canabinoides son sustancias derivadas a partir de las plantas del cannabis (Whiting et al., 2015, las cuales han sido aprobadas por la Food & Drug Admnistration [FDA] para el manejo de diversos síntomas como pérdida de apetito en pacientes con VIH/SIDA, y las náuseas y vómitos asociados a la quimioterapia (OMS, 2015. Sin embargo, un reciente metaanálisis encontró que existe escasa evidencia sobre el uso de canabinoides para diversas condiciones (Whiting et al., 2015. Asimismo, se han reportado eventos adversos tales como mareos, sequedad de boca, náuseas, fatiga, somnolencia, vómitos, desorientación, confusión y alucinaciones (Whiting et al., 2015. Debido a esto, resulta importante que la publicidad dirigida al consumidor [PDAC] realizada por quienes ofertan productos derivados de la marihuana para usos medicinales contenga una adecuada recomendación sobre sus usos y posibles eventos adversos (Gellad and Lyles 2007. Lo cual no ha sido explorado previamente. El objetivo del presente estudio fue evaluar la PDAC de las páginas web que ofrecen derivados de la marihuana para usos medicinales, en países de habla hispana. Para ello, durante marzo del 2017 se realizaron búsquedas en Google.com utilizando los siguientes términos de búsqueda en español: [Marihuana medicinal], [Cannabis medicinal], [Aceite de marihuana], y [Aceite de cannabis], elegidos por ser los términos con más búsquedas sobre el tema en los últimos cinco años según estadísticas de Google Trends (2017, combinadas con nombres de los siguientes países: México, Colombia, Argentina, Chile, Bolivia, Paraguay, Uruguay, Venezuela, Ecuador, Perú y España. El historial y las cookies fueron previamente eliminados para no obtener resultados personalizados. Se revisaron los 50 primeros resultados de cada búsqueda, y se seleccionaron las páginas web que ofrecían derivados de la marihuana con fines medicinales, cuyas características fueron digitadas

  2. Interactive Development of Regional Climate Web Pages for the Western United States

    Science.gov (United States)

    Oakley, N.; Redmond, K. T.

    2013-12-01

    Weather and climate have a pervasive and significant influence on the western United States, driving a demand for information that is ongoing and constantly increasing. In communications with stakeholders, policy makers, researchers, educators, and the public through formal and informal encounters, three standout challenges face users of weather and climate information in the West. First, the needed information is scattered about the web making it difficult or tedious to access. Second, information is too complex or requires too much background knowledge to be immediately applicable. Third, due to complex terrain, there is high spatial variability in weather, climate, and their associated impacts in the West, warranting information outlets with a region-specific focus. Two web sites, TahoeClim and the Great Basin Weather and Climate Dashboard were developed to overcome these challenges to meeting regional weather and climate information needs. TahoeClim focuses on the Lake Tahoe Basin, a region of critical environmental concern spanning the border of Nevada and California. TahoeClim arose out of the need for researchers, policy makers, and environmental organizations to have access to all available weather and climate information in one place. Additionally, TahoeClim developed tools to both interpret and visualize data for the Tahoe Basin with supporting instructional material. The Great Basin Weather and Climate Dashboard arose from discussions at an informal meeting about Nevada drought organized by the USDA Farm Service Agency. Stakeholders at this meeting expressed a need to take a 'quick glance' at various climate indicators to support their decision making process. Both sites were designed to provide 'one-stop shopping' for weather and climate information in their respective regions and to be intuitive and usable by a diverse audience. An interactive, 'co-development' approach was taken with sites to ensure needs of potential users were met. The sites were

  3. Analysis of Croatian archives' web page from the perspective of public programmes

    Directory of Open Access Journals (Sweden)

    Goran Pavelin

    2015-04-01

    Full Text Available In order to remain relevant in society, archivists should promote collections and records that are kept in the archives. Through public programmes, archives interact with customers and various public actors and create the institutional image. This paper is concerned with the role of public programmes in the process of modernization of the archival practice, with the emphasis on the Croatian state archives. The aim of the paper is to identify what kind of information is offered to users and public in general on the web sites of the Croatian state archives. Public programmes involve two important components of archival practice: archives and users. Therefore, public programmes ensure good relations with the public. Croatian archivists still question the need for public relations in archives, while American and European archives have already integrated public relations into the basic archival functions. The key components needed for successful planning and implementation of public programs are the source of financing, compliance with the annual work plan, clear goals, defined target audience, cooperation and support from the local community, and the evaluation of results.

  4. A new means of communication with the populations: the Extremadura Regional Government Radiological Monitoring alert WEB Page

    International Nuclear Information System (INIS)

    Baeza, A.; Vasco, J.; Miralles, Y.; Torrado, L.; Gil, J. M.

    2003-01-01

    A summary sheet, relatively easy to interpret, gives the radiation levels and dosimetry detected during the immediately preceding semester. Recently, too, the challenge has been taken on of providing constantly updated information on as complex a topic as the radiological monitoring of the environment. To this end, a Web page has been developed dealing with the operation and results provided by the aforementioned Radiological Warning Network of Extremadura. The page structure consists of seven major blocks: (i) origin and objectives of the network; (ii) a description of the stations of the network; (iii) their modes of operation in normal circumstances and in the case of an operational or radiological anomaly; (iv) the results that the network provides; (v) a glossary of terms to clarify, as straightforwardly as possible, some of the terms and concepts that are of unavoidable use but unfamiliar to the population in general; (vi) information about links to other Web sites that also deal with this issue to some degree; and (vii) the option of questions and contact between visitors to the page and those responsible for its creation and maintenance. Actions such as that described here will doubtless contribute positively to increasing the necessary trust that the population deserves to have in the correct operation of the measures adopted to guarantee their adequate radiological protection. (Author)

  5. The effect of new links on Google PageRank

    NARCIS (Netherlands)

    Avrachenkov, Konstatin; Litvak, Nelli

    2004-01-01

    PageRank is one of the principal criteria according to which Google ranks Web pages. PageRank can be interpreted as the frequency with which a random surfer visits a Web page, and thus it reflects the popularity of a Web page. We study the effect of newly created links on Google PageRank. We discuss to what extent a page can control its PageRank.

  6. Web Caching

    Indian Academy of Sciences (India)

    Web Caching - A Technique to Speedup Access to Web Contents. Harsha Srinath, Shiva Shankar Ramanna. General Article, Resonance – Journal of Science Education, Volume 7, Issue 7, July 2002, pp 54-62. Keywords: World wide web; data caching; internet traffic; web page access.

  7. Computing Principal Eigenvectors of Large Web Graphs: Algorithms and Accelerations Related to PageRank and HITS

    Science.gov (United States)

    Nagasinghe, Iranga

    2010-01-01

    This thesis investigates and develops a few acceleration techniques for the search engine algorithms used in PageRank and HITS computations. PageRank and HITS methods are two highly successful applications of modern Linear Algebra in computer science and engineering. They constitute the essential technologies accounted for the immense growth and…

  8. Trident Web page

    Energy Technology Data Exchange (ETDEWEB)

    Johnson, Randall P. [Los Alamos National Laboratory; Fernandez, Juan C. [Los Alamos National Laboratory

    2012-06-25

    An Extensive Diagnostic Suite Enables Cutting-edge Research at Trident The Trident Laser Facility at Los Alamos National Laboratory is an extremely versatile Nd:glass laser system dedicated to high energy density physics research and fundamental laser-matter interactions. Trident's Unique Laser Capabilities Provide an Ideal Platform for Many Experiments. The laser system consists of three high energy beams which can be delivered into two independent target experimental areas. The target areas are equipped with an extensive suite of diagnostics for research in ultra-intense laser matter interactions, dynamic material properties, and laser-plasma instabilities. Several important discoveries and first observations have been made at Trident including laser-accelerated MeV mono-energetic ions, nonlinear kinetic plasma waves, transition between kinetic and fluid nonlinear behavior, as well as other fundamental laser-matter interaction processes. Trident's unique long-pulse capabilities have enabled state-of-the-art innovations in laser-launched flyer-plates, and other unique loading techniques for material dynamics research.

  9. Structural analysis of a composite continuous girder with a single rectangular web opening

    Directory of Open Access Journals (Sweden)

    Mohamed A. ElShaer

    2017-08-01

    In this paper, a non-linear finite element analysis has been done to analyze the deflection in the steel section and internal stresses in the concrete slab for continuous composite girders with a single rectangular opening in the steel web. The ANSYS computer program (version 15) has been used to analyze the three-dimensional model. The reliability of the model was demonstrated by comparison with experimental results of continuous composite beams without an opening in the steel web carried out by another author. The parametric analysis was executed to investigate the effect of the width, height, and position of the opening in one span on the behavior of a composite girder under vertical load. The results indicated that when the width of the opening is less than 0.05 of the length of a single span and the height is less than 0.15 of the steel web, the deflection and internal stresses increase by less than 10% compared to continuous composite girders without an opening.

  10. The effect of new links on Google PageRank

    OpenAIRE

    Avrachenkov, Konstatin; Litvak, Nelli

    2004-01-01

    PageRank is one of the principal criteria according to which Google ranks Web pages. PageRank can be interpreted as a frequency of visiting a Web page by a random surfer and thus it reflects the popularity of a Web page. We study the effect of newly created links on Google PageRank. We discuss to what extent a page can control its PageRank. Using the asymptotic analysis we provide simple conditions that show if new links bring benefits to a Web page and its neighbors in terms of PageRank or not.
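
    The abstract describes PageRank as the stationary distribution of a random surfer. As a rough illustration only (not the authors' asymptotic analysis), the following sketch computes PageRank by power iteration on a toy graph and shows how a newly created in-link changes a page's score; the damping factor 0.85 and the graph itself are illustrative assumptions.

        import numpy as np

        def pagerank(adj, d=0.85, tol=1e-10):
            # Power-iteration PageRank for a small dense adjacency matrix.
            n = adj.shape[0]
            out = adj.sum(axis=1)
            # Row-stochastic transitions; dangling nodes jump uniformly.
            P = np.where(out[:, None] > 0, adj / np.maximum(out, 1.0)[:, None], 1.0 / n)
            r = np.full(n, 1.0 / n)
            while True:
                r_new = (1 - d) / n + d * (P.T @ r)
                if np.abs(r_new - r).sum() < tol:
                    return r_new
                r = r_new

        # Toy graph: 0 -> 1 -> 2 -> 0, plus an initially isolated page 3.
        A = np.zeros((4, 4))
        A[0, 1] = A[1, 2] = A[2, 0] = 1
        before = pagerank(A)
        A[3, 1] = 1  # page 3 creates a new link to page 1
        after = pagerank(A)
        print(before[1], after[1])  # page 1's PageRank rises with the new in-link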

  11. The iMars WebGIS - Spatio-Temporal Data Queries and Single Image Map Web Services

    Science.gov (United States)

    Walter, Sebastian; Steikert, Ralf; Schreiner, Bjoern; Muller, Jan-Peter; van Gasselt, Stephan; Sidiropoulos, Panagiotis; Lanz-Kroechert, Julia

    2017-04-01

    Introduction: Web-based planetary image dissemination platforms usually show outline coverages of the data and offer querying for metadata as well as preview and download, e.g. the HRSC Mapserver (Walter & van Gasselt, 2014). Here we introduce a new approach for a system dedicated to change detection by simultaneous visualisation of single-image time series in a multi-temporal context. While the usual form of presenting multi-orbit datasets is to merge the data into a larger mosaic, we want to stay with the single image as an important snapshot of the planetary surface at a specific time. In the context of the EU FP-7 iMars project we process and ingest vast amounts of automatically co-registered (ACRO) images. The basis of the co-registration is the high precision HRSC multi-orbit quadrangle image mosaics, which are based on bundle-block-adjusted multi-orbit HRSC DTMs. Additionally we make use of the existing bundle-adjusted HRSC single images available at the PDS archives. A prototype demonstrating the presented features is available at http://imars.planet.fu-berlin.de. Multi-temporal database: In order to locate multiple coverage of images and select images based on spatio-temporal queries, we consolidate available coverage catalogs for various NASA imaging missions into a relational database management system with geometry support. We harvest available metadata entries during our processing pipeline using the Integrated Software for Imagers and Spectrometers (ISIS) software. Currently, this database contains image outlines from the MGS/MOC, MRO/CTX and the MO/THEMIS instruments with imaging dates ranging from 1996 to the present. For the MEx/HRSC data, we already maintain a database which we automatically update with custom software based on the VICAR environment. Web Map Service with time support: The MapServer software is connected to the database and provides Web Map Services (WMS) with time support based on the START_TIME image attribute. It allows temporal
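
    The record mentions a MapServer-backed Web Map Service with time support keyed to the START_TIME attribute. As a hedged illustration, the snippet below builds a WMS 1.3.0 GetMap request that uses the standard OGC TIME dimension to select only images acquired in a given interval; the endpoint path, layer name and bounding box are assumptions for the sake of the example, not taken from the record.

        from urllib.parse import urlencode

        params = {
            "SERVICE": "WMS",
            "VERSION": "1.3.0",
            "REQUEST": "GetMap",
            "LAYERS": "hrsc_single_images",  # hypothetical layer name
            "CRS": "EPSG:4326",
            "BBOX": "-5,130,5,140",          # illustrative lat/lon window
            "WIDTH": "512",
            "HEIGHT": "512",
            "FORMAT": "image/png",
            # Standard WMS time dimension: restrict to an acquisition interval.
            "TIME": "2004-01-01/2010-12-31",
        }
        print("http://imars.planet.fu-berlin.de/wms?" + urlencode(params))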

  12. The Convergent Evolution of a Chemistry Project: Using Laboratory Posters as a Platform for Web Page Construction.

    Science.gov (United States)

    Rigeman, Sally

    1998-01-01

    Argues that evolution is a process that occurs within the curriculum as well as within the physical universe. Provides an example that involves student presentations. Discusses the transition from poster presentations to electronic presentations via the World Wide Web. (DDR)

  13. Monte Carlo methods in PageRank computation: When one iteration is sufficient

    NARCIS (Netherlands)

    Avrachenkov, K.; Litvak, Nelli; Nemirovsky, D.; Osipova, N.

    PageRank is one of the principal criteria according to which Google ranks Web pages. PageRank can be interpreted as a frequency of visiting a Web page by a random surfer, and thus it reflects the popularity of a Web page. Google computes the PageRank using the power iteration method, which requires

  14. Monte Carlo methods in PageRank computation: When one iteration is sufficient

    NARCIS (Netherlands)

    Avrachenkov, K.; Litvak, Nelli; Nemirovsky, D.; Osipova, N.

    2005-01-01

    PageRank is one of the principal criteria according to which Google ranks Web pages. PageRank can be interpreted as a frequency of visiting a Web page by a random surfer and thus it reflects the popularity of a Web page. Google computes the PageRank using the power iteration method which requires

  15. SURVEY OF WEB CRAWLING ALGORITHMS

    OpenAIRE

    Rahul kumar; Anurag Jain; Chetan Agrawal

    2017-01-01

    The World Wide Web is the largest collection of data today and it continues to grow day by day. A web crawler is a program that downloads web pages from the World Wide Web in bulk; this process is called Web crawling. To collect web pages from the www, a search engine uses a web crawler. Due to limitations of network bandwidth, time, and hardware, a Web crawler cannot download all the pages; it is important to select the most im...

  16. Personal and Public Start Pages in a library setting

    NARCIS (Netherlands)

    Kieft-Wondergem, Dorine

    Personal and Public Start Pages are web-based resources. With this kind of tool it is possible to make your own free start page. A Start Page allows you to put all your web resources into one page, including blogs, email, podcasts, and RSS feeds. It is possible to share the content of the page with

  17. On Page Rank

    NARCIS (Netherlands)

    Hoede, C.

    In this paper the concept of page rank for the world wide web is discussed. The possibility of describing the distribution of page rank by an exponential law is considered. It is shown that the concept is essentially equal to that of status score, a centrality measure discussed already in 1953 by Katz.

  18. Calculating PageRank in a changing network with added or removed edges

    Science.gov (United States)

    Engström, Christopher; Silvestrov, Sergei

    2017-01-01

    PageRank was initially developed by S. Brin and L. Page in 1998 to rank homepages on the Internet using the stationary distribution of a Markov chain created using the web graph. Due to the large size of the web graph and many other real world networks, fast methods to calculate PageRank are needed, and even if the original way of calculating PageRank using power iterations is rather fast, many other approaches have been proposed to improve the speed further. In this paper we will consider the problem of recalculating PageRank of a changing network where the PageRank of a previous version of the network is known. In particular we will consider the special case of adding or removing edges to a single vertex in the graph or graph component.
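
    Since power iteration converges from any starting vector, one simple way to exploit a known previous PageRank when only a few edges change is to warm-start the iteration from the old vector. This is only a sketch of that general idea, not the authors' recalculation scheme; the column-stochastic matrix P and the tolerance are assumptions.

        import numpy as np

        def pagerank_warm_start(P, r_prev, d=0.85, tol=1e-10):
            # P: column-stochastic transition matrix of the *updated* graph,
            # i.e. P[i, j] is the probability of moving from page j to page i.
            # r_prev: PageRank of the previous version of the network.
            n = len(r_prev)
            r = r_prev.copy()
            iterations = 0
            while True:
                r_new = (1 - d) / n + d * (P @ r)
                iterations += 1
                if np.abs(r_new - r).sum() < tol:
                    # Typically far fewer iterations than a cold start
                    # when only a single vertex's edges changed.
                    return r_new, iterations
                r = r_new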

  19. An Efficient Monte Carlo Approach to Compute PageRank for Large Graphs on a Single PC

    Directory of Open Access Journals (Sweden)

    Sonobe Tomohiro

    2016-03-01

    Full Text Available This paper describes a novel Monte Carlo based random walk to compute PageRanks of nodes in a large graph on a single PC. The target graphs of this paper are ones whose size is larger than the physical memory. In such an environment, memory management is a difficult task for simulating the random walk among the nodes. We propose a novel method that partitions the graph into subgraphs in order to make them fit into the physical memory, and conducts the random walk for each subgraph. By evaluating the walks lazily, we can conduct the walks only in a subgraph and approximate the random walk by rotating the subgraphs. In computational experiments, the proposed method exhibits good performance for existing large graphs with several passes of the graph data.
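
    Stripped of the paper's out-of-core machinery (subgraph partitioning and lazy walk evaluation), the underlying Monte Carlo estimator can be sketched as follows: start a number of walks from every node, restart each walk with probability 1-d, and approximate PageRank by visit frequencies. The graph and parameters below are illustrative assumptions.

        import random
        from collections import Counter

        def monte_carlo_pagerank(graph, walks_per_node=100, d=0.85):
            # graph: dict mapping node -> list of out-neighbours.
            visits = Counter()
            for start in graph:
                for _ in range(walks_per_node):
                    node = start
                    while True:
                        visits[node] += 1
                        # Stop (teleport) with prob. 1-d, or at a dangling node.
                        if random.random() > d or not graph[node]:
                            break
                        node = random.choice(graph[node])
            total = sum(visits.values())
            return {n: visits[n] / total for n in graph}

        g = {0: [1], 1: [2], 2: [0], 3: [1]}
        print(monte_carlo_pagerank(g))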

  20. The STRESA (storage of reactor safety) database (Web page: http://asa2.jrc.it/stresa)

    Energy Technology Data Exchange (ETDEWEB)

    Annunziato, A.; Addabbo, C.; Brewka, W. [Joint Research Centre, Commission of the European Communities, Ispra (Italy)

    2001-07-01

    A considerable amount of resources has been devoted at the international level during the last few decades, to the generation of experimental databases in order to provide reference information for the understanding of reactor safety relevant phenomenologies and for the development and/or assessment of related computational methodologies. The extent to which these databases are preserved and can be accessed and retrieved is an issue of major concern. This paper provides an outline of the JRC databases preservation initiative and a description of the supporting web-based computer platform STRESA. (author)

  1. The STRESA (storage of reactor safety) database (Web page: http://asa2.jrc.it/stresa)

    International Nuclear Information System (INIS)

    Annunziato, A.; Addabbo, C.; Brewka, W.

    2001-01-01

    A considerable amount of resources has been devoted at the international level during the last few decades, to the generation of experimental databases in order to provide reference information for the understanding of reactor safety relevant phenomenologies and for the development and/or assessment of related computational methodologies. The extent to which these databases are preserved and can be accessed and retrieved is an issue of major concern. This paper provides an outline of the JRC databases preservation initiative and a description of the supporting web-based computer platform STRESA. (author)

  2. Improved Outcomes Following a Single Session Web-Based Intervention for Problem Gambling.

    Science.gov (United States)

    Rodda, S N; Lubman, D I; Jackson, A C; Dowling, N A

    2017-03-01

    Research suggests online interventions can have instant impact; however, this is yet to be tested with help-seeking adults, in particular those with problem gambling. This study seeks to determine the immediate impact of a single session web-based intervention for problem gambling, and to examine whether sessions evaluated positively by clients are associated with greater improvement. The current study involved 229 participants classified as problem gamblers who agreed to participate after accessing Gambling Help Online between November 2010 and February 2012. Almost half were aged under 35 years (45%), male (57%), and first time treatment seekers (62%). Participants completed measures of readiness to change and distress both prior to and after counselling. Following the provision of a single session of counselling, participants completed ratings of the character of the session (i.e., degree of depth and smoothness). A significant increase in confidence to resist an urge to gamble and a significant decrease in distress (moderate effect sizes; d = .56 and .63 respectively) were observed after receiving online counselling. A hierarchical regression indicated the character of the session was a significant predictor of change in confidence; however, only the sub-scale smoothness was a significant predictor of change in distress. This was the case even after controlling for pre-session distress, session word count and client characteristics (gender, age, preferred gambling activity, preferred mode of gambling, gambling severity, and preferred mode of help-seeking). These findings suggest that single session web-based counselling for problem gambling can have immediate benefits, although further research is required to examine the impact on longer-term outcomes.

  3. EPA Web Taxonomy

    Data.gov (United States)

    U.S. Environmental Protection Agency — EPA's Web Taxonomy is a faceted hierarchical vocabulary used to tag web pages with terms from a controlled vocabulary. Tagging enables search and discovery of EPA's...

  4. Social Bookmarking Induced Active Page Ranking

    Science.gov (United States)

    Takahashi, Tsubasa; Kitagawa, Hiroyuki; Watanabe, Keita

    Social bookmarking services have recently made it possible for us to register and share our own bookmarks on the web and are attracting attention. The services let us get structured data: (URL, Username, Timestamp, Tag Set). These data represent user interest in web pages. The number of bookmarks is a barometer of web page value. Some web pages have many bookmarks, but most of those bookmarks may have been posted far in the past. Therefore, even if a web page has many bookmarks, their value is not guaranteed. If most of the bookmarks are very old, the page may be obsolete. In this paper, by focusing on the timestamp sequence of social bookmarkings on web pages, we model their activation levels, which represent their current value. Further, we improve our previously proposed ranking method for web search by introducing the activation level concept. Finally, through experiments, we show the effectiveness of the proposed ranking method.
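
    One simple way to realize the activation-level idea described above is to let each bookmark's contribution decay exponentially with its age, so that a page with many recent bookmarks outscores a page whose equally numerous bookmarks are all old. This is a hedged sketch of that notion; the half-life parameter is an illustrative choice, not taken from the paper.

        import math
        import time

        def activation_level(bookmark_timestamps, half_life_days=30.0, now=None):
            # bookmark_timestamps: Unix times at which the page was bookmarked.
            now = time.time() if now is None else now
            decay = math.log(2) / (half_life_days * 86400.0)
            # Each bookmark contributes exp(-decay * age); recent ones count more.
            return sum(math.exp(-decay * (now - t)) for t in bookmark_timestamps)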

  5. Funnel-web spider bite

    Science.gov (United States)

    Funnel-web spider bite (medlineplus.gov/ency/article/002844.htm). ... the effects of a bite from the funnel-web spider. Male funnel-web spiders are more poisonous ...

  6. INTERNET and information about nuclear sciences. The world wide web virtual library: nuclear sciences

    International Nuclear Information System (INIS)

    Kuruc, J.

    1999-01-01

    In this work the author proposes to constitute a new virtual library which should centralize information from the nuclear disciplines on the INTERNET, in order first and foremost to provide connections to the most important links in the nuclear sciences. The author has entitled this new virtual library The World Wide Web Virtual Library: Nuclear Sciences. In constituting this virtual library the following basic principles were chosen: home pages of international organizations important from the point of view of the nuclear disciplines; home pages of the National Nuclear Commissions and governments; home pages of nuclear scientific societies; web pages specialized in nuclear problematics in general; periodic tables of elements and isotopes; web pages aimed at the Chernobyl crash and its consequences; web pages with an antinuclear aim. The links then continue, grouped into web pages according to individual nuclear areas: nuclear arsenals; nuclear astrophysics; nuclear aspects of biology (radiobiology); nuclear chemistry; nuclear companies; nuclear data centres; nuclear energy; environmental aspects of nuclear energy (radioecology); nuclear energy info centres; nuclear engineering; nuclear industries; nuclear magnetic resonance; nuclear material monitoring; nuclear medicine and radiology; nuclear physics; nuclear power (plants); nuclear reactors; nuclear risk; nuclear technologies and defence; nuclear testing; nuclear tourism; nuclear wastes. Within each of these groups, web links are concentrated into the following categories: virtual libraries and specialized servers; science; nuclear societies; nuclear departments of academic institutes; nuclear research institutes and laboratories; centres, info links

  7. Instant responsive web design

    CERN Document Server

    Simmons, Cory

    2013-01-01

    A step-by-step tutorial approach which will teach the readers what responsive web design is and how it is used in designing a responsive web page. If you are a web designer looking to expand your skill set by learning the quickly growing industry standard of responsive web design, this book is ideal for you. Knowledge of CSS is assumed.

  8. 16 CFR 1130.8 - Requirements for Web site registration or alternative e-mail registration.

    Science.gov (United States)

    2010-01-01

    ... registration. (a) Link to registration page. The manufacturer's Web site, or other Web site established for the... web page that goes directly to “Product Registration.” (b) Purpose statement. The registration page... registration page. The Web site registration page shall request only the consumer's name, address, telephone...

  9. Methodologies for Crawler Based Web Surveys.

    Science.gov (United States)

    Thelwall, Mike

    2002-01-01

    Describes Web survey methodologies used to study the content of the Web, and discusses search engines and the concept of crawling the Web. Highlights include Web page selection methodologies; obstacles to reliable automatic indexing of Web sites; publicly indexable pages; crawling parameters; and tests for file duplication. (Contains 62…

  10. The Importance of Prior Probabilities for Entry Page Search

    NARCIS (Netherlands)

    Kraaij, W.; Westerveld, T.H.W.; Hiemstra, Djoerd

    An important class of searches on the world-wide-web has the goal of finding an entry page (homepage) of an organisation. Entry page search is quite different from Ad Hoc search. Indeed a plain Ad Hoc system performs disappointingly. We explored three non-content features of web pages: page length,

  11. The mathematics behind PageRank algorithm

    OpenAIRE

    Spačal, Gregor

    2016-01-01

    PageRank is Google's algorithm for ranking web pages by relevance. Pages can then be hierarchically sorted in order to provide better search results. The MSc thesis considers the functioning, relevance, and general properties of web search, and its weaknesses before the appearance of Google. One of the most important questions is whether we can formally explain the mathematics behind the PageRank algorithm and what mathematical knowledge is necessary. Finally, we present an example of its implementation i...

  12. EPA Web Training Classes

    Science.gov (United States)

    Scheduled webinars can help you better manage EPA web content. Class topics include Drupal basics, creating different types of pages in the WebCMS such as document pages and forms, using Google Analytics, and best practices for metadata and accessibility.

  13. Aplikasi Web Crawler Untuk Web Content Pada Mobile Phone

    OpenAIRE

    Sarwosri, Sarwosri; Basori, Ahmad Hoirul; Surastyo, Wahyu Budi

    2009-01-01

    Crawling is the process behind a search engine, which browses the World Wide Web in a structured manner and with certain ethics. An application that runs the crawling process is called a Web Crawler, also called a web spider or web robot. The growth of mobile search service providers has been followed by the growth of web crawlers that can browse web pages of mobile content type. The Web Crawler application can be accessed by mobile devices, and only web pages of the Mobile Content type are explored by the Web Crawler.

  14. A new means of communication with the populations: the Extremadura Regional Government Radiological Monitoring alert WEB Page; Un nuevo intento de comunicacion a la poblacion: La pagina Web de la red de alerta de la Junta de Extremadura

    Energy Technology Data Exchange (ETDEWEB)

    Baeza, A.; Vasco, J.; Miralles, Y.; Torrado, L.; Gil, J. M.

    2003-07-01

    Extremadura XXI a summary sheet, relatively easy to interpret, giving the radiation levels and dosimetry detected during the immediately preceding semester. Recently too, the challenge has been taken on of providing constantly updated information on as complex a topic as the radiological monitoring of the environment. To this end, a Web page has been developed dealing with the operation and results provided by the aforementioned Radiological Warning Network of Extremadura. The page structure consists of seven major blocks: (i) origin and objectives of the network; (ii) a description of the stations of the network; (iii) their modes of operation in normal circumstances and in the case of an operational or radiological anomaly; (iv) the results that the network provides; (v) a glossary of terms to clarify as straightforwardly as possible some of the terms and concepts that are of unavoidable use, but are unfamiliar to the population in general; (vi) information about links to other Web sites that also deal with this issue to some degree; and (vii) giving the option of questions and contacts between the visitor to the page and those responsible for its creation and maintenance. Actions such as that described here will doubtless contribute positively to increasing the necessary trust that the population deserves to have in the correct operation of the measures adopted to guarantee their adequate radiological protection. (Author)

  15. A combined paging alert and web-based instrument alters clinician behavior and shortens hospital length of stay in acute pancreatitis.

    Science.gov (United States)

    Dimagno, Matthew J; Wamsteker, Erik-Jan; Rizk, Rafat S; Spaete, Joshua P; Gupta, Suraj; Sahay, Tanya; Costanzo, Jeffrey; Inadomi, John M; Napolitano, Lena M; Hyzy, Robert C; Desmond, Jeff S

    2014-03-01

    There are many published clinical guidelines for acute pancreatitis (AP). Implementation of these recommendations is variable. We hypothesized that a clinical decision support (CDS) tool would change clinician behavior and shorten hospital length of stay (LOS). Observational study, entitled The AP Early Response (TAPER) Project. Tertiary center emergency department (ED) and hospital. Two consecutive samplings of patients having ICD-9 code (577.0) for AP were generated from the emergency department (ED) or hospital admissions. Diagnosis of AP was based on conventional Atlanta criteria. The Pre-TAPER-CDS-Tool group (5/30/06-6/22/07) had 110 patients presenting to the ED with AP per 976 ICD-9 (577.0) codes and the Post-TAPER-CDS-Tool group (7/14/10-5/5/11) had 113 per 907 ICD-9 codes. The TAPER-CDS-Tool, developed 12/2008-7/14/2010, is a combined early, automated paging-alert system, which text pages ED clinicians about a patient with AP, and an intuitive web-based point-of-care instrument consisting of seven early management recommendations. The pre- vs. post-TAPER-CDS-Tool groups had similar baseline characteristics. The post-TAPER-CDS-Tool group met two management goals more frequently than the pre-TAPER-CDS-Tool group: risk stratification (P<0.0001) and fluid volume >6 L in the first 0-24 h (P=0.0003). Mean (s.d.) hospital LOS was significantly shorter in the post-TAPER-CDS-Tool group (4.6 (3.1) vs. 6.7 (7.0) days, P=0.0126). Multivariate analysis identified four independent variables for hospital LOS: the TAPER-CDS-Tool, associated with shorter LOS (P=0.0049), and three variables associated with longer LOS: Japanese severity score (P=0.0361), persistent organ failure (P=0.0088), and local pancreatic complications (P<0.0001). The TAPER-CDS-Tool is associated with changed clinician behavior and shortened hospital LOS, which has significant financial implications.

  16. Single Session Web-Based Counselling: A Thematic Analysis of Content from the Perspective of the Client

    Science.gov (United States)

    Rodda, S. N.; Lubman, D. I.; Cheetham, A.; Dowling, N. A.; Jackson, A. C.

    2015-01-01

    Despite the exponential growth of non-appointment-based web counselling, there is limited information on what happens in a single session intervention. This exploratory study, involving a thematic analysis of 85 counselling transcripts of people seeking help for problem gambling, aimed to describe the presentation and content of online…

  17. Asymptotic analysis for personalized web search

    NARCIS (Netherlands)

    Volkovich, Y.; Litvak, Nelli

    2010-01-01

    PageRank with personalization is used in Web search as an importance measure for Web documents. The goal of this paper is to characterize the tail behavior of the PageRank distribution in the Web and other complex networks characterized by power laws. To this end, we model the PageRank as a solution

  18. Accessibility of State Department of Education Home Pages and Special Education Pages.

    Science.gov (United States)

    Opitz, Christine; Savenye, Wilhelmina; Rowland, Cyndi

    2003-01-01

    This study evaluated State Department of Education Internet home pages and special education pages for accessibility compliance with standards of the World Wide Web Consortium and Section 508 of the revised Rehabilitation Act. Only 26% of state department home pages and 52% of special education pages achieved W3C compliance and fewer conformed…

  19. The One-Page Project Manager Comunicate and Manage Any Project With a Single Sheet of Paper

    CERN Document Server

    Campbell, Clark A

    2007-01-01

    The One-Page Project Manager shows you how to boil down any project into a simple, one-page document that can be used to communicate all essential details to upper management, other departments, suppliers, and audiences. This practical guide will save time and effort, helping you identify the vital parts of a project and communicate those parts and duties to other team members.

  20. Marketing on the World Wide Web.

    Science.gov (United States)

    Teague, John H.

    1995-01-01

    Discusses the World Wide Web, its importance for marketing, its advantages, non-commercial promotions on the Web, how businesses use the Web, the Web market, resistance to Internet commercialization, getting on the Web, creating Web pages, rising above the noise, and some of the Web's problems and limitations. (SR)

  1. Microsoft Expression Web for dummies

    CERN Document Server

    Hefferman, Linda

    2013-01-01

    Expression Web is Microsoft's newest tool for creating and maintaining dynamic Web sites. This FrontPage replacement offers all the simple "what-you-see-is-what-you-get" tools for creating a Web site along with some pumped up new features for working with Cascading Style Sheets and other design options. Microsoft Expression Web For Dummies arrives in time for early adopters to get a feel for how to build an attractive Web site. Author Linda Hefferman teams up with longtime FrontPage For Dummies author Asha Dornfest to show the easy way for first-time Web designers, FrontPage ve

  2. Parametrisation of web pages design

    OpenAIRE

    Vozel, Gašper Karantan

    2011-01-01

    In this thesis we analyze the structure and elements of websites and, with the help of criteria for aesthetic design, evaluate them. The purpose of the thesis is to find the connection between source code and website aesthetics. In the first part of the thesis we choose and evaluate a meaningful set of criteria for website aesthetics. In the second part we evaluate this set of criteria by surveying a group of users. After processing the survey data with the Orange software, we created a model which wa...

  3. Users page feedback

    CERN Multimedia

    2010-01-01

    In October last year the Communication Group proposed an interim redesign of the users’ web pages in order to improve the visibility of key news items, events and announcements to the CERN community. The proposed update to the users' page (right), and the current version (left, behind) This proposed redesign was seen as a small step on the way to much wider reforms of the CERN web landscape proposed in the group’s web communication plan.   The results are available here. Some of the key points: - the balance between news / events / announcements and access to links on the users’ pages was not right - many people asked to see a reversal of the order so that links appeared first, news/events/announcements last; - many people felt that we should keep the primary function of the users’ pages as an index to other CERN websites; - many people found the sections of the front page to be poorly delineated; - people do not like scrolling; - there were performance...

  4. Introduction to Webometrics Quantitative Web Research for the Social Sciences

    CERN Document Server

    Thelwall, Michael

    2009-01-01

    Webometrics is concerned with measuring aspects of the web: web sites, web pages, parts of web pages, words in web pages, hyperlinks, web search engine results. The importance of the web itself as a communication medium and for hosting an increasingly wide array of documents, from journal articles to holiday brochures, needs no introduction. Given this huge and easily accessible source of information, there are limitless possibilities for measuring or counting on a huge scale (e.g., the number of web sites, the number of web pages, the number of blogs) or on a smaller scale (e.g., the number o

  5. What snippets say about pages

    NARCIS (Netherlands)

    Demeester, Thomas; Nguyen, Dong-Phuong; Trieschnigg, Rudolf Berend; Develder, Chris; Hiemstra, Djoerd

    What is the likelihood that a Web page is considered relevant to a query, given the relevance assessment of the corresponding snippet? Using a new Federated Web Search test collection that contains search results from over a hundred search engines on the internet, we are able to investigate such

  6. Introduction to the world wide web.

    Science.gov (United States)

    Downes, P K

    2007-05-12

    The World Wide Web used to be nicknamed the 'World Wide Wait'. Now, thanks to high speed broadband connections, browsing the web has become a much more enjoyable and productive activity. Computers need to know where web pages are stored on the Internet, in just the same way as we need to know where someone lives in order to post them a letter. This section explains how the World Wide Web works and how web pages can be viewed using a web browser.

  7. Reese Sorenson's Individual Professional Page

    Science.gov (United States)

    Sorenson, Reese; Nixon, David (Technical Monitor)

    1998-01-01

    The subject document is a World Wide Web (WWW) page entitled, "Reese Sorenson's Individual Professional Page." It can be accessed at "http://george.arc.nasa.gov/sorenson/personal/index.html". The purpose of this page is to make the reader aware of me, who I am, and what I do. It lists my work assignments, my computer experience, my place in the NASA hierarchy, publications by me, awards received by me, my education, and how to contact me. Writing this page was a learning experience, pursuant to an element in my Job Description which calls for me to be able to use the latest computers. This web page contains very little technical information, none of which is classified or sensitive.

  8. Entertainment Pages.

    Science.gov (United States)

    Druce, Mike

    1981-01-01

    Notes that the planning of an effective entertainment page in a school newspaper must begin by establishing its purpose. Examines all the elements that contribute to the makeup of a good entertainment page. (RL)

  9. Probabilistic relation between In-Degree and PageRank

    NARCIS (Netherlands)

    Litvak, Nelli; Scheinhardt, Willem R.W.; Volkovich, Y.

    2008-01-01

    This paper presents a novel stochastic model that explains the relation between power laws of In-Degree and PageRank. PageRank is a popularity measure designed by Google to rank Web pages. We model the relation between PageRank and In-Degree through a stochastic equation, which is inspired by the

  10. Linking Wikipedia to the web

    NARCIS (Netherlands)

    Kaptein, R.; Serdyukov, P.; Kamps, J.; Chen, H.-H.; Efthimiadis, E.N.; Savoy, J.; Crestani, F.; Marchand-Maillet, S.

    2010-01-01

    We investigate the task of finding links from Wikipedia pages to external web pages. Such external links significantly extend the information in Wikipedia with information from the Web at large, while retaining the encyclopedic organization of Wikipedia. We use a language modeling approach to create

  11. New in protein structure and function annotation: hotspots, single nucleotide polymorphisms and the 'Deep Web'.

    Science.gov (United States)

    Bromberg, Yana; Yachdav, Guy; Ofran, Yanay; Schneider, Reinhard; Rost, Burkhard

    2009-05-01

    The rapidly increasing quantity of protein sequence data continues to widen the gap between available sequences and annotations. Comparative modeling suggests some aspects of the 3D structures of approximately half of all known proteins; homology- and network-based inferences annotate some aspect of function for a similar fraction of the proteome. For most known protein sequences, however, there is detailed knowledge about neither their function nor their structure. Comprehensive efforts towards the expert curation of sequence annotations have failed to meet the demand of the rapidly increasing number of available sequences. Only the automated prediction of protein function in the absence of homology can close the gap between available sequences and annotations in the foreseeable future. This review focuses on two novel methods for automated annotation, and briefly presents an outlook on how modern web software may revolutionize the field of protein sequence annotation. First, predictions of protein binding sites and functional hotspots, and the evolution of these into the most successful type of prediction of protein function from sequence will be discussed. Second, a new tool, comprehensive in silico mutagenesis, which contributes important novel predictions of function and at the same time prepares for the onset of the next sequencing revolution, will be described. While these two new sub-fields of protein prediction represent the breakthroughs that have been achieved methodologically, it will then be argued that a different development might further change the way biomedical researchers benefit from annotations: modern web software can connect the worldwide web in any browser with the 'Deep Web' (ie, proprietary data resources). The availability of this direct connection, and the resulting access to a wealth of data, may impact drug discovery and development more than any existing method that contributes to protein annotation.

  12. Customisable Scientific Web Portal for Fusion Research

    Energy Technology Data Exchange (ETDEWEB)

    Abla, G.; Kim, E.; Schissel, D.; Flannagan, S. [General Atomics, San Diego (United States)

    2009-07-01

    The Web browser has become one of the major application interfaces for remotely participating in magnetic fusion. Web portals are used to present very diverse sources of information in a unified way. While a web portal has several benefits over other software interfaces, such as providing single point of access for multiple computational services, and eliminating the need for client software installation, the design and development of a web portal has unique challenges. One of the challenges is that a web portal needs to be fast and interactive despite a high volume of tools and information that it presents. Another challenge is the visual output on the web portal often is overwhelming due to the high volume of data generated by complex scientific instruments and experiments; therefore the applications and information should be customizable depending on the needs of users. An appropriate software architecture and web technologies can meet these problems. A web-portal has been designed to support the experimental activities of DIII-D researchers worldwide. It utilizes a multi-tier software architecture, and web 2.0 technologies, such as AJAX, Django, and Memcached, to develop a highly interactive and customizable user interface. It offers a customizable interface with personalized page layouts and list of services for users to select. Customizable services are: real-time experiment status monitoring, diagnostic data access, interactive data visualization. The web-portal also supports interactive collaborations by providing collaborative logbook, shared visualization and online instant message services. Furthermore, the web portal will provide a mechanism to allow users to create their own applications on the web portal as well as bridging capabilities to external applications such as Twitter and other social networks. In this series of slides, we describe the software architecture of this scientific web portal and our experiences in utilizing web 2.0 technologies. A

  13. DATA EXTRACTION AND LABEL ASSIGNMENT FOR WEB DATABASES

    OpenAIRE

    T. Rajesh; T. Prathap; S.Naveen Nambi; A.R. Arunachalam

    2015-01-01

    Deep Web contents are accessed by queries submitted to Web databases, and the returned data records are wrapped in dynamically generated Web pages (they will be called deep Web pages in this paper). Extracting structured data from deep Web pages is a challenging problem due to the underlying intricate structures of such pages. Until now, a large number of techniques have been proposed to address this problem, but all of them have limitations because they are Web-page-programming...

  14. WebVis: a hierarchical web homepage visualizer

    Science.gov (United States)

    Renteria, Jose C.; Lodha, Suresh K.

    2000-02-01

    WebVis, the Hierarchical Web Home Page Visualizer, is a tool for managing home web pages. The user can access this tool via the WWW and obtain a hierarchical visualization of one's home web pages. WebVis is a real time interactive tool that supports many different queries on the statistics of internal files such as size, age, and type. In addition, statistics on embedded information such as VRML files, Java applets, images and sound files can be extracted and queried. Results of these queries are visualized using the color, shape and size of different nodes of the hierarchy. The visualization assists the user in a variety of tasks, such as quickly finding outdated information or locating large files. WebVis is one solution to the growing web space maintenance problem. Implementation of WebVis is realized with Perl and Java. Perl pattern matching and file handling routines are used to collect and process web space linkage information and web document information. Java utilizes the collected information to produce visualization of the web space. Java also provides WebVis with real time interactivity, while running off the WWW. Some WebVis examples of home web page visualization are presented.

  15. APLIKASI WEB CRAWLER UNTUK WEB CONTENT PADA MOBILE PHONE

    Directory of Open Access Journals (Sweden)

    Sarwosri Sarwosri

    2009-01-01

    Full Text Available Crawling is the process behind a search engine, which browses the World Wide Web in a structured manner and with certain ethics. An application that runs the crawling process is called a Web Crawler, also called a web spider or web robot. The growth of mobile search service providers has been followed by the growth of web crawlers that can browse web pages of mobile content type. The Web Crawler application can be accessed by mobile devices, and only web pages of the Mobile Content type are explored by the Web Crawler. The Web Crawler's duty is to collect a number of Mobile Content pages. A mobile application functions as a search application that will use the results from the Web Crawler. The Web Crawler server consists of the Servlet, the Mobile Content Filter and the datastore. The Servlet is the connection gateway between the client and the server. The datastore is the storage medium for crawling results. The Mobile Content Filter selects web pages; only pages appropriate for mobile devices, or with mobile content, will be forwarded.

  16. REVIEW PAPER ON THE DEEP WEB DATA EXTRACTION

    OpenAIRE

    V. S. Patil; Sneha Sitafale; Priyanka Kale; Poonam Bhujbal; Mohini Dandge

    2018-01-01

    Deep web data extraction is the process of extracting a set of data records and the items that they contain from a query result page. Such structured data can be later integrated into results from other data sources and given to the user in a single, cohesive view. Domain identification is used to identify the query interfaces related to the domain from the forms obtained in the search process. The surface web contains a large amount of unfiltered information, whereas the deep web includes hi...

  17. Discovery and Selection of Semantic Web Services

    CERN Document Server

    Wang, Xia

    2013-01-01

    For advanced web search engines to be able not only to search for semantically related information dispersed over different web pages, but also for semantic services providing certain functionalities, discovering semantic services is the key issue. Addressing four problems of current solution, this book presents the following contributions. A novel service model independent of semantic service description models is proposed, which clearly defines all elements necessary for service discovery and selection. It takes service selection as its gist and improves efficiency. Corresponding selection algorithms and their implementation as components of the extended Semantically Enabled Service-oriented Architecture in the Web Service Modeling Environment are detailed. Many applications of semantic web services, e.g. discovery, composition and mediation, can benefit from a general approach for building application ontologies. With application ontologies thus built, services are discovered in the same way as with single...

  18. Advanced express web application development

    CERN Document Server

    Keig, Andrew

    2013-01-01

    A practical book, guiding the reader through the development of a single page application using a feature-driven approach. If you are an experienced JavaScript developer who wants to build highly scalable, real-world applications using Express, this book is ideal for you. This book is an advanced title and assumes that the reader has some experience with Node, JavaScript MVC web development frameworks, and has heard of Express before, or is familiar with it. You should also have a basic understanding of Redis and MongoDB. This book is not a tutorial on Node, but aims to explore some of the more

  19. Web Defacement and Intrusion Monitoring Tool: WDIMT

    CSIR Research Space (South Africa)

    Masango, Mfundo G

    2017-09-01

    Full Text Available Web defacement attacks aim at altering the content of the web pages or making the website inactive. This paper proposes a Web Defacement and Intrusion Monitoring Tool that could be a possible solution to the rapid identification of altered or deleted web pages. The proposed tool...

  20. Research and optimization of page updated forecast on Nutch

    Directory of Open Access Journals (Sweden)

    HU Wei

    2016-08-01

    Full Text Available The web page update prediction method of Nutch is an adjacency method whose update parameters need to be set manually; it is not adaptively adjustable and is unable to cope with the differences in update behavior among massive numbers of web pages. To address this problem, this paper puts forward a dynamic selection strategy to improve Nutch's web page update prediction. When the historical web page update data are insufficient, the strategy uses a MapReduce-based DBSCAN clustering algorithm to reduce the number of pages the crawler system must crawl; the update cycle of the sampled web pages is used as the update cycle of the other pages in the same category. When the historical web page update data are sufficient, the data are modeled with a Poisson process, which can more accurately predict each web page's update cycle. Finally, the improved strategy is tested on the Hadoop distributed platform. The experimental results show that the optimized web page update prediction method performs better.
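
    When enough change history is available, the Poisson-process modelling mentioned above reduces to a simple estimate: the maximum-likelihood change rate is the number of observed changes divided by the observation span, and its reciprocal can serve as the page's recrawl cycle. A minimal sketch with illustrative numbers, not the paper's implementation:

        def poisson_update_cycle(num_changes, observation_span_days):
            # MLE of the Poisson rate: observed changes per unit time.
            if num_changes == 0:
                return float("inf")  # never observed to change in the window
            rate = num_changes / observation_span_days
            return 1.0 / rate  # expected days between updates

        # A page that changed 6 times in 30 days -> recrawl roughly every 5 days.
        print(poisson_update_cycle(6, 30.0))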

  1. Science.Gov - A single gateway to the deep web knowledge of U.S. science agencies

    International Nuclear Information System (INIS)

    Hitson, B.A.

    2004-01-01

    The impact of science and technology on our daily lives is easily demonstrated. From new drug discoveries, to new and more efficient energy sources, to the incorporation of new technologies into business and industry, the productive applications of R and D are innumerable. The possibility of creating such applications depends most heavily on the availability of one resource: knowledge. Knowledge must be shared for scientific progress to occur. In the past, the ability to share knowledge electronically has been limited by the 'deep Web' nature of scientific databases and the lack of technology to simultaneously search disparate and decentralized information collections. U.S. science agencies invest billions of dollars each year on basic and applied research and development projects. To make the collective knowledge from this R and D more easily accessible and searchable, 12 science agencies collaborated to develop Science.gov - a single, searchable gateway to the deep Web knowledge of U.S. science agencies. This paper will describe Science.gov and its contribution to nuclear knowledge management. (author)

  2. Customizable Scientific Web Portal for Fusion Research

    Energy Technology Data Exchange (ETDEWEB)

    Abla, G.; Kim, E.; Schissel, D.; Flannagan, S. [General Atomics, San Diego (United States)

    2009-07-01

    The Web browser has become one of the major application interfaces for remotely participating in magnetic fusion experiments. Recently in other areas, web portals have begun to be deployed. These portals are used to present very diverse sources of information in a unified way. While a web portal has several benefits over other software interfaces, such as providing single point of access for multiple computational services, and eliminating the need for client software installation, the design and development of a web portal has unique challenges. One of the challenges is that a web portal needs to be fast and interactive despite a high volume of tools and information that it presents. Another challenge is the visual output on the web portal often is overwhelming due to the high volume of data generated by complex scientific instruments and experiments; therefore the applications and information should be customizable depending on the needs of users. An appropriate software architecture and web technologies can meet these problems. A web-portal has been designed to support the experimental activities of DIII-D researchers worldwide. It utilizes a multi-tier software architecture, and web 2.0 technologies, such as AJAX, Django, and Memcached, to develop a highly interactive and customizable user interface. It offers a customizable interface with personalized page layouts and list of services for users to select. The users can create a unique personalized working environment to fit their own needs and interests. Customizable services are: real-time experiment status monitoring, diagnostic data access, interactive data visualization. The web-portal also supports interactive collaborations by providing collaborative logbook, shared visualization and online instant message services. Furthermore, the web portal will provide a mechanism to allow users to create their own applications on the web portal as well as bridging capabilities to external applications such as

  3. Survey of Techniques for Deep Web Source Selection and Surfacing the Hidden Web Content

    OpenAIRE

    Khushboo Khurana; M.B. Chandak

    2016-01-01

    Large and continuously growing dynamic web content has created new opportunities for large-scale data analysis in the recent years. There is huge amount of information that the traditional web crawlers cannot access, since they use link analysis technique by which only the surface web can be accessed. Traditional search engine crawlers require the web pages to be linked to other pages via hyperlinks causing large amount of web data to be hidden from the crawlers. Enormous data is available in...

  4. Deep iCrawl: An Intelligent Vision-Based Deep Web Crawler

    OpenAIRE

    R.Anita; V.Ganga Bharani; N.Nityanandam; Pradeep Kumar Sahoo

    2011-01-01

    The explosive growth of World Wide Web has posed a challenging problem in extracting relevant data. Traditional web crawlers focus only on the surface web while the deep web keeps expanding behind the scene. Deep web pages are created dynamically as a result of queries posed to specific web databases. The structure of the deep web pages makes it impossible for traditional web crawlers to access deep web contents. This paper, Deep iCrawl, gives a novel and vision-based app...

  5. Guidelines for Collecting Aggregations of Web Resources.

    Science.gov (United States)

    Walters, William H.; Demas, Samuel G.; Stewart, Linda; Weintraub, Jennifer

    1998-01-01

    Presents three criteria (content, coherence, and functionality) for selecting aggregated World Wide Web resources and planning presentations of aggregated resources in library catalogs and Web pages. Ensuring access to aggregated resources is also discussed. (PEN)

  6. Even Faster Web Sites Performance Best Practices for Web Developers

    CERN Document Server

    Souders, Steve

    2009-01-01

    Performance is critical to the success of any web site, and yet today's web applications push browsers to their limits with increasing amounts of rich content and heavy use of Ajax. In this book, Steve Souders, web performance evangelist at Google and former Chief Performance Yahoo!, provides valuable techniques to help you optimize your site's performance. Souders' previous book, the bestselling High Performance Web Sites, shocked the web development world by revealing that 80% of the time it takes for a web page to load is on the client side. In Even Faster Web Sites, Souders and eight exp

  7. Nuclear expert web search and crawler algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Reis, Thiago; Barroso, Antonio C.O.; Baptista, Benedito Filho D., E-mail: thiagoreis@usp.br, E-mail: barroso@ipen.br, E-mail: bdbfilho@ipen.br [Instituto de Pesquisas Energeticas e Nucleares (IPEN/CNEN-SP), Sao Paulo, SP (Brazil)

    2013-07-01

    In this paper we present preliminary research on a web search and crawler algorithm applied specifically to nuclear-related web information. We designed a web-based nuclear-oriented expert system guided by a web crawler algorithm and a neural network, able to search and retrieve nuclear-related hypertextual web information in an autonomous and massive fashion. Preliminary experimental results show a retrieval precision of 80% for web pages related to any nuclear theme and a retrieval precision of 72% for web pages related only to the nuclear power theme. (author)

  8. Nuclear expert web search and crawler algorithm

    International Nuclear Information System (INIS)

    Reis, Thiago; Barroso, Antonio C.O.; Baptista, Benedito Filho D.

    2013-01-01

    In this paper we present preliminary research on a web search and crawler algorithm applied specifically to nuclear-related web information. We designed a web-based nuclear-oriented expert system guided by a web crawler algorithm and a neural network, able to search and retrieve nuclear-related hypertextual web information in an autonomous and massive fashion. Preliminary experimental results show a retrieval precision of 80% for web pages related to any nuclear theme and a retrieval precision of 72% for web pages related only to the nuclear power theme. (author)

  9. ASAP: a web-based platform for the analysis and interactive visualization of single-cell RNA-seq data.

    Science.gov (United States)

    Gardeux, Vincent; David, Fabrice P A; Shajkofci, Adrian; Schwalie, Petra C; Deplancke, Bart

    2017-10-01

    Single-cell RNA-sequencing (scRNA-seq) allows whole transcriptome profiling of thousands of individual cells, enabling the molecular exploration of tissues at the cellular level. Such analytical capacity is of great interest to many research groups in the world, yet these groups often lack the expertise to handle complex scRNA-seq datasets. We developed a fully integrated, web-based platform aimed at the complete analysis of scRNA-seq data post genome alignment: from the parsing, filtering and normalization of the input count data files, to the visual representation of the data, identification of cell clusters, differentially expressed genes (including cluster-specific marker genes), and functional gene set enrichment. This Automated Single-cell Analysis Pipeline (ASAP) combines a wide range of commonly used algorithms with sophisticated visualization tools. Compared with existing scRNA-seq analysis platforms, researchers (including those lacking computational expertise) are able to interact with the data in a straightforward fashion and in real time. Furthermore, given the overlap between scRNA-seq and bulk RNA-seq analysis workflows, ASAP should conceptually be broadly applicable to any RNA-seq dataset. As a validation, we demonstrate how we can use ASAP to simply reproduce the results from a single-cell study of 91 mouse cells involving five distinct cell types. The tool is freely available at asap.epfl.ch and R/Python scripts are available at github.com/DeplanckeLab/ASAP. bart.deplancke@epfl.ch. Supplementary data are available at Bioinformatics online. © The Author(s) 2017. Published by Oxford University Press.

  10. EuroGOV: Engineering a Multilingual Web Corpus

    NARCIS (Netherlands)

    Sigurbjörnsson, B.; Kamps, J.; de Rijke, M.

    2005-01-01

    EuroGOV is a multilingual web corpus that was created to serve as the document collection for WebCLEF, the CLEF 2005 web retrieval task. EuroGOV is a collection of web pages crawled from the European Union portal, European Union member state governmental web sites, and Russian government web sites.

  11. Location-based Web Search

    Science.gov (United States)

    Ahlers, Dirk; Boll, Susanne

    In recent years, the relation of Web information to a physical location has gained much attention. However, Web content today often carries only an implicit relation to a location. In this chapter, we present a novel location-based search engine that automatically derives spatial context from unstructured Web resources and allows for location-based search: our focused crawler applies heuristics to crawl and analyze Web pages that have a high probability of carrying a spatial relation to a certain region or place; the location extractor identifies the actual location information from the pages; our indexer assigns a geo-context to the pages and makes them available for a later spatial Web search. We illustrate the usage of our spatial Web search for location-based applications that provide information not only right-in-time but also right-on-the-spot.
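
    As a toy illustration of the extract-then-index pipeline described above (crawler, location extractor, geo-indexer), the sketch below matches page text against a tiny gazetteer and files the page under a coarse coordinate cell; the gazetteer entries, grid resolution and URL are all illustrative assumptions, not the system's actual components.

        from collections import defaultdict

        GAZETTEER = {"oldenburg": (53.14, 8.21), "bremen": (53.08, 8.80)}
        geo_index = defaultdict(list)  # (lat_cell, lon_cell) -> list of URLs

        def cell(lat, lon, size=0.5):
            # Coarse spatial bucket used as the index key.
            return (round(lat / size), round(lon / size))

        def index_page(url, text):
            # "Location extractor": naive gazetteer lookup in the page text.
            for place, (lat, lon) in GAZETTEER.items():
                if place in text.lower():
                    geo_index[cell(lat, lon)].append(url)

        def search_near(lat, lon):
            return geo_index.get(cell(lat, lon), [])

        index_page("http://example.org/cafe", "A cafe in Oldenburg city centre")
        print(search_near(53.14, 8.21))  # -> ['http://example.org/cafe']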

  12. Toward Understanding the Role of Web 2.0 Technology in Self-Directed Learning and Job Performance in a Single Organizational Setting: A Qualitative Case Study

    Science.gov (United States)

    Caruso, Shirley J.

    2016-01-01

    This single instrumental qualitative case study explores and thickly describes job performance outcomes based upon the manner in which self-directed learning activities of a purposefully selected sample of 3 construction managers are conducted, mediated by the use of Web 2.0 technology. The data collected revealed that construction managers are…

  13. Food Enterprise Web Design Based on User Experience

    OpenAIRE

    Fei Wang

    2015-01-01

    Excellent food enterprise web design conveys a good visual effect through user experience. This study took food enterprise managers and customers as the main user groups in evaluating web page creation; web page design should focus not only on function and work efficiency, but above all on the user experience during web page interaction.

  14. Improving Web Accessibility in a University Setting

    Science.gov (United States)

    Olive, Geoffrey C.

    2010-01-01

    Improving Web accessibility for disabled users visiting a university's Web site is explored following the World Wide Web Consortium (W3C) guidelines and Section 508 of the Rehabilitation Act rules for Web page designers to ensure accessibility. The literature supports the view that accessibility is sorely lacking, not only in the USA, but also…

  15. Results from a Web Impact Factor Crawler.

    Science.gov (United States)

    Thelwall, Mike

    2001-01-01

    Discusses Web impact factors (WIFs), Web versions of the impact factors for journals, and how they can be calculated by using search engines. Highlights include HTML and document indexing; Web page links; a Web crawler designed for calculating WIFs; and WIFs for United Kingdom universities that measured research profiles or capability. (Author/LRW)

  16. PageRank of integers

    International Nuclear Information System (INIS)

    Frahm, K M; Shepelyansky, D L; Chepelianskii, A D

    2012-01-01

    We build up a directed network tracing links from a given integer to its divisors and analyze the properties of the Google matrix of this network. The PageRank vector of this matrix is computed numerically and it is shown that its probability is approximately inversely proportional to the PageRank index, thus being similar to the Zipf law and to the dependence established for the World Wide Web. The spectrum of the Google matrix of integers is characterized by a large gap and a relatively small number of nonzero eigenvalues. A simple semi-analytical expression for the PageRank of integers is derived that allows us to find this vector for matrices of billion size. This network provides a new PageRank order of integers. (paper)
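
    As a hedged illustration of the construction just described, the sketch below builds the divisor network for integers up to a small N and computes its PageRank with the networkx library; N and the damping factor are our own illustrative choices, not the paper's.

    ```python
    # Sketch of the divisor network described above: each integer n links to
    # its divisors, and PageRank is computed on the resulting directed graph.
    # N and the damping factor are illustrative choices, not the paper's.
    import networkx as nx

    N = 1000
    G = nx.DiGraph()
    for n in range(2, N + 1):
        for d in range(1, n):
            if n % d == 0:        # d divides n: add a link n -> d
                G.add_edge(n, d)

    pr = nx.pagerank(G, alpha=0.85)
    # Integers ordered by their PageRank; small, highly divisible numbers dominate
    top = sorted(pr, key=pr.get, reverse=True)[:10]
    print(top)
    ```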

  17. PMD2HD--a web tool aligning a PubMed search results page with the local German Cancer Research Centre library collection.

    Science.gov (United States)

    Bohne-Lang, Andreas; Lang, Elke; Taube, Anke

    2005-06-27

    Web-based searching is the accepted contemporary mode of retrieving relevant literature, and retrieving as many full text articles as possible is a typical prerequisite for research success. In most cases only a proportion of references will be directly accessible as digital reprints through displayed links. A large number of references, however, have to be verified in library catalogues and, depending on their availability, are accessible as print holdings or by interlibrary loan request. The problem of verifying local print holdings from an initial retrieval set of citations can be solved using Z39.50, an ANSI protocol for interactively querying library information systems. Numerous systems include Z39.50 interfaces and therefore can process Z39.50 interactive requests. However, the programmed query interaction command structure is non-intuitive and inaccessible to the average biomedical researcher. For the typical user, it is necessary to implement the protocol within a tool that hides and handles Z39.50 syntax, presenting a comfortable user interface. PMD2HD is a web tool implementing Z39.50 to provide an appropriately functional and usable interface to integrate into the typical workflow that follows an initial PubMed literature search, providing users with an immediate asset to assist in the most tedious step in literature retrieval, checking for subscription holdings against a local online catalogue. PMD2HD can facilitate literature access considerably with respect to the time and cost of manual comparisons of search results with local catalogue holdings. The example presented in this article is related to the library system and collections of the German Cancer Research Centre. However, the PMD2HD software architecture and use of common Z39.50 protocol commands allow for transfer to a broad range of scientific libraries using Z39.50-compatible library information systems.

  18. Testing the visual consistency of web sites

    NARCIS (Netherlands)

    van der Geest, Thea; Loorbach, N.R.

    2005-01-01

    Consistency in the visual appearance of Web pages is often checked by experts, such as designers or reviewers. This article reports a card sort study conducted to determine whether users rather than experts could distinguish visual (in-)consistency in Web elements and pages. The users proved to

  19. Responsivní web design

    OpenAIRE

    Němec, Milan

    2013-01-01

    This thesis deals with responsive web design, i.e. a method of creating web pages, which are adapted to device. The goal is to describe primary principles and introduce technological options of its creation. The described principles are discussed through exemplary source code and illustrations. Flexible grid, media queries and flexible images are the three basic pillars. The approaches of creating pages with mobile first and desktop first are two different strategies used in formation respons...

  20. Resolving Person Names in Web People Search

    Science.gov (United States)

    Balog, Krisztian; Azzopardi, Leif; de Rijke, Maarten

    Disambiguating person names in a set of documents (such as a set of web pages returned in response to a person name) is a key task for the presentation of results and the automatic profiling of experts. With largely unstructured documents and an unknown number of people with the same name the problem presents many difficulties and challenges. This chapter treats the task of person name disambiguation as a document clustering problem, where it is assumed that the documents represent particular people. This leads to the person cluster hypothesis, which states that similar documents tend to represent the same person. Single Pass Clustering, k-Means Clustering, Agglomerative Clustering and Probabilistic Latent Semantic Analysis are employed and empirically evaluated in this context. On the SemEval 2007 Web People Search it is shown that the person cluster hypothesis holds reasonably well and that the Single Pass Clustering and Agglomerative Clustering methods provide the best performance.
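
    The chapter treats name disambiguation as document clustering; a minimal sketch of that formulation, using TF-IDF vectors and agglomerative clustering from scikit-learn, is shown below. The example pages and the distance threshold are invented for illustration and are not the chapter's data or tuning.

    ```python
    # Illustrative sketch of the clustering formulation: represent each page
    # as a TF-IDF vector and group pages assumed to mention the same person.
    # The documents and the distance threshold are made up for illustration.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.cluster import AgglomerativeClustering

    pages = [
        "John Smith, professor of physics at ...",
        "John Smith published a paper on superconductivity ...",
        "John Smith, jazz guitarist, released a new album ...",
    ]

    X = TfidfVectorizer(stop_words="english").fit_transform(pages).toarray()
    labels = AgglomerativeClustering(
        n_clusters=None, distance_threshold=0.8, metric="cosine", linkage="average"
    ).fit_predict(X)
    # Pages with the same label are attributed to the same person; here the
    # two physics pages end up together and the guitarist page stands alone.
    print(labels)
    ```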

  1. Web Transfer Over Satellites Being Improved

    Science.gov (United States)

    Allman, Mark

    1999-01-01

    Extensive research conducted by NASA Lewis Research Center's Satellite Networks and Architectures Branch and the Ohio University has demonstrated performance improvements in World Wide Web transfers over satellite-based networks. The use of a new version of the Hypertext Transfer Protocol (HTTP) reduced the time required to load web pages over a single Transmission Control Protocol (TCP) connection traversing a satellite channel. However, an older technique of simultaneously making multiple requests of a given server has been shown to provide even faster transfer time. Unfortunately, the use of multiple simultaneous requests has been shown to be harmful to the network in general. Therefore, we are developing new mechanisms for the HTTP protocol which may allow a single request at any given time to perform as well as, or better than, multiple simultaneous requests. In the course of study, we also demonstrated that the time for web pages to load is at least as short via a satellite link as it is via a standard 28.8-kbps dialup modem channel. This demonstrates that satellites are a viable means of accessing the Internet.
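
    The trade-off described above, one persistent connection versus several simultaneous requests, can be timed in miniature with the third-party requests library. The URLs are placeholders and the measured difference depends entirely on the link characteristics, so this is a sketch of the experiment's shape rather than of NASA's actual test setup.

    ```python
    # Toy comparison of the two strategies discussed: one persistent
    # connection reused for sequential requests versus several simultaneous
    # requests. URLs are placeholders; timings depend on the link.
    import time
    from concurrent.futures import ThreadPoolExecutor
    import requests

    urls = ["https://example.com/page%d.html" % i for i in range(8)]

    t0 = time.time()
    with requests.Session() as s:          # one connection, requests in series
        for u in urls:
            s.get(u)
    print("single persistent connection: %.2fs" % (time.time() - t0))

    t0 = time.time()
    with ThreadPoolExecutor(max_workers=8) as pool:  # simultaneous requests
        list(pool.map(requests.get, urls))
    print("parallel connections: %.2fs" % (time.time() - t0))
    ```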

  2. Web wisdom how to evaluate and create information quality on the Web

    CERN Document Server

    Alexander, Janet E

    1999-01-01

    Web Wisdom is an essential reference for anyone needing to evaluate or establish information quality on the World Wide Web. The book includes easy to use checklists for step-by-step quality evaluations of virtually any Web page. The checklists can also be used by Web authors to help them ensure quality information on their pages. In addition, Web Wisdom addresses other important issues, such as understanding the ways that advertising and sponsorship may affect the quality of Web information. It features: * a detailed discussion of the items involved in evaluating Web information; * checklists

  3. Integration of Web mining and web crawler: Relevance and State of Art

    OpenAIRE

    Subhendu kumar pani; Deepak Mohapatra,; Bikram Keshari Ratha

    2010-01-01

    This study presents the role of the web crawler in the web mining environment. As the growth of the World Wide Web exceeded all expectations, research on Web mining is growing more and more. Web mining is a research topic which combines two active research areas: Data Mining and the World Wide Web. So, the World Wide Web is a very advanced area for data mining research. Search engines that are based on a web crawling framework are also used in web mining to find the interacted web pages. This paper discu...

  4. Maintaining Consistency of Data on the Web

    OpenAIRE

    Bernauer, Martin

    2005-01-01

    Increasingly more data is becoming available on the Web, estimates speaking of 1 billion documents in 2002. Most of the documents are Web pages whose data is considered to be in XML format, expecting it to eventually replace HTML. A common problem in designing and maintaining a Web site is that data on a Web page often replicates or derives from other data, the so-called base data, that is usually not contained in the deriving or replicating page. Consequently, replicas and derivations become...

  5. Page 5

    African Journals Online (AJOL)

    ezra

    Samaru Journal of Information Studies Vol. 7 (2), 2007, page 5. Stress Management by Library and Information Science Professionals in Nigerian University Libraries, by Ahmed Aliyu Lemu, Department of Library and Information Science, Ahmadu Bello University. Abstract: Job stress is an uncomfortable condition resulting ...

  6. Using food-web theory to conserve ecosystems

    Science.gov (United States)

    McDonald-Madden, E.; Sabbadin, R.; Game, E. T.; Baxter, P. W. J.; Chadès, I.; Possingham, H. P.

    2016-01-01

    Food-web theory can be a powerful guide to the management of complex ecosystems. However, we show that indices of species importance common in food-web and network theory can be a poor guide to ecosystem management, resulting in significantly more extinctions than necessary. We use Bayesian Networks and Constrained Combinatorial Optimization to find optimal management strategies for a wide range of real and hypothetical food webs. This Artificial Intelligence approach provides the ability to test the performance of any index for prioritizing species management in a network. While no single network theory index provides an appropriate guide to management for all food webs, a modified version of the Google PageRank algorithm reliably minimizes the chance and severity of negative outcomes. Our analysis shows that by prioritizing ecosystem management based on the network-wide impact of species protection rather than species loss, we can substantially improve conservation outcomes. PMID:26776253
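
    The abstract names a modified PageRank as the reliable index; the sketch below conveys only the generic flavor of ranking species in a toy food web with PageRank (edges point from prey to predator, and reversing them lets rank flow toward the species many others depend on). It is not the authors' modified index.

    ```python
    # Rough illustration only: ranking species in a toy food web with
    # PageRank via networkx. This is NOT the authors' modified index,
    # just the generic machinery they build on.
    import networkx as nx

    food_web = nx.DiGraph([
        ("algae", "zooplankton"), ("algae", "snail"),
        ("zooplankton", "small_fish"), ("snail", "small_fish"),
        ("small_fish", "large_fish"), ("small_fish", "heron"),
    ])

    # Reverse edges so importance flows toward species others depend on
    importance = nx.pagerank(food_web.reverse(), alpha=0.85)
    for species, score in sorted(importance.items(), key=lambda kv: -kv[1]):
        print(f"{species:12s} {score:.3f}")
    ```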

  7. Automating Information Discovery Within the Invisible Web

    Science.gov (United States)

    Sweeney, Edwina; Curran, Kevin; Xie, Ermai

    A Web crawler or spider crawls through the Web looking for pages to index, and when it locates a new page it passes the page on to an indexer. The indexer identifies links, keywords, and other content and stores these within its database. This database is searched by entering keywords through an interface and suitable Web pages are returned in a results page in the form of hyperlinks accompanied by short descriptions. The Web, however, is increasingly moving away from being a collection of documents to a multidimensional repository for sounds, images, audio, and other formats. This is leading to a situation where certain parts of the Web are invisible or hidden. The term known as the "Deep Web" has emerged to refer to the mass of information that can be accessed via the Web but cannot be indexed by conventional search engines. The concept of the Deep Web makes searches quite complex for search engines. Google states that the claim that conventional search engines cannot find such documents as PDFs, Word, PowerPoint, Excel, or any non-HTML page is not fully accurate and steps have been taken to address this problem by implementing procedures to search items such as academic publications, news, blogs, videos, books, and real-time information. However, Google still only provides access to a fraction of the Deep Web. This chapter explores the Deep Web and the current tools available in accessing it.
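
    The crawler-indexer-search pipeline summarized at the start of this abstract can be miniaturized as an inverted index: a map from each keyword to the set of pages containing it, queried by keyword intersection. The pages and query below are placeholders.

    ```python
    # Toy version of the indexer/search step described above: an inverted
    # index maps each keyword to the set of pages containing it.
    import re
    from collections import defaultdict

    pages = {
        "a.html": "deep web resources not indexed by conventional engines",
        "b.html": "search engines index the surface web",
        "c.html": "academic publications on the deep web",
    }

    index = defaultdict(set)
    for url, text in pages.items():
        for word in re.findall(r"[a-z]+", text.lower()):
            index[word].add(url)

    def search(query):
        """Return pages containing every keyword of the query."""
        words = query.lower().split()
        return sorted(set.intersection(*(index[w] for w in words))) if words else []

    print(search("deep web"))   # -> ['a.html', 'c.html']
    ```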

  8. Towards Second and Third Generation Web-Based Multimedia

    NARCIS (Netherlands)

    J.R. van Ossenbruggen (Jacco); J.P.T.M. Geurts (Joost); F.J. Cornelissen; L. Rutledge (Lloyd); L. Hardman (Lynda)

    2001-01-01

    First generation Web content encodes information in handwritten (HTML) Web pages. Second generation Web content generates HTML pages on demand, e.g. by filling in templates with content retrieved dynamically from a database or transformation of structured documents using style sheets

  9. Indian accent text-to-speech system for web browsing

    Indian Academy of Sciences (India)

    Incorporation of speech and Indian scripts can greatly enhance the accessibility of web information among common people. This paper describes a 'web reader' which 'reads out' the textual contents of a selected web page in Hindi or in English with Indian accent. The content of the page is downloaded and parsed into ...

  10. Web server for priority ordered multimedia services

    Science.gov (United States)

    Celenk, Mehmet; Godavari, Rakesh K.; Vetnes, Vermund

    2001-10-01

    In this work, our aim is to provide finer priority levels in the design of a general-purpose Web multimedia server with provisions of the CM services. The types of services provided include reading/writing a web page, downloading/uploading an audio/video stream, navigating the Web through browsing, and interactive video teleconferencing. The selected priority encoding levels for such operations follow the order of admin read/write, hot page CM and Web multicasting, CM read, Web read, CM write and Web write. Hot pages are the most requested CM streams (e.g., the newest movies, video clips, and HDTV channels) and Web pages (e.g., portal pages of the commercial Internet search engines). Maintaining a list of these hot Web pages and CM streams in a content addressable buffer enables a server to multicast hot streams with lower latency and higher system throughput. Cold Web pages and CM streams are treated as regular Web and CM requests. Interactive CM operations such as pause (P), resume (R), fast-forward (FF), and rewind (RW) have to be executed without allocation of extra resources. The proposed multimedia server model is a part of the distributed network with load balancing schedulers. The SM is connected to an integrated disk scheduler (IDS), which supervises an allocated disk manager. The IDS follows the same priority handling as the SM, and implements a SCAN disk-scheduling method for an improved disk access and a higher throughput. Different disks are used for the Web and CM services in order to meet the QoS requirements of CM services. The IDS output is forwarded to an Integrated Transmission Scheduler (ITS). The ITS creates a priority ordered buffering of the retrieved Web pages and CM data streams that are fed into an auto regressive moving average (ARMA) based traffic shaping circuitry before being transmitted through the network.
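
    The priority ordering listed above maps naturally onto a binary heap. The sketch below is our own minimal rendering of that ordering with Python's heapq; the level names and the FIFO tie-breaking scheme are assumptions, not the paper's implementation.

    ```python
    # Sketch of the priority ordering described above using a binary heap.
    # Lower numbers are served first; the levels follow the order given in
    # the abstract (names and tie-breaking are our own).
    import heapq
    import itertools

    PRIORITY = {
        "admin_rw": 0, "hot_cm_multicast": 1, "cm_read": 2,
        "web_read": 3, "cm_write": 4, "web_write": 5,
    }

    _counter = itertools.count()   # FIFO order among equal priorities
    queue = []

    def submit(kind, request):
        heapq.heappush(queue, (PRIORITY[kind], next(_counter), request))

    def next_request():
        return heapq.heappop(queue)[2] if queue else None

    submit("web_write", "POST /guestbook")
    submit("cm_read", "GET /movie.mp4")
    submit("admin_rw", "PUT /config")
    print(next_request())  # -> 'PUT /config'
    ```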

  11. CERN single sign on solution

    International Nuclear Information System (INIS)

    Ormancey, E

    2008-01-01

    The need for Single Sign On has always been restricted by the absence of cross platform solutions: a single sign on working only on one platform or technology is nearly useless. The recent improvements in Web Services Federation (WS-Federation) standard enabling federation of identity, attribute, authentication and authorization information can now provide real extended Single Sign On solutions. Various solutions have been investigated at CERN and now, a Web SSO solution using some parts of WS-Federation technology is available. Using the Shibboleth Service Provider module for Apache hosted web sites and Microsoft ADFS as the identity provider linked to Active Directory user accounts, users can now authenticate on any web application using a single authentication platform, providing identity, user information (building, phone...) as well as group membership enabling authorization possibilities. A typical scenario: a CERN user can now authenticate on a Linux/Apache website using Windows Integrated credentials, and his Active Directory group membership can be checked before allowing access to a specific web page

  12. Web Service: MedlinePlus

    Science.gov (United States)

    MedlinePlus Web Service (https://medlineplus.gov/webservices.html): MedlinePlus offers a search-based Web service that provides access to MedlinePlus health topic ...
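
    For readers who want to try the service, a minimal call might look like the sketch below. The endpoint and the db/term parameters follow the MedlinePlus Web Service documentation linked above as we understand it; treat them as assumptions and verify against that page before relying on this.

    ```python
    # Minimal example of calling the MedlinePlus search-based web service.
    # Endpoint and parameters are taken from the documentation page cited
    # above at the time of writing; verify them there before relying on this.
    import urllib.parse
    import urllib.request
    import xml.etree.ElementTree as ET

    params = urllib.parse.urlencode({"db": "healthTopics", "term": "asthma"})
    url = "https://wsearch.nlm.nih.gov/ws/query?" + params

    with urllib.request.urlopen(url) as resp:
        root = ET.fromstring(resp.read())

    # Print the title of each returned health topic document
    for doc in root.iter("document"):
        for content in doc.iter("content"):
            if content.get("name") == "title":
                print("".join(content.itertext()))
    ```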

  13. Scientist who weaves wonderful web

    CERN Multimedia

    Wills, D

    2000-01-01

    Mr Berners-Lee's unique standing makes him a sought-after speaker. People want to know how he developed the Web and where he thinks it is headed. 'Weaving the Web', written by himself with Mark Fischetti, is his attempt to answer these questions (1 page).

  14. Web Search Engines

    Indian Academy of Sciences (India)

    physics) for sharing research documents in nuclear physics, the web has grown to encompass diverse information sources: personal home ..... information by following these categories in the subject hierarchy: Business and economy; companies; chemical; and rare earth elements. Topic 2: To identify web pages related to ...

  15. Web Based VRML Modelling

    NARCIS (Netherlands)

    Kiss, S.; Sarfraz, M.

    2004-01-01

    Presents a method to connect VRML (Virtual Reality Modeling Language) and Java components in a Web page using EAI (External Authoring Interface), which makes it possible to interactively generate and edit VRML meshes. The meshes used are based on regular grids, to provide an interaction and modeling

  16. Web Based VRML Modelling

    NARCIS (Netherlands)

    Kiss, S.; Banissi, E.; Khosrowshahi, F.; Sarfraz, M.; Ursyn, A.

    2001-01-01

    Presents a method to connect VRML (Virtual Reality Modeling Language) and Java components in a Web page using EAI (External Authoring Interface), which makes it possible to interactively generate and edit VRML meshes. The meshes used are based on regular grids, to provide an interaction and modeling

  17. A Runtime System for Interactive Web Services

    DEFF Research Database (Denmark)

    Brabrand, Claus; Møller, Anders; Sandholm, Anders

    1999-01-01

    Interactive web services are increasingly replacing traditional static web pages. Producing web services seems to require a tremendous amount of laborious low-level coding due to the primitive nature of CGI programming. We present ideas for an improved runtime system for interactive web services built on top of CGI running on virtually every combination of browser and HTTP/CGI server. The runtime system has been implemented and used extensively in <bigwig>, a tool for producing interactive web services.

  18. Head First Web Design

    CERN Document Server

    Watrall, Ethan

    2008-01-01

    Want to know how to make your pages look beautiful, communicate your message effectively, guide visitors through your website with ease, and get everything approved by the accessibility and usability police at the same time? Head First Web Design is your ticket to mastering all of these complex topics, and understanding what's really going on in the world of web design. Whether you're building a personal blog or a corporate website, there's a lot more to web design than div's and CSS selectors, but what do you really need to know? With this book, you'll learn the secrets of designing effecti

  19. ASH External Web Portal (External Portal) -

    Data.gov (United States)

    Department of Transportation — The ASH External Web Portal is a web-based portal that provides single sign-on functionality, making the web portal a single location from which to be authenticated...

  20. WEB STRUCTURE MINING USING PAGERANK, IMPROVED PAGERANK – AN OVERVIEW

    Directory of Open Access Journals (Sweden)

    V. Lakshmi Praba

    2011-03-01

    Web Mining is the extraction of interesting and potentially useful patterns and information from the Web. It includes Web documents, hyperlinks between documents, and usage logs of web sites. The significant tasks for web mining can be listed as Information Retrieval, Information Selection/Extraction, Generalization and Analysis. Web information retrieval tools consider only the text on pages and ignore information in the links. The goal of Web structure mining is to explore structural summary about the web. Web structure mining, focusing on link information, is an important aspect of web data. This paper presents an overview of PageRank, Improved PageRank and their working functionality in web structure mining.
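
    The PageRank recurrence that this overview builds on can be stated and iterated in a few lines: PR(p) = (1 - d)/N + d * sum of PR(q)/outdeg(q) over pages q linking to p. The four-page graph below is a toy example; it deliberately has no dangling pages, which keeps the sketch short.

    ```python
    # The basic PageRank recurrence surveyed above, as plain power iteration:
    # PR(p) = (1 - d)/N + d * sum(PR(q)/outdeg(q) for q linking to p).
    # The graph is a toy example with no dangling pages.
    def pagerank(links, d=0.85, iters=50):
        n = len(links)
        pr = {p: 1.0 / n for p in links}
        for _ in range(iters):
            new = {}
            for p in links:
                incoming = sum(pr[q] / len(links[q]) for q in links if p in links[q])
                new[p] = (1 - d) / n + d * incoming
            pr = new
        return pr

    links = {"A": ["B", "C"], "B": ["C"], "C": ["A"], "D": ["C"]}
    print(pagerank(links))  # C accumulates the largest score
    ```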

  1. Comparison of Web Development Technologies

    OpenAIRE

    Ramesh Nagilla, Ramesh

    2012-01-01

    Web applications play an important role for many business purpose activities in the modern world. It has become a platform for the companies to fulfil the needs of their business. In this situation, Web technologies that are useful in developing these kinds of applications become an important aspect. Many Web technologies like Hypertext Preprocessor (PHP), Active Server Pages (ASP.NET), Cold Fusion Markup Language (CFML), Java, Python, and Ruby on Rails are available in the market. All these techn...

  2. Fuzzy Clustering: An Approachfor Mining Usage Profilesfrom Web

    OpenAIRE

    Ms.Archana N. Boob; Prof. D. M. Dakhane

    2012-01-01

    Web usage mining is an application of data mining technology to the data of web server log files. It can discover the browsing patterns of users and certain correlations between web pages. Web usage mining provides support for web site design, personalization services and other business decision making, etc. Web mining applies data mining, artificial intelligence, chart technology and so on to web data and traces users' visiting characteris...

  3. Using centrality to rank web snippets

    NARCIS (Netherlands)

    Jijkoun, V.; de Rijke, M.; Peters, C.; Jijkoun, V.; Mandl, T.; Müller, H.; Oard, D.W.; Peñas, A.; Petras, V.; Santos, D.

    2008-01-01

    We describe our participation in the WebCLEF 2007 task, targeted at snippet retrieval from web data. Our system ranks snippets based on a simple similarity-based centrality, inspired by the web page ranking algorithms. We experimented with retrieval units (sentences and paragraphs) and with the

  4. World Wide Access: Accessible Web Design.

    Science.gov (United States)

    Washington Univ., Seattle.

    This brief paper considers the application of "universal design" principles to Web page design in order to increase accessibility for people with disabilities. Suggestions are based on the World Wide Web Consortium's accessibility initiative, which has proposed guidelines for all Web authors and federal government standards. Seven guidelines for…

  5. A Runtime System for Interactive Web Services

    DEFF Research Database (Denmark)

    Brabrand, Claus; Møller, Anders; Sandholm, Anders

    1999-01-01

    Interactive web services are increasingly replacing traditional static web pages. Producing web services seems to require a tremendous amount of laborious low-level coding due to the primitive nature of CGI programming. We present ideas for an improved runtime system for interactive web services built on top of CGI running on virtually every combination of browser and HTTP/CGI server. The runtime system has been implemented and used extensively in <bigwig>, a tool for producing interactive web services.

  6. An Efficient PageRank Approach for Urban Traffic Optimization

    Directory of Open Access Journals (Sweden)

    Florin Pop

    2012-01-01

    We propose an approach to determine optimal decisions for each traffic light, based on the solution given by Larry Page for page ranking in the Web environment (Page et al., 1999). Our approach is similar to work presented by Sheng-Chung et al. (2009) and Yousef et al. (2010). We consider that the traffic lights are controlled by servers, and a score for each road is computed based on an efficient PageRank approach and used in a cost function to determine optimal decisions. We demonstrate that the cumulative contribution of each car in the traffic respects the main constraint of the PageRank approach, preserving all the properties of the matrix considered in our model.

  7. Web Browser Security Update Effectiveness

    Science.gov (United States)

    Duebendorfer, Thomas; Frei, Stefan

    We analyze the effectiveness of different Web browser update mechanisms on various operating systems; from Google Chrome's silent update mechanism to Opera's update requiring a full re-installation. We use anonymized logs from Google's world wide distributed Web servers. An analysis of the logged HTTP user-agent strings that Web browsers report when requesting any Web page is used to measure the daily browser version shares in active use. To the best of our knowledge, this is the first global scale measurement of Web browser update effectiveness comparing four different Web browser update strategies including Google Chrome. Our measurements prove that silent updates and little dependency on the underlying operating system are most effective to get users of Web browsers to surf the Web with the latest browser version.
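
    The core of the measurement, extracting browser versions from logged user-agent strings and aggregating their shares, looks roughly like the sketch below; the log lines are invented, and real user-agent parsing needs many more cases than this one regular expression covers.

    ```python
    # Miniature version of the measurement described above: extract browser
    # versions from logged user-agent strings and compute version shares.
    # The log lines are invented examples.
    import re
    from collections import Counter

    log_user_agents = [
        "Mozilla/5.0 ... Chrome/1.0.154.43 Safari/525.19",
        "Mozilla/5.0 ... Chrome/1.0.154.48 Safari/525.19",
        "Opera/9.64 (Windows NT 5.1; U; en) Presto/2.1.1",
        "Mozilla/5.0 ... Firefox/3.0.6",
    ]

    versions = Counter()
    for ua in log_user_agents:
        m = re.search(r"(Chrome|Firefox|Opera)[/ ]([\d.]+)", ua)
        if m:
            versions[f"{m.group(1)} {m.group(2)}"] += 1

    total = sum(versions.values())
    for browser, count in versions.most_common():
        print(f"{browser}: {count / total:.0%}")
    ```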

  8. Automated web page testing with Laravel

    OpenAIRE

    Volk, Matic

    2015-01-01

    Porast števila spletnih aplikacij je vplival na uporabo naprednih metod za njihov razvoj. V ospredju so agilne metode, ki vpeljujejo razvijanje, vključno s testiranjem. Testno voden način razvoja spletnih strani zajema pisanje testov, preden se začne implementacija posamezne funkcionalnosti spletne strani. V diplomski nalogi so opisani tipi testov, ki so prikazani tudi na primeru razvoja spletne strani. Testiranje poteka po komponentah glede na vlogo, ki je komponenti namenjena. Predstavljeni...

  9. Nine walks (photo series / web page)

    OpenAIRE

    Robinson, Andrew

    2015-01-01

    'Nine Walks' is a body of work resulting from my engagement with the Media Arts Research Walking Group at Sheffield Hallam University, who are exploring the role of walking as a social, developmental and production space for the creative arts. / My participation in the walking group is an extension of my investigation of the journey as a creative, conceptual and contemplative space for photography, which in turn reflects an interest in the role of the accident, instinct and intuition and the...

  10. Geographic Information Systems and Web Page Development

    Science.gov (United States)

    Reynolds, Justin

    2004-01-01

    The Facilities Engineering and Architectural Branch is responsible for the design and maintenance of buildings, laboratories, and civil structures. In order to improve efficiency and quality, the FEAB has dedicated itself to establishing a data infrastructure based on Geographic Information Systems, GIS. The value of GIS was explained in an article dating back to 1980 entitled "Need for a Multipurpose Cadastre" which stated, "There is a critical need for a better land-information system in the United States to improve land-conveyance procedures, furnish a basis for equitable taxation, and provide much-needed information for resource management and environmental planning." Scientists and engineers both point to GIS as the solution. What is GIS? According to most text books, Geographic Information Systems is a class of software that stores, manages, and analyzes mappable features on, above, or below the surface of the earth. GIS software is basically database management software applied to the management of spatial data and information. Simply put, Geographic Information Systems manage, analyze, chart, graph, and map spatial information. GIS can be broken down into two main categories, urban GIS and natural resource GIS. Further still, natural resource GIS can be broken down into six sub-categories: agriculture, forestry, wildlife, catchment management, archaeology, and geology/mining. Agriculture GIS has several applications, such as agricultural capability analysis, land conservation, market analysis, or whole farming planning. Forestry GIS can be used for timber assessment and management, harvest scheduling and planning, environmental impact assessment, and pest management. GIS when used in wildlife applications enables the user to assess and manage habitats, identify and track endangered and rare species, and monitor impact assessment.

  11. An Improved Approach to the PageRank Problems

    Directory of Open Access Journals (Sweden)

    Yue Xie

    2013-01-01

    We introduce a partition of the web pages particularly suited to the PageRank problems in which the web link graph has a nested block structure. Based on the partition of the web pages, dangling nodes, common nodes, and general nodes, the hyperlink matrix can be reordered to be a more simple block structure. Then based on the parallel computation method, we propose an algorithm for the PageRank problems. In this algorithm, the dimension of the linear system becomes smaller, and the vector for general nodes in each block can be calculated separately in every iteration. Numerical experiments show that this approach speeds up the computation of PageRank.

  12. Sleep Apnea Information Page

    Science.gov (United States)

    Sleep Apnea Information Page. What research is being done? ... Institutes of Health (NIH) conduct research related to sleep apnea in laboratories at the NIH, and also ...

  13. Dermatology Internet Yellow Page advertising.

    Science.gov (United States)

    Francis, Shayla; Kozak, Katarzyna Z; Heilig, Lauren; Lundahl, Kristy; Bowland, Terri; Hester, Eric; Best, Arthur; Dellavalle, Robert P

    2006-07-01

    Patients may use Internet Yellow Pages to help select a physician. We sought to describe dermatology Internet Yellow Page advertising. Dermatology advertisements in Colorado, California, New York, and Texas at 3 Yellow Page World Wide Web sites were systematically examined. Most advertisements (76%; 223/292) listed only one provider, 56 listed more than one provider, and 13 listed no practitioner names. Five advertisements listed provider names without any credentialing letters, 265 listed at least one doctor of medicine or osteopathy, and 9 listed only providers with other credentials (6 doctors of podiatric medicine and 3 registered nurses). Most advertisements (61%; 179/292) listed a doctor of medicine or osteopathy claiming board certification, 78% (139/179) in dermatology and 22% (40/179) in other medical specialties. Four (1%; 4/292) claims of board certification could not be verified (one each in dermatology, family practice, dermatologic/cosmetologic surgery, and laser surgery). Board certification could be verified for most doctors of medicine and osteopathy not advertising claims of board certification (68%; 41/60; 32 dermatology, 9 other specialties). Not all Internet Yellow Page World Wide Web sites or US states were examined. Nonphysicians, physicians board certified in medical specialties other than dermatology, and individuals without verifiable board certification in any medical specialty are advertising in dermatology Internet Yellow Pages. Many board-certified dermatologists are not advertising this certification.

  14. FlaME: Flash Molecular Editor - a 2D structure input tool for the web

    Directory of Open Access Journals (Sweden)

    Dallakian Pavel

    2011-02-01

    Abstract. Background: So far, there have been no Flash-based web tools available for chemical structure input. The authors herein present a feasibility study, aiming at the development of a compact and easy-to-use 2D structure editor, using Adobe's Flash technology and its programming language, ActionScript. As a reference model application from the Java world, we selected the Java Molecular Editor (JME). In this feasibility study, we made an attempt to realize a subset of JME's functionality in the Flash Molecular Editor (FlaME) utility. These basic capabilities are: structure input, editing and depiction of single molecules, data import and export in molfile format. Implementation: The result of molecular diagram sketching in FlaME is accessible in V2000 molfile format. By integrating the molecular editor into a web page, its communication with the HTML elements on this page is established using the two JavaScript functions, getMol() and setMol(). In addition, structures can be copied to the system clipboard. Conclusion: A first attempt was made to create a compact single-file application for 2D molecular structure input/editing on the web, based on Flash technology. With the application examples presented in this article, it could be demonstrated that the Flash methods are principally well-suited to provide the requisite communication between the Flash object (application) and the HTML elements on a web page, using JavaScript functions.

  15. Hiding in Plain Sight: The Anatomy of Malicious Facebook Pages

    OpenAIRE

    Dewan, Prateek; Kumaraguru, Ponnurangam

    2015-01-01

    Facebook is the world's largest Online Social Network, having more than 1 billion users. Like most other social networks, Facebook is home to various categories of hostile entities who abuse the platform by posting malicious content. In this paper, we identify and characterize Facebook pages that engage in spreading URLs pointing to malicious domains. We used the Web of Trust API to determine domain reputations of URLs published by pages, and identified 627 pages publishing untrustworthy info...

  16. WebSelF: A Web Scraping Framework

    DEFF Research Database (Denmark)

    Thomsen, Jakob; Ernst, Erik; Brabrand, Claus

    2012-01-01

    We present WebSelF, a framework for web scraping which models the process of web scraping and decomposes it into four conceptually independent, reusable, and composable constituents. We have validated our framework through a full parameterized implementation that is flexible enough to capture previous work on web scraping. We have experimentally evaluated our framework and implementation in an experiment that evaluated several qualitatively different web scraping constituents (including previous work and combinations hereof) on about 11,000 HTML pages on daily versions of 17 web sites over a period of more than one year. Our framework solves three concrete problems with current web scraping, and our experimental results indicate that composition of previous and our new techniques achieves a higher degree of accuracy, precision and specificity than existing techniques alone.

  17. A Technique to Speedup Access to Web Contents

    Indian Academy of Sciences (India)

    Web Caching: A Technique to Speedup Access to Web Contents. Harsha Srinath and Shiva Shankar Ramanna. General Article, Resonance – Journal of Science Education, Volume 7, Issue 7, July 2002, pp. 54-62. Keywords: world wide web; data caching; internet traffic; web page access.

  18. Blueprint of a Cross-Lingual Web Retrieval Collection

    NARCIS (Netherlands)

    Sigurbjörnsson, B.; Kamps, J.; de Rijke, M.; van Zwol, R.

    2005-01-01

    The world wide web is a natural setting for cross-lingual information retrieval; web content is essentially multilingual, and web searchers are often polyglots. Even though English has emerged as the lingua franca of the web, planning for a business trip or holiday usually involves digesting pages

  19. A design method for an intuitive web site

    Energy Technology Data Exchange (ETDEWEB)

    Quinniey, M.L.; Diegert, K.V.; Baca, B.G.; Forsythe, J.C.; Grose, E.

    1999-11-03

    The paper describes a methodology for designing a web site for human factors engineers that is applicable to designing a web site for any group of people. Many web pages on the World Wide Web are not organized in a format that allows a user to efficiently find information. Often the information and hypertext links on web pages are not organized into intuitive groups. Intuition implies that a person is able to use their knowledge of a paradigm to solve a problem. Intuitive groups are categories that allow web page users to find information by using their intuition or mental models of categories. In order to improve human factors engineers' efficiency in finding information on the World Wide Web, research was performed to develop a web site that serves as a tool for finding information effectively. The paper describes a methodology for designing a web site for a group of people who perform similar tasks in an organization.

  20. Study on online community user motif using web usage mining

    Science.gov (United States)

    Alphy, Meera; Sharma, Ajay

    2016-04-01

    Web usage mining is the application of data mining used to extract useful information from online communities. The World Wide Web contained at least 4.73 billion pages according to the Indexed Web, and at least 228.52 million pages according to the Dutch Indexed Web, on Thursday, 6 August 2015. It is difficult to find needed data among these billions of web pages, which is where web usage mining comes in. Personalizing the search engine helps web users identify the most used data easily. It reduces time consumption and supports automatic site search and automatic restoring of useful sites. This study surveys the techniques used in pattern discovery and analysis in web usage mining, from the oldest to the latest, covering 1996 to 2015. Analyzing user motifs helps in the improvement of business, e-commerce, personalisation and improvement of websites.
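
    Every web usage mining pipeline starts by parsing the server log. A minimal sketch of that step is shown below, for invented entries in Apache Common Log Format; pattern discovery (sessionization, clustering, association rules) would build on counts like these.

    ```python
    # Starting point of a web usage mining pipeline: parse server log
    # entries and aggregate per-page and per-visitor counts. The log
    # lines are invented examples in Common Log Format.
    import re
    from collections import Counter

    LOG_RE = re.compile(r'(\S+) \S+ \S+ \[([^\]]+)\] "(\S+) (\S+) [^"]*" (\d+) \S+')

    log_lines = [
        '10.0.0.1 - - [06/Aug/2015:10:12:01 +0000] "GET /index.html HTTP/1.1" 200 512',
        '10.0.0.2 - - [06/Aug/2015:10:12:05 +0000] "GET /blog.html HTTP/1.1" 200 1024',
        '10.0.0.1 - - [06/Aug/2015:10:13:30 +0000] "GET /blog.html HTTP/1.1" 200 1024',
    ]

    pages, visitors = Counter(), Counter()
    for line in log_lines:
        m = LOG_RE.match(line)
        if m:
            ip, _ts, _method, path, _status = m.groups()
            pages[path] += 1
            visitors[ip] += 1

    print(pages.most_common(1))    # -> [('/blog.html', 2)]
    ```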

  1. Talking physics in the social web

    CERN Multimedia

    Griffiths, Martin

    2007-01-01

    "From "blogs" to "wikis", the Web is now more than a mere repository of information. Martin Griffiths investigates how this new interactivity is affecting the way physicists communicate and access information." (5 pages)

  2. Web Enabled DROLS Verity TopicSets

    National Research Council Canada - National Science Library

    Tong, Richard

    1999-01-01

    The focus of this effort has been the design and development of automatically generated TopicSets and HTML pages that provide the basis of the required search and browsing capability for DTIC's Web Enabled DROLS System...

  3. Intelligent Agent Based Semantic Web in Cloud Computing Environment

    OpenAIRE

    Mukhopadhyay, Debajyoti; Sharma, Manoj; Joshi, Gajanan; Pagare, Trupti; Palwe, Adarsha

    2013-01-01

    Considering today's web scenario, there is a need for effective and meaningful search over the web, which is provided by the Semantic Web. Existing search engines are keyword based. They are vulnerable in answering intelligent queries from the user due to the dependence of their results on information available in web pages. Semantic search engines provide efficient and relevant results, as the semantic web is an extension of the current web in which information is given well defined meaning....

  4. Disability Awareness in Turkey and an Assessment about Accessibility of Web Pages Related to Justice by Disabled People

    Directory of Open Access Journals (Sweden)

    Korhan Levent Ertürk

    2014-12-01

    People who have physical or mental disabilities which limit their movements, senses or activities form one of the groups in society. In Turkey, those people and/or those around them face various problems in social life, directly or indirectly. Nowadays this can often be seen in fields such as education, healthcare, justice and social security, and those individuals want their problems to be attended to and resolved. The level of development of a country is directly related to the work done towards solving the aforementioned problems. In this study we focus on the common terms for people with limited movements, senses or activities, on awareness of disabled people, and, in this context, we investigate the adequacy of some related web sites for these individuals, examining the accessibility of web sites about justice by disabled people. Making these and similar web sites as accessible as possible can contribute to providing disabled users with rights equal to those of other individuals and can diversify information and communication resources.

  5. Allocation of advertising space by a web service provider using ...

    Indian Academy of Sciences (India)

    There are studies in the literature addressing the winner-determination problem in the context of advertising over the internet. In the current work, we address the problem faced by the web-page service provider of how to optimally allocate the advertising space available on her web page so as to maximize the overall revenue of the system.

  6. Elementary Algebra + Student-Written Web Illustrations = Math Mastery.

    Science.gov (United States)

    Veteto, Bette R.

    This project focuses on the construction and use of a student-made elementary algebra tutorial World Wide Web page at the University of Memphis (Tennessee), how this helps students further explore the topics studied in elementary algebra, and how students can publish their work on the class Web page for use by other students. Practical,…

  7. Modeling clicks beyond the first result page

    NARCIS (Netherlands)

    Chuklin, A.; Serdyukov, P.; de Rijke, M.

    2013-01-01

    Most modern web search engines yield a list of documents of a fixed length (usually 10) in response to a user query. The next ten search results are usually available in one click. These documents either replace the current result page or are appended to the end. Hence, in order to examine more

  8. Web resources for myrmecologists

    DEFF Research Database (Denmark)

    Nash, David Richard

    2005-01-01

    The world wide web provides many resources that are useful to the myrmecologist. Here I provide a brief introduction to the types of information currently available, and to recent developments in data provision over the internet which are likely to become important resources for myrmecologists in the near future. I discuss the following types of web site, and give some of the most useful examples of each: taxonomy, identification and distribution; conservation; myrmecological literature; individual species sites; news and discussion; picture galleries; personal pages; portals.

  9. Developing Large Web Applications

    CERN Document Server

    Loudon, Kyle

    2010-01-01

    How do you create a mission-critical site that provides exceptional performance while remaining flexible, adaptable, and reliable 24/7? Written by the manager of a UI group at Yahoo!, Developing Large Web Applications offers practical steps for building rock-solid applications that remain effective even as you add features, functions, and users. You'll learn how to develop large web applications with the extreme precision required for other types of software. Avoid common coding and maintenance headaches as small websites add more pages, more code, and more programmers. Get comprehensive soluti...

  10. Result disambiguation in web people search

    NARCIS (Netherlands)

    Berendsen, R.; Kovachev, B.; Nastou, E.; de Rijke, M.; Weerkamp, W.

    2012-01-01

    We study the problem of disambiguating the results of a web people search engine: given a query consisting of a person name plus the result pages for this query, find correct referents for all mentions by clustering the pages according to the different people sharing the name. While the problem has

  11. EVALUATION OF WEB SEARCHING METHOD USING A NOVEL WPRR ALGORITHM FOR TWO DIFFERENT CASE STUDIES

    Directory of Open Access Journals (Sweden)

    V. Lakshmi Praba

    2012-04-01

    The World-Wide Web provides every internet citizen with access to an abundance of information, but it becomes increasingly difficult to identify the relevant pieces of information. Research in web mining tries to address this problem by applying techniques from data mining and machine learning to web data and documents. Web content mining and web structure mining have important roles in identifying the relevant web page. Relevancy of a web page denotes how well a retrieved web page or set of web pages meets the information need of the user. PageRank, Weighted PageRank and Hypertext Induced Topic Selection (HITS) are existing algorithms which consider only web structure mining. Vector Space Model (VSM), Cover Density Ranking (CDR), Okapi similarity measurement (Okapi) and Three-Level Scoring method (TLS) are some existing relevancy score methods which consider only web content mining. In this paper, we propose a new algorithm, Weighted Page with Relevant Rank (WPRR), which is a blend of both web content mining and web structure mining and demonstrates the relevancy of the page with respect to a given query for two different case scenarios. It is shown that WPRR's performance is better than that of the existing algorithms.

  12. Decomposition of the Google PageRank and Optimal Linking Strategy

    NARCIS (Netherlands)

    Avrachenkov, Konstantin; Litvak, Nelli

    We provide the analysis of the Google PageRank from the perspective of the Markov Chain Theory. First we study the Google PageRank for a Web that can be decomposed into several connected components which do not have any links to each other. We show that in order to determine the Google PageRank for

  13. Supplementary data

    Indian Academy of Sciences (India)

    gdyang

    The database and the web pages that you see were produced by the Meyers lab at the. University of Delaware. These data are freely available, but we ask that you cite this web page or related publications; most libraries have been published as part of specific papers, but the best general citation for the website is Nakano ...

  14. Crawling on the World Wide Web

    OpenAIRE

    Wang, Li; Fox, Edward A.

    2002-01-01

    As the World Wide Web grows rapidly, a web search engine is needed for people to search through the Web. The crawler is an important module of a web search engine. The quality of a crawler directly affects the searching quality of such web search engines. Given some seed URLs, the crawler should retrieve the web pages of those URLs, parse the HTML files, add new URLs into its buffer and go back to the first phase of this cycle. The crawler also can retrieve some other information from the HTM...
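
    The crawl cycle spelled out above (fetch a page, parse its HTML for links, enqueue unseen URLs, repeat) fits in a short function. The sketch below uses only the Python standard library; the seed URL is a placeholder, and politeness mechanisms (robots.txt, crawl delays) are omitted for brevity.

    ```python
    # The crawl cycle described above in miniature: fetch, parse links,
    # enqueue unseen URLs, repeat. Seed URL is a placeholder; politeness
    # (robots.txt, delays) is omitted for brevity.
    from collections import deque
    from html.parser import HTMLParser
    from urllib.parse import urljoin
    from urllib.request import urlopen

    class LinkParser(HTMLParser):
        def __init__(self):
            super().__init__()
            self.links = []
        def handle_starttag(self, tag, attrs):
            if tag == "a":
                href = dict(attrs).get("href")
                if href:
                    self.links.append(href)

    def crawl(seed, max_pages=10):
        frontier, seen = deque([seed]), {seed}
        while frontier and len(seen) <= max_pages:
            url = frontier.popleft()
            try:
                html = urlopen(url, timeout=5).read().decode("utf-8", "replace")
            except OSError:
                continue
            parser = LinkParser()
            parser.feed(html)
            for href in parser.links:
                absolute = urljoin(url, href)
                if absolute.startswith("http") and absolute not in seen:
                    seen.add(absolute)
                    frontier.append(absolute)
            print("crawled:", url)
        return seen

    crawl("https://example.com/")
    ```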

  15. Universal emergence of PageRank

    Energy Technology Data Exchange (ETDEWEB)

    Frahm, K M; Georgeot, B; Shepelyansky, D L, E-mail: frahm@irsamc.ups-tlse.fr, E-mail: georgeot@irsamc.ups-tlse.fr, E-mail: dima@irsamc.ups-tlse.fr [Laboratoire de Physique Theorique du CNRS, IRSAMC, Universite de Toulouse, UPS, 31062 Toulouse (France)

    2011-11-18

    The PageRank algorithm enables us to rank the nodes of a network through a specific eigenvector of the Google matrix, using a damping parameter α ∈ ]0, 1[. Using extensive numerical simulations of large web networks, with a special accent on British University networks, we determine numerically and analytically the universal features of the PageRank vector at its emergence when α → 1. The whole network can be divided into a core part and a group of invariant subspaces. For α → 1, PageRank converges to a universal power-law distribution on the invariant subspaces whose size distribution also follows a universal power law. The convergence of PageRank at α → 1 is controlled by eigenvalues of the core part of the Google matrix, which are extremely close to unity, leading to large relaxation times as, for example, in spin glasses. (paper)

  16. Universal emergence of PageRank

    International Nuclear Information System (INIS)

    Frahm, K M; Georgeot, B; Shepelyansky, D L

    2011-01-01

    The PageRank algorithm enables us to rank the nodes of a network through a specific eigenvector of the Google matrix, using a damping parameter α ∈ ]0, 1[. Using extensive numerical simulations of large web networks, with a special accent on British University networks, we determine numerically and analytically the universal features of the PageRank vector at its emergence when α → 1. The whole network can be divided into a core part and a group of invariant subspaces. For α → 1, PageRank converges to a universal power-law distribution on the invariant subspaces whose size distribution also follows a universal power law. The convergence of PageRank at α → 1 is controlled by eigenvalues of the core part of the Google matrix, which are extremely close to unity, leading to large relaxation times as, for example, in spin glasses. (paper)

  17. Using Power-Law Degree Distribution to Accelerate PageRank

    Directory of Open Access Journals (Sweden)

    Zhaoyan Jin

    2012-12-01

    The PageRank vector of a network is very important, for it can reflect the importance of a web page in the World Wide Web, or of a person in a social network. However, with the growth of the World Wide Web and social networks, it takes more and more time to compute the PageRank vector of a network. In many real-world applications, the degree and PageRank distributions of these complex networks conform to the power-law distribution. This paper utilizes the degree distribution of a network to initialize its PageRank vector, and presents a power-law degree distribution accelerating algorithm for PageRank computation. Experiments on four real-world datasets show that the proposed algorithm converges more quickly than the original PageRank algorithm.

  18. Open Hypermedia as User Controlled Meta Data for the Web

    DEFF Research Database (Denmark)

    Grønbæk, Kaj; Bouvin, Niels Olof; Sloth, Lennard

    2000-01-01

    By means of the Webvise system, OHIF structures can be authored, imposed on Web pages, and finally linked on the Web as any ordinary Web resource. Following a link to an OHIF file automatically invokes a Webvise download of the meta data structures, and the annotated Web content will be displayed in the browser. Moreover, the Webvise system provides support for users to create, manipulate, and share the OHIF structures together with custom made Web pages and MS Office 2000 documents on WebDAV servers. These Webvise facilities go beyond earlier open hypermedia systems in that they now allow fully distributed open hypermedia linking between Web pages and WebDAV aware desktop applications. The paper describes the OHIF format and demonstrates how the Webvise system handles OHIF. Finally, it argues for better support for handling user controlled meta data, e.g. support for linking in non-XML data...

  19. Open Hypermedia as User Controlled Meta Data for the Web

    DEFF Research Database (Denmark)

    Grønbæk, Kaj; Sloth, Lennert; Bouvin, Niels Olof

    2000-01-01

    By means of the Webvise system, OHIF structures can be authored, imposed on Web pages, and finally linked on the Web as any ordinary Web resource. Following a link to an OHIF file automatically invokes a Webvise download of the meta data structures, and the annotated Web content will be displayed in the browser. Moreover, the Webvise system provides support for users to create, manipulate, and share the OHIF structures together with custom made web pages and MS Office 2000 documents on WebDAV servers. These Webvise facilities go beyond earlier open hypermedia systems in that they now allow fully distributed open hypermedia linking between Web pages and WebDAV aware desktop applications. The paper describes the OHIF format and demonstrates how the Webvise system handles OHIF. Finally, it argues for better support for handling user controlled meta data, e.g. support for linking in non-XML data...

  20. New page Intranet: just messaging?

    Directory of Open Access Journals (Sweden)

    Yaniel Barceló Fernández

    2009-12-01

    The objective of this article is to describe the multi-purpose use of the new Intranet at the Pedagogical University "Rafael M. de Mendive" and the ways to access the variety of services it offers, beyond the e-mail service for which users most often visit it. The article clarifies the new, exclusive advantages of this web page and aims to make all users realize that intelligent use of this new option can bring them nothing but benefits.

  1. Fluid annotations through open hypermedia: Using and extending emerging Web standards

    DEFF Research Database (Denmark)

    Bouvin, Niels Olof; Zellweger, Polle Trescott; Grønbæk, Kaj

    2002-01-01

    We present a prototype that supports the creation and browsing of fluid annotations on third-party Web pages. This prototype is an extension of the Arakne Environment, an open hypermedia application that can augment Web pages with externally stored hypermedia structures. This paper describes how various Web standards, including DOM, CSS, XLink, XPointer...

  2. Happy birthday WWW: the web is now old enough to drive

    CERN Multimedia

    Gilbertson, Scott

    2007-01-01

    "The World Wide Web can now drive. Sixteen years ago yeterday, in a short post to the alt.hypertext newsgroup, tim Berners-Lee revealed the first public web pages summarizing his World Wide Web project." (1/4 page)

  3. A Note on the PageRank of Undirected Graphs

    OpenAIRE

    Grolmusz, Vince

    2012-01-01

    The PageRank is a widely used scoring function of networks in general and of the World Wide Web graph in particular. The PageRank is defined for directed graphs, but in some special cases applications for undirected graphs occur. In the literature it is widely noted that the PageRank for undirected graphs is proportional to the degrees of the vertices of the graph. We prove that statement for a particular personalization vector in the definition of the PageRank, and we also show that in gene...

  4. Multiplex PageRank.

    Directory of Open Access Journals (Sweden)

    Arda Halu

    Full Text Available Many complex systems can be described as multiplex networks in which the same nodes can interact with one another in different layers, thus forming a set of interacting and co-evolving networks. Examples of such multiplex systems are social networks where people are involved in different types of relationships and interact through various forms of communication media. The ranking of nodes in multiplex networks is one of the most pressing and challenging tasks that research on complex networks is currently facing. When pairs of nodes can be connected through multiple links and in multiple layers, the ranking of nodes should necessarily reflect the importance of nodes in one layer as well as their importance in other interdependent layers. In this paper, we draw on the idea of biased random walks to define the Multiplex PageRank centrality measure in which the effects of the interplay between networks on the centrality of nodes are directly taken into account. In particular, depending on the intensity of the interaction between layers, we define the Additive, Multiplicative, Combined, and Neutral versions of Multiplex PageRank, and show how each version reflects the extent to which the importance of a node in one layer affects the importance the node can gain in another layer. We discuss these measures and apply them to an online multiplex social network. Findings indicate that taking the multiplex nature of the network into account helps uncover the emergence of rankings of nodes that differ from the rankings obtained from one single layer. Results provide support in favor of the salience of multiplex centrality measures, like Multiplex PageRank, for assessing the prominence of nodes embedded in multiple interacting networks, and for shedding a new light on structural properties that would otherwise remain undetected if each of the interacting networks were analyzed in isolation.

  5. Multiplex PageRank.

    Science.gov (United States)

    Halu, Arda; Mondragón, Raúl J; Panzarasa, Pietro; Bianconi, Ginestra

    2013-01-01

    Many complex systems can be described as multiplex networks in which the same nodes can interact with one another in different layers, thus forming a set of interacting and co-evolving networks. Examples of such multiplex systems are social networks where people are involved in different types of relationships and interact through various forms of communication media. The ranking of nodes in multiplex networks is one of the most pressing and challenging tasks that research on complex networks is currently facing. When pairs of nodes can be connected through multiple links and in multiple layers, the ranking of nodes should necessarily reflect the importance of nodes in one layer as well as their importance in other interdependent layers. In this paper, we draw on the idea of biased random walks to define the Multiplex PageRank centrality measure in which the effects of the interplay between networks on the centrality of nodes are directly taken into account. In particular, depending on the intensity of the interaction between layers, we define the Additive, Multiplicative, Combined, and Neutral versions of Multiplex PageRank, and show how each version reflects the extent to which the importance of a node in one layer affects the importance the node can gain in another layer. We discuss these measures and apply them to an online multiplex social network. Findings indicate that taking the multiplex nature of the network into account helps uncover the emergence of rankings of nodes that differ from the rankings obtained from one single layer. Results provide support in favor of the salience of multiplex centrality measures, like Multiplex PageRank, for assessing the prominence of nodes embedded in multiple interacting networks, and for shedding a new light on structural properties that would otherwise remain undetected if each of the interacting networks were analyzed in isolation.
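
    As a rough illustration of the additive flavour of these measures, the sketch below computes PageRank on one layer of a toy two-layer network and feeds it into the teleportation (personalization) vector of the second layer, so that importance earned in one layer biases importance in the other. This is an illustrative reading built on networkx, not the paper's exact update equations.

    ```python
    # Toy "additive" multiplex sketch: importance earned in layer A biases the
    # teleportation vector used when ranking layer B. Illustrative only; the
    # paper's Additive Multiplex PageRank is defined by its own update equations.
    import networkx as nx

    layer_a = nx.Graph([("u", "v"), ("v", "w"), ("w", "u"), ("w", "x")])
    layer_b = nx.Graph([("u", "x"), ("x", "v"), ("v", "u")])

    pr_a = nx.pagerank(layer_a, alpha=0.85)

    # Restrict layer-A scores to nodes present in layer B and renormalise.
    pers = {n: pr_a.get(n, 0.0) for n in layer_b}
    total = sum(pers.values())
    pers = {n: p / total for n, p in pers.items()}

    pr_b = nx.pagerank(layer_b, alpha=0.85, personalization=pers)
    print(pr_b)   # layer-B ranking biased by layer-A importance
    ```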

  6. Introduction pages

    Directory of Open Access Journals (Sweden)

    Radu E. Sestras

    2015-09-01

    Full Text Available Introduction Pages and Table of Contents. Research Articles:
    - Insulin Requirements in Relation to Insulin Pump Indications in Type 1 Diabetes, by Gabriela GHIMPEŢEANU, Silvia Ş. IANCU, Gabriela ROMAN, Anca M. ALIONESCU (pp. 259-263)
    - Comparative Antibacterial Efficacy of Vitellaria paradoxa (Shea Butter Tree) Extracts Against Some Clinical Bacterial Isolates, by Kamoldeen Abiodun AJIJOLAKEWU, Fola Jose AWARUN (pp. 264-268)
    - A Murine Effort Model for Studying the Influence of Trichinella on Muscular Activity of Mice, by Ionut MARIAN, Călin Mircea GHERMAN, Andrei Daniel MIHALCA (pp. 269-271)
    - Prevalence and Antibiogram of Generic Extended-Spectrum β-Lactam-Resistant Enterobacteria in Healthy Pigs, by Ifeoma Chinyere UGWU, Madubuike Umunna ANYANWU, Chidozie Clifford UGWU, Ogbonna Wilfred UGWUANYI (pp. 272-280)
    - Index of Relative Importance of the Dietary Proportions of Sloth Bear (Melursus ursinus) in Semi-Arid Region, by Tana P. MEWADA (pp. 281-288)
    - Bioaccumulation Potentials of Momordica charantia L. Medicinal Plant Grown in Lead Polluted Soil under Organic Fertilizer Amendment, by Ojo Michael OSENI, Omotola Esther DADA, Adekunle Ajayi ADELUSI (pp. 289-294)
    - Induced Chitinase and Chitosanase Activities in Turmeric Plants by Application of β-D-Glucan Nanoparticles, by Sathiyanarayanan ANUSUYA, Muthukrishnan SATHIYABAMA (pp. 295-298)
    - Present or Absent? About a Threatened Fern, Asplenium adulterinum Milde, in South-Eastern Carpathians (Romania), by Attila BARTÓK, Irina IRIMIA (pp. 299-307)
    - Comparative Root and Stem Anatomy of Four Rare Onobrychis Mill. (Fabaceae) Taxa Endemic in Turkey, by Mehmet TEKİN, Gülden YILMAZ (pp. 308-312)
    - Propagation of Threatened Nepenthes khasiana: Methods and Precautions, by Jibankumar S. KHURAIJAM, Rup K. ROY (pp. 313-315)
    - Alleviate Seed Ageing Effects in Silybum marianum by Application of Hormone Seed Priming, by Seyed Ata SIADAT, Seyed Amir MOOSAVI, Mehran SHARAFIZADEH (pp. 316-321)
    - The Effect of Halopriming and Salicylic Acid on the Germination of Fenugreek (Trigonella foenum-graecum) under Different Cadmium

  7. Oracle Application Express 5 for beginners a practical guide to rapidly develop data-centric web applications accessible from desktop, laptops, tablets, and smartphones

    CERN Document Server

    2015-01-01

    Oracle Application Express has taken another big leap towards becoming a true next-generation RAD tool. It has entered its fifth version to build robust web applications. One of the most significant features in this release is a new page designer that helps developers create and edit page elements within a single page design view, which greatly improves developer productivity. Without involving the audience too much in the boring bits, this full-color edition adopts an inspiring approach that helps beginners practically evaluate almost every feature of Oracle Application Express, including all features new to version 5. The most convincing way to explore a technology is to apply it to a real-world problem. In this book, you’ll develop a sales application that demonstrates almost every feature to practically expose the anatomy of Oracle Application Express 5. The short list below presents some main topics of Oracle APEX covered in this book: Rapid web application development for desktops, la...

  8. Upgrade of CERN OP Webtools IRRAD Page

    CERN Document Server

    Vik, Magnus Bjerke

    2017-01-01

    CERN Beams Department maintains a website with various tools for the Operations Group, one of them specific to the Proton Irradiation Facility (IRRAD). The IRRAD team use the tool to follow up and optimize the operation of the facility. The original version of the tool was difficult to maintain, and adding new features to the page was challenging. This summer student project was therefore aimed at upgrading the web page by rewriting it with maintainability and flexibility in mind. The new application uses a server-client architecture with a REST API on the back end, which is used by the front end to request data for visualization. PHP is used on the back end to implement the APIs, and Swagger is used to document them. Vue, Semantic UI, Webpack, Node and ECMAScript 5 are used on the front end to visualize and administrate the data. The result is a new IRRAD operations web application with extended functionality, improved structure and an improved user interface. It includes a new Status Panel page th...

  9. Synchronizing Web Documents with Style

    NARCIS (Netherlands)

    R.L. Guimarães (Rodrigo); D.C.A. Bulterman (Dick); P.S. Cesar Garcia (Pablo Santiago); A.J. Jansen (Jack)

    2014-01-01

    In this paper we report on our efforts to define a set of document extensions to Cascading Style Sheets (CSS) that allow for structured timing and synchronization of elements within a Web page. Our work considers the scenario in which the temporal structure can be decoupled from the

  10. Web Search Engines

    Indian Academy of Sciences (India)

    retrieve pages related to insects and also the automobile model. There is no straightforward way of telling a web search tool that you are looking for beetle as a ... (which owns the Altavista search engine) and Inktomi (the firm that writes the software for HotBot and Yahoo!) are said to be considering commercialisation of this ...

  11. 07 TSjoen WEB 02.pmd

    African Journals Online (AJOL)

    Owner

    In recent years, internet poetry that is no longer available in printed form, also referred to as Poetry Off the Page, has been expanding on the World Wide Web, along with the phenomenon of performance and slam poetry. In the Dutch language area, such phenomena have lately...

  12. Multigraph: Interactive Data Graphs on the Web

    Science.gov (United States)

    Phillips, M. B.

    2010-12-01

    Many aspects of geophysical science involve time dependent data that is often presented in the form of a graph. Considering that the web has become a primary means of communication, there are surprisingly few good tools and techniques available for presenting time-series data on the web. The most common solution is to use a desktop tool such as Excel or Matlab to create a graph which is saved as an image and then included in a web page like any other image. This technique is straightforward, but it limits the user to one particular view of the data, and disconnects the graph from the data in a way that makes updating a graph with new data an often cumbersome manual process. This situation is somewhat analogous to the state of mapping before the advent of GIS. Maps existed only in printed form, and creating a map was a laborious process. In the last several years, however, the world of mapping has experienced a revolution in the form of web-based and other interactive computer technologies, so that it is now commonplace for anyone to easily browse through gigabytes of geographic data. Multigraph seeks to bring a similar ease of access to time series data. Multigraph is a program for displaying interactive time-series data graphs in web pages that includes a simple way of configuring the appearance of the graph and the data to be included. It allows multiple data sources to be combined into a single graph, and allows the user to explore the data interactively. Multigraph lets users explore and visualize "data space" in the same way that interactive mapping applications such as Google Maps facilitate exploring and visualizing geography. Viewing a Multigraph graph is extremely simple and intuitive, and requires no instructions. Creating a new graph for inclusion in a web page involves writing a simple XML configuration file and requires no programming. Multigraph can read data in a variety of formats, and can display data from a web service, allowing users to "surf

  13. Programming the semantic web

    CERN Document Server

    Segaran, Toby; Taylor, Jamie

    2009-01-01

    With this book, the promise of the Semantic Web -- in which machines can find, share, and combine data on the Web -- is not just a technical possibility, but a practical reality. Programming the Semantic Web demonstrates several ways to implement semantic web applications, using current and emerging standards and technologies. You'll learn how to incorporate existing data sources into semantically aware applications and publish rich semantic data. Each chapter walks you through a single piece of semantic technology and explains how you can use it to solve real problems. Whether you're writing

  14. WEB Semântica / Semantic Web

    Directory of Open Access Journals (Sweden)

    Gisele Vasconcelos Dziekaniak

    2004-01-01

    Full Text Available This paper approaches the Semantic Web: the new version of the web under development, through projects such as Scorpion and Desire. These projects aim to organize the knowledge stored in their files and web pages, promising that machines will understand human language when retrieving information, without the user needing to master refined search strategies. The article presents the Dublin Core metadata standard as the one currently most used by the communities developing projects in the Semantic Web area, discusses RDF as the structure indicated by the visionaries of this new web for developing semantic schemas to represent information made available over the network, and covers XML as a markup language for structured data. It reveals the need for improvements in the organization of information in the Brazilian electronic indexing scenario so that it can keep up with the new paradigm of information retrieval and knowledge organization.

  15. PAGING IN COMMUNICATIONS

    DEFF Research Database (Denmark)

    2016-01-01

    A method and an apparatus are disclosed for managing paging in a communications system. The method may include, based on a received set of physical resources, determining, in a terminal apparatus, an original paging pattern defining potential time instants for paging, wherein the potential time...

  16. Semantic Advertising for Web 3.0

    Science.gov (United States)

    Thomas, Edward; Pan, Jeff Z.; Taylor, Stuart; Ren, Yuan; Jekjantuk, Nophadol; Zhao, Yuting

    Advertising on the World Wide Web is based around automatically matching web pages with appropriate advertisements, in the form of banner ads, interactive adverts, or text links. Traditionally this has been done by manual classification of pages, or more recently using information retrieval techniques to find the most important keywords from the page, and match these to keywords being used by adverts. In this paper, we propose a new model for online advertising, based around lightweight embedded semantics. This will improve the relevancy of adverts on the World Wide Web and help to kick-start the use of RDFa as a mechanism for adding lightweight semantic attributes to the Web. Furthermore, we propose a system architecture for the proposed new model, based on our scalable ontology reasoning infrastructure TrOWL.

  17. Building a dynamic Web/database interface

    OpenAIRE

    Cornell, Julie.

    1996-01-01

    This thesis examines methods for accessing information stored in a relational database from a Web page. The stateless and connectionless nature of the Web's Hypertext Transport Protocol as well as the open nature of the Internet Protocol pose problems in the areas of database concurrency, security, speed, and performance. We examined the Common Gateway Interface, Server API, Oracle's Web/database architecture, and the Java Database Connectivity interface in terms of p...

  18. Policy-Aware Content Reuse on the Web

    Science.gov (United States)

    Seneviratne, Oshani; Kagal, Lalana; Berners-Lee, Tim

    The Web allows users to share their work very effectively leading to the rapid re-use and remixing of content on the Web including text, images, and videos. Scientific research data, social networks, blogs, photo sharing sites and other such applications known collectively as the Social Web have lots of increasingly complex information. Such information from several Web pages can be very easily aggregated, mashed up and presented in other Web pages. Content generation of this nature inevitably leads to many copyright and license violations, motivating research into effective methods to detect and prevent such violations.

  19. Chapter 21 The Semantic Web : Webizing Knowledge Representation

    NARCIS (Netherlands)

    Hendler, Jim; van Harmelen, Frank

    2008-01-01

    The World Wide Web opens up new opportunities for the use of knowledge representation: a formal description of the semantic content of Web pages can allow better processing by computational agents. Further, the naming scheme of the Web, using Universal Resource Indicators, allows KR systems to avoid

  20. Automatic web site authoring with SiteGuide

    NARCIS (Netherlands)

    de Boer, V.; Hollink, V.; van Someren, M.W.; Kłopotek, M.A.; Przepiórkowski, A.; Wierzchoń, S.T.; Trojanowski, K.

    2009-01-01

    An important step in the design process for a web site is to determine which information is to be included and how the information should be organized on the web site’s pages. In this paper we describe ’SiteGuide’, a tool that automatically produces an information architecture for a web site that a

  1. Adaptive web data extraction policies

    Directory of Open Access Journals (Sweden)

    Provetti, Alessandro

    2008-12-01

    Full Text Available Web data extraction is concerned, among other things, with routine data accessing and downloading from continuously-updated dynamic Web pages. There is a relevant trade-off between the rate at which the external Web sites are accessed and the computational burden on the accessing client. We address the problem by proposing a predictive model, typical of the Operating Systems literature, of the rate-of-update of each Web source. The presented model has been implemented into a new version of the Dynamo project: a middleware that assists in generating informative RSS feeds out of traditional HTML Web sites. To be effective, i.e., to make RSS feeds timely and informative, and to be scalable, Dynamo needs careful tuning and customization of its polling policies, which are described in detail.
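
    A minimal sketch of such an adaptive polling policy is shown below: the client fingerprints a page, lengthens its polling interval while the page is unchanged, and shortens it again when a change is observed. The helper names, bounds, and backoff factor are illustrative assumptions, not Dynamo's actual implementation.

    ```python
    # Adaptive polling sketch: back off while a page is unchanged, reset on change.
    # Hypothetical helper names and bounds; not Dynamo's implementation.
    import hashlib
    import time
    import urllib.request

    def fingerprint(url):
        with urllib.request.urlopen(url, timeout=10) as resp:
            return hashlib.sha256(resp.read()).hexdigest()

    def poll(url, min_interval=60, max_interval=3600, backoff=2.0):
        interval = min_interval
        last = fingerprint(url)
        while True:
            time.sleep(interval)
            current = fingerprint(url)
            if current != last:                # source changed: poll fast again
                last = current
                interval = min_interval
            else:                              # source quiet: back off
                interval = min(interval * backoff, max_interval)
    ```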

  2. Classifying web genres in context: a case study documenting the web genres used by a software engineer

    NARCIS (Netherlands)

    Montesi, M.; Navarrete, T.

    2008-01-01

    This case study analyzes the Internet-based resources that a software engineer uses in his daily work. Methodologically, we studied the web browser history of the participant, classifying all the web pages he had seen over a period of 12 days into web genres. We interviewed him before and after the

  3. PageRank for low frequency earthquake detection

    Science.gov (United States)

    Aguiar, A. C.; Beroza, G. C.

    2013-12-01

    We have analyzed Hi-Net seismic waveform data during the April 2006 tremor episode in the Nankai Trough in SW Japan using the autocorrelation approach of Brown et al. (2008), which detects low frequency earthquakes (LFEs) based on pair-wise waveform matching. We have generalized this to exploit the fact that waveforms may repeat multiple times, on more than just a pair-wise basis. We are working towards developing a sound statistical basis for event detection, but that is complicated by two factors. First, the statistical behavior of the autocorrelations varies between stations. Analyzing one station at a time assures that the detection threshold will only depend on the station being analyzed. Second, the positive detections do not satisfy "closure." That is, if window A correlates with window B, and window B correlates with window C, then window A and window C do not necessarily correlate with one another. We want to evaluate whether or not a linked set of windows is correlated due to chance. To do this, we map our problem onto one that has previously been solved for web search, and apply Google's PageRank algorithm. PageRank is the probability of a 'random surfer' visiting a particular web page; it assigns a ranking for a web page based on the number of links associated with that page. For windows of seismic data instead of web pages, the windows with high probabilities suggest likely LFE signals. Once identified, we stack the matched windows to improve the SNR and use these stacks as template signals to find other LFEs within continuous data. We compare the results among stations and declare a detection if they are found in a statistically significant number of stations, based on multinomial statistics. We compare our detections using the single-station method to detections found by Shelly et al. (2007) for the April 2006 tremor sequence in Shikoku, Japan. We find strong similarity between the results, as well as many new detections that were not found using
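
    The mapping from waveform windows to a ranked graph can be sketched in a few lines: windows are nodes, pair-wise correlations above a threshold are edges, and PageRank scores the windows. The data, threshold, and library choices (numpy, networkx) below are placeholders, not the authors' pipeline.

    ```python
    # Windows become nodes, strong pair-wise correlations become edges, and
    # PageRank highlights windows embedded in many mutually similar signals.
    # Random placeholder data; real input would be waveform windows.
    import networkx as nx
    import numpy as np

    rng = np.random.default_rng(0)
    windows = rng.normal(size=(50, 200))   # 50 windows of 200 samples each
    corr = np.corrcoef(windows)            # pair-wise correlation matrix

    g = nx.Graph()
    g.add_nodes_from(range(len(windows)))
    threshold = 0.5                        # placeholder detection threshold
    for i in range(len(windows)):
        for j in range(i + 1, len(windows)):
            if corr[i, j] > threshold:
                g.add_edge(i, j)

    scores = nx.pagerank(g, alpha=0.85)
    top = sorted(scores, key=scores.get, reverse=True)[:5]
    print("candidate LFE windows:", top)   # stack these to build templates
    ```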

  4. Web Mining

    Science.gov (United States)

    Fürnkranz, Johannes

    The World-Wide Web provides every internet citizen with access to an abundance of information, but it becomes increasingly difficult to identify the relevant pieces of information. Research in web mining tries to address this problem by applying techniques from data mining and machine learning to Web data and documents. This chapter provides a brief overview of web mining techniques and research areas, most notably hypertext classification, wrapper induction, recommender systems and web usage mining.

  5. Using JavaScript and the FDSN web service to create an interactive earthquake information system

    Science.gov (United States)

    Fischer, Kasper D.

    2015-04-01

    The FDSN web service provides a web interface to access earthquake meta-data (e.g. event or station information) and waveform data over the Internet. Requests are sent to a server as URLs, and the output is either XML or miniSEED. This makes it hard for humans to read but easy to process with different software. Different data centers already support the FDSN web service, e.g. USGS, IRIS, ORFEUS. The FDSN web service is also part of the Seiscomp3 (http://www.seiscomp3.org) software. The Seismological Observatory of the Ruhr-University switched to Seiscomp3 as the standard software for the analysis of mining-induced earthquakes at the beginning of 2014. This made it necessary to create a new web-based earthquake information service for the publication of results to the general public. This has been done by processing the output of an FDSN web service query with JavaScript running in a standard browser. The result is an interactive map presenting the observed events and further information on events and stations on a single web page, as a table and on a map. In addition, the user can download event information, waveform data and station data in different formats like miniSEED, quakeML or FDSNxml. The developed code and all used libraries are open source and freely available.
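
    For readers unfamiliar with the FDSN web service, a request is just a URL. The sketch below queries the event service; the URL scheme follows the FDSN web service specification, while the chosen data centre (IRIS) and the parameter values are illustrative.

    ```python
    # Example fdsnws-event request. The URL scheme follows the FDSN web service
    # specification; the data centre and parameter values are illustrative.
    import urllib.parse
    import urllib.request

    params = {
        "starttime": "2015-01-01",
        "endtime": "2015-01-31",
        "minmagnitude": "5.5",
        "format": "text",   # "xml" would return QuakeML instead
    }
    url = "https://service.iris.edu/fdsnws/event/1/query?" + urllib.parse.urlencode(params)
    with urllib.request.urlopen(url) as resp:
        print(resp.read().decode("utf-8")[:500])   # pipe-separated event lines
    ```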

  6. The Next Page Access Prediction Using Markov Model

    OpenAIRE

    Deepti Razdan

    2011-01-01

    Predicting the next page to be accessed by Web users has attracted a large amount of research. In this paper, a new web usage mining approach is proposed to predict next page access. It is proposed to identify similar access patterns from web logs using K-means clustering, and then a Markov model is used for prediction of next page accesses. The tightness of clusters is improved by setting a similarity threshold while forming clusters. In traditional recommendation models, clustering by non-sequential d...
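
    Once sessions are clustered, the per-cluster predictor described here is a first-order Markov chain over page visits. The sketch below builds the transition counts and predicts the most probable next page; the sessions are illustrative and the clustering step is omitted.

    ```python
    # First-order Markov model over click-streams (clustering step omitted).
    from collections import Counter, defaultdict

    sessions = [                      # illustrative sessions from a web log
        ["home", "search", "product", "cart"],
        ["home", "product", "cart", "checkout"],
        ["home", "search", "product", "product"],
    ]

    transitions = defaultdict(Counter)
    for session in sessions:
        for cur, nxt in zip(session, session[1:]):
            transitions[cur][nxt] += 1

    def predict_next(page):
        """Most probable next page given the current one, or None if unseen."""
        if page not in transitions:
            return None
        return transitions[page].most_common(1)[0][0]

    print(predict_next("home"))      # 'search' (seen in 2 of 3 sessions)
    print(predict_next("product"))   # 'cart'
    ```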

  7. Secure Page Fusion with VUsion

    NARCIS (Netherlands)

    Oliverio, Marco; Bos, Herbert; Razavi, Kaveh; Giuffrida, Cristiano

    2017-01-01

    To reduce memory pressure, modern operating systems and hypervisors such as Linux/KVM deploy page-level memory fusion to merge physical memory pages with the same content (i.e., page fusion). A write to a fused memory page triggers a copy-on-write event that unmerges the page to preserve correct

  8. Web Annotation and Threaded Forum: How Did Learners Use the Two Environments in an Online Discussion?

    Science.gov (United States)

    Sun, Yanyan; Gao, Fei

    2014-01-01

    Web annotation is a Web 2.0 technology that allows learners to work collaboratively on web pages or electronic documents. This study explored the use of Web annotation as an online discussion tool by comparing it to a traditional threaded discussion forum. Ten graduate students participated in the study. Participants had access to both a Web…

  9. A URI-based approach for addressing fragments of media resources on the Web

    NARCIS (Netherlands)

    E. Mannens; D. van Deursen; R. Troncy (Raphael); S. Pfeiffer; C. Parker (Conrad); Y. Lafon; A.J. Jansen (Jack); M. Hausenblas; R. van de Walle

    2011-01-01

    To make media resources a prime citizen on the Web, we have to go beyond simply replicating digital media files. The Web is based on hyperlinks between Web resources, and that includes hyperlinking out of resources (e.g., from a word or an image within a Web page) as well as hyperlinking

  10. Date restricted queries in web search engines

    OpenAIRE

    Lewandowski, Dirk

    2004-01-01

    Search engines usually offer a date restricted search on their advanced search pages. But determining the actual update of a web page is not without problems. We conduct a study testing date restricted queries on the search engines Google, Teoma and Yahoo!. We find that these searches fail to work properly in the examined engines. We discuss implications of this for further research and search engine development.

  11. Twelve Theses on Reactive Rules for the Web

    OpenAIRE

    Bry, François; Eckert, Michael

    2006-01-01

    Reactivity, the ability to detect and react to events, is an essential functionality in many information systems. In particular, Web systems such as online marketplaces, adaptive (e.g., recommender) systems, and Web services react to events such as Web page updates or data posted to a server. This article investigates issues of relevance in designing high-level programming languages dedicated to reactivity on the Web. It presents twelve theses on features desira...

  12. Adding a visualization feature to web search engines: it's time.

    Science.gov (United States)

    Wong, Pak Chung

    2008-01-01

    It's widely recognized that all Web search engines today are almost identical in presentation layout and behavior. In fact, the same presentation approach has been applied to depicting search engine results pages (SERPs) since the first Web search engine launched in 1993. In this Visualization Viewpoints article, I propose to add a visualization feature to Web search engines and suggest that the new addition can improve search engines' performance and capabilities, which in turn lead to better Web search technology.

  13. What is the invisible web? A crawler perspective

    OpenAIRE

    Arroyo, Natalia

    2004-01-01

    The invisible Web, also known as the deep Web or dark matter, is an important problem for Webometrics due to difficulties of conceptualization and measurement. The invisible Web has been defined to be the part of the Web that cannot be indexed by search engines, including databases and dynamically generated pages. Some authors have recognized that this is a quite subjective concept that depends on the point of view of the observer: what is visible for one observer may be invisible for others....

  14. Oh What a Tangled Biofilm Web Bacteria Weave

    Science.gov (United States)

    By Elia Ben-Ari, posted May 1, ... a suitable surface, some water and nutrients, and bacteria will likely put down stakes and form biofilms. ...

  15. Parallel Strands: A Preliminary Investigation into Mining the Web

    National Research Council Canada - National Science Library

    Resnik, P

    1998-01-01

    .... A parallel corpus resource not yet explored is the World Wide Web which hosts an abundance of pages in parallel translation, offering a potential solution to some of these problems and unique opportunities of its own...

  16. Neutralizing SQL Injection Attack Using Server Side Code Modification in Web Applications

    OpenAIRE

    Dalai, Asish Kumar; Jena, Sanjay Kumar

    2017-01-01

    Reports on web application security risks show that SQL injection is the top-most vulnerability. The journey from static to dynamic web pages led to the use of databases in web applications. Due to the lack of secure coding techniques, the SQL injection vulnerability prevails in a large set of web applications. A successful SQL injection attack imposes a serious threat to the database, the web application, and the entire web server. In this article, the authors have proposed a novel method for prevent...

  17. WAPTT - Web Application Penetration Testing Tool

    Directory of Open Access Journals (Sweden)

    DURIC, Z.

    2014-02-01

    Full Text Available Web application vulnerabilities allow attackers to perform malicious actions that range from gaining unauthorized account access to obtaining sensitive data. The number of reported web application vulnerabilities has increased dramatically in the last decade. Most vulnerabilities result from improper input validation and sanitization. The most important vulnerabilities based on improper input validation and sanitization are SQL injection (SQLI), Cross-Site Scripting (XSS) and Buffer Overflow (BOF). In order to address these vulnerabilities we designed and developed WAPTT (Web Application Penetration Testing Tool), a web application penetration testing tool. Unlike other web application penetration testing tools, this tool is modular and can be easily extended by the end user. In order to improve the efficiency of SQLI vulnerability detection, WAPTT uses an efficient algorithm for page similarity detection. The proposed tool showed promising results as compared to six well-known web application scanners in detecting various web application vulnerabilities.
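
    Page-similarity detection of the kind mentioned above can be approximated in a few lines: compare the page returned for a boolean-true payload and for a boolean-false payload against the original. The sketch below uses difflib for the similarity ratio; the URL, payloads, and 0.95 cut-off are illustrative assumptions, not WAPTT's actual algorithm.

    ```python
    # Boolean-based SQLI probe using page similarity. Illustrative only:
    # placeholder URL and payloads, and a guessed 0.95 similarity cut-off.
    import difflib
    import urllib.request

    def fetch(url):
        with urllib.request.urlopen(url, timeout=10) as resp:
            return resp.read().decode("utf-8", errors="replace")

    def similarity(a, b):
        return difflib.SequenceMatcher(None, a, b).ratio()

    base = "http://testsite.example/item?id=1"
    page = fetch(base)
    page_true = fetch(base + "%20AND%201=1")    # boolean-true payload
    page_false = fetch(base + "%20AND%201=2")   # boolean-false payload

    # Injectable if TRUE leaves the page intact while FALSE visibly changes it.
    if similarity(page, page_true) > 0.95 > similarity(page, page_false):
        print("parameter looks injectable")
    ```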

  18. Augmenting the Web through Open Hypermedia

    DEFF Research Database (Denmark)

    Bouvin, N.O.

    2003-01-01

    Based on an overview of Web augmentation and detailing the three basic approaches to extend the hypermedia functionality of the Web, the author presents a general open hypermedia framework (the Arakne framework) to augment the Web. The aim is to provide users with the ability to link, annotate, and otherwise structure Web pages, as they see fit. The paper further discusses the possibilities of the concept through the description of various experiments performed with an implementation of the framework, the Arakne Environment

  19. Page Styles on steroids

    DEFF Research Database (Denmark)

    Madsen, Lars

    2008-01-01

    Designing a page style has long been a pain for novice users. Some parts are easy; others need strong LATEX knowledge. In this article we will present the memoir way of dealing with page styles, including new code added to the recent version of memoir that will reduce the pain to a mild annoyance...

  20. Web Services and Other Enhancements at the Northern California Earthquake Data Center

    Science.gov (United States)

    Neuhauser, D. S.; Zuzlewski, S.; Allen, R. M.

    2012-12-01

    The Northern California Earthquake Data Center (NCEDC) provides data archive and distribution services for seismological and geophysical data sets that encompass northern California. The NCEDC is enhancing its ability to deliver rapid information through Web Services. NCEDC Web Services use well-established web server and client protocols and REST software architecture to allow users to easily make queries using web browsers or simple program interfaces and to receive the requested data in real-time rather than through batch or email-based requests. Data are returned to the user in the appropriate format such as XML, RESP, or MiniSEED depending on the service, and are compatible with the equivalent IRIS DMC web services. The NCEDC is currently providing the following Web Services: (1) Station inventory and channel response information delivered in StationXML format, (2) Channel response information delivered in RESP format, (3) Time series availability delivered in text and XML formats, (4) Single channel and bulk data request delivered in MiniSEED format. The NCEDC is also developing a rich Earthquake Catalog Web Service to allow users to query earthquake catalogs based on selection parameters such as time, location or geographic region, magnitude, depth, azimuthal gap, and rms. It will return (in QuakeML format) user-specified results that can include simple earthquake parameters, as well as observations such as phase arrivals, codas, amplitudes, and computed parameters such as first motion mechanisms, moment tensors, and rupture length. The NCEDC will work with both IRIS and the International Federation of Digital Seismograph Networks (FDSN) to define a uniform set of web service specifications that can be implemented by multiple data centers to provide users with a common data interface across data centers. The NCEDC now hosts earthquake catalogs and waveforms from the US Department of Energy (DOE) Enhanced Geothermal Systems (EGS) monitoring networks. These

  1. New WWW Pages

    CERN Multimedia

    Pommes, K

    New WWW pages have been created in order to provide easy access to the many activities and pertaining information of the ATLAS Technical Coordination. The main entry point is available on the ATLAS Collaboration page by clicking the Technical Coordination link which leads to the page shown in the following picture. Each button links to a page listing all tasks of the corresponding activity, the responsible task leaders, schedules, work-packages, and action lists, etc... The "ATLAS Documentation Center" button will present the pop-up window shown in the next figure: Besides linking to the Technical Coordination Activities, this page provides direct access to the tools for Project Progress Tracking (PPT) and Engineering Data Management (EDMS), as well as to the main topics being coordinated by the Technical Coordination.

  2. Aesthetic design of e-commerce web pages—Complexity, order, and preferences

    NARCIS (Netherlands)

    Poole, M.S.

    2012-01-01

    This study was conducted to understand the perceptual structure of e-commerce webpage visual aesthetics and to provide insight into how physical design features of web pages influence users' aesthetic perception of and preference for web pages. Drawing on the environmental aesthetics, human-computer

  3. Web evolution and Web Science

    OpenAIRE

    Hall, Wendy; Tiropanis, Thanassis

    2012-01-01

    This paper examines the evolution of the World Wide Web as a network of networks and discusses the emergence of Web Science as an interdisciplinary area that can provide us with insights on how the Web developed, and how it has affected and is affected by society. Through its different stages of evolution, the Web has gradually changed from a technological network of documents to a network where documents, data, people and organisations are interlinked in various and often unexpected ways. It...

  4. GALILEE: AN INTERNET WEB BASED DISTANCE LEARNING SUPPORT SYSTEM

    Directory of Open Access Journals (Sweden)

    Arthur Budiman

    1999-01-01

    Full Text Available This paper presents a project for a Web-based distance learning support system. The system has been built on the Internet and World Wide Web facilities. It can be accessed with a web browser pointed at a certain web server address, so that students can carry out the learning process just as in the real situation: student admission, taking course materials, syllabi, assignments, student grades, class discussions through the web, and online quizzes. Students can also join collaborative work by giving opinions, feedback and student-produced papers/webs which can be shared with the entire learning community. It therefore builds a collaborative learning environment where lecturers together with students construct knowledge databases for the entire learning community. The system has been developed with Active Server Pages (ASP) technology from Microsoft, embedded in a web server. Web pages reside on a web server connected to an SQL database server. The database server stores structured data such as lecturers' and students' personal information, course lists, syllabi and their descriptions, announcement texts from lecturers, commentaries for the discussion forum, students' study evaluations, scores for each assignment, quizzes for each course, assignment texts from lecturers, assignments collected from students, and student contributions/materials. The system is maintained by an administrator who maintains and develops the web pages using HTML and writes ASP scripts to turn web pages into active server pages. Lecturers and students can contribute course materials and share their ideas through their web browsers. This web-based collaborative learning system gives students a more active role in the information gathering and learning process, making distance students feel part of a learning community, therefore increasing motivation, comprehension and

  5. Web archives

    DEFF Research Database (Denmark)

    Finnemann, Niels Ole

    2018-01-01

    This article deals with general web archives and the principles for selection of materials to be preserved. It opens with a brief overview of reasons why general web archives are needed. Sections two and three present major, long-term web archive initiatives and discuss the purposes and possible...... values of web archives, asking how to meet unknown future needs, demands and concerns. Section four analyses three main principles in contemporary web archiving strategies, namely topic-centric, domain-centric and time-centric archiving strategies, and section five discusses how to combine these to provide...... a broad and rich archive. Section six is concerned with inherent limitations and why web archives are always flawed. The last sections deal with the question of how web archives may fit into the rapidly expanding, but fragmented landscape of digital repositories taking care of various parts...

  6. Snippet-based relevance predictions for federated web search

    NARCIS (Netherlands)

    Demeester, Thomas; Nguyen, Dong-Phuong; Trieschnigg, Rudolf Berend; Develder, Chris; Hiemstra, Djoerd

    How well can the relevance of a page be predicted, purely based on snippets? This would be highly useful in a Federated Web Search setting where caching large amounts of result snippets is more feasible than caching entire pages. The experiments reported in this paper make use of result snippets and

  7. Distribution of pagerank mass among principle components of the web

    NARCIS (Netherlands)

    Avrachenkov, Konstatin; Litvak, Nelli; Pham, Kim Son; Bonato, A.; Chung, F.R.K.

    2007-01-01

    We study the PageRank mass of principal components in a bow-tie Web Graph, as a function of the damping factor c. Using a singular perturbation approach, we show that the PageRank share of IN and SCC components remains high even for very large values of the damping factor, in spite of the fact that

  8. UPGRADE OF THE CENTRAL WEB SERVERS

    CERN Multimedia

    WEB Services

    2000-01-01

    During the weekend of the 25-26 March, the infrastructure of the CERN central web servers will undergo a major upgrade.As a result, the web services hosted by the central servers (that is, the services the address of which starts with www.cern.ch) will be unavailable Friday 24th, from 17:30 to 18:30, and may suffer from short interruptions until 20:00. This includes access to the CERN top-level page as well as the services referenced by this page (such as access to the scientific program and events information, or training, recruitment, housing services).After the upgrade, the change will be transparent to the users. Expert readers may however notice that when they connect to a web page starting with www.cern.ch this address is slightly changed when the page is actually displayed on their screen (e.g. www.cern.ch/Press will be changed to Press.web.cern.ch/Press). They should not worry: this behaviour, necessary for technical reasons, is normal.web.services@cern.chTel 74989

  9. Web Spam, Social Propaganda and the Evolution of Search Engine Rankings

    Science.gov (United States)

    Metaxas, Panagiotis Takis

    Search Engines have greatly influenced the way we experience the web. Since the early days of the web, users have been relying on them to get informed and make decisions. When the web was relatively small, web directories were built and maintained using human experts to screen and categorize pages according to their characteristics. By the mid 1990's, however, it was apparent that the human expert model of categorizing web pages does not scale. The first search engines appeared and they have been evolving ever since, taking over the role that web directories used to play.

  10. FIRST Quantum-(1980)-Computing DISCOVERY in Siegel-Rosen-Feynman-...A.-I. Neural-Networks: Artificial(ANN)/Biological(BNN) and Siegel FIRST Semantic-Web and Siegel FIRST ``Page''-``Brin'' ``PageRank'' PRE-Google Search-Engines!!!

    Science.gov (United States)

    Rosen, Charles; Siegel, Edward Carl-Ludwig; Feynman, Richard; Wunderman, Irwin; Smith, Adolph; Marinov, Vesco; Goldman, Jacob; Brine, Sergey; Poge, Larry; Schmidt, Erich; Young, Frederic; Goates-Bulmer, William-Steven; Lewis-Tsurakov-Altshuler, Thomas-Valerie-Genot; Ibm/Exxon Collaboration; Google/Uw Collaboration; Microsoft/Amazon Collaboration; Oracle/Sun Collaboration; Ostp/Dod/Dia/Nsa/W.-F./Boa/Ubs/Ub Collaboration

    2013-03-01

    Belew[Finding Out About, Cambridge(2000)] and separately full-decade pre-Page/Brin/Google FIRST Siegel-Rosen(Machine-Intelligence/Atherton)-Feynman-Smith-Marinov(Guzik Enterprises/Exxon-Enterprises/A.-I./Santa Clara)-Wunderman(H.-P.) [IBM Conf. on Computers and Mathematics, Stanford(1986); APS Mtgs.(1980s): Palo Alto/Santa Clara/San Francisco/...(1980s) MRS Spring-Mtgs.(1980s): Palo Alto/San Jose/San Francisco/...(1980-1992) FIRST quantum-computing via Bose-Einstein quantum-statistics(BEQS) Bose-Einstein CONDENSATION (BEC) in artificial-intelligence(A-I) artificial neural-networks(A-N-N) and biological neural-networks(B-N-N) and Siegel[J. Noncrystalline-Solids 40, 453(1980); Symp. on Fractals..., MRS Fall-Mtg., Boston(1989)-5-papers; Symp. on Scaling..., (1990); Symp. on Transport in Geometric-Constraint (1990)

  11. A Survey On Various Web Template Detection And Extraction Methods

    Directory of Open Access Journals (Sweden)

    Neethu Mary Varghese

    2015-03-01

    Full Text Available Abstract In today's digital world, reliance on the World Wide Web as a source of information is extensive. Users increasingly rely on web-based search engines to provide accurate search results on a wide range of topics that interest them. The search engines in turn parse the vast repository of web pages searching for relevant information. However, the majority of web portals are designed using web templates, which are intended to provide a consistent look and feel to end users. The presence of these templates, however, can influence search results, leading to inaccurate results being delivered to the users. Therefore, to improve the accuracy and reliability of search results, identification and removal of web templates from the actual content is essential. A wide range of approaches are commonly employed to achieve this, and this paper focuses on the study of the various approaches to template detection and extraction that can be applied across homogeneous as well as heterogeneous web pages.
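
    The simplest intuition behind template detection can be sketched directly: segments that recur verbatim across several pages of a site are probably template, and what remains is content. The toy pages, line-level granularity, and majority threshold below are illustrative; the surveyed systems typically work on the DOM rather than raw lines.

    ```python
    # Naive line-level template detection: lines shared by a majority of pages
    # are treated as template and stripped. Toy pages; real systems use the DOM.
    from collections import Counter

    pages = [
        "ACME Shop | Home\nDeals of the day\nContact us",
        "ACME Shop | Blue Widget\nOnly $9.99\nContact us",
        "ACME Shop | Red Widget\nBack in stock\nContact us",
    ]

    line_counts = Counter(line for page in pages for line in set(page.splitlines()))
    majority = len(pages) // 2 + 1
    template = {line for line, n in line_counts.items() if n >= majority}

    for page in pages:
        content = [line for line in page.splitlines() if line not in template]
        print(content)   # page text with the template lines removed
    ```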

  12. PageRank and rank-reversal dependence on the damping factor

    Science.gov (United States)

    Son, S.-W.; Christensen, C.; Grassberger, P.; Paczuski, M.

    2012-12-01

    PageRank (PR) is an algorithm originally developed by Google to evaluate the importance of web pages. Considering how deeply rooted Google's PR algorithm is to gathering relevant information or to the success of modern businesses, the question of rank stability and choice of the damping factor (a parameter in the algorithm) is clearly important. We investigate PR as a function of the damping factor d on a network obtained from a domain of the World Wide Web, finding that rank reversal happens frequently over a broad range of PR (and of d). We use three different correlation measures, Pearson, Spearman, and Kendall, to study rank reversal as d changes, and we show that the correlation of PR vectors drops rapidly as d changes from its frequently cited value, d0=0.85. Rank reversal is also observed by measuring the Spearman and Kendall rank correlation, which evaluate relative ranks rather than absolute PR. Rank reversal happens not only in directed networks containing rank sinks but also in a single strongly connected component, which by definition does not contain any sinks. We relate rank reversals to rank pockets and bottlenecks in the directed network structure. For the network studied, the relative rank is more stable by our measures around d=0.65 than at d=d0.

  13. PageRank and rank-reversal dependence on the damping factor.

    Science.gov (United States)

    Son, S-W; Christensen, C; Grassberger, P; Paczuski, M

    2012-12-01

    PageRank (PR) is an algorithm originally developed by Google to evaluate the importance of web pages. Considering how deeply rooted Google's PR algorithm is to gathering relevant information or to the success of modern businesses, the question of rank stability and choice of the damping factor (a parameter in the algorithm) is clearly important. We investigate PR as a function of the damping factor d on a network obtained from a domain of the World Wide Web, finding that rank reversal happens frequently over a broad range of PR (and of d). We use three different correlation measures, Pearson, Spearman, and Kendall, to study rank reversal as d changes, and we show that the correlation of PR vectors drops rapidly as d changes from its frequently cited value, d_{0}=0.85. Rank reversal is also observed by measuring the Spearman and Kendall rank correlation, which evaluate relative ranks rather than absolute PR. Rank reversal happens not only in directed networks containing rank sinks but also in a single strongly connected component, which by definition does not contain any sinks. We relate rank reversals to rank pockets and bottlenecks in the directed network structure. For the network studied, the relative rank is more stable by our measures around d=0.65 than at d=d_{0}.
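
    The kind of damping-factor sensitivity reported above is easy to reproduce in miniature: compute PageRank at d=0.85 and d=0.65 and compare the two rankings with Kendall's tau. The random directed graph below is only a stand-in for the Web domain studied in the paper.

    ```python
    # PageRank at two damping factors, compared with Kendall's rank correlation.
    # Random directed graph as a stand-in for the Web domain studied above.
    import networkx as nx
    from scipy.stats import kendalltau

    g = nx.gnp_random_graph(500, 0.01, seed=42, directed=True)

    pr_hi = nx.pagerank(g, alpha=0.85)
    pr_lo = nx.pagerank(g, alpha=0.65)

    nodes = sorted(g)
    tau, _ = kendalltau([pr_hi[n] for n in nodes], [pr_lo[n] for n in nodes])
    print(f"Kendall tau between d=0.85 and d=0.65 rankings: {tau:.3f}")
    ```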

  14. Importance of intrinsic and non-network contribution in PageRank centrality and its effect on PageRank localization

    OpenAIRE

    Deyasi, Krishanu

    2016-01-01

    PageRank centrality is used by Google for ranking web pages to present search results for a user query. Here, we have shown that the PageRank value of a vertex also depends on its intrinsic, non-network contribution. If the intrinsic, non-network contributions of the vertices are proportional to their degrees, or zero, then their PageRank centralities become proportional to their degrees. Some simulations and empirical data are used to support our study. In addition, we have shown that localization ...

  15. Identifying Aspects for Web-Search Queries

    OpenAIRE

    Wu, Fei; Madhavan, Jayant; Halevy, Alon

    2014-01-01

    Many web-search queries serve as the beginning of an exploration of an unknown space of information, rather than looking for a specific web page. To answer such queries effectively, the search engine should attempt to organize the space of relevant information in a way that facilitates exploration. We describe the Aspector system that computes aspects for a given query. Each aspect is a set of search queries that together represent a distinct information need relevant to the original search...

  16. An Expertise Recommender using Web Mining

    Science.gov (United States)

    Joshi, Anupam; Chandrasekaran, Purnima; ShuYang, Michelle; Ramakrishnan, Ramya

    2001-01-01

    This report explored techniques to mine the web pages of scientists to extract information regarding their expertise, build expertise chains and referral webs, and semi-automatically combine this information with directory information services to create a recommender system that permits query by expertise. The approach included experimenting with existing techniques that have been reported in the research literature in the recent past, and adapting them as needed. In addition, software tools were developed to capture and use this information.

  17. Reading and writing in the Web

    OpenAIRE

    Barbosa, Ana Cristina Lima Santos

    2010-01-01

    This article investigates the characteristics of Web texts, which shape specific reading and writing styles. At first, a discussion is made of the articulation between communication and knowledge, spanning from oral cultures to the cyberculture. After that, the informal writing of communication mediated by computers is introduced. Finally, under the perspective of the hypertextual language of the digital medium, writing styles and reading modes of those texts on Web pages are approached. Base...

  18. Give your feedback on the new Users’ page

    CERN Multimedia

    CERN Bulletin

    If you haven't already done so, visit the new Users’ page and provide the Communications group with your feedback. You can do this quickly and easily via an online form. A dedicated web steering group will design the future page on the basis of your comments. As a first step towards reforming the CERN website, the Communications group is proposing a ‘beta’ version of the Users’ pages. The primary aim of this version is to improve the visibility of key news items, events and announcements to the CERN community. The beta version is very much work in progress: your input is needed to make sure that the final site meets the needs of CERN’s wide and mixed community. The Communications group will read all your comments and suggestions, and will establish a web steering group that will make sure that the future CERN web pages match the needs of the community. More information on this process, including the gradual 'retirement' of the grey Users' pages we are a...

  19. Web 25

    DEFF Research Database (Denmark)

    Web 25: Histories from the First 25 Years of the World Wide Web celebrates the 25th anniversary of the Web. Since the beginning of the 1990s, the Web has played an important role in the development of the Internet as well as in the development of most societies at large, from its early grey...... and blue webpages introducing the hyperlink for a wider public, to today’s multifaceted uses of the Web as an integrated part of our daily lives. This is the first book to look back at 25 years of Web evolution, and it tells some of the histories about how the Web was born and has developed. It takes...... the reader on an exciting time travel journey to learn more about the prehistory of the hyperlink, the birth of the Web, the spread of the early Web, and the Web’s introduction to the general public in mainstream media. Furthermore, case studies of blogs, literature, and traditional media going online...

  20. Web Engineering

    Energy Technology Data Exchange (ETDEWEB)

    White, Bebo

    2003-06-23

    Web Engineering is the application of systematic, disciplined and quantifiable approaches to development, operation, and maintenance of Web-based applications. It is both a pro-active approach and a growing collection of theoretical and empirical research in Web application development. This paper gives an overview of Web Engineering by addressing the questions: (a) why is it needed? (b) what is its domain of operation? (c) how does it help and what should it do to improve Web application development? and (d) how should it be incorporated in education and training? The paper discusses the significant differences that exist between Web applications and conventional software, the taxonomy of Web applications, the progress made so far and the research issues and experience of creating a specialization at the master's level. The paper reaches a conclusion that Web Engineering at this stage is a moving target since Web technologies are constantly evolving, making new types of applications possible, which in turn may require innovations in how they are built, deployed and maintained.

  1. TCRC Fertility Page

    Science.gov (United States)

    Testicular cancer and fertility are interrelated in numerous ways. TC usually affects young men still in the process of having a family. ...

  2. Web Caching

    Indian Academy of Sciences (India)

    The World Wide Web has been growing in leaps and bounds. Studies have indicated that this massive distributed system can benefit greatly by making use of appropriate caching methods. Intelligent Web caching can lessen the burden ...

  3. Traitor: associating concepts using the world wide web

    NARCIS (Netherlands)

    Drijfhout, Wanno; Oliver, J.; Oliver, Jundt; Wevers, L.; Hiemstra, Djoerd

    We use Common Crawl's 25TB data set of web pages to construct a database of associated concepts using Hadoop. The database can be queried through a web application with two query interfaces. A textual interface allows searching for similarities and differences between multiple concepts using a query

  4. Indian accent text-to-speech system for web browsing

    Indian Academy of Sciences (India)

    ... user the option to follow the link or continue perusing the current web page. The user can exercise the option either through a keyboard or via spoken commands. Future plans include refining the web parser, improvement of naturalness of synthetic speech and improving the robustness of the speech recognition system.

  5. Sample-based XPath Ranking for Web Information Extraction

    NARCIS (Netherlands)

    Jundt, Oliver; van Keulen, Maurice

    Web information extraction typically relies on a wrapper, i.e., program code or a configuration that specifies how to extract some information from web pages at a specific website. Manually creating and maintaining wrappers is a cumbersome and error-prone task. It may even be prohibitive as some
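
    In practice a wrapper of the kind discussed here often boils down to one XPath expression per field. The sketch below evaluates two candidate XPaths against a toy page with lxml; a sample-based ranker would score such candidates against annotated example values and keep the best-performing one. The HTML and the XPaths are illustrative.

    ```python
    # A wrapper as a set of candidate XPaths evaluated against a page (lxml).
    # The HTML snippet and the XPaths are illustrative.
    from lxml import html

    page = html.fromstring("""
    <html><body>
      <div class="product">
        <h1 class="title">Blue Widget</h1>
        <span class="price">$9.99</span>
      </div>
    </body></html>
    """)

    # A sample-based ranker would score candidates like these against annotated
    # example values and keep the best-performing XPath per field.
    candidates = ['//h1[@class="title"]/text()', '//span[@class="price"]/text()']
    for xpath in candidates:
        print(xpath, "->", page.xpath(xpath))
    ```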

  6. 21 CFR 1304.45 - Internet Web site disclosure requirements.

    Science.gov (United States)

    2010-04-01

    ... 21 Food and Drugs 9 2010-04-01 2010-04-01 false Internet Web site disclosure requirements. 1304.45... OF REGISTRANTS Online Pharmacies § 1304.45 Internet Web site disclosure requirements. (a) Each online... the following information on the homepage of each Internet site it operates, or on a page directly...

  7. Include Your Patrons in Web Design. Computers in Small Libraries

    Science.gov (United States)

    Roberts, Gary

    2005-01-01

    Successful Web publishing requires not only technical skills but also a refined sense of taste, a good understanding of design, and strong writing abilities. When designing a library Web page, a person must possess all of these talents and be able to market to a broad spectrum of patrons. As a result, library sites vary widely in their style and…

  8. Off the Beaten tracks: Exploring Three Aspects of Web Navigation

    NARCIS (Netherlands)

    Weinreich, H.; Obendorf, H.; Herder, E.; Mayer, M.; Edmonds, H.; Hawkey, K.; Kellar, M.; Turnbull, D.

    2006-01-01

    This paper presents results of a long-term client-side Web usage study, updating previous studies that range in age from five to ten years. We focus on three aspects of Web navigation: changes in the distribution of navigation actions, speed of navigation and within-page navigation. “Navigation

  9. Classical Hypermedia Virtues on the Web with Webstrates

    DEFF Research Database (Denmark)

    Bouvin, Niels Olof; Klokmose, Clemens Nylandsted

    2016-01-01

    We show and analyze herein how Webstrates can augment the Web from a classical hypermedia perspective. Webstrates turns the DOM of Web pages into persistent and collaborative objects. We demonstrate how this can be applied to realize bidirectional links, shared collaborative annotations, and in...

  10. TREC2002 Web, Novelty and Filtering Track Experiments Using PIRCS

    National Research Council Canada - National Science Library

    Kwok, K. L; Deng, P; Dinstl, N; Chan, M

    2006-01-01

    .... The Web track has two tasks: distillation and named-page retrieval. Distillation is a new utility concept for ranking documents, and needs new design on the output document ranked list after an ad-hoc retrieval from the web (.gov) collection...

  11. Ten years on, the web spans the globe

    CERN Multimedia

    Dalton, A W

    2003-01-01

Short article on the history of the WWW. Prof. Berners-Lee states that one of the main reasons the web was such a success was CERN's decision to make the web foundations and protocols available on a royalty-free basis (1/2 page).

  12. DW3 Classical Music Resources: Managing Mozart on the Web.

    Science.gov (United States)

    Fineman, Yale

    2001-01-01

    Discusses the development of DW3 (Duke World Wide Web) Classical Music Resources, a vertical portal that comprises the most comprehensive collection of classical music resources on the Web with links to more than 2800 non-commercial pages/sites in over a dozen languages. Describes the hierarchical organization of subject headings and considers…

  13. What’s New? Deploying a Library New Titles Page with Minimal Programming

    OpenAIRE

    John Meyerhofer

    2017-01-01

With a new titles web page, a library has a place to show faculty, students, and staff the items they are purchasing for their community. However, many times heavy programming knowledge and/or a LAMP stack (Linux, Apache, MySQL, PHP) or APIs separate a library’s data from making a new titles web page a reality. Without IT staff, a new titles page can become nearly impossible or not worth the effort. Here we will demonstrate how a small liberal arts college took its acquisition data and combine...

  14. An Optimization Model for Product Placement on Product Listing Pages

    Directory of Open Access Journals (Sweden)

    Yan-Kwang Chen

    2014-01-01

    Full Text Available The design of product listing pages is a key component of Website design because it has significant influence on the sales volume on a Website. This study focuses on product placement in designing product listing pages. Product placement concerns how venders of online stores place their products over the product listing pages for maximization of profit. This problem is very similar to the offline shelf management problem. Since product information sources on a Web page are typically communicated through the text and image, visual stimuli such as color, shape, size, and spatial arrangement often have an effect on the visual attention of online shoppers and, in turn, influence their eventual purchase decisions. In view of the above, this study synthesizes the visual attention literature and theory of shelf-space allocation to develop a mathematical programming model with genetic algorithms for finding optimal solutions to the focused issue. The validity of the model is illustrated with example problems.
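
    A minimal genetic-algorithm sketch of this placement idea follows: assign products to listing-page slots so that the sum of slot attention weights times product profits is maximized. The attention weights, profits, and GA parameters are illustrative assumptions, not values from the study.

        # Toy GA for product placement: which product goes in which slot?
        import random

        PROFIT = [5.0, 3.0, 8.0, 2.0, 6.0, 4.0]       # profit per product (assumed)
        ATTENTION = [1.0, 0.8, 0.6, 0.45, 0.35, 0.25]  # attention weight per slot (assumed)

        def fitness(layout):
            # layout[i] = index of the product placed in slot i
            return sum(ATTENTION[i] * PROFIT[p] for i, p in enumerate(layout))

        def crossover(a, b):
            # order crossover: keep a prefix of parent a, fill the rest in b's order
            cut = random.randrange(1, len(a))
            head = a[:cut]
            return head + [p for p in b if p not in head]

        def mutate(layout):
            i, j = random.sample(range(len(layout)), 2)
            layout[i], layout[j] = layout[j], layout[i]

        def evolve(generations=200, pop_size=30):
            pop = [random.sample(range(len(PROFIT)), len(PROFIT)) for _ in range(pop_size)]
            for _ in range(generations):
                pop.sort(key=fitness, reverse=True)
                survivors = pop[: pop_size // 2]
                children = []
                while len(survivors) + len(children) < pop_size:
                    a, b = random.sample(survivors, 2)
                    child = crossover(a, b)
                    if random.random() < 0.2:
                        mutate(child)
                    children.append(child)
                pop = survivors + children
            return max(pop, key=fitness)

        best = evolve()
        print(best, fitness(best))  # high-profit products drift to high-attention slots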

  15. Fermilab joins in global live Web cast

    CERN Multimedia

    Polansek, Tom

    2005-01-01

From 2 to 3:30 p.m., Lederman, who won the Nobel Prize for physics in 1988, will host his own wacky, science-centered talk show at Fermi National Accelerator Laboratory as part of a live, 12-hour, international Web cast celebrating Albert Einstein and the World Year of Physics (2/3 page)

  16. Knighthood for 'father of the web'

    CERN Multimedia

    Uhlig, R

    2003-01-01

    "Tim Berners-Lee, the father of the world wide web, was awarded a knighthood for services to the internet, which his efforts transformed from a haunt of computer geeks, scientists and the military into a global phenomenon" (1/2 page).

  17. Resolving person names in web people search

    NARCIS (Netherlands)

    Balog, K.; Azzopardi, L.; de Rijke, M.; King, I.; Baeza-Yates, R.

    2009-01-01

    Disambiguating person names in a set of documents (such as a set of web pages returned in response to a person name) is a key task for the presentation of results and the automatic profiling of experts. With largely unstructured documents and an unknown number of people with the same name the

  18. Personal name resolution of web people search

    NARCIS (Netherlands)

    Balog, K.; Azzopardi, L.; de Rijke, M.

    2008-01-01

    Disambiguating personal names in a set of documents (such as a set of web pages returned in response to a person name) is a difficult and challenging task. In this paper, we explore the extent to which the "cluster hypothesis" for this task holds (i.e., that similar documents tend to represent the

  19. Book Reviews, Annotation, and Web Technology.

    Science.gov (United States)

    Schulze, Patricia

    From reading texts to annotating web pages, grade 6-8 students rely on group cooperation and individual reading and writing skills in this research project that spans six 50-minute lessons. Student objectives for this project are that they will: read, discuss, and keep a journal on a book in literature circles; understand the elements of and…

  20. A Web Browser Interface to Manage the Searching and Organizing of Information on the Web by Learners

    Science.gov (United States)

    Li, Liang-Yi; Chen, Gwo-Dong

    2010-01-01

    Information Gathering is a knowledge construction process. Web learners make a plan for their Information Gathering task based on their prior knowledge. The plan is evolved with new information encountered and their mental model is constructed through continuously assimilating and accommodating new information gathered from different Web pages. In…

  1. On HTML and XML based web design and implementation techniques

    International Nuclear Information System (INIS)

    Bezboruah, B.; Kalita, M.

    2006-05-01

Web implementation is truly a multidisciplinary field, with influences from programming, the choice of scripting languages, graphic design, user interface design, and database design. The challenge for a Web designer/implementer is the ability to create an attractive and informative Web site. To work with the universal framework and link diagrams from the design process, as well as the Web specifications and domain information, it is essential to create Hypertext Markup Language (HTML) or other software and multimedia to accomplish the Web site's objective. In this article we discuss Web design standards and the techniques involved in Web implementation based on HTML and Extensible Markup Language (XML). We also discuss the advantages and disadvantages of HTML against its successor XML in designing and implementing a Web site. We have developed two Web pages, one utilizing the features of HTML and the other based on the features of XML, to carry out the present investigation. (author)

  2. Academic medical center libraries on the Web.

    Science.gov (United States)

    Tannery, N H; Wessel, C B

    1998-10-01

    Academic medical center libraries are moving towards publishing electronically, utilizing networked technologies, and creating digital libraries. The catalyst for this movement has been the Web. An analysis of academic medical center library Web pages was undertaken to assess the information created and communicated in early 1997. A summary of present uses and suggestions for future applications is provided. A method for evaluating and describing the content of library Web sites was designed. The evaluation included categorizing basic information such as description and access to library services, access to commercial databases, and use of interactive forms. The main goal of the evaluation was to assess original resources produced by these libraries.

  3. Stresses of Single Parenting

    Science.gov (United States)

... What are some ways ... way. Check your local library for books on parenting. Local hospitals, the YMCA, and church groups often ...

  4. What’s New? Deploying a Library New Titles Page with Minimal Programming

    Directory of Open Access Journals (Sweden)

    John Meyerhofer

    2017-01-01

Full Text Available With a new titles web page, a library has a place to show faculty, students, and staff the items they are purchasing for their community. However, many times heavy programming knowledge and/or a LAMP stack (Linux, Apache, MySQL, PHP) or APIs separate a library’s data from making a new titles web page a reality. Without IT staff, a new titles page can become nearly impossible or not worth the effort. Here we will demonstrate how a small liberal arts college took its acquisition data and combined it with a Google Sheet, HTML, and a little JavaScript to create a new titles web page that was dynamic and engaging to its users.

  5. Web watch

    CERN Multimedia

    Dodson, S

    2002-01-01

    British Telecom is claiming it invented hypertext and has a 1976 US patent to prove it. The company is accusing 17 of the biggest US internet service providers of using its technology without paying a royalty fee (1/2 page).

  6. PageRank, HITS and a unified framework for link analysis

    Energy Technology Data Exchange (ETDEWEB)

    Ding, Chris; He, Xiaofeng; Husbands, Parry; Zha, Hongyuan; Simon, Horst

    2001-10-01

Two popular webpage ranking algorithms are HITS and PageRank. HITS emphasizes mutual reinforcement between authority and hub webpages, while PageRank emphasizes hyperlink weight normalization and web surfing based on random walk models. We systematically generalize/combine these concepts into a unified framework. The ranking framework contains a large algorithm space; HITS and PageRank are two extreme ends in this space. We study several normalized ranking algorithms which are intermediate between HITS and PageRank, and obtain closed-form solutions. We show that, to first order approximation, all ranking algorithms in this framework, including PageRank and HITS, lead to the same ranking, which is highly correlated with ranking by indegree. These results support the notion that in web resource ranking indegree and outdegree are of fundamental importance. Rankings of webgraphs of different sizes and queries are presented to illustrate our analysis.
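
    For reference, the random-surfer model underlying PageRank can be sketched in a few lines of Python. This is a toy power iteration with damping factor d, not the paper's generalized framework; the graph is invented.

        # Toy PageRank power iteration: p = d * M p + (1 - d)/n.
        def pagerank(links, d=0.85, iters=50):
            """links: dict node -> list of nodes it points to."""
            nodes = list(links)
            n = len(nodes)
            rank = {u: 1.0 / n for u in nodes}
            for _ in range(iters):
                new = {u: (1.0 - d) / n for u in nodes}
                for u in nodes:
                    out = links[u]
                    if not out:            # dangling node: spread its rank uniformly
                        for v in nodes:
                            new[v] += d * rank[u] / n
                    else:
                        for v in out:
                            new[v] += d * rank[u] / len(out)
                rank = new
            return rank

        graph = {"a": ["b", "c"], "b": ["c"], "c": ["a"], "d": ["c"]}
        print(pagerank(graph))  # "c" collects the most rank, matching its high indegree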

  7. Sensor web

    Science.gov (United States)

    Delin, Kevin A. (Inventor); Jackson, Shannon P. (Inventor)

    2011-01-01

A Sensor Web formed of a number of different sensor pods. Each of the sensor pods includes a clock which is synchronized with a master clock, so that all of the sensor pods in the Web share a synchronized clock. The synchronization is carried out by first using a coarse synchronization, which takes less power, and subsequently carrying out a fine synchronization of all the pods in the Web. After the synchronization, the pods ping their neighbors to determine which pods are listening and responding, and then listen only during time slots corresponding to the pods which responded.

  8. DERIVING USER ACCESS PATTERNS AND MINING WEB COMMUNITY WITH WEB-LOG DATA FOR PREDICTING USER SESSIONS WITH PAJEK

    Directory of Open Access Journals (Sweden)

    S. Balaji

    2012-10-01

Full Text Available Web logs are a young and dynamic media type. Due to the intrinsic relationships among Web objects and the lack of a uniform schema for web documents, Web community mining has become a significant area for Web data management and analysis. Research on Web communities spans a number of research domains. In this paper an ontological model is presented, along with some recent studies on this topic, which cover finding relevant Web pages based on linkage information and discovering user access patterns by analyzing Web log files. A simulation has been created with data crawled from an academic website. The simulation is implemented in a JAVA and ORACLE environment. Results show that predicting user sessions can yield plenty of vital information for business intelligence. Search engine optimization could also use these potential results, which are discussed in the paper in detail.
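
    As one concrete step of the log analysis described above, the sketch below splits each user's requests into sessions whenever the gap between consecutive requests exceeds a timeout. The 30-minute threshold is a common heuristic and an assumption here, not a value taken from the paper.

        # Derive user sessions from raw web-log records (illustrative sketch).
        from datetime import datetime, timedelta

        TIMEOUT = timedelta(minutes=30)

        def sessionize(records):
            """records: list of (user, timestamp, url) with datetime timestamps."""
            sessions = []
            by_user = {}
            for user, ts, url in sorted(records, key=lambda r: (r[0], r[1])):
                current = by_user.get(user)
                if current is None or ts - current[-1][0] > TIMEOUT:
                    by_user[user] = []          # start a new session for this user
                    sessions.append((user, by_user[user]))
                by_user[user].append((ts, url))
            return sessions

        logs = [
            ("u1", datetime(2012, 10, 1, 9, 0), "/index"),
            ("u1", datetime(2012, 10, 1, 9, 5), "/courses"),
            ("u1", datetime(2012, 10, 1, 11, 0), "/index"),   # new session: >30 min gap
        ]
        for user, pages in sessionize(logs):
            print(user, [url for _, url in pages])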

  9. News from the Library: The CERN Web Archive

    CERN Multimedia

    CERN Library

    2012-01-01

    The World Wide Web was born at CERN in 1989. However, although historic paper documents from over 50 years ago survive in the CERN Archive, it is by no means certain that we will be able to consult today's web pages 50 years from now.   The Internet Archive's Wayback Machine includes an impressive collection of archived CERN web pages from 1996 onwards. However, their coverage is not complete - they aim for broad coverage of the whole Internet, rather than in-depth coverage of particular organisations. To try to fill this gap, the CERN Archive has entered into a partnership agreement with the Internet Memory Foundation. Harvesting of CERN's publicly available web pages is now being carried out on a regular basis, and the results are available here. 

  10. Evaluating The Markov Assumption For Web Usage Mining

    DEFF Research Database (Denmark)

    Jespersen, S.; Pedersen, Torben Bach; Thorhauge, J.

    2003-01-01

Web usage mining concerns the discovery of common browsing patterns, i.e., pages requested in sequence, from web logs. To cope with the enormous amounts of data, several aggregated structures based on statistical models of web surfing have appeared, e.g., the Hypertext Probabilistic Grammar (HPG) model [Borges & Levene 1999]. These techniques typically rely on the Markov assumption with history depth n, i.e., it is assumed that the next requested page is only dependent on the last n pages visited. This is not always valid, i.e., false browsing patterns may be discovered. However, to our knowledge there has been no systematic study of the validity of the Markov assumption w.r.t. web usage mining and the resulting quality of the mined browsing patterns. In this paper we systematically investigate the quality of browsing patterns mined from structures based on the Markov assumption. Formal...
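
    The Markov assumption with history depth n can be made concrete with a small estimator of P(next page | last n pages) from observed sessions. This is an illustrative sketch, not the HPG implementation.

        # Estimate n-th order Markov transition probabilities from sessions.
        from collections import Counter, defaultdict

        def transition_model(sessions, n=1):
            counts = defaultdict(Counter)
            for pages in sessions:
                for i in range(len(pages) - n):
                    history = tuple(pages[i:i + n])
                    counts[history][pages[i + n]] += 1
            # normalize counts into conditional probabilities
            return {
                h: {p: c / sum(nxt.values()) for p, c in nxt.items()}
                for h, nxt in counts.items()
            }

        sessions = [["home", "news", "sports"], ["home", "news", "weather"],
                    ["home", "about"]]
        model = transition_model(sessions, n=1)
        print(model[("home",)])   # {'news': 0.667, 'about': 0.333}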

  11. 78 FR 67881 - Nondiscrimination on the Basis of Disability in Air Travel: Accessibility of Web Sites and...

    Science.gov (United States)

    2013-11-12

    ... ticket agents are providing schedule and fare information and marketing covered air transportation... corresponding accessible pages on a mobile Web site by one year after the final rule's effective date; and (3... criteria) as the required accessibility standard for all public-facing Web pages involved in marketing air...

  12. Web Analytics

    Science.gov (United States)

    EPA’s Web Analytics Program collects, analyzes, and provides reports on traffic, quality assurance, and customer satisfaction metrics for EPA’s website. The program uses a variety of analytics tools, including Google Analytics and CrazyEgg.

  13. A Web Server for MACCS Magnetometer Data

    Science.gov (United States)

    Engebretson, Mark J.

    1998-01-01

    NASA Grant NAG5-3719 was provided to Augsburg College to support the development of a web server for the Magnetometer Array for Cusp and Cleft Studies (MACCS), a two-dimensional array of fluxgate magnetometers located at cusp latitudes in Arctic Canada. MACCS was developed as part of the National Science Foundation's GEM (Geospace Environment Modeling) Program, which was designed in part to complement NASA's Global Geospace Science programs during the decade of the 1990s. This report describes the successful use of these grant funds to support a working web page that provides both daily plots and file access to any user accessing the worldwide web. The MACCS home page can be accessed at http://space.augsburg.edu/space/MaccsHome.html.

  14. Web party effect: a cocktail party effect in the web environment

    Science.gov (United States)

    Gerbino, Walter

    2015-01-01

    In goal-directed web navigation, labels compete for selection: this process often involves knowledge integration and requires selective attention to manage the dizziness of web layouts. Here we ask whether the competition for selection depends on all web navigation options or only on those options that are more likely to be useful for information seeking, and provide evidence in favor of the latter alternative. Participants in our experiment navigated a representative set of real websites of variable complexity, in order to reach an information goal located two clicks away from the starting home page. The time needed to reach the goal was accounted for by a novel measure of home page complexity based on a part of (not all) web options: the number of links embedded within web navigation elements weighted by the number and type of embedding elements. Our measure fully mediated the effect of several standard complexity metrics (the overall number of links, words, images, graphical regions, the JPEG file size of home page screenshots) on information seeking time and usability ratings. Furthermore, it predicted the cognitive demand of web navigation, as revealed by the duration judgment ratio (i.e., the ratio of subjective to objective duration of information search). Results demonstrate that focusing on relevant links while ignoring other web objects optimizes the deployment of attentional resources necessary to navigation. This is in line with a web party effect (i.e., a cocktail party effect in the web environment): users tune into web elements that are relevant for the achievement of their navigation goals and tune out all others. PMID:25802803

  15. Learning in a Sheltered Internet Environment: The Use of WebQuests

    Science.gov (United States)

    Segers, Eliane; Verhoeven, Ludo

    2009-01-01

    The present study investigated the effects on learning in a sheltered Internet environment using so-called WebQuests in elementary school classrooms in the Netherlands. A WebQuest is an assignment presented together with a series of web pages to help guide children's learning. The learning gains and quality of the work of 229 sixth graders…

  16. Web Accessibility in Europe and the United States: What We Are Doing to Increase Inclusion

    Science.gov (United States)

    Wheaton, Joseph; Bertini, Patrizia

    2007-01-01

    Accessibility is hardly a new problem and certainly did not originate with the Web. Lack of access to buildings long preceded the call for accessible Web content. Although it is unlikely that rehabilitation educators look at Web page accessibility with indifference, many may also find it difficult to implement. The authors posit three reasons why…

  17. Learning in a sheltered Internet environment: The use of WebQuests

    NARCIS (Netherlands)

    Segers, P.C.J.; Verhoeven, L.T.W.

    2009-01-01

    The present study investigated the effects on learning in a sheltered Internet environment using so-called WebQuests in elementary school classrooms in the Netherlands. A WebQuest is an assignment presented together with a series of web pages to help guide children's learning. The learning gains and

  18. SiteGuide: An example-based approach to web site development assistance

    NARCIS (Netherlands)

    Hollink, V.; de Boer, V.; van Someren, M.; Filipe, J.; Cordeiro, J.

    2009-01-01

    We present ‘SiteGuide’, a tool that helps web designers to decide which information will be included in a new web site and how the information will be organized. SiteGuide takes as input URLs of web sites from the same domain as the site the user wants to create. It automatically searches the pages

  19. [Anesthesia and World Wide Web 2.0. Instructions for use].

    Science.gov (United States)

    Klein, K U; Thal, S C

    2009-09-01

    The World Wide Web (WWW) offers an increasing number of medical information sources with unprecedented actuality. However, the vast numbers of web pages make it difficult to find reliable sources of information. In respect of the web 2.0 technology this manuscript aims to present instructions for use of the WWW to anesthesiologists.

  20. Full page insight

    DEFF Research Database (Denmark)

    Cortsen, Rikke Platz

    2014-01-01

    Alan Moore and his collaborating artists often manipulate time and space by drawing upon the formal elements of comics and making alternative constellations. This article looks at an element that is used frequently in comics of all kinds – the full page – and discusses how it helps shape spatio-t...

  1. Pages 552 - 556.pmd

    African Journals Online (AJOL)

    Administrator

recorded using an optical microscope. Image Pro-Plus 5.1 was used to calculate the relative area of penumbra. Gap junction protein Cx43 was measured using Western blot. The tissues were sonicated in lysis buffer to extract and determine total protein. Each sample was electrophoresed on 10% SDS-PAGE and then transferred ...

  2. Title and title page.

    Science.gov (United States)

    Peh, W C G; Ng, K H

    2008-08-01

    The title gives the first impression of a scientific article, and should accurately convey to a reader what the whole article is about. A good title is short, informative and attractive. The title page provides information about the authors, their affiliations and the corresponding author's contact details.

  3. Folding worlds between pages

    CERN Document Server

    Meier, Matthias

    2010-01-01

    "We all remember pop-up books form our childhood. As fascinated as we were back then, we probably never imagined how much engineering know-how went into these books. Pop-up engineer Anton Radevsky has even managed to fold a 27-kilometre particle accelerator into a book" (4 pages)

  4. stage/page/play

    DEFF Research Database (Denmark)

    context. Contributors: Per Brask, Dario Fo, Jette Barnholdt Hansen, Pil Hansen, Sven Åke Heed, Ulla Kallenbach, Sofie Kluge, Annelis Kuhlmann, Kela Kvam, Anna Lawaetz, Bent Flemming Nielsen, Franco Perrelli, Magnus Tessing Schneider, Antonio Scuderi. stage/page/play is published as a festschrift...

  5. PageRank model of opinion formation on Ulam networks

    Science.gov (United States)

    Chakhmakhchyan, L.; Shepelyansky, D.

    2013-12-01

We consider a PageRank model of opinion formation on Ulam networks, generated by the intermittency map and the Chirikov typical map. The Ulam networks generated by these maps have certain similarities with such scale-free networks as the World Wide Web (WWW), showing an algebraic decay of the PageRank probability. We find that the opinion formation process on Ulam networks has certain similarities but also distinct features comparing to the WWW. We attribute these distinctions to internal differences in network structure of the Ulam and WWW networks. We also analyze the process of opinion formation in the frame of generalized Sznajd model which protects opinion of small communities.

  6. Chapter 07: Species description pages

    Science.gov (United States)

    Alex C. Wiedenhoeft

    2011-01-01

    These pages are written to be the final step in the identification process; you will be directed to them by the key in Chapter 6. Each species or group of similar species in the same genus has its own set of pages. The information in the first page describes the characteristics of the wood covered in the manual. The page shows images of similar or confusable woods,...

  7. PageRank (II): Mathematics

    African Journals Online (AJOL)

    maths/stats

PageRank is a virtual value that means nothing until it is put into the context of search engine results. Pages with higher PageRank will tend to rank better in search engine results, provided they are still optimized for the keywords being searched for. Visitors coming from search engines are the most prized kind of ...

  8. Higher-order web link analysis using multilinear algebra.

    Energy Technology Data Exchange (ETDEWEB)

    Kenny, Joseph P.; Bader, Brett William (Sandia National Laboratories, Albuquerque, NM); Kolda, Tamara Gibson

    2005-07-01

    Linear algebra is a powerful and proven tool in web search. Techniques, such as the PageRank algorithm of Brin and Page and the HITS algorithm of Kleinberg, score web pages based on the principal eigenvector (or singular vector) of a particular non-negative matrix that captures the hyperlink structure of the web graph. We propose and test a new methodology that uses multilinear algebra to elicit more information from a higher-order representation of the hyperlink graph. We start by labeling the edges in our graph with the anchor text of the hyperlinks so that the associated linear algebra representation is a sparse, three-way tensor. The first two dimensions of the tensor represent the web pages while the third dimension adds the anchor text. We then use the rank-1 factors of a multilinear PARAFAC tensor decomposition, which are akin to singular vectors of the SVD, to automatically identify topics in the collection along with the associated authoritative web pages.
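
    To give the flavor of the approach, the sketch below runs a toy rank-1 higher-order power iteration on a small page x page x anchor-term tensor. It assumes numpy, uses invented data, and is not the authors' PARAFAC implementation.

        # Rank-1 higher-order power iteration on a 3-way link tensor (sketch).
        import numpy as np

        def rank1_factors(T, iters=100):
            """T: dense array of shape (pages, pages, terms)."""
            n, _, k = T.shape
            u, v, w = np.ones(n), np.ones(n), np.ones(k)
            for _ in range(iters):
                u = np.einsum('ijt,j,t->i', T, v, w); u /= np.linalg.norm(u)
                v = np.einsum('ijt,i,t->j', T, u, w); v /= np.linalg.norm(v)
                w = np.einsum('ijt,i,j->t', T, u, v); w /= np.linalg.norm(w)
            return u, v, w   # source (hub-like), target (authority-like), term factors

        # toy data: T[i, j, t] = 1 if page i links to page j with anchor term t
        T = np.zeros((3, 3, 2))
        T[0, 2, 0] = T[1, 2, 0] = 1.0   # pages 0 and 1 link to page 2 via term 0
        hubs, authorities, terms = rank1_factors(T)
        print(authorities.round(2))     # page 2 dominates the authority-like factor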

  9. Constructing a web recommender system using web usage mining and user’s profiles

    Directory of Open Access Journals (Sweden)

    T. Mombeini

    2014-12-01

Full Text Available The World Wide Web is a great source of information and is nowadays widely used, with the available information changing dynamically. However, the large number of webpages often confuses users, and it is hard for them to find information matching their interests. Therefore, it is necessary to provide a system capable of guiding users towards their desired choices and services. Recommender systems search among a large collection of user interests and recommend those which are likely to be favored most by the user. Web usage mining was designed to operate on web server records, which are included in user search results. Therefore, recommender servers use the web usage mining technique to predict users’ browsing patterns and recommend those patterns in the form of a suggestion list. In this article, a recommender system based on the web usage mining phases (online and offline is proposed. In the offline phase, the first step is to analyze user access records to identify user sessions. Next, user profiles are built from the server records based on the frequency of access to pages, the time spent by the user on each page, and the date of each page view. The date is of importance since users are more likely to request new pages than old ones, and old pages are less likely to be viewed, as users mostly look for new information. Following the creation of user profiles, users are grouped into clusters using the fuzzy C-means clustering algorithm and the S(c criterion, based on their similarities. In the online phase, a neural network is offered to identify the suggestion model, while online suggestions are generated by the suggestion module for the active user. Search engines analyze suggestion lists based on the rate of user interest in pages and page rank, and finally suggest appropriate pages to the active user. Experiments show that the proposed method of predicting users’ recently requested pages has more accuracy and
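
    A compact fuzzy C-means sketch for the clustering step is given below. It is illustrative only: the toy profile vectors, the fuzzifier m, and the cluster count are assumptions, the S(c validity criterion for choosing the number of clusters is omitted, and numpy is assumed.

        # Fuzzy C-means: each user profile gets a soft membership in every cluster.
        import numpy as np

        def fuzzy_c_means(X, c=2, m=2.0, iters=100):
            n = len(X)
            U = np.random.dirichlet(np.ones(c), size=n)   # memberships, rows sum to 1
            for _ in range(iters):
                W = U ** m
                centers = (W.T @ X) / W.sum(axis=0)[:, None]
                dist = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-9
                inv = dist ** (-2.0 / (m - 1.0))          # standard FCM update
                U = inv / inv.sum(axis=1, keepdims=True)
            return U, centers

        # toy 2-feature user profiles (e.g., normalized access frequency, time spent)
        profiles = np.array([[0.9, 0.1], [0.8, 0.2], [0.1, 0.9], [0.2, 0.8]])
        U, centers = fuzzy_c_means(profiles)
        print(U.round(2))   # two clear clusters with soft memberships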

  10. Effect of a Web-based intervention to promote physical activity and improve health among physically inactive adults

    DEFF Research Database (Denmark)

    Hansen, Andreas Wolff; Grønbæk, Morten; Helge, Jørn Wulff

    2012-01-01

    an intervention (website) (n = 6055) or a no-intervention control group (n = 6232) in 2008. The intervention website was founded on the theories of stages of change and of planned behavior and, apart from a forum page where a physiotherapist answered questions about PA and training, was fully automated. After 3...... in the website group. CONCLUSIONS: Based on our findings, we suggest that active users of a Web-based PA intervention can improve their level of PA. However, for unmotivated users, single-tailored feedback may be too brief. Future research should focus on developing more sophisticated interventions...

  11. Characterizing and modeling web sessions with applications

    OpenAIRE

    Chiarandini, Luca

    2014-01-01

This thesis focuses on the analysis and modeling of web sessions, groups of requests made by a single user for a single navigation purpose. Understanding how people browse through websites is important, helping us to improve interfaces and provide better content. After first conducting a statistical analysis of web sessions, we go on to present algorithms to summarize and model web sessions. Finally, we describe applications that use novel browsing methods, in particular parallel...

  12. Analysis of Web Spam for Non-English Content: Toward More Effective Language-Based Classifiers.

    Directory of Open Access Journals (Sweden)

    Mansour Alsaleh

    Full Text Available Web spammers aim to obtain higher ranks for their web pages by including spam contents that deceive search engines in order to include their pages in search results even when they are not related to the search terms. Search engines continue to develop new web spam detection mechanisms, but spammers also aim to improve their tools to evade detection. In this study, we first explore the effect of the page language on spam detection features and we demonstrate how the best set of detection features varies according to the page language. We also study the performance of Google Penguin, a newly developed anti-web spamming technique for their search engine. Using spam pages in Arabic as a case study, we show that unlike similar English pages, Google anti-spamming techniques are ineffective against a high proportion of Arabic spam pages. We then explore multiple detection features for spam pages to identify an appropriate set of features that yields a high detection accuracy compared with the integrated Google Penguin technique. In order to build and evaluate our classifier, as well as to help researchers to conduct consistent measurement studies, we collected and manually labeled a corpus of Arabic web pages, including both benign and spam pages. Furthermore, we developed a browser plug-in that utilizes our classifier to warn users about spam pages after clicking on a URL and by filtering out search engine results. Using Google Penguin as a benchmark, we provide an illustrative example to show that language-based web spam classifiers are more effective for capturing spam contents.
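
    In the same spirit, a baseline content-based spam classifier can be trained per language with bag-of-words features and a linear model. This sketch assumes scikit-learn and an invented toy corpus; it is not the study's feature set, nor the Google Penguin mechanism.

        # Toy per-language spam-page classifier (illustrative sketch).
        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.linear_model import LogisticRegression
        from sklearn.pipeline import make_pipeline

        # invented labeled pages; per the study, a separate labeled corpus
        # (and feature set) per page language beats one universal classifier
        pages = ["cheap pills buy now cheap cheap", "conference program and schedule",
                 "win money win prizes click here", "library opening hours and contact"]
        labels = [1, 0, 1, 0]   # 1 = spam, 0 = benign

        classifier = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
        classifier.fit(pages, labels)
        print(classifier.predict(["buy cheap prizes now click"]))   # likely [1]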

  13. Customizable scientific web portal for fusion research

    Energy Technology Data Exchange (ETDEWEB)

    Abla, G., E-mail: abla@fusion.gat.co [General Atomics, P.O. Box 85608, San Diego, CA (United States); Kim, E.N.; Schissel, D.P.; Flanagan, S.M. [General Atomics, P.O. Box 85608, San Diego, CA (United States)

    2010-07-15

    Web browsers have become a major application interface for participating in scientific experiments such as those in magnetic fusion. The recent advances in web technologies motivated the deployment of interactive web applications with rich features. In the scientific world, web applications have been deployed in portal environments. When used in a scientific research environment, such as fusion experiments, web portals can present diverse sources of information in a unified interface. However, the design and development of a scientific web portal has its own challenges. One such challenge is that a web portal needs to be fast and interactive despite the high volume of information and number of tools it presents. Another challenge is that the visual output of the web portal must not be overwhelming to the end users, despite the high volume of data generated by fusion experiments. Therefore, the applications and information should be customizable depending on the needs of end users. In order to meet these challenges, the design and implementation of a web portal needs to support high interactivity and user customization. A web portal has been designed to support the experimental activities of DIII-D researchers worldwide by providing multiple services, such as real-time experiment status monitoring, diagnostic data access and interactive data visualization. The web portal also supports interactive collaborations by providing a collaborative logbook, shared visualization and online instant messaging services. The portal's design utilizes the multi-tier software architecture and has been implemented utilizing web 2.0 technologies, such as AJAX, Django, and Memcached, to develop a highly interactive and customizable user interface. It offers a customizable interface with personalized page layouts and list of services, which allows users to create a unique, personalized working environment to fit their own needs and interests. This paper describes the software

  14. Customizable scientific web portal for fusion research

    International Nuclear Information System (INIS)

    Abla, G.; Kim, E.N.; Schissel, D.P.; Flanagan, S.M.

    2010-01-01

    Web browsers have become a major application interface for participating in scientific experiments such as those in magnetic fusion. The recent advances in web technologies motivated the deployment of interactive web applications with rich features. In the scientific world, web applications have been deployed in portal environments. When used in a scientific research environment, such as fusion experiments, web portals can present diverse sources of information in a unified interface. However, the design and development of a scientific web portal has its own challenges. One such challenge is that a web portal needs to be fast and interactive despite the high volume of information and number of tools it presents. Another challenge is that the visual output of the web portal must not be overwhelming to the end users, despite the high volume of data generated by fusion experiments. Therefore, the applications and information should be customizable depending on the needs of end users. In order to meet these challenges, the design and implementation of a web portal needs to support high interactivity and user customization. A web portal has been designed to support the experimental activities of DIII-D researchers worldwide by providing multiple services, such as real-time experiment status monitoring, diagnostic data access and interactive data visualization. The web portal also supports interactive collaborations by providing a collaborative logbook, shared visualization and online instant messaging services. The portal's design utilizes the multi-tier software architecture and has been implemented utilizing web 2.0 technologies, such as AJAX, Django, and Memcached, to develop a highly interactive and customizable user interface. It offers a customizable interface with personalized page layouts and list of services, which allows users to create a unique, personalized working environment to fit their own needs and interests. This paper describes the software

  15. TDCCREC: AN EFFICIENT AND SCALABLE WEB-BASED RECOMMENDATION SYSTEM

    Directory of Open Access Journals (Sweden)

    K.Latha

    2010-10-01

Full Text Available Web users are presented with a complex information space where the volume of information available to them is huge. Hence the recommender system, which effectively recommends web pages related to the current webpage, to provide the user with further customized reading material. To enhance the performance of recommender systems, we propose an elegant web-based recommendation system, the Truth Discovery based Content and Collaborative RECommender (TDCCREC, which is capable of addressing scalability. Existing approaches such as learning automata deal with the usage and navigational patterns of users. On the other hand, Weighted Association Rule is applied for recommending web pages by assigning weights to each page in all the transactions. Both of them have their own disadvantages. The websites recommended by search engines carry no guarantee of information correctness and often deliver conflicting information. To solve this, content-based filtering and collaborative filtering techniques are introduced for recommending web pages to the active user, along with the trustworthiness of the website and the confidence of facts, which outperforms the existing methods. Our results show how the proposed recommender system performs better in predicting the next request of web users.
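
    As a rough illustration of the weighted-association idea (not the TDCCREC algorithm itself), the sketch below recommends the pages that co-occur with the active user's current page, weighting each co-occurrence by a per-page weight such as time spent. All names and weights are invented.

        # Weighted co-occurrence recommendation (illustrative sketch).
        from collections import defaultdict

        def recommend(current, transactions, weights, top=2):
            """transactions: list of sets of pages seen together in one session."""
            score = defaultdict(float)
            for pages in transactions:
                if current in pages:
                    for p in pages - {current}:
                        score[p] += weights.get(p, 1.0)
            return sorted(score, key=score.get, reverse=True)[:top]

        sessions = [{"home", "docs", "faq"}, {"home", "docs"}, {"home", "contact"}]
        time_spent = {"docs": 2.5, "faq": 1.0, "contact": 0.5}   # per-page weights
        print(recommend("home", sessions, time_spent))           # ['docs', 'faq']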

  16. Web Caching

    Indian Academy of Sciences (India)

    The user may never realize that the cache is between the client and server except in special circumstances. It is important to distinguish between Web cache and a proxy server as their functions are often misunderstood. Proxy servers serve as an intermediary to place a firewall between network users and the outside world.

  17. Fiber webs

    Science.gov (United States)

    Roger M. Rowell; James S. Han; Von L. Byrd

    2005-01-01

    Wood fibers can be used to produce a wide variety of low-density three-dimensional webs, mats, and fiber-molded products. Short wood fibers blended with long fibers can be formed into flexible fiber mats, which can be made by physical entanglement, nonwoven needling, or thermoplastic fiber melt matrix technologies. The most common types of flexible mats are carded, air...

  18. Undergraduate Students’ Evaluation Criteria When Using Web Resources for Class Papers

    Directory of Open Access Journals (Sweden)

    Tsai-Youn Hung

    2004-09-01

Full Text Available The growth in popularity of the World Wide Web has dramatically changed the way undergraduate students conduct information searches. The purpose of this study is to investigate what core quality criteria undergraduate students use to evaluate Web resources for their class papers, and to what extent they evaluate them. This study reports on five Web page evaluations and a questionnaire survey of thirty-five undergraduate students in the Information Technology and Informatics Program at Rutgers University. Results show that undergraduate students have become increasingly sophisticated about using Web resources, but not yet sophisticated about searching them. Undergraduate students used only one or two surface quality criteria to evaluate Web resources. They made immediate judgments about the surface features of Web pages and ignored the content of the documents themselves. This research suggests that undergraduate instructors should take responsibility for instructing students in basic Web use or work with librarians to develop undergraduate students’ information literacy skills.

  19. The use of the TWiki Web in ATLAS

    International Nuclear Information System (INIS)

    Amram, Nir; Antonelli, Stefano; Haywood, Stephen; Lloyd, Steve; Luehring, Frederick; Poulard, Gilbert

    2010-01-01

    The ATLAS Experiment, with over 2000 collaborators, needs efficient and effective means of communicating information. The Collaboration has been using the TWiki Web at CERN for over three years and now has more than 7000 web pages, some of which are protected. This number greatly exceeds the number of 'static' HTML pages, and in the last year, there has been a significant migration to the TWiki. The TWiki is one example of the many different types of Wiki web which exist. In this paper, a description is given of the ATLAS TWiki at CERN. The tools used by the Collaboration to manage the TWiki are described and some of the problems encountered explained. A very useful development has been the creation of a set of Workbooks (Users' Guides) - these have benefitted from the TWiki environment and, in particular, a tool to extract pdf from the associated pages.

  20. PageMan: An interactive ontology tool to generate, display, and annotate overview graphs for profiling experiments

    Directory of Open Access Journals (Sweden)

    Hannah Matthew A

    2006-12-01

    Full Text Available Abstract Background Microarray technology has become a widely accepted and standardized tool in biology. The first microarray data analysis programs were developed to support pair-wise comparison. However, as microarray experiments have become more routine, large scale experiments have become more common, which investigate multiple time points or sets of mutants or transgenics. To extract biological information from such high-throughput expression data, it is necessary to develop efficient analytical platforms, which combine manually curated gene ontologies with efficient visualization and navigation tools. Currently, most tools focus on a few limited biological aspects, rather than offering a holistic, integrated analysis. Results Here we introduce PageMan, a multiplatform, user-friendly, and stand-alone software tool that annotates, investigates, and condenses high-throughput microarray data in the context of functional ontologies. It includes a GUI tool to transform different ontologies into a suitable format, enabling the user to compare and choose between different ontologies. It is equipped with several statistical modules for data analysis, including over-representation analysis and Wilcoxon statistical testing. Results are exported in a graphical format for direct use, or for further editing in graphics programs. PageMan provides a fast overview of single treatments, allows genome-level responses to be compared across several microarray experiments covering, for example, stress responses at multiple time points. This aids in searching for trait-specific changes in pathways using mutants or transgenics, analyzing development time-courses, and comparison between species. In a case study, we analyze the results of publicly available microarrays of multiple cold stress experiments using PageMan, and compare the results to a previously published meta-analysis. PageMan offers a complete user's guide, a web-based over-representation analysis as
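
    The over-representation analysis that PageMan performs per ontology category boils down to a hypergeometric tail test. The sketch below shows that computation on invented counts; it assumes scipy and is not PageMan's own code.

        # Hypergeometric over-representation test (illustrative counts).
        from scipy.stats import hypergeom

        N, K = 20000, 150      # genes on the array / genes in the category
        n, k = 500, 18         # responding genes / responding genes in the category

        p_value = hypergeom.sf(k - 1, N, K, n)   # P(X >= k) under random draws
        print(f"enrichment p-value: {p_value:.2e}")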

  1. Design of remote weather monitor system based on embedded web database

    International Nuclear Information System (INIS)

    Gao Jiugang; Zhuang Along

    2010-01-01

The remote weather monitoring system is designed around the embedded Web database technology and the S3C2410 microprocessor as its core. The monitoring system can simultaneously monitor multi-channel sensor signals and can dynamically display various types of meteorological information on Web pages on a remote computer. The paper gives an elaborated introduction to the construction and application of the Web database under embedded Linux. Test results show that the client accesses the Web page via GPRS or the Internet, acquires data, and displays the values of various types of meteorological information in an intuitive graphical way. (authors)

  2. Internet Resources: Using Web Pages in Social Studies.

    Science.gov (United States)

    Dale, Jack

    1999-01-01

    Contends that students in social studies classes can utilize Hypertext Markup Language (HTML) as a presentation and collaborative tool by developing websites. Presents two activities where students submitted webpages for country case studies and created a timeline for the French Revolution. Describes how to use HTML by discussing the various tags.…

  3. Web Page Content and Quality Assessed for Shoulder Replacement.

    Science.gov (United States)

    Matthews, John R; Harrison, Caitlyn M; Hughes, Travis M; Dezfuli, Bobby; Sheppard, Joseph

    2016-01-01

    The Internet has become a major source for obtaining health-related information. This study assesses and compares the quality of information available online for shoulder replacement using medical (total shoulder arthroplasty [TSA]) and nontechnical (shoulder replacement [SR]) terminology. Three evaluators reviewed 90 websites for each search term across 3 search engines (Google, Yahoo, and Bing). Websites were grouped into categories, identified as commercial or noncommercial, and evaluated with the DISCERN questionnaire. Total shoulder arthroplasty provided 53 unique sites compared to 38 websites for SR. Of the 53 TSA websites, 30% were health professional-oriented websites versus 18% of SR websites. Shoulder replacement websites provided more patient-oriented information at 48%, versus 45% of TSA websites. In total, SR websites provided 47% (42/90) noncommercial websites, with the highest number seen in Yahoo, compared with TSA at 37% (33/90), with Google providing 13 of the 33 websites (39%). Using the nonmedical terminology with Yahoo's search engine returned the most noncommercial and patient-oriented websites. However, the quality of information found online was highly variable, with most websites being unreliable and incomplete, regardless of search term.

  4. Young Children's Ability to Recognize Advertisements in Web Page Designs

    Science.gov (United States)

    Ali, Moondore; Blades, Mark; Oates, Caroline; Blumberg, Fran

    2009-01-01

    Identifying what is, and what is not an advertisement is the first step in realizing that an advertisement is a marketing message. Children can distinguish television advertisements from programmes by about 5 years of age. Although previous researchers have investigated television advertising, little attention has been given to advertisements in…

  5. Programmes de conception de pages Web (article en arabe ...

    African Journals Online (AJOL)

... to communities. For these reasons, ten programs were tested from a beginner's point of view, notably HTML, JavaScript and others. These tests made it possible to identify the advantages and disadvantages of each of the programs cited.

  6. Sustainable Materials Management (SMM) Web Academy Webinar: Food Waste Reduction Alliance, a Unique Industry Collaboration

    Science.gov (United States)

This is a webinar page for the Sustainable Materials Management (SMM) Web Academy webinar titled Food Waste Reduction Alliance, a Unique Industry Collaboration

  7. Sustainable Materials Management (SMM) Web Academy Webinar: The Changing Waste Stream

    Science.gov (United States)

This is a webinar page for the Sustainable Materials Management (SMM) Web Academy webinar titled The Changing Waste Stream

  8. Web révolution; il y a 17 ans, qui le connaissait?

    CERN Multimedia

    2007-01-01

When Tim Berners-Lee, a CERN scientist, invented the World Wide Web in 1989, the aim was to enable automatic sharing of information among scientists working in universities and institutes around the world. (2 pages)

  9. Sustainable Materials Management (SMM) Web Academy Webinar: Managing Wasted Food with Anaerobic Digestion: Incentives and Innovations

    Science.gov (United States)

This is a webinar page for the Sustainable Materials Management (SMM) Web Academy webinar titled Managing Wasted Food with Anaerobic Digestion: Incentives and Innovations

  10. Sustainable Materials Management (SMM) Web Academy Webinar: Reducing Wasted Food: How Packaging Can Help

    Science.gov (United States)

This is a webinar page for the Sustainable Materials Management (SMM) Web Academy webinar titled Reducing Wasted Food: How Packaging Can Help

  11. Né à Genève, le Web embobine la planète

    CERN Multimedia

    Broute, Anne-Muriel

    2009-01-01

Today, about one person in six on Earth is connected to the Web. Twenty years ago, there were only two: the English computer scientist Tim Berners-Lee and the Belgian engineer Robert Cailliau. (1.5 pages)

  12. WEB BASED LEARNING OF COMPUTER NETWORK COURSE

    Directory of Open Access Journals (Sweden)

    Hakan KAPTAN

    2004-04-01

Full Text Available As a result of developments in the Internet and computer fields, web-based education has become one of the areas in which many improvement and research studies are being done. In this study, web-based education materials are presented for a multimedia animation- and simulation-aided Computer Networks course in Technical Education Faculties. The course content is formed from university course books, web-based education materials, and the technology web pages of companies. The content consists of texts, pictures and figures to increase student motivation, and the learning of some topics is supported by animations. Furthermore, to help teach the working principles of routing algorithms and congestion control algorithms, simulators are constructed for interactive learning

  13. A grammar checker based on web searching

    Directory of Open Access Journals (Sweden)

    Joaquim Moré

    2006-05-01

Full Text Available This paper presents an English grammar and style checker for non-native English speakers. The main characteristic of this checker is its use of an Internet search engine. As the number of web pages written in English is immense, the system hypothesises that a piece of text not found on the Web is probably badly written. The system also hypothesises that the Web will provide examples of how the content of the text segment can be expressed in a grammatically correct and idiomatic way. Thus, when the checker warns the user about the odd nature of a text segment, the Internet engine searches for contexts that can help the user decide whether or not to correct the segment. By means of a search engine, the checker also suggests other expressions that appear on the Web more often than the expression the user actually wrote.
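
    A skeletal version of this checking strategy is sketched below. The hit_count function is a stub standing in for whatever search-engine API is available, and the threshold is an illustrative assumption.

        # Flag a text segment when the Web offers (almost) no occurrences of it.
        def hit_count(phrase: str) -> int:
            """Stub: a real checker would query a search engine for the quoted
            phrase and return the reported number of matching pages."""
            fake_index = {"on the other hand": 120_000_000, "in the other hand": 950_000}
            return fake_index.get(phrase.lower(), 0)

        def looks_suspicious(segment: str, threshold: int = 1_000_000) -> bool:
            return hit_count(segment) < threshold

        for segment in ["on the other hand", "in the other hand"]:
            print(segment, "->", "check this" if looks_suspicious(segment) else "ok")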

  14. Caught in the Web

    International Nuclear Information System (INIS)

    Gillies, James

    1995-01-01

The World-Wide Web may have taken the Internet by storm, but many people would be surprised to learn that it owes its existence to CERN. Around half the world's particle physicists come to CERN for their experiments, and the Web is the result of their need to share information quickly and easily on a global scale. Six years after Tim Berners-Lee's inspired idea to marry hypertext to the Internet in 1989, CERN is handing over future Web development to the World-Wide Web Consortium, run by the French National Institute for Research in Computer Science and Control, INRIA, and the Laboratory for Computer Science of the Massachusetts Institute of Technology, MIT, leaving itself free to concentrate on physics. The Laboratory marked this transition with a conference designed to give a taste of what the Web can do, whilst firmly stamping it with the label "Made in CERN". Over 200 European journalists and educationalists came to CERN on 8 - 9 March for the World-Wide Web Days, resulting in wide media coverage. The conference was opened by UK Science Minister David Hunt who stressed the importance of fundamental research in generating new ideas. "Who could have guessed 10 years ago", he said, "that particle physics research would lead to a communication system which would allow every school to have the biggest library in the world in a single computer?". In his introduction, the Minister also pointed out that "CERN and other basic research laboratories help to break new technological ground and sow the seeds of what will become mainstream manufacturing in the future." Learning the jargon is often the hardest part of coming to grips with any new invention, so CERN put it at the top of the agenda. Jacques Altaber, who helped introduce the Internet to CERN in the early 1980s, explained that without the Internet, the Web couldn't exist. The Internet began as a US Defense

  15. Vague but exciting…CERN celebrates 20 years of the Web

    CERN Multimedia

    2009-01-01

    Twenty years ago work started on something that would change the world forever. It would change the way we work, the way we communicate and the way we make our voices heard. On 13 March CERN will celebrate the 20th anniversary of the birth of the World Wide Web. Tim Berners-Lee with Nicola Pellow, next to the NeXT computer.In March 1989 here at CERN, Tim Berners-Lee submitted a proposal for a new information management system to his boss, Mike Sendall. ‘Vague, but exciting’, were the words that Sendall wrote on the proposal, allowing Berners-Lee to continue with the project, but unaware that it would evolve into one of the most important communication tools ever created. Tim Berners-Lee used a NeXT computer at CERN to create the first web server running a single website – info.cern.ch. Since then the World Wide Web has grown into the incredible phenomenon that we know today, a web of more than 60 billion pages, and hundreds of ...

  16. Improving the interactivity and functionality of Web-based radiology teaching files with the Java programming language.

    Science.gov (United States)

    Eng, J

    1997-01-01

    Java is a programming language that runs on a "virtual machine" built into World Wide Web (WWW)-browsing programs on multiple hardware platforms. Web pages were developed with Java to enable Web-browsing programs to overlay transparent graphics and text on displayed images so that the user could control the display of labels and annotations on the images, a key feature not available with standard Web pages. This feature was extended to include the presentation of normal radiologic anatomy. Java programming was also used to make Web browsers compatible with the Digital Imaging and Communications in Medicine (DICOM) file format. By enhancing the functionality of Web pages, Java technology should provide greater incentive for using a Web-based approach in the development of radiology teaching material.

  17. Bringing Control System User Interfaces to the Web

    Energy Technology Data Exchange (ETDEWEB)

    Chen, Xihui [ORNL; Kasemir, Kay [ORNL

    2013-01-01

    With the evolution of web based technologies, especially HTML5 [1], it becomes possible to create web-based control system user interfaces (UI) that are cross-browser and cross-device compatible. This article describes two technologies that facilitate this goal. The first one is the WebOPI [2], which can seamlessly display CSS BOY [3] Operator Interfaces (OPI) in web browsers without modification to the original OPI file. The WebOPI leverages the powerful graphical editing capabilities of BOY and provides the convenience of re-using existing OPI files. On the other hand, it uses generic JavaScript and a generic communication mechanism between the web browser and web server. It is not optimized for a control system, which results in unnecessary network traffic and resource usage. Our second technology is the WebSocket-based Process Data Access (WebPDA) [4]. It is a protocol that provides efficient control system data communication using WebSocket [5], so that users can create web-based control system UIs using standard web page technologies such as HTML, CSS and JavaScript. WebPDA is control system independent, potentially supporting any type of control system.
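
    As a generic illustration of WebSocket-based data push (a sketch assuming the Python websockets package, version 11 or later, with its single-argument handler; this is not the actual WebPDA protocol), a server can stream JSON-encoded process values to connected browser clients:

        # Stream simulated process values over a WebSocket (illustrative sketch).
        import asyncio
        import json
        import random

        import websockets

        async def push_values(ws):
            # a real system would subscribe to control-system channels here;
            # this loop just streams a simulated process variable
            while True:
                sample = {"pv": "demo:temperature", "value": 20 + random.random()}
                await ws.send(json.dumps(sample))
                await asyncio.sleep(1.0)

        async def main():
            async with websockets.serve(push_values, "localhost", 8765):
                await asyncio.Future()   # run until cancelled

        if __name__ == "__main__":
            asyncio.run(main())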

  18. Interactive Visualization and Navigation of Web Search Results Revealing Community Structures and Bridges

    OpenAIRE

    Sallaberry, Arnaud; Zaidi, Faraz; Pich, C.; Melançon, Guy

    2010-01-01

    With the information overload on the Internet, organization and visualization of web search results so as to facilitate faster access to information is a necessity. The classical methods present search results as an ordered list of web pages ranked in terms of relevance to the searched topic. Users thus have to scan text snippets or navigate through various pages before finding the required information. In this paper we present an interactive visualization system for c...

  19. Full page fax print

    Indian Academy of Sciences (India)

    user

    Application form can be downloaded from the Centre for Water Resources web site www.cwr.co.in and also from the JNT University website www.jntu.ac.in. Last date for registration is 18th August, 2008. Dr. M V S S GIRIDHAR (Course Coordinator) , Assistant Professor, Centre for Water Resources. Institute of Science and ...

  20. Web-based pathology practice examination usage.

    Science.gov (United States)

    Klatt, Edward C

    2014-01-01

    General and subject specific practice examinations for students in health sciences studying pathology were placed onto a free public internet web site entitled WebPath and were accessed four clicks from the home web site menu. Multiple choice questions were coded into .html files with JavaScript functions for web browser viewing in a timed format. A Perl programming language script with common gateway interface for web page forms scored examinations and placed results into a log file on an internet computer server. The four general review examinations of 30 questions each could be completed in up to 30 min. The 17 subject specific examinations of 10 questions each with accompanying images could be completed in up to 15 min each. The results of scores and user educational field of study from log files were compiled from June 2006 to January 2014. The four general review examinations had 31,639 accesses with completion of all questions, for a completion rate of 54% and average score of 75%. A score of 100% was achieved by 7% of users, ≥90% by 21%, and ≥50% score by 95% of users. In top to bottom web page menu order, review examination usage was 44%, 24%, 17%, and 15% of all accessions. The 17 subject specific examinations had 103,028 completions, with completion rate 73% and average score 74%. Scoring at 100% was 20% overall, ≥90% by 37%, and ≥50% score by 90% of users. The first three menu items on the web page accounted for 12.6%, 10.0%, and 8.2% of all completions, and the bottom three accounted for no more than 2.2% each. Completion rates were higher for the shorter 10-question subject examinations. Users identifying themselves as MD/DO scored higher than other users, averaging 75%. Usage was higher for examinations at the top of the web page menu. Scores achieved suggest that a cohort of serious users fully completing the examinations had sufficient preparation to use them to support their pathology education.
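
    The original examinations were scored server-side by a Perl CGI script; purely as an illustrative client-side sketch of the timed multiple-choice format (question data, time limit handling and scoring below are invented):

        const questions = [
          { prompt: 'Question 1 ...', answer: 'B' },
          { prompt: 'Question 2 ...', answer: 'D' },
        ];
        const LIMIT_MS = 30 * 60 * 1000; // e.g. 30 min for a review exam
        const started = Date.now();

        // Score an array of letter responses against the answer key.
        function score(responses) {
          if (Date.now() - started > LIMIT_MS) return { expired: true };
          const correct = responses
            .filter((r, i) => r === questions[i].answer).length;
          return { expired: false,
                   percent: Math.round((100 * correct) / questions.length) };
        }

        console.log(score(['B', 'A'])); // { expired: false, percent: 50 }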

  1. Web-based pathology practice examination usage

    Directory of Open Access Journals (Sweden)

    Edward C Klatt

    2014-01-01

    Full Text Available Context: General and subject specific practice examinations for students in health sciences studying pathology were placed onto a free public internet web site entitled web path and were accessed four clicks from the home web site menu. Subjects and Methods: Multiple choice questions were coded into. html files with JavaScript functions for web browser viewing in a timed format. A Perl programming language script with common gateway interface for web page forms scored examinations and placed results into a log file on an internet computer server. The four general review examinations of 30 questions each could be completed in up to 30 min. The 17 subject specific examinations of 10 questions each with accompanying images could be completed in up to 15 min each. The results of scores and user educational field of study from log files were compiled from June 2006 to January 2014. Results: The four general review examinations had 31,639 accesses with completion of all questions, for a completion rate of 54% and average score of 75%. A score of 100% was achieved by 7% of users, ≥90% by 21%, and ≥50% score by 95% of users. In top to bottom web page menu order, review examination usage was 44%, 24%, 17%, and 15% of all accessions. The 17 subject specific examinations had 103,028 completions, with completion rate 73% and average score 74%. Scoring at 100% was 20% overall, ≥90% by 37%, and ≥50% score by 90% of users. The first three menu items on the web page accounted for 12.6%, 10.0%, and 8.2% of all completions, and the bottom three accounted for no more than 2.2% each. Conclusions: Completion rates were higher for shorter 10 questions subject examinations. Users identifying themselves as MD/DO scored higher than other users, averaging 75%. Usage was higher for examinations at the top of the web page menu. Scores achieved suggest that a cohort of serious users fully completing the examinations had sufficient preparation to use them to support

  2. A Study of HTML Title Tag Creation Behavior of Academic Web Sites

    Science.gov (United States)

    Noruzi, Alireza

    2007-01-01

    The HTML title tag information should identify and describe exactly what a Web page contains. This paper analyzes the "Title element" and raises a significant question: "Why is the title tag important?" Search engines base search results and page rankings on certain criteria. Among the most important criteria is the presence of the search keywords…

  3. Quantum computational webs

    International Nuclear Information System (INIS)

    Gross, D.; Eisert, J.

    2010-01-01

    We discuss the notion of quantum computational webs: These are quantum states universal for measurement-based computation, which can be built up from a collection of simple primitives. The primitive elements--reminiscent of building blocks in a construction kit--are (i) one-dimensional states (computational quantum wires) with the power to process one logical qubit and (ii) suitable couplings, which connect the wires to a computationally universal web. All elements are preparable by nearest-neighbor interactions in a single pass, of the kind accessible in a number of physical architectures. We provide a complete classification of qubit wires, a physically well-motivated class of universal resources that can be fully understood. Finally, we sketch possible realizations in superlattices and explore the power of coupling mechanisms based on Ising or exchange interactions.

  4. The invisible Web uncovering information sources search engines can't see

    CERN Document Server

    Sherman, Chris

    2001-01-01

    Enormous expanses of the Internet are unreachable with standard web search engines. This book provides the key to finding these hidden resources by identifying how to uncover and use invisible web resources. Mapping the invisible Web, when and how to use it, assessing the validity of the information, and the future of Web searching are topics covered in detail. Only 16 percent of Net-based information can be located using a general search engine. The other 84 percent is what is referred to as the invisible Web-made up of information stored in databases. Unlike pages on the visible Web, informa

  5. Content and Design Features of Academic Health Sciences Libraries' Home Pages.

    Science.gov (United States)

    McConnaughy, Rozalynd P; Wilson, Steven P

    2018-01-01

    The goal of this content analysis was to identify commonly used content and design features of academic health sciences library home pages. After developing a checklist, data were collected from 135 academic health sciences library home pages. The core components of these library home pages included a contact phone number, a contact email address, an Ask-a-Librarian feature, the physical address listed, a feedback/suggestions link, subject guides, a discovery tool or database-specific search box, multimedia, social media, a site search option, a responsive web design, and a copyright year or update date.

  6. Vue.js 2 cookbook build modern, interactive web applications with Vue.js

    CERN Document Server

    Passaglia, Andrea

    2017-01-01

    Vue.js is an open source JavaScript library for building modern, interactive web applications. With a rapidly growing community and a strong ecosystem, Vue.js makes developing complex single page applications a breeze. Its component-based approach, intuitive API, blazing fast core, and compact size make Vue.js a great solution to craft your next front-end application. From basic to advanced recipes, this book arms you with practical solutions to common tasks when building an application using Vue. We start off by exploring the fundamentals of Vue.js: its reactivity system, data-binding syntax, and component-based architecture through practical examples. After that, we delve into integrating Webpack and Babel to enhance your development workflow using single file components. Finally, we take an in-depth look at Vuex for state management and Vue Router to route in your single page applications, and integrate a variety of technologies ranging from Node.js to Electron, and Socket.io to Firebase and HorizonDB. ...
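
    A minimal Vue.js 2 sketch of the reactivity and data binding the book covers (the markup and data are illustrative only):

        // Assumes <div id="app"><input v-model="query"><p>{{ greeting }}</p></div>
        new Vue({
          el: '#app',
          data: { query: 'world' },
          computed: {
            // Recomputed automatically whenever `query` changes.
            greeting() { return 'Hello, ' + this.query + '!'; },
          },
        });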

  7. Teaching Intuitive Eating and Acceptance and Commitment Therapy Skills Via a Web-Based Intervention: A Pilot Single-Arm Intervention Study

    Science.gov (United States)

    Boucher, Sara; Edwards, Olivia; Gray, Andrew; Nada-Raja, Shyamala; Lillis, Jason; Tylka, Tracy L

    2016-01-01

    Background Middle-aged women are at risk of weight gain and associated comorbidities. Deliberate restriction of food intake (dieting) produces short-term weight loss but is largely unsuccessful for long-term weight management. Two promising approaches for the prevention of weight gain are intuitive eating (ie, eating in accordance with hunger and satiety signals) and the development of greater psychological flexibility (ie, the aim of acceptance and commitment therapy [ACT]). Objectives This pilot study investigated the usage, acceptability, and feasibility of “Mind, Body, Food,” a Web-based weight gain prevention intervention prototype that teaches intuitive eating and psychological flexibility skills. Methods Participants were 40 overweight women (mean age 44.8 [standard deviation, SD, 3.06] years, mean body mass index [BMI] 32.9 [SD 6.01] kg/m2, mean Intuitive Eating Scale [IES-1] total score 53.4 [SD 7.46], classified as below average) who were recruited from the general population in Dunedin, New Zealand. Module completion and study site metrics were assessed using Google Analytics. Use of an online self-monitoring tool was determined by entries saved to a secure online database. Intervention acceptability was assessed postintervention. BMI, intuitive eating, binge eating, psychological flexibility, and general mental and physical health were assessed pre- and postintervention and 3-months postintervention. Results Of the 40 women enrolled in the study, 12 (30%) completed all 12 modules (median 7.5 [interquartile range, IQR, 2-12] modules) and 4 (10%) used the self-monitoring tool for all 14 weeks of the intervention period (median 3 [IQR 1-9] weeks). Among 26 women who completed postintervention assessments, most women rated “Mind, Body, Food” as useful (20/26, 77%), easy to use (17/25, 68%) and liked the intervention (22/25, 88%). From pre- to postintervention, there were statistically significant within-group increases in intuitive eating (IES-2

  8. Architecture for large-scale automatic web accessibility evaluation based on the UWEM methodology

    DEFF Research Database (Denmark)

    Ulltveit-Moe, Nils; Olsen, Morten Goodwin; Pillai, Anand B.

    2008-01-01

    The European Internet Accessibility project (EIAO) has developed an Observatory for performing large scale automatic web accessibility evaluations of public sector web sites in Europe. The architecture includes a distributed web crawler that crawls web sites for links until either a given budget of web pages has been identified or the web site has been crawled exhaustively. Subsequently, a uniform random subset of the crawled web pages is sampled and sent for accessibility evaluation, and the evaluation results are stored in a Resource Description Format (RDF) database that is later loaded ... The paper describes the challenges that the project faced and the solutions developed towards building a system capable of regular large-scale accessibility evaluations with sufficient capacity and stability. It also outlines some possible future architectural improvements.
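
    The "uniform random subset" step can be sketched with reservoir sampling, which keeps k pages while streaming over crawled URLs of unknown total count (the function and variable names are illustrative, not from the EIAO code):

        function sampleUniform(crawledUrls, k) {
          const reservoir = [];
          crawledUrls.forEach((url, i) => {
            if (i < k) {
              reservoir.push(url);
            } else {
              // Keep each later URL with probability k/(i+1), evicting a
              // random current member, so every URL is equally likely to stay.
              const j = Math.floor(Math.random() * (i + 1));
              if (j < k) reservoir[j] = url;
            }
          });
          return reservoir;
        }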

  9. Semantic Web

    Directory of Open Access Journals (Sweden)

    Anna Lamandini

    2011-06-01

    Full Text Available The semantic Web is a technology at the service of knowledge which is aimed at accessibility and the sharing of content, facilitating interoperability between different systems, and as such is one of the nine key technological pillars of TIC (technologies for information and communication) within the third theme, programme specific cooperation, of the seventh programme framework for research and development (7°PQRS, 2007-2013). As a system it seeks to overcome overload or excess of irrelevant information on the Internet, in order to facilitate specific or pertinent research. It is an extension of the existing Web in which the aim is for cooperation between the computer and people (the dream of Sir Tim Berners-Lee), where machines can give more support to people when integrating and elaborating data in order to obtain inferences and a global sharing of data. It is a technology that is able to favour the development of a "data web", in other words the creation of a space of interconnected and shared data sets (Linked Data) which allows users to link different types of data coming from different sources. It is a technology that will have great effect on everyday life since it will permit the planning of "intelligent applications" in various sectors such as education and training, research, the business world, public information, tourism, health, and e-government. It is an innovative technology that activates a social transformation (socio-semantic Web) on a world level since it redefines the cognitive universe of users and enables the sharing not only of information but of significance (collective and connected intelligence).

  10. Using WebDewey

    OpenAIRE

    Baldi, Paolo

    2016-01-01

    This presentation shows how to use the WebDewey tool. Features of WebDewey. Italian WebDewey compared with American WebDewey. Querying Italian WebDewey. Italian WebDewey and MARC21. Italian WebDewey and UNIMARC. Numbers, captions, "equivalente verbale": Dewey decimal classification in Italian catalogues. Italian WebDewey and Nuovo soggettario. Italian WebDewey and LCSH. Italian WebDewey compared with printed version of Italian Dewey Classification (22. edition): advantages and disadvantages o...

  11. Full page fax print

    Indian Academy of Sciences (India)

    user

    [2] D Read, Plants on the web. Nature, Vol. 396, pp. 22–23, 1998. [3] D J Read and J Perez-Moreno, Mycorrhizas and nutrient cycling in ecosystems – a journey towards relevance? New Phytol., Vol. 157, pp. 475–492, 2003. [4] M I Bidartondo, B Burghardt, G Gebauer, T D Bruns, and D J Read, Changing partners in the dark: ...

  12. Review Pages: Cities, Energy and Mobility

    Directory of Open Access Journals (Sweden)

    Gennaro Angiello

    2015-12-01

    Full Text Available Starting from the relationship between urban planning and mobility management, TeMA has gradually expanded the view of the covered topics, always remaining in the groove of rigorous scientific in-depth analysis. During the last two years a particular attention has been paid on the Smart Cities theme and on the different meanings that come with it. The last section of the journal is formed by the Review Pages. They have different aims: to inform on the problems, trends and evolutionary processes; to investigate on the paths by highlighting the advanced relationships among apparently distant disciplinary fields; to explore the interaction’s areas, experiences and potential applications; to underline interactions, disciplinary developments but also, if present, defeats and setbacks. Inside the journal the Review Pages have the task of stimulating as much as possible the circulation of ideas and the discovery of new points of view. For this reason the section is founded on a series of basic references, required for the identification of new and more advanced interactions. These references are the research, the planning acts, the actions and the applications, analysed and investigated both for their ability to give a systematic response to questions concerning the urban and territorial planning, and for their attention to aspects such as the environmental sustainability and the innovation in the practices. For this purpose the Review Pages are formed by five sections (Web Resources; Books; Laws; Urban Practices; News and Events), each of which examines a specific aspect of the broader information storage of interest for TeMA.

  13. Review Pages: Cities, Energy and Climate Change

    Directory of Open Access Journals (Sweden)

    Gennaro Angiello

    2015-04-01

    Full Text Available Starting from the relationship between urban planning and mobility management, TeMA has gradually expanded the view of the covered topics, always remaining in the groove of rigorous scientific in-depth analysis. During the last two years a particular attention has been paid on the Smart Cities theme and on the different meanings that come with it. The last section of the journal is formed by the Review Pages. They have different aims: to inform on the problems, trends and evolutionary processes; to investigate on the paths by highlighting the advanced relationships among apparently distant disciplinary fields; to explore the interaction’s areas, experiences and potential applications; to underline interactions, disciplinary developments but also, if present, defeats and setbacks. Inside the journal the Review Pages have the task of stimulating as much as possible the circulation of ideas and the discovery of new points of view. For this reason the section is founded on a series of basic references, required for the identification of new and more advanced interactions. These references are the research, the planning acts, the actions and the applications, analysed and investigated both for their ability to give a systematic response to questions concerning the urban and territorial planning, and for their attention to aspects such as the environmental sustainability and the innovation in the practices. For this purpose the Review Pages are formed by five sections (Web Resources; Books; Laws; Urban Practices; News and Events), each of which examines a specific aspect of the broader information storage of interest for TeMA.

  14. Review Pages: Cities, Energy and Built Environment

    Directory of Open Access Journals (Sweden)

    Gennaro Angiello

    2015-07-01

    Full Text Available Starting from the relationship between urban planning and mobility management, TeMA has gradually expanded the view of the covered topics, always remaining in the groove of rigorous scientific in-depth analysis. During the last two years a particular attention has been paid on the Smart Cities theme and on the different meanings that come with it. The last section of the journal is formed by the Review Pages. They have different aims: to inform on the problems, trends and evolutionary processes; to investigate on the paths by highlighting the advanced relationships among apparently distant disciplinary fields; to explore the interaction’s areas, experiences and potential applications; to underline interactions, disciplinary developments but also, if present, defeats and setbacks. Inside the journal the Review Pages have the task of stimulating as much as possible the circulation of ideas and the discovery of new points of view. For this reason the section is founded on a series of basic references, required for the identification of new and more advanced interactions. These references are the research, the planning acts, the actions and the applications, analysed and investigated both for their ability to give a systematic response to questions concerning the urban and territorial planning, and for their attention to aspects such as the environmental sustainability and the innovation in the practices. For this purpose the Review Pages are formed by five sections (Web Resources; Books; Laws; Urban Practices; News and Events), each of which examines a specific aspect of the broader information storage of interest for TeMA.

  15. The Visual Web User Interface Design in Augmented Reality Technology

    OpenAIRE

    Chouyin Hsu; Haui-Chih Shiau

    2013-01-01

    With the popularity of 3C devices, visual content is all around us, in online games, touch pads, video and animation. Text-based web pages will therefore no longer satisfy users. With the spread of webcams, digital cameras, stereoscopic glasses and head-mounted displays, the user interface is becoming more visual and multi-dimensional. To bring 3D and visual display into research on web user interface design, Augmented Reality technology provides the convenient ...

  16. First in the web, but where are the pieces

    Energy Technology Data Exchange (ETDEWEB)

    Deken, J.M.

    1998-04-01

    The World Wide Web (WWW) does matter to the SLAC Archives and History Office for two very important, and related, reasons. The first reason is that the early Web at SLAC is historically significant: it was the first of its kind on this continent, and it achieved new and important things. The second reason is that the Web at SLAC--in its present and future forms--is a large and changing collection of official documents of the organization, many of which exist in no other form or environment. As of the first week of August, 1997, SLAC had 8,940 administratively-accounted-for web pages, and an estimated 2,000 to 4,000 additional pages that are hard to track administratively because they either reside on the main server in users' directories several levels below their top-level pages, or they reside on one of the more than 60 non-main servers at the Center. A very small sampling of the information that SLAC WWW pages convey includes: information for the general public about programs and activities at SLAC; pages which allow physics experiment collaborators to monitor data, arrange work schedules and analyze results; pages that convey information to staff and visiting scientists about seminar and activity schedules, publication procedures, and ongoing experiments; and pages that allow staff and outside users to access databases maintained at SLAC. So, when SLAC's Archives and History Office begins to approach collecting the documents of its WWW presence, what is it collecting, and how is it to go about the process of collecting it? In this paper, the author discusses the effort to archive SLAC's Web in two parts, concentrating on the first task that has been undertaken: the initial effort to identify and gather into the archives evidence and documentation of the early days of the SLAC Web. The second task, the effort to collect present and future web pages at SLAC, is also covered, although in less detail, since it is an effort that is only

  17. Responsive web design workflow

    OpenAIRE

    LAAK, TIMO

    2013-01-01

    Responsive Web Design Workflow is a literature review about Responsive Web Design, a web standards based modern web design paradigm. The goals of this research were to define what responsive web design is, determine its importance in building modern websites and describe a workflow for responsive web design projects. Responsive web design is a paradigm to create adaptive websites, which respond to the properties of the media that is used to render them. The three key elements of responsi...

  18. Pacifier use: a systematic review of selected parenting web sites.

    Science.gov (United States)

    Cornelius, Aubrie N; D'Auria, Jennifer P; Wise, Lori M

    2008-01-01

    The purpose of this study was to explore and describe content related to pacifier use on parenting Web sites. Sixteen parenting Web sites met the inclusion criteria of the study. Two checklists were used to evaluate and describe different aspects of the Web sites. The first checklist provided a quality assessment of the Web sites. The second checklist was constructed to identify content categories of pacifier use. The majority of sites met quality assessment criteria. Eleven content categories regarding pacifier use were identified. Nine of the 16 sites contained eight or more of the 11 content areas. The most common types of Web pages containing pacifier information included pacifier specific (articles), questions and answer pages, and related content pages. Most of the parenting Web sites met the quality measures for online information. The content categories reflected the current controversies and information regarding pacifier use found in the expert literature. The findings of this study suggest the need to establish pacifier recommendations in the United States to guide parents and health care providers with decision making.

  19. WEB COHERENCE LEARNING

    Directory of Open Access Journals (Sweden)

    Peter Karlsudd

    2008-09-01

    Full Text Available This article describes a learning system constructed to facilitate teaching and learning by creating a functional web-based contact between schools and organisations which in cooperation with the school contribute to pupils’/students’ cognitive development. Examples of such organisations include science centres, museums, art and music workshops and teacher education internships. With the support of the “Web Coherence Learning” IT application (abbreviated in Swedish to Webbhang) developed by the University of Kalmar, the aim is to reinforce learning processes in the encounter with organisations outside school. In close cooperation with potential users a system was developed which can be described as consisting of three modules. The first module, “the organisation page”, supports the organisation in simply setting up a homepage, where overarching information on organisation operations can be published and where functions like calendar, guestbook, registration and newsletter can be included. In the second module, “the activity page”, the activities offered by the organisation are described. Here pictures and information may prepare and inspire pupils/students to their own activities before future visits. The third part, “the participant page”, is a communication module linked to the activity page enabling school classes to introduce themselves and their work as well as documenting the work and communicating with the educators responsible for external activities. When the project is finished, the work will be available to further school classes, parents and other interested parties. System development and testing have been performed in a small pilot study where two creativity educators at an art museum have worked together with pupils and teachers from a compulsory school class. The system was used to establish, prior to the visit of the class, a deeper contact and to maintain a more qualitative continuous dialogue during and after

  20. World Wide Web Usage Mining Systems and Technologies

    Directory of Open Access Journals (Sweden)

    Wen-Chen Hu

    2003-08-01

    Full Text Available Web usage mining is used to discover interesting user navigation patterns and can be applied to many real-world problems, such as improving Web sites/pages, making additional topic or product recommendations, user/customer behavior studies, etc. This article provides a survey and analysis of current Web usage mining systems and technologies. A Web usage mining system performs five major tasks: (i) data gathering, (ii) data preparation, (iii) navigation pattern discovery, (iv) pattern analysis and visualization, and (v) pattern applications. Each task is explained in detail and its related technologies are introduced. A list of major research systems and projects concerning Web usage mining is also presented, and a summary of Web usage mining is given in the last section.
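
    As a sketch of the data preparation task, page requests can be grouped into per-user sessions split on an inactivity gap (the record shape and the 30-minute gap below are illustrative assumptions):

        function toSessions(requests, gapMs = 30 * 60 * 1000) {
          const sessions = new Map(); // user -> list of sessions (page lists)
          const lastSeen = new Map();
          for (const r of [...requests].sort((a, b) => a.time - b.time)) {
            const gap = r.time - (lastSeen.get(r.user) ?? -Infinity);
            if (!sessions.has(r.user)) sessions.set(r.user, []);
            if (gap > gapMs) sessions.get(r.user).push([]); // new session
            sessions.get(r.user).at(-1).push(r.page);
            lastSeen.set(r.user, r.time);
          }
          return sessions;
        }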

  1. Business on the Web: a Virtual Mall.

    Directory of Open Access Journals (Sweden)

    Alexis Rocha

    2015-10-01

    Business on the web is aimed at identifying new opportunities and the needs that users demand; moreover, it should establish means of trust, communication and strategies that create a single competitive strength as pioneers in the market, supported by the Internet and by web portals that deliver timely, efficient and effective information.

  2. Towards Information Systems Design for Value Webs

    NARCIS (Netherlands)

    Zarvic, N.; Wieringa, Roelf J.; Daneva, Maia; Pernici, B; Gulla, J.A.

    2007-01-01

    In this paper we discuss the alignment between a business model of a value web and the information systems of the participating companies needed to implement the business model. Traditional business-IT alignment approaches focus on one single company, but in a value web we are dealing with various

  3. A single-blind randomised controlled trial of the effects of a web-based decision aid on self-testing for cholesterol and diabetes. study protocol

    Directory of Open Access Journals (Sweden)

    Ickenroth Martine HP

    2012-01-01

    Full Text Available Abstract Background Self-tests, tests on body materials to detect medical conditions, are widely available to the general public. Self-testing does have advantages as well as disadvantages, and the debate on whether self-testing should be encouraged or rather discouraged is still ongoing. One of the concerns is whether consumers have sufficient knowledge to perform the test and interpret the results. An online decision aid (DA) with information on self-testing in general, and test specific information on cholesterol and diabetes self-testing was developed. The DA aims to provide objective information on these self-tests as well as a decision support tool to weigh the pros and cons of self-testing. The aim of this study is to evaluate the effect of the online decision aid on knowledge on self-testing, informed choice, ambivalence and psychosocial determinants. Methods/Design A single blind randomised controlled trial in which the online decision aid 'zelftestwijzer' is compared to short, non-interactive information on self-testing in general. The entire trial will be conducted online. Participants will be selected from an existing Internet panel. Consumers who are considering doing a cholesterol or diabetes self-test in the future will be included. Outcome measures will be assessed directly after participants have viewed either the DA or the control condition. Weblog files will be used to record participants' use of the decision aid. Discussion Self-testing does have important pros and cons, and it is important that consumers base their decision whether they want to do a self-test or not on knowledge and personal values. This study is the first to evaluate the effect of an online decision aid for self-testing. Trial registration Dutch Trial Register: NTR3149

  4. A single-blind randomised controlled trial of the effects of a web-based decision aid on self-testing for cholesterol and diabetes. Study protocol.

    Science.gov (United States)

    Ickenroth, Martine H P; Grispen, Janaica E J; de Vries, Nanne K; Dinant, Geert-Jan; Elwyn, Glyn; Ronda, Gaby; van der Weijden, Trudy

    2012-01-04

    Self-tests, tests on body materials to detect medical conditions, are widely available to the general public. Self-testing does have advantages as well as disadvantages, and the debate on whether self-testing should be encouraged or rather discouraged is still ongoing. One of the concerns is whether consumers have sufficient knowledge to perform the test and interpret the results. An online decision aid (DA) with information on self-testing in general, and test specific information on cholesterol and diabetes self-testing was developed. The DA aims to provide objective information on these self-tests as well as a decision support tool to weigh the pros and cons of self-testing. The aim of this study is to evaluate the effect of the online decision aid on knowledge on self-testing, informed choice, ambivalence and psychosocial determinants. A single blind randomised controlled trial in which the online decision aid 'zelftestwijzer' is compared to short, non-interactive information on self-testing in general. The entire trial will be conducted online. Participants will be selected from an existing Internet panel. Consumers who are considering doing a cholesterol or diabetes self-test in the future will be included. Outcome measures will be assessed directly after participants have viewed either the DA or the control condition. Weblog files will be used to record participants' use of the decision aid. Self-testing does have important pros and cons, and it is important that consumers base their decision whether they want to do a self-test or not on knowledge and personal values. This study is the first to evaluate the effect of an online decision aid for self-testing. Dutch Trial Register: NTR3149.

  5. BOWS (bioinformatics open web services) to centralize bioinformatics tools in web services.

    Science.gov (United States)

    Velloso, Henrique; Vialle, Ricardo A; Ortega, J Miguel

    2015-06-02

    Bioinformaticians face a range of difficulties to get locally-installed tools running and producing results; they would greatly benefit from a system that could centralize most of the tools, using an easy interface for input and output. Web services, due to their universal nature and widely known interface, constitute a very good option to achieve this goal. Bioinformatics open web services (BOWS) is a system based on generic web services produced to allow programmatic access to applications running on high-performance computing (HPC) clusters. BOWS intermediates the access to registered tools by providing front-end and back-end web services. Programmers can install applications in HPC clusters in any programming language and use the back-end service to check for new jobs and their parameters, and then to send the results to BOWS. Programs running in simple computers consume the BOWS front-end service to submit new processes and read results. BOWS compiles Java clients, which encapsulate the front-end web service requisitions, and automatically creates a web page that lists the registered applications and clients. Applications registered in BOWS can be accessed from virtually any programming language through web services, or using the standard Java clients. The back-end can run in HPC clusters, allowing bioinformaticians to remotely run high-processing demand applications directly from their machines.
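
    The front-end service could be consumed along these lines; the endpoint paths and JSON fields below are hypothetical, not the real BOWS API:

        // Submit a job to a registered tool, then wait for the HPC back-end
        // to post its result.
        async function submitJob(tool, params) {
          const res = await fetch('/bows/submit', {
            method: 'POST',
            headers: { 'Content-Type': 'application/json' },
            body: JSON.stringify({ tool, params }),
          });
          return (await res.json()).jobId;
        }

        async function waitForResult(jobId) {
          for (;;) {
            const res = await fetch('/bows/result/' + jobId);
            if (res.status === 200) return res.json();
            await new Promise((r) => setTimeout(r, 5000)); // poll every 5 s
          }
        }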

  6. Medium-sized Universities Connect to Their Libraries: Links on University Home Pages and User Group Pages

    Directory of Open Access Journals (Sweden)

    Pamela Harpel-Burk

    2006-03-01

    Full Text Available From major tasks—such as recruitment of new students and staff—to the more mundane but equally important tasks—such as providing directions to campus—college and university Web sites perform a wide range of tasks for a varied assortment of users. Overlapping functions and user needs meld to create the need for a Web site with three major functions: promotion and marketing, access to online services, and providing a means of communication between individuals and groups. In turn, college and university Web sites that provide links to their library home page can be valuable assets for recruitment, public relations, and for helping users locate online services.

  7. TFC - Web accessibility

    OpenAIRE

    Aguilar Garzón, Daniel

    2011-01-01

    Study of 10 websites of the uoc.edu portal, based on the W3C web accessibility guidelines.

  8. A Tutorial in Creating Web-Enabled Databases with Inmagic DB/TextWorks through ODBC.

    Science.gov (United States)

    Breeding, Marshall

    2000-01-01

    Explains how to create Web-enabled databases. Highlights include Inmagic's DB/Text WebPublisher product called DB/TextWorks; ODBC (Open Database Connectivity) drivers; Perl programming language; HTML coding; Structured Query Language (SQL); Common Gateway Interface (CGI) programming; and examples of HTML pages and Perl scripts. (LRW)

  9. MOL-D: A Collisional Database and Web Service within the Virtual ...

    Indian Academy of Sciences (India)

    J. Astrophys. Astr., Vol. 36, No. 4, December 2015, pp. 693–703. Review: MOL-D: A Collisional Database and Web Service within the Virtual Atomic and Molecular Data Center. V. Vujcic, D. ... coefficients for specific collisional processes and a web service within the Serbian Virtual Observatory (SerVO) and the ...

  10. Creating an Index for Your Web Site to Make Info Easier to See

    Science.gov (United States)

    Hedden, Heather

    2006-01-01

    In this article, the author explains how librarians can ensure that their Web site visitors find the information they need. The pros and cons of four options used to help people find information on a Web site are explored. These options are: (1) redesigning the site; (2) creating drop-down, second-level menus for second-level pages; (3) adding a…

  11. Students' Evaluation Strategies in a Web Research Task: Are They Sensitive to Relevance and Reliability?

    Science.gov (United States)

    Rodicio, Héctor García

    2015-01-01

    When searching and using resources on the Web, students have to evaluate Web pages in terms of relevance and reliability. This evaluation can be done in a more or less systematic way, by either considering deep or superficial cues of relevance and reliability. The goal of this study was to examine how systematic students are when evaluating Web…

  12. What Are the Usage Conditions of Web 2.0 Tools Faculty of Education Students?

    Science.gov (United States)

    Agir, Ahmet

    2014-01-01

    As a result of advances in technology and the subsequent spread of Internet use into every part of life, the web, which provides access to documents such as pictures, audio, animation and text on the Internet, came into use. At first, the web consisted only of visual and text pages that did not allow user interaction. However, it is seen that not…

  13. A fuzzy method for improving the functionality of search engines based on user's web interactions

    Directory of Open Access Journals (Sweden)

    Farzaneh Kabirbeyk

    2015-04-01

    Full Text Available Web mining has been widely used to discover knowledge from various sources in the web. One of the important tools in web mining is the mining of web users' behavior, which is considered a way to discover the potential knowledge in users' interactions. Nowadays, website personalization is a popular phenomenon among web users, and it plays an important role in facilitating user access and in providing the information users require based on their own interests. Extracting important features of web user behavior plays a significant role in web usage mining. Such features are page visit frequency in each session, visit duration, and the dates on which certain pages are visited. This paper presents a method to identify users' behavior and predict their interests, proposing a list of pages based on those interests, using a fuzzy technique known as fuzzy clustering. Because users have different interests and may pursue more than one interest at a time, a user's interest may belong to several clusters, and fuzzy clustering allows this overlap. The resulting clusters are used to extract fuzzy rules, which help detect users' movement patterns, and a neural network then provides a list of suggested pages to the user.
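
    The overlap idea can be sketched with a standard fuzzy c-means membership formula (fuzzifier m = 2); the session features and centroids below are made up for illustration and are not from the paper:

        // Graded membership of one session's feature vector in every cluster.
        function memberships(session, centroids) {
          const dist = (a, b) => Math.hypot(...a.map((x, i) => x - b[i]));
          const d = centroids.map((c) => dist(session, c) || 1e-9);
          const inv = d.map((x) => 1 / (x * x));
          const sum = inv.reduce((s, x) => s + x, 0);
          return inv.map((x) => x / sum); // memberships sum to 1
        }

        // e.g. features: [page visit frequency, mean visit duration (min)]
        console.log(memberships([3, 5], [[1, 2], [4, 6], [8, 1]]));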

  14. SWORS: a system for the efficient retrieval of relevant spatial web objects

    DEFF Research Database (Denmark)

    Cao, Xin; Cong, Gao; Jensen, Christian S.

    2012-01-01

    Spatial web objects that possess both a geographical location and a textual description are gaining in prevalence. This gives prominence to spatial keyword queries that exploit both location and textual arguments. Such queries are used in many web services such as yellow pages and maps services....

  15. Discovering How Students Search a Library Web Site: A Usability Case Study.

    Science.gov (United States)

    Augustine, Susan; Greene, Courtney

    2002-01-01

    Discusses results of a usability study at the University of Illinois Chicago that investigated whether Internet search engines have influenced the way students search library Web sites. Results show students use the Web site's internal search engine rather than navigating through the pages; have difficulty interpreting library terminology; and…

  16. The Electronic Welcome Mat: The Academic Library Web Site as a Marketing and Public Relations Tool

    Science.gov (United States)

    Welch, Jeanie M.

    2005-01-01

    This article explores the potential and reality of using the academic library Web site to market library resources and services, for fundraising, and to market special events. It explores such issues as the placement of a link to academic libraries from institutional home pages and the use of a library Web site to include links to news, exhibits,…

  17. Developing heuristics for Web communication: an introduction to this special issue

    NARCIS (Netherlands)

    van der Geest, Thea; Spyridakis, Jan H.

    2000-01-01

    This article describes the role of heuristics in the Web design process. The five sets of heuristics that appear in this issue are also described, as well as the research methods used in their development. The heuristics were designed to help designers and developers of Web pages or sites to

  18. Corporate Writing in the Web of Postmodern Culture and Postindustrial Capitalism.

    Science.gov (United States)

    Boje, David M.

    2001-01-01

    Uses Nike as an example to explore the impact of corporate writing (in annual reports, press releases, advertisements, web pages, sponsored research, and consultant reports). Shows how the intertextual web of "Nike Writing," as it legitimates industry-wide labor and ecological practices has significant, negative consequences for academic…

  19. Learning System of Web Navigation Patterns through Hypertext Probabilistic Grammars

    Science.gov (United States)

    Cortes Vasquez, Augusto

    2015-01-01

    One issue of real interest in the area of web data mining is to capture users' activities during connection and extract behavior patterns that help define their preferences in order to improve the design of future pages adapting websites interfaces to individual users. This research is intended to provide, first of all, a presentation of the…

  20. "Così abbiamo creato il World Wide Web"

    CERN Multimedia

    Sigiani, GianLuca

    2002-01-01

    Meeting with Robert Cailliau, scientist and pioneer of the web, who, in a book, tells how his team at CERN in Geneva transformed the Internet (an instrument originally used for military purposes) into one of the most revolutionary mass-media tools ever (1 page)

  1. Why it is necessary to understand the web

    CERN Multimedia

    Rosenthal, Edward C

    2007-01-01

    The birth of the web of content, Web 2.0, Wikipedia, global collaboration, blogs, net neutrality, digital freedom and the future of the net: an interview with Robert Cailliau, inventor of the WWW together with Tim Berners-Lee. (2 pages)

  2. Wikinews interviews World Wide Web co-inventor Robert Cailliau

    CERN Multimedia

    2007-01-01

    "The name Robert Caillau may not ring a bell to the general pbulic, but his invention is the reason why you are reading this: Dr. Cailliau together with his colleague Sir Tim Berners-Lee invented the World Wide Web, making the internet accessible so it could grow from an academic tool to a mass communication medium." (9 pages)

  3. Spinning the Web: The Design of Yale's Front Door.

    Science.gov (United States)

    Callum, Robert

    1996-01-01

    The process of designing the Yale University (Connecticut) World Wide Web page "front door" is described, including its conceptualization, evolution through technological advances and attitudinal change, achievement of consensus through an interdepartmental advisory team, constituency response, and expectations about future change. (MSE)

  4. Collection Development and Diversity on CIC Academic Library Web Sites

    Science.gov (United States)

    Young, Courtney L.

    2006-01-01

    CIC library Web sites were examined to determine how diversity related to collections was represented. As diversity in collection development is frequently highlighted by broader diversity initiatives, other diversity pages on these sites were explored as well. In the majority of cases, neither diversity collection development nor diversity was…

  5. Speech and Language Interaction in a Web Theatre Environment

    NARCIS (Netherlands)

    Nijholt, Antinus; Dalsgaard, P.; Hulstijn, J.; Lee, C.H.; Heisterkamp, P.; van Hessen, Adrianus J.; Cole, R.

    1999-01-01

    We discuss research on interaction in a virtual theatre that can be accessed through Web pages. In the environment we employ several agents. The virtual theatre allows navigation through keyboard and mouse, but there is also a navigation agent which listens to typed input and spoken commands. We

  6. Web TA Production (WebTA)

    Data.gov (United States)

    US Agency for International Development — WebTA is a web-based time and attendance system that supports USAID payroll administration functions, and is designed to capture hours worked, leave used and...

  7. Web Mining for Web Image Retrieval.

    Science.gov (United States)

    Chen, Zheng; Wenyin, Liu; Zhang, Feng; Li, Mingjing; Zhang, Hongjiang

    2001-01-01

    Presents a prototype system for image retrieval from the Internet using Web mining. Discusses the architecture of the Web image retrieval prototype; document space modeling; user log mining; and image retrieval experiments to evaluate the proposed system. (AEF)

  8. Semantic Web Technologies for the Adaptive Web

    DEFF Research Database (Denmark)

    Dolog, Peter; Nejdl, Wolfgang

    2007-01-01

    Ontologies and reasoning are the key terms brought into focus by the semantic web community. Formal representation of ontologies in a common data model on the web can be taken as a foundation for adaptive web technologies as well. This chapter describes how ontologies shared on the semantic web provide conceptualization for the links which are a main vehicle to access information on the web. The subject domain ontologies serve as constraints for generating only those links which are relevant for the domain a user is currently interested in. Furthermore, user model ontologies provide additional ... are crucial to be formalized by the semantic web ontologies for the adaptive web. We use examples from an eLearning domain to illustrate the principles, which are broadly applicable to any information domain on the web.

  9. DESIGNING A HORIZONTAL WEB PAGE : Virtuaalikärsämäki as an example

    OpenAIRE

    Tiri, Erkki

    2012-01-01

    The topic of this thesis was to design and build a web page for the municipality of Kärsämäki that would present what is happening in the municipality in a new way. Virtuaalikärsämäki is designed to work in several browsers. The thesis examined how a site could be designed to work horizontally instead of the normal vertical layout. The result was a functional, visually appealing web site that pleased the client.

  10. Web-based Logbook System for EAST Experiments

    International Nuclear Information System (INIS)

    Yang Fei; Xiao Bingjia

    2010-01-01

    Implementation of a web-based logbook system on EAST is introduced, which can store the comments for the experiments into a database and access the documents via various web browsers. A three-tier software architecture and asynchronous access technology are adopted to make the system more effective. Authorized users can view the information of the real-time discharge, comments from others and signal plots; add, delete, or revise their own comments; search signal data or comments under complicated search conditions; and collect relevant information and output it to an Excel file. The web pages are updated automatically after a new discharge is completed, without a manual refresh.
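
    The automatic update could be sketched with asynchronous polling; the endpoint, JSON fields and element ids below are assumptions, not the EAST implementation:

        async function refreshLogbook() {
          const res = await fetch('/logbook/latest-shot');
          const shot = await res.json();
          const shown = document.getElementById('shot-number').textContent;
          if (String(shot.number) !== shown) {
            // A new discharge has completed: update the page in place.
            document.getElementById('shot-number').textContent = shot.number;
            document.getElementById('comments').textContent = shot.comments;
          }
        }

        setInterval(refreshLogbook, 10000); // check every 10 s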

  11. Web-based Logbook System for EAST Experiments

    Science.gov (United States)

    Yang, Fei; Xiao, Bingjia

    2010-10-01

    Implementation of a web-based logbook system on EAST is introduced, which can store the comments for the experiments into a database and access the documents via various web browsers. A three-tier software architecture and asynchronous access technology are adopted to make the system more effective. Authorized users can view the information of the real-time discharge, comments from others and signal plots; add, delete, or revise their own comments; search signal data or comments under complicated search conditions; and collect relevant information and output it to an Excel file. The web pages are updated automatically after a new discharge is completed, without a manual refresh.

  12. Sustainable Materials Management (SMM) Web Academy Webinar: Building Collection Infrastructure for Composting: Success in the Greater Worcester, Massachusetts Area

    Science.gov (United States)

    This is a webinar page for the Sustainable Management of Materials (SMM) Web Academy webinar titled Let’s WRAP (Wrap Recycling Action Program): Best Practices to Boost Plastic Film Recycling in Your Community

  13. Web Mining: Machine Learning for Web Applications.

    Science.gov (United States)

    Chen, Hsinchun; Chau, Michael

    2004-01-01

    Presents an overview of machine learning research and reviews methods used for evaluating machine learning systems. Ways that machine-learning algorithms were used in traditional information retrieval systems in the "pre-Web" era are described, and the field of Web mining and how machine learning has been used in different Web mining…

  14. Semantic web for dummies

    CERN Document Server

    Pollock, Jeffrey T

    2009-01-01

    Semantic Web technology is already changing how we interact with data on the Web. By connecting random information on the Internet in new ways, Web 3.0, as it is sometimes called, represents an exciting online evolution. Whether you're a consumer doing research online, a business owner who wants to offer your customers the most useful Web site, or an IT manager eager to understand Semantic Web solutions, Semantic Web For Dummies is the place to start! It will help you: know how the typical Internet user will recognize the effects of the Semantic Web; explore all the benefits the data Web offers t

  15. A cohesive page ranking and depth-first crawling scheme for ...

    African Journals Online (AJOL)

    The quality of the result collections displayed to users of web search engines today still remains a mirage with regard to the factors used in their ranking process. In this work we combined the page-rank crawling method and the depth-first crawling method to create a hybridized method. Our major objective is to unify into one ...

  16. Cardiology Patient Page: Electronic Cigarettes

    Science.gov (United States)

    ... American Heart Association Cardiology Patient Page: Electronic Cigarettes. Rachel A. Grana, Pamela M. Ling, Neal Benowitz, Stanton ... 129: e490-e492, originally published May 12, 2014. Rachel A. Grana, from the Center for Tobacco Control ...

  17. Web-based surveillance of public information needs for informing preconception interventions.

    Directory of Open Access Journals (Sweden)

    Angelo D'Ambrosio

    Full Text Available The risk of adverse pregnancy outcomes can be minimized through the adoption of healthy lifestyles before pregnancy by women of childbearing age. Initiatives for promotion of preconception health may be difficult to implement. The Internet can be used to build tailored health interventions through identification of the public's information needs. To this aim, we developed a semi-automatic web-based system for monitoring Google searches, web pages and activity on social networks, regarding preconception health. Based on the American College of Obstetricians and Gynecologists guidelines and on the actual search behaviors of Italian Internet users, we defined a set of keywords targeting preconception care topics. Using these keywords, we analyzed the usage of the Google search engine and identified web pages containing preconception care recommendations. We also monitored how the selected web pages were shared on social networks. We analyzed discrepancies between searched and published information and the sharing pattern of the topics. We identified 1,807 Google search queries which generated a total of 1,995,030 searches during the study period. Less than 10% of the reviewed pages contained preconception care information, and in 42.8% the information was consistent with ACOG guidelines. Facebook was the most used social network for sharing. Nutrition, Chronic Diseases and Infectious Diseases were the most published and searched topics. Regarding Genetic Risk and Folic Acid, a high search volume was not associated with a high web page production, while Medication pages were more frequently published than searched. Vaccinations elicited high sharing although web page production was low; this effect was quite variable in time. Our study represents a resource to prioritize communication on specific topics on the web, to address misconceptions, and to tailor interventions to specific populations.

  18. Extracting Related Words from Anchor Text Clusters by Focusing on the Page Designer's Intention

    Science.gov (United States)

    Liu, Jianquan; Chen, Hanxiong; Furuse, Kazutaka; Ohbo, Nobuo

    Approaches that extract related words (terms) by co-occurrence sometimes work poorly. Two words frequently co-occurring in the same documents are considered related, yet they may not be related at all, because they may share no common meaning or similar semantics. We address this problem by considering the page designer's intention and propose a new model to extract related words. Our approach is based on the idea that web page designers usually make correlative hyperlinks appear close together on the browser. We developed a browser-based crawler to collect "geographically" near hyperlinks; then, by clustering these hyperlinks based on their pixel coordinates, we extract related words that reflect the designer's intention. Experimental results show that our method can represent the intention of the web page designer with extremely high precision. Moreover, the experiments indicate that our extraction method can obtain related words with high average precision.
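
    The core step can be sketched in the browser: collect each hyperlink's rendered pixel coordinates and group links that sit close together (the 100 px threshold is an arbitrary illustrative choice):

        const links = [...document.querySelectorAll('a')].map((a) => {
          const box = a.getBoundingClientRect();
          return { text: a.textContent.trim(), x: box.left, y: box.top };
        });

        const clusters = [];
        for (const link of links) {
          const near = clusters.find((c) => c.some((l) =>
            Math.hypot(l.x - link.x, l.y - link.y) < 100));
          if (near) near.push(link); else clusters.push([link]);
        }
        // Words sharing a cluster are candidate "related words".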

  19. Development of a Web-Based Distributed Interactive Simulation (DIS) Environment Using JavaScript

    Science.gov (United States)

    2014-09-01

    curve of developing web applications. HTML5 and JavaScript are easy to learn and practice programming languages, and developers do not require ... IDE) for developing desktop, mobile and web applications with Java, C++, HTML5, JavaScript and more. b. Framework: The DIS implementation of ... Canvas, Scene, Camera, and Own Entities: HTML5 has a “canvas” element for drawing 2D and 3D graphics on web pages, and the three.js library provides
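
    A minimal three.js sketch of the scene/camera/entity setup the thesis describes (the geometry and positions are illustrative; a DIS client would move the entity according to incoming Entity State PDUs):

        import * as THREE from 'three';

        const scene = new THREE.Scene();
        const camera = new THREE.PerspectiveCamera(
          75, innerWidth / innerHeight, 0.1, 1000);
        camera.position.z = 5;

        const renderer = new THREE.WebGLRenderer();
        renderer.setSize(innerWidth, innerHeight);
        document.body.appendChild(renderer.domElement);

        // One "own entity" drawn as a simple box.
        const entity = new THREE.Mesh(
          new THREE.BoxGeometry(1, 1, 1),
          new THREE.MeshBasicMaterial({ color: 0x44aa88 }));
        scene.add(entity);

        (function animate() {
          requestAnimationFrame(animate);
          renderer.render(scene, camera);
        })();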

  20. A comparison of the websites of Prague city districts

    OpenAIRE

    Hain, Tomáš

    2009-01-01

    This thesis is aimed at comparing the web pages of selected districts of Prague. Its benefit consists in determining the quality of web pages at this specific level of the public sector. Moreover, each test is concluded with a listing of major shortcomings and suggestions for improvement. Last but not least, the relationship between usability testing and heuristic testing is examined at the end of this thesis.