WorldWideScience

Sample records for submission web page

  1. Creating Web Pages Simplified

    CERN Document Server

    Wooldridge, Mike

    2011-01-01

    The easiest way to learn how to create a Web page for your family or organization. Do you want to share photos and family lore with relatives far away? Have you been put in charge of communication for your neighborhood group or nonprofit organization? A Web page is the way to get the word out, and Creating Web Pages Simplified offers an easy, visual way to learn how to build one. Full-color illustrations and concise instructions take you through all phases of Web publishing, from laying out and formatting text to enlivening pages with graphics and animation. This easy-to-follow visual guide sho...

  2. Librarians' Personal Web Pages: An Analysis.

    Science.gov (United States)

    Haines, Annette

    1999-01-01

    Describes a study of academic librarians' personal Web pages. Results of an email survey showed that most Web sites were developed voluntarily, usually as an extension of the institutional library's Web page, and that librarians who were provided with guidelines produced higher quality Web pages. (Author/LRW)

  3. Color Assessment and Transfer for Web Pages

    OpenAIRE

    Wu, Ou

    2012-01-01

    Colors play a particularly important role in both designing and accessing Web pages. A well-designed color scheme improves Web pages' visual aesthetic and facilitates user interactions. As far as we know, existing color assessment studies focus on images; studies on color assessment and editing for Web pages are rare. This paper investigates color assessment for Web pages based on existing online color theme-rating data sets and applies this assessment to Web color edit. This study consists o...

  4. Interstellar Initiative Web Page Design

    Science.gov (United States)

    Mehta, Alkesh

    1999-01-01

    This summer at NASA/MSFC, I have contributed to two projects: Interstellar Initiative Web Page Design and Lenz's Law Relative Motion Demonstration. In the Web Design Project, I worked on an Outline. The Web Design Outline was developed to provide a foundation for a Hierarchy Tree Structure. The Outline would help design a Website information base for future and near-term missions. The Website would give in-depth information on Propulsion Systems and Interstellar Travel. The Lenz's Law Relative Motion Demonstrator is discussed in this volume by Russell Lee.

  5. Stochastic analysis of web page ranking

    NARCIS (Netherlands)

    Volkovich, Y.

    2009-01-01

    Today, the study of the World Wide Web is one of the most challenging subjects. In this work we consider the Web from a probabilistic point of view. We analyze the relations between various characteristics of the Web. In particular, we are interested in the Web properties that affect the Web page...

  6. Finding Specification Pages from the Web

    Science.gov (United States)

    Yoshinaga, Naoki; Torisawa, Kentaro

    This paper presents a method of finding a specification page on the Web for a given object (e.g., ``Ch. d'Yquem'') and its class label (e.g., ``wine''). A specification page for an object is a Web page which gives concise attribute-value information about the object (e.g., ``county''-``Sauternes'') in well-formatted structures. A simple unsupervised method using layout and symbolic decoration cues was applied to a large number of Web pages to acquire candidate attributes for each class (e.g., ``county'' for the class ``wine''). We then filter out irrelevant words from the putative attributes through an author-aware scoring function that we call site frequency. We used the acquired attributes to select a representative specification page for a given object from the Web pages retrieved by a normal search engine. Experimental results revealed that our system greatly outperformed the normal search engine in terms of specification retrieval.

  7. Integration of the CMS web interface to a Web page

    OpenAIRE

    Čebašek, Jure

    2011-01-01

    The aim of the thesis is the integration of CMS web interface into an existing static HTML page that allows users to edit websites without knowledge of web languages and databases. At first, we present the CMS web interface, basic concepts and functions that enable such systems. We continue with a description of technologies, techniques and tools to achieve the desired solution of a given problem. The core of the thesis is the development of a web page using a CMS web interface. In the ...

  8. CERN Web Pages Receive a Makeover

    CERN Multimedia

    2001-01-01

    A sudden allergic reaction to the colour turquoise? Never fear, from Monday 2 April you'll be able to click in the pink box at the top of the CERN users' welcome page to go to the all-new welcome page, which is simpler and better organized. CERN's new-look intranet is the first step in a complete Web-makeover being applied by the Web Public Education (WPE) group of ETT Division. The transition will be progressive, to allow users to familiarize themselves with the new pages. Until 17 April, CERN users will still get the familiar turquoise welcome page by default, with the new pages operating in parallel. From then on, the default will switch to the new pages, with the old ones being finally switched off on 25 May. Some 400 pages have received the makeover treatment. For more information about the changes to your Web, take a look at: http://www.cern.ch/CERN/NewUserPages/ Happy surfing!

  9. Classroom Web Pages: A "How-To" Guide for Educators.

    Science.gov (United States)

    Fehling, Eric E.

    This manual provides teachers with very little or no technology experience a step-by-step guide to developing the necessary skills for creating a class Web page. The first part of the manual is devoted to the thought processes preceding the actual creation of the Web page. These include looking at other Web pages, deciding what should be…

  10. Exploiting link structure for web page genre identification

    KAUST Repository

    Zhu, Jia

    2015-07-07

    As the World Wide Web develops at an unprecedented pace, identifying web page genre has recently attracted increasing attention because of its importance in web search. A common approach for identifying genre is to use textual features that can be extracted directly from a web page, that is, On-Page features. The extracted features are subsequently inputted into a machine learning algorithm that will perform classification. However, these approaches may be ineffective when the web page contains limited textual information (e.g., the page is full of images). In this study, we address genre identification of web pages under the aforementioned situation. We propose a framework that uses On-Page features while simultaneously considering information in neighboring pages, that is, the pages that are connected to the original page by backward and forward links. We first introduce a graph-based model called GenreSim, which selects an appropriate set of neighboring pages. We then construct a multiple classifier combination module that utilizes information from the selected neighboring pages and On-Page features to improve performance in genre identification. Experiments are conducted on well-known corpora, and favorable results indicate that our proposed framework is effective, particularly in identifying web pages with limited textual information. © 2015 The Author(s)
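The abstract above does not give GenreSim's actual equations; as a rough, hypothetical sketch of the multiple-classifier-combination idea (an on-page classifier's scores merged with scores from selected neighboring pages by weighted voting; all names, scores, and weights below are invented), it might look like:

```python
# Hypothetical sketch: combine an On-Page genre classifier's scores with
# scores from classifiers run on selected neighboring pages (weighted vote).
# The real GenreSim model and its weights are not given in the abstract.
def combine_genre_votes(on_page_scores, neighbor_scores, neighbor_weight=0.5):
    """on_page_scores: {genre: score} for the target page.
    neighbor_scores: list of {genre: score}, one per selected neighbor."""
    combined = dict(on_page_scores)
    for scores in neighbor_scores:
        for genre, score in scores.items():
            # Neighbors contribute a shared, down-weighted vote.
            combined[genre] = combined.get(genre, 0.0) + \
                neighbor_weight * score / max(len(neighbor_scores), 1)
    return max(combined, key=combined.get)

page = {"blog": 0.2, "shop": 0.4}            # weak on-page evidence (image-heavy page)
neighbors = [{"shop": 0.9}, {"shop": 0.7, "blog": 0.1}]
print(combine_genre_votes(page, neighbors))  # -> shop
```

The point of the sketch is only that neighbor evidence can break a tie the on-page features cannot.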

  11. gWidgetsWWW: Creating Interactive Web Pages within R

    Directory of Open Access Journals (Sweden)

    John Verzani

    2012-06-01

    The gWidgetsWWW package provides a framework for easily developing interactive web pages from within R. It uses the API developed in the gWidgets programming interface to specify the layout of the controls and the relationships between them. The web pages may be served locally under R's built-in web server for help pages or from an rApache-enabled web server.

  12. Implementing an Online Writable Web Page System and Its Applications

    Science.gov (United States)

    Nishi, Kentaro; Shintani, Toramatsu; Matsuo, Tokuro; Tashiro, Noriharu; Ito, Takayuki

    WWW has developed rapidly, and it is becoming easy to make personal web sites. In general, we create and edit web pages by using HTML authoring software or writing HTML source code in a text editor. Then, we need to upload them to a web server. When we make and build our own web pages with existing tools, it takes a lot of time and effort to complete the necessary tasks. In this paper, we propose a home page authoring support system in which we directly edit and make web pages in a web browser. Current experimental results demonstrate that our system can effectively support novices in creating their web pages. Also, we show two real-world applications that effectively utilize our system.

  13. Recognition of pornographic web pages by classifying texts and images.

    Science.gov (United States)

    Hu, Weiming; Wu, Ou; Chen, Zhouyao; Fu, Zhouyu; Maybank, Steve

    2007-06-01

    With the rapid development of the World Wide Web, people benefit more and more from the sharing of information. However, Web pages with obscene, harmful, or illegal content can be easily accessed. It is important to recognize such unsuitable, offensive, or pornographic Web pages. In this paper, a novel framework for recognizing pornographic Web pages is described. A C4.5 decision tree is used to divide Web pages, according to content representations, into continuous text pages, discrete text pages, and image pages. These three categories of Web pages are handled, respectively, by a continuous text classifier, a discrete text classifier, and an algorithm that fuses the results from the image classifier and the discrete text classifier. In the continuous text classifier, statistical and semantic features are used to recognize pornographic texts. In the discrete text classifier, the naive Bayes rule is used to calculate the probability that a discrete text is pornographic. In the image classifier, the object's contour-based features are extracted to recognize pornographic images. In the text and image fusion algorithm, the Bayes theory is used to combine the recognition results from images and texts. Experimental results demonstrate that the continuous text classifier outperforms the traditional keyword-statistics-based classifier, the contour-based image classifier outperforms the traditional skin-region-based image classifier, the results obtained by our fusion algorithm outperform those by either of the individual classifiers, and our framework can be adapted to different categories of Web pages.
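The discrete-text step relies on the naive Bayes rule, i.e. P(class | words) ∝ P(class) · Π P(word | class). A minimal sketch of that calculation in log space (all word probabilities below are invented for illustration; the paper's actual features and training data differ):

```python
import math

# Minimal naive Bayes sketch: compare log P(class) + sum(log P(word | class))
# across classes. All probabilities here are invented for illustration.
def log_posterior(words, prior, word_probs, default=1e-6):
    score = math.log(prior)
    for w in words:
        # Unseen words fall back to a small default probability.
        score += math.log(word_probs.get(w, default))
    return score

porn_probs = {"xxx": 0.05, "free": 0.02, "video": 0.03}
clean_probs = {"xxx": 1e-5, "free": 0.01, "video": 0.02}
words = ["free", "xxx", "video"]

porn_score = log_posterior(words, 0.1, porn_probs)
clean_score = log_posterior(words, 0.9, clean_probs)
print("pornographic" if porn_score > clean_score else "clean")
```

Working in log space avoids the numeric underflow that multiplying many small probabilities would cause.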

  14. An Analysis of Academic Library Web Pages for Faculty

    Science.gov (United States)

    Gardner, Susan J.; Juricek, John Eric; Xu, F. Grace

    2008-01-01

    Web sites are increasingly used by academic libraries to promote key services and collections to teaching faculty. This study analyzes the content, location, language, and technological features of fifty-four academic library Web pages designed especially for faculty to expose patterns in the development of these pages.

  15. Categorization of web pages - Performance enhancement to search engine

    Digital Repository Service at National Institute of Oceanography (India)

    Lakshminarayana, S.

    ...Weight (PFW) for a web page and grouped for categorization. Using these experimental results, we classified the web pages into four different groups: (1) simple, (2) axis-shifted, (3) fluctuated, and (4) oscillating types. Implication in development...

  16. Creating Web Pages: Is Anyone Considering Visual Literacy?

    Science.gov (United States)

    Clark, Barbara I.; And Others

    The purpose of this study was: (1) to look at the design, aesthetics, and functionality of educational and noneducational Web pages from the perspective of visual literacy; and (2) to evaluate printed and online materials that are used as resources by professionals and nonprofessionals to create these Web pages. These "how to" manuals…

  17. Web page sorting algorithm based on query keyword distance relation

    Science.gov (United States)

    Yang, Han; Cui, Hong Gang; Tang, Hao

    2017-08-01

    In order to optimize page ranking, we propose clustering query keywords according to the distance relationships between their occurrences in a web page, and we convert this into a degree of aggregation of the search keywords on the page. Building on the PageRank algorithm, a clustering-degree factor for the query keywords is added so that it can participate in the quantitative calculation. This paper thus proposes an improved PageRank algorithm based on the distance relations between search keywords. The experimental results show the feasibility and effectiveness of the method.
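The abstract gives no formulas, but the baseline it builds on is standard PageRank power iteration. A minimal sketch, with an optional per-page weight standing in for the paper's keyword-aggregation factor (the `keyword_factor` re-weighting is purely illustrative, not the authors' formula):

```python
def pagerank(links, damping=0.85, iters=50, keyword_factor=None):
    """links: {page: [pages it links to]}. keyword_factor: optional {page: weight}
    standing in for the paper's query-keyword aggregation degree (illustrative)."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iters):
        # Base teleportation mass, then distribute each page's rank to its out-links.
        new = {p: (1 - damping) / n for p in pages}
        for p, outs in links.items():
            share = rank[p] / len(outs) if outs else 0.0
            for q in outs:
                new[q] += damping * share
        if keyword_factor:  # hypothetical bias by keyword-aggregation weight
            total = sum(new[p] * keyword_factor.get(p, 1.0) for p in pages)
            new = {p: new[p] * keyword_factor.get(p, 1.0) / total for p in pages}
        rank = new
    return rank

graph = {"a": ["b"], "b": ["a", "c"], "c": ["a"]}
ranks = pagerank(graph)
print(max(ranks, key=ranks.get))  # "a" receives the most incoming weight
```

Any page-level signal, such as the keyword aggregation degree above, can be folded in this way without changing the iteration itself.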

  18. Metadata Schema Used in OCLC Sampled Web Pages

    Directory of Open Access Journals (Sweden)

    Fei Yu

    2005-12-01

    The tremendous growth of Web resources has made information organization and retrieval more and more difficult. As one approach to this problem, metadata schemas have been developed to characterize Web resources. However, many questions have been raised about the use of metadata schemas, such as: Which metadata schemas have been used on the Web? How do they describe Web-accessible information? What is the distribution of these metadata schemas among Web pages? Do certain schemas dominate the others? To address these issues, this study analyzed 16,383 Web pages with meta tags extracted from 200,000 OCLC-sampled Web pages in 2000. It found that only 8.19% of the Web pages used meta tags; description tags, keyword tags, and Dublin Core tags were the only three schemas used in the Web pages. This article reveals the use of meta tags in terms of their function distribution, syntax characteristics, granularity of the Web pages, and the length and word-number distributions of both description and keyword tags.
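Collecting meta tags as this study describes can be sketched with Python's standard-library HTML parser (this is not the study's own tooling, just a minimal illustration of the extraction step):

```python
from html.parser import HTMLParser

class MetaTagCollector(HTMLParser):
    """Collects <meta name="..." content="..."> pairs from a page."""
    def __init__(self):
        super().__init__()
        self.meta = {}

    def handle_starttag(self, tag, attrs):
        # html.parser lower-cases tag and attribute names for us.
        if tag == "meta":
            d = dict(attrs)
            name = d.get("name")
            if name:
                self.meta[name.lower()] = d.get("content", "")

page = """<html><head>
<meta name="description" content="A sample page">
<meta name="keywords" content="web, metadata">
<meta name="DC.Title" content="Sample">
</head><body></body></html>"""

p = MetaTagCollector()
p.feed(page)
print(sorted(p.meta))  # ['dc.title', 'description', 'keywords']
```

Run over a crawl, the collected keys give exactly the schema-distribution counts (description, keywords, Dublin Core `DC.*`) the study reports on.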

  19. Migrating Multi-page Web Applications to Single-page AJAX Interfaces

    NARCIS (Netherlands)

    Mesbah, A.; Van Deursen, A.

    2006-01-01

    Recently, a new web development technique for creating interactive web applications, dubbed AJAX, has emerged. In this new model, the single-page web interface is composed of individual components which can be updated/replaced independently. With the rise of AJAX web applications classical...

  20. Adaptive, Multilingual Named Entity Recognition in Web Pages

    OpenAIRE

    Petasis, Georgios; Karkaletsis, Vangelis; Grover, Claire; Hachey, Ben; Pazienza, Maria Teresa; Vindigni, Michele; Coch, José

    2004-01-01

    Most of the information on the Web today is in the form of HTML documents, which are designed for presentation purposes and not for machine understanding and reasoning. Existing web extraction systems require a lot of human involvement for maintenance due to changes to targeted web sites and for adaptation to new web sites or even to new domains. This paper presents the adaptive, multilingual named entity recognition and classification (NERC) technologies developed for processing web pages in...

  1. A teen's guide to creating web pages and blogs

    CERN Document Server

    Selfridge, Peter; Osburn, Jennifer

    2008-01-01

    Whether using a social networking site like MySpace or Facebook or building a Web page from scratch, millions of teens are actively creating a vibrant part of the Internet. This is the definitive teen's guide to publishing exciting web pages and blogs on the Web. This easy-to-follow guide shows teenagers how to: create great MySpace and Facebook pages; build their own unique, personalized Web site; share the latest news with exciting blogging ideas; protect themselves online with cyber-safety tips. Written by a teenager for other teens, this book leads readers step-by-step through the basics of web and blog design. In this book, teens learn to go beyond clicking through web sites to learning winning strategies for web design and great ideas for writing blogs that attract attention and readership.

  2. Web Pages for Your Classroom The EASY Way!

    CERN Document Server

    Mccorkle, Sandra

    2003-01-01

    A practical how-to guide, this book provides the classroom teacher or librarian with all of the tools necessary for creating Web pages for student use. Useful templates (a CD-ROM is included for easy use) and clear, logical instructions guide you in the creation of pages that students can later use for research or other types of projects that familiarize students with the power and usefulness of the Web. Gaining this skill allows you the flexibility of tailoring Web pages to students' specific needs and being sure of the quality of resources students are accessing. This book is indispensable for...

  3. Digital libraries and World Wide Web sites and page persistence.

    Directory of Open Access Journals (Sweden)

    Wallace Koehler

    1999-01-01

    Web pages and Web sites, some argue, can either be collected as elements of digital or hybrid libraries, or, as others would have it, the WWW is itself a library. We begin with the assumption that Web pages and Web sites can be collected and categorized. The paper explores the proposition that the WWW constitutes a library. We conclude that the Web is not a digital library. However, its component parts can be aggregated and included as parts of digital library collections. These, in turn, can be incorporated into "hybrid libraries," libraries with both traditional and digital collections. Material on the Web can be organized and managed. Native documents can be collected in situ, disseminated, distributed, catalogued, indexed, and controlled, in traditional library fashion. The Web therefore is not a library, but material for library collections is selected from the Web. That said, the Web and its component parts are dynamic. Web documents undergo two kinds of change. The first type, the type addressed in this paper, is "persistence": the existence or disappearance of Web pages and sites; in a word, the lifecycle of Web documents. "Intermittence" is a variant of persistence, defined as the disappearance but reappearance of Web documents. At any given time, about five percent of Web pages are intermittent, which is to say they are gone but will return. Over time a Web collection erodes. Based on a 120-week longitudinal study of a sample of Web documents, it appears that the half-life of a Web page is somewhat less than two years and the half-life of a Web site is somewhat more than two years. That is to say, an unweeded Web document collection created two years ago would contain the same number of URLs, but only half of those URLs would point to content. The second type of change Web documents experience is change in Web page or Web site content. Again based on the Web document samples, very nearly all Web pages and sites undergo some...
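The half-life figures above imply a simple exponential-decay model: assuming a half-life h, the fraction of URLs in an unweeded collection that still resolve after t years is 0.5**(t/h). A one-function sketch, using the study's rough two-year half-life for pages:

```python
# Exponential-decay sketch of URL survival, assuming the roughly two-year
# half-life for individual Web pages reported in the study above.
def surviving_fraction(years, half_life=2.0):
    return 0.5 ** (years / half_life)

for t in (2, 4, 6):
    print(f"after {t} years: {surviving_fraction(t):.1%} of URLs still resolve")
```

So a collection built two years ago retains about half its working links, and one built six years ago only about an eighth, which is the "erosion" the abstract describes.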

  4. Menu Positioning on Web Pages. Does it Matter?

    OpenAIRE

    Dr Pietro Murano; Lomas, Tracey J.

    2015-01-01

    This paper concerns an investigation by the authors into the efficiency and user opinions of menu positioning in web pages. While the idea and use of menus on web pages are not new, the authors feel there is not enough empirical evidence to help designers choose an appropriate menu position. We therefore present the design and results of an empirical experiment, investigating the usability of menu positioning ...

  5. Evaluating Information Quality: Hidden Biases on the Children's Web Pages

    Science.gov (United States)

    Kurubacak, Gulsun

    2006-01-01

    As global digital communication continues to flourish, the Children's Web pages become more critical for children to realize not only the surface but also breadth and deeper meanings in presenting these milieus. These pages not only are very diverse and complex but also enable intense communication across social, cultural and political…

  6. A thorough spring-clean for CERN's Web pages

    CERN Multimedia

    2001-01-01

    This coming Tuesday will see the unveiling of CERN's new user pages on the Web. Their simplified layout and design will make everybody's lives a whole lot easier. Stand by for Tuesday 17 April when, as announced in the Weekly Bulletin of 2 April (n°14/2001), the newly-designed users' welcome page will be hitting our screens as the default CERN home page. But don't worry, if you've got the blues for the good old blue-green home page it's still in service and, to ensure a smooth transition, will be maintained in parallel until 25 May. But in all likelihood you'll be quickly won over by the new-look pages, which are so much simpler to use. Welcome to the new Web! The aim of this revamp, led by the WPE (Web Public Education) group, is to simplify and introduce a more logical hierarchy into the menus and welcome pages on CERN's Intranet. In a second stage, the 'General Public' pages will get a similar makeover. The fact is that the number of links on the user pages, and in particular the welcome page...

  7. Is Domain Highlighting Actually Helpful in Identifying Phishing Web Pages?

    Science.gov (United States)

    Xiong, Aiping; Proctor, Robert W; Yang, Weining; Li, Ninghui

    2017-06-01

    To evaluate the effectiveness of domain highlighting in helping users identify whether Web pages are legitimate or spurious. As a component of the URL, a domain name can be overlooked. Consequently, browsers highlight the domain name to help users identify which Web site they are visiting. Nevertheless, few studies have assessed the effectiveness of domain highlighting, and the only formal study confounded highlighting with instructions to look at the address bar. We conducted two phishing detection experiments. Experiment 1 was run online: Participants judged the legitimacy of Web pages in two phases. In Phase 1, participants were to judge the legitimacy based on any information on the Web page, whereas in Phase 2, they were to focus on the address bar. Whether the domain was highlighted was also varied. Experiment 2 was conducted similarly but with participants in a laboratory setting, which allowed tracking of fixations. Participants differentiated the legitimate and fraudulent Web pages better than chance. There was some benefit of attending to the address bar, but domain highlighting did not provide effective protection against phishing attacks. Analysis of eye-gaze fixation measures was in agreement with the task performance, but heat-map results revealed that participants' visual attention was attracted by the highlighted domains. Failure to detect many fraudulent Web pages even when the domain was highlighted implies that users lacked knowledge of Web page security cues or how to use those cues. Potential applications include development of phishing prevention training incorporating domain highlighting with other methods to help users identify phishing Web pages.

  8. Deep Neural Networks for Web Page Information Extraction

    OpenAIRE

    Gogar, Tomas; Hubacek, Ondrej; Sedivy, Jan

    2016-01-01

    Part 3: Ontology-Web and Social Media AI Modeling (OWESOM); International audience; Web wrappers are systems for extracting structured information from web pages. Currently, wrappers need to be adapted to a particular website template before they can start the extraction process. In this work we present a new method, which uses convolutional neural networks to learn a wrapper that can extract information from previously unseen templates. Therefore, this wrapper does not need any site-specific...

  9. Beginning ASP.NET Web Pages with WebMatrix

    CERN Document Server

    Brind, Mike

    2011-01-01

    Learn to build dynamic web sites with Microsoft WebMatrix. Microsoft WebMatrix is designed to make developing dynamic ASP.NET web sites much easier. This complete Wrox guide shows you what it is, how it works, and how to get the best from it right away. It covers all the basic foundations and also introduces HTML, CSS, and Ajax using jQuery, giving beginning programmers a firm foundation for building dynamic web sites. Examines how WebMatrix is expected to become the new recommended entry-level tool for developing web sites using ASP.NET. Arms beginning programmers, students, and educators with al...

  10. Web pages of Slovenian public libraries

    Directory of Open Access Journals (Sweden)

    Silva Novljan

    2002-01-01

    Libraries should offer their patrons web sites that establish the unmistakable concept of the (public) library, a concept that cannot be confused with other information brokers and services available on the Internet, but that, within this framework, shows a diversity which directs patrons to other (public) libraries. This can be achieved through reliability, quality of information and services, and safety of use. When this is achieved, patrons regard library web sites as important reference sources deserving continuous use for obtaining relevant information, and libraries can justify investment in the development and maintenance of their web sites by the number of visits and by patron satisfaction. The presented research, made on a sample of Slovene public libraries' web sites, determines how the libraries fulfil their purpose and role, as well as the given professional recommendations, in web site design. The results uncover the libraries' striving for the modernisation of their functions: major attention is directed to the presentation of the classic library and its activities, less to the expansion of available contents and electronic sources. Their diversity is significant in that it is not a result of patrons' needs, but rather a consequence of improvisation and too little attention to the selection, availability, organisation and presentation of different kinds of information and services on the web sites. Based on the analysis of a common concept of the public library web site, certain activities for improving the existing state of affairs are presented in the paper.

  11. Identification of Malicious Web Pages by Inductive Learning

    Science.gov (United States)

    Liu, Peishun; Wang, Xuefang

    Malicious web pages have become an increasing threat to computer systems in recent years. Traditional anti-virus techniques typically focus on detecting the static signatures of malware and are ineffective against these new threats because they cannot deal with zero-day attacks. In this paper, a novel classification method for detecting malicious web pages is presented. The method applies generalization and specialization of attack patterns based on inductive learning, which can be used to update and expand the knowledge database. An attack pattern is established from an example and generalized by inductive learning, so that it can be used to detect unknown attacks whose behavior is similar to the example.

  12. Identify Web-page Content meaning using Knowledge based System for Dual Meaning Words

    OpenAIRE

    Sinha, Sukanta; Dattagupta, Rana; Mukhopadhyay, Debajyoti

    2012-01-01

    The meaning of Web-page content plays a big role in producing search results from a search engine. In most cases the Web-page meaning is stored in the title or meta-tag area, but those meanings do not always match the Web-page content. To overcome this situation, we need to go through the Web-page content to identify the Web-page meaning. Where the Web-page content holds dual-meaning words, it is really difficult to identify the meaning of the Web page. In this paper, we are introdu...

  13. What Snippets Say About Pages in Federated Web Search

    OpenAIRE

    DEMEESTER, Thomas; Nguyen, Dong-Phuong; Trieschnigg, Rudolf Berend; Develder, Chris; Hiemstra, Djoerd; Hou, Yuexian; Nie, Jian-Yun; Sun, Le; Wang, Bo; Zhang, Peng

    2012-01-01

    What is the likelihood that a Web page is considered relevant to a query, given the relevance assessment of the corresponding snippet? Using a new federated IR test collection that contains search results from over a hundred search engines on the internet, we are able to investigate such research questions from a global perspective. Our test collection covers the main Web search engines like Google, Yahoo!, and Bing, as well as a number of smaller search engines dedicated to multimedia, shopp...

  14. In-degree and pageRank of web pages: Why do they follow similar power laws?

    NARCIS (Netherlands)

    Litvak, Nelli; Scheinhardt, Willem R.W.; Volkovich, Y.

    The PageRank is a popularity measure designed by Google to rank Web pages. Experiments confirm that the PageRank obeys a 'power law' with the same exponent as the In-Degree. This paper presents a novel mathematical model that explains this phenomenon. The relation between the PageRank and In-Degree...

  15. In-Degree and PageRank of web pages: why do they follow similar power laws?

    NARCIS (Netherlands)

    Litvak, Nelli; Scheinhardt, Willem R.W.; Volkovich, Y.

    2009-01-01

    PageRank is a popularity measure designed by Google to rank Web pages. Experiments confirm that PageRank values obey a power law with the same exponent as In-Degree values. This paper presents a novel mathematical model that explains this phenomenon. The relation between PageRank and In-Degree is...
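A power law here means the tail probability behaves like P(X > x) ≈ C·x^(−α), with the same exponent α for both PageRank and In-Degree. One standard way to estimate such an exponent from data is the Hill estimator, sketched below on synthetic Pareto-distributed values (the exponent 1.1 is chosen as a typical Web in-degree value, for illustration only):

```python
import math, random

def hill_estimator(values, k):
    """Hill estimate of the tail exponent alpha from the k largest values."""
    top = sorted(values, reverse=True)[:k + 1]
    # Average log-excess of the k largest values over the (k+1)-th largest.
    logs = [math.log(v / top[k]) for v in top[:k]]
    return k / sum(logs)

random.seed(0)
alpha = 1.1  # illustrative exponent, typical for Web in-degree
data = [random.paretovariate(alpha) for _ in range(100_000)]
print(round(hill_estimator(data, 5000), 2))  # close to the true alpha of 1.1
```

Running the same estimator on empirical PageRank values and on In-Degree values and comparing the two estimates is exactly the kind of experiment the abstract refers to.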

  16. Web Page Design in Distance Education

    Science.gov (United States)

    Isman, Aytekin; Dabaj, Fahme; Gumus, Agah; Altinay, Fahriye; Altinay, Zehra

    2004-01-01

    Distance education is a contemporary form of education. It facilitates fast, easy delivery of information with concrete hardware and software tools. The development of high technology, the internet, and web design has made them an effective delivery system for students. Within the global perspective, even the all work…

  17. Ecosystem Food Web Lift-The-Flap Pages

    Science.gov (United States)

    Atwood-Blaine, Dana; Rule, Audrey C.; Morgan, Hannah

    2016-01-01

    In the lesson on which this practical article is based, third grade students constructed a "lift-the-flap" page to explore food webs on the prairie. The moveable papercraft focused student attention on prairie animals' external structures and how the inferred functions of those structures could support further inferences about the…

  18. What Snippets Say About Pages in Federated Web Search

    NARCIS (Netherlands)

    Demeester, Thomas; Nguyen, Dong-Phuong; Trieschnigg, Rudolf Berend; Develder, Chris; Hiemstra, Djoerd; Hou, Yuexian; Nie, Jian-Yun; Sun, Le; Wang, Bo; Zhang, Peng

    2012-01-01

    What is the likelihood that a Web page is considered relevant to a query, given the relevance assessment of the corresponding snippet? Using a new federated IR test collection that contains search results from over a hundred search engines on the internet, we are able to investigate such research...

  19. Business Systems Branch Abilities, Capabilities, and Services Web Page

    Science.gov (United States)

    Cortes-Pena, Aida Yoguely

    2009-01-01

    During the INSPIRE summer internship I acted as the Business Systems Branch Capability Owner for the Kennedy Web-based Initiative for Communicating Capabilities System (KWICC), with the responsibility of creating a portal that describes the services provided by this Branch. This project will help others achieve a clear view of the services that the Business Systems Branch provides to NASA and the Kennedy Space Center. After collecting the data through the interviews with subject matter experts and the literature in Business World and other web sites, I identified discrepancies, made the necessary corrections to the sites, and placed the information from the report into the KWICC web page.

  20. Filter for marking inappropriate content on web pages

    OpenAIRE

    Kovač, Boštjan

    2009-01-01

    Modern web sites often contain visitors' comments which are inappropriate, offensive or even violent. With appropriate tools, however, it is possible to avoid having to read such text. We developed one based on the Bayesian techniques used in spam filters. The goal was to teach the web browser to alert the user about any inappropriate content on the web page. To make the tool user friendly, we used a number of different technologies packed in a Firefox add-on. This work describes the used technolo...
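    The record above describes a comment filter built on the Bayesian techniques used in spam filters. A minimal sketch of that idea in Python, a naive Bayes log-ratio score over comment words (the training comments and the decision rule are invented for illustration; the actual add-on's implementation is not shown in the record):

```python
import math
from collections import Counter

def train(comments):
    """Count word occurrences per class ('ok' / 'bad')."""
    counts = {"ok": Counter(), "bad": Counter()}
    totals = {"ok": 0, "bad": 0}
    for text, label in comments:
        for word in text.lower().split():
            counts[label][word] += 1
            totals[label] += 1
    return counts, totals

def score_bad(text, counts, totals):
    """Log-probability ratio that a comment is inappropriate
    (naive Bayes with add-one smoothing); positive means 'flag it'."""
    vocab = len(set(counts["ok"]) | set(counts["bad"]))
    log_ratio = 0.0
    for word in text.lower().split():
        p_bad = (counts["bad"][word] + 1) / (totals["bad"] + vocab)
        p_ok = (counts["ok"][word] + 1) / (totals["ok"] + vocab)
        log_ratio += math.log(p_bad / p_ok)
    return log_ratio

# Toy training data, invented for illustration
data = [("have a nice day", "ok"),
        ("great article thanks", "ok"),
        ("you are an idiot", "bad"),
        ("idiot idiot shut up", "bad")]
counts, totals = train(data)
print(score_bad("what an idiot", counts, totals) > 0)
```

    A real filter would, as the record suggests, run in the browser and mark the offending comment rather than just score it.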

  1. Automatic Hidden-Web Table Interpretation by Sibling Page Comparison

    Science.gov (United States)

    Tao, Cui; Embley, David W.

    The longstanding problem of automatic table interpretation still eludes us. Its solution would not only be an aid to table processing applications such as large volume table conversion, but would also be an aid in solving related problems such as information extraction and semi-structured data management. In this paper, we offer a conceptual modeling solution for the common special case in which so-called sibling pages are available. The sibling pages we consider are pages on the hidden web, commonly generated from underlying databases. We compare them to identify and connect nonvarying components (category labels) and varying components (data values). We tested our solution using more than 2,000 tables in source pages from three different domains—car advertisements, molecular biology, and geopolitical information. Experimental results show that the system can successfully identify sibling tables, generate structure patterns, interpret tables using the generated patterns, and automatically adjust the structure patterns, if necessary, as it processes a sequence of hidden-web pages. For these activities, the system was able to achieve an overall F-measure of 94.5%.
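    The F-measure quoted in the record combines precision and recall; F1 is their harmonic mean. A quick reminder in Python (the precision and recall values below are invented for illustration, not taken from the paper):

```python
def f_measure(precision, recall, beta=1.0):
    """F_beta = (1 + beta^2) * P * R / (beta^2 * P + R); beta=1 gives F1."""
    b2 = beta * beta
    return (1 + b2) * precision * recall / (b2 * precision + recall)

# Hypothetical values: precision 0.96 and recall 0.93 give an F1 near 94.5%
print(round(f_measure(0.96, 0.93), 3))
```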

  2. Improving the Performance of Web Access by Bridging Global Ranking with Local Page Popularity Metrics.

    Science.gov (United States)

    Garofalakis, John; Kappos, Panagiotis; Makris, Christos

    2002-01-01

    Considers the problem of improving the performance of Web access by proposing a reconstruction of the internal link structure of a Web site to match quality of the pages with popularity of the pages. Provides a set of simple algorithms for local reorganization of a Web site, which results in improving users' access to quality pages in an easy and…

  3. Lifting Events in RDF from Interactions with Annotated Web Pages

    Science.gov (United States)

    Stühmer, Roland; Anicic, Darko; Sen, Sinan; Ma, Jun; Schmidt, Kay-Uwe; Stojanovic, Nenad

    In this paper we present a method and an implementation for creating and processing semantic events from interaction with Web pages which opens possibilities to build event-driven applications for the (Semantic) Web. Events, simple or complex, are models for things that happen e.g., when a user interacts with a Web page. Events are consumed in some meaningful way e.g., for monitoring reasons or to trigger actions such as responses. In order for receiving parties to understand events e.g., comprehend what has led to an event, we propose a general event schema using RDFS. In this schema we cover the composition of complex events and event-to-event relationships. These events can then be used to route semantic information about an occurrence to different recipients helping in making the Semantic Web active. Additionally, we present an architecture for detecting and composing events in Web clients. For the contents of events we show a way of how they are enriched with semantic information about the context in which they occurred. The paper is presented in conjunction with the use case of Semantic Advertising, which extends traditional clickstream analysis by introducing semantic short-term profiling, enabling discovery of the current interest of a Web user and therefore supporting advertisement providers in responding with more relevant advertisements.

  4. Problems of long-term preservation of web pages

    Directory of Open Access Journals (Sweden)

    Mitja Dečman

    2011-01-01

    The World Wide Web is a distributed collection of web sites available on the Internet anywhere in the world. Its content is constantly changing: old data are being replaced, which causes a constant loss of a huge amount of information and consequently the loss of scientific, cultural and other heritage. Often, even legal certainty is unnoticeably put into question. How data on the web can be stored and preserved for the long term is a great challenge. Even though some good practices have been developed, the question of a final solution at the national level still remains. The paper presents the problems of long-term preservation of web pages from a technical and organizational point of view. It covers phases such as capturing and preserving web pages, focusing on good solutions, world practices and strategies developed by different countries to find solutions in this area. The paper suggests some conceptual steps that have to be defined in Slovenia, which would serve as a framework for all document creators in the web environment, raise awareness in this field, and mitigate the problems of all who deal with these issues today and in the future.

  5. Arabic web pages clustering and annotation using semantic class features

    Directory of Open Access Journals (Sweden)

    Hanan M. Alghamdi

    2014-12-01

    Effectively managing the great amount of data on Arabic web pages and enabling the classification of relevant information are very important research problems. Studies on sentiment text mining have been very limited in the Arabic language because they require deep semantic processing. Therefore, in this paper, we aim to retrieve machine-understandable data with the help of a Web content mining technique to detect covert knowledge within these data. We propose an approach to achieve clustering with semantic similarities. This approach integrates k-means document clustering with semantic feature extraction and document vectorization to group Arabic web pages according to semantic similarities and then shows the semantic annotation. The document vectorization helps to transform text documents into a semantic class probability distribution or semantic class density. To reach semantic similarities, the approach extracts the semantic class features and integrates them into the similarity weighting schema. The quality of the clustering result was evaluated using the purity and the mean intra-cluster distance (MICD) evaluation measures. We evaluated the proposed approach on a set of common Arabic news web pages and obtained favorable clustering results that are effective in minimizing the MICD, expanding the purity and lowering the runtime.
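    The purity measure used above scores a clustering by the fraction of documents that fall into the majority true class of their cluster. A minimal sketch in Python (the cluster assignments and class labels are invented for illustration):

```python
from collections import Counter

def purity(clusters, labels):
    """clusters: cluster id per document; labels: true class per document.
    Purity = (1/N) * sum over clusters of their majority-class count."""
    assert len(clusters) == len(labels)
    by_cluster = {}
    for c, y in zip(clusters, labels):
        by_cluster.setdefault(c, Counter())[y] += 1
    majority_total = sum(max(counter.values()) for counter in by_cluster.values())
    return majority_total / len(labels)

# 6 documents in 2 clusters; each cluster has one misplaced document
clusters = [0, 0, 0, 1, 1, 1]
labels = ["sports", "sports", "politics", "politics", "politics", "sports"]
print(purity(clusters, labels))
```

    A purity of 1.0 means every cluster is class-pure; here each cluster's majority class covers 2 of its 3 members, giving 4/6.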

  6. Geographic Information Systems and Web Page Development

    Science.gov (United States)

    Reynolds, Justin

    2004-01-01

    The Facilities Engineering and Architectural Branch is responsible for the design and maintenance of buildings, laboratories, and civil structures. In order to improve efficiency and quality, the FEAB has dedicated itself to establishing a data infrastructure based on Geographic Information Systems (GIS). The value of GIS was explained in an article dating back to 1980 entitled "Need for a Multipurpose Cadastre," which stated, "There is a critical need for a better land-information system in the United States to improve land-conveyance procedures, furnish a basis for equitable taxation, and provide much-needed information for resource management and environmental planning." Scientists and engineers both point to GIS as the solution. What is GIS? According to most textbooks, a Geographic Information System is a class of software that stores, manages, and analyzes mappable features on, above, or below the surface of the earth. GIS software is basically database management software tailored to the management of spatial data and information. Simply put, Geographic Information Systems manage, analyze, chart, graph, and map spatial information. At the outset, I was given goals and expectations from my branch and from my mentor with regard to the further implementation of GIS. Those goals are as follows: (1) Continue the development of GIS for the underground structures. (2) Extract and export annotated data from AutoCAD drawing files and construct a database (to serve as a prototype for future work). (3) Examine existing underground record drawings to determine existing and non-existing underground tanks. Once this data was collected and analyzed, I set out on the task of creating a user-friendly database that could be accessed by all members of the branch. It was important that the database be built using programs that most employees already possess, ruling out most AutoCAD-based viewers. Therefore, I set out to create an Access database that translated onto the web using Internet

  7. Building Interactive Simulations in Web Pages without Programming.

    Science.gov (United States)

    Mailen Kootsey, J; McAuley, Grant; Bernal, Julie

    2005-01-01

    A software system is described for building interactive simulations and other numerical calculations in Web pages. The system is based on a new Java-based software architecture named NumberLinX (NLX) that isolates each function required to build the simulation so that a library of reusable objects could be assembled. The NLX objects are integrated into a commercial Web design program for coding-free page construction. The model description is entered through a wizard-like utility program that also functions as a model editor. The complete system permits very rapid construction of interactive simulations without coding. A wide range of applications is possible with the system beyond interactive calculations, including remote data collection and processing and collaboration over a network.

  8. Recovering alternative presentation models of a web page with VAQUITA

    OpenAIRE

    Bouillon, Laurent; Vanderdonckt, Jean; Souchon, Nathalie

    2002-01-01

    VAQUITA allows developers to reverse engineer a presentation model of a web page according to multiple reverse engineering options. The alternative models offered by these options not only widen the spectrum of possible presentation models but also encourage developers in exploring multiple reverse engineering strategies. The options provide filtering capabilities in a static analysis of HTML code that are targeted either at multiple widgets simultaneously or at single widgets ...

  9. Credibility judgments in web page design - a brief review.

    Science.gov (United States)

    Selejan, O; Muresanu, D F; Popa, L; Muresanu-Oloeriu, I; Iudean, D; Buzoianu, A; Suciu, S

    2016-01-01

    Today, more than ever, it is accepted that the analysis of interface appearance is a crucial point in the field of human-computer interaction. As nowadays virtually anyone can publish information on the web, the role of credibility has grown increasingly important in relation to web-based content. Areas like trust, credibility, and behavior, together with overall impression and user expectation, are today in the spotlight of research, compared with an earlier period when more pragmatic areas such as usability and utility were considered. Credibility has been discussed as a theoretical construct in the field of communication in the past decades, and research revealed that people tend to evaluate the credibility of communication primarily by the communicator's expertise. Other factors involved in the content communication process are trustworthiness and dynamism, as well as various other criteria to a lesser extent. In this brief review, factors like web page aesthetics, browsing experience and user experience are considered.

  10. Detection of spam web page using content and link-based techniques

    Indian Academy of Sciences (India)

    based approach, we have used collaborative detection using personalized page ranking to detect the spam pages. First, personalized page rank has been calculated for all the Web pages and then using an optimization function which is same ...

  11. Overhaul of CERN's top-level web pages

    CERN Multimedia

    2004-01-01

    The pages for CERN users and for the general public have been given a face-lift before they become operational on the central web servers later this month. You may already now inspect the new versions in their "waiting places" at: http://intranet.cern.ch/User/ and http://intranet.cern.ch/Public/ We hope you will like these improved versions and you can report errors and omissions in the usual way ("comments and change requests" link at the bottom of the pages). The new versions will replace the existing ones at the end of the month, so you do not need to change your bookmarks or start-up URL. ETT/EC/EX

  12. THE NEW PURCHASING SERVICE PAGE NOW ON THE WEB!

    CERN Multimedia

    SPL Division

    2000-01-01

    Users of CERN's Purchasing Service are encouraged to visit the new Purchasing Service web page, accessible from the CERN homepage or directly at: http://spl-purchasing.web.cern.ch/spl-purchasing/ There, you will find answers to questions such as: Who are the buyers? What do I need to know before creating a DAI? How many offers do I need? Where shall I send the offer I received? I know the amount of my future requirement, how do I proceed? How are contracts adjudicated at CERN? Which exhibitions and visits of Member State companies are foreseen in the future? A company I know is interested in making a presentation at CERN, who should they contact? Additionally, you will find information concerning: The Purchasing procedures Market Surveys and Invitations to Tender The Industrial Liaison Officers appointed in each Member State The Purchasing Broker at CERN

  13. Children's recognition of advertisements on television and on Web pages.

    Science.gov (United States)

    Blades, Mark; Oates, Caroline; Li, Shiying

    2013-03-01

    In this paper we consider the issue of advertising to children. Advertising to children raises a number of concerns, in particular the effects of food advertising on children's eating habits. We point out that virtually all the research into children's understanding of advertising has focused on traditional television advertisements, but much marketing aimed at children is now via the Internet and little is known about children's awareness of advertising on the Web. One important component of understanding advertisements is the ability to distinguish advertisements from other messages, and we suggest that young children's ability to recognise advertisements on a Web page is far behind their ability to recognise advertisements on television. Copyright © 2012 Elsevier Ltd. All rights reserved.

  14. Cluster Analysis of Customer Reviews Extracted from Web Pages

    Directory of Open Access Journals (Sweden)

    S. Shivashankar

    2010-01-01

    As e-commerce gains popularity day by day, the web has become an excellent source for market researchers gathering customer reviews and opinions. The number of customer reviews that a product receives is growing at a very fast rate (it could be in the hundreds or thousands). Customer reviews posted on websites vary greatly in quality. The potential customer otherwise has to read all the reviews, irrespective of their quality, to decide whether to purchase the product or not. In this paper, we make an attempt to assess a review based on its quality, to help the customer make a proper buying decision. The quality of a customer review is assessed as most significant, more significant, significant or insignificant. A novel and effective web mining technique is proposed for assessing a customer review of a particular product based on feature clustering techniques, namely, the k-means method and the fuzzy c-means method. This is performed in three steps: (1) identify review regions and extract reviews from them, (2) extract and cluster the features of reviews by a clustering technique and then assign weights to the features belonging to each of the clusters (groups), and (3) assess the review by considering the feature weights and group belongingness. The k-means and fuzzy c-means clustering techniques are implemented and tested on customer reviews extracted from web pages, and their performance is analyzed.

  15. Emerging Pattern-Based Clustering of Web Users Utilizing a Simple Page-Linked Graph

    Directory of Open Access Journals (Sweden)

    Xiuming Yu

    2016-03-01

    Web usage mining is a popular research area in data mining. With the extensive use of the Internet, it is essential to learn about the favorite web pages of its users and to cluster web users in order to understand the structural patterns of their usage behavior. In this paper, we propose an efficient approach to determining favorite web pages by generating large web pages and emerging patterns of generated simple page-linked graphs. We identify the favorite web pages of each user by eliminating noise due to overall popular pages, and cluster web users according to the generated emerging patterns. Afterwards, we label the clusters using Term Frequency-Inverse Document Frequency (TF-IDF). In the experiments, we evaluate the parameters used in our proposed approach, discuss their effect on generating emerging patterns, and analyze the results of clustering web users. The results of the experiments prove that the exact patterns generated in the emerging-pattern step eliminate the need to consider noise pages, and consequently this step can improve the efficiency of subsequent mining tasks. Our proposed approach is capable of clustering web users from web log data.
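    Cluster labeling with TF-IDF, as mentioned in the record, scores each term by its frequency within a cluster weighted against how many clusters contain it, so that terms distinctive of one cluster surface as labels. A minimal sketch, treating each cluster's pages as one concatenated document (the sample texts are invented for illustration):

```python
import math
from collections import Counter

def tfidf_labels(cluster_docs, top_k=2):
    """cluster_docs: dict cluster_id -> concatenated text of that cluster's pages.
    Return the top_k highest TF-IDF terms per cluster as its labels."""
    term_counts = {c: Counter(text.lower().split()) for c, text in cluster_docs.items()}
    n_clusters = len(cluster_docs)
    df = Counter()  # in how many clusters does each term occur
    for counts in term_counts.values():
        for term in counts:
            df[term] += 1
    labels = {}
    for c, counts in term_counts.items():
        total = sum(counts.values())
        # tf * idf; a term present in every cluster gets idf = log(1) = 0
        scores = {t: (n / total) * math.log(n_clusters / df[t]) for t, n in counts.items()}
        labels[c] = [t for t, _ in sorted(scores.items(), key=lambda kv: -kv[1])[:top_k]]
    return labels

clusters = {
    0: "football match goal football score",
    1: "election vote election parliament vote",
}
print(tfidf_labels(clusters))
```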

  16. Unlocking the Gates to the Kingdom: Designing Web Pages for Accessibility.

    Science.gov (United States)

    Mills, Steven C.

    As the use of the Web is perceived to be an effective tool for dissemination of research findings for the provision of asynchronous instruction, the issue of accessibility of Web page information will become more and more relevant. The World Wide Web consortium (W3C) has recognized a disparity in accessibility to the Web between persons with and…

  17. Why Web Pages Annotation Tools Are Not Killer Applications? A New Approach to an Old Problem.

    Science.gov (United States)

    Ronchetti, Marco; Rizzi, Matteo

    The idea of annotating Web pages is not a new one: early proposals date back to 1994. A tool providing the ability to add notes to a Web page, and to share the notes with other users seems to be particularly well suited to an e-learning environment. Although several tools already provide such possibility, they are not widely popular. This paper…

  18. Teaching E-Commerce Web Page Evaluation and Design: A Pilot Study Using Tourism Destination Sites

    Science.gov (United States)

    Susser, Bernard; Ariga, Taeko

    2006-01-01

    This study explores a teaching method for improving business students' skills in e-commerce page evaluation and making Web design majors aware of business content issues through cooperative learning. Two groups of female students at a Japanese university studying either tourism or Web page design were assigned tasks that required cooperation to…

  19. Social Responsibility and Corporate Web Pages: Self-Presentation or Agenda-Setting?

    Science.gov (United States)

    Esrock, Stuart L.; Leichty, Greg B.

    1998-01-01

    Examines how corporate entities use the Web to present themselves as socially responsible citizens and to advance policy positions. Samples randomly "Fortune 500" companies, revealing that, although 90% had Web pages and 82% of the sites addressed a corporate social responsibility issue, few corporations used their pages to monitor…

  20. A New Era of Search Engines: Not Just Web Pages Anymore.

    Science.gov (United States)

    Hock, Ran

    2002-01-01

    Discusses various types of information that can be retrieved from the Web via search engines. Highlights include Web pages; time frames, including historical coverage and currentness; text pages in formats other than HTML; directory sites; news articles; discussion groups; images; and audio and video. (LRW)

  1. Environment: General; Grammar & Usage; Money Management; Music History; Web Page Creation & Design.

    Science.gov (United States)

    Web Feet, 2001

    2001-01-01

    Describes Web site resources for elementary and secondary education in the topics of: environment, grammar, money management, music history, and Web page creation and design. Each entry includes an illustration of a sample page on the site and an indication of the grade levels for which it is appropriate. (AEF)

  2. Socorro Students Translate NRAO Web Pages Into Spanish

    Science.gov (United States)

    2002-07-01

    Six Socorro High School students are spending their summer working at the National Radio Astronomy Observatory (NRAO) on a unique project that gives them experience in language translation, World Wide Web design, and technical communication. Under the project, called "Un puente a los cielos," the students are translating many of NRAO's Web pages on astronomy into Spanish. "These students are using their bilingual skills to help us make basic information about astronomy and radio telescopes available to the Spanish-speaking community," said Kristy Dyer, who works at NRAO as a National Science Foundation postdoctoral fellow and who developed the project and obtained funding for it from the National Aeronautics and Space Administration. The students are: Daniel Acosta, 16; Rossellys Amarante, 15; Sandra Cano, 16; Joel Gonzalez, 16; Angelica Hernandez, 16; and Cecilia Lopez, 16. The translation project, a joint effort of NRAO and the NM Tech physics department, also includes Zammaya Moreno, a teacher from Ecuador, Robyn Harrison, NRAO's education officer, and NRAO computer specialist Allan Poindexter. The students are translating NRAO Web pages aimed at the general public. These pages cover the basics of radio astronomy and frequently-asked questions about NRAO and the scientific research done with NRAO's telescopes. "Writing about science for non-technical audiences has to be done carefully. Scientific concepts must be presented in terms that are understandable to non-scientists but also that remain scientifically accurate," Dyer said. "When translating this type of writing from one language to another, we need to preserve both the understandability and the accuracy," she added. For that reason, Dyer recruited 14 Spanish-speaking astronomers from Argentina, Mexico and the U.S. to help verify the scientific accuracy of the Spanish translations. The astronomers will review the translations. The project is giving the students a broad range of experience. "They are

  3. AUTOMATIC TAGGING OF PERSIAN WEB PAGES BASED ON N-GRAM LANGUAGE MODELS USING MAPREDUCE

    Directory of Open Access Journals (Sweden)

    Saeed Shahrivari

    2015-07-01

    Page tagging is one of the most important facilities for increasing the accuracy of information retrieval on the web. Tags are simple pieces of data that usually consist of one or several words and briefly describe a page. Tags provide useful information about a page and can be used for boosting the accuracy of searching, document clustering, and result grouping. The most accurate solution to page tagging is using human experts. However, when the number of pages is large, humans cannot be used, and automatic solutions are needed instead. We propose a solution called PerTag which can automatically tag a set of Persian web pages. PerTag is based on n-gram models and uses the tf-idf method plus some effective Persian language rules to select proper tags for each web page. Since our target is huge sets of web pages, PerTag is built on top of the MapReduce distributed computing framework. We used a set of more than 500 million Persian web pages during our experiments, and extracted tags for each page using a cluster of 40 machines. The experimental results show that PerTag is both fast and accurate.
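    The n-gram side of such a tagger can be sketched as follows: candidate tags are the most frequent word n-grams of a page, optionally filtered through a stop list. This is only a sketch of the general idea; PerTag's Persian language rules, tf-idf weighting, and MapReduce distribution are not reproduced, and the sample text and stop words are invented:

```python
from collections import Counter

def ngram_tags(text, n_values=(1, 2), top_k=3,
               stop_words=frozenset({"the", "a", "of"})):
    """Score word n-grams by raw frequency and return the top candidates as tags."""
    words = [w for w in text.lower().split() if w not in stop_words]
    counts = Counter()
    for n in n_values:
        for i in range(len(words) - n + 1):
            counts[" ".join(words[i:i + n])] += 1
    return [g for g, _ in counts.most_common(top_k)]

page = "radio astronomy uses radio telescopes to study radio sources in the sky"
print(ngram_tags(page))
```

    In a full pipeline the raw counts would be replaced by tf-idf scores against the whole page collection, which is what lets rare but distinctive n-grams beat merely frequent ones.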

  4. Enhancing the Ranking of a Web Page in the Ocean of Data

    Directory of Open Access Journals (Sweden)

    Hitesh KUMAR SHARMA

    2013-10-01

    In today's world, the web is considered an ocean of data and information (text, videos, multimedia, etc.) consisting of millions and millions of web pages, linked with each other like a tree. It is often argued that, especially considering the dynamics of the internet, too much time has passed since the scientific work on PageRank for it still to be the basis of the ranking methods of the Google search engine. There is no doubt that within the past years many changes, adjustments and modifications regarding the ranking methods of Google have most likely taken place, but PageRank was absolutely crucial for Google's success, so that at least the fundamental concept behind PageRank should still be constitutive. This paper describes the components which affect the ranking of web pages and helps in increasing the popularity of a web site. By adapting these factors, website developers can increase their site's page rank. Within the PageRank concept, the rank of a document is given by the rank of those documents which link to it; their rank again is given by the rank of documents which link to them. The PageRank of a document is thus always determined recursively by the PageRank of other documents.
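    The recursive definition above, where a document's rank is given by the ranks of the documents linking to it, is typically computed by power iteration. A minimal sketch on a tiny invented link graph (the damping factor 0.85 is the conventional choice, not a value from this paper):

```python
def pagerank(links, damping=0.85, iterations=50):
    """links: dict page -> list of pages it links to.
    PR(p) = (1 - d)/N + d * sum(PR(q) / outdegree(q) for q linking to p)."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iterations):
        new_rank = {p: (1 - damping) / n for p in pages}
        for q, outs in links.items():
            if not outs:  # dangling page: spread its rank evenly
                for p in pages:
                    new_rank[p] += damping * rank[q] / n
            else:
                for p in outs:
                    new_rank[p] += damping * rank[q] / len(outs)
        rank = new_rank
    return rank

# A links to B and C; B links to C; C links back to A
graph = {"A": ["B", "C"], "B": ["C"], "C": ["A"]}
ranks = pagerank(graph)
print(max(ranks, key=ranks.get))
```

    In this graph C ends up with the highest rank: it collects rank from both A and B, while B receives only half of A's vote.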

  5. A look into the growing world of hospital security dept Web pages.

    Science.gov (United States)

    2001-03-01

    As more and more security professionals become computer savvy, a growing number of hospital police and security departments are now accessible on the Internet via home pages or complete web sites--some with Intranet capability. How some of your colleagues are using their web sites is described in this report. You can check out other hospital security Internet sites, thanks to a unique web page maintained by a security officer.

  6. Teaching Materials to Enhance the Visual Expression of Web Pages for Students Not in Art or Design Majors

    Science.gov (United States)

    Ariga, T.; Watanabe, T.

    2008-01-01

    The explosive growth of the Internet has made the knowledge and skills for creating Web pages into general subjects that all students should learn. It is now common to teach the technical side of the production of Web pages, and many teaching materials have been developed. However, teaching the aesthetic side of Web page design has been neglected,…

  7. Review of Metadata Elements within the Web Pages Resulting from Searching in General Search Engines

    Directory of Open Access Journals (Sweden)

    Sima Shafi’ie Alavijeh

    2009-12-01

    The present investigation studied the extent to which Dublin Core metadata elements and HTML meta tags are present in web pages. Ninety web pages were chosen by searching general search engines (Google, Yahoo and MSN). The extent of metadata elements (Dublin Core and HTML meta tags) present in these pages, as well as the existence of a significant correlation between the presence of meta elements and the type of search engine, were investigated. Findings indicated a very low presence of both Dublin Core metadata elements and HTML meta tags in the retrieved pages, which in turn illustrates the very low usage of metadata elements in web pages. Furthermore, findings indicated that there is no significant correlation between the type of search engine used and the presence of metadata elements. From the standpoint of including metadata in the retrieval of web sources, search engines do not significantly differ from one another.
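    Checking a page for HTML meta tags and Dublin Core elements, as done in the study above, can be automated with a small parser. A sketch using Python's standard library (the sample HTML is invented; Dublin Core elements conventionally appear as `<meta name="DC.xxx">`):

```python
from html.parser import HTMLParser

class MetaCollector(HTMLParser):
    """Collect name/content pairs from <meta> tags."""
    def __init__(self):
        super().__init__()
        self.meta = {}

    def handle_starttag(self, tag, attrs):
        if tag == "meta":
            d = dict(attrs)
            if "name" in d:
                self.meta[d["name"].lower()] = d.get("content", "")

html = """<html><head>
<meta name="keywords" content="astronomy, radio">
<meta name="DC.Creator" content="NRAO">
</head><body></body></html>"""

parser = MetaCollector()
parser.feed(html)
# Dublin Core elements are those whose name starts with the "DC." prefix
dublin_core = {k: v for k, v in parser.meta.items() if k.startswith("dc.")}
print(dublin_core)
```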

  8. Design of an Interface for Page Rank Calculation using Web Link Attributes Information

    Directory of Open Access Journals (Sweden)

    Jeyalatha SIVARAMAKRISHNAN

    2010-01-01

    This paper deals with Web Structure Mining and different structure mining algorithms like Page Rank, HITS, Trust Rank and Sel-HITS. The functioning of these algorithms is discussed. An incremental algorithm for the calculation of PageRank using an interface has been formulated. This algorithm makes use of Web Link Attributes Information as key parameters and has been implemented using the Visibility and Position of a Link. The application of a Web Structure Mining Algorithm in an Academic Search Application is discussed. The present work can be a useful input to Web Users, Faculty, Students and Web Administrators in a University Environment.

  9. A Cyber-Room of Their Own: How Libraries Use Web Pages To Attract Young Adults.

    Science.gov (United States)

    Jones, Patrick

    1997-01-01

    Examines Web sites on the Internet that were created especially for teenagers by various public libraries. Highlights include young adult areas and services in actual library buildings; a list of outstanding teen Web sites; and a list of factors to consider when creating a young adult Web page. (LRW)

  10. An efficient scheme for automatic web pages categorization using the support vector machine

    Science.gov (United States)

    Bhalla, Vinod Kumar; Kumar, Neeraj

    2016-07-01

    In the past few years, with the evolution of the Internet and related technologies, the number of Internet users has grown exponentially. These users demand access to relevant web pages from the Internet within fractions of a second. To achieve this goal, an efficient categorization of web page contents is required. Manual categorization of these billions of web pages to achieve high accuracy is a challenging task. Most of the existing techniques reported in the literature are semi-automatic; using these techniques, a higher level of accuracy cannot be achieved. To achieve these goals, this paper proposes an automatic categorization of web pages into domain categories. The proposed scheme is based on the identification of specific and relevant features of the web pages. In the proposed scheme, extraction and evaluation of features are done first, followed by filtering the feature set for categorization of domain web pages. A feature extraction tool based on the HTML document object model of the web page is developed in the proposed scheme. Feature extraction and weight assignment are based on a collection of domain-specific keyword lists developed by considering various domain pages. Moreover, the keyword list is reduced on the basis of the ids of keywords in the keyword list. Also, stemming of keywords and tag text is done to achieve higher accuracy. An extensive feature set is generated to develop a robust classification technique. The proposed scheme was evaluated using a machine learning method in combination with feature extraction and statistical analysis, using a support vector machine kernel as the classification tool. The results obtained confirm the effectiveness of the proposed scheme in terms of its accuracy for different categories of web pages.
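    The feature-extraction step described above, mapping a page to weights over domain-specific keyword lists, can be sketched as follows. The keyword lists and sample text are invented; the scheme's actual HTML DOM tool, stemming, keyword-id reduction, and SVM classifier are not reproduced here:

```python
def keyword_features(text, domain_keywords):
    """Map a page's text to one weight per domain: the fraction of page words
    that appear in that domain's keyword list (a stand-in for tf-based weighting)."""
    words = text.lower().split()
    features = {}
    for domain, keywords in domain_keywords.items():
        hits = sum(1 for w in words if w in keywords)
        features[domain] = hits / len(words) if words else 0.0
    return features

# Hypothetical domain keyword lists
domains = {
    "sports": {"match", "goal", "team", "score"},
    "finance": {"stock", "market", "shares", "profit"},
}
page = "the team scored a late goal to win the match"
feats = keyword_features(page, domains)
print(max(feats, key=feats.get))
```

    In the full scheme these weights would feed an SVM; here the dominant domain can already be read off the feature vector.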

  11. The development of a web page for lipid science and research. Main web sites of interest

    Directory of Open Access Journals (Sweden)

    Boatella, J.

    2001-08-01

    The Internet provides access to a huge amount of scientific and technical information that is not validated by any committee of experts. This information needs filtering in order to optimize user access to these resources. In this paper, we describe the development of a web page outlining the activity of our research team, Food Lipids Quality and Health. The web page seeks to fulfil the following objectives: to communicate the activities of the team, to use effectively the resources that the Internet offers, and to promote their use within the team. We report on the methods used in achieving these objectives. Finally, a large number of web addresses related to lipids are presented and classified. The addresses have been rigorously selected, from a much larger number of references consulted, on the basis of their usefulness and interest value.

  12. Search Engine Ranking, Quality, and Content of Web Pages That Are Critical Versus Noncritical of Human Papillomavirus Vaccine.

    Science.gov (United States)

    Fu, Linda Y; Zook, Kathleen; Spoehr-Labutta, Zachary; Hu, Pamela; Joseph, Jill G

    2016-01-01

    Online information can influence attitudes toward vaccination. The aim of the present study was to provide a systematic evaluation of the search engine ranking, quality, and content of Web pages that are critical versus noncritical of human papillomavirus (HPV) vaccination. We identified HPV vaccine-related Web pages with the Google search engine by entering 20 terms. We then assessed each Web page for critical versus noncritical bias and for the following quality indicators: authorship disclosure, source disclosure, attribution of at least one reference, currency, exclusion of testimonial accounts, and readability level less than ninth grade. We also determined Web page comprehensiveness in terms of mention of 14 HPV vaccine-relevant topics. Twenty searches yielded 116 unique Web pages. HPV vaccine-critical Web pages comprised roughly a third of the top-, top 5-, and top 10-ranking Web pages. The prevalence of HPV vaccine-critical Web pages was higher for queries that included term modifiers in addition to root terms. Compared with noncritical Web pages, Web pages critical of HPV vaccine overall had a lower quality score. HPV vaccine-critical Web pages thus achieved high rankings in search engine queries despite being of lower quality and less comprehensive than noncritical Web pages. Copyright © 2016 Society for Adolescent Health and Medicine. Published by Elsevier Inc. All rights reserved.

  13. Automatic Detection for JavaScript Obfuscation Attacks in Web Pages through String Pattern Analysis

    Science.gov (United States)

    Choi, Younghan; Kim, Taeghyoon; Choi, Seokjin; Lee, Cheolwon

    Recently, most malicious web pages include obfuscated code in order to circumvent detection by signature-based detection systems. It is difficult to decide whether a string is obfuscated because the shape of obfuscated strings changes continuously. In this paper, we propose a novel methodology that can detect obfuscated strings in malicious web pages. We extracted three metrics as rules for detecting obfuscated strings by analyzing patterns of normal and malicious JavaScript code: N-gram, Entropy, and Word Size. N-gram checks how often each byte code is used in strings. Entropy checks the distribution of the byte codes used. Word Size checks whether a very long string is used. Based on these metrics, we implemented a practical tool for our methodology and evaluated it using real malicious web pages. The experimental results showed that our methodology can detect obfuscated strings in web pages effectively.
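The three string metrics described in this record can be sketched as simple string statistics. The thresholds below are illustrative assumptions, not the values reported in the paper:

```python
import math
from collections import Counter

def byte_frequencies(s: str) -> Counter:
    """N-gram metric (n = 1): how often each byte/character occurs."""
    return Counter(s)

def byte_entropy(s: str) -> float:
    """Entropy metric: Shannon entropy of the character distribution."""
    n = len(s)
    return -sum(c / n * math.log2(c / n) for c in byte_frequencies(s).values())

def max_word_size(s: str) -> int:
    """Word-size metric: length of the longest whitespace-delimited token."""
    return max((len(tok) for tok in s.split()), default=0)

def looks_obfuscated(s: str, entropy_low=2.0, word_max=200) -> bool:
    """Flag a string as suspicious when it contains a very long token or a
    strongly skewed character distribution (both thresholds are assumptions)."""
    return max_word_size(s) > word_max or (len(s) > 50 and byte_entropy(s) < entropy_low)
```

For example, a 600-character single token such as a repeated `%u9090` shellcode pattern trips the word-size rule, while ordinary prose does not.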

  14. Analysis and Testing of Ajax-based Single-page Web Applications

    NARCIS (Netherlands)

    Mesbah, A.

    2009-01-01

    This dissertation has focused on better understanding the shifting web paradigm and the consequences of moving from the classical multi-page model to an Ajax-based single-page style. Specifically to that end, this work has examined this new class of software from three main software engineering

  15. Detection of spam web page using content and link-based ...

    Indian Academy of Sciences (India)

    The content-based approach uses term density and Part of Speech (POS) ratio test and in the link-based approach, we explore the collaborative detection using personalized page ranking to classify the Web page as spam or non-spam. For experimental purpose, WEBSPAM-UK2006 dataset has been used. The results ...

  16. Future Trends in Children's Web Pages: Probing Hidden Biases for Information Quality

    Science.gov (United States)

    Kurubacak, Gulsun

    2007-01-01

    As global digital communication continues to flourish, Children's Web pages become more critical for children to realize not only the surface but also breadth and deeper meanings in presenting these milieus. These pages not only are very diverse and complex but also enable intense communication across social, cultural and political restrictions…

  18. How Useful are Orthopedic Surgery Residency Web Pages?

    Science.gov (United States)

    Oladeji, Lasun O; Yu, Jonathan C; Oladeji, Afolayan K; Ponce, Brent A

    2015-01-01

    Medical students interested in orthopedic surgery residency positions frequently use the Internet as a modality to gather information about individual residency programs. Students often invest a painstaking amount of time and effort in determining programs that they are interested in, and the Internet is central to this process. Numerous studies have concluded that program websites are a valuable resource for residency and fellowship applicants. The purpose of the present study was to provide an update on the web pages of academic orthopedic surgery departments in the United States and to rate their utility in providing information on quality of education, faculty and resident information, environment, and applicant information. We reviewed existing websites for the 156 departments or divisions of orthopedic surgery that are currently accredited for resident education by the Accreditation Council for Graduate Medical Education. Each website was assessed for quality of information regarding quality of education, faculty and resident information, environment, and applicant information. We noted that 152 of the 156 departments (97%) had functioning websites that could be accessed. There was high variability regarding the comprehensiveness of orthopedic residency websites. Most of the orthopedic websites provided information on conferences, didactics, and resident rotations. Less than 50% of programs provided information on resident call schedules, resident or faculty research and publications, resident hometowns, or resident salary. There is a lack of consistency regarding the content presented on orthopedic residency websites. As the competition for orthopedic residency positions continues to increase, applicants flock to the Internet in greater numbers to learn more about individual programs. A well-constructed website has the potential to increase the caliber of students applying to said program. Copyright © 2015 Association of Program Directors in Surgery. Published by

  19. Design of a Web Page as a complement of educative innovation through MOODLE

    Science.gov (United States)

    Mendiola Ubillos, M. A.; Aguado Cortijo, Pedro L.

    2010-05-01

    In the context of using information technology to impart knowledge, and of establishing the MOODLE system as a support and complementary tool for on-site educational methodology (b-learning), a Web page was designed for the subject Agronomic and Food Industry Crops (Plantas de interés Agroalimentario) during the 2006-07 course. This web page was inserted into the Technical University of Madrid (Universidad Politécnica de Madrid) computer system to give students a first contact with the contents of this subject. The page presents the objectives and methodology, personal work planning, and the subject program together with the activities. At another web site, the evaluation criteria and recommended bibliography are located. The objective of this web page has been to make the information needed in the learning process more transparent and accessible, and to present it in a more attractive frame. The page has been updated and modified in each academic course offered since its first implementation, in some cases adding new specific links to increase its usefulness. At the end of each course a questionnaire is given to the students taking the subject, asking which elements they would like to modify, delete or add to the web page. In this way the direct users give their point of view and help to improve the web page each course.

  20. Modeling user navigation behavior in web by colored Petri nets to determine the user's interest in recommending web pages

    Directory of Open Access Journals (Sweden)

    Mehdi Sadeghzadeh

    2013-01-01

    Full Text Available One of the existing challenges in web personalization is to increase the efficiency of a web site in meeting users' requirements for the content they seek. All the information associated with the current user's behavior on the web, together with data obtained from previous users' interactions, can provide the keys needed to recommend services, products, and the information users require. This study presents a formal model based on colored Petri nets to identify the present user's interest, which is then used to recommend the most appropriate pages ahead. In the proposed design, page recommendation takes into account information obtained from previous users' profiles as well as the current session of the present user. The model offers updated proposed pages to the user as the user clicks on web pages. Moreover, an example web is modeled using CPN Tools. The simulation results show that this design improves the precision factor, and the evaluation demonstrates that the dynamic recommendations improve the precision criterion by 15% over the static method.

  1. Medical computing over the World Wide Web: use of forms and CGI scripts for constructing medical algorithm Web pages.

    Science.gov (United States)

    Doyle, D J; Jarvis, B A; Ruskin, K J; Engel, T P

    1997-01-01

    The development of the World Wide Web has led to an explosion of educational and clinical resources available via the Internet with minimal effort or special training. However, most of these Web pages contain only static information; few offer dynamic information shaped around clinical or laboratory test findings. In this report we show how this goal can be achieved with the design and construction of Medical Algorithm Web Pages (MAWP). Specifically, using Internet technologies known as forms and CGI scripts we demonstrate how one can implement medical algorithms remotely over the Internet's World Wide Web. To use a MAWP, one enters the URL for the site and then enters information according to the instructions presented there, usually by entering numbers and other information into fields displayed on screen. When all the data is entered, the user clicks on the SUBMIT icon, resulting in a new Web page being constructed "on-the-fly" containing diagnostic calculations and other information pertinent to the patient's clinical management. Four sample applications are presented in detail to illustrate the concept of a Medical Algorithm Web page: Computation of the alveolar-arterial oxygen tension difference using the alveolar gas equation; Computation of renal creatinine clearance; drug infusion calculation (micrograms/kilogram/minute); Computation of the renal failure index.
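Two of the four algorithms named in this record, the alveolar gas equation and renal creatinine clearance (here via the standard Cockcroft-Gault formula), reduce to a few lines of arithmetic. This is an illustrative re-implementation of those textbook formulas, not the authors' CGI scripts:

```python
def alveolar_gas(fio2, paco2, patm=760.0, ph2o=47.0, rq=0.8):
    """Alveolar oxygen tension PAO2 (mmHg): FiO2*(Patm - PH2O) - PaCO2/RQ."""
    return fio2 * (patm - ph2o) - paco2 / rq

def aa_gradient(pao2_arterial, fio2, paco2, **kw):
    """Alveolar-arterial oxygen tension difference (mmHg)."""
    return alveolar_gas(fio2, paco2, **kw) - pao2_arterial

def cockcroft_gault(age, weight_kg, serum_cr_mg_dl, female=False):
    """Estimated creatinine clearance (mL/min), Cockcroft-Gault formula."""
    crcl = (140 - age) * weight_kg / (72 * serum_cr_mg_dl)
    return crcl * 0.85 if female else crcl
```

A CGI script of the kind the article describes would read the form fields, call one of these functions, and emit the result in a freshly generated HTML page.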

  2. Project Management - Development of course materiale as WEB pages

    DEFF Research Database (Denmark)

    Thorsteinsson, Uffe; Bjergø, Søren

    1997-01-01

    Development of Internet pages with lesson plans, slideshows, links, a conference system, and an interactive student section for communication among students and with the teacher.

  3. JavaScript: Convenient Interactivity for the Class Web Page.

    Science.gov (United States)

    Gray, Patricia

    This paper shows how JavaScript can be used within HTML pages to add interactive review sessions and quizzes incorporating graphics and sound files. JavaScript has the advantage of providing basic interactive functions without the use of separate software applications and players. Because it can be part of a standard HTML page, it is…

  4. Design Of A Web-Based Paper Submission And Reviewing System ...

    African Journals Online (AJOL)

    This paper presents the design of a web-based conference paper management system which facilitates easy and efficient review of technical submissions to conferences. Our proposed system stores authors' information, abstracts, papers and reviewers' comments. The process of assignment of papers to reviewers is done ...

  5. The Recognition of Web Pages' Hyperlinks by People with Intellectual Disabilities: An Evaluation Study

    Science.gov (United States)

    Rocha, Tania; Bessa, Maximino; Goncalves, Martinho; Cabral, Luciana; Godinho, Francisco; Peres, Emanuel; Reis, Manuel C.; Magalhaes, Luis; Chalmers, Alan

    2012-01-01

    Background: One of the most mentioned problems of web accessibility, as recognized in several different studies, is related to the difficulty regarding the perception of what is or is not clickable in a web page. In particular, a key problem is the recognition of hyperlinks by a specific group of people, namely those with intellectual…

  6. Media Overload in Instructional Web Pages and the Impact on Learning.

    Science.gov (United States)

    Hartley, Kendall W.

    1999-01-01

    Discussion of the use of the Internet to deliver instruction and multimedia opportunities for Web-page design focuses on working memory and its implications for Web-based instruction. Topics include media overload; cognitive load theory; static visual displays; animations; audio; and research questions. (Author/LRW)

  7. First U.S. Web page went up 10 years ago

    CERN Multimedia

    Kornblum, J

    2001-01-01

    Wednesday marks the 10th anniversary of the first U.S. Web page, created by Paul Kunz, a physicist at SLAC. He says that if World Wide Web creator Tim Berners-Lee hadn't been so persistent in getting him to attend a demonstration during a visit to CERN, the Web wouldn't have taken off when it did -- maybe not at all.

  8. An Ant Colony Optimization Based Feature Selection for Web Page Classification

    Science.gov (United States)

    2014-01-01

    The increased popularity of the web has caused the inclusion of a huge amount of information on the web, and as a result of this explosive information growth, automated web page classification systems are needed to improve search engines' performance. Web pages have a large number of features such as HTML/XML tags, URLs, hyperlinks, and text contents that should be considered during an automated classification process. The aim of this study is to reduce the number of features to be used, to improve the runtime and accuracy of the classification of web pages. In this study, we used an ant colony optimization (ACO) algorithm to select the best features, and then we applied the well-known C4.5, naive Bayes, and k nearest neighbor classifiers to assign class labels to web pages. We used the WebKB and Conference datasets in our experiments, and we showed that using the ACO for feature selection improves both the accuracy and runtime performance of classification. We also showed that the proposed ACO-based algorithm can select better features than the well-known information gain and chi square feature selection methods. PMID:25136678
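The pheromone loop at the heart of ACO-based feature selection can be sketched as below. This is a toy version under assumed parameters (inclusion probability, evaporation rate, reinforcement rule) and a made-up fitness function; the paper's actual transition rules and classifier-based evaluation are not reproduced:

```python
import random

def aco_feature_select(n_features, fitness, n_ants=10, n_iters=20, rho=0.1, seed=0):
    """Toy ACO feature selection: each ant includes feature i with probability
    tau[i]/(1+tau[i]); the best subset found so far reinforces its features."""
    rng = random.Random(seed)
    tau = [1.0] * n_features                       # pheromone per feature
    best_subset, best_fit = [], float("-inf")
    for _ in range(n_iters):
        for _ in range(n_ants):
            subset = [i for i in range(n_features)
                      if rng.random() < tau[i] / (1.0 + tau[i])]
            f = fitness(subset)
            if f > best_fit:
                best_subset, best_fit = subset, f
        tau = [t * (1.0 - rho) for t in tau]       # evaporation
        for i in best_subset:
            tau[i] += rho * 1.0                    # reinforce best subset's features
    return best_subset, best_fit
```

In the study itself, `fitness` would be the accuracy of a classifier (C4.5, naive Bayes, or k-NN) trained on the candidate feature subset; here any scoring function can be plugged in.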

  9. A Web Support System for Submission and Handling of Programming Assignments

    DEFF Research Database (Denmark)

    Nørmark, Kurt

    2011-01-01

    Individual submission of programming assignments should be considered in all introductory programming courses. We describe a custom web support system for submission and management of programming assignments in an introductory C programming course. Experience from the first-time use of the system is reported. In addition, we compare the pattern of use with the results of the final exam in order to reveal a possible impact of the programming assignments. We summarize the lessons learned in preparation for improving the system prior to the next round of use in the fall of 2011.

  10. A randomized controlled trial of concept based indexing of Web page content.

    Science.gov (United States)

    Elkin, P L; Ruggieri, A; Bergstrom, L; Bauer, B A; Lee, M; Ogren, P V; Chute, C G

    2000-01-01

    Medical information is increasingly being presented in a web-enabled format. Medical journals, guidelines, and textbooks are all accessible in a web-based format. It would be desirable to link these reference sources to the electronic medical record to provide education, to facilitate guideline implementation and usage, and for decision support. In order for these rich information sources to be accessed via the medical record, they will need to be indexed by a single comparable underlying reference terminology. We took a random sample of 100 web pages out of the 6,000 web pages on the Mayo Clinic's Health Oasis web site. The web pages were divided into four datasets each containing 25 pages. These were reviewed by four clinicians to identify all of the health concepts present (R1DA, R2DB, R3DC, R4DD). The web pages were simultaneously indexed using the SNOMED-RT beta release. The indexing engine has been previously described and validated. A new clinician reviewed the indexed web pages to determine the accuracy of the automated mappings as compared with the human-identified concepts (R4DA, R3DB, R2DC, R1DD). This review found 13,220 health concepts. Of these, 10,383 concepts were identified by the initial human review (78.5% +/- 3.6%). The automated process identified 10,083 concepts correctly (76.3% +/- 4.0%) from within this corpus. The computer identified 2,420 concepts which were not identified by the clinicians' review but were, upon further consideration, important to include as health concepts. There was on average a 17.1% +/- 3.5% variability in the human reviewers' ability to identify the important health concepts within web page content. Concept Based Indexing provided a positive predictive value (PPV) of 79.3% for finding a health concept, as compared with keyword indexing, which only has a PPV of 33.7%. Concept based indexing thus provides a significantly greater accuracy in identifying health concepts when compared with keyword indexing.
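The record's reported percentages follow directly from its raw counts, a quick consistency check using the numbers quoted above:

```python
total_concepts = 13220   # health concepts found overall
human_found = 10383      # identified by the initial human review
auto_found = 10083       # identified correctly by the automated indexer

human_pct = round(100 * human_found / total_concepts, 1)  # 78.5
auto_pct = round(100 * auto_found / total_concepts, 1)    # 76.3
```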

  11. Building single-page web apps with meteor

    CERN Document Server

    Vogelsteller, Fabian

    2015-01-01

    If you are a web developer with basic knowledge of JavaScript and want to take on Web 2.0, build real-time applications, or simply want to write a complete application using only JavaScript and HTML/CSS, this is the book for you. This book is based on Meteor 1.0.

  12. Searchers' relevance judgments and criteria in evaluating Web pages in a learning style perspective

    DEFF Research Database (Denmark)

    Papaeconomou, Chariste; Zijlema, Annemarie F.; Ingwersen, Peter

    2008-01-01

    The paper presents the results of a case study of searchers' relevance criteria used for assessments of Web pages in a perspective of learning style. 15 test persons participated in the experiments, based on two simulated work tasks that provided cover stories to trigger their information needs. Two learning styles were examined: Global and Sequential learners. The study applied eye-tracking for the observation of relevance hot spots on Web pages, learning style index analysis and post-search interviews to gain more in-depth information on relevance behavior. Findings reveal that differences with respect to use are statistically insignificant. When interviewed in retrospect, the resulting profiles tend to become even more similar across learning styles, but a shift occurs from instant assessments, with content features of web pages replacing topicality judgments as the predominant relevance criteria.

  13. Research of Subgraph Estimation Page Rank Algorithm for Web Page Rank

    Directory of Open Access Journals (Sweden)

    LI Lan-yin

    2017-04-01

    Full Text Available The traditional PageRank algorithm cannot efficiently handle the ranking of large-scale Web page data. This paper proposes an accelerated algorithm named topK-Rank, which is based on PageRank on the MapReduce platform. It can find the top k nodes efficiently for a given graph without sacrificing accuracy. In order to identify the top k nodes, the topK-Rank algorithm prunes unnecessary nodes and edges in each iteration to dynamically construct subgraphs, and iteratively estimates lower/upper bounds of PageRank scores through the subgraphs. Theoretical analysis shows that this method guarantees result exactness. Experiments show that the topK-Rank algorithm can find the top k nodes much faster than existing approaches.
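The topK-Rank pruning and bound-estimation machinery is not reproduced here, but the baseline it accelerates, power-iteration PageRank followed by a top-k selection, can be sketched as:

```python
def pagerank(edges, n, d=0.85, iters=50):
    """Plain power-iteration PageRank over n nodes; edges is a list of
    (src, dst) pairs. Dangling-node mass is redistributed uniformly."""
    out_deg = [0] * n
    for s, _ in edges:
        out_deg[s] += 1
    r = [1.0 / n] * n
    for _ in range(iters):
        nxt = [(1.0 - d) / n] * n
        for s, t in edges:
            nxt[t] += d * r[s] / out_deg[s]
        dangling = sum(r[i] for i in range(n) if out_deg[i] == 0)
        r = [x + d * dangling / n for x in nxt]
    return r

def top_k(r, k):
    """Indices of the k highest-ranked nodes."""
    return sorted(range(len(r)), key=lambda i: -r[i])[:k]
```

topK-Rank's contribution is avoiding the full iteration over every node and edge: it bounds each node's score from subgraphs and prunes nodes that provably cannot enter the top k.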

  14. PSB goes personal: The failure of personalised PSB web pages

    DEFF Research Database (Denmark)

    Sørensen, Jannick Kirk

    2013-01-01

    Between 2006 and 2011, a number of European public service broadcasting (PSB) organisations offered their website users the opportunity to create their own PSB homepage. The web customisation was conceived by the editors as a response to developments in commercial web services, particularly social networking and content aggregation services, but the customisation projects revealed tensions between the ideals of customer sovereignty and the editorial agenda-setting. This paper presents an overview of the PSB activities as well as reflections on the failure of the customisable PSB homepages.

  15. Automatic categorization of web pages and user clustering with mixtures of hidden Markov models

    NARCIS (Netherlands)

    Ypma, A.; Heskes, T.M.

    2003-01-01

    We propose mixtures of hidden Markov models for modelling clickstreams of web surfers. Hence, the page categorization is learned from the data without the need for a (possibly cumbersome) manual categorization. We provide an EM algorithm for training a mixture of HMMs and show that additional static

  16. SChiSM: creating interactive web page annotations of molecular structure models using chime.

    Science.gov (United States)

    Cammer, S A

    2000-07-01

    SChiSM is a program for creating World Wide Web (WWW) pages that include embedded interactive molecular models, using the browser plug-in Chime for visualization. The program works with Netscape 4.x and Internet Explorer 5 browsers to facilitate Chime/RasMol scripting control of a molecular display.

  17. A construction scheme of web page comment information extraction system based on frequent subtree mining

    Science.gov (United States)

    Zhang, Xiaowen; Chen, Bingfeng

    2017-08-01

    Based on a frequent subtree mining algorithm, this paper proposes a construction scheme for a web page comment information extraction system, referred to as the FSM system. The paper briefly introduces the overall system architecture and its modules, then describes the core of the system in detail, and finally presents a system prototype.

  18. Developing Dynamic Single Page Web Applications Using Meteor : Comparing JavaScript Frameworks: Blaze and React

    OpenAIRE

    Yetayeh, Asabeneh

    2017-01-01

    This paper studies Meteor, a JavaScript full-stack framework for developing interactive single-page web applications. Meteor allows building web applications entirely in JavaScript. Meteor uses Blaze, React or AngularJS as a view layer and Node.js and MongoDB as a back-end. The main purpose of this study is to compare the performance of Blaze and React. Multi-user Blaze and React web applications with similar HTML and CSS were developed. Both applications were deployed on Heroku’s w...

  19. PSB goes personal: The failure of personalised PSB web pages

    Directory of Open Access Journals (Sweden)

    Jannick Kirk Sørensen

    2013-12-01

    Full Text Available Between 2006 and 2011, a number of European public service broadcasting (PSB organisations offered their website users the opportunity to create their own PSB homepage. The web customisation was conceived by the editors as a response to developments in commercial web services, particularly social networking and content aggregation services, but the customisation projects revealed tensions between the ideals of customer sovereignty and the editorial agenda-setting. This paper presents an overview of the PSB activities as well as reflections on the failure of the customisable PSB homepages. The analysis is based on interviews with the PSB editors involved in the projects and on studies of the interfaces and user comments. Commercial media customisation is discussed along with the PSB projects to identify similarities and differences.

  20. 76 FR 28439 - Submission for OMB Review; Comment Request; NCI Cancer Genetics Services Directory Web-Based...

    Science.gov (United States)

    2011-05-17

    ... HUMAN SERVICES National Institutes of Health Submission for OMB Review; Comment Request; NCI Cancer Genetics Services Directory Web-Based Application Form and Update Mailer Summary: Under the provisions of... Collection: Title: NCI Cancer Genetics Services Directory Web-based Application Form and Update Mailer. Type...

  1. The ATLAS Public Web Pages: Online Management of HEP External Communication Content

    CERN Document Server

    Goldfarb, Steven; Phoboo, Abha Eli; Shaw, Kate

    2015-01-01

    The ATLAS Education and Outreach Group is in the process of migrating its public online content to a professionally designed set of web pages built on the Drupal content management system. Development of the front-end design passed through several key stages, including audience surveys, stakeholder interviews, usage analytics, and a series of fast design iterations, called sprints. Implementation of the web site involves application of the html design using Drupal templates, refined development iterations, and the overall population of the site with content. We present the design and development processes and share the lessons learned along the way, including the results of the data-driven discovery studies. We also demonstrate the advantages of selecting a back-end supported by content management, with a focus on workflow. Finally, we discuss usage of the new public web pages to implement outreach strategy through implementation of clearly presented themes, consistent audience targeting and messaging, and th...

  2. Citations to Web pages in scientific articles: the permanence of archived references.

    Science.gov (United States)

    Thorp, Andrea W; Schriger, David L

    2011-02-01

    We validate the use of archiving Internet references by comparing the accessibility of published uniform resource locators (URLs) with corresponding archived URLs over time. We scanned the "Articles in Press" section in Annals of Emergency Medicine from March 2009 through June 2010 for Internet references in research articles. If an Internet reference produced the authors' expected content, the Web page was archived with WebCite (http://www.webcitation.org). Because the archived Web page does not change, we compared it with the original URL to determine whether the original Web page had changed. We attempted to access each original URL and archived Web site URL at 3-month intervals from the time of online publication during an 18-month study period. Once a URL no longer existed or failed to contain the original authors' expected content, it was excluded from further study. The number of original URLs and archived URLs that remained accessible over time was totaled and compared. A total of 121 articles were reviewed and 144 Internet references were found within 55 articles. Of the original URLs, 15% (21/144; 95% confidence interval [CI] 9% to 21%) were inaccessible at publication. During the 18-month observation period, there was no loss of archived URLs (apart from the 4% [5/123; 95% CI 2% to 9%] that could not be archived), whereas 35% (49/139) of the original URLs were lost (46% loss; 95% CI 33% to 61% by the Kaplan-Meier method; difference between curves P<.0001, log rank test). Archiving a referenced Web page at publication can help preserve the authors' expected information. Copyright © 2010 American College of Emergency Physicians. Published by Mosby, Inc. All rights reserved.

  3. JavaScript and interactive web pages in radiology.

    Science.gov (United States)

    Gurney, J W

    2001-10-01

    Web publishing is becoming a more common method of disseminating information. JavaScript is an object-oriented language embedded into modern browsers and has a wide variety of uses. The use of JavaScript in radiology is illustrated by calculating the indices of sensitivity, specificity, and predictive values from a table of true positives, true negatives, false positives, and false negatives. In addition, a single line of JavaScript code can be used to annotate images, which has a wide variety of uses.
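The diagnostic indices the article computes in JavaScript reduce to four ratios over the 2x2 table; sketched here in Python for brevity:

```python
def diagnostic_indices(tp, fn, fp, tn):
    """Sensitivity, specificity, and predictive values from a 2x2 table of
    true positives, false negatives, false positives, and true negatives."""
    return {
        "sensitivity": tp / (tp + fn),  # true positive rate
        "specificity": tn / (tn + fp),  # true negative rate
        "ppv": tp / (tp + fp),          # positive predictive value
        "npv": tn / (tn + fn),          # negative predictive value
    }
```

In the article's setting these four counts come from form fields on the page, and the result is written back into the page by the embedded script.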

  4. What Can Pictures Tell Us About Web Pages? Improving Document Search Using Images.

    Science.gov (United States)

    Rodriguez-Vaamonde, Sergio; Torresani, Lorenzo; Fitzgibbon, Andrew W

    2015-06-01

    Traditional Web search engines do not use the images in the HTML pages to find relevant documents for a given query. Instead, they typically operate by computing a measure of agreement between the keywords provided by the user and only the text portion of each page. In this paper we study whether the content of the pictures appearing in a Web page can be used to enrich the semantic description of an HTML document and consequently boost the performance of a keyword-based search engine. We present a Web-scalable system that exploits a pure text-based search engine to find an initial set of candidate documents for a given query. Then, the candidate set is reranked using visual information extracted from the images contained in the pages. The resulting system retains the computational efficiency of traditional text-based search engines with only a small additional storage cost needed to encode the visual information. We test our approach on one of the TREC Million Query Track benchmarks where we show that the exploitation of visual content yields improvement in accuracies for two distinct text-based search engines, including the system with the best reported performance on this benchmark. We further validate our approach by collecting document relevance judgements on our search results using Amazon Mechanical Turk. The results of this experiment confirm the improvement in accuracy produced by our image-based reranker over a pure text-based system.

  5. Key word placing in Web page body text to increase visibility to search engines

    Directory of Open Access Journals (Sweden)

    W. T. Kritzinger

    2007-11-01

    Full Text Available The growth of the World Wide Web has spawned a wide variety of new information sources, which has also left users with the daunting task of determining which sources are valid. Many users rely on the Web as an information source because of the low cost of information retrieval. It is also claimed that the Web has evolved into a powerful business tool. Examples include highly popular business services such as Amazon.com and Kalahari.net. It is estimated that around 80% of users utilize search engines to locate information on the Internet. This, by implication, places emphasis on the underlying importance of Web pages being listed on search engines indices. Empirical evidence that the placement of key words in certain areas of the body text will have an influence on the Web sites' visibility to search engines could not be found in the literature. The result of two experiments indicated that key words should be concentrated towards the top, and diluted towards the bottom of a Web page to increase visibility. However, care should be taken in terms of key word density, to prevent search engine algorithms from raising the spam alarm.
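The finding above, concentrate key words toward the top of the body text and dilute them toward the bottom, suggests a position-weighted density score. The heuristic below is an illustration of that idea under assumed weighting, not the metric used in the experiments:

```python
def position_weighted_density(words, keyword):
    """Score keyword occurrences with a weight that decays linearly from 1.0
    at the first word of the page toward 0 at the last, normalized by length."""
    n = len(words)
    if n == 0:
        return 0.0
    score = sum(1.0 - i / n for i, w in enumerate(words) if w.lower() == keyword.lower())
    return score / n
```

Under this scoring, a page that mentions the key word early outscores the same page with the mention moved to the bottom, which is the behavior the experiments suggest search engine algorithms reward (while excessive density risks triggering spam filters).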

  6. Web Page Layout: A Comparison Between Left- and Right-justified Site Navigation Menus

    OpenAIRE

    Kalbach, James; Bosenick, Tim

    2006-01-01

    The usability of two Web page layouts was directly compared: one with the main site navigation menu on the left of the page, and one with the main site navigation menu on the right. Sixty-four participants were divided equally into two groups and assigned to either the left- or the right-hand navigation test condition. Using a stopwatch, the time to complete each of five tasks was measured. The hypothesis that the left-hand navigation would perform significantly faster than the right-hand nav...

  7. Web Pages Content Analysis Using Browser-Based Volunteer Computing

    Directory of Open Access Journals (Sweden)

    Wojciech Turek

    2013-01-01

Full Text Available Existing solutions to the problem of finding valuable information on the Web suffer from several limitations, such as simplified query languages, out-of-date information or arbitrary sorting of results. In this paper a different approach to this problem is described. It is based on the idea of distributed processing of Web page content. To provide sufficient performance, the idea of browser-based volunteer computing is utilized, which requires the implementation of text processing algorithms in JavaScript. In this paper the architecture of the Web page content analysis system is presented, details concerning the implementation of the system and the text processing algorithms are described, and test results are provided.
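The volunteer-computing pattern the paper describes, splitting page content into independent text-processing tasks and merging partial results, can be sketched as a map-reduce word count. The sketch below is in Python for brevity; in the actual system the worker step would run as JavaScript in volunteers' browsers:

```python
from collections import Counter

def split_into_tasks(pages, chunk_size):
    """Partition a list of page texts into independent tasks that
    could each be shipped to a volunteer browser."""
    return [pages[i:i + chunk_size] for i in range(0, len(pages), chunk_size)]

def worker_count_words(task):
    # What each volunteer would run (in the real system, in JavaScript).
    c = Counter()
    for text in task:
        c.update(text.lower().split())
    return c

def merge_results(partials):
    # The server-side reduce step: combine the volunteers' partial counts.
    total = Counter()
    for p in partials:
        total += p
    return total
```

Because tasks share no state, results can be merged in any order, which is what makes unreliable, come-and-go browser volunteers workable.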

  8. Domainwise Web Page Optimization Based On Clustered Query Sessions Using Hybrid Of Trust And ACO For Effective Information Retrieval

    Directory of Open Access Journals (Sweden)

    Dr. Suruchi Chawla

    2015-08-01

Full Text Available Abstract In this paper a hybrid of Ant Colony Optimization (ACO) and trust has been used for domainwise web page optimization in clustered query sessions for effective information retrieval. The trust of a web page identifies its degree of relevance in satisfying the specific information need of the user. The trusted web pages, when optimized using pheromone updates in ACO, identify the trusted colonies of web pages relevant to the user's information need in a given domain. Hence in this paper the hybrid of trust and ACO has been used on clustered query sessions to identify a larger number of relevant documents in a given domain in order to better satisfy the information need of the user. An experiment was conducted on a data set of web query sessions to test the effectiveness of the proposed approach in three selected domains (Academics, Entertainment and Sports), and the results confirm the improvement in the precision of search results.
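A minimal sketch of the trust-guided pheromone idea: pheromone on each page evaporates and is then reinforced in proportion to the page's trust score, so trusted pages accumulate pheromone over iterations and rise in the ranking. The update rule and parameter values below are illustrative assumptions, not the paper's exact formulation:

```python
def update_pheromone(pheromone, trust, evaporation=0.5):
    """One ACO-style update: pheromone evaporates at a fixed rate,
    then is reinforced by each page's trust score. Repeated updates
    converge toward trust / evaporation, so the steady-state
    pheromone ordering mirrors the trust ordering."""
    return {page: (1 - evaporation) * tau + trust.get(page, 0.0)
            for page, tau in pheromone.items()}
```

Iterating this update over the pages of a clustered query session would let the "colonies" of high-pheromone (trusted) pages emerge, which is the intuition behind the hybrid.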

  9. Social Dynamics in Web Page through Inter-Agent Interaction

    Science.gov (United States)

    Takeuchi, Yugo; Katagiri, Yasuhiro

    Social persuasion abounds in human-human interactions. Attitudes and behaviors of people are invariably influenced by the attitudes and behaviors of other people as well as our social roles/relationships toward them. In the pedagogic scene, the relationship between teacher and learner produces one of the most typical interactions, in which the teacher makes the learner spontaneously study what he/she teaches. This study is an attempt to elucidate the nature and effectiveness of social persuasion in human-computer interaction environments. We focus on the social dynamics of multi-party interactions that involve both human-agent and inter-agent interactions. An experiment is conducted in a virtual web-instruction setting employing two types of agents: conductor agents who accompany and guide each learner throughout his/her learning sessions, and domain-expert agents who provide explanations and instructions for each stage of the instructional materials. In this experiment, subjects are assigned two experimental conditions: the authorized condition, in which an agent respectfully interacts with another agent, and the non-authorized condition, in which an agent carelessly interacts with another agent. The results indicate performance improvements in the authorized condition of inter-agent interactions. An analysis is given from the perspective of the transfer of authority from inter-agent to human-agent interactions based on social conformity. We argue for pedagogic advantages of social dynamics created by multiple animated character agents.

  10. 78 FR 53464 - Agency Information Collection Activities: Submission for Review; Information Collection Extension...

    Science.gov (United States)

    2013-08-29

    ... (FRCoP): User Registration Page (DHS Form 10059 (9/09)). The FRCoP web based tool collects profile... of Practice Web site found at . The user will complete the form online and submit it through the Web... SECURITY Agency Information Collection Activities: Submission for Review; Information Collection Extension...

  11. Effects of picture amount on preference, balance, and dynamic feel of Web pages.

    Science.gov (United States)

    Chiang, Shu-Ying; Chen, Chien-Hsiung

    2012-04-01

This study investigates the effects of picture amount on subjective evaluation. The experiment herein adopted two variables to define picture amount: column ratio and picture size. Six column ratios were employed: 7:93, 15:85, 24:76, 33:67, 41:59, and 50:50. Five picture sizes were examined: 140 x 81, 220 x 127, 300 x 173, 380 x 219, and 460 x 266 pixels. The experiment implemented a within-subject design; 104 participants were asked to evaluate 30 web page layouts. Repeated measurements revealed that the column ratio and picture size have significant effects on preference, balance, and dynamic feel. The results indicated the most appropriate picture amount for display: column ratios of 15:85 and 24:76, and picture sizes of 220 x 127, 300 x 173, and 380 x 219. The research findings can serve as the basis for the application of design guidelines for future web page interface design.

  12. Research on Chinese web page SVM classifier based on information gain

    Directory of Open Access Journals (Sweden)

    PAN Zhengcai

    2013-06-01

Full Text Available In order to improve the efficiency and accuracy of text classification, optimizations and improvements are made for the defects and deficiencies of the feature dimensionality reduction method and the traditional information gain method in text classification of Chinese web pages. First, part-of-speech filtering and synonym merging are applied for an initial reduction of the feature dimensionality. Then, an improved information gain method is proposed for computing the feature weights. Finally, the Support Vector Machine (SVM) classification algorithm is used for text classification of Chinese web pages. Both theoretical analysis and experimental results show that this method achieves better performance and classification results than the traditional method.
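The information gain criterion used for feature weighting can be written as IG(t) = H(C) - H(C|t): the reduction in class entropy obtained by knowing whether term t is present. A minimal Python sketch of the standard (unimproved) computation, assuming documents are represented as term sets with class labels:

```python
from math import log2

def entropy(counts):
    # Shannon entropy of a class distribution given raw counts.
    total = sum(counts)
    return -sum(c / total * log2(c / total) for c in counts if c)

def information_gain(docs, term):
    """docs: list of (set_of_terms, class_label) pairs.
    IG(term) = H(C) - H(C | term present/absent)."""
    classes = sorted({label for _, label in docs})
    def class_counts(subset):
        return [sum(1 for _, l in subset if l == c) for c in classes]
    n = len(docs)
    with_t = [d for d in docs if term in d[0]]
    without_t = [d for d in docs if term not in d[0]]
    h_c = entropy(class_counts(docs))
    h_cond = sum(len(part) / n * entropy(class_counts(part))
                 for part in (with_t, without_t) if part)
    return h_c - h_cond
```

A term that perfectly separates the classes has IG equal to the class entropy, while a term present in every document has IG of zero, which is why IG serves as a feature selection and weighting signal.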

  13. The Impact on Effectiveness and User Satisfaction of Menu Positioning on Web Pages

    OpenAIRE

    Dr Pietro Murano; Kennedy K. Oenga

    2012-01-01

    The authors of this paper are conducting research into the usability of menu positioning on web pages. Other researchers have also done work in this area, but the results are not conclusive and therefore more work still needs to be done in this area. The design and results of an empirical experiment, investigating the usability of menu positioning on a supermarket web site, are presented in this paper. As a comparison, the authors tested a left vertical menu and a fisheye menu placed horizont...

  14. Introducing a Web API for Dataset Submission into a NASA Earth Science Data Center

    Science.gov (United States)

    Moroni, D. F.; Quach, N.; Francis-Curley, W.

    2016-12-01

    As the landscape of data becomes increasingly more diverse in the domain of Earth Science, the challenges of managing and preserving data become more onerous and complex, particularly for data centers on fixed budgets and limited staff. Many solutions already exist to ease the cost burden for the downstream component of the data lifecycle, yet most archive centers are still racing to keep up with the influx of new data that still needs to find a quasi-permanent resting place. For instance, having well-defined metadata that is consistent across the entire data landscape provides for well-managed and preserved datasets throughout the latter end of the data lifecycle. Translators between different metadata dialects are already in operational use, and facilitate keeping older datasets relevant in today's world of rapidly evolving metadata standards. However, very little is done to address the first phase of the lifecycle, which deals with the entry of both data and the corresponding metadata into a system that is traditionally opaque and closed off to external data producers, thus resulting in a significant bottleneck to the dataset submission process. The ATRAC system was the NOAA NCEI's answer to this previously obfuscated barrier to scientists wishing to find a home for their climate data records, providing a web-based entry point to submit timely and accurate metadata and information about a very specific dataset. A couple of NASA's Distributed Active Archive Centers (DAACs) have implemented their own versions of a web-based dataset and metadata submission form including the ASDC and the ORNL DAAC. The Physical Oceanography DAAC is the most recent in the list of NASA-operated DAACs who have begun to offer their own web-based dataset and metadata submission services to data producers. What makes the PO.DAAC dataset and metadata submission service stand out from these pre-existing services is the option of utilizing both a web browser GUI and a RESTful API to

  15. Job submission and management through web services the experience with the CREAM service

    CERN Document Server

    Aiftimiei, C; Bertocco, S; Fina, S D; Ronco, S D; Dorigo, A; Gianelle, A; Marzolla, M; Mazzucato, M; Sgaravatto, M; Verlato, M; Zangrando, L; Corvo, M; Miccio, V; Sciabà, A; Cesini, D; Dongiovanni, D; Grandi, C

    2008-01-01

    Modern Grid middleware is built around components providing basic functionality, such as data storage, authentication, security, job management, resource monitoring and reservation. In this paper we describe the Computing Resource Execution and Management (CREAM) service. CREAM provides a Web service-based job execution and management capability for Grid systems; in particular, it is being used within the gLite middleware. CREAM exposes a Web service interface allowing conforming clients to submit and manage computational jobs to a Local Resource Management System. We developed a special component, called ICE (Interface to CREAM Environment) to integrate CREAM in gLite. ICE transfers job submissions and cancellations from the Workload Management System, allowing users to manage CREAM jobs from the gLite User Interface. This paper describes some recent studies aimed at assessing the performance and reliability of CREAM and ICE; those tests have been performed as part of the acceptance tests for integration of ...

  16. The ATLAS Public Web Pages: Online Management of HEP External Communication Content

    Science.gov (United States)

    Goldfarb, S.; Marcelloni, C.; Eli Phoboo, A.; Shaw, K.

    2015-12-01

The ATLAS Education and Outreach Group is in the process of migrating its public online content to a professionally designed set of web pages built on the Drupal [1] content management system. Development of the front-end design passed through several key stages, including audience surveys, stakeholder interviews, usage analytics, and a series of fast design iterations, called sprints. Implementation of the web site involves application of the HTML design using Drupal templates, refined development iterations, and the overall population of the site with content. We present the design and development processes and share the lessons learned along the way, including the results of the data-driven discovery studies. We also demonstrate the advantages of selecting a back-end supported by content management, with a focus on workflow. Finally, we discuss usage of the new public web pages to implement outreach strategy through implementation of clearly presented themes, consistent audience targeting and messaging, and the enforcement of a well-defined visual identity.

  17. When the Web meets the cell: using personalized PageRank for analyzing protein interaction networks.

    Science.gov (United States)

    Iván, Gábor; Grolmusz, Vince

    2011-02-01

An enormous and constantly increasing quantity of biological information is represented in metabolic and protein interaction network databases. Most of these data are freely accessible through large public depositories. The robust analysis of these resources requires novel technologies that are being developed today. Here we demonstrate a technique, originating from the PageRank computation for the World Wide Web, for analyzing large interaction networks. The method is fast, scalable and robust, and its capabilities are demonstrated on metabolic network data of the tuberculosis bacterium and the proteomics analysis of the blood of melanoma patients. The Perl script for computing the personalized PageRank in protein networks is available for non-profit research applications (together with sample input files) at the address: http://uratim.com/pp.zip.
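For readers without Perl, the personalized PageRank computation the paper applies to protein networks can be sketched with plain power iteration. This is a generic textbook implementation, not the authors' script; the restart vector concentrates probability mass on a chosen seed set (e.g. proteins of interest), so the resulting scores measure network proximity to the seeds:

```python
def personalized_pagerank(graph, seeds, damping=0.85, iters=100):
    """Power iteration for personalized PageRank.

    graph: dict mapping each node to a list of out-neighbours.
    seeds: set of nodes on which restart probability is concentrated.
    """
    nodes = list(graph)
    restart = {n: (1.0 / len(seeds) if n in seeds else 0.0) for n in nodes}
    rank = dict(restart)
    for _ in range(iters):
        nxt = {n: (1 - damping) * restart[n] for n in nodes}
        for n in nodes:
            out = graph[n]
            if out:
                share = damping * rank[n] / len(out)
                for m in out:
                    nxt[m] += share
            else:
                # Dangling node: return its mass to the seed distribution.
                for m in nodes:
                    nxt[m] += damping * rank[n] * restart[m]
        rank = nxt
    return rank
```

The same routine works unchanged whether the nodes are web pages, metabolites, or proteins; only the input graph differs.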

  18. 'Submission'

    DEFF Research Database (Denmark)

    Berg-Sørensen, Anders

    2017-01-01

    On 7 January 2015, the day of the deadly attack on Charlie Hebdo, the Parisian satirical magazine, French author Michel Houellebecq published Soumission (Submission), his already contested novel. Charlie Hebdo had a satirical feature on the cover that day ridiculing Houellebecq’s novel, which...

  19. Do-It-Yourself: A Special Library's Approach to Creating Dynamic Web Pages Using Commercial Off-The-Shelf Applications

    Science.gov (United States)

    Steeman, Gerald; Connell, Christopher

    2000-01-01

Many librarians may feel that dynamic Web pages are out of their reach, financially and technically. Yet we are reminded in library and Web design literature that static home pages are a thing of the past. This paper describes how librarians at the Institute for Defense Analyses (IDA) library developed a database-driven, dynamic intranet site using commercial off-the-shelf applications. Administrative issues include surveying a library users group for interest and needs evaluation; outlining metadata elements; and committing resources, from managing time to populate the database to training in Microsoft FrontPage and Web-to-database design. Technical issues covered include Microsoft Access database fundamentals and lessons learned in the Web-to-database process, including setting up Data Source Names (DSNs), redesigning queries to accommodate the Web interface, and understanding the Access 97 query language vs. Structured Query Language (SQL). This paper also offers tips on editing Active Server Pages (ASP) scripting to create desired results. A how-to annotated resource list closes out the paper.

  20. PROTOTIPE PEMESANAN BAHAN PUSTAKA MELALUI WEB MENGGUNAKAN ACTIVE SERVER PAGE (ASP

    Directory of Open Access Journals (Sweden)

    Djoni Haryadi Setiabudi

    2002-01-01

Full Text Available Electronic commerce is one of the components of the Internet that is growing fast around the world. In this research, a prototype is developed for a library service that offers ordering of library collections, especially books and articles, through the World Wide Web. Building a web application that allows interaction between seller and buyer requires technology and software for creating dynamic web sites. One such programming language is Active Server Pages (ASP), which is combined with a database system to store data. The other component, serving as an interface between the application and the database, is ActiveX Data Objects (ADO). ASP has an advantage in its scripting method, and it is easy to configure with a database. This application consists of two major parts: administrator and user. The prototype has facilities for editing, searching and browsing ordering information online. Users can also download search results and ordered articles. The payment method in this e-commerce system is quite essential, because in Indonesia not everybody has a credit card. As a solution, the prototype has a form for users who do not have a credit card; once the bill has been paid, the user can complete the transaction online. In this case one of ASP's advantages, the "session", is used: data in process is not lost as long as the user remains in that session. This is used in the user and admin areas, where users and the admin can carry out various processes.

  1. Analysis of co-occurrence toponyms in web pages based on complex networks

    Science.gov (United States)

    Zhong, Xiang; Liu, Jiajun; Gao, Yong; Wu, Lun

    2017-01-01

A large number of geographical toponyms exist in web pages and other documents, providing abundant geographical resources for GIS. It is very common for toponyms to co-occur in the same documents. To investigate these relations associated with geographic entities, a novel complex network model for co-occurring toponyms is proposed. Then, 12 toponym co-occurrence networks are constructed from the toponym sets extracted from the People's Daily Paper documents of 2010. It is found that two toponyms have a high co-occurrence probability if they are at the same administrative level or if they possess a part-whole relationship. By applying complex network analysis methods to the toponym co-occurrence networks, we find the following characteristics. (1) The navigation vertices of the co-occurrence networks can be found by degree centrality analysis. (2) The networks show strong clustering, and it takes only a few steps to reach one vertex from another, implying that the networks are small-world graphs. (3) The degree distribution satisfies a power law with an exponent of 1.7, so the networks are scale-free. (4) The networks are disassortative and have similar assortative modes, with assortative exponents of approximately 0.18 and assortative indexes less than 0. (5) The frequency of toponym co-occurrence is weakly negatively correlated with geographic distance, but more strongly negatively correlated with administrative hierarchical distance. Considering the toponym frequencies and co-occurrence relationships, a novel method based on link analysis is presented to extract the core toponyms from web pages. This method is suitable and effective for geographical information retrieval.
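The construction of a toponym co-occurrence network, and the degree-centrality analysis used to find navigation vertices, can be sketched as follows. This is a simplified illustration with made-up documents; the paper's networks were built from the People's Daily corpus:

```python
from itertools import combinations
from collections import Counter

def build_cooccurrence_network(documents):
    """Each document is a set of toponyms; every pair appearing in the
    same document gets an edge whose weight counts the number of
    documents in which the pair co-occurs."""
    edges = Counter()
    for doc in documents:
        for a, b in combinations(sorted(doc), 2):
            edges[(a, b)] += 1
    return edges

def degree_centrality(edges):
    # Number of distinct co-occurrence partners per toponym.
    deg = Counter()
    for (a, b), _w in edges.items():
        deg[a] += 1
        deg[b] += 1
    return deg
```

High-degree vertices (typically higher-level administrative units that co-occur with many places) are the "navigation vertices" the analysis in (1) identifies.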

  2. Using Frames and JavaScript To Automate Teacher-Side Web Page Navigation for Classroom Presentations.

    Science.gov (United States)

    Snyder, Robin M.

    HTML provides a platform-independent way of creating and making multimedia presentations for classroom instruction and making that content available on the Internet. However, time in class is very valuable, so that any way to automate or otherwise assist the presenter in Web page navigation during class can save valuable seconds. This paper…

  3. Compiling Web-Based Topical Pages for Teaching Geoscience With Visualizations

    Science.gov (United States)

    Crabaugh, J. P.; Manduca, C.; Cantwell, L.

    2004-12-01

The effective use of visualizations is one of the most important aspects of teaching university-level geoscience. Today, there is a rich array of visualizations available on-line that can be integrated into lectures, class activities, and lab exercises. However, when a teacher conducts a web-based search on a topic, the search result is often an overload of material with little selectivity. At the Science Education Resource Center (SERC) located at Carleton College, we have begun to address this problem within the larger context of a NAGT "On the Cutting Edge" workshop. Entitled "Teaching Geoscience with Visualizations", this workshop met at Carleton College in February 2004. The workshop website can be accessed at http://serc.carleton.edu/NAGTWorkshops/visualize04/index.html In cooperation with participants of the workshop, we are growing a number of different types of on-line collections, including topical collections. These "Topical Pages" are collections of web-based visualizations, suitable for use in a class or lab, which are grouped together based on a specific geoscience topic. An important aspect of these topical collections is the effort given to making the collections: 1.) integrated from subtopic to subtopic, 2.) a selective gathering of effective visualizations, and 3.) representative of the diversity of material available on each topic. The topical page collections that we have created at SERC are more than just a gathering of related visualizations. Each entry includes a brief, instructive caption describing the nature of the visualizations contained and the specific ideas that are graphically represented by the visualization. In addition, most collections contain links not only to the website source of the visualizations but also to the website of the creator of the visualization. The number and breadth of topical pages at the "Teaching with Visualizations" website continues to grow. Initial collections are available on topics such as Plate Tectonic

  4. Issues of Page Representation and Organisation in Web Browser's Revisitation Tools

    Directory of Open Access Journals (Sweden)

    Andy Cockburn

    2000-05-01

Full Text Available Many commercial and research WWW browsers include a variety of graphical revisitation tools that let users return to previously seen pages. Examples include history lists, bookmarks and site maps. In this paper, we examine two fundamental design and usability issues that all graphical tools for revisitation must address. First, how can individual pages be represented to best support page identification? We discuss the problems and prospects of various page representations: the pages themselves, image thumbnails, text labels, and abstract page properties. Second, what display organisation schemes can be used to enhance the visualisation of large sets of previously visited pages? We compare temporal organisations, hub-and-spoke dynamic trees, spatial layouts and site maps.

  5. A STUDY ON RANKING METHOD IN RETRIEVING WEB PAGES BASED ON CONTENT AND LINK ANALYSIS: COMBINATION OF FOURIER DOMAIN SCORING AND PAGERANK SCORING

    Directory of Open Access Journals (Sweden)

    Diana Purwitasari

    2008-01-01

Full Text Available The ranking module is an important component of the search process, which sorts through relevant pages. Since a collection of Web pages has additional information inherent in the hyperlink structure of the Web, that information can be represented as a link score and then combined with the content score of the usual information retrieval techniques. In this paper we report our studies on a ranking score for Web pages that combines link analysis, PageRank Scoring, with content analysis, Fourier Domain Scoring. Our experiments use a collection of Wikipedia Web pages related to the subject of Statistics, with the objective of checking the correctness and evaluating the performance of the combined ranking method. Evaluation of PageRank Scoring shows that the highest score does not always relate to Statistics. Since links within Wikipedia articles mean users are always one click away from more information on any point that has a link attached, it is possible that topics unrelated to Statistics are frequently mentioned in the collection. The combined method shows that a link score, given a proportional weight relative to the content score of Web pages, does affect the retrieval results.
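The combination scheme studied here, a link score given a proportional weight against a content score, can be sketched generically. The normalisation and weighting below are illustrative assumptions, not the paper's exact formula:

```python
def combined_score(content_scores, link_scores, weight=0.3):
    """Combine a content score (e.g. Fourier Domain Scoring) with a
    link score (e.g. PageRank), normalising each to [0, 1] first so
    that `weight` gives the link score a proportional influence."""
    def normalise(scores):
        hi = max(scores.values())
        return {k: (v / hi if hi else 0.0) for k, v in scores.items()}
    c, l = normalise(content_scores), normalise(link_scores)
    return {page: (1 - weight) * c[page] + weight * l.get(page, 0.0)
            for page in c}
```

Sweeping `weight` from 0 (pure content ranking) to 1 (pure link ranking) is one way to reproduce the kind of comparison the experiments describe.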

  6. Hormone Replacement Therapy advertising: sense and nonsense on the web pages of the best-selling pharmaceuticals in Spain

    Directory of Open Access Journals (Sweden)

    Cantero María

    2010-03-01

Full Text Available Abstract Background The balance of the benefits and risks of long-term use of hormone replacement therapy (HRT) has been a matter of debate for decades. In Europe, HRT requires a medical prescription and its advertising is only permitted when aimed at health professionals (direct-to-consumer advertising is allowed in some non-European countries). The objective of this study is to analyse the appropriateness and quality of Internet advertising about HRT in Spain. Methods A search was carried out on the Internet (January 2009) using the eight best-selling HRT drugs in Spain. The brand name of each drug was entered into Google's search engine. The web sites appearing on the first page of results and the corresponding companies were analysed using the European Code of Good Practice as the reference point. Results Five corporate web pages: none of them included bibliographic references or measures to ensure that the advertising was only accessible by health professionals. Regarding non-corporate web pages (n = 27): 41% did not include the company name or address, 44% made no distinction between patient and health professional information, 7% contained bibliographic references, 26% provided unspecific information for the use of HRT for osteoporosis and 19% included menstrual cycle regulation or boosting femininity as an indication. Two online pharmacies sold HRT drugs which could be bought online in Spain, did not include the name or contact details of the registered company, nor did they stipulate the need for a medical prescription or differentiate between patient and health professional information. Conclusions Even though pharmaceutical companies have committed themselves to compliance with codes of good practice, deficiencies were observed regarding the identification, information and promotion of HRT medications on their web pages.
Unaffected by legislation, non-corporate web pages are an ideal place for indirect HRT advertising, but they often contain

  7. Hormone replacement therapy advertising: sense and nonsense on the web pages of the best-selling pharmaceuticals in Spain.

    Science.gov (United States)

    Chilet-Rosell, Elisa; Martín Llaguno, Marta; Ruiz Cantero, María Teresa; Alonso-Coello, Pablo

    2010-03-16

The balance of the benefits and risks of long-term use of hormone replacement therapy (HRT) has been a matter of debate for decades. In Europe, HRT requires medical prescription and its advertising is only permitted when aimed at health professionals (direct to consumer advertising is allowed in some non European countries). The objective of this study is to analyse the appropriateness and quality of Internet advertising about HRT in Spain. A search was carried out on the Internet (January 2009) using the eight best-selling HRT drugs in Spain. The brand name of each drug was entered into Google's search engine. The web sites appearing on the first page of results and the corresponding companies were analysed using the European Code of Good Practice as the reference point. Five corporate web pages: none of them included bibliographic references or measures to ensure that the advertising was only accessible by health professionals. Regarding non-corporate web pages (n = 27): 41% did not include the company name or address, 44% made no distinction between patient and health professional information, 7% contained bibliographic references, 26% provided unspecific information for the use of HRT for osteoporosis and 19% included menstrual cycle regulation or boosting femininity as an indication. Two online pharmacies sold HRT drugs which could be bought online in Spain, did not include the name or contact details of the registered company, nor did they stipulate the need for a medical prescription or differentiate between patient and health professional information. Even though pharmaceutical companies have committed themselves to compliance with codes of good practice, deficiencies were observed regarding the identification, information and promotion of HRT medications on their web pages. Unaffected by legislation, non-corporate web pages are an ideal place for indirect HRT advertising, but they often contain misleading information. HRT can be bought online from Spain

  8. Improving the web site's effectiveness by considering each page's temporal information

    NARCIS (Netherlands)

    Li, ZG; Sun, MT; Dunham, MH; Xiao, YQ; Dong, G; Tang, C; Wang, W

    2003-01-01

    Improving the effectiveness of a web site is always one of its owner's top concerns. By focusing on analyzing web users' visiting behavior, web mining researchers have developed a variety of helpful methods, based upon association rules, clustering, prediction and so on. However, we have found

  9. Perspectives in Education: Submissions

    African Journals Online (AJOL)

    Author Guidelines. Guidelines to authors can be found on the journal's own site here: http://www.perspectives-in-education.com/pages.aspx?PID=10. Alternatively, see below: Information for Authors. Submission of articles. PiE invites submissions in the following categories: Research articles. Contributors are encouraged to ...

  10. An Exploratory Study of Student Satisfaction with University Web Page Design

    Science.gov (United States)

    Gundersen, David E.; Ballenger, Joe K.; Crocker, Robert M.; Scifres, Elton L.; Strader, Robert

    2013-01-01

    This exploratory study evaluates the satisfaction of students with a web-based information system at a medium-sized regional university. The analysis provides a process for simplifying data interpretation in captured student user feedback. Findings indicate that student classifications, as measured by demographic and other factors, determine…

  11. SurveyWiz and factorWiz: JavaScript Web pages that make HTML forms for research on the Internet.

    Science.gov (United States)

    Birnbaum, M H

    2000-05-01

    SurveyWiz and factorWiz are Web pages that act as wizards to create HTML forms that enable one to collect data via the Web. SurveyWiz allows the user to enter survey questions or personality test items with a mixture of text boxes and scales of radio buttons. One can add demographic questions of age, sex, education, and nationality with the push of a button. FactorWiz creates the HTML for within-subjects, two-factor designs as large as 9 x 9, or higher order factorial designs up to 81 cells. The user enters levels of the row and column factors, which can be text, images, or other multimedia. FactorWiz generates the stimulus combinations, randomizes their order, and creates the page. In both programs HTML is displayed in a window, and the user copies it to a text editor to save it. When uploaded to a Web server and supported by a CGI script, the created Web pages allow data to be collected, coded, and saved on the server. These programs are intended to assist researchers and students in quickly creating studies that can be administered via the Web.
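A miniature version of what such a wizard emits: a function that generates the HTML for one radio-button scale item and wraps the items in a form whose data a server-side CGI script could collect. The markup, element names, and URL below are illustrative assumptions, not SurveyWiz's actual output:

```python
def radio_scale_item(name, question, levels):
    """Emit one survey question as an HTML radio-button scale,
    in the spirit of the forms SurveyWiz generates."""
    buttons = "\n".join(
        f'<input type="radio" name="{name}" value="{i}"> {label}'
        for i, label in enumerate(levels, start=1))
    return f"<p>{question}</p>\n{buttons}"

def make_form(action_url, items):
    """items: list of (field_name, question_text, list_of_scale_labels)."""
    body = "\n".join(radio_scale_item(n, q, lv) for n, q, lv in items)
    return (f'<form method="post" action="{action_url}">\n'
            f'{body}\n<input type="submit" value="Submit">\n</form>')
```

Generating the markup programmatically, rather than hand-writing it, is exactly the labour the wizards remove; the researcher only supplies question text and scale labels.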

  12. 75 FR 21297 - Submission for OMB Review; Comment Request Web Based Training for Pain Management Providers

    Science.gov (United States)

    2010-04-23

    ... Based Training for Pain Management Providers, via the Web site PainAndAddictionTreatment.com , to... addiction co-occurring in the provider's patients. In order to evaluate the effectives of the program... collection techniques or other forms of information technology. Direct Comments to OMB: Written comments and...

  13. 77 FR 39269 - Submission for OMB Review, Comment Request, Proposed Collection: IMLS Museum Web Database...

    Science.gov (United States)

    2012-07-02

    ... Web Database: MuseumsCount.gov AGENCY: Institute of Museum and Library Services, National Foundation... Institute of Museum and Library Services announces that the following information collection has been... Miller, Management Analyst, Institute of Museum and Library Services, 1800 M St. NW., Washington, DC...

  14. Direct advertising to the consumer of web pages in Spanish that offer cannabinoids for medicinal uses

    Directory of Open Access Journals (Sweden)

    Julio Cjuno

    2018-02-01

Full Text Available Dear Editor: Cannabinoids are substances derived from cannabis plants (Whiting et al., 2015) that have been approved by the Food & Drug Administration [FDA] for the management of various symptoms, such as loss of appetite in patients with HIV/AIDS and the nausea and vomiting associated with chemotherapy (WHO, 2015). However, a recent meta-analysis found that evidence on the use of cannabinoids for various conditions is scarce (Whiting et al., 2015). Adverse events such as dizziness, dry mouth, nausea, fatigue, somnolence, vomiting, disorientation, confusion, and hallucinations have also been reported (Whiting et al., 2015). It is therefore important that the direct-to-consumer advertising [DTCA] of those offering marijuana-derived products for medicinal uses include adequate guidance on their uses and possible adverse events (Gellad and Lyles, 2007), something that has not been explored previously. The aim of this study was to evaluate the DTCA of web pages offering marijuana derivatives for medicinal uses in Spanish-speaking countries. To that end, during March 2017 searches were run on Google.com using the following Spanish-language terms: [Marihuana medicinal], [Cannabis medicinal], [Aceite de marihuana], and [Aceite de cannabis], chosen because they were the most searched terms on the topic over the previous five years according to Google Trends (2017), combined with the names of the following countries: Mexico, Colombia, Argentina, Chile, Bolivia, Paraguay, Uruguay, Venezuela, Ecuador, Peru, and Spain. Browsing history and cookies were deleted beforehand to avoid personalized results. The first 50 results of each search were reviewed, and the web pages offering marijuana derivatives for medicinal purposes were selected, whose characteristics were recorded

  15. Interactive Development of Regional Climate Web Pages for the Western United States

    Science.gov (United States)

    Oakley, N.; Redmond, K. T.

    2013-12-01

Weather and climate have a pervasive and significant influence on the western United States, driving a demand for information that is ongoing and constantly increasing. In communications with stakeholders, policy makers, researchers, educators, and the public through formal and informal encounters, three standout challenges face users of weather and climate information in the West. First, the needed information is scattered about the web, making it difficult or tedious to access. Second, information is too complex or requires too much background knowledge to be immediately applicable. Third, due to complex terrain, there is high spatial variability in weather, climate, and their associated impacts in the West, warranting information outlets with a region-specific focus. Two web sites, TahoeClim and the Great Basin Weather and Climate Dashboard, were developed to overcome these challenges to meeting regional weather and climate information needs. TahoeClim focuses on the Lake Tahoe Basin, a region of critical environmental concern spanning the border of Nevada and California. TahoeClim arose out of the need for researchers, policy makers, and environmental organizations to have access to all available weather and climate information in one place. Additionally, TahoeClim developed tools to both interpret and visualize data for the Tahoe Basin, with supporting instructional material. The Great Basin Weather and Climate Dashboard arose from discussions at an informal meeting about Nevada drought organized by the USDA Farm Service Agency. Stakeholders at this meeting expressed a need to take a 'quick glance' at various climate indicators to support their decision making process. Both sites were designed to provide 'one-stop shopping' for weather and climate information in their respective regions and to be intuitive and usable by a diverse audience. An interactive, 'co-development' approach was taken with the sites to ensure that the needs of potential users were met. The sites were

  16. Analysis of Croatian archives' web page from the perspective of public programmes

    Directory of Open Access Journals (Sweden)

    Goran Pavelin

    2015-04-01

    Full Text Available In order to remain relevant in society, archivists should promote collections and records that are kept in the archives. Through public programmes, archives interact with customers and various public actors and create the institutional image. This paper is concerned with the role of public programmes in the process of modernization of the archival practice, with the emphasis on the Croatian state archives. The aim of the paper is to identify what kind of information is offered to users and public in general on the web sites of the Croatian state archives. Public programmes involve two important components of archival practice: archives and users. Therefore, public programmes ensure good relations with the public. Croatian archivists still question the need for public relations in archives, while American and European archives have already integrated public relations into the basic archival functions. The key components needed for successful planning and implementation of public programs are the source of financing, compliance with the annual work plan, clear goals, defined target audience, cooperation and support from the local community, and the evaluation of results.

  17. Etat de l'art des méthodes d'adaptation des pages Web en situation de handicap visuel

    OpenAIRE

    Bonavero, Yoann; Huchard, Marianne; Meynard, Michel; Waffo Kouhoué, Austin

    2016-01-01

National audience; This article surveys the technologies, tools, and research work on Web accessibility for people with visual impairments. We begin by describing the structure of a Web page and the various standards and norms that exist to ensure a minimum level of accessibility. We then detail the various possibilities offered by the best-known assistive technologies and by specific tool...

  18. The effect of new links on Google PageRank

    NARCIS (Netherlands)

    Avrachenkov, Konstatin; Litvak, Nelli

    2004-01-01

PageRank is one of the principal criteria according to which Google ranks Web pages. PageRank can be interpreted as the frequency of visits to a Web page by a random surfer and thus it reflects the popularity of a Web page. We study the effect of newly created links on Google PageRank. We discuss to

  19. Computing Principal Eigenvectors of Large Web Graphs: Algorithms and Accelerations Related to PageRank and HITS

    Science.gov (United States)

    Nagasinghe, Iranga

    2010-01-01

This thesis investigates and develops a few acceleration techniques for the search engine algorithms used in PageRank and HITS computations. PageRank and HITS methods are two highly successful applications of modern Linear Algebra in computer science and engineering. They constitute the essential technologies that account for the immense growth and…

  20. Trident Web page

    Energy Technology Data Exchange (ETDEWEB)

    Johnson, Randall P. [Los Alamos National Laboratory; Fernandez, Juan C. [Los Alamos National Laboratory

    2012-06-25

The Trident Laser Facility at Los Alamos National Laboratory is an extremely versatile Nd:glass laser system dedicated to high energy density physics research and fundamental laser-matter interactions, whose unique capabilities provide an ideal platform for many experiments. The laser system consists of three high energy beams which can be delivered into two independent target experimental areas. The target areas are equipped with an extensive suite of diagnostics for research in ultra-intense laser matter interactions, dynamic material properties, and laser-plasma instabilities. Several important discoveries and first observations have been made at Trident, including laser-accelerated MeV mono-energetic ions, nonlinear kinetic plasma waves, the transition between kinetic and fluid nonlinear behavior, and other fundamental laser-matter interaction processes. Trident's unique long-pulse capabilities have enabled state-of-the-art innovations in laser-launched flyer-plates, and other unique loading techniques for material dynamics research.

  1. Web Caching

    Indian Academy of Sciences (India)

    Home; Journals; Resonance – Journal of Science Education; Volume 7; Issue 7. Web Caching - A Technique to Speedup Access to Web Contents. Harsha Srinath Shiva Shankar Ramanna. General Article Volume 7 Issue 7 July 2002 pp 54-62 ... Keywords. World wide web; data caching; internet traffic; web page access.

  2. Journal of Cultural Studies: Submissions

    African Journals Online (AJOL)

    Bibliographic referencing within and at the end of each paper should follow the MLA style. An abstract of between 150 and 200 words, and a cover page, which indicates the full name and brief bio-data of the author, should accompany each submission. The cover page should be typed separately from the manuscript, which ...

  3. PageRank (II): Mathematics

    African Journals Online (AJOL)

    maths/stats

    INTRODUCTION. PageRank is Google's system for ranking web pages. A page with a higher PageRank is deemed more important and is more likely to be listed above a ... Felix U. Ogban, Department of Mathematics/Statistics and Computer Science, Faculty of Science, University of ..... probability, 2004, 41, (3): 721-734.

  4. Paleolimnology Web Portal: A Web Site Designed to Increase Paleolimnology Data Availability

    Science.gov (United States)

    Eakin, C. M.; Moy, C. M.; Habermann, T.; Gross, W. S.; Keltner, J. M.

    2001-12-01

    Despite widespread use of lacustrine records to interpret paleolimnologic and paleoclimatic change, there is a large gap between the data published in peer-reviewed journals and those submitted for archive and available to other researchers online. A primary goal of the World Data Center (WDC) for paleoclimatology and the International Geosphere-Biosphere Programme (IGBP) - Past Global Changes (PAGES) core programme is to have full and open sharing of all data sets needed for global change studies. To help improve the quantity and quality of data submitted to the WDC for Paleoclimatology, we are developing online data submission and advanced interactive browse and access tools. Our poster presents a new web-site designed to make paleolimnology data more accessible by incorporating web-based data submission forms, a multi-proxy relational database, and interactive mapping tools. The WDC for Paleoclimatology is currently designing intuitive and streamlined web-based submission forms, which will allow investigators to quickly submit their data and metadata on-line. We are also importing all existing data and metadata in our archives into a multiproxy relational database that will allow users to quickly query and retrieve paleolimnological data, as well as display the data in various formats. Furthermore, we are implementing two Paleolimnology mapping tools that will allow users to search, display, and query data in a geographical format. The first tool, WebMapper, uses a Java applet to draw maps and display metadata. This will be supplemented by a plotting tool that will provide basic plotting functions to allow users to examine data before downloading them. The second mapping tool, ArcIMS, allows users to overlay paleoclimatic data with various GIS data sets in addition to providing basic spatial analysis functions. 
We believe that these new web-based features will encourage more extensive data sharing and submission, making paleolimnological data more available and

  5. Dynamic Scheduling for Web Monitoring Crawler

    Science.gov (United States)

    2009-02-27

pages are related to a specific web page. This project uses classification history of web pages to determine similarity between web pages. That is, web... Badminton , Baseball, Basketball, Boxing, Canoeing, Cycling, Equestrian, Fencing, Gymnastics, Handball, Hockey, Judo, Modern Pentathlon, Rowing, Sailing

  6. Monte Carlo methods in PageRank computation: When one iteration is sufficient

    NARCIS (Netherlands)

    Avrachenkov, K.; Litvak, Nelli; Nemirovsky, D.; Osipova, N.

    2005-01-01

PageRank is one of the principal criteria according to which Google ranks Web pages. PageRank can be interpreted as the frequency of visits to a Web page by a random surfer and thus it reflects the popularity of a Web page. Google computes the PageRank using the power iteration method which requires
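
    The two computational approaches these PageRank records contrast, deterministic power iteration and Monte Carlo simulation of the random surfer, can be sketched side by side. The four-page link graph below is invented for illustration; this is a simplified sketch of the technique, not Google's implementation, and it omits dangling-node handling (every page here has an out-link).

```python
import random

# Invented four-page link graph for illustration.
links = {0: [1, 2], 1: [2], 2: [0], 3: [2]}   # page -> outgoing links
n, d = len(links), 0.85                        # pages, damping factor

def pagerank_power(iters=100):
    """Deterministic power iteration on the random-surfer chain."""
    r = [1.0 / n] * n
    for _ in range(iters):
        nxt = [(1 - d) / n] * n                # teleportation mass
        for page, outs in links.items():
            for target in outs:
                nxt[target] += d * r[page] / len(outs)
        r = nxt
    return r

def pagerank_mc(walks_per_page=5000, seed=0):
    """Monte Carlo estimate: simulate surfers, count visit frequencies."""
    rng = random.Random(seed)
    visits = [0] * n
    for start in range(n):                     # uniform restart points
        for _ in range(walks_per_page):
            page = start
            visits[page] += 1
            while rng.random() < d:            # keep surfing w.p. d
                page = rng.choice(links[page])
                visits[page] += 1
    total = sum(visits)
    return [v / total for v in visits]

exact = pagerank_power()
approx = pagerank_mc()
print("power iteration:", [round(x, 3) for x in exact])
print("monte carlo:    ", [round(x, 3) for x in approx])
```

    The Monte Carlo ranking agrees with power iteration after relatively few simulated walks, which is the effect the record above exploits.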

  8. Using PHP to Parse eBook Resources from Drupal 6 to Populate a Mobile Web Page

    Directory of Open Access Journals (Sweden)

    Junior Tidal

    2012-10-01

Full Text Available The Ursula C. Schwerin library needed to create a page for its mobile website devoted to subscribed eBooks. These resources, however, were only available through the main desktop website, where they were organized using the Drupal 6 content management system with contributed and core modules. It was necessary to create a solution to retrieve the eBook databases from the Drupal installation and deliver them to a separate mobile site.

  9. Personal and Public Start Pages in a library setting

    NARCIS (Netherlands)

    Kieft-Wondergem, Dorine

Personal and Public Start Pages are web-based resources. With these kinds of tools it is possible to make your own free start page. A Start Page allows you to put all your web resources into one page, including blogs, email, podcasts, and RSS feeds. It is possible to share the content of the page with

  10. MedlinePlus Connect: Web Application

    Science.gov (United States)

... page: https://medlineplus.gov/connect/application.html MedlinePlus Connect is available as a Web application or Web ...

  11. 78 FR 35040 - Submission for OMB Review; 30-day Comment Request; Web-Based Media Literacy Parent Training for...

    Science.gov (United States)

    2013-06-11

    ... be requested in writing. Proposed Collection: Web-based Media Literacy Parent Training for Substance...-Based Media Literacy Parent Training for Substance Use Prevention in Rural Locations SUMMARY: Under the... displays a currently valid OMB control number. Direct Comments To OMB: Written comments and/or suggestions...

  12. The STRESA (storage of reactor safety) database (Web page: http://asa2.jrc.it/stresa)

    Energy Technology Data Exchange (ETDEWEB)

    Annunziato, A.; Addabbo, C.; Brewka, W. [Joint Research Centre, Commission of the European Communities, Ispra (Italy)

    2001-07-01

    A considerable amount of resources has been devoted at the international level during the last few decades, to the generation of experimental databases in order to provide reference information for the understanding of reactor safety relevant phenomenologies and for the development and/or assessment of related computational methodologies. The extent to which these databases are preserved and can be accessed and retrieved is an issue of major concern. This paper provides an outline of the JRC databases preservation initiative and a description of the supporting web-based computer platform STRESA. (author)

  13. Social Bookmarking Induced Active Page Ranking

    Science.gov (United States)

    Takahashi, Tsubasa; Kitagawa, Hiroyuki; Watanabe, Keita

    Social bookmarking services have recently made it possible for us to register and share our own bookmarks on the web and are attracting attention. The services let us get structured data: (URL, Username, Timestamp, Tag Set). And these data represent user interest in web pages. The number of bookmarks is a barometer of web page value. Some web pages have many bookmarks, but most of those bookmarks may have been posted far in the past. Therefore, even if a web page has many bookmarks, their value is not guaranteed. If most of the bookmarks are very old, the page may be obsolete. In this paper, by focusing on the timestamp sequence of social bookmarkings on web pages, we model their activation levels representing current values. Further, we improve our previously proposed ranking method for web search by introducing the activation level concept. Finally, through experiments, we show effectiveness of the proposed ranking method.
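
    The activation-level idea above can be illustrated by weighting each bookmark by its recency, so that a page's score decays unless bookmarks keep arriving. The exponential decay form and the 30-day half-life below are assumptions made for this sketch, not the paper's exact model.

```python
import math

# Hypothetical decay model: each bookmark contributes a weight that halves
# every `half_life_days` days. The exponential form and the 30-day
# half-life are illustrative assumptions, not the paper's model.
def activation_level(bookmark_times, now, half_life_days=30.0):
    """Sum of exponentially decayed weights, one per bookmark time (days)."""
    decay = math.log(2) / half_life_days
    return sum(math.exp(-decay * (now - t)) for t in bookmark_times)

now = 1000.0                          # "current" time, in days
stale_page = [100.0] * 50             # many bookmarks, all ~900 days old
active_page = [990.0, 995.0, 999.0]   # few bookmarks, but recent

print(round(activation_level(stale_page, now), 6))
print(round(activation_level(active_page, now), 3))
```

    Under this weighting, a page with fifty very old bookmarks scores lower than a page with three recent ones, matching the paper's intuition that raw bookmark counts alone do not guarantee current value.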

  14. International Journal of Humanistic Studies: Submissions

    African Journals Online (AJOL)

    Manuscripts are refereed anonymously; therefore, the author's name, e-mail address, and brief contributor information (not exceeding fifty words) should appear on the title page only. All pages must be numbered. The Journal prefers submissions sent as an e-mail attachment editoruniswaijhs@yahoo.com in Microsoft Word.

  15. South African Journal of Geomatics: Submissions

    African Journals Online (AJOL)

    Author Guidelines. Before a paper may be submitted, the corresponding author must register as a User on the journal web page (http://www.sajg.org.za). Registration: 1. Navigate to the web page: http://www.sajg.org.za. 2. Click on the Register button (top centre of the page). 3. Create the user profile: Key in: user name, ...

  16. CpGAVAS, an integrated web server for the annotation, visualization, analysis, and GenBank submission of completely sequenced chloroplast genome sequences

    Directory of Open Access Journals (Sweden)

    Liu Chang

    2012-12-01

Full Text Available Abstract Background The complete sequences of chloroplast genomes provide a wealth of information regarding the evolutionary history of species. With the advance of next-generation sequencing technology, the number of completely sequenced chloroplast genomes is expected to increase exponentially, and powerful computational tools for annotating the genome sequences are urgently needed. Results We have developed a web server CPGAVAS. The server accepts a complete chloroplast genome sequence as input. First, it predicts protein-coding and rRNA genes based on the identification and mapping of the most similar, full-length protein, cDNA and rRNA sequences by integrating results from the Blastx, Blastn, protein2genome and est2genome programs. Second, tRNA genes and inverted repeats (IR) are identified using tRNAscan, ARAGORN and vmatch, respectively. Third, it calculates the summary statistics for the annotated genome. Fourth, it generates a circular map ready for publication. Fifth, it can create a Sequin file for GenBank submission. Last, it allows the extraction of protein and mRNA sequences for a given list of genes and species. The annotation results in GFF3 format can be edited using any compatible annotation editing tool. The edited annotations can then be uploaded to CPGAVAS for update and re-analysis repeatedly. Using known chloroplast genome sequences as a test set, we show that CPGAVAS performs comparably to another application, DOGMA, while having several superior functionalities. Conclusions CPGAVAS allows the semi-automatic and complete annotation of a chloroplast genome sequence, and the visualization, editing and analysis of the annotation results. It will become an indispensable tool for researchers studying chloroplast genomes. The software is freely accessible from http://www.herbalgenomics.org/cpgavas.

  17. Créations graphiques réussissez vos brochures, logos, pages web, newsletters, flyers, et bien plus encore !

    CERN Document Server

    McWade, John

    2010-01-01

In this book, which gathers a selection of the best professional projects published in his celebrated graphic design magazine, Before & After, John McWade offers an in-depth presentation of the basic principles of graphic design before sharing concrete techniques and processes. In a simple, friendly style, he analyzes a range of successful graphic designs – brochures, newsletters, web sites, business cards, and other visual media – and explains why and how they work. In turn, you can draw inspiration from these examples and apply the same techniques at will to improve your own documents. You will learn to: crop photos to sharpen their function and meaning; draw the reader's eye to the desired spot using eight different solutions; work with the basic components of graphic design, such as line, shape, direction, movement, scale, color, and many more...

  18. EPA Web Taxonomy

    Data.gov (United States)

    U.S. Environmental Protection Agency — EPA's Web Taxonomy is a faceted hierarchical vocabulary used to tag web pages with terms from a controlled vocabulary. Tagging enables search and discovery of EPA's...

  19. A new means of communication with the populations: the Extremadura Regional Government Radiological Monitoring alert WEB Page; Un nuevo intento de comunicacion a la poblacion: La pagina Web de la red de alerta de la Junta de Extremadura

    Energy Technology Data Exchange (ETDEWEB)

    Baeza, A.; Vasco, J.; Miralles, Y.; Torrado, L.; Gil, J. M.

    2003-07-01

Extremadura XXI a summary sheet, relatively easy to interpret, giving the radiation levels and dosimetry detected during the immediately preceding semester. Recently too, the challenge has been taken on of providing constantly updated information on as complex a topic as the radiological monitoring of the environment. To this end, a Web page has been developed dealing with the operation and results provided by the aforementioned Radiological Warning Network of Extremadura. The page structure consists of seven major blocks: (i) origin and objectives of the network; (ii) a description of the stations of the network; (iii) their modes of operation in normal circumstances and in the case of an operational or radiological anomaly; (iv) the results that the network provides; (v) a glossary of terms to clarify as straightforwardly as possible some of the terms and concepts that are of unavoidable use, but are unfamiliar to the population in general; (vi) information about links to other Web sites that also deal with this issue to some degree; and (vii) an option for questions and contact between the visitor to the page and those responsible for its creation and maintenance. Actions such as that described here will doubtless contribute positively to increasing the necessary trust that the population deserves to have in the correct operation of the measures adopted to guarantee their adequate radiological protection. (Author)

  20. Alzheimer's Disease Information Page

    Science.gov (United States)


  1. Het WEB leert begrijpen

    CERN Multimedia

    Stroeykens, Steven

    2004-01-01

The Web could be much more useful if computers understood some of the information on Web pages. That is the goal of the "semantic Web", a project in which, among others, Tim Berners-Lee, the inventor of the original Web, takes part

  2. Users page feedback

    CERN Multimedia

    2010-01-01

    In October last year the Communication Group proposed an interim redesign of the users’ web pages in order to improve the visibility of key news items, events and announcements to the CERN community. The proposed update to the users' page (right), and the current version (left, behind) This proposed redesign was seen as a small step on the way to much wider reforms of the CERN web landscape proposed in the group’s web communication plan.   The results are available here. Some of the key points: - the balance between news / events / announcements and access to links on the users’ pages was not right - many people asked to see a reversal of the order so that links appeared first, news/events/announcements last; - many people felt that we should keep the primary function of the users’ pages as an index to other CERN websites; - many people found the sections of the front page to be poorly delineated; - people do not like scrolling; - there were performance...

  4. Asymptotic analysis for personalized web search

    NARCIS (Netherlands)

    Volkovich, Y.; Litvak, Nelli

    2010-01-01

    PageRank with personalization is used in Web search as an importance measure for Web documents. The goal of this paper is to characterize the tail behavior of the PageRank distribution in the Web and other complex networks characterized by power laws. To this end, we model the PageRank as a solution

  5. Development of Semantic Web - Markup Languages, Web Services, Rules, Explanation, Querying, Proof and Reasoning

    National Research Council Canada - National Science Library

    McGuinness, Deborah

    2008-01-01

    ...-S), the Web Ontology Query Language (OWL-QL) and Semantic Web Rule Language (SWRL) W3C submissions. This report contains the evolution of these markup languages as well as a discussion of semantic query languages, proof and explanation...

  6. Microsoft Expression Web for dummies

    CERN Document Server

    Hefferman, Linda

    2013-01-01

Expression Web is Microsoft's newest tool for creating and maintaining dynamic Web sites. This FrontPage replacement offers all the simple "what-you-see-is-what-you-get" tools for creating a Web site along with some pumped up new features for working with Cascading Style Sheets and other design options. Microsoft Expression Web For Dummies arrives in time for early adopters to get a feel for how to build an attractive Web site. Author Linda Hefferman teams up with longtime FrontPage For Dummies author Asha Dornfest to show the easy way for first-time Web designers, FrontPage ve

  7. Sjogren's Syndrome Information Page

    Science.gov (United States)


  8. What snippets say about pages

    NARCIS (Netherlands)

    Demeester, Thomas; Nguyen, Dong-Phuong; Trieschnigg, Rudolf Berend; Develder, Chris; Hiemstra, Djoerd

What is the likelihood that a Web page is considered relevant to a query, given the relevance assessment of the corresponding snippet? Using a new Federated Web Search test collection that contains search results from over a hundred search engines on the internet, we are able to investigate such

  9. Entertainment Pages.

    Science.gov (United States)

    Druce, Mike

    1981-01-01

    Notes that the planning of an effective entertainment page in a school newspaper must begin by establishing its purpose. Examines all the elements that contribute to the makeup of a good entertainment page. (RL)

  10. A zooming Web browser

    Energy Technology Data Exchange (ETDEWEB)

    Bederson, B.B.; Hollan, J.D.; Stewart, J.; Rogers, D.; Vick, D. [New Mexico Univ., Albuquerque, NM (United States). Dept. of Computer Science; Ring, L.; Grose, E.; Forsythe, C. [Sandia National Labs., Albuquerque, NM (United States)

    1996-12-31

    We are developing a prototype zooming World-Wide Web browser within Pad++, a multiscale graphical environment. Instead of having a single page visible at a time, multiple pages and the links between them are depicted on a large zoomable information surface. Pages are scaled so that the page in focus is clearly readable with connected pages shown at smaller scales to provide context. We quantitatively compared performance with the Pad++ Web browser and Netscape in several different scenarios. We examined how quickly users could answer questions about a specific Web site designed for this test. Initially we found that subjects answered questions slightly slower with Pad++ than with Netscape. After analyzing the results of this study, we implemented several changes to the Pad++ Web browser, and repeated one Pad++ condition. After improvements were made to the Pad++ browser, subjects using Pad++ answered questions 23% faster than those using Netscape.

  11. What Snippets Say About Pages

    OpenAIRE

    DEMEESTER, Thomas; Nguyen, Dong-Phuong; Trieschnigg, Rudolf Berend; Develder, Chris; Hiemstra, Djoerd

    2013-01-01

    We summarize findings from [1]. What is the likelihood that a Web page is considered relevant to a query, given the relevance assessment of the corresponding snippet? Using a new Federated Web Search test collection that contains search results from over a hundred search engines on the internet, weare able to investigate such research questions from a global perspective. Our test collection covers the main Web search engines like Google, Yahoo!, and Bing, as well as smaller search engines ded...

  12. Evaluating Web Usability

    Science.gov (United States)

    Snider, Jean; Martin, Florence

    2012-01-01

    Web usability focuses on design elements and processes that make web pages easy to use. A website for college students was evaluated for underutilization. One-on-one testing, focus groups, web analytics, peer university review and marketing focus group and demographic data were utilized to conduct usability evaluation. The results indicated that…

  13. Introduction to Webometrics Quantitative Web Research for the Social Sciences

    CERN Document Server

    Thelwall, Michael

    2009-01-01

    Webometrics is concerned with measuring aspects of the web: web sites, web pages, parts of web pages, words in web pages, hyperlinks, web search engine results. The importance of the web itself as a communication medium and for hosting an increasingly wide array of documents, from journal articles to holiday brochures, needs no introduction. Given this huge and easily accessible source of information, there are limitless possibilities for measuring or counting on a huge scale (e.g., the number of web sites, the number of web pages, the number of blogs) or on a smaller scale (e.g., the number o

  14. Probabilistic relation between In-Degree and PageRank

    NARCIS (Netherlands)

    Litvak, Nelli; Scheinhardt, Willem R.W.; Volkovich, Y.

    2008-01-01

    This paper presents a novel stochastic model that explains the relation between power laws of In-Degree and PageRank. PageRank is a popularity measure designed by Google to rank Web pages. We model the relation between PageRank and In-Degree through a stochastic equation, which is inspired by the

  15. International Journal of Health Research: Submissions

    African Journals Online (AJOL)

    The maximum length of manuscripts should be 6000 words (24 double-spaced typewritten pages) for reviews, 4000 words for research articles, and 1,500 for technical notes, commentaries and short communications. Submission of Manuscript: With effect from June 2006, all manuscripts (must be in English) should be ...

  16. Macroscopic characterisations of Web accessibility

    Science.gov (United States)

    Lopes, Rui; Carriço, Luis

    2010-12-01

    The Web Science framework poses fundamental questions on the analysis of the Web, by focusing on how microscopic properties (e.g. at the level of a Web page or Web site) emerge into macroscopic properties and phenomena. One research topic on the analysis of the Web is Web accessibility evaluation, which centres on understanding how accessible a Web page is for people with disabilities. However, when framing Web accessibility evaluation on Web Science, we have found that existing research stays at the microscopic level. This article presents an experimental study on framing Web accessibility evaluation into Web Science's goals. This study resulted in novel accessibility properties of the Web not found at microscopic levels, as well as of Web accessibility evaluation processes themselves. We observed at large scale some of the empirical knowledge on how accessibility is perceived by designers and developers, such as the disparity of interpretations of accessibility evaluation tools warnings. We also found a direct relation between accessibility quality and Web page complexity. We provide a set of guidelines for designing Web pages, education on Web accessibility, as well as on the computational limits of large-scale Web accessibility evaluations.

  17. Measuring the Utilization of On-Page Search Engine Optimization in Selected Domain

    National Research Council Canada - National Science Library

    Goran Matošević

    2015-01-01

    Search engine optimization (SEO) techniques involve "on-page" and "off-page" actions taken by web developers and SEO specialists with the aim of increasing the ranking of web pages in search engine results pages (SERP...

  18. Identifying multiple submissions in Internet research: preserving data integrity.

    Science.gov (United States)

    Bowen, Anne M; Daniel, Candice M; Williams, Mark L; Baird, Grayson L

    2008-11-01

    Internet-based sexuality research with hidden populations has become increasingly popular. Respondent anonymity may encourage participation and lower social desirability, but associated disinhibition may promote multiple submissions, especially when incentives are offered. The goal of this study was to identify the usefulness of different variables for detecting multiple submissions from repeat responders and to explore incentive effects. The data included 1,900 submissions from a three-session Internet intervention with a pretest and three post-test questionnaires. Participants were men who have sex with men and incentives were offered to rural participants for completing each questionnaire. The final number of submissions included 1,273 "unique", 132 first submissions by "repeat responders" and 495 additional submissions by the "repeat responders" (N = 1,900). Four categories of repeat responders were identified: "infrequent" (2-5 submissions), "persistent" (6-10 submissions), "very persistent" (11-30 submissions), and "hackers" (more than 30 submissions). Internet Protocol (IP) addresses, user names, and passwords were the most useful for identifying "infrequent" repeat responders. "Hackers" often varied their IP address and identifying information to prevent easy identification, but investigating the data for small variations in IP, using reverse telephone lookup, and examining patterns across usernames and passwords were helpful. Incentives appeared to play a role in stimulating multiple submissions, especially from the more sophisticated "hackers". Finally, the web is ever evolving and it will be necessary to have good programmers and staff who evolve as fast as "hackers".
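    A minimal sketch of the kind of linking heuristic the study describes: submissions that reuse identifying information are grouped, and each responder is bucketed using the study's frequency categories. The field names and the username-only linking rule below are simplifying assumptions; the actual study combined IP addresses, usernames, passwords, and manual inspection.

    ```python
    from collections import defaultdict

    def flag_repeat_responders(submissions):
        """Link submissions that reuse the same username (case-insensitive),
        then bucket each responder by submission count, using the study's
        categories: unique (1), infrequent (2-5), persistent (6-10),
        very persistent (11-30), hacker (>30).
        Field names ('id', 'username') are illustrative, not the study's schema."""
        groups = defaultdict(list)
        for sub in submissions:
            # Real detection would also compare IPs, passwords, and near-duplicate
            # answers; username reuse is the simplest linking signal.
            groups[sub["username"].strip().lower()].append(sub["id"])

        def category(n):
            if n == 1:
                return "unique"
            if n <= 5:
                return "infrequent"
            if n <= 10:
                return "persistent"
            if n <= 30:
                return "very persistent"
            return "hacker"

        # Map the first submission id of each responder to (category, all ids).
        return {ids[0]: (category(len(ids)), ids) for ids in groups.values()}
    ```
    
    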

  19. Web-Based Course Management and Web Services

    Science.gov (United States)

    Mandal, Chittaranjan; Sinha, Vijay Luxmi; Reade, Christopher M. P.

    2004-01-01

    The architecture of a web-based course management tool that has been developed at IIT [Indian Institute of Technology], Kharagpur and which manages the submission of assignments is discussed. Both the distributed architecture used for data storage and the client-server architecture supporting the web interface are described. Further developments…

  20. Zimbabwe Science News: Submissions

    African Journals Online (AJOL)

    Submission Preparation Checklist. As part of the submission process, authors are required to check off their submission's compliance with all of the following items, and submissions may be returned to authors that do not adhere to these guidelines. The submission has not been previously published, nor is it before another ...

  1. Linking Wikipedia to the web

    NARCIS (Netherlands)

    Kaptein, R.; Serdyukov, P.; Kamps, J.; Chen, H.-H.; Efthimiadis, E.N.; Savoy, J.; Crestani, F.; Marchand-Maillet, S.

    2010-01-01

    We investigate the task of finding links from Wikipedia pages to external web pages. Such external links significantly extend the information in Wikipedia with information from the Web at large, while retaining the encyclopedic organization of Wikipedia. We use a language modeling approach to create

  2. Research and optimization of page updated forecast on Nutch

    Directory of Open Access Journals (Sweden)

    HU Wei

    2016-08-01

    Full Text Available The web page update prediction method of Nutch is an adjacency method whose update parameters must be set manually; they are not adaptively adjustable and cannot cope with the differences among massive numbers of web page updates. To address this problem, this paper puts forward a dynamic selection strategy to improve Nutch's web page update prediction. When historical web page update data are insufficient, the strategy uses a DBSCAN clustering algorithm based on MapReduce to reduce the number of pages the crawler system crawls, and the update cycle of the sample web pages is used as the update cycle of the other pages in the same category. When historical web page update data are sufficient, the data are used to build a Poisson process model, which can more accurately predict each web page's update cycle. Finally, the improved strategy is tested on the Hadoop distributed platform. The experimental results show that the optimized web page update prediction method performs better.
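    The Poisson-process part of the strategy can be illustrated with a minimal sketch: under a homogeneous Poisson model, the maximum-likelihood estimate of the update rate is the number of observed update intervals divided by the observation span, and its inverse estimates the update cycle. The function below is an illustration of that idea only, not the paper's implementation, and the function name is hypothetical.

    ```python
    def estimate_update_cycle(update_times):
        """Estimate a page's update cycle from observed update timestamps,
        assuming updates follow a homogeneous Poisson process.
        MLE of the rate lambda = (number of intervals) / (observation span);
        the expected update cycle is 1 / lambda."""
        if len(update_times) < 2:
            raise ValueError("need at least two observed updates")
        times = sorted(update_times)
        span = times[-1] - times[0]
        n_intervals = len(times) - 1
        rate = n_intervals / span   # MLE of the Poisson rate
        return 1.0 / rate           # mean inter-update time = update cycle

    # e.g. updates observed on days 0, 2, 4, 6 suggest a two-day cycle
    ```
    
    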

  3. Web database development

    OpenAIRE

    Tsardas, Nikolaos A.

    2001-01-01

    This thesis explores the concept of Web Database Development using Active Server Pages (ASP) and Java Server Pages (JSP). These are among the leading technologies in the web database development. The focus of this thesis was to analyze and compare the ASP and JSP technologies, exposing their capabilities, limitations, and differences between them. Specifically, issues related to back-end connectivity using Open Database Connectivity (ODBC) and Java Database Connectivity (JDBC), application ar...

  4. WEB STRUCTURE MINING

    Directory of Open Access Journals (Sweden)

    CLAUDIA ELENA DINUCĂ

    2011-01-01

    Full Text Available The World Wide Web became one of the most valuable resources for information retrieval and knowledge discovery due to the permanent increase in the amount of data available online. Given the web's dimensions, users easily get lost in its rich hyper structure. Application of data mining methods is the right solution for knowledge discovery on the Web. The knowledge extracted from the Web can be used to improve the performance of Web information retrieval, question answering and Web based data warehousing. In this paper, I provide an introduction to Web mining categories and focus on one of these categories: Web structure mining. Web structure mining, one of three categories of web mining for data, is a tool used to identify the relationship between Web pages linked by information or direct link connection. It offers information about how different pages are linked together to form this huge web. Web structure mining finds hidden basic structures and uses hyperlinks for more web applications such as web search.

  5. Algorithms and Models for the Web Graph

    NARCIS (Netherlands)

    Gleich, David F.; Komjathy, Julia; Litvak, Nelli

    2015-01-01

    This volume contains the papers presented at WAW2015, the 12th Workshop on Algorithms and Models for the Web-Graph held during December 10–11, 2015, in Eindhoven. There were 24 submissions. Each submission was reviewed by at least one, and on average two, Program Committee members. The committee

  6. APLIKASI WEB CRAWLER UNTUK WEB CONTENT PADA MOBILE PHONE

    Directory of Open Access Journals (Sweden)

    Sarwosri Sarwosri

    2009-01-01

    Full Text Available Crawling is the process behind a search engine, which traverses the World Wide Web in a structured way and according to certain ethics. An application that runs the crawling process is called a Web Crawler, also known as a web spider or web robot. The growth of mobile search service providers has been followed by the growth of web crawlers that can browse web pages with mobile content. The Web Crawler application can be accessed by mobile devices, and only web pages of the Mobile Content type are explored by the crawler. The Web Crawler's duty is to collect Mobile Content. A mobile application functions as a search application that uses the results from the Web Crawler. The Web Crawler server consists of a Servlet, a Mobile Content Filter, and a datastore. The Servlet is the gateway connection between the client and the server. The datastore is the storage medium for crawling results. The Mobile Content Filter selects web pages, and only web pages appropriate for mobile devices, i.e. with mobile content, are forwarded.

  7. Best Practices for Searchable Collection Pages

    Science.gov (United States)

    Searchable Collection pages are stand-alone documents that do not have any web area navigation. They should not recreate existing content on other sites and should be tagged with quality metadata and taxonomy terms.

  8. Editorial page

    OpenAIRE

    Saaty, T.L.

    1988-01-01

    Author GuidelinesIJCH strictly adheres on the recommendations for the Conduct, Reporting, Editing and Publication of Scholarly Work in Medical Journals as per the standard universal guidelines given by International Committee of Medical Journal Editors (ICMJE - Recommendations for Uniform Requirements for Manuscripts). Authors are requested to visit http://www.icmje.org/index.html before making online submission of their manuscript(s).  http://www.icmje.org/recommendations/browse/manuscript-p...

  9. Google Analytics: Single Page Traffic Reports

    Science.gov (United States)

    These are pages that live outside of Google Analytics (GA) but allow you to view GA data for any individual page on either the public EPA web or EPA intranet. You do need to log in to Google Analytics to view them.

  10. Design and Implementation of Domain based Semantic Hidden Web Crawler

    OpenAIRE

    Manvi; Bhatia, Komal Kumar; Dixit, Ashutosh

    2015-01-01

    Web is a wide term which mainly consists of surface web and hidden web. One can easily access the surface web using traditional web crawlers, but they are not able to crawl the hidden portion of the web. These traditional crawlers retrieve contents from web pages, which are linked by hyperlinks ignoring the information hidden behind form pages, which cannot be extracted using simple hyperlink structure. Thus, they ignore large amount of data hidden behind search forms. This paper emphasizes o...

  11. CERN celebrates Web anniversary

    CERN Multimedia

    2003-01-01

    "Ten years ago, CERN issued a statement declaring that a little known piece of software called the World Wide Web was in the public domain. That was on 30 April 1993, and it opened the floodgates to Web development around the world" (1 page).

  12. PageRank of integers

    Science.gov (United States)

    Frahm, K. M.; Chepelianskii, A. D.; Shepelyansky, D. L.

    2012-10-01

    We set up a directed network tracing links from a given integer to its divisors and analyze the properties of the Google matrix of this network. The PageRank vector of this matrix is computed numerically and it is shown that its probability is approximately inversely proportional to the PageRank index, thus being similar to the Zipf law and the dependence established for the World Wide Web. The spectrum of the Google matrix of integers is characterized by a large gap and a relatively small number of nonzero eigenvalues. A simple semi-analytical expression for the PageRank of integers is derived that allows us to find this vector for matrices of billion size. This network provides a new PageRank order of integers.
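    The construction lends itself to a compact sketch: each integer j links to its proper divisors, the columns are normalized into a stochastic matrix, and PageRank is obtained by power iteration on the Google matrix. This is a minimal illustration of the idea, not the authors' code; the damping factor and iteration count are arbitrary choices, and the dense matrix only scales to small N.

    ```python
    import numpy as np

    def pagerank_of_integers(N, alpha=0.85, iters=200):
        """Build the directed divisor network on integers 1..N (each integer
        links to its proper divisors) and compute the PageRank vector of the
        corresponding Google matrix by power iteration."""
        # Column-stochastic matrix: column j spreads weight over divisors of j.
        S = np.zeros((N, N))
        for j in range(2, N + 1):
            divisors = [d for d in range(1, j) if j % d == 0]
            for d in divisors:
                S[d - 1, j - 1] = 1.0 / len(divisors)
        # Integer 1 has no proper divisors (dangling node): uniform column.
        S[:, 0] = 1.0 / N
        G = alpha * S + (1 - alpha) / N   # Google matrix with damping alpha
        p = np.full(N, 1.0 / N)
        for _ in range(iters):
            p = G @ p
        return p / p.sum()
    ```

    Since 1 divides every integer, node 1 receives a link from every other node and collects the largest PageRank, consistent with the Zipf-like decay the paper reports.
    
    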

  13. Web Data Management

    OpenAIRE

    Abiteboul, Serge; Manolescu, Ioana; Rigaux, Philippe; Rousset, Marie-Christine; Senellart, Pierre

    2012-01-01

    Open access of the full text on the Web; International audience; Available at http://webdam.inria.fr/Jorge/ Internet and the Web have revolutionized access to information. Today, one finds primarily on the Web, HTML (the standard for the Web) but also documents in pdf, doc, plain text as well as images, music and videos. The public Web is composed of billions of pages on millions of servers. It is a fantastic means of sharing information. It is very simple to use for humans. On the negative s...

  14. CrazyEgg Reports for Single Page Analysis

    Science.gov (United States)

    CrazyEgg provides an in depth look at visitor behavior on one page. While you can use GA to do trend analysis of your web area, CrazyEgg helps diagnose the design of a single Web page by visually displaying all visitor clicks during a specified time.

  15. Deep iCrawl: An Intelligent Vision-Based Deep Web Crawler

    OpenAIRE

    R.Anita; V.Ganga Bharani; N.Nityanandam; Pradeep Kumar Sahoo

    2011-01-01

    The explosive growth of World Wide Web has posed a challenging problem in extracting relevant data. Traditional web crawlers focus only on the surface web while the deep web keeps expanding behind the scene. Deep web pages are created dynamically as a result of queries posed to specific web databases. The structure of the deep web pages makes it impossible for traditional web crawlers to access deep web contents. This paper, Deep iCrawl, gives a novel and vision-based app...

  16. Web Metasearch Result Clustering System

    Directory of Open Access Journals (Sweden)

    Adina LIPAI

    2008-01-01

    Full Text Available The paper presents a web search result clustering algorithm that was integrated in to a desktop application. The application aims to increase the web search engines performances by reducing the user effort in finding a web page in the list of results returned by the search engines.
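    As a toy illustration of search-result clustering of the kind the application performs (the paper's actual algorithm is not specified here), a greedy single-pass grouping of snippets by word overlap might look like this; the threshold value is an arbitrary assumption.

    ```python
    def cluster_results(snippets, threshold=0.3):
        """Greedy single-pass clustering of search-result snippets by Jaccard
        word overlap: each snippet joins the first cluster whose representative
        it resembles, otherwise it starts a new cluster."""
        def words(s):
            return set(s.lower().split())

        clusters = []  # list of (representative word set, member indices)
        for i, snippet in enumerate(snippets):
            w = words(snippet)
            for rep, members in clusters:
                jaccard = len(w & rep) / len(w | rep)
                if jaccard >= threshold:
                    members.append(i)
                    break
            else:
                clusters.append((w, [i]))
        return [members for _, members in clusters]
    ```
    
    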

  17. Survey of Techniques for Deep Web Source Selection and Surfacing the Hidden Web Content

    OpenAIRE

    Khushboo Khurana; M.B. Chandak

    2016-01-01

    Large and continuously growing dynamic web content has created new opportunities for large-scale data analysis in the recent years. There is huge amount of information that the traditional web crawlers cannot access, since they use link analysis technique by which only the surface web can be accessed. Traditional search engine crawlers require the web pages to be linked to other pages via hyperlinks causing large amount of web data to be hidden from the crawlers. Enormous data is available in...

  18. 61 | Page

    African Journals Online (AJOL)

    Fr. Ikenga

    2016-12-10

    Dec 10, 2016 ... this in Bishak v. National Productivity Centre & Anor7 when in determining the nature of probationary employment it held that 'an officer on probation ...... Ezekiel Amakiri & Ors. (1976) 2 S.C. 1 at page 11 – 12, Casir v London North Western Railway Co. (1975). LR 10 CP. 307; Pascoe V. Turner (1979) 2 All ...

  19. Even Faster Web Sites Performance Best Practices for Web Developers

    CERN Document Server

    Souders, Steve

    2009-01-01

    Performance is critical to the success of any web site, and yet today's web applications push browsers to their limits with increasing amounts of rich content and heavy use of Ajax. In this book, Steve Souders, web performance evangelist at Google and former Chief Performance Yahoo!, provides valuable techniques to help you optimize your site's performance. Souders' previous book, the bestselling High Performance Web Sites, shocked the web development world by revealing that 80% of the time it takes for a web page to load is on the client side. In Even Faster Web Sites, Souders and eight exp

  20. EuroGOV: Engineering a Multilingual Web Corpus

    NARCIS (Netherlands)

    Sigurbjörnsson, B.; Kamps, J.; de Rijke, M.

    2005-01-01

    EuroGOV is a multilingual web corpus that was created to serve as the document collection for WebCLEF, the CLEF 2005 web retrieval task. EuroGOV is a collection of web pages crawled from the European Union portal, European Union member state governmental web sites, and Russian government web sites.

  1. Automatic Generation of Web Applications from Visual High-Level Functional Web Components

    Directory of Open Access Journals (Sweden)

    Quan Liang Chen

    2009-01-01

    Full Text Available This paper presents high-level functional Web components such as frames, framesets, and pivot tables, which conventional development environments for Web applications have not yet supported. Frameset Web components provide several editing facilities such as adding, deleting, changing, and nesting of framesets to make it easier to develop Web applications that use frame facilities. Pivot table Web components sum up various kinds of data in two dimensions. They reduce the amount of code to be written by developers greatly. The paper also describes the system that implements these high-level functional components as visual Web components. This system assists designers in the development of Web applications based on the page-transition framework that models a Web application as a set of Web page transitions, and by using visual Web components, makes it easier to write processes to be executed when a Web page transfers to another.

  2. Instant PageSpeed optimization

    CERN Document Server

    Jaiswal, Sanjeev

    2013-01-01

    Filled with practical, step-by-step instructions and clear explanations for the most important and useful tasks. Instant PageSpeed Optimization is a hands-on guide that provides a number of clear, step-by-step exercises for optimizing your websites for better performance and improving their efficiency.Instant PageSpeed Optimization is aimed at website developers and administrators who wish to make their websites load faster without any errors and consume less bandwidth. It's assumed that you will have some experience in basic web technologies like HTML, CSS3, JavaScript, and the basics of netw

  3. XML Schema Guide for Secondary CDR Submissions

    Science.gov (United States)

    This document presents the extensible markup language (XML) schema guide for the Office of Pollution Prevention and Toxics’ (OPPT) e-CDRweb tool. E-CDRweb is the electronic, web-based tool provided by Environmental Protection Agency (EPA) for the submission of Chemical Data Reporting (CDR) information. This document provides the user with tips and guidance on correctly using the version 1.1 XML schema for the Joint Submission Form. Please note that the order of the elements must match the schema.

  4. Improving Web Accessibility in a University Setting

    Science.gov (United States)

    Olive, Geoffrey C.

    2010-01-01

    Improving Web accessibility for disabled users visiting a university's Web site is explored following the World Wide Web Consortium (W3C) guidelines and Section 508 of the Rehabilitation Act rules for Web page designers to ensure accessibility. The literature supports the view that accessibility is sorely lacking, not only in the USA, but also…

  5. A Runtime System for Interactive Web Services

    DEFF Research Database (Denmark)

    Brabrand, Claus; Møller, Anders; Sandholm, Anders

    1999-01-01

    Interactive web services are increasingly replacing traditional static web pages. Producing web services seems to require a tremendous amount of laborious low-level coding due to the primitive nature of CGI programming. We present ideas for an improved runtime system for interactive web services ...

  6. Database Driven Web Systems for Education.

    Science.gov (United States)

    Garrison, Steve; Fenton, Ray

    1999-01-01

    Provides technical information on publishing to the Web. Demonstrates some new applications in database publishing. Discusses the difference between static and database-driven Web pages. Reviews failures and successes of a Web database system. Addresses the question of how to build a database-driven Web site, discussing connectivity software, Web…

  7. Funnel-web spider bite

    Science.gov (United States)

    MedlinePlus page //medlineplus.gov/ency/article/002844.htm: Funnel-web spider bite.

  8. Testing the visual consistency of web sites

    NARCIS (Netherlands)

    van der Geest, Thea; Loorbach, N.R.

    2005-01-01

    Consistency in the visual appearance of Web pages is often checked by experts, such as designers or reviewers. This article reports a card sort study conducted to determine whether users rather than experts could distinguish visual (in-)consistency in Web elements and pages. The users proved to

  9. WebSelF

    DEFF Research Database (Denmark)

    Thomsen, Jakob Grauenkjær; Ernst, Erik; Brabrand, Claus

    2012-01-01

    We present WebSelF, a framework for web scraping which models the process of web scraping and decomposes it into four conceptually independent, reusable, and composable constituents. We have validated our framework through a full parameterized implementation that is flexible enough to capture...... previous work on web scraping. We conducted an experiment that evaluated several qualitatively different web scraping constituents (including previous work and combinations hereof) on about 11,000 HTML pages on daily versions of 17 web sites over a period of more than one year. Our framework solves three...... concrete problems with current web scraping and our experimental results indicate that composition of previous and our new techniques achieve a higher degree of accuracy, precision and specificity than existing techniques alone....

  10. Limitations of existing web services

    Indian Academy of Sciences (India)

    Limitations of existing web services: uploading or downloading large data; serving too many users from a single source; difficulty providing compute-intensive jobs; dependence on the internet and its bandwidth; security of data in transit; maintaining confidentiality of data ...

  11. Web wisdom how to evaluate and create information quality on the Web

    CERN Document Server

    Alexander, Janet E

    1999-01-01

    Web Wisdom is an essential reference for anyone needing to evaluate or establish information quality on the World Wide Web. The book includes easy to use checklists for step-by-step quality evaluations of virtually any Web page. The checklists can also be used by Web authors to help them ensure quality information on their pages. In addition, Web Wisdom addresses other important issues, such as understanding the ways that advertising and sponsorship may affect the quality of Web information. It features: * a detailed discussion of the items involved in evaluating Web information; * checklists

  12. Maintaining Consistency of Data on the Web

    OpenAIRE

    Bernauer, Martin

    2005-01-01

    Increasingly more data is becoming available on the Web, estimates speaking of 1 billion documents in 2002. Most of the documents are Web pages whose data is considered to be in XML format, expecting it to eventually replace HTML. A common problem in designing and maintaining a Web site is that data on a Web page often replicates or derives from other data, the so-called base data, that is usually not contained in the deriving or replicating page. Consequently, replicas and derivations become...

  13. 76 FR 68486 - Agency Information Collection Activities: Submission for Review; Information Collection Request...

    Science.gov (United States)

    2011-11-04

    ... collected via an online Web form. The information collected is used by the DHS S&T E- STCS program to... Web site will only employ secure Web-based technology (i.e., electronic registration form) to collect... SECURITY Agency Information Collection Activities: Submission for Review; Information Collection Request...

  14. PDF compression, OCR, web optimization using a watermarked ...

    Indian Academy of Sciences (India)

    PDF compression, OCR, web optimization using a watermarked evaluation copy of CVISION PDFCompressor.

  15. Indian accent text-to-speech system for web browsing

    Indian Academy of Sciences (India)

    Incorporation of speech and Indian scripts can greatly enhance the accessibility of web information among common people. This paper describes a 'web reader' which 'reads out' the textual contents of a selected web page in Hindi or in English with Indian accent. The content of the page is downloaded and parsed into ...

  16. Automating Information Discovery Within the Invisible Web

    Science.gov (United States)

    Sweeney, Edwina; Curran, Kevin; Xie, Ermai

    A Web crawler or spider crawls through the Web looking for pages to index, and when it locates a new page it passes the page on to an indexer. The indexer identifies links, keywords, and other content and stores these within its database. This database is searched by entering keywords through an interface and suitable Web pages are returned in a results page in the form of hyperlinks accompanied by short descriptions. The Web, however, is increasingly moving away from being a collection of documents to a multidimensional repository for sounds, images, audio, and other formats. This is leading to a situation where certain parts of the Web are invisible or hidden. The term known as the "Deep Web" has emerged to refer to the mass of information that can be accessed via the Web but cannot be indexed by conventional search engines. The concept of the Deep Web makes searches quite complex for search engines. Google states that the claim that conventional search engines cannot find such documents as PDFs, Word, PowerPoint, Excel, or any non-HTML page is not fully accurate and steps have been taken to address this problem by implementing procedures to search items such as academic publications, news, blogs, videos, books, and real-time information. However, Google still only provides access to a fraction of the Deep Web. This chapter explores the Deep Web and the current tools available in accessing it.

  17. Sleep Apnea Information Page

    Science.gov (United States)

    Sleep Apnea Information Page: What research is being done? ... Institutes of Health (NIH) conduct research related to sleep apnea in laboratories at the NIH, and also ...

  18. Connecting Classrooms to the Web: An Introduction to HTML.

    Science.gov (United States)

    LaRoe, R. John

    The World Wide Web (WWW) acts as a multimedia Internet, navigable via Web browsers. Web browsers (Mosaic, Netscape, Cello, WinWeb, etc.) read files treated in HyperText Mark-up Language (HTML) and display interactive pages to users. Teachers with computer-mediated classrooms or labs can use HTML and Web browsers to create multimedia presentations…

  19. Searching a database based web site

    OpenAIRE

    Filipe Silva; Gabriel David

    2003-01-01

    Currently, information systems are usually supported by databases (DB) and accessed through a Web interface. Pages in such Web sites are not drawn from HTML files but are generated on the fly upon request. Indexing and searching such dynamic pages raises several extra difficulties not solved by most search engines, which were designed for static contents. In this paper we describe the development of a search engine that overcomes most of the problems for a specific Web site, how the limitatio...

  20. An Efficient PageRank Approach for Urban Traffic Optimization

    Directory of Open Access Journals (Sweden)

    Florin Pop

    2012-01-01

    to determine optimal decisions for each traffic light, based on the solution given by Larry Page for page ranking in the Web environment (Page et al., 1999). Our approach is similar to the work presented by Sheng-Chung et al. (2009) and Yousef et al. (2010). We consider that the traffic lights are controlled by servers, and a score for each road is computed with an efficient PageRank approach and used in a cost function to determine optimal decisions. We demonstrate that the cumulative contribution of each car in the traffic respects the main constraint of the PageRank approach, preserving all the properties of the matrix considered in our model.

  1. Spinning the web of knowledge

    CERN Multimedia

    Knight, Matthew

    2007-01-01

    "On August 6, 1991, Tim Berners-Lee posted the World Wide Web's first Web site. Fifteen years on there are estimated to be over 100 million. The space of growth has happened at a bewildering rate and its success has even confounded its inventor." (1/2 page)

  2. Scientist who weaves wonderful web

    CERN Multimedia

    Wills, D

    2000-01-01

    Mr Berners-Lee's unique standing makes him a sought-after speaker. People want to know how he developed the Web and where he thinks it is headed. 'Weaving the Web', written by himself with Mark Fischetti, is his attempt to answer these questions (1 page).

  3. JavaServer pages

    National Research Council Canada - National Science Library

    Bergsten, H

    2004-01-01

    Table of contents (excerpt): Part I. JSP Application Basics; 1. Introducing JavaServer Pages; What Is JavaServer Pages?; Why Use JSP?; What You Need to Get Started...

  4. XML Schema Guide for Primary CDR Submissions

    Science.gov (United States)

    This document presents the extensible markup language (XML) schema guide for the Office of Pollution Prevention and Toxics’ (OPPT) e-CDRweb tool. E-CDRweb is the electronic, web-based tool provided by Environmental Protection Agency (EPA) for the submission of Chemical Data Reporting (CDR) information. This document provides the user with tips and guidance on correctly using the version 1.7 XML schema. Please note that the order of the elements must match the schema.

  5. Comparison of quality of internet pages on human papillomavirus immunization in Italian and in English.

    Science.gov (United States)

    Tozzi, Alberto Eugenio; Buonuomo, Paola Sabrina; Ciofi degli Atti, Marta Luisa; Carloni, Emanuela; Meloni, Marco; Gamba, Fiorenza

    2010-01-01

    Information available on the Internet about immunizations may influence parents' perception of human papillomavirus (HPV) immunization and their attitude toward vaccinating their daughters. We hypothesized that the quality of information on HPV available on the Internet may vary with language and with the level of knowledge of parents. To this end we compared the quality of a sample of Web pages in Italian with a sample of Web pages in English. Five reviewers assessed the quality of Web pages retrieved with popular search engines, using criteria adapted from the Good Information Practice Essential Criteria for Vaccine Safety Web Sites recommended by the World Health Organization. Quality of Web pages was assessed in the domains of accessibility, credibility, content, and design. Scores in these domains were compared through nonparametric statistical tests. We retrieved and reviewed 74 Web sites in Italian and 117 in English. Most retrieved Web pages (33.5%) were from private agencies. Median scores were higher for Web pages in English than for those in Italian in the domains of accessibility and content. The highest content scores were those of Web pages from governmental agencies or universities. Accessibility scores were positively associated with content scores (p < .01) and with credibility scores (p < .01). A total of 16.2% of Web pages in Italian opposed HPV immunization, compared with 6.0% of those in English (p < .05). Quality of information and the number of Web pages opposing HPV immunization may vary with the Web site language. High-quality Web pages on HPV, especially from public health agencies and universities, should be easily accessible and retrievable with common Web search engines. Copyright 2010 Society for Adolescent Medicine. Published by Elsevier Inc. All rights reserved.

  6. Web Development Simplified

    Science.gov (United States)

    Becker, Bernd W.

    2010-01-01

    The author has discussed the Multimedia Educational Resource for Teaching and Online Learning site, MERLOT, in a recent Electronic Roundup column. In this article, he discusses an entirely new Web page development tool that MERLOT has added for its members. The new tool is called the MERLOT Content Builder and is directly integrated into the…

  7. WEB CONTENT EXTRACTION USING HYBRID APPROACH

    Directory of Open Access Journals (Sweden)

    K. Nethra

    2014-01-01

    Full Text Available The World Wide Web is a rich source of voluminous and heterogeneous information which continues to expand in size and complexity. Many Web pages are unstructured or semi-structured, so they contain noisy information such as advertisements, links, headers, and footers. This noise makes extraction of Web content tedious. Many techniques proposed for Web content extraction are based on automatic extraction or hand-crafted rule generation. Automatic extraction is done through Web page segmentation, but it increases the time complexity. Hand-crafted rule generation uses string manipulation functions for rule generation, but writing those rules is very difficult. A hybrid approach is proposed to extract the main content from Web pages. An HTML Web page is converted to a DOM tree, features are extracted, and rules are generated from the extracted features. Decision tree classification and Naïve Bayes classification are the machine learning methods used for rule generation. Using the rules, the noisy part of the Web page is discarded and the informative content is extracted. The performance of both decision tree classification and Naïve Bayes classification is measured with metrics such as precision, recall, F-measure, and accuracy.
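The DOM-and-rules idea described above can be illustrated with a minimal sketch (not the authors' implementation): split the page into blocks and apply a single hand-crafted rule, link density, to separate boilerplate from main content. The tag list and the 0.5 threshold are assumptions for illustration.

```python
from html.parser import HTMLParser

class BlockExtractor(HTMLParser):
    """Splits a page into text blocks, tracking how much text sits inside links."""
    def __init__(self):
        super().__init__()
        self.blocks = []        # list of (text, chars_inside_links)
        self._buf = []
        self._link_chars = 0
        self._link_depth = 0

    def handle_starttag(self, tag, attrs):
        if tag in ("p", "div", "article", "section"):
            self.flush()
        if tag == "a":
            self._link_depth += 1

    def handle_endtag(self, tag):
        if tag == "a" and self._link_depth:
            self._link_depth -= 1
        if tag in ("p", "div", "article", "section"):
            self.flush()

    def handle_data(self, data):
        self._buf.append(data)
        if self._link_depth:
            self._link_chars += len(data)

    def flush(self):
        text = "".join(self._buf).strip()
        if text:
            self.blocks.append((text, self._link_chars))
        self._buf, self._link_chars = [], 0

def main_content(html, max_link_density=0.5):
    parser = BlockExtractor()
    parser.feed(html)
    parser.flush()
    # Rule: discard blocks where most characters belong to links (menus, ads).
    return [t for t, lc in parser.blocks
            if lc / max(len(t), 1) <= max_link_density]

page = ('<div><a href="/">Home</a> <a href="/ads">Ads</a></div>'
        '<p>PageRank assigns importance scores to web pages.</p>')
print(main_content(page))  # only the informative paragraph survives
```

In a full system, the paper replaces this one fixed rule with rules learned by decision tree or Naïve Bayes classifiers over many DOM features.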

  8. A Runtime System for Interactive Web Services

    DEFF Research Database (Denmark)

    Brabrand, Claus; Møller, Anders; Sandholm, Anders

    1999-01-01

    Interactive web services are increasingly replacing traditional static web pages. Producing web services seems to require a tremendous amount of laborious low-level coding due to the primitive nature of CGI programming. We present ideas for an improved runtime system for interactive web services... built on top of CGI, running on virtually every combination of browser and HTTP/CGI server. The runtime system has been implemented and used extensively in , a tool for producing interactive web services....

  9. An Improved Approach to the PageRank Problems

    Directory of Open Access Journals (Sweden)

    Yue Xie

    2013-01-01

    Full Text Available We introduce a partition of the web pages particularly suited to the PageRank problems in which the web link graph has a nested block structure. Based on this partition of the web pages into dangling nodes, common nodes, and general nodes, the hyperlink matrix can be reordered into a simpler block structure. Then, based on the parallel computation method, we propose an algorithm for the PageRank problems. In this algorithm, the dimension of the linear system becomes smaller, and the vector for general nodes in each block can be calculated separately in every iteration. Numerical experiments show that this approach speeds up the computation of PageRank.
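For reference, the baseline computation that such partition schemes accelerate is the PageRank power iteration. A minimal sketch (the toy graph, damping factor, and dangling-node handling are illustrative choices, not taken from the paper):

```python
def pagerank(links, d=0.85, tol=1e-10):
    """Power-iteration PageRank over an adjacency dict {node: [outlinks]}."""
    nodes = list(links)
    n = len(nodes)
    rank = {u: 1.0 / n for u in nodes}
    while True:
        new = {u: (1 - d) / n for u in nodes}
        for u in nodes:
            out = links[u]
            if out:                        # distribute rank along outlinks
                share = d * rank[u] / len(out)
                for v in out:
                    new[v] += share
            else:                          # dangling node: spread uniformly
                for v in nodes:
                    new[v] += d * rank[u] / n
        if sum(abs(new[u] - rank[u]) for u in nodes) < tol:
            return new
        rank = new

web = {"A": ["B", "C"], "B": ["C"], "C": ["A"]}
r = pagerank(web)
print(max(r, key=r.get))  # C accumulates the most rank in this graph
```

The paper's contribution is to reorder the underlying matrix into blocks so that parts of this iteration can run independently and in parallel.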

  10. Geographic Information Systems and Web Page Development

    Science.gov (United States)

    Reynolds, Justin

    2004-01-01

    The Facilities Engineering and Architectural Branch is responsible for the design and maintenance of buildings, laboratories, and civil structures. In order to improve efficiency and quality, the FEAB has dedicated itself to establishing a data infrastructure based on Geographic Information Systems, GIS. The value of GIS was explained in an article dating back to 1980 entitled "Need for a Multipurpose Cadastre," which stated, "There is a critical need for a better land-information system in the United States to improve land-conveyance procedures, furnish a basis for equitable taxation, and provide much-needed information for resource management and environmental planning." Scientists and engineers both point to GIS as the solution. What is GIS? According to most textbooks, a Geographic Information System is a class of software that stores, manages, and analyzes mappable features on, above, or below the surface of the earth. GIS software is basically database management software applied to spatial data and information. Simply put, Geographic Information Systems manage, analyze, chart, graph, and map spatial information. GIS can be broken down into two main categories, urban GIS and natural resource GIS. Further still, natural resource GIS can be broken down into six sub-categories: agriculture, forestry, wildlife, catchment management, archaeology, and geology/mining. Agriculture GIS has several applications, such as agricultural capability analysis, land conservation, market analysis, or whole-farm planning. Forestry GIS can be used for timber assessment and management, harvest scheduling and planning, environmental impact assessment, and pest management. GIS, when used in wildlife applications, enables the user to assess and manage habitats, identify and track endangered and rare species, and monitor impact assessment.

  11. WEB STRUCTURE MINING USING PAGERANK, IMPROVED PAGERANK – AN OVERVIEW

    Directory of Open Access Journals (Sweden)

    V. Lakshmi Praba

    2011-03-01

    Full Text Available Web Mining is the extraction of interesting and potentially useful patterns and information from the Web. It includes Web documents, hyperlinks between documents, and usage logs of web sites. The significant tasks for web mining can be listed as Information Retrieval, Information Selection/Extraction, Generalization, and Analysis. Web information retrieval tools consider only the text on pages and ignore information in the links. The goal of Web structure mining is to explore the structural summary of the Web; focusing on link information, it is an important aspect of web data. This paper presents an overview of PageRank and Improved PageRank and their working functionality in web structure mining.

  12. Head First Web Design

    CERN Document Server

    Watrall, Ethan

    2008-01-01

    Want to know how to make your pages look beautiful, communicate your message effectively, guide visitors through your website with ease, and get everything approved by the accessibility and usability police at the same time? Head First Web Design is your ticket to mastering all of these complex topics, and understanding what's really going on in the world of web design. Whether you're building a personal blog or a corporate website, there's a lot more to web design than div's and CSS selectors, but what do you really need to know? With this book, you'll learn the secrets of designing effecti

  13. Creating advanced web map for mountain biking

    OpenAIRE

    Pasarić, Darko

    2013-01-01

    The diploma thesis presents the creation of a web map designed for mountain bikers. The web map is based on Google's application Google Maps; this means that we use Google's maps to show the route and its markers. The thesis mostly describes web programming and the interface Google Maps JavaScript API v3 that enables us to integrate the interactive map into a web page. It also describes the markup language for web pages (HTML). In the thesis we discuss chapters such as HTML, Google Maps, the b...

  14. Comparison of Web Development Technologies

    OpenAIRE

    Ramesh Nagilla, Ramesh

    2012-01-01

    Web applications play an important role in many business activities in the modern world. The Web has become a platform for companies to fulfil the needs of their business. In this situation, Web technologies that are useful in developing these kinds of applications become an important aspect. Many Web technologies like Hypertext Preprocessor (PHP), Active Server Pages (ASP.NET), Cold Fusion Markup Language (CFML), Java, Python, and Ruby on Rails are available in the market. All these techn...

  15. Hiding in Plain Sight: The Anatomy of Malicious Facebook Pages

    OpenAIRE

    Dewan, Prateek; Kumaraguru, Ponnurangam

    2015-01-01

    Facebook is the world's largest Online Social Network, having more than 1 billion users. Like most other social networks, Facebook is home to various categories of hostile entities who abuse the platform by posting malicious content. In this paper, we identify and characterize Facebook pages that engage in spreading URLs pointing to malicious domains. We used the Web of Trust API to determine domain reputations of URLs published by pages, and identified 627 pages publishing untrustworthy info...

  16. MedlinePlus Connect: Web Service

    Science.gov (United States)

    MedlinePlus Connect: Web Service (https://medlineplus.gov/connect/service.html). ... if you implement MedlinePlus Connect by contacting us. Web Service Overview: The parameters for the Web service requests ...

  17. Using centrality to rank web snippets

    NARCIS (Netherlands)

    Jijkoun, V.; de Rijke, M.; Peters, C.; Jijkoun, V.; Mandl, T.; Müller, H.; Oard, D.W.; Peñas, A.; Petras, V.; Santos, D.

    2008-01-01

    We describe our participation in the WebCLEF 2007 task, targeted at snippet retrieval from web data. Our system ranks snippets based on a simple similarity-based centrality, inspired by the web page ranking algorithms. We experimented with retrieval units (sentences and paragraphs) and with the

  18. Modeling clicks beyond the first result page

    NARCIS (Netherlands)

    Chuklin, A.; Serdyukov, P.; de Rijke, M.

    2013-01-01

    Most modern web search engines yield a list of documents of a fixed length (usually 10) in response to a user query. The next ten search results are usually available in one click. These documents either replace the current result page or are appended to the end. Hence, in order to examine more

  19. Designing usable web forms - Empirical evaluation of web form improvement guidelines

    DEFF Research Database (Denmark)

    Seckler, Mirjam; Heinz, Silvia; Bargas-Avila, Javier A.

    2014-01-01

    This study reports a controlled eye tracking experiment (N = 65) that shows the combined effectiveness of 20 guidelines to improve interactive online forms when applied to forms found on real company websites. Results indicate that improved web forms lead to faster completion times, fewer form submission trials, and fewer eye movements. Data from subjective questionnaires and interviews further show increased user satisfaction. Overall, our findings highlight the importance for web designers to improve their web forms using UX guidelines....

  20. Web Browser Security Update Effectiveness

    Science.gov (United States)

    Duebendorfer, Thomas; Frei, Stefan

    We analyze the effectiveness of different Web browser update mechanisms on various operating systems, from Google Chrome's silent update mechanism to Opera's update requiring a full re-installation. We use anonymized logs from Google's worldwide distributed Web servers. An analysis of the logged HTTP user-agent strings that Web browsers report when requesting any Web page is used to measure the daily browser version shares in active use. To the best of our knowledge, this is the first global-scale measurement of Web browser update effectiveness comparing four different Web browser update strategies including Google Chrome. Our measurements prove that silent updates and little dependency on the underlying operating system are most effective at getting users of Web browsers to surf the Web with the latest browser version.
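The measurement idea, extracting browser versions from logged user-agent strings and computing version shares, can be sketched as follows. The log lines below are hypothetical; the actual study used anonymized Google server logs and covered more browsers.

```python
import re
from collections import Counter

# Hypothetical log lines containing user-agent strings.
logs = [
    "GET / HTTP/1.1 Mozilla/5.0 ... Chrome/4.0.249.89 Safari/532.5",
    "GET / HTTP/1.1 Mozilla/5.0 ... Chrome/4.0.249.89 Safari/532.5",
    "GET / HTTP/1.1 Mozilla/5.0 ... Chrome/3.0.195.38 Safari/532.0",
    "GET / HTTP/1.1 Opera/9.80 (Windows NT 6.0) Version/10.10",
]

def version_shares(lines):
    """Share of each browser version seen in the user-agent strings."""
    counts = Counter()
    for line in lines:
        m = re.search(r"(Chrome|Opera)[/ ]([\d.]+)", line)
        if m:
            counts[f"{m.group(1)} {m.group(2)}"] += 1
    total = sum(counts.values())
    return {k: v / total for k, v in counts.items()}

print(version_shares(logs))
```

Computed daily over long log windows, such shares reveal how quickly each browser's user base migrates to the latest version after an update ships.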

  1. So You Want To Write a School Home Page But Don't Know Where To Begin.

    Science.gov (United States)

    Buchanan, Madeline

    1997-01-01

    Discusses aspects of school home pages: school policy, parental involvement and concerns, Internet access and commercial providers, HTML programming, links to other Web pages, graphics, and the importance of proofreading. (PEN)

  2. Caching Strategies for Data-Intensive Web Sites

    OpenAIRE

    Florescu, Daniela; Issarny, Valérie; Valduriez, Patrick; Yagoub, Khaled

    2000-01-01

    Projet CARAVEL; A data-intensive Web site is a Web server that serves large numbers of pages whose content is dynamically extracted from a database. In this context, returning a Web page may require costly interaction with the database system (for connection and querying), thereby greatly increasing the response time. In this paper, we address this performance problem. Our approach relies on the declarative specification of the Web site. We propose a customizable cache system architecture and i...

  3. Universal emergence of PageRank

    Energy Technology Data Exchange (ETDEWEB)

    Frahm, K M; Georgeot, B; Shepelyansky, D L, E-mail: frahm@irsamc.ups-tlse.fr, E-mail: georgeot@irsamc.ups-tlse.fr, E-mail: dima@irsamc.ups-tlse.fr [Laboratoire de Physique Theorique du CNRS, IRSAMC, Universite de Toulouse, UPS, 31062 Toulouse (France)

    2011-11-18

    The PageRank algorithm enables us to rank the nodes of a network through a specific eigenvector of the Google matrix, using a damping parameter α ∈ ]0, 1[. Using extensive numerical simulations of large web networks, with a special accent on British university networks, we determine numerically and analytically the universal features of the PageRank vector at its emergence when α → 1. The whole network can be divided into a core part and a group of invariant subspaces. For α → 1, PageRank converges to a universal power-law distribution on the invariant subspaces, whose size distribution also follows a universal power law. The convergence of PageRank at α → 1 is controlled by eigenvalues of the core part of the Google matrix, which are extremely close to unity, leading to large relaxation times as, for example, in spin glasses. (paper)

  4. Introduction pages

    Directory of Open Access Journals (Sweden)

    Radu E. Sestras

    2015-09-01

    Full Text Available Introduction pages and table of contents. Research articles:
    - Insulin Requirements in Relation to Insulin Pump Indications in Type 1 Diabetes (Gabriela GHIMPEŢEANU, Silvia Ş. IANCU, Gabriela ROMAN, Anca M. ALIONESCU), pp. 259-263
    - Comparative Antibacterial Efficacy of Vitellaria paradoxa (Shea Butter Tree) Extracts Against Some Clinical Bacterial Isolates (Kamoldeen Abiodun AJIJOLAKEWU, Fola Jose AWARUN), pp. 264-268
    - A Murine Effort Model for Studying the Influence of Trichinella on Muscular Activity of Mice (Ionut MARIAN, Călin Mircea GHERMAN, Andrei Daniel MIHALCA), pp. 269-271
    - Prevalence and Antibiogram of Generic Extended-Spectrum β-Lactam-Resistant Enterobacteria in Healthy Pigs (Ifeoma Chinyere UGWU, Madubuike Umunna ANYANWU, Chidozie Clifford UGWU, Ogbonna Wilfred UGWUANYI), pp. 272-280
    - Index of Relative Importance of the Dietary Proportions of Sloth Bear (Melursus ursinus) in a Semi-Arid Region (Tana P. MEWADA), pp. 281-288
    - Bioaccumulation Potentials of Momordica charantia L. Medicinal Plant Grown in Lead-Polluted Soil under Organic Fertilizer Amendment (Ojo Michael OSENI, Omotola Esther DADA, Adekunle Ajayi ADELUSI), pp. 289-294
    - Induced Chitinase and Chitosanase Activities in Turmeric Plants by Application of β-D-Glucan Nanoparticles (Sathiyanarayanan ANUSUYA, Muthukrishnan SATHIYABAMA), pp. 295-298
    - Present or Absent? About a Threatened Fern, Asplenium adulterinum Milde, in South-Eastern Carpathians (Romania) (Attila BARTÓK, Irina IRIMIA), pp. 299-307
    - Comparative Root and Stem Anatomy of Four Rare Onobrychis Mill. (Fabaceae) Taxa Endemic in Turkey (Mehmet TEKİN, Gülden YILMAZ), pp. 308-312
    - Propagation of Threatened Nepenthes khasiana: Methods and Precautions (Jibankumar S. KHURAIJAM, Rup K. ROY), pp. 313-315
    - Alleviate Seed Ageing Effects in Silybum marianum by Application of Hormone Seed Priming (Seyed Ata SIADAT, Seyed Amir MOOSAVI, Mehran SHARAFIZADEH), pp. 316-321
    - The Effect of Halopriming and Salicylic Acid on the Germination of Fenugreek (Trigonella foenum-graecum) under Different Cadmium...

  5. OneWeb: web content adaptation platform based on W3C Mobile Web Initiative guidelines

    Directory of Open Access Journals (Sweden)

    Francisco O. Martínez P.

    2011-01-01

    Full Text Available Restrictions regarding navigability and user-friendliness are the main challenges the Mobile Web faces to be accepted worldwide. W3C has recently developed the Mobile Web Initiative (MWI), a set of directives for the suitable design and presentation of mobile Web interfaces. This article presents the main features and functional modules of OneWeb, an MWI-based Web content adaptation platform developed by the Mobile Devices Applications Development Interest Group's (W@PColombia) research activities, forming part of the Universidad del Cauca's Telematics Engineering Group. Some performance measurement results and a comparison with other Web content adaptation platforms are presented. Tests have shown suitable response times for Mobile Web environments; MWI guidelines were applied to over twenty Web pages selected for testing purposes.

  6. Decomposition of the Google PageRank and Optimal Linking Strategy

    NARCIS (Netherlands)

    Avrachenkov, Konstatin; Litvak, Nelli

    We provide the analysis of the Google PageRank from the perspective of the Markov Chain Theory. First we study the Google PageRank for a Web that can be decomposed into several connected components which do not have any links to each other. We show that in order to determine the Google PageRank for

  7. Mizan Law Review: Submissions

    African Journals Online (AJOL)

    Author Guidelines. SUBMISSION GUIDELINES The following submissions are acceptable for publication upon approval by the Editorial Board. Publication of an article further involves anonymous peer review by two External Assessors. Articles: Research articles that identify, examine, explore and analyze legal and related ...

  8. Digital plagiarism - The web giveth and the web shall taketh

    Science.gov (United States)

    Presti, David E

    2000-01-01

    Publishing students' and researchers' papers on the World Wide Web (WWW) facilitates the sharing of information within and between academic communities. However, the ease of copying and transporting digital information leaves these authors' ideas open to plagiarism. Using tools such as the Plagiarism.org database, which compares submissions to reports and papers available on the Internet, could discover instances of plagiarism, revolutionize the peer review process, and raise the quality of published research everywhere. PMID:11720925

  9. Using Power-Law Degree Distribution to Accelerate PageRank

    Directory of Open Access Journals (Sweden)

    Zhaoyan Jin

    2012-12-01

    Full Text Available The PageRank vector of a network is very important, for it can reflect the importance of a Web page in the World Wide Web, or of a people in a social network. However, with the growth of the World Wide Web and social networks, it needs more and more time to compute the PageRank vector of a network. In many real-world applications, the degree and PageRank distributions of these complex networks conform to the Power-Law distribution. This paper utilizes the degree distribution of a network to initialize its PageRank vector, and presents a Power-Law degree distribution accelerating algorithm of PageRank computation. Experiments on four real-world datasets show that the proposed algorithm converges more quickly than the original PageRank algorithm.
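The acceleration idea can be sketched as follows: start the power iteration from a degree-proportional vector instead of the uniform one. Both starting vectors converge to the same PageRank fixed point, and the paper's observation is that on power-law networks the degree-based start tends to lie closer to it. The toy graph and tolerance below are illustrative assumptions.

```python
def pagerank(links, init, d=0.85, tol=1e-9):
    """Power iteration from a given start vector; returns (ranks, steps)."""
    nodes = list(links)
    n = len(nodes)
    rank, steps = dict(init), 0
    while True:
        new = {u: (1 - d) / n for u in nodes}
        for u, out in links.items():
            targets = out or nodes       # dangling nodes spread uniformly
            share = d * rank[u] / len(targets)
            for v in targets:
                new[v] += share
        steps += 1
        if sum(abs(new[u] - rank[u]) for u in nodes) < tol:
            return new, steps
        rank = new

web = {"A": ["B"], "B": ["C"], "C": ["A", "B"], "D": ["B"]}
indeg = {u: 0 for u in web}
for out in web.values():
    for v in out:
        indeg[v] += 1
total = sum(indeg.values())

uniform_init = {u: 1 / len(web) for u in web}
degree_init = {u: indeg[u] / total for u in web}  # degree-proportional start

r1, s1 = pagerank(web, uniform_init)
r2, s2 = pagerank(web, degree_init)
print(s1, s2)  # same fixed point, possibly different iteration counts
```

On a four-node toy graph the saving is negligible; the paper's experiments show the benefit on large real-world datasets whose degree and PageRank distributions follow a power law.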

  10. Blueprint of a Cross-Lingual Web Retrieval Collection

    NARCIS (Netherlands)

    Sigurbjörnsson, B.; Kamps, J.; de Rijke, M.; van Zwol, R.

    2005-01-01

    The world wide web is a natural setting for cross-lingual information retrieval; web content is essentially multilingual, and web searchers are often polyglots. Even though English has emerged as the lingua franca of the web, planning for a business trip or holiday usually involves digesting pages

  11. Syskill & Webert: Identifying interesting web sites

    Energy Technology Data Exchange (ETDEWEB)

    Pazzani, M.; Muramatsu, J.; Billsus, D. [Univ. of California, Irvine, CA (United States)

    1996-12-31

    We describe Syskill & Webert, a software agent that learns to rate pages on the World Wide Web (WWW), deciding what pages might interest a user. The user rates explored pages on a three-point scale, and Syskill & Webert learns a user profile by analyzing the information on each page. The user profile can be used in two ways. First, it can be used to suggest which links a user would be interested in exploring. Second, it can be used to construct a LYCOS query to find pages that would interest the user. We compare six different algorithms from machine learning and information retrieval on this task. We find that the naive Bayesian classifier offers several advantages over other learning algorithms on this task. Furthermore, we find that an initial portion of a web page is sufficient for making predictions about its interestingness, substantially reducing the amount of network transmission required to make predictions.
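The naive Bayesian classifier at the heart of this approach can be sketched roughly as follows. The word lists and "hot"/"cold" labels are invented for illustration; the real system extracts its features from the initial portion of each rated page.

```python
import math
from collections import Counter

def train(rated_pages):
    """rated_pages: list of (words, label). Returns word counts and page counts."""
    word_counts, page_counts = {}, Counter()
    for words, label in rated_pages:
        word_counts.setdefault(label, Counter()).update(words)
        page_counts[label] += 1
    return word_counts, page_counts

def classify(words, word_counts, page_counts):
    """Pick the label maximizing log P(label) + sum of log P(word | label)."""
    vocab = {w for c in word_counts.values() for w in c}
    total_pages = sum(page_counts.values())
    best, best_lp = None, -math.inf
    for label, counts in word_counts.items():
        lp = math.log(page_counts[label] / total_pages)
        denom = sum(counts.values()) + len(vocab)  # Laplace smoothing
        for w in words:
            lp += math.log((counts[w] + 1) / denom)
        if lp > best_lp:
            best, best_lp = label, lp
    return best

pages = [(["machine", "learning", "agent"], "hot"),
         (["user", "profile", "learning"], "hot"),
         (["cooking", "recipe"], "cold"),
         (["sports", "scores"], "cold")]
wc, pc = train(pages)
print(classify(["learning", "agent"], wc, pc))  # → hot
```

The profile learned this way can then rank unseen links or be turned into a keyword query, as the abstract describes for LYCOS.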

  12. WebSelF: A Web Scraping Framework

    DEFF Research Database (Denmark)

    Thomsen, Jakob; Ernst, Erik; Brabrand, Claus

    2012-01-01

    We present WebSelF, a framework for web scraping which models the process of web scraping and decomposes it into four conceptually independent, reusable, and composable constituents. We have validated our framework through a fully parameterized implementation that is flexible enough to capture previous work on web scraping. We have experimentally evaluated our framework and implementation in an experiment that evaluated several qualitatively different web scraping constituents (including previous work and combinations hereof) on about 11,000 HTML pages on daily versions of 17 web sites over a period of more than one year. Our framework solves three concrete problems with current web scraping, and our experimental results indicate that composition of previous and our new techniques achieves a higher degree of accuracy, precision and specificity than existing techniques alone.

  13. WEB CONTENT EXTRACTION USING HYBRID APPROACH

    OpenAIRE

    K. Nethra; Anitha, J.; G. Thilagavathi

    2014-01-01

    The World Wide Web has rich source of voluminous and heterogeneous information which continues to expand in size and complexity. Many Web pages are unstructured and semi-structured, so it consists of noisy information like advertisement, links, headers, footers etc. This noisy information makes extraction of Web content tedious. Many techniques that were proposed for Web content extraction are based on automatic extraction and hand crafted rule generation. Automatic extraction technique is do...

  14. Augmenting the Web through Open Hypermedia

    DEFF Research Database (Denmark)

    Bouvin, N.O.

    2003-01-01

    Based on an overview of Web augmentation and detailing the three basic approaches to extend the hypermedia functionality of the Web, the author presents a general open hypermedia framework (the Arakne framework) to augment the Web. The aim is to provide users with the ability to link, annotate......, and otherwise structure Web pages, as they see fit. The paper further discusses the possibilities of the concept through the description of various experiments performed with an implementation of the framework, the Arakne Environment...

  15. Study on online community user motif using web usage mining

    Science.gov (United States)

    Alphy, Meera; Sharma, Ajay

    2016-04-01

    Web usage mining is an application of data mining used to extract useful information from online communities. The World Wide Web contains at least 4.73 billion pages according to the Indexed Web, and at least 228.52 million pages according to the Dutch Indexed Web, as of Thursday, 6 August 2015. It is difficult to get the needed data from these billions of web pages, and here lies the importance of web usage mining. Personalizing the search engine helps web users identify the most used data in an easy way: it reduces time consumption and supports automatic site search and automatic restoration of useful sites. This study surveys techniques used in pattern discovery and analysis in web usage mining from 1996 to 2015. Analyzing user motifs helps in the improvement of business, e-commerce, personalisation and websites.

  16. Web Enabled DROLS Verity TopicSets

    National Research Council Canada - National Science Library

    Tong, Richard

    1999-01-01

    The focus of this effort has been the design and development of automatically generated TopicSets and HTML pages that provide the basis of the required search and browsing capability for DTIC's Web Enabled DROLS System...

  17. Talking physics in the social web

    CERN Multimedia

    Griffiths, Martin

    2007-01-01

    "From "blogs" to "wikis", the Web is now more than a mere repository of information. Martin Griffiths investigates how this new interactivity is affecting the way physicists communicate and access information." (5 pages)

  18. Detection and classification of Web robots with honeypots

    OpenAIRE

    McKenna, Sean F.

    2016-01-01

    Approved for public release; distribution is unlimited Web robots are automated programs that systematically browse the Web, collecting information. Although Web robots are valuable tools for indexing content on the Web, they can also be malicious through phishing, spamming, or performing targeted attacks. In this thesis, we study an approach to Web-robot detection that uses honeypots in the form of hidden resources on Web pages. Our detection model is based upon the observation that malic...

  19. Intelligent Agent Based Semantic Web in Cloud Computing Environment

    OpenAIRE

    Mukhopadhyay, Debajyoti; Sharma, Manoj; Joshi, Gajanan; Pagare, Trupti; Palwe, Adarsha

    2013-01-01

    Considering today's web scenario, there is a need for effective and meaningful search over the web, which is provided by the Semantic Web. Existing search engines are keyword based. They are vulnerable in answering intelligent queries from the user due to the dependence of their results on the information available in web pages. Semantic search engines provide efficient and relevant results, as the semantic web is an extension of the current web in which information is given well-defined meaning....

  20. 77 FR 40371 - Agency Information Collection Activities: Submission for Review; Information Collection Extension...

    Science.gov (United States)

    2012-07-09

    ... Security (DHS) invites the general public to comment on the data collection form for the DHS Science... of Practice Web site found at . The user will complete the form online and submit it through the Web... SECURITY Agency Information Collection Activities: Submission for Review; Information Collection Extension...

  1. 78 FR 66036 - Agency Information Collection Activities: Submission for Review; Information Collection Extension...

    Science.gov (United States)

    2013-11-04

    ... of Practice Web site found at . The user will complete the form online and submit it through the Web... SECURITY Agency Information Collection Activities: Submission for Review; Information Collection Extension... Directorate, DHS. ACTION: 30-day Notice and request for comment. SUMMARY: The Department of Homeland Security...

  2. Adapting Web Information to Disabled and Elderly Users.

    Science.gov (United States)

    Kobsa, Alfred

    This paper describes work aimed at catering the content of World Wide Web (WWW) pages to the needs of different users, including elderly people and users with vision and motor impairments. An overview is provided of the AVANTI system, a European WWW-based tourist information system that adapts Web pages to each user's individual needs before…

  3. Upgrade of CERN OP Webtools IRRAD Page

    CERN Document Server

    Vik, Magnus Bjerke

    2017-01-01

    CERN Beams Department maintains a website with various tools for the Operations Group, one of them being specific to the Proton Irradiation Facility (IRRAD). The IRRAD team uses the tool to follow up and optimize the operation of the facility. The original version of the tool was difficult to maintain, and adding new features to the page was challenging. This summer student project therefore aimed to upgrade the web page by rewriting it with maintainability and flexibility in mind. The new application uses a server-client architecture with a REST API on the back end, which is used by the front end to request data for visualization. PHP is used on the back end to implement the APIs, and Swagger is used to document them. Vue, Semantic UI, Webpack, Node and ECMAScript 5 are used on the front end to visualize and administrate the data. The result is a new IRRAD operations web application with extended functionality, improved structure and an improved user interface. It includes a new Status Panel page th...

  4. PAGING IN COMMUNICATIONS

    DEFF Research Database (Denmark)

    2016-01-01

    A method and an apparatus are disclosed for managing paging in a communications system. The method may include, based on a received set of physical resources, determining, in a terminal apparatus, an original paging pattern defining potential time instants for paging, wherein the potential time...

  5. EVALUATION OF WEB SEARCHING METHOD USING A NOVEL WPRR ALGORITHM FOR TWO DIFFERENT CASE STUDIES

    Directory of Open Access Journals (Sweden)

    V. Lakshmi Praba

    2012-04-01

    Full Text Available The World-Wide Web provides every internet citizen with access to an abundance of information, but it becomes increasingly difficult to identify the relevant pieces of information. Research in web mining tries to address this problem by applying techniques from data mining and machine learning to web data and documents. Web content mining and web structure mining both play important roles in identifying relevant web pages. The relevancy of a web page denotes how well a retrieved web page, or set of web pages, meets the information need of the user. PageRank, Weighted PageRank and Hypertext Induced Topic Selection (HITS) are existing algorithms that consider only web structure mining. The Vector Space Model (VSM), Cover Density Ranking (CDR), Okapi similarity measurement (Okapi) and the Three-Level Scoring method (TLS) are existing relevancy scoring methods that consider only web content mining. In this paper, we propose a new algorithm, Weighted Page with Relevant Rank (WPRR), a blend of both web content mining and web structure mining, and demonstrate the relevancy of the page with respect to a given query for two different case scenarios. It is shown that WPRR's performance is better than that of the existing algorithms.
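
    The general idea of blending a content-mining score with a structure-mining score can be sketched as follows. This is a hedged illustration only: the actual WPRR weighting is defined in the paper, and the mixing weight `alpha` and the toy per-page scores below are assumptions.

    ```python
    def blended_score(content_score, structure_score, alpha=0.6):
        # alpha is an assumed mixing weight, not taken from the paper.
        return alpha * content_score + (1 - alpha) * structure_score

    # Toy pages: (content relevance for the query, link-structure rank).
    pages = {
        "a.html": (0.9, 0.2),
        "b.html": (0.4, 0.8),
        "c.html": (0.7, 0.4),
    }
    ranking = sorted(pages, key=lambda p: blended_score(*pages[p]), reverse=True)
    ```

    With these toy scores, a page that is highly relevant in content can outrank one with a stronger link structure, which is the motivation for combining the two signals.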

  6. Developing Large Web Applications

    CERN Document Server

    Loudon, Kyle

    2010-01-01

    How do you create a mission-critical site that provides exceptional performance while remaining flexible, adaptable, and reliable 24/7? Written by the manager of a UI group at Yahoo!, Developing Large Web Applications offers practical steps for building rock-solid applications that remain effective even as you add features, functions, and users. You'll learn how to develop large web applications with the extreme precision required for other types of software. Avoid common coding and maintenance headaches as small websites add more pages, more code, and more programmersGet comprehensive soluti

  7. Interleaving Semantic Web Reasoning and Service Discovery to Enforce Context-Sensitive Security and Privacy Policies

    Science.gov (United States)

    2005-07-01

    Services Symposium, AAAI 2004 Spring Symposium Series, Stanford, California. [25] A P3P Preference Exchange Language 1.0 (APPEL1.0), W3C Working Draft, 15 April 2002, http://www.w3.org/TR/P3P-preferences/ [26] OWL-S: Semantic Markup for Web Services, W3C Member Submission, November 2004

  8. Web site for GAIN

    OpenAIRE

    Brænden, Stig; Gjerde, Stian; Hansen, Terje, TAR

    2001-01-01

    The project started with an inquiry from GAIN, the Graphic Arts Intelligence Network, on September 26, 2000. GAIN currently has a website that is static and is not functioning in a satisfying way. The desire is therefore to establish a new, dynamic web site with the possibility for GAIN members to update the page via a browser interface and maintain their own profiles. In addition to this, they would like a brand new and more functional design. GAIN also wants to e...

  9. MANUSCRIPT SUBMISSION FORM Upon submission of a ...

    Indian Academy of Sciences (India)

    IAS Admin

    The submission of a paper by a set of authors represents that it reports the results of their original research not previously published; that it is not under consideration for publication elsewhere; and that, if accepted for the journal, it will not be published elsewhere. ii) The list of authors includes all those who have contributed in.

  10. Happy birthday WWW: the web is now old enough to drive

    CERN Multimedia

    Gilbertson, Scott

    2007-01-01

    "The World Wide Web can now drive. Sixteen years ago yesterday, in a short post to the alt.hypertext newsgroup, Tim Berners-Lee revealed the first public web pages summarizing his World Wide Web project." (1/4 page)

  11. Automatic page layout using genetic algorithms for electronic albuming

    Science.gov (United States)

    Geigel, Joe; Loui, Alexander C. P.

    2000-12-01

    In this paper, we describe a flexible system for automatic page layout that makes use of genetic algorithms for albuming applications. The system is divided into two modules: a page creator module, which is responsible for distributing images amongst various album pages, and an image placement module, which positions images on individual pages. Final page layouts are specified in a textual form using XML for printing or viewing over the Internet. The system makes use of genetic algorithms, a class of search and optimization algorithms based on the concepts of biological evolution, for generating solutions whose fitness is based on graphic design preferences supplied by the user. The genetic page layout algorithm has been incorporated into a web-based prototype system for interactive page layout over the Internet. The prototype system is built using a client-server architecture and is implemented in Java. The system described in this paper has demonstrated the feasibility of using genetic algorithms for automated page layout in albuming and web-based imaging applications. We believe that the system adequately proves the validity of the concept, providing creative layouts in a reasonable number of iterations. By optimizing the layout parameters of the fitness function, we hope to further improve the quality of the final layout in terms of user preference and computation speed.
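
    The genetic-algorithm idea described above (evolve candidate layouts, score them with a fitness function, keep and mutate the best) can be sketched minimally. The fitness below is an illustrative stand-in that penalizes size mismatches between images sharing a page, not the paper's graphic-design-preference fitness; the image sizes and parameters are assumptions.

    ```python
    import random

    random.seed(0)

    IMAGES = [3, 1, 4, 1, 5, 9, 2, 6]   # toy image "sizes"
    PAGE_SIZE = 2                        # images per album page

    def fitness(order):
        # Penalize size differences between images placed on the same page.
        penalty = 0
        for i in range(0, len(order), PAGE_SIZE):
            page = [IMAGES[j] for j in order[i:i + PAGE_SIZE]]
            penalty += max(page) - min(page)
        return -penalty                  # higher is better

    def mutate(order):
        # Swap two positions to produce a new candidate ordering.
        a, b = random.sample(range(len(order)), 2)
        child = order[:]
        child[a], child[b] = child[b], child[a]
        return child

    # Evolve: keep the fittest half, refill with mutants of the survivors.
    population = [random.sample(range(len(IMAGES)), len(IMAGES)) for _ in range(20)]
    for _ in range(200):
        population.sort(key=fitness, reverse=True)
        population = population[:10] + [mutate(random.choice(population[:10]))
                                        for _ in range(10)]
    best = max(population, key=fitness)
    ```

    A real albuming system would use crossover as well as mutation and a fitness built from design preferences, but the select-and-vary loop is the same.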

  12. Importance of intrinsic and non-network contribution in PageRank centrality and its effect on PageRank localization

    CERN Document Server

    Deyasi, Krishanu

    2016-01-01

    PageRank centrality is used by Google for ranking web pages to present search results for a user query. Here, we have shown that the PageRank value of a vertex also depends on its intrinsic, non-network contribution. If the intrinsic, non-network contributions of the vertices are proportional to their degrees, or zero, then their PageRank centralities become proportional to their degrees. Some simulations and empirical data are used to support our study. In addition, we have shown that the localization of PageRank centrality depends upon the same intrinsic, non-network contribution.

  13. 8th Chinese Conference on The Semantic Web and Web Science

    CERN Document Server

    Du, Jianfeng; Wang, Haofen; Wang, Peng; Ji, Donghong; Pan, Jeff Z; CSWS 2014

    2014-01-01

    This book constitutes the thoroughly refereed papers of the 8th Chinese Conference on The Semantic Web and Web Science, CSWS 2014, held in Wuhan, China, in August 2014. The 22 research papers presented were carefully reviewed and selected from 61 submissions. The papers are organized in topical sections such as ontology reasoning and learning; semantic data generation and management; and semantic technology and applications.

  14. A Framework for Consistent, Replicated Web Objects

    NARCIS (Netherlands)

    Kermarrec, A.-M.; Kuz, I.; Steen, M. van; Tanenbaum, A.S.

    1998-01-01

    Despite the extensive use of caching techniques, the Web is overloaded. While the caching techniques currently used help some, it would be better to use different caching and replication strategies for different Web pages, depending on their characteristics. We propose a framework in which such

  15. Synchronizing Web Documents with Style

    NARCIS (Netherlands)

    R.L. Guimarães (Rodrigo); D.C.A. Bulterman (Dick); P.S. Cesar Garcia (Pablo Santiago); A.J. Jansen (Jack)

    2014-01-01

    htmlabstractIn this paper we report on our efforts to define a set of document extensions to Cascading Style Sheets (CSS) that allow for structured timing and synchronization of elements within a Web page. Our work considers the scenario in which the temporal structure can be decoupled from the

  16. Scientists work on nextgen web

    CERN Multimedia

    Bagla, Pallava

    2007-01-01

    "Scientists at the European Organisation for Nuclear Research or CERN are busy mastering the nextgen web. Very soon, the worldwide web as it is called will peak and scientists are already working on the replacement called GRID computing." (1/2 page)

  17. Full page insight

    DEFF Research Database (Denmark)

    Cortsen, Rikke Platz

    2014-01-01

    Alan Moore and his collaborating artists often manipulate time and space by drawing upon the formal elements of comics and making alternative constellations. This article looks at an element that is used frequently in comics of all kinds – the full page – and discusses how it helps shape spatio-temporality…, something that it shares with the full page in comics. Through an analysis of several full pages from Moore titles like Swamp Thing, From Hell, Watchmen and Promethea, it is made clear why the full page provides an apt vehicle for an apocalypse in comics.

  18. Users’ recognition in web using web mining techniques

    Directory of Open Access Journals (Sweden)

    Hamed Ghazanfaripoor

    2013-06-01

    Full Text Available The rapid growth of the web and the lack of structure or an integrated schema create various issues for users accessing information. All user accesses to web information are saved in the related server log files, and these files serve as a resource for finding patterns in user behaviour. Web mining is a subset of data mining and means mining the related data from the WWW; based on the part of the data that is mined, it is categorized into three parts: web content mining, web structure mining and web usage mining. It seems necessary to have a technique capable of learning users' interests and, based on those interests, automatically filtering out unrelated content or offering related information to the user in a reasonable amount of time. Web usage mining builds a profile of users in order to recognize them, and it is directly related to web personalization. The primary objective of personalizing systems is to provide what users require without asking them explicitly. Formal models, in turn, make it possible to model the system's behaviour; Petri nets and queueing networks are examples of such models that can analyze user behaviour on the web. The primary objective of this paper is to present a colored Petri net that models user interactions in order to offer users a list of recommended pages. Estimating user behaviour is applied in cases such as suggesting appropriate pages to continue browsing, e-commerce and targeted advertising. The preliminary results indicate that the proposed method improves the accuracy criterion by 8.3% relative to the static method.

  19. A power quality monitoring system based on MATLAB Server Pages

    OpenAIRE

    VURAL, Bülent; KIZIL, Ali; UZUNOĞLU, Mehmet

    2014-01-01

    The power quality (PQ) requirement is one of the most important issues for power companies and their customers. Continuously monitoring the PQ from remote and distributed centers will help to improve the PQ. In this study, a remote power quality monitoring system for low voltage sub-networks is developed using MATLAB Server Pages (MSP). MATLAB Server Pages, which is an open source technical web programming language, combines MATLAB with integrated J2EE specifications. The proposed PQ...

  20. Semantic Advertising for Web 3.0

    Science.gov (United States)

    Thomas, Edward; Pan, Jeff Z.; Taylor, Stuart; Ren, Yuan; Jekjantuk, Nophadol; Zhao, Yuting

    Advertising on the World Wide Web is based around automatically matching web pages with appropriate advertisements, in the form of banner ads, interactive adverts, or text links. Traditionally this has been done by manual classification of pages, or more recently using information retrieval techniques to find the most important keywords from the page, and match these to keywords being used by adverts. In this paper, we propose a new model for online advertising, based around lightweight embedded semantics. This will improve the relevancy of adverts on the World Wide Web and help to kick-start the use of RDFa as a mechanism for adding lightweight semantic attributes to the Web. Furthermore, we propose a system architecture for the proposed new model, based on our scalable ontology reasoning infrastructure TrOWL.

  1. Building a dynamic Web/database interface

    OpenAIRE

    Cornell, Julie.

    1996-01-01

    Computer Science This thesis examines methods for accessing information stored in a relational database from a Web Page. The stateless and connectionless nature of the Web's Hypertext Transport Protocol as well as the open nature of the Internet Protocol pose problems in the areas of database concurrency, security, speed, and performance. We examined the Common Gateway Interface, Server API, Oracle's Web/database architecture, and the Java Database Connectivity interface in terms of p...

  2. A Survey on Semantic Web Search Engine

    OpenAIRE

    G.Sudeepthi; Anuradha, G.; M.Surendra Prasad Babu

    2012-01-01

    The tremendous growth in the volume of data and the terrific growth in the number of web pages mean that traditional search engines are nowadays no longer appropriate or suitable. The search engine is the most important tool for discovering information on the World Wide Web. The semantic search engine was born of the traditional search engine to overcome the above problem. The Semantic Web is an extension of the current web in which information is given well-defined meaning. Semantic web technologies are pla...

  3. Association and Sequence Mining in Web Usage

    Directory of Open Access Journals (Sweden)

    Claudia Elena DINUCA

    2011-06-01

    Full Text Available Web servers worldwide generate a vast amount of information on web users' browsing activities. Several researchers have studied these so-called clickstream or web access log data to better understand and characterize web users. Clickstream data can be enriched with information about the content of visited pages and the origin (e.g., geographic, organizational) of the requests. The goal of this project is to analyse user behaviour by mining enriched web access log data. With the continued growth and proliferation of e-commerce, Web services, and Web-based information systems, the volumes of clickstream and user data collected by Web-based organizations in their daily operations have reached astronomical proportions. This information can be exploited in various ways, such as enhancing the effectiveness of websites or developing directed web marketing campaigns. The discovered patterns are usually represented as collections of pages, objects, or resources that are frequently accessed by groups of users with common needs or interests. The focus of this paper is to provide an overview of how to use frequent pattern techniques for discovering different types of patterns in a Web log database. In this paper we focus on finding associations as a data mining technique to extract potentially useful knowledge from web usage data. I implemented in Java, using the NetBeans IDE, a program for the identification of page associations from sessions. For exemplification, we used the log files from a commercial web site.

  4. Page Styles on steroids

    DEFF Research Database (Denmark)

    Madsen, Lars

    2008-01-01

    Designing a page style has long been a pain for novice users. Some parts are easy; others need strong LATEX knowledge. In this article we will present the memoir way of dealing with page styles, including new code added to the recent version of memoir that will reduce the pain to a mild annoyance...

  5. New WWW Pages

    CERN Multimedia

    Pommes, K

    New WWW pages have been created in order to provide easy access to the many activities and pertaining information of the ATLAS Technical Coordination. The main entry point is available on the ATLAS Collaboration page by clicking the Technical Coordination link which leads to the page shown in the following picture. Each button links to a page listing all tasks of the corresponding activity, the responsible task leaders, schedules, work-packages, and action lists, etc... The "ATLAS Documentation Center" button will present the pop-up window shown in the next figure: Besides linking to the Technical Coordination Activities, this page provides direct access to the tools for Project Progress Tracking (PPT) and Engineering Data Management (EDMS), as well as to the main topics being coordinated by the Technical Coordination.

  6. PageRank model of opinion formation on social networks

    Science.gov (United States)

    Kandiah, Vivek; Shepelyansky, Dima L.

    2012-11-01

    We propose the PageRank model of opinion formation and investigate its rich properties on real directed networks of the Universities of Cambridge and Oxford, LiveJournal, and Twitter. In this model, the opinion formation of linked electors is weighted with their PageRank probability. Such a probability is used by the Google search engine for ranking of web pages. We find that the society elite, corresponding to the top PageRank nodes, can impose its opinion on a significant fraction of the society. However, for a homogeneous distribution of two opinions, there exists a bistability range of opinions which depends on a conformist parameter characterizing the opinion formation. We find that the LiveJournal and Twitter networks have a stronger tendency to a totalitarian opinion formation than the university networks. We also analyze the Sznajd model generalized for scale-free networks with the weighted PageRank vote of electors.

  7. Home Page: The Mode of Transport through the Information Superhighway

    Science.gov (United States)

    Lujan, Michelle R.

    1995-01-01

    The purpose of the project with the Aeroacoustics Branch was to create and submit a home page presenting branch information on the internet. In order to do this, one must also become familiar with the way the internet operates. Learning HyperText Markup Language (HTML), and the ability to create a document using this language, was the final objective in order to place a home page on the internet (World Wide Web). A manual of instructions regarding maintenance of the home page, and how to keep it up to date, was also necessary in order to provide branch members with the opportunity to make any pertinent changes.

  8. Classifying web genres in context: a case study documenting the web genres used by a software engineer

    NARCIS (Netherlands)

    Montesi, M.; Navarrete, T.

    2008-01-01

    This case study analyzes the Internet-based resources that a software engineer uses in his daily work. Methodologically, we studied the web browser history of the participant, classifying all the web pages he had seen over a period of 12 days into web genres. We interviewed him before and after the

  9. Adaptive web data extraction policies

    Directory of Open Access Journals (Sweden)

    Provetti, Alessandro

    2008-12-01

    Full Text Available Web data extraction is concerned, among other things, with routine data accessing and downloading from continuously-updated dynamic Web pages. There is a relevant trade-off between the rate at which the external Web sites are accessed and the computational burden on the accessing client. We address the problem by proposing a predictive model, typical of the Operating Systems literature, of the rate-of-update of each Web source. The presented model has been implemented into a new version of the Dynamo project: a middleware that assists in generating informative RSS feeds out of traditional HTML Web sites. To be effective, i.e., make RSS feeds be timely and informative and to be scalable, Dynamo needs a careful tuning and customization of its polling policies, which are described in detail.
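
    The trade-off described above, access rate versus client burden, is typically handled by adapting the polling interval to the source's observed rate of update. The sketch below is an assumption in the spirit of the text, not Dynamo's actual predictive model; the interval bounds and scaling factors are illustrative.

    ```python
    # Illustrative adaptive polling policy: poll a Web source more often after
    # observing a change, and back off when nothing has changed.
    class AdaptivePoller:
        def __init__(self, interval=60.0, min_interval=15.0, max_interval=3600.0):
            self.interval = interval          # seconds between polls
            self.min_interval = min_interval
            self.max_interval = max_interval

        def record_poll(self, changed):
            # Halve the interval on a change, grow it by 1.5x otherwise,
            # clamped to the configured bounds.
            if changed:
                self.interval = max(self.min_interval, self.interval / 2)
            else:
                self.interval = min(self.max_interval, self.interval * 1.5)
            return self.interval
    ```

    For example, `AdaptivePoller().record_poll(False)` stretches the default 60-second interval to 90 seconds, while an observed change pulls it back down.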

  10. The Rise and Fall of Text on the Web: A Quantitative Study of Web Archives

    Science.gov (United States)

    Cocciolo, Anthony

    2015-01-01

    Introduction: This study addresses the following research question: is the use of text on the World Wide Web declining? If so, when did it start declining, and by how much has it declined? Method: Web pages are downloaded from the Internet Archive for the years 1999, 2002, 2005, 2008, 2011 and 2014, producing 600 captures of 100 prominent and…

  11. Chapter 59: Web Services

    Science.gov (United States)

    Graham, M. J.

    Web services are a cornerstone of the distributed computing infrastructure that the VO is built upon yet to the newcomer, they can appear to be a black art. This perception is not helped by the miasma of technobabble that pervades the subject and the seemingly impenetrable high priesthood of actual users. In truth, however, there is nothing conceptually difficult about web services (unsurprisingly any complexities will lie in the implementation details) nor indeed anything particularly new. A web service is a piece of software available over a network with a formal description of how it is called and what it returns that a computer can understand. Note that entities such as web servers, ftp servers and database servers do not generally qualify as they lack the standardized description of their inputs and outputs. There are prior technologies, such as RMI, CORBA, and DCOM, that have employed a similar approach but the success of web services lies predominantly in its use of standardized XML to provide a language-neutral way for representing data. In fact, the standardization goes further as web services are traditionally (or as traditionally as five years will allow) tied to a specific set of technologies (WSDL and SOAP conveyed using HTTP with an XML serialization). Alternative implementations are becoming increasingly common and we will cover some of these here. One important thing to remember in all of this, though, is that web services are meant for use by computers and not humans (unlike web pages) and this is why so much of it seems incomprehensible gobbledegook. In this chapter, we will start with an overview of the web services current in the VO and present a short guide on how to use and deploy a web service. We will then review the different approaches to web services, particularly REST and SOAP, and alternatives to XML as a data format. We will consider how web services can be formally described and discuss how advanced features such as security, state
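
    The essential pattern the chapter describes, calling a service over HTTP and getting back a machine-readable serialization rather than a web page, can be sketched as follows. The endpoint URL and the JSON response shape are hypothetical, chosen only to illustrate the request/response pattern.

    ```python
    import json
    import urllib.parse

    def build_request(base_url, params):
        # Encode the call's parameters into the request URL. In a live call
        # this URL would be fetched with urllib.request.urlopen().
        return f"{base_url}?{urllib.parse.urlencode(params)}"

    def parse_response(body):
        # The service returns structured data a program can consume directly,
        # unlike an HTML page meant for humans.
        return json.loads(body)

    url = build_request("http://example.org/cone_search", {"RA": 180.0, "DEC": 2.5})
    result = parse_response('{"rows": [{"id": 1, "ra": 180.01}]}')
    ```

    A SOAP-style service would wrap the same exchange in an XML envelope described by a WSDL document, but the principle, a formally described call with a machine-readable result, is identical.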

  12. Web Engineering

    OpenAIRE

    Deshpande, Yogesh; Murugesan, San; Ginige, Athula; Hansen, Steve; Schwabe, Daniel; Gaedke, Martin; White, Bebo

    2003-01-01

    Web Engineering is the application of systematic, disciplined and quantifiable approaches to development, operation, and maintenance of Web-based applications. It is both a pro-active approach and a growing collection of theoretical and empirical research in Web application development. This paper gives an overview of Web Engineering by addressing the questions: a) why is it needed? b) what is its domain of operation? c) how does it help and what should it do to improve Web application develo...

  13. Calculating PageRank in a changing network with added or removed edges

    Science.gov (United States)

    Engström, Christopher; Silvestrov, Sergei

    2017-01-01

    PageRank was initially developed by S. Brin and L. Page in 1998 to rank homepages on the Internet using the stationary distribution of a Markov chain created from the web graph. Due to the large size of the web graph and of many other real-world networks, fast methods to calculate PageRank are needed, and even though the original way of calculating PageRank using power iterations is rather fast, many other approaches have been made to improve the speed further. In this paper we consider the problem of recalculating the PageRank of a changing network where the PageRank of a previous version of the network is known. In particular, we consider the special case of adding or removing edges at a single vertex of the graph or graph component.
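
    The power-iteration baseline mentioned above can be sketched in a few lines. This is the standard algorithm, not the paper's incremental recalculation method; the toy graph is an assumption, and the damping factor 0.85 is the conventional choice.

    ```python
    # Power-iteration PageRank on an adjacency-list graph.
    def pagerank(links, d=0.85, iters=100):
        nodes = list(links)
        n = len(nodes)
        rank = {v: 1.0 / n for v in nodes}      # start from the uniform vector
        for _ in range(iters):
            new = {v: (1 - d) / n for v in nodes}   # teleportation term
            for v, outs in links.items():
                targets = outs if outs else nodes   # dangling node: spread evenly
                share = rank[v] / len(targets)
                for u in targets:
                    new[u] += d * share
            rank = new
        return rank

    graph = {"a": ["b", "c"], "b": ["c"], "c": ["a"]}
    pr = pagerank(graph)
    ```

    Each iteration multiplies the rank vector by the damped transition matrix, so the vector converges to the Markov chain's stationary distribution; in the toy graph, `c` ends up highest because both `a` and `b` link to it.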

  14. Tanzania Veterinary Journal: Submissions

    African Journals Online (AJOL)

    The corresponding author certifies in the letter that all coauthors have read the manuscript and agree to its submission. Every coauthor should .... If you will be using a digital camera to capture images for print production, you must use the highest resolution setting option with the least amount of compression. Digital camera ...

  15. ORiON: Submissions

    African Journals Online (AJOL)

    This format is also supported by the ORiON LATEX style sheet (which may be downloaded from http://www.orssa.org.za -> ORiON -> Submissions -> Style Sheets). ... If MS Word is used to prepare a manuscript, it should be utilised appropriately. .... An example of an unpublished technical report [6] is also shown below.

  16. Nigerian Veterinary Journal: Submissions

    African Journals Online (AJOL)

    SCOPE The Editorial Board of the Nigerian Veterinary Journal (NVJ) welcomes contributions in the form of original research papers, review articles, clinical case reports, and short communications on all aspects of Veterinary Medicine, Surgery and Animal Production. Submissions are accepted on the understanding that ...

  17. Open Veterinary Journal: Submissions

    African Journals Online (AJOL)

    All submitted manuscripts are checked for plagiarism using PlagScan Plagiarism Detection Software: The image shows our cooperation with the online plagiarism detection service PlagScan. Submission ... For case reports, text should be organized as follows: Introduction, Case Details, Discussion, and References. Review ...

  18. Ergonomics SA: Submissions

    African Journals Online (AJOL)

    Manuscript submissions. Authors should submit their full papers (using the abovementioned template) as an attachment via email to the journal email address j.mcdougall@ru.ac.za. All submitted papers should be sent in .doc or .rtf formats. No other formats will be accepted. Editor. Editor-in-Chief: Ergonomics SA

  19. Manuscript Submission Form

    Indian Academy of Sciences (India)

    Mr.XAVIER

    To: Indian Academy of Sciences. From: Author or Corresponding author with institutional/corresponding address including e-mail. (on behalf of, and binding upon, all the authors). Journal: Title of manuscript: Date of submission of manuscript: In respect of the work mentioned above, I/we undertake to ensure that: i).

  20. Lagos Historical Review: Submissions

    African Journals Online (AJOL)

    Submissions can be made by sending a word processing computer file in MS Word format by e-mail to sarlek@yahoo.com, or by mailing three paper copies to the Editorial Office. Authors should keep a computer file version of their manuscript, as Lagos Historical Review will require a disk version upon acceptance for ...

  1. Africa Sanguine: Submissions

    African Journals Online (AJOL)

    Author Guidelines. Submissions for consideration may include original scientific articles (which will be peer reviewed), short reports, letters to the Editor, reviews, congress proceedings, and reprints of published articles (with permission). Original scientific work must meet the following requirements: Be a report of original ...

  2. COVER AND CONTENTS PAGES

    OpenAIRE

    Anonymous

    1998-01-01

    Includes: Front Cover, Editorial Information, Contents Pages, Dr. Carl G. Anderson: Lifetime Achievement Award, Dr. Eldon D. Smith: Lifetime Achievement Award, Dr. Kenneth R. Tefertiller: Lifetime Achievement Award, Eduardo Segarra: 1998-99 President

  3. TCRC Fertility Page

    Science.gov (United States)

    The Testicular Cancer Resource Center The TCRC Fertility Page Testicular Cancer and fertility are interrelated in numerous ways. TC usually affects young men still in the process of having a family. ...

  4. Animal Research International: Submissions

    African Journals Online (AJOL)

    Author Guidelines. Original research papers, short communications and review articles are published. Original papers should not normally exceed 15 double spaced typewritten pages including tables and figures. Short communications should not normally exceed six double spaced typewritten pages including tables and ...

  5. Zoologist (The): Submissions

    African Journals Online (AJOL)

    Title page: The first page of the manuscript should contain the complete title of the paper, the names of authors and their affiliations; a short title (not more than 30 ... Plates: The journal will only accept laser-print of images or very high quality of photographs fully annotated and titled, if electronic format is not available.

  6. FIRST Quantum-(1980)-Computing DISCOVERY in Siegel-Rosen-Feynman-...A.-I. Neural-Networks: Artificial(ANN)/Biological(BNN) and Siegel FIRST Semantic-Web and Siegel FIRST ``Page''-``Brin'' ``PageRank'' PRE-Google Search-Engines!!!

    Science.gov (United States)

    Rosen, Charles; Siegel, Edward Carl-Ludwig; Feynman, Richard; Wunderman, Irwin; Smith, Adolph; Marinov, Vesco; Goldman, Jacob; Brine, Sergey; Poge, Larry; Schmidt, Erich; Young, Frederic; Goates-Bulmer, William-Steven; Lewis-Tsurakov-Altshuler, Thomas-Valerie-Genot; Ibm/Exxon Collaboration; Google/Uw Collaboration; Microsoft/Amazon Collaboration; Oracle/Sun Collaboration; Ostp/Dod/Dia/Nsa/W.-F./Boa/Ubs/Ub Collaboration

    2013-03-01

    Belew[Finding Out About, Cambridge(2000)] and separately full-decade pre-Page/Brin/Google FIRST Siegel-Rosen(Machine-Intelligence/Atherton)-Feynman-Smith-Marinov(Guzik Enterprises/Exxon-Enterprises/A.-I./Santa Clara)-Wunderman(H.-P.) [IBM Conf. on Computers and Mathematics, Stanford(1986); APS Mtgs.(1980s): Palo Alto/Santa Clara/San Francisco/...(1980s) MRS Spring-Mtgs.(1980s): Palo Alto/San Jose/San Francisco/...(1980-1992) FIRST quantum-computing via Bose-Einstein quantum-statistics(BEQS) Bose-Einstein CONDENSATION (BEC) in artificial-intelligence(A-I) artificial neural-networks(A-N-N) and biological neural-networks(B-N-N) and Siegel[J. Noncrystalline-Solids 40, 453(1980); Symp. on Fractals..., MRS Fall-Mtg., Boston(1989)-5-papers; Symp. on Scaling..., (1990); Symp. on Transport in Geometric-Constraint (1990)

  7. Creating web map with Google Fusion Tables and integration of maps into web site

    OpenAIRE

    Dreu, Bojan

    2015-01-01

    The thesis presents the creation of a web page consisting of a web map of water protection areas. The web map is created with the Google application Google Fusion Tables. Google Maps is used as the basis on which to view the water protection areas and their road signs. A large part of the thesis describes the creation of maps in Google Fusion Tables and their integration into a web site. All the needed Excel files, KML files and HTML (HyperText Markup Language, for the creation of web sites), creation of map i...

  8. Give your feedback on the new Users’ page

    CERN Multimedia

    CERN Bulletin

    If you haven't already done so, visit the new Users’ page and provide the Communications group with your feedback. You can do this quickly and easily via an online form. A dedicated web steering group will design the future page on the basis of your comments. As a first step towards reforming the CERN website, the Communications group is proposing a ‘beta’ version of the Users’ pages. The primary aim of this version is to improve the visibility of key news items, events and announcements to the CERN community. The beta version is very much work in progress: your input is needed to make sure that the final site meets the needs of CERN’s wide and mixed community. The Communications group will read all your comments and suggestions, and will establish a web steering group that will make sure that the future CERN web pages match the needs of the community. More information on this process, including the gradual 'retirement' of the grey Users' pages we are a...

  9. A URI-based approach for addressing fragments of media resources on the Web

    NARCIS (Netherlands)

    E. Mannens; D. van Deursen; R. Troncy (Raphael); S. Pfeiffer; C. Parker (Conrad); Y. Lafon; A.J. Jansen (Jack); M. Hausenblas; R. van de Walle

    2011-01-01

    To make media resources a prime citizen on the Web, we have to go beyond simply replicating digital media files. The Web is based on hyperlinks between Web resources, and that includes hyperlinking out of resources (e.g., from a word or an image within a Web page) as well as hyperlinking

  10. What is the invisible web? A crawler perspective

    OpenAIRE

    Arroyo, Natalia

    2004-01-01

    The invisible Web, also known as the deep Web or dark matter, is an important problem for Webometrics due to difficulties of conceptualization and measurement. The invisible Web has been defined to be the part of the Web that cannot be indexed by search engines, including databases and dynamically generated pages. Some authors have recognized that this is a quite subjective concept that depends on the point of view of the observer: what is visible for one observer may be invisible for others....

  11. Adding a visualization feature to web search engines: it's time.

    Science.gov (United States)

    Wong, Pak Chung

    2008-01-01

    It's widely recognized that all Web search engines today are almost identical in presentation layout and behavior. In fact, the same presentation approach has been applied to depicting search engine results pages (SERPs) since the first Web search engine launched in 1993. In this Visualization Viewpoints article, I propose to add a visualization feature to Web search engines and suggest that the new addition can improve search engines' performance and capabilities, which in turn lead to better Web search technology.

  12. Towards more mature web maintenance practices for accessibility.

    OpenAIRE

    Bailey, J. O.; Burd, E.L.

    2007-01-01

    This paper proposes a need to differentiate Web maintenance from traditional software maintenance. The maintenance of a Web site differs from that of software in that a site is under constant maintenance and works by default in "modified environments". Building on the authors' previous work on automated quality measurement, a case study of a large-scale Web maintenance activity is described, focusing on a change made to ensure that Web pages are accessible to users with disabilities. The results of ...

  13. Academic libraries web-sites in Poland and their users

    OpenAIRE

    Szerksznis, Żaneta

    2005-01-01

    The author describes the methods of communication with web-site users, analyzing Polish university libraries. Three forms of communication are identified: static, dynamic and supportive multimedia courses (the so-called "help-yourself guides"). The second edition of the "Best Academic Library Web-Site" competition, organized by the Polish Librarians Association last year, selected the best web-site of 2004. The Wroclaw University Library web page in its new shape was initiated in July 2003 ...

  14. An Enhanced Rule-Based Web Scanner Based on Similarity Score

    Directory of Open Access Journals (Sweden)

    LEE, M.

    2016-08-01

    This paper proposes an enhanced rule-based web scanner that achieves better accuracy in detecting web vulnerabilities than existing tools, which have a relatively high false-alarm rate when web pages are installed in unconventional directory paths. Using the proposed matching method based on a similarity score, the scheme can determine whether two pages have the same vulnerabilities. With this method, the scheme can identify vulnerable target web pages by comparing them to pages that are known to have vulnerabilities. We show that the proposed scanner reduces the false-alarm rate by 12% compared to an existing well-known scanner in a performance evaluation comprising various experiments. The proposed scheme is especially helpful in detecting vulnerabilities of web applications derived from well-known open-source web applications after small customization, which happens frequently in many small-sized companies.
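The abstract does not spell out which similarity measure the scanner uses; as a rough illustration of matching pages by a similarity score, here is a minimal sketch using Python's difflib. The threshold of 0.8 and the page snippets are invented for the example, not taken from the paper:

```python
from difflib import SequenceMatcher

def page_similarity(page_a: str, page_b: str) -> float:
    """Return a similarity score in [0, 1] between two HTML pages."""
    return SequenceMatcher(None, page_a, page_b).ratio()

def same_vulnerable_page(candidate: str, known_vulnerable: str,
                         threshold: float = 0.8) -> bool:
    """Flag the candidate page if it closely matches a known-vulnerable page."""
    return page_similarity(candidate, known_vulnerable) >= threshold

# The same vulnerable form, installed under an unconventional directory path:
known = "<html><body><form action='login.php'>...</form></body></html>"
moved = "<html><body><form action='app/login.php'>...</form></body></html>"
print(same_vulnerable_page(moved, known))  # still matched despite the moved path
```

A plain string comparison would miss the relocated page entirely; the similarity score is what lets the scanner tolerate small path or markup changes.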

  15. JBrowse: a dynamic web platform for genome visualization and analysis

    National Research Council Canada - National Science Library

    Buels, Robert; Yao, Eric; Diesh, Colin M; Hayes, Richard D; Munoz-Torres, Monica; Helt, Gregg; Goodstein, David M; Elsik, Christine G; Lewis, Suzanna E; Stein, Lincoln; Holmes, Ian H

    2016-01-01

    .... It is easily embedded into websites or apps but can also be served as a standalone web page. Overall improvements to speed and scalability are accompanied by specific enhancements that support complex interactive queries on large track sets...

  16. Three loud cheers for the father of the Web

    CERN Multimedia

    2005-01-01

    World Wide Web creator Sir Tim Berners-Lee could have been a very rich man - but he gave away his invention for the good of mankind. Tom Leonard meets the modest genius voted Great Briton 2004 (2 pages)

  17. Oh What a Tangled Biofilm Web Bacteria Weave

    Science.gov (United States)

    ... Home Page Oh What a Tangled Biofilm Web Bacteria Weave By Elia Ben-Ari Posted May 1, ... a suitable surface, some water and nutrients, and bacteria will likely put down stakes and form biofilms. ...

  18. The intelligent web search, smart algorithms, and big data

    CERN Document Server

    Shroff, Gautam

    2013-01-01

    As we use the Web for social networking, shopping, and news, we leave a personal trail. These days, linger over a Web page selling lamps, and they will turn up in the advertising margins as you move around the Internet, reminding you, tempting you to make that purchase. Search engines such as Google can now look deep into the data on the Web to pull out instances of the words you are looking for. And there are pages that collect and assess information to give you a snapshot of changing political opinion. These are just basic examples of the growth of "Web intelligence", as increasingly sophis

  19. Fenix-Personalized Information Filtering System for WWW Pages.

    Science.gov (United States)

    Delicato, Flavia Coimbra; Pirmez, Luci; Rust da Costa Carmo, Luiz Fernando

    2001-01-01

    Suggests the use of intelligent agents for the personalized filtering of WWW (World Wide Web) pages and describes the development of a system to satisfy the user's need for information while reducing the amount of information the user has to deal with through relevance feedback. (Author/LRW)

  20. The Inquiry Page: Bringing Digital Libraries to Learners.

    Science.gov (United States)

    Bruce, Bertram C.; Bishop, Ann Peterson; Heidorn, P. Bryan; Lunsford, Karen J.; Poulakos, Steven; Won, Mihye

    2003-01-01

    Discusses digital library development, particularly a national science digital library, and describes the Inquiry Page which focuses on building a constructivist environment using Web resources, collaborative processes, and knowledge that bridges digital libraries with users in K-12 schools, museums, community groups, or other organizations. (LRW)

  1. Neutralizing SQL Injection Attack Using Server Side Code Modification in Web Applications

    OpenAIRE

    Asish Kumar Dalai; Sanjay Kumar Jena

    2017-01-01

    Reports on web application security risks show that SQL injection is the top most vulnerability. The journey of static to dynamic web pages leads to the use of database in web applications. Due to the lack of secure coding techniques, SQL injection vulnerability prevails in a large set of web applications. A successful SQL injection attack imposes a serious threat to the database, web application, and the entire web server. In this article, the authors have proposed a novel method for prevent...
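The article's own server-side modification is not detailed in this excerpt; the standard defense it relates to is the parameterized query, where user input is bound as data rather than spliced into the SQL string. A minimal sketch using Python's sqlite3 (the table, credentials and attack payload are invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, password TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

def login_unsafe(name: str, password: str) -> bool:
    # Vulnerable: attacker input is spliced directly into the SQL string.
    query = f"SELECT 1 FROM users WHERE name = '{name}' AND password = '{password}'"
    return conn.execute(query).fetchone() is not None

def login_safe(name: str, password: str) -> bool:
    # Parameterized query: the driver treats input as data, never as SQL.
    query = "SELECT 1 FROM users WHERE name = ? AND password = ?"
    return conn.execute(query, (name, password)).fetchone() is not None

payload = "' OR '1'='1"
print(login_unsafe("alice", payload))  # True - injection bypasses the check
print(login_safe("alice", payload))    # False - injection neutralized
```

The classic `' OR '1'='1` payload turns the unsafe WHERE clause into a tautology, while the parameterized version simply looks for a user whose password is literally that string.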

  2. Pfam: clans, web tools and services

    OpenAIRE

    Finn, Robert D.; Mistry, Jaina; Schuster-Böckler, Benjamin; Griffiths-Jones, Sam; Hollich, Volker; Lassmann, Timo; Moxon, Simon; Marshall, Mhairi; Khanna, Ajay; Durbin, Richard; Eddy, Sean R.; Sonnhammer, Erik L. L.; Bateman, Alex

    2006-01-01

    Pfam is a database of protein families that currently contains 7973 entries (release 18.0). A recent development in Pfam has enabled the grouping of related families into clans. Pfam clans are described in detail, together with the new associated web pages. Improvements to the range of Pfam web tools and the first set of Pfam web services that allow programmatic access to the database and associated tools are also presented. Pfam is available on the web in the UK (http://www.sanger.ac.uk/Soft...

  3. Journal of Psychology in Africa: Submissions

    African Journals Online (AJOL)

    Author Guidelines. Please note that this journal is no longer published by NISC. Submission Preparation Checklist. As part of the submission process, authors are required to check off their submission's compliance with all of the following items, and submissions may be returned to authors that do not adhere to these ...

  4. 76 FR 34232 - Agency Information Collection Activities, Submission for Review; Information Collection Extension...

    Science.gov (United States)

    2011-06-13

    ... First Responders Community of Practice Web site found at . The user will complete the form online and... SECURITY Agency Information Collection Activities, Submission for Review; Information Collection Extension... Directorate, DHS. ACTION: 30-day Notice and request for comment. SUMMARY: The Department of Homeland Security...

  5. 77 FR 25185 - Agency Information Collection Activities: Submission for Review; Information Collection Extension...

    Science.gov (United States)

    2012-04-27

    ... First Responders Community of Practice Web site found at . The user will complete the form online and... SECURITY Agency Information Collection Activities: Submission for Review; Information Collection Extension... Directorate, DHS. ACTION: 60-day Notice and request for comment. SUMMARY: The Department of Homeland Security...

  6. WAPTT - Web Application Penetration Testing Tool

    Directory of Open Access Journals (Sweden)

    DURIC, Z.

    2014-02-01

    Web application vulnerabilities allow attackers to perform malicious actions that range from gaining unauthorized account access to obtaining sensitive data. The number of reported web application vulnerabilities has increased dramatically in the last decade. Most vulnerabilities result from improper input validation and sanitization; the most important of these are SQL injection (SQLI), Cross-Site Scripting (XSS) and Buffer Overflow (BOF). In order to address these vulnerabilities we designed and developed WAPTT (Web Application Penetration Testing Tool). Unlike other web application penetration testing tools, this tool is modular and can be easily extended by the end user. In order to improve the efficiency of SQLI vulnerability detection, WAPTT uses an efficient algorithm for page similarity detection. The proposed tool showed promising results compared to six well-known web application scanners in detecting various web application vulnerabilities.

  7. Web Caching

    Indian Academy of Sciences (India)

    E-commerce and security. The World Wide Web has been growing in leaps and bounds. Studies have indicated that this massive distributed system can benefit greatly by making use of appropriate caching methods. Intelligent Web caching can lessen the burden on Web servers, improve their performance and at the same ...
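As a toy illustration of the caching idea described here (not any particular proxy's design), a minimal TTL-based page cache might look like the following in Python; the PageCache class, the fake fetch function and the TTL value are all invented for the sketch:

```python
import time

class PageCache:
    """A minimal in-memory web cache: store fetched pages with a TTL."""

    def __init__(self, ttl_seconds: float = 60.0):
        self.ttl = ttl_seconds
        self._store = {}           # url -> (timestamp, body)
        self.hits = self.misses = 0

    def get(self, url: str, fetch):
        entry = self._store.get(url)
        now = time.monotonic()
        if entry is not None and now - entry[0] < self.ttl:
            self.hits += 1
            return entry[1]        # fresh copy: serve from the cache
        self.misses += 1
        body = fetch(url)          # stale or absent: fetch from the origin server
        self._store[url] = (now, body)
        return body

cache = PageCache(ttl_seconds=30)
fetched = []
fake_fetch = lambda url: fetched.append(url) or f"<html>{url}</html>"
cache.get("http://example.org/", fake_fetch)
cache.get("http://example.org/", fake_fetch)   # second request is a cache hit
print(cache.hits, cache.misses, len(fetched))  # 1 1 1
```

The second request never reaches the origin, which is exactly how a cache "lessens the burden on Web servers": repeated requests for the same page cost the server nothing until the entry expires.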

  8. GALILEE: AN INTERNET WEB BASED DISTANCE LEARNING SUPPORT SYSTEM

    Directory of Open Access Journals (Sweden)

    Arthur Budiman

    1999-01-01

    This paper presents a Web-based distance-learning support system built on the Internet and World Wide Web. The system is accessed with a web browser pointed at a given web server address, so that students can carry out the learning process much as they would in person: student admissions, course materials, syllabi, assignments, grades, class discussions through the web, and online quizzes. Students can also collaborate by giving opinions and feedback and by sharing student-produced papers/web pages with the entire learning community. The result is a collaborative learning environment in which lecturers and students together build constructive knowledge bases for the whole community. The system is based on Active Server Pages (ASP) technology from Microsoft, embedded in a web server. Web pages reside on a web server connected to an SQL database server, which stores structured data such as lecturer/student personal information, course lists, syllabi and their descriptions, announcements from lecturers, discussion-forum comments, study evaluations, scores for each assignment, quizzes for each course, assignment texts from lecturers, assignments collected from students, and student contributions/materials. The system is maintained by an administrator, who maintains and develops the HTML web pages and writes the ASP scripts that turn them into active server pages. Lecturers and students can contribute course materials and share their ideas through their web browsers. This web-based collaborative learning system gives students a more active role in information gathering and learning, making distance students feel part of a learning community, thereby increasing motivation, comprehension and

  9. The SubCons webserver: A user friendly web interface for state-of-the-art subcellular localization prediction.

    Science.gov (United States)

    Salvatore, M; Shu, N; Elofsson, A

    2017-09-13

    SubCons is a recently developed method that predicts the subcellular localization of a protein. It combines predictions from four predictors using a Random Forest classifier. Here, we present the user-friendly web-interface implementation of SubCons. Starting from a protein sequence, the server rapidly predicts the subcellular localization of an individual protein. In addition, the server accepts the submission of sets of proteins, either by uploading files or programmatically via command-line WSDL API scripts. This makes SubCons ideal for proteome-wide analyses, allowing the user to scan a whole proteome in a few days. From the web page it is also possible to download precalculated predictions for several eukaryotic organisms. To evaluate the performance of SubCons, we present a benchmark of LocTree3 and SubCons using two recent mass-spectrometry-based datasets of mouse and Drosophila proteins. The server is available at http://subcons.bioinfo.se/. © 2017 The Protein Society.

  10. Missing Links: The Enduring Web

    Directory of Open Access Journals (Sweden)

    Marieke Guy

    2009-10-01

    The Web runs at risk. Our generation has witnessed a revolution in human communications on a trajectory similar to that of the origins of the written word and language itself. Early Web pages have an historical importance comparable with prehistoric cave paintings or proto-historic pressed clay ciphers. They are just as fragile. The ease of creation, editing and revising gives content a flexible immediacy: ensuring that sources are up to date and, with appropriate concern for interoperability, content can be folded seamlessly into any number of presentation layers. How can we carve a legacy from such complexity and volatility?

  11. stage/page/play

    DEFF Research Database (Denmark)

    stage/page/play is an anthology written by scholars coming from a broad range of academic backgrounds. Drawing on disciplines such as rhetoric, theology, philosophy, and anthropology, the articles all seek to explore new approaches to theatre and theatricality. stage analyzes the theatre as a unique platform for aesthetic examinations of contemporary, cultural and political issues. page focuses on the drama text in a scenic, performative context, and on innovative dramaturgical strategies. play studies how theatricality comes into play in our everyday life in a broad popular and ritual...

  12. Dark Web

    CERN Document Server

    Chen, Hsinchun

    2012-01-01

    The University of Arizona Artificial Intelligence Lab (AI Lab) Dark Web project is a long-term scientific research program that aims to study and understand the international terrorism (Jihadist) phenomena via a computational, data-centric approach. We aim to collect "ALL" web content generated by international terrorist groups, including web sites, forums, chat rooms, blogs, social networking sites, videos, virtual world, etc. We have developed various multilingual data mining, text mining, and web mining techniques to perform link analysis, content analysis, web metrics (technical

  13. UPGRADE OF THE CENTRAL WEB SERVERS

    CERN Document Server

    WEB Services

    2000-01-01

    During the weekend of the 25-26 March, the infrastructure of the CERN central web servers will undergo a major upgrade. As a result, the web services hosted by the central servers (that is, the services the address of which starts with www.cern.ch) will be unavailable Friday 24th, from 17:30 to 18:30, and may suffer from short interruptions until 20:00. This includes access to the CERN top-level page as well as the services referenced by this page (such as access to the scientific program and events information, or training, recruitment, housing services). After the upgrade, the change will be transparent to the users. Expert readers may however notice that when they connect to a web page starting with www.cern.ch this address is slightly changed when the page is actually displayed on their screen (e.g. www.cern.ch/Press will be changed to Press.web.cern.ch/Press). They should not worry: this behaviour, necessary for technical reasons, is normal. web.services@cern.ch, Tel. 74989

  14. Web 25

    DEFF Research Database (Denmark)

    Web 25: Histories from the First 25 Years of the World Wide Web celebrates the 25th anniversary of the Web. Since the beginning of the 1990s, the Web has played an important role in the development of the Internet as well as in the development of most societies at large, from its early grey... and blue webpages introducing the hyperlink for a wider public, to today's multifaceted uses of the Web as an integrated part of our daily lives. This is the first book to look back at 25 years of Web evolution, and it tells some of the histories about how the Web was born and has developed. It takes... are presented alongside methodological reflections on how the past Web can be studied, as well as accounts of how one of the most important source types of our time is provided, namely the archived Web. Web 25: Histories from the First 25 Years of the World Wide Web is a must-read for anyone interested in how...

  15. 75 FR 16510 - Submission for OMB Review: Comment Request

    Science.gov (United States)

    2010-04-01

    ... response, and estimated total burden may be obtained from the RegInfo.gov Web site at http://www.reginfo...: 420. Total Estimated Annual Costs Burden (Operation and Maintenance): $0. Description: This ICR... Register on November 17, 2009 (74 FR, page 59244). Agency: Employment and Training Administration. Type of...

  16. A Survey On Various Web Template Detection And Extraction Methods

    Directory of Open Access Journals (Sweden)

    Neethu Mary Varghese

    2015-03-01

    In today's digital world, reliance on the World Wide Web as a source of information is extensive. Users increasingly rely on web-based search engines to provide accurate results on the wide range of topics that interest them, and the search engines in turn parse a vast repository of web pages searching for relevant information. However, the majority of web portals are designed using web templates, which provide a consistent look and feel to end users. The presence of these templates can influence search results, leading to inaccurate results being delivered to users. Therefore, to improve the accuracy and reliability of search results, identification and removal of web templates from the actual content is essential. A wide range of approaches is commonly employed to achieve this, and this paper focuses on the study of the various approaches to template detection and extraction that can be applied across homogeneous as well as heterogeneous web pages.
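The survey's specific algorithms are not given in this excerpt; one simple intuition behind template detection is that the template is whatever the pages of a site share, while the actual content is what differs. A line-based sketch in Python (real systems compare DOM trees rather than raw lines; the sample pages are invented):

```python
def extract_template(pages):
    """Approximate a site's template as the lines shared by every page.

    This line-based heuristic only illustrates the idea of separating
    boilerplate from content; production systems work on DOM structure.
    """
    common = set(pages[0].splitlines())
    for page in pages[1:]:
        common &= set(page.splitlines())
    return common

def strip_template(page, template):
    """Keep only the content lines, i.e. those not in the template."""
    return [line for line in page.splitlines() if line not in template]

page1 = "<nav>Site menu</nav>\n<p>Article about whales</p>\n<footer>(c) 2015</footer>"
page2 = "<nav>Site menu</nav>\n<p>Article about search</p>\n<footer>(c) 2015</footer>"

template = extract_template([page1, page2])
print(strip_template(page1, template))  # ['<p>Article about whales</p>']
```

The navigation bar and footer fall out as template, leaving only the article text, which is the part a search engine should actually index.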

  17. Medical Movies on the Web Debuts with Gene Kelly's "Combat Fatigue Irritability" 1945 Film | NIH MedlinePlus the ...

    Science.gov (United States)

    ... page please turn JavaScript on. Medical Movies on the Web Debuts with Gene Kelly's "Combat Fatigue Irritability" ... Library of Medicine To view Medical Movies on the Web, go to: www.nlm.nih.gov/hmd/ ...

  18. An Expertise Recommender using Web Mining

    Science.gov (United States)

    Joshi, Anupam; Chandrasekaran, Purnima; ShuYang, Michelle; Ramakrishnan, Ramya

    2001-01-01

    This report explored techniques to mine the web pages of scientists to extract information about their expertise, build expertise chains and referral webs, and semi-automatically combine this information with directory information services to create a recommender system that permits query by expertise. The approach included experimenting with existing techniques reported in the research literature in the recent past, adapting them as needed. In addition, software tools were developed to capture and use this information.

  19. PageRank, HITS and a unified framework for link analysis

    Energy Technology Data Exchange (ETDEWEB)

    Ding, Chris; He, Xiaofeng; Husbands, Parry; Zha, Hongyuan; Simon, Horst

    2001-10-01

    Two popular webpage ranking algorithms are HITS and PageRank. HITS emphasizes mutual reinforcement between authority and hub webpages, while PageRank emphasizes hyperlink weight normalization and web surfing based on random walk models. We systematically generalize/combine these concepts into a unified framework. The ranking framework contains a large algorithm space; HITS and PageRank are two extreme ends in this space. We study several normalized ranking algorithms which are intermediate between HITS and PageRank, and obtain closed-form solutions. We show that, to first-order approximation, all ranking algorithms in this framework, including PageRank and HITS, lead to the same ranking, which is highly correlated with ranking by indegree. These results support the notion that in web resource ranking indegree and outdegree are of fundamental importance. Rankings of webgraphs of different sizes and queries are presented to illustrate our analysis.
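The paper's closed-form solutions aside, the random-walk PageRank it builds on can be sketched as a plain power iteration in a few lines of Python; the damping factor 0.85 is the conventional choice and the three-page graph is invented for the example:

```python
def pagerank(links, damping=0.85, iterations=50):
    """Power-iteration PageRank on a dict mapping page -> list of out-links."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}          # start from the uniform distribution
    for _ in range(iterations):
        new = {p: (1.0 - damping) / n for p in pages}   # random-jump term
        for p, outs in links.items():
            if not outs:                        # dangling node: spread rank evenly
                for q in pages:
                    new[q] += damping * rank[p] / n
            else:                               # split rank over outgoing links
                for q in outs:
                    new[q] += damping * rank[p] / len(outs)
        rank = new
    return rank

# A page that both other pages link to should rank highest.
graph = {"a": ["c"], "b": ["c"], "c": ["a"]}
ranks = pagerank(graph)
print(max(ranks, key=ranks.get))  # c
```

Note that the winner `c` is also the page with the highest indegree, which is consistent with the paper's observation that, to first order, these rankings track indegree.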

  20. Measuring the Utilization of On-Page Search Engine Optimization in Selected Domain

    Directory of Open Access Journals (Sweden)

    Goran Matošević

    2015-12-01

    Search engine optimization (SEO) techniques involve "on-page" and "off-page" actions taken by web developers and SEO specialists with the aim of increasing the ranking of web pages in search engine results pages (SERPs), following recommendations from the major search engine companies. In this paper we explore the possibility of creating a metric for evaluating the on-page SEO of a website. A novel "k-rank" metric is proposed, which takes into account not only the presence of certain tags in the HTML of a page but also how those tags are used with selected keywords in a selected domain. The "k-rank" is tested in the domain of education by inspecting 20 university websites and comparing the results with expert scores. The results showed that "k-rank" can be used as a metric for on-page SEO.
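The paper's actual "k-rank" formula is not reproduced in this excerpt; as a loose illustration of scoring keyword use in SEO-relevant tags, here is a hypothetical simplified scorer using Python's html.parser. The tag set, scoring rule and sample page are assumptions for the sketch, not the authors' metric:

```python
from html.parser import HTMLParser

SEO_TAGS = {"title", "h1", "h2", "strong"}   # assumed set of SEO-relevant tags

class KeywordTagScorer(HTMLParser):
    """Hypothetical simplified on-page score: count in how many SEO-relevant
    tags the target keyword appears (not the paper's actual k-rank)."""

    def __init__(self, keyword):
        super().__init__()
        self.keyword = keyword.lower()
        self._stack = []          # currently open tags
        self.tags_hit = set()     # SEO tags whose text contains the keyword

    def handle_starttag(self, tag, attrs):
        self._stack.append(tag)

    def handle_endtag(self, tag):
        if self._stack and self._stack[-1] == tag:
            self._stack.pop()

    def handle_data(self, data):
        if self._stack and self._stack[-1] in SEO_TAGS \
                and self.keyword in data.lower():
            self.tags_hit.add(self._stack[-1])

def seo_score(page_html, keyword):
    scorer = KeywordTagScorer(keyword)
    scorer.feed(page_html)
    return len(scorer.tags_hit)

page = "<title>Study Physics</title><h1>Physics courses</h1><p>Enrol now</p>"
print(seo_score(page, "physics"))  # 2 - keyword found in <title> and <h1>
```

The point mirrors the abstract: a useful on-page metric has to look at how keywords are used within specific tags, not merely at whether the tags exist.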

  1. Folding worlds between pages

    CERN Multimedia

    Meier, Matthias

    2010-01-01

    "We all remember pop-up books form our childhood. As fascinated as we were back then, we probably never imagined how much engineering know-how went into these books. Pop-up engineer Anton Radevsky has even managed to fold a 27-kilometre particle accelerator into a book" (4 pages)

  2. Full page fax print

    Indian Academy of Sciences (India)

    user

    Contents-page residue from Resonance (June 2008): among the listed items are "Molecular Mechanism of Heterogeneous Catalysis: The 2007 Nobel Prize in Chemistry" by R S Swathi and K L Sebastian, and a piece on spiral waves in CO oxidation on a Pt surface.

  3. Title and title page.

    Science.gov (United States)

    Peh, W C G; Ng, K H

    2008-08-01

    The title gives the first impression of a scientific article, and should accurately convey to a reader what the whole article is about. A good title is short, informative and attractive. The title page provides information about the authors, their affiliations and the corresponding author's contact details.

  4. Un solo menú para toda la Web

    OpenAIRE

    Rovira, Cristòfol

    2003-01-01

    This navigation element consists of a menu or navigation bar, generated by JavaScript code, that appears on every page of a web site. The JavaScript code is contained in a single independent file, so the menu is very easy to modify or extend: by publishing that single JavaScript file, the menu is updated on every page of the site. In addition, the menu gives context information, because the option for the active page appears without a link, indicating where it ...

  5. Digital plagiarism--the Web giveth and the Web shall taketh.

    Science.gov (United States)

    Barrie, J M; Presti, D E

    2000-01-01

    Publishing students' and researchers' papers on the World Wide Web (WWW) facilitates the sharing of information within and between academic communities. However, the ease of copying and transporting digital information leaves these authors' ideas open to plagiarism. Using tools such as the Plagiarism.org database, which compares submissions to reports and papers available on the Internet, could discover instances of plagiarism, revolutionize the peer review process, and raise the quality of published research everywhere.
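Plagiarism.org's internal matching method is not described here; a common textbook approach to the same "compare a submission against known documents" problem is shingling with Jaccard overlap. A minimal Python sketch (the shingle size of 3 and the sample sentences are invented for illustration):

```python
def shingles(text, n=3):
    """The set of word n-grams ('shingles') of a document."""
    words = text.lower().split()
    return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}

def jaccard(a, b, n=3):
    """Overlap of shingle sets: a crude plagiarism signal in [0, 1]."""
    sa, sb = shingles(a, n), shingles(b, n)
    if not sa or not sb:
        return 0.0
    return len(sa & sb) / len(sa | sb)

original  = "the ease of copying digital information leaves ideas open to plagiarism"
copied    = "the ease of copying digital information leaves authors open to criticism"
unrelated = "web caching lessens the burden on busy servers worldwide"

print(jaccard(original, copied) > jaccard(original, unrelated))  # True
```

Lightly edited copies still share many shingles with their source, while unrelated text shares essentially none, so ranking submissions by this overlap surfaces likely plagiarism candidates for human review.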

  6. ESAP plus: a web-based server for EST-SSR marker development.

    Science.gov (United States)

    Ponyared, Piyarat; Ponsawat, Jiradej; Tongsima, Sissades; Seresangtakul, Pusadee; Akkasaeng, Chutipong; Tantisuwichwong, Nathpapat

    2016-12-22

    download all the results through the web interface. ESAP Plus is a comprehensive and convenient web-based bioinformatic tool for SSR marker development. ESAP Plus offers all necessary EST-SSR development processes, with various adjustable options that users can easily apply to identify SSR markers from a large EST collection. With a familiar web interface, users can upload raw ESTs on the data submission page and visualize/download the corresponding EST-SSR information from within ESAP Plus. ESAP Plus can handle considerably large EST datasets. This EST-SSR discovery tool can be accessed directly at: http://gbp.kku.ac.th/esap_plus/ .

  7. Chapter 07: Species description pages

    Science.gov (United States)

    Alex C. Wiedenhoeft

    2011-01-01

    These pages are written to be the final step in the identification process; you will be directed to them by the key in Chapter 6. Each species or group of similar species in the same genus has its own set of pages. The information in the first page describes the characteristics of the wood covered in the manual. The page shows images of similar or confusable woods,...

  8. Web Engineering

    Energy Technology Data Exchange (ETDEWEB)

    White, Bebo

    2003-06-23

    Web Engineering is the application of systematic, disciplined and quantifiable approaches to development, operation, and maintenance of Web-based applications. It is both a pro-active approach and a growing collection of theoretical and empirical research in Web application development. This paper gives an overview of Web Engineering by addressing the questions: (a) why is it needed? (b) what is its domain of operation? (c) how does it help and what should it do to improve Web application development? and (d) how should it be incorporated in education and training? The paper discusses the significant differences that exist between Web applications and conventional software, the taxonomy of Web applications, the progress made so far and the research issues and experience of creating a specialization at the master's level. The paper reaches a conclusion that Web Engineering at this stage is a moving target since Web technologies are constantly evolving, making new types of applications possible, which in turn may require innovations in how they are built, deployed and maintained.

  9. Scatter Matters: Regularities and Implications for the Scatter of Healthcare Information on the Web

    OpenAIRE

    Bhavnani, Suresh K.; Peck, Frederick A.

    2010-01-01

    Despite the development of huge healthcare Web sites and powerful search engines, many searchers end their searches prematurely with incomplete information. Recent studies suggest that users often retrieve incomplete information because of the complex scatter of relevant facts about a topic across Web pages. However, little is understood about regularities underlying such information scatter. To probe regularities within the scatter of facts across Web pages, this article presents the results...

  10. Off the Beaten tracks: Exploring Three Aspects of Web Navigation

    NARCIS (Netherlands)

    Weinreich, H.; Obendorf, H.; Herder, E.; Mayer, M.; Edmonds, H.; Hawkey, K.; Kellar, M.; Turnbull, D.

    2006-01-01

    This paper presents results of a long-term client-side Web usage study, updating previous studies that range in age from five to ten years. We focus on three aspects of Web navigation: changes in the distribution of navigation actions, speed of navigation and within-page navigation. “Navigation

  11. Classical Hypermedia Virtues on the Web with Webstrates

    DEFF Research Database (Denmark)

    Bouvin, Niels Olof; Klokmose, Clemens Nylandsted

    2016-01-01

    We show and analyze herein how Webstrates can augment the Web from a classical hypermedia perspective. Webstrates turns the DOM of Web pages into persistent and collaborative objects. We demonstrate how this can be applied to realize bidirectional links, shared collaborative annotations, and in-browser authorship and development.

  12. The SPIRIT collection: an overview of a large web collection

    OpenAIRE

    Joho, H.; Sanderson, M.

    2004-01-01

    A large scale collection of web pages has been essential for research in information retrieval and related areas. This paper provides an overview of a large web collection used in the SPIRIT project for the design and testing of spatially-aware retrieval systems. Several statistics are derived and presented to show the characteristics of the collection.

  13. A Metro Map Metaphor for Guided Tours on the Web

    DEFF Research Database (Denmark)

    Sandvad, Elmer Sørensen; Grønbæk, Kaj; Sloth, Lennert

    2001-01-01

    maps and route maps with indication of which stations of a tour have been visited; and finally (4) support for arbitrary web pages as stations on the tour. The paper discusses the Webvise Guided Tour System and illustrates its use in a digital library portal. The system is compared to other recent Web...

  14. 21 CFR 1304.45 - Internet Web site disclosure requirements.

    Science.gov (United States)

    2010-04-01

    ... 21 Food and Drugs 9 2010-04-01 2010-04-01 false Internet Web site disclosure requirements. 1304.45... OF REGISTRANTS Online Pharmacies § 1304.45 Internet Web site disclosure requirements. (a) Each online... the following information on the homepage of each Internet site it operates, or on a page directly...

  15. DW3 Classical Music Resources: Managing Mozart on the Web.

    Science.gov (United States)

    Fineman, Yale

    2001-01-01

    Discusses the development of DW3 (Duke World Wide Web) Classical Music Resources, a vertical portal that comprises the most comprehensive collection of classical music resources on the Web with links to more than 2800 non-commercial pages/sites in over a dozen languages. Describes the hierarchical organization of subject headings and considers…

  16. Traitor: associating concepts using the world wide web

    NARCIS (Netherlands)

    Drijfhout, Wanno; Oliver, J.; Oliver, Jundt; Wevers, L.; Hiemstra, Djoerd

    We use Common Crawl's 25TB data set of web pages to construct a database of associated concepts using Hadoop. The database can be queried through a web application with two query interfaces. A textual interface allows searching for similarities and differences between multiple concepts using a query

  17. Ten years on, the web spans the globe

    CERN Multimedia

    Dalton, A W

    2003-01-01

    Short article on the history of the WWW. Prof. Berners-Lee states that one of the main reasons the web was such a success was CERN's decision to make the web foundations and protocols available on a royalty-free basis (1/2 page).

  18. Sample-based XPath Ranking for Web Information Extraction

    NARCIS (Netherlands)

    Jundt, Oliver; van Keulen, Maurice

    Web information extraction typically relies on a wrapper, i.e., program code or a configuration that specifies how to extract some information from web pages at a specific website. Manually creating and maintaining wrappers is a cumbersome and error-prone task. It may even be prohibitive as some

  19. PageRank model of opinion formation on Ulam networks

    Science.gov (United States)

    Chakhmakhchyan, L.; Shepelyansky, D.

    2013-12-01

    We consider a PageRank model of opinion formation on Ulam networks, generated by the intermittency map and the typical Chirikov map. The Ulam networks generated by these maps have certain similarities with scale-free networks such as the World Wide Web (WWW), showing an algebraic decay of the PageRank probability. We find that the opinion formation process on Ulam networks has certain similarities to, but also distinct features from, the WWW. We attribute these distinctions to internal differences in the network structure of the Ulam and WWW networks. We also analyze the process of opinion formation in the frame of the generalized Sznajd model, which protects the opinion of small communities.
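The PageRank probability referred to in this record is the stationary distribution of a damped random walk with teleportation. A minimal power-iteration sketch; the graph, node names, and iteration count are illustrative, with the conventional damping factor α = 0.85:

```python
def pagerank(links, alpha=0.85, iters=100):
    """Power iteration for PageRank on {node: [outgoing neighbors]}."""
    nodes = list(links)
    n = len(nodes)
    rank = {u: 1.0 / n for u in nodes}
    for _ in range(iters):
        new = {u: (1.0 - alpha) / n for u in nodes}
        for u in nodes:
            out = links[u] or nodes  # dangling nodes spread mass uniformly
            share = alpha * rank[u] / len(out)
            for v in out:
                new[v] += share
        rank = new
    return rank

# Toy directed graph (illustrative node names):
graph = {"A": ["B", "C"], "B": ["C"], "C": ["A"], "D": ["C"]}
pr = pagerank(graph)
```

Ranks sum to one; the heavily linked-to node C dominates, while D, which nothing links to, retains only the teleport mass.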

  20. Fermilab joins in global live Web cast

    CERN Document Server

    Polansek, Tom

    2005-01-01

    From 2 to 3:30 p.m., Lederman, who won the Nobel Prize for physics in 1988, will host his own wacky, science-centered talk show at Fermi National Accelerator Laboratory as part of a live, 12-hour, international Web cast celebrating Albert Einstein and the World Year of Physics (2/3 page).

  1. Web Widgets Barriers for Visually Impaired Users.

    Science.gov (United States)

    Seixas Pereira, Letícia; Archambault, Dominique

    2017-01-01

    Currently, websites are mainly composed of web widgets: dynamic elements and updatable sections such as autosuggest lists, carousels, and slideshows. In order to contribute to the development of accessible rich internet applications, this work aims to better understand the interaction of severely visually impaired users with these pages, gathering their main barriers and difficulties.

  2. Knighthood for 'father of the web'

    CERN Multimedia

    Uhlig, R

    2003-01-01

    "Tim Berners-Lee, the father of the world wide web, was awarded a knighthood for services to the internet, which his efforts transformed from a haunt of computer geeks, scientists and the military into a global phenomenon" (1/2 page).

  3. A Web Browser Interface to Manage the Searching and Organizing of Information on the Web by Learners

    Science.gov (United States)

    Li, Liang-Yi; Chen, Gwo-Dong

    2010-01-01

    Information Gathering is a knowledge construction process. Web learners make a plan for their Information Gathering task based on their prior knowledge. The plan is evolved with new information encountered and their mental model is constructed through continuously assimilating and accommodating new information gathered from different Web pages. In…

  4. Web archives

    DEFF Research Database (Denmark)

    Finnemann, Niels Ole

    2018-01-01

    … a broad and rich archive. Section six is concerned with inherent limitations and why web archives are always flawed. The last sections deal with the question of how web archives may fit into the rapidly expanding but fragmented landscape of digital repositories taking care of various parts of the exponentially growing amounts of still more heterogeneous data materials.

  5. Equipped Search Results Using Machine Learning from Web Databases

    OpenAIRE

    Ahmed Mudassar Ali; Ramakrishnan, M.

    2015-01-01

    The aim of this study is to form clusters of search results based on similarity and to assign meaningful labels to them. Database-driven web pages play a vital role in multiple domains such as online shopping, e-education systems, and cloud computing. Such databases are accessible through HTML forms and user interfaces, and they return result pages drawn from the underlying databases according to the nature of the user query. Such databases are termed Web Databases (WDB). Web databases have …

  6. Reflect: a practical approach to web semantics

    DEFF Research Database (Denmark)

    O'Donoghue, S.I.; Horn, Heiko; Pafilisa, E.

    2010-01-01

    To date, adding semantic capabilities to web content usually requires considerable server-side re-engineering, thus only a tiny fraction of all web content currently has semantic annotations. Recently, we announced Reflect (http://reflect.ws), a free service that takes a more practical approach: Reflect uses augmented browsing to allow end-users to add systematic semantic annotations to any web page in real-time, typically within seconds. In this paper we describe the tagging process in detail and show how further entity types can be added to Reflect; we also describe how publishers and content…

  7. Humanities Review Journal: Submissions

    African Journals Online (AJOL)

    Author Guidelines. Papers for Humanities Review Journal should be submitted in English and should not exceed 25 pages typed double-spaced on A4 paper. References should be listed alphabetically at the end of the paper. Two copies of the paper, plus a 3.5″ diskette in Microsoft Word format, should be submitted. Neither …

  8. Nigerian Music Review: Submissions

    African Journals Online (AJOL)

    3. The length of the article may not be more than 15 pages, including illustrations and references. 4. Manuscripts should be submitted electronically as e-mail attachments in Microsoft format. File name(s) should be clearly stated for easy accessibility. 5. Articles must not have been published or sent for publication elsewhere.

  9. Scientific Medical Journal: Submissions

    African Journals Online (AJOL)

    Author Guidelines. An abstract should be written at the beginning of the article. This should be typed on a separate page and is not to exceed 200 words. The abstract should be a concise but comprehensive statement of the aim of study, material and methods, results and conclusions. An Arabic summary should be included ...

  10. Nigeria Agricultural Journal: Submissions

    African Journals Online (AJOL)

    The author's name comes first; followed by year of publication (in parentheses); title of paper with initial letter of first word and proper nouns capitalized; full name of the journal in italics, volume number in Arabic numerals, journal number if any in Arabic numerals and in parentheses, colon, the first page of paper, hyphen, ...

  11. Africa Insight: Submissions

    African Journals Online (AJOL)

    This means that all relevant information should be provided such as the author's surname and initials, year of publication, full title (including subtitle, where applicable), publisher, place of publication, date of publication (in cases where this is applicable such as newspaper articles), journal issue number, page reference, etc.

  12. News from the Library: The CERN Web Archive

    CERN Multimedia

    CERN Library

    2012-01-01

    The World Wide Web was born at CERN in 1989. However, although historic paper documents from over 50 years ago survive in the CERN Archive, it is by no means certain that we will be able to consult today's web pages 50 years from now.   The Internet Archive's Wayback Machine includes an impressive collection of archived CERN web pages from 1996 onwards. However, their coverage is not complete - they aim for broad coverage of the whole Internet, rather than in-depth coverage of particular organisations. To try to fill this gap, the CERN Archive has entered into a partnership agreement with the Internet Memory Foundation. Harvesting of CERN's publicly available web pages is now being carried out on a regular basis, and the results are available here. 

  13. DERIVING USER ACCESS PATTERNS AND MINING WEB COMMUNITY WITH WEB-LOG DATA FOR PREDICTING USER SESSIONS WITH PAJEK

    Directory of Open Access Journals (Sweden)

    S. Balaji

    2012-10-01

    Full Text Available Web logs are a young and dynamic media type. Due to the intrinsic relationships among Web objects and the lack of a uniform schema for web documents, Web community mining has become a significant area of Web data management and analysis. Research on Web communities extends across a number of research domains. In this paper an ontological model is presented together with some recent studies on this topic, which cover finding relevant Web pages based on linkage information and discovering user access patterns through analyzing Web log files. A simulation was created with data crawled from an academic website, implemented in a JAVA and ORACLE environment. Results show that prediction of user sessions could give plenty of vital information for Business Intelligence. Search Engine Optimization could also use these potential results, which are discussed in the paper in detail.
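The user-session identification step mentioned in this record is commonly implemented by splitting each visitor's request stream on an inactivity timeout. A sketch; the timestamps and URLs are made up, and the 30-minute threshold is a common convention rather than a value taken from the paper:

```python
from datetime import datetime, timedelta

def sessionize(requests, timeout=timedelta(minutes=30)):
    """Split one user's (timestamp, url) requests, assumed time-sorted,
    into sessions separated by gaps longer than `timeout`."""
    sessions = []
    for ts, url in requests:
        if sessions and ts - sessions[-1][-1][0] <= timeout:
            sessions[-1].append((ts, url))
        else:
            sessions.append([(ts, url)])
    return sessions

# Made-up requests from one visitor to an academic site:
log = [
    (datetime(2012, 10, 1, 9, 0), "/index"),
    (datetime(2012, 10, 1, 9, 10), "/courses"),
    (datetime(2012, 10, 1, 14, 0), "/index"),  # long gap: a new session
]
sessions = sessionize(log)
```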

  14. Egyptian Journal of Medical Laboratory Sciences: Submissions

    African Journals Online (AJOL)

    Egyptian Journal of Medical Laboratory Sciences: Submissions. Journal Home > About the Journal > Egyptian Journal of Medical Laboratory Sciences: Submissions. Log in or Register to get access to full text downloads.

  15. African Journal of Finance and Management: Submissions

    African Journals Online (AJOL)

    African Journal of Finance and Management: Submissions. Journal Home > About the Journal > African Journal of Finance and Management: Submissions. Log in or Register to get access to full text downloads.

  16. Semantic Web Portals: Design and Development Technologies and Tools

    OpenAIRE

    Ansari, Aftab

    2012-01-01

    Ansari, Aftab 2012. Semantic Web Portals: Design and Development Technologies and Tools. Bachelor's Thesis. Kemi-Tornio University of Applied Sciences. Business and Culture. Pages 67. The Semantic Web is an important and active research area in computer science. Growing research attention to this field can be explained by the opportunities the Semantic Web could provide by representing and reasoning about semantic information. The objective of this thesis is to study the technologies for…

  17. Web Accessibility in Higher Education Institutions of Ecuador: Year 2016

    OpenAIRE

    Nelly Karina Esparza Cruz; Zoila Merino Acosta; Hugo Guerrero Torres

    2016-01-01

    This research was conducted in order to establish the progress made in web accessibility by the pages of higher education institutions, verifying how many of them implement the NTE INEN ISO/IEC 40500 standard, which establishes accessibility guidelines for web content and which was adopted in 2014 to ensure access for people with disabilities to content published on the web. By reviewing the websites of the IES using free online tools, the average application of accessibility standards w...

  18. Web watch

    CERN Multimedia

    Dodson, S

    2002-01-01

    British Telecom is claiming it invented hypertext and has a 1976 US patent to prove it. The company is accusing 17 of the biggest US internet service providers of using its technology without paying a royalty fee (1/2 page).

  19. New Abstract Submission Software System for AGU Meetings

    Science.gov (United States)

    Ward, Joanna

    2009-07-01

    New software for submitting abstracts has been deployed by AGU for the 2009 Fall Meeting. “Abstract Central” is a simplified interface providing a secure, complete method for abstract submission with easy-to-follow steps and a fresh look. A major component of the system will be an itinerary planner, downloadable to mobile devices, to help meeting attendees schedule their time at AGU conferences. Increased access to customer service is a key element that abstract submitters will find especially helpful. A call center, as well as 24-hour Web-based and e-mail technical support, will be available to help members.

  20. Enhancing UCSF Chimera through web services.

    Science.gov (United States)

    Huang, Conrad C; Meng, Elaine C; Morris, John H; Pettersen, Eric F; Ferrin, Thomas E

    2014-07-01

    Integrating access to web services with desktop applications allows for an expanded set of application features, including performing computationally intensive tasks and convenient searches of databases. We describe how we have enhanced UCSF Chimera (http://www.rbvi.ucsf.edu/chimera/), a program for the interactive visualization and analysis of molecular structures and related data, through the addition of several web services (http://www.rbvi.ucsf.edu/chimera/docs/webservices.html). By streamlining access to web services, including the entire job submission, monitoring and retrieval process, Chimera makes it simpler for users to focus on their science projects rather than data manipulation. Chimera uses Opal, a toolkit for wrapping scientific applications as web services, to provide scalable and transparent access to several popular software packages. We illustrate Chimera's use of web services with an example workflow that interleaves use of these services with interactive manipulation of molecular sequences and structures, and we provide an example Python program to demonstrate how easily Opal-based web services can be accessed from within an application. Web server availability: http://webservices.rbvi.ucsf.edu/opal2/dashboard?command=serviceList. © The Author(s) 2014. Published by Oxford University Press on behalf of Nucleic Acids Research.
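The submit/monitor/retrieve workflow that Chimera streamlines follows a generic pattern that can be sketched independently of the real system. All names below are hypothetical stand-ins, not the actual Opal or Chimera API; the transport is injected as callables so the pattern runs without a live server:

```python
import time

def run_job(submit, get_status, get_result, poll_interval=0.0):
    """Generic submit -> poll -> retrieve workflow. The three callables are
    hypothetical stand-ins for real HTTP calls (this is not the Opal API)."""
    job_id = submit()
    while get_status(job_id) != "DONE":
        time.sleep(poll_interval)
    return get_result(job_id)

# Stubbed transport so the pattern can be exercised without a server:
states = iter(["PENDING", "RUNNING", "DONE"])
result = run_job(
    submit=lambda: "job-1",
    get_status=lambda jid: next(states),
    get_result=lambda jid: {"job": jid, "output": "alignment.txt"},
)
```

In a real client the three callables would wrap HTTP requests to the service's submission, status, and result endpoints.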

  1. A Web Server for MACCS Magnetometer Data

    Science.gov (United States)

    Engebretson, Mark J.

    1998-01-01

    NASA Grant NAG5-3719 was provided to Augsburg College to support the development of a web server for the Magnetometer Array for Cusp and Cleft Studies (MACCS), a two-dimensional array of fluxgate magnetometers located at cusp latitudes in Arctic Canada. MACCS was developed as part of the National Science Foundation's GEM (Geospace Environment Modeling) Program, which was designed in part to complement NASA's Global Geospace Science programs during the decade of the 1990s. This report describes the successful use of these grant funds to support a working web page that provides both daily plots and file access to any user accessing the worldwide web. The MACCS home page can be accessed at http://space.augsburg.edu/space/MaccsHome.html.

  2. Demonstration: SpaceExplorer - A Tool for Designing Ubiquitous Web Applications for Collections of Displays

    DEFF Research Database (Denmark)

    Hansen, Thomas Riisgaard

    2007-01-01

    This demonstration presents a simple browser plug-in that grants web applications the ability to use multiple nearby devices for displaying web content. A web page can e.g. be designed to present additional information on nearby devices. The demonstration introduces a lightweight peer-to-peer arc...

  3. Web party effect: a cocktail party effect in the web environment.

    Science.gov (United States)

    Rigutti, Sara; Fantoni, Carlo; Gerbino, Walter

    2015-01-01

    In goal-directed web navigation, labels compete for selection: this process often involves knowledge integration and requires selective attention to manage the dizziness of web layouts. Here we ask whether the competition for selection depends on all web navigation options or only on those options that are more likely to be useful for information seeking, and provide evidence in favor of the latter alternative. Participants in our experiment navigated a representative set of real websites of variable complexity, in order to reach an information goal located two clicks away from the starting home page. The time needed to reach the goal was accounted for by a novel measure of home page complexity based on a part of (not all) web options: the number of links embedded within web navigation elements weighted by the number and type of embedding elements. Our measure fully mediated the effect of several standard complexity metrics (the overall number of links, words, images, graphical regions, the JPEG file size of home page screenshots) on information seeking time and usability ratings. Furthermore, it predicted the cognitive demand of web navigation, as revealed by the duration judgment ratio (i.e., the ratio of subjective to objective duration of information search). Results demonstrate that focusing on relevant links while ignoring other web objects optimizes the deployment of attentional resources necessary to navigation. This is in line with a web party effect (i.e., a cocktail party effect in the web environment): users tune into web elements that are relevant for the achievement of their navigation goals and tune out all others.

  4. Web party effect: a cocktail party effect in the web environment

    Directory of Open Access Journals (Sweden)

    Sara Rigutti

    2015-03-01

    Full Text Available In goal-directed web navigation, labels compete for selection: this process often involves knowledge integration and requires selective attention to manage the dizziness of web layouts. Here we ask whether the competition for selection depends on all web navigation options or only on those options that are more likely to be useful for information seeking, and provide evidence in favor of the latter alternative. Participants in our experiment navigated a representative set of real websites of variable complexity, in order to reach an information goal located two clicks away from the starting home page. The time needed to reach the goal was accounted for by a novel measure of home page complexity based on a part of (not all) web options: the number of links embedded within web navigation elements weighted by the number and type of embedding elements. Our measure fully mediated the effect of several standard complexity metrics (the overall number of links, words, images, graphical regions, the JPEG file size of home page screenshots) on information seeking time and usability ratings. Furthermore, it predicted the cognitive demand of web navigation, as revealed by the duration judgment ratio (i.e., the ratio of subjective to objective duration of information search). Results demonstrate that focusing on relevant links while ignoring other web objects optimizes the deployment of attentional resources necessary to navigation. This is in line with a web party effect (i.e., a cocktail party effect in the web environment): users tune into web elements that are relevant for the achievement of their navigation goals and tune out all others.
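The complexity measure described in these two records counts links weighted by their embedding navigation elements. A much-simplified stand-in using only Python's standard html.parser: the set of navigation tags is assumed, and the paper's per-element-type weights are replaced by plain nesting depth:

```python
from html.parser import HTMLParser

NAV_TAGS = {"nav", "ul", "menu"}  # assumed navigation elements

class NavLinkCounter(HTMLParser):
    """Each link counts once per enclosing navigation element; the paper's
    per-element-type weights are approximated by nesting depth."""
    def __init__(self):
        super().__init__()
        self.depth = 0       # enclosing navigation elements
        self.weighted = 0.0  # weighted link count

    def handle_starttag(self, tag, attrs):
        if tag in NAV_TAGS:
            self.depth += 1
        elif tag == "a" and self.depth:
            self.weighted += self.depth

    def handle_endtag(self, tag):
        if tag in NAV_TAGS and self.depth:
            self.depth -= 1

page = ("<nav><ul><li><a href='/hours'>Hours</a></li>"
        "<li><a href='/contact'>Contact</a></li></ul></nav>"
        "<p>Welcome! <a href='/news'>News</a></p>")
counter = NavLinkCounter()
counter.feed(page)
```

Links inside the nav/ul nesting contribute weight 2 each, while the body-text link contributes nothing, matching the intuition that only navigation options compete for selection.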

  5. African Journal of Infectious Diseases: Submissions

    African Journals Online (AJOL)

    Copying text, photographs, tables or graphics from any source and using it as one's own is considered plagiarism, whether or not a reference to the copied portion is given. Submission Preparation Checklist. As part of the submission process, authors are required to check off their submission's compliance with all of the …

  6. Onco-STS: a web-based laboratory information management system for sample and analysis tracking in oncogenomic experiments.

    Science.gov (United States)

    Gavrielides, Mike; Furney, Simon J; Yates, Tim; Miller, Crispin J; Marais, Richard

    2014-01-01

    Whole genomes, whole exomes and transcriptomes of tumour samples are sequenced routinely to identify the drivers of cancer. The systematic sequencing and analysis of tumour samples, as well other oncogenomic experiments, necessitates the tracking of relevant sample information throughout the investigative process. These meta-data of the sequencing and analysis procedures include information about the samples and projects as well as the sequencing centres, platforms, data locations, results locations, alignments, analysis specifications and further information relevant to the experiments. The current work presents a sample tracking system for oncogenomic studies (Onco-STS) to store these data and make them easily accessible to the researchers who work with the samples. The system is a web application, which includes a database and a front-end web page that allows the remote access, submission and updating of the sample data in the database. The web application development programming framework Grails was used for the development and implementation of the system. The resulting Onco-STS solution is efficient, secure and easy to use and is intended to replace the manual data handling of text records. Onco-STS allows simultaneous remote access to the system making collaboration among researchers more effective. The system stores both information on the samples in oncogenomic studies and details of the analyses conducted on the resulting data. Onco-STS is based on open-source software, is easy to develop and can be modified according to a research group's needs. Hence it is suitable for laboratories that do not require a commercial system.

  7. Multiplex PageRank.

    Directory of Open Access Journals (Sweden)

    Arda Halu

    Full Text Available Many complex systems can be described as multiplex networks in which the same nodes can interact with one another in different layers, thus forming a set of interacting and co-evolving networks. Examples of such multiplex systems are social networks where people are involved in different types of relationships and interact through various forms of communication media. The ranking of nodes in multiplex networks is one of the most pressing and challenging tasks that research on complex networks is currently facing. When pairs of nodes can be connected through multiple links and in multiple layers, the ranking of nodes should necessarily reflect the importance of nodes in one layer as well as their importance in other interdependent layers. In this paper, we draw on the idea of biased random walks to define the Multiplex PageRank centrality measure in which the effects of the interplay between networks on the centrality of nodes are directly taken into account. In particular, depending on the intensity of the interaction between layers, we define the Additive, Multiplicative, Combined, and Neutral versions of Multiplex PageRank, and show how each version reflects the extent to which the importance of a node in one layer affects the importance the node can gain in another layer. We discuss these measures and apply them to an online multiplex social network. Findings indicate that taking the multiplex nature of the network into account helps uncover the emergence of rankings of nodes that differ from the rankings obtained from one single layer. Results provide support in favor of the salience of multiplex centrality measures, like Multiplex PageRank, for assessing the prominence of nodes embedded in multiple interacting networks, and for shedding a new light on structural properties that would otherwise remain undetected if each of the interacting networks were analyzed in isolation.

  8. Multiplex PageRank.

    Science.gov (United States)

    Halu, Arda; Mondragón, Raúl J; Panzarasa, Pietro; Bianconi, Ginestra

    2013-01-01

    Many complex systems can be described as multiplex networks in which the same nodes can interact with one another in different layers, thus forming a set of interacting and co-evolving networks. Examples of such multiplex systems are social networks where people are involved in different types of relationships and interact through various forms of communication media. The ranking of nodes in multiplex networks is one of the most pressing and challenging tasks that research on complex networks is currently facing. When pairs of nodes can be connected through multiple links and in multiple layers, the ranking of nodes should necessarily reflect the importance of nodes in one layer as well as their importance in other interdependent layers. In this paper, we draw on the idea of biased random walks to define the Multiplex PageRank centrality measure in which the effects of the interplay between networks on the centrality of nodes are directly taken into account. In particular, depending on the intensity of the interaction between layers, we define the Additive, Multiplicative, Combined, and Neutral versions of Multiplex PageRank, and show how each version reflects the extent to which the importance of a node in one layer affects the importance the node can gain in another layer. We discuss these measures and apply them to an online multiplex social network. Findings indicate that taking the multiplex nature of the network into account helps uncover the emergence of rankings of nodes that differ from the rankings obtained from one single layer. Results provide support in favor of the salience of multiplex centrality measures, like Multiplex PageRank, for assessing the prominence of nodes embedded in multiple interacting networks, and for shedding a new light on structural properties that would otherwise remain undetected if each of the interacting networks were analyzed in isolation.
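One simplified reading of the biased-walk idea behind these measures: compute PageRank in one layer, then use it as the teleportation distribution of the walk in the other layer, so importance in layer A feeds importance in layer B. This is a sketch of the general mechanism only, not the paper's exact Additive/Multiplicative/Combined/Neutral definitions; the two toy layers are made up:

```python
def pagerank(links, alpha=0.85, teleport=None, iters=100):
    """PageRank with an optional non-uniform teleportation distribution."""
    nodes = list(links)
    n = len(nodes)
    if teleport is None:
        teleport = {u: 1.0 / n for u in nodes}
    rank = dict(teleport)
    for _ in range(iters):
        new = {u: (1.0 - alpha) * teleport[u] for u in nodes}
        for u in nodes:
            out = links[u] or nodes  # dangling nodes spread mass uniformly
            share = alpha * rank[u] / len(out)
            for v in out:
                new[v] += share
        rank = new
    return rank

# Two toy layers over the same three nodes:
layer_a = {"u": ["v"], "v": ["w"], "w": ["u"]}
layer_b = {"u": ["w"], "v": ["w"], "w": ["v"]}

x = pagerank(layer_a)                    # centrality in layer A
biased = pagerank(layer_b, teleport=x)   # layer-B walk biased by layer A
```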

  9. A comprehensive and cost-effective preparticipation exam implemented on the World Wide Web.

    Science.gov (United States)

    Peltz, J E; Haskell, W L; Matheson, G O

    1999-12-01

    Mandatory preparticipation examinations (PPE) are labor intensive, offer little routine health maintenance and are poor predictors of future injury or illness. Our objective was to develop a new PPE for the Stanford University varsity athletes that improved both quality of primary and preventive care and physician time efficiency. This PPE is based on the annual submission, by each athlete, of a comprehensive medical history questionnaire that is then summarized in a two-page report for the examining physician. The questionnaire was developed through a search of MEDLINE from 1966 to 1997, review of PPE from 11 other institutions, and discussion with two experts from each of seven main content areas: medical and musculoskeletal history, eating, menstrual and sleep disorders, stress and health risk behaviors. Content validity was assessed by 10 sports medicine physicians and four epidemiologists. It was then programmed for the World Wide Web (http://www.stanford.edu/dept/sportsmed/). The questionnaire demonstrated a 97 +/- 2% sensitivity in detecting positive responses requiring physician attention. Sixteen physicians administered the 1997/98 PPE; using the summary reports, 15 found improvement in their ability to provide overall medical care including health issues beyond clearance; 13 noted a decrease in time needed for each athlete exam. Over 90% of athletes who used the web site found it "easy" or "moderately easy" to access and complete. Initial assessment of this new PPE format shows good athlete compliance, improved exam efficiency and a strong increase in subjective physician satisfaction with the quality of screening and medical care provided. The data indicate a need for improvement of routine health maintenance in this population. The database offers opportunities to study trends, risk factors, and results of interventions.

  10. White Hat Search Engine Optimization (SEO): Structured Web Data for Libraries

    Directory of Open Access Journals (Sweden)

    Dan Scott

    2015-06-01

    Full Text Available “White hat” search engine optimization refers to the practice of publishing web pages that are useful to humans, while enabling search engines and web applications to better understand the structure and content of your website. This article teaches you to add structured data to your website so that search engines can more easily connect patrons to your library locations, hours, and contact information. A web page for a branch of the Greater Sudbury Public Library retrieved in January 2015 is used as the basis for examples that progressively enhance the page with structured data. Finally, some of the advantages structured data enables beyond search engine optimization are explored.
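The progressive enhancement described in this record typically ends in an embedded JSON-LD block using the real schema.org Library type. A minimal sketch; the branch name, address, hours, and phone number are placeholder values, not the actual Greater Sudbury data:

```python
import json

# Minimal schema.org description of a library branch (placeholder values):
branch = {
    "@context": "https://schema.org",
    "@type": "Library",
    "name": "Example Branch Library",
    "address": {
        "@type": "PostalAddress",
        "addressLocality": "Greater Sudbury",
        "addressCountry": "CA",
    },
    "openingHours": "Mo-Fr 09:00-17:00",
    "telephone": "+1-555-0100",
}

# Embedded as a JSON-LD script block in the page head:
script_tag = ('<script type="application/ld+json">'
              + json.dumps(branch, indent=2)
              + "</script>")
```

Search engines that understand schema.org can then surface the branch's hours and contact details directly in results, which is the advantage the article highlights.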

  11. Nigerian Journal of Clinical and Counselling Psychology: Submissions

    African Journals Online (AJOL)

    Submission Preparation Checklist. As part of the submission process, authors are required to check off their submission's compliance with all of the following items, and submissions may be returned to authors that do not adhere to these guidelines. The submission has not been previously published, nor is it before another ...

  12. Nigerian Food Journal: Submissions

    African Journals Online (AJOL)

    Nigerian Food Journal 25: 130-140. For textbooks, give the name(s) of the author(s) and the title of the book, followed by the publisher, city of publication and the pages referred to, e.g. Iwe, M. O. (2003). The Science and Technology of Soybean, Rojoint Publishers, Enugu, pp. 123-230. For chapters in a book, the names of the author, …

  13. Higher-order web link analysis using multilinear algebra.

    Energy Technology Data Exchange (ETDEWEB)

    Kenny, Joseph P.; Bader, Brett William (Sandia National Laboratories, Albuquerque, NM); Kolda, Tamara Gibson

    2005-07-01

    Linear algebra is a powerful and proven tool in web search. Techniques, such as the PageRank algorithm of Brin and Page and the HITS algorithm of Kleinberg, score web pages based on the principal eigenvector (or singular vector) of a particular non-negative matrix that captures the hyperlink structure of the web graph. We propose and test a new methodology that uses multilinear algebra to elicit more information from a higher-order representation of the hyperlink graph. We start by labeling the edges in our graph with the anchor text of the hyperlinks so that the associated linear algebra representation is a sparse, three-way tensor. The first two dimensions of the tensor represent the web pages while the third dimension adds the anchor text. We then use the rank-1 factors of a multilinear PARAFAC tensor decomposition, which are akin to singular vectors of the SVD, to automatically identify topics in the collection along with the associated authoritative web pages.
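The sparse three-way tensor this record describes (page × page × anchor term) can be held in a dictionary keyed by triples, which is how sparse tensors are commonly fed to decomposition routines. The hyperlinks below are illustrative toy data, not the paper's corpus:

```python
from collections import defaultdict

# Sparse three-way tensor: (source page, target page, anchor term) -> count
tensor = defaultdict(int)

# Illustrative hyperlinks labeled with their anchor text:
hyperlinks = [
    ("home.html", "cern.html", "nuclear research"),
    ("home.html", "www.html", "world wide web"),
    ("news.html", "www.html", "web"),
]
for src, dst, anchor in hyperlinks:
    for term in anchor.split():
        tensor[(src, dst, term)] += 1

# Fixing the third index recovers an ordinary link matrix for one term:
web_slice = {(s, d): c for (s, d, t), c in tensor.items() if t == "web"}
```

A PARAFAC decomposition of such a tensor factors it into rank-1 components whose third-mode vectors group anchor terms into topics, as the abstract describes.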

  14. NOTE FROM WEB COMMUNICATIONS & PUBLIC EDUCATION ETT DIVISION

    CERN Multimedia

    Web Communications team

    2000-01-01

    Dear Web Authors, As you will have read elsewhere, the official name of our Organization is 'European Organization for Nuclear Research'. To this may be added the description 'European Laboratory for Particle Physics', but this last description may not be used alone. The official name 'European Organization for Nuclear Research' must always appear first. Therefore, a number of Web pages have to be modified. We have already modified the top pages of the CERN web site. We have also modified the page banner for use in your own pages. Everyone who has so far used the correct reference to the CERN banner needs to do nothing. Others are requested to correct their pages so as to use the image at http://www.cern.ch/CommonImages/Banners/CERNHeadE.gif or http://www.cern.ch/CommonImages/Banners/CERNHeadF.gif (version in French). All other banners are not official and must be discarded. Best Regards, Web Communications team, ETT Division, web.communications@cern.ch, tel. 72406

  15. Constructing a web recommender system using web usage mining and user’s profiles

    Directory of Open Access Journals (Sweden)

    T. Mombeini

    2014-12-01

    Full Text Available The World Wide Web is a great source of information and is now widely used because useful, dynamically changing information is readily available. However, the large number of web pages often confuses users, and it is hard for them to find information matching their interests. It is therefore necessary to provide a system capable of guiding users towards their desired choices and services. Recommender systems search among a large collection of user interests and recommend those that are likely to be favored the most by the user. Web usage mining operates on web server records, which capture users' search and browsing activity; recommender servers therefore use web usage mining to predict users' browsing patterns and recommend those patterns in the form of a suggestion list. In this article, a recommender system based on the two web usage mining phases (online and offline) is proposed. In the offline phase, the first step is to analyze user access records to identify user sessions. Next, user profiles are built from server records based on the frequency of access to pages, the time spent by the user on each page and the date of each page view. Date matters because users are more likely to request new pages than old ones; old pages are less likely to be viewed, as users mostly look for new information. After the user profiles are created, users are grouped into clusters using the Fuzzy C-means clustering algorithm and the S(c) criterion, based on their similarities. In the online phase, a neural network identifies the recommendation model, while online suggestions for the active user are generated by the suggestion module. Search engines analyze suggestion lists based on the rate of user interest in pages and page rank, and finally suggest appropriate pages to the active user. Experiments show that the proposed method of predicting users' recently requested pages has more accuracy and
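
The clustering step of the offline phase can be illustrated with a compact Fuzzy C-means implementation. This is a generic sketch of the standard algorithm (the article's S(c) criterion for choosing the number of clusters is not reproduced); the user-profile vectors and the naive center initialization are assumptions made for the example.

```python
import numpy as np

def fuzzy_c_means(X, c=2, m=2.0, iters=100):
    """Fuzzy C-means over profile vectors; returns memberships U and centers."""
    centers = X[::max(1, len(X) // c)][:c].copy()  # naive spread-out init
    for _ in range(iters):
        # Squared distance from every profile to every cluster center.
        d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2) + 1e-12
        # Standard FCM membership update; rows of U sum to 1.
        inv = (1.0 / d2) ** (1.0 / (m - 1))
        U = inv / inv.sum(axis=1, keepdims=True)
        # Centers are membership-weighted means of the profiles.
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
    return U, centers

# Hypothetical user profiles: rows are users, columns are weighted page visits.
X = np.array([[1.0, 0.1], [0.9, 0.2], [0.1, 1.0], [0.2, 0.9]])
U, centers = fuzzy_c_means(X)
labels = U.argmax(axis=1)
print(labels)  # users 0,1 share one cluster; users 2,3 the other
```

Unlike hard k-means, each user keeps a graded membership in every cluster, which suits profiles that mix several browsing interests.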

  16. Analysis of Web Spam for Non-English Content: Toward More Effective Language-Based Classifiers.

    Directory of Open Access Journals (Sweden)

    Mansour Alsaleh

    Full Text Available Web spammers aim to obtain higher ranks for their web pages by including spam contents that deceive search engines in order to include their pages in search results even when they are not related to the search terms. Search engines continue to develop new web spam detection mechanisms, but spammers also aim to improve their tools to evade detection. In this study, we first explore the effect of the page language on spam detection features and we demonstrate how the best set of detection features varies according to the page language. We also study the performance of Google Penguin, a newly developed anti-web spamming technique for their search engine. Using spam pages in Arabic as a case study, we show that unlike similar English pages, Google anti-spamming techniques are ineffective against a high proportion of Arabic spam pages. We then explore multiple detection features for spam pages to identify an appropriate set of features that yields a high detection accuracy compared with the integrated Google Penguin technique. In order to build and evaluate our classifier, as well as to help researchers to conduct consistent measurement studies, we collected and manually labeled a corpus of Arabic web pages, including both benign and spam pages. Furthermore, we developed a browser plug-in that utilizes our classifier to warn users about spam pages after clicking on a URL and by filtering out search engine results. Using Google Penguin as a benchmark, we provide an illustrative example to show that language-based web spam classifiers are more effective for capturing spam contents.

  17. Analysis of Web Spam for Non-English Content: Toward More Effective Language-Based Classifiers.

    Science.gov (United States)

    Alsaleh, Mansour; Alarifi, Abdulrahman

    2016-01-01

    Web spammers aim to obtain higher ranks for their web pages by including spam contents that deceive search engines in order to include their pages in search results even when they are not related to the search terms. Search engines continue to develop new web spam detection mechanisms, but spammers also aim to improve their tools to evade detection. In this study, we first explore the effect of the page language on spam detection features and we demonstrate how the best set of detection features varies according to the page language. We also study the performance of Google Penguin, a newly developed anti-web spamming technique for their search engine. Using spam pages in Arabic as a case study, we show that unlike similar English pages, Google anti-spamming techniques are ineffective against a high proportion of Arabic spam pages. We then explore multiple detection features for spam pages to identify an appropriate set of features that yields a high detection accuracy compared with the integrated Google Penguin technique. In order to build and evaluate our classifier, as well as to help researchers to conduct consistent measurement studies, we collected and manually labeled a corpus of Arabic web pages, including both benign and spam pages. Furthermore, we developed a browser plug-in that utilizes our classifier to warn users about spam pages after clicking on a URL and by filtering out search engine results. Using Google Penguin as a benchmark, we provide an illustrative example to show that language-based web spam classifiers are more effective for capturing spam contents.
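
The abstracts above do not disclose the detection features used; purely as an illustration of a language-based content classifier, the following is a minimal bag-of-words naive Bayes spam/ham model with add-one smoothing. The training pages and vocabulary are invented for the example and are unrelated to the authors' Arabic corpus.

```python
import math
from collections import Counter, defaultdict

def train(docs):
    """Multinomial naive Bayes with add-one smoothing over (text, label) pairs."""
    counts, ndocs = defaultdict(Counter), Counter()
    for text, label in docs:
        ndocs[label] += 1
        counts[label].update(text.lower().split())
    vocab = set().union(*counts.values())
    model = {}
    for label, c in counts.items():
        total = sum(c.values())
        model[label] = {
            "prior": math.log(ndocs[label] / len(docs)),
            "like": {w: math.log((c[w] + 1) / (total + len(vocab))) for w in vocab},
            "unk": math.log(1 / (total + len(vocab))),  # unseen-word fallback
        }
    return model

def classify(model, text):
    def score(label):
        m = model[label]
        return m["prior"] + sum(m["like"].get(w, m["unk"]) for w in text.lower().split())
    return max(model, key=score)

# Toy training pages: keyword-stuffed spam vs. benign text (hypothetical).
docs = [
    ("cheap pills buy now free offer cheap", "spam"),
    ("win money free prize click now", "spam"),
    ("university library opening hours announced", "ham"),
    ("new research article on food science published", "ham"),
]
model = train(docs)
print(classify(model, "free pills offer now"))      # → spam
print(classify(model, "library research article"))  # → ham
```

The point made in the study carries over directly: such a model is only as good as the corpus it is trained on, so a classifier trained on English pages transfers poorly to Arabic spam.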

  18. Young Children's Ability to Recognize Advertisements in Web Page Designs

    Science.gov (United States)

    Ali, Moondore; Blades, Mark; Oates, Caroline; Blumberg, Fran

    2009-01-01

    Identifying what is, and what is not an advertisement is the first step in realizing that an advertisement is a marketing message. Children can distinguish television advertisements from programmes by about 5 years of age. Although previous researchers have investigated television advertising, little attention has been given to advertisements in…

  19. Web Page Design Programs (article in Arabic) ...

    African Journals Online (AJOL)

    to communities. This is why ten programs were tested from a beginner's perspective, including HTML, JavaScript and others. The tests made it possible to identify the advantages and disadvantages of each of the programs cited.

  20. A novel visualization model for web search results.

    Science.gov (United States)

    Nguyen, Tien N; Zhang, Jin

    2006-01-01

    This paper presents an interactive visualization system, named WebSearchViz, for visualizing Web search results and facilitating users' navigation and exploration. The metaphor in our model is the solar system, with its planets and asteroids revolving around the sun. Location, color, movement, and spatial distance of objects in the visual space are used to represent the semantic relationships between a query and relevant Web pages. In particular, the movement of objects and their speeds add a new dimension to the visual space, illustrating the degree of relevance between a query and Web search results in the context of users' subjects of interest. By interacting with the visual space, users are able to observe the semantic relevance between a query and a resulting Web page with respect to their subjects of interest, context information, or concerns. Users' subjects of interest can be dynamically changed, redefined, added, or deleted from the visual space.

  1. The RDF Generator (RDFG) - First Unit in the Semantic Web Framework (SWF)

    Science.gov (United States)

    Nada, Ahmed; Sartawi, Badie

    The Resources Description Framework Generator (RDFG) is a platform that generates RDF documents from any web page, using predefined models for each internet domain and a special web-page classification system. The RDFG is one of the SWF units aimed at standardizing researchers' efforts in the Semantic Web by classifying internet sites into domains and preparing a special RDF model for each domain. RDFG uses intelligent web methods for preparing RDF documents, such as an ontology-based semantic matching system to detect the type of web page and a knowledge-base machine-learning system to create the RDF documents accurately and according to the standard models. RDFG reduces the complexity of RDF modeling and facilitates the creation, sharing and reuse of web entities.
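
RDFG's domain models are not published; purely as an illustration of mapping extracted page metadata to RDF, the following emits a couple of Dublin Core triples in Turtle. The page metadata, URL and choice of vocabulary are assumptions made for the example.

```python
# Hypothetical metadata extracted from a page in a "news" domain.
page = {
    "url": "http://example.org/article/42",
    "title": "Sample Article",
    "author": "A. Nada",
}

def to_turtle(meta):
    """Serialize page metadata as RDF triples in Turtle, using Dublin Core terms."""
    lines = ["@prefix dc: <http://purl.org/dc/terms/> ."]
    subject = f"<{meta['url']}>"
    lines.append(f'{subject} dc:title "{meta["title"]}" ;')
    lines.append(f'    dc:creator "{meta["author"]}" .')
    return "\n".join(lines)

print(to_turtle(page))
```

A real generator would, as the abstract describes, first classify the page into a domain and then pick the RDF model (vocabulary and triple patterns) for that domain before serializing.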

  2. Evaluating Web accessibility at different processing phases

    Science.gov (United States)

    Fernandes, N.; Lopes, R.; Carriço, L.

    2012-09-01

    Modern Web sites use several techniques (e.g. DOM manipulation, AJAX) that allow for the injection of new content into their Web pages, as well as manipulation of the HTML DOM tree. As a consequence, the Web pages that are presented to users (i.e. after browser processing) differ from the original structure and content transmitted through HTTP communication (i.e. before browser processing). This poses a series of challenges for Web accessibility evaluation, especially for automated evaluation software. This article details an experimental study designed to understand the differences posed by accessibility evaluation after Web browser processing. We implemented a JavaScript-based evaluator, QualWeb, that can perform WCAG 2.0 based accessibility evaluations in the two phases of browser processing. Our study shows that, in fact, there are considerable differences between the HTML DOM trees in both phases, which result in distinct evaluation results. We discuss the impact of these results in the light of the potential problems that these differences can pose to designers and developers who use accessibility evaluators that function before browser processing.

  3. #NoMorePage3

    DEFF Research Database (Denmark)

    Glozer, Sarah; McCarthy, Lauren; Whelan, Glen

    2015-01-01

    Fourth-wave feminists are currently seeking to bring an end to The Sun's Page 3, a British institution infamous for featuring a topless female model daily. This paper investigates the No More Page 3 (NMP3) campaign, through which feminist activists have sought to disrupt the institutionalized obje...

  4. TDCCREC: AN EFFICIENT AND SCALABLE WEB-BASED RECOMMENDATION SYSTEM

    Directory of Open Access Journals (Sweden)

    K.Latha

    2010-10-01

    Full Text Available Web users are faced with a complex information space where the volume of information available to them is huge. Recommender systems address this by recommending web pages related to the current page, providing the user with further customized reading material. To enhance the performance of recommender systems, we propose an elegant web-based recommendation system, the Truth Discovery based Content and Collaborative RECommender (TDCCREC), which is capable of addressing scalability. Existing approaches such as learning automata deal with users' usage and navigational patterns. On the other hand, Weighted Association Rule is applied for recommending web pages by assigning weights to each page in all the transactions. Both have their own disadvantages. The websites recommended by search engines carry no guarantee of information correctness and often deliver conflicting information. To solve these problems, content-based filtering and collaborative filtering techniques are introduced for recommending web pages to the active user, along with the trustworthiness of the website and the confidence of facts, which outperforms the existing methods. Our results show that the proposed recommender system performs better in predicting the next request of web users.
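
The weighted-association idea mentioned above can be sketched as a simple session-based recommender that scores co-occurring pages by a per-page weight. The sessions, weights and scoring rule are invented for the example; TDCCREC's truth-discovery and trustworthiness components are not reproduced here.

```python
from collections import defaultdict

# Hypothetical user sessions (page visits) and per-page weights, e.g. derived
# from the time users spend on each page.
sessions = [
    ["home", "news", "sports"],
    ["home", "news", "weather"],
    ["home", "sports"],
]
weight = {"home": 0.5, "news": 1.0, "sports": 0.8, "weather": 0.6}

def recommend(current_page, sessions, weight):
    """Rank pages by weighted co-occurrence with current_page across sessions."""
    score = defaultdict(float)
    for s in sessions:
        if current_page in s:
            for p in s:
                if p != current_page:
                    score[p] += weight.get(p, 0.0)
    return sorted(score, key=score.get, reverse=True)

print(recommend("news", sessions, weight))  # → ['home', 'sports', 'weather']
```

A full system would combine such association scores with content similarity and collaborative signals, as the abstract outlines.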

  5. Advanced express web application development

    CERN Document Server

    Keig, Andrew

    2013-01-01

    A practical book, guiding the reader through the development of a single page application using a feature-driven approach. If you are an experienced JavaScript developer who wants to build highly scalable, real-world applications using Express, this book is ideal for you. This book is an advanced title and assumes that the reader has some experience with Node, JavaScript MVC web development frameworks, and has heard of Express before or is familiar with it. You should also have a basic understanding of Redis and MongoDB. This book is not a tutorial on Node, but aims to explore some of the more

  6. Review Pages: Cities, Energy and Built Environment

    Directory of Open Access Journals (Sweden)

    Gennaro Angiello

    2015-07-01

    Full Text Available Starting from the relationship between urban planning and mobility management, TeMA has gradually expanded the range of topics it covers, always remaining in the groove of rigorous, in-depth scientific analysis. Over the last two years particular attention has been paid to the Smart Cities theme and the different meanings that come with it. The last section of the journal is formed by the Review Pages. They have different aims: to inform on problems, trends and evolutionary processes; to investigate paths by highlighting the advanced relationships among apparently distant disciplinary fields; to explore areas of interaction, experiences and potential applications; and to underline interactions and disciplinary developments but also, where present, defeats and setbacks. Inside the journal, the Review Pages have the task of stimulating as much as possible the circulation of ideas and the discovery of new points of view. For this reason the section is founded on a series of basic references, required for the identification of new and more advanced interactions. These references are the research, the planning acts, the actions and the applications, analysed and investigated both for their ability to give a systematic response to questions concerning urban and territorial planning, and for their attention to aspects such as environmental sustainability and innovation in practice. For this purpose the Review Pages are formed by five sections (Web Resources; Books; Laws; Urban Practices; News and Events), each of which examines a specific aspect of the broader information storage of interest for TeMA.

  7. Review Pages: Cities, Energy and Mobility

    Directory of Open Access Journals (Sweden)

    Gennaro Angiello

    2015-12-01

    Full Text Available Starting from the relationship between urban planning and mobility management, TeMA has gradually expanded the range of topics it covers, always remaining in the groove of rigorous, in-depth scientific analysis. Over the last two years particular attention has been paid to the Smart Cities theme and the different meanings that come with it. The last section of the journal is formed by the Review Pages. They have different aims: to inform on problems, trends and evolutionary processes; to investigate paths by highlighting the advanced relationships among apparently distant disciplinary fields; to explore areas of interaction, experiences and potential applications; and to underline interactions and disciplinary developments but also, where present, defeats and setbacks. Inside the journal, the Review Pages have the task of stimulating as much as possible the circulation of ideas and the discovery of new points of view. For this reason the section is founded on a series of basic references, required for the identification of new and more advanced interactions. These references are the research, the planning acts, the actions and the applications, analysed and investigated both for their ability to give a systematic response to questions concerning urban and territorial planning, and for their attention to aspects such as environmental sustainability and innovation in practice. For this purpose the Review Pages are formed by five sections (Web Resources; Books; Laws; Urban Practices; News and Events), each of which examines a specific aspect of the broader information storage of interest for TeMA.

  8. Deep web

    OpenAIRE

    Bago, Neven

    2016-01-01

    The aim of the final thesis "Deep Web" is to learn what it fundamentally is and how widespread it is. Using the TOR program, one accesses the "hidden" part of the internet known as the Deep Web. The thesis describes the process of accessing the Deep Web using this program and lists all of its capabilities and advantages over other web browsers. It also examines the BitCoin currency, which is used in online transactions for the anonymity it provides. The aim of this work is to show to what extent ...

  9. Pro single page application development using Backbone.js and ASP.NET

    CERN Document Server

    Fink, Gil

    2014-01-01

    One of the most important and exciting trends in web development in recent years is the move towards single page applications, or SPAs. Instead of clicking through hyperlinks and waiting for each page to load, the user loads a site once and all the interactivity is handled fluidly by a rich JavaScript front end. If you come from a background in ASP.NET development, you'll be used to handling most interactions on the server side. Pro Single Page Application Development will guide you through your transition to this powerful new application type.The book starts in Part I by laying the groundwork

  10. Mise en page et mise en texte avec les feuilles de style CSS

    OpenAIRE

    Buquet, Thierry

    2004-01-01

    http://lemo.irht.cnrs.fr/43/mo43-13.htm; International audience; After ten years of history, CSS style sheets have now established themselves as an essential standard for page layout and text presentation on the web. Despite sometimes faulty support in web browsers, their use brings many advantages: separation of content from presentation, precise control of typography approaching the usual standards of scientific publications, personalized page layout...

  11. WebWise 2.0: The Power of Community. WebWise Conference on Libraries and Museums in the Digital World Proceedings (9th, Miami Beach, Florida, March 5-7, 2008)

    Science.gov (United States)

    Green, David

    2009-01-01

    Since it was coined by Tim O'Reilly in formulating the first Web 2.0 Conference in 2004, the term "Web 2.0" has definitely caught on as a designation of a second generation of Web design and experience that emphasizes a high degree of interaction with, and among, users. Rather than simply consulting and reading Web pages, the Web 2.0 generation is…

  12. Annales des Sciences Agronomiques: Submissions

    African Journals Online (AJOL)

    However, a copy of the typescript may accompany the three (3) required copies. 4. Manuscripts will be divided into several parts on separate pages: a) Page 1, the title (20 words maximum). This page must clearly indicate: * the title of the article: subject, taxon if applicable, with the scientific names without ...

  13. PSRS : A web-Based Paper Submission and Reviewing system

    African Journals Online (AJOL)

    Segun Fatumo

    Conference planning, organization and administration are very tedious tasks. In most cases the conference programme committee has to convene several meetings where submitted papers (in most cases received via email) are downloaded, discussed and accepted or rejected for presentation at the conference. This paper ...

  14. World Wide Web archive for electric power engineering education

    Energy Technology Data Exchange (ETDEWEB)

    Doulai, P. [Wollongong Univ., NSW (Australia)

    1995-12-31

    In recent years, the Internet and its new glamour, the World Wide Web (WWW, or Web for short), have introduced radical changes in the direction of knowledge and information dissemination world-wide. The Department of Electrical and Computer Engineering at the University of Wollongong has recently set up a WWW public domain clearinghouse (Home Page, URL http://www.uow.edu.au/public/pwrsysed/homepage.html) supporting and promoting the use of computers in engineering education in general and electric power education in particular. This paper provides a brief introduction to the Home Page and its five major nodes (sub-pages). It also provides some background information about the World Wide Web and its potential usage for engineering education. (author). 3 figs., 3 refs.

  15. Semantic Indexing of Web Documents Based on Domain Ontology

    OpenAIRE

    Abdeslem DENNAI; Sidi Mohammed BENSLIMANE

    2015-01-01

    The first phase of reverse engineering of web-oriented applications is the extraction of concepts hidden in HTML pages, including tables, lists and forms, or marked up in XML documents. In this paper, we present an approach to semantically index these two sources of information (HTML pages and XML documents) using, on the one hand, domain ontology to validate the extracted concepts and, on the other hand, the similarity measure between ontology concepts, with the aim of enriching the index. This a...

  16. Web Sitings.

    Science.gov (United States)

    Lo, Erika

    2001-01-01

    Presents seven mathematics games, located on the World Wide Web, for elementary students, including: Absurd Math: Pre-Algebra from Another Dimension; The Little Animals Activity Centre; MathDork Game Room (classic video games focusing on algebra); Lemonade Stand (students practice math and business skills); Math Cats (teaches the artistic beauty…

  17. Fiber webs

    Science.gov (United States)

    Roger M. Rowell; James S. Han; Von L. Byrd

    2005-01-01

    Wood fibers can be used to produce a wide variety of low-density three-dimensional webs, mats, and fiber-molded products. Short wood fibers blended with long fibers can be formed into flexible fiber mats, which can be made by physical entanglement, nonwoven needling, or thermoplastic fiber melt matrix technologies. The most common types of flexible mats are carded, air...

  18. La construcción de sociedades a través de plataformas web.

    OpenAIRE

    Calvopiña Recalde, Joseph Fabricio; Moreno Barreto, Jorge Luis

    2016-01-01

    The object of this article is to provide knowledge about the network society and digital language, which promote new terms like communicology, cybercultures and social engineering arising from the emergence of the web. These terms have been spoken, understood and valued by netizens to understand and explain the complex universe that is called the internet. In deep navigation, internet web containers exist ranging from blogs, web pages and personal files found on social networks, to web working en...

  19. 36 CFR 1194.22 - Web-based intranet and internet information and applications.

    Science.gov (United States)

    2010-07-01

    ... Checkpoints of the Web Content Accessibility Guidelines 1.0 (WCAG 1.0) (May 5, 1999) published by the Web Accessibility Initiative of the World Wide Web Consortium: Section 1194.22 paragraph WCAG 1.0 checkpoint (a) 1.1...), (m), (n), (o), and (p) of this section are different from WCAG 1.0. Web pages that conform to WCAG 1...

  20. Sustainable Materials Management (SMM) Web Academy Webinar: Managing Wasted Food with Anaerobic Digestion: Incentives and Innovations

    Science.gov (United States)

    This is a webinar page for the Sustainable Management of Materials (SMM) Web Academy webinar titled Let’s WRAP (Wrap Recycling Action Program): Best Practices to Boost Plastic Film Recycling in Your Community

  1. Sustainable Materials Management (SMM) Web Academy Webinar: Food Waste Reduction Alliance, a Unique Industry Collaboration

    Science.gov (United States)

    This is a webinar page for the Sustainable Management of Materials (SMM) Web Academy webinar titled Let’s WRAP (Wrap Recycling Action Program): Best Practices to Boost Plastic Film Recycling in Your Community

  2. Sustainable Materials Management (SMM) Web Academy Webinar: Reducing Wasted Food: How Packaging Can Help

    Science.gov (United States)

    This is a webinar page for the Sustainable Management of Materials (SMM) Web Academy webinar titled Let’s WRAP (Wrap Recycling Action Program): Best Practices to Boost Plastic Film Recycling in Your Community

  3. Sustainable Materials Management (SMM) Web Academy Webinar: The Changing Waste Stream

    Science.gov (United States)

    This is a webinar page for the Sustainable Management of Materials (SMM) Web Academy webinar titled Let’s WRAP (Wrap Recycling Action Program): Best Practices to Boost Plastic Film Recycling in Your Community

  4. Né à Genève, le Web embobine la planète

    CERN Multimedia

    Broute, Anne-Muriel

    2009-01-01

    Today, about one person in six on Earth is connected to the Web. Twenty years ago, there were only two: the English computer scientist Tim Berners-Lee and the Belgian engineer Robert Cailliau. (1.5 pages)

  5. Web révolution; il y a 17 ans, qui le connaissait?

    CERN Multimedia

    2007-01-01

    When Tim Berners-Lee, a CERN scientist, invented the World Wide Web in 1989, the aim was to enable automatic sharing of information among scientists working in universities and institutes around the world. (2 pages)

  6. Flink: Semantic Web technology for the extraction and analysis of social networks

    NARCIS (Netherlands)

    Mika, P.

    2005-01-01

    We present the Flink system for the extraction, aggregation and visualization of online social networks. Flink employs semantic technology for reasoning with personal information extracted from a number of electronic information sources including web pages, emails, publication archives and FOAF

  7. Experimental development based on mapping rule between requirements analysis model and web framework specific design model

    National Research Council Canada - National Science Library

    Okuda, Hirotaka; Ogata, Shinpei; Matsuura, Saeko

    2013-01-01

    ...). The main feature of our method is to automatically generate a Web user interface prototype from UML requirements analysis model so that we can confirm validity of input/output data for each page...

  8. BibSword Implementation of SWORD client in Invenio for the automated submission of digital objects to arXiv

    CERN Document Server

    Barras, Mathieu; Abou Khaled, Omar; Mugellini, Elena

    2010-01-01

    arXiv recently introduced a new submission interface implementing SWORD, a new protocol that defines a simple way to deposit digital documents in web repositories. Most digital documents submitted to Invenio are also deposited on arXiv. For this reason, it is a relevant added value for Invenio to offer such an automated "Forward to arXiv" option to its users. The aim of this project is therefore to analyse, design and implement a library-oriented SWORD client module (BibSword). This module will offer a user interface and will be integrated into the Invenio submission process.

  9. Automatic web filtering approach based on multimodal content information

    Science.gov (United States)

    Ming, Wei H.; Rossi, Lorenzo; Li, Ying; Kuo, C.-C. Jay

    2001-07-01

    An automatic web content classification system is proposed in this research for web information filtering. A sample group of web contents is first collected via commercial search engines. They are then classified into different subject groups, and more related web pages can be searched for further analysis. This frees users from the troublesome, routine process performed manually in most search engines, and the clustered information can be updated automatically at any specified time. Preliminary experimental results demonstrate the effectiveness of the proposed system.

  10. Characteristics of food industry web sites and "advergames" targeting children.

    Science.gov (United States)

    Culp, Jennifer; Bell, Robert A; Cassady, Diana

    2010-01-01

    To assess the content of food industry Web sites targeting children by describing strategies used to prolong their visits and foster brand loyalty; and to document health-promoting messages on these Web sites. A content analysis was conducted of Web sites advertised on 2 children's networks, Cartoon Network and Nickelodeon. A total of 290 Web pages and 247 unique games on 19 Internet sites were examined. Games, found on 81% of Web sites, were the most predominant promotion strategy used. All games had at least 1 brand identifier, with logos being most frequently used. On average Web sites contained 1 "healthful" message for every 45 exposures to brand identifiers. Food companies use Web sites to extend their television advertising to promote brand loyalty among children. These sites almost exclusively promoted food items high in sugar and fat. Health professionals need to monitor food industry marketing practices used in "new media." Published by Elsevier Inc.

  11. A grammar checker based on web searching

    Directory of Open Access Journals (Sweden)

    Joaquim Moré

    2006-05-01

    Full Text Available This paper presents an English grammar and style checker for non-native English speakers. The main characteristic of this checker is the use of an Internet search engine. As the number of web pages written in English is immense, the system hypothesises that a piece of text not found on the Web is probably badly written. The system also hypothesises that the Web will provide examples of how the content of the text segment can be expressed in a grammatically correct and idiomatic way. Thus, when the checker warns the user about the odd nature of a text segment, the Internet engine searches for contexts that can help the user decide whether he/she should correct the segment or not. By means of a search engine, the checker also suggests use of other expressions that appear on the Web more often than the expression he/she actually wrote.
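
The checker's core heuristic, comparing web hit counts for a text segment, can be sketched as follows. Since querying a live search engine is out of scope here, `hit_count` is a stand-in backed by a tiny mock corpus; a real system would call a search engine API instead, and the corpus and threshold are assumptions for the example.

```python
# Mock "Web": a handful of documents standing in for search-engine results.
MOCK_CORPUS = [
    "i am interested in this topic",
    "she is interested in music",
    "they are interested in science",
]

def hit_count(phrase):
    """Stand-in for a search-engine hit count: occurrences in the mock corpus."""
    return sum(doc.count(phrase) for doc in MOCK_CORPUS)

def check_segment(segment, threshold=1):
    """Accept a segment when the 'Web' returns enough matches; flag it otherwise."""
    return hit_count(segment) >= threshold

print(check_segment("interested in"))   # common phrasing → True
print(check_segment("interested on"))   # likely non-idiomatic → False
```

As in the paper, a flagged segment would then trigger a second search for similar contexts, letting the user judge whether a correction is needed.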

  12. Discovery and Selection of Semantic Web Services

    CERN Document Server

    Wang, Xia

    2013-01-01

    For advanced web search engines to be able not only to search for semantically related information dispersed over different web pages, but also for semantic services providing certain functionalities, discovering semantic services is the key issue. Addressing four problems of current solutions, this book presents the following contributions. A novel service model independent of semantic service description models is proposed, which clearly defines all elements necessary for service discovery and selection. It focuses on service selection and improves efficiency. Corresponding selection algorithms and their implementation as components of the extended Semantically Enabled Service-oriented Architecture in the Web Service Modeling Environment are detailed. Many applications of semantic web services, e.g. discovery, composition and mediation, can benefit from a general approach for building application ontologies. With application ontologies thus built, services are discovered in the same way as with single...

  13. Human dynamics revealed through Web analytics

    Science.gov (United States)

    Gonçalves, Bruno; Ramasco, José J.

    2008-08-01

    The increasing ubiquity of Internet access and the frequency with which people interact with it raise the possibility of using the Web to better observe, understand, and monitor several aspects of human social behavior. Web sites with large numbers of frequently returning users are ideal for this task. If these sites belong to companies or universities, their usage patterns can furnish information about the working habits of entire populations. In this work, we analyze the properly anonymized logs detailing the access history to Emory University’s Web site. Emory is a medium-sized university located in Atlanta, Georgia. We find interesting structure in the activity patterns of the domain and study in a systematic way the main forces behind the dynamics of the traffic. In particular, we find that linear preferential linking, priority-based queuing, and the decay of interest for the contents of the pages are the essential ingredients to understand the way users navigate the Web.

  14. WEB BASED LEARNING OF COMPUTER NETWORK COURSE

    Directory of Open Access Journals (Sweden)

    Hakan KAPTAN

    2004-04-01

    Full Text Available As a result of developments in the Internet and computer fields, web-based education has become an area in which many improvement and research studies are carried out. In this study, web-based education materials are described for a multimedia animation- and simulation-aided Computer Networks course in Technical Education Faculties. The course content is drawn from university course books, web-based education materials and the technology web pages of companies. The content is presented as texts, pictures and figures to increase student motivation, and the learning of some topics is facilitated by animations. Furthermore, simulators for routing algorithms and congestion control algorithms are constructed so that their working principles can be learned interactively

  15. Interactive Visualization and Navigation of Web Search Results Revealing Community Structures and Bridges

    OpenAIRE

    Sallaberry, Arnaud; Zaidi, Faraz; Pich, C.; Melançon, Guy

    2010-01-01

    International audience; With the information overload on the Internet, organization and visualization of web search results so as to facilitate faster access to information is a necessity. The classical methods present search results as an ordered list of web pages ranked in terms of relevance to the searched topic. Users thus have to scan text snippets or navigate through various pages before finding the required information. In this paper we present an interactive visualization system for c...

  16. The poor quality of information about laparoscopy on the World Wide Web as indexed by popular search engines.

    Science.gov (United States)

    Allen, J W; Finch, R J; Coleman, M G; Nathanson, L K; O'Rourke, N A; Fielding, G A

    2002-01-01

    This study was undertaken to determine the quality of information on the Internet regarding laparoscopy. Four popular World Wide Web search engines were used with the key word "laparoscopy." Advertisements, patient- or physician-directed information, and controversial material were noted. A total of 14,030 Web pages were found, but only 104 were unique Web sites. The majority of the sites were duplicate pages, subpages within a main Web page, or dead links. Twenty-eight of the 104 pages had a medical product for sale, 26 were patient-directed, 23 were written by a physician or group of physicians, and six represented corporations. The remaining 21 were "miscellaneous." The 46 pages containing educational material were critically reviewed. At least one of the senior authors found that 32 of the pages contained controversial or misleading statements. All of the three senior authors (LKN, NAO, GAF) independently agreed that 17 of the 46 pages contained controversial information. The World Wide Web is not a reliable source for patient or physician information about laparoscopy. Authenticating medical information on the World Wide Web is a difficult task, and no government or surgical society has taken the lead in regulating what is presented as fact on the World Wide Web.

  17. A Study of HTML Title Tag Creation Behavior of Academic Web Sites

    Science.gov (United States)

    Noruzi, Alireza

    2007-01-01

    The HTML title tag information should identify and describe exactly what a Web page contains. This paper analyzes the "Title element" and raises a significant question: "Why is the title tag important?" Search engines base search results and page rankings on certain criteria. Among the most important criteria is the presence of the search keywords…
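
The title element the study analyzes is easy to inspect programmatically. A minimal sketch using Python's standard `html.parser`; the class name and sample page are invented for illustration.

```python
from html.parser import HTMLParser

class TitleExtractor(HTMLParser):
    """Collect the text of the first <title> element on a page."""
    def __init__(self):
        super().__init__()
        self.in_title = False
        self.title = ""
    def handle_starttag(self, tag, attrs):
        # Only capture the first title; ignore any later ones.
        if tag == "title" and not self.title:
            self.in_title = True
    def handle_data(self, data):
        if self.in_title:
            self.title += data
    def handle_endtag(self, tag):
        if tag == "title":
            self.in_title = False

page = "<html><head><title>Academic Web Site - Home</title></head><body></body></html>"
parser = TitleExtractor()
parser.feed(page)
print(parser.title)  # Academic Web Site - Home
```

This is the same string a search engine indexes and displays as the result heading, which is why the tag's wording matters for ranking.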

  18. How To Do Field Searching in Web Search Engines: A Field Trip.

    Science.gov (United States)

    Hock, Ran

    1998-01-01

    Describes the field search capabilities of selected Web search engines (AltaVista, HotBot, Infoseek, Lycos, Yahoo!) and includes a chart outlining what fields (date, title, URL, images, audio, video, links, page depth) are searchable, where to go on the page to search them, the syntax required (if any), and how field search queries are entered.…

  19. Interfacing Media: User-Centered Design for Media-Rich Web Sites.

    Science.gov (United States)

    Horton, Sarah

    2000-01-01

    Discusses multimedia Web site design that may include images, animations, audio, and video. Highlights include interfaces that stress user-centered design; using only relevant media; placing high-demand content on secondary pages and keeping the home page simpler; providing information about the media; considering users with disabilities; and user…

  20. Web Components and the Semantic Web

    OpenAIRE

    Casey, Máire; Pahl, Claus

    2003-01-01

    Component-based software engineering on the Web differs from traditional component and software engineering. We investigate Web component engineering activities that are crucial for the development, composition, and deployment of components on the Web. The current Web Services and Semantic Web initiatives strongly influence our work. Focussing on Web component composition we develop description and reasoning techniques that support a component developer in the composition activities, focussing...

  1. Bringing Control System User Interfaces to the Web

    Energy Technology Data Exchange (ETDEWEB)

    Chen, Xihui [ORNL; Kasemir, Kay [ORNL

    2013-01-01

    With the evolution of web based technologies, especially HTML5 [1], it becomes possible to create web-based control system user interfaces (UI) that are cross-browser and cross-device compatible. This article describes two technologies that facilitate this goal. The first one is the WebOPI [2], which can seamlessly display CSS BOY [3] Operator Interfaces (OPI) in web browsers without modification to the original OPI file. The WebOPI leverages the powerful graphical editing capabilities of BOY and provides the convenience of re-using existing OPI files. On the other hand, it uses generic JavaScript and a generic communication mechanism between the web browser and web server. It is not optimized for a control system, which results in unnecessary network traffic and resource usage. Our second technology is the WebSocket-based Process Data Access (WebPDA) [4]. It is a protocol that provides efficient control system data communication using WebSocket [5], so that users can create web-based control system UIs using standard web page technologies such as HTML, CSS and JavaScript. WebPDA is control system independent, potentially supporting any type of control system.

  2. Medium-sized Universities Connect to Their Libraries: Links on University Home Pages and User Group Pages

    Directory of Open Access Journals (Sweden)

    Pamela Harpel-Burk

    2006-03-01

    Full Text Available From major tasks—such as recruitment of new students and staff—to the more mundane but equally important tasks—such as providing directions to campus—college and university Web sites perform a wide range of tasks for a varied assortment of users. Overlapping functions and user needs meld to create the need for a Web site with three major functions: promotion and marketing, access to online services, and providing a means of communication between individuals and groups. In turn, college and university Web sites that provide links to their library home page can be valuable assets for recruitment, public relations, and for helping users locate online services.

  3. Journal of Business Research: Submissions

    African Journals Online (AJOL)

    Author Guidelines. GUIDELINES FOR AUTHORS: Submission of Papers The JBR welcomes papers from the general academia and professionals. Authors are encouraged to submit papers for publications in the JBR at any time. The Journal will also at specific times solicit for reviews on topical issues of interest. Procedure ...

  4. South African Medical Journal: Submissions

    African Journals Online (AJOL)

    Authorship should be based on: (i) substantial contribution to conceptualisation, design, analysis and interpretation of data; (ii) drafting or critical revision of important scientific ... If authors' names are added or deleted after submission of an article, or the order of the names is changed, all authors must agree to this in writing.

  5. Ghana Journal of Linguistics: Submissions

    African Journals Online (AJOL)

    Author Guidelines. PLEASE follow these guidelines closely when preparing your paper for submission. The editors reserve the right to reject inadequately prepared papers. All areas of linguistics are invited – the journal is not limited to articles on languages of or in Ghana or Africa. ALL CONTRIBUTIONS must be submitted ...

  6. Nigerian Journal of Technology: Submissions

    African Journals Online (AJOL)

    This article acts as the template for preparing articles for submission to Nigerian Journal of Technology. The abstract should be a clear statement defining the problems of study, methodology adopted, results and conclusions. Please do not refer readers to other literature articles in the abstract. The abstract should be brief ...

  7. Ghana Journal of Geography: Submissions

    African Journals Online (AJOL)

    Author Guidelines. Submission to the Ghana Journal of Geography. Papers submitted to the journal should follow the guidelines set out below. All correspondence between editor and author is performed by e-mail, and paper copies are not required at all stages. A manuscript must be submitted electronically as an email ...

  8. Shakespeare in Southern Africa: Submissions

    African Journals Online (AJOL)

    Author Guidelines. Shakespeare in Southern Africa sets out to publish articles, commentary and reviews on all aspects of Shakespearean studies and performance, with a particular emphasis on the response to Shakespeare in southern Africa. Scholarly notes of a factual nature are also welcome. Submissions are reviewed ...

  9. Nigerian Journal of Paediatrics: Submissions

    African Journals Online (AJOL)

    If digital images are the only source of images, ensure that the image has a minimum resolution of 300 dpi or 1800 x 1600 pixels in TIFF format. ... Nigerian Journal of Paediatrics charges Nigerian Naira 5000 (USD25) on submission of a manuscript as processing fees and Nigerian Naira 25,000 (USD125) as publication fees on ...

  10. Journal for Juridical Science: Submissions

    African Journals Online (AJOL)

    Author Guidelines. 1. Manuscripts may be submitted to Journal for Juridical Science in Afrikaans or English. The desired length of articles is 7 000 words, while 4 500 words is regarded as the minimum and 11 000 as the maximum. 2. Two typed copies of manuscripts must be submitted. In addition submission on computer ...

  11. Orient Journal of Medicine: Submissions

    African Journals Online (AJOL)

    Charges: Authors are required, at the submission of each article, to pay a sum of N15,000 (Fifteen Thousand Naira only) as processing fee at the Journal Office and obtain a written receipt, or pay into the Orient Journal of Medicine Bank Account (Account No. should be obtained directly from the Editor)and mail a scanned ...

  12. Research in Hospitality Management: Submissions

    African Journals Online (AJOL)

    Author Guidelines. Original research papers, substantive topic reviews, viewpoints and short communications that make an original contribution to the understanding of hospitality and hospitality management in a global context will be considered for publication in the Journal. Submissions should be e-mailed to the ...

  13. ChemSearch Journal: Submissions

    African Journals Online (AJOL)

    It publishes original quality articles which are reporting advances in theory, techniques methodology applications and practice, general survey and critical reviews, etc. SUBMISSION OF ARTICLE ... c/o Department of Pure and Industrial Chemistry, Bayero University, P.M.B. 3011, Kano, Nigeria. or. via our Email address: ...

  14. Journal of Pharmacy & Bioresources: Submissions

    African Journals Online (AJOL)

    Manuscripts should be typed double-spaced on A4 size paper and should be arranged as follows: Title page; Abstract; Keywords; Introduction; Experimental; Results and Discussion; Acknowledgement; References; Tables and Illustrations. Title page - should contain the full title of the article, each author's name (first name ...

  15. Nigerian Journal of Fisheries: Submissions

    African Journals Online (AJOL)

    Author Guidelines. Manuscripts are to be submitted in triplicates following the format below: Title page: Title must be concise. Page also to include Author's name and affiliation. Abstract: Maximum of 200 words to present brief summary of the report. Below the abstract should include the keywords (maximum of 6).

  16. 3Drefine: an interactive web server for efficient protein structure refinement.

    Science.gov (United States)

    Bhattacharya, Debswapna; Nowotny, Jackson; Cao, Renzhi; Cheng, Jianlin

    2016-07-08

    3Drefine is an interactive web server for consistent and computationally efficient protein structure refinement with the capability to perform web-based statistical and visual analysis. The 3Drefine refinement protocol utilizes iterative optimization of the hydrogen bonding network combined with atomic-level energy minimization on the optimized model, using composite physics- and knowledge-based force fields for efficient protein structure refinement. The method has been extensively evaluated on blind CASP experiments as well as on large-scale and diverse benchmark datasets and exhibits consistent improvement over the initial structure in both global and local structural quality measures. The 3Drefine web server allows for convenient protein structure refinement through a text or file input submission, email notification, and a provided example submission, and is freely available without any registration requirement. The server also provides comprehensive analysis of submissions through various energy and statistical feedback and interactive visualization of multiple refined models through the JSmol applet, which is equipped with numerous protein model analysis tools. The web server has been extensively tested and used by many users. As a result, the 3Drefine web server conveniently provides a useful tool easily accessible to the community. The 3Drefine web server has been made publicly available at the URL: http://sysbio.rnet.missouri.edu/3Drefine/. © The Author(s) 2016. Published by Oxford University Press on behalf of Nucleic Acids Research.

  17. Semantic Web Technologies for the Adaptive Web

    DEFF Research Database (Denmark)

    Dolog, Peter; Nejdl, Wolfgang

    2007-01-01

    Ontologies and reasoning are the key terms brought into focus by the semantic web community. Formal representation of ontologies in a common data model on the web can be taken as a foundation for adaptive web technologies as well. This chapter describes how ontologies shared on the semantic web … are crucial to be formalized by the semantic web ontologies for the adaptive web. We use examples from an eLearning domain to illustrate the principles, which are broadly applicable to any information domain on the web.

  18. The invisible Web uncovering information sources search engines can't see

    CERN Document Server

    Sherman, Chris

    2001-01-01

    Enormous expanses of the Internet are unreachable with standard web search engines. This book provides the key to finding these hidden resources by identifying how to uncover and use invisible web resources. Mapping the invisible Web, when and how to use it, assessing the validity of the information, and the future of Web searching are topics covered in detail. Only 16 percent of Net-based information can be located using a general search engine. The other 84 percent is what is referred to as the invisible Web-made up of information stored in databases. Unlike pages on the visible Web, informa

  19. 24 CFR 1710.105 - Cover page.

    Science.gov (United States)

    2010-04-01

    ... 24 Housing and Urban Development 5 2010-04-01 2010-04-01 false Cover page. 1710.105 Section 1710... Cover page. The cover page of the Property Report shall be prepared in accordance with the following... period and the Cover Page must reflect the requirements of the longer period, rather than the seven days...

  20. Architecture for large-scale automatic web accessibility evaluation based on the UWEM methodology

    DEFF Research Database (Denmark)

    Ulltveit-Moe, Nils; Olsen, Morten Goodwin; Pillai, Anand B.

    2008-01-01

    The European Internet Accessibility project (EIAO) has developed an Observatory for performing large scale automatic web accessibility evaluations of public sector web sites in Europe. The architecture includes a distributed web crawler that crawls web sites for links until either a given budget...... of web pages have been identified or the web site has been crawled exhaustively. Subsequently, a uniform random subset of the crawled web pages is sampled and sent for accessibility evaluation and the evaluation results are stored in a Resource Description Format (RDF) database that is later loaded...... challenges that the project faced and the solutions developed towards building a system capable of regular large-scale accessibility evaluations with sufficient capacity and stability. It also outlines some possible future architectural improvements....
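
The abstract does not say how the uniform random subset of crawled pages is drawn, but reservoir sampling is one standard way to take a fixed-size uniform sample from a crawl stream of unknown length. A sketch under that assumption; the URLs are illustrative.

```python
import random

def reservoir_sample(stream, k, seed=42):
    """Draw a uniform random sample of k items from a stream of unknown
    length in a single pass (classic reservoir sampling)."""
    rng = random.Random(seed)
    sample = []
    for i, item in enumerate(stream):
        if i < k:
            sample.append(item)           # fill the reservoir first
        else:
            j = rng.randint(0, i)         # keep item with probability k/(i+1)
            if j < k:
                sample[j] = item
    return sample

crawled = (f"http://example.org/page{i}" for i in range(10000))
subset = reservoir_sample(crawled, 100)
print(len(subset))  # 100
```

The one-pass property matters here because the crawler may stop on a page budget, so the total number of crawled pages is not known in advance.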

  1. Recent trends in print portals and Web2Print applications

    Science.gov (United States)

    Tuijn, Chris

    2009-01-01

    case, the ordering process is, of course, not fully automated. Standardized products, on the other hand, are easily identified and the cost charged to the print buyer can be retrieved from predefined price lists. Typically, higher volumes will result in more attractive prices. An additional advantage of this type of products is that they are often defined such that they can be produced in bulk using conventional printing techniques. If one wants to automate the ganging, a connection must be established between the on-line ordering and the production planning system. (For digital printing, there typically is no need to gang products since they can be produced more effectively separately.) Many of the on-line print solutions support additional features also available in general purpose e-commerce sites. We here think of the availability of virtual shopping baskets, the connectivity with payment gateways and the support of special facilities for interfacing with courier services (bar codes, connectivity to courier web sites for tracking shipments etc.). Supporting these features also assumes an intimate link with the print production system. Another development that goes beyond the on-line ordering of printed material and the submission of full pages and/or documents, is the interactive, on-line definition of the content itself. Typical applications in this respect are, e.g., the creation of business cards, leaflets, letter heads etc. On a more professional level, we also see that more and more publishing organizations start using on-line publishing platforms to organize their work. These professional platforms can also be connected directly to printing portals and thus enable extra automation. 
In this paper, we will discuss for each of the different applications presented above (traditional Print Portals, Web2Print applications and professional, on-line publishing platforms) how they interact with prepress and print production systems and how they contribute to the

  2. Submission of content to a digital object repository using a configurable workflow system

    OpenAIRE

    Hense, Andreas; Mueller, Johannes

    2007-01-01

    The prototype of a workflow system for the submission of content to a digital object repository is here presented. It is based entirely on open-source standard components and features a service-oriented architecture. The front-end consists of Java Business Process Management (jBPM), Java Server Faces (JSF), and Java Server Pages (JSP). A Fedora Repository and a mySQL data base management system serve as a back-end. The communication between front-end and back-end uses a SOAP minimal binding s...

  3. Venue Recommendation and Web Search Based on Anchor Text

    Science.gov (United States)

    2014-11-01

    candidate s (i.e., p(s)), the probability of the presence of a context c given a suggestion candidate s (i.e., p(c|s)), and the probability of the...observation motivates us to utilize the PageRank of suggestion candidates in order to estimate their prior probability in our proposed model, so...Amsterdam's submissions: p(c|θs) = ∏_{t∈c} p(t|θs)^{n(t,c)}, in which p(t|θs) is the probability of term t given the suggestion language model θs, and n(t, c) is
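
The unigram context likelihood quoted in this record, p(c|θs) = ∏_{t∈c} p(t|θs)^{n(t,c)}, is usually computed in log space to avoid underflow. A small sketch; the term probabilities are invented toy values, not from the report.

```python
import math
from collections import Counter

def context_log_likelihood(context_terms, term_probs):
    """log p(c | theta_s) = sum over t in c of n(t, c) * log p(t | theta_s)."""
    counts = Counter(context_terms)       # n(t, c): term counts in the context
    return sum(n * math.log(term_probs[t]) for t, n in counts.items())

# Toy suggestion language model (probabilities are invented).
theta_s = {"coffee": 0.5, "shop": 0.3, "open": 0.2}
context = ["coffee", "shop", "coffee"]    # n(coffee, c) = 2, n(shop, c) = 1
print(context_log_likelihood(context, theta_s))
```

Exponentiating the result recovers p(c|θs) itself when a probability rather than a score is needed.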

  4. First in the web, but where are the pieces

    Energy Technology Data Exchange (ETDEWEB)

    Deken, J.M.

    1998-04-01

    The World Wide Web (WWW) does matter to the SLAC Archives and History Office for two very important, and related, reasons. The first reason is that the early Web at SLAC is historically significant: it was the first of its kind on this continent, and it achieved new and important things. The second reason is that the Web at SLAC--in its present and future forms--is a large and changing collection of official documents of the organization, many of which exist in no other form or environment. As of the first week of August, 1997, SLAC had 8,940 administratively-accounted-for web pages, and an estimated 2,000 to 4,000 additional pages that are hard to track administratively because they either reside on the main server in users' directories several levels below their top-level pages, or they reside on one of the more than 60 non-main servers at the Center. A very small sampling of the information that SLAC WWW pages convey includes: information for the general public about programs and activities at SLAC; pages which allow physics experiment collaborators to monitor data, arrange work schedules and analyze results; pages that convey information to staff and visiting scientists about seminar and activity schedules, publication procedures, and ongoing experiments; and pages that allow staff and outside users to access databases maintained at SLAC. So, when SLAC's Archives and History Office begins to approach collecting the documents of its WWW presence, what is it collecting, and how is it to go about the process of collecting it? In this paper, the author discusses the effort to archive SLAC's Web in two parts, concentrating on the first task that has been undertaken: the initial effort to identify and gather into the archives evidence and documentation of the early days of the SLAC Web. The second task, the effort to collect present and future web pages at SLAC, is also covered, although in less detail, since it is an effort that is only

  5. Design and Development of ARM9 Based Embedded Web Server

    OpenAIRE

    Niturkar Priyanka; Prof. V.D.Shinde

    2015-01-01

    This paper describes the design of an embedded web server based on an ARM9 processor and the Linux platform. It analyses the hardware configuration and software implementation for monitoring and controlling systems or devices. Users can monitor and control temperature and smoke information. The system consists of an application program written in 'C' for accessing data through the serial port and updating the web page, porting of the Linux 2.6.3x kernel with the application program on the ARM9 board, and booting it fro...

  6. WISE: a content-based Web image search engine

    Science.gov (United States)

    Qiu, Guoping; Palmer, R. D.

    2000-12-01

    This paper describes the development of a prototype of a Web Image Search Engine (WISE), which allows users to search for images on the WWW by image examples, in a similar fashion to current search engines that allow users to find related Web pages using text matching on keywords. The system takes an image specified by the user and finds similar images available on the WWW by comparing the image contents using low-level image features. The current version of the WISE system consists of a graphical user interface (GUI), an autonomous Web agent, an image comparison program and a query processing program. The user specifies the URL of a target image and the URL of the starting Web page from where the program will 'crawl' the Web, finding images along the way and retrieving those satisfying certain constraints. The program then computes the visual features of the retrieved images and performs content-based comparison with the target image. The results of the comparison are then sorted according to a certain similarity measure and, along with thumbnails and information associated with the images, such as the URLs and image size, are written to an HTML page. The resultant page is stored on a Web server and is output to the user's Web browser once the search process is complete. A unique feature of the current version of WISE is its image content comparison algorithm. It is based on the comparison of image palettes and is therefore very efficient at retrieving one of the two universally accepted image formats on the Web, 'gif.' In gif images, the color palette is contained in the header, so it is only necessary to retrieve the header information rather than the whole image, thus making the comparison very efficient.
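
The header trick attributed to WISE (reading the colour table without decoding pixel data) can be illustrated with the GIF89a layout, where the logical screen descriptor flags the presence and size of the global colour table. The Jaccard overlap used to compare palettes here is an assumption for illustration, not necessarily WISE's actual similarity measure; `make_gif` builds tiny synthetic headers for the demo.

```python
import struct

def gif_palette(data):
    """Read the global colour table straight from a GIF header,
    without decoding any pixel data."""
    assert data[:6] in (b"GIF87a", b"GIF89a"), "not a GIF"
    packed = data[10]                     # logical screen descriptor flags
    if not packed & 0x80:                 # bit 7: global colour table present?
        return set()
    size = 2 ** ((packed & 0x07) + 1)     # bits 0-2 encode the table size
    table = data[13:13 + 3 * size]        # 3 bytes (R, G, B) per entry
    return {tuple(table[i:i + 3]) for i in range(0, len(table), 3)}

def palette_similarity(a, b):
    """Jaccard overlap of two palettes (an assumed comparison measure)."""
    return len(a & b) / len(a | b) if a and b else 0.0

def make_gif(colors):
    """Build a minimal in-memory GIF header with a 2-colour global table."""
    packed = 0x80 | 0x00                  # table present, size field 0 -> 2 entries
    header = b"GIF89a" + struct.pack("<HHBBB", 1, 1, packed, 0, 0)
    return header + b"".join(bytes(c) for c in colors)

g1 = make_gif([(255, 0, 0), (0, 0, 0)])
g2 = make_gif([(255, 0, 0), (255, 255, 255)])
print(palette_similarity(gif_palette(g1), gif_palette(g2)))  # shares 1 of 3 colours
```

Because only the first 13 bytes plus the colour table are needed, a crawler can compare palettes after fetching just a few hundred bytes per image.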

  7. Understanding the role of outsourced labor in web service abuse

    OpenAIRE

    Motoyama, Marti A.

    2011-01-01

    Modern Web services are typically free and open access, often supported by advertising revenue. These attributes, however, leave services vulnerable to many forms of abuse, including sending spam via Web-based email accounts, inflating page rank scores by spamming backlinks on blogs, etc. However, many of these schemes are nontrivial to execute, requiring technical expertise and access to ancillary resources (e.g. IP diversity, telephone numbers, etc.). Thus, many scammers prefer to offload t...

  8. Web content accessibility of consumer health information web sites for people with disabilities: a cross sectional evaluation.

    Science.gov (United States)

    Zeng, Xiaoming; Parmanto, Bambang

    2004-06-21

    The World Wide Web (WWW) has become an increasingly essential resource for health information consumers. The ability to obtain accurate medical information online quickly, conveniently and privately provides health consumers with the opportunity to make informed decisions and participate actively in their personal care. Little is known, however, about whether the content of this online health information is equally accessible to people with disabilities who must rely on special devices or technologies to process online information due to their visual, hearing, mobility, or cognitive limitations. To construct a framework for an automated Web accessibility evaluation; to evaluate the state of accessibility of consumer health information Web sites; and to investigate the possible relationships between accessibility and other features of the Web sites, including function, popularity and importance. We carried out a cross-sectional study of the state of accessibility of health information Web sites to people with disabilities. We selected 108 consumer health information Web sites from the directory service of a Web search engine. A measurement framework was constructed to automatically measure the level of Web Accessibility Barriers (WAB) of Web sites following Web accessibility specifications. We investigated whether there was a difference between WAB scores across various functional categories of the Web sites, and also evaluated the correlation between the WAB and Alexa traffic rank and Google Page Rank of the Web sites. We found that none of the Web sites we looked at are completely accessible to people with disabilities, i.e., there were no sites that had no violation of Web accessibility rules. However, governmental and educational health information Web sites do exhibit better Web accessibility than the other categories of Web sites (P health information Web sites shows that no Web site scrupulously abides by Web accessibility specifications, even for entities

  9. A cohesive page ranking and depth-first crawling scheme for ...

    African Journals Online (AJOL)

    The quality of the results collections, displayed to users of web search engines today still remains a mirage with regard to the factors used in their ranking process. In this work we combined page rank crawling method and depth first crawling method to create a hybridized method. Our major objective is to unify into one ...

  10. Semantic Web

    Directory of Open Access Journals (Sweden)

    Anna Lamandini

    2011-06-01

    Full Text Available The semantic Web is a technology at the service of knowledge, aimed at accessibility and the sharing of content and at facilitating interoperability between different systems; as such it is one of the nine key technological pillars of TIC (technologies for information and communication) within the third theme, programme specific cooperation, of the seventh programme framework for research and development (7°PQRS, 2007-2013). As a system it seeks to overcome the overload or excess of irrelevant information on the Internet, in order to facilitate specific or pertinent research. It is an extension of the existing Web in which the aim is cooperation between computers and people (the dream of Sir Tim Berners-Lee), where machines can give more support to people when integrating and elaborating data in order to obtain inferences and a global sharing of data. It is a technology that is able to favour the development of a "data web", in other words the creation of a space of interconnected and shared data sets (Linked Data) which allows users to link different types of data coming from different sources. It is a technology that will have great effect on everyday life since it will permit the planning of "intelligent applications" in various sectors such as education and training, research, the business world, public information, tourism, health, and e-government. It is an innovative technology that activates a social transformation (the socio-semantic Web) on a world level, since it redefines the cognitive universe of users and enables the sharing not only of information but of significance (collective and connected intelligence).

  11. Acquiring geographical data with web harvesting

    Science.gov (United States)

    Dramowicz, K.

    2016-04-01

    Many websites contain very attractive and up-to-date geographical information. This information can be extracted, stored, analyzed and mapped using web harvesting techniques. Web harvesting transforms poorly organized data from websites into a more structured format, which can be stored in a database and analyzed. Almost 25% of web traffic is related to web harvesting, mostly through the use of search engines. This paper presents how to harvest geographic information from web documents using Beautiful Soup, a free tool and one of the most commonly used Python libraries for pulling data from HTML and XML files. Processing one static HTML table is a relatively easy task; the more challenging task is to extract and save information from tables located in multiple and poorly organized websites. Legal and ethical aspects of web harvesting are discussed as well. The paper demonstrates two case studies. The first shows how to extract various types of information about the Good Country Index from multiple web pages, load it into one attribute table and map the results. The second shows how script tools and GIS can be used to extract information from one hundred thirty-six websites about Nova Scotia wines. In a little more than three minutes a database containing one hundred and six liquor stores selling these wines is created. The availability and spatial distribution of various types of wines (by grape type, by winery, and by liquor store) are then mapped and analyzed.
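    The core harvesting step the paper describes, pulling an HTML table into structured rows, can be sketched with Python's standard-library `html.parser` (used here in place of Beautiful Soup so the example is self-contained). The table contents and the `TableHarvester` class are invented for illustration; a real scraper would fetch the page over HTTP and cope with much messier markup.

    ```python
    from html.parser import HTMLParser

    # Minimal table harvester in the spirit of Beautiful Soup, built on the
    # standard library only. It collects each <tr> as a list of cell strings.
    class TableHarvester(HTMLParser):
        def __init__(self):
            super().__init__()
            self.rows = []          # harvested rows
            self._row = None        # row currently being built
            self._in_cell = False

        def handle_starttag(self, tag, attrs):
            if tag == "tr":
                self._row = []
            elif tag in ("td", "th"):
                self._in_cell = True

        def handle_endtag(self, tag):
            if tag == "tr" and self._row is not None:
                self.rows.append(self._row)
                self._row = None
            elif tag in ("td", "th"):
                self._in_cell = False

        def handle_data(self, data):
            if self._in_cell and self._row is not None:
                self._row.append(data.strip())

    # An invented page fragment standing in for a harvested ranking table.
    page = """
    <table>
      <tr><th>Country</th><th>Rank</th></tr>
      <tr><td>Ireland</td><td>1</td></tr>
      <tr><td>Finland</td><td>2</td></tr>
    </table>
    """
    parser = TableHarvester()
    parser.feed(page)
    print(parser.rows)
    ```

    With Beautiful Soup the same extraction would be a few `find_all("tr")` / `find_all(["td", "th"])` calls; the structured rows can then be loaded into a database or attribute table as in the paper's case studies.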

  12. Web Applications of Bibliometrics and Link Analysis

    Directory of Open Access Journals (Sweden)

    Faribourz Droudi

    2010-04-01

    Full Text Available The present study aims to introduce and analyze bibliometric applications on the Web, and to expound on the status of link analysis in order to point out its application to existing web-based information sources. Findings indicate that bibliometrics can find the required application in the area of digital resources available through the Net. Link analysis is a process by which one can statistically analyze the correlation between hyperlinks and thereby understand the accuracy, veracity and efficacy of citations within a digital document. Link analysis, in effect, counts as part of the information-ranking algorithms of the web environment. The number, linkage and quality of the links given to a website are of utmost importance for its ranking and status on the Web. The tools applied in this area include the PageRank strategy, link analysis algorithms, latent semantic indexing and the classical input-output model. The present study analyzes Big Web and Small Web link analysis and explains how web charts can be used to better understand the link analysis process.
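    The PageRank strategy named among the link-analysis tools can be sketched as a short power iteration over a link graph: a page's rank is the damped sum of the rank shares flowing in over its inbound links. The four-page graph, function name and parameter values below are illustrative, not drawn from the study.

    ```python
    # Minimal PageRank power-iteration sketch over a hypothetical link graph.
    def pagerank(links, damping=0.85, iters=50):
        """links: dict mapping each page to the list of pages it links to."""
        pages = list(links)
        n = len(pages)
        rank = {p: 1.0 / n for p in pages}
        for _ in range(iters):
            # Every page starts each round with the teleportation share.
            new = {p: (1 - damping) / n for p in pages}
            for p, outs in links.items():
                if not outs:                      # dangling page: spread evenly
                    for q in pages:
                        new[q] += damping * rank[p] / n
                else:                             # split rank over outlinks
                    share = damping * rank[p] / len(outs)
                    for q in outs:
                        new[q] += share
            rank = new
        return rank

    graph = {"A": ["B", "C"], "B": ["C"], "C": ["A"], "D": ["C"]}
    ranks = pagerank(graph)
    print(max(ranks, key=ranks.get))   # C, which gathers the most inbound weight
    ```

    The example makes the abstract's point concrete: page C outranks the others purely because of the number and weight of links pointing at it, not because of its content.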

  13. Relevancy Redacted: Web-Scale Discovery and the “Filter Bubble”

    OpenAIRE

    Davis, Corey

    2012-01-01

    Web-scale discovery has arrived. With products like Summon and WorldCat Local, hundreds of millions of articles and books are accessible at lightning speed from a single search box via the library. But there's a catch. As the size of the index grows, so too does the challenge of relevancy. When Google launched in 1998 with an index of only 25 million pages, its patented PageRank algorithm was powerful enough to provide outstanding results. But the web has grown to well over a trillion pages, ...

  14. Pacifier use: a systematic review of selected parenting web sites.

    Science.gov (United States)

    Cornelius, Aubrie N; D'Auria, Jennifer P; Wise, Lori M

    2008-01-01

    The purpose of this study was to explore and describe content related to pacifier use on parenting Web sites. Sixteen parenting Web sites met the inclusion criteria of the study. Two checklists were used to evaluate and describe different aspects of the Web sites: the first provided a quality assessment of the Web sites, while the second was constructed to identify content categories of pacifier use. The majority of sites met the quality assessment criteria. Eleven content categories regarding pacifier use were identified; nine of the 16 sites contained eight or more of the 11 content areas. The most common types of Web pages containing pacifier information were pacifier-specific articles, question-and-answer pages, and related-content pages. Most of the parenting Web sites met the quality measures for online information. The content categories reflected the current controversies and information regarding pacifier use found in the expert literature. The findings of this study suggest the need to establish pacifier recommendations in the United States to guide parents and health care providers in decision making.

  15. Creating a Facebook Page for the Seismological Society of America

    Science.gov (United States)

    Newman, S. B.

    2009-12-01

    In August 2009 I created a Facebook “fan” page for the Seismological Society of America. We had been exploring cost-effective options for providing forums for two-way communication for some months. We knew that a number of larger technical societies had invested significant sums of money to create customized social networking sites, but that a small society would need to use existing low-cost software options. The first thing I discovered when I began to set up the fan page was that an unofficial SSA Facebook group already existed, established by Steven J. Gibbons, a member in Norway. Steven had done an excellent job of posting material about SSA. Partly because of the existing group, the official SSA fan page gained fans rapidly. We began by posting information about our own activities and then added links to activities in the broader geoscience community. While much of this material also appeared on our website and in our publication, Seismological Research Letters (SRL), the tone on the FB page is different: it is less formal, with more emphasis on photos and links to other sites, including our own. Fans who are active on FB see the posts as part of their social network and do not need to take the initiative to go to the SSA site. Although the goal was to provide a forum for two-way communication, our initial experience was that people were clearly reading the page but not contributing content. This appears to be the case with the fan pages of sister geoscience societies as well. FB offers some demographic information to fan-page administrators; an initial review suggested that fans were younger than the Society's overall membership, and that a few fans are not members or even scientists. Open questions are: what content will be most useful to fans? How will the existence of the page benefit the membership as a whole? Will the page ultimately encourage two-way communication as hoped? Web 2.0 is generating a series of new

  16. 29 CFR 99.320 - Report submission.

    Science.gov (United States)

    2010-07-01

    ... 29 Labor 1 2010-07-01 2010-07-01 true Report submission. 99.320 Section 99.320 Labor Office of the Secretary of Labor AUDITS OF STATES, LOCAL GOVERNMENTS, AND NON-PROFIT ORGANIZATIONS Auditees § 99.320 Report submission. (a) General. The audit shall be completed and the data collection form described in...

  17. Southern African Journal of Environmental Education: Submissions

    African Journals Online (AJOL)

    Online Submissions. Already have a Username/Password for Southern African Journal of Environmental Education? Go to Login. Need a Username/Password? Go to Registration. Registration and login are required to submit items online and to check the status of current submissions.

  18. 28 CFR 51.22 - Premature submissions.

    Science.gov (United States)

    2010-07-01

    ... 28 Judicial Administration 2 2010-07-01 2010-07-01 false Premature submissions. 51.22 Section 51.22 Judicial Administration DEPARTMENT OF JUSTICE (CONTINUED) PROCEDURES FOR THE ADMINISTRATION OF... § 51.22 Premature submissions. The Attorney General will not consider on the merits: (a) Any proposal...

  19. 6 CFR 27.210 - Submissions schedule.

    Science.gov (United States)

    2010-01-01

    ... Domestic Security DEPARTMENT OF HOMELAND SECURITY, OFFICE OF THE SECRETARY CHEMICAL FACILITY ANTI-TERRORISM STANDARDS Chemical Facility Security Program § 27.210 Submissions schedule. (a) Initial Submission. The... of any of the chemicals listed in appendix A at or above the STQ for any applicable Security Issue...

  20. West African Journal of Applied Ecology: Submissions

    African Journals Online (AJOL)

    Author Guidelines. Instructions To Authors Papers for submission to the West African Journal of Applied Ecology should be written in English and should not exceed 8,000 words in total length. Papers should not have been submitted or be considered for submission for publication elsewhere. Ideas expressed in papers that ...

  1. Science, Technology and Arts Research Journal: Submissions

    African Journals Online (AJOL)

    Author Guidelines. Manuscript Submission: Manuscripts must be submitted with a covering letter from the corresponding author to the Editor-in-Chief by e-mail. After successful submission of a manuscript, the corresponding author will be acknowledged within 72 hours. Any query regarding the preparation ...

  2. KCA Journal of Business Management: Submissions

    African Journals Online (AJOL)

    Online Submissions. Already have a Username/Password for KCA Journal of Business Management? Go to Login. Need a Username/Password? Go to Registration. Registration and login are required to submit items online and to check the status of current submissions.

  3. African Journal of Marine Science: Submissions

    African Journals Online (AJOL)

    African Journal of Marine Science: Submissions. Journal Home > About the Journal > African Journal of Marine Science: Submissions. Log in or Register to get access to full text downloads. Username, Password, Remember me, or Register · Journal Home · ABOUT THIS JOURNAL · Advanced Search · Current Issue ...

  4. BOWS (bioinformatics open web services) to centralize bioinformatics tools in web services.

    Science.gov (United States)

    Velloso, Henrique; Vialle, Ricardo A; Ortega, J Miguel

    2015-06-02

    Bioinformaticians face a range of difficulties in getting locally installed tools running and producing results; they would greatly benefit from a system that could centralize most of the tools behind an easy interface for input and output. Web services, due to their universal nature and widely known interface, are a very good option for achieving this goal. Bioinformatics Open Web Services (BOWS) is a system based on generic web services designed to allow programmatic access to applications running on high-performance computing (HPC) clusters. BOWS mediates access to registered tools by providing front-end and back-end web services. Programmers can install applications on HPC clusters in any programming language and use the back-end service to check for new jobs and their parameters, and then to send the results back to BOWS. Programs running on ordinary computers consume the BOWS front-end service to submit new jobs and read results. BOWS compiles Java clients, which encapsulate the front-end web service requests, and automatically creates a web page that lists the registered applications and clients. Applications registered with BOWS can be accessed from virtually any programming language through web services, or by using the standard Java clients. The back-end can run on HPC clusters, allowing bioinformaticians to remotely run applications with high processing demands directly from their machines.
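    The front-end/back-end flow described above (clients submit jobs and read results; HPC workers pull jobs and post results) can be sketched as a toy in-memory broker. All class, method and field names here are invented for illustration and are not the real BOWS API, which is exposed as SOAP/WSDL web services.

    ```python
    import itertools

    # Toy in-memory stand-in for the BOWS front-end/back-end pair.
    class ToyJobBroker:
        def __init__(self):
            self._ids = itertools.count(1)
            self.pending = {}    # job_id -> (tool, params), awaiting a worker
            self.results = {}    # job_id -> result, posted by a worker

        # --- front-end service (consumed by client programs) ---
        def submit(self, tool, params):
            job_id = next(self._ids)
            self.pending[job_id] = (tool, params)
            return job_id

        def read_result(self, job_id):
            return self.results.get(job_id)   # None while still running

        # --- back-end service (consumed by the HPC-side worker) ---
        def next_job(self):
            if self.pending:
                job_id = next(iter(self.pending))
                return job_id, self.pending.pop(job_id)
            return None

        def post_result(self, job_id, result):
            self.results[job_id] = result

    broker = ToyJobBroker()
    jid = broker.submit("blast", {"query": "ACGT"})   # client submits a job
    job = broker.next_job()                           # worker picks it up
    broker.post_result(job[0], "hit found")           # worker reports back
    print(broker.read_result(jid))                    # prints "hit found"
    ```

    The design point the abstract makes is visible here: the worker never talks to the client directly, so the tool can live on an HPC cluster while the client runs on an ordinary machine.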

  5. Journal of Environmental Extension: Submissions

    African Journals Online (AJOL)

    Manuscripts should be written in English and word processed in Microsoft Word, WordPerfect 6.1 or lower, or PageMaker 6.5 or lower (keep a 3.5″ floppy file copy). Manually typed manuscripts will be charged for type-setting. Three (3) printed copies of not more than ten (10) A4 pages in double ...

  6. A Tutorial in Creating Web-Enabled Databases with Inmagic DB/TextWorks through ODBC.

    Science.gov (United States)

    Breeding, Marshall

    2000-01-01

    Explains how to create Web-enabled databases. Highlights include Inmagic's DB/Text WebPublisher product called DB/TextWorks; ODBC (Open Database Connectivity) drivers; Perl programming language; HTML coding; Structured Query Language (SQL); Common Gateway Interface (CGI) programming; and examples of HTML pages and Perl scripts. (LRW)
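    The Web-enabled-database pattern the tutorial covers (run a query against a database, render the rows as HTML) can be sketched in Python, with the standard library's sqlite3 standing in for an ODBC data source such as DB/TextWorks; the table, its contents and the function name are invented for illustration.

    ```python
    import html
    import sqlite3

    # In-memory database standing in for the ODBC-connected catalog.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE books (title TEXT, year INTEGER)")
    conn.executemany("INSERT INTO books VALUES (?, ?)",
                     [("Creating Web Pages Simplified", 2011),
                      ("Web Design Basics", 2000)])

    def render_results(rows):
        """Render query results as an HTML list, escaping user-visible text."""
        items = "".join(f"<li>{html.escape(t)} ({y})</li>" for t, y in rows)
        return f"<ul>{items}</ul>"

    rows = conn.execute("SELECT title, year FROM books ORDER BY year").fetchall()
    page = render_results(rows)
    print(page)
    ```

    A CGI script, whether in Perl as in the tutorial or in Python, would do exactly this in its request handler: run the SQL, escape the values, and write the generated HTML back to the browser.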

  7. From Grass Roots to Corporate Image--The Maturation of the Web.

    Science.gov (United States)

    Quinn, Christine A.

    1995-01-01

    The experience of Stanford University (California) in developing the institutional image it portrayed on the World Wide Web is discussed. Principles and practical suggestions for developing such an image through layout and content are offered, including a list of things not to do on a Web page. (MSE)

  8. Developing heuristics for Web communication: an introduction to this special issue

    NARCIS (Netherlands)

    van der Geest, Thea; Spyridakis, Jan H.

    2000-01-01

    This article describes the role of heuristics in the Web design process. The five sets of heuristics that appear in this issue are also described, as well as the research methods used in their development. The heuristics were designed to help designers and developers of Web pages or sites to

  9. Uncovering the Hidden Web, Part I: Finding What the Search Engines Don't. ERIC Digest.

    Science.gov (United States)

    Mardis, Marcia

    Currently, the World Wide Web contains an estimated 7.4 million sites (OCLC, 2001). Yet even the most experienced searcher, using the most robust search engines, can access only about 16% of these pages (Dahn, 2001). The other 84% of the publicly available information on the Web is referred to as the "hidden," "invisible," or…

  10. Corporate Writing in the Web of Postmodern Culture and Postindustrial Capitalism.

    Science.gov (United States)

    Boje, David M.

    2001-01-01

    Uses Nike as an example to explore the impact of corporate writing (in annual reports, press releases, advertisements, web pages, sponsored research, and consultant reports). Shows how the intertextual web of "Nike Writing," as it legitimates industry-wide labor and ecological practices, has significant negative consequences for academic…

  11. Discovering How Students Search a Library Web Site: A Usability Case Study.

    Science.gov (United States)

    Augustine, Susan; Greene, Courtney

    2002-01-01

    Discusses results of a usability study at the University of Illinois Chicago that investigated whether Internet search engines have influenced the way students search library Web sites. Results show students use the Web site's internal search engine rather than navigating through the pages; have difficulty interpreting library terminology; and…

  12. Students' Evaluation Strategies in a Web Research Task: Are They Sensitive to Relevance and Reliability?

    Science.gov (United States)

    Rodicio, Héctor García

    2015-01-01

    When searching and using resources on the Web, students have to evaluate Web pages in terms of relevance and reliability. This evaluation can be done in a more or less systematic way, by either considering deep or superficial cues of relevance and reliability. The goal of this study was to examine how systematic students are when evaluating Web…

  13. 77 FR 13155 - Agency Information Collection Activities: Submission for the Office of Management and Budget (OMB...

    Science.gov (United States)

    2012-03-05

    ... applicants for initial operator licenses or renewal of operator licenses and for the maintenance of medical.... OMB clearance requests are available at the NRC's Web site: http://www.nrc.gov/public-involve/doc-comment/omb/index.html . The document will be available on the NRC home page site for 60 days after the...

  14. 77 FR 55217 - Submission for OMB Review; Comment Request: Cognitive Testing of Instrumentation and Materials...

    Science.gov (United States)

    2012-09-07

    ..., 2012, page 30540 and allowed 60-days for public comment. No public comments were received. The purpose... personal interviews , audio computer assisted self- interviews , web-based interviews). Cognitive testing..., operating or maintenance costs. Table 1--Estimated Annual Reporting Burden for Screening of PATH Study...

  15. 75 FR 39586 - Agency Information Collection Activities: Submission for the Office of Management and Budget (OMB...

    Science.gov (United States)

    2010-07-09

    ... of NRC Regulatory Authority and Assumption Thereof By States Through Agreement,'' Maintenance of... available at the NRC worldwide Web site: http://www.nrc.gov/public-involve/doc-comment/omb/index.html . The document will be available on the NRC home page site for 60 days after the signature date of this notice...

  16. 76 FR 60557 - Agency Information Collection Activities: Submission for the Office of Management and Budget (OMB...

    Science.gov (United States)

    2011-09-29

    ... surveys, inspection, and maintenance. Part 36 also contains the recordkeeping and reporting requirements..., Rockville, Maryland 20852. OMB clearance requests are available at the NRC Web site: http://www.nrc.gov/public-involve/doc-comment/omb/index.html . The document will be available on the NRC home page site for...

  17. 77 FR 15398 - Agency Information Collection Activities: Submission for the Office of Management and Budget (OMB...

    Science.gov (United States)

    2012-03-15

    ..., maintenance of radiation protection programs, maintenance of radiation records recording of radiation received..., 11555 Rockville Pike, Rockville, Maryland 20852. OMB clearance requests are available at the NRC's Web... home page site for 60 days after the signature date of this notice. Comments and questions should be...

  18. 78 FR 63249 - OMB Agency Information Collection Activities: Submission for the Office of Management and Budget...

    Science.gov (United States)

    2013-10-23

    ... Agreement,'' Maintenance of Existing Agreement State Programs, Request for Information Through the..., Rockville, Maryland 20852. The OMB clearance requests are available at the NRC's Web site: http://www.nrc.gov/public-involve/doc-comment/omb/ . The document will be available on the NRC's home page site for...

  19. 77 FR 19036 - Agency Information Collection Activities; Submission for OMB Review; Comment Request; Labor...

    Science.gov (United States)

    2012-03-29

    ...: Notice. SUMMARY: On March 30, 2012, the Department of Labor (DOL) will submit the Employment and Training... [Federal Register Volume 77, Number 61 (Thursday, March 29, 2012)] [Notices] [Page 19036] [FR Doc... burden may be obtained from the RegInfo.gov Web site, http://www.reginfo.gov/public/do/PRAMain , on March...

  20. IBWS: IST Bioinformatics Web Services.

    Science.gov (United States)

    Zappa, Achille; Miele, Mariangela; Romano, Paolo

    2010-07-01

    The Bioinformatics group at the National Cancer Research Institute (IST) of Genoa has for many years been involved in the development and maintenance of biomedical information systems. Among them, the Common Access to Biological Resources and Information network services offer access to more than 130,000 biological resources, such as strains of micro-organisms and human and animal cell lines, included in 29 collections from some of the best-known European Biological Resource Centers. A Sequence Retrieval System (SRS) implementation of the TP53 Mutation Database of the International Agency for Research on Cancer (Lyon) was made available in order to improve the interoperability of these data with other molecular biology databases. 'SRS by WS (SWS)', a system for retrieving information on public SRS sites and for querying them directly, was also implemented. In order to make this information available through application programming interfaces, we implemented a suite of free web services (WS), called the 'IST Bioinformatics Web Services (IBWS)'. A support web site, including a description of the system and a list of available WS together with help pages, links to the corresponding WSDLs and forms for testing the services, is available at http://bioinformatics.istge.it/ibws/. WSDL definitions can also be retrieved directly at http://bioinformatics.istge.it:8080/axis/services.