WorldWideScience

Sample records for acids research web

  1. Amino Acid Interaction (INTAA) web server.

    Science.gov (United States)

    Galgonek, Jakub; Vymetal, Jirí; Jakubec, David; Vondrášek, Jirí

    2017-07-03

    Large biomolecules (proteins and nucleic acids) are composed of building blocks which define their identity, properties and binding capabilities. In order to shed light on the energetic side of interactions of amino acids with one another and with deoxyribonucleotides, we present the Amino Acid Interaction web server (http://bioinfo.uochb.cas.cz/INTAA/). INTAA offers the calculation of the residue Interaction Energy Matrix for any protein structure (deposited in the Protein Data Bank or submitted by the user) and a comprehensive analysis of the interfaces in protein-DNA complexes. The Interaction Energy Matrix web application aims to identify key residues within protein structures which contribute significantly to the stability of the protein. The application provides an interactive user interface enhanced by a 3D structure viewer for efficient visualization of pairwise and net interaction energies of individual amino acids, side chains and backbones. The protein-DNA interaction analysis part of the web server allows the user to view the relative abundance of various configurations of amino acid-deoxyribonucleotide pairs found at the protein-DNA interface and the interaction energies corresponding to these configurations, calculated using a molecular mechanical force field. The effects of the sugar-phosphate moiety and of the dielectric properties of the solvent on the interaction energies can be studied for the various configurations. © The Author(s) 2017. Published by Oxford University Press on behalf of Nucleic Acids Research.
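
    The energetic idea behind an Interaction Energy Matrix can be sketched with a toy model: treat each residue as a single point charge and tabulate pairwise electrostatic energies. INTAA itself uses a full molecular mechanical force field on all-atom structures; the charges, coordinates and units below are invented for illustration.

```python
import math

def coulomb_energy(q1, q2, r, eps=1.0):
    """Electrostatic pair energy (point-charge approximation, kcal/mol-style units)."""
    return 332.0 * q1 * q2 / (eps * r)

def interaction_energy_matrix(residues):
    """residues: list of (charge, (x, y, z)); returns a symmetric pairwise matrix."""
    n = len(residues)
    matrix = [[0.0] * n for _ in range(n)]
    for i in range(n):
        qi, pi = residues[i]
        for j in range(i + 1, n):
            qj, pj = residues[j]
            e = coulomb_energy(qi, qj, math.dist(pi, pj))
            matrix[i][j] = matrix[j][i] = e
    return matrix

# Invented demo: three "residues" as point charges (units: e and angstroms).
demo = [(+1.0, (0.0, 0.0, 0.0)), (-1.0, (4.0, 0.0, 0.0)), (+0.5, (0.0, 3.0, 0.0))]
matrix = interaction_energy_matrix(demo)
# Net per-residue energy (row sums) flags residues contributing most to stability,
# analogous to the "net interaction energies" the server visualizes.
net = [sum(row) for row in matrix]
```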

  2. PseKRAAC: a flexible web server for generating pseudo K-tuple reduced amino acids composition.

    Science.gov (United States)

    Zuo, Yongchun; Li, Yuan; Chen, Yingli; Li, Guangpeng; Yan, Zhenhe; Yang, Lei

    2017-01-01

    Reduced amino acid alphabets offer a powerful means of both simplifying protein complexity and identifying functional conserved regions. However, different protein problems may require different kinds of clustering methods. Encouraged by the success of the pseudo-amino acid composition algorithm, we developed a freely available web server called PseKRAAC (pseudo K-tuple reduced amino acid composition). By implementing reduced amino acid alphabets, protein complexity can be significantly simplified, which decreases the chance of overfitting, lowers the computational burden and reduces information redundancy. PseKRAAC delivers more capability for protein research by incorporating three crucial parameters that describe protein composition. Users can easily generate many different modes of PseKRAAC tailored to their needs by selecting various reduced amino acid alphabets and other characteristic parameters. It is anticipated that the PseKRAAC web server will become a very useful tool in computational proteomics and protein sequence analysis. Availability: freely available on the web at http://bigdata.imu.edu.cn/psekraac. Contacts: yczuo@imu.edu.cn, imu.hema@foxmail.com or yanglei_hmu@163.com. Supplementary information: Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
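
    The core encoding is easy to sketch: map the 20 amino acids onto a reduced alphabet, then count K-tuples over the reduced sequence. The 5-group clustering below is an invented illustration, not one of PseKRAAC's actual alphabets.

```python
from collections import Counter
from itertools import product

# Invented 5-letter reduced alphabet: each key represents its member residues.
CLUSTERS = {
    "G": "AGV", "I": "ILFP", "Y": "YMTS", "H": "HNQW", "R": "RKDEC",
}
REDUCE = {aa: rep for rep, members in CLUSTERS.items() for aa in members}

def ktuple_composition(seq, k=2):
    """Normalized frequencies of all k-tuples over the reduced alphabet."""
    reduced = "".join(REDUCE[aa] for aa in seq)
    tuples = [reduced[i:i + k] for i in range(len(reduced) - k + 1)]
    counts = Counter(tuples)
    total = len(tuples)
    alphabet = sorted(CLUSTERS)
    return {"".join(t): counts["".join(t)] / total
            for t in product(alphabet, repeat=k)}

# "MKVLAG" reduces to "YRGIGG"; its 2-tuple composition is a 25-dim feature vector.
comp = ktuple_composition("MKVLAG", k=2)
```

A reduced alphabet shrinks the feature space from 20^k to (alphabet size)^k dimensions, which is exactly the overfitting/redundancy argument in the abstract.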

  3. WEB-server for search of a periodicity in amino acid and nucleotide sequences

    Science.gov (United States)

    Frenkel, F. E.; Skryabin, K. G.; Korotkov, E. V.

    2017-12-01

    A new web server (http://victoria.biengi.ac.ru/splinter/login.php) was designed and developed to search for periodicity in nucleotide and amino acid sequences. The web server's operation is based on a new mathematical method of searching for multiple alignments, founded on the optimization of position weight matrices and on two-dimensional dynamic programming. This approach allows the construction of multiple alignments of weakly similar amino acid and nucleotide sequences that have accumulated more than 1.5 substitutions per amino acid or nucleotide, without performing pairwise comparisons of the sequences. The article examines the principles of the web server's operation, two examples of studying amino acid and nucleotide sequences, and the information that can be obtained using the web server.
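
    A drastically simplified sketch of periodicity detection: fold the sequence at a trial period and score how biased each column's base composition is. The server's actual method optimizes position weight matrices with two-dimensional dynamic programming; this toy scoring only illustrates the underlying idea.

```python
import math
from collections import Counter

def period_score(seq, period):
    """Mean information content (bits) of the columns obtained by folding seq."""
    columns = [seq[i::period] for i in range(period)]
    total = 0.0
    for col in columns:
        counts = Counter(col)
        ic = math.log2(4)  # maximum entropy for a 4-letter nucleotide alphabet
        ic += sum((c / len(col)) * math.log2(c / len(col)) for c in counts.values())
        total += ic
    return total / period

seq = "ACGT" * 25  # perfectly periodic test sequence, period 4
best = max(range(2, 10), key=lambda p: period_score(seq, p))
```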

  4. RBscore&NBench: a high-level web server for nucleic acid binding residues prediction with a large-scale benchmarking database.

    Science.gov (United States)

    Miao, Zhichao; Westhof, Eric

    2016-07-08

    RBscore&NBench combines a web server, RBscore, and a database, NBench. RBscore predicts RNA-/DNA-binding residues in proteins and visualizes the prediction scores and features on protein structures. The scoring scheme of RBscore directly links feature values to nucleic acid binding probabilities and illustrates the nucleic acid binding energy funnel on the protein surface. To avoid biases in dataset choice, binding site definition and assessment metric, we compared RBscore with 18 web servers and 3 stand-alone programs on 41 datasets, which demonstrated the high and stable accuracy of RBscore. A comprehensive comparison led us to develop a benchmark database named NBench. The web server is available at: http://ahsoka.u-strasbg.fr/rbscorenbench/. © The Author(s) 2016. Published by Oxford University Press on behalf of Nucleic Acids Research.
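
    The phrase "directly links feature values to nucleic acid binding probabilities" can be illustrated with a logistic map from a weighted per-residue feature score to a probability. The features and weights below are invented placeholders, not RBscore's actual scheme.

```python
import math

def binding_probability(features, weights, bias=0.0):
    """Logistic transform of a weighted per-residue feature score."""
    score = bias + sum(w * f for w, f in zip(weights, features))
    return 1.0 / (1.0 + math.exp(-score))

# Hypothetical features: [electrostatic potential, conservation, burial].
p = binding_probability([1.2, 0.8, 0.5], weights=[1.0, 2.0, -0.5], bias=-1.0)
```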

  5. Anthropogenic and natural sources of acidity and metals and their influence on the structure of stream food webs

    International Nuclear Information System (INIS)

    Hogsden, Kristy L.; Harding, Jon S.

    2012-01-01

    We compared food web structure in 20 streams with either anthropogenic or natural sources of acidity and metals or circumneutral water chemistry in New Zealand. Community and diet analysis indicated that mining streams receiving anthropogenic inputs of acidic and metal-rich drainage had much simpler food webs (fewer species, shorter food chains, fewer links) than those in naturally acidic, naturally high-metal, and circumneutral streams. Food webs of naturally high-metal streams were structurally similar to those in mining streams, lacking fish predators and having few species. In contrast, webs in naturally acidic streams differed very little from those in circumneutral streams due to strong similarities in community composition and diets of secondary and top consumers. The combined negative effects of acidity and metals on stream food webs are clear. However, elevated metal concentrations, regardless of source, appear to play a more important role than acidity in driving food web structure. - Highlights: ► Food webs in acid mine drainage impacted streams are small and extremely simplified. ► Conductivity explained differences in food web properties between streams. ► Number of links and web size accounted for much dissimilarity between food webs. ► Food web structure was comparable in naturally acidic and circumneutral streams. - Food web structure differs in streams with anthropogenic and natural sources of acidity and metals.
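
    The structural properties compared here (species counts, number of links, food-chain length) are straightforward to compute from a web's adjacency structure; the small web below is an invented illustration, not data from the study.

```python
def food_web_metrics(web):
    """web: dict mapping consumer -> list of its resources (an acyclic food web)."""
    species = set(web) | {r for prey in web.values() for r in prey}
    links = sum(len(prey) for prey in web.values())

    def chain_length(node):
        prey = web.get(node, [])
        if not prey:
            return 0  # basal resource (e.g., algae)
        return 1 + max(chain_length(p) for p in prey)

    longest = max(chain_length(s) for s in species)
    return {"species": len(species), "links": links, "chain_length": longest}

# Invented circumneutral-style web: algae -> grazers -> invertebrate predator -> fish.
web = {"mayfly": ["algae"], "stonefly": ["algae"],
       "predator": ["mayfly", "stonefly"], "fish": ["predator", "mayfly"]}
metrics = food_web_metrics(web)
```

Removing the fish predator, as observed in mining and naturally high-metal streams, immediately shortens the longest chain and drops the link count.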

  6. Fatty acid biomarkers: validation of food web and trophic markers using C-13-labelled fatty acids in juvenile sandeel (Ammodytes tobianus)

    DEFF Research Database (Denmark)

    Dalsgaard, Anne Johanne Tang; St. John, Michael

    2004-01-01

    A key issue in marine science is parameterizing trophic interactions in marine food webs, thereby developing an understanding of the importance of top-down and bottom-up controls on populations of key trophic players. This study validates the utility of fatty acid food web and trophic markers using C-13-labelled fatty acids to verify the conservative incorporation of fatty acid tracers by juvenile sandeel (Ammodytes tobianus) and assess their uptake, clearance, and metabolic turnover rates. Juvenile sandeel were fed for 16 days in the laboratory on a formulated diet enriched in (13)C16... Lack of temporal trends in nonlabelled fatty acids confirmed the conservative incorporation of labelled fatty acids by the fish.

  7. Worldwide Research, Worldwide Participation: Web-Based Test Logger

    Science.gov (United States)

    Clark, David A.

    1998-01-01

    Thanks to the World Wide Web, a new paradigm has been born. ESCORT (steady state data system) facilities can now be configured to use a Web-based test logger, enabling worldwide participation in tests. NASA Lewis Research Center's new Web-based test logger for ESCORT automatically writes selected test and facility parameters to a browser and allows researchers to insert comments. All data can be viewed in real time via Internet connections, so anyone with a Web browser and the correct URL (uniform resource locator, or Web address) can interactively participate. As the test proceeds and ESCORT data are taken, Web browsers connected to the logger are updated automatically. The use of this logger has demonstrated several benefits. First, researchers are free from manual data entry and are able to focus more on the tests. Second, research logs can be printed in report format immediately after (or during) a test. And finally, all test information is readily available to an international public.
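
    The logging model described (automatic parameter entries interleaved with researcher comments, rendered afterwards as a report) can be sketched independently of the Web plumbing; the class and field names below are hypothetical, not ESCORT's.

```python
from datetime import datetime, timezone

class TestLogger:
    """Single timeline of automatic data entries and manual comments."""

    def __init__(self):
        self.entries = []

    def log_parameters(self, **params):
        # Called automatically as each data reading is taken.
        self.entries.append((datetime.now(timezone.utc), "data", params))

    def comment(self, text):
        # Called when a researcher inserts a note from the browser.
        self.entries.append((datetime.now(timezone.utc), "note", {"text": text}))

    def report(self):
        """Render the log in a printable report format."""
        lines = []
        for ts, kind, payload in self.entries:
            body = ", ".join(f"{k}={v}" for k, v in payload.items())
            lines.append(f"{ts:%H:%M:%S} [{kind}] {body}")
        return "\n".join(lines)

log = TestLogger()
log.log_parameters(mach=0.8, rpm=12000)   # invented facility parameters
log.comment("throttle step complete")
```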

  8. The Role of Highly Unsaturated Fatty Acids in Aquatic Food Webs

    Science.gov (United States)

    Perhar, G.; Arhonditsis, G. B.

    2009-05-01

    ... (i.e. diatoms) at their base can attain inverted biomass distributions with efficient energy transfer between trophic levels, making HUFA pathways in aquatic food webs of special interest to fisheries and environmental managers. Building on our previous work, which implicitly considered HUFAs through a proxy (a generic food quality term, which also indexes ingestibility, digestibility and toxicity), our aim is to elucidate the underlying mechanisms controlling HUFA transport through the lower aquatic food web, with an emphasis on the hypothesized somatic growth limiting potential. A biochemical submodel coupled to a plankton model has been formulated and calibrated, accounting explicitly for the omega-3 and omega-6 families of fatty acids; specifically, alpha-linolenic acid (ALA, a precursor to EPA), EPA and DHA. Further insights into the role of HUFAs on food web dynamics and the subsequent implications on ecosystem functioning are gained through bifurcation analysis of the model. Our research aims to elucidate the existing gaps in the literature pertaining to the role and impact of HUFAs on plankton dynamics, which have traditionally been thought to be driven by stoichiometric ratios and limiting nutrients. In this study, we challenge the notion of nutrients being the primary driving factor of aquatic ecosystem patterns by introducing a modeling framework that accounts for the interplay between nutrients and HUFAs.

  9. Web indicators for research evaluation: a practical guide

    CERN Document Server

    Thelwall, Michael

    2017-01-01

    In recent years there has been an increasing demand for research evaluation within universities and other research-based organisations. In parallel, there has been an increasing recognition that traditional citation-based indicators are not able to reflect the societal impacts of research and are slow to appear. This has led to the creation of new indicators for different types of research impact as well as timelier indicators, mainly derived from the Web. These indicators have been called altmetrics, webometrics or just web metrics. This book describes and evaluates a range of web indicators for aspects of societal or scholarly impact, discusses the theory and practice of using and evaluating web indicators for research assessment and outlines practical strategies for obtaining many web indicators. In addition to describing impact indicators for traditional scholarly outputs, such as journal articles and monographs, it also covers indicators for videos, datasets, software and other non-standard scholarly out...

  10. Tracing carbon sources through aquatic and terrestrial food webs using amino acid stable isotope fingerprinting.

    Directory of Open Access Journals (Sweden)

    Thomas Larsen

    Tracing the origin of nutrients is a fundamental goal of food web research but methodological issues associated with current research techniques such as using stable isotope ratios of bulk tissue can lead to confounding results. We investigated whether naturally occurring δ(13)C patterns among amino acids (δ(13)CAA) could distinguish between multiple aquatic and terrestrial primary production sources. We found that δ(13)CAA patterns, in contrast to bulk δ(13)C values, distinguished between carbon derived from algae, seagrass, terrestrial plants, bacteria and fungi. Furthermore, we showed for two aquatic producers that their δ(13)CAA patterns were largely unaffected by different environmental conditions despite substantial shifts in bulk δ(13)C values. The potential of assessing the major carbon sources at the base of the food web was demonstrated for freshwater, pelagic, and estuarine consumers; consumer δ(13)C patterns of essential amino acids largely matched those of the dominant primary producers in each system. Since amino acids make up about half of organismal carbon, source diagnostic isotope fingerprints can be used as a new complementary approach to overcome some of the limitations of variable source bulk isotope values commonly encountered in estuarine areas and other complex environments with mixed aquatic and terrestrial inputs.

  11. Tracing carbon sources through aquatic and terrestrial food webs using amino acid stable isotope fingerprinting.

    Science.gov (United States)

    Larsen, Thomas; Ventura, Marc; Andersen, Nils; O'Brien, Diane M; Piatkowski, Uwe; McCarthy, Matthew D

    2013-01-01

    Tracing the origin of nutrients is a fundamental goal of food web research but methodological issues associated with current research techniques such as using stable isotope ratios of bulk tissue can lead to confounding results. We investigated whether naturally occurring δ(13)C patterns among amino acids (δ(13)CAA) could distinguish between multiple aquatic and terrestrial primary production sources. We found that δ(13)CAA patterns in contrast to bulk δ(13)C values distinguished between carbon derived from algae, seagrass, terrestrial plants, bacteria and fungi. Furthermore, we showed for two aquatic producers that their δ(13)CAA patterns were largely unaffected by different environmental conditions despite substantial shifts in bulk δ(13)C values. The potential of assessing the major carbon sources at the base of the food web was demonstrated for freshwater, pelagic, and estuarine consumers; consumer δ(13)C patterns of essential amino acids largely matched those of the dominant primary producers in each system. Since amino acids make up about half of organismal carbon, source diagnostic isotope fingerprints can be used as a new complementary approach to overcome some of the limitations of variable source bulk isotope values commonly encountered in estuarine areas and other complex environments with mixed aquatic and terrestrial inputs.
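
    The matching step can be sketched as a nearest-centroid comparison of mean-normalized δ13C amino acid patterns, so that the relative pattern rather than the bulk offset drives the assignment; the study's actual analysis is multivariate, and all isotope values below are invented.

```python
def normalize(pattern):
    """Subtract the mean so the shape of the pattern, not the bulk offset, matters."""
    m = sum(pattern) / len(pattern)
    return [x - m for x in pattern]

def classify(sample, sources):
    """Assign a consumer sample to the source with the closest normalized pattern."""
    s = normalize(sample)

    def dist(name):
        ref = normalize(sources[name])
        return sum((a - b) ** 2 for a, b in zip(s, ref))

    return min(sources, key=dist)

# Hypothetical δ13C values for five essential amino acids per producer group.
sources = {"algae": [-20.0, -12.0, -25.0, -18.0, -15.0],
           "terrestrial plants": [-30.0, -28.0, -33.0, -26.0, -29.0]}
# A consumer whose pattern matches algae but is uniformly shifted by -2 per mil,
# mimicking a bulk-value offset that would confound a bulk-tissue comparison.
consumer = [-22.0, -14.0, -27.0, -20.0, -17.0]
label = classify(consumer, sources)
```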

  12. Advancing translational research with the Semantic Web

    Science.gov (United States)

    Ruttenberg, Alan; Clark, Tim; Bug, William; Samwald, Matthias; Bodenreider, Olivier; Chen, Helen; Doherty, Donald; Forsberg, Kerstin; Gao, Yong; Kashyap, Vipul; Kinoshita, June; Luciano, Joanne; Marshall, M Scott; Ogbuji, Chimezie; Rees, Jonathan; Stephens, Susie; Wong, Gwendolyn T; Wu, Elizabeth; Zaccagnini, Davide; Hongsermeier, Tonya; Neumann, Eric; Herman, Ivan; Cheung, Kei-Hoi

    2007-01-01

    Background A fundamental goal of the U.S. National Institutes of Health (NIH) "Roadmap" is to strengthen Translational Research, defined as the movement of discoveries in basic research to application at the clinical level. A significant barrier to translational research is the lack of uniformly structured data across related biomedical domains. The Semantic Web is an extension of the current Web that enables navigation and meaningful use of digital resources by automatic processes. It is based on common formats that support aggregation and integration of data drawn from diverse sources. A variety of technologies have been built on this foundation that, together, support identifying, representing, and reasoning across a wide range of biomedical data. The Semantic Web Health Care and Life Sciences Interest Group (HCLSIG), set up within the framework of the World Wide Web Consortium, was launched to explore the application of these technologies in a variety of areas. Subgroups focus on making biomedical data available in RDF, working with biomedical ontologies, prototyping clinical decision support systems, working on drug safety and efficacy communication, and supporting disease researchers navigating and annotating the large amount of potentially relevant literature. Results We present a scenario that shows the value of the information environment the Semantic Web can support for aiding neuroscience researchers. We then report on several projects by members of the HCLSIG, in the process illustrating the range of Semantic Web technologies that have applications in areas of biomedicine. Conclusion Semantic Web technologies present both promise and challenges. Current tools and standards are already adequate to implement components of the bench-to-bedside vision. On the other hand, these technologies are young. Gaps in standards and implementations still exist and adoption is limited by typical problems with early technology, such as the need for a critical mass of practitioners and an installed base.
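
    As a minimal illustration of the first subgroup activity, "making biomedical data available in RDF", the sketch below serializes a few invented gene annotations as N-Triples; every URI is a hypothetical placeholder, not a real vocabulary.

```python
def ntriple(subj, pred, obj):
    """Render one RDF triple in N-Triples syntax (IRI vs. plain-literal terms)."""
    def term(t):
        return f"<{t}>" if t.startswith("http") else f'"{t}"'
    return f"{term(subj)} {term(pred)} {term(obj)} ."

# Invented subject-predicate-object rows, as might be exported from a table.
rows = [
    ("http://example.org/gene/BDNF", "http://example.org/vocab/expressedIn",
     "http://example.org/anatomy/hippocampus"),
    ("http://example.org/gene/BDNF", "http://www.w3.org/2000/01/rdf-schema#label",
     "brain-derived neurotrophic factor"),
]
graph = "\n".join(ntriple(*row) for row in rows)
```

Because N-Triples is a line-oriented common format, independently produced files like this can simply be concatenated and loaded into one store, which is the aggregation property the abstract emphasizes.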

  13. Advancing translational research with the Semantic Web.

    Science.gov (United States)

    Ruttenberg, Alan; Clark, Tim; Bug, William; Samwald, Matthias; Bodenreider, Olivier; Chen, Helen; Doherty, Donald; Forsberg, Kerstin; Gao, Yong; Kashyap, Vipul; Kinoshita, June; Luciano, Joanne; Marshall, M Scott; Ogbuji, Chimezie; Rees, Jonathan; Stephens, Susie; Wong, Gwendolyn T; Wu, Elizabeth; Zaccagnini, Davide; Hongsermeier, Tonya; Neumann, Eric; Herman, Ivan; Cheung, Kei-Hoi

    2007-05-09

    A fundamental goal of the U.S. National Institutes of Health (NIH) "Roadmap" is to strengthen Translational Research, defined as the movement of discoveries in basic research to application at the clinical level. A significant barrier to translational research is the lack of uniformly structured data across related biomedical domains. The Semantic Web is an extension of the current Web that enables navigation and meaningful use of digital resources by automatic processes. It is based on common formats that support aggregation and integration of data drawn from diverse sources. A variety of technologies have been built on this foundation that, together, support identifying, representing, and reasoning across a wide range of biomedical data. The Semantic Web Health Care and Life Sciences Interest Group (HCLSIG), set up within the framework of the World Wide Web Consortium, was launched to explore the application of these technologies in a variety of areas. Subgroups focus on making biomedical data available in RDF, working with biomedical ontologies, prototyping clinical decision support systems, working on drug safety and efficacy communication, and supporting disease researchers navigating and annotating the large amount of potentially relevant literature. We present a scenario that shows the value of the information environment the Semantic Web can support for aiding neuroscience researchers. We then report on several projects by members of the HCLSIG, in the process illustrating the range of Semantic Web technologies that have applications in areas of biomedicine. Semantic Web technologies present both promise and challenges. Current tools and standards are already adequate to implement components of the bench-to-bedside vision. On the other hand, these technologies are young. Gaps in standards and implementations still exist and adoption is limited by typical problems with early technology, such as the need for a critical mass of practitioners and an installed base.

  14. Advancing translational research with the Semantic Web

    Directory of Open Access Journals (Sweden)

    Marshall M Scott

    2007-05-01

    Background A fundamental goal of the U.S. National Institutes of Health (NIH) "Roadmap" is to strengthen Translational Research, defined as the movement of discoveries in basic research to application at the clinical level. A significant barrier to translational research is the lack of uniformly structured data across related biomedical domains. The Semantic Web is an extension of the current Web that enables navigation and meaningful use of digital resources by automatic processes. It is based on common formats that support aggregation and integration of data drawn from diverse sources. A variety of technologies have been built on this foundation that, together, support identifying, representing, and reasoning across a wide range of biomedical data. The Semantic Web Health Care and Life Sciences Interest Group (HCLSIG), set up within the framework of the World Wide Web Consortium, was launched to explore the application of these technologies in a variety of areas. Subgroups focus on making biomedical data available in RDF, working with biomedical ontologies, prototyping clinical decision support systems, working on drug safety and efficacy communication, and supporting disease researchers navigating and annotating the large amount of potentially relevant literature. Results We present a scenario that shows the value of the information environment the Semantic Web can support for aiding neuroscience researchers. We then report on several projects by members of the HCLSIG, in the process illustrating the range of Semantic Web technologies that have applications in areas of biomedicine. Conclusion Semantic Web technologies present both promise and challenges. Current tools and standards are already adequate to implement components of the bench-to-bedside vision. On the other hand, these technologies are young. Gaps in standards and implementations still exist and adoption is limited by typical problems with early technology, such as the need for a critical mass of practitioners and an installed base.

  15. Bipolar disorder research 2.0: Web technologies for research capacity and knowledge translation.

    Science.gov (United States)

    Michalak, Erin E; McBride, Sally; Barnes, Steven J; Wood, Chanel S; Khatri, Nasreen; Balram Elliott, Nusha; Parikh, Sagar V

    2017-12-01

    Current Web technologies offer bipolar disorder (BD) researchers many untapped opportunities for conducting research and for promoting knowledge exchange. In the present paper, we document our experiences with a variety of Web 2.0 technologies in the context of an international BD research network: The Collaborative RESearch Team to Study psychosocial issues in BD (CREST.BD). Three technologies were used as tools for enabling research within CREST.BD and for encouraging the dissemination of the results of our research: (1) the crestbd.ca website, (2) social networking tools (ie, Facebook, Twitter), and (3) several sorts of file sharing (ie, YouTube, FileShare). For each Web technology, we collected quantitative assessments of their effectiveness (in reach, exposure, and engagement) over a 6-year timeframe (2010-2016). In general, many of our strategies were deemed successful for promoting knowledge exchange and other network goals. We discuss how we applied our Web analytics to inform adaptations and refinements of our Web 2.0 platforms to maximise knowledge exchange with people with BD, their supporters, and health care providers. We conclude with some general recommendations for other mental health researchers and research networks interested in pursuing Web 2.0 strategies. © 2017 John Wiley & Sons, Ltd.

  16. Anthropogenic and natural sources of acidity and metals and their influence on the structure of stream food webs.

    Science.gov (United States)

    Hogsden, Kristy L; Harding, Jon S

    2012-03-01

    We compared food web structure in 20 streams with either anthropogenic or natural sources of acidity and metals or circumneutral water chemistry in New Zealand. Community and diet analysis indicated that mining streams receiving anthropogenic inputs of acidic and metal-rich drainage had much simpler food webs (fewer species, shorter food chains, fewer links) than those in naturally acidic, naturally high-metal, and circumneutral streams. Food webs of naturally high-metal streams were structurally similar to those in mining streams, lacking fish predators and having few species. In contrast, webs in naturally acidic streams differed very little from those in circumneutral streams due to strong similarities in community composition and diets of secondary and top consumers. The combined negative effects of acidity and metals on stream food webs are clear. However, elevated metal concentrations, regardless of source, appear to play a more important role than acidity in driving food web structure. Copyright © 2011 Elsevier Ltd. All rights reserved.

  17. Customizable scientific web portal for fusion research

    International Nuclear Information System (INIS)

    Abla, G.; Kim, E.N.; Schissel, D.P.; Flanagan, S.M.

    2010-01-01

    Web browsers have become a major application interface for participating in scientific experiments such as those in magnetic fusion. The recent advances in web technologies motivated the deployment of interactive web applications with rich features. In the scientific world, web applications have been deployed in portal environments. When used in a scientific research environment, such as fusion experiments, web portals can present diverse sources of information in a unified interface. However, the design and development of a scientific web portal has its own challenges. One such challenge is that a web portal needs to be fast and interactive despite the high volume of information and number of tools it presents. Another challenge is that the visual output of the web portal must not be overwhelming to the end users, despite the high volume of data generated by fusion experiments. Therefore, the applications and information should be customizable depending on the needs of end users. In order to meet these challenges, the design and implementation of a web portal needs to support high interactivity and user customization. A web portal has been designed to support the experimental activities of DIII-D researchers worldwide by providing multiple services, such as real-time experiment status monitoring, diagnostic data access and interactive data visualization. The web portal also supports interactive collaborations by providing a collaborative logbook, shared visualization and online instant messaging services. The portal's design utilizes the multi-tier software architecture and has been implemented utilizing web 2.0 technologies, such as AJAX, Django, and Memcached, to develop a highly interactive and customizable user interface. It offers a customizable interface with personalized page layouts and list of services, which allows users to create a unique, personalized working environment to fit their own needs and interests. This paper describes the software

  18. Customizable scientific web portal for fusion research

    Energy Technology Data Exchange (ETDEWEB)

    Abla, G., E-mail: abla@fusion.gat.co [General Atomics, P.O. Box 85608, San Diego, CA (United States); Kim, E.N.; Schissel, D.P.; Flanagan, S.M. [General Atomics, P.O. Box 85608, San Diego, CA (United States)

    2010-07-15

    Web browsers have become a major application interface for participating in scientific experiments such as those in magnetic fusion. The recent advances in web technologies motivated the deployment of interactive web applications with rich features. In the scientific world, web applications have been deployed in portal environments. When used in a scientific research environment, such as fusion experiments, web portals can present diverse sources of information in a unified interface. However, the design and development of a scientific web portal has its own challenges. One such challenge is that a web portal needs to be fast and interactive despite the high volume of information and number of tools it presents. Another challenge is that the visual output of the web portal must not be overwhelming to the end users, despite the high volume of data generated by fusion experiments. Therefore, the applications and information should be customizable depending on the needs of end users. In order to meet these challenges, the design and implementation of a web portal needs to support high interactivity and user customization. A web portal has been designed to support the experimental activities of DIII-D researchers worldwide by providing multiple services, such as real-time experiment status monitoring, diagnostic data access and interactive data visualization. The web portal also supports interactive collaborations by providing a collaborative logbook, shared visualization and online instant messaging services. The portal's design utilizes the multi-tier software architecture and has been implemented utilizing web 2.0 technologies, such as AJAX, Django, and Memcached, to develop a highly interactive and customizable user interface. It offers a customizable interface with personalized page layouts and list of services, which allows users to create a unique, personalized working environment to fit their own needs and interests. This paper describes the software

  19. Mfold web server for nucleic acid folding and hybridization prediction.

    Science.gov (United States)

    Zuker, Michael

    2003-07-01

    The abbreviated name, 'mfold web server', describes a number of closely related software applications available on the World Wide Web (WWW) for the prediction of the secondary structure of single stranded nucleic acids. The objective of this web server is to provide easy access to RNA and DNA folding and hybridization software to the scientific community at large. By making use of universally available web GUIs (Graphical User Interfaces), the server circumvents the problem of portability of this software. Detailed output, in the form of structure plots with or without reliability information, single strand frequency plots and 'energy dot plots', are available for the folding of single sequences. A variety of 'bulk' servers give less information, but in a shorter time and for up to hundreds of sequences at once. The portal for the mfold web server is http://www.bioinfo.rpi.edu/applications/mfold. This URL will be referred to as 'MFOLDROOT'.
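
    mfold minimizes free energy using experimentally derived thermodynamic rules, which is far richer than any short sketch. The classic Nussinov algorithm, however, shows the dynamic-programming core shared by secondary-structure predictors: maximize the number of nested complementary base pairs, subject to a minimum hairpin-loop size.

```python
def nussinov_pairs(seq, min_loop=3):
    """Maximum number of nested base pairs (Watson-Crick plus GU wobble)."""
    pairs = {("A", "U"), ("U", "A"), ("G", "C"), ("C", "G"), ("G", "U"), ("U", "G")}
    n = len(seq)
    dp = [[0] * n for _ in range(n)]
    for span in range(min_loop + 1, n):          # grow subsequence length
        for i in range(n - span):
            j = i + span
            best = dp[i][j - 1]                  # case 1: j left unpaired
            for k in range(i, j - min_loop):     # case 2: j pairs with some k
                if (seq[k], seq[j]) in pairs:
                    left = dp[i][k - 1] if k > i else 0
                    best = max(best, left + 1 + dp[k + 1][j - 1])
            dp[i][j] = best
    return dp[0][n - 1] if n else 0

# Hairpin GGG-AAAU-CCC: three G-C pairs close a four-base loop.
count = nussinov_pairs("GGGAAAUCCC")
```

Energy minimization replaces this pair count with stacking and loop free energies, but the O(n^3) recursion over subsequences is the same shape.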

  20. URBANIZATION ALTERS FATTY ACID CONCENTRATIONS OF STREAM FOOD WEBS IN THE NARRAGANSETT BAY WATERSHED

    Science.gov (United States)

    Urbanization and associated human activities negatively affect stream algal and invertebrate assemblages, likely altering food webs. Our goal was to determine if urbanization affects food web essential fatty acids (EFAs) and if EFAs could be useful ecological indicators in monito...

  1. Identification of trophic interactions within an estuarine food web (northern New Zealand) using fatty acid biomarkers and stable isotopes

    Science.gov (United States)

    Alfaro, Andrea C.; Thomas, François; Sergent, Luce; Duxbury, Mark

    2006-10-01

    Fatty acid biomarkers and stable isotope signatures were used to identify the trophic dynamics of a mangrove/seagrass estuarine food web at Matapouri, northern New Zealand. Specific fatty acids were used to identify the preferred food sources (i.e., mangroves, seagrass, phytoplankton, macroalgae, bacteria, and zooplankton) of dominant fauna (i.e., filter feeders, grazing snails, scavenger/predatory snails, shrimp, crabs, and fish), and their presence in water and sediment samples throughout the estuary. The diets of filter feeders were found to be dominated by dinoflagellates, whereas grazers showed a higher diatom contribution. Bacteria associated with organic debris on surface sediments and brown algal ( Hormosira banksii) material in the form of suspended organic matter also accounted for a high proportion of most animal diets. Animals within higher trophic levels had diverse fatty acid profiles, revealing their varied feeding strategies and carbon sources. The stable isotope (δ 13C and δ 15N) analyses of major primary producers and consumers/predators revealed a trend of 15N enrichment with increasing trophic level, while δ 13C values provided a generally good description of carbon flow through the food web. Overall results from both fatty acid profiles and stable isotopes indicate that a variety of carbon sources with a range of trophic pathways typify this food web. Moreover, none of the animals studied was dependent on a single food source. This study is the first to use a comprehensive fatty acid biomarker and stable isotope approach to investigate the food web dynamics within a New Zealand temperate mangrove/seagrass estuary. This quantitative research may contribute to the currently developing management strategies for estuaries in northern New Zealand, especially for those perceived to have expanding mangrove fringes.

  2. The use of advanced web-based survey design in Delphi research.

    Science.gov (United States)

    Helms, Christopher; Gardner, Anne; McInnes, Elizabeth

    2017-12-01

    A discussion of the application of metadata, paradata and embedded data in web-based survey research, using two completed Delphi surveys as examples. Metadata, paradata and embedded data use in web-based Delphi surveys has not been described in the literature. The rapid evolution and widespread use of online survey methods imply that paper-based Delphi methods will likely become obsolete. Commercially available web-based survey tools offer a convenient and affordable means of conducting Delphi research. Researchers and ethics committees may be unaware of the benefits and risks of using metadata in web-based surveys. Discussion paper. Two web-based, three-round Delphi surveys were conducted sequentially between August 2014 - January 2015 and April - May 2016. Their aims were to validate the Australian nurse practitioner metaspecialties and their respective clinical practice standards. Our discussion paper is supported by researcher experience and data obtained from conducting both web-based Delphi surveys. Researchers and ethics committees should consider the benefits and risks of metadata use in web-based survey methods. Web-based Delphi research using paradata and embedded data may introduce efficiencies that improve individual participant survey experiences and reduce attrition across iterations. Use of embedded data allows the efficient conduct of multiple simultaneous Delphi surveys across a shorter timeframe than traditional survey methods. The use of metadata, paradata and embedded data appears to improve response rates, identify bias and give possible explanation for apparent outlier responses, providing an efficient method of conducting web-based Delphi surveys. © 2017 John Wiley & Sons Ltd.

  3. Invisible Web and Academic Research: A Partnership for Quality

    Science.gov (United States)

    Alyami, Huda Y.; Assiri, Eman A.

    2018-01-01

    The present study aims to identify the most significant roles of the invisible web in improving academic research and the main obstacles and challenges facing the use of the invisible web in improving academic research from the perspective of academics in Saudi universities. The descriptive analytical approach was utilized in this study. It…

  4. The Use of Web Search Engines in Information Science Research.

    Science.gov (United States)

    Bar-Ilan, Judit

    2004-01-01

    Reviews the literature on the use of Web search engines in information science research, including: ways users interact with Web search engines; social aspects of searching; structure and dynamic nature of the Web; link analysis; other bibliometric applications; characterizing information on the Web; search engine evaluation and improvement; and…

  5. A decade of Web Server updates at the Bioinformatics Links Directory: 2003-2012.

    Science.gov (United States)

    Brazas, Michelle D; Yim, David; Yeung, Winston; Ouellette, B F Francis

    2012-07-01

    The 2012 Bioinformatics Links Directory update marks the 10th special Web Server issue from Nucleic Acids Research. Beginning with content from their 2003 publication, the Bioinformatics Links Directory in collaboration with Nucleic Acids Research has compiled and published a comprehensive list of freely accessible, online tools, databases and resource materials for the bioinformatics and life science research communities. The past decade has exhibited significant growth and change in the types of tools, databases and resources being put forth, reflecting both technology changes and the nature of research over that time. With the addition of 90 web server tools and 12 updates from the July 2012 Web Server issue of Nucleic Acids Research, the Bioinformatics Links Directory at http://bioinformatics.ca/links_directory/ now contains an impressive 134 resources, 455 databases and 1205 web server tools, mirroring the continued activity and efforts of our field.

  6. Strategies to address participant misrepresentation for eligibility in Web-based research.

    Science.gov (United States)

    Kramer, Jessica; Rubin, Amy; Coster, Wendy; Helmuth, Eric; Hermos, John; Rosenbloom, David; Moed, Rich; Dooley, Meghan; Kao, Ying-Chia; Liljenquist, Kendra; Brief, Deborah; Enggasser, Justin; Keane, Terence; Roy, Monica; Lachowicz, Mark

    2014-03-01

    Emerging methodological research suggests that the World Wide Web ("Web") is an appropriate venue for survey data collection, and a promising area for delivering behavioral intervention. However, the use of the Web for research raises concerns regarding sample validity, particularly when the Web is used for recruitment and enrollment. The purpose of this paper is to describe the challenges experienced in two different Web-based studies in which participant misrepresentation threatened sample validity: a survey study and an online intervention study. The lessons learned from these experiences generated three types of strategies researchers can use to reduce the likelihood of participant misrepresentation for eligibility in Web-based research. Examples of procedural/design strategies, technical/software strategies and data analytic strategies are provided along with the methodological strengths and limitations of specific strategies. The discussion includes a series of considerations to guide researchers in the selection of strategies that may be most appropriate given the aims, resources and target population of their studies. Copyright © 2014 John Wiley & Sons, Ltd.
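The record above names three families of strategies (procedural/design, technical/software, data analytic) without spelling them out. As a purely illustrative sketch of the data-analytic kind, one might screen responses for repeat IP addresses and implausibly fast completion times; the field names, sample values and threshold below are assumptions for illustration, not the studies' actual criteria:

```python
from collections import Counter

# Illustrative response records; the schema is hypothetical.
responses = [
    {"id": 1, "ip": "10.0.0.5", "seconds": 612},
    {"id": 2, "ip": "10.0.0.5", "seconds": 598},   # same IP as id 1
    {"id": 3, "ip": "10.0.0.9", "seconds": 45},    # implausibly fast
    {"id": 4, "ip": "10.0.0.7", "seconds": 710},
]

def flag_suspect(rows, min_seconds=120):
    """Flag ids that may represent repeat enrollment or non-genuine attempts."""
    ip_counts = Counter(r["ip"] for r in rows)
    flagged = set()
    for r in rows:
        if ip_counts[r["ip"]] > 1:      # possible repeat enrollment
            flagged.add(r["id"])
        if r["seconds"] < min_seconds:  # too fast to be a real attempt
            flagged.add(r["id"])
    return flagged

flagged = flag_suspect(responses)
print(sorted(flagged))  # → [1, 2, 3]
```

Flags like these identify candidates for follow-up rather than automatic exclusion, which matches the papers' framing of weighing strengths and limitations per strategy.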

  7. Customisable Scientific Web Portal for Fusion Research

    Energy Technology Data Exchange (ETDEWEB)

    Abla, G; Kim, E; Schissel, D; Flannagan, S [General Atomics, San Diego (United States)]

    2009-07-01

    The Web browser has become one of the major application interfaces for remotely participating in magnetic fusion. Web portals are used to present very diverse sources of information in a unified way. While a web portal has several benefits over other software interfaces, such as providing a single point of access for multiple computational services and eliminating the need for client software installation, the design and development of a web portal has unique challenges. One of the challenges is that a web portal needs to be fast and interactive despite the high volume of tools and information that it presents. Another challenge is that the visual output of a web portal is often overwhelming due to the high volume of data generated by complex scientific instruments and experiments; the applications and information should therefore be customizable depending on the needs of users. An appropriate software architecture and suitable web technologies can address these problems. A web portal has been designed to support the experimental activities of DIII-D researchers worldwide. It utilizes a multi-tier software architecture and web 2.0 technologies, such as AJAX, Django, and Memcached, to develop a highly interactive and customizable user interface. It offers a customizable interface with personalized page layouts and a list of services for users to select from. Customizable services include real-time experiment status monitoring, diagnostic data access and interactive data visualization. The web portal also supports interactive collaborations by providing a collaborative logbook, shared visualization and online instant messaging services. Furthermore, the web portal will provide a mechanism to allow users to create their own applications on the web portal, as well as bridging capabilities to external applications such as Twitter and other social networks.
In this series of slides, we describe the software architecture of this scientific web portal and our experiences in utilizing web 2.0 technologies.
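The record above credits Memcached, alongside AJAX and Django, for keeping the portal fast and interactive. As a rough, self-contained illustration of the caching idea (not the portal's actual code), a short-lived cache lets repeated AJAX requests for the same experiment status skip the slow backend query:

```python
import time

class TTLCache:
    """Minimal in-process stand-in for the role Memcached plays in such a
    portal: remember expensive lookups for a short time so repeated AJAX
    requests do not hit the data server every time."""

    def __init__(self, ttl_seconds=30.0):
        self.ttl = ttl_seconds
        self._store = {}  # key -> (expires_at, value)

    def get_or_compute(self, key, compute):
        now = time.monotonic()
        hit = self._store.get(key)
        if hit is not None and hit[0] > now:
            return hit[1]          # fresh cached value: no backend call
        value = compute()          # cache miss or expired: recompute
        self._store[key] = (now + self.ttl, value)
        return value

calls = []

def fetch_shot_status(shot):
    # Hypothetical placeholder for a slow diagnostic-data query.
    calls.append(shot)
    return {"shot": shot, "state": "complete"}

cache = TTLCache(ttl_seconds=30.0)
a = cache.get_or_compute(152001, lambda: fetch_shot_status(152001))
b = cache.get_or_compute(152001, lambda: fetch_shot_status(152001))
print(len(calls), a == b)  # backend queried once; both requests see the data
```

The shot number and status fields are invented for the example; the point is only that caching turns many identical browser requests into one backend query per time window.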

  8. Augmenting Research, Education, and Outreach with Client-Side Web Programming.

    Science.gov (United States)

    Abriata, Luciano A; Rodrigues, João P G L M; Salathé, Marcel; Patiny, Luc

    2018-05-01

    The evolution of computing and web technologies over the past decade has enabled the development of fully fledged scientific applications that run directly on web browsers. Powered by JavaScript, the lingua franca of web programming, these 'web apps' are starting to revolutionize and democratize scientific research, education, and outreach. Copyright © 2017 Elsevier Ltd. All rights reserved.

  9. Collecting behavioural data using the world wide web: considerations for researchers.

    Science.gov (United States)

    Rhodes, S D; Bowie, D A; Hergenrather, K C

    2003-01-01

    To identify and describe advantages, challenges, and ethical considerations of web based behavioural data collection. This discussion is based on the authors' experiences in survey development and study design, respondent recruitment, and internet research, and on the experiences of others as found in the literature. The advantages of using the world wide web to collect behavioural data include rapid access to numerous potential respondents and previously hidden populations, respondent openness and full participation, opportunities for student research, and reduced research costs. Challenges identified include issues related to sampling and sample representativeness, competition for the attention of respondents, and potential limitations resulting from the much cited "digital divide", literacy, and disability. Ethical considerations include anonymity and privacy, providing and substantiating informed consent, and potential risks of malfeasance. Computer mediated communications, including electronic mail, the world wide web, and interactive programs will play an ever increasing part in the future of behavioural science research. Justifiable concerns regarding the use of the world wide web in research exist, but as access to, and use of, the internet becomes more widely and representatively distributed globally, the world wide web will become more applicable. In fact, the world wide web may be the only research tool able to reach some previously hidden population subgroups. Furthermore, many of the criticisms of online data collection are common to other survey research methodologies.

  10. Accelerating cancer systems biology research through Semantic Web technology.

    Science.gov (United States)

    Wang, Zhihui; Sagotsky, Jonathan; Taylor, Thomas; Shironoshita, Patrick; Deisboeck, Thomas S

    2013-01-01

    Cancer systems biology is an interdisciplinary, rapidly expanding research field in which collaborations are a critical means to advance the field. Yet the prevalent database technologies often isolate data rather than making it easily accessible. The Semantic Web has the potential to help facilitate web-based collaborative cancer research by presenting data in a manner that is self-descriptive, human and machine readable, and easily sharable. We have created a semantically linked online Digital Model Repository (DMR) for storing, managing, executing, annotating, and sharing computational cancer models. Within the DMR, distributed, multidisciplinary, and inter-organizational teams can collaborate on projects, without forfeiting intellectual property. This is achieved by the introduction of a new stakeholder to the collaboration workflow, the institutional licensing officer, part of the Technology Transfer Office. Furthermore, the DMR has achieved silver level compatibility with the National Cancer Institute's caBIG, so users can interact with the DMR not only through a web browser but also through a semantically annotated and secure web service. We also discuss the technology behind the DMR leveraging the Semantic Web, ontologies, and grid computing to provide secure inter-institutional collaboration on cancer modeling projects, online grid-based execution of shared models, and the collaboration workflow protecting researchers' intellectual property. Copyright © 2012 Wiley Periodicals, Inc.
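The record above describes data made "self-descriptive, human and machine readable" via the Semantic Web. At its core this means expressing facts as subject-predicate-object triples that can be pattern-matched; the toy store below uses made-up DMR-style names (not the repository's actual vocabulary) to sketch the idea:

```python
# Hypothetical triples in the spirit of semantic model annotations;
# all identifiers here are illustrative assumptions.
triples = {
    ("model:42", "dc:title", "Hybrid tumor growth model"),
    ("model:42", "dmr:author", "person:lee"),
    ("model:42", "dmr:simulates", "disease:glioma"),
    ("person:lee", "dmr:memberOf", "org:lab7"),
}

def match(pattern, store):
    """Return all triples matching an (s, p, o) pattern; None is a wildcard."""
    s, p, o = pattern
    return [t for t in store
            if (s is None or t[0] == s)
            and (p is None or t[1] == p)
            and (o is None or t[2] == o)]

# "What does model:42 say about itself?"
out = match(("model:42", None, None), triples)
print(sorted(out))
```

Real deployments use RDF stores and ontologies rather than Python sets, but the query-by-pattern principle, which is what makes the data machine readable and easily sharable, is the same.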

  12. Using the open Web as an information resource and scholarly Web search engines as retrieval tools for academic and research purposes

    Directory of Open Access Journals (Sweden)

    Filistea Naude

    2010-08-01

    Full Text Available This study provided insight into the significance of the open Web as an information resource and Web search engines as research tools amongst academics. The academic staff establishment of the University of South Africa (Unisa) was invited to participate in a questionnaire survey and included 1188 staff members from five colleges. This study culminated in a PhD dissertation in 2008. One hundred and eighty seven respondents participated in the survey, which gave a response rate of 15.7%. The results of this study show that academics have indeed accepted the open Web as a useful information resource and Web search engines as retrieval tools when seeking information for academic and research work. The majority of respondents used the open Web and Web search engines on a daily or weekly basis to source academic and research information. The main obstacles presented by using the open Web and Web search engines included lack of time to search and browse the Web, information overload, poor network speed and the slow downloading speed of webpages.
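The response rate reported in the record above follows directly from the counts given, assuming the rate is simply respondents over invitees:

```python
invited = 1188            # academic staff invited (five colleges)
responded = 187           # questionnaires completed
rate = responded / invited * 100
print(f"{rate:.1f}%")     # matches the reported 15.7% response rate
```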

  14. Research, Collaboration, and Open Science Using Web 2.0

    Directory of Open Access Journals (Sweden)

    Kevin Shee

    2010-10-01

    Full Text Available There is little doubt that the Internet has transformed the world in which we live. Information that was once archived in bricks and mortar libraries is now only a click away, and people across the globe have become connected in a manner inconceivable only 20 years ago. Although many scientists and educators have embraced the Internet as an invaluable tool for research, education and data sharing, some have been somewhat slower to take full advantage of emerging Web 2.0 technologies. Here we discuss the benefits and challenges of integrating Web 2.0 applications into undergraduate research and education programs, based on our experience utilizing these technologies in a summer undergraduate research program in synthetic biology at Harvard University. We discuss the use of applications including wiki-based documentation, digital brainstorming, and open data sharing via the Web, to facilitate the educational aspects and collaborative progress of undergraduate research projects. We hope to inspire others to integrate these technologies into their own coursework or research projects.

  15. Characterizing interdisciplinarity of researchers and research topics using web search engines.

    Science.gov (United States)

    Sayama, Hiroki; Akaishi, Jin

    2012-01-01

    Researchers' networks have been subject to active modeling and analysis. Earlier literature mostly focused on citation or co-authorship networks reconstructed from annotated scientific publication databases, which have several limitations. Recently, general-purpose web search engines have also been utilized to collect information about social networks. Here we reconstructed, using web search engines, a network representing the relatedness of researchers to their peers as well as to various research topics. Relatedness between researchers and research topics was characterized by visibility boost: the increase of a researcher's visibility by focusing on a particular topic. It was observed that researchers who had high visibility boosts by the same research topic tended to be close to each other in their network. We calculated correlations between visibility boosts by research topics and researchers' interdisciplinarity at the individual level (diversity of topics related to the researcher) and at the social level (his/her centrality in the researchers' network). We found that visibility boosts by certain research topics were positively correlated with researchers' individual-level interdisciplinarity despite their negative correlations with the general popularity of researchers. It was also found that visibility boosts by network-related topics had positive correlations with researchers' social-level interdisciplinarity. Research topics' correlations with researchers' individual- and social-level interdisciplinarities were found to be nearly independent from each other. These findings suggest that the notion of "interdisciplinarity" of a researcher should be understood as a multi-dimensional concept that should be evaluated using multiple assessment means.
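The record above characterizes researcher-topic relatedness by a "visibility boost" derived from search-engine results. One plausible reading, sketched below with made-up hit counts (a real study would query an engine's API, and this exact formula is an assumption rather than necessarily the paper's), is the share of a researcher's hits that co-occur with a topic:

```python
# Hypothetical search-engine hit counts, invented for illustration.
hits = {
    "Jane Doe": 5200,                          # hits for the name alone
    ("Jane Doe", "complex networks"): 1300,    # name AND topic
    ("Jane Doe", "quantum chemistry"): 26,
}

def visibility_boost(name, topic, hit_counts):
    """Fraction of a researcher's web visibility concentrated on a topic."""
    return hit_counts[(name, topic)] / hit_counts[name]

boost = visibility_boost("Jane Doe", "complex networks", hits)
print(round(boost, 3))  # → 0.25
```

Comparing such ratios across many topics gives a per-researcher topic profile, whose diversity is one way to quantify the individual-level interdisciplinarity the abstract discusses.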

  16. Research on Web Search Behavior: How Online Query Data Inform Social Psychology.

    Science.gov (United States)

    Lai, Kaisheng; Lee, Yan Xin; Chen, Hao; Yu, Rongjun

    2017-10-01

    The widespread use of web searches in daily life has allowed researchers to study people's online social and psychological behavior. Using web search data has advantages in terms of data objectivity, ecological validity, temporal resolution, and unique application value. This review integrates existing studies on web search data that have explored topics including sexual behavior, suicidal behavior, mental health, social prejudice, social inequality, public responses to policies, and other psychosocial issues. These studies are categorized as descriptive, correlational, inferential, predictive, and policy evaluation research. The integration of theory-based hypothesis testing in future web search research will result in even stronger contributions to social psychology.

  17. Web-ethics from the Perspective of a Series of Social Research Projects

    OpenAIRE

    CRUZ, HERNANDO; Docente Dpto. Ciencia de la Información - Pontificia Universidad Javeriana; Bogotá

    2009-01-01

    This article puts forth the perspective of an ethics for the web or web-ethics, which the author has identified while doing research in Colombia. The research work has dealt with education, management, design, communication, and use and retrieval of information in the web from 1998 to 2007, particularly the theoretical revision and critical analyses of a specific corpus of research work. These analyses have in turn led to new questions and challenges related to the balance which must be foun...

  18. The state of web-based research: A survey and call for inclusion in curricula.

    Science.gov (United States)

    Krantz, John H; Reips, Ulf-Dietrich

    2017-10-01

    The first papers that reported on conducting psychological research on the web were presented at the Society for Computers in Psychology conference 20 years ago, in 1996. Since that time, there has been an explosive increase in the number of studies that use the web for data collection. As such, it seems a good time, 20 years on, to examine the health and adoption of sound practices of research on the web. The number of studies conducted online has increased dramatically. Overall, it seems that the web can be a method for conducting valid psychological studies. However, it is less clear that students and researchers are aware of the nature of web research. While many studies are well conducted, there is also a certain laxness appearing regarding the design and conduct of online studies. This laxness appears both anecdotally to the authors as managers of large sites for posting links to online studies, and in a survey of current researchers. One of the deficiencies discovered is that there is no coherent approach to educating researchers as to the unique features of web research.

  19. EVpedia: a community web portal for extracellular vesicles research.

    Science.gov (United States)

    Kim, Dae-Kyum; Lee, Jaewook; Kim, Sae Rom; Choi, Dong-Sic; Yoon, Yae Jin; Kim, Ji Hyun; Go, Gyeongyun; Nhung, Dinh; Hong, Kahye; Jang, Su Chul; Kim, Si-Hyun; Park, Kyong-Su; Kim, Oh Youn; Park, Hyun Taek; Seo, Ji Hye; Aikawa, Elena; Baj-Krzyworzeka, Monika; van Balkom, Bas W M; Belting, Mattias; Blanc, Lionel; Bond, Vincent; Bongiovanni, Antonella; Borràs, Francesc E; Buée, Luc; Buzás, Edit I; Cheng, Lesley; Clayton, Aled; Cocucci, Emanuele; Dela Cruz, Charles S; Desiderio, Dominic M; Di Vizio, Dolores; Ekström, Karin; Falcon-Perez, Juan M; Gardiner, Chris; Giebel, Bernd; Greening, David W; Gross, Julia Christina; Gupta, Dwijendra; Hendrix, An; Hill, Andrew F; Hill, Michelle M; Nolte-'t Hoen, Esther; Hwang, Do Won; Inal, Jameel; Jagannadham, Medicharla V; Jayachandran, Muthuvel; Jee, Young-Koo; Jørgensen, Malene; Kim, Kwang Pyo; Kim, Yoon-Keun; Kislinger, Thomas; Lässer, Cecilia; Lee, Dong Soo; Lee, Hakmo; van Leeuwen, Johannes; Lener, Thomas; Liu, Ming-Lin; Lötvall, Jan; Marcilla, Antonio; Mathivanan, Suresh; Möller, Andreas; Morhayim, Jess; Mullier, François; Nazarenko, Irina; Nieuwland, Rienk; Nunes, Diana N; Pang, Ken; Park, Jaesung; Patel, Tushar; Pocsfalvi, Gabriella; Del Portillo, Hernando; Putz, Ulrich; Ramirez, Marcel I; Rodrigues, Marcio L; Roh, Tae-Young; Royo, Felix; Sahoo, Susmita; Schiffelers, Raymond; Sharma, Shivani; Siljander, Pia; Simpson, Richard J; Soekmadji, Carolina; Stahl, Philip; Stensballe, Allan; Stępień, Ewa; Tahara, Hidetoshi; Trummer, Arne; Valadi, Hadi; Vella, Laura J; Wai, Sun Nyunt; Witwer, Kenneth; Yáñez-Mó, María; Youn, Hyewon; Zeidler, Reinhard; Gho, Yong Song

    2015-03-15

    Extracellular vesicles (EVs) are spherical bilayered proteolipids, harboring various bioactive molecules. Due to the complexity of the vesicular nomenclatures and components, online searches for EV-related publications and vesicular components are currently challenging. We present an improved version of EVpedia, a public database for EVs research. This community web portal contains a database of publications and vesicular components, identification of orthologous vesicular components, bioinformatic tools and a personalized function. EVpedia includes 6879 publications, 172 080 vesicular components from 263 high-throughput datasets, and has been accessed more than 65 000 times from more than 750 cities. In addition, about 350 members from 73 international research groups have participated in developing EVpedia. This free web-based database might serve as a useful resource to stimulate the emerging field of EV research. The web site was implemented in PHP, Java, MySQL and Apache, and is freely available at http://evpedia.info. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  20. Data management of web archive research data

    DEFF Research Database (Denmark)

    Zierau, Eld; Jurik, Bolette

    This paper will provide recommendations to overcome various challenges for data management of web materials. The recommendations are based on results from two independent Danish research projects with different requirements to data management: The first project focuses on high precision on a par...

  1. Action Research on a WebQuest as an Instructional Tool for Writing Abstracts of Research Articles

    Directory of Open Access Journals (Sweden)

    Krismiyati Latuperissa

    2012-08-01

    Full Text Available The massive growth of and access to information technology (IT) has enabled the integration of technology into classrooms. One such integration is the use of WebQuests as an instructional tool in teaching targeted learning activities such as writing abstracts of research articles in English for English as a Foreign Language (EFL) learners. In the academic world, writing an abstract of a research paper or final project in English can be challenging for EFL students. This article presents an action research project on the process and outcomes of using a WebQuest designed to help 20 Indonesian university IT students write a research article's abstract in English. Findings reveal that despite positive feedback, changes need to be made to make the WebQuest a more effective instructional tool for the purpose for which it was designed.

  2. Raising Reliability of Web Search Tool Research through Replication and Chaos Theory

    OpenAIRE

    Nicholson, Scott

    1999-01-01

    Because the World Wide Web is a dynamic collection of information, the Web search tools (or "search engines") that index the Web are dynamic. Traditional information retrieval evaluation techniques may not provide reliable results when applied to the Web search tools. This study is the result of ten replications of the classic 1996 Ding and Marchionini Web search tool research. It explores the effects that replication can have on transforming unreliable results from one iteration into replica...

  3. Web-based (HTML5) interactive graphics for fusion research and collaboration

    International Nuclear Information System (INIS)

    Kim, E.N.; Schissel, D.P.; Abla, G.; Flanagan, S.; Lee, X.

    2012-01-01

    Highlights: ► Interactive data visualization is supported via the Web without a browser plugin and provides users easy, real-time access to data of different types from various locations. ► Crosshair, zoom, pan as well as toggling dimensionality and a slice bar for multi-dimensional data are available. ► Data with PHP API can be applied: MDSplus and SQL have been tested. ► Modular in design, this has been deployed to support both the experimental and the simulation research arenas. - Abstract: With the continuing development of web technologies, it is becoming feasible for websites to operate a lot like a scientific desktop application. This has opened up more possibilities for utilizing the web browser for interactive scientific research and providing new means of on-line communication and collaboration. This paper describes the research and deployment for utilizing these enhanced web graphics capabilities on the fusion research tools which has led to a general toolkit that can be deployed as required. It allows users to dynamically create, interact with and share with others, the large sets of data generated by the fusion experiments and simulations. Hypertext Preprocessor (PHP), a general-purpose scripting language for the Web, is used to process a series of inputs, and determine the data source types and locations to fetch and organize the data. Protovis, a Javascript and SVG based web graphics package, then quickly draws the interactive graphs and makes it available to the worldwide audience. This toolkit has been deployed to both the simulation and experimental arenas. The deployed applications will be presented as well as the architecture and technologies used in producing the general graphics toolkit.
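The record above describes a server-side script (PHP in the paper) that inspects a request, determines the data source type and location (MDSplus or SQL were tested), fetches and organizes the data, and hands the browser a uniform payload for interactive plotting. The sketch below mirrors that dispatch step in Python with stand-in data; the function names, parameters and values are assumptions for illustration, not the toolkit's actual API:

```python
import json

def fetch_mdsplus(signal):
    # Stand-in for a real MDSplus signal read.
    return {"t": [0.0, 0.1, 0.2], "y": [1.0, 1.4, 1.1]}

def fetch_sql(table):
    # Stand-in for a real SQL query.
    return {"t": [0.0, 0.1, 0.2], "y": [3.0, 2.9, 3.2]}

# Source type decides which backend does the fetch.
BACKENDS = {"mdsplus": fetch_mdsplus, "sql": fetch_sql}

def handle_request(params):
    """Fetch from the requested source and organize the result as one
    uniform JSON series, ready for a browser-side graphics library."""
    series = BACKENDS[params["source"]](params["name"])
    return json.dumps({"name": params["name"], **series})

payload = handle_request({"source": "mdsplus", "name": "density"})
print(payload)
```

Because every backend is normalized into the same JSON shape, the browser-side plotting code (Protovis in the paper) never needs to know where the data came from.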

  4. Research on Artificial Spider Web Model for Farmland Wireless Sensor Network

    Directory of Open Access Journals (Sweden)

    Jun Wang

    2018-01-01

    Full Text Available Through a systematic analysis of the structural characteristics and invulnerability of the spider web, this paper explores the possibility of combining the advantages of the spider web, such as network robustness and invulnerability, with the farmland wireless sensor network. A universally applicable definition and mathematical model of the artificial spider web structure are established. The comparison between the artificial spider web and traditional networks is discussed in detail. The simulation result shows that the networking structure of the artificial spider web is better than that of traditional networks in terms of improving the overall reliability and invulnerability of the communication system. A comprehensive study of the advantageous characteristics of the spider web has important theoretical and practical significance for promoting invulnerability research on farmland wireless sensor networks.

  5. Using a WebCT to Develop a Research Skills Module

    OpenAIRE

    Bellew Martin, Kelli; Lee, Jennifer

    2003-01-01

    At the start of every academic year, the University of Calgary Library welcomes 1,000 first-year biology students to basic library research skills sessions. These sessions are traditionally taught in lecture format with a PowerPoint presentation and students following along on computers. As part of a pilot project in the Fall of 2002, 200 first-year biology students received the session via WebCT. WebCT is the web-based course management system utilized by the University of Calgary1; it d...

  6. A Study on the Role of Web Technology in Enhancing Research Pursuance among University Academia

    Science.gov (United States)

    Hussain, Irshad; Durrani, Muhammad Ismail

    2012-01-01

    The purpose of this study was to evaluate the role of web technologies in promoting research pursuance among university teachers, examine the use of web technologies by university teachers in conducting research and identify the problems of university academia in using web technologies for research. The study was delimited to academia of social…

  7. Web-based (HTML5) interactive graphics for fusion research and collaboration

    Energy Technology Data Exchange (ETDEWEB)

    Kim, E.N., E-mail: kimny@fusion.gat.com [General Atomics, P.O. Box 85608, San Diego, CA (United States); Schissel, D.P.; Abla, G.; Flanagan, S.; Lee, X. [General Atomics, P.O. Box 85608, San Diego, CA (United States)

    2012-12-15

    Highlights: ► Interactive data visualization is supported via the Web without a browser plugin and provides users easy, real-time access to data of different types from various locations. ► Crosshair, zoom, pan, as well as toggling dimensionality and a slice bar for multi-dimensional data are available. ► Data sources with a PHP API can be used: MDSplus and SQL have been tested. ► Modular in design, this has been deployed to support both the experimental and the simulation research arenas. - Abstract: With the continuing development of web technologies, it is becoming feasible for websites to operate much like a scientific desktop application. This has opened up more possibilities for utilizing the web browser for interactive scientific research and providing new means of online communication and collaboration. This paper describes the research and deployment of these enhanced web graphics capabilities for fusion research tools, which has led to a general toolkit that can be deployed as required. It allows users to dynamically create, interact with, and share with others the large sets of data generated by fusion experiments and simulations. Hypertext Preprocessor (PHP), a general-purpose scripting language for the Web, is used to process a series of inputs and determine the data source types and locations in order to fetch and organize the data. Protovis, a JavaScript- and SVG-based web graphics package, then quickly draws the interactive graphs and makes them available to a worldwide audience. This toolkit has been deployed to both the simulation and experimental arenas. The deployed applications are presented, as well as the architecture and technologies used in producing the general graphics toolkit.

  8. Webs on the Web (WOW): 3D visualization of ecological networks on the WWW for collaborative research and education

    Science.gov (United States)

    Yoon, Ilmi; Williams, Rich; Levine, Eli; Yoon, Sanghyuk; Dunne, Jennifer; Martinez, Neo

    2004-06-01

    This paper describes information technology being developed to improve the quality, sophistication, accessibility, and pedagogical simplicity of ecological network data, analysis, and visualization. We present designs for a WWW demonstration/prototype web site that provides database, analysis, and visualization tools for research and education related to food webs. Our early experience with a prototype 3D ecological network visualization guides our design of a more flexible architecture. 3D visualization algorithms include variable node and link sizes, placement according to node connectivity and trophic levels, and visualization of other node and link properties in food web data. The flexible architecture includes an XML application design, FoodWebML, and pipelining of computational components. Based on users' choices of data and visualization options, the WWW prototype site will connect to an XML database (Xindice) and return the visualization in VRML format for browsing and further interaction.
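An XML application like the FoodWebML described above encodes a food web's nodes and trophic links as markup. The sketch below shows the general idea in Python with the standard library; the element and attribute names are illustrative guesses, not the actual FoodWebML schema.

```python
import xml.etree.ElementTree as ET

# Element and attribute names here are illustrative guesses,
# not the actual FoodWebML schema.
web = ET.Element("foodweb", name="toy-web")
for sp in ("algae", "zooplankton", "fish"):
    ET.SubElement(web, "node", id=sp)  # one node per species
ET.SubElement(web, "link", source="algae", target="zooplankton")
ET.SubElement(web, "link", source="zooplankton", target="fish")
xml_out = ET.tostring(web, encoding="unicode")
print(xml_out)
```

A visualization pipeline would then transform a document like this into a 3D scene description such as VRML.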

  9. Aligning Web-Based Tools to the Research Process Cycle: A Resource for Collaborative Research Projects

    Science.gov (United States)

    Price, Geoffrey P.; Wright, Vivian H.

    2012-01-01

    Using John Creswell's Research Process Cycle as a framework, this article describes various web-based collaborative technologies useful for enhancing the organization and efficiency of educational research. Visualization tools (Cacoo) assist researchers in identifying a research problem. Resource storage tools (Delicious, Mendeley, EasyBib)…

  10. The Inclusion of African-American Study Participants in Web-Based Research Studies: Viewpoint

    OpenAIRE

    Watson, Bekeela; Robinson, Dana H.Z; Harker, Laura; Arriola, Kimberly R. Jacob

    2016-01-01

    The use of Web-based methods for research recruitment and intervention delivery has greatly increased as Internet usage continues to grow. These Internet-based strategies allow researchers to quickly reach more people. African-Americans are underrepresented in health research studies. As a result, African-Americans receive less benefit from important research that could address the disproportionate health outcomes they face. Web-based research studies are one promising way to engage more Afri...

  11. Wood Utilization Research Dissemination on the World Wide Web: A Case Study

    Science.gov (United States)

    Daniel L. Schmoldt; Matthew F. Winn; Philip A. Araman

    1997-01-01

    Because many research products are informational rather than tangible, emerging information technologies, such as the multi-media format of the World Wide Web, provide an open and easily accessible mechanism for transferring research to user groups. We have found steady, increasing use of our Web site over the first 6-1/2 months of operation; almost one-third of the...

  12. A Random-Dot Kinematogram for Web-Based Vision Research

    Directory of Open Access Journals (Sweden)

    Sivananda Rajananda

    2018-01-01

    Full Text Available Web-based experiments using visual stimuli have become increasingly common in recent years, but many frequently-used stimuli in vision research have yet to be developed for online platforms. Here, we introduce the first open access random-dot kinematogram (RDK for use in web browsers. This fully customizable RDK offers options to implement several different types of noise (random position, random walk, random direction and parameters to control aperture shape, coherence level, the number of dots, and other features. We include links to commented JavaScript code for easy implementation in web-based experiments, as well as an example of how this stimulus can be integrated as a plugin with a JavaScript library for online studies (jsPsych.
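The noise types listed in the abstract boil down to a per-frame dot update rule. The sketch below shows the "random direction" variant in Python (rather than the paper's JavaScript); the function and parameter names are illustrative, not the RDK plugin's actual API.

```python
import math
import random

def update_dots(dots, coherence, direction_deg, speed, width, height):
    """Advance one frame of a random-dot kinematogram.

    A fraction `coherence` of dots moves in `direction_deg`; the rest
    move in random directions ("random direction" noise).
    """
    theta = math.radians(direction_deg)
    new_dots = []
    for x, y in dots:
        if random.random() < coherence:
            dx, dy = speed * math.cos(theta), speed * math.sin(theta)
        else:
            phi = random.uniform(0, 2 * math.pi)
            dx, dy = speed * math.cos(phi), speed * math.sin(phi)
        # Wrap dots that leave the (rectangular) aperture.
        new_dots.append(((x + dx) % width, (y + dy) % height))
    return new_dots

random.seed(0)
dots = [(random.uniform(0, 200), random.uniform(0, 200)) for _ in range(50)]
frame = update_dots(dots, coherence=0.5, direction_deg=0,
                    speed=3, width=200, height=200)
print(len(frame))  # 50
```

The "random position" and "random walk" variants differ only in how the non-coherent dots are repositioned each frame.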

  13. Position paper: Web tutorials and Information Literacy research

    DEFF Research Database (Denmark)

    Hyldegård, Jette

    2011-01-01

    Position paper on future research challenges regarding web tutorials with the aim of supporting and facilitating Information Literacy in an academic context. Presented and discussed at the workshop: Social media & Information Practices, track on Information literacy practices, University of Borås...

  14. Research on the Method of Enterprise Knowledge Management Based on Web 2.0

    Directory of Open Access Journals (Sweden)

    Le Chengyi

    2017-06-01

    Full Text Available [Purpose/significance] The key for research on enterprise knowledge management is to improve its efficiency by using the advantages of Web 2.0, such as its speed, public participation and strong interactivity. [Method/process] Based on an analysis of the characteristics and main technologies of Web 2.0, this paper discusses the role and application of Web 2.0-related technologies in enterprise knowledge management, and then puts forward enterprise knowledge management methods based on Web 2.0, including methods for knowledge acquisition, knowledge classification and organization, and knowledge sharing and evaluation using Web 2.0. [Result/conclusion] By introducing Web 2.0-related technologies into enterprise knowledge management, the research provides convenient and low-cost tools and methods for knowledge management activities, and helps all users participate in enterprise knowledge management activities quickly and easily.

  15. Millennial Undergraduate Research Strategies in Web and Library Information Retrieval Systems

    Science.gov (United States)

    Porter, Brandi

    2011-01-01

    This article summarizes the author's dissertation regarding search strategies of millennial undergraduate students in Web and library online information retrieval systems. Millennials bring a unique set of search characteristics and strategies to their research since they have never known a world without the Web. Through the use of search engines,…

  16. What We Know about the Impacts of WebQuests: A Review of Research

    Science.gov (United States)

    Abbitt, Jason; Ophus, John

    2008-01-01

    This article examines the body of research investigating the impacts of the WebQuest instructional strategy on teaching and learning. The WebQuest instructional strategy is often praised as an inquiry-oriented activity, which effectively integrates technology into teaching and learning. The results of research suggest that while this strategy may…

  17. A Community-Based Research Approach to Develop an Educational Web Portal

    Science.gov (United States)

    Preiser-Houy, Lara; Navarrete, Carlos J.

    2011-01-01

    Service-learning projects are becoming more prevalent in Information Systems education. This study explores the use of community-based research, a special kind of a service-learning strategy, in an Information Systems web development course. The paper presents a case study of a service-learning project to develop an educational web portal for a…

  18. Web-based research publications on Sub-Saharan Africa's prized ...

    African Journals Online (AJOL)

    The study confirms Africa's deep interest in the grasscutter which is not shared by other parts of the world. We recommend increased publication of research on cane rats in web-based journals to quickly spread the food value of this prized meat rodent to other parts of the world and so attract research interest and funding.

  19. Collaborative web hosting challenges and research directions

    CERN Document Server

    Ahmed, Reaz

    2014-01-01

    This brief presents a peer-to-peer (P2P) web-hosting infrastructure (named pWeb) that can transform networked, home-entertainment devices into lightweight collaborating Web servers for persistently storing and serving multimedia and web content. The issues addressed include ensuring content availability, Plexus routing and indexing, naming schemes, web ID, collaborative web search, network architecture and content indexing. In pWeb, user-generated voluminous multimedia content is proactively uploaded to a nearby network location (preferably within the same LAN or at least, within the same ISP)

  20. Integration of Web mining and web crawler: Relevance and State of Art

    OpenAIRE

    Subhendu kumar pani; Deepak Mohapatra,; Bikram Keshari Ratha

    2010-01-01

    This study presents the role of the web crawler in the web mining environment. As the growth of the World Wide Web has exceeded all expectations, research on Web mining is growing more and more. Web mining is a research topic that combines two active research areas: Data Mining and the World Wide Web. The World Wide Web is therefore a very rich area for data mining research. Search engines based on the web crawling framework are also used in web mining to find the interrelated web pages. This paper discu...
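The crawler's core step, extracting the links from a fetched page so they can feed the rest of a web-mining pipeline, can be sketched with Python's standard library. This is an illustrative, offline example on a canned page, not any specific crawler framework.

```python
from html.parser import HTMLParser
from urllib.parse import urljoin

class LinkExtractor(HTMLParser):
    """Collects absolute hrefs from anchor tags -- the step a crawler
    performs on each fetched page to discover new pages to visit."""
    def __init__(self, base_url):
        super().__init__()
        self.base_url = base_url
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    # Resolve relative links against the page URL.
                    self.links.append(urljoin(self.base_url, value))

page = ('<html><body><a href="/about">About</a>'
        '<a href="http://example.org/x">X</a></body></html>')
parser = LinkExtractor("http://example.com/")
parser.feed(page)
print(parser.links)  # ['http://example.com/about', 'http://example.org/x']
```

A full crawler would fetch each discovered URL in turn (breadth-first, with a visited set and politeness delays) and pass the page content to the mining stage.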

  1. Semantic Web Requirements through Web Mining Techniques

    OpenAIRE

    Hassanzadeh, Hamed; Keyvanpour, Mohammad Reza

    2012-01-01

    In recent years, the Semantic Web has become a topic of active research in several fields of computer science and has been applied in a wide range of domains such as bioinformatics, life sciences, and knowledge management. The two fast-developing research areas, the Semantic Web and web mining, can complement each other, and their different techniques can be used jointly or separately to solve the issues in both areas. In addition, since shifting from the current web to the semantic web mainly depends on the enhance...

  2. A Web-Based Platform for Educating Researchers About Bioethics and Biobanking.

    Science.gov (United States)

    Sehovic, Ivana; Gwede, Clement K; Meade, Cathy D; Sodeke, Stephen; Pentz, Rebecca; Quinn, Gwendolyn P

    2016-06-01

    Participation in biobanking among individuals with familial risk for hereditary cancer (IFRs) and underserved/minority populations is vital for biobanking research. To address gaps in researcher knowledge regarding the ethical concerns of these populations, we developed a web-based curriculum. Based on formative research and expert panel assessments, a curriculum and website were developed in an integrative, systematic manner. Researchers were recruited to evaluate the curriculum. Public health graduate students were recruited to pilot test the curriculum. All 14 researchers agreed the curriculum was easy to understand, adequately addressed the domains, and contained appropriate post-test questions. The majority evaluated the dialogue animations as interesting and valuable. Twenty-two graduate students completed the curriculum, and 77 % improved their overall test score. A web-based curriculum is an acceptable and effective way to provide information to researchers about vulnerable populations' biobanking concerns. Future goals are to incorporate the curriculum with larger organizations.

  3. Correct software in web applications and web services

    CERN Document Server

    Thalheim, Bernhard; Prinz, Andreas; Buchberger, Bruno

    2015-01-01

    The papers in this volume aim at obtaining a common understanding of the challenging research questions in web applications comprising web information systems, web services, and web interoperability; obtaining a common understanding of verification needs in web applications; achieving a common understanding of the available rigorous approaches to system development, and the cases in which they have succeeded; identifying how rigorous software engineering methods can be exploited to develop suitable web applications; and at developing a European-scale research agenda combining theory, methods a

  4. Increasing Scalability of Researcher Network Extraction from the Web

    Science.gov (United States)

    Asada, Yohei; Matsuo, Yutaka; Ishizuka, Mitsuru

    Social networks, which describe relations among people or organizations as a network, have recently attracted attention. With the help of a social network, we can analyze the structure of a community and thereby promote efficient communications within it. We investigate the problem of extracting a network of researchers from the Web, to assist efficient cooperation among researchers. Our method uses a search engine to get the co-occurrences of the names of two researchers and calculates the strength of the relation between them. Then we label the relation by analyzing the Web pages in which these two names co-occur. Research on social network extraction using search engines, such as ours, is attracting attention in Japan as well as abroad. However, the former approaches issue too many queries to search engines to extract a large-scale network. In this paper, we propose a method that filters superfluous queries and facilitates the extraction of large-scale networks. With this method we are able to extract a network of around 3000 nodes. Our experimental results show that the proposed method reduces the number of queries significantly while preserving the quality of the network as compared to former methods.
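A common way to turn search-engine hit counts into a relation strength between two names is a Jaccard-style coefficient. The Python sketch below is illustrative of that general approach, not necessarily the paper's exact formula.

```python
def relation_strength(hits_a, hits_b, hits_ab):
    """Jaccard-style relation strength between two researchers.

    hits_a, hits_b: hit counts for each name queried alone.
    hits_ab: hit count for both names queried together (co-occurrence).
    """
    denom = hits_a + hits_b - hits_ab  # pages mentioning either name
    return hits_ab / denom if denom else 0.0

# Example: 150 pages mention both names out of 1850 mentioning either.
print(round(relation_strength(1200, 800, 150), 3))  # 0.081
```

Query filtering then amounts to skipping the co-occurrence query for pairs whose individual hit counts already make a strong relation implausible.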

  5. The open research system: a web-based metadata and data repository for collaborative research

    Science.gov (United States)

    Charles M. Schweik; Alexander Stepanov; J. Morgan Grove

    2005-01-01

    Beginning in 1999, a web-based metadata and data repository we call the "open research system" (ORS) was designed and built to assist geographically distributed scientific research teams. The purpose of this innovation was to promote the open sharing of data within and across organizational lines and across geographic distances. As the use of the system...

  6. Federated Search and the Library Web Site: A Study of Association of Research Libraries Member Web Sites

    Science.gov (United States)

    Williams, Sarah C.

    2010-01-01

    The purpose of this study was to investigate how federated search engines are incorporated into the Web sites of libraries in the Association of Research Libraries. In 2009, information was gathered for each library in the Association of Research Libraries with a federated search engine. This included the name of the federated search service and…

  7. The Use of Web Questionnaires in Second Language Acquisition and Bilingualism Research

    Science.gov (United States)

    Wilson, Rosemary; Dewaele, Jean-Marc

    2010-01-01

    The present article focuses on data collection through web questionnaires, as opposed to the traditional pen-and-paper method for research in second language acquisition and bilingualism. It is argued that web questionnaires, which have been used quite widely in psychology, have the advantage of reaching out to a larger and more diverse pool of…

  8. Web Mining

    Science.gov (United States)

    Fürnkranz, Johannes

    The World-Wide Web provides every internet citizen with access to an abundance of information, but it becomes increasingly difficult to identify the relevant pieces of information. Research in web mining tries to address this problem by applying techniques from data mining and machine learning to Web data and documents. This chapter provides a brief overview of web mining techniques and research areas, most notably hypertext classification, wrapper induction, recommender systems and web usage mining.

  9. PR2ALIGN: a stand-alone software program and a web-server for protein sequence alignment using weighted biochemical properties of amino acids.

    Science.gov (United States)

    Kuznetsov, Igor B; McDuffie, Michael

    2015-05-07

    Alignment of amino acid sequences is the main sequence comparison method used in computational molecular biology. The selection of the amino acid substitution matrix best suitable for a given alignment problem is one of the most important decisions the user has to make. In a conventional amino acid substitution matrix all elements are fixed and their values cannot be easily adjusted. Moreover, most existing amino acid substitution matrices account for the average (dis)similarities between amino acid types and do not distinguish the contribution of a specific biochemical property to these (dis)similarities. PR2ALIGN is a stand-alone software program and a web-server that provide the functionality for implementing flexible user-specified alignment scoring functions and aligning pairs of amino acid sequences based on the comparison of the profiles of biochemical properties of these sequences. Unlike the conventional sequence alignment methods that use 20x20 fixed amino acid substitution matrices, PR2ALIGN uses a set of weighted biochemical properties of amino acids to measure the distance between pairs of aligned residues and to find an optimal minimal distance global alignment. The user can provide any number of amino acid properties and specify a weight for each property. The higher the weight for a given property, the more this property affects the final alignment. We show that in many cases the approach implemented in PR2ALIGN produces better quality pair-wise alignments than the conventional matrix-based approach. PR2ALIGN will be helpful for researchers who wish to align amino acid sequences by using flexible user-specified alignment scoring functions based on the biochemical properties of amino acids instead of the amino acid substitution matrix. To the best of the authors' knowledge, there are no existing stand-alone software programs or web-servers analogous to PR2ALIGN. The software is freely available from http://pr2align.rit.albany.edu.
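The idea of replacing a fixed substitution matrix with a weighted distance over biochemical property profiles can be sketched minimally in Python. The property table, weights, and function names below are toy assumptions for illustration, not PR2ALIGN's actual values or API; the alignment is a standard Needleman-Wunsch recurrence with distances minimized instead of similarities maximized.

```python
def residue_distance(a, b, props, weights):
    """Weighted distance between residues a and b over property profiles."""
    return sum(w * abs(props[a][k] - props[b][k]) for k, w in weights.items())

def align(seq1, seq2, props, weights, gap=1.0):
    """Minimal-distance global alignment score (Needleman-Wunsch style)."""
    n, m = len(seq1), len(seq2)
    D = [[0.0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        D[i][0] = i * gap
    for j in range(1, m + 1):
        D[0][j] = j * gap
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            D[i][j] = min(
                D[i-1][j-1] + residue_distance(seq1[i-1], seq2[j-1],
                                               props, weights),
                D[i-1][j] + gap,   # gap in seq2
                D[i][j-1] + gap,   # gap in seq1
            )
    return D[n][m]

# Toy two-property table (hydrophobicity "h", charge "c") for 3 residues;
# raising a property's weight makes it dominate the alignment.
props = {"A": {"h": 1.8, "c": 0}, "D": {"h": -3.5, "c": -1},
         "K": {"h": -3.9, "c": 1}}
weights = {"h": 0.1, "c": 1.0}
print(align("ADK", "AKD", props, weights))  # 2.0
```

With charge weighted heavily, aligning D with K costs more than opening two gaps, which is exactly the kind of behavior a fixed 20x20 matrix cannot be tuned to express per property.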

  10. Commentary: Building Web Research Strategies for Teachers and Students

    Science.gov (United States)

    Maloy, Robert W.

    2016-01-01

    This paper presents web research strategies for teachers and students to use in building Dramatic Event, Historical Biography, and Influential Literature wiki pages for history/social studies learning. Dramatic Events refer to milestone or turning point moments in history. Historical Biographies and Influential Literature pages feature…

  11. Research on SaaS and Web Service Based Order Tracking

    Science.gov (United States)

    Jiang, Jianhua; Sheng, Buyun; Gong, Lixiong; Yang, Mingzhong

    To solve order tracking across enterprises in a Dynamic Virtual Enterprise (DVE), a SaaS- and web-service-based order tracking solution was designed by analyzing the order management process in a DVE. To realize the system, a SaaS-based architecture for managing data on the manufacturing states of order tasks was constructed, and the encapsulation method for transforming an application system into a web service was researched. The process of order tracking in the system is then presented. Finally, the feasibility of this study was verified by the development of a prototype system.

  12. Competency-Based Assessment for Clinical Supervisors: Design-Based Research on a Web-Delivered Program

    Science.gov (United States)

    Williams, Lauren Therese; Grealish, Laurie; Jamieson, Maggie

    2015-01-01

    Background Clinicians need to be supported by universities to use credible and defensible assessment practices during student placements. Web-based delivery of clinical education in student assessment offers professional development regardless of the geographical location of placement sites. Objective This paper explores the potential for a video-based constructivist Web-based program to support site supervisors in their assessments of student dietitians during clinical placements. Methods This project was undertaken as design-based research in two stages. Stage 1 describes the research consultation, development of the prototype, and formative feedback. In Stage 2, the program was pilot-tested and evaluated by a purposeful sample of nine clinical supervisors. Data generated as a result of user participation during the pilot test is reported. Users’ experiences with the program were also explored via interviews (six in a focus group and three individually). The interviews were transcribed verbatim and thematic analysis conducted from a pedagogical perspective using van Manen’s highlighting approach. Results This research succeeded in developing a Web-based program, “Feed our Future”, that increased supervisors’ confidence with their competency-based assessments of students on clinical placements. Three pedagogical themes emerged: constructivist design supports transformative Web-based learning; videos make abstract concepts tangible; and accessibility, usability, and pedagogy are interdependent. Conclusions Web-based programs, such as Feed our Future, offer a viable means for universities to support clinical supervisors in their assessment practices during clinical placements. A design-based research approach offers a practical process for such Web-based tool development, highlighting pedagogical barriers for planning purposes. PMID:25803172

  13. Web-Beagle: a web server for the alignment of RNA secondary structures.

    Science.gov (United States)

    Mattei, Eugenio; Pietrosanto, Marco; Ferrè, Fabrizio; Helmer-Citterich, Manuela

    2015-07-01

    Web-Beagle (http://beagle.bio.uniroma2.it) is a web server for the pairwise global or local alignment of RNA secondary structures. The server exploits a new encoding for RNA secondary structure and a substitution matrix of RNA structural elements to perform RNA structural alignments. The web server allows the user to compute up to 10 000 alignments in a single run, taking as input sets of RNA sequences and structures or primary sequences alone. In the latter case, the server computes the secondary structure prediction for the RNAs on-the-fly using RNAfold (free energy minimization). The user can also compare a set of input RNAs to one of five pre-compiled RNA datasets including lncRNAs and 3' UTRs. All types of comparison produce in output the pairwise alignments along with structural similarity and statistical significance measures for each resulting alignment. A graphical color-coded representation of the alignments allows the user to easily identify structural similarities between RNAs. Web-Beagle can be used for finding structurally related regions in two or more RNAs, for the identification of homologous regions or for functional annotation. Benchmark tests show that Web-Beagle has lower computational complexity, shorter running time and better performance than other available methods. © The Author(s) 2015. Published by Oxford University Press on behalf of Nucleic Acids Research.

  14. Enhancing UCSF Chimera through web services.

    Science.gov (United States)

    Huang, Conrad C; Meng, Elaine C; Morris, John H; Pettersen, Eric F; Ferrin, Thomas E

    2014-07-01

    Integrating access to web services with desktop applications allows for an expanded set of application features, including performing computationally intensive tasks and convenient searches of databases. We describe how we have enhanced UCSF Chimera (http://www.rbvi.ucsf.edu/chimera/), a program for the interactive visualization and analysis of molecular structures and related data, through the addition of several web services (http://www.rbvi.ucsf.edu/chimera/docs/webservices.html). By streamlining access to web services, including the entire job submission, monitoring and retrieval process, Chimera makes it simpler for users to focus on their science projects rather than data manipulation. Chimera uses Opal, a toolkit for wrapping scientific applications as web services, to provide scalable and transparent access to several popular software packages. We illustrate Chimera's use of web services with an example workflow that interleaves use of these services with interactive manipulation of molecular sequences and structures, and we provide an example Python program to demonstrate how easily Opal-based web services can be accessed from within an application. Web server availability: http://webservices.rbvi.ucsf.edu/opal2/dashboard?command=serviceList. © The Author(s) 2014. Published by Oxford University Press on behalf of Nucleic Acids Research.
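The job submission/monitoring/retrieval cycle that Chimera streamlines follows a generic pattern: submit, poll until a terminal status, then fetch results. The Python sketch below shows only that polling pattern with a simulated service; it is not Chimera's or Opal's actual API.

```python
import time

def wait_for_job(poll, interval=0.0, max_polls=100):
    """Generic monitoring loop for an asynchronous web-service job.

    `poll` is any callable returning a status string (here simulated;
    in practice it would issue an HTTP status request).
    """
    for _ in range(max_polls):
        status = poll()
        if status in ("DONE", "FAILED"):
            return status  # terminal state: retrieve results or report error
        time.sleep(interval)  # avoid hammering the service
    return "TIMEOUT"

# Simulated service that is pending, then running, then finished.
states = iter(["PENDING", "RUNNING", "DONE"])
print(wait_for_job(lambda: next(states)))  # DONE
```

Hiding this loop (plus input upload and output download) behind the GUI is what lets users "focus on their science projects rather than data manipulation."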

  15. Integrating web 2.0 in clinical research education in a developing country.

    Science.gov (United States)

    Amgad, Mohamed; AlFaar, Ahmad Samir

    2014-09-01

    The use of Web 2.0 tools in education and health care has received heavy attention over the past years. Over two consecutive years, Children's Cancer Hospital - Egypt 57357 (CCHE 57357), in collaboration with Egyptian universities, student bodies, and NGOs, conducted a summer course that supports undergraduate medical students to cross the gap between clinical practice and clinical research. This time, there was a greater emphasis on reaching out to the students using social media and other Web 2.0 tools, which were heavily used in the course, including Google Drive, Facebook, Twitter, YouTube, Mendeley, Google Hangout, Live Streaming, Research Electronic Data Capture (REDCap), and Dropbox. We wanted to investigate the usefulness of integrating Web 2.0 technologies into formal educational courses and modules. The evaluation survey was filled in by 156 respondents, 134 of whom were course candidates (response rate = 94.4 %) and 22 of whom were course coordinators (response rate = 81.5 %). The course participants came from 14 different universities throughout Egypt. Students' feedback was positive and supported the integration of Web 2.0 tools in academic courses and modules. Google Drive, Facebook, and Dropbox were found to be most useful.

  16. Fatty acid composition at the base of aquatic food webs is influenced by habitat type and watershed land use

    Science.gov (United States)

    Larson, James H.; Richardson, William B.; Knights, Brent C.; Bartsch, Lynn; Bartsch, Michelle; Nelson, J. C.; Veldboom, Jason A.; Vallazza, Jonathan M.

    2013-01-01

    Spatial variation in food resources strongly influences many aspects of aquatic consumer ecology. Although large-scale controls over spatial variation in many aspects of food resources are well known, others have received little study. Here we investigated variation in the fatty acid (FA) composition of seston and primary consumers within (i.e., among habitats) and among tributary systems of Lake Michigan, USA. FA composition of food is important because all metazoans require certain FAs for proper growth and development that cannot be produced de novo, including many polyunsaturated fatty acids (PUFAs). Here we sampled three habitat types (river, rivermouth and nearshore zone) in 11 tributaries of Lake Michigan to assess the amount of FA in seston and primary consumers of seston. We hypothesized that among-system and among-habitat variation in FAs at the base of food webs would be related to algal production, which in turn is influenced by three land cover characteristics: 1) combined agriculture and urban lands (an indication of anthropogenic nutrient inputs that fuel algal production), 2) the proportion of surface waters (an indication of water residence times that allow algal producers to accumulate) and 3) the extent of riparian forested buffers (an indication of stream shading that reduces algal production). Of these three land cover characteristics, only intense land use appeared to be strongly related to seston and consumer FA, and this effect was strong only in rivermouth and nearshore lake sites. River seston and consumer FA composition was highly variable, but that variation does not appear to be driven by the watershed land cover characteristics investigated here. Whether the spatial variation in FA content at the base of these food webs significantly influences the production of economically important species higher in the food web should be a focus of future research.

  17. The Use of RESTful Web Services in Medical Informatics and Clinical Research and Its Implementation in Europe.

    Science.gov (United States)

    Aerts, Jozef

    2017-01-01

RESTful web services are nowadays state-of-the-art in business transactions over the internet. They are, however, not widely used in medical informatics and clinical research, especially not in Europe. The objectives were to make an inventory of RESTful web services that can be used in medical informatics and clinical research, including those that can help in patient empowerment in the DACH region and in Europe, and to develop new RESTful web services for use in clinical research and regulatory review. A literature search on available RESTful web services was performed, and new RESTful web services were developed on an application server using the Java language. Most of the web services found originate from institutes and organizations in the USA, whereas no similar web services could be found that are made available by European organizations. New RESTful web services were developed for LOINC code lookup, for UCUM conversions and for use with CDISC standards. A comparison is made between "top down" and "bottom up" web services, the latter meant to answer concrete questions immediately. The lack of RESTful web services made available by European organizations in healthcare and medical informatics is striking. RESTful web services may in the near future play a major role in medical informatics and, when localized for German and other European languages, can help to considerably facilitate patient empowerment. This, however, requires an EU equivalent of the US National Library of Medicine.
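Services like the LOINC lookup and UCUM conversion described above are typically resource-oriented HTTP GET endpoints. The abstract does not specify the routes, so the base URL, paths and parameter names in this minimal client-side sketch are hypothetical:

```python
from urllib.parse import urlencode, urljoin

# Hypothetical base URL; the paper's actual endpoints are not given in the abstract.
BASE = "https://example.org/api/"

def loinc_lookup_url(code: str) -> str:
    """Build the GET URL for looking up a LOINC code as a REST resource."""
    return urljoin(BASE, "loinc/" + code)

def ucum_conversion_url(value: str, source_unit: str, target_unit: str) -> str:
    """Build the GET URL for a UCUM unit conversion.

    UCUM units such as 'mg/dL' contain '/', so they must be percent-encoded
    in the query string, which urlencode does automatically.
    """
    query = urlencode({"value": value, "from": source_unit, "to": target_unit})
    return urljoin(BASE, "ucum/convert") + "?" + query
```

A client would then issue the GET request with any HTTP library and parse the JSON or XML response.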

  18. 07051 Abstracts Collection -- Programming Paradigms for the Web: Web Programming and Web Services

    OpenAIRE

    Hull, Richard; Thiemann, Peter; Wadler, Philip

    2007-01-01

    From 28.01. to 02.02.2007, the Dagstuhl Seminar 07051 ``Programming Paradigms for the Web: Web Programming and Web Services'' was held in the International Conference and Research Center (IBFI), Schloss Dagstuhl. During the seminar, several participants presented their current research, and ongoing work and open problems were discussed. Abstracts of the presentations given during the seminar as well as abstracts of seminar results and ideas are put together in this paper. The firs...

  19. The CAD-score web server: contact area-based comparison of structures and interfaces of proteins, nucleic acids and their complexes.

    Science.gov (United States)

    Olechnovič, Kliment; Venclovas, Ceslovas

    2014-07-01

    The Contact Area Difference score (CAD-score) web server provides a universal framework to compute and analyze discrepancies between different 3D structures of the same biological macromolecule or complex. The server accepts both single-subunit and multi-subunit structures and can handle all the major types of macromolecules (proteins, RNA, DNA and their complexes). It can perform numerical comparison of both structures and interfaces. In addition to entire structures and interfaces, the server can assess user-defined subsets. The CAD-score server performs both global and local numerical evaluations of structural differences between structures or interfaces. The results can be explored interactively using sortable tables of global scores, profiles of local errors, superimposed contact maps and 3D structure visualization. The web server could be used for tasks such as comparison of models with the native (reference) structure, comparison of X-ray structures of the same macromolecule obtained in different states (e.g. with and without a bound ligand), analysis of nuclear magnetic resonance (NMR) structural ensemble or structures obtained in the course of molecular dynamics simulation. The web server is freely accessible at: http://www.ibt.lt/bioinformatics/cad-score. © The Author(s) 2014. Published by Oxford University Press on behalf of Nucleic Acids Research.

  20. The Researcher's Journey: Scholarly Navigation of an Academic Library Web Site

    Science.gov (United States)

    McCann, Steve; Ravas, Tammy; Zoellner, Kate

    2010-01-01

    A qualitative study of the Maureen and Mike Mansfield Library's Web site identified the ways in which students and faculty of the University of Montana used the site for research purposes. This study employed open-ended interview questions and observations to spontaneously capture a user's experience in researching topics in which they…

  1. The semantic web : research and applications : 7th extended semantic web conference, ESWC 2010, Heraklion, Crete, Greece, May 30 - June 3, 2010 : proceedings

    NARCIS (Netherlands)

    Aroyo, L.M.; Antoniou, G.; Hyvönen, E.; Teije, ten A.; Stuckenschmidt, H.; Cabral, L.; Tudorache, T.

    2010-01-01

    Preface. This volume contains papers from the technical program of the 7th Extended Semantic Web Conference (ESWC 2010), held from May 30 to June 3, 2010, in Heraklion, Greece. ESWC 2010 presented the latest results in research and applications of Semantic Web technologies. ESWC 2010 built on the

  2. Perfluoroalkyl Acids (PFAAs) and Selected Precursors in the Baltic Sea Environment: Do Precursors Play a Role in Food Web Accumulation of PFAAs?

    Science.gov (United States)

    Gebbink, Wouter A; Bignert, Anders; Berger, Urs

    2016-06-21

    The present study examined the presence of perfluoroalkyl acids (PFAAs) and selected precursors in the Baltic Sea abiotic environment and guillemot food web, and investigated the relative importance of precursors in food web accumulation of PFAAs. Sediment, water, zooplankton, herring, sprat, and guillemot eggs were analyzed for perfluoroalkane sulfonic acids (PFSAs; C4,6,8,10) and perfluoroalkyl carboxylic acids (PFCAs; C6-15) along with six perfluoro-octane sulfonic acid (PFOS) precursors and 11 polyfluoroalkyl phosphoric acid diesters (diPAPs). FOSA, FOSAA and its methyl and ethyl derivatives (Me- and EtFOSAA), and 6:2/6:2 diPAP were detected in sediment and water. While FOSA and the three FOSAAs were detected in all biota, a total of nine diPAPs were only detected in zooplankton. Concentrations of PFOS precursors and diPAPs exceeded PFOS and PFCA concentrations, respectively, in zooplankton, but not in fish and guillemot eggs. Although PFOS precursors were present at all trophic levels, they appear to play a minor role in food web accumulation of PFOS based on PFOS precursor/PFOS ratios and PFOS and FOSA isomer patterns. The PFCA pattern in fish could not be explained by the intake pattern based on PFCAs and analyzed precursors, that is, diPAPs. Exposure to additional precursors might therefore be a dominant exposure pathway compared to direct PFCA exposure for fish.

  3. Web 2.0 in Computer-Assisted Language Learning: A Research Synthesis and Implications for Instructional Design and Educational Practice

    Science.gov (United States)

    Parmaxi, Antigoni; Zaphiris, Panayiotis

    2017-01-01

    This study explores the research development pertaining to the use of Web 2.0 technologies in the field of Computer-Assisted Language Learning (CALL). Published research manuscripts related to the use of Web 2.0 tools in CALL have been explored, and the following research foci have been determined: (1) Web 2.0 tools that dominate second/foreign…

  4. 07051 Executive Summary -- Programming Paradigms for the Web: Web Programming and Web Services

    OpenAIRE

    Hull, Richard; Thiemann, Peter; Wadler, Philip

    2007-01-01

    The world-wide web raises a variety of new programming challenges. To name a few: programming at the level of the web browser, data-centric approaches, and attempts to automatically discover and compose web services. This seminar brought together researchers from the web programming and web services communities and strove to engage them in communication with each other. The seminar was held in an unusual style, in a mixture of short presentations and in-depth discussio...

  5. Web technology in the separation of strontium and cesium from INEL-ICPP radioactive acid waste (WM-185)

    International Nuclear Information System (INIS)

    Bray, L.A.; Brown, G.N.

    1995-01-01

    Strontium and cesium were successfully removed from radioactive acidic waste (WM-185) at the Idaho National Engineering Laboratory, Idaho Chemical Processing Plant (ICPP), with web technology from 3M and IBC Advanced Technologies, Inc. (IBC). A technical team from Pacific Northwest Laboratory, ICPP, 3M and IBC conducted a very successful series of experiments from August 15 through 18, 1994. The ICPP, Remote Analytical Laboratory, Idaho Falls, Idaho, provided the hot cell facilities and staff to complete these milestone experiments. The actual waste experiments duplicated the initial 'cold' simulated waste results and confirmed the selective removal provided by ligand-particle web technology

  6. Usage and applications of Semantic Web techniques and technologies to support chemistry research.

    Science.gov (United States)

    Borkum, Mark I; Frey, Jeremy G

    2014-01-01

The drug discovery process is now highly dependent on the management, curation and integration of large amounts of potentially useful data. Semantics are necessary in order to interpret the information and derive knowledge. Advances in recent years have mitigated concerns that the lack of robust, usable tools has inhibited the adoption of methodologies based on semantics. This paper presents three examples of how Semantic Web techniques and technologies can be used in order to support chemistry research: a controlled vocabulary for quantities, units and symbols in physical chemistry; a controlled vocabulary for the classification and labelling of chemical substances and mixtures; and a database of chemical identifiers. This paper also presents a Web-based service that uses the datasets in order to assist with the completion of risk assessment forms, along with a discussion of the legal implications and value-proposition for the use of such a service. We have introduced the Semantic Web concepts, technologies, and methodologies that can be used to support chemistry research, and have demonstrated the application of those techniques in three areas very relevant to modern chemistry research, generating three new datasets that we offer as exemplars of an extensible portfolio of advanced data integration facilities. We have thereby established the importance of Semantic Web techniques and technologies for meeting Wild's fourth "grand challenge".

  7. Web 2.0 and Second Language Learning: What Does the Research Tell Us?

    Science.gov (United States)

    Wang, Shenggao; Vasquez, Camilla

    2012-01-01

    This article reviews current research on the use of Web 2.0 technologies in second language (L2) learning. Its purpose is to investigate the theoretical perspectives framing it, to identify some of the benefits of using Web 2.0 technologies in L2 learning, and to discuss some of the limitations. The review reveals that blogs and wikis have been…

  8. Semantic Web research anno 2006 : Main streams, popular fallacies, current status and future challenges

    NARCIS (Netherlands)

    Van Harmelen, Frank

    2006-01-01

    In this topical paper we try to give an analysis and overview of the current state of Semantic Web research. We point to different interpretations of the Semantic Web as the reason underlying many controversies, we list (and debunk) four false objections which are often raised against the Semantic

  9. Web Engineering

    Energy Technology Data Exchange (ETDEWEB)

    White, Bebo

    2003-06-23

    Web Engineering is the application of systematic, disciplined and quantifiable approaches to development, operation, and maintenance of Web-based applications. It is both a pro-active approach and a growing collection of theoretical and empirical research in Web application development. This paper gives an overview of Web Engineering by addressing the questions: (a) why is it needed? (b) what is its domain of operation? (c) how does it help and what should it do to improve Web application development? and (d) how should it be incorporated in education and training? The paper discusses the significant differences that exist between Web applications and conventional software, the taxonomy of Web applications, the progress made so far and the research issues and experience of creating a specialization at the master's level. The paper reaches a conclusion that Web Engineering at this stage is a moving target since Web technologies are constantly evolving, making new types of applications possible, which in turn may require innovations in how they are built, deployed and maintained.

  10. Validation of the omega-3 fatty acid intake measured by a web-based food frequency questionnaire against omega-3 fatty acids in red blood cells in men with prostate cancer.

    Science.gov (United States)

    Allaire, J; Moreel, X; Labonté, M-È; Léger, C; Caron, A; Julien, P; Lamarche, B; Fradet, V

    2015-09-01

The objective of this study was to evaluate the ability of a web-based self-administered food frequency questionnaire (web-FFQ) to assess the omega-3 (ω-3) fatty acid (FA) intake of men affected with prostate cancer (PCa) against a biomarker. The study presented herein is a sub-study from a phase II clinical trial. Enrolled patients afflicted with PCa were included in the sub-study analysis if the FA profiles from the red blood cell (RBC) membranes and FA intakes at baseline were both determined at the time of the data analysis (n=60). Spearman's correlation coefficients were calculated to estimate the correlations between FA intakes and their proportions in the RBC membranes. Intakes of eicosapentaenoic acid (EPA) and docosahexaenoic acid (DHA) were highly correlated with their respective proportions in the RBC membranes (both rs=0.593, P < …) […] studies carried out in men with PCa.
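The validation statistic used here, Spearman's rho, is simply the Pearson correlation of the rank-transformed intakes and biomarker values. A minimal self-contained sketch (the intake and RBC numbers in the test are illustrative, not the study's data):

```python
from statistics import mean

def rank(values):
    """Assign 1-based ranks, giving tied values their average rank."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        # extend j over a run of equal values (a tie group)
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average of 1-based positions i+1 .. j+1
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman_rho(x, y):
    """Spearman's rho = Pearson correlation computed on the ranks of x and y."""
    rx, ry = rank(x), rank(y)
    mx, my = mean(rx), mean(ry)
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)
```

With no ties this reduces to the familiar 1 − 6Σd²/(n(n² − 1)) formula.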

  11. Health Research Governance: Introduction of a New Web-based Research Evaluation Model in Iran: One-decade Experience

    Science.gov (United States)

    MALEKZADEH, Reza; AKHONDZADEH, Shahin; EBADIFAR, Asghar; BARADARAN EFTEKHARI, Monir; OWLIA, Parviz; GHANEI, Mostafa; FALAHAT, Katayoun; HABIBI, Elham; SOBHANI, Zahra; DJALALINIA, Shirin; PAYKARI, Niloofar; MOJARRAB, Shahnaz; ELTEMASI, Masoumeh; LAALI, Reza

    2016-01-01

Background: Governance is one of the main functions of a Health Research System (HRS) and consists of several essential elements, including setting up an evaluation system. The goal of this study was to introduce a new web-based research evaluation model in Iran. Methods: Based on the main elements of governance, research indicators were clarified and, with the cooperation of a technical team, appropriate software was designed. The three main steps of this study consisted of developing a mission-oriented program, creating an enabling environment, and setting up the Iran Research Medical Portal as a center for research evaluation. Results: Fifty-two universities of medical sciences of three types participated. After training the evaluation focal points in all medical universities, access to data entry and uploading of all documents was provided. With regard to the mission-based program, the contribution of medical universities to knowledge production was 60% for type one, 31% for type two and 9% for type three. The research priorities, based on the Essential National Health Research (ENHR) approach and mosaic model, were gathered from universities of medical sciences and aggregated into nine main areas as national health research priorities. Ethics committees were established in all medical universities. Conclusion: The web-based research evaluation model is a comprehensive and integrated system for research data collection. The system is an appropriate tool for national health research ranking. PMID:27957437

  12. Applying Web-Based Tools for Research, Engineering, and Operations

    Science.gov (United States)

    Ivancic, William D.

    2011-01-01

    Personnel in the NASA Glenn Research Center Network and Architectures branch have performed a variety of research related to space-based sensor webs, network centric operations, security and delay tolerant networking (DTN). Quality documentation and communications, real-time monitoring and information dissemination are critical in order to perform quality research while maintaining low cost and utilizing multiple remote systems. This has been accomplished using a variety of Internet technologies often operating simultaneously. This paper describes important features of various technologies and provides a number of real-world examples of how combining Internet technologies can enable a virtual team to act efficiently as one unit to perform advanced research in operational systems. Finally, real and potential abuses of power and manipulation of information and information access is addressed.

  13. Implementation of clinical research trials using web-based and mobile devices: challenges and solutions

    Directory of Open Access Journals (Sweden)

    Roy Eagleson

    2017-03-01

Background: With the increasing implementation of web-based, mobile health interventions in clinical trials, it is crucial for researchers to address the security and privacy concerns of patient information according to high ethical standards. The full process of meeting these standards is often made more complicated by the use of internet-based technology and smartphones for treatment, telecommunication, and data collection; however, this process is not well documented in the literature. Results: The Smart Heart Trial is a single-arm feasibility study that is currently assessing the effects of a web-based, mobile lifestyle intervention for overweight and obese children and youth with congenital heart disease in Southwestern Ontario. Participants receive telephone counseling regarding nutrition and fitness and complete goal-setting activities on a web-based application. This paper provides a detailed overview of the challenges the study faced in meeting the high standards of our Research Ethics Board, specifically regarding patient privacy. Conclusion: We outline our solutions, successes, limitations, and lessons learned to inform future similar studies, and model much-needed transparency in ensuring high-quality security and protection of patient privacy when using web-based and mobile devices for telecommunication and data collection in clinical research.

  14. AFAL: a web service for profiling amino acids surrounding ligands in proteins

    Science.gov (United States)

    Arenas-Salinas, Mauricio; Ortega-Salazar, Samuel; Gonzales-Nilo, Fernando; Pohl, Ehmke; Holmes, David S.; Quatrini, Raquel

    2014-11-01

With advancements in crystallographic technology and the increasing wealth of information populating structural databases, there is an increasing need for prediction tools based on spatial information that will support the characterization of proteins and protein-ligand interactions. Herein, a new web service termed Amino Acid Frequency Around Ligand (AFAL) is presented for determining the types and frequencies of amino acids surrounding ligands within proteins deposited in the Protein Data Bank, and for assessing the atoms and atom-ligand distances involved in each interaction (availability: http://structuralbio.utalca.cl/AFAL/index.html). AFAL allows the user to define a wide variety of filtering criteria (protein family, source organism, resolution, sequence redundancy and distance) in order to uncover trends and evolutionary differences in amino acid preferences that define interactions with particular ligands. Results obtained from AFAL provide valuable statistical information about amino acids that may be responsible for establishing particular ligand-protein interactions. The analysis will enable investigators to compare ligand-binding sites of different proteins and to uncover general as well as specific interaction patterns from existing data. Such patterns can be used subsequently to predict ligand binding in proteins that currently have no structural information and to refine the interpretation of existing protein models. The application of AFAL is illustrated by the analysis of proteins interacting with adenosine-5'-triphosphate.
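At its core, profiling amino acids around a ligand is a distance filter over atomic coordinates. A minimal sketch of that idea (the coordinates, residue names and 4 Å cutoff below are illustrative; AFAL itself operates on full PDB entries):

```python
from collections import Counter
from math import dist  # Euclidean distance, Python 3.8+

def residues_near_ligand(ligand_atoms, protein_atoms, cutoff=4.0):
    """Count residue types having at least one atom within `cutoff` Å of any ligand atom.

    ligand_atoms:  list of (x, y, z) coordinates
    protein_atoms: list of (residue_name, residue_id, (x, y, z)) tuples
    """
    near_residues = set()
    for res_name, res_id, coord in protein_atoms:
        if any(dist(coord, ligand_atom) <= cutoff for ligand_atom in ligand_atoms):
            near_residues.add((res_name, res_id))  # count each residue once, not each atom
    return Counter(name for name, _ in near_residues)
```

Aggregating such counts over many structures, filtered by family, organism or resolution, yields the frequency profiles the service reports.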

  15. Implementation of Web 2.0 services in academic, medical and research libraries: a scoping review.

    Science.gov (United States)

    Gardois, Paolo; Colombi, Nicoletta; Grillo, Gaetano; Villanacci, Maria C

    2012-06-01

Academic, medical and research libraries frequently implement Web 2.0 services for users. Several reports notwithstanding, the characteristics and effectiveness of these services are unclear. The objectives were to find out: which Web 2.0 services are implemented by medical, academic and research libraries; which study designs, measures and types of data the included articles used to evaluate effectiveness; and whether the identified body of literature is amenable to a systematic review of results. A scoping review mapping the literature on the topic was conducted. Searches were performed in 19 databases. Inclusion criteria: research articles in English, Italian, German, French and Spanish (publication date ≥ 2006) about Web 2.0 services for final users implemented by academic, medical and research libraries. Reviewers' agreement was measured by Cohen's kappa. From a data set of 6461 articles, 255 (4%) were coded and analysed. Conferencing/chat/instant messaging, blogging, podcasts, social networking, wikis and aggregators were frequently examined. Services were mainly targeted at general academic users of English-speaking countries. The data prohibit a reliable estimate of the relative frequency of implemented Web 2.0 services. Case studies were the prevalent design. Most articles evaluated different outcomes using diverse assessment methodologies. A systematic review is recommended to assess the effectiveness of such services. © 2012 The authors. Health Information and Libraries Journal © 2012 Health Libraries Group.
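Cohen's kappa, used above to measure the reviewers' coding agreement, corrects raw agreement for the agreement expected by chance: κ = (pₒ − pₑ)/(1 − pₑ). A short self-contained sketch (the two coding vectors in the test are illustrative):

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters assigning categories to the same items."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    # observed agreement: fraction of items both raters coded identically
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # chance agreement: product of each rater's marginal category frequencies
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    expected = sum(counts_a[c] * counts_b[c] for c in counts_a) / (n * n)
    return (observed - expected) / (1 - expected)
```

A value of 1 means perfect agreement; 0 means agreement no better than chance.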

  16. Web services foundations

    CERN Document Server

    Bouguettaya, Athman; Daniel, Florian

    2013-01-01

    Web services and Service-Oriented Computing (SOC) have become thriving areas of academic research, joint university/industry research projects, and novel IT products on the market. SOC is the computing paradigm that uses Web services as building blocks for the engineering of composite, distributed applications out of the reusable application logic encapsulated by Web services. Web services could be considered the best-known and most standardized technology in use today for distributed computing over the Internet.Web Services Foundations is the first installment of a two-book collection coverin

  17. FedWeb Greatest Hits: Presenting the New Test Collection for Federated Web Search

    NARCIS (Netherlands)

    Demeester, Thomas; Trieschnigg, Rudolf Berend; Zhou, Ke; Nguyen, Dong-Phuong; Hiemstra, Djoerd

    This paper presents 'FedWeb Greatest Hits', a large new test collection for research in web information retrieval. As a combination and extension of the datasets used in the TREC Federated Web Search Track, this collection opens up new research possibilities on federated web search challenges, as

  18. Customizable Electronic Laboratory Online (CELO): A Web-based Data Management System Builder for Biomedical Research Laboratories

    Science.gov (United States)

    Fong, Christine; Brinkley, James F.

    2006-01-01

    A common challenge among today’s biomedical research labs is managing growing amounts of research data. In order to reduce the time and resource costs of building data management tools, we designed the Customizable Electronic Laboratory Online (CELO) system. CELO automatically creates a generic database and web interface for laboratories that submit a simple web registration form. Laboratories can then use a collection of predefined XML templates to assist with the design of a database schema. Users can immediately utilize the web-based system to query data, manage multimedia files, and securely share data remotely over the internet. PMID:17238541

  19. Virtualization of open-source secure web services to support data exchange in a pediatric critical care research network.

    Science.gov (United States)

    Frey, Lewis J; Sward, Katherine A; Newth, Christopher J L; Khemani, Robinder G; Cryer, Martin E; Thelen, Julie L; Enriquez, Rene; Shaoyu, Su; Pollack, Murray M; Harrison, Rick E; Meert, Kathleen L; Berg, Robert A; Wessel, David L; Shanley, Thomas P; Dalton, Heidi; Carcillo, Joseph; Jenkins, Tammara L; Dean, J Michael

    2015-11-01

    To examine the feasibility of deploying a virtual web service for sharing data within a research network, and to evaluate the impact on data consistency and quality. Virtual machines (VMs) encapsulated an open-source, semantically and syntactically interoperable secure web service infrastructure along with a shadow database. The VMs were deployed to 8 Collaborative Pediatric Critical Care Research Network Clinical Centers. Virtual web services could be deployed in hours. The interoperability of the web services reduced format misalignment from 56% to 1% and demonstrated that 99% of the data consistently transferred using the data dictionary and 1% needed human curation. Use of virtualized open-source secure web service technology could enable direct electronic abstraction of data from hospital databases for research purposes. © The Author 2015. Published by Oxford University Press on behalf of the American Medical Informatics Association. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
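The reduction in format misalignment reported above comes from validating every transferred record against a shared data dictionary before it crosses sites. A minimal sketch of dictionary-driven validation (the field names, types and ranges here are hypothetical, not the network's actual dictionary):

```python
# Hypothetical data dictionary: each field's expected type and constraints.
DATA_DICTIONARY = {
    "age_months": {"type": int, "min": 0, "max": 216},
    "sex": {"type": str, "allowed": {"M", "F"}},
    "weight_kg": {"type": float, "min": 0.0},
}

def validate_record(record):
    """Return a list of human-readable problems; an empty list means the record conforms."""
    problems = []
    for field, rule in DATA_DICTIONARY.items():
        if field not in record:
            problems.append(f"missing field: {field}")
            continue
        value = record[field]
        if not isinstance(value, rule["type"]):
            problems.append(f"{field}: expected {rule['type'].__name__}, got {type(value).__name__}")
            continue
        if "allowed" in rule and value not in rule["allowed"]:
            problems.append(f"{field}: {value!r} not in allowed set")
        if "min" in rule and value < rule["min"]:
            problems.append(f"{field}: {value} below minimum {rule['min']}")
        if "max" in rule and value > rule["max"]:
            problems.append(f"{field}: {value} above maximum {rule['max']}")
    return problems
```

Records that fail validation are the small fraction the paper says still needed human curation.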

  20. Towards a semantic web connecting knowledge in academic research

    CERN Document Server

    Cope, Bill; Magee, Liam

    2011-01-01

    This book addresses the question of how knowledge is currently documented, and may soon be documented in the context of what it calls 'semantic publishing'. This takes two forms: a more narrowly and technically defined 'semantic web'; as well as a broader notion of semantic publishing. This book examines the ways in which knowledge is represented in journal articles and books. By contrast, it goes on to explore the potential impacts of semantic publishing on academic research and authorship. It sets this in the context of changing knowledge ecologies: the way research is done; the way knowledg

  1. Web Developer | IDRC - International Development Research Centre

    International Development Research Centre (IDRC) Digital Library (Canada)

    Primary Duties or Responsibilities Web Development Leads all technical web ... design, and maintain the corporate website and any other internet properties IDRC ... and testing site for use by members of the website and social media team.

  2. Web cache location

    Directory of Open Access Journals (Sweden)

    Boffey Brian

    2004-01-01

Stress placed on network infrastructure by the popularity of the World Wide Web may be partially relieved by keeping multiple copies of Web documents at geographically dispersed locations. In particular, use of proxy caches and replication provide a means of storing information 'nearer to end users'. This paper concentrates on the locational aspects of Web caching giving both an overview, from an operational research point of view, of existing research and putting forward avenues for possible further research. This area of research is in its infancy and the emphasis will be on themes and trends rather than on algorithm construction. Finally, Web caching problems are briefly related to referral systems more generally.
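The locational questions raised above are facility-location problems: given client-to-site latencies, choose k sites for caches. One standard heuristic is greedy selection, adding at each step the site that most reduces total latency to each client's nearest cache. A sketch under that framing (the latency matrix and k in the test are illustrative):

```python
def greedy_cache_placement(latency, k):
    """Greedily pick k cache sites minimizing total client-to-nearest-cache latency.

    latency[i][j] = latency from client i to candidate site j.
    Returns (chosen site indices, total latency). A heuristic, not optimal in general.
    """
    n_clients = len(latency)
    n_sites = len(latency[0])
    chosen = []
    best_to_client = [float("inf")] * n_clients  # latency to nearest chosen cache
    for _ in range(k):
        best_site, best_cost = None, None
        for j in range(n_sites):
            if j in chosen:
                continue
            # total latency if site j were added to the current placement
            cost = sum(min(best_to_client[i], latency[i][j]) for i in range(n_clients))
            if best_cost is None or cost < best_cost:
                best_site, best_cost = j, cost
        chosen.append(best_site)
        best_to_client = [min(best_to_client[i], latency[i][best_site])
                          for i in range(n_clients)]
    return chosen, sum(best_to_client)
```

This is the classic greedy heuristic for the k-median/facility-location family of problems that the paper's operational-research framing points to.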

  3. Using the open Web as an information resource and scholarly Web search engines as retrieval tools for academic and research purposes

    OpenAIRE

    Filistea Naude; Chris Rensleigh; Adeline S.A. du Toit

    2010-01-01

    This study provided insight into the significance of the open Web as an information resource and Web search engines as research tools amongst academics. The academic staff establishment of the University of South Africa (Unisa) was invited to participate in a questionnaire survey and included 1188 staff members from five colleges. This study culminated in a PhD dissertation in 2008. One hundred and eighty seven respondents participated in the survey which gave a response rate of 15.7%. The re...

  4. Opportunities and Constraints in Disseminating Qualitative Research in Web 2.0 Virtual Environments.

    Science.gov (United States)

    Hays, Charles A; Spiers, Judith A; Paterson, Barbara

    2015-11-01

    The Web 2.0 digital environment is revolutionizing how users communicate and relate to each other, and how information is shared, created, and recreated within user communities. The social media technologies in the Web 2.0 digital ecosystem are fundamentally changing the opportunities and dangers in disseminating qualitative health research. The social changes influenced by digital innovations shift dissemination from passive consumption to user-centered, apomediated cooperative approaches, the features of which are underutilized by many qualitative researchers. We identify opportunities new digital media presents for knowledge dissemination activities including access to wider audiences with few gatekeeper constraints, new perspectives, and symbiotic relationships between researchers and users. We also address some of the challenges in embracing these technologies including lack of control, potential for unethical co-optation of work, and cyberbullying. Finally, we offer solutions to enhance research dissemination in sustainable, ethical, and effective strategies. © The Author(s) 2015.

  5. Hypermedia and the Semantic Web: A Research Agenda

    NARCIS (Netherlands)

    J.R. van Ossenbruggen (Jacco); L. Hardman (Lynda); L. Rutledge (Lloyd)

    2002-01-01

    textabstractUntil recently, the Semantic Web was little more than a name for the next generation Web infrastructure as envisioned by its inventor, Tim Berners-Lee. Now, with the introduction of XML and RDF, and new developments such as RDF Schema and DAML+OIL, the Semantic Web is rapidly taking

  6. Hypermedia and the semantic web: a research agenda

    NARCIS (Netherlands)

    J.R. van Ossenbruggen (Jacco); L. Hardman (Lynda); L. Rutledge (Lloyd)

    2001-01-01

    textabstractUntil recently, the Semantic Web was little more than a name for the next generation Web infrastructure as envisioned by its inventor, Tim Berners-Lee. Now, with the introduction of XML and RDF, and new developments such as RDF Schema and DAML+OIL, the Semantic Web is rapidly taking

  7. Technical Note: Harmonizing met-ocean model data via standard web services within small research groups

    Science.gov (United States)

    Signell, Richard; Camossi, E.

    2016-01-01

Work over the last decade has resulted in standardised web services and tools that can significantly improve the efficiency and effectiveness of working with meteorological and ocean model data. While many operational modelling centres have enabled query and access to data via common web services, most small research groups have not. The penetration of this approach into the research community, where IT resources are limited, can be dramatically improved by (1) making it simple for providers to enable web service access to existing output files; (2) using free technologies that are easy to deploy and configure; and (3) providing standardised, service-based tools that work in existing research environments. We present a simple, local brokering approach that lets modellers continue to use their existing files and tools, while serving virtual data sets that can be used with standardised tools. The goal of this paper is to convince modellers that a standardised framework is not only useful but can be implemented with modest effort using free software components. We use the NetCDF Markup Language (NcML) for data aggregation and standardisation, the THREDDS Data Server for data delivery, pycsw for data search, NCTOOLBOX (MATLAB®) and Iris (Python) for data access, and the Open Geospatial Consortium Web Map Service for data preview. We illustrate the effectiveness of this approach with two use cases involving small research modelling groups at NATO and USGS.
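The NcML aggregation step mentioned above can be as small as a few lines: a virtual dataset that presents a directory of per-time-step NetCDF files as one dataset joined along their shared time dimension, without touching the original files. A minimal example (the directory path is illustrative):

```xml
<netcdf xmlns="http://www.unidata.ucar.edu/namespaces/netcdf/ncml-2.2">
  <!-- Present all matching files as one dataset, joined along the existing "time" dimension -->
  <aggregation dimName="time" type="joinExisting">
    <scan location="/data/model_output/" suffix=".nc" subdirs="false"/>
  </aggregation>
</netcdf>
```

The THREDDS Data Server can then serve this NcML file as a single endpoint, which is exactly the "virtual data set" brokering the paper describes.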

  8. Web Apollo: a web-based genomic annotation editing platform.

    Science.gov (United States)

    Lee, Eduardo; Helt, Gregg A; Reese, Justin T; Munoz-Torres, Monica C; Childers, Chris P; Buels, Robert M; Stein, Lincoln; Holmes, Ian H; Elsik, Christine G; Lewis, Suzanna E

    2013-08-30

    Web Apollo is the first instantaneous, collaborative genomic annotation editor available on the web. One of the natural consequences following from current advances in sequencing technology is that there are more and more researchers sequencing new genomes. These researchers require tools to describe the functional features of their newly sequenced genomes. With Web Apollo researchers can use any of the common browsers (for example, Chrome or Firefox) to jointly analyze and precisely describe the features of a genome in real time, whether they are in the same room or working from opposite sides of the world.

  9. A Bibliometric Analysis of Research on Zika Virus Indexed in Web of Science

    Directory of Open Access Journals (Sweden)

    Saima Nasir

    2018-05-01

    Full Text Available Background: The spread of Zika virus is of great concern, as it has recently become the third global infectious disease outbreak after the H1N1 flu and Ebola virus infections. The Centers for Disease Control and Prevention (CDC) categorized Pakistan, India, and Bangladesh as countries vulnerable to Zika risk. Given the health implications of this emerging epidemic, there is a dire need to build an all-inclusive view of the status of research on Zika virus disease and a lucid picture of the research output and scientific collaborations in the field. Methods: All articles on Zika virus published globally during 2008-2017 and documented in Web of Science were analyzed using Microsoft Excel and a word cloud tool. The data were extracted from all databases of the Web of Science, yielding a total of 3384 articles for analysis. Results: 3384 records on Zika virus research were indexed in the Web of Science database during 2008-2017. The retrieved data indicate that over the past ten years little research was done on this virus; the focus shifted to Zika only in the last three years, with the number of publications increasing from just 38 in 2015 to 1962 in 2017. Pakistan has a low share in global publications on Zika, with a total of 24 publications. "Honein Margaret" is the most active researcher in the field, contributing to 80 articles. Most of the published research on Zika virus is from the US (47.07%). Conclusion: Compared with other countries, the contribution of Pakistan is negligible, with a global share of 0.71% of publications on Zika virus. Serious focus on research is needed in this field, given the severe medical, ethical, and economic implications of this emerging epidemic in Pakistan.

  10. Designing Effective Web Forms for Older Web Users

    Science.gov (United States)

    Li, Hui; Rau, Pei-Luen Patrick; Fujimura, Kaori; Gao, Qin; Wang, Lin

    2012-01-01

    This research aims to provide insight for web form design for older users. The effects of task complexity and information structure of web forms on older users' performance were examined. Forty-eight older participants with abundant computer and web experience were recruited. The results showed significant differences in task time and error rate…

  11. Interactive Voice/Web Response System in clinical research.

    Science.gov (United States)

    Ruikar, Vrishabhsagar

    2016-01-01

    Emerging technologies in the computer and telecommunication industries have eased access to computers through the telephone. An Interactive Voice/Web Response System (IxRS) is user friendly for end users, with complex, tailored programs at its backend designed for easy understanding by users. The clinical research industry has seen its data-capture methodologies revolutionized over time. Over the past couple of decades, different systems have evolved around emerging modern technologies and tools, for example, Electronic Data Capture, IxRS, and electronic patient-reported outcomes.

  12. Introduction to Webometrics Quantitative Web Research for the Social Sciences

    CERN Document Server

    Thelwall, Michael

    2009-01-01

    Webometrics is concerned with measuring aspects of the web: web sites, web pages, parts of web pages, words in web pages, hyperlinks, web search engine results. The importance of the web itself as a communication medium and for hosting an increasingly wide array of documents, from journal articles to holiday brochures, needs no introduction. Given this huge and easily accessible source of information, there are limitless possibilities for measuring or counting on a huge scale (e.g., the number of web sites, the number of web pages, the number of blogs) or on a smaller scale (e.g., the number o

  13. Semantic Web-Based Services for Supporting Voluntary Collaboration among Researchers Using an Information Dissemination Platform

    Directory of Open Access Journals (Sweden)

    Hanmin Jung

    2007-05-01

    Full Text Available Information dissemination platforms for supporting voluntary collaboration among researchers should ensure that controllable and verified information is disseminated. However, previous studies in this field narrowed their scope to information type and information specification. This paper focuses on the verification and tracing of information using an information dissemination platform and other Semantic Web-based services. Services on our platform include information dissemination services to support reliable information exchange among researchers and a knowledge service to provide unrevealed information. The latter is divided into two parts: knowledgization using an ontology and inference using a Semantic Web-based inference engine. This paper discusses how the platform supports instant knowledge addition and inference. We demonstrate our approach by constructing an ontology for national R&D reference information using 37,656 RDF triples from about 2,300 KISTI (Korea Institute of Science and Technology Information) outcomes. Three knowledge services, 'Communities of Practice', 'Researcher Tracing', and 'Research Map', were implemented on our platform using the Jena framework. Our study shows that information dissemination platforms can make a meaningful contribution to realizing a practical Semantic Web-based information dissemination platform.
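
The kind of triple-pattern matching and derived-relation query such a knowledge service performs can be sketched with plain Python tuples. This is an illustration only, not the paper's Jena-based implementation; the predicate and resource names below are invented for the example.

```python
def match(triples, pattern):
    """Return all triples matching a (s, p, o) pattern; None = wildcard."""
    s, p, o = pattern
    return [t for t in triples
            if (s is None or t[0] == s)
            and (p is None or t[1] == p)
            and (o is None or t[2] == o)]

def collaborators(triples, researcher):
    """Researchers sharing at least one project with `researcher` --
    the sort of derived relation a 'Researcher Tracing' service infers."""
    projects = {o for _, _, o in match(triples, (researcher, "worksOn", None))}
    people = set()
    for proj in projects:
        people.update(s for s, _, _ in match(triples, (None, "worksOn", proj)))
    people.discard(researcher)
    return people

# Invented toy triples standing in for the R&D reference ontology.
triples = [
    ("kim", "worksOn", "projA"),
    ("lee", "worksOn", "projA"),
    ("park", "worksOn", "projB"),
    ("kim", "worksOn", "projB"),
]
print(sorted(collaborators(triples, "kim")))  # ['lee', 'park']
```

A production system would express the same query in SPARQL against the triple store instead of scanning an in-memory list.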

  14. Alignment-Annotator web server: rendering and annotating sequence alignments.

    Science.gov (United States)

    Gille, Christoph; Fähling, Michael; Weyand, Birgit; Wieland, Thomas; Gille, Andreas

    2014-07-01

    Alignment-Annotator is a novel web service designed to generate interactive views of annotated nucleotide and amino acid sequence alignments (i) de novo and (ii) embedded in other software. All computations are performed at server side. Interactivity is implemented in HTML5, a language native to web browsers. The alignment is initially displayed using default settings and can be modified with the graphical user interfaces. For example, individual sequences can be reordered or deleted using drag and drop, amino acid color code schemes can be applied and annotations can be added. Annotations can be made manually or imported (BioDAS servers, the UniProt, the Catalytic Site Atlas and the PDB). Some edits take immediate effect while others require server interaction and may take a few seconds to execute. The final alignment document can be downloaded as a zip-archive containing the HTML files. Because of the use of HTML the resulting interactive alignment can be viewed on any platform including Windows, Mac OS X, Linux, Android and iOS in any standard web browser. Importantly, no plugins or Java are required, and Alignment-Annotator therefore represents the first interactive browser-based alignment visualization. http://www.bioinformatics.org/strap/aa/ and http://strap.charite.de/aa/. © The Author(s) 2014. Published by Oxford University Press on behalf of Nucleic Acids Research.

  15. Persistence and availability of Web services in computational biology.

    Science.gov (United States)

    Schultheiss, Sebastian J; Münch, Marc-Christian; Andreeva, Gergana D; Rätsch, Gunnar

    2011-01-01

    We have conducted a study on the long-term availability of bioinformatics Web services: an observation of 927 Web services published in the annual Nucleic Acids Research Web Server Issues between 2003 and 2009. We found that 72% of Web sites are still available at the published addresses, and only 9% of services are completely unavailable. Older addresses often redirect to new pages. We checked the functionality of all available services: for 33%, we could not test functionality because of missing example data or a related problem; 13% were truly no longer working as expected; we could positively confirm functionality for only 45% of all services. Additionally, we conducted a survey among 872 Web Server Issue corresponding authors; 274 replied. 78% of all respondents indicate their services have been developed solely by students and researchers without a permanent position. Consequently, these services are in danger of falling into disrepair after the original developers move to another institution, and indeed, for 24% of services, there is no plan for maintenance, according to the respondents. We introduce a Web service quality scoring system that correlates with the number of citations: services with a high score are cited 1.8 times more often than low-scoring services. We have identified key characteristics that are predictive of a service's survival, providing reviewers, editors, and Web service developers with the means to assess or improve Web services. A Web service conforming to these criteria receives more citations and provides more reliable service for its users. The most effective way of ensuring continued access to a service is a persistent Web address, offered either by the publishing journal, or created on the authors' own initiative, for example at http://bioweb.me. The community would benefit the most from a policy requiring any source code needed to reproduce results to be deposited in a public repository.
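
The abstract does not spell out the scoring criteria, but the general shape of a checklist-based quality score can be sketched as follows. The criterion names and weights here are hypothetical, chosen only to echo the survival factors the study mentions; the paper's actual scoring system differs in detail.

```python
# Hypothetical weighted criteria inspired by the abstract's findings.
CRITERIA = {
    "persistent_url": 3,    # stable address, e.g. a journal-hosted redirect
    "example_data": 2,      # sample input so functionality can be verified
    "maintenance_plan": 2,  # someone committed to long-term upkeep
    "source_deposited": 3,  # code archived in a public repository
}

def quality_score(service):
    """Sum the weights of every criterion the service satisfies."""
    return sum(w for name, w in CRITERIA.items() if service.get(name))

svc = {"persistent_url": True, "example_data": True}
print(quality_score(svc))  # 5
```

Under such a scheme, reviewers could rank submitted services and flag those below a threshold as at risk of disappearing.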

  16. Applying semantic web services to enterprise web

    OpenAIRE

    Hu, Y; Yang, Q P; Sun, X; Wei, P

    2008-01-01

    Enterprise Web provides a convenient, extendable, integrated platform for information sharing and knowledge management. However, it still has many drawbacks due to complexity and increasing information glut, as well as the heterogeneity of the information processed. Research in the field of Semantic Web Services has shown the possibility of adding higher level of semantic functionality onto the top of current Enterprise Web, enhancing usability and usefulness of resource, enabling decision su...

  17. SSWAP: A Simple Semantic Web Architecture and Protocol for semantic web services.

    Science.gov (United States)

    Gessler, Damian D G; Schiltz, Gary S; May, Greg D; Avraham, Shulamit; Town, Christopher D; Grant, David; Nelson, Rex T

    2009-09-23

    SSWAP (Simple Semantic Web Architecture and Protocol; pronounced "swap") is an architecture, protocol, and platform for using reasoning to semantically integrate heterogeneous, disparate data and services on the web. SSWAP was developed as a hybrid semantic web services technology to overcome limitations found in both pure web service technologies and pure semantic web technologies. There are currently over 2400 resources published in SSWAP. Approximately two dozen are custom-written services for QTL (Quantitative Trait Loci) and mapping data for legumes and grasses (grains). The remainder are wrappers to Nucleic Acids Research Database and Web Server entries. As an architecture, SSWAP establishes how clients (users of data, services, and ontologies), providers (suppliers of data, services, and ontologies), and discovery servers (semantic search engines) interact to allow for the description, querying, discovery, invocation, and response of semantic web services. As a protocol, SSWAP provides the vocabulary and semantics to allow clients, providers, and discovery servers to engage in semantic web services. The protocol is based on the W3C-sanctioned first-order description logic language OWL DL. As an open source platform, a discovery server running at http://sswap.info (as in to "swap info") uses the description logic reasoner Pellet to integrate semantic resources. The platform hosts an interactive guide to the protocol at http://sswap.info/protocol.jsp, developer tools at http://sswap.info/developer.jsp, and a portal to third-party ontologies at http://sswapmeet.sswap.info (a "swap meet"). SSWAP addresses the three basic requirements of a semantic web services architecture (i.e., a common syntax, shared semantic, and semantic discovery) while addressing three technology limitations common in distributed service systems: i.e., i) the fatal mutability of traditional interfaces, ii) the rigidity and fragility of static subsumption hierarchies, and iii) the

  18. Web X-Ray: Developing and Adopting Web Best Practices in Enterprises

    Directory of Open Access Journals (Sweden)

    Reinaldo Ferreira

    2016-12-01

    Full Text Available The adoption of Semantic Web technologies constitutes a promising approach to data structuring and integration, both for public and private usage. While these technologies have been around for some time, their adoption is behind overall expectations, particularly in the case of enterprises. With that in mind, we developed a Semantic Web Implementation Model that measures and facilitates the implementation of the technology. The advantages of using the proposed model are two-fold: the model serves as a guide for driving the implementation of the Semantic Web, and it helps to evaluate the impact of the introduction of the technology. The model was adopted by 19 enterprises in an Action Research intervention of one year with promising results: according to the model's scale, on average, the enterprises evolved from a 6% evaluation to 46% during that period. Furthermore, practical implementation recommendations, a typical consulting tool, were developed and adopted during the project by all enterprises, providing important guidelines for the identification of a development path that may be adopted on a larger scale. Meanwhile, the project also showed that most enterprises were interested in an even broader scope for the Implementation Model, and the ambition of an "All Web Technologies" approach arose: one model that could embrace the observable overlapping of different Web generations, namely the Web of Documents, the Social Web, the Web of Data and, ultimately, the Web of Context; one model that could combine evaluation and guidance for all enterprises to follow. That is the goal of the ongoing "Project Web X-ray", which aims to involve 200 enterprises in the adoption of best practices that may lead to their business development based on Web technologies. This paper presents a case of how Action Research promoted the simultaneous advancement of academic research and enterprise development and introduces the framework and opportunities

  19. Advanced web services

    CERN Document Server

    Bouguettaya, Athman; Daniel, Florian

    2013-01-01

    Web services and Service-Oriented Computing (SOC) have become thriving areas of academic research, joint university/industry research projects, and novel IT products on the market. SOC is the computing paradigm that uses Web services as building blocks for the engineering of composite, distributed applications out of the reusable application logic encapsulated by Web services. Web services could be considered the best-known and most standardized technology in use today for distributed computing over the Internet. This book is the second installment of a two-book collection covering the state-o

  20. GASS-WEB: a web server for identifying enzyme active sites based on genetic algorithms.

    Science.gov (United States)

    Moraes, João P A; Pappa, Gisele L; Pires, Douglas E V; Izidoro, Sandro C

    2017-07-03

    Enzyme active sites are important and conserved functional regions of proteins whose identification can be an invaluable step toward protein function prediction. Most of the existing methods for this task are based on active site similarity and present limitations, including performing only exact matches on template residues, imposing template size restraints, and not being capable of finding inter-domain active sites. To fill this gap, we propose GASS-WEB, a user-friendly web server that uses GASS (Genetic Active Site Search), a method based on an evolutionary algorithm to search for similar active sites in proteins. GASS-WEB can be used under two different scenarios: (i) given a protein of interest, to match a set of specific active site templates; or (ii) given an active site template, looking for it in a database of protein structures. The method has been shown to be very effective on a range of experiments and was able to correctly identify >90% of the catalogued active sites from the Catalytic Site Atlas. It also achieved a Matthews correlation coefficient of 0.63 on the Critical Assessment of protein Structure Prediction (CASP 10) dataset. In our analysis, GASS ranked fourth among 18 methods. GASS-WEB is freely available at http://gass.unifei.edu.br/. © The Author(s) 2017. Published by Oxford University Press on behalf of Nucleic Acids Research.
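
As a rough illustration of the evolutionary-search idea behind GASS (not its actual algorithm, fitness function, or data model), the following sketch evolves tuples of residue indices whose pairwise distances approximate a template's. The miniature protein and template data are invented.

```python
import random
from math import dist

def ga_match(protein, template, pop=40, gens=60, seed=1):
    """Toy evolutionary search for residues whose pairwise geometry
    matches an active-site template. A sketch of the general idea only."""
    rng = random.Random(seed)
    # Candidate indices per template slot: residues with the same name.
    cands = [[i for i, (name, _) in enumerate(protein) if name == t_name]
             for t_name, _ in template]

    def fitness(sol):  # lower is better
        err = 0.0
        for i in range(len(sol)):
            for j in range(i + 1, len(sol)):
                err += abs(dist(protein[sol[i]][1], protein[sol[j]][1])
                           - dist(template[i][1], template[j][1]))
        return err

    popn = [tuple(rng.choice(c) for c in cands) for _ in range(pop)]
    for _ in range(gens):
        popn.sort(key=fitness)
        survivors = popn[: pop // 2]       # elitist truncation selection
        children = []
        for parent in survivors:
            child = list(parent)
            k = rng.randrange(len(child))  # mutate one template slot
            child[k] = rng.choice(cands[k])
            children.append(tuple(child))
        popn = survivors + children
    return min(popn, key=fitness)

# Invented miniature protein: (residue name, C-alpha coordinates).
protein = [("HIS", (0.0, 0.0, 0.0)), ("ASP", (1.2, 0.0, 0.0)),
           ("SER", (0.0, 1.3, 0.0)), ("HIS", (7.0, 5.0, 5.0)),
           ("ASP", (9.0, 2.0, 1.0)), ("SER", (3.0, 3.0, 3.0))]
template = [("HIS", (0.0, 0.0, 0.0)), ("ASP", (1.2, 0.0, 0.0)),
            ("SER", (0.0, 1.3, 0.0))]
best = ga_match(protein, template)
```

Because candidates are restricted to residues of the matching amino-acid type, the search space stays small; GASS's real fitness function additionally handles conservative substitutions and other refinements.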

  1. Web-Based Scientific Exploration and Analysis of 3D Scanned Cuneiform Datasets for Collaborative Research

    Directory of Open Access Journals (Sweden)

    Denis Fisseler

    2017-12-01

    Full Text Available The three-dimensional cuneiform script is one of the oldest known writing systems and a central object of research in Ancient Near Eastern Studies and Hittitology. An important step towards the understanding of the cuneiform script is the provision of opportunities and tools for joint analysis. This paper presents an approach that contributes to this challenge: a collaboration-compatible, web-based scientific exploration and analysis of 3D scanned cuneiform fragments. The WebGL-based concept incorporates methods for compressed web-based content delivery of large 3D datasets and high quality visualization. To maximize accessibility and to promote acceptance of 3D techniques in the field of Hittitology, the introduced concept is integrated into the Hethitologie-Portal Mainz, an established leading online research resource in the field of Hittitology, which until now exclusively included 2D content. The paper shows that increasing the availability of 3D scanned archaeological data through a web-based interface can provide significant scientific value while at the same time finding a trade-off between copyright-induced restrictions and scientific usability.

  2. Research Proposal for Distributed Deep Web Search

    NARCIS (Netherlands)

    Tjin-Kam-Jet, Kien

    2010-01-01

    This proposal identifies two main problems related to deep web search, and proposes a step-by-step solution for each of them. The first problem is about searching deep web content by means of a simple free-text interface (with just one input field, instead of a complex interface with many input fields).

  3. Launching a virtual decision lab: development and field-testing of a web-based patient decision support research platform.

    Science.gov (United States)

    Hoffman, Aubri S; Llewellyn-Thomas, Hilary A; Tosteson, Anna N A; O'Connor, Annette M; Volk, Robert J; Tomek, Ivan M; Andrews, Steven B; Bartels, Stephen J

    2014-12-12

    Over 100 trials show that patient decision aids effectively improve patients' information comprehension and values-based decision making. However, gaps remain in our understanding of several fundamental and applied questions, particularly related to the design of interactive, personalized decision aids. This paper describes an interdisciplinary development process for, and early field testing of, a web-based patient decision support research platform, or virtual decision lab, to address these questions. An interdisciplinary stakeholder panel designed the web-based research platform with three components: a) an introduction to shared decision making, b) a web-based patient decision aid, and c) interactive data collection items. Iterative focus groups provided feedback on paper drafts and online prototypes. A field test assessed a) feasibility for using the research platform, in terms of recruitment, usage, and acceptability; and b) feasibility of using the web-based decision aid component, compared to performance of a videobooklet decision aid in clinical care. This interdisciplinary, theory-based, patient-centered design approach produced a prototype for field-testing in six months. Participants (n = 126) reported that: the decision aid component was easy to use (98%), information was clear (90%), the length was appropriate (100%), it was appropriately detailed (90%), and it held their interest (97%). They spent a mean of 36 minutes using the decision aid and 100% preferred using their home/library computer. Participants scored a mean of 75% correct on the Decision Quality, Knowledge Subscale, and 74 out of 100 on the Preparation for Decision Making Scale. Completing the web-based decision aid reduced mean Decisional Conflict scores from 31.1 to 19.5. The study demonstrated the development of a web-based patient decision support research platform that was feasible for use in research studies in terms of recruitment, acceptability, and usage.

  4. THE IMPORTANCE OF WEB DESIGN: VISUAL DESIGN EVALUATION OF DESTINATION WEB SITES

    OpenAIRE

    Fırlar, Belma; Okat Özdem, Özen

    2013-01-01

    As in the literature, research on web site efficiency is mostly concerned with site content, and analyses of function are mostly superficial. However, examining every part of a web site in turn is a necessity to demonstrate its efficiency. In this context, the visual design criteria that play an important role in users' perception of and response to web sites are put under the lens, and destination web sites are evaluated by the heuristic evaluation method. The research focus of this s...

  5. CSAR-web: a web server of contig scaffolding using algebraic rearrangements.

    Science.gov (United States)

    Chen, Kun-Tze; Lu, Chin Lung

    2018-05-04

    CSAR-web is a web-based tool that allows users to efficiently and accurately scaffold (i.e. order and orient) the contigs of a target draft genome based on a complete or incomplete reference genome from a related organism. It takes as input a target genome in multi-FASTA format and a reference genome in FASTA or multi-FASTA format, depending on whether the reference genome is complete or incomplete, respectively. In addition, it requires the user to choose either 'NUCmer on nucleotides' or 'PROmer on translated amino acids' for CSAR-web to identify conserved genomic markers (i.e. matched sequence regions) between the target and reference genomes, which are used by the rearrangement-based scaffolding algorithm in CSAR-web to order and orient the contigs of the target genome based on the reference genome. In the output page, CSAR-web displays its scaffolding result in a graphical mode (i.e. scalable dotplot), allowing the user to visually validate the correctness of scaffolded contigs, and in a tabular mode, allowing the user to view the details of scaffolds. CSAR-web is available online at http://genome.cs.nthu.edu.tw/CSAR-web.
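
A naive version of the ordering-and-orienting step can be sketched as follows; CSAR-web's actual rearrangement-based algorithm is considerably more sophisticated, and the marker data here are invented for illustration.

```python
from statistics import median

def scaffold(marker_hits):
    """Order and orient contigs from marker hits against a reference.
    marker_hits: list of (contig, ref_pos, strand) with strand '+'/'-'.
    Returns [(contig, orientation), ...] ordered along the reference.
    A simple positional heuristic, not CSAR's algebraic-rearrangement
    method."""
    by_contig = {}
    for contig, ref_pos, strand in marker_hits:
        by_contig.setdefault(contig, []).append((ref_pos, strand))
    placed = []
    for contig, hits in by_contig.items():
        pos = median(p for p, _ in hits)             # anchor position
        fwd = sum(1 for _, s in hits if s == "+")    # majority strand
        orientation = "+" if fwd * 2 >= len(hits) else "-"
        placed.append((pos, contig, orientation))
    return [(c, o) for _, c, o in sorted(placed)]

# Invented NUCmer-style hits: (contig, reference position, strand).
hits = [("ctg2", 50, "+"), ("ctg2", 70, "+"),
        ("ctg1", 10, "-"), ("ctg1", 30, "-"),
        ("ctg3", 90, "+")]
print(scaffold(hits))  # [('ctg1', '-'), ('ctg2', '+'), ('ctg3', '+')]
```

This heuristic breaks down when markers are rearranged between target and reference, which is exactly the case the rearrangement-based approach is designed to handle.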

  6. Advanced Techniques in Web Intelligence-2 Web User Browsing Behaviour and Preference Analysis

    CERN Document Server

    Palade, Vasile; Jain, Lakhmi

    2013-01-01

    This research volume focuses on analyzing the web user browsing behaviour and preferences in traditional web-based environments, social networks and web 2.0 applications, by using advanced techniques in data acquisition, data processing, pattern extraction and cognitive science for modeling the human actions. The book is directed to graduate students, researchers/scientists and engineers interested in updating their knowledge with the recent trends in web user analysis, for developing the next generation of web-based systems and applications.

  7. Facebook advertisements recruit parents of children with cancer for an online survey of web-based research preferences.

    Science.gov (United States)

    Akard, Terrah Foster; Wray, Sarah; Gilmer, Mary Jo

    2015-01-01

    Studies involving samples of children with life-threatening illnesses and their families face significant challenges, including inadequate sample sizes and limited diversity. Social media recruitment and Web-based research methods may help address such challenges yet have not been explored in pediatric cancer populations. This study examined the feasibility of using Facebook advertisements to recruit parent caregivers of children and teenagers with cancer. We also explored the feasibility of Web-based video recording in pediatric palliative care populations by surveying parents of children with cancer regarding (a) their preferences for research methods and (b) technological capabilities of their computers and phones. Facebook's paid advertising program was used to recruit parent caregivers of children currently living with cancer to complete an electronic survey about research preferences and technological capabilities. The advertising campaign generated 3 897 981 impressions, which resulted in 1050 clicks at a total cost of $1129.88. Of 284 screened individuals, 106 were eligible. Forty-five caregivers of children with cancer completed the entire electronic survey. Parents preferred and had technological capabilities for Web-based and electronic research methods. Participant survey responses are reported. Facebook was a useful, cost-effective method to recruit a diverse sample of parent caregivers of children with cancer. Web-based video recording and data collection may be feasible and desirable in samples of children with cancer and their families. Web-based methods (eg, Facebook, Skype) may enhance communication and access between nurses and pediatric oncology patients and their families.

  8. Web-based interventions in nursing.

    Science.gov (United States)

    Im, Eun-Ok; Chang, Sun Ju

    2013-02-01

    With recent advances in computer and Internet technologies and high funding priority on technological aspects of nursing research, researchers at the field level began to develop, use, and test various types of Web-based interventions. Despite high potential impacts of Web-based interventions, little is still known about Web-based interventions in nursing. In this article, to identify strengths and weaknesses of Web-based nursing interventions, a literature review was conducted using multiple databases with combined keywords of "online," "Internet" or "Web," "intervention," and "nursing." A total of 95 articles were retrieved through the databases and sorted by research topics. These articles were then analyzed to identify strengths and weaknesses of Web-based interventions in nursing. A strength of the Web-based interventions was their coverage of various content areas. In addition, many of them were theory-driven. They had advantages in their flexibility and comfort. They could provide consistency in interventions and require less cost in the intervention implementation. However, Web-based intervention studies had selected participants. They lacked controllability and had high dropouts. They required technical expertise and high development costs. Based on these findings, directions for future Web-based intervention research were provided.

  9. Web Mining and Social Networking

    DEFF Research Database (Denmark)

    Xu, Guandong; Zhang, Yanchun; Li, Lin

    This book examines the techniques and applications involved in the Web Mining, Web Personalization and Recommendation and Web Community Analysis domains, including a detailed presentation of the principles, developed algorithms, and systems of the research in these areas. The applications of web mining, and the issue of how to incorporate web mining into web personalization and recommendation systems, are also reviewed. Additionally, the volume explores web community mining and analysis to find the structural, organizational and temporal developments of web communities and reveal the societal sense of individuals or communities. The volume will benefit both academic and industry communities interested in the techniques and applications of web search, web data management, web mining and web knowledge discovery, as well as web community and social network analysis.

  10. Effect of Changing Solvents on Poly(ε-caprolactone) Nanofibrous Webs Morphology

    Directory of Open Access Journals (Sweden)

    A. Gholipour Kanani

    2011-01-01

    Full Text Available Polycaprolactone nanofibers were prepared by the electrospinning process using five different solvents (glacial acetic acid, 90% acetic acid, methylene chloride/DMF 4/1, glacial formic acid, and formic acid/acetone 4/1). The effects of solution concentration (5%, 10%, 15% and 20%) and applied voltage during spinning (10 kV to 20 kV) on nanofiber formation, morphology, and structure were investigated. SEM micrographs showed successful production of PCL nanofibers with the different solvents. With increasing polymer concentration, the average diameter of the nanofibers increases. In glacial acetic acid, above 15% concentration a bimodal web without beads was obtained. In MC/DMF, beads were observed only at 5% solution concentration. However, in glacial formic acid a uniform web without beads was obtained above 10%, and the nanofibers were brittle. In formic acid/acetone solution, the PCL web showed many beads along with fine fibers. Increasing the applied voltage resulted in fibers with larger diameters.

  11. Essential versus potentially toxic dietary substances: A seasonal comparison of essential fatty acids and methyl mercury concentrations in the planktonic food web

    Energy Technology Data Exchange (ETDEWEB)

    Kainz, Martin [Aquatic Ecosystem Management Research Division, National Water Research Institute, Environment Canada, 867 Lakeshore Road, P.O. Box 505, Burlington, ON L7R 4A6 (Canada)], E-mail: martin.kainz@donau-uni.ac.at; Arts, Michael T. [Water and Aquatic Sciences Research Program, University of Victoria, Department of Biology, P.O. Box 3020, Stn. CSC, Victoria, BC V8W 3N5 (Canada); Mazumder, Asit [Aquatic Ecosystem Management Research Division, National Water Research Institute, Environment Canada, 867 Lakeshore Road, P.O. Box 505, Burlington, ON L7R 4A6 (Canada)

    2008-09-15

    We investigated seasonal variability of essential fatty acids (EFA) and methyl mercury (MeHg) concentrations in four size categories of planktonic organisms in two coastal lakes. MeHg concentrations increased significantly with increasing plankton size and were independent of plankton taxonomy. However, total EFA increased from seston to mesozooplankton, but decreased in the cladoceran-dominated macrozooplankton size-class. Analysis of EFA patterns revealed that linoleic, alpha-linolenic, arachidonic, and eicosapentaenoic acids increased with increasing zooplankton size, but docosahexaenoic acid (DHA) in the cladoceran-dominated macrozooplankton was generally lower than in seston. This consistent pattern demonstrates that cladocerans, although bioaccumulating MeHg, convey less DHA than similar-sized copepods to their consumers. It is thus evident that fish consuming cladocerans have restricted access to DHA, yet unrestricted dietary access to MeHg. Thus, the structure of planktonic food webs clearly affects the composition of EFA and regulates dietary supply of these essential nutrients, while MeHg bioaccumulates with increasing zooplankton size. - The structure of planktonic food webs largely regulates the composition and dietary supply of essential fatty acids, while MeHg bioaccumulates with zooplankton size.

  12. Essential versus potentially toxic dietary substances: A seasonal comparison of essential fatty acids and methyl mercury concentrations in the planktonic food web

    International Nuclear Information System (INIS)

    Kainz, Martin; Arts, Michael T.; Mazumder, Asit

    2008-01-01

    We investigated seasonal variability of essential fatty acids (EFA) and methyl mercury (MeHg) concentrations in four size categories of planktonic organisms in two coastal lakes. MeHg concentrations increased significantly with increasing plankton size and were independent of plankton taxonomy. However, total EFA increased from seston to mesozooplankton, but decreased in the cladoceran-dominated macrozooplankton size-class. Analysis of EFA patterns revealed that linoleic, alpha-linolenic, arachidonic, and eicosapentaenoic acids increased with increasing zooplankton size, but docosahexaenoic acid (DHA) in the cladoceran-dominated macrozooplankton was generally lower than in seston. This consistent pattern demonstrates that cladocerans, although bioaccumulating MeHg, convey less DHA than similar-sized copepods to their consumers. It is thus evident that fish consuming cladocerans have restricted access to DHA, yet unrestricted dietary access to MeHg. Thus, the structure of planktonic food webs clearly affects the composition of EFA and regulates dietary supply of these essential nutrients, while MeHg bioaccumulates with increasing zooplankton size. - The structure of planktonic food webs largely regulates the composition and dietary supply of essential fatty acids, while MeHg bioaccumulates with zooplankton size

  13. Duke Surgery Research Central: an open-source Web application for the improvement of compliance with research regulation.

    Science.gov (United States)

    Pietrobon, Ricardo; Shah, Anand; Kuo, Paul; Harker, Matthew; McCready, Mariana; Butler, Christeen; Martins, Henrique; Moorman, C T; Jacobs, Danny O

    2006-07-27

Although regulatory compliance in academic research is enforced by law to ensure high quality and safety to participants, its implementation is frequently hindered by cost and logistical barriers. In order to decrease these barriers, we have developed a Web-based application, Duke Surgery Research Central (DSRC), to monitor and streamline the regulatory research process. The main objective of DSRC is to streamline regulatory research processes. The application was built using a combination of paper prototyping for system requirements and Java as the primary language for the application, in conjunction with the Model-View-Controller design model. The researcher interface was designed for simplicity so that it could be used by individuals with different computer literacy levels. Analogously, the administrator interface was designed with functionality as its primary goal. DSRC facilitates the exchange of regulatory documents between researchers and research administrators, allowing for tasks to be tracked and documents to be stored in a Web environment accessible from an Intranet. Usability was evaluated using formal usability tests and field observations. Formal usability results demonstrated that DSRC presented good speed, was easy to learn and use, had functionality that was easily understandable, and navigation that was intuitive. Additional features implemented upon request by initial users included: extensive variable categorization (in contrast with data capture using free text), searching capabilities to improve how research administrators could search an extensive number of researcher names, warning messages before critical tasks were performed (such as deleting a task), and confirmatory e-mails for critical tasks (such as completing a regulatory task). The current version of DSRC was shown to have excellent overall usability properties in handling research regulatory issues. 
It is hoped that its release as an open-source application will promote improved

  14. Duke Surgery Research Central: an open-source Web application for the improvement of compliance with research regulation

    Directory of Open Access Journals (Sweden)

    Martins Henrique

    2006-07-01

Full Text Available Abstract Background Although regulatory compliance in academic research is enforced by law to ensure high quality and safety to participants, its implementation is frequently hindered by cost and logistical barriers. In order to decrease these barriers, we have developed a Web-based application, Duke Surgery Research Central (DSRC), to monitor and streamline the regulatory research process. Results The main objective of DSRC is to streamline regulatory research processes. The application was built using a combination of paper prototyping for system requirements and Java as the primary language for the application, in conjunction with the Model-View-Controller design model. The researcher interface was designed for simplicity so that it could be used by individuals with different computer literacy levels. Analogously, the administrator interface was designed with functionality as its primary goal. DSRC facilitates the exchange of regulatory documents between researchers and research administrators, allowing for tasks to be tracked and documents to be stored in a Web environment accessible from an Intranet. Usability was evaluated using formal usability tests and field observations. Formal usability results demonstrated that DSRC presented good speed, was easy to learn and use, had functionality that was easily understandable, and navigation that was intuitive. Additional features implemented upon request by initial users included: extensive variable categorization (in contrast with data capture using free text), searching capabilities to improve how research administrators could search an extensive number of researcher names, warning messages before critical tasks were performed (such as deleting a task), and confirmatory e-mails for critical tasks (such as completing a regulatory task). Conclusion The current version of DSRC was shown to have excellent overall usability properties in handling research regulatory issues. It is hoped that its

  15. Maintenance-Ready Web Application Development

    Directory of Open Access Journals (Sweden)

    Ion IVAN

    2016-01-01

Full Text Available The current paper tackles the subject of developing maintenance-ready web applications. Maintenance is presented as a core stage in a web application’s lifecycle. The concept of maintenance-ready is defined in the context of web application development. Web application maintenance task types are enunciated and suitable task types are identified for further analysis. The research hypothesis is formulated based on a direct link between tackling maintenance in the development stage and reducing overall maintenance costs. A live maintenance-ready web application is presented and maintenance-related aspects are highlighted. The web application’s features that render it maintenance-ready are emphasized. The costs of designing and building the web application to be maintenance-ready are disclosed, as are the savings in maintenance development effort facilitated by maintenance-ready features. Maintenance data is collected from 40 projects implemented by a web development company. The homogeneity and diversity of the collected data are evaluated. A data sample is presented and the size and comprehensive nature of the entire dataset are depicted. The research hypothesis is validated and conclusions are formulated on the topic of developing maintenance-ready web applications. The limits of the research process that formed the basis for the current paper are enunciated. Future research topics are submitted for debate.

  16. Facebook Ads Recruit Parents of Children with Cancer for an Online Survey of Web-Based Research Preferences

    Science.gov (United States)

    Akard, Terrah Foster; Wray, Sarah; Gilmer, Mary

    2014-01-01

    Background Studies involving samples of children with life-threatening illnesses and their families face significant challenges, including inadequate sample sizes and limited diversity. Social media recruitment and web-based research methods may help address such challenges yet have not been explored in pediatric cancer populations. Objective This study examined the feasibility of using Facebook ads to recruit parent caregivers of children and teens with cancer. We also explored the feasibility of web-based video recording in pediatric palliative care populations by surveying parents of children with cancer regarding (a) their preferences for research methods and (b) technological capabilities of their computers and phones. Methods Facebook's paid advertising program was used to recruit parent caregivers of children currently living with cancer to complete an electronic survey about research preferences and technological capabilities. Results The advertising campaign generated 3,897,981 impressions which resulted in 1050 clicks at a total cost of $1129.88. Of 284 screened individuals, 106 were eligible. Forty-five caregivers of children with cancer completed the entire electronic survey. Parents preferred and had technological capabilities for web-based and electronic research methods. Participant survey responses are reported. Conclusion Facebook was a useful, cost-effective method to recruit a diverse sample of parent caregivers of children with cancer. Web-based video recording and data collection may be feasible and desirable in samples of children with cancer and their families. Implications for Practice Web-based methods (e.g., Facebook, Skype) may enhance communication and access between nurses and pediatric oncology patients and their families. PMID:24945264
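The recruitment figures reported above imply simple unit costs; a quick back-of-the-envelope check, using only the numbers given in the abstract:

```python
# Facebook ad campaign figures as reported in the abstract.
impressions = 3_897_981
clicks = 1050
total_cost = 1129.88       # USD
completed_surveys = 45

click_through_rate = clicks / impressions          # fraction of impressions clicked
cost_per_click = total_cost / clicks               # USD per ad click
cost_per_survey = total_cost / completed_surveys   # USD per completed survey

print(f"CTR: {click_through_rate:.4%}")            # ~0.0269%
print(f"Cost per click: ${cost_per_click:.2f}")    # ~$1.08
print(f"Cost per completed survey: ${cost_per_survey:.2f}")  # ~$25.11
```

At roughly $25 per completed survey, this is in line with the authors' conclusion that the campaign was cost-effective relative to traditional clinic-based recruitment.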

  17. Responsive web design workflow

    OpenAIRE

    LAAK, TIMO

    2013-01-01

Responsive Web Design Workflow is a literature review about Responsive Web Design, a modern, web-standards-based web design paradigm. The goals of this research were to define what responsive web design is, determine its importance in building modern websites, and describe a workflow for responsive web design projects. Responsive web design is a paradigm to create adaptive websites, which respond to the properties of the media used to render them. The three key elements of responsi...

  18. High potency fish oil supplement improves omega-3 fatty acid status in healthy adults: an open-label study using a web-based, virtual platform.

    Science.gov (United States)

    Udani, Jay K; Ritz, Barry W

    2013-08-08

The health benefits of omega-3 fatty acids from fish are well known, and fish oil supplements are used widely in a preventive manner to compensate for the low intake in the general population. The aim of this open-label study was to determine if consumption of a high potency fish oil supplement could improve blood levels of eicosapentaenoic acid (EPA) and docosahexaenoic acid (DHA) and impact SF-12 mental and physical health scores in healthy adults. A novel virtual clinical research organization was used along with the HS-Omega-3 Index, a measure of EPA and DHA in red blood cell membranes expressed as a percentage of total fatty acids that has been shown to correlate with a reduction in cardiovascular and other risk factors. Briefly, adult subjects (mean age 44 years) were recruited from among U.S. health food store employees and supplemented with 1.1 g/d of omega-3 from fish oil (756 mg EPA, 228 mg DHA, Minami Nutrition MorEPA Platinum) for 120 days (n = 157). Omega-3 status and mental health scores increased with supplementation (p < 0.001), while physical health scores remained unchanged. The use of a virtual, web-based platform shows considerable potential for engaging in clinical research with normal, healthy subjects. A high potency fish oil supplement may further improve omega-3 status in a healthy population regularly consuming an omega-3 supplement.

  19. Pathview Web: user friendly pathway visualization and data integration.

    Science.gov (United States)

    Luo, Weijun; Pant, Gaurav; Bhavnasi, Yeshvant K; Blanchard, Steven G; Brouwer, Cory

    2017-07-03

Pathway analysis is widely used in omics studies. Pathway-based data integration and visualization is a critical component of the analysis. To address this need, we recently developed a novel R package called Pathview. Pathview maps, integrates and renders a large variety of biological data onto molecular pathway graphs. Here we developed the Pathview Web server to make pathway visualization and data integration accessible to all scientists, including those without special computing skills or resources. Pathview Web features an intuitive graphical web interface and a user-centered design. The server not only expands the core functions of Pathview, but also provides many useful features not available in the offline R package. Importantly, the server presents a comprehensive workflow for both regular and integrated pathway analysis of multiple omics data. In addition, the server also provides a RESTful API for programmatic access and convenient integration into third-party software or workflows. Pathview Web is openly and freely accessible at https://pathview.uncc.edu/. © The Author(s) 2017. Published by Oxford University Press on behalf of Nucleic Acids Research.

  20. IL web tutorials

    DEFF Research Database (Denmark)

    Hyldegård, Jette; Lund, Haakon

    2012-01-01

The paper presents the results from a study on information literacy in a higher education (HE) context, based on a larger research project evaluating 3 Norwegian IL web tutorials at 6 universities and colleges in Norway. The aim was to evaluate how the 3 web tutorials served students’ information seeking and writing process in a study context, and to identify barriers to the employment and use of the IL web tutorials, hence to the underlying information literacy intentions of the developer. Both qualitative and quantitative methods were employed. A clear mismatch was found between the intention and use of the web tutorials. In addition, usability only played a minor role compared to relevance. It is concluded that the positive expectations of the IL web tutorials tend to be overrated by the developers. Suggestions for further research are presented.

  1. Development of a Web-Based Visualization Platform for Climate Research Using Google Earth

    Science.gov (United States)

    Sun, Xiaojuan; Shen, Suhung; Leptoukh, Gregory G.; Wang, Panxing; Di, Liping; Lu, Mingyue

    2011-01-01

Recently, it has become easier to access climate data from satellites, ground measurements, and models from various data centers. However, searching, accessing, and processing heterogeneous data from different sources are very time-consuming tasks. There is a lack of a comprehensive visual platform to acquire distributed and heterogeneous scientific data and to render processed images from a single access point for climate studies. This paper documents the design and implementation of a Web-based visual, interoperable, and scalable platform that is able to access climatological fields from models, satellites, and ground stations from a number of data sources, using Google Earth (GE) as a common graphical interface. The development is based on the TCP/IP protocol and various open data-sharing services, such as OPeNDAP, GDS, Web Processing Service (WPS), and Web Mapping Service (WMS). The visualization capability of integrating various measurements into GE dramatically extends the awareness and visibility of scientific results. Using the embedded geographic information in GE, the designed system improves our understanding of the relationships of different elements in a four-dimensional domain. The system enables easy and convenient synergistic research on a virtual platform for professionals and the general public, greatly advancing global data sharing and scientific research collaboration.

  2. Extracting Macroscopic Information from Web Links.

    Science.gov (United States)

    Thelwall, Mike

    2001-01-01

Discussion of Web-based link analysis focuses on an evaluation of Ingwersen's proposed external Web Impact Factor for the original use of the Web, namely the interlinking of academic research. Studies relationships between academic hyperlinks and research activities for British universities and discusses the use of search engines for Web link…

  3. Web 2.0 applications in medicine: trends and topics in the literature.

    Science.gov (United States)

    Boudry, Christophe

    2015-04-01

    The World Wide Web has changed research habits, and these changes were further expanded when "Web 2.0" became popular in 2005. Bibliometrics is a helpful tool used for describing patterns of publication, for interpreting progression over time, and the geographical distribution of research in a given field. Few studies employing bibliometrics, however, have been carried out on the correlative nature of scientific literature and Web 2.0. The aim of this bibliometric analysis was to provide an overview of Web 2.0 implications in the biomedical literature. The objectives were to assess the growth rate of literature, key journals, authors, and country contributions, and to evaluate whether the various Web 2.0 applications were expressed within this biomedical literature, and if so, how. A specific query with keywords chosen to be representative of Web 2.0 applications was built for the PubMed database. Articles related to Web 2.0 were downloaded in Extensible Markup Language (XML) and were processed through developed hypertext preprocessor (PHP) scripts, then imported to Microsoft Excel 2010 for data processing. A total of 1347 articles were included in this study. The number of articles related to Web 2.0 has been increasing from 2002 to 2012 (average annual growth rate was 106.3% with a maximum of 333% in 2005). The United States was by far the predominant country for authors, with 514 articles (54.0%; 514/952). The second and third most productive countries were the United Kingdom and Australia, with 87 (9.1%; 87/952) and 44 articles (4.6%; 44/952), respectively. Distribution of number of articles per author showed that the core population of researchers working on Web 2.0 in the medical field could be estimated at approximately 75. In total, 614 journals were identified during this analysis. 
Using Bradford's law, 27 core journals were identified, among which three (Studies in Health Technology and Informatics, Journal of Medical Internet Research, and Nucleic Acids
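The growth-rate figures quoted above (average 106.3% per year, peaking at 333% in 2005) are year-over-year percentage changes in annual publication counts; a minimal sketch of that computation, using illustrative counts rather than the study's actual data:

```python
# Year-over-year growth rates from annual publication counts.
# The counts below are illustrative, not the study's actual data.
counts_by_year = {2003: 5, 2004: 6, 2005: 26, 2006: 40}

def annual_growth_rates(counts):
    """Percent change between consecutive years, keyed by the later year."""
    years = sorted(counts)
    return {y: 100.0 * (counts[y] - counts[p]) / counts[p]
            for p, y in zip(years, years[1:])}

rates = annual_growth_rates(counts_by_year)          # e.g. 2005: (26-6)/6 = +333.3%
average_rate = sum(rates.values()) / len(rates)      # the study's "average annual growth rate"
```

With these toy counts the 2005 rate is 333.3%, matching the kind of spike the study reports for the year Web 2.0 became popular.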

  4. Estimating Maintenance Cost for Web Applications

    Directory of Open Access Journals (Sweden)

    Ion IVAN

    2016-01-01

Full Text Available The current paper tackles the issue of determining a method for estimating maintenance costs for web applications. The current state of research in the field of web application maintenance is summarized and leading theories and results are highlighted. The cost of web maintenance is determined by the number of man-hours invested in maintenance tasks. Web maintenance tasks are categorized into content maintenance and technical maintenance. Research is centered on analyzing technical maintenance tasks. The research hypothesis is formulated on the assumption that the number of man-hours invested in maintenance tasks can be assessed based on the web application’s user interaction level, complexity and content update effort. Data regarding the costs of maintenance tasks is collected from 24 maintenance projects implemented by a web development company that tackles a wide area of web applications. The homogeneity and diversity of the collected data are illustrated by presenting a sample of the data and depicting the overall size and comprehensive nature of the entire dataset. A set of metrics dedicated to estimating maintenance costs in web applications is defined based on conclusions formulated by analyzing the collected data and the theories and practices dominating the current state of research. The metrics are validated with regard to the initial research hypothesis. The research hypothesis is validated and conclusions are formulated on the topic of estimating the maintenance costs of web applications. The limits of the research process that formed the basis for the current paper are enunciated. Future research topics are submitted for debate.
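The hypothesis above ties maintenance man-hours to three measurable factors. Since the abstract does not reproduce the paper's actual metric set, the sketch below is a purely hypothetical linear estimator over those three factors; the weights and base hours are invented for illustration only:

```python
# Hypothetical man-hour estimator over the three factors named in the
# abstract (user interaction level, complexity, content update effort).
# Weights and base_hours are illustrative; the paper's actual metrics
# are not reproduced in the abstract.
def estimate_maintenance_hours(interaction_level, complexity, content_update_effort,
                               weights=(2.0, 3.5, 1.5), base_hours=4.0):
    """Linear estimate of monthly technical-maintenance man-hours.

    Each factor is scored on a 0-10 scale; the weights convert scores
    to hours on top of a fixed base effort.
    """
    w_i, w_c, w_u = weights
    return (base_hours + w_i * interaction_level
            + w_c * complexity + w_u * content_update_effort)

# A medium-interaction, fairly complex application with light content churn:
hours = estimate_maintenance_hours(5, 7, 3)
```

Any real metric set would be fitted against project data (here, the 24 collected projects) rather than fixed a priori.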

  5. Current trends and new challenges of databases and web applications for systems driven biological research

    Directory of Open Access Journals (Sweden)

    Pradeep Kumar eSreenivasaiah

    2010-12-01

Full Text Available The dynamic and rapidly evolving nature of systems-driven research imposes special requirements on the technology, approach, design and architecture of the computational infrastructure, including databases and web applications. Several solutions have been proposed to meet these expectations and novel methods have been developed to address the persisting problems of data integration. It is important for researchers to understand different technologies and approaches; having familiarized themselves with the pros and cons of the existing technologies, researchers can exploit their capabilities to the maximum potential for integrating data. In this review we discuss the architecture, design and key technologies underlying some of the prominent databases (DBs) and web applications. We mention their roles in the integration of biological data and investigate some of the emerging design concepts and computational technologies that are likely to have a key role in the future of systems-driven biomedical research.

  6. WebMGA: a customizable web server for fast metagenomic sequence analysis.

    Science.gov (United States)

    Wu, Sitao; Zhu, Zhengwei; Fu, Liming; Niu, Beifang; Li, Weizhong

    2011-09-07

The new field of metagenomics studies microorganism communities by culture-independent sequencing. With the advances in next-generation sequencing techniques, researchers are facing tremendous challenges in metagenomic data analysis due to huge quantity and high complexity of sequence data. Analyzing large datasets is extremely time-consuming; also metagenomic annotation involves a wide range of computational tools, which are difficult for common users to install and maintain. The tools provided by the few available web servers are also limited and have various constraints such as login requirement, long waiting time, inability to configure pipelines etc. We developed WebMGA, a customizable web server for fast metagenomic analysis. WebMGA includes over 20 commonly used tools such as ORF calling, sequence clustering, quality control of raw reads, removal of sequencing artifacts and contaminations, taxonomic analysis, functional annotation etc. WebMGA provides users with rapid metagenomic data analysis using fast and effective tools, which have been implemented to run in parallel on our local computer cluster. Users can access WebMGA through web browsers or programming scripts to perform individual analysis or to configure and run customized pipelines. WebMGA is freely available at http://weizhongli-lab.org/metagenomic-analysis. WebMGA offers to researchers many fast and unique tools and great flexibility for complex metagenomic data analysis.

  7. WebMGA: a customizable web server for fast metagenomic sequence analysis

    Directory of Open Access Journals (Sweden)

    Niu Beifang

    2011-09-01

Full Text Available Abstract Background The new field of metagenomics studies microorganism communities by culture-independent sequencing. With the advances in next-generation sequencing techniques, researchers are facing tremendous challenges in metagenomic data analysis due to huge quantity and high complexity of sequence data. Analyzing large datasets is extremely time-consuming; also metagenomic annotation involves a wide range of computational tools, which are difficult for common users to install and maintain. The tools provided by the few available web servers are also limited and have various constraints such as login requirement, long waiting time, inability to configure pipelines etc. Results We developed WebMGA, a customizable web server for fast metagenomic analysis. WebMGA includes over 20 commonly used tools such as ORF calling, sequence clustering, quality control of raw reads, removal of sequencing artifacts and contaminations, taxonomic analysis, functional annotation etc. WebMGA provides users with rapid metagenomic data analysis using fast and effective tools, which have been implemented to run in parallel on our local computer cluster. Users can access WebMGA through web browsers or programming scripts to perform individual analysis or to configure and run customized pipelines. WebMGA is freely available at http://weizhongli-lab.org/metagenomic-analysis. Conclusions WebMGA offers to researchers many fast and unique tools and great flexibility for complex metagenomic data analysis.

  8. How to Increase Reach and Adherence of Web-Based Interventions: A Design Research Viewpoint.

    Science.gov (United States)

    Ludden, Geke D S; van Rompay, Thomas J L; Kelders, Saskia M; van Gemert-Pijnen, Julia E W C

    2015-07-10

    Nowadays, technology is increasingly used to increase people's well-being. For example, many mobile and Web-based apps have been developed that can support people to become mentally fit or to manage their daily diet. However, analyses of current Web-based interventions show that many systems are only used by a specific group of users (eg, women, highly educated), and that even they often do not persist and drop out as the intervention unfolds. In this paper, we assess the impact of design features of Web-based interventions on reach and adherence and conclude that the power that design can have has not been used to its full potential. We propose looking at design research as a source of inspiration for new (to the field) design approaches. The paper goes on to specify and discuss three of these approaches: personalization, ambient information, and use of metaphors. Central to our viewpoint is the role of positive affect triggered by well-designed persuasive features to boost adherence and well-being. Finally, we discuss the future of persuasive eHealth interventions and suggest avenues for follow-up research.

  9. Integrating the hospital library with patient care, teaching and research: model and Web 2.0 tools to create a social and collaborative community of clinical research in a hospital setting.

    Science.gov (United States)

    Montano, Blanca San José; Garcia Carretero, Rafael; Varela Entrecanales, Manuel; Pozuelo, Paz Martin

    2010-09-01

    Research in hospital settings faces several difficulties. Information technologies and certain Web 2.0 tools may provide new models to tackle these problems, allowing for a collaborative approach and bridging the gap between clinical practice, teaching and research. We aim to gather a community of researchers involved in the development of a network of learning and investigation resources in a hospital setting. A multi-disciplinary work group analysed the needs of the research community. We studied the opportunities provided by Web 2.0 tools and finally we defined the spaces that would be developed, describing their elements, members and different access levels. WIKINVESTIGACION is a collaborative web space with the aim of integrating the management of all the hospital's teaching and research resources. It is composed of five spaces, with different access privileges. The spaces are: Research Group Space 'wiki for each individual research group', Learning Resources Centre devoted to the Library, News Space, Forum and Repositories. The Internet, and most notably the Web 2.0 movement, is introducing some overwhelming changes in our society. Research and teaching in the hospital setting will join this current and take advantage of these tools to socialise and improve knowledge management.

  10. Comparing web and mail responses in a mixed mode survey in college alcohol use research

    Science.gov (United States)

    McCabe, Sean Esteban; Diez, Alison; Boyd, Carol J.; Nelson, Toben F.; Weitzman, Elissa R.

    2011-01-01

    Objective This exploratory study examined potential mode effects (web versus U.S. mail) in a mixed mode design survey of alcohol use at eight U.S. colleges. Methods Randomly selected students from eight U.S. colleges were invited to participate in a self-administered survey on their alcohol use in the spring of 2002. Data were collected initially by web survey (n =2619) and non-responders to this mode were mailed a hardcopy survey (n =628). Results College students who were male, living on-campus and under 21 years of age were significantly more likely to complete the initial web survey. Multivariate analyses revealed few substantive differences between survey modality and alcohol use measures. Conclusions The findings from this study provide preliminary evidence that web and mail surveys produce comparable estimates of alcohol use in a non-randomized mixed mode design. The results suggest that mixed mode survey designs could be effective at reaching certain college sub-populations and improving overall response rate while maintaining valid measurement of alcohol use. Web surveys are gaining popularity in survey research and more work is needed to examine whether these results can extend to web surveys generally or are specific to mixed mode designs. PMID:16460882

  11. Update of the FANTOM web resource: high resolution transcriptome of diverse cell types in mammals.

    Science.gov (United States)

    Lizio, Marina; Harshbarger, Jayson; Abugessaisa, Imad; Noguchi, Shuei; Kondo, Atsushi; Severin, Jessica; Mungall, Chris; Arenillas, David; Mathelier, Anthony; Medvedeva, Yulia A; Lennartsson, Andreas; Drabløs, Finn; Ramilowski, Jordan A; Rackham, Owen; Gough, Julian; Andersson, Robin; Sandelin, Albin; Ienasescu, Hans; Ono, Hiromasa; Bono, Hidemasa; Hayashizaki, Yoshihide; Carninci, Piero; Forrest, Alistair R R; Kasukawa, Takeya; Kawaji, Hideya

    2017-01-04

Upon the first publication of the fifth iteration of the Functional Annotation of Mammalian Genomes collaborative project, FANTOM5, we gathered a series of primary data and database systems into the FANTOM web resource (http://fantom.gsc.riken.jp) to help researchers explore transcriptional regulation and cellular states. In the course of the collaboration, primary data and analysis results have been expanded, and the functionalities of the database systems enhanced. We believe that our data and web systems are invaluable resources, and we think the scientific community will benefit from this recent update to deepen their understanding of mammalian cellular organization. We introduce the contents of FANTOM5 here, report recent updates in the web resource and provide future perspectives. © The Author(s) 2016. Published by Oxford University Press on behalf of Nucleic Acids Research.

  12. A reasonable Semantic Web

    NARCIS (Netherlands)

    Hitzler, Pascal; Van Harmelen, Frank

    2010-01-01

    The realization of Semantic Web reasoning is central to substantiating the Semantic Web vision. However, current mainstream research on this topic faces serious challenges, which forces us to question established lines of research and to rethink the underlying approaches. We argue that reasoning for

  13. Exploring Individual, Social and Organisational Effects on Web 2.0-Based Workplace Learning: A Research Agenda for a Systematic Approach

    Science.gov (United States)

    Zhao, Fang; Kemp, Linzi

    2013-01-01

    Web 2.0-based workplace learning is defined in this article as informal learning that takes place in the workplace through connections and collaborations mediated by Web 2.0 technology. Web 2.0-based workplace learning has the potential to enhance organisational learning and development. However, little systematic research has been published that…

  14. Semantic web for dummies

    CERN Document Server

    Pollock, Jeffrey T

    2009-01-01

Semantic Web technology is already changing how we interact with data on the Web. By connecting random information on the Internet in new ways, Web 3.0, as it is sometimes called, represents an exciting online evolution. Whether you're a consumer doing research online, a business owner who wants to offer your customers the most useful Web site, or an IT manager eager to understand Semantic Web solutions, Semantic Web For Dummies is the place to start! It will help you: know how the typical Internet user will recognize the effects of the Semantic Web; explore all the benefits the data Web offers t

  15. Correlations between Fruit, Vegetables, Fish, Vitamins, and Fatty Acids Estimated by Web-Based Nonconsecutive Dietary Records and Respective Biomarkers of Nutritional Status.

    Science.gov (United States)

    Lassale, Camille; Castetbon, Katia; Laporte, François; Deschamps, Valérie; Vernay, Michel; Camilleri, Géraldine M; Faure, Patrice; Hercberg, Serge; Galan, Pilar; Kesse-Guyot, Emmanuelle

    2016-03-01

    It is of major importance to measure the validity of self-reported dietary intake using web-based instruments before applying them in large-scale studies. This study aimed to validate self-reported intake of fish, fruit and vegetables, and selected micronutrient intakes assessed by a web-based self-administered dietary record tool used in the NutriNet-Santé prospective cohort study, against the following concentration biomarkers: plasma beta carotene, vitamin C, and n-3 polyunsaturated fatty acids. One hundred ninety-eight adult volunteers (103 men and 95 women, mean age=50.5 years) were included in the protocol: they completed 3 nonconsecutive-day dietary records and two blood samples were drawn 3 weeks apart. The study was conducted in the area of Paris, France, between October 2012 and May 2013. Reported fish, fruit and vegetables, and selected micronutrient intakes and plasma beta carotene, vitamin C, and n-3 polyunsaturated fatty acid levels were compared. Simple and adjusted Spearman's rank correlation coefficients were estimated after de-attenuation for intra-individual variation. Regarding food groups in men, adjusted correlations ranged from 0.20 for vegetables and plasma vitamin C to 0.49 for fruits and plasma vitamin C, and from 0.40 for fish and plasma c20:5 n-3 (eicosapentaenoic acid [EPA]) to 0.55 for fish and plasma c22:6 n-3 (docosahexaenoic acid). In women, correlations ranged from 0.13 (nonsignificant) for vegetables and plasma vitamin C to 0.41 for fruits and vegetables and plasma beta carotene, and from 0.27 for fatty fish and EPA to 0.54 for fish and EPA+docosahexaenoic acid. Regarding micronutrients, adjusted correlations ranged from 0.36 (EPA) to 0.58 (vitamin C) in men and from 0.32 (vitamin C) to 0.38 (EPA) in women. The findings suggest that three nonconsecutive web-based dietary records provide reasonable estimates of true intake of fruits, vegetables, fish, beta carotene, vitamin C, and n-3 fatty acids. Along with other validation
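The validation above rests on Spearman's rank correlation between reported intake and biomarker concentrations. A minimal pure-Python sketch of that statistic follows; the variable names and toy data are illustrative, and the study's de-attenuation for intra-individual variation is an additional scaling step not shown here.

```python
def ranks(values):
    """Assign 1-based ranks, averaging the ranks of tied values."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1          # average of positions i+1 .. j+1
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / (sxx * syy) ** 0.5

def spearman(x, y):
    """Spearman's rho is the Pearson correlation of the ranks."""
    return pearson(ranks(x), ranks(y))

reported = [1, 2, 3, 4, 5]    # e.g. reported fruit intake (illustrative)
biomarker = [2, 1, 4, 3, 5]   # e.g. plasma vitamin C (illustrative)
print(spearman(reported, biomarker))   # 0.8 for this toy data
```

With real dietary data the observed rho would then be corrected upward for within-person day-to-day variation, which is what the abstract calls de-attenuation.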

  16. Ethics of Research into Learning and Teaching with Web 2.0: Reflections on Eight Case Studies

    Science.gov (United States)

    Chang, Rosemary L.; Gray, Kathleen

    2013-01-01

    The unique features and educational affordances of Web 2.0 technologies pose new challenges for conducting learning and teaching research in ways that adequately address ethical issues of informed consent, beneficence, respect, justice, research merit and integrity. This paper reviews these conceptual bases of human research ethics and gives…

  17. Distributed Management of Concurrent Web Service Transactions

    DEFF Research Database (Denmark)

    Alrifai, Mohammad; Dolog, Peter; Balke, Wolf-Tilo

    2009-01-01

Business processes involve dynamic compositions of interleaved tasks. Therefore, ensuring reliable transactional processing of Web services is crucial for the success of Web service-based B2B and B2C applications. But the inherent autonomy and heterogeneity of Web services render the applicability of conventional ACID transaction models for Web services far from being straightforward. Current Web service transaction models relax the isolation property and rely on compensation mechanisms to ensure atomicity of business transactions in the presence of service failures. However, ensuring consistency in the open and dynamic environment of Web services, where interleaving business transactions enter and exit the system independently, remains an open issue. In this paper, we address this problem and propose an architecture that supports concurrency control on the Web services level. An extension...

  18. Web Mining and Social Networking

    CERN Document Server

    Xu, Guandong; Li, Lin

    2011-01-01

    This book examines the techniques and applications involved in the Web Mining, Web Personalization and Recommendation and Web Community Analysis domains, including a detailed presentation of the principles, developed algorithms, and systems of the research in these areas. The applications of web mining, and the issue of how to incorporate web mining into web personalization and recommendation systems are also reviewed. Additionally, the volume explores web community mining and analysis to find the structural, organizational and temporal developments of web communities and reveal the societal s

  19. QoS management of web services

    CERN Document Server

    Zheng, Zibin

    2013-01-01

    Quality-of-Service (QoS) is normally used to describe the non-functional characteristics of Web services and as a criterion for evaluating different Web services. QoS Management of Web Services presents an innovative QoS evaluation framework for these services. Moreover, three QoS prediction methods and two methods for creating fault-tolerant Web services are also proposed in this book. It not only provides the latest research findings, but also presents an excellent overview of the QoS management of Web services, making it a valuable resource for researchers and graduate students in service computing.   Zibin Zheng is an associate research fellow at the Shenzhen Research Institute, the Chinese University of Hong Kong, China. Professor Michael R. Lyu also works at the same institute.

  20. The Semantic Web: opportunities and challenges for next-generation Web applications

    Directory of Open Access Journals (Sweden)

    2002-01-01

Full Text Available Recently there has been growing interest in the investigation and development of the next-generation web - the Semantic Web. While most current web content is designed to be presented to humans and is barely understandable by computers, the content of the Semantic Web is structured in a semantic way so that it is meaningful to computers as well as to humans. In this paper, we report a survey of recent research on the Semantic Web. In particular, we present the opportunities that this revolution will bring to us: web services, agent-based distributed computing, semantics-based web search engines, and semantics-based digital libraries. We also discuss the technical and cultural challenges of realizing the Semantic Web: the development of ontologies, formal semantics of Semantic Web languages, and trust and proof models. We hope that this will shed some light on the direction of future work in this field.

  1. The Ark: a customizable web-based data management tool for health and medical research.

    Science.gov (United States)

    Bickerstaffe, Adrian; Ranaweera, Thilina; Endersby, Travis; Ellis, Christopher; Maddumarachchi, Sanjaya; Gooden, George E; White, Paul; Moses, Eric K; Hewitt, Alex W; Hopper, John L

    2017-02-15

The Ark is an open-source web-based tool that allows researchers to manage health and medical research data for humans and animals without specialized database skills or programming expertise. The system provides data management for core research information including demographic, phenotype, biospecimen and pedigree data, in addition to supporting typical investigator requirements such as tracking participant consent and correspondence, whilst also being able to generate custom data exports and reports. The Ark is 'study generic' by design and highly configurable via its web interface, allowing researchers to tailor the system to the specific data management requirements of their study. Source code for The Ark can be obtained freely from the website https://github.com/The-Ark-Informatics/ark/. The source code can be modified and redistributed under the terms of the GNU GPL v3 license. Documentation and a pre-configured virtual appliance can be found at the website http://sphinx.org.au/the-ark/. adrianb@unimelb.edu.au. Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com

  2. The Creative Web.

    Science.gov (United States)

    Yudess, Jo

    2003-01-01

    This article lists the Web sites of 12 international not-for-profit creativity associations designed to trigger more creative thought and research possibilities. Along with Web addresses, the entries include telephone contact information and a brief description of the organization. (CR)

  3. Using a web-based survey tool to undertake a Delphi study: application for nurse education research.

    Science.gov (United States)

    Gill, Fenella J; Leslie, Gavin D; Grech, Carol; Latour, Jos M

    2013-11-01

    The Internet is increasingly being used as a data collection medium to access research participants. This paper reports on the experience and value of using web-survey software to conduct an eDelphi study to develop Australian critical care course graduate practice standards. The eDelphi technique used involved the iterative process of administering three rounds of surveys to a national expert panel. The survey was developed online using SurveyMonkey. Panel members responded to statements using one rating scale for round one and two scales for rounds two and three. Text boxes for panel comments were provided. For each round, the SurveyMonkey's email tool was used to distribute an individualized email invitation containing the survey web link. The distribution of panel responses, individual responses and a summary of comments were emailed to panel members. Stacked bar charts representing the distribution of responses were generated using the SurveyMonkey software. Panel response rates remained greater than 85% over all rounds. An online survey provided numerous advantages over traditional survey approaches including high quality data collection, ease and speed of survey administration, direct communication with the panel and rapid collation of feedback allowing data collection to be undertaken in 12 weeks. Only minor challenges were experienced using the technology. Ethical issues, specific to using the Internet to conduct research and external hosting of web-based software, lacked formal guidance. High response rates and an increased level of data quality were achieved in this study using web-survey software and the process was efficient and user-friendly. However, when considering online survey software, it is important to match the research design with the computer capabilities of participants and recognize that ethical review guidelines and processes have not yet kept pace with online research practices. Copyright © 2013 Elsevier Ltd. All rights reserved.

  4. DMINDA: an integrated web server for DNA motif identification and analyses.

    Science.gov (United States)

    Ma, Qin; Zhang, Hanyuan; Mao, Xizeng; Zhou, Chuan; Liu, Bingqiang; Chen, Xin; Xu, Ying

    2014-07-01

DMINDA (DNA motif identification and analyses) is an integrated web server for DNA motif identification and analyses, which is accessible at http://csbl.bmb.uga.edu/DMINDA/. This web site is freely available to all users and there is no login requirement. This server provides a suite of cis-regulatory motif analysis functions on DNA sequences, which are important for elucidating the mechanisms of transcriptional regulation: (i) de novo motif finding for a given set of promoter sequences along with statistical scores for the predicted motifs derived based on information extracted from a control set, (ii) scanning motif instances of a query motif in provided genomic sequences, (iii) motif comparison and clustering of identified motifs, and (iv) co-occurrence analyses of query motifs in given promoter sequences. The server is powered by a backend computer cluster with over 150 computing nodes, and is particularly useful for motif prediction and analyses in prokaryotic genomes. We believe that DMINDA, as a new and comprehensive web server for cis-regulatory motif finding and analyses, will benefit the genomic research community in general and prokaryotic genome researchers in particular. © The Author(s) 2014. Published by Oxford University Press on behalf of Nucleic Acids Research.
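Function (ii) above, scanning instances of a query motif in genomic sequences, can be illustrated with a toy consensus scan allowing a bounded number of mismatches. This is only a hedged sketch; DMINDA's actual scanning method is not described in the abstract and is certainly more sophisticated.

```python
def scan_motif(sequence, motif, max_mismatch=1):
    """Report (position, window, mismatches) wherever `motif` occurs
    in `sequence` with at most `max_mismatch` mismatching bases.
    A toy stand-in for server-side motif scanning."""
    hits = []
    m = len(motif)
    for i in range(len(sequence) - m + 1):
        window = sequence[i:i + m]
        mism = sum(1 for a, b in zip(window, motif) if a != b)
        if mism <= max_mismatch:
            hits.append((i, window, mism))
    return hits

# Two exact occurrences of the (hypothetical) query motif TTGACA:
print(scan_motif("ACGTTGACATTGACA", "TTGACA"))
```

Running the scan on the example sequence reports the two exact matches at offsets 3 and 9; raising `max_mismatch` would also surface degenerate instances.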

  5. WebGimm: An integrated web-based platform for cluster analysis, functional analysis, and interactive visualization of results.

    Science.gov (United States)

    Joshi, Vineet K; Freudenberg, Johannes M; Hu, Zhen; Medvedovic, Mario

    2011-01-17

Cluster analysis methods have been extensively researched, but the adoption of new methods is often hindered by technical barriers in their implementation and use. WebGimm is a free cluster analysis web-service, and an open source general purpose clustering web-server infrastructure designed to facilitate easy deployment of integrated cluster analysis servers based on clustering and functional annotation algorithms implemented in R. Integrated functional analyses and interactive browsing of both clustering structure and functional annotations provide a complete analytical environment for cluster analysis and interpretation of results. The Java Web Start client-based interface is modeled after the familiar cluster/treeview packages, making its use intuitive to a wide array of biomedical researchers. For biomedical researchers, WebGimm provides an avenue to access state-of-the-art clustering procedures. For bioinformatics methods developers, WebGimm offers a convenient avenue to deploy their newly developed clustering methods. WebGimm server, software and manuals can be freely accessed at http://ClusterAnalysis.org/.

  6. Health research networks on the web: an analysis of the Brazilian presence

    Directory of Open Access Journals (Sweden)

    Pamela Barreto Lang

    2014-02-01

Full Text Available In order to map Brazilian institutions’ web presence in an international network of health research institutions, a study was conducted in 2009, including 190 institutions from 42 countries. The sample was based on WHO (World Health Organization) collaborating centers, and the methodology used webometric analyses and techniques, especially interlinks, and social network analysis. The results showed the presence of five Brazilian institutions, featuring the Oswaldo Cruz Foundation (Fiocruz), which showed links to 20 countries and 42 institutions. Through the interface between the health field and the web, the study aims to contribute to future analyses and a plan for strategic repositioning of these institutions in the virtual world, as well as to the elaboration of public policies and recognition of webometrics as an area to be explored and applied to various other fields of knowledge.

  7. Web-Based Inquiry Learning: Facilitating Thoughtful Literacy with WebQuests

    Science.gov (United States)

    Ikpeze, Chinwe H.; Boyd, Fenice B.

    2007-01-01

    An action research study investigated how the multiple tasks found in WebQuests facilitate fifth-grade students' literacy skills and higher order thinking. Findings indicate that WebQuests are most successful when activities are carefully selected and systematically delivered. Implications for teaching include the necessity for adequate planning,…

  8. Work of the Web Weavers: Web Development in Academic Libraries

    Science.gov (United States)

    Bundza, Maira; Vander Meer, Patricia Fravel; Perez-Stable, Maria A.

    2009-01-01

    Although the library's Web site has become a standard tool for seeking information and conducting research in academic institutions, there are a variety of ways libraries approach the often challenging--and sometimes daunting--process of Web site development and maintenance. Three librarians at Western Michigan University explored issues related…

  9. Research on Artificial Spider Web Model for Farmland Wireless Sensor Network

    OpenAIRE

    Jun Wang; Song Gao; Shimin Zhao; Guang Hu; Xiaoli Zhang; Guowang Xie

    2018-01-01

    Through systematic analysis of the structural characteristics and invulnerability of spider web, this paper explores the possibility of combining the advantages of spider web such as network robustness and invulnerability with farmland wireless sensor network. A universally applicable definition and mathematical model of artificial spider web structure are established. The comparison between artificial spider web and traditional networks is discussed in detail. The simulation result shows tha...

  10. Genericity versus expressivity - an exercise in semantic interoperable research information systems for Web Science

    NARCIS (Netherlands)

    Guéret, Christophe; Chambers, Tamy; Reijnhoudt, Linda; Most, Frank van der; Scharnhorst, Andrea

    2013-01-01

    The web does not only enable new forms of science, it also creates new possibilities to study science and new digital scholarship. This paper brings together multiple perspectives: from individual researchers seeking the best options to display their activities and market their skills on the

  11. Avatar Web-Based Self-Report Survey System Technology for Public Health Research: Technical Outcome Results and Lessons Learned.

    Science.gov (United States)

    Savel, Craig; Mierzwa, Stan; Gorbach, Pamina M; Souidi, Samir; Lally, Michelle; Zimet, Gregory; Interventions, Aids

    2016-01-01

    This paper reports on a specific Web-based self-report data collection system that was developed for a public health research study in the United States. Our focus is on technical outcome results and lessons learned that may be useful to other projects requiring such a solution. The system was accessible from any device that had a browser that supported HTML5. Report findings include: which hardware devices, Web browsers, and operating systems were used; the rate of survey completion; and key considerations for employing Web-based surveys in a clinical trial setting.

  12. Designing a WebQuest

    Science.gov (United States)

    Salsovic, Annette R.

    2009-01-01

    A WebQuest is an inquiry-based lesson plan that uses the Internet. This article explains what a WebQuest is, shows how to create one, and provides an example. When engaged in a WebQuest, students use technology to experience cooperative learning and discovery learning while honing their research, writing, and presentation skills. It has been found…

  13. Web corpus construction

    CERN Document Server

    Schafer, Roland

    2013-01-01

The World Wide Web constitutes the largest existing source of texts written in a great variety of languages. A feasible and sound way of exploiting this data for linguistic research is to compile a static corpus for a given language. There are several advantages to this approach: (i) Working with such corpora obviates the problems encountered when using Internet search engines in quantitative linguistic research (such as non-transparent ranking algorithms). (ii) Creating a corpus from web data is virtually free. (iii) The size of corpora compiled from the WWW may exceed by several orders of magnitude the size of language resources offered elsewhere. (iv) The data is locally available to the user, and it can be linguistically post-processed and queried with the tools preferred by her/him. This book addresses the main practical tasks in the creation of web corpora up to giga-token size. Among these tasks are the sampling process (i.e., web crawling) and the usual cleanups including boilerplate removal and rem...
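The cleanup step named above, boilerplate removal, can be sketched with a deliberately crude length heuristic: strip markup, then keep only text blocks long enough to look like body prose. Real pipelines use far more robust methods; everything here is an illustrative toy.

```python
import re

def extract_text_blocks(html, min_words=10):
    """Toy boilerplate filter: drop scripts/styles, split on block
    tags, strip remaining markup, and keep only blocks with at
    least `min_words` words (menus and footers tend to be short)."""
    html = re.sub(r'(?s)<(script|style).*?</\1>', ' ', html)
    blocks = re.split(r'</?(?:p|div|li|h[1-6]|td|nav)[^>]*>', html)
    texts = []
    for block in blocks:
        text = re.sub(r'<[^>]+>', ' ', block)      # drop remaining tags
        text = re.sub(r'\s+', ' ', text).strip()   # normalize whitespace
        if len(text.split()) >= min_words:         # length heuristic
            texts.append(text)
    return texts

page = """<html><body>
<nav>Home | About | Contact</nav>
<p>The World Wide Web constitutes the largest existing source of
texts written in a great variety of languages, and a static corpus
compiled from it can serve quantitative linguistic research.</p>
<div>Copyright 2013</div>
</body></html>"""

for t in extract_text_blocks(page):
    print(t)
```

Only the long paragraph survives; the navigation bar and the copyright line fall below the word threshold and are discarded.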

  14. Developing a Web-Based Nursing Practice and Research Information Management System: A Pilot Study.

    Science.gov (United States)

    Choi, Jeeyae; Lapp, Cathi; Hagle, Mary E

    2015-09-01

Many hospital information systems have been developed and implemented to collect clinical data from the bedside and have used the information to improve patient care. Because of a growing awareness that the use of clinical information improves quality of care and patient outcomes, measuring tools (electronic and paper based) have been developed, but most of them require multiple steps of data collection and analysis. This necessitated the development of a Web-based Nursing Practice and Research Information Management System that processes clinical nursing data to measure nurses' delivery of care and its impact on patient outcomes and provides useful information to clinicians, administrators, researchers, and policy makers at the point of care. This pilot study developed a computer algorithm based on a falls prevention protocol and programmed the prototype Web-based Nursing Practice and Research Information Management System. It successfully measured the performance of nursing care delivered and its impact on patient outcomes using clinical nursing data from the study site. Although the Nursing Practice and Research Information Management System was tested with small data sets, the results of the study revealed that it has the potential to measure nurses' delivery of care and its impact on patient outcomes, while pinpointing components of the nursing process in need of improvement.

  15. Research on Web-Based Networked Virtual Instrument System

    International Nuclear Information System (INIS)

    Tang, B P; Xu, C; He, Q Y; Lu, D

    2006-01-01

The web-based networked virtual instrument (NVI) system is designed by using the object oriented methodology (OOM). The architecture of the NVI system consists of two major parts: client-web server interaction and instrument server-virtual instrument (VI) communication. The web server communicates with the instrument server and the clients connected to it over the Internet, and it handles identifying the user's name, managing the connection between the user and the instrument server, and adding, removing and configuring VI's information. The instrument server handles setting the parameters of VI, confirming the condition of VI and saving the VI's condition information into the database. The NVI system is required to be a general-purpose measurement system that is easy to maintain, adapt and extend. Virtual instruments are connected to the instrument server, and clients can remotely configure and operate these virtual instruments. An application of the NVI system is given at the end of the paper.

  16. The World-Wide Web: An Interface between Research and Teaching in Bioinformatics

    Directory of Open Access Journals (Sweden)

    James F. Aiton

    1994-01-01

Full Text Available The rapid expansion occurring in World-Wide Web activity is beginning to make the concepts of ‘global hypermedia’ and ‘universal document readership’ realistic objectives of the new revolution in information technology. One consequence of this increase in usage is that educators and students are becoming more aware of the diversity of the knowledge base which can be accessed via the Internet. Although computerised databases and information services have long played a key role in bioinformatics, these same resources can also be used to provide core materials for teaching and learning. The large datasets and archives that have been compiled for biomedical research can be enhanced with the addition of a variety of multimedia elements (images, digital videos, animation, etc.). The use of this digitally stored information in structured and self-directed learning environments is likely to increase as activity across the World-Wide Web increases.

  17. Conducting Web-based Surveys.

    OpenAIRE

    David J. Solomon

    2001-01-01

Web-based surveying is becoming widely used in social science and educational research. The Web offers significant advantages over more traditional survey techniques; however, there are still serious methodological challenges with using this approach. Currently, coverage bias (the fact that significant numbers of people do not have access to, or choose not to use, the Internet) is of most concern to researchers. Survey researchers also have much to learn concerning the most effective ways to conduct s...

  18. Research and implementation of a Web-based remote desktop image monitoring system

    International Nuclear Information System (INIS)

    Ren Weijuan; Li Luofeng; Wang Chunhong

    2010-01-01

This paper studies and implements an ISS (Image Snapshot Server) system based on the Web, using Java Web technology. The ISS system consists of a client web browser and a server. The server part can be divided into three modules: the screen-shot software, the web server and the Oracle database. The screen-shot software captures the desktop environment of the remotely monitored PC and sends these pictures to a Tomcat web server for display on the web in real time. At the same time, these pictures are also saved in an Oracle database. Through the web browser, the monitoring person can view the real-time and historical desktop pictures of the monitored PC during a given period. It is very convenient for any user to monitor the desktop image of a remote PC. (authors)
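The client side described above, screenshots pushed to a web server, amounts to a periodic file upload. One way to frame that upload is as a multipart/form-data POST body; the field and file names below are assumptions, and a placeholder byte string stands in for a real PNG capture.

```python
import uuid

def multipart_body(field, filename, payload, boundary=None):
    """Assemble a multipart/form-data body carrying one screenshot.
    A real client would send this as the body of an HTTP POST to
    the web server; names here are purely illustrative."""
    boundary = boundary or uuid.uuid4().hex
    head = (f"--{boundary}\r\n"
            f'Content-Disposition: form-data; name="{field}"; '
            f'filename="{filename}"\r\n'
            f"Content-Type: image/png\r\n\r\n").encode()
    tail = f"\r\n--{boundary}--\r\n".encode()
    return boundary, head + payload + tail

# A placeholder byte string stands in for PNG data grabbed from the desktop.
boundary, body = multipart_body("snapshot", "desktop-001.png", b"\x89PNG...")
print(len(body), "bytes, boundary", boundary)
```

The returned boundary would go into the request's `Content-Type: multipart/form-data; boundary=...` header so the server can parse the body.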

  19. 3Drefine: an interactive web server for efficient protein structure refinement.

    Science.gov (United States)

    Bhattacharya, Debswapna; Nowotny, Jackson; Cao, Renzhi; Cheng, Jianlin

    2016-07-08

3Drefine is an interactive web server for consistent and computationally efficient protein structure refinement with the capability to perform web-based statistical and visual analysis. The 3Drefine refinement protocol utilizes iterative optimization of the hydrogen bonding network combined with atomic-level energy minimization of the optimized model using composite physics- and knowledge-based force fields for efficient protein structure refinement. The method has been extensively evaluated on blind CASP experiments as well as on large-scale and diverse benchmark datasets and exhibits consistent improvement over the initial structure in both global and local structural quality measures. The 3Drefine web server allows for convenient protein structure refinement through a text or file input submission, email notification, provided example submission and is freely available without any registration requirement. The server also provides comprehensive analysis of submissions through various energy and statistical feedback and interactive visualization of multiple refined models through the JSmol applet that is equipped with numerous protein model analysis tools. The web server has been extensively tested and used by many users. As a result, the 3Drefine web server conveniently provides a useful tool easily accessible to the community. The 3Drefine web server has been made publicly available at the URL: http://sysbio.rnet.missouri.edu/3Drefine/. © The Author(s) 2016. Published by Oxford University Press on behalf of Nucleic Acids Research.

  20. Introducing the PRIDE Archive RESTful web services.

    Science.gov (United States)

    Reisinger, Florian; del-Toro, Noemi; Ternent, Tobias; Hermjakob, Henning; Vizcaíno, Juan Antonio

    2015-07-01

    The PRIDE (PRoteomics IDEntifications) database is one of the world-leading public repositories of mass spectrometry (MS)-based proteomics data and it is a founding member of the ProteomeXchange Consortium of proteomics resources. In the original PRIDE database system, users could access data programmatically by accessing the web services provided by the PRIDE BioMart interface. New REST (REpresentational State Transfer) web services have been developed to serve the most popular functionality provided by BioMart (now discontinued due to data scalability issues) and address the data access requirements of the newly developed PRIDE Archive. Using the API (Application Programming Interface) it is now possible to programmatically query for and retrieve peptide and protein identifications, project and assay metadata and the originally submitted files. Searching and filtering is also possible by metadata information, such as sample details (e.g. species and tissues), instrumentation (mass spectrometer), keywords and other provided annotations. The PRIDE Archive web services were first made available in April 2014. The API has already been adopted by a few applications and standalone tools such as PeptideShaker, PRIDE Inspector, the Unipept web application and the Python-based BioServices package. This application is free and open to all users with no login requirement and can be accessed at http://www.ebi.ac.uk/pride/ws/archive/. © The Author(s) 2015. Published by Oxford University Press on behalf of Nucleic Acids Research.
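A hedged sketch of how a client might query such a REST API follows. The endpoint path and parameter names mirror the general pattern described in the abstract but are assumptions rather than the documented PRIDE schema, and the JSON payload is hand-written rather than fetched over the network.

```python
import json
from urllib.parse import urlencode

BASE = "http://www.ebi.ac.uk/pride/ws/archive"

def project_search_url(query, species=None, page_size=10):
    """Build a project-search URL against the PRIDE Archive REST API.
    Path and parameter names below are illustrative assumptions,
    not taken from the official API documentation."""
    params = {"query": query, "show": page_size}
    if species:
        params["speciesFilter"] = species
    return f"{BASE}/project/list?{urlencode(params)}"

# A response would normally come from an HTTP GET; here we parse a
# hand-written sample payload with a plausible shape instead.
sample = '{"list": [{"accession": "PXD000001", "title": "Example project"}]}'
projects = json.loads(sample)["list"]

print(project_search_url("proteome", species="9606"))
print([p["accession"] for p in projects])
```

In a real client the URL would be fetched with any HTTP library and the JSON body parsed the same way; no login is required according to the abstract.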

  1. SEMANTIC WEB MINING: ISSUES AND CHALLENGES

    OpenAIRE

    Karan Singh*, Anil kumar, Arun Kumar Yadav

    2016-01-01

The combination of the two fast-evolving scientific research areas “Semantic Web” and “Web Mining” is well known as “Semantic Web Mining” in computer science. These two areas pave the way for mining related and meaningful information from the web, thereby giving rise to the term “Semantic Web Mining”. The “Semantic Web” makes mining easy, and “Web Mining” can construct new structures of the Web. Web Mining applies Data Mining techniques to web content, structure and usage. This paper gi...

  2. SaaS in web design

    OpenAIRE

    Míka, Filip

    2011-01-01

This thesis aims to evaluate whether the current SaaS market is able to meet the functional requirements of web design in order to appropriately support web design's activities. The theoretical part introduces a web design model which describes web design's functional requirements. The next section presents a research concept that describes the model assessment (i.e. solutions delivered as SaaS that support web design) and the evaluation process. The results show that the current SaaS market is able to...

  3. miRQuest: integration of tools on a Web server for microRNA research.

    Science.gov (United States)

    Aguiar, R R; Ambrosio, L A; Sepúlveda-Hermosilla, G; Maracaja-Coutinho, V; Paschoal, A R

    2016-03-28

This report describes miRQuest, a novel middleware available on a Web server that allows the end user to conduct miRNA research in a user-friendly way. Many prediction tools for microRNA (miRNA) identification exist, using different programming languages and methods to accomplish this task, and it is difficult to understand each tool and apply it to the diverse datasets and organisms available for miRNA analysis. miRQuest can easily be used by biologists and researchers with limited bioinformatics experience. We built it using a middleware architecture on a Web platform for miRNA research that performs two main functions: i) integration of different miRNA prediction tools for miRNA identification in a user-friendly environment; and ii) comparison of these prediction tools. In both cases, the user provides sequences (in FASTA format) as an input set for the analysis and comparisons. All the tools were selected on the basis of a survey of the literature on the available tools for miRNA prediction. As results, three different use cases of the tools are also described, one of which is miRNA identification analysis in 30 different species. Finally, miRQuest seems to be a novel and useful tool; it is freely available for both benchmarking and miRNA identification at http://mirquest.integrativebioinformatics.me/.

  4. Research of web application based on B/S structure testing

    International Nuclear Information System (INIS)

    Ou Ge; Zhang Hongmei; Song Liming

    2007-01-01

    Software testing is an important method used to assure the quality of Web applications. With the fast development of Web applications, old testing techniques can no longer satisfy the requirements. Researchers have therefore begun to classify the different parts of an application, identify the content that can be tested by testing tools, and study the structure of testing in order to improve its efficiency. This paper analyses testing based on the features of Web applications, summarizes testing methods and proposes some improvements to them. (authors)

  5. Spatiotemporal analysis of tropical disease research combining Europe PMC and affiliation mapping web services.

    Science.gov (United States)

    Palmblad, Magnus; Torvik, Vetle I

    2017-01-01

    Tropical medicine appeared as a distinct sub-discipline in the late nineteenth century, during a period of rapid European colonial expansion in Africa and Asia. After a dramatic drop after World War II, research on tropical diseases has received more attention and research funding in the twenty-first century. We used Apache Taverna to integrate Europe PMC and MapAffil web services into a spatiotemporal analysis workflow leading from a list of PubMed queries to a list of publication years and author affiliations geoparsed to latitudes and longitudes. The results could then be visualized in the Quantum Geographic Information System (QGIS). Our workflows automatically matched 253,277 affiliations to geographical coordinates for the first authors of 379,728 papers on tropical diseases in a single execution. The bibliometric analyses show how research output on tropical diseases follows major historical shifts in the twentieth century and renewed interest in, and funding for, tropical disease research in the twenty-first century. They show the effects of disease outbreaks, WHO eradication programs, vaccine developments, wars, refugee migrations, and peace treaties. Literature search and geoparsing web services can thus be combined in scientific workflows performing a complete spatiotemporal bibliometric analysis of research in tropical medicine. The workflows and datasets are freely available and can be used to reproduce or refine the analyses, test specific hypotheses, or look into particular diseases or geographic regions. This work exceeds all previously published bibliometric analyses on tropical diseases in both scale and spatiotemporal range.

  6. Web survey methodology

    CERN Document Server

    Callegaro, Mario; Vehovar, Vasja

    2015-01-01

    Web Survey Methodology guides the reader through the past fifteen years of research in web survey methodology. It both provides practical guidance on the latest techniques for collecting valid and reliable data and offers a comprehensive overview of research issues. Core topics from preparation to questionnaire design, recruitment testing to analysis and survey software are all covered in a systematic and insightful way. The reader will be exposed to key concepts and key findings in the literature, covering measurement, non-response, adjustments, paradata, and cost issues. The book also discusses the hottest research topics in survey research today, such as internet panels, virtual interviewing, mobile surveys and the integration with passive measurements, e-social sciences, mixed modes and business intelligence. The book is intended for students, practitioners, and researchers in fields such as survey and market research, psychological research, official statistics and customer satisfaction research.

  7. Caught in the Web

    Energy Technology Data Exchange (ETDEWEB)

    Gillies, James

    1995-06-15

    The World-Wide Web may have taken the Internet by storm, but many people would be surprised to learn that it owes its existence to CERN. Around half the world's particle physicists come to CERN for their experiments, and the Web is the result of their need to share information quickly and easily on a global scale. Six years after Tim Berners-Lee's inspired idea to marry hypertext to the Internet in 1989, CERN is handing over future Web development to the World-Wide Web Consortium, run by the French National Institute for Research in Computer Science and Control, INRIA, and the Laboratory for Computer Science of the Massachusetts Institute of Technology, MIT, leaving itself free to concentrate on physics. The Laboratory marked this transition with a conference designed to give a taste of what the Web can do, whilst firmly stamping it with the label ''Made in CERN''. Over 200 European journalists and educationalists came to CERN on 8 - 9 March for the World-Wide Web Days, resulting in wide media coverage. The conference was opened by UK Science Minister David Hunt who stressed the importance of fundamental research in generating new ideas. ''Who could have guessed 10 years ago'', he said, ''that particle physics research would lead to a communication system which would allow every school to have the biggest library in the world in a single computer?''. In his introduction, the Minister also pointed out that ''CERN and other basic research laboratories help to break new technological ground and sow the seeds of what will become mainstream manufacturing in the future.'' Learning the jargon is often the hardest part of coming to grips with any new invention, so CERN put it at the top of the agenda. Jacques Altaber, who helped introduce the Internet to CERN in the early 1980s, explained that without the Internet, the Web couldn't exist. 
The Internet began as a US Defense Department research project in the 1970s and has grown into a global network-of-networks linking some

  8. Nuclear Physics (Education) on the Web

    International Nuclear Information System (INIS)

    Bar-Noy, T.

    1999-01-01

    The Web has long since become an important source of information for researchers and educators. In the present paper we shed some light on its main resources: Newsgroups, Mailing lists, Catalogs, Research- and Education-oriented Web-sites, and (Java) simulations

  9. Current status of acid fog research. Sanseimu kenkyu no genjo

    Energy Technology Data Exchange (ETDEWEB)

    Murano, K. (National Inst. for Environmental Studies, Tsukuba (Japan))

    1993-07-10

    Acid fog research has lagged behind acid rain research. This is because the places that generate fog thick enough to collect are limited, those places are mountainous, surveys require a lot of work, fog collectors are not as convenient as those for acid rain, and sampling is difficult to automate. Since the 1980s, an extensive survey of acid fog has been carried out, centered on the west coast of California, USA, where low pH fog (minimum pH 2.2) was observed. In the course of this research, string-type active fogwater collectors became a major sampling method, and simulation of the acidification of fog droplets in the atmosphere was conducted extensively. In Japan, field surveys of acid fog were already conducted in the 1960s; in 1984 an acid fog survey started on Mt. Akagi from the viewpoint of ecological impact, and low pH fog (pH 3 to 4) was reported to have persisted for more than 10 hours. Plant damage by acid fog was pointed out in several locations; in particular, the tree mortality mechanism in Tomakomai was clarified. 50 refs., 10 figs., 6 tabs.

  10. A study on the Web intelligence

    Institute of Scientific and Technical Information of China (English)

    Sang-Geun Kim

    2004-01-01

    This paper surveys important aspects of Web Intelligence (WI). WI explores the fundamental roles as well as practical impacts of Artificial Intelligence (AI) and advanced Information Technology (IT) on the next generation of Web-related products, systems, and activities. As a direction for scientific research and development, WI can be extremely beneficial for the field of Artificial Intelligence in Education (AIED). This paper covers these issues only very briefly; it focuses more on other issues in WI, such as intelligent Web services and the Semantic Web, and proposes how to use them as a basis for tackling new and challenging research problems in AIED.

  11. Penilaian Risiko Aplikasi Web Menggunakan Model DREAD

    Directory of Open Access Journals (Sweden)

    Didit Suprihanto

    2016-01-01

    Full Text Available Web-based applications, besides the advantages offered by WWW technology, have a vulnerable side that can become a threat. Vulnerabilities generate risk and can cause serious trouble, even major losses. The goal of this research is to design and build a risk assessment system that documents threat levels and offers prevention advice. It uses the DREAD model as a method for addressing threats by providing qualified information, which is used to produce a risk level for a web application. The result of this research is a web application risk assessment system that uses the DREAD model to determine the threat risk level, align developers' perception of web threat risks, minimize threat risk and maximize the performance of web applications.   Keywords : DREAD model, web threat risk, web risk assessment system
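For context, the DREAD model as commonly described (e.g. in Microsoft's threat modeling material) rates a threat on five factors and averages them into a risk score. The sketch below is illustrative only; the 1-10 rating scale and the risk-level thresholds are our assumptions, not values taken from the paper:

```python
# Illustrative DREAD scoring: average five factor ratings into a risk score.
# Scale (1-10) and thresholds are assumptions for the sake of the example.
FACTORS = ("damage", "reproducibility", "exploitability",
           "affected_users", "discoverability")

def dread_score(ratings: dict) -> float:
    """Average the five DREAD factor ratings."""
    return sum(ratings[f] for f in FACTORS) / len(FACTORS)

def risk_level(score: float) -> str:
    """Map an averaged score to a coarse risk level (assumed thresholds)."""
    if score >= 7:
        return "high"
    if score >= 4:
        return "medium"
    return "low"

threat = {"damage": 8, "reproducibility": 6, "exploitability": 7,
          "affected_users": 9, "discoverability": 5}
print(dread_score(threat), risk_level(dread_score(threat)))  # 7.0 high
```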

  12. Beyond Web 2.0 … and Beyond the Semantic Web

    Science.gov (United States)

    Bénel, Aurélien; Zhou, Chao; Cahier, Jean-Pierre

    Tim O'Reilly, the famous technology book publisher, changed the life of many of us when he coined the name "Web 2.0" (O'Reilly 2005). Our research topics suddenly became subjects for open discussion in various cultural formats such as radio and TV, while at the same time becoming part of an inappropriate marketing discourse according to several scientific reviewers. Indeed, Tim O'Reilly's initial thoughts were about economic consequences, since it was about the resurrection of the Web after the bursting of the dot-com bubble. Some opponents of the concept think the term should not be used at all, since it is underpinned by no technological revolution. In contrast, we think that there was a paradigm shift when several sites based on user-generated content became some of the most visited Web sites, and massive adoption of that kind is worthy of researchers' attention.

  13. miRNAFold: a web server for fast miRNA precursor prediction in genomes.

    Science.gov (United States)

    Tav, Christophe; Tempel, Sébastien; Poligny, Laurent; Tahi, Fariza

    2016-07-08

    Computational methods are required for the prediction of non-coding RNAs (ncRNAs), which are involved in many biological processes, especially at the post-transcriptional level. Among these ncRNAs, miRNAs have been widely studied, and biologists need efficient and fast tools for their identification. In particular, ab initio methods are usually required when predicting novel miRNAs. Here we present a web server dedicated to large-scale identification of miRNA precursors in genomes. It is based on an algorithm called miRNAFold that predicts miRNA hairpin structures quickly and with high sensitivity. miRNAFold is implemented as a web server with an intuitive and user-friendly interface, as well as a standalone version. The web server is freely available at: http://EvryRNA.ibisc.univ-evry.fr/miRNAFold. © The Author(s) 2016. Published by Oxford University Press on behalf of Nucleic Acids Research.

  14. Social networks, web-based tools and diseases: implications for biomedical research.

    Science.gov (United States)

    Costa, Fabricio F

    2013-03-01

    Advances in information technology have improved our ability to gather, collect and analyze information from individuals online. Social networks can be seen as a nonlinear superposition of a multitude of complex connections between people where the nodes represent individuals and the links between them capture a variety of different social interactions. The emergence of different types of social networks has fostered connections between individuals, thus facilitating data exchange in a variety of fields. Therefore, the question posed now is "can these same tools be applied to life sciences in order to improve scientific and medical research?" In this article, I will review how social networks and other web-based tools are changing the way we approach and track diseases in biomedical research. Copyright © 2012 Elsevier Ltd. All rights reserved.

  15. Potential research money available from the Acid Deposition Program and Alberta Environment

    International Nuclear Information System (INIS)

    Primus, C.L.

    1992-01-01

    It is exceedingly difficult to demonstrate definitive long-term changes in animal health as a result of acid-forming emissions from sour gas wells. A summary is presented of current research in Alberta, followed by the potential for research funding by the Alberta Government/Industry Acid Deposition Research Program (ADRP). The Alberta Environment research budget consists of four programs in addition to the ADRP: acid deposition effects research in the Athabasca oil sands; western and northern Canada long-range transport of air pollutants; departmental monitoring; and inhalation toxicology and animal health. Animal health research, although a component of the acid deposition issue, is beyond the mandate of Alberta Environment, and the ADRP members committee does not foresee becoming involved in the long-term and complex research required to address the effects of acid-forming emissions on livestock. Funds for additional animal health research must come from other government departments and agencies whose mandate covers this area

  16. Semantic Web Services with Web Ontology Language (OWL-S) - Specification of Agent-Services for DARPA Agent Markup Language (DAML)

    National Research Council Canada - National Science Library

    Sycara, Katia P

    2006-01-01

    CMU did research and development on semantic web services using OWL-S, the semantic web service language under the Defense Advanced Research Projects Agency- DARPA Agent Markup Language (DARPA-DAML) program...

  17. PIQMIe: a web server for semi-quantitative proteomics data management and analysis.

    Science.gov (United States)

    Kuzniar, Arnold; Kanaar, Roland

    2014-07-01

    We present the Proteomics Identifications and Quantitations Data Management and Integration Service, or PIQMIe, which aids in reliable and scalable data management, analysis and visualization of semi-quantitative mass spectrometry-based proteomics experiments. PIQMIe readily integrates peptide and (non-redundant) protein identifications and quantitations from multiple experiments with additional biological information on the protein entries, and makes the linked data available in the form of a light-weight relational database, which enables dedicated data analyses (e.g. in R) and user-driven queries. Using the web interface, users are presented with a concise summary of their proteomics experiments in numerical and graphical forms, as well as with a searchable protein grid and interactive visualization tools to aid in the rapid assessment of the experiments and in the identification of proteins of interest. The web server not only provides data access through a web interface but also supports programmatic access through a RESTful web service. The web server is available at http://piqmie.semiqprot-emc.cloudlet.sara.nl or http://www.bioinformatics.nl/piqmie. This website is free and open to all users and there is no login requirement. © The Author(s) 2014. Published by Oxford University Press on behalf of Nucleic Acids Research.

  18. WebAL Comes of Age: A review of the first 21 years of Artificial Life on the Web

    DEFF Research Database (Denmark)

    Taylor, Tim; Auerbach, Joshua E; Bongard, Josh

    2016-01-01

    We present a survey of the first 21 years of web-based artificial life (WebAL) research and applications, broadly construed to include the many different ways in which artificial life and web technologies might intersect. Our survey covers the period from 1994—when the first WebAL work appeared...

  19. Web analytics tools and web metrics tools: An overview and comparative analysis

    Directory of Open Access Journals (Sweden)

    Ivan Bekavac

    2015-10-01

    Full Text Available The aim of this paper is to compare and analyze the impact of web analytics tools for measuring the performance of a business model. Accordingly, an overview of web analytics and web metrics tools is given, including their characteristics, main functionalities and available types. The data acquisition approaches and the proper choice of web tools for particular business models are also reviewed. The research is divided into two sections. First, a qualitative focus is placed on reviewing web analytics tools to explore their functionalities and their ability to be integrated into the respective business model. Web analytics tools support the business analyst’s efforts in obtaining useful and relevant insights into market dynamics. Thus, generally speaking, selecting a web analytics and web metrics tool should be based on an investigative approach, not a random decision. The second section takes a quantitative focus, shifting from theory to an empirical approach, and subsequently presents output data resulting from a study based on perceived user satisfaction with web analytics tools. The empirical study was carried out on employees from 200 Croatian firms, from either an IT or a marketing branch. The paper contributes to highlighting the support that the web analytics and web metrics tools available on the market have to offer to management, based on the growing need to understand and predict global market trends.

  20. ProBiS-ligands: a web server for prediction of ligands by examination of protein binding sites.

    Science.gov (United States)

    Konc, Janez; Janežič, Dušanka

    2014-07-01

    The ProBiS-ligands web server predicts binding of ligands to a protein structure. Starting with a protein structure or binding site, ProBiS-ligands first identifies template proteins in the Protein Data Bank that share similar binding sites. Based on the superimpositions of the query protein and the similar binding sites found, the server then transposes the ligand structures from those sites to the query protein. Such ligand prediction supports many activities, e.g. drug repurposing. The ProBiS-ligands web server, an extension of the ProBiS web server, is open and free to all users at http://probis.cmm.ki.si/ligands. © The Author(s) 2014. Published by Oxford University Press on behalf of Nucleic Acids Research.

  1. Free amino acids in spider hemolymph.

    Science.gov (United States)

    Tillinghast, Edward K; Townley, Mark A

    2008-11-01

    We examined the free amino acid composition of hemolymph from representatives of five spider families with an interest in knowing if the amino acid profile in the hemolymph of orb-web-building spiders reflects the high demands for small organic compounds in the sticky droplets of their webs. In nearly all analyses, on both orb and non-orb builders, glutamine was the most abundant free amino acid. Glycine, taurine, proline, histidine, and alanine also tended to be well-represented in orb and non-orb builders. While indications of taxon-specific differences in amino acid composition were observed, it was not apparent that two presumptive precursors (glutamine, taurine) of orb web sticky droplet compounds were uniquely enriched in araneids (orb builders). However, total amino acid concentrations were invariably highest in the araneids and especially so in overwintering juveniles, even as several of the essential amino acids declined during this winter diapause. Comparing the data from this study with those from earlier studies revealed a number of discrepancies. The possible origins of these differences are discussed.

  2. ChEMBL web services: streamlining access to drug discovery data and utilities.

    Science.gov (United States)

    Davies, Mark; Nowotka, Michał; Papadatos, George; Dedman, Nathan; Gaulton, Anna; Atkinson, Francis; Bellis, Louisa; Overington, John P

    2015-07-01

    ChEMBL is now a well-established resource in the fields of drug discovery and medicinal chemistry research. The ChEMBL database curates and stores standardized bioactivity, molecule, target and drug data extracted from multiple sources, including the primary medicinal chemistry literature. Programmatic access to ChEMBL data has been improved by a recent update to the ChEMBL web services (version 2.0.x, https://www.ebi.ac.uk/chembl/api/data/docs), which exposes significantly more data from the underlying database and introduces new functionality. To complement the data-focused services, a utility service (version 1.0.x, https://www.ebi.ac.uk/chembl/api/utils/docs), which provides RESTful access to commonly used cheminformatics methods, has also been concurrently developed. The ChEMBL web services can be used together or independently to build applications and data processing workflows relevant to drug discovery and chemical biology. © The Author(s) 2015. Published by Oxford University Press on behalf of Nucleic Acids Research.
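As a minimal sketch of the programmatic access described above, a client can address resources under the documented data-service base URL. The resource path pattern below follows ChEMBL web service conventions; the `molecule` resource and the `CHEMBL25` identifier are used for illustration, and actual retrieval requires network access, so the helper first builds the request URL:

```python
# Sketch of programmatic access to the ChEMBL data web service.
# Base URL as cited in the abstract; resource path is illustrative.
import json
from urllib.request import urlopen

DATA_BASE = "https://www.ebi.ac.uk/chembl/api/data"

def molecule_url(chembl_id: str, fmt: str = "json") -> str:
    """Build the URL for a single molecule resource."""
    return f"{DATA_BASE}/molecule/{chembl_id}.{fmt}"

def fetch_molecule(chembl_id: str) -> dict:
    """Retrieve one molecule record (requires network access)."""
    with urlopen(molecule_url(chembl_id)) as resp:
        return json.load(resp)

print(molecule_url("CHEMBL25"))
# https://www.ebi.ac.uk/chembl/api/data/molecule/CHEMBL25.json
```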

  3. minepath.org: a free interactive pathway analysis web server.

    Science.gov (United States)

    Koumakis, Lefteris; Roussos, Panos; Potamias, George

    2017-07-03

    MinePath ( www.minepath.org ) is a web-based platform that elaborates on, and radically extends, the identification of differentially expressed sub-paths in molecular pathways. Besides the network topology, the underlying MinePath algorithmic processes exploit exact gene-gene molecular relationships (e.g. activation, inhibition) and are able to identify differentially expressed pathway parts. Each pathway is decomposed into all its constituent sub-paths, which in turn are matched with corresponding gene expression profiles. The highly ranked, phenotype-inclined sub-paths are kept. Apart from the pathway analysis algorithm, the fundamental innovation of the MinePath web server concerns its advanced visualization and interactive capabilities. To our knowledge, this is the first pathway analysis server that introduces and offers visualization of the underlying and active pathway regulatory mechanisms instead of genes. Other features include live interaction, immediate visualization of functional sub-paths per phenotype, and dynamic linked annotations for the engaged genes and molecular relations. The user can download not only the results but also the corresponding web viewer framework of the performed analysis. This feature provides the flexibility to immediately publish results without publishing source/expression data, while getting all the functionality of a web-based pathway analysis viewer. © The Author(s) 2017. Published by Oxford University Press on behalf of Nucleic Acids Research.

  4. Challenge in Sharing Tacit Knowledge: Academicians’ Behavior towards Developing A Web Portal for Sharing Research Ideas

    Directory of Open Access Journals (Sweden)

    Hafiza Adenan

    2013-08-01

    Full Text Available Academicians’ collective memories hold soft information, such as research ideas, expertise, experiences, academic skills, know-what, know-how and know-why, which it is considered should inevitably be made accessible. Higher education institutions need to identify, collect, classify, verbalize and diffuse the academicians’ soft information, specifically the research ideas present in the university, for knowledge enrichment. This can be implemented by academicians actively sharing their research ideas with others. Actively sharing research ideas will have a great impact on the enrichment of their intellectual capability, as most valuable knowledge resides in one’s brain. However, as there is no specific medium to bring their research ideas to the surface and make them visible to others, these precious research ideas remain in the academicians’ brains. Therefore, the objective of the study is to explore academicians’ behavior toward the development of a web portal for sharing research ideas at private university colleges in Malaysia. This study used a qualitative method, namely a multiple case study, referring to four private university colleges in Malaysia. In-depth interviews, focus group discussions and document analysis formed the data collection for this study. The Theory of Planned Behavior by Ajzen (1991) was used to determine the academicians’ behavior. This study showed that the academicians’ attitude, subjective norms, and perceived behavioral control towards developing a web portal for sharing research ideas all affect their intention to share their research ideas with others.

  5. Developing a national and international research community in tree breeding through a web-based information system

    CSIR Research Space (South Africa)

    Hohls, DR

    2008-11-01

    Full Text Available A CSIR research group has developed a web-based information system on tree breeding, which will link national and international partners, with data dating back more than 80 years. Tree breeding relies heavily on managing and exploiting data. While...

  6. Social web and knowledge management

    DEFF Research Database (Denmark)

    Dolog, Peter; Kroetz, Markus; Schaffert, Sebastian

    2009-01-01

    Knowledge Management is the study and practice of representing, communicating, organizing, and applying knowledge in organizations. Moreover, being used by organizations, it is inherently social. The Web, as a medium, enables new forms of communications and interactions and requires new ways to represent knowledge assets. It is therefore obvious that the Web will influence and change Knowledge Management, but it is very unclear what the impact of these changes will be. This chapter raises questions and discusses visions in the area that connects the Social Web and Knowledge Management - an area of research that is only just emerging. The World Wide Web conference 2008 in Beijing hosted a workshop on that question, bringing together researchers and practitioners to gain first insights toward answering questions of that area.

  7. GenProBiS: web server for mapping of sequence variants to protein binding sites.

    Science.gov (United States)

    Konc, Janez; Skrlj, Blaz; Erzen, Nika; Kunej, Tanja; Janezic, Dusanka

    2017-07-03

    Discovery of potentially deleterious sequence variants is important and has wide implications for research and generation of new hypotheses in human and veterinary medicine, and drug discovery. The GenProBiS web server maps sequence variants to protein structures from the Protein Data Bank (PDB), and further to protein-protein, protein-nucleic acid, protein-compound, and protein-metal ion binding sites. The concept of a protein-compound binding site is understood in the broadest sense, which includes glycosylation and other post-translational modification sites. Binding sites were defined by local structural comparisons of whole protein structures using the Protein Binding Sites (ProBiS) algorithm and transposition of ligands from the similar binding sites found to the query protein using the ProBiS-ligands approach, with new improvements introduced in GenProBiS. Binding site surfaces were generated as three-dimensional grids encompassing the space occupied by predicted ligands. The server allows intuitive visual exploration of comprehensively mapped variants, such as human somatic missense mutations related to cancer and non-synonymous single nucleotide polymorphisms from 21 species, within the predicted binding site regions for about 80 000 PDB protein structures using fast WebGL graphics. The GenProBiS web server is open and free to all users at http://genprobis.insilab.org. © The Author(s) 2017. Published by Oxford University Press on behalf of Nucleic Acids Research.

  8. The RCSB Protein Data Bank: redesigned web site and web services.

    Science.gov (United States)

    Rose, Peter W; Beran, Bojan; Bi, Chunxiao; Bluhm, Wolfgang F; Dimitropoulos, Dimitris; Goodsell, David S; Prlic, Andreas; Quesada, Martha; Quinn, Gregory B; Westbrook, John D; Young, Jasmine; Yukich, Benjamin; Zardecki, Christine; Berman, Helen M; Bourne, Philip E

    2011-01-01

    The RCSB Protein Data Bank (RCSB PDB) web site (http://www.pdb.org) has been redesigned to increase usability and to cater to a larger and more diverse user base. This article describes key enhancements and new features that fall into the following categories: (i) query and analysis tools for chemical structure searching, query refinement, tabulation and export of query results; (ii) web site customization and new structure alerts; (iii) pair-wise and representative protein structure alignments; (iv) visualization of large assemblies; (v) integration of structural data with the open access literature and binding affinity data; and (vi) web services and web widgets to facilitate integration of PDB data and tools with other resources. These improvements enable a range of new possibilities to analyze and understand structure data. The next generation of the RCSB PDB web site, as described here, provides a rich resource for research and education.

  9. Using the World Wide Web to Connect Research and Professional Practice: Towards Evidence-Based Practice

    Directory of Open Access Journals (Sweden)

    Daniel L. Moody

    2003-01-01

    Full Text Available In most professional (applied disciplines, research findings take a long time to filter into practice, if they ever do at all. The result of this is under-utilisation of research results and sub-optimal practices. There are a number of reasons for the lack of knowledge transfer. On the "demand side", people working in professional practice have little time available to keep up with the latest research in their field. In addition, the volume of research published each year means that the average practitioner would not have time to read all the research articles in their area of interest even if they devoted all their time to it. From the "supply side", academic research is primarily focused on the production rather than distribution of knowledge. While they have highly developed mechanisms for transferring knowledge among themselves, there is little investment in the distribution of research results be-yond research communities. The World Wide Web provides a potential solution to this problem, as it provides a global information infrastructure for connecting those who produce knowledge (researchers and those who need to apply this knowledge (practitioners. This paper describes two projects which use the World Wide Web to make research results directly available to support decision making in the workplace. The first is a successful knowledge management project in a health department which provides medical staff with on-line access to the latest medical research at the point of care. The second is a project currently in progress to implement a similar system to support decision making in IS practice. Finally, we draw some general lessons about how to improve transfers of knowledge from research and practice, which could be applied in any discipline.

  10. Web 2.0 collaboration tools to support student research in hydrology - an opinion

    Science.gov (United States)

    Pathirana, A.; Gersonius, B.; Radhakrishnan, M.

    2012-02-01

    A growing body of evidence suggests that it is unwise to make the a priori assumption that university students are ready and eager to embrace modern online technologies employed to enhance the educational experience. We present an opinion on employing Wiki, a popular Web 2.0 technology, in small student groups, based on a case study of using it, customized as a personal learning environment (PLE), to support thesis research in hydrology. Since its inception in 2006, the system presented has proven to facilitate knowledge construction and peer communication within and across groups of students of different academic years, and to stimulate learning. Being an open-ended and egalitarian system, it was a minimal burden to maintain, as all students became content authors and shared responsibility. A number of unintended uses of the system were also observed, such as using it as a backup medium and mobile storage. We attribute the success and sustainability of the proposed Web 2.0-based approach to the fact that the efforts were not limited to the application of the technology, but comprised the creation of a supporting environment with educational activities organized around it. We propose that Wiki-based PLEs are much more suitable than traditional learning management systems for supporting non-classroom education activities like thesis research in hydrology.

  11. Web Mining of Hotel Customer Survey Data

    Directory of Open Access Journals (Sweden)

    Richard S. Segall

    2008-12-01

    This paper provides an extensive literature review and list of references on the background of web mining as applied specifically to hotel customer survey data. The research applies the techniques of web mining to the actual text of written comments from hotel customers using Megaputer PolyAnalyst®. Web mining functionalities utilized include clustering, link analysis, key word and phrase extraction, taxonomy, and dimension matrices. The paper provides screen shots of the web mining applications using Megaputer PolyAnalyst®. Conclusions and future directions of the research are presented.

  12. Web Auctions in Europe

    NARCIS (Netherlands)

    A. Pouloudi; J. Paarlberg; H.W.G.M. van Heck (Eric)

    2001-01-01

    This paper argues that a better understanding of the business model of web auctions can be reached if we adopt a broader view and provide empirical research from different sites. In this paper the business model of web auctions is refined into four dimensions. These are auction model,

  13. Caught in the Web

    International Nuclear Information System (INIS)

    Gillies, James

    1995-01-01

    The World-Wide Web may have taken the Internet by storm, but many people would be surprised to learn that it owes its existence to CERN. Around half the world's particle physicists come to CERN for their experiments, and the Web is the result of their need to share information quickly and easily on a global scale. Six years after Tim Berners-Lee's inspired idea to marry hypertext to the Internet in 1989, CERN is handing over future Web development to the World-Wide Web Consortium, run by the French National Institute for Research in Computer Science and Control, INRIA, and the Laboratory for Computer Science of the Massachusetts Institute of Technology, MIT, leaving itself free to concentrate on physics. The Laboratory marked this transition with a conference designed to give a taste of what the Web can do, whilst firmly stamping it with the label "Made in CERN". Over 200 European journalists and educationalists came to CERN on 8 - 9 March for the World-Wide Web Days, resulting in wide media coverage. The conference was opened by UK Science Minister David Hunt who stressed the importance of fundamental research in generating new ideas. "Who could have guessed 10 years ago", he said, "that particle physics research would lead to a communication system which would allow every school to have the biggest library in the world in a single computer?". In his introduction, the Minister also pointed out that "CERN and other basic research laboratories help to break new technological ground and sow the seeds of what will become mainstream manufacturing in the future." Learning the jargon is often the hardest part of coming to grips with any new invention, so CERN put it at the top of the agenda. Jacques Altaber, who helped introduce the Internet to CERN in the early 1980s, explained that without the Internet, the Web couldn't exist. The Internet began as a US Defense

  14. Web-based communication tools in a European research project: the example of the TRACE project

    Directory of Open Access Journals (Sweden)

    Baeten V.

    2009-01-01

    The multi-disciplinary and international nature of large European projects requires powerful managerial and communicative tools to ensure the transmission of information to the end-users. One such project is TRACE, entitled "Tracing Food Commodities in Europe". One of its objectives is to provide a communication system intended to be the central source of information on food authenticity and traceability in Europe. This paper explores the web tools used and the communication vehicles offered to scientists involved in the TRACE project to communicate internally as well as with the public. Two main tools have been built: an Intranet and a public website. The TRACE website can be accessed at http://www.trace.eu.org. A particular emphasis was placed on the efficiency, relevance and accessibility of the information, the publicity of the website, and the use of the collaborative utilities. The rationale of the web space design, as well as the integration of proprietary software solutions, is presented. Perspectives on the use of web tools in research projects are discussed.

  15. Interactive Web Services with Java

    DEFF Research Database (Denmark)

    Møller, Anders; Schwartzbach, Michael Ignatieff

    This slide collection about Java Web service programming, JSP, Servlets and JWIG is created by: Anders Møller and Michael I. Schwartzbach at the BRICS research center at University of Aarhus, Denmark.

  16. 8th Chinese Conference on The Semantic Web and Web Science

    CERN Document Server

    Du, Jianfeng; Wang, Haofen; Wang, Peng; Ji, Donghong; Pan, Jeff Z; CSWS 2014

    2014-01-01

    This book constitutes the thoroughly refereed papers of the 8th Chinese Conference on The Semantic Web and Web Science, CSWS 2014, held in Wuhan, China, in August 2014. The 22 research papers presented were carefully reviewed and selected from 61 submissions. The papers are organized in topical sections such as ontology reasoning and learning; semantic data generation and management; and semantic technology and applications.

  17. WebQuests as Language-Learning Tools

    Science.gov (United States)

    Aydin, Selami

    2016-01-01

    This study presents a review of the literature that examines WebQuests as tools for second-language acquisition and foreign language-learning processes to guide teachers in their teaching activities and researchers in further research on the issue. The study first introduces the theoretical background behind WebQuest use in the mentioned…

  18. TCS: a web server for multiple sequence alignment evaluation and phylogenetic reconstruction.

    Science.gov (United States)

    Chang, Jia-Ming; Di Tommaso, Paolo; Lefort, Vincent; Gascuel, Olivier; Notredame, Cedric

    2015-07-01

    This article introduces the Transitive Consistency Score (TCS) web server, a service making it possible to estimate the local reliability of protein multiple sequence alignments (MSAs) using the TCS index. The evaluation can be used to identify the aligned positions most likely to contain structurally analogous residues and most likely to support an accurate phylogenetic reconstruction. The TCS scoring scheme has been shown to be an accurate predictor of structural alignment correctness among commonly used methods. It has also been shown to outperform common filtering schemes such as Gblocks or trimAl when used for MSA post-processing prior to phylogenetic tree reconstruction. The web server is available from http://tcoffee.crg.cat/tcs. © The Author(s) 2015. Published by Oxford University Press on behalf of Nucleic Acids Research.

  19. SurveyWiz and factorWiz: JavaScript Web pages that make HTML forms for research on the Internet.

    Science.gov (United States)

    Birnbaum, M H

    2000-05-01

    SurveyWiz and factorWiz are Web pages that act as wizards to create HTML forms that enable one to collect data via the Web. SurveyWiz allows the user to enter survey questions or personality test items with a mixture of text boxes and scales of radio buttons. One can add demographic questions of age, sex, education, and nationality with the push of a button. FactorWiz creates the HTML for within-subjects, two-factor designs as large as 9 x 9, or higher order factorial designs up to 81 cells. The user enters levels of the row and column factors, which can be text, images, or other multimedia. FactorWiz generates the stimulus combinations, randomizes their order, and creates the page. In both programs HTML is displayed in a window, and the user copies it to a text editor to save it. When uploaded to a Web server and supported by a CGI script, the created Web pages allow data to be collected, coded, and saved on the server. These programs are intended to assist researchers and students in quickly creating studies that can be administered via the Web.
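As an illustration of the kind of form generation these wizards automate, here is a minimal sketch, not the authors' actual implementation, that emits an HTML survey form with radio-button scales. The item names, question texts and CGI path below are hypothetical.

```python
def radio_scale(name, question, levels):
    """Return an HTML fieldset with one labelled radio button per scale level."""
    buttons = "\n".join(
        f'<label><input type="radio" name="{name}" value="{v}"> {v}</label>'
        for v in levels
    )
    return f"<fieldset>\n<legend>{question}</legend>\n{buttons}\n</fieldset>"

def survey_form(action_url, items, levels=range(1, 6)):
    """Wrap a list of (name, question) items in a form posting to a server script."""
    body = "\n".join(radio_scale(name, q, levels) for name, q in items)
    return (f'<form method="post" action="{action_url}">\n{body}\n'
            '<input type="submit" value="Submit">\n</form>')

html = survey_form("/cgi-bin/save.cgi",
                   [("q1", "I enjoy working with computers."),
                    ("q2", "I prefer web-based courses.")])
```

As with SurveyWiz, the generated markup only collects data once it is uploaded to a server with a script behind the action URL.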

  20. Comparison of student outcomes and preferences in a traditional vs. World Wide Web-based baccalaureate nursing research course.

    Science.gov (United States)

    Leasure, A R; Davis, L; Thievon, S L

    2000-04-01

    The purpose of this project was to compare student outcomes in an undergraduate research course taught using both World Wide Web-based distance learning technology and traditional pedagogy. Reasons given for enrolling in the traditional classroom section included the perception of increased opportunity for interaction, decreased opportunity to procrastinate, immediate feedback, and more meaningful learning activities. Reasons for selecting the Web section included cost, convenience, and flexibility. Overall, there was no significant difference between the two groups on the three multiple-choice examinations or in the course grades (t = -.96, P = .343). Students who reported that they were self-directed and had the ability to maintain their own pace and avoid procrastination were best suited to Web-based courses. Web-based classes can help provide opportunities for methods of communication that are not typically nurtured in traditional classroom settings. Secondary benefits of the Web-based course were increased student confidence with the computer and exposure to skills and opportunities the students would not have had in the classroom. Additionally, over time and with practice, students' writing skills improved.

  1. The World Wide Web and Technology Transfer at NASA Langley Research Center

    Science.gov (United States)

    Nelson, Michael L.; Bianco, David J.

    1994-01-01

    NASA Langley Research Center (LaRC) began using the World Wide Web (WWW) in the summer of 1993, becoming the first NASA installation to provide a Center-wide home page. This coincided with a reorganization of LaRC to provide a more concentrated focus on technology transfer to both aerospace and non-aerospace industry. Use of the WWW and NCSA Mosaic not only provides automated information dissemination, but also allows for the implementation, evolution and integration of many technology transfer applications. This paper describes several of these innovative applications, including the on-line presentation of the entire Technology Opportunities Showcase (TOPS), an industrial partnering showcase that exists on the Web long after the actual 3-day event ended. During its first year on the Web, LaRC also developed several WWW-based information repositories. The Langley Technical Report Server (LTRS), a technical paper delivery system with integrated searching and retrieval, has proved to be quite popular. The NASA Technical Report Server (NTRS), an outgrowth of LTRS, provides uniform access to many logically similar, yet physically distributed NASA report servers. WWW is also the foundation of the Langley Software Server (LSS), an experimental software distribution system which will distribute LaRC-developed software with the possible phase-out of NASA's COSMIC program. In addition to the more formal technology distribution projects, WWW has been successful in connecting people with technologies and people with other people. With the completion of the LaRC reorganization, the Technology Applications Group, charged with interfacing with non-aerospace companies, opened for business with a popular home page.

  2. Nuclear expert web search and crawler algorithm

    International Nuclear Information System (INIS)

    Reis, Thiago; Barroso, Antonio C.O.; Baptista, Benedito Filho D.

    2013-01-01

    In this paper we present preliminary research on a web search and crawling algorithm applied specifically to nuclear-related web information. We designed a web-based nuclear-oriented expert system guided by a web crawler algorithm and a neural network able to search and retrieve nuclear-related hypertextual web information in an autonomous and massive fashion. Preliminary experimental results show a retrieval precision of 80% for web pages related to any nuclear theme and a retrieval precision of 72% for web pages related only to the nuclear power theme. (author)

  3. Nuclear expert web search and crawler algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Reis, Thiago; Barroso, Antonio C.O.; Baptista, Benedito Filho D., E-mail: thiagoreis@usp.br, E-mail: barroso@ipen.br, E-mail: bdbfilho@ipen.br [Instituto de Pesquisas Energeticas e Nucleares (IPEN/CNEN-SP), Sao Paulo, SP (Brazil)

    2013-07-01

    In this paper we present preliminary research on a web search and crawling algorithm applied specifically to nuclear-related web information. We designed a web-based nuclear-oriented expert system guided by a web crawler algorithm and a neural network able to search and retrieve nuclear-related hypertextual web information in an autonomous and massive fashion. Preliminary experimental results show a retrieval precision of 80% for web pages related to any nuclear theme and a retrieval precision of 72% for web pages related only to the nuclear power theme. (author)
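The focused-crawling idea in the abstract above, expanding only links from pages that a relevance model scores highly, can be sketched as follows. This is a toy illustration under stated assumptions: an in-memory page graph replaces real HTTP fetching, and a simple keyword-overlap score stands in for the authors' neural network.

```python
from collections import deque

# Toy in-memory "web": url -> (page text, outgoing links). A real crawler fetches pages.
PAGES = {
    "a": ("nuclear reactor safety research", ["b", "c"]),
    "b": ("nuclear power plant control systems", ["d"]),
    "c": ("cooking recipes and travel notes", ["d"]),
    "d": ("neutron flux measurement in nuclear cores", []),
}

KEYWORDS = {"nuclear", "reactor", "neutron"}

def relevance(text):
    """Fraction of query keywords present in the page text (stand-in for a learned scorer)."""
    return len(KEYWORDS & set(text.split())) / len(KEYWORDS)

def focused_crawl(seed, threshold=0.3):
    """Breadth-first crawl that records and expands only pages scoring above the threshold."""
    seen, hits = {seed}, []
    queue = deque([seed])
    while queue:
        url = queue.popleft()
        text, links = PAGES[url]
        if relevance(text) >= threshold:
            hits.append(url)
            for nxt in links:         # expand links only from relevant pages
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append(nxt)
    return hits
```

Here `focused_crawl("a")` returns `["a", "b", "d"]`: the off-topic page "c" is visited but neither recorded nor expanded.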

  4. Research and performance evaluation on an HA integrated acid system for sandstone acidizing

    Directory of Open Access Journals (Sweden)

    Liqiang Zhao

    2018-03-01

    When conventional sandstone acidizing technologies are adopted, many slugs are needed for the injection of prepad fluid, treatment fluid and postpad fluid, and consequently production and operation suffer inconveniences and difficulties. In view of this, an HA integrated acid system, mainly composed of organic polybasic acids (HA + HCl + HF) and an efficient organic solvent, was developed in this paper based on the idea of an integrated acid replacing "multiple steps" with high efficiency and intensification. With this HA integrated acid system, complicated blockages in sandstone reservoirs can be removed effectively. Experiments were then carried out on the system to evaluate its performance in terms of retardance, organic blockage dissolution, chelating and precipitation inhibition. The results indicate that the new system not only achieves the acidizing of a conventional integrated acid, but also presents good retarding performance by controlling the multi-stage ionization of H+ step by step and by forming a silicic acid-aluminum phosphonate film on the surface of clay minerals; that organic blockage can be removed efficiently with the system; and that it has a wider working pH range than conventional APCs (aminopolycarboxylate chelants), a stronger chelating capacity for Ca2+, Mg2+ and Fe3+ than conventional chelants (e.g. EDTA, NTA and DTPA), and better precipitation inhibition of metal fluorides, alkali metal fluosilicates, alkali metal fluoaluminates and hydroxides than multi-hydrogen acid, fluoboric acid and mud acid systems. These research results provide technical support for blockage removal in high-temperature deep oil and gas reservoirs. Keywords: Organic polybasic acid, Integrated acid, Retardance, Chelating, Precipitation, Acidizing, Sandstone, Reservoir

  5. Citizen Science and the Modern Web

    CERN Multimedia

    CERN. Geneva

    2014-01-01

    Beginning as a research project to help scientists communicate, the Web has transformed into a ubiquitous medium. As the sciences continue to transform, new techniques are needed to analyze the vast amounts of data being produced by large experiments. The advent of the Sloan Digital Sky Survey increased the throughput of astronomical data, giving rise to citizen science projects such as Galaxy Zoo. The Web is no longer used exclusively by researchers but is a place where anyone can share information or even partake in citizen science projects. As the Web continues to evolve, new and open technologies enable web applications to become more sophisticated. Scientific toolsets may now target the Web as a platform, opening an application to a wider audience, and potentially to citizen scientists. With the latest browser technologies, scientific data may be consumed and visualized, establishing the browser as a new platform for scientific analysis.

  6. SPARQLGraph: a web-based platform for graphically querying biological Semantic Web databases.

    Science.gov (United States)

    Schweiger, Dominik; Trajanoski, Zlatko; Pabinger, Stephan

    2014-08-15

    Semantic Web has established itself as a framework for using and sharing data across applications and database boundaries. Here, we present a web-based platform for querying biological Semantic Web databases in a graphical way. SPARQLGraph offers an intuitive drag & drop query builder, which converts the visual graph into a query and executes it on a public endpoint. The tool integrates several publicly available Semantic Web databases, including the databases of the just recently released EBI RDF platform. Furthermore, it provides several predefined template queries for answering biological questions. Users can easily create and save new query graphs, which can also be shared with other researchers. This new graphical way of creating queries for biological Semantic Web databases considerably facilitates usability as it removes the requirement of knowing specific query languages and database structures. The system is freely available at http://sparqlgraph.i-med.ac.at.
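The core of such a graphical builder, turning a drawn graph of triple patterns into SPARQL text, can be sketched in a few lines. This is an illustrative reconstruction, not SPARQLGraph's code; the prefixes and biological terms below are hypothetical.

```python
def graph_to_sparql(edges, select_vars, limit=None):
    """Serialize a visual query graph, given as (subject, predicate, object)
    triples where ?-prefixed terms are variables, into a SPARQL SELECT query."""
    patterns = " .\n  ".join(f"{s} {p} {o}" for s, p, o in edges)
    query = f"SELECT {' '.join(select_vars)}\nWHERE {{\n  {patterns} .\n}}"
    if limit is not None:
        query += f"\nLIMIT {limit}"
    return query

# A two-edge query graph: find pathways a gene labelled "BRCA1" participates in.
edges = [
    ("?gene", "rdfs:label", '"BRCA1"'),
    ("?gene", "biopax:participantOf", "?pathway"),
]
q = graph_to_sparql(edges, ["?pathway"], limit=10)
```

The resulting string would then be posted to a public SPARQL endpoint, which is the execution step the SPARQLGraph platform performs for the user.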

  7. Using Technology to Evaluate a Web-Based Clinical Social Work Research Course

    Directory of Open Access Journals (Sweden)

    Zvi Gellis

    2004-05-01

    This article reports on a clinical research methods course taught online to a total of 90 off-campus MSW students in the fall of 1999, 2000, and 2001. The course was taught in a mid-size public university in a CSWE-accredited School of Social Work. The purpose of the course was to teach single subject design research skills for the evaluation of clinical social work practice. The student experience of the online course was assessed using qualitative interviews that provide a deeper, textured understanding of the various facets of online instruction from the learner's perspective. Important dimensions for social work instruction in online courseware were delineated. A collaborative learning and teaching framework is presented for those social work educators interested in implementing web-based courses.

  8. Using EMBL-EBI Services via Web Interface and Programmatically via Web Services.

    Science.gov (United States)

    Lopez, Rodrigo; Cowley, Andrew; Li, Weizhong; McWilliam, Hamish

    2014-12-12

    The European Bioinformatics Institute (EMBL-EBI) provides access to a wide range of databases and analysis tools that are of key importance in bioinformatics. As well as providing Web interfaces to these resources, Web Services are available using SOAP and REST protocols that enable programmatic access to our resources and allow their integration into other applications and analytical workflows. This unit describes the various options available to a typical researcher or bioinformatician who wishes to use our resources via Web interface or programmatically via a range of programming languages. Copyright © 2014 John Wiley & Sons, Inc.

  9. Web Page Recommendation Using Web Mining

    OpenAIRE

    Modraj Bhavsar; Mrs. P. M. Chavan

    2014-01-01

    On the World Wide Web, huge amounts of content of various kinds are generated, so web recommendation has become an important part of web applications for giving relevant results to users. Different kinds of web recommendations are made available to users every day, including images, video, audio, query suggestions and web pages. In this paper we aim at providing a framework for web page recommendation. 1) First we describe the basics of web mining and the types of web mining. 2) Details of each...

  10. Bringing the Web to America

    CERN Multimedia

    Kunz, P F

    1999-01-01

    On 12 December 1991, Dr. Kunz installed the first Web server outside of Europe at the Stanford Linear Accelerator Center. Today, if you do not have access to the Web you are considered disadvantaged. Before it made sense for Tim Berners-Lee to invent the Web at CERN, a number of ingredients had to be in place. Dr. Kunz will present a history of how these ingredients developed and the role the academic research community played in forming them. In particular, the role that big science, such as high energy physics, played in giving us the Web we have today...

  11. Building Grid applications using Web Services

    CERN Multimedia

    CERN. Geneva

    2004-01-01

    There has been a lot of discussion within the Grid community about the use of Web Services technologies in building large-scale, loosely-coupled, cross-organisation applications. In this talk we are going to explore the principles that govern Service-Oriented Architectures and the promise of Web Services technologies for integrating applications that span administrative domains. We are going to see how existing Web Services specifications and practices could provide the necessary infrastructure for implementing Grid applications. Biography Dr. Savas Parastatidis is a Principal Research Associate at the School of Computing Science, University of Newcastle upon Tyne, UK. Savas is one of the authors of the "Grid Application Framework based on Web Services Specifications and Practices" document that was influential in the convergence between Grid and Web Services and the move away from OGSI (more information can be found at http://www.neresc.ac.uk/ws-gaf). He has done research on runtime support for distributed-m...

  12. Trust Networks on the Semantic Web

    National Research Council Canada - National Science Library

    Golbeck, Jennifer; Parisa, Bijan; Hendler, James

    2006-01-01

    The so-called "Web of Trust" is one of the ultimate goals of the Semantic Web. Research on the topic of trust in this domain has focused largely on digital signatures, certificates, and authentication...

  13. The EMBRACE web service collection.

    NARCIS (Netherlands)

    Pettifer, S.; Ison, J.; Kalas, M.; Thorne, D.; McDermott, P.; Jonassen, I.; Liaquat, A.; Fernandez, J.M.; Rodriguez, J.M.; Pisano, D.G.; Blanchet, C; Uludag, M.; Rice, P.; Bartaseviciute, E.; Rapacki, K.; Hekkelman, M.L.; Sand, O.; Stockinger, H.; Clegg, A.B.; Bongcam-Rudloff, E.; Salzemann, J.; Breton, V.; Attwood, T.K.; Cameron, G.; Vriend, G.

    2010-01-01

    The EMBRACE (European Model for Bioinformatics Research and Community Education) web service collection is the culmination of a 5-year project that set out to investigate issues involved in developing and deploying web services for use in the life sciences. The project concluded that in order for

  14. Implementing an International Consultation on Earth System Research Priorities Using Web 2.0 Tools

    Science.gov (United States)

    Goldfarb, L.; Yang, A.

    2009-12-01

    Leah Goldfarb, Paul Cutler, Andrew Yang*, Mustapha Mokrane, Jacinta Legg and Deliang Chen. The scientific community has been engaged in developing an international strategy on Earth system research. The initial consultation in this "visioning" process focused on gathering suggestions for Earth system research priorities that are interdisciplinary and address the most pressing societal issues. This was implemented through a website that utilized Web 2.0 capabilities. The website (http://www.icsu-visioning.org/) collected input from 15 July to 1 September 2009. This consultation was the first in which the international scientific community was asked to help shape the future of a research theme. The site attracted over 7000 visitors from 133 countries, more than 1000 of whom registered and took advantage of the site's functionality to contribute research questions (~300 questions), comment on posts, and/or vote on questions. To facilitate analysis of the results, the site captured a small set of voluntary information about each contributor and their contribution. A group of ~50 international experts was invited to analyze the inputs at a "Visioning Earth System Research" meeting held in September 2009. The outcome of this meeting, a prioritized list of research questions to be investigated over the next decade, was then posted on the visioning website for additional comment from the community through an online survey tool. Many lessons were learned in the development and implementation of this website, both in terms of the opportunities offered by Web 2.0 capabilities and the application of these capabilities. It is hoped that this process may serve as a model for other scientific communities. The International Council for Science (ICSU), in cooperation with the International Social Science Council (ISSC), is responsible for organizing this Earth system visioning process.

  15. Experimental economics for web mining

    OpenAIRE

    Tagiew, Rustam; Ignatov, Dmitry I.; Amroush, Fadi

    2014-01-01

    This paper offers a step towards a research infrastructure that makes data from experimental economics efficiently usable for the analysis of web data. We believe that regularities of human behavior found in experimental data also emerge in real-world web data. A format for data from experiments is suggested, which enables its publication as open data. Once standardized datasets of experiments are available online, web mining can take advantage of this data. Further, the questions about the o...

  16. Teaching Web 2.0 technologies using Web 2.0 technologies.

    Science.gov (United States)

    Rethlefsen, Melissa L; Piorun, Mary; Prince, J Dale

    2009-10-01

    The research evaluated participant satisfaction with the content and format of the "Web 2.0 101: Introduction to Second Generation Web Tools" course and measured the impact of the course on participants' self-evaluated knowledge of Web 2.0 tools. The "Web 2.0 101" online course was based loosely on the Learning 2.0 model. Content was provided through a course blog and covered a wide range of Web 2.0 tools. All Medical Library Association members were invited to participate. Participants were asked to complete a post-course survey. Respondents who completed the entire course or who completed part of the course self-evaluated their knowledge of nine social software tools and concepts prior to and after the course using a Likert scale. Additional qualitative information about course strengths and weaknesses was also gathered. Respondents' self-ratings showed a significant change in perceived knowledge for each tool, using a matched pair Wilcoxon signed rank analysis (P<0.0001 for each tool/concept). Overall satisfaction with the course appeared high. Hands-on exercises were the most frequently identified strength of the course; the length and time-consuming nature of the course were considered weaknesses by some. Learning 2.0-style courses, though demanding time and self-motivation from participants, can increase knowledge of Web 2.0 tools.
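For reference, the matched-pair Wilcoxon signed-rank statistic used in this evaluation can be computed with nothing beyond the standard library (the study itself presumably used a statistics package). Following the usual definition, zero differences are dropped and tied absolute differences receive average ranks; the example data are invented.

```python
def wilcoxon_w(before, after):
    """Matched-pair Wilcoxon signed-rank statistic W:
    rank the absolute pre/post differences, then take the smaller signed rank sum."""
    diffs = [b - a for a, b in zip(before, after) if b != a]  # drop zero differences
    order = sorted(range(len(diffs)), key=lambda i: abs(diffs[i]))
    ranks = [0.0] * len(diffs)
    i = 0
    while i < len(order):                       # assign average ranks to ties
        j = i
        while j + 1 < len(order) and abs(diffs[order[j + 1]]) == abs(diffs[order[i]]):
            j += 1
        avg_rank = (i + j) / 2 + 1
        for k in range(i, j + 1):
            ranks[order[k]] = avg_rank
        i = j + 1
    w_plus = sum(r for d, r in zip(diffs, ranks) if d > 0)
    w_minus = sum(r for d, r in zip(diffs, ranks) if d < 0)
    return min(w_plus, w_minus)

# Invented Likert self-ratings (1-5) for one tool, before and after the course:
w = wilcoxon_w([3, 2, 4, 1, 3], [4, 1, 5, 3, 3])
```

A small W relative to the number of non-zero pairs indicates a consistent shift in one direction, which is what the reported P < 0.0001 values reflect.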

  17. Semantic Web: Metadata, Linked Data, Open Data

    Directory of Open Access Journals (Sweden)

    Vanessa Russo

    2015-12-01

    What is the Semantic Web, and what is it for? The inventor of the Web, Tim Berners-Lee, describes it as a research methodology able to exploit the network to its maximum capacity. This metadata system represents the innovative element in the transition from Web 2.0 to Web 3.0. In this context we will try to understand the theoretical and informatic requirements of the Semantic Web. Finally, we will explain Linked Data applications for developing new tools for active citizenship.

  18. Personalization of Rule-based Web Services.

    Science.gov (United States)

    Choi, Okkyung; Han, Sang Yong

    2008-04-04

    Nowadays Web users have clearly expressed their wish to receive personalized services directly. Personalization is the way to tailor services directly to the immediate requirements of the user. However, the current Web Services System does not provide any features supporting this, such as personalization of services or intelligent matchmaking. In this research, a flexible, personalized rule-based Web Services System is proposed to address these problems and to enable efficient search, discovery and construction across general Web documents and Semantic Web documents. The system performs matchmaking among service requesters', service providers' and users' preferences using a rule-based search method, and subsequently ranks the search results. A prototype of efficient Web Services search and construction for the suggested system is developed based on the current work.
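The matchmaking-and-ranking step described in this abstract can be illustrated with a toy weighted-rule scorer. The service attributes, rule weights and names below are invented for illustration and are not taken from the proposed system.

```python
# Toy service registry: each service is a flat attribute dictionary.
SERVICES = [
    {"name": "WeatherSvc", "category": "weather", "cost": "free", "format": "xml"},
    {"name": "GeoSvc",     "category": "maps",    "cost": "paid", "format": "json"},
    {"name": "ClimateSvc", "category": "weather", "cost": "paid", "format": "json"},
]

# User preference rules: (attribute, desired value, weight). Higher weight = more important.
PREFERENCES = [("category", "weather", 3), ("cost", "free", 2), ("format", "json", 1)]

def score(service, rules):
    """Sum the weights of all preference rules the service satisfies."""
    return sum(w for attr, val, w in rules if service.get(attr) == val)

def rank(services, rules):
    """Return services ordered by how well they match the weighted preferences."""
    return sorted(services, key=lambda s: score(s, rules), reverse=True)

ranking = [s["name"] for s in rank(SERVICES, PREFERENCES)]
```

With these rules, `ranking` is `["WeatherSvc", "ClimateSvc", "GeoSvc"]`: the free weather service wins (score 5), the paid weather service follows (score 4), and the maps service trails (score 1).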

  19. Current concepts in clinical research: web-based, automated, arthroscopic surgery prospective database registry.

    Science.gov (United States)

    Lubowitz, James H; Smith, Patrick A

    2012-03-01

    In 2011, postsurgical patient outcome data may be compiled in a research registry, allowing comparative-effectiveness research and cost-effectiveness analysis by use of Health Insurance Portability and Accountability Act-compliant, institutional review board-approved, Food and Drug Administration-approved, remote, Web-based data collection systems. Computerized automation minimizes cost and minimizes surgeon time demand. A research registry can be a powerful tool to observe and understand variations in treatment and outcomes, to examine factors that influence prognosis and quality of life, to describe care patterns, to assess effectiveness, to monitor safety, and to change provider practice through feedback of data. Registry of validated, prospective outcome data is required for arthroscopic and related researchers and the public to advocate with governments and health payers. The goal is to develop evidence-based data to determine the best methods for treating patients. Copyright © 2012 Arthroscopy Association of North America. Published by Elsevier Inc. All rights reserved.

  20. Classroom Research: GC Studies of Linoleic and Linolenic Fatty Acids Found in French Fries

    Science.gov (United States)

    Crowley, Janice P.; Deboise, Kristen L.; Marshall, Megan R.; Shaffer, Hannah M.; Zafar, Sara; Jones, Kevin A.; Palko, Nick R.; Mitsch, Stephen M.; Sutton, Lindsay A.; Chang, Margaret; Fromer, Ilana; Kraft, Jake; Meister, Jessica; Shah, Amar; Tan, Priscilla; Whitchurch, James

    2002-07-01

    A study of fatty-acid ratios in French fries has proved to be an excellent choice for an entry-level research class. This research develops reasoning skills and involves the subject of breast cancer, a major concern of American society. Analysis of tumor samples removed from women with breast cancer revealed high ratios of linoleic to linolenic acid, suggesting a link between the accelerated growth of breast tumors and the combination of these two fatty acids. When the ratio of linoleic to linolenic acid was approximately 9 to 1, accelerated growth was observed. Since these fatty acids are found in cooking oils, Wichita Collegiate students, under the guidance of their chemistry teacher, decided that an investigation of the ratios of these two fatty acids should be conducted. A research class was structured using a gas chromatograph for the analysis. Separation of linoleic from linolenic acid was successfully accomplished. The students experienced inductive experimental research chemistry as it applies to everyday life. The structure of this research class can serve as a model for high school and undergraduate college research curricula.

  1. Interacting Science through Web Quests

    Science.gov (United States)

    Unal, Ahmet; Karakus, Melek Altiparmak

    2016-01-01

    The purpose of this paper is to examine the effects of WebQuests on elementary students' science achievement, attitude towards science and attitude towards web supported education in teaching 7th grade subjects (Ecosystems, Solar System). With regard to this research, "Science Achievement Test," "Attitude towards Science Scale"…

  2. Citations and the h index of soil researchers and journals in the Web of Science, Scopus, and Google Scholar.

    Science.gov (United States)

    Minasny, Budiman; Hartemink, Alfred E; McBratney, Alex; Jang, Ho-Jun

    2013-01-01

    Citation metrics and h indices differ between bibliometric databases. We compiled the number of publications, number of citations, h index and years since first publication for 340 soil researchers from all over the world. On average, Google Scholar has the highest h index, number of publications and citations per researcher, and the Web of Science the lowest. The number of papers in Google Scholar is on average 2.3 times higher, and the number of citations 1.9 times higher, compared to the data in the Web of Science. Scopus metrics are slightly higher than those of the Web of Science. The h index in Google Scholar is on average 1.4 times larger than in the Web of Science, and the h index in Scopus is on average 1.1 times larger than in the Web of Science. Over time, the metrics increase in all three databases, but fastest in Google Scholar. The h index of an individual soil scientist is about 0.7 times the number of years since his/her first publication. There is a large difference between the number of citations, number of publications and the h index across the three databases. From this analysis it can be concluded that the choice of database affects widely used citation and evaluation metrics, but that bibliometric transfer functions exist to relate the metrics from these three databases. We also investigated the relationship between a journal's impact factor and Google Scholar's h5-index. The h5-index is a better measure of a journal's citation impact than the 2- or 5-year window impact factor.
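
    The h index compared throughout this study can be computed directly from a researcher's list of citation counts, and the reported average ratios act as rough transfer functions between databases. A minimal sketch (the citation counts are hypothetical, and the 1.4 factor is the Google Scholar/Web of Science average reported in the abstract):

```python
def h_index(citations):
    """Largest h such that h papers each have at least h citations."""
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# Hypothetical citation counts for one researcher's papers
wos_citations = [45, 30, 22, 15, 12, 9, 7, 4, 2, 1]
h_wos = h_index(wos_citations)          # h index from Web of Science counts
h_gs_estimate = round(1.4 * h_wos)      # rough Google Scholar estimate via the 1.4 factor
print(h_wos, h_gs_estimate)             # 7 10
```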

  3. Citations and the h index of soil researchers and journals in the Web of Science, Scopus, and Google Scholar

    Directory of Open Access Journals (Sweden)

    Budiman Minasny

    2013-10-01

    Citation metrics and h indices differ between bibliometric databases. We compiled the number of publications, number of citations, h index and years since first publication for 340 soil researchers from all over the world. On average, Google Scholar has the highest h index, number of publications and citations per researcher, and the Web of Science the lowest. The number of papers in Google Scholar is on average 2.3 times higher, and the number of citations 1.9 times higher, compared to the data in the Web of Science. Scopus metrics are slightly higher than those of the Web of Science. The h index in Google Scholar is on average 1.4 times larger than in the Web of Science, and the h index in Scopus is on average 1.1 times larger than in the Web of Science. Over time, the metrics increase in all three databases, but fastest in Google Scholar. The h index of an individual soil scientist is about 0.7 times the number of years since his/her first publication. There is a large difference between the number of citations, number of publications and the h index across the three databases. From this analysis it can be concluded that the choice of database affects widely used citation and evaluation metrics, but that bibliometric transfer functions exist to relate the metrics from these three databases. We also investigated the relationship between a journal's impact factor and Google Scholar's h5-index. The h5-index is a better measure of a journal's citation impact than the 2- or 5-year window impact factor.

  4. iSERVO: Implementing the International Solid Earth Research Virtual Observatory by Integrating Computational Grid and Geographical Information Web Services

    Science.gov (United States)

    Aktas, Mehmet; Aydin, Galip; Donnellan, Andrea; Fox, Geoffrey; Granat, Robert; Grant, Lisa; Lyzenga, Greg; McLeod, Dennis; Pallickara, Shrideep; Parker, Jay; Pierce, Marlon; Rundle, John; Sayar, Ahmet; Tullis, Terry

    2006-12-01

    We describe the goals and initial implementation of the International Solid Earth Virtual Observatory (iSERVO). This system is built using a Web Services approach to Grid computing infrastructure and is accessed via a component-based Web portal user interface. We describe our implementations of services used by this system, including Geographical Information System (GIS)-based data grid services for accessing remote data repositories and job management services for controlling multiple execution steps. iSERVO is an example of a larger trend to build globally scalable scientific computing infrastructures using the Service Oriented Architecture approach. Adoption of this approach raises a number of research challenges in millisecond-latency message systems suitable for internet-enabled scientific applications. We review our research in these areas.

  5. Data management on the spatial web

    DEFF Research Database (Denmark)

    Jensen, Christian S.

    2012-01-01

    Due in part to the increasingly mobile use of the web and the proliferation of geo-positioning, the web is fast acquiring a significant spatial aspect. Content and users are being augmented with locations that are used increasingly by location-based services. Studies suggest that each week, several billion web queries are issued that have local intent and target spatial web objects. These are points of interest with a web presence, and they thus have locations as well as textual descriptions. This development has given prominence to spatial web data management, an area ripe with new and exciting opportunities and challenges. The research community has embarked on inventing and supporting new query functionality for the spatial web. Different kinds of spatial web queries return objects that are near a location argument and are relevant to a text argument. To support such queries, it is important...

  6. Using Web 2.0 for health promotion and social marketing efforts: lessons learned from Web 2.0 experts.

    Science.gov (United States)

    Dooley, Jennifer Allyson; Jones, Sandra C; Iverson, Don

    2014-01-01

    Web 2.0 experts working in social marketing participated in qualitative in-depth interviews. The research aimed to document the current state of Web 2.0 practice. Perceived strengths (such as the viral nature of Web 2.0) and weaknesses (such as the time-consuming effort it took to learn new Web 2.0 platforms) existed when using Web 2.0 platforms for campaigns. Lessons learned were identified, namely suggestions for engaging in specific types of content creation strategies (such as plain language and transparent communication practices). The findings offer originality and value to practitioners working in social marketing who want to use Web 2.0 effectively.

  7. ResearchEHR: use of semantic web technologies and archetypes for the description of EHRs.

    Science.gov (United States)

    Robles, Montserrat; Fernández-Breis, Jesualdo Tomás; Maldonado, Jose A; Moner, David; Martínez-Costa, Catalina; Bosca, Diego; Menárguez-Tortosa, Marcos

    2010-01-01

    In this paper, we present the ResearchEHR project. It focuses on the usability of Electronic Health Record (EHR) sources and EHR standards for building advanced clinical systems. The aim is to support healthcare professionals, institutions and authorities by providing a set of generic methods and tools for the capture, standardization, integration, description and dissemination of health-related information. ResearchEHR combines several tools to manage EHRs at two different levels: the internal level, which deals with the normalization and semantic upgrading of existing EHRs by using archetypes, and the external level, which uses Semantic Web technologies to specify clinical archetypes for advanced EHR architectures and systems.

  8. Gender and web design software

    Directory of Open Access Journals (Sweden)

    Gabor Horvath

    2007-12-01

    There are several studies dealing with the differences between sites created by men and women. However, these references are mainly related to the "output", the final website. In our research we examined the input side of web design. We thoroughly analysed a number of randomly selected web design software packages to see whether, and to what extent, the templates they offer determine the final look of an individual's website. We found that most of them are typically masculine templates, which makes it difficult for women to design a feminine-looking website. This may be one of the reasons for the masculine hegemony of websites on the web.

  9. Elements of a Spatial Web

    DEFF Research Database (Denmark)

    Jensen, Christian S.

    2010-01-01

    Driven by factors such as the increasingly mobile use of the web and the proliferation of geo-positioning technologies, the web is rapidly acquiring a spatial aspect. Specifically, content and users are being geo-tagged, and services are being developed that exploit these tags. The research community is hard at work inventing means of efficiently supporting new spatial query functionality. Points of interest with a web presence, called spatial web objects, have a location as well as a textual description. Spatio-textual queries return such objects that are near a location argument and are relevant to a text argument. An important element in enabling such queries is to be able to rank spatial web objects. Another is to be able to determine the relevance of an object to a query. Yet another is to enable the efficient processing of such queries. The talk covers recent results on spatial web...
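
    A spatio-textual query of the kind described returns objects that are near a location argument and relevant to a text argument. One way to sketch such ranking is a weighted combination of spatial proximity and term overlap; the scoring formula, the weight alpha, and the toy data below are illustrative assumptions, not the methods covered in the talk:

```python
import math

def score(obj, q_loc, q_terms, alpha=0.5):
    """Rank score mixing spatial proximity and text relevance.
    alpha weights proximity vs. term overlap (illustrative choice)."""
    dist = math.dist(obj["loc"], q_loc)              # Euclidean distance to the query location
    proximity = 1.0 / (1.0 + dist)                   # approaches 1 as the object gets closer
    terms = set(obj["text"].lower().split())
    relevance = len(q_terms & terms) / len(q_terms)  # fraction of query terms matched
    return alpha * proximity + (1 - alpha) * relevance

# Toy spatial web objects: a location plus a textual description
objects = [
    {"name": "Cafe A", "loc": (0.0, 0.1), "text": "espresso coffee bar"},
    {"name": "Cafe B", "loc": (5.0, 5.0), "text": "coffee and cake"},
]
query_loc, query_terms = (0.0, 0.0), {"coffee"}
ranked = sorted(objects, key=lambda o: score(o, query_loc, query_terms), reverse=True)
print([o["name"] for o in ranked])  # ['Cafe A', 'Cafe B']
```

    Both objects match the text argument equally here, so the nearer one ranks first; efficient processing at web scale requires dedicated spatio-textual index structures rather than this exhaustive scoring.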

  10. Exploring the academic invisible web

    OpenAIRE

    Lewandowski, Dirk

    2006-01-01

    The Invisible Web is often discussed in the academic context, where its contents (mainly in the form of databases) are of great importance. But this discussion is mainly based on some seminal research done by Sherman and Price (2001) and Bergman (2001), respectively. We focus on the types of Invisible Web content relevant for academics and the improvements made by search engines to deal with these content types. In addition, we question the volume of the Invisible Web as stated by Bergman. Ou...

  11. SEMANTIC WEB SERVICES – DISCOVERY, SELECTION AND COMPOSITION TECHNIQUES

    OpenAIRE

    Sowmya Kamath S; Ananthanarayana V.S

    2013-01-01

    Web services are already one of the most important resources on the Internet. As an integrated solution for realizing the vision of the Next Generation Web, semantic web services combine semantic web technology with web service technology, envisioning automated life-cycle management of web services. This paper discusses the significance of service discovery and selection to business logic, and surveys current research on the various phases of the semantic web...

  12. Comparing Two Survey Research Approaches: E-Mail and Web-Based Technology versus Traditional Mail.

    Science.gov (United States)

    Howes, Colleen M.; Mailloux, Mark R.

    2001-01-01

    Contrasted two survey methodologies: e-mail-Web and traditional mail. Found that: (1) e-mail-Web respondents were proportionately more likely to be male and enrolled in school full-time; (2) more individual question non-response was present for the e-mail-Web sample; and (3) e-mail-Web respondents value different aspects of graduate school. (EV)

  13. Determination of PCDDs in spider webs: preliminary studies

    Science.gov (United States)

    Rybak, Justyna; Rutkowski, Radosław

    2018-01-01

    The application of spider webs for the determination of polychlorinated dibenzo-para-dioxins (PCDDs) has been studied for the first time. The aim of the studies was to find out whether spider webs are suitable for such examinations, as previous research has proved them to be excellent indicators of air pollutants. Spiders are ubiquitous, so the collection of samples is easy and non-invasive. Studies were conducted within the city of Wrocław and its surroundings, one of the largest and, at the same time, most heavily polluted cities in Poland. Five research sites were chosen, where spider webs were collected after 60 days of continuous exposure. Webs belonging to two species, Tegenaria sylvestris and Tegenaria ferruginea (family Agelenidae), were chosen because they are large and very dense, and thus very suitable for such examinations. The webs were found to retain dioxins, probably mainly through external exposure. These promising results should be continued and expanded in future research.

  14. The EMBRACE web service collection

    DEFF Research Database (Denmark)

    Pettifer, S.; Ison, J.; Kalas, M.

    2010-01-01

    The EMBRACE (European Model for Bioinformatics Research and Community Education) web service collection is the culmination of a 5-year project that set out to investigate issues involved in developing and deploying web services for use in the life sciences. The project concluded that in order for web services to achieve widespread adoption, standards must be defined for the choice of web service technology, for semantically annotating both service function and the data exchanged, and a mechanism for discovering services must be provided. Building on this, the project developed: EDAM, an ontology for describing life science web services; BioXSD, a schema for exchanging data between services; and a centralized registry (http://www.embraceregistry.net) that collects together around 1000 services developed by the consortium partners. This article presents the current status of the collection...

  15. Finding Web-Based Anxiety Interventions on the World Wide Web: A Scoping Review.

    Science.gov (United States)

    Ashford, Miriam Thiel; Olander, Ellinor K; Ayers, Susan

    2016-06-01

    One relatively new and increasingly popular approach to increasing access to treatment is Web-based intervention programs. The advantage of Web-based approaches is the accessibility, affordability, and anonymity of potentially evidence-based treatment. Despite much research evidence on the effectiveness of Web-based interventions for anxiety found in the literature, little is known about what is publicly available for potential consumers on the Web. Our aim was to explore what a consumer searching the Web for Web-based intervention options for anxiety-related issues might find. The objectives were to identify currently publicly available Web-based intervention programs for anxiety and to synthesize and review these in terms of (1) website characteristics such as credibility and accessibility; (2) intervention program characteristics such as intervention focus, design, and presentation modes; (3) therapeutic elements employed; and (4) published evidence of efficacy. Web keyword searches were carried out on three major search engines (Google, Bing, and Yahoo-UK platforms). For each search, the first 25 hyperlinks were screened for eligible programs. Included were programs that were designed for anxiety symptoms, currently publicly accessible on the Web, had an online component, a structured treatment plan, and were available in English. Data were extracted for website characteristics, program characteristics, therapeutic characteristics, as well as empirical evidence. Programs were also evaluated using a 16-point rating tool. The search resulted in 34 programs that were eligible for review. A wide variety of programs for anxiety, including specific anxiety disorders, and anxiety in combination with stress, depression, or anger were identified and based predominantly on cognitive behavioral therapy techniques. The majority of websites were rated as credible, secure, and free of advertisement. The majority required users to register and/or to pay a program access...

  16. Integrating Thematic Web Portal Capabilities into the NASA Earthdata Web Infrastructure

    Science.gov (United States)

    Wong, Minnie; Baynes, Kathleen E.; Huang, Thomas; McLaughlin, Brett

    2015-01-01

    This poster will present the process of integrating thematic web portal capabilities into the NASA Earthdata web infrastructure, with examples from the Sea Level Change Portal. The Sea Level Change Portal will be a source of current NASA research, data and information regarding sea level change. The portal will provide sea level change information through articles, graphics, videos and animations; an interactive tool to view and access sea level change data; and a dashboard showing sea level change indicators.

  17. Trust estimation of the semantic web using semantic web clustering

    Science.gov (United States)

    Shirgahi, Hossein; Mohsenzadeh, Mehran; Haj Seyyed Javadi, Hamid

    2017-05-01

    The development of the semantic web and social networks is undeniable in today's Internet world. The widespread nature of the semantic web makes it very challenging to assess trust in this field. In recent years, extensive research has been done to estimate the trust of the semantic web. Since trust of the semantic web is a multidimensional problem, in this paper we used the parameters of social network authority, the authority value of page links and semantic authority to assess trust. Due to the large space of the semantic network, we restricted the problem scope to clusters of semantic subnetworks, obtained the trust of each cluster's elements locally, and calculated the trust of outside resources according to their local trusts and the trust of clusters in each other. According to the experimental results, the proposed method achieves an F-score of more than 79%, which is about 11.9% higher on average than the Eigen, Tidal and centralised trust methods. The mean error of the proposed method is 12.936, which is on average 9.75% lower than the Eigen and Tidal trust methods.
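
    The F-score reported above is the harmonic mean of precision and recall; a minimal sketch (the true/false positive and false negative counts below are illustrative, chosen only to reproduce a 0.79 F-score, and are not data from the study):

```python
def f_score(tp, fp, fn):
    """F1 score: harmonic mean of precision and recall."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Illustrative counts; algebraically F1 = 2*tp / (2*tp + fp + fn) = 158/200
print(round(f_score(tp=79, fp=11, fn=31), 2))  # 0.79
```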

  18. EnviroAtlas National Layers Master Web Service

    Data.gov (United States)

    U.S. Environmental Protection Agency — This EnviroAtlas web service supports research and online mapping activities related to EnviroAtlas (https://www.epa.gov/enviroatlas). This web service includes...

  19. Quality of reporting web-based and non-web-based survey studies: What authors, reviewers and consumers should consider.

    Science.gov (United States)

    Turk, Tarek; Elhady, Mohamed Tamer; Rashed, Sherwet; Abdelkhalek, Mariam; Nasef, Somia Ahmed; Khallaf, Ashraf Mohamed; Mohammed, Abdelrahman Tarek; Attia, Andrew Wassef; Adhikari, Purushottam; Amin, Mohamed Alsabbahi; Hirayama, Kenji; Huy, Nguyen Tien

    2018-01-01

    Several influential aspects of survey research have been under-investigated, and there is a lack of guidance on reporting survey studies, especially web-based projects. In this review, we aim to investigate the reporting practices and quality of both web- and non-web-based survey studies in order to enhance the quality of reporting of medical evidence derived from survey studies and to maximize the efficiency of its consumption. Reporting practices and quality of 100 random web-based and 100 random non-web-based articles published from 2004 to 2016 were assessed using the SUrvey Reporting GuidelinE (SURGE). The CHERRIES guideline was also used to assess the reporting quality of the web-based studies. Our results revealed a potential gap in the reporting of many necessary checklist items in both web-based and non-web-based survey studies, including development, description and testing of the questionnaire; advertisement and administration of the questionnaire; sample representativeness and response rates; incentives; informed consent; and methods of statistical analysis. Our findings confirm the presence of major discrepancies in the reporting of results of survey-based studies. This can be attributed to the lack of updated universal checklists for reporting quality standards. We have summarized our findings in a table that may serve as a roadmap for future guidelines and checklists, which will hopefully cover all types and all aspects of survey research.

  20. Going, going, still there: using the WebCite service to permanently archive cited web pages.

    Science.gov (United States)

    Eysenbach, Gunther; Trudel, Mathieu

    2005-12-30

    Scholars are increasingly citing electronic "web references" which are not preserved in libraries or full-text archives. WebCite is a new standard for citing web references. To "webcite" a document involves archiving the cited Web page through www.webcitation.org and citing the WebCite permalink instead of (or in addition to) the unstable live Web page. This journal has amended its "instructions for authors" accordingly, asking authors to archive cited Web pages before submitting a manuscript. Almost 200 other journals are already using the system. We discuss the rationale for WebCite, its technology, and how scholars, editors, and publishers can benefit from the service. Citing scholars initiate an archiving process of all cited Web references, ideally before they submit a manuscript. Authors of online documents and websites which are expected to be cited by others can ensure that their work is permanently available by creating an archived copy using WebCite and providing the citation information, including the WebCite link, on their Web document(s). Editors should ask their authors to cache all cited Web addresses (Uniform Resource Locators, or URLs) "prospectively" before submitting their manuscripts to their journal. Editors and publishers should also instruct their copyeditors to cache cited Web material if the author has not done so already. WebCite can also process publisher-submitted "citing articles" (submitted for example as eXtensible Markup Language [XML] documents) to automatically archive all cited Web pages shortly before or on publication. Finally, WebCite can act as a focussed crawler, retrospectively caching the references of already published articles. Copyright issues are addressed by honouring the respective Internet standards (robot exclusion files, no-cache and no-archive tags). Long-term preservation is ensured by agreements with libraries and digital preservation organizations. The resulting WebCite Index may also have applications for research...
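
    Honouring robot exclusion files before caching a page, as the abstract describes, can be sketched with Python's standard urllib.robotparser. This is not WebCite's actual implementation, and the user-agent string and URLs are illustrative:

```python
from urllib.parse import urlsplit
from urllib.robotparser import RobotFileParser

def may_archive(url, user_agent="ExampleArchiverBot", robots_txt=None):
    """Return True if robots.txt rules permit fetching the URL.
    robots_txt may be supplied directly (e.g. already downloaded);
    otherwise the site's /robots.txt is fetched over the network."""
    rp = RobotFileParser()
    if robots_txt is not None:
        rp.parse(robots_txt.splitlines())    # parse an in-memory robots.txt
    else:
        parts = urlsplit(url)
        rp.set_url(f"{parts.scheme}://{parts.netloc}/robots.txt")
        rp.read()                            # fetch robots.txt over the network
    return rp.can_fetch(user_agent, url)

# In-memory example: all agents are barred from /private/
rules = "User-agent: *\nDisallow: /private/\n"
print(may_archive("http://example.org/page.html", robots_txt=rules))       # True
print(may_archive("http://example.org/private/x.html", robots_txt=rules))  # False
```

    The no-cache and no-archive meta tags mentioned in the abstract would additionally require parsing the fetched HTML, which this sketch does not attempt.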

  1. Web components and the semantic web

    OpenAIRE

    Casey, Maire; Pahl, Claus

    2003-01-01

    Component-based software engineering on the Web differs from traditional component and software engineering. We investigate Web component engineering activities that are crucial for the development, composition, and deployment of components on the Web. The current Web Services and Semantic Web initiatives strongly influence our work. Focussing on Web component composition, we develop description and reasoning techniques that support a component developer in the composition activities, focussing...

  2. Materializing the web of linked data

    CERN Document Server

    Konstantinou, Nikolaos

    2015-01-01

    This book explains the Linked Data domain by adopting a bottom-up approach: it introduces the fundamental Semantic Web technologies and building blocks, which are then combined into methodologies and end-to-end examples for publishing datasets as Linked Data, and use cases that harness scholarly information and sensor data. It presents how Linked Data is used for web-scale data integration, information management and search. Special emphasis is given to the publication of Linked Data from relational databases as well as from real-time sensor data streams. The authors also trace the transformation from the document-based World Wide Web into a Web of Data. Materializing the Web of Linked Data is addressed to researchers and professionals studying software technologies, tools and approaches that drive the Linked Data ecosystem, and the Web in general.
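
    Publishing Linked Data from relational databases, as the book emphasizes, amounts to mapping rows to RDF triples. A minimal hand-rolled sketch emitting N-Triples (the base URI and vocabulary terms are illustrative; production mappings would use a standard such as R2RML and established vocabularies):

```python
def row_to_ntriples(table, pk, row, base="http://example.org/"):
    """Map one relational row to N-Triples: the primary-key value becomes
    the subject URI; each remaining column becomes a predicate with a
    plain literal object."""
    subject = f"<{base}{table}/{row[pk]}>"
    triples = []
    for col, val in row.items():
        if col == pk:
            continue  # the key identifies the resource, it is not a property
        triples.append(f'{subject} <{base}vocab/{col}> "{val}" .')
    return triples

row = {"id": 7, "name": "Ada", "city": "London"}
for t in row_to_ntriples("person", "id", row):
    print(t)
# <http://example.org/person/7> <http://example.org/vocab/name> "Ada" .
# <http://example.org/person/7> <http://example.org/vocab/city> "London" .
```

    A real pipeline would also escape literals, type them, and link subjects to other datasets, which is where the web-scale data integration the book describes comes in.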

  3. Lactic Acid Bacteria : embarking on 30 more years of research

    NARCIS (Netherlands)

    Kok, Jan; Johansen, Eric; Kleerebezem, Michiel; Teusink, Bas

    2014-01-01

    The 11th International Symposium on Lactic Acid Bacteria. Lactic acid bacteria play important roles in the production of food and feed and are increasingly used as health-promoting probiotics. The incessant scientific interest in these microorganisms by academic research groups as well as by...

  4. AMMOS2: a web server for protein-ligand-water complexes refinement via molecular mechanics.

    Science.gov (United States)

    Labbé, Céline M; Pencheva, Tania; Jereva, Dessislava; Desvillechabrol, Dimitri; Becot, Jérôme; Villoutreix, Bruno O; Pajeva, Ilza; Miteva, Maria A

    2017-07-03

    AMMOS2 is an interactive web server for efficient computational refinement of protein-small organic molecule complexes. The AMMOS2 protocol employs atomic-level energy minimization of a large number of experimental or modeled protein-ligand complexes. The web server is based on the previously developed standalone software AMMOS (Automatic Molecular Mechanics Optimization for in silico Screening). AMMOS utilizes the physics-based force field AMMP sp4 and performs optimization of protein-ligand interactions at five levels of flexibility of the protein receptor. The new version 2 of AMMOS implemented in the AMMOS2 web server allows the users to include explicit water molecules and individual metal ions in the protein-ligand complexes during minimization. The web server provides comprehensive analysis of computed energies and interactive visualization of refined protein-ligand complexes. The ligands are ranked by the minimized binding energies allowing the users to perform additional analysis for drug discovery or chemical biology projects. The web server has been extensively tested on 21 diverse protein-ligand complexes. AMMOS2 minimization shows consistent improvement over the initial complex structures in terms of minimized protein-ligand binding energies and water positions optimization. The AMMOS2 web server is freely available without any registration requirement at the URL: http://drugmod.rpbs.univ-paris-diderot.fr/ammosHome.php. © The Author(s) 2017. Published by Oxford University Press on behalf of Nucleic Acids Research.

  5. Semantic Web Development

    National Research Council Canada - National Science Library

    Berners-Lee, Tim; Swick, Ralph

    2006-01-01

    ...) project between 2002 and 2005 provided key steps in the research in the Semantic Web technology, and also played an essential role in delivering the technology to industry and government in the form...

  6. Web metrics for library and information professionals

    CERN Document Server

    Stuart, David

    2014-01-01

    This is a practical guide to using web metrics to measure impact and demonstrate value. The web provides an opportunity to collect a host of different metrics, from those associated with social media accounts and websites to more traditional research outputs. This book is a clear guide for library and information professionals as to what web metrics are available and how to assess and use them to make informed decisions and demonstrate value. As individuals and organizations increasingly use the web in addition to traditional publishing avenues and formats, this book provides the tools to unlock web metrics and evaluate the impact of this content. The key topics covered include: bibliometrics, webometrics and web metrics; data collection tools; evaluating impact on the web; evaluating social media impact; investigating relationships between actors; exploring traditional publications in a new environment; web metrics and the web of data; and the future of web metrics and the library and information professional...

  7. Critical Reading of the Web

    Science.gov (United States)

    Griffin, Teresa; Cohen, Deb

    2012-01-01

    The ubiquity and familiarity of the world wide web means that students regularly turn to it as a source of information. In doing so, they "are said to rely heavily on simple search engines, such as Google to find what they want." Researchers have also investigated how students use search engines, concluding that "the young web users tended to…

  8. WebCN: A web-based computation tool for in situ-produced cosmogenic nuclides

    Energy Technology Data Exchange (ETDEWEB)

    Ma Xiuzeng [Department of Physics, Purdue University, West Lafayette, IN 47907 (United States)]. E-mail: hongju@purdue.edu; Li Yingkui [Department of Geography, University of Missouri-Columbia, Columbia, MO 65211 (United States); Bourgeois, Mike [Department of Physics, Purdue University, West Lafayette, IN 47907 (United States); Caffee, Marc [Department of Physics, Purdue University, West Lafayette, IN 47907 (United States); Elmore, David [Department of Physics, Purdue University, West Lafayette, IN 47907 (United States); Granger, Darryl [Department of Earth and Atmospheric Sciences, Purdue University, West Lafayette, IN 47907 (United States); Muzikar, Paul [Department of Physics, Purdue University, West Lafayette, IN 47907 (United States); Smith, Preston [Department of Physics, Purdue University, West Lafayette, IN 47907 (United States)

    2007-06-15

    Cosmogenic nuclide techniques are increasingly being utilized in geoscience research. For this it is critical to establish an effective, easily accessible and well-defined tool for cosmogenic nuclide computations. We have been developing a web-based tool (WebCN) to calculate surface exposure ages and erosion rates based on the nuclide concentrations measured by accelerator mass spectrometry. WebCN for ^10Be and ^26Al has been finished and published at http://www.physics.purdue.edu/primelab/for_users/rockage.html. WebCN for ^36Cl is under construction. WebCN is designed as a three-tier client/server model and uses the open source PostgreSQL for database management and PHP for the interface design and calculations. On the client side, an internet browser and Microsoft Access are used as application interfaces to access the system. Open Database Connectivity is used to link PostgreSQL and Microsoft Access. WebCN accounts for both spatial and temporal distributions of the cosmic ray flux to calculate the production rates of in situ-produced cosmogenic nuclides at the Earth's surface.
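
    Exposure-age calculators of this kind rest on the production/decay balance N = (P/λ)(1 − e^(−λt)); ignoring erosion and solving for t gives a one-line age estimate. A simplified sketch with illustrative input values (WebCN itself additionally corrects for the spatial and temporal variation of the cosmic-ray flux, which this sketch does not):

```python
import math

def exposure_age(N, P, half_life):
    """Exposure age t (yr) from nuclide concentration N (atoms/g) and
    surface production rate P (atoms/g/yr), assuming zero erosion:
    N = (P/lam) * (1 - exp(-lam*t))  =>  t = -ln(1 - N*lam/P) / lam
    """
    lam = math.log(2) / half_life   # decay constant (1/yr)
    return -math.log(1.0 - N * lam / P) / lam

# Illustrative inputs in the range used for 10Be work
# (half-life ~1.39 Myr; P of a few atoms/g/yr near sea level)
t = exposure_age(N=5.0e4, P=4.0, half_life=1.39e6)
print(round(t))  # roughly 12.5 kyr for these inputs
```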

  9. Web-Enhanced Instruction and Learning: Findings of a Short- and Long-Term Impact Study and Teacher Use of NASA Web Resources

    Science.gov (United States)

    McCarthy, Marianne C.; Grabowski, Barbara L.; Koszalka, Tiffany

    2003-01-01

    Over a three-year period, researchers and educators from the Pennsylvania State University (PSU), University Park, Pennsylvania, and the NASA Dryden Flight Research Center (DFRC), Edwards, California, worked together to analyze, develop, implement and evaluate materials and tools that enable teachers to use NASA Web resources effectively for teaching science, mathematics, technology and geography. Two conference publications and one technical paper have already been published as part of this educational research series on Web-based instruction and learning. This technical paper, Web-Enhanced Instruction and Learning: Findings of a Short- and Long-Term Impact Study, is the culminating report in the series and is based on the final report submitted to NASA. It describes the broad spectrum of data gathered from teachers about their experiences using NASA Web resources in the classroom, as well as participating teachers' responses and feedback about the use of the NASA Web-Enhanced Learning Environment Strategies reflection tool in their teaching practices. The reflection tool was designed to help teachers merge the vast array of NASA resources with the best teaching methods, taking into consideration grade levels, subject areas and teaching preferences. The teachers described their attitudes toward technology and innovation in the classroom and their experiences and perceptions as they attempted to integrate Web resources into science, mathematics, technology and geography instruction.

  10. Big Web data, small focus: An ethnosemiotic approach to culturally themed selective Web archiving

    Directory of Open Access Journals (Sweden)

    Saskia Huc-Hepher

    2015-07-01

    Full Text Available This paper proposes a multimodal ethnosemiotic conceptual framework for culturally themed selective Web archiving, taking as a practical example the curation of the London French Special Collection (LFSC) in the UK Web Archive. Its focus on a particular ‘community’ is presented as advantageous in overcoming the sheer scale of data available on the Web; yet, it is argued that these ethnographic boundaries may be flawed if they do not map onto the collective self-perception of the London French. The approach establishes several theoretical meeting points between Pierre Bourdieu’s ethnography and Gunther Kress’s multimodal social semiotics, notably, the foregrounding of practice and the meaning-making potentialities of the everyday; the implications of language and categorisation; the interplay between (curating/researching) subject and (curated/researched) object; evolving notions of agency, authorship and audience; together with social engagement, and the archive as dynamic process and product. The curation rationale proposed stems from Bourdieu’s three-stage field analysis model, which places a strong emphasis on habitus, considered to be most accurately (re)presented through blogs, yet necessitates its contextualisation within the broader (diasporic) field(s), through institutional websites, for example, whilst advocating a reflexive awareness of the researcher/curator’s (subjective) role. This, alongside the Kressian acknowledgement of the inherent multimodality of on-line resources, lends itself convincingly to selection and valuation strategies, whilst the discussion of language, genre, authorship and audience is relevant to the potential cataloguing of Web objects. By conceptualising the culturally themed selective Web-archiving process within the ethnosemiotic framework constructed, concrete recommendations emerge regarding curation, classification and crowd-sourcing.

  11. Resource Selection for Federated Search on the Web

    NARCIS (Netherlands)

    Nguyen, Dong-Phuong; Demeester, Thomas; Trieschnigg, Rudolf Berend; Hiemstra, Djoerd

    A publicly available dataset for federated search reflecting a real web environment has long been absent, making it difficult for researchers to test the validity of their federated search algorithms for the web setting. We present several experiments and analyses on resource selection on the web.

  12. WebVis: a hierarchical web homepage visualizer

    Science.gov (United States)

    Renteria, Jose C.; Lodha, Suresh K.

    2000-02-01

    WebVis, the Hierarchical Web Home Page Visualizer, is a tool for managing home web pages. The user can access this tool via the WWW and obtain a hierarchical visualization of their home web pages. WebVis is a real-time interactive tool that supports many different queries on the statistics of internal files, such as size, age, and type. In addition, statistics on embedded information such as VRML files, Java applets, images and sound files can be extracted and queried. Results of these queries are visualized using the color, shape and size of different nodes of the hierarchy. The visualization assists the user in a variety of tasks, such as quickly finding outdated information or locating large files. WebVis is one solution to the growing web space maintenance problem. Implementation of WebVis is realized with Perl and Java. Perl pattern matching and file handling routines are used to collect and process web space linkage information and web document information. Java utilizes the collected information to produce visualizations of the web space. Java also provides WebVis with real-time interactivity while running off the WWW. Some WebVis examples of home web page visualization are presented.

  13. The Rise and Fall of Text on the Web: A Quantitative Study of Web Archives

    Science.gov (United States)

    Cocciolo, Anthony

    2015-01-01

    Introduction: This study addresses the following research question: is the use of text on the World Wide Web declining? If so, when did it start declining, and by how much has it declined? Method: Web pages are downloaded from the Internet Archive for the years 1999, 2002, 2005, 2008, 2011 and 2014, producing 600 captures of 100 prominent and…

  14. Accessible Web Design - The Power of the Personal Message.

    Science.gov (United States)

    Whitney, Gill

    2015-01-01

    The aim of this paper is to describe ongoing research being carried out to enable people with visual impairments to communicate directly with designers and specifiers of hobby and community web sites to maximise the accessibility of their sites. The research started with an investigation of the accessibility of community and hobby web sites as perceived by a group of visually impaired end users. It is continuing with an investigation into how best to communicate with web designers who are not experts in web accessibility. The research is making use of communication theory to investigate how terminology describing personal experience can be used in the most effective and powerful way. By working with the users using a Delphi study, the research has ensured that the views of the visually impaired end users are successfully transmitted.

  15. WebNet 99 : proceedings of WebNet 99 - World Conference on the WWW and Internet, Honolulu, Hawaii, October 24-30, 1999

    NARCIS (Netherlands)

    De Bra, P.M.E.; Leggett, J.

    1999-01-01

    The 1999 WebNet conference addressed research, new developments, and experiences related to the Internet and World Wide Web. The 394 contributions of WebNet 99 contained in this proceedings comprise the full and short papers accepted for presentation at the conference. Major topics covered include:

  16. Integrating Data Warehouses with Web Data

    DEFF Research Database (Denmark)

    Perez, Juan Manuel; Berlanga, Rafael; Aramburu, Maria Jose

    This paper surveys the most relevant research on combining Data Warehouse (DW) and Web data. It studies the XML technologies that are currently being used to integrate, store, query and retrieve web data, and their application to data warehouses. The paper addresses the problem of integrating...

  17. The effects of Web site structure: the role of personal difference.

    Science.gov (United States)

    Chung, Hwiman; Ahn, Euijin

    2007-12-01

    This study examined the effects of Web site structures in terms of advertising effectiveness: memory, attitude, and behavioral intentions. The primary research question for this study is: what type of Web site (Web ad) structure is most effective? In the pilot study, we tested the difference between two Web site structures, linear and interactive, in terms of traditional advertising effectiveness. Results from the pilot study did not support our research expectations. However, differences in terms of memory were noted between the two structures. After re-creating the Web site based on subjects' comments, in the final experiment, we examined the differences between the two structures and the moderating role of personality difference on the effects of Web site structure. The results confirm that participants' attitude, memory, and behavioral intentions were affected differently by the different Web site structures. However, some research hypotheses were not supported by the current data.

  18. Enlisting User Community Perspectives to Inform Development of a Semantic Web Application for Discovery of Cross-Institutional Research Information and Data

    Science.gov (United States)

    Johns, E. M.; Mayernik, M. S.; Boler, F. M.; Corson-Rikert, J.; Daniels, M. D.; Gross, M. B.; Khan, H.; Maull, K. E.; Rowan, L. R.; Stott, D.; Williams, S.; Krafft, D. B.

    2015-12-01

    Researchers seek information and data through a variety of avenues: published literature, search engines, repositories, colleagues, etc. In order to build a web application that leverages linked open data to enable multiple paths for information discovery, the EarthCollab project has surveyed two geoscience user communities to consider how researchers find and share scholarly output. EarthCollab, a cross-institutional, EarthCube funded project partnering UCAR, Cornell University, and UNAVCO, is employing the open-source semantic web software, VIVO, as the underlying technology to connect the people and resources of virtual research communities. This study will present an analysis of survey responses from members of the two case study communities: (1) the Bering Sea Project, an interdisciplinary field program whose data archive is hosted by NCAR's Earth Observing Laboratory (EOL), and (2) UNAVCO, a geodetic facility and consortium that supports diverse research projects informed by geodesy. The survey results illustrate the types of research products that respondents indicate should be discoverable within a digital platform and the current methods used to find publications, data, personnel, tools, and instrumentation. The responses showed that scientists rely heavily on general purpose search engines, such as Google, to find information, but that data center websites and the published literature were also critical sources for finding collaborators, data, and research tools. The survey participants also identify additional features of interest for an information platform such as search engine indexing, connection to institutional web pages, generation of bibliographies and CVs, and outward linking to social media. Through the survey, the user communities prioritized the type of information that is most important to display and describe their work within a research profile. The analysis of this survey will inform our further development of a platform that will

  9. Web-Based Sun and Moon Position Simulator Using WebGL [Simulator Posisi Matahari dan Bulan Berbasis Web dengan WebGL]

    Directory of Open Access Journals (Sweden)

    Kamshory

    2014-09-01

    Full Text Available The moon, as a satellite of the earth, has an important role in life on earth. Apart from being a source of illumination, the moon also affects the earth both on land and at sea, and its influence depends on its position. This research aims to create a simulator of the positions of the sun and moon relative to the earth, based on the time and location of observation. The simulator is web-based and rendered in three dimensions (3D) using WebGL technology. The positions of the sun and moon are expressed as the latitude and longitude of the point where the line connecting the earth to the sun, or the earth to the moon, crosses the earth's surface. These positions are obtained from calculations based on previous research. Once the positions are known, the earth, sun, and moon are rendered as 3D models. The camera can be moved by dragging the web page, so the sun, earth, and moon can be viewed from various positions. The camera always points to the center of the earth, to prevent the user from flipping the camera and losing sight of the objects.

  20. A web-based tool to engage stakeholders in informing research planning for future decisions on emerging materials

    International Nuclear Information System (INIS)

    Powers, Christina M.; Grieger, Khara D.; Hendren, Christine Ogilvie; Meacham, Connie A.; Gurevich, Gerald; Lassiter, Meredith Gooding; Money, Eric S.; Lloyd, Jennifer M.; Beaulieu, Stephen M.

    2014-01-01

    Prioritizing and assessing risks associated with chemicals, industrial materials, or emerging technologies is a complex problem that benefits from the involvement of multiple stakeholder groups. For example, in the case of engineered nanomaterials (ENMs), scientific uncertainties exist that hamper environmental, health, and safety (EHS) assessments. Therefore, alternative approaches to standard EHS assessment methods have gained increased attention. The objective of this paper is to describe the application of a web-based, interactive decision support tool developed by the U.S. Environmental Protection Agency (U.S. EPA) in a pilot study on ENMs. The piloted tool implements U.S. EPA's comprehensive environmental assessment (CEA) approach to prioritize research gaps. When pursued, such research priorities can result in data that subsequently improve the scientific robustness of risk assessments and inform future risk management decisions. Pilot results suggest that the tool was useful in facilitating multi-stakeholder prioritization of research gaps. Results also provide potential improvements for subsequent applications. The outcomes of future CEAWeb applications with larger stakeholder groups may inform the development of funding opportunities for emerging materials across the scientific community (e.g., National Science Foundation Science to Achieve Results [STAR] grants, National Institutes of Health Requests for Proposals). - Highlights: • A web-based, interactive decision support tool was piloted for emerging materials. • The tool (CEAWeb) was based on an established approach to prioritize research gaps. • CEAWeb facilitates multi-stakeholder prioritization of research gaps. • We provide recommendations for future versions and applications of CEAWeb

  1. Use of World Wide Web and NCSA Mosaic at Langley

    Science.gov (United States)

    Nelson, Michael

    1994-01-01

    A brief history of the use of the World Wide Web at Langley Research Center is presented along with architecture of the Langley Web. Benefits derived from the Web and some Langley projects that have employed the World Wide Web are discussed.

  2. A Web Browsing Behavior Recording System

    OpenAIRE

    Ohmura, Hayato; Kitasuka, Teruaki; Aritsugi, Masayoshi; オオムラ, ハヤト; キタスカ, テルアキ; アリツギ, マサヨシ; 大村, 勇人; 北須賀, 輝明; 有次, 正義

    2011-01-01

    In this paper, we introduce a Web browsing behavior recording system for research. Web browsing behavior data can help us to provide sophisticated services for human activities, because the data indicate characteristics of Web users. We discuss the necessity of the data and their potential benefits, and develop a system for collecting the data as an add-on for Firefox. We also report some results of preliminary experiments to test its usefulness in analyses of human activities.

  3. Web-based management of research groups - using the right tools and an adequate integration strategy

    Energy Technology Data Exchange (ETDEWEB)

    Barroso, Antonio Carlos de Oliveira; Menezes, Mario Olimpio de, E-mail: barroso@ipen.b, E-mail: mario@ipen.b [Instituto de Pesquisas Energeticas e Nucleares (IPEN/CNEN-SP), Sao Paulo, SP (Brazil). Grupo de Pesquisa em Gestao do Conhecimento Aplicada a Area Nuclear

    2011-07-01

    Nowadays, broad interest in a couple of interlinked subject areas can make the configuration of a research group highly diversified, both in terms of its components and of the binding relationships that glue the group together. That is the case of the research group for knowledge management and its applications to nuclear technology - KMANT at IPEN, a living entity born 7 years ago that has sustainably attracted new collaborators. This paper describes the strategic planning of the group, its charter and credo, the present components of the group and the diversified nature of their relations with the group and with IPEN. Then the technical competencies and current research lines (or programs) are described, as well as the research projects and the management scheme of the group. In the sequence, the web-based management and collaboration tools are described, as well as our experience with their use. KMANT has experimented with over 20 systems and software in this area, but we will focus on those aimed at: (a) web-based project management (RedMine, ClockinIT, Who does, PhProjekt and Dotproject); (b) teaching platform (Moodle); (c) mapping and knowledge representation tools (Cmap, Freemind and VUE); (d) simulation tools (Matlab, Vensim and NetLogo); (e) social network analysis tools (ORA, MultiNet and UciNet); (f) statistical analysis and modeling tools (R and SmartPLS). Special emphasis is given to the coupling of the group's permanent activities, like graduate courses and regular seminars, and to how newcomers are selected and trained to join the group. A global assessment of the role of the management strategy and the available tool set in the group's performance is presented. (author)

  4. Web-based management of research groups - using the right tools and an adequate integration strategy

    International Nuclear Information System (INIS)

    Barroso, Antonio Carlos de Oliveira; Menezes, Mario Olimpio de

    2011-01-01

    Nowadays, broad interest in a couple of interlinked subject areas can make the configuration of a research group highly diversified, both in terms of its components and of the binding relationships that glue the group together. That is the case of the research group for knowledge management and its applications to nuclear technology - KMANT at IPEN, a living entity born 7 years ago that has sustainably attracted new collaborators. This paper describes the strategic planning of the group, its charter and credo, the present components of the group and the diversified nature of their relations with the group and with IPEN. Then the technical competencies and current research lines (or programs) are described, as well as the research projects and the management scheme of the group. In the sequence, the web-based management and collaboration tools are described, as well as our experience with their use. KMANT has experimented with over 20 systems and software in this area, but we will focus on those aimed at: (a) web-based project management (RedMine, ClockinIT, Who does, PhProjekt and Dotproject); (b) teaching platform (Moodle); (c) mapping and knowledge representation tools (Cmap, Freemind and VUE); (d) simulation tools (Matlab, Vensim and NetLogo); (e) social network analysis tools (ORA, MultiNet and UciNet); (f) statistical analysis and modeling tools (R and SmartPLS). Special emphasis is given to the coupling of the group's permanent activities, like graduate courses and regular seminars, and to how newcomers are selected and trained to join the group. A global assessment of the role of the management strategy and the available tool set in the group's performance is presented. (author)

  5. Pragmatic Computing - A Semiotic Perspective to Web Services

    Science.gov (United States)

    Liu, Kecheng

    The web seems to have evolved from a syntactic web, through a semantic web, to a pragmatic web. This evolution conforms to the study of information and technology from the theory of semiotics. Pragmatics, concerned with the use of information in relation to context and intended purposes, is extremely important in web services and applications. Much research in pragmatics has been carried out, but at the same time, attempts and solutions have led to some more questions. After reviewing the current work on the pragmatic web, the paper presents a semiotic approach to web services, particularly request decomposition and service aggregation.

  6. The World Wide Web Revisited

    Science.gov (United States)

    Owston, Ron

    2007-01-01

    Nearly a decade ago the author wrote, in Educational Researcher, one of the first widely cited academic articles about the educational role of the web. He argued that educators must be able to demonstrate that the web (1) can increase access to learning, (2) must not result in higher costs for learning, and (3) can lead to improved learning. These…

  7. An innovative methodology for the transmission of information, using Sensor Web Enablement, from ongoing research vessels.

    Science.gov (United States)

    Sorribas, Jordi; Sinquin, Jean Marc; Diviacco, Paolo; De Cauwer, Karien; Danobeitia, Juanjo; Olive, Joan; Bermudez, Luis

    2013-04-01

    Research vessels are sophisticated laboratories with complex data acquisition systems for a variety of instruments and sensors that acquire real-time information on many different parameters and disciplines. The data and metadata acquired are commonly disseminated using well-established standards for data centers; however, the instruments and systems on board are not always well described, and significant information may be missing. Thus, important information such as instrument calibration or operational data often does not reach the data center. The OGC Sensor Web Enablement standards provide solutions to serve complex data along with a detailed description of the process used to obtain them. We present an innovative methodology for using Sensor Web Enablement standards to describe and serve information from research vessels, the data acquisition systems used on board, and the data sets resulting from the onboard work. This methodology is designed to be used on research vessels, but it also applies to data centers, avoiding loss of information in between. The proposed solution considers (I) the difficulty of describing a multidisciplinary and complex mobile sensor system, (II) easy integration with the data acquisition systems on board, (III) the complex and often incomplete vocabularies typical of marine disciplines, (IV) connection with the data and metadata services at the data centers, and (V) management of changes in instrument configuration over time.
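
    Within OGC Sensor Web Enablement, instruments are described in SensorML documents. The abstract does not show the actual encodings used, so the sketch below builds a simplified, SensorML-like description of a shipboard sensor with the Python standard library; the element names and the sensor details are invented stand-ins, not the real OGC schema (which uses XML namespaces and controlled vocabularies).

```python
import xml.etree.ElementTree as ET

# Minimal, illustrative SensorML-like description of a shipboard sensor.
# Element names are simplified stand-ins for the full OGC SensorML schema.

def describe_sensor(sensor_id, name, calibration_date, outputs):
    """Build an XML description carrying the calibration and output
    metadata that often fails to reach the data center."""
    system = ET.Element("System", id=sensor_id)
    ET.SubElement(system, "name").text = name
    ET.SubElement(system, "calibrationDate").text = calibration_date
    out = ET.SubElement(system, "outputs")
    for pname, unit in outputs:
        ET.SubElement(out, "output", name=pname, uom=unit)
    return system

# Hypothetical thermosalinograph on a research vessel
doc = describe_sensor(
    "tsg-01", "Thermosalinograph", "2013-01-15",
    [("sea_water_temperature", "Cel"), ("sea_water_salinity", "1e-3")],
)
xml_text = ET.tostring(doc, encoding="unicode")
```

    A real deployment would serve such descriptions through a Sensor Observation Service, so a data center can harvest instrument metadata together with the observations themselves.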

  8. The emergent discipline of health web science.

    Science.gov (United States)

    Luciano, Joanne S; Cumming, Grant P; Wilkinson, Mark D; Kahana, Eva

    2013-08-22

    The transformative power of the Internet on all aspects of daily life, including health care, has been widely recognized both in the scientific literature and in public discourse. Viewed through the various lenses of diverse academic disciplines, these transformations reveal opportunities realized, the promise of future advances, and even potential problems created by the penetration of the World Wide Web for both individuals and for society at large. Discussions about the clinical and health research implications of the widespread adoption of information technologies, including the Internet, have been subsumed under the disciplinary label of Medicine 2.0. More recently, however, multi-disciplinary research has emerged that is focused on the achievement and promise of the Web itself, as it relates to healthcare issues. In this paper, we explore and interrogate the contributions of the burgeoning field of Web Science in relation to health maintenance, health care, and health policy. From this, we introduce Health Web Science as a subdiscipline of Web Science, distinct from but overlapping with Medicine 2.0. This paper builds on the presentations and subsequent interdisciplinary dialogue that developed among Web-oriented investigators present at the 2012 Medicine 2.0 Conference in Boston, Massachusetts.

  9. Gas Hydrate Research Database and Web Dissemination Channel

    Energy Technology Data Exchange (ETDEWEB)

    Micheal Frenkel; Kenneth Kroenlein; V Diky; R.D. Chirico; A. Kazakow; C.D. Muzny; M. Frenkel

    2009-09-30

    To facilitate advances in application of technologies pertaining to gas hydrates, a United States database containing experimentally-derived information about those materials was developed. The Clathrate Hydrate Physical Property Database (NIST Standard Reference Database #156) was developed by the TRC Group at NIST in Boulder, Colorado, paralleling a highly-successful database of thermodynamic properties of molecular pure compounds and their mixtures, and in association with an international effort on the part of CODATA to aid in international data sharing. Development and population of this database relied on the development of three components of information-processing infrastructure: (1) guided data capture (GDC) software designed to convert data and metadata into a well-organized, electronic format, (2) a relational data storage facility to accommodate all types of numerical and metadata within the scope of the project, and (3) a gas hydrate markup language (GHML) developed to standardize data communications between 'data producers' and 'data users'. Having developed the appropriate data storage and communication technologies, a web-based interface for both the new Clathrate Hydrate Physical Property Database, as well as Scientific Results from the Mallik 2002 Gas Hydrate Production Research Well Program was developed and deployed at http://gashydrates.nist.gov.
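
    The report does not reproduce the GHML schema itself, so to illustrate how a markup language standardizes exchange between 'data producers' and 'data users', the sketch below parses a hypothetical GHML-like fragment; the tag names are invented for illustration and do not reflect the actual GHML specification.

```python
import xml.etree.ElementTree as ET

# Hypothetical GHML-like fragment. Tag names are invented for illustration
# and do not reflect the actual Gas Hydrate Markup Language schema.
GHML_SAMPLE = """
<hydrateMeasurement>
  <compound>methane</compound>
  <property name="dissociation-pressure" unit="MPa">2.56</property>
  <temperature unit="K">273.2</temperature>
</hydrateMeasurement>
"""

def parse_measurement(text):
    """Turn a measurement record into a dict a 'data user' can consume."""
    root = ET.fromstring(text)
    prop = root.find("property")
    return {
        "compound": root.findtext("compound"),
        "property": prop.get("name"),
        "value": float(prop.text),
        "unit": prop.get("unit"),
        "temperature_K": float(root.findtext("temperature")),
    }

record = parse_measurement(GHML_SAMPLE)
```

    The point of such a format is that producers and consumers agree on structure and units up front, so numerical values and their metadata travel together.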

  10. Semantic Web

    Directory of Open Access Journals (Sweden)

    Anna Lamandini

    2011-06-01

    Full Text Available The semantic Web is a technology at the service of knowledge, aimed at accessibility and the sharing of content and at facilitating interoperability between different systems; as such, it is one of the nine key technological pillars of TIC (technologies for information and communication) within the third theme, the specific cooperation programme, of the seventh framework programme for research and development (7°PQRS, 2007-2013). As a system it seeks to overcome the overload or excess of irrelevant information on the Internet, in order to facilitate specific or pertinent research. It is an extension of the existing Web in which the aim is cooperation between computers and people (the dream of Sir Tim Berners-Lee), where machines can give more support to people when integrating and elaborating data in order to obtain inferences and a global sharing of data. It is a technology able to favour the development of a “data web”, in other words the creation of a space of interconnected and shared data sets (Linked Data) which allows users to link different types of data coming from different sources. It is a technology that will have a great effect on everyday life since it will permit the planning of “intelligent applications” in various sectors such as education and training, research, the business world, public information, tourism, health, and e-government. It is an innovative technology that activates a social transformation (the socio-semantic Web) on a world level, since it redefines the cognitive universe of users and enables the sharing not only of information but of meaning (collective and connected intelligence).

  11. A mixed-method research to investigate the adoption of mobile devices and Web2.0 technologies among medical students and educators.

    Science.gov (United States)

    Fan, Si; Radford, Jan; Fabian, Debbie

    2016-04-19

    The past decade has witnessed the increasing adoption of Web 2.0 technologies in medical education. Recently, the notion of digital habitats, Web 2.0 supported learning environments, has also come onto the scene. While there has been initial research on the use of digital habitats for educational purposes, very limited research has examined the adoption of digital habitats by medical students and educators on mobile devices. This paper reports the Stage 1 findings of a two-stage study. The study as a whole aimed to develop and implement a personal digital habitat, namely digiMe, for medical students and educators at an Australian university. The first stage, however, examined the types of Web 2.0 tools and mobile devices that are being used by potential digiMe users, and the reasons for their adoption. In this first stage of research, data were collected through a questionnaire and semi-structured interviews. Questionnaire data collected from 104 participants were analysed using the Predictive Analytics SoftWare (PASW). Frequencies, median and mean values were computed. Kruskal-Wallis tests were then performed to examine variations between the views of different participant groups. Notes from the 6 interviews, together with responses to the open-ended section of the questionnaire, were analysed using the constructivist grounded theory approach, to generate key themes relevant to the adoption of Web 2.0 tools and mobile devices. The findings reflected the wide use of mobile devices, including both smart phones and computing tablets, by medical students and educators for learning, teaching and professional development purposes. Among the 22 types of Web 2.0 tools investigated, fewer than half were frequently used by the participants, reflecting the mismatch between users' desires and their actual practice. Age and occupation appeared to be the influential factors for their adoption. Easy access to information and improved communication are the main purposes. This

  12. Web-conferencing as a viable method for group decision research

    Directory of Open Access Journals (Sweden)

    Michel J. J. Handgraaf

    2012-09-01

    Full Text Available Studying group decision-making is challenging for multiple reasons. An important logistic difficulty is studying a sufficiently large number of groups, each with multiple participants. Assembling groups online could make this process easier and also provide access to group members more representative of real-world work groups than the sample of college students that typically comprise lab face-to-face (FtF) groups. The main goal of this paper is to compare the decisions of online groups to those of FtF groups. We did so in a study that manipulated gain/loss framing of a risky decision between groups and examined the decisions of both individual group members and groups. All of these dependent measures are compared for an online and an FtF sample. Our results suggest that web-conferencing can be a substitute for FtF interaction in group decision-making research, as we found no moderation effects of communication medium on individual or group decision outcome variables. The effects of medium that were found suggest that the use of online groups may be the preferred method for group research. To wit, discussions among the online groups were shorter, but generated a greater number of thought units, i.e., they made more efficient use of time.

  13. Quality issues in the management of web information

    CERN Document Server

    Bordogna, Gloria; Jain, Lakhmi

    2013-01-01

    This research volume presents a sample of recent contributions related to quality assessment for Web-based information in the context of information access, retrieval, and filtering systems. The advent of the Web and the uncontrolled process of document generation have raised the problem of assessing the quality of information on the Web, considering the nature of documents (texts, images, video, sounds, and so on), the genre of documents (news, geographic information, ontologies, medical records, product records, and so on), the reputation of information sources and sites, and, last but not least, the actions performed on documents (content indexing, retrieval and ranking, collaborative filtering, and so on). The volume constitutes a compendium of both heterogeneous approaches and sample applications focusing on specific aspects of quality assessment for Web-based information, for researchers, PhD students and practitioners carrying out their research activity in the field of W...

  14. Web-based control application using WebSocket

    International Nuclear Information System (INIS)

    Furukawa, Y.

    2012-01-01

    The WebSocket protocol allows asynchronous full-duplex communication between a Web-based (i.e. JavaScript-based) application and a Web server. WebSocket started as a part of the HTML5 standardization effort but has since been separated from HTML5 and developed independently. Using WebSocket, it becomes easy to develop platform-independent, presentation-layer applications for accelerator and beamline control software. In addition, a Web browser is the only application program that needs to be installed on the client computer. WebSocket-based applications communicate with the WebSocket server using simple text-based messages, so WebSocket is applicable to message-based control systems like MADOCA, which was developed for the SPring-8 control system. A simple WebSocket server for the MADOCA control system and a simple motor control application were successfully made as a first trial of a WebSocket control application. Using Google Chrome (version 13.0) on Debian/Linux and Windows 7, Opera (version 11.0) on Debian/Linux, and Safari (version 5.0.3) on Mac OS X as clients, the motors could be controlled using a WebSocket-based Web application. A diffractometer control application for use in synchrotron radiation diffraction experiments was also developed. (author)
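
    The MADOCA-specific message format is not described in the abstract, but the WebSocket opening handshake itself is standardized in RFC 6455 and can be sketched with the standard library alone: the server concatenates the client's Sec-WebSocket-Key with a fixed GUID, SHA-1 hashes the result, and returns the base64 digest as Sec-WebSocket-Accept.

```python
import base64
import hashlib

# RFC 6455 opening-handshake computation for Sec-WebSocket-Accept.
WS_GUID = "258EAFA5-E914-47DA-95CA-C5AB0DC85B11"  # fixed GUID from RFC 6455

def websocket_accept(client_key: str) -> str:
    """Derive the Sec-WebSocket-Accept value a server must return
    for a given client Sec-WebSocket-Key."""
    digest = hashlib.sha1((client_key + WS_GUID).encode("ascii")).digest()
    return base64.b64encode(digest).decode("ascii")

# Example key taken from RFC 6455 itself:
accept = websocket_accept("dGhlIHNhbXBsZSBub25jZQ==")
```

    After this handshake the connection is upgraded from HTTP, and both sides may send framed messages at any time, which is what gives the browser client its full-duplex channel to the control server.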

  15. Web document engineering

    International Nuclear Information System (INIS)

    White, B.

    1996-05-01

    This tutorial provides an overview of several document engineering techniques which are applicable to the authoring of World Wide Web documents. It illustrates how pre-WWW hypertext research is applicable to the development of WWW information resources.

  16. Bringing Web 2.0 to bioinformatics.

    Science.gov (United States)

    Zhang, Zhang; Cheung, Kei-Hoi; Townsend, Jeffrey P

    2009-01-01

    Enabling deft data integration from numerous, voluminous and heterogeneous data sources is a major bioinformatic challenge. Several approaches have been proposed to address this challenge, including data warehousing and federated databasing. Yet despite the rise of these approaches, integration of data from multiple sources remains problematic and toilsome. These two approaches follow a user-to-computer communication model for data exchange, and do not facilitate a broader concept of data sharing or collaboration among users. In this report, we discuss the potential of Web 2.0 technologies to transcend this model and enhance bioinformatics research. We propose a Web 2.0-based Scientific Social Community (SSC) model for the implementation of these technologies. By establishing a social, collective and collaborative platform for data creation, sharing and integration, we promote a web services-based pipeline featuring web services for computer-to-computer data exchange as users add value. This pipeline aims to simplify data integration and creation, to realize automatic analysis, and to facilitate reuse and sharing of data. SSC can foster collaboration and harness collective intelligence to create and discover new knowledge. In addition to its research potential, we also describe its potential role as an e-learning platform in education. We discuss lessons from information technology, predict the next generation of Web (Web 3.0), and describe its potential impact on the future of bioinformatics studies.

  17. Editorial: Web-Based Learning: Innovations and Challenges

    Directory of Open Access Journals (Sweden)

    Mudasser F. Wyne

    2010-12-01

    Full Text Available This special issue of Knowledge Management & E-Learning: An International Journal (KM&EL) aims to stimulate interest in web-based issues in both teaching and learning, expose natural collaboration among the authors and readers, inform the larger research community of the interest and importance of this area, and create a forum for evaluating innovations and challenges. We intend to bring together researchers and practitioners interested in developing and enhancing web-based learning environments. The objectives of this attempt are to provide a forum for discussion of ideas and techniques developed and used in web-based learning. In addition, the issue can also be used by educators and developers to discuss requirements for web-based education. Both theoretical papers and papers reporting implementation models, technology used and practical results are included in the issue.

  18. Exploring the Relationships between Web Usability and Students' Perceived Learning in Web-Based Multimedia (WBMM) Tutorials

    Science.gov (United States)

    Mackey, Thomas P.; Ho, Jinwon

    2008-01-01

    The purpose of this case study is to better understand the relationships between Web usability and students' perceived learning in the design and implementation of Web-based multimedia (WBMM) tutorials in blended courses. Much of the current research in this area focuses on the use of multimedia as a replacement for classroom instruction rather…

  19. EnviroAtlas Community Block Group Metrics Web Service

    Data.gov (United States)

    U.S. Environmental Protection Agency — This EnviroAtlas web service supports research and online mapping activities related to EnviroAtlas (https://www.epa.gov/enviroatlas). This web service includes...

  20. Lynx web services for annotations and systems analysis of multi-gene disorders.

    Science.gov (United States)

    Sulakhe, Dinanath; Taylor, Andrew; Balasubramanian, Sandhya; Feng, Bo; Xie, Bingqing; Börnigen, Daniela; Dave, Utpal J; Foster, Ian T; Gilliam, T Conrad; Maltsev, Natalia

    2014-07-01

    Lynx is a web-based integrated systems biology platform that supports annotation and analysis of experimental data and generation of weighted hypotheses on molecular mechanisms contributing to human phenotypes and disorders of interest. Lynx has integrated multiple classes of biomedical data (genomic, proteomic, pathways, phenotypic, toxicogenomic, contextual and others) from various public databases as well as manually curated data from our group and collaborators (LynxKB). Lynx provides tools for gene list enrichment analysis using multiple functional annotations and network-based gene prioritization. Lynx provides access to the integrated database and the analytical tools via REST based Web Services (http://lynx.ci.uchicago.edu/webservices.html). This comprises data retrieval services for specific functional annotations, services to search across the complete LynxKB (powered by Lucene), and services to access the analytical tools built within the Lynx platform. © The Author(s) 2014. Published by Oxford University Press on behalf of Nucleic Acids Research.
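    Since the Lynx web services are exposed over REST-style HTTP, a query is just a URL. The sketch below reuses only the base URL quoted in the abstract; the `annotations` path and the `genes`/`format` parameters are hypothetical placeholders, not documented Lynx routes:

```python
from urllib.parse import urlencode
from urllib.request import Request

BASE = "http://lynx.ci.uchicago.edu/webservices"  # base URL from the abstract

def annotation_request(genes, fmt="json"):
    """Compose (but do not send) a GET request for a gene-annotation query."""
    query = urlencode({"genes": ",".join(genes), "format": fmt})
    return Request(f"{BASE}/annotations?{query}",
                   headers={"Accept": "application/json"})

req = annotation_request(["TP53", "BRCA1"])
assert req.full_url.endswith("/annotations?genes=TP53%2CBRCA1&format=json")
```

    Sending it would be a matter of `urllib.request.urlopen(req)`; the request is only composed here so the sketch runs offline.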

  1. DESIGN FOR CONNECTING SPATIAL DATA INFRASTRUCTURES WITH SENSOR WEB (SENSDI)

    Directory of Open Access Journals (Sweden)

    D. Bhattacharya

    2016-06-01

    Full Text Available Integrating Sensor Web With Spatial Data Infrastructures (SENSDI) aims to extend SDIs with sensor web enablement, converging geospatial and built infrastructure, and implement test cases with sensor data and SDI. It is about research to harness the sensed environment by utilizing domain-specific sensor data to create a generalized sensor web framework. The challenges are semantic enablement for Spatial Data Infrastructures, and connecting the interfaces of SDI with the interfaces of Sensor Web. The proposed research plan is to identify sensor data sources, set up an open source SDI, match the APIs and functions between Sensor Web and SDI, and conduct case studies such as hazard applications, urban applications etc. We take up co-operative development of SDI best practices to enable a new realm of a location-enabled and semantically enriched World Wide Web - the "Geospatial Web" or "Geosemantic Web" - by setting up a one-to-one correspondence between WMS, WFS, WCS, Metadata and the 'Sensor Observation Service' (SOS); 'Sensor Planning Service' (SPS); 'Sensor Alert Service' (SAS); and a service that facilitates asynchronous message interchange between users and services, and between two OGC-SWE services, called the 'Web Notification Service' (WNS). Hence, in conclusion, it is of importance to geospatial studies to integrate SDI with Sensor Web. The integration can be done through merging the common OGC interfaces of SDI and Sensor Web. Multi-usability studies to validate the integration have to be undertaken as future research.

  2. Use of the Web by a Distributed Research group Performing Distributed Computing

    Science.gov (United States)

    Burke, David A.; Peterkin, Robert E.

    2001-06-01

    A distributed research group that uses distributed computers faces a spectrum of challenges--some of which can be met by using various electronic means of communication. The particular challenge of our group involves three physically separated research entities. We have had to link two collaborating groups at AFRL and NRL together for software development, and the same AFRL group with a LANL group for software applications. We are developing and using a pair of general-purpose, portable, parallel, unsteady, plasma physics simulation codes. The first collaboration is centered around a formal weekly video teleconference on relatively inexpensive equipment that we have set up in convenient locations in our respective laboratories. The formal virtual meetings are augmented with informal virtual meetings as the need arises. Both collaborations share research data in a variety of forms on a secure URL that is set up behind the firewall at the AFRL. Of course, a computer-generated animation is a particularly efficient way of displaying results from time-dependent numerical simulations, so we generally like to post such animations (along with proper documentation) on our web page. In this presentation, we will discuss some of our accomplishments and disappointments.

  3. Beginning ASP.NET Web Pages with WebMatrix

    CERN Document Server

    Brind, Mike

    2011-01-01

    Learn to build dynamic web sites with Microsoft WebMatrix. Microsoft WebMatrix is designed to make developing dynamic ASP.NET web sites much easier. This complete Wrox guide shows you what it is, how it works, and how to get the best from it right away. It covers all the basic foundations and also introduces HTML, CSS, and Ajax using jQuery, giving beginning programmers a firm foundation for building dynamic web sites. Examines how WebMatrix is expected to become the new recommended entry-level tool for developing web sites using ASP.NET. Arms beginning programmers, students, and educators with al

  4. The Aalborg Survey / Part 1 - Web Based Survey

    DEFF Research Database (Denmark)

    Harder, Henrik; Christensen, Cecilie Breinholm

    Background and purpose The Aalborg Survey consists of four independent parts: a web, GPS and an interview-based survey and a literature study, which together form a consistent investigation and research into use of urban space, and specifically into young people’s use of urban space: what young......) and the research focus within the cluster of Mobility and Tracking Technologies (MoTT), AAU. Summary / Part 1 Web Based Survey The 1st part of the research project Diverse Urban Spaces (DUS) has been carried out during the period from December 1st 2007 to February 1st 2008 as a Web Based Survey of the 27.040 gross...... [statistikbanken.dk, a] young people aged 14-23 living in Aalborg Municipality in 2008. The web-based questionnaire has been distributed among the group of young people studying at upper secondary schools in Aalborg, i.e. 7.680 young people [statistikbanken.dk, b]. The resulting data from those respondents who...

  5. BrainBrowser: distributed, web-based neurological data visualization

    Directory of Open Access Journals (Sweden)

    Tarek eSherif

    2015-01-01

    Full Text Available Recent years have seen massive, distributed datasets become the norm in neuroimaging research, and the methodologies used to analyze them have, in response, become more collaborative and exploratory. Tools and infrastructure are continuously being developed and deployed to facilitate research in this context: grid computation platforms to process the data, distributed data stores to house and share them, high-speed networks to move them around and collaborative, often web-based, platforms to provide access to and sometimes manage the entire system. BrainBrowser is a lightweight, high-performance JavaScript visualization library built to provide easy-to-use, powerful, on-demand visualization of remote datasets in this new research environment. BrainBrowser leverages modern Web technologies, such as WebGL, HTML5 and Web Workers, to visualize 3D surface and volumetric neuroimaging data in any modern web browser without requiring any browser plugins. It is thus trivial to integrate BrainBrowser into any web-based platform. BrainBrowser is simple enough to produce a basic web-based visualization in a few lines of code, while at the same time being robust enough to create full-featured visualization applications. BrainBrowser can dynamically load the data required for a given visualization, so no network bandwidth needs to be wasted on data that will not be used. BrainBrowser's integration into the standardized web platform also allows users to consider using 3D data visualization in novel ways, such as for data distribution, data sharing and dynamic online publications. BrainBrowser is already being used in two major online platforms, CBRAIN and LORIS, and has been used to make the 1TB MACACC dataset openly accessible.

  6. Non-visual Web Browsing: Beyond Web Accessibility.

    Science.gov (United States)

    Ramakrishnan, I V; Ashok, Vikas; Billah, Syed Masum

    2017-07-01

    People with vision impairments typically use screen readers to browse the Web. To facilitate non-visual browsing, web sites must be made accessible to screen readers, i.e., all the visible elements in the web site must be readable by the screen reader. But even if web sites are accessible, screen-reader users may not find them easy to use and/or easy to navigate. For example, they may not be able to locate the desired information without having to listen to a lot of irrelevant contents. These issues go beyond web accessibility and directly impact web usability. Several techniques have been reported in the accessibility literature for making the Web usable for screen reading. This paper is a review of these techniques. Interestingly, the review reveals that understanding the semantics of the web content is the overarching theme that drives these techniques for improving web usability.

  7. Web 2.0 collaboration tool to support student research in hydrology - an opinion

    Science.gov (United States)

    Pathirana, A.; Gersonius, B.; Radhakrishnan, M.

    2012-08-01

    A growing body of evidence suggests that it is unwise to make the a priori assumption that university students are ready and eager to embrace modern online technologies employed to enhance the educational experience. We present our opinion on employing Wiki, a popular Web 2.0 technology, in small student groups, based on a case study of using it customized to work as a personal learning environment (PLE)1 (Fiedler and Väljataga, 2011) for supporting thesis research in hydrology. Since its inception in 2006, the system presented has proven to facilitate knowledge construction and peer communication within and across groups of students of different academic years and to stimulate learning. Being an open-ended and egalitarian system, it was a minimal burden to maintain, as all students became content authors and shared responsibility. A number of unintended uses of the system were also observed, like using it as a backup medium and mobile storage. We attribute the success and sustainability of the proposed Web 2.0-based approach to the fact that the efforts were not limited to the application of the technology, but comprised the creation of a supporting environment with educational activities organized around it. We propose that Wiki-based PLEs are much more suitable than traditional learning management systems for supporting non-classroom education activities like thesis research in hydrology. 1 Here we use the term PLE to refer to the conceptual framework to make the process of knowledge construction a personalized experience - rather than to refer to the technology (in this case Wiki) used to attempt implementing such a system.

  8. A web-based tool to engage stakeholders in informing research planning for future decisions on emerging materials

    Energy Technology Data Exchange (ETDEWEB)

    Powers, Christina M., E-mail: powers.christina@epa.gov [National Center for Environmental Assessment, Office of Research and Development, U.S. Environmental Protection Agency, Research Triangle Park, NC 27711 (United States); Grieger, Khara D., E-mail: kgrieger@rti.org [RTI International, 3040 Cornwallis Rd., Research Triangle Park, NC 27709 (United States); Hendren, Christine Ogilvie, E-mail: chendren@duke.edu [Center for the Environmental Implications of NanoTechnology, Duke University, Durham, NC 27708 (United States); Meacham, Connie A., E-mail: meacham.connie@epa.gov [National Center for Environmental Assessment, Office of Research and Development, U.S. Environmental Protection Agency, Research Triangle Park, NC 27711 (United States); Gurevich, Gerald, E-mail: gurevich.gerald@epa.gov [National Center for Environmental Assessment, Office of Research and Development, U.S. Environmental Protection Agency, Research Triangle Park, NC 27711 (United States); Lassiter, Meredith Gooding, E-mail: lassiter.meredith@epa.gov [National Center for Environmental Assessment, Office of Research and Development, U.S. Environmental Protection Agency, Research Triangle Park, NC 27711 (United States); Money, Eric S., E-mail: emoney@rti.org [RTI International, 3040 Cornwallis Rd., Research Triangle Park, NC 27709 (United States); Lloyd, Jennifer M., E-mail: jml@rti.org [RTI International, 3040 Cornwallis Rd., Research Triangle Park, NC 27709 (United States); Beaulieu, Stephen M., E-mail: steveb@rti.org [RTI International, 3040 Cornwallis Rd., Research Triangle Park, NC 27709 (United States)

    2014-02-01

    Prioritizing and assessing risks associated with chemicals, industrial materials, or emerging technologies is a complex problem that benefits from the involvement of multiple stakeholder groups. For example, in the case of engineered nanomaterials (ENMs), scientific uncertainties exist that hamper environmental, health, and safety (EHS) assessments. Therefore, alternative approaches to standard EHS assessment methods have gained increased attention. The objective of this paper is to describe the application of a web-based, interactive decision support tool developed by the U.S. Environmental Protection Agency (U.S. EPA) in a pilot study on ENMs. The piloted tool implements U.S. EPA's comprehensive environmental assessment (CEA) approach to prioritize research gaps. When pursued, such research priorities can result in data that subsequently improve the scientific robustness of risk assessments and inform future risk management decisions. Pilot results suggest that the tool was useful in facilitating multi-stakeholder prioritization of research gaps. Results also provide potential improvements for subsequent applications. The outcomes of future CEAWeb applications with larger stakeholder groups may inform the development of funding opportunities for emerging materials across the scientific community (e.g., National Science Foundation Science to Achieve Results [STAR] grants, National Institutes of Health Requests for Proposals).
    Highlights:
    • A web-based, interactive decision support tool was piloted for emerging materials.
    • The tool (CEAWeb) was based on an established approach to prioritize research gaps.
    • CEAWeb facilitates multi-stakeholder prioritization of research gaps.
    • We provide recommendations for future versions and applications of CEAWeb.
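    The abstract does not publish CEAWeb's scoring formula, but multi-stakeholder prioritization of research gaps can be illustrated with any rank-aggregation rule; a Borda count is a common, easily audited choice. The ballots below are invented for illustration only:

```python
from collections import defaultdict

def borda_aggregate(rankings):
    """Aggregate stakeholder rankings of research gaps with a Borda count.

    Each ranking lists gaps from highest to lowest priority; a gap earns
    (n - position - 1) points per ballot, and the totals decide the order.
    """
    scores = defaultdict(int)
    for ranking in rankings:
        n = len(ranking)
        for position, gap in enumerate(ranking):
            scores[gap] += n - position - 1
    # Sort by descending score, breaking ties alphabetically
    return sorted(scores, key=lambda gap: (-scores[gap], gap))

# Hypothetical ballots from three stakeholder groups
ballots = [
    ["ecological effects", "exposure routes", "manufacturing releases"],
    ["exposure routes", "ecological effects", "manufacturing releases"],
    ["exposure routes", "manufacturing releases", "ecological effects"],
]
assert borda_aggregate(ballots)[0] == "exposure routes"
```

    The appeal of a positional rule here is transparency: every stakeholder group can verify exactly how its ballot moved the joint priority list.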

  9. Overview of the TREC 2014 Federated Web Search Track

    NARCIS (Netherlands)

    Demeester, Thomas; Trieschnigg, Rudolf Berend; Nguyen, Dong-Phuong; Zhou, Ke; Hiemstra, Djoerd

    2014-01-01

    The TREC Federated Web Search track facilitates research in topics related to federated web search, by providing a large realistic data collection sampled from a multitude of online search engines. The FedWeb 2013 challenges of Resource Selection and Results Merging are again included in

  10. Checklist of accessibility in Web informational environments

    Directory of Open Access Journals (Sweden)

    Christiane Gomes dos Santos

    2017-01-01

    Full Text Available This research deals with the process of search, navigation and retrieval of information by people with blindness in the web environment, drawing on the areas of information retrieval and information architecture to understand the strategies these people use to access information on the web. It aims to propose an accessibility verification instrument, a checklist, to be used to analyze the behavior of people with blindness in search, navigation and retrieval actions on sites and pages. It is exploratory and descriptive research of a qualitative nature; the research methodology is a case study, establishing a specific study with the simulation of search, navigation and information retrieval using the speech synthesis system NonVisual Desktop Access in an assistive technologies laboratory, to substantiate the construction of the checklist for accessibility verification. The reliability of the performed research is considered, as is its importance for the evaluation of accessibility in the web environment to improve access to information for people with limited reading, so that the checklist can be used in website and page accessibility analysis.

  11. Teaching Web Evaluation: A Cognitive Development Approach

    Directory of Open Access Journals (Sweden)

    Candice Benjes-Small

    2013-08-01

    Full Text Available Web evaluation has been a standard information literacy offering for years and has always been a challenging topic for instruction librarians. Over time, the authors had tried a myriad of strategies to teach freshmen how to assess the credibility of Web sites but felt the efforts were insufficient. By familiarizing themselves with the cognitive development research, they were able to effectively revamp Web evaluation instruction to improve student learning. This article discusses the problems of traditional methods, such as checklists; summarizes the cognitive development research, particularly in regard to its relationship to the ACRL Information Literacy Standards; and details the instructional lesson plan developed by the authors that incorporates cognitive development theories.

  12. Web-based interventions for menopause: A systematic integrated literature review.

    Science.gov (United States)

    Im, Eun-Ok; Lee, Yaelim; Chee, Eunice; Chee, Wonshik

    2017-01-01

    Advances in computer and Internet technologies have allowed health care providers to develop, use, and test various types of Web-based interventions for their practice and research. Indeed, an increasing number of Web-based interventions have recently been developed and tested in health care fields. Despite the great potential for Web-based interventions to improve practice and research, little is known about the current status of Web-based interventions, especially those related to menopause. To identify the current status of Web-based interventions used in the field of menopause, a literature review was conducted using multiple databases, with the keywords "online," "Internet," "Web," "intervention," and "menopause." Using these keywords, a total of 18 eligible articles were analyzed to identify the current status of Web-based interventions for menopause. Six themes reflecting the current status of Web-based interventions for menopause were identified: (a) there existed few Web-based intervention studies on menopause; (b) Web-based decision support systems were mainly used; (c) there was a lack of detail on the interventions; (d) there was a lack of guidance on the use of Web-based interventions; (e) counselling was frequently combined with Web-based interventions; and (f) the pros and cons were similar to those of Web-based methods in general. Based on these findings, directions for future Web-based interventions for menopause are provided. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  13. E.B. White and Charlotte's Web.

    Science.gov (United States)

    Elledge, Scott

    2001-01-01

    Discusses the life and work of E.B. White, describing his research on spiders, examining his development of the story, "Charlotte's Web," and explaining how "Charlotte's Web" is a fabric of memories. Notes how this book faces a variety of truths about the human condition and how it celebrates a child's generous view of and love…

  14. DISTANCE LEARNING ONLINE WEB 3.0

    Directory of Open Access Journals (Sweden)

    S. M. Petryk

    2015-05-01

    Full Text Available This article analyzes the existing methods of identifying information in the semantic web, outlines the main problems of their implementation and researches the use of the Semantic Web as part of distance learning. An alternative variant of identification and relationship construction for information and acquired knowledge is proposed, based on the developed method "spectrum of knowledge".

  15. SME 2.0: Roadmap towards Web 2.0-Based Open Innovation in SME-Networks - A Case Study Based Research Framework

    Science.gov (United States)

    Lindermann, Nadine; Valcárcel, Sylvia; Schaarschmidt, Mario; von Kortzfleisch, Harald

    Small- and medium-sized enterprises (SMEs) are of high social and economic importance since they represent 99% of European enterprises. With regard to their restricted resources, SMEs are facing a limited capacity for innovation to compete with new challenges in a complex and dynamic competitive environment. Given this context, SMEs need to increasingly cooperate to generate innovations on an extended resource base. Our research project focuses on the aspect of open innovation in SME-networks enabled by Web 2.0 applications and referring to innovative solutions of non-competitive daily life problems. Examples are industrial safety, work-life balance issues or pollution control. The project raises the question whether the use of Web 2.0 applications can foster the exchange of creativity and innovative ideas within a network of SMEs and hence catalyze new forms of innovation processes among its participants. Using Web 2.0 applications within SMEs consequently implies breaking down innovation processes to employees’ level and thus systematically opening up a heterogeneous and broader knowledge base to idea generation. In this paper we address first steps on a roadmap towards Web 2.0-based open innovation processes within SME-networks. It presents a general framework for interaction activities leading to open innovation and recommends a regional marketplace as a viable, trust-building driver for further collaborative activities. These findings are based on field research within a specific SME-network in Rhineland-Palatinate, Germany, the “WirtschaftsForum Neuwied e.V.”, which consists of roughly 100 heterogeneous SMEs employing about 8,000 workers.

  16. Beyond Trust: Web Site Design Preferences Across Cultures

    OpenAIRE

    Dianne Cyr; Carole Bonanni; John Bowes; Joe Ilsever

    2005-01-01

    The growth of Internet shopping motivates a better understanding of how e-loyalty is built online between businesses and consumers. In this study, Web site design and culture are advanced as important to Web site trust, Web site satisfaction, and e-loyalty in online business relationships. Based on data collected in Canada, the U.S., Germany, and Japan, the research considers (1) examining within culture preferences for design elements of a local vs. a foreign Web site and subsequent particip...

  17. Web 2.0 and Marketing Education: Explanations and Experiential Applications

    Science.gov (United States)

    Granitz, Neil; Koernig, Stephen K.

    2011-01-01

    Although both experiential learning and Web 2.0 tools focus on creativity, sharing, and collaboration, sparse research has been published integrating a Web 2.0 paradigm with experiential learning in marketing. In this article, Web 2.0 concepts are explained. Web 2.0 is then positioned as a philosophy that can advance experiential learning through…

  18. Web Caching

    Indian Academy of Sciences (India)

    leveraged through Web caching technology. Specifically, Web caching becomes an ... Web routing can improve the overall performance of the Internet. Web caching is similar to memory system caching - a Web cache stores Web resources in ...
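    The memory-system analogy in the abstract is direct: a Web cache is a bounded store keyed by URL with an eviction policy. A minimal least-recently-used sketch (a generic illustration, not any particular proxy's implementation):

```python
from collections import OrderedDict

class WebCache:
    """Bounded URL -> response store with least-recently-used eviction."""

    def __init__(self, capacity: int):
        self.capacity = capacity
        self._store: "OrderedDict[str, bytes]" = OrderedDict()

    def get(self, url: str):
        if url not in self._store:
            return None                      # miss: caller fetches from origin
        self._store.move_to_end(url)         # hit: mark most recently used
        return self._store[url]

    def put(self, url: str, body: bytes) -> None:
        self._store[url] = body
        self._store.move_to_end(url)
        if len(self._store) > self.capacity:
            self._store.popitem(last=False)  # evict least recently used

cache = WebCache(capacity=2)
cache.put("/a.html", b"A")
cache.put("/b.html", b"B")
cache.get("/a.html")        # touch /a.html so /b.html is now oldest
cache.put("/c.html", b"C")  # exceeds capacity: /b.html is evicted
assert cache.get("/b.html") is None and cache.get("/a.html") == b"A"
```

    Production caches add what this sketch omits, notably expiry and validation driven by HTTP cache-control headers, but the hit/miss/evict cycle is the same.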

  19. A Neuroimaging Web Services Interface as a Cyber Physical System for Medical Imaging and Data Management in Brain Research: Design Study.

    Science.gov (United States)

    Lizarraga, Gabriel; Li, Chunfei; Cabrerizo, Mercedes; Barker, Warren; Loewenstein, David A; Duara, Ranjan; Adjouadi, Malek

    2018-04-26

    Structural and functional brain images are essential imaging modalities for medical experts to study brain anatomy. These images are typically visually inspected by experts. To analyze images without any bias, they must first be converted to numeric values. Many software packages are available to process the images, but they are complex and difficult to use. The software packages are also hardware intensive. The results obtained after processing vary depending on the native operating system used and its associated software libraries; data processed in one system cannot typically be combined with data on another system. The aim of this study was to fulfill the neuroimaging community’s need for a common platform to store, process, explore, and visualize their neuroimaging data and results using Neuroimaging Web Services Interface: a series of processing pipelines designed as a cyber physical system for neuroimaging and clinical data in brain research. Neuroimaging Web Services Interface accepts magnetic resonance imaging, positron emission tomography, diffusion tensor imaging, and functional magnetic resonance imaging. These images are processed using existing and custom software packages. The output is then stored as image files, tabulated files, and MySQL tables. The system, made up of a series of interconnected servers, is password-protected and is securely accessible through a Web interface and allows (1) visualization of results and (2) downloading of tabulated data. All results were obtained using our processing servers in order to maintain data validity and consistency. The design is responsive and scalable. The processing pipeline started from a FreeSurfer reconstruction of structural magnetic resonance imaging images. The FreeSurfer and regional standardized uptake value ratio calculations were validated using Alzheimer’s Disease Neuroimaging Initiative input images, and the results were posted at the Laboratory of Neuro Imaging data archive. Notable
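    Of the quantities named above, the regional standardized uptake value ratio (SUVR) is the simplest to state: a region's mean tracer uptake divided by that of a reference region (often cerebellar gray matter). The sketch below is a generic illustration with invented numbers, not the pipeline's actual code:

```python
def regional_suvr(region_means: dict, reference_mean: float) -> dict:
    """Normalize per-region mean uptake by the reference region's mean."""
    if reference_mean <= 0:
        raise ValueError("reference uptake must be positive")
    return {region: mean / reference_mean
            for region, mean in region_means.items()}

# Hypothetical mean uptake values in arbitrary units
suvrs = regional_suvr({"precuneus": 1.8, "frontal": 1.5}, reference_mean=1.2)
assert abs(suvrs["precuneus"] - 1.5) < 1e-9
```

    Dividing by a reference region cancels scanner- and dose-dependent scaling, which is what makes SUVRs comparable across sessions and subjects.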

  20. Digital plagiarism--the Web giveth and the Web shall taketh.

    Science.gov (United States)

    Barrie, J M; Presti, D E

    2000-01-01

    Publishing students' and researchers' papers on the World Wide Web (WWW) facilitates the sharing of information within and between academic communities. However, the ease of copying and transporting digital information leaves these authors' ideas open to plagiarism. Tools such as the Plagiarism.org database, which compares submissions to reports and papers available on the Internet, could discover instances of plagiarism, revolutionize the peer review process, and raise the quality of published research everywhere.
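    The internals of the Plagiarism.org comparison are not described here, so as a generic illustration of how submission-versus-corpus matching can work, the sketch below scores overlap between word shingles (overlapping k-word windows) with the Jaccard index:

```python
def word_shingles(text: str, k: int = 3) -> set:
    """Return the set of overlapping k-word windows in the text."""
    words = text.lower().split()
    return {tuple(words[i:i + k]) for i in range(len(words) - k + 1)}

def jaccard_similarity(a: str, b: str, k: int = 3) -> float:
    """Shingle-set overlap between two texts, from 0.0 to 1.0."""
    sa, sb = word_shingles(a, k), word_shingles(b, k)
    union = sa | sb
    return len(sa & sb) / len(union) if union else 0.0

submission = "the ease of copying digital information leaves ideas open to plagiarism"
source     = "the ease of copying digital information leaves ideas open to plagiarism"
unrelated  = "peer review raises the quality of published research everywhere"

assert jaccard_similarity(submission, source) == 1.0      # verbatim copy
assert jaccard_similarity(submission, unrelated) == 0.0   # no shared shingles
```

    Real detection systems scale this idea with fingerprinting and indexing so a submission can be compared against millions of documents, but the per-pair score is the same kind of shingle overlap.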

  1. A Framework for Dynamic Web Services Composition

    NARCIS (Netherlands)

    Lécué, F.; Goncalves da Silva, Eduardo; Ferreira Pires, Luis

    2007-01-01

    Dynamic composition of web services is a promising approach and at the same time a challenging research area for the dissemination of service-oriented applications. It is widely recognised that service semantics is a key element for the dynamic composition of Web services, since it allows the

  2. Overview of the TREC 2013 Federated Web Search Track

    NARCIS (Netherlands)

    Demeester, Thomas; Trieschnigg, Rudolf Berend; Nguyen, Dong-Phuong; Hiemstra, Djoerd

    2014-01-01

    The TREC Federated Web Search track is intended to promote research related to federated search in a realistic web setting, and hereto provides a large data collection gathered from a series of online search engines. This overview paper discusses the results of the first edition of the track, FedWeb

  3. Recommendations for Benchmarking Web Site Usage among Academic Libraries.

    Science.gov (United States)

    Hightower, Christy; Sih, Julie; Tilghman, Adam

    1998-01-01

    To help library directors and Web developers create a benchmarking program to compare statistics of academic Web sites, the authors analyzed the Web server log files of 14 university science and engineering libraries. Recommends a centralized voluntary reporting structure coordinated by the Association of Research Libraries (ARL) and a method for…
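    Any such benchmarking program starts from comparable per-page hit counts extracted from each site's server logs. A minimal tally over NCSA Common Log Format lines (the log lines below are invented samples, not the study's data):

```python
import re
from collections import Counter

# NCSA Common Log Format: host ident user [date] "method path protocol" status size
CLF = re.compile(r'(\S+) \S+ \S+ \[([^\]]+)\] "(\S+) (\S+) [^"]*" (\d{3}) (\S+)')

def count_page_hits(lines):
    """Count successful (2xx) requests per path; skip malformed lines."""
    hits = Counter()
    for line in lines:
        m = CLF.match(line)
        if m and m.group(5).startswith("2"):
            hits[m.group(4)] += 1
    return hits

logs = [
    '10.0.0.1 - - [10/Oct/1998:13:55:36 -0700] "GET /index.html HTTP/1.0" 200 2326',
    '10.0.0.2 - - [10/Oct/1998:13:56:01 -0700] "GET /index.html HTTP/1.0" 200 2326',
    '10.0.0.3 - - [10/Oct/1998:13:57:12 -0700] "GET /missing.html HTTP/1.0" 404 209',
]
assert count_page_hits(logs) == Counter({"/index.html": 2})
```

    Filtering to 2xx responses is one of the normalization choices a centralized reporting scheme would have to standardize, since sites that count redirects or errors inflate their totals.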

  4. A design method for an intuitive web site

    Energy Technology Data Exchange (ETDEWEB)

    Quinniey, M.L.; Diegert, K.V.; Baca, B.G.; Forsythe, J.C.; Grose, E.

    1999-11-03

    The paper describes a methodology for designing a web site for human factors engineers that is applicable to designing a web site for any group of people. Many web pages on the World Wide Web are not organized in a format that allows a user to find information efficiently. Often the information and hypertext links on web pages are not organized into intuitive groups. Intuition implies that a person is able to use their knowledge of a paradigm to solve a problem; intuitive groups are categories that allow web page users to find information by using their intuition or mental models of categories. In order to improve human factors engineers' efficiency in finding information on the World Wide Web, research was performed to develop a web site that serves as a tool for finding information effectively. The paper describes a methodology for designing a web site for a group of people who perform similar tasks in an organization.

  5. Utilizing mixed methods research in analyzing Iranian researchers’ information search behaviour on the Web and presenting the current pattern

    Directory of Open Access Journals (Sweden)

    Maryam Asadi

    2015-12-01

    Full Text Available Using a mixed methods research design, the current study analyzed Iranian researchers’ information searching behaviour on the Web; then, based on the extracted concepts, a model of their information searching behaviour was derived. Forty-four participants, including academic staff from universities and research centers, were recruited for this study by purposive sampling. Data were gathered from a questionnaire of ten questions and semi-structured interviews. Each participant’s memos were analyzed using grounded theory methods adapted from Strauss & Corbin (1998). Results showed that the subjects’ main objectives in using the Web were doing research, writing papers, studying, doing assignments, downloading files and acquiring public information. The most important ways of learning how to search and retrieve information were trial and error and getting help from friends. Information resources are identified through searching in information resources (e.g. search engines, references in papers, and online databases…), communication facilities and tools (e.g. contact with colleagues, seminars and workshops, social networking…), and information services (e.g. RSS, alerting, and SDI). Findings also indicated that searching with search engines, reviewing references, searching in online databases, contacting colleagues and studying the latest issues of electronic journals were the most important search approaches. The most important strategies were using search engines and scientific tools such as Google Scholar. In addition, the simple (quick) search method was the most common among subjects. Topic, keywords and paper titles were the elements most used for retrieving information. Analysis of the interviews showed that there were nine stages in researchers’ information searching behaviour: topic selection, initiating search, formulating the search query, information retrieval, access to information

  6. Inside the Web: A Look at Digital Libraries and the Invisible/Deep Web

    Science.gov (United States)

    Su, Mila C.

    2009-01-01

    The evolution of the Internet and the World Wide Web continually exceeds expectations with the "swift pace" of technological innovations. Information is added, and just as quickly becomes outdated. Researchers have found that digital materials can provide access to primary source materials and connect the researcher to institutions…

  7. Omicseq: a web-based search engine for exploring omics datasets.

    Science.gov (United States)

    Sun, Xiaobo; Pittard, William S; Xu, Tianlei; Chen, Li; Zwick, Michael E; Jiang, Xiaoqian; Wang, Fusheng; Qin, Zhaohui S

    2017-07-03

    The development and application of high-throughput genomics technologies has resulted in massive quantities of diverse omics data that continue to accumulate rapidly. These rich datasets offer unprecedented and exciting opportunities to address long standing questions in biomedical research. However, our ability to explore and query the content of diverse omics data is very limited. Existing dataset search tools rely almost exclusively on the metadata. A text-based query for gene name(s) does not work well on datasets wherein the vast majority of their content is numeric. To overcome this barrier, we have developed Omicseq, a novel web-based platform that facilitates the easy interrogation of omics datasets holistically to improve 'findability' of relevant data. The core component of Omicseq is trackRank, a novel algorithm for ranking omics datasets that fully uses the numerical content of the dataset to determine relevance to the query entity. The Omicseq system is supported by a scalable and elastic, NoSQL database that hosts a large collection of processed omics datasets. In the front end, a simple, web-based interface allows users to enter queries and instantly receive search results as a list of ranked datasets deemed to be the most relevant. Omicseq is freely available at http://www.omicseq.org. © The Author(s) 2017. Published by Oxford University Press on behalf of Nucleic Acids Research.
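
The trackRank algorithm itself is described in the Omicseq paper; as a rough illustration of the general idea of ranking datasets by their numerical content rather than their metadata, one could score each dataset by how strongly the query gene's signal stands out within it. The scoring function and dataset names below are a hypothetical stand-in, not the actual trackRank:

```python
def standout_score(dataset, query_gene):
    """Score a dataset by how far the query gene's value lies above the
    dataset mean, in standard deviations (a crude content-based relevance)."""
    values = list(dataset.values())
    mean = sum(values) / len(values)
    var = sum((v - mean) ** 2 for v in values) / len(values)
    sd = var ** 0.5 or 1.0  # guard against all-equal values
    return (dataset.get(query_gene, mean) - mean) / sd

# Invented example: two processed omics tracks, gene -> signal value.
datasets = {
    "chipseq_A": {"TP53": 9.5, "EGFR": 1.2, "MYC": 1.0, "BRCA1": 0.8},
    "rnaseq_B":  {"TP53": 1.1, "EGFR": 1.3, "MYC": 1.2, "BRCA1": 1.0},
}
# Datasets where TP53 stands out rank ahead of those where it does not.
ranked = sorted(datasets, key=lambda name: standout_score(datasets[name], "TP53"),
                reverse=True)
```

The point of such content-based scoring is exactly the abstract's observation: a text query for a gene name cannot distinguish these two datasets, while their numeric content can.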

  8. Sensor Webs to Constellations

    Science.gov (United States)

    Cole, M.

    2017-12-01

    Advanced technology plays a key role in enabling future Earth-observing missions needed for global monitoring and climate research. Rapid progress over the past decade, anticipated to continue in the coming decades, has diminished the size of some satellites while increasing the amount of data and the required pace of integration and analysis. Sensor web developments provide correlations to constellations of smallsats. Reviewing current advances in sensor webs and requirements for constellations will improve planning, operations, and data management for future architectures of multiple satellites with a common mission goal.

  9. Web Intelligence and Artificial Intelligence in Education

    Science.gov (United States)

    Devedzic, Vladan

    2004-01-01

    This paper surveys important aspects of Web Intelligence (WI) in the context of Artificial Intelligence in Education (AIED) research. WI explores the fundamental roles as well as practical impacts of Artificial Intelligence (AI) and advanced Information Technology (IT) on the next generation of Web-related products, systems, services, and…

  10. Teaching Students about Plagiarism Using a Web-Based Module

    Science.gov (United States)

    Stetter, Maria Earman

    2013-01-01

    The following research delivered a web-based module about plagiarism and paraphrasing to avoid plagiarism in both a blended method, with live instruction paired with web presentation for 105 students, and a separate web-only method for 22 other students. Participants were graduates and undergraduates preparing to become teachers, the majority of…

  11. Architecting Web Sites for High Performance

    Directory of Open Access Journals (Sweden)

    Arun Iyengar

    2002-01-01

    Full Text Available Web site applications are some of the most challenging high-performance applications currently being developed and deployed. The challenges emerge from the specific combination of high variability in workload characteristics and of high performance demands regarding the service level, scalability, availability, and costs. In recent years, a large body of research has addressed the Web site application domain, and a host of innovative software and hardware solutions have been proposed and deployed. This paper is an overview of recent solutions concerning the architectures and the software infrastructures used in building Web site applications. The presentation emphasizes three of the main functions in a complex Web site: the processing of client requests, the control of service levels, and the interaction with remote network caches.

  12. WebSelF: A Web Scraping Framework

    DEFF Research Database (Denmark)

    Thomsen, Jakob; Ernst, Erik; Brabrand, Claus

    2012-01-01

    We present WebSelF, a framework for web scraping which models the process of web scraping and decomposes it into four conceptually independent, reusable, and composable constituents. We have validated our framework through a full parameterized implementation that is flexible enough to capture previous work on web scraping. We have experimentally evaluated our framework and implementation in an experiment that evaluated several qualitatively different web scraping constituents (including previous work and combinations hereof) on about 11,000 HTML pages on daily versions of 17 web sites over a period of more than one year. Our framework solves three concrete problems with current web scraping, and our experimental results indicate that composition of previous and our new techniques achieves a higher degree of accuracy, precision and specificity than existing techniques alone.

  13. An Introduction to Social Semantic Web Mining & Big Data Analytics for Political Attitudes and Mentalities Research

    Directory of Open Access Journals (Sweden)

    Markus Schatten

    2015-01-01

    Full Text Available The social web has become a major repository of social and behavioral data that is of exceptional interest to the social science and humanities research community. Computer science has only recently developed the various technologies and techniques that allow for harvesting, organizing and analyzing such data and that provide knowledge and insights into the structure and behavior of people online. Some of these techniques include social web mining, conceptual and social network analysis and modeling, tag clouds, topic maps, folksonomies, complex network visualizations, modeling of processes on networks, agent-based models of social network emergence, speech recognition, computer vision, natural language processing, opinion mining and sentiment analysis, recommender systems, user profiling and semantic wikis. All of these techniques are briefly introduced, example studies are given, and ideas as well as possible directions in the field of political attitudes and mentalities research are outlined. In the end, challenges for future studies are discussed.

  14. World Wide Web Usage Mining Systems and Technologies

    Directory of Open Access Journals (Sweden)

    Wen-Chen Hu

    2003-08-01

    Full Text Available Web usage mining is used to discover interesting user navigation patterns and can be applied to many real-world problems, such as improving Web sites/pages, making additional topic or product recommendations, user/customer behavior studies, etc. This article provides a survey and analysis of current Web usage mining systems and technologies. A Web usage mining system performs five major tasks: (i) data gathering, (ii) data preparation, (iii) navigation pattern discovery, (iv) pattern analysis and visualization, and (v) pattern applications. Each task is explained in detail and its related technologies are introduced. A list of major research systems and projects concerning Web usage mining is also presented, and a summary of Web usage mining is given in the last section.
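
The data-preparation task typically reconstructs user sessions from raw server log entries, e.g. by grouping requests from the same client and starting a new session after a gap of inactivity. A minimal sketch; the 30-minute timeout is a common convention rather than a fixed rule, and the log tuples are invented:

```python
SESSION_TIMEOUT = 30 * 60  # seconds of inactivity that end a session

def sessionize(log_entries):
    """Group (client_ip, timestamp, url) entries into per-client sessions.

    Entries are assumed sorted by timestamp; a new session starts whenever
    a client has been idle longer than SESSION_TIMEOUT.
    """
    sessions = {}   # client_ip -> list of sessions (each a list of urls)
    last_seen = {}  # client_ip -> timestamp of that client's previous request
    for ip, ts, url in log_entries:
        if ip not in sessions or ts - last_seen[ip] > SESSION_TIMEOUT:
            sessions.setdefault(ip, []).append([])  # open a fresh session
        sessions[ip][-1].append(url)
        last_seen[ip] = ts
    return sessions

log = [
    ("10.0.0.1", 0,    "/home"),
    ("10.0.0.1", 120,  "/products"),
    ("10.0.0.2", 130,  "/home"),
    ("10.0.0.1", 5000, "/home"),  # more than 30 min idle: new session
]
sessions = sessionize(log)
```

The resulting per-client page sequences are the input to the later navigation-pattern-discovery task.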

  15. RESEARCH OF THE ADSORPTION OF ORGANIC ACIDS IN SUGARCANE BAGASSE ASH

    Directory of Open Access Journals (Sweden)

    Julio Omar Prieto García

    2017-07-01

    Full Text Available In this research a study of the adsorption of acetic, benzoic, butanoic, fumaric, maleic and succinic acids on sugarcane bagasse ash is made. The adsorbent material is characterized through physical criteria such as apparent and pycnometric density, compressibility, porosity, surface area and tortuosity. The sample has been examined by X-ray diffraction, thermal analysis and qualitative IR analysis. The isotherm for the sorption process is determined, showing that the Freundlich model fits benzoic acid, the Langmuir and Toth models fit acetic acid, the Brunauer-Emmett-Teller (BET) model fits succinic acid, and the butanoic, maleic and fumaric acids are fitted by the Langmuir model. It is established that a first-order model fits the adsorption kinetics of the acetic and benzoic acids, while the remaining acids follow a second-order model; in the case of the butanoic, succinic and maleic acids the occurrence of chemisorption processes is possible.
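
The Langmuir model referenced in the abstract relates the equilibrium concentration C to the adsorbed amount q as q = q_max·K_L·C / (1 + K_L·C); its parameters are commonly estimated from the linearized form C/q = C/q_max + 1/(q_max·K_L). A sketch of that fit with synthetic data (not the paper's measurements):

```python
def fit_langmuir(C, q):
    """Estimate (q_max, K_L) by least squares on the linearized Langmuir
    form C/q = C/q_max + 1/(q_max * K_L)."""
    x = C
    y = [c / qi for c, qi in zip(C, q)]
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
             / sum((xi - mx) ** 2 for xi in x))
    intercept = my - slope * mx
    q_max = 1.0 / slope          # slope of C/q vs C is 1/q_max
    K_L = slope / intercept      # intercept is 1/(q_max * K_L)
    return q_max, K_L

# Synthetic equilibrium data generated from q_max = 2.0, K_L = 0.5.
C = [0.5, 1.0, 2.0, 4.0, 8.0]
q = [2.0 * 0.5 * c / (1 + 0.5 * c) for c in C]
q_max, K_L = fit_langmuir(C, q)  # recovers ~2.0 and ~0.5
```

With noise-free synthetic data the fit recovers the generating parameters exactly; with real isotherm measurements, goodness of fit across the Langmuir, Freundlich, Toth and BET forms is what discriminates between the models named above.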

  16. Metabonomics and its role in amino acid nutrition research.

    Science.gov (United States)

    He, Qinghua; Yin, Yulong; Zhao, Feng; Kong, Xiangfeng; Wu, Guoyao; Ren, Pingping

    2011-06-01

    Metabonomics combines metabolic profiling and multivariate data analysis to facilitate the high-throughput analysis of metabolites in biological samples. This technique has been developed as a powerful analytical tool and hence has found successful widespread applications in many areas of bioscience. Metabonomics has also become an important part of systems biology. As a sensitive and powerful method, metabonomics can quantitatively measure subtle dynamic perturbations of metabolic pathways in organisms due to changes in pathophysiological, nutritional, and epigenetic states. Therefore, metabonomics holds great promise to enhance our understanding of the complex relationship between amino acids and metabolism to define the roles for dietary amino acids in maintaining health and the development of disease. Such a technique also aids in the studies of functions, metabolic regulation, safety, and individualized requirements of amino acids. Here, we highlight the common workflow of metabonomics and some of the applications to amino acid nutrition research to illustrate the great potential of this exciting new frontier in bioscience.

  17. Digital plagiarism - The web giveth and the web shall taketh

    Science.gov (United States)

    Presti, David E

    2000-01-01

    Publishing students' and researchers' papers on the World Wide Web (WWW) facilitates the sharing of information within and between academic communities. However, the ease of copying and transporting digital information leaves these authors' ideas open to plagiarism. Using tools such as the Plagiarism.org database, which compares submissions to reports and papers available on the Internet, could discover instances of plagiarism, revolutionize the peer review process, and raise the quality of published research everywhere. PMID:11720925

  18. StarScan: a web server for scanning small RNA targets from degradome sequencing data.

    Science.gov (United States)

    Liu, Shun; Li, Jun-Hao; Wu, Jie; Zhou, Ke-Ren; Zhou, Hui; Yang, Jian-Hua; Qu, Liang-Hu

    2015-07-01

    Endogenous small non-coding RNAs (sRNAs), including microRNAs, PIWI-interacting RNAs and small interfering RNAs, play important gene regulatory roles in animals and plants by pairing to the protein-coding and non-coding transcripts. However, computationally assigning these various sRNAs to their regulatory target genes remains technically challenging. Recently, a high-throughput degradome sequencing method was applied to identify biologically relevant sRNA cleavage sites. In this study, an integrated web-based tool, StarScan (sRNA target Scan), was developed for scanning sRNA targets using degradome sequencing data from 20 species. Given a sRNA sequence from plants or animals, our web server performs an ultrafast and exhaustive search for potential sRNA-target interactions in annotated and unannotated genomic regions. The interactions between small RNAs and target transcripts were further evaluated using a novel tool, alignScore. A novel tool, degradomeBinomTest, was developed to quantify the abundance of degradome fragments located at the 9-11th nucleotide from the sRNA 5' end. This is the first web server for discovering potential sRNA-mediated RNA cleavage events in plants and animals, which affords mechanistic insights into the regulatory roles of sRNAs. The StarScan web server is available at http://mirlab.sysu.edu.cn/starscan/. © The Author(s) 2015. Published by Oxford University Press on behalf of Nucleic Acids Research.
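
The idea behind a tool like degradomeBinomTest, testing whether the number of degradome reads falling at the expected cleavage position (the 9th-11th nucleotide from the sRNA 5' end) exceeds what chance would allow, can be sketched with a one-sided binomial test. The counts and transcript length below are a toy illustration, not the server's actual implementation:

```python
from math import comb

def binom_sf(k, n, p):
    """One-sided binomial tail probability P(X >= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Of 50 degradome reads on a transcript, 20 fall in the 3-nt window expected
# for sRNA-guided cleavage. Under a uniform null, a read hits a given 3-nt
# window of a 300-nt transcript with probability p = 3/300.
p_value = binom_sf(20, 50, 3 / 300)  # far below any conventional threshold
```

A tiny p-value indicates the fragment pile-up at the expected position is unlikely to be background degradation, supporting a genuine sRNA-guided cleavage event.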

  19. Open chemistry: RESTful web APIs, JSON, NWChem and the modern web application.

    Science.gov (United States)

    Hanwell, Marcus D; de Jong, Wibe A; Harris, Christopher J

    2017-10-30

    An end-to-end platform for chemical science research has been developed that integrates data from computational and experimental approaches through a modern web-based interface. The platform offers an interactive visualization and analytics environment that functions well on mobile, laptop and desktop devices. It offers pragmatic solutions to ensure that large and complex data sets are more accessible. Existing desktop applications/frameworks were extended to integrate with high-performance computing resources, and command-line tools are offered to automate interaction, connecting distributed teams to this software platform on their own terms. The platform was developed openly, with all source code hosted on the GitHub platform and automated deployment possible using Ansible coupled with standard Ubuntu-based machine images deployed to cloud machines. The platform is designed to enable teams to reap the benefits of the connected web, going beyond what conventional search and analytics platforms offer in this area. It also has the goal of offering federated instances that can be customized to the sites/research performed. Data is stored using JSON, extending upon previous approaches that used XML, building structures that support computational chemistry calculations. These structures were developed to make it easy to process data across different languages and to send data to a JavaScript-based web client.

  20. Where does it break? or : Why the semantic web is not just "research as usual"

    NARCIS (Netherlands)

    Van Harmelen, Frank

    2006-01-01

    Work on the Semantic Web is all too often phrased as a technological challenge: how to improve the precision of search engines, how to personalise web-sites, how to integrate weakly-structured data-sources, etc. This suggests that we will be able to realise the Semantic Web by merely applying (and

  1. AN INSECURE WILD WEB: A LARGE-SCALE STUDY OF EFFECTIVENESS OF WEB SECURITY MECHANISMS

    Directory of Open Access Journals (Sweden)

    Kailas Patil

    2017-03-01

    Full Text Available This research work presents a large-scale study of the problems in real-world web applications and widely-used mobile browsers. Through a large-scale experiment, we find inconsistencies in Secure Socket Layer (SSL) warnings among popular mobile web browsers (with over a billion downloads in total). The majority of popular mobile browsers on the Google Play Store either provide incomplete information in the SSL warnings shown to users or fail to provide SSL warnings in the presence of security certificate errors, making it a difficult task even for a security-savvy user to make an informed decision. In addition, we find that 28% of websites use mixed content. Mixed content means a secure website (HTTPS) loads a subresource using the insecure HTTP protocol. Mixed content weakens the security of the entire website and leaves it vulnerable to man-in-the-middle (MITM) attacks. Furthermore, we inspected the default behavior of mobile web browsers and report that the majority of mobile web browsers allow execution of mixed content in web applications, which implies billions of mobile browser users are vulnerable to eavesdropping and MITM attacks. Based on our findings, we make recommendations for website developers, users and browser vendors.
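
Mixed content of the kind measured above can be detected statically by scanning an HTTPS page's HTML for subresources loaded over plain HTTP. A minimal sketch using the standard-library HTML parser; the tag and attribute coverage is illustrative, and real scanners also handle CSS, script-inserted URLs, protocol-relative links, and so on:

```python
from html.parser import HTMLParser

class MixedContentScanner(HTMLParser):
    """Collect src/href subresource URLs that use plain http:// on a page
    assumed to be served over HTTPS."""
    RESOURCE_TAGS = {"img", "script", "link", "iframe", "audio", "video", "source"}

    def __init__(self):
        super().__init__()
        self.insecure = []

    def handle_starttag(self, tag, attrs):
        if tag not in self.RESOURCE_TAGS:  # plain <a> links are navigation, not mixed content
            return
        for name, value in attrs:
            if name in ("src", "href") and value and value.startswith("http://"):
                self.insecure.append(value)

page = """<html><body>
<img src="https://cdn.example.org/logo.png">
<script src="http://cdn.example.org/app.js"></script>
<a href="http://example.org/">link</a>
</body></html>"""

scanner = MixedContentScanner()
scanner.feed(page)  # scanner.insecure now lists the insecure script URL
```

An active subresource like the script above is exactly the case that enables the MITM attacks described in the abstract, since an attacker who controls the HTTP response controls code running in the secure page.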

  2. WebAL Comes of Age: A Review of the First 21 Years of Artificial Life on the Web.

    Science.gov (United States)

    Taylor, Tim; Auerbach, Joshua E; Bongard, Josh; Clune, Jeff; Hickinbotham, Simon; Ofria, Charles; Oka, Mizuki; Risi, Sebastian; Stanley, Kenneth O; Yosinski, Jason

    2016-01-01

    We present a survey of the first 21 years of web-based artificial life (WebAL) research and applications, broadly construed to include the many different ways in which artificial life and web technologies might intersect. Our survey covers the period from 1994, when the first WebAL work appeared, up to the present day, together with a brief discussion of relevant precursors. We examine recent projects, from 2010 to 2015, in greater detail in order to highlight the current state of the art. We follow the survey with a discussion of common themes and methodologies that can be observed in recent work and identify a number of likely directions for future work in this exciting area.

  3. Prey interception drives web invasion and spider size determines successful web takeover in nocturnal orb-web spiders.

    Science.gov (United States)

    Gan, Wenjin; Liu, Shengjie; Yang, Xiaodong; Li, Daiqin; Lei, Chaoliang

    2015-09-24

    A striking feature of web-building spiders is the use of silk to make webs, mainly for prey capture. However, building a web is energetically expensive and increases the risk of predation. To reduce such costs and still have access to abundant prey, some web-building spiders have evolved web invasion behaviour. In general, no consistent patterns of web invasion have emerged and the factors determining web invasion remain largely unexplored. Here we report web invasion among conspecifics in seven nocturnal species of orb-web spiders, and examined the factors determining the probability of webs that could be invaded and taken over by conspecifics. About 36% of webs were invaded by conspecifics, and 25% of invaded webs were taken over by the invaders. A web that was built higher and intercepted more prey was more likely to be invaded. Once a web was invaded, the smaller the size of the resident spider, the more likely its web would be taken over by the invader. This study suggests that web invasion, as a possible way of reducing costs, may be widespread in nocturnal orb-web spiders. © 2015. Published by The Company of Biologists Ltd.

  4. Prey interception drives web invasion and spider size determines successful web takeover in nocturnal orb-web spiders

    Directory of Open Access Journals (Sweden)

    Wenjin Gan

    2015-10-01

    Full Text Available A striking feature of web-building spiders is the use of silk to make webs, mainly for prey capture. However, building a web is energetically expensive and increases the risk of predation. To reduce such costs and still have access to abundant prey, some web-building spiders have evolved web invasion behaviour. In general, no consistent patterns of web invasion have emerged and the factors determining web invasion remain largely unexplored. Here we report web invasion among conspecifics in seven nocturnal species of orb-web spiders, and examined the factors determining the probability of webs that could be invaded and taken over by conspecifics. About 36% of webs were invaded by conspecifics, and 25% of invaded webs were taken over by the invaders. A web that was built higher and intercepted more prey was more likely to be invaded. Once a web was invaded, the smaller the size of the resident spider, the more likely its web would be taken over by the invader. This study suggests that web invasion, as a possible way of reducing costs, may be widespread in nocturnal orb-web spiders.

  5. Using Open Web APIs in Teaching Web Mining

    Science.gov (United States)

    Chen, Hsinchun; Li, Xin; Chau, M.; Ho, Yi-Jen; Tseng, Chunju

    2009-01-01

    With the advent of the World Wide Web, many business applications that utilize data mining and text mining techniques to extract useful business information on the Web have evolved from Web searching to Web mining. It is important for students to acquire knowledge and hands-on experience in Web mining during their education in information systems…

  6. PharmMapper 2017 update: a web server for potential drug target identification with a comprehensive target pharmacophore database.

    Science.gov (United States)

    Wang, Xia; Shen, Yihang; Wang, Shiwei; Li, Shiliang; Zhang, Weilin; Liu, Xiaofeng; Lai, Luhua; Pei, Jianfeng; Li, Honglin

    2017-07-03

    The PharmMapper online tool is a web server for potential drug target identification by reversed pharmacophore matching the query compound against an in-house pharmacophore model database. The original version of PharmMapper includes more than 7000 target pharmacophores derived from complex crystal structures with corresponding protein target annotations. In this article, we present a new version of the PharmMapper web server, of which the backend pharmacophore database is six times larger than the earlier one, with a total of 23 236 proteins covering 16 159 druggable pharmacophore models and 51 431 ligandable pharmacophore models. The expanded target data cover 450 indications and 4800 molecular functions compared to 110 indications and 349 molecular functions in our last update. In addition, the new web server is united with the statistically meaningful ranking of the identified drug targets, which is achieved through the use of standard scores. It also features an improved user interface. The proposed web server is freely available at http://lilab.ecust.edu.cn/pharmmapper/. © The Author(s) 2017. Published by Oxford University Press on behalf of Nucleic Acids Research.
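
Standard scores (z-scores) of the kind PharmMapper uses for statistically meaningful ranking normalize each raw fit score against the score distribution, so that targets can be compared on a common scale. A generic sketch; the target names and score values are invented, not PharmMapper output:

```python
def z_scores(scores):
    """Map raw scores to standard scores: (x - mean) / standard deviation."""
    vals = list(scores.values())
    mean = sum(vals) / len(vals)
    sd = (sum((v - mean) ** 2 for v in vals) / len(vals)) ** 0.5
    return {name: (v - mean) / sd for name, v in scores.items()}

raw_fit = {"ACE": 4.1, "COX2": 3.2, "HSP90": 2.9, "DHFR": 2.6}
# Rank candidate targets by how many standard deviations they sit above the mean.
ranked = sorted(z_scores(raw_fit).items(), key=lambda kv: kv[1], reverse=True)
```

Because standard scores are mean-centered, a positive value directly reads as "better than the average candidate", which is what makes the ranking statistically interpretable across queries.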

  7. Just-in-time Database-Driven Web Applications

    Science.gov (United States)

    2003-01-01

    "Just-in-time" database-driven Web applications are inexpensive, quickly-developed software that can be put to many uses within a health care organization. Database-driven Web applications garnered 73873 hits on our system-wide intranet in 2002. They enabled collaboration and communication via user-friendly Web browser-based interfaces for both mission-critical and patient-care-critical functions. Nineteen database-driven Web applications were developed. The application categories that comprised 80% of the hits were results reporting (27%), graduate medical education (26%), research (20%), and bed availability (8%). The mean number of hits per application was 3888 (SD = 5598; range, 14-19879). A model is described for just-in-time database-driven Web application development and an example given with a popular HTML editor and database program. PMID:14517109

  8. Web of Deceit.

    Science.gov (United States)

    Minkel, Walter

    2002-01-01

    Discusses the increase in online plagiarism and what school librarians can do to help. Topics include the need for school district policies on plagiarism; teaching students what plagiarism is; pertinent Web sites; teaching students proper research skills; motivation for cheating; and requiring traditional sources of information for student…

  9. Association and Sequence Mining in Web Usage

    Directory of Open Access Journals (Sweden)

    Claudia Elena DINUCA

    2011-06-01

    Full Text Available Web servers worldwide generate a vast amount of information on web users’ browsing activities. Several researchers have studied these so-called clickstream or web access log data to better understand and characterize web users. Clickstream data can be enriched with information about the content of visited pages and the origin (e.g., geographic, organizational) of the requests. The goal of this project is to analyse user behaviour by mining enriched web access log data. With the continued growth and proliferation of e-commerce, Web services, and Web-based information systems, the volumes of clickstream and user data collected by Web-based organizations in their daily operations have reached astronomical proportions. This information can be exploited in various ways, such as enhancing the effectiveness of websites or developing directed web marketing campaigns. The discovered patterns are usually represented as collections of pages, objects, or resources that are frequently accessed by groups of users with common needs or interests. The focus of this paper is to provide an overview of how to use frequent pattern techniques for discovering different types of patterns in a Web log database. In this paper we focus on association mining as a data mining technique to extract potentially useful knowledge from web usage data. We implemented in Java, using the NetBeans IDE, a program for identifying page associations from sessions. For exemplification, we used the log files from a commercial web site.
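
Pairwise page associations of the kind the paper mines from sessions can be sketched by counting co-occurrence support and confidence directly. The paper's implementation is in Java; the sketch below uses Python for brevity, and the sessions and thresholds are invented:

```python
from itertools import combinations

def association_rules(sessions, min_support=0.3, min_confidence=0.6):
    """Mine rules A -> B between pages, with support = P(A and B over all
    sessions) and confidence = P(B | A)."""
    n = len(sessions)
    page_count, pair_count = {}, {}
    for session in sessions:
        pages = set(session)  # count each page once per session
        for p in pages:
            page_count[p] = page_count.get(p, 0) + 1
        for a, b in combinations(sorted(pages), 2):
            pair_count[(a, b)] = pair_count.get((a, b), 0) + 1
    rules = []
    for (a, b), cnt in pair_count.items():
        support = cnt / n
        if support < min_support:
            continue
        for antecedent, consequent in ((a, b), (b, a)):
            confidence = cnt / page_count[antecedent]
            if confidence >= min_confidence:
                rules.append((antecedent, consequent, support, confidence))
    return rules

sessions = [
    ["/home", "/products", "/cart"],
    ["/home", "/products"],
    ["/home", "/about"],
    ["/products", "/cart"],
]
rules = association_rules(sessions)
```

Here the rule /cart -> /products emerges with confidence 1.0: every session that reached the cart also viewed the products page, which is the kind of pattern a site operator can act on.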

  10. Overview of the TREC 2014 Federated Web Search Track

    OpenAIRE

    Demeester, Thomas; Trieschnigg, Rudolf Berend; Nguyen, Dong-Phuong; Zhou, Ke; Hiemstra, Djoerd

    2014-01-01

    The TREC Federated Web Search track facilitates research in topics related to federated web search, by providing a large realistic data collection sampled from a multitude of online search engines. The FedWeb 2013 tasks of Resource Selection and Results Merging are again included in FedWeb 2014, and we additionally introduced the task of vertical selection. Other new aspects are the required link between the Resource Selection and Results Merging, and the importance of diversi...

  11. Web 2.0 and Nigerian Academic Librarians

    Science.gov (United States)

    Adekunmisi, Sowemimo Ronke; Odunewu, Abiodun Olusegun

    2016-01-01

    Web 2.0 applications to library services are aimed at enhancing the provision of relevant and cost-effective information resources for quality education and research. Despite the richness of these web applications and their enormous impact on library and information services as recorded in the developed world, Nigerian academic libraries are yet…

  12. Intelligent Web-Based Learning System with Personalized Learning Path Guidance

    Science.gov (United States)

    Chen, C. M.

    2008-01-01

    Personalized curriculum sequencing is an important research issue for web-based learning systems because no fixed learning paths will be appropriate for all learners. Therefore, many researchers focused on developing e-learning systems with personalized learning mechanisms to assist on-line web-based learning and adaptively provide learning paths…

  13. Secure web book to store structural genomics research data.

    Science.gov (United States)

    Manjasetty, Babu A; Höppner, Klaus; Mueller, Uwe; Heinemann, Udo

    2003-01-01

    Recently established collaborative structural genomics programs aim at significantly accelerating the crystal structure analysis of proteins. These large-scale projects require efficient data management systems to ensure seamless collaboration between different groups of scientists working towards the same goal. Within the Berlin-based Protein Structure Factory, the synchrotron X-ray data collection and the subsequent crystal structure analysis tasks are located at BESSY, a third-generation synchrotron source. To organize file-based communication and data transfer at the BESSY site of the Protein Structure Factory, we have developed the web-based BCLIMS, the BESSY Crystallography Laboratory Information Management System. BCLIMS is a relational data management system which is powered by MySQL as the database engine and Apache HTTP as the web server. The database interface routines are written in the Python programming language. The software is freely available to academic users. Here we describe the storage, retrieval and manipulation of laboratory information, mainly pertaining to the synchrotron X-ray diffraction experiments and the subsequent protein structure analysis, using BCLIMS.

  14. Tracing the scientific outputs in the field of Ebola research based on publications in the Web of Science.

    Science.gov (United States)

    Yi, Fengyun; Yang, Pin; Sheng, Huifeng

    2016-04-15

    Ebola virus disease (hereafter EVD or Ebola) has a high fatality rate. The devastating effects of the current epidemic of Ebola in West Africa have put the global health response in acute focus. In response, the World Health Organization (WHO) has declared the Ebola outbreak in West Africa as a "Public Health Emergency of International Concern". A small proportion of scientific literature is dedicated to Ebola research. To identify global research trends in Ebola research, the Institute for Scientific Information (ISI) Web of Science™ database was used to search for data, which encompassed original articles published from 1900 to 2013. The keyword "Ebola" was used to identify articles for the purposes of this review. In order to include all published items, the database was searched using the Basic Search method. The earliest record of literature about Ebola indexed in the Web of Science is from 1977. A total of 2477 publications on Ebola, published between 1977 and 2014 (with the number of publications increasing annually), were retrieved from the database. Original research articles (n = 1623, 65.5%) were the most common type of publication. Almost all (96.5%) of the literature in this field was in English. The USA had the highest scientific output and greatest number of funding agencies. Journal of Virology published 239 papers on Ebola, followed by Journal of Infectious Diseases and Virology, which published 113 and 99 papers, respectively. A total of 1911 papers on Ebola were cited 61,477 times. This analysis identified the current state of research and trends in studies about Ebola between 1977 and 2014. Our bibliometric analysis provides a historical perspective on the progress in Ebola research.

  15. Guide to cleaner coal technology-related web sites

    Energy Technology Data Exchange (ETDEWEB)

    Davidson, R; Jenkins, N; Zhang, X [IEA Coal Research - The Clean Coal Centre, London (United Kingdom)

    2001-07-01

The 'Guide to Cleaner Coal Technology-Related Web Sites' is a guide to web sites that contain important information on cleaner coal technologies (CCT). It contains a short introduction to the World Wide Web and gives advice on how to search for information using directories and search engines. The core section of the Guide is a collection of factsheets summarising the information available on over 65 major web sites selected from organizations worldwide (except those promoting companies). These sites contain a wealth of information on CCT research and development, technology transfer, financing and markets. The factsheets are organised in the following categories: Associations, research centres and programmes; Climate change and sustainable development; Cooperative ventures; Electronic journals; Financial institutions; International organizations; National government information; and Statistical information. A full subject index is provided. The Guide concludes with some general comments on the quality of the sites reviewed.

  16. Web-based tools from AHRQ's National Resource Center.

    Science.gov (United States)

    Cusack, Caitlin M; Shah, Sapna

    2008-11-06

    The Agency for Healthcare Research and Quality (AHRQ) has made an investment of over $216 million in research around health information technology (health IT). As part of their investment, AHRQ has developed the National Resource Center for Health IT (NRC) which includes a public domain Web site. New content for the web site, such as white papers, toolkits, lessons from the health IT portfolio and web-based tools, is developed as needs are identified. Among the tools developed by the NRC are the Compendium of Surveys and the Clinical Decision Support (CDS) Resources. The Compendium of Surveys is a searchable repository of health IT evaluation surveys made available for public use. The CDS Resources contains content which may be used to develop clinical decision support tools, such as rules, reminders and templates. This live demonstration will show the access, use, and content of both these freely available web-based tools.

  17. Web 2.1 : Toward a large and qualitative participation on the Web

    Directory of Open Access Journals (Sweden)

    Boubker Sbihi

    2009-06-01

Full Text Available This article presents the results of research on Web 2.0 conducted within the School of Information Sciences (ESI). It studies the behavior of different academic actors who deal with information, including teachers, master's students and information science students in Morocco, with respect to Web 2.0 services. First, it evaluates the use and production of information in the context of Web 2.0. It then assesses those rates and identifies and analyzes the causes of the problems and obstacles that academic actors face. In essence, we seek to understand why information actors in the academic world often use Web 2.0 services but rarely produce quality content. To achieve these objectives, we used an on-site survey method based on an electronic questionnaire administered directly to our respondents via the Internet. We chose the electronic questionnaire in order to make optimal use of new technologies, save time and reduce cost. Then, to deepen our understanding of the data collected, we complemented the questionnaire data with ongoing discussions with the actors. Finally, to overcome the problems identified, we propose the elements of a new version of the Web, called Web 2.1, offering new concepts intended to encourage users to produce quality information and make the Web more open to a larger community. This version maintains the current contents of Web 2.0 and adds value to it: content will be monitored, evaluated and validated before being published. In order to target valuable information, Web 2.1 proposes to categorize users into three groups: users, who simply consume content; producers, who use and produce content; and validators, who validate the content in order to target information that is validated and of good

  18. GDR (Genome Database for Rosaceae): integrated web resources for Rosaceae genomics and genetics research

    Directory of Open Access Journals (Sweden)

    Ficklin Stephen

    2004-09-01

Full Text Available Abstract Background Peach is being developed as a model organism for Rosaceae, an economically important family that includes fruits and ornamental plants such as apple, pear, strawberry, cherry, almond and rose. The genomics and genetics data of peach can play a significant role in the gene discovery and the genetic understanding of related species. The effective utilization of these peach resources, however, requires the development of an integrated and centralized database with associated analysis tools. Description The Genome Database for Rosaceae (GDR) is a curated and integrated web-based relational database. GDR contains comprehensive data of the genetically anchored peach physical map, an annotated peach EST database, Rosaceae maps and markers and all publicly available Rosaceae sequences. Annotations of ESTs include contig assembly, putative function, simple sequence repeats, and anchored position to the peach physical map where applicable. Our integrated map viewer provides a graphical interface to the genetic, transcriptome and physical mapping information. ESTs, BACs and markers can be queried by various categories and the search result sites are linked to the integrated map viewer or to the WebFPC physical map sites. In addition to browsing and querying the database, users can compare their sequences with the annotated GDR sequences via a dedicated sequence similarity server running either the BLAST or FASTA algorithm. To demonstrate the utility of the integrated and fully annotated database and analysis tools, we describe a case study where we anchored Rosaceae sequences to the peach physical and genetic map by sequence similarity. Conclusions The GDR has been initiated to meet the major deficiency in Rosaceae genomics and genetics research, namely a centralized web database and bioinformatics tools for data storage, analysis and exchange. GDR can be accessed at http://www.genome.clemson.edu/gdr/.

  19. GDR (Genome Database for Rosaceae): integrated web resources for Rosaceae genomics and genetics research.

    Science.gov (United States)

    Jung, Sook; Jesudurai, Christopher; Staton, Margaret; Du, Zhidian; Ficklin, Stephen; Cho, Ilhyung; Abbott, Albert; Tomkins, Jeffrey; Main, Dorrie

    2004-09-09

Peach is being developed as a model organism for Rosaceae, an economically important family that includes fruits and ornamental plants such as apple, pear, strawberry, cherry, almond and rose. The genomics and genetics data of peach can play a significant role in the gene discovery and the genetic understanding of related species. The effective utilization of these peach resources, however, requires the development of an integrated and centralized database with associated analysis tools. The Genome Database for Rosaceae (GDR) is a curated and integrated web-based relational database. GDR contains comprehensive data of the genetically anchored peach physical map, an annotated peach EST database, Rosaceae maps and markers and all publicly available Rosaceae sequences. Annotations of ESTs include contig assembly, putative function, simple sequence repeats, and anchored position to the peach physical map where applicable. Our integrated map viewer provides a graphical interface to the genetic, transcriptome and physical mapping information. ESTs, BACs and markers can be queried by various categories and the search result sites are linked to the integrated map viewer or to the WebFPC physical map sites. In addition to browsing and querying the database, users can compare their sequences with the annotated GDR sequences via a dedicated sequence similarity server running either the BLAST or FASTA algorithm. To demonstrate the utility of the integrated and fully annotated database and analysis tools, we describe a case study where we anchored Rosaceae sequences to the peach physical and genetic map by sequence similarity. The GDR has been initiated to meet the major deficiency in Rosaceae genomics and genetics research, namely a centralized web database and bioinformatics tools for data storage, analysis and exchange. GDR can be accessed at http://www.genome.clemson.edu/gdr/.

  20. Shared secrets: Web 2.0 and research in Social Sciences

    Directory of Open Access Journals (Sweden)

    Sandra MARTORELL

    2013-12-01

    Full Text Available Web 2.0 represents a revolution in terms of the possibilities it offers for facilitating communication and collaboration between users – something that has become increasingly common in the world of research. A mere few years ago, the information produced by scientists and scholars remained in the hands of a very limited circle of institutions and publishers, as if it were a guarded secret. Today that secret is being shouted from the rooftops and shared with the rest of the scientific community in order to make it more accessible and to allow new advances. A clear example of this can be found in the social sciences, where there is a constant increase in the production of articles and materials that in turn serve for the pursuit of further research, thereby promoting the continuous development of scientific knowledge. This new situation is being fostered by the proliferation of tools and applications that make it possible, but also by a change in mentality towards a philosophy of exchange and open access. In this article, we will examine this phenomenon using a methodological system based on the analysis of platforms for the exchange of scientific knowledge, and especially social networks (both general and specialising in the social sciences, in order to demonstrate their potential in a society that is becoming increasingly aware of the need to overcome physical or institutional boundaries and move forward together.

  1. Web 2.0 collaboration tool to support student research in hydrology – an opinion

    Directory of Open Access Journals (Sweden)

    M. Radhakrishnan

    2012-08-01

Full Text Available A growing body of evidence suggests that it is unwise to make the a priori assumption that university students are ready and eager to embrace modern online technologies employed to enhance the educational experience. We present our opinion on employing Wiki, a popular Web 2.0 technology, in small student groups, based on a case study of using it, customized to work as a personal learning environment (PLE)1 (Fiedler and Väljataga, 2011), for supporting thesis research in hydrology. Since its inception in 2006, the system presented has proven to facilitate knowledge construction and peer communication within and across groups of students of different academic years and to stimulate learning. Being an open-ended and egalitarian system, it was a minimal burden to maintain, as all students became content authors and shared responsibility. A number of unintended uses of the system were also observed, such as using it as a backup medium and mobile storage. We attribute the success and sustainability of the proposed Web 2.0-based approach to the fact that the efforts were not limited to the application of the technology, but comprised the creation of a supporting environment with educational activities organized around it. We propose that Wiki-based PLEs are much more suitable than traditional learning management systems for supporting non-classroom education activities like thesis research in hydrology. 1Here we use the term PLE to refer to the conceptual framework to make the process of knowledge construction a personalized experience, rather than to refer to the technology (in this case, Wiki) used to attempt implementing such a system.

  2. Classification of web resident sensor resources using latent semantic indexing and ontologies

    CSIR Research Space (South Africa)

    Majavu, W

    2008-01-01

    Full Text Available Web resident sensor resource discovery plays a crucial role in the realisation of the Sensor Web. The vision of the Sensor Web is to create a web of sensors that can be manipulated and discovered in real time. A current research challenge...
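Latent semantic indexing, named in the record above, projects a term-document matrix onto a low-rank space via truncated SVD so that resources can be matched despite vocabulary differences. A minimal NumPy sketch, using an invented toy vocabulary of sensor-resource descriptions (not the CSIR system itself):

```python
import numpy as np

# Toy term-document matrix (terms x docs) over invented sensor descriptions:
# doc 0 = "temperature sensor", doc 1 = "rainfall gauge sensor",
# doc 2 = "humidity pressure sensor".
A = np.array([
    [1, 0, 0],  # temperature
    [1, 1, 1],  # sensor (appears everywhere)
    [0, 1, 0],  # rainfall
    [0, 1, 0],  # gauge
    [0, 0, 1],  # humidity
    [0, 0, 1],  # pressure
], dtype=float)

# LSI: keep only the top-k singular triplets.
k = 2
U, s, Vt = np.linalg.svd(A, full_matrices=False)
Uk, sk = U[:, :k], s[:k]
doc_vecs = (np.diag(sk) @ Vt[:k, :]).T      # documents in the latent space

# Fold a one-word query ("temperature") into the same space: Uk.T maps a
# document column to its latent coordinates, so it maps the query likewise.
q = np.array([1, 0, 0, 0, 0, 0], dtype=float)
q_vec = Uk.T @ q

# Cosine similarity in the latent space; the temperature-sensor doc wins.
sims = doc_vecs @ q_vec / (np.linalg.norm(doc_vecs, axis=1) * np.linalg.norm(q_vec))
best = int(np.argmax(sims))
print(best)  # → 0
```

A real resource-discovery pipeline would build the matrix from harvested metadata and map the latent clusters onto ontology concepts, but the SVD truncation above is the core of the LSI step.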

  3. Validity and reliability of Researcher ID and of «Web of Science Production of Spanish Psychology»

    Directory of Open Access Journals (Sweden)

    José Alonso Olivas-Ávila

    2014-01-01

Full Text Available The creation of systems that integrate research products, such as Thomson Reuters' Researcher ID, has become a pressing need, given how complex it is for researchers to periodically demonstrate the impact and dissemination of their research. However, these systems draw their information from various databases and are becoming ever more inclusive in capturing research products. Several bibliometric studies have shown that the databases contain inaccuracies of various kinds, which directly affect the integrating systems. Consequently, this descriptive study was designed to analyze the accuracy of the Researcher ID records of the board members of www.psy-wos.es and of a sample of users of this site, checking the records against the contents of the Web of Science database and distinguishing them from content outside that database. The results show considerable inaccuracies and errors in the Researcher IDs of the analyzed sample, such as duplicate records and the inclusion of records that do not belong to the Web of Science. It is concluded that neither the Researcher IDs nor www.psy-wos.es are valid or reliable.

  4. Web building and silk properties functionally covary among species of wolf spider.

    Science.gov (United States)

    Lacava, Mariángeles; Camargo, Arley; Garcia, Luis F; Benamú, Marco A; Santana, Martin; Fang, Jian; Wang, Xungai; Blamires, Sean J

    2018-04-15

Although phylogenetic studies have shown covariation between the properties of spider major ampullate (MA) silk and web building, both spider webs and silks are highly plastic so we cannot be sure whether these traits functionally covary or just vary across environments that the spiders occupy. As MaSp2-like proteins provide MA silk with greater extensibility, their presence is considered necessary for spider webs to effectively capture prey. Wolf spiders (Lycosidae) are predominantly non-web building, but a select few species build webs. We accordingly collected MA silk from two web-building and six non-web-building species found in semirural ecosystems in Uruguay to test whether the presence of MaSp2-like proteins (indicated by amino acid composition, silk mechanical properties and silk nanostructures) was associated with web building across the group. The web-building and non-web-building species were from disparate subfamilies so we estimated a genetic phylogeny to perform appropriate comparisons. For all of the properties measured, we found differences between web-building and non-web-building species. A phylogenetic regression model confirmed that web building and not phylogenetic inertia influences silk properties. Our study definitively showed an ecological influence over spider silk properties. We expect that the presence of the MaSp2-like proteins and the subsequent nanostructures improves the mechanical performance of silks within the webs. Our study furthers our understanding of spider web and silk co-evolution and the ecological implications of spider silk properties. © 2018 European Society For Evolutionary Biology.

  5. CERN in a historic Global Web-cast

    CERN Multimedia

    2005-01-01

On Thursday 1st December, CERN will be involved in 'Beyond Einstein', a 12-hour live world-wide web-cast, which will feature participants from across the globe, marking the World Year of Physics. CERN goes global: the 12-hour web-cast will unite different world timezones by means of the web. Viewers on the web will be able to tune into one of the most extensive videoconferences in the world to learn more about Einstein's physics and how it continues to influence cutting-edge research worldwide. The event kicks off at midday (CET) with a live presentation at CERN's Globe of Science and Innovation, featuring a symbolic link-up with the New Library of Alexandria in Egypt. There will then be transmissions from a host of research institutions, such as Imperial College, Fermilab and SLAC. There will also be live connections with Jerusalem, Taipei, San Francisco, Tasmania and even Antarctica. 'Connections will be established among virtually all the time zones on Earth, a perfect way to celebrate Einstein, who rev...

  6. Business and scientific workflows a web service-oriented approach

    CERN Document Server

    Tan, Wei

    2013-01-01

    Focuses on how to use web service computing and service-based workflow technologies to develop timely, effective workflows for both business and scientific fields Utilizing web computing and Service-Oriented Architecture (SOA), Business and Scientific Workflows: A Web Service-Oriented Approach focuses on how to design, analyze, and deploy web service-based workflows for both business and scientific applications in many areas of healthcare and biomedicine. It also discusses and presents the recent research and development results. This informative reference features app

  7. STATE ACID RAIN RESEARCH AND SCREENING SYSTEM - VERSION 1.0 USER'S MANUAL

    Science.gov (United States)

    The report is a user's manual that describes Version 1.0 of EPA's STate Acid Rain Research and Screening System (STARRSS), developed to assist utility regulatory commissions in reviewing utility acid rain compliance plans. It is a screening tool that is based on scenario analysis...

  8. Semantic Web Technologies for the Adaptive Web

    DEFF Research Database (Denmark)

    Dolog, Peter; Nejdl, Wolfgang

    2007-01-01

Ontologies and reasoning are the key terms brought into focus by the semantic web community. Formal representation of ontologies in a common data model on the web can be taken as a foundation for adaptive web technologies as well. This chapter describes how ontologies shared on the semantic web… provide conceptualization for the links which are a main vehicle to access information on the web. The subject domain ontologies serve as constraints for generating only those links which are relevant for the domain a user is currently interested in. Furthermore, user model ontologies provide additional… means for deciding which links to show, annotate, hide, generate, and reorder. The semantic web technologies provide means to formalize the domain ontologies and metadata created from them. The formalization enables reasoning for personalization decisions. This chapter describes which components…

  9. IMPROVING PERSONALIZED WEB SEARCH USING BOOKSHELF DATA STRUCTURE

    Directory of Open Access Journals (Sweden)

    S.K. Jayanthi

    2012-10-01

Full Text Available Search engines play a vital role in retrieving relevant information for web users. In this research work, a user-profile-based web search is proposed, so that web users from different domains may receive different sets of results. The main challenge is to provide relevant results at the right level of reading difficulty. Estimating user expertise and re-ranking the results are the main aspects of this paper. The retrieved results are arranged in a Bookshelf Data Structure for easy access. Better presentation of search results thus significantly increases the usability of web search engines in visual mode.
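The abstract combines two ideas: estimating user expertise and re-ranking results by reading difficulty. A minimal sketch of such a re-ranking step is below; the field names, difficulty scale, and weight `alpha` are invented for illustration and are not the paper's actual algorithm.

```python
# Each result carries a base relevance score and a reading-difficulty level
# (1 = introductory .. 5 = expert); the user profile stores an estimated
# expertise level on the same scale.
def rerank(results, user_level, alpha=0.5):
    def score(r):
        gap = abs(r["difficulty"] - user_level)  # mismatch penalty
        return r["relevance"] - alpha * gap
    return sorted(results, key=score, reverse=True)

results = [
    {"title": "PageRank internals", "relevance": 0.9, "difficulty": 4},
    {"title": "Intro to IR", "relevance": 0.8, "difficulty": 1},
]

# A novice profile pushes the introductory article to the top despite its
# lower raw relevance score.
ordered = [r["title"] for r in rerank(results, user_level=1)]
print(ordered)  # → ['Intro to IR', 'PageRank internals']
```

The same scored list could then be grouped into "shelves" by topic or difficulty for the bookshelf-style presentation the paper describes.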

  10. Students' Perceptions of the Effectiveness of the World Wide Web as a Research and Teaching Tool in Science Learning.

    Science.gov (United States)

    Ng, Wan; Gunstone, Richard

    2002-01-01

    Investigates the use of the World Wide Web (WWW) as a research and teaching tool in promoting self-directed learning groups of 15-year-old students. Discusses the perceptions of students of the effectiveness of the WWW in assisting them with the construction of knowledge on photosynthesis and respiration. (Contains 33 references.) (Author/YDS)

  11. Online Tracking Technologies and Web Privacy: Technologieën voor Online volgen en Web Privacy

    OpenAIRE

    Acar, Mustafa Gunes Can

    2017-01-01

    In my PhD thesis, I would like to study the problem of online privacy with a focus on Web and mobile applications. Key research questions to be addressed by my study are the following: How can we formalize and quantify web tracking? What are the threats presented against privacy by different tracking techniques such as browser fingerprinting and cookie based tracking? What kind of privacy enhancing technologies (PET) can be used to ensure privacy without degrading service quality? The stud...

  12. In Silico Prediction of Gamma-Aminobutyric Acid Type-A Receptors Using Novel Machine-Learning-Based SVM and GBDT Approaches

    Directory of Open Access Journals (Sweden)

    Zhijun Liao

    2016-01-01

Full Text Available Gamma-aminobutyric acid type-A receptors (GABAARs) belong to the multisubunit membrane-spanning ligand-gated ion channels (LGICs) which act as the principal mediators of rapid inhibitory synaptic transmission in the human brain. Therefore, predicting the category of GABAARs just from the protein amino acid sequence would be very helpful for the recognition and research of novel receptors. Based on the proteins' physicochemical properties and amino acid composition and position, a GABAAR classifier was first constructed using a 188-dimensional (188D) algorithm at 90% cd-hit identity and compared with pseudo-amino acid composition (PseAAC) and ProtrWeb web-based algorithms for human GABAAR proteins. Then, four classifiers, including gradient boosting decision tree (GBDT), random forest (RF), a library for support vector machines (libSVM), and k-nearest neighbor (k-NN), were compared on the dataset at cd-hit 40% low identity. This work obtained the highest correctly classified rate, at 96.8%, and the highest specificity, at 99.29%. However, the values of sensitivity, accuracy, and Matthews correlation coefficient were slightly lower than those of PseAAC and ProtrWeb, and GBDT and libSVM performed slightly better than RF and k-NN on the second dataset. In conclusion, a GABAAR classifier was successfully constructed using only the protein sequence information.
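Two of the ingredients named in the abstract, amino acid composition features and a k-NN classifier, can be sketched in a few lines of plain Python. The sequences and labels below are invented toy data, and this 1-NN scheme illustrates only the feature representation, not the paper's full 188D pipeline.

```python
from collections import Counter

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def composition(seq):
    """20-dimensional amino acid composition: frequency of each residue."""
    counts = Counter(seq)
    n = len(seq)
    return [counts.get(aa, 0) / n for aa in AMINO_ACIDS]

def nearest_neighbor(query, labeled):
    """k-NN with k=1: label of the closest training sequence by squared
    Euclidean distance between composition vectors."""
    q = composition(query)
    def dist(item):
        return sum((a - b) ** 2 for a, b in zip(q, composition(item[0])))
    return min(labeled, key=dist)[1]

# Toy, invented sequences: not real GABAAR data.
train = [("GGGGAAAAGGGG", "GABAAR"), ("WWWYYYFFFWWW", "non-GABAAR")]
pred = nearest_neighbor("GGGAAAGG", train)
print(pred)  # → 'GABAAR'
```

Real pipelines replace the 20-dimensional composition with richer descriptors (dipeptides, physicochemical properties, positional terms) and swap 1-NN for SVM or GBDT, but the sequence-to-vector step is the same.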

  13. Usare WebDewey

    OpenAIRE

    Baldi, Paolo

    2016-01-01

    This presentation shows how to use the WebDewey tool. Features of WebDewey. Italian WebDewey compared with American WebDewey. Querying Italian WebDewey. Italian WebDewey and MARC21. Italian WebDewey and UNIMARC. Numbers, captions, "equivalente verbale": Dewey decimal classification in Italian catalogues. Italian WebDewey and Nuovo soggettario. Italian WebDewey and LCSH. Italian WebDewey compared with printed version of Italian Dewey Classification (22. edition): advantages and disadvantages o...

  14. Efficacy of the World Wide Web in K-12 environmental education

    Science.gov (United States)

    York, Kimberly Jane

    1998-11-01

Despite support by teachers, students, and the American public in general, environmental education is not a priority in U.S. schools. Teachers face many barriers to integrating environmental education into K--12 curricula. The focus of this research is teachers' lack of access to environmental education resources. New educational reforms combined with emerging mass communication technologies such as the Internet and World Wide Web present new opportunities for the infusion of environmental content into the curriculum. New technologies can connect teachers and students to a wealth of resources previously unavailable to them. However, significant barriers to using technologies exist that must be overcome to make this promise a reality. Web-based environmental education is a new field and research is urgently needed. If teachers are to use the Web meaningfully in their classrooms, it is essential that their attitudes and perceptions about using this new technology be brought to light. Therefore, this exploratory research investigates teachers' attitudes toward using the Web to share environmental education resources. Both qualitative and quantitative methods were used to investigate this problem. Two surveys were conducted---a self-administered mail survey and a Web-based online survey---to elicit teachers' perceptions and comments about environmental education and the Web. Preliminary statistical procedures including frequencies, percentages and correlational measures were performed to interpret the data. In-depth interviews and participant-observation methods were used during an extended environmental education curriculum development project with two practicing teachers to gain insights into the process of creating curricula and placing them online. Findings from both the mail survey and the Web-based survey suggest that teachers are interested in environmental education---97% of respondents for each survey agreed that environmental education should be taught in K

  15. Web TA Production (WebTA)

    Data.gov (United States)

    US Agency for International Development — WebTA is a web-based time and attendance system that supports USAID payroll administration functions, and is designed to capture hours worked, leave used and...

  16. Web 2.0 and communication processes at work : Evidence from China

    NARCIS (Netherlands)

    Wong, L.H.M.; Ou, Carol; Davison, R.M.; Zhu, H.; Zhang, C.

    Research problem: Web 2.0 applications, such as instant messengers and other social media platforms, are fast becoming ubiquitous in organizations, yet their impact on work performance is poorly understood. Research question: What is the relationship between Web 2.0 use, and work-based communication

  17. Preservice Teachers' Critical Thinking Dispositions and Web 2.0 Competencies

    Science.gov (United States)

    Sendag, Serkan; Erol, Osman; Sezgin, Sezan; Dulkadir, Nihal

    2015-01-01

    The aim of this study was to investigate the associations between preservice teachers' Web 2.0 competencies and their critical thinking disposition (CTD). The study employed an associational research design using California Critical Thinking Disposition-Inventory (CCTD-I) and a Web 2.0 competency questionnaire including items related to Web 2.0…

  18. Integrating Web 2.0-Based Informal Learning with Workplace Training

    Science.gov (United States)

    Zhao, Fang; Kemp, Linzi J.

    2012-01-01

    Informal learning takes place in the workplace through connection and collaboration mediated by Web 2.0 applications. However, little research has yet been published that explores informal learning and how to integrate it with workplace training. We aim to address this research gap by developing a conceptual Web 2.0-based workplace learning and…

  19. A review of Web information seeking research: considerations of method and foci of interest

    Directory of Open Access Journals (Sweden)

    Konstantina Martzoukou

    2005-01-01

    Full Text Available Introduction. This review shows that Web information seeking research suffers from inconsistencies in method and a lack of homogeneity in research foci. Background. Qualitative and quantitative methods are needed to produce a comprehensive view of information seeking. Studies also recommend observation as one of the most fundamental ways of gaining direct knowledge of behaviour. User-centred research emphasises the importance of holistic approaches, which incorporate physical, cognitive, and affective elements. Problems. Comprehensive studies are limited; many approaches are problematic and a consistent methodological framework has not been developed. Research has often failed to ensure appropriate samples that ensure both quantitative validity and qualitative consistency. Typically, observation has been based on simulated rather than real information needs and most studies show little attempt to examine holistically different characteristics of users in the same research schema. Research also deals with various aspects of cognitive style and ability with variant definitions of expertise and different layers of user experience. Finally the effect of social and cultural elements has not been extensively investigated. Conclusion. The existing limitations in method and the plethora of different approaches allow little progress and fewer comparisons across studies. There is urgent need for establishing a theoretical framework on which future studies can be based so that information seeking behaviour can be more holistically understood, and results can be generalised.

  20. The pepATTRACT web server for blind, large-scale peptide-protein docking.

    Science.gov (United States)

    de Vries, Sjoerd J; Rey, Julien; Schindler, Christina E M; Zacharias, Martin; Tuffery, Pierre

    2017-07-03

Peptide-protein interactions are ubiquitous in the cell and form an important part of the interactome. Computational docking methods can complement experimental characterization of these complexes, but current protocols are not applicable on the proteome scale. pepATTRACT is a novel docking protocol that is fully blind, i.e. it does not require any information about the binding site. In various stages of its development, pepATTRACT has participated in CAPRI, making successful predictions for five out of seven protein-peptide targets. Its performance is similar to or better than that of state-of-the-art local docking protocols that do require binding site information. Here we present a novel web server that carries out the rigid-body stage of pepATTRACT. On the peptiDB benchmark, the web server generates a correct model in the top 50 in 34% of the cases. Compared to the full pepATTRACT protocol, this leads to some loss of performance, but the computation time is reduced from ∼18 h to ∼10 min. Combined with the fact that it is fully blind, this makes the web server well-suited for large-scale in silico protein-peptide docking experiments. The rigid-body pepATTRACT server is freely available at http://bioserv.rpbs.univ-paris-diderot.fr/services/pepATTRACT. © The Author(s) 2017. Published by Oxford University Press on behalf of Nucleic Acids Research.

  1. Shifting Sands: Science Researchers on Google Scholar, Web of Science, and PubMed, with Implications for Library Collections Budgets

    Science.gov (United States)

    Hightower, Christy; Caldwell, Christy

    2010-01-01

    Science researchers at the University of California Santa Cruz were surveyed about their article database use and preferences in order to inform collection budget choices. Web of Science was the single most used database, selected by 41.6%. Statistically there was no difference between PubMed (21.5%) and Google Scholar (18.7%) as the second most…

  2. Politiken, Alt om Ikast Brande (web), Lemvig Folkeblad (web), Politiken (web), Dagbladet Ringkjøbing Skjern (web)

    DEFF Research Database (Denmark)

    Lauritsen, Jens

    2014-01-01

    Politiken, 01.01.2014, 14:16. The Danes rang in the new year with a bang, but for a few people things went wrong when the New Year's fireworks were lit. Emergency rooms treated 73 people for fireworks injuries between 6 p.m. last night and 6 a.m. this morning. This is shown by a count that Politiken made based on figures from the Accident Analysis Group at Odense University Hospital. The article also appeared in: Alt om Ikast Brande (web), Lemvig Folkeblad (web), Politiken (web), Dagbladet Ringkjøbing Skjern (web).

  3. Measuring Law Library Catalog Web Site Usability: A Web Analytic Approach

    Science.gov (United States)

    Fang, Wei; Crawford, Marjorie E.

    2008-01-01

    Although there is a proliferation of information available on the Web, and law professors, students, and other users have a variety of channels to locate information and complete their research activities, the law library catalog still remains an important source for offering users access to information that has been evaluated and cataloged by…

  4. Use of anonymous Web communities and websites by medical consumers in Japan to research drug information.

    Science.gov (United States)

    Kishimoto, Keiko; Fukushima, Noriko

    2011-01-01

    In this study, we investigated the status of researching drug information online, and the type of Internet user who uses anonymous Web communities and websites. A Web-based cross-sectional survey of 10,875 male and female Internet users aged 16 and over was conducted in March 2010. Of 10,282 analyzed respondents, excluding medical professionals, about 47% reported that they had previously searched the Internet for drug information and had used online resources ranging from drug information search engines and pharmaceutical industry websites to social networking sites and Twitter. Respondents who had researched drug information online (n=4861) were analyzed by two multivariable logistic regressions. In Model 1, use of anonymous websites was associated with age (OR, 0.778; 95% CI, 0.742-0.816), referring to the reputation and narratives of other Internet users when shopping (OR, 1.640; 95% CI, 1.450-1.855), taking a prescription drug (OR, 0.806; 95% CI, 0.705-0.922), and frequently consulting non-professionals about medical care and health (OR, 1.613; 95% CI, 1.396-1.865). In Model 2, use of only anonymous websites was associated with age (OR, 0.753; 95% CI, 0.705-0.805), using the Internet daily (OR, 0.611; 95% CI, 0.462-0.808), taking a prescription drug (OR, 0.614; 95% CI, 0.505-0.747), and having experienced a side effect (OR, 0.526; 95% CI, 0.421-0.658). The analysis revealed the profiles of Internet users who researched drug information on social media sites where the information providers are anonymous and do not necessarily have adequate knowledge of medicine and online information literacy.
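    The odds ratios reported above come from exponentiating logistic-regression coefficients. A minimal sketch of that conversion follows; the coefficient and standard error are illustrative values back-calculated to reproduce the reported age effect in Model 1, not numbers taken from the study:

```python
import math

def odds_ratio_ci(beta, se, z=1.96):
    """Convert a logistic-regression coefficient and its standard error
    into an odds ratio with a 95% confidence interval."""
    return (math.exp(beta),
            math.exp(beta - z * se),
            math.exp(beta + z * se))

# Illustrative values chosen to reproduce the reported age effect
# (OR 0.778; 95% CI 0.742-0.816); they are not from the paper itself.
or_, lo, hi = odds_ratio_ci(-0.251, 0.024)
```

    The confidence bounds follow the usual exp(β ± 1.96·SE) form, which is why they are asymmetric around the odds ratio.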

  5. Teaching MBA Students the Use of Web 2.0: The Knowledge Management Perspective

    Science.gov (United States)

    Levy, Meira; Hadar, Irit

    2010-01-01

    The new concepts and technologies of Web 2.0 attract researchers in a variety of fields including education, business and knowledge management. However, while the potential of Web 2.0 in the education discipline has been widely studied, in the management discipline the business value of Web 2.0 has not been fully acknowledged. This research suggests an…

  6. The geospatial web how geobrowsers, social software and the web 2 0 are shaping the network society

    CERN Document Server

    Scharl, Arno; Tochtermann, Klaus

    2007-01-01

    The Geospatial Web will have a profound impact on managing knowledge, structuring work flows within and across organizations, and communicating with like-minded individuals in virtual communities. The enabling technologies for the Geospatial Web are geo-browsers such as NASA World Wind, Google Earth and Microsoft Live Local 3D. These three-dimensional platforms revolutionize the production and consumption of media products. They not only reveal the geographic distribution of Web resources and services, but also bring together people of similar interests, browsing behavior, or geographic location. This book summarizes the latest research on the Geospatial Web's technical foundations, describes information services and collaborative tools built on top of geo-browsers, and investigates the environmental, social and economic impacts of geospatial applications. The role of contextual knowledge in shaping the emerging network society deserves particular attention. By integrating geospatial and semantic technology, ...

  7. Anorexia nervosa and uric acid beyond gout: An idea worth researching.

    Science.gov (United States)

    Simeunovic Ostojic, Mladena; Maas, Joyce

    2018-02-01

    Uric acid is best known for its role in gout, the most prevalent inflammatory arthritis in humans, which is also described as an unusual complication of anorexia nervosa (AN). However, beyond gout, uric acid could also be involved in the pathophysiology and psychopathology of AN, as it has many biological functions, serving as a pro- and antioxidant, neuroprotector, neurostimulant, and activator of the immune response. Further, recent research suggests that uric acid could be a biomarker of mood dysfunction, personality traits, and behavioral patterns. This article discusses the hypothesis that uric acid in AN may not be a mere innocent bystander determined solely by AN behavior and its medical complications. In contrast, the relation between uric acid and AN may have an evolutionary origin and may be reciprocal, with uric acid regulating some features and pathophysiological processes of AN, including weight and metabolism regulation, oxidative stress, immunity, mood, cognition, and (hyper)activity. © 2018 Wiley Periodicals, Inc.

  8. Lipids of Prokaryotic Origin at the Base of Marine Food Webs

    Directory of Open Access Journals (Sweden)

    Maria José Caramujo

    2012-11-01

    Full Text Available In particular niches of the marine environment, such as abyssal trenches, icy waters and hot vents, the base of the food web is composed of bacteria and archaea that have developed strategies to survive and thrive under the most extreme conditions. Some of these organisms are considered “extremophiles” and modulate the fatty acid composition of their phospholipids to maintain the adequate fluidity of the cellular membrane under cold/hot temperatures, elevated pressure, high/low salinity and pH. Bacterial cells are even able to produce polyunsaturated fatty acids, contrary to what was believed until the 1990s, helping to regulate membrane fluidity in response to temperature and pressure and providing protection from oxidative stress. In marine ecosystems, bacteria may act as a carbon sink, contribute to nutrient recycling for photo-autotrophs, or have their organic matter transferred to other trophic links in aquatic food webs. The present work aims to provide a comprehensive review of lipid production in bacteria and archaea and to discuss how their lipids, of both heterotrophic and chemoautotrophic origin, contribute to marine food webs.

  9. Issues to Consider in Designing WebQuests: A Literature Review

    Science.gov (United States)

    Kurt, Serhat

    2012-01-01

    A WebQuest is an inquiry-based online learning technique. This technique has been widely adopted in K-16 education. Therefore, it is important that conditions of effective WebQuest design are defined. Through this article the author presents techniques for improving WebQuest design based on current research. More specifically, the author analyzes…

  10. Using Web Server Logs in Evaluating Instructional Web Sites.

    Science.gov (United States)

    Ingram, Albert L.

    2000-01-01

    Web server logs contain a great deal of information about who uses a Web site and how they use it. This article discusses the analysis of Web logs for instructional Web sites; reviews the data stored in most Web server logs; demonstrates what further information can be gleaned from the logs; and discusses analyzing that information for the…
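    The kind of log analysis discussed in this record can be sketched against the Common Log Format that most web servers emit. The log lines below are invented examples:

```python
import re
from collections import Counter

# Regex for the Common Log Format: host, identity, user, timestamp,
# quoted request line, status code, response size.
LOG_PATTERN = re.compile(
    r'(?P<host>\S+) \S+ \S+ \[(?P<time>[^\]]+)\] '
    r'"(?P<method>\S+) (?P<path>\S+) \S+" (?P<status>\d{3}) \S+'
)

def page_hits(log_lines):
    """Count successful (2xx) requests per path."""
    hits = Counter()
    for line in log_lines:
        m = LOG_PATTERN.match(line)
        if m and m.group("status").startswith("2"):
            hits[m.group("path")] += 1
    return hits

# Invented sample entries in Common Log Format.
sample = [
    '10.0.0.1 - - [01/Mar/2000:10:00:00 -0500] "GET /lesson1.html HTTP/1.0" 200 2326',
    '10.0.0.2 - - [01/Mar/2000:10:00:05 -0500] "GET /lesson1.html HTTP/1.0" 200 2326',
    '10.0.0.3 - - [01/Mar/2000:10:00:09 -0500] "GET /missing.html HTTP/1.0" 404 512',
]
hits = page_hits(sample)
```

    The same pattern extends to the per-visitor and per-session questions the article raises, by grouping on the `host` and `time` fields instead of the path.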

  11. Analysis and Design of Web-Based Database Application for Culinary Community

    Directory of Open Access Journals (Sweden)

    Choirul Huda

    2017-03-01

    Full Text Available This research is motivated by the rapid development of the culinary field and of information technology. Difficulties in communicating with culinary experts and in documenting recipes make proper media support very important. Therefore, a web-based database application for the public is needed to help the culinary community with communication, searching and recipe management. The aim of the research was to design a web-based database application that could be used as social media for the culinary community. This research used literature review, user interviews, and questionnaires. Moreover, the database system development life cycle was used as a guide for designing the database, particularly the conceptual, logical, and physical database design. The web-based application design followed the eight golden rules of user interface design. The result of this research is a web-based database application that fulfills the needs of users in the culinary field related to communication and recipe management.

  12. Web Viz 2.0: A versatile suite of tools for collaboration and visualization

    Science.gov (United States)

    Spencer, C.; Yuen, D. A.

    2012-12-01

    Most scientific applications on the web fail to realize the full collaborative potential of the internet by not utilizing web 2.0 technology. To relieve users from the struggle with software tools and allow them to focus on their research, new software developed for scientists and researchers must harness the full suite of web technology. For several years WebViz 1.0 enabled researchers with any web-accessible device to interact with the peta-scale data generated by the Hierarchical Volume Renderer (HVR) system. We have developed a new iteration of WebViz that can be easily interfaced with many problem domains in addition to HVR by employing the best practices of software engineering and object-oriented programming. This is done by separating the core WebViz system from domain-specific code at an interface, leveraging inheritance and polymorphism to allow newly developed modules access to the core services. We employed several design patterns (model-view-controller, singleton, observer, and application controller) to engineer this highly modular system implemented in Java.
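    The observer pattern named in this abstract is what lets a core system notify domain-specific modules without depending on them. A generic sketch follows; the event name and the subscribing callback are hypothetical illustrations, not WebViz code:

```python
class EventBus:
    """Minimal observer/publish-subscribe mechanism: the publisher
    knows nothing about its subscribers beyond the callback interface."""

    def __init__(self):
        self._subscribers = {}

    def subscribe(self, event, callback):
        self._subscribers.setdefault(event, []).append(callback)

    def publish(self, event, payload):
        for callback in self._subscribers.get(event, []):
            callback(payload)

# A hypothetical domain module registers for core events without
# the core knowing anything about the module.
received = []
bus = EventBus()
bus.subscribe("render_finished", received.append)
bus.publish("render_finished", {"frame": 1})
```

    New modules can then be plugged in by subscribing to core events, which is the decoupling the abstract attributes to its interface layer.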

  13. RAId_DbS: mass-spectrometry based peptide identification web server with knowledge integration

    Directory of Open Access Journals (Sweden)

    Ogurtsov Aleksey Y

    2008-10-01

    Full Text Available Abstract Background Existing scientific literature is a rich source of biological information such as disease markers. Integration of this information with data analysis may help researchers to identify possible controversies and to form useful hypotheses for further validations. In the context of proteomics studies, the individualized proteomics era may be approached through consideration of amino acid substitutions/modifications as well as information from disease studies. Integration of such information with peptide searches facilitates speedy, dynamic information retrieval that may significantly benefit clinical laboratory studies. Description We have integrated annotated single amino acid polymorphisms, post-translational modifications, and their documented disease associations (if they exist) from various sources into one enhanced database per organism. We have also augmented our peptide identification software RAId_DbS to take this information into account while analyzing a tandem mass spectrum. In principle, one may choose to respect or ignore the correlation of amino acid polymorphisms/modifications within each protein. The former leads to targeted searches and avoids scoring of unnecessary polymorphism/modification combinations; the latter explores possible polymorphisms in a controlled fashion. To facilitate new discoveries, RAId_DbS also allows users to conduct searches permitting novel polymorphisms as well as to search a knowledge database created by the users. Conclusion We have finished constructing enhanced databases for 17 organisms. The web link to RAId_DbS and the enhanced databases is http://www.ncbi.nlm.nih.gov/CBBResearch/qmbp/RAId_DbS/index.html. The relevant databases and binaries of RAId_DbS for Linux, Windows, and Mac OS X are available for download from the same web page.

  14. WEB STRUCTURE MINING

    Directory of Open Access Journals (Sweden)

    CLAUDIA ELENA DINUCĂ

    2011-01-01

    Full Text Available The World Wide Web has become one of the most valuable resources for information retrieval and knowledge discovery due to the permanent increase in the amount of data available online. Given the dimensions of the web, users easily get lost in its rich hyper structure. Application of data mining methods is the right solution for knowledge discovery on the Web. The knowledge extracted from the Web can be used to improve the performance of Web information retrieval, question answering and Web-based data warehousing. In this paper, I provide an introduction to the categories of Web mining and focus on one of them: Web structure mining. Web structure mining, one of the three categories of Web mining, is a tool used to identify the relationship between Web pages linked by information or direct link connection. It offers information about how different pages are linked together to form this huge web. Web structure mining finds hidden basic structures and uses hyperlinks for web applications such as web search.
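    The hyperlink analysis underlying web structure mining can be illustrated with a toy PageRank computation over a small link graph; the graph, damping factor, and iteration count below are illustrative choices:

```python
def pagerank(links, damping=0.85, iterations=50):
    """Iterative PageRank over a dict mapping page -> list of outgoing links.
    Assumes every page has at least one outgoing link (no dangling nodes)."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iterations):
        # Each page keeps a teleport share, plus damped shares from in-links.
        new = {p: (1 - damping) / n for p in pages}
        for page, outgoing in links.items():
            share = rank[page] / len(outgoing)
            for target in outgoing:
                new[target] += damping * share
        rank = new
    return rank

# Toy web: A and B both link to C, so C should rank highest.
graph = {"A": ["C"], "B": ["C"], "C": ["A"]}
rank = pagerank(graph)
```

    On this toy graph the ranks sum to one and concentrate on the most-linked page, which is the intuition behind using link structure for web search.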

  15. Student participation in World Wide Web-based curriculum development of general chemistry

    Science.gov (United States)

    Hunter, William John Forbes

    1998-12-01

    This thesis describes an action research investigation of improvements to instruction in General Chemistry at Purdue University. Specifically, the study was conducted to guide continuous reform of curriculum materials delivered via the World Wide Web by involving students, instructors, and curriculum designers. The theoretical framework for this study was based upon constructivist learning theory, and knowledge claims were developed using an inductive analysis procedure. The results of this study are assertions made in three domains: learning chemistry content via the World Wide Web, learning about learning via the World Wide Web, and learning about participation in an action research project. In the chemistry content domain, students were able to learn chemical concepts that utilized 3-dimensional visualizations, but not textual and graphical information delivered via the Web. In the learning via the Web domain, the use of feedback, the placement of supplementary aids, navigation, and the perception of conceptual novelty were all important to students' use of the Web. In the participation in action research domain, students learned about the complexity of curriculum development, and valued their empowerment as part of the process.

  16. Tracing carbon flow in an arctic marine food web using fatty acid-stable isotope analysis.

    Science.gov (United States)

    Budge, S M; Wooller, M J; Springer, A M; Iverson, S J; McRoy, C P; Divoky, G J

    2008-08-01

    Global warming and the loss of sea ice threaten to alter patterns of productivity in arctic marine ecosystems because of a likely decline in primary productivity by sea ice algae. Estimates of the contribution of ice algae to total primary production range widely, from just 3 to >50%, and the importance of ice algae to higher trophic levels remains unknown. To help answer this question, we investigated a novel approach to food web studies by combining the two established methods of stable isotope analysis and fatty acid (FA) analysis--we determined the C isotopic composition of individual diatom FA and traced these biomarkers in consumers. Samples were collected near Barrow, Alaska and included ice algae, pelagic phytoplankton, zooplankton, fish, seabirds, pinnipeds and cetaceans. Ice algae and pelagic phytoplankton had distinctive overall FA signatures and clear differences in delta(13)C for two specific diatom FA biomarkers: 16:4n-1 (-24.0+/-2.4 and -30.7+/-0.8 per thousand, respectively) and 20:5n-3 (-18.3+/-2.0 and -26.9+/-0.7 per thousand, respectively). Nearly all delta(13)C values of these two FA in consumers fell between the two stable isotopic end members. A mass balance equation indicated that FA material derived from ice algae, compared to pelagic diatoms, averaged 71% (44-107%) in consumers based on delta(13)C values of 16:4n-1, but only 24% (0-61%) based on 20:5n-3. Our estimates derived from 16:4n-1, which is produced only by diatoms, probably best represented the contribution of ice algae relative to pelagic diatoms. However, many types of algae produce 20:5n-3, so the lower value derived from it likely represented a more realistic estimate of the proportion of ice algae material relative to all other types of phytoplankton. These preliminary results demonstrate the potential value of compound-specific isotope analysis of marine lipids to trace C flow through marine food webs and provide a foundation for future work.
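    The mass balance equation mentioned in this abstract is a standard two-end-member mixing model: the fraction of ice-algal material follows from where a consumer's δ13C value falls between the ice-algae and pelagic end members. A sketch using the 16:4n-1 end members quoted above; the consumer value is a hypothetical illustration, not a measurement from the study:

```python
def ice_algae_fraction(delta_consumer, delta_ice, delta_pelagic):
    """Two-end-member mixing model: fraction of fatty acid material
    derived from ice algae, given delta-13C values in per mil."""
    return (delta_consumer - delta_pelagic) / (delta_ice - delta_pelagic)

# End members for 16:4n-1 from the abstract: ice algae -24.0, pelagic -30.7.
# The consumer value of -25.9 is a made-up example.
fraction = ice_algae_fraction(-25.9, delta_ice=-24.0, delta_pelagic=-30.7)
```

    A consumer value at the pelagic end member gives a fraction of 0, at the ice-algae end member a fraction of 1; values slightly outside the interval explain the >100% and <0% bounds reported in the abstract.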

  17. Hue combinations in web design for Swedish and Thai users: Guidelines for combining color hues onscreen for Swedish and Thai users in the context of designing web sites

    OpenAIRE

    Ruse, Vidal

    2017-01-01

    Users can assess the visual appeal of a web page within 50 milliseconds and color is the first thing noticed onscreen. That directly influences user perception of the website, and choosing appealing color combinations is therefore crucial for successful web design. Recent scientific research has identified which individual colors are culturally preferred in web design in different countries but there is no similar research on hue combinations. Currently no effective, scientifically based guid...

  18. Climate Discovery: Integrating Research With Exhibit, Public Tours, K-12, and Web-based EPO Resources

    Science.gov (United States)

    Foster, S. Q.; Carbone, L.; Gardiner, L.; Johnson, R.; Russell, R.; Advisory Committee, S.; Ammann, C.; Lu, G.; Richmond, A.; Maute, A.; Haller, D.; Conery, C.; Bintner, G.

    2005-12-01

    The Climate Discovery Exhibit at the National Center for Atmospheric Research (NCAR) Mesa Lab provides an exciting conceptual outline for the integration of several EPO activities with other well-established NCAR educational resources and programs. The exhibit is organized into four topic areas intended to build understanding among NCAR's 80,000 annual visitors, including 10,000 school children, about Earth system processes and scientific methods contributing to a growing body of knowledge about climate and global change. These topics include: 'Sun-Earth Connections,' 'Climate Now,' 'Climate Past,' and 'Climate Future.' Exhibit text, graphics, film and electronic media, and interactives are developed and updated through collaborations between NCAR's climate research scientists and staff in the Office of Education and Outreach (EO) at the University Corporation for Atmospheric Research (UCAR). With funding from NCAR, paleoclimatologists have contributed data and ideas for a new exhibit Teachers' Guide unit about 'Climate Past.' This collection of middle-school level, standards-aligned lessons is intended to help students understand how scientists use proxy data and direct observations to describe past climates. Two NASA EPOs have funded the development of 'Sun-Earth Connection' lessons, visual media, and tips for scientists and teachers. Integrated with related content and activities from the NASA-funded Windows to the Universe web site, these products have been adapted to form a second unit in the Climate Discovery Teachers' Guide about the Sun's influence on Earth's climate. Other lesson plans, previously developed by ongoing efforts of EO staff and NSF's previously funded Project Learn program, are providing content for a third Teachers' Guide unit on 'Climate Now', the dynamic atmospheric and geological processes that regulate Earth's climate. EO has plans to collaborate with NCAR climatologists and computer modelers in the next year to develop

  19. Education and Public Outreach at The Pavilion Lake Research Project: Fusion of Science and Education using Web 2.0

    Science.gov (United States)

    Cowie, B. R.; Lim, D. S.; Pendery, R.; Laval, B.; Slater, G. F.; Brady, A. L.; Dearing, W. L.; Downs, M.; Forrest, A.; Lees, D. S.; Lind, R. A.; Marinova, M.; Reid, D.; Seibert, M. A.; Shepard, R.; Williams, D.

    2009-12-01

    The Pavilion Lake Research Project (PLRP) is an international multi-disciplinary science and exploration effort to explain the origin and preservation potential of freshwater microbialites in Pavilion Lake, British Columbia, Canada. Using multiple exploration platforms including one-person DeepWorker submersibles, Autonomous Underwater Vehicles, and SCUBA divers, the PLRP acts as an analogue research site for conducting science in extreme environments, such as the Moon or Mars. In 2009, the PLRP integrated several Web 2.0 technologies to provide a pilot-scale Education and Public Outreach (EPO) program targeting the internet-savvy generation. The seamless integration of multiple technologies including Google Earth, Wordpress, Youtube, Twitter and Facebook facilitated the rapid distribution of exciting and accessible science and exploration information over multiple channels. Field updates, science reports, and multimedia including videos, interactive maps, and immersive visualization were rapidly made available through multiple social media channels, partly due to the ease of integration of these technologies. Additionally, the successful application of videoconferencing via a readily available technology (Skype) has greatly increased the capacity of our team to conduct real-time education and public outreach from remote locations. The improved communication afforded by Web 2.0 has increased the quality of EPO provided by the PLRP, and has enabled a higher level of interaction between the science team and the community at large. Feedback from these online interactions suggests that remote communication via Web 2.0 technologies was an effective tool for increasing public discourse and awareness of the science and exploration activity at Pavilion Lake.

  20. Enriching the trustworthiness of health-related web pages.

    Science.gov (United States)

    Gaudinat, Arnaud; Cruchet, Sarah; Boyer, Celia; Chrawdhry, Pravir

    2011-06-01

    We present an experimental mechanism for enriching web content with quality metadata. This mechanism is based on a simple and well-known initiative in the field of the health-related web, the HONcode. The Resource Description Framework (RDF) format and the Dublin Core Metadata Element Set were used to formalize these metadata. The model of trust proposed is based on a quality model for health-related web pages that has been tested in practice over a period of thirteen years. Our model has been explored in the context of a project to develop a research tool that automatically detects the occurrence of quality criteria in health-related web pages.
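    Dublin Core elements serialized as RDF/XML, as described in this record, can be sketched in a few lines; the page URL, title, and date below are invented placeholders rather than actual HONcode annotations:

```python
from xml.sax.saxutils import escape

def dc_description(url, title, date):
    """Emit a minimal RDF/XML description carrying Dublin Core elements
    for a single web page."""
    return f"""<?xml version="1.0"?>
<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
         xmlns:dc="http://purl.org/dc/elements/1.1/">
  <rdf:Description rdf:about="{escape(url, {'"': '&quot;'})}">
    <dc:title>{escape(title)}</dc:title>
    <dc:date>{escape(date)}</dc:date>
  </rdf:Description>
</rdf:RDF>"""

# Hypothetical health-related page annotated with basic Dublin Core elements.
xml = dc_description("http://example.org/page", "Managing asthma", "2011-06-01")
```

    Quality criteria such as the HONcode principles would be attached the same way, as additional properties on the `rdf:Description`, so that tools can read the trust metadata alongside the page.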

  1. A Web 2.0 and OGC Standards Enabled Sensor Web Architecture for Global Earth Observing System of Systems

    Science.gov (United States)

    Mandl, Daniel; Unger, Stephen; Ames, Troy; Frye, Stuart; Chien, Steve; Cappelaere, Pat; Tran, Danny; Derezinski, Linda; Paules, Granville

    2007-01-01

    This paper will describe the progress of a 3-year research award from the NASA Earth Science Technology Office (ESTO) that began October 1, 2006, in response to a NASA Announcement of Research Opportunity on the topic of sensor webs. The key goal of this research is to prototype an interoperable sensor architecture that will enable interoperability between a heterogeneous set of space-based, Unmanned Aerial System (UAS)-based and ground-based sensors. Among the key capabilities being pursued is the ability to automatically discover and task the sensors via the Internet and to automatically discover and assemble the necessary science processing algorithms into workflows in order to transform the sensor data into valuable science products. Our first set of sensor web demonstrations will prototype science products useful in managing wildfires and will use such assets as the Earth Observing 1 spacecraft, managed out of NASA/GSFC, a UAS-based instrument, managed out of Ames, and some automated ground weather stations, managed by the Forest Service. Also, we are collaborating with some of the other ESTO awardees to expand this demonstration and create synergy between our research efforts. Finally, we are making use of the Open Geospatial Consortium (OGC) Sensor Web Enablement (SWE) suite of standards and some Web 2.0 capabilities to leverage emerging technologies and standards. This research will demonstrate and validate a path for rapid, low-cost sensor integration, which is not tied to a particular system and is thus able to absorb new assets in an easily evolvable, coordinated manner. This in turn will help to facilitate the United States contribution to the Global Earth Observation System of Systems (GEOSS), as agreed by the U.S. and 60 other countries at the third Earth Observation Summit held in February of 2005.

  2. Strategies for Adapting WebQuests for Students with Learning Disabilities

    Science.gov (United States)

    Skylar, Ashley A.; Higgins, Kyle; Boone, Randall

    2007-01-01

    WebQuests are gaining popularity as teachers explore using the Internet for guided learning activities. A WebQuest involves students working on a task that is broken down into clearly defined steps. Students often work in groups to actively conduct the research. This article suggests a variety of methods for adapting WebQuests for students with…

  3. Improving the web site's effectiveness by considering each page's temporal information

    NARCIS (Netherlands)

    Li, ZG; Sun, MT; Dunham, MH; Xiao, YQ; Dong, G; Tang, C; Wang, W

    2003-01-01

    Improving the effectiveness of a web site is always one of its owner's top concerns. By focusing on analyzing web users' visiting behavior, web mining researchers have developed a variety of helpful methods, based upon association rules, clustering, prediction and so on. However, we have found

  4. Untangling Web 2.0: Charting Web 2.0 Tools, the NCSS Guidelines for Effective Use of Technology, and Bloom's Taxonomy

    Science.gov (United States)

    Diacopoulos, Mark M.

    2015-01-01

    The potential for social studies to embrace instructional technology and Web 2.0 applications has become a growing trend in recent social studies research. As part of an ongoing process of collaborative enquiry between an instructional specialist and social studies teachers in a Professional Learning Community, a table of Web 2.0 applications was…

  5. Web Accessibility in Romania: The Conformance of Municipal Web Sites to Web Content Accessibility Guidelines

    OpenAIRE

    Costin PRIBEANU; Ruxandra-Dora MARINESCU; Paul FOGARASSY-NESZLY; Maria GHEORGHE-MOISII

    2012-01-01

    The accessibility of public administration web sites is a key quality attribute for the successful implementation of the Information Society. The purpose of this paper is to present a second review of municipal web sites in Romania that is based on automated accessibility checking. A number of 60 web sites were evaluated against WCAG 2.0 recommendations. The analysis of results reveals a relatively low web accessibility of municipal web sites and highlights several aspects. Firstly, a slight ...

  6. Criminal Justice Web Sites.

    Science.gov (United States)

    Dodge, Timothy

    1998-01-01

    Evaluates 15 criminal justice Web sites that have been selected according to the following criteria: authority, currency, purpose, objectivity, and potential usefulness to researchers. The sites provide narrative and statistical information concerning crime, law enforcement, the judicial system, and corrections. Searching techniques are also…

  7. WebQuests: a new instructional strategy for nursing education.

    Science.gov (United States)

    Lahaie, Ulysses

    2007-01-01

    A WebQuest is a model or framework for designing effective Web-based instructional strategies featuring inquiry-oriented activities. It is an innovative approach to learning that is enhanced by the use of evolving instructional technology. WebQuests have invigorated the primary school (grades K through 12) educational sector around the globe, yet there is sparse evidence in the literature of WebQuests at the college and university levels. WebQuests are congruent with pedagogical approaches and cognitive activities commonly used in nursing education. They are simple to construct using a step-by-step approach, and nurse educators will find many related resources on the Internet to help them get started. Included in this article are a discussion of the critical attributes and main features of WebQuests, construction tips, recommended Web sites featuring essential resources, a discussion of WebQuest-related issues identified in the literature, and some suggestions for further research.

  8. WebViz:A Web-based Collaborative Interactive Visualization System for large-Scale Data Sets

    Science.gov (United States)

    Yuen, D. A.; McArthur, E.; Weiss, R. M.; Zhou, J.; Yao, B.

    2010-12-01

    WebViz is a web-based application designed to conduct collaborative, interactive visualizations of large data sets for multiple users, allowing researchers situated all over the world to utilize the visualization services offered by the University of Minnesota’s Laboratory for Computational Sciences and Engineering (LCSE). This ongoing project has been built upon over the last 3 1/2 years. The motivation behind WebViz lies primarily with the need to parse through an increasing amount of data produced by the scientific community as a result of larger and faster multicore and massively parallel computers coming to the market, including the use of general purpose GPU computing. WebViz allows these large data sets to be visualized online by anyone with an account. The application allows users to save time and resources by visualizing data ‘on the fly’, wherever he or she may be located. By leveraging AJAX via the Google Web Toolkit (http://code.google.com/webtoolkit/), we are able to provide users with a remote, web portal to LCSE's (http://www.lcse.umn.edu) large-scale interactive visualization system already in place at the University of Minnesota. LCSE’s custom hierarchical volume rendering software provides high resolution visualizations on the order of 15 million pixels and has been employed for visualizing data primarily from simulations in astrophysics to geophysical fluid dynamics. In the current version of WebViz, we have implemented a highly extensible back-end framework built around HTTP "server push" technology. The web application is accessible via a variety of devices including netbooks, iPhones, and other web- and javascript-enabled cell phones. Features in the current version include the ability for users to (1) securely log in, (2) launch multiple visualizations, (3) conduct collaborative visualization sessions, (4) delegate control aspects of a visualization to others, and (5) engage in collaborative chats with other users within the user interface.
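    The HTTP "server push" mentioned here can take several forms; one simple variant is Server-Sent Events, in which the server streams newline-delimited frames over a kept-open response. A sketch of the message framing only, with an invented event payload (not WebViz's actual implementation):

```python
import json

def sse_frame(event, payload):
    """Format one Server-Sent Events message: an event name followed by
    a JSON data line, terminated by a blank line."""
    return f"event: {event}\ndata: {json.dumps(payload)}\n\n"

# A hypothetical visualization update pushed to connected clients.
frame = sse_frame("view_update", {"session": 7, "zoom": 1.5})
```

    A browser client would consume such frames with the standard `EventSource` API, dispatching on the event name.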

  9. 07051 Working Group Outcomes -- Programming Paradigms for the Web: Web Programming and Web Services

    OpenAIRE

    Hull, Richard; Thiemann, Peter; Wadler, Philip

    2007-01-01

    Participants in the seminar broke into groups on "Patterns and Paradigms" for web programming, "Web Services," "Data on the Web," "Software Engineering" and "Security." Here we give the raw notes recorded during these sessions.

  10. How to increase reach and adherence of web-based interventions: a design research viewpoint

    NARCIS (Netherlands)

    Ludden, Geke D.S.; van Rompay, Thomas J.L.; Kelders, Saskia M.; van Gemert-Pijnen, Julia E.W.C.

    2015-01-01

    Nowadays, technology is increasingly used to increase people’s well-being. For example, many mobile and Web-based apps have been developed that can support people to become mentally fit or to manage their daily diet. However, analyses of current Web-based interventions show that many systems are

  11. Integrated web system of geospatial data services for climate research

    Science.gov (United States)

    Okladnikov, Igor; Gordov, Evgeny; Titov, Alexander

    2016-04-01

    Georeferenced datasets are currently actively used for modeling, interpretation and forecasting of climatic and ecosystem changes on different spatial and temporal scales. Due to the inherent heterogeneity of environmental datasets as well as their huge size (up to tens of terabytes for a single dataset), special software supporting studies in the climate and environmental change areas is required. An approach for integrated analysis of georeferenced climatological data sets based on a combination of web and GIS technologies in the framework of the spatial data infrastructure paradigm is presented. According to this approach, a dedicated data-processing web system for integrated analysis of heterogeneous georeferenced climatological and meteorological data is being developed. It is based on Open Geospatial Consortium (OGC) standards and involves many modern solutions such as an object-oriented programming model, modular composition, and JavaScript libraries based on the GeoExt library, the ExtJS Framework and OpenLayers software. This work is supported by the Ministry of Education and Science of the Russian Federation, Agreement #14.613.21.0037.

  12. THE EDUCATIONAL RESEARCH ON THE WEB – A TWO-EDGED TOOL IN FOCUS

    Directory of Open Access Journals (Sweden)

    R.M. Lima

    2006-07-01

    Full Text Available Although the use of the internet is expanding rapidly on college campuses, little is known about student internet use, how students perceive the reality of internet information and how successful they are in searching the internet. The aim of this project is to analyze the biochemical issues available in web pages, evaluating content quality, trustworthiness and effectiveness. Fourteen sites were analyzed with regard to contents, presence of bibliographical references, authorship, responsibility for titles and adequacy to the target public. The great majority did not mention bibliographic references or the target public. Less than half of the researched sites divulged the names and/or graduation status of information providers. Some sites contained critical conceptual errors, such as: participation of H2O in the photosynthesis dark phase, carnivorous animals feeding only on herbivores, the overall equation of photosynthesis with errors, NADH2 instead of NAD+, etc. Half of them presented identical texts and figures. None of the analyzed sites was thus considered excellent. Our data strengthen the need for rigorous evaluation of educational research on biochemical themes on the web.

  13. Increasing efficiency of information dissemination and collection through the World Wide Web

    Science.gov (United States)

    Daniel P. Huebner; Malchus B. Baker; Peter F. Ffolliott

    2000-01-01

    Researchers, managers, and educators have access to revolutionary technology for information transfer through the World Wide Web (Web). Using the Web to effectively gather and distribute information is addressed in this paper. Tools, tips, and strategies are discussed. Companion Web sites are provided to guide users in selecting the most appropriate tool for searching...

  14. Consuming Web Services on Android Mobile Platform for Finding Parking Lots

    OpenAIRE

    Isak Shabani; Besmir Sejdiu; Fatushe Jasharaj

    2015-01-01

    Many web applications over the last decade have been built using Web services based on the Simple Object Access Protocol (SOAP), because these Web services are the best choice for web applications and mobile applications in general. Research results show how architectures and systems primarily designed for use on the desktop, such as Web service calls with SOAP messaging, can now be used on mobile platforms such as Android. The purpose of this paper is the study of Android...
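
    As a rough illustration of the SOAP messaging the abstract refers to, the request body a mobile client would POST to a web service can be built as a small XML envelope. The service name, namespace and parameters below are hypothetical, not taken from the paper.

```python
# Build a minimal SOAP 1.1 request envelope for a hypothetical
# parking-lot lookup service (all names are illustrative).

def soap_envelope(method: str, ns: str, params: dict) -> str:
    args = "".join(f"<{k}>{v}</{k}>" for k, v in params.items())
    return (
        '<?xml version="1.0" encoding="utf-8"?>'
        '<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">'
        f'<soap:Body><{method} xmlns="{ns}">{args}</{method}></soap:Body>'
        "</soap:Envelope>"
    )

body = soap_envelope("FindParkingLots", "http://example.org/parking",
                     {"latitude": "42.66", "longitude": "21.17"})
# The client would POST this body with Content-Type: text/xml and a
# SOAPAction header, then parse the XML response.
```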

  15. Climate alters intraspecific variation in copepod effect traits through pond food webs.

    Science.gov (United States)

    Charette, Cristina; Derry, Alison M

    2016-05-01

    Essential fatty acids (EFAs) are primarily generated by phytoplankton in aquatic ecosystems, and can limit the growth, development, and reproduction of higher consumers. Among the most critical of the EFAs are highly unsaturated fatty acids (HUFAs), which are only produced by certain groups of phytoplankton. Changing environmental conditions can alter phytoplankton community and fatty acid composition and affect the HUFA content of higher trophic levels. Almost no research has addressed intraspecific variation in HUFAs in zooplankton, nor intraspecific relationships of HUFAs with body size and fecundity. This is despite the fact that intraspecific variation in HUFAs can exceed interspecific variation, and that intraspecific trait variation in body size and fecundity is increasingly recognized to have an important role in food web ecology (effect traits). Our study addressed the relative influences of abiotic selection and food web effects associated with climate change on intraspecific differences and interrelationships between HUFA content, body size, and fecundity of freshwater copepods. We applied structural equation modeling and regression analyses to intraspecific variation in a dominant calanoid copepod, Leptodiaptomus minutus, among a series of shallow north-temperate ponds. Climate-driven diurnal temperature fluctuations favored the coexistence of a diversity of phytoplankton groups with different temperature optima and nutritive quality. This resulted in unexpected positive relationships between temperature, copepod DHA content and body size. Temperature correlated positively with diatom biovolume, and mediated relationships between copepod HUFA content and body size, and between copepod body size and fecundity. The presence of brook trout further accentuated these positive effects in warm ponds, likely through nutrient cycling and stimulation of phytoplankton resources. Climate change may have previously unrecognized positive effects on freshwater copepod DHA content.

  16. Web-ADARE: A Web-Aided Data Repairing System

    KAUST Repository

    Gu, Binbin

    2017-03-08

    Data repairing aims at discovering and correcting erroneous data in databases. In this paper, we develop Web-ADARE, an end-to-end web-aided data repairing system, to provide a feasible way to involve the vast data sources on the Web in data repairing. In developing Web-ADARE, our main attention is paid to the interaction problem between web-aided repairing and rule-based repairing, in order to minimize the Web consultation cost while reaching predefined quality requirements. The same interaction problem also exists in crowd-based methods but has not yet been formally defined and addressed. We first prove in theory that the optimal interaction scheme cannot feasibly be achieved, and then propose an algorithm that identifies a scheme for efficient interaction by investigating the inconsistencies and the dependencies between values in the repairing process. Extensive experiments on three data collections demonstrate the high repairing precision and recall of Web-ADARE, and the efficiency of the generated interaction scheme over several baseline ones.

  17. Web-ADARE: A Web-Aided Data Repairing System

    KAUST Repository

    Gu, Binbin; Li, Zhixu; Yang, Qiang; Xie, Qing; Liu, An; Liu, Guanfeng; Zheng, Kai; Zhang, Xiangliang

    2017-01-01

    Data repairing aims at discovering and correcting erroneous data in databases. In this paper, we develop Web-ADARE, an end-to-end web-aided data repairing system, to provide a feasible way to involve the vast data sources on the Web in data repairing. In developing Web-ADARE, our main attention is paid to the interaction problem between web-aided repairing and rule-based repairing, in order to minimize the Web consultation cost while reaching predefined quality requirements. The same interaction problem also exists in crowd-based methods but has not yet been formally defined and addressed. We first prove in theory that the optimal interaction scheme cannot feasibly be achieved, and then propose an algorithm that identifies a scheme for efficient interaction by investigating the inconsistencies and the dependencies between values in the repairing process. Extensive experiments on three data collections demonstrate the high repairing precision and recall of Web-ADARE, and the efficiency of the generated interaction scheme over several baseline ones.
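
    The rule-based half of the repairing loop described above can be sketched as checking a functional dependency and flagging conflicting values; in Web-ADARE's setting, conflicts the rules cannot resolve would then be sent to the Web for consultation. The dependency (zip → city) and sample data below are illustrative, not from the paper.

```python
# Toy rule-based repairing step: a functional dependency (zip -> city)
# flags rows whose values conflict. The FD and data are illustrative.
from collections import defaultdict

def fd_violations(rows, lhs, rhs):
    """Return {lhs-value: set of conflicting rhs-values} for the FD lhs -> rhs."""
    groups = defaultdict(set)
    for row in rows:
        groups[row[lhs]].add(row[rhs])
    return {key: vals for key, vals in groups.items() if len(vals) > 1}

rows = [
    {"zip": "10001", "city": "New York"},
    {"zip": "10001", "city": "Newyork"},   # erroneous spelling
    {"zip": "60601", "city": "Chicago"},
]
conflicts = fd_violations(rows, "zip", "city")
# conflicts maps "10001" to its two conflicting city values
```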

  18. deepTools2: a next generation web server for deep-sequencing data analysis.

    Science.gov (United States)

    Ramírez, Fidel; Ryan, Devon P; Grüning, Björn; Bhardwaj, Vivek; Kilpert, Fabian; Richter, Andreas S; Heyne, Steffen; Dündar, Friederike; Manke, Thomas

    2016-07-08

    We present an update to our Galaxy-based web server for processing and visualizing deeply sequenced data. Its core tool set, deepTools, allows users to perform complete bioinformatic workflows ranging from quality controls and normalizations of aligned reads to integrative analyses, including clustering and visualization approaches. Since we first described our deepTools Galaxy server in 2014, we have implemented new solutions for many requests from the community and our users. Here, we introduce significant enhancements and new tools to further improve data visualization and interpretation. deepTools continues to be open to all users and freely available as a web service at deeptools.ie-freiburg.mpg.de. The new deepTools2 suite can be easily deployed within any Galaxy framework via the toolshed repository, and we also provide source code for command-line usage under Linux and Mac OS X. A public and documented API for access to deepTools functionality is also available. © The Author(s) 2016. Published by Oxford University Press on behalf of Nucleic Acids Research.

  19. WebQuests and Collaborative Learning in Teacher Preparation: A Singapore Study

    Science.gov (United States)

    Yang, Chien-Hui; Tzuo, Pei-Wen; Komara, Cecile

    2011-01-01

    This research project aimed to introduce WebQuests to train special education preservice teachers in Singapore. The following research questions were posed: (1) Does the use of WebQuests in teacher preparation promote special education teacher understanding on Universal Design for Learning in accommodating students with diverse learning needs? (2)…

  20. Extremely acidic mine lake ecosystems in Lusatia (Germany) : characterisation and development of sustainable, biology-based acidity removal technologies

    International Nuclear Information System (INIS)

    Fyson, A.; Deneke, R.; Nixdorf, B.; Steinberg, C.E.W.

    2003-01-01

    There are approximately 500 infilled open-cast lignite pits in Germany that are extremely acidic because of high concentrations of dissolved metals, mostly iron and aluminium. The mining lakes have pH values of 2.4 to 3.4 and also have high sulphate concentrations. Efforts are being made to neutralize the lakes for recreational purposes. The acidity can be removed from the lakes in an economical and environmentally sustainable manner by flooding through diversion of neutral, nutrient-rich river water. This paper described the living conditions in the acidic mining lakes of the Lausitz region of Germany and summarized the benefits of the controlled eutrophication approach to enhance natural, self-sustaining processes for acid neutralization. Compared to infilling with river water, eutrophication increases lake productivity and removes acidity through sediment-bound and water-column biologically mediated processes. The study involved basic research on particle transport in streams and lakes, pelagic food web interactions and submerged macrophyte metabolism. It also looked at the role of wetlands, bacterial interactions at the water-sediment interface, and modelling. It was shown that the addition of phosphorus and carbon to the water column can enhance primary production. Future studies will examine environmentally acceptable treatment strategies that offer an alternative to chemical treatment. 20 refs., 1 tab., 2 figs

  1. The Electron Microscopy Outreach Program: A Web-based resource for research and education.

    Science.gov (United States)

    Sosinsky, G E; Baker, T S; Hand, G; Ellisman, M H

    1999-01-01

    We have developed a centralized World Wide Web (WWW)-based environment that serves as a resource of software tools and expertise for biological electron microscopy. A major focus is molecular electron microscopy, but the site also includes information and links on structural biology at all levels of resolution. This site serves to help integrate or link structural biology techniques in accordance with user needs. The WWW site, called the Electron Microscopy (EM) Outreach Program (URL: http://emoutreach.sdsc.edu), provides scientists with computational and educational tools for their research and edification. In particular, we have set up a centralized resource containing course notes, references, and links to image analysis and three-dimensional reconstruction software for investigators wanting to learn about EM techniques either within or outside of their fields of expertise. Copyright 1999 Academic Press.

  2. Web-Based Virtual Laboratory for Food Analysis Course

    Science.gov (United States)

    Handayani, M. N.; Khoerunnisa, I.; Sugiarti, Y.

    2018-02-01

    Implementation of learning in the food analysis course in the Program Study of Agro-industrial Technology Education faced problems. These problems include the availability of space and tools in the laboratory, which is not comparable with the number of students, as well as a lack of interactive learning tools. On the other hand, the information technology literacy of students is quite high and the internet network is quite easily accessible on campus. This is a challenge as well as an opportunity for the development of learning media that can help optimize learning in the laboratory. This study aims to develop a web-based virtual laboratory as an alternative learning medium for the food analysis course. This research is R&D (research and development) following the Borg & Gall model. Expert assessment of the developed web-based virtual laboratory showed that, in terms of software engineering aspects, visual communication, material relevance, usefulness and language used, it is feasible as a learning medium. The results of the small-scale and wide-scale tests show that students strongly agree with the development of a web-based virtual laboratory. The response of students to this virtual laboratory was positive. Suggestions from students provide further opportunities for improving the web-based virtual laboratory and should be considered in further research.

  3. Analysing and Enriching Focused Semantic Web Archives for Parliament Applications

    Directory of Open Access Journals (Sweden)

    Elena Demidova

    2014-07-01

    Full Text Available The web and the social web play an increasingly important role as an information source for Members of Parliament and their assistants, journalists, political analysts and researchers. They provide crucial background information, like reactions to political events and comments made by the general public. The case study presented in this paper is driven by two European parliaments (the Greek and the Austrian parliament) and targets an effective exploration of political web archives. In this paper, we describe semantic technologies deployed to ease the exploration of the archived web and social web content and present evaluation results.

  4. Understanding User-Web Interactions via Web Analytics

    CERN Document Server

    Jansen, Bernard J

    2009-01-01

    This lecture presents an overview of the Web analytics process, with a focus on providing insight and actionable outcomes from collecting and analyzing Internet data. The lecture first provides an overview of Web analytics, providing in essence, a condensed version of the entire lecture. The lecture then outlines the theoretical and methodological foundations of Web analytics in order to make obvious the strengths and shortcomings of Web analytics as an approach. These foundational elements include the psychological basis in behaviorism and methodological underpinning of trace data as an empir

  5. Literature search for intermittent rivers research using ISI Web of Science

    Data.gov (United States)

    U.S. Environmental Protection Agency — The dataset is the bibliometric information included in the ISI Web of Science database of scientific literature. Table S2 accessible from the dataset link provides...

  6. The design and implementation of web mining in web sites security

    Science.gov (United States)

    Li, Jian; Zhang, Guo-Yin; Gu, Guo-Chang; Li, Jian-Li

    2003-06-01

    The backdoor or information leak of Web servers can be detected by using Web Mining techniques on abnormal Web log and Web application log data. The security of Web servers can thereby be enhanced and the damage of illegal access avoided. Firstly, a system for discovering the patterns of information leakages in CGI scripts from Web log data was proposed. Secondly, those patterns were provided so that system administrators can modify their code and enhance their Web site security. The following aspects were described: one is to combine the web application log with the web log to extract more information, so that web data mining can be used to mine the web log and discover information that a firewall and an intrusion detection system cannot find. Another is to propose an operation module of the web site to enhance Web site security. For clustering server sessions, a density-based clustering technique is used to reduce resource cost and obtain better efficiency.
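
    The density-based clustering step mentioned for grouping server sessions can be illustrated with a minimal DBSCAN over per-session feature vectors (e.g., request counts and timing). This is a generic textbook sketch with illustrative parameters, not the authors' implementation.

```python
# Minimal DBSCAN (density-based clustering): points within `eps` of a
# core point (one with >= min_pts neighbours, itself included) join its
# cluster; everything else is labelled noise (-1). Illustrative sketch.

def dbscan(points, eps, min_pts):
    labels = [None] * len(points)

    def neighbors(i):
        return [j for j, q in enumerate(points)
                if sum((a - b) ** 2 for a, b in zip(points[i], q)) <= eps ** 2]

    cluster = -1
    for i in range(len(points)):
        if labels[i] is not None:
            continue
        seeds = neighbors(i)
        if len(seeds) < min_pts:
            labels[i] = -1          # provisionally noise (may become a border point)
            continue
        cluster += 1
        labels[i] = cluster
        queue = [j for j in seeds if j != i]
        while queue:
            j = queue.pop()
            if labels[j] == -1:     # noise reached from a core point: border
                labels[j] = cluster
            if labels[j] is not None:
                continue
            labels[j] = cluster
            nbrs = neighbors(j)
            if len(nbrs) >= min_pts:
                queue.extend(nbrs)  # j is a core point: expand the cluster
    return labels
```

    Dense groups of similar sessions form clusters, while isolated (possibly anomalous) sessions are labelled noise, which is the property that makes density-based methods attractive for log analysis.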

  7. Practical guidelines for development of web-based interventions.

    Science.gov (United States)

    Chee, Wonshik; Lee, Yaelim; Chee, Eunice; Im, Eun-Ok

    2014-10-01

    Despite a recent high funding priority on technological aspects of research and a high potential impact of Web-based interventions on health, few guidelines for the development of Web-based interventions are currently available. In this article, we propose practical guidelines for development of Web-based interventions based on an empirical study and an integrative literature review. The empirical study aimed at development of a Web-based physical activity promotion program that was specifically tailored to Korean American midlife women. The literature review included a total of 202 articles that were retrieved through multiple databases. On the basis of the findings of the study and the literature review, we propose directions for development of Web-based interventions in the following steps: (1) meaningfulness and effectiveness, (2) target population, (3) theoretical basis/program theory, (4) focus and objectives, (5) components, (6) technological aspects, and (7) logistics for users. The guidelines could help promote further development of Web-based interventions at this early stage of their use in nursing.

  8. Spider-web amphiphiles as artificial lipid clusters: design, synthesis, and accommodation of lipid components at the air-water interface.

    Science.gov (United States)

    Ariga, Katsuhiko; Urakawa, Toshihiro; Michiue, Atsuo; Kikuchi, Jun-ichi

    2004-08-03

    As a novel category of two-dimensional lipid clusters, dendrimers having an amphiphilic structure in every unit were synthesized and labeled "spider-web amphiphiles". Amphiphilic units based on a Lys-Lys-Glu tripeptide with hydrophobic tails at the C-terminal and a polar head at the N-terminal are dendrically connected through stepwise peptide coupling. This structural design allowed us to separately introduce the polar head and hydrophobic tails. Accordingly, we demonstrated the synthesis of the spider-web amphiphile series in three combinations: acetyl head/C16 chain, acetyl head/C18 chain, and ammonium head/C16 chain. All the spider-web amphiphiles were synthesized in satisfactory yields, and characterized by 1H NMR, MALDI-TOFMS, GPC, and elemental analyses. Surface pressure (pi)-molecular area (A) isotherms showed the formation of expanded monolayers except for the C18-chain amphiphile at 10 degrees C, for which the molecular area in the condensed phase is consistent with the cross-sectional area assigned for all the alkyl chains. In all the spider-web amphiphiles, the molecular areas at a given pressure in the expanded phase increased in proportion to the number of units, indicating that alkyl chains freely fill the inner space of the dendritic core. The mixing of octadecanoic acid with the spider-web amphiphiles at the air-water interface induced condensation of the molecular area. From the molecular area analysis, the inclusion of octadecanoic acid is stoichiometric in character; i.e., the number of captured octadecanoic acid molecules in the spider-web amphiphile roughly agrees with the number of branching points in the spider-web amphiphile.

  9. Web Transfer Over Satellites Being Improved

    Science.gov (United States)

    Allman, Mark

    1999-01-01

    Extensive research conducted by NASA Lewis Research Center's Satellite Networks and Architectures Branch and the Ohio University has demonstrated performance improvements in World Wide Web transfers over satellite-based networks. The use of a new version of the Hypertext Transfer Protocol (HTTP) reduced the time required to load web pages over a single Transmission Control Protocol (TCP) connection traversing a satellite channel. However, an older technique of simultaneously making multiple requests of a given server has been shown to provide even faster transfer time. Unfortunately, the use of multiple simultaneous requests has been shown to be harmful to the network in general. Therefore, we are developing new mechanisms for the HTTP protocol which may allow a single request at any given time to perform as well as, or better than, multiple simultaneous requests. In the course of study, we also demonstrated that the time for web pages to load is at least as short via a satellite link as it is via a standard 28.8-kbps dialup modem channel. This demonstrates that satellites are a viable means of accessing the Internet.

  10. Overcoming Legal Limitations in Disseminating Slovene Web Corpora

    Directory of Open Access Journals (Sweden)

    Tomaž Erjavec

    2016-09-01

    Full Text Available Web texts are becoming increasingly relevant sources of information, with web corpora useful for corpus linguistic studies and development of language technologies. Even though web texts are directly accessible, which substantially simplifies the collection procedure, compilation of web corpora is still complex, time-consuming and expensive. It is crucial that similar endeavours are not repeated, which is why it is necessary to make the created corpora easily and widely accessible both to researchers and a wider audience. While this is logistically and technically a straightforward procedure, legal constraints, such as copyright, privacy and terms of use, severely hinder the dissemination of web corpora. This paper discusses legal conditions and actual practice in this area, gives an overview of current practices and proposes a range of mitigation measures, using the example of the Janes corpus of Slovene user-generated content, in order to ensure free and open dissemination of Slovene web corpora.

  11. No Code Required Giving Users Tools to Transform the Web

    CERN Document Server

    Cypher, Allen; Lau, Tessa; Nichols, Jeffrey

    2010-01-01

    Revolutionary tools are emerging from research labs that enable all computer users to customize and automate their use of the Web without learning how to program. No Code Required takes cutting-edge material from academic and industry leaders, the people creating these tools, and presents the research, development, application, and impact of a variety of new and emerging systems. *The first book since Web 2.0 that covers the latest research, development, and systems emerging from HCI research labs on end user programming tools *Featuring contributions from the creators of Adobe's Zoet

  12. Three types of children’s informational web sites: an inventory of design conventions

    NARCIS (Netherlands)

    Jochmann-Mannak, Hanna; Lentz, Leo; Huibers, Theo W.C.; Sanders, Ted

    2012-01-01

    Purpose: Research on Web design conventions has an almost exclusive focus on Web design for adults. There is far less knowledge about Web design for children. For the first time, an overview is presented of the current design conventions for children's informational Web sites. Method: In this study

  13. Web archives

    DEFF Research Database (Denmark)

    Finnemann, Niels Ole

    2018-01-01

    This article deals with general web archives and the principles for selection of materials to be preserved. It opens with a brief overview of reasons why general web archives are needed. Sections two and three present major, long-term web archive initiatives, discuss the purposes and possible... values of web archives and ask how to meet unknown future needs, demands and concerns. Section four analyses three main principles in contemporary web archiving strategies, topic-centric, domain-centric and time-centric archiving strategies, and section five discusses how to combine these to provide... a broad and rich archive. Section six is concerned with inherent limitations and why web archives are always flawed. The last sections deal with the question of how web archives may fit into the rapidly expanding, but fragmented, landscape of digital repositories taking care of various parts...

  14. Aliphatic, cyclic, and aromatic organic acids, vitamins, and carbohydrates in soil: a review.

    Science.gov (United States)

    Vranova, Valerie; Rejsek, Klement; Formanek, Pavel

    2013-11-10

    Organic acids, vitamins, and carbohydrates represent important organic compounds in soil. Aliphatic, cyclic, and aromatic organic acids play important roles in rhizosphere ecology, pedogenesis, food-web interactions, and decontamination of sites polluted by heavy metals and organic pollutants. Carbohydrates in soils can be used to estimate changes of soil organic matter due to management practices, whereas vitamins may play an important role in soil biological and biochemical processes. The aim of this work is to review current knowledge on aliphatic, cyclic, and aromatic organic acids, vitamins, and carbohydrates in soil and to identify directions for future research. Assessments of organic acids (aliphatic, cyclic, and aromatic) and carbohydrates, including their behaviour, have been reported in many works. However, knowledge on the occurrence and behaviour of D-enantiomers of organic acids, which may be abundant in soil, is currently lacking. Also, identification of the impact and mechanisms of environmental factors, such as soil water content, on carbohydrate status within soil organic matter remains to be determined. Finally, the occurrence of vitamins in soil and their role in biological and biochemical soil processes represent an important direction for future research.

  15. Aliphatic, Cyclic, and Aromatic Organic Acids, Vitamins, and Carbohydrates in Soil: A Review

    Directory of Open Access Journals (Sweden)

    Valerie Vranova

    2013-01-01

    Full Text Available Organic acids, vitamins, and carbohydrates represent important organic compounds in soil. Aliphatic, cyclic, and aromatic organic acids play important roles in rhizosphere ecology, pedogenesis, food-web interactions, and decontamination of sites polluted by heavy metals and organic pollutants. Carbohydrates in soils can be used to estimate changes of soil organic matter due to management practices, whereas vitamins may play an important role in soil biological and biochemical processes. The aim of this work is to review current knowledge on aliphatic, cyclic, and aromatic organic acids, vitamins, and carbohydrates in soil and to identify directions for future research. Assessments of organic acids (aliphatic, cyclic, and aromatic) and carbohydrates, including their behaviour, have been reported in many works. However, knowledge on the occurrence and behaviour of D-enantiomers of organic acids, which may be abundant in soil, is currently lacking. Also, identification of the impact and mechanisms of environmental factors, such as soil water content, on carbohydrate status within soil organic matter remains to be determined. Finally, the occurrence of vitamins in soil and their role in biological and biochemical soil processes represent an important direction for future research.

  16. Aliphatic, Cyclic, and Aromatic Organic Acids, Vitamins, and Carbohydrates in Soil: A Review

    Science.gov (United States)

    Vranova, Valerie; Rejsek, Klement; Formanek, Pavel

    2013-01-01

    Organic acids, vitamins, and carbohydrates represent important organic compounds in soil. Aliphatic, cyclic, and aromatic organic acids play important roles in rhizosphere ecology, pedogenesis, food-web interactions, and decontamination of sites polluted by heavy metals and organic pollutants. Carbohydrates in soils can be used to estimate changes of soil organic matter due to management practices, whereas vitamins may play an important role in soil biological and biochemical processes. The aim of this work is to review current knowledge on aliphatic, cyclic, and aromatic organic acids, vitamins, and carbohydrates in soil and to identify directions for future research. Assessments of organic acids (aliphatic, cyclic, and aromatic) and carbohydrates, including their behaviour, have been reported in many works. However, knowledge on the occurrence and behaviour of D-enantiomers of organic acids, which may be abundant in soil, is currently lacking. Also, identification of the impact and mechanisms of environmental factors, such as soil water content, on carbohydrate status within soil organic matter remains to be determined. Finally, the occurrence of vitamins in soil and their role in biological and biochemical soil processes represent an important direction for future research. PMID:24319374

  17. State Health Mapper: An Interactive, Web-Based Tool for Physician Workforce Planning, Recruitment, and Health Services Research.

    Science.gov (United States)

    Krause, Denise D

    2015-11-01

    Health rankings in Mississippi are abysmal. Mississippi also has fewer physicians to serve its population compared with all other states. Many residents of this predominately rural state do not have access to healthcare providers. To better understand the demographics and distribution of the current health workforce in Mississippi, the main objective of the study was to design a Web-based, spatial, interactive application to visualize and explore the physician workforce. A Web application was designed to assist in health workforce planning. Secondary datasets of licensure and population information were obtained, and live feeds from licensure systems are being established. Several technologies were used to develop an intuitive, user-friendly application. Custom programming was completed in JavaScript so the application could run on most platforms, including mobile devices. The application allows users to identify and query geographic locations of individual or aggregated physicians based on attributes included in the licensure data, to perform drive time or buffer analyses, and to explore sociodemographic population data by geographic area of choice. This Web-based application with analytical tools visually represents the physician workforce licensed in Mississippi and its attributes, and provides access to much-needed information for statewide health workforce planning and research. The success of the application is not only based on the practicality of the tool but also on its ease of use. Feedback has been positive and has come from a wide variety of organizations across the state.
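
    A buffer analysis like the one the application offers can be sketched as filtering provider locations by great-circle distance from a chosen point. The haversine formula below is standard; the function names and sample coordinates are illustrative, not the application's code.

```python
# Select points within a radius (km) of a center: a minimal version of
# the "buffer analysis" a workforce-mapping tool might run.
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two (lat, lon) points in km."""
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * 6371.0 * asin(sqrt(a))

def within_buffer(points, center, radius_km):
    """Keep (lat, lon) points at most radius_km from the center."""
    return [p for p in points if haversine_km(*p, *center) <= radius_km]
```

    A drive-time analysis would replace the straight-line distance with travel time over a road network, which is why such tools typically delegate that step to a routing service.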

  18. Rtools: a web server for various secondary structural analyses on single RNA sequences.

    Science.gov (United States)

    Hamada, Michiaki; Ono, Yukiteru; Kiryu, Hisanori; Sato, Kengo; Kato, Yuki; Fukunaga, Tsukasa; Mori, Ryota; Asai, Kiyoshi

    2016-07-08

The secondary structures, as well as the nucleotide sequences, are important features of RNA molecules that characterize their functions. According to the thermodynamic model, however, the probability of any single secondary structure is very small. As a consequence, any tool that predicts the secondary structures of RNAs has limited accuracy. On the other hand, there are a few tools that compensate for the imperfect predictions by calculating and visualizing secondary structural information from RNA sequences. It is desirable to obtain the rich information from those tools through a friendly interface. We implemented a web server of tools to predict secondary structures and to calculate various structural features based on the energy models of secondary structures. By simply giving an RNA sequence to the web server, the user can obtain different types of solutions for the secondary structures, marginal probabilities such as base-pairing probabilities, loop probabilities and accessibilities of local bases, the energy changes caused by arbitrary base mutations, as well as measures for validation of the predicted secondary structures. The web server is available at http://rtools.cbrc.jp, which integrates the software tools CentroidFold, CentroidHomfold, IPKnot, CapR, Raccess, Rchange and RintD. © The Author(s) 2016. Published by Oxford University Press on behalf of Nucleic Acids Research.
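As a toy illustration of the dynamic programming that underlies secondary structure prediction, the classic Nussinov recursion maximizes the number of nested base pairs; the Rtools servers use full thermodynamic energy models rather than this simple pair count:

```python
# Toy illustration of dynamic programming over nested RNA secondary
# structures (Nussinov base-pair maximization). Real predictors such as
# those behind Rtools use thermodynamic energy models, not a pair count.

PAIRS = {("A", "U"), ("U", "A"), ("G", "C"), ("C", "G"), ("G", "U"), ("U", "G")}

def nussinov(seq, min_loop=3):
    """Maximum number of nested base pairs, enforcing a minimum hairpin loop."""
    n = len(seq)
    dp = [[0] * n for _ in range(n)]
    for span in range(min_loop + 1, n):
        for i in range(n - span):
            j = i + span
            best = dp[i + 1][j]                       # leave i unpaired
            for k in range(i + min_loop + 1, j + 1):  # or pair i with some k
                if (seq[i], seq[k]) in PAIRS:
                    right = dp[k + 1][j] if k + 1 <= j else 0
                    best = max(best, 1 + dp[i + 1][k - 1] + right)
            dp[i][j] = best
    return dp[0][n - 1]

print(nussinov("GGGAAAUCC"))  # 3 nested pairs: (0,8), (1,7), (2,6)
```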

  19. Parasites in food webs: the ultimate missing links

    Science.gov (United States)

    Lafferty, Kevin D.; Allesina, Stefano; Arim, Matias; Briggs, Cherie J.; De Leo, Giulio A.; Dobson, Andrew P.; Dunne, Jennifer A.; Johnson, Pieter T.J.; Kuris, Armand M.; Marcogliese, David J.; Martinez, Neo D.; Memmott, Jane; Marquet, Pablo A.; McLaughlin, John P.; Mordecai, Eerin A.; Pascual, Mercedes; Poulin, Robert; Thieltges, David W.

    2008-01-01

    Parasitism is the most common consumer strategy among organisms, yet only recently has there been a call for the inclusion of infectious disease agents in food webs. The value of this effort hinges on whether parasites affect food-web properties. Increasing evidence suggests that parasites have the potential to uniquely alter food-web topology in terms of chain length, connectance and robustness. In addition, parasites might affect food-web stability, interaction strength and energy flow. Food-web structure also affects infectious disease dynamics because parasites depend on the ecological networks in which they live. Empirically, incorporating parasites into food webs is straightforward. We may start with existing food webs and add parasites as nodes, or we may try to build food webs around systems for which we already have a good understanding of infectious processes. In the future, perhaps researchers will add parasites while they construct food webs. Less clear is how food-web theory can accommodate parasites. This is a deep and central problem in theoretical biology and applied mathematics. For instance, is representing parasites with complex life cycles as a single node equivalent to representing other species with ontogenetic niche shifts as a single node? Can parasitism fit into fundamental frameworks such as the niche model? Can we integrate infectious disease models into the emerging field of dynamic food-web modelling? Future progress will benefit from interdisciplinary collaborations between ecologists and infectious disease biologists.

  20. Even Faster Web Sites Performance Best Practices for Web Developers

    CERN Document Server

    Souders, Steve

    2009-01-01

    Performance is critical to the success of any web site, and yet today's web applications push browsers to their limits with increasing amounts of rich content and heavy use of Ajax. In this book, Steve Souders, web performance evangelist at Google and former Chief Performance Yahoo!, provides valuable techniques to help you optimize your site's performance. Souders' previous book, the bestselling High Performance Web Sites, shocked the web development world by revealing that 80% of the time it takes for a web page to load is on the client side. In Even Faster Web Sites, Souders and eight exp

  1. Web-based recruitment: effects of information, organizational brand, and attitudes toward a Web site on applicant attraction.

    Science.gov (United States)

    Allen, David G; Mahto, Raj V; Otondo, Robert F

    2007-11-01

    Recruitment theory and research show that objective characteristics, subjective considerations, and critical contact send signals to prospective applicants about the organization and available opportunities. In the generating applicants phase of recruitment, critical contact may consist largely of interactions with recruitment sources (e.g., newspaper ads, job fairs, organization Web sites); however, research has yet to fully address how all 3 types of signaling mechanisms influence early job pursuit decisions in the context of organizational recruitment Web sites. Results based on data from 814 student participants searching actual organization Web sites support and extend signaling and brand equity theories by showing that job information (directly) and organization information (indirectly) are related to intentions to pursue employment when a priori perceptions of image are controlled. A priori organization image is related to pursuit intentions when subsequent information search is controlled, but organization familiarity is not, and attitudes about a recruitment source also influence attraction and partially mediate the effects of organization information. Theoretical and practical implications for recruitment are discussed. (c) 2007 APA

  2. Graph Structure in Three National Academic Webs: Power Laws with Anomalies.

    Science.gov (United States)

    Thelwall, Mike; Wilkinson, David

    2003-01-01

    Explains how the Web can be modeled as a mathematical graph and analyzes the graph structures of three national university publicly indexable Web sites from Australia, New Zealand, and the United Kingdom. Topics include commercial search engines and academic Web link research; method-analysis environment and data sets; and power laws. (LRW)
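The power-law analysis described can be reproduced on any crawled link graph: tally in-degrees, then fit a line to the degree distribution on log-log axes. A minimal sketch on a hypothetical toy edge list (not the study's data):

```python
# Sketch: estimate the power-law exponent of a link graph's in-degree
# distribution via a least-squares fit on log-log axes. The edge list is
# a hypothetical stand-in for a crawled academic web graph.
import math
from collections import Counter

edges = [("a", "hub"), ("b", "hub"), ("c", "hub"), ("d", "hub"),
         ("a", "b"), ("c", "b"), ("d", "c"), ("b", "a")]

def indegree_counts(edges):
    """Map each in-degree value k to the number of pages with that in-degree."""
    indeg = Counter(dst for _, dst in edges)
    return Counter(indeg.values())

def loglog_slope(freq):
    """Least-squares slope of log(count) vs log(degree); ~ -gamma for a power law."""
    xs = [math.log(k) for k in freq]
    ys = [math.log(c) for c in freq.values()]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))

freq = indegree_counts(edges)
print(freq)
print(loglog_slope(freq))
```

On real crawl data one would fit over many decades of degree values; this toy graph only shows the mechanics.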

  3. Web pages of Slovenian public libraries

    Directory of Open Access Journals (Sweden)

    Silva Novljan

    2002-01-01

Full Text Available Libraries should offer their patrons web sites which establish the unmistakable concept of a (public) library, a concept that cannot be confused with other information brokers and services available on the Internet, yet which, inside this framework, shows a diversity that directs patrons to other (public) libraries. This can be achieved by reliability, quality of information and services, and safety of usage. When this is achieved, patrons regard library web sites as important reference sources deserving continuous usage for obtaining relevant information. Libraries justify investment in the development and maintenance of their web sites by the number of visits and by patron satisfaction. The presented research, made on a sample of Slovene public libraries' web sites, determines how the libraries fulfil their purpose and role, as well as the given professional recommendations in web site design. The results reveal the libraries' striving for the modernisation of their functions; major attention is directed to the presentation of classic libraries and their activities, lesser to the expansion of available contents and electronic sources. Pointing to their diversity is significant, since it is not a result of patrons' needs but rather the consequence of improvisation, with too little attention to the selection, availability, organisation and presentation of different kinds of information and services on the web sites. Based on the analysis of a common concept of the public library web site, certain activities for improving the existing state of affairs are presented in the paper.

  4. Proposition and Organization of an Adaptive Learning Domain Based on Fusion from the Web

    Science.gov (United States)

    Chaoui, Mohammed; Laskri, Mohamed Tayeb

    2013-01-01

The Web allows self-navigated education through interaction with large amounts of Web resources. While enjoying the flexibility of Web tools, authors may struggle with searching and filtering Web resources when they face diverse resource formats and complex structures. An adaptation of extracted Web resources must be assured by authors, to give…

  5. A Web of applicant attraction: person-organization fit in the context of Web-based recruitment.

    Science.gov (United States)

    Dineen, Brian R; Ash, Steven R; Noe, Raymond A

    2002-08-01

    Applicant attraction was examined in the context of Web-based recruitment. A person-organization (P-O) fit framework was adopted to examine how the provision of feedback to individuals regarding their potential P-O fit with an organization related to attraction. Objective and subjective P-O fit, agreement with fit feedback, and self-esteem also were examined in relation to attraction. Results of an experiment that manipulated fit feedback level after a self-assessment provided by a fictitious company Web site found that both feedback level and objective P-O fit were positively related to attraction. These relationships were fully mediated by subjective P-O fit. In addition, attraction was related to the interaction of objective fit, feedback, and agreement and objective fit, feedback, and self-esteem. Implications and future Web-based recruitment research directions are discussed.

  6. Modelling of web-based virtual university administration for Nigerian ...

    African Journals Online (AJOL)

This research work focused on the development of a model of web-based virtual university administration for Nigerian universities. This is necessary as there is still a noticeable administrative constraint in our universities, the establishment of many university Web portals notwithstanding. More efforts are therefore needed to ...

  7. THE DESIGN AND IMPLEMENTATION OF THE RESEARCH CENTER FOR AERONAUTICS AND SPACE WEB SITE

    Directory of Open Access Journals (Sweden)

    LEHADUS Daniel

    2010-11-01

Full Text Available This paper presents some elements and principles commonly used in web design. It is addressed to anyone with an interest in developing their skills as a visual communicator and anyone who wants to learn the basics of graphic design, so that they can develop their artistic skills and build more powerful and effective web sites.

  8. Classification algorithm of Web document in ionization radiation

    International Nuclear Information System (INIS)

    Geng Zengmin; Liu Wanchun

    2005-01-01

Resources on the Internet are numerous. How to mine the resources of a particular field or trade more efficiently is one of the research directions of Web mining (WM). The paper studies the classification of Web documents on ionization radiation (IR) based on the Bayes, Rocchio and Widrow-Hoff algorithms, and analyses the results of the trial. (authors)
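Of the algorithms the paper compares, Rocchio is the easiest to sketch: each class is represented by the centroid of its training-document vectors, and a new document is assigned to the nearest centroid. A toy illustration with hypothetical tf-idf vectors, not the paper's data:

```python
# Sketch of Rocchio text classification: represent each class by the
# centroid of its training vectors, assign new documents to the nearest
# centroid by cosine similarity. The 3-dimensional vectors are toy
# tf-idf stand-ins for term-weighted Web documents.
import math
from collections import defaultdict

def centroid(vectors):
    dims = len(vectors[0])
    return [sum(v[d] for v in vectors) / len(vectors) for d in range(dims)]

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v)))

def train(labelled):
    """labelled: iterable of (vector, class_label) pairs."""
    by_class = defaultdict(list)
    for vec, label in labelled:
        by_class[label].append(vec)
    return {label: centroid(vecs) for label, vecs in by_class.items()}

def classify(centroids, vec):
    return max(centroids, key=lambda label: cosine(centroids[label], vec))

training = [([3.0, 0.1, 0.0], "radiation"), ([2.5, 0.3, 0.2], "radiation"),
            ([0.1, 0.2, 2.8], "other"), ([0.0, 0.4, 3.1], "other")]
model = train(training)
print(classify(model, [2.0, 0.2, 0.1]))  # nearest the "radiation" centroid
```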

  9. FASH: A web application for nucleotides sequence search

    Directory of Open Access Journals (Sweden)

    Chew Paul

    2008-05-01

Full Text Available Abstract FASH (Fourier Alignment Sequence Heuristics) is a web application, based on the Fast Fourier Transform, for finding remote homologs within a long nucleic acid sequence. Given a query sequence and a long text sequence (e.g., the human genome), FASH detects subsequences within the text that are remotely similar to the query. FASH offers an alternative approach to Blast/Fasta for querying long RNA/DNA sequences. FASH differs from these other approaches in that it does not depend on the existence of contiguous seed sequences in its initial detection phase. The FASH web server is user friendly and very easy to operate. Availability: FASH can be accessed at https://fash.bgu.ac.il:8443/fash/default.jsp (secured website).
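The Fourier idea FASH builds on can be illustrated in a few lines: encode each base as an indicator channel and cross-correlate the query against the text with the FFT, scoring every alignment offset at once without seed sequences. A toy sketch (FASH's actual scoring and heuristics are considerably more elaborate):

```python
# Sketch of FFT-based sequence matching: encode each nucleotide as a
# one-hot channel and cross-correlate query against text with numpy's FFT.
# The correlation peak marks the offset with the most matching bases.
import numpy as np

def indicator(seq):
    """len(seq) x 4 one-hot matrix over the alphabet ACGT."""
    alphabet = "ACGT"
    m = np.zeros((len(seq), 4))
    for i, base in enumerate(seq):
        m[i, alphabet.index(base)] = 1.0
    return m

def match_scores(text, query):
    """score[o] = number of positions where query matches text at offset o."""
    t, q = indicator(text), indicator(query)
    n = len(text) + len(query) - 1
    total = np.zeros(n)
    for c in range(4):  # correlate channel by channel, sum the results
        ft = np.fft.rfft(t[:, c], n)
        fq = np.fft.rfft(q[::-1, c], n)  # reversal turns convolution into correlation
        total += np.fft.irfft(ft * fq, n)
    # valid offsets: query fully inside the text
    return np.rint(total[len(query) - 1 : len(text)]).astype(int)

scores = match_scores("ACGTACGAT", "ACGA")
print(scores)                 # one match count per alignment offset
print(int(scores.argmax()))   # exact match of the query at offset 4
```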

  10. Web-Based Media Contents Editor for UCC Websites

    Science.gov (United States)

    Kim, Seoksoo

The purpose of this research is to design a web-based media contents editor for establishing UCC (User Created Contents)-based websites. The web-based editor features user-oriented interfaces and increased convenience, significantly different from previous off-line editors. It allows users to edit media contents online and can be effectively used for the online promotion activities of enterprises and organizations. In addition to development of the editor, the research aims to support the entry of enterprises and public agencies into the online market by combining the technology with various UCC items.

  11. Efficient Web Harvesting Strategies for Monitoring Deep Web Content

    NARCIS (Netherlands)

    Khelghati, Mohammadreza; Hiemstra, Djoerd; van Keulen, Maurice

    2016-01-01

Web content changes rapidly. In Focused Web Harvesting, which aims at achieving a complete harvest for a given topic, this dynamic nature of the web creates problems for users who need access to a complete set of web data related to their topics of interest. Whether you are a fan

  12. Update of the FANTOM web resource

    DEFF Research Database (Denmark)

    Lizio, Marina; Harshbarger, Jayson; Abugessaisa, Imad

    2017-01-01

Upon the first publication of the fifth iteration of the Functional Annotation of Mammalian Genomes collaborative project, FANTOM5, we gathered a series of primary data and database systems into the FANTOM web resource (http://fantom.gsc.riken.jp) to help researchers explore transcriptional regulation and cellular states. In the course of the collaboration, primary data and analysis results have been expanded, and functionalities of the database systems enhanced. We believe that our data and web systems are invaluable resources, and we think the scientific community will benefit from this recent update to deepen their understanding of mammalian cellular organization. We introduce the contents of FANTOM5 here, report recent updates in the web resource and provide future perspectives.

  13. Determinants of Corporate Web Services Adoption: A Survey of Companies in Korea

    Science.gov (United States)

    Kim, Daekil

    2010-01-01

    Despite the growing interest and attention from Information Technology researchers and practitioners, empirical research on factors that influence an organization's likelihood of adoption of Web Services has been limited. This study identified the factors influencing Web Services adoption from the perspective of 151 South Korean firms. The…

  14. The Potential of Online Respondent Data for Choice Modeling in Transportation Research: Evidence from Stated Preference Experiments using Web-based Samples

    OpenAIRE

    Hoffer, Brice

    2015-01-01

The aim of this thesis is to analyze the potential of online survey services for conducting stated preference experiments in the field of transportation planning. Several web products for hosting questionnaires are evaluated with respect to important features required when conducting a stated preference survey. Based on this evaluation, the open-source platform LimeSurvey is the most appropriate for this kind of research. A stated preference questionnaire about pedestrians’ route choice in a Sin...

  15. Web2Quests: Updating a Popular Web-Based Inquiry-Oriented Activity

    Science.gov (United States)

    Kurt, Serhat

    2009-01-01

    WebQuest is a popular inquiry-oriented activity in which learners use Web resources. Since the creation of the innovation, almost 15 years ago, the Web has changed significantly, while the WebQuest technique has changed little. This article examines possible applications of new Web trends on WebQuest instructional strategy. Some possible…

  16. Development of Web GIS for complex processing and visualization of climate geospatial datasets as an integral part of dedicated Virtual Research Environment

    Science.gov (United States)

    Gordov, Evgeny; Okladnikov, Igor; Titov, Alexander

    2017-04-01

For comprehensive usage of large geospatial meteorological and climate datasets it is necessary to create a distributed software infrastructure based on the spatial data infrastructure (SDI) approach. Currently, it is generally accepted that the development of client applications as integrated elements of such infrastructure should be based on modern web and GIS technologies. The paper describes the Web GIS for complex processing and visualization of geospatial datasets (mainly in NetCDF and PostGIS formats) as an integral part of the dedicated Virtual Research Environment for comprehensive study of ongoing and possible future climate change and analysis of its implications, providing full information and computing support for the study of the economic, political and social consequences of global climate change at the global and regional levels. The Web GIS consists of two basic software parts: 1. A server-side part comprising the PHP applications of the SDI geoportal, which realize the interaction with the computational core backend and the WMS/WFS/WPS cartographic services, and implement an open API for browser-based client software. Being secondary, this part provides a limited set of procedures accessible via a standard HTTP interface. 2. A front-end part, a Web GIS client developed as a "single page application" on top of the JavaScript libraries OpenLayers (http://openlayers.org/), ExtJS (https://www.sencha.com/products/extjs) and GeoExt (http://geoext.org/). It implements the application business logic and provides an intuitive user interface similar to that of such popular desktop GIS applications as uDig, QuantumGIS etc. The Boundless/OpenGeo architecture was used as a basis for the Web GIS client development.
In accordance with general INSPIRE requirements for data visualization, the Web GIS provides such standard functionality as data overview, image navigation, scrolling, scaling and graphical overlay, displaying map
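The WMS services such a client consumes follow the OGC standard, so the map requests issued behind the scenes are ordinary URLs built from well-known parameters. A minimal sketch; the endpoint and layer name below are hypothetical, not those of the described geoportal:

```python
# Sketch: build an OGC WMS 1.1.1 GetMap URL of the kind a Web GIS client
# issues behind the scenes. The endpoint and layer name are hypothetical.
from urllib.parse import urlencode

def wms_getmap_url(endpoint, layer, bbox, size=(800, 600), srs="EPSG:4326"):
    """bbox is (minx, miny, maxx, maxy) in the coordinates of `srs`."""
    params = {
        "SERVICE": "WMS",
        "VERSION": "1.1.1",
        "REQUEST": "GetMap",
        "LAYERS": layer,
        "STYLES": "",
        "SRS": srs,
        "BBOX": ",".join(str(v) for v in bbox),
        "WIDTH": size[0],
        "HEIGHT": size[1],
        "FORMAT": "image/png",
        "TRANSPARENT": "TRUE",
    }
    return endpoint + "?" + urlencode(params)

url = wms_getmap_url("https://example.org/geoserver/wms",
                     "climate:surface_temperature",
                     bbox=(60.0, 50.0, 90.0, 70.0))
print(url)
```

Fetching the resulting URL returns a rendered PNG tile, which is exactly what libraries like OpenLayers overlay on the base map.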

  17. Impact of the web on citation and information-seeking behaviour of academics

    OpenAIRE

    2012-01-01

    D.Litt. et Phil. This study investigated the impact of the Web on the information-seeking and citation behaviour of Unisa academics. The research study was executed in two phases. Phase 1 consisted of a Web citation analysis and phase 2 a questionnaire. Phase 1 explored how the availability of Web information resources affected the scholarly citation behaviour of Unisa academics by determining the relationship between Web-based references and non-Web-based references in the reference lists...

  18. Efficient Web Harvesting Strategies for Monitoring Deep Web Content

    NARCIS (Netherlands)

    Khelghati, Mohammadreza; Hiemstra, Djoerd; van Keulen, Maurice

    2016-01-01

Web content changes rapidly [18]. In Focused Web Harvesting [17], whose aim is to achieve a complete harvest for a given topic, this dynamic nature of the web creates problems for users who need access to all the web data relevant to their topics of interest. Whether you are a fan

  19. Unraveling the web of viroinformatics: computational tools and databases in virus research.

    Science.gov (United States)

    Sharma, Deepak; Priyadarshini, Pragya; Vrati, Sudhanshu

    2015-02-01

    The beginning of the second century of research in the field of virology (the first virus was discovered in 1898) was marked by its amalgamation with bioinformatics, resulting in the birth of a new domain--viroinformatics. The availability of more than 100 Web servers and databases embracing all or specific viruses (for example, dengue virus, influenza virus, hepatitis virus, human immunodeficiency virus [HIV], hemorrhagic fever virus [HFV], human papillomavirus [HPV], West Nile virus, etc.) as well as distinct applications (comparative/diversity analysis, viral recombination, small interfering RNA [siRNA]/short hairpin RNA [shRNA]/microRNA [miRNA] studies, RNA folding, protein-protein interaction, structural analysis, and phylotyping and genotyping) will definitely aid the development of effective drugs and vaccines. However, information about their access and utility is not available at any single source or on any single platform. Therefore, a compendium of various computational tools and resources dedicated specifically to virology is presented in this article. Copyright © 2015, American Society for Microbiology. All Rights Reserved.

  20. USING MULTIMEDIA AND WEB TECHNOLOGIES IN STUDYING THE HUMANITIES. WEB-MULTIMEDIA ENCYCLOPEDIA «WILLIAM SHAKESPEARE AND RENAISSANCE».

    Directory of Open Access Journals (Sweden)

    E. Alferov

    2010-06-01

Full Text Available The article discusses the use of innovative information technologies in modern education. Special attention is given to the use of web multimedia technologies in the study of the humanities. As an example of the use of information and communication tools in the teaching of philological disciplines, the article describes the purpose, functionality and architecture of the web-multimedia encyclopedia «William Shakespeare and Renaissance» (http://shakespeare.ksu.ks.ua), developed in the laboratory of integrated learning environments of the Research Institute of IT.

  1. ReplacementMatrix: a web server for maximum-likelihood estimation of amino acid replacement rate matrices.

    Science.gov (United States)

    Dang, Cuong Cao; Lefort, Vincent; Le, Vinh Sy; Le, Quang Si; Gascuel, Olivier

    2011-10-01

Amino acid replacement rate matrices are an essential basis of protein studies (e.g. in phylogenetics and alignment). A number of general-purpose matrices have been proposed (e.g. JTT, WAG, LG) since the seminal work of Margaret Dayhoff and co-workers. However, it has been shown that matrices specific to certain protein groups (e.g. mitochondrial) or life domains (e.g. viruses) differ significantly from general average matrices, and thus perform better when applied to the data to which they are dedicated. This Web server implements the maximum-likelihood estimation procedure that was used to estimate LG, and provides a number of tools and facilities. Users upload a set of multiple protein alignments from their domain of interest and receive the resulting matrix by email, along with statistics and comparisons with other matrices. A non-parametric bootstrap is performed optionally to assess the variability of replacement rate estimates. Maximum-likelihood trees, inferred using the estimated rate matrix, are also computed optionally for each input alignment. Finely tuned procedures and up-to-date ML software (PhyML 3.0, XRATE) are combined to perform all these heavy calculations on our clusters. Availability: http://www.atgc-montpellier.fr/ReplacementMatrix/. Contact: olivier.gascuel@lirmm.fr. Supplementary data are available at http://www.atgc-montpellier.fr/ReplacementMatrix/
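The raw ingredient of such estimation, a symmetric tally of amino acid replacements observed within alignment columns, can be sketched as follows. This is only the counting step: the server's actual maximum-likelihood procedure (via PhyML/XRATE) additionally models tree topology, branch lengths and rate heterogeneity.

```python
# Sketch: tally amino acid replacements between sequences in the same
# alignment column. This is only the raw counting step; ML estimation as
# performed by the server also models trees, branch lengths and rates.
from collections import Counter
from itertools import combinations

AA = "ACDEFGHIKLMNPQRSTVWY"  # the 20 standard amino acids

def replacement_counts(alignment):
    """Symmetric Counter of (aa1, aa2) pairs observed in the same column."""
    counts = Counter()
    for column in zip(*alignment):
        residues = [r for r in column if r in AA]  # skip gaps/ambiguity codes
        for a, b in combinations(residues, 2):
            if a != b:
                counts[tuple(sorted((a, b)))] += 1
    return counts

alignment = ["MKLV-A", "MRLVTA", "MKIVTA"]  # hypothetical toy alignment
counts = replacement_counts(alignment)
print(counts)
```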

  2. Crawl-Based Analysis of Web Applications : Prospects and Challenges

    NARCIS (Netherlands)

    Van Deursen, A.; Mesbah, A.; Nederlof, A.

    2014-01-01

    In this paper we review five years of research in the field of automated crawling and testing of web applications. We describe the open source Crawljax tool, and the various extensions that have been proposed in order to address such issues as cross-browser compatibility testing, web application

  3. Capturing Trust in Social Web Applications

    Science.gov (United States)

    O'Donovan, John

The Social Web constitutes a shift in information flow from the traditional Web. Previously, content was provided by the owners of a website, for consumption by the end-user. Nowadays, these websites are being replaced by Social Web applications which are frameworks for the publication of user-provided content. Traditionally, Web content could be `trusted' to some extent based on the site it originated from. Algorithms such as Google's PageRank were (and still are) used to compute the importance of a website, based on analysis of underlying link topology. In the Social Web, analysis of link topology merely tells us about the importance of the information framework which hosts the content. Consumers of information still need to know about the importance/reliability of the content they are reading, and therefore about the reliability of the producers of that content. Research into trust and reputation of the producers of information in the Social Web is still very much in its infancy. Every day, people are forced to make trusting decisions about strangers on the Web based on a very limited amount of information. For example, purchasing a product from an eBay seller with a `reputation' of 99%, downloading a file from a peer-to-peer application such as BitTorrent, or allowing Amazon.com to tell you what products you will like. Even something as simple as reading comments on a Web-blog requires the consumer to make a trusting decision about the quality of that information. In all of these example cases, and indeed throughout the Social Web, there is a pressing demand for increased information upon which we can make trusting decisions. This chapter examines the diversity of sources from which trust information can be harnessed within Social Web applications and discusses a high level classification of those sources. Three different techniques for harnessing and using trust from a range of sources are presented. These techniques are deployed in two sample Social Web
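The link-topology analysis the chapter contrasts with content-level trust, Google's PageRank, reduces to a short power iteration; a sketch on a hypothetical toy link graph:

```python
# Sketch of PageRank power iteration over a toy link graph: the kind of
# link-topology importance measure the chapter contrasts with trust in
# the producers of content. The graph below is hypothetical.
def pagerank(links, damping=0.85, iters=100):
    """links: {page: [pages it links to]}. Returns {page: rank}, summing to 1."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iters):
        new = {p: (1.0 - damping) / n for p in pages}
        for p, outs in links.items():
            if not outs:  # dangling page: spread its rank everywhere
                for q in pages:
                    new[q] += damping * rank[p] / n
            else:
                for q in outs:
                    new[q] += damping * rank[p] / len(outs)
        rank = new
    return rank

links = {"blog": ["wiki"], "wiki": ["blog", "shop"], "shop": ["wiki"]}
ranks = pagerank(links)
print({p: round(r, 3) for p, r in ranks.items()})  # "wiki" ranks highest
```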

  4. WYSIWYG GEOPROCESSING: COUPLING SENSOR WEB AND GEOPROCESSING SERVICES IN VIRTUAL GLOBES

    Directory of Open Access Journals (Sweden)

    X. Zhai

    2012-08-01

Full Text Available We propose to advance the scientific understanding and applications of geospatial data by coupling Sensor Web and Geoprocessing Services in Virtual Globes for higher-education teaching and research. The vision is the concept of "What You See Is What You Get" geoprocessing, known for short as WYSIWYG geoprocessing. Virtual Globes offer tremendous opportunities, such as providing a learning tool that helps educational users and researchers digest global-scale geospatial information about the world, and acting as WYSIWYG platforms where domain experts can see the effects of their actions in an interactive three-dimensional virtual environment. In the meantime, Sensor Web and Web Service technologies make a large number of Earth-observing sensors and geoprocessing functionalities as easily accessible to educational users and researchers as their local resources. Coupling Sensor Web and Geoprocessing Services in Virtual Globes will bring a virtual learning and research environment to the desktops of students and professors, empowering them with WYSIWYG geoprocessing capabilities. The implementation combines the visualization and communication power of Virtual Globes with the on-demand data collection and analysis functionalities of Sensor Web and geoprocessing services, to help students and researchers investigate various scientific problems in an environment with natural and intuitive user experiences. The work will contribute to the scientific and educational activities of geoinformatics communities in that they will have an easily accessible platform that helps them perceive world space and perform live geoscientific processes.

  5. Does social desirability compromise self-reports of physical activity in web-based research?

    Directory of Open Access Journals (Sweden)

    Göritz Anja S

    2011-04-01

Full Text Available Abstract Background This study investigated the relation between social desirability and self-reported physical activity in web-based research. Findings A longitudinal study (N = 5,495, 54% women) was conducted on a representative sample of the Dutch population, using the Marlowe-Crowne Scale as the social desirability measure and the short form of the International Physical Activity Questionnaire. Social desirability was not associated with self-reported physical activity (in MET-minutes/week), nor with its sub-behaviors (i.e., walking, moderate-intensity activity, vigorous-intensity activity, and sedentary behavior). Socio-demographics (i.e., age, sex, income, and education) did not moderate the effect of social desirability on self-reported physical activity and its sub-behaviors. Conclusions This study does not throw doubt on the usefulness of the Internet as a medium to collect self-reports on physical activity.
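The MET-minutes/week outcome referred to follows the IPAQ short-form scoring protocol, which weights each activity by a standard MET value (3.3 for walking, 4.0 for moderate, 8.0 for vigorous) multiplied by days/week and minutes/day. A sketch; the respondent numbers below are hypothetical:

```python
# Sketch of IPAQ short-form scoring: total MET-minutes/week is the sum of
# each activity's standard MET value x days/week x minutes/day.
# MET values follow the IPAQ scoring protocol (3.3 / 4.0 / 8.0).
MET = {"walking": 3.3, "moderate": 4.0, "vigorous": 8.0}

def met_minutes_per_week(activity):
    """activity: {name: (days_per_week, minutes_per_day)}."""
    return sum(MET[name] * days * minutes
               for name, (days, minutes) in activity.items())

respondent = {"walking": (5, 30), "moderate": (2, 45), "vigorous": (1, 20)}
print(met_minutes_per_week(respondent))  # 3.3*150 + 4*90 + 8*20 = 1015.0
```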

  6. SeMPI: a genome-based secondary metabolite prediction and identification web server.

    Science.gov (United States)

    Zierep, Paul F; Padilla, Natàlia; Yonchev, Dimitar G; Telukunta, Kiran K; Klementz, Dennis; Günther, Stefan

    2017-07-03

The secondary metabolism of bacteria, fungi and plants yields a vast number of bioactive substances. The constantly increasing amount of published genomic data provides the opportunity for an efficient identification of gene clusters by genome mining. Conversely, for many natural products with resolved structures, the encoding gene clusters have not been identified yet. Even though genome mining tools have become significantly more efficient in the identification of biosynthetic gene clusters, structural elucidation of the actual secondary metabolite is still challenging, especially due to as yet unpredictable post-modifications. Here, we introduce SeMPI, a web server providing a prediction and identification pipeline for natural products synthesized by modular type I polyketide synthases. In order to limit the possible structures of PKS products and to include putative tailoring reactions, a structural comparison with annotated natural products was introduced. Furthermore, a benchmark was designed based on 40 gene clusters with annotated PKS products. The web server of the pipeline (SeMPI) is freely available at: http://www.pharmaceutical-bioinformatics.de/sempi. © The Author(s) 2017. Published by Oxford University Press on behalf of Nucleic Acids Research.

  7. Contemporary Trends in Research and Development of Lead-Acid Batteries

    Czech Academy of Sciences Publication Activity Database

    Micka, Karel

    2004-01-01

    Roč. 8, - (2004), s. 932-933 ISSN 1432-8488 R&D Projects: GA ČR GA102/02/0794 Institutional research plan: CEZ:AV0Z4040901 Keywords : lead-acid batteries * electrical system * trends Subject RIV: CG - Electrochemistry Impact factor: 0.984, year: 2004

  8. ACFIS: a web server for fragment-based drug discovery.

    Science.gov (United States)

    Hao, Ge-Fei; Jiang, Wen; Ye, Yuan-Nong; Wu, Feng-Xu; Zhu, Xiao-Lei; Guo, Feng-Biao; Yang, Guang-Fu

    2016-07-08

    In order to foster innovation and improve the effectiveness of drug discovery, there is a considerable interest in exploring unknown 'chemical space' to identify new bioactive compounds with novel and diverse scaffolds. Hence, fragment-based drug discovery (FBDD) was developed rapidly due to its advanced expansive search for 'chemical space', which can lead to a higher hit rate and ligand efficiency (LE). However, computational screening of fragments is always hampered by the promiscuous binding model. In this study, we developed a new web server Auto Core Fragment in silico Screening (ACFIS). It includes three computational modules, PARA_GEN, CORE_GEN and CAND_GEN. ACFIS can generate core fragment structure from the active molecule using fragment deconstruction analysis and perform in silico screening by growing fragments to the junction of core fragment structure. An integrated energy calculation rapidly identifies which fragments fit the binding site of a protein. We constructed a simple interface to enable users to view top-ranking molecules in 2D and the binding mode in 3D for further experimental exploration. This makes the ACFIS a highly valuable tool for drug discovery. The ACFIS web server is free and open to all users at http://chemyang.ccnu.edu.cn/ccb/server/ACFIS/. © The Author(s) 2016. Published by Oxford University Press on behalf of Nucleic Acids Research.

  9. GREAT: a web portal for Genome Regulatory Architecture Tools.

    Science.gov (United States)

    Bouyioukos, Costas; Bucchini, François; Elati, Mohamed; Képès, François

    2016-07-08

    GREAT (Genome REgulatory Architecture Tools) is a novel web portal for tools designed to generate user-friendly and biologically useful analysis of genome architecture and regulation. The online tools of GREAT are freely accessible and compatible with essentially any operating system which runs a modern browser. GREAT is based on the analysis of genome layout (defined as the respective positioning of co-functional genes) and its relation with chromosome architecture and gene expression. GREAT tools allow users to systematically detect regular patterns along co-functional genomic features in an automatic way consisting of three individual steps and respective interactive visualizations. In addition to the complete analysis of regularities, GREAT tools enable the use of periodicity and position information for improving the prediction of transcription factor binding sites using a multi-view machine learning approach. The outcome of this integrative approach features a multivariate analysis of the interplay between the location of a gene and its regulatory sequence. GREAT results are plotted in web interactive graphs and are available for download either as individual plots, self-contained interactive pages or as machine readable tables for downstream analysis. The GREAT portal can be reached at the following URL https://absynth.issb.genopole.fr/GREAT and each individual GREAT tool is available for downloading. © The Author(s) 2016. Published by Oxford University Press on behalf of Nucleic Acids Research.
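
    The regular-pattern detection described above can be illustrated with a toy sketch (this is not GREAT's actual algorithm): given the positions of co-functional genes along a chromosome, find the dominant spacing between neighbours. The positions below are invented.

```python
from collections import Counter

def dominant_period(positions):
    """Most common spacing between consecutive genomic positions.

    A toy stand-in for the kind of regularity detection GREAT performs
    on co-functional gene layouts; real analyses need far more care
    (noise tolerance, circular chromosomes, statistical significance).
    """
    ordered = sorted(positions)
    gaps = [b - a for a, b in zip(ordered, ordered[1:])]
    if not gaps:
        return None
    return Counter(gaps).most_common(1)[0][0]

# Hypothetical gene start positions (kb), laid out roughly every 10 kb:
print(dominant_period([0, 10, 20, 31, 40, 50]))  # -> 10
```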

  10. The EMBL-EBI bioinformatics web and programmatic tools framework.

    Science.gov (United States)

    Li, Weizhong; Cowley, Andrew; Uludag, Mahmut; Gur, Tamer; McWilliam, Hamish; Squizzato, Silvano; Park, Young Mi; Buso, Nicola; Lopez, Rodrigo

    2015-07-01

    Since 2009 the EMBL-EBI Job Dispatcher framework has provided free access to a range of mainstream sequence analysis applications. These include sequence similarity search services (https://www.ebi.ac.uk/Tools/sss/) such as BLAST, FASTA and PSI-Search, multiple sequence alignment tools (https://www.ebi.ac.uk/Tools/msa/) such as Clustal Omega, MAFFT and T-Coffee, and other sequence analysis tools (https://www.ebi.ac.uk/Tools/pfa/) such as InterProScan. Through these services users can search mainstream sequence databases such as ENA, UniProt and Ensembl Genomes, utilising a uniform web interface or systematically through Web Services interfaces (https://www.ebi.ac.uk/Tools/webservices/) using common programming languages, and obtain enriched results with novel visualisations. Integration with EBI Search (https://www.ebi.ac.uk/ebisearch/) and the dbfetch retrieval service (https://www.ebi.ac.uk/Tools/dbfetch/) further expands the usefulness of the framework. New tools and updates such as NCBI BLAST+, InterProScan 5 and PfamScan, new categories such as RNA analysis tools (https://www.ebi.ac.uk/Tools/rna/), new databases such as ENA non-coding, WormBase ParaSite, Pfam and Rfam, and new workflow methods, together with the retirement of deprecated services, ensure that the framework remains relevant to today's biological community. © The Author(s) 2015. Published by Oxford University Press on behalf of Nucleic Acids Research.

  11. Use of Web 2.0 Social Media Platforms to Promote Community-Engaged Research Dialogs: A Preliminary Program Evaluation

    Science.gov (United States)

    Valdez Soto, Miguel; Bishop, Shawn G; Aase, Lee A; Timimi, Farris K; Montori, Victor M; Patten, Christi A

    2016-01-01

    Background: Community-engaged research is defined by the Institute of Medicine as the process of working collaboratively with groups of people affiliated by geographic proximity, special interests, or similar situations with respect to issues affecting their well-being. Traditional face-to-face community-engaged research is limited by geographic location, limited in resources, and/or uses one-way communications. Web 2.0 technologies including social media are novel communication channels for community-engaged research because these tools can reach a broader audience while promoting bidirectional dialogs. Objective: This paper reports on a preliminary program evaluation of the use of social media platforms for promoting engagement of researchers and community representatives in dialogs about community-engaged research. Methods: For this pilot program evaluation, the Clinical and Translational Science Office for Community Engagement in Research partnered with the Social Media Network at our institution to create a WordPress blog and Twitter account. Both social media platforms were facilitated by a social media manager. We used descriptive analytics for measuring engagement with WordPress and Twitter over an 18-month implementation period during 2014-2016. For the blog, we examined type of user (researcher, community representative, other) and used content analysis to generate the major themes from blog postings. For use of Twitter, we examined selected demographics and impressions among followers. Results: There were 76 blog postings observed from researchers (48/76, 64%), community representatives (23/76, 32%) and funders (5/76, 8%). The predominant themes of the blog content were research awareness and dissemination of community-engaged research (35/76, 46%) and best practices (23/76, 30%). For Twitter, we obtained 411 followers at the end of the 18-month evaluation period, with an increase of 42% (from 280 to 411) over the final 6 months. Followers reported varied

  12. Use of Web 2.0 Social Media Platforms to Promote Community-Engaged Research Dialogs: A Preliminary Program Evaluation.

    Science.gov (United States)

    Valdez Soto, Miguel; Balls-Berry, Joyce E; Bishop, Shawn G; Aase, Lee A; Timimi, Farris K; Montori, Victor M; Patten, Christi A

    2016-09-09

    Community-engaged research is defined by the Institute of Medicine as the process of working collaboratively with groups of people affiliated by geographic proximity, special interests, or similar situations with respect to issues affecting their well-being. Traditional face-to-face community-engaged research is limited by geographic location, limited in resources, and/or uses one-way communications. Web 2.0 technologies including social media are novel communication channels for community-engaged research because these tools can reach a broader audience while promoting bidirectional dialogs. This paper reports on a preliminary program evaluation of the use of social media platforms for promoting engagement of researchers and community representatives in dialogs about community-engaged research. For this pilot program evaluation, the Clinical and Translational Science Office for Community Engagement in Research partnered with the Social Media Network at our institution to create a WordPress blog and Twitter account. Both social media platforms were facilitated by a social media manager. We used descriptive analytics for measuring engagement with WordPress and Twitter over an 18-month implementation period during 2014-2016. For the blog, we examined type of user (researcher, community representative, other) and used content analysis to generate the major themes from blog postings. For use of Twitter, we examined selected demographics and impressions among followers. There were 76 blog postings observed from researchers (48/76, 64%), community representatives (23/76, 32%) and funders (5/76, 8%). The predominant themes of the blog content were research awareness and dissemination of community-engaged research (35/76, 46%) and best practices (23/76, 30%). For Twitter, we obtained 411 followers at the end of the 18-month evaluation period, with an increase of 42% (from 280 to 411) over the final 6 months. Followers reported varied geographic location (321/411, 78

  13. Shear Behavior of Corrugated Steel Webs in H Shape Bridge Girders

    Directory of Open Access Journals (Sweden)

    Qi Cao

    2015-01-01

    In bridge engineering, girders with corrugated steel webs have shown good mechanical properties. With the promotion of composite bridges with corrugated steel webs, in particular steel-concrete composite girder bridges with corrugated steel webs, it is necessary to study the shear performance and buckling of the corrugated webs. In this research, by conducting experiments combined with finite element analysis, the stability of H-shape beams welded with corrugated webs was tested and three failure modes were observed. Structural data including load-deflection, load-strain, and shear capacity of tested beam specimens were collected and compared with FEM analytical results from ANSYS software. The effects of web thickness, corrugation, and stiffening on the shear capacity of corrugated webs were further discussed.

  14. A Web Observatory for the Machine Processability of Structured Data on the Web

    NARCIS (Netherlands)

    Beek, W.; Groth, P.; Schlobach, S.; Hoekstra, R.

    2014-01-01

    General human intelligence is needed in order to process Linked Open Data (LOD). On the Semantic Web (SW), content is intended to be machine-processable as well. But the extent to which a machine is able to navigate, access, and process the SW has not been extensively researched. We present LOD

  15. Can keyword length indicate Web Users' readiness to purchase

    OpenAIRE

    Ramlall, Shalini; Sanders, David; Tewkesbury, Giles; Ndzi, David

    2011-01-01

    Over the last ten years, the internet has become an important marketing tool and a profitable selling channel. The biggest challenge for most online businesses is converting Web users into customers effectively and at a high rate. Understanding the audience of a website is essential for achieving high conversion rates. This paper describes the research carried out in online search behaviour. The research looks at whether the length of a Web user’s search keyword can provide insight into their i...

  16. Dynamic Web Pages: Performance Impact on Web Servers.

    Science.gov (United States)

    Kothari, Bhupesh; Claypool, Mark

    2001-01-01

    Discussion of Web servers and requests for dynamic pages focuses on experimentally measuring and analyzing the performance of the three dynamic Web page generation technologies: CGI, FastCGI, and Servlets. Develops a multivariate linear regression model and predicts Web server performance under some typical dynamic requests. (Author/LRW)
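
    The multivariate linear regression modeling mentioned above can be sketched in a few lines of ordinary least squares via the normal equations; the predictors (intercept, page size, database queries) and the latency values below are invented for illustration, not the study's data.

```python
def fit_ols(X, y):
    """Ordinary least squares: solve (X^T X) b = X^T y by Gaussian
    elimination. X is a list of feature rows, y the observed responses."""
    n, p = len(X), len(X[0])
    # Normal equations: A = X^T X, b = X^T y.
    A = [[sum(X[k][i] * X[k][j] for k in range(n)) for j in range(p)]
         for i in range(p)]
    b = [sum(X[k][i] * y[k] for k in range(n)) for i in range(p)]
    # Forward elimination with partial pivoting.
    for col in range(p):
        piv = max(range(col, p), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, p):
            f = A[r][col] / A[col][col]
            for c in range(col, p):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    # Back substitution.
    coef = [0.0] * p
    for i in range(p - 1, -1, -1):
        coef[i] = (b[i] - sum(A[i][j] * coef[j]
                              for j in range(i + 1, p))) / A[i][i]
    return coef

# Made-up samples: [intercept, page size (KB), DB queries] -> latency (ms)
X = [[1, 2, 0], [1, 4, 1], [1, 6, 1], [1, 8, 3]]
y = [9.0, 17.0, 21.0, 33.0]
print(fit_ols(X, y))  # approximately [5.0, 2.0, 4.0]
```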

  17. ARL Physics Web Pages: An Evaluation by Established, Transitional and Emerging Benchmarks.

    Science.gov (United States)

    Duffy, Jane C.

    2002-01-01

    Provides an overview of characteristics among Association of Research Libraries (ARL) physics Web pages. Examines current academic Web literature and from that develops six benchmarks to measure physics Web pages: ease of navigation; logic of presentation; representation of all forms of information; engagement of the discipline; interactivity of…

  18. Overview of the TREC 2013 federated web search track

    OpenAIRE

    Demeester, Thomas; Trieschnigg, D; Nguyen, D; Hiemstra, D

    2013-01-01

    The TREC Federated Web Search track is intended to promote research related to federated search in a realistic web setting, and to this end provides a large data collection gathered from a series of online search engines. This overview paper discusses the results of the first edition of the track, FedWeb 2013. The focus was on basic challenges in federated search: (1) resource selection, and (2) results merging. After an overview of the provided data collection and the relevance judgments for the ...
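
    A common baseline for the results-merging task named above is round-robin interleaving: take one result from each engine's ranked list in turn, skipping duplicates. This is only a generic illustration, not the track's evaluated systems; the document IDs are invented.

```python
def round_robin_merge(ranked_lists):
    """Merge several engines' ranked result lists round-robin,
    preserving each list's order and dropping duplicate documents."""
    merged, seen = [], set()
    for rank in range(max(map(len, ranked_lists), default=0)):
        for lst in ranked_lists:
            if rank < len(lst) and lst[rank] not in seen:
                seen.add(lst[rank])
                merged.append(lst[rank])
    return merged

# Two hypothetical engines returning overlapping results:
print(round_robin_merge([["d1", "d2", "d3"], ["d2", "d4"]]))
# -> ['d1', 'd2', 'd4', 'd3']
```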

  19. Web Services as Public Services: Are We Supporting Our Busiest Service Point?

    Science.gov (United States)

    Riley-Huff, Debra A.

    2009-01-01

    This article is an analysis of academic library organizational culture, patterns, and processes as they relate to Web services. Data gathered in a research survey is examined in an attempt to reveal current departmental and administrative attitudes, practices, and support for Web services in the library research environment. (Contains 10 tables.)

  20. An overview of a 5-year research program on acid deposition in China

    Science.gov (United States)

    Wang, T.; He, K.; Xu, X.; Zhang, P.; Bai, Y.; Wang, Z.; Zhang, X.; Duan, L.; Li, W.; Chai, F.

    2011-12-01

    Despite concerted research and regulatory control of sulfur dioxide in China, acid rain has remained a serious environmental issue, due to a sharp increase in the combustion of fossil fuels in the 2000s. In 2005, the Ministry of Science and Technology of China funded a five-year comprehensive research program on acid deposition. This talk will give an overview of the activities and the key findings from this study, covering emissions, atmospheric processes and deposition, effects on soil and stream waters, and impacts on typical trees/plants in China. The main results include (1) China still experiences acidic rainfalls in southern and eastern regions, although the situation has stabilized after 2006 due to stringent control of SO2 by the Chinese Government; (2) sulfate is the dominant acidic compound, but the contribution of nitrate has increased; (3) cloud-water composition in eastern China is strongly influenced by anthropogenic emissions; (4) the persistent fall of acid rain over the past 30 years has led to acidification of some streams/rivers and soils in southern China; (5) the studied plants have shown varying responses to acid rain; (6) some new insights have been obtained on atmospheric chemistry, atmospheric transport, soil chemistry, and ecological impacts, some of which will be discussed in this talk. Compared to the situation in North America and Europe, China's acid deposition is still serious, and continued control of sulfur and nitrogen emissions is required. There is an urgent need to establish a long-term observation network/program to monitor the impact of acid deposition on soil, streams/rivers/lakes, and forests.

  1. WebVR: an interactive web browser for virtual environments

    Science.gov (United States)

    Barsoum, Emad; Kuester, Falko

    2005-03-01

    The pervasive nature of web-based content has led to the development of applications and user interfaces that port between a broad range of operating systems and databases, while providing intuitive access to static and time-varying information. However, the integration of this vast resource into virtual environments has remained elusive. In this paper we present an implementation of a 3D Web Browser (WebVR) that enables the user to search the internet for arbitrary information and to seamlessly augment this information into virtual environments. WebVR provides access to the standard data input and query mechanisms offered by conventional web browsers, with the difference that it generates active texture-skins of the web contents that can be mapped onto arbitrary surfaces within the environment. Once mapped, the corresponding texture functions as a fully integrated web-browser that will respond to traditional events such as the selection of links or text input. As a result, any surface within the environment can be turned into a web-enabled resource that provides access to user-definable data. In order to leverage the continuous advancement of browser technology and to support both static as well as streamed content, WebVR uses ActiveX controls to extract the desired texture skin from industry strength browsers, providing a unique mechanism for data fusion and extensibility.

  2. Network of Research Infrastructures for European Seismology (NERIES)-Web Portal Developments for Interactive Access to Earthquake Data on a European Scale

    Science.gov (United States)

    Spinuso, A.; Trani, L.; Rives, S.; Thomy, P.; Euchner, F.; Schorlemmer, D.; Saul, J.; Heinloo, A.; Bossu, R.; van Eck, T.

    2009-04-01

    The Network of Research Infrastructures for European Seismology (NERIES) is a European Commission (EC) project whose focus is networking together seismological observatories and research institutes into one integrated European infrastructure that provides access to data and data products for research. Seismological institutes and organizations in European and Mediterranean countries maintain large, geographically distributed data archives; this scenario therefore suggested a design approach based on the concept of an internet service-oriented architecture (SOA) to establish a cyberinfrastructure for distributed and heterogeneous data streams and services. Moreover, one of the goals of NERIES is to design and develop a Web portal that acts as the uppermost layer of the infrastructure and provides rendering capabilities for the underlying sets of data. The Web services that are currently being designed and implemented will deliver data that has been adapted to appropriate formats. The parametric information about a seismic event is delivered using a seismology-specific Extensible Markup Language (XML) format called QuakeML (https://quake.ethz.ch/quakeml), which has been formalized and implemented in coordination with global earthquake-information agencies. Uniform Resource Identifiers (URIs) are used to assign identifiers to (1) seismic-event parameters described by QuakeML, and (2) generic resources, for example, authorities, location providers, location methods, software adopted, and so on, described by use of a data model constructed with the resource description framework (RDF) and accessible as a service. The European-Mediterranean Seismological Center (EMSC) has implemented a unique event identifier (UNID) that will create the seismic event URI used by the QuakeML data model. Access to data such as broadband waveforms, accelerometric data and station inventories will also be provided through a set of Web services that will wrap the middleware used by the
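
    To give a flavour of consuming event parameters in an XML format of this kind, here is a minimal sketch using Python's standard-library ElementTree. The snippet below is a made-up, simplified fragment in the spirit of QuakeML, not the real namespaced schema, and the publicID is invented.

```python
import xml.etree.ElementTree as ET

# Simplified, hypothetical event-parameter XML (not real QuakeML):
QUAKEML = """<eventParameters>
  <event publicID="smi:example.org/event/123">
    <magnitude><mag><value>5.4</value></mag></magnitude>
    <origin>
      <latitude><value>38.1</value></latitude>
      <longitude><value>22.9</value></longitude>
    </origin>
  </event>
</eventParameters>"""

def parse_events(xml_text):
    """Extract (publicID, magnitude, latitude, longitude) per event."""
    root = ET.fromstring(xml_text)
    return [(
        event.get("publicID"),
        float(event.find("magnitude/mag/value").text),
        float(event.find("origin/latitude/value").text),
        float(event.find("origin/longitude/value").text),
    ) for event in root.findall("event")]

print(parse_events(QUAKEML))
```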

  3. Understanding the Web from an Economic Perspective: The Evolution of Business Models and the Web

    Directory of Open Access Journals (Sweden)

    Louis Rinfret

    2014-08-01

    The advent of the World Wide Web is arguably amongst the most important changes that have occurred since the 1990s in the business landscape. It has fueled the rise of new industries, supported the convergence and reshaping of existing ones and enabled the development of new business models. During this time the web has evolved tremendously from a relatively static page-display tool to a massive network of user-generated content, collective intelligence, applications and hypermedia. As technical standards continue to evolve, business models catch up to the new capabilities. New ways of creating value, distributing it and profiting from it emerge more rapidly than ever. In this paper we explore how the World Wide Web and business models evolve and we identify avenues for future research in light of the web's ever-evolving nature and its influence on business models.

  4. Adaptive web-based educational hypermedia

    NARCIS (Netherlands)

    De Bra, P.M.E.; Aroyo, L.M.; Cristea, A.I.; Levene, M.; Poulavassis, A.

    2004-01-01

    This chapter describes recent and ongoing research to automatically personalize a learning experience through adaptive educational hypermedia. The Web has made it possible to give a very large audience access to the same learning material. Rather than offering several versions of learning material

  5. Adaptive Web-based Educational Hypermedia

    NARCIS (Netherlands)

    De Bra, Paul; Aroyo, Lora; Cristea, Alexandra; Levene, Mark; Poulovassilis, Alexandra

    2004-01-01

    This chapter describes recent and ongoing research to automatically personalize a learning experience through adaptive educational hypermedia. The Web has made it possible to give a very large audience access to the same learning material. Rather than offering several versions of learning material

  6. Effect of Folic Acid Supplementation in Pregnancy on Preeclampsia: The Folic Acid Clinical Trial Study

    Directory of Open Access Journals (Sweden)

    Shi Wu Wen

    2013-01-01

    Preeclampsia (PE) is hypertension with proteinuria that develops during pregnancy and affects at least 5% of pregnancies. The Effect of Folic Acid Supplementation in Pregnancy on Preeclampsia: the Folic Acid Clinical Trial (FACT) aims to recruit 3,656 high-risk women to evaluate a new prevention strategy for PE: supplementation of folic acid throughout pregnancy. Pregnant women with increased risk of developing PE presenting to a trial participating center between 8 0/7 and 16 6/7 weeks of gestation are randomized in a 1:1 ratio to folic acid 4.0 mg or placebo after written consent is obtained. The intent-to-treat population will be analyzed. The FACT study was funded by the Canadian Institutes of Health Research in 2009, and regulatory approval from Health Canada was obtained in 2010. A web-based randomization system and electronic data collection system provide the platform for participating centers to randomize their eligible participants and enter data in real time. To date we have twenty participating Canadian centers, of which eighteen are actively recruiting, and seven participating Australian centers, of which two are actively recruiting. Recruitment in Argentina, the UK, the Netherlands, Brazil, the West Indies, and the United States is expected to begin by the second or third quarter of 2013. This trial is registered with NCT01355159.
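
    The 1:1 allocation performed by a web-based randomization system is often implemented as permuted-block randomization, which keeps the two arms balanced as recruitment proceeds. The sketch below is a generic illustration under assumed parameters (block size 4, fixed seed), not the FACT system's actual scheme.

```python
import random

def block_randomize(n_participants, block_size=4, seed=2013):
    """Permuted-block randomization for a two-arm 1:1 trial.

    Each block holds an equal number of each arm, shuffled; blocks are
    concatenated until every participant has an assignment. Block size
    and seed here are invented for the example.
    """
    rng = random.Random(seed)
    assignments = []
    while len(assignments) < n_participants:
        block = (["folic acid"] * (block_size // 2)
                 + ["placebo"] * (block_size // 2))
        rng.shuffle(block)
        assignments.extend(block)
    return assignments[:n_participants]

arms = block_randomize(10)
print(arms.count("folic acid"), arms.count("placebo"))  # near-balanced
```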

  7. Integrating thematic web portal capabilities into the NASA Earthdata Web Infrastructure

    Science.gov (United States)

    Wong, M. M.; McLaughlin, B. D.; Huang, T.; Baynes, K.

    2015-12-01

    The National Aeronautics and Space Administration (NASA) acquires and distributes an abundance of Earth science data on a daily basis to a diverse user community worldwide. To assist the scientific community and general public in achieving a greater understanding of the interdisciplinary nature of Earth science and of key environmental and climate change topics, the NASA Earthdata web infrastructure is integrating new methods of presenting and providing access to Earth science information, data, research and results. This poster will present the process of integrating thematic web portal capabilities into the NASA Earthdata web infrastructure, with examples from the Sea Level Change Portal. The Sea Level Change Portal will be a source of current NASA research, data and information regarding sea level change. The portal will provide sea level change information through articles, graphics, videos and animations, an interactive tool to view and access sea level change data and a dashboard showing sea level change indicators. Earthdata is a part of the Earth Observing System Data and Information System (EOSDIS) project. EOSDIS is a key core capability in NASA's Earth Science Data Systems Program. It provides end-to-end capabilities for managing NASA's Earth science data from various sources - satellites, aircraft, field measurements, and various other programs. It comprises twelve Distributed Active Archive Centers (DAACs), Science Computing Facilities (SCFs), data discovery and service access clients (Reverb and Earthdata Search), a dataset directory (Global Change Master Directory - GCMD), near real-time data (Land Atmosphere Near real-time Capability for EOS - LANCE), Worldview (an imagery visualization interface), Global Imagery Browse Services, the Earthdata Code Collaborative and a host of other discipline specific data discovery, data access, data subsetting and visualization tools.

  8. Food marketing on popular children's web sites: a content analysis.

    Science.gov (United States)

    Alvy, Lisa M; Calvert, Sandra L

    2008-04-01

    In 2006 the Institute of Medicine (IOM) concluded that food marketing was a contributor to childhood obesity in the United States. One recommendation of the IOM committee was for research on newer marketing venues, such as Internet Web sites. The purpose of this cross-sectional study was to answer the IOM's call by examining food marketing on popular children's Web sites. Ten Web sites were selected based on market research conducted by KidSay, which identified favorite sites of children aged 8 to 11 years during February 2005. Using a standardized coding form, these sites were examined page by page for the existence, type, and features of food marketing. Web sites were compared using chi-square analyses. Although food marketing was not pervasive on the majority of the sites, seven of the 10 Web sites contained food marketing. The products marketed were primarily candy, cereal, quick-serve restaurants, and snacks. Candystand.com, a food product site, contained a significantly greater amount of food marketing than the other popular children's Web sites. Because the foods marketed to children are not consistent with a healthful diet, nutrition professionals should consider joining advocacy groups to pressure industry to reduce online food marketing directed at youth.
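
    The chi-square comparisons mentioned above test whether counts (e.g. pages with vs. without food marketing) differ between site categories. A minimal Pearson chi-square for a 2x2 table can be computed by hand; the counts below are invented, not the study's data.

```python
def chi_square_2x2(table):
    """Pearson chi-square statistic for a 2x2 contingency table
    given as [[a, b], [c, d]] of observed counts."""
    (a, b), (c, d) = table
    n = a + b + c + d
    obs = [[a, b], [c, d]]
    # Expected count for each cell: (row total * column total) / n.
    expected = [[(a + b) * (a + c) / n, (a + b) * (b + d) / n],
                [(c + d) * (a + c) / n, (c + d) * (b + d) / n]]
    return sum((obs[i][j] - expected[i][j]) ** 2 / expected[i][j]
               for i in range(2) for j in range(2))

# Hypothetical pages with vs. without food marketing on two sites:
print(round(chi_square_2x2([[30, 10], [12, 28]]), 2))  # -> 16.24
```

Comparing the statistic against the chi-square distribution with one degree of freedom then gives the p-value (scipy's `chi2_contingency` does both steps, plus Yates' correction, in practice).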

  9. Information Literacy Instruction in the Web 2.0 Library

    Science.gov (United States)

    Humrickhouse, Elizabeth

    2011-01-01

    This paper examines how library educators can implement Web 2.0 tools in their Information Literacy programs to better prepare students for the rigors of academic research. Additionally, this paper looks at transliteracy and constructivism as the most useful teaching methods in a Web 2.0 classroom and attempts to pinpoint specific educational…

  10. Toward a Unified Framework for Web Service Trustworthiness

    DEFF Research Database (Denmark)

    Miotto, N.; Dragoni, Nicola

    2012-01-01

    The intrinsic openness of the Service-Oriented Computing vision makes it crucial to locate useful services and recognize them as trustworthy. What does it mean that a Web service is trustworthy? How can a software agent evaluate the trustworthiness of a Web service? In this paper we present ongoing research aiming at providing an answer to these key issues to realize this vision. In particular, starting from an analysis of the weaknesses of current approaches, we discuss the possibility of a unified framework for Web service trustworthiness. The founding principle of our novel framework is that “hard...

  11. SCALEUS: Semantic Web Services Integration for Biomedical Applications.

    Science.gov (United States)

    Sernadela, Pedro; González-Castro, Lorena; Oliveira, José Luís

    2017-04-01

    In recent years, we have witnessed an explosion of biological data resulting largely from the demands of life science research. The vast majority of these data are freely available via diverse bioinformatics platforms, including relational databases and conventional keyword search applications. This type of approach has achieved great results in the last few years, but proved to be unfeasible when information needs to be combined or shared among different and scattered sources. During recent years, many of these data distribution challenges have been solved with the adoption of Semantic Web technologies. Despite the evident benefits of this technology, its adoption introduced new challenges related to the migration process from existing systems to the semantic level. To facilitate this transition, we have developed Scaleus, a semantic web migration tool that can be deployed on top of traditional systems in order to bring knowledge, inference rules, and query federation to the existent data. Targeted at the biomedical domain, this web-based platform offers, in a single package, straightforward data integration and semantic web services that help developers and researchers in the creation process of new semantically enhanced information systems. SCALEUS is available as open source at http://bioinformatics-ua.github.io/scaleus/.
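
    The semantic-level data model underlying tools like this is the RDF triple (subject, predicate, object), queried by pattern matching. The toy in-memory store below illustrates the idea only; the triples and predicate names are invented and are not SCALEUS's actual vocabulary or API.

```python
# A toy in-memory triple store; None in a query pattern acts as a
# wildcard, much like a variable in a SPARQL basic graph pattern.
triples = {
    ("gene:BRCA1", "associatedWith", "disease:BreastCancer"),
    ("gene:BRCA1", "locatedOn", "chromosome:17"),
    ("gene:TP53", "associatedWith", "disease:LiFraumeni"),
}

def query(s=None, p=None, o=None):
    """Return all triples matching the (s, p, o) pattern, sorted."""
    return sorted(t for t in triples
                  if (s is None or t[0] == s)
                  and (p is None or t[1] == p)
                  and (o is None or t[2] == o))

print(query(s="gene:BRCA1"))  # everything known about BRCA1
```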

  12. Finding, Browsing and Getting Data Easily Using SPDF Web Services

    Science.gov (United States)

    Candey, R.; Chimiak, R.; Harris, B.; Johnson, R.; Kovalick, T.; Lal, N.; Leckner, H.; Liu, M.; McGuire, R.; Papitashvili, N.

    2010-01-01

    The NASA GSFC Space Physics Data Facility (SPDF) provides heliophysics science-enabling information services for enhancing scientific research and enabling integration of these services into the Heliophysics Data Environment paradigm, via standards-based Simple Object Access Protocol (SOAP) and Representational State Transfer (REST) web services in addition to web browser, FTP, and OPeNDAP interfaces. We describe these interfaces and the philosophies behind these web services, and show how to call them from various languages, such as IDL and Perl. We are working towards a "one simple line to call" philosophy extolled in the recent VxO discussions. Combining data from many instruments and missions enables broad research analysis and correlation and coordination with other experiments and missions.

  13. Utilizing Web 2.0 Technologies for Library Web Tutorials: An Examination of Instruction on Community College Libraries' Websites Serving Large Student Bodies

    Science.gov (United States)

    Blummer, Barbara; Kenton, Jeffrey M.

    2015-01-01

    This is the second part of a series on Web 2.0 tools available from community college libraries' Websites. The first article appeared in an earlier volume of this journal and it illustrated the wide variety of Web 2.0 tools on community college libraries' Websites serving large student bodies (Blummer and Kenton 2014). The research found many of…

  14. 75 FR 27986 - Electronic Filing System-Web (EFS-Web) Contingency Option

    Science.gov (United States)

    2010-05-19

    ...] Electronic Filing System--Web (EFS-Web) Contingency Option AGENCY: United States Patent and Trademark Office... contingency option when the primary portal to EFS-Web has an unscheduled outage. Previously, the entire EFS-Web system was not available to users during such an outage. The contingency option in EFS-Web will...

  15. ProTox: a web server for the in silico prediction of rodent oral toxicity.

    Science.gov (United States)

    Drwal, Malgorzata N; Banerjee, Priyanka; Dunkel, Mathias; Wettig, Martin R; Preissner, Robert

    2014-07-01

    Animal trials are currently the major method for determining the possible toxic effects of drug candidates and cosmetics. In silico prediction methods represent an alternative approach and aim to rationalize the preclinical drug development, thus enabling the reduction of the associated time, costs and animal experiments. Here, we present ProTox, a web server for the prediction of rodent oral toxicity. The prediction method is based on the analysis of the similarity of compounds with known median lethal doses (LD50) and incorporates the identification of toxic fragments, therefore representing a novel approach in toxicity prediction. In addition, the web server includes an indication of possible toxicity targets which is based on an in-house collection of protein-ligand-based pharmacophore models ('toxicophores') for targets associated with adverse drug reactions. The ProTox web server is open to all users and can be accessed without registration at: http://tox.charite.de/tox. The only requirement for the prediction is the two-dimensional structure of the input compounds. All ProTox methods have been evaluated based on a diverse external validation set and displayed strong performance (sensitivity, specificity and precision of 76, 95 and 75%, respectively) and superiority over other toxicity prediction tools, indicating their possible applicability for other compound classes. © The Author(s) 2014. Published by Oxford University Press on behalf of Nucleic Acids Research.
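
    The similarity-based prediction described above can be caricatured as nearest-neighbour lookup over molecular fingerprints using the Tanimoto coefficient. This bare-bones sketch is not ProTox's actual model; the fingerprint bit sets and LD50 values below are invented.

```python
def tanimoto(a, b):
    """Tanimoto similarity between two fingerprint bit sets:
    |intersection| / |union| (0.0 when both sets are empty)."""
    return len(a & b) / len(a | b) if a | b else 0.0

def predict_ld50(query_fp, training):
    """Predict LD50 as that of the most similar training compound.

    training is a list of (fingerprint, LD50 mg/kg) pairs; a real
    predictor would also weight neighbours and flag toxic fragments.
    """
    best = max(training, key=lambda item: tanimoto(query_fp, item[0]))
    return best[1]

training = [
    (frozenset({1, 2, 3, 4}), 250),   # (fingerprint bits, LD50 mg/kg)
    (frozenset({2, 3, 5, 8}), 1200),
    (frozenset({6, 7, 8, 9}), 5000),
]
print(predict_ld50(frozenset({1, 2, 3, 7}), training))  # -> 250
```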

  16. An Evidence-Based Review of Academic Web Search Engines, 2014-2016: Implications for Librarians’ Practice and Research Agenda

    Directory of Open Access Journals (Sweden)

    Jody Condit Fagan

    2017-06-01

    Full Text Available Academic web search engines have become central to scholarly research. While the fitness of Google Scholar for research purposes has been examined repeatedly, Microsoft Academic and Google Books have not received much attention. Recent studies have much to tell us about the coverage and utility of Google Scholar, its coverage of the sciences, and its utility for evaluating researcher impact. But other aspects have been woefully understudied, such as coverage of the arts and humanities, books, and non-Western, non-English publications. User research has also tapered off. A small number of articles hint at the opportunity for librarians to become expert advisors concerning opportunities of scholarly communication made possible or enhanced by these platforms. This article seeks to summarize research concerning Google Scholar, Google Books, and Microsoft Academic from the past three years with a mind to informing practice and setting a research agenda. Selected literature from earlier time periods is included to illuminate key findings and to help shape the proposed research agenda, especially in understudied areas.

  17. The Land of Confusion? High School Students and Their Use of the World Wide Web for Research.

    Science.gov (United States)

    Lorenzen, Michael

    2002-01-01

    Examines high school students' use of the World Wide Web to complete assignments. Findings showed the students used a good variety of resources, including libraries and the World Wide Web, to find information for assignments. However, students were weak at determining the quality of the information found on web sites. Students did poorly at…

  18. Usability Evaluation of Public Web Mapping Sites

    Science.gov (United States)

    Wang, C.

    2014-04-01

    Web mapping sites are interactive maps accessed via web pages. With the rapid development of the Internet and the Geographic Information System (GIS) field, public web mapping sites are familiar to most people. People now use these sites for many purposes, as more and more maps and related map services are freely available to end users, and this growing user base has in turn led to more usability studies. Usability Engineering (UE), for instance, is an approach for analyzing and improving the usability of websites by examining and evaluating an interface. In this research, the UE method was employed to explore usability problems of four public web mapping sites, analyze the problems quantitatively and provide guidelines for future design based on the test results. First, the development of usability studies is described, and several usability evaluation approaches such as Usability Engineering (UE), User-Centered Design (UCD) and Human-Computer Interaction (HCI) are briefly introduced. The method and procedure of the usability test are then presented in detail. Four public web mapping sites (Google Maps, Bing Maps, MapQuest, Yahoo Maps) were chosen as the test websites, and 42 people with different GIS skills (test users or experts), gender, age and nationality participated, completing the test tasks in different teams. The test comprised three parts: a pretest background questionnaire, several test tasks for quantitative statistics and progress analysis, and a posttest questionnaire. The pretest and posttest questionnaires focused on gaining qualitative, verbal explanations of the participants' actions, while the test tasks were designed to gather quantitative data on the errors and problems of the websites. Then, the results mainly from the test part were analyzed. The
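    The quantitative side of such a usability test typically reduces to a few per-site summary statistics. The sketch below computes task completion rate, mean task time and total error count per site; the sites' names match the study, but the result rows are invented for illustration.

    ```python
    # Sketch of the kind of quantitative summary produced by a usability test:
    # per-site task completion rate, mean task time and error count.
    # The data rows below are invented, not the study's actual results.

    results = [
        # (site, task_completed, seconds, errors)
        ("Google Maps", True,  42.0, 0),
        ("Google Maps", True,  55.0, 1),
        ("Bing Maps",   False, 90.0, 3),
        ("Bing Maps",   True,  61.0, 1),
    ]

    def summarize(rows):
        """Aggregate raw task records into per-site usability metrics."""
        stats = {}
        for site, done, secs, errs in rows:
            s = stats.setdefault(site, {"n": 0, "done": 0, "time": 0.0, "errors": 0})
            s["n"] += 1
            s["done"] += int(done)
            s["time"] += secs
            s["errors"] += errs
        return {
            site: {
                "completion_rate": s["done"] / s["n"],
                "mean_time_s": s["time"] / s["n"],
                "errors": s["errors"],
            }
            for site, s in stats.items()
        }

    print(summarize(results))
    ```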

  19. Geospatial semantic web

    CERN Document Server

    Zhang, Chuanrong; Li, Weidong

    2015-01-01

    This book covers key issues related to the Geospatial Semantic Web, including geospatial web services for spatial data interoperability; geospatial ontology for semantic interoperability; ontology creation, sharing, and integration; querying knowledge and information from heterogeneous data sources; interfaces for the Geospatial Semantic Web; VGI (Volunteered Geographic Information) and the Geospatial Semantic Web; challenges of the Geospatial Semantic Web; and development of Geospatial Semantic Web applications. This book also describes state-of-the-art technologies that attempt to solve these problems, such as WFS, WMS, RDF, OWL, and GeoSPARQL, and demonstrates how to use Geospatial Semantic Web technologies to solve practical real-world problems such as spatial data interoperability.

  20. 3D Web-based HMI with WebGL Rendering Performance

    Directory of Open Access Journals (Sweden)

    Muennoi Atitayaporn

    2016-01-01

    Full Text Available An HMI, or Human-Machine Interface, is software that allows users to communicate with a machine or automation system. It usually serves as the display section of a SCADA (Supervisory Control and Data Acquisition) system for device monitoring and control. In this paper, a 3D web-based HMI with WebGL (Web Graphics Library) rendering performance is presented. The main purpose of this work is to reduce the limitations of traditional 3D web HMIs by exploiting the advantages of WebGL. To evaluate performance, frame rate and frame time metrics were used. The results showed the 3D web-based HMI can maintain a frame rate of 60 FPS for #cube=0.5K/0.8K and 30 FPS for #cube=1.1K/1.6K when run on Internet Explorer and Chrome, respectively. Moreover, the study found that the 3D web-based HMI using WebGL maintains similar frame times from frame to frame even when the number of cubes grows to 5K, indicating that stuttering occurs less in the proposed 3D web-based HMI than in the chosen commercial HMI product.
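    The frame rate and frame time metrics used in the record above can be computed offline from a capture of frame timestamps: mean FPS is the reciprocal of the mean inter-frame interval, and stutter shows up as variance in the intervals. A minimal sketch under those assumptions (the timestamp trace is synthetic, not the paper's data):

    ```python
    # Sketch of frame-rate / frame-time metrics computed from a list of frame
    # timestamps in seconds. A low frame-time standard deviation (jitter)
    # corresponds to less stuttering. The trace below is synthetic.

    from statistics import mean, pstdev

    def frame_metrics(timestamps):
        """Return (mean FPS, frame-time jitter in ms) for a timestamp trace."""
        deltas = [b - a for a, b in zip(timestamps, timestamps[1:])]
        fps = 1.0 / mean(deltas)
        jitter_ms = pstdev(deltas) * 1000.0
        return fps, jitter_ms

    # Perfectly even 60 FPS capture: every frame 1/60 s apart.
    ts = [i / 60.0 for i in range(61)]
    fps, jitter = frame_metrics(ts)
    print(round(fps), round(jitter, 3))
    ```

    In a browser context the timestamps would come from `requestAnimationFrame` callbacks; here the analysis step alone is shown.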